Why it matters: The way we build compute is changing. As everyone seeks ways to cope with the slowing of Moore's Law, companies will need to move away from general-purpose chips like CPUs and GPUs. Squeezing more performance out of hardware will mean building more integrated, more complex solutions that tie hardware and software tightly together.
Once upon a time, chip companies each specialized in designing one type of chip: Intel made CPUs; Qualcomm made modems; Nvidia made GPUs; Broadcom (pre-Avago) made networking chips. That age is over. The future of semis will be designing ever more specific chips for ever more specific uses. This change will take many years to play out, but the transition has already begun, and it is going to upend the semis industry to the same degree that the past 20 years of consolidation did.
Editor's Note:
Guest author Jonathan Goldberg is the founder of D2D Advisory, a multi-functional consulting firm. Jonathan has developed growth strategies and alliances for companies in the mobile, networking, gaming, and software industries.
There are many causes of this. The simplest explanation is that Moore's Law is slowing, so everyone needs to find a new business model. But that does not explain much on its own, so let's unpack it. In the misty past before 2010, Moore's Law meant that chips got 'faster' or 'better' every two years or so. If a customer needed a special-purpose chip, they could go out and design their own, but by the time that chip reached production, a new generation of CPUs would be arriving, and those usually proved better than the purpose-built design.
Then Moore's Law slowed. We lack sufficient PhDs to declare it over, but it has definitely slowed. So everyone now has to work a bit harder to squeeze performance gains out of their silicon designs. Most obviously, this has opened the door to all the roll-your-own silicon coming out of hardware makers and hyperscalers, but the changes are set to blow way past that.
The whole point of a semiconductor is to run some form of software. As we said, in the past the performance gains for that software came from denser chips, but now companies have to look at the software side of the problem much more closely. Google rolled out its TPU because it wanted something that ran its AI algorithms better. It rolled out the VCU, its video transcoding chip, for the same reason, and that chip was largely designed by software engineers. The same story holds for Apple and its M- and A-series processors. In every case, the point is to optimize the silicon for the software that will run on it.
Not everyone will want, or be able, to roll their own chips, so we are starting to see a host of intermediary chips that are neither single-type, general-purpose compute nor entirely custom. AMD's recently acquired Pensando DPUs are a good example of this intermediate step.
Once upon a time, data centers were essentially warehouses full of CPUs. Now they have to house GPUs, AI accelerators, funky networking chips, and a bunch of FPGAs too. This is often called heterogeneous compute, and it is the opposite of the CPU uniformity of the past.
Nor are these changes confined to the data center. The whole notion of "Edge Compute" increasingly looks like an exercise in custom and semi-custom silicon popping up in all kinds of places: cars, factories, and smart cities, to name just a few.
Ultimately, the major chip companies will have to decide how to address these changes. Building fully custom chips is not a great business, and designing semi-custom chips carries its own risks, not least picking the right designs, supporting them, and hoping they land on target.
Established companies are already starting to position themselves for this, and for the first time in a decade, the door for start-ups is starting to open a crack.