For more than three decades, x86 architectures have dominated computing. Today, general-purpose processing is giving way to specialized accelerators.
We define “extreme parallel computing” as the specialized hardware and software used for AI training, inference, HPC clusters and advanced analytics.
Nvidia accounts for roughly 25% of the entire data center segment. Our view is that Nvidia will retain that leading share throughout the forecast period — assuming it avoids unforced errors — despite intense competition from hyperscalers, AMD and others.
We have modeled the entire data center market — servers, storage, networking, power, cooling and related infrastructure — from 2019 through 2035. Our research points to a rapid transition away from traditional general-purpose computing toward extreme parallel computing.
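To make the arithmetic behind that kind of projection concrete, the sketch below shows a simple share-of-market calculation in Python. The base-year size, growth rate and share-shift figures are hypothetical placeholders for illustration only; they are not the inputs to our actual model.

```python
# Illustrative sketch of a share-of-market projection.
# All figures below are hypothetical placeholders, not the model's actual inputs.

def project_market(base_year, base_size_b, cagr, years):
    """Project total market size ($B) forward at a constant compound growth rate."""
    return {base_year + n: base_size_b * (1 + cagr) ** n for n in range(years + 1)}

def accelerated_share(year, start_year, start_share, annual_shift):
    """Linearly shift spend from general-purpose to accelerated compute, capped at 100%."""
    return min(1.0, start_share + annual_shift * (year - start_year))

# Hypothetical assumptions: a $250B market in 2024 growing at a 15% CAGR,
# with accelerated compute at 40% of spend and gaining 5 points per year.
market = project_market(2024, 250.0, 0.15, 11)  # through 2035
for year, size in market.items():
    share = accelerated_share(year, 2024, 0.40, 0.05)
    print(f"{year}: total ${size:,.0f}B, accelerated ${size * share:,.0f}B ({share:.0%})")
```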
We believe the transition from general-purpose x86 central processing units toward distributed clusters of graphics processing units and specialized accelerators is happening even faster than many anticipated.
InfiniBand emerged as the go-to technology for ultra-low-latency interconnects. Now we see that trend permeating hyperscale data centers, where high-performance Ethernet is becoming a dominant standard that, in our view, will ultimately prove to be the prevailing open network of choice.
Nvidia’s advantage does not hinge on chips alone. Its integration of hardware and software — underpinned by a vast ecosystem — forms a fortress-like moat that is difficult to replicate.
Although competition is strong, none of these players alone threatens Nvidia’s long-term dominance — unless Nvidia makes significant missteps.
The data center — as we have known it — will transform into a distributed, parallel processing fabric where GPUs and specialized accelerators become the norm.
The anticipated shift toward accelerated compute forms the foundation of our bullish stance on data center growth.