Artificial intelligence is the tech industry's buzzword of the moment, and companies are stuffing its various iterations into their latest products, all in the hope of persuading us to buy new hardware.
The latest generation of CPUs from the likes of AMD and Intel has AI slapped onto everything, regardless of whether it actually adds anything useful to the product.
When it comes to AI, CPU performance is largely irrelevant. AI demands massive parallel processing, and nothing handles that better right now than a graphics card.
The vast majority of popular AI tools today rely on cloud computing to fully function, so they cannot be run entirely on a local machine.
The one significant exception to this rule is localized upscaling and super-sampling, such as Nvidia’s DLSS and Intel’s XeSS.
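Upscaling is a naturally parallel workload: every output pixel can be computed independently of every other, which is exactly the kind of work a GPU's thousands of cores chew through. As a rough illustration only, here is plain nearest-neighbour upscaling in NumPy; this is a toy stand-in, not the neural-network approach DLSS or XeSS actually use:

```python
import numpy as np

def upscale_nearest(frame: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbour upscale of a 2D frame.

    Each output pixel depends on exactly one input pixel, so every
    pixel could be computed in parallel -- the property that makes
    upscaling such a good fit for GPU hardware.
    """
    return frame.repeat(factor, axis=0).repeat(factor, axis=1)

# A tiny 3x4 "frame" of pixel values.
frame = np.arange(12, dtype=np.uint8).reshape(3, 4)
out = upscale_nearest(frame, 2)
print(out.shape)  # (6, 8)
```

Real DLSS and XeSS run a trained network on dedicated GPU hardware (Tensor cores and XMX engines respectively), but the independence of per-pixel work is the same reason both can run locally while heavier AI tools cannot.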
A lot of fuss is being made about AI, and few have benefited more than Nvidia, which supplied the 100,000 H100 GPUs powering xAI's latest AI training system.
AI adds a lot of potency, but outcomes depend entirely on how it is applied, managed, and regulated.
Localized upscaling and super-sampling such as Nvidia's DLSS and Intel's XeSS are good examples of AI tools that can run on a local machine, but you're essentially out of luck if you want AI to enhance most other tasks; tools such as Adobe's Generative Fill and ChatGPT still depend on the cloud.