American AI infrastructure provider Groq aims to supply at least half of the world's AI inference compute by 2027. Groq's language processing unit (LPU) is designed to deliver superior AI inference performance compared to traditional graphics processing units (GPUs). LPUs keep all of a model's parameters directly on-chip, enabling a smooth computation flow and significantly higher speed. Groq's CEO says the company is not competing with GPU maker NVIDIA, since GPUs are still necessary for training AI models.
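
The intuition behind the speed claim is that LLM inference is typically memory-bandwidth-bound: generating each token requires reading every model weight once, so token throughput scales with how fast the weights can be streamed to the compute units. The back-of-envelope sketch below illustrates this; the model size and bandwidth figures are illustrative assumptions for comparison, not published Groq or NVIDIA specifications.

```python
# Rough per-chip-class comparison of decode throughput for a
# memory-bandwidth-bound LLM. All numbers are assumed for
# illustration only.

PARAMS = 70e9           # assumed model size: 70B parameters
BYTES_PER_PARAM = 2     # 16-bit (fp16/bf16) weights

ON_CHIP_SRAM_BW = 80e12  # assumed aggregate on-chip SRAM bandwidth, bytes/s
OFF_CHIP_HBM_BW = 3e12   # assumed off-chip HBM bandwidth, bytes/s

def tokens_per_second(bandwidth_bytes_per_s: float) -> float:
    """Each decoded token reads every weight once, so throughput is
    roughly bandwidth divided by total weight bytes."""
    return bandwidth_bytes_per_s / (PARAMS * BYTES_PER_PARAM)

print(f"Weights in on-chip SRAM: ~{tokens_per_second(ON_CHIP_SRAM_BW):.0f} tokens/s")
print(f"Weights in off-chip HBM: ~{tokens_per_second(OFF_CHIP_HBM_BW):.0f} tokens/s")
```

Under these assumptions, keeping the parameters in on-chip SRAM yields roughly an order-of-magnitude higher token throughput than streaming them from off-chip memory, which is the architectural advantage the LPU claim rests on.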