OpenAI has begun renting Google's Tensor Processing Unit (TPU) chips through Google Cloud to run AI inference, marking a move away from its previous reliance on Nvidia hardware.
Google is not giving OpenAI access to its most powerful TPUs, which it reserves for internal use. Even so, OpenAI may eventually extend its use of Google's chips beyond inference to model training.
The shift reflects OpenAI's strategy of reducing its dependence on Microsoft's Azure cloud and diversifying the computing resources behind its AI models.
The move is part of OpenAI's broader plan to restructure as a for-profit public-benefit corporation, renegotiate its relationship with Microsoft, and expand its infrastructure through projects such as the Stargate AI data centre, backed by Oracle and SoftBank.