techminis

A naukri.com initiative

Source: Siliconangle

Google Cloud turbocharges its AI Hypercomputer stack with next-gen TPUs and bigger GPU clusters

  • Google Cloud has unveiled its sixth-generation Tensor Processing Unit, the Trillium TPU, which will compete with Nvidia's powerful GPUs. New infrastructure offerings also include Hypercompute Cluster, an advanced, scalable clustering system, plus Hyperdisk ML block storage and the Parallelstore parallel file system. Trillium TPUs deliver a four-fold boost in AI training performance and a three-fold boost in inference throughput compared with the previous generation, along with a 67% increase in energy efficiency. The TPUs can be combined in low-latency pods of up to 256 chips, and pods can be linked together to scale further. A3 Ultra VMs powered by Nvidia's H200 GPUs will also soon be available, as will C4A VMs based on Google's Axion CPUs.
  • Hypercompute Cluster simplifies infrastructure and workload provisioning so clients can deploy and manage thousands of accelerators as a single unit. The feature also enables low-latency networking, supporting demanding AI and HPC workloads.
  • Trillium TPUs can run larger language models with more weights and larger key-value caches, and offer improvements across a broad range of model architectures in both training and inference.
  • Titanium infrastructure frees up compute and memory resources for workloads by offloading processing overhead, and now supports a wider range of AI workloads, including the most demanding ones.
  • The Parallelstore parallel file system also offers increased performance, and clients can continue to use Nvidia GPUs with Google's cloud infrastructure.
  • Google's most recent Axion CPUs are also available in the latest C4A VMs, offering cheaper yet performance-optimised options for general-purpose workloads.
  • The A3 Ultra VMs can deliver up to 3.2 terabits per second of GPU-to-GPU traffic, with twice the networking bandwidth and roughly double the GPU memory capacity of the previous A3 generation.
  • The company also released updated AI-focused storage solutions, which offer faster model load times and improved training performance, and provided extra support for the Jupiter optical circuit switching fabric and the Cloud Interconnect networking service.
  • Google has maintained its technological lead by continuing to improve its machine learning software and hardware, staying ahead of rivals at mapping frameworks such as TensorFlow onto its custom hardware.
  • The company wants to make it cheaper for customers to run its AI Hypercomputer stack while providing advanced features such as Trillium TPUs, Hypercompute Cluster and Titanium infrastructure.
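Taking the summary's headline figures at face value, a back-of-the-envelope sketch shows what the quoted per-chip gains imply at pod scale. The 4x/3x/67% numbers come from the bullets above; the linear-scaling assumption and the units are illustrative, not official benchmarks:

```python
# Back-of-the-envelope scaling from the figures quoted above.
# Linear scaling across a pod is an idealised assumption; real scaling is sublinear.

TRAINING_SPEEDUP = 4.0    # 4x training performance per chip vs. prior generation
INFERENCE_SPEEDUP = 3.0   # 3x inference throughput per chip vs. prior generation
EFFICIENCY_GAIN = 1.67    # 67% more performance per watt
POD_SIZE = 256            # chips per low-latency Trillium pod

def pod_speedup(per_chip_speedup: float, chips: int = POD_SIZE) -> float:
    """Ideal aggregate speedup of a full pod relative to one prior-gen chip."""
    return per_chip_speedup * chips

print(pod_speedup(TRAINING_SPEEDUP))   # 1024.0 (training)
print(pod_speedup(INFERENCE_SPEEDUP))  # 768.0  (inference)

# Energy used for a fixed amount of work drops to 1/1.67, i.e. about 60% of before.
print(round(1 / EFFICIENCY_GAIN, 2))   # 0.6
```

The point of the sketch is that the per-chip multipliers compound with pod size, which is why the article pairs the chip announcement with the 256-chip pod topology.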
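To put the A3 Ultra's quoted 3.2 Tb/s of GPU-to-GPU bandwidth in perspective, a hedged sketch can estimate the minimum time to move a large model's weights between GPUs. The bandwidth figure is from the article; the 70B-parameter model at bf16 precision is an illustrative assumption:

```python
# Rough lower-bound transfer time at the quoted 3.2 Tb/s GPU-to-GPU rate.
# Ignores protocol overhead, congestion, and parallel links.

GPU_TO_GPU_TBPS = 3.2   # terabits per second (figure from the article)

def min_transfer_seconds(params: float, bytes_per_param: int,
                         tbps: float = GPU_TO_GPU_TBPS) -> float:
    """Lower bound on wall-clock time to move a model's weights once."""
    bits = params * bytes_per_param * 8
    return bits / (tbps * 1e12)

# Hypothetical 70B-parameter model stored in bf16 (2 bytes per parameter):
# 140 GB of weights moves in about a third of a second at line rate.
print(round(min_transfer_seconds(70e9, 2), 3))  # 0.35
```

At this scale, checkpoint shuffles and weight broadcasts that would take minutes on slower interconnects drop to sub-second territory, which is what makes large clustered deployments practical.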
