TechBullion
Image Credit: TechBullion

Optimizing Chip Design with Machine Learning-Driven Greedy Algorithms

  • Puneet Gupta introduces a machine learning-enhanced greedy algorithm to resolve hold time violations in advanced SoC designs.
  • Conventional hold-fixing approaches focus on endpoint-based delay cell insertion, but Gupta's methodology adopts a system-level perspective by identifying shared paths and implementing coordinated fixes across multiple violations simultaneously.
  • The machine learning component guides where delay cells are inserted, minimizing disruption to critical paths and reducing total buffer count by 30-40%.
  • Gupta's methodology demonstrates significant improvements in power efficiency, area savings, and reduction in timing closure iterations, making it a valuable advancement in semiconductor design.
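The core idea in the bullets above — fixing many hold violations at once by inserting delay on paths they share, rather than padding each endpoint separately — can be sketched as a plain greedy loop. This is an illustrative reconstruction, not Gupta's actual algorithm: the violation model, the function name `greedy_hold_fix`, and the segment naming are all assumptions, and the ML ranking component is omitted (here "most shared" stands in for a learned score).

```python
from collections import defaultdict

def greedy_hold_fix(violations):
    """Greedy shared-path hold fixing (illustrative sketch).

    violations: dict mapping each violating endpoint to the set of
    path segments feeding it. Repeatedly insert a delay cell on the
    segment shared by the most still-unfixed endpoints, so one
    coordinated insertion clears several violations at once.
    Returns the list of segments where delay was inserted.
    """
    unfixed = {ep: set(segs) for ep, segs in violations.items()}
    inserted = []
    while unfixed:
        # Count how many unfixed endpoints each segment appears on.
        count = defaultdict(int)
        for segs in unfixed.values():
            for s in segs:
                count[s] += 1
        # Pick the most-shared segment: one fix covers many violations.
        best = max(count, key=count.get)
        inserted.append(best)
        # That insertion clears every endpoint fed through the segment.
        unfixed = {ep: segs for ep, segs in unfixed.items()
                   if best not in segs}
    return inserted

# Hypothetical example: segment "s1" feeds two violating endpoints,
# so fixing it there needs fewer buffers than per-endpoint padding.
fixes = greedy_hold_fix({
    "ep_a": {"s1", "s2"},
    "ep_b": {"s1", "s3"},
    "ep_c": {"s4"},
})
```

In this toy case the greedy loop inserts delay at two segments instead of three endpoint-local buffers, which is the kind of buffer-count reduction the summary attributes to the shared-path perspective.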
