Source: Analyticsindiamag
This New Logic Gate Network Reduces Inference Time to Only 4 Nanoseconds

  • Researchers at Stanford University have introduced convolutional differentiable logic gate networks (LGNs) with logic gate tree kernels.
  • The research combines ideas from machine vision with differentiable logic gate networks, making deeper LGNs trainable and improving training efficiency (a gate-level sketch of the differentiable relaxation follows this list).
  • The proposed architecture, 'LogicTreeNet', decreases model size and improves accuracy compared to the state of the art.
  • The model achieves an inference time of only 4 nanoseconds while improving accuracy on the MNIST and CIFAR-10 datasets.

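To make the "differentiable logic gate network" idea concrete, below is a minimal PyTorch sketch of a single differentiable logic gate layer, assuming the standard relaxation in which each gate reads two inputs and learns a softmax distribution over the 16 two-input Boolean functions. The layer name, the fixed random wiring, and the sizes are illustrative assumptions; this is not the paper's convolutional LogicTreeNet architecture with logic gate tree kernels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiffLogicLayer(nn.Module):
    """One layer of a differentiable logic gate network (illustrative sketch).

    Each of the `out_features` gates reads two randomly chosen inputs and
    learns a softmax distribution over the 16 two-input Boolean functions,
    using real-valued relaxations so gradients can flow during training.
    """
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Fixed random wiring: which two inputs each gate reads (an assumption
        # for this sketch; the paper's convolutional kernels wire gates as trees).
        self.register_buffer("idx_a", torch.randint(0, in_features, (out_features,)))
        self.register_buffer("idx_b", torch.randint(0, in_features, (out_features,)))
        # Learnable logits over the 16 possible gates, one row per output gate.
        self.gate_logits = nn.Parameter(torch.zeros(out_features, 16))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a, b = x[:, self.idx_a], x[:, self.idx_b]
        ab = a * b
        # Real-valued relaxations of all 16 two-input Boolean functions.
        gates = torch.stack([
            torch.zeros_like(a), ab, a - ab, a,
            b - ab, b, a + b - 2 * ab, a + b - ab,
            1 - (a + b - ab), 1 - (a + b - 2 * ab), 1 - b, 1 - b + ab,
            1 - a, 1 - a + ab, 1 - ab, torch.ones_like(a),
        ], dim=-1)                                   # (batch, out, 16)
        probs = F.softmax(self.gate_logits, dim=-1)  # (out, 16)
        # Each output is the expected value over its gate distribution.
        return (gates * probs).sum(dim=-1)

# Toy usage: two stacked layers on random inputs in [0, 1].
x = torch.rand(8, 32)
net = nn.Sequential(DiffLogicLayer(32, 64), DiffLogicLayer(64, 10))
print(net(x).shape)  # torch.Size([8, 10])
```

At inference, the learned distribution of each gate is typically discretized to its single most likely Boolean function, so the trained network executes as plain logic gates in hardware; that discretization is what makes nanosecond-scale latency possible.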