Researchers at Stanford University have introduced convolutional differentiable logic gate networks (LGNs) with logic gate tree kernels.
The work combines convolutional concepts from machine vision with differentiable logic gate networks, enabling the training of deeper LGNs and improving training efficiency.
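To make the general idea concrete, the sketch below shows a single differentiable logic gate in PyTorch: each gate holds a learnable distribution over the 16 two-input Boolean functions, evaluated on real-valued relaxations of its inputs, so gradient descent can decide which gate belongs at each position and the gate can later be hardened to its most likely operation. This is a minimal illustration of the underlying technique, not the authors' implementation; the class and function names are chosen here for clarity.

```python
# Minimal sketch (not the authors' code) of a differentiable logic gate.
import torch
import torch.nn as nn
import torch.nn.functional as F


def soft_binary_ops(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Real-valued relaxations of all 16 two-input Boolean functions."""
    ops = [
        torch.zeros_like(a),          # FALSE
        a * b,                        # AND
        a * (1 - b),                  # A AND NOT B
        a,                            # A
        (1 - a) * b,                  # NOT A AND B
        b,                            # B
        a + b - 2 * a * b,            # XOR
        a + b - a * b,                # OR
        1 - (a + b - a * b),          # NOR
        1 - (a + b - 2 * a * b),      # XNOR
        1 - b,                        # NOT B
        1 - b + a * b,                # A OR NOT B
        1 - a,                        # NOT A
        1 - a + a * b,                # NOT A OR B
        1 - a * b,                    # NAND
        torch.ones_like(a),           # TRUE
    ]
    return torch.stack(ops, dim=-1)


class DifferentiableLogicGate(nn.Module):
    """One gate: a softmax-weighted mixture over the 16 Boolean functions.

    After training, each gate can be discretized to its argmax operation so
    the network runs as plain hardware logic at inference time.
    """

    def __init__(self) -> None:
        super().__init__()
        self.logits = nn.Parameter(torch.randn(16))

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        weights = F.softmax(self.logits, dim=-1)   # distribution over the 16 ops
        return (soft_binary_ops(a, b) * weights).sum(dim=-1)


# Example: one relaxed gate applied to a batch of "soft bits".
gate = DifferentiableLogicGate()
a, b = torch.rand(8), torch.rand(8)
out = gate(a, b)   # differentiable, so the choice of gate can be learned
```

In the convolutional setting described here, small trees of such gates play the role of a convolutional kernel and are slid over the input, which is what allows deeper LGNs to be trained.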
The proposed architecture, 'LogicTreeNet', reduces model size while improving accuracy over the state of the art on the MNIST and CIFAR-10 datasets, and achieves inference times of only 4 nanoseconds.