Logic gate networks (LGNs) aim to provide efficient solutions for image classification by learning a network of logic gates.
Training LGNs on even simple benchmarks like CIFAR-10 can take days to weeks; in addition, almost half of the network can remain unused, and a discretization gap arises between the relaxed network used during training and the discrete gates used at inference.
Researchers have introduced Gumbel noise with a straight-through estimator during training to speed up training, improve neuron utilization, and reduce the discretization gap in LGNs.
This approach trains networks 4.5 times faster, reduces the discretization gap by 98%, and eliminates unused gates entirely.
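The core mechanism described above, perturbing gate-choice logits with Gumbel noise and discretizing with a straight-through estimator, can be sketched as follows. This is a minimal NumPy illustration of the standard straight-through Gumbel-softmax trick, not the authors' implementation; the function name and the choice of 16 candidate gates (the two-input Boolean functions) are assumptions for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def gumbel_st_sample(logits, tau=1.0, seed=None):
    """Forward pass of a straight-through Gumbel-softmax over a neuron's
    gate-choice logits (illustrative sketch, not the paper's code)."""
    rng = np.random.default_rng(seed)
    # Gumbel(0, 1) noise: -log(-log(U)) with U ~ Uniform(0, 1).
    u = rng.uniform(1e-9, 1.0, size=logits.shape)
    gumbel = -np.log(-np.log(u))
    # Soft, differentiable relaxation of the categorical gate choice.
    soft = softmax((logits + gumbel) / tau)
    # Hard sample: the one-hot gate actually used in the forward pass.
    hard = np.zeros_like(soft)
    hard[np.argmax(soft)] = 1.0
    # In an autodiff framework one would return
    # hard + soft - stop_gradient(soft), so the forward pass sees the
    # discrete gate while gradients flow through the soft relaxation.
    return hard, soft

# Each neuron chooses among the 16 two-input logic gates.
hard, soft = gumbel_st_sample(np.zeros(16), tau=1.0, seed=0)
```

The noise randomizes which gate wins the argmax early in training, which is one intuition for why it can improve gate utilization, while the straight-through estimator keeps the training-time forward pass discrete and thereby shrinks the discretization gap.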