Researchers found that simply stacking more layers makes deep neural networks harder to optimize: beyond a certain depth, accuracy degrades rather than improves. To overcome this, the authors introduced a new architecture called the Residual Network (ResNet). Instead of asking a stack of layers to learn a desired mapping H(x) directly, a ResNet block learns the residual F(x) = H(x) - x and adds the input back through a shortcut connection, so the block's output is F(x) + x. Because the identity mapping is now trivial to represent (the layers just drive F toward zero), much deeper networks can be trained successfully, and ResNets achieve state-of-the-art accuracy on a range of image recognition tasks.
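The core idea can be sketched with a toy NumPy example. The two-layer transformation, the ReLU placement, and the weight shapes below are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    # F(x): the small transformation the stacked layers must learn
    f = relu(x @ w1) @ w2
    # The shortcut adds the input back, so the layers only need to
    # model the residual F(x) = H(x) - x rather than all of H(x).
    return relu(f + x)

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
# Near-zero weights stand in for a block early in training.
w1 = rng.standard_normal((4, 4)) * 0.01
w2 = rng.standard_normal((4, 4)) * 0.01

y = residual_block(x, w1, w2)
# With F(x) close to zero, the block approximates the identity
# mapping (up to the final ReLU), which is the easy default that
# makes very deep stacks of such blocks trainable.
```

A plain (non-residual) block with the same near-zero weights would instead map everything toward zero, which illustrates why identity shortcuts help gradients and signals survive through many layers.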