Differential privacy (DP) in deep learning is a critical concern for maintaining data confidentiality while preserving model utility. Forward-learning algorithms inject noise during the forward pass to estimate gradients, which suggests they may provide natural differential-privacy protection. A new algorithm, DP-ULR, is introduced as a privatized forward-learning algorithm with formal differential-privacy guarantees. DP-ULR achieves performance competitive with traditional backpropagation-based differentially private training algorithms.
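To make the forward-pass mechanism concrete, here is a minimal, hypothetical sketch of a likelihood-ratio-style gradient estimate: Gaussian noise is injected into a layer's forward output, and the noisy losses are reweighted by the noise to recover the gradient without backpropagation. This is a generic illustration under assumed details (toy linear layer, squared-error loss, noise scale, copy count), not the DP-ULR algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed for illustration): one linear layer,
# squared-error loss on a single example.
W = rng.standard_normal((3, 5))   # layer weights
x = rng.standard_normal(5)        # input
y = rng.standard_normal(3)        # target
z = W @ x                         # clean pre-activation

def loss(z):
    return 0.5 * np.sum((z - y) ** 2, axis=-1)

sigma = 0.1          # std of the Gaussian noise injected in the forward pass
n_copies = 500_000   # noisy forward passes averaged into one estimate

# Likelihood-ratio estimator: perturb the forward pass with Gaussian noise
# and reweight the baseline-subtracted loss by noise / sigma^2.
# The same injected randomness is what forward-learning methods rely on
# for their noise-based privacy argument.
eps = rng.normal(0.0, sigma, size=(n_copies, 3))
noisy_losses = loss(z + eps)
grad_z_est = ((noisy_losses - loss(z))[:, None] * eps).mean(axis=0) / sigma**2

# Exact gradient w.r.t. z for comparison: dL/dz = z - y.
grad_z_true = z - y
print(np.max(np.abs(grad_z_est - grad_z_true)))  # max estimation error
```

Averaging many noisy copies drives the estimator's variance down; an actual DP analysis would additionally account for clipping and the noise scale required for a target privacy budget.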