The Recurrent Neural Network (RNN) is one of the simplest neural network models for sequence data such as speech and text. As a foundational sequence model in deep learning, it is a natural starting point for anyone learning to work with sequential data, whether text, speech, or stock prices.
We’ll be training a single-layer RNN for a binary classification task.
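To make the setup concrete, here is a minimal parameter sketch for such a network; the sizes `n_x` and `n_h` and the names `Wxh`, `Whh`, `Why` are our own illustrative choices, not necessarily the article's notation:

```python
import numpy as np

n_x = 10   # input vector size (e.g., vocabulary size for one-hot words) -- illustrative
n_h = 16   # hidden state size -- illustrative

rng = np.random.default_rng(0)
Wxh = rng.standard_normal((n_h, n_x)) * 0.01  # input-to-hidden weights
Whh = rng.standard_normal((n_h, n_h)) * 0.01  # hidden-to-hidden weights
Why = rng.standard_normal((1, n_h)) * 0.01    # hidden-to-output weights
bh  = np.zeros((n_h, 1))                      # hidden bias
by  = np.zeros((1, 1))                        # output bias (single logit for binary classification)
```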
Forward propagation is the process of passing the input through a neural network to calculate the output.
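As a sketch, one forward pass over a single sequence could look like this, assuming a tanh hidden activation and a sigmoid output on the final hidden state (common choices for binary classification; the article's exact setup may differ):

```python
import numpy as np

def forward(xs, Wxh, Whh, Why, bh, by):
    """xs: list of (n_x, 1) input vectors, one per time step."""
    h = np.zeros((Whh.shape[0], 1))           # initial hidden state
    hs = [h]                                  # cache hidden states for BPTT
    for x in xs:
        h = np.tanh(Wxh @ x + Whh @ h + bh)   # hidden-state update
        hs.append(h)
    z = Why @ h + by                          # logit from the final hidden state
    y_hat = 1.0 / (1.0 + np.exp(-z))          # sigmoid -> P(class = 1)
    return y_hat, hs
```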
Backpropagation Through Time (BPTT) is the counterpart of forward propagation in recurrent neural networks: the gradients of the loss function are computed at the output and propagated backward through each time step of the input sequence.
The input could be as simple as a one-hot encoded vector representing a word.
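For example, with a toy vocabulary of our own choosing, a word maps to a one-hot vector like so:

```python
import numpy as np

vocab = ["the", "cat", "sat"]                 # toy vocabulary, purely illustrative
word_to_idx = {w: i for i, w in enumerate(vocab)}

def one_hot(word):
    v = np.zeros((len(vocab), 1))
    v[word_to_idx[word]] = 1.0                # single 1 at the word's index
    return v

x = one_hot("cat")                            # shape (3, 1): [[0.], [1.], [0.]]
```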
The goal is to compute the gradient of the loss function with respect to all the learnable parameters of the network, i.e., the weights and biases.
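As a hedged sketch of BPTT under the same assumptions as the forward pass above (the loss is binary cross-entropy on the final prediction, so dL/dz simplifies to `y_hat - y` for the sigmoid output):

```python
import numpy as np

def bptt(xs, hs, y_hat, y, Wxh, Whh, Why):
    """Gradients of binary cross-entropy w.r.t. all weights and biases."""
    dz = y_hat - y                            # dL/dz for sigmoid + cross-entropy
    dWhy = dz @ hs[-1].T
    dby = dz
    dWxh, dWhh = np.zeros_like(Wxh), np.zeros_like(Whh)
    dbh = np.zeros((Whh.shape[0], 1))
    dh = Why.T @ dz                           # gradient reaching the last hidden state
    for t in reversed(range(len(xs))):
        dtanh = (1.0 - hs[t + 1] ** 2) * dh   # backprop through tanh
        dWxh += dtanh @ xs[t].T
        dWhh += dtanh @ hs[t].T
        dbh  += dtanh
        dh = Whh.T @ dtanh                    # pass gradient to the previous time step
    return dWxh, dWhh, dWhy, dbh, dby
```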
The M input examples can be stacked into matrices for efficient computation.
In the vectorized approach, all M input examples are fed into the RNN simultaneously; the items of each sequence, however, are still processed sequentially, one time step at a time.
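A minimal sketch of that vectorization, assuming all M sequences share the same length T and each time step's inputs are stacked column-wise into an (n_x, M) matrix:

```python
import numpy as np

def forward_batch(Xs, Wxh, Whh, Why, bh, by):
    """Xs: list of T arrays of shape (n_x, M) -- one matrix per time step."""
    M = Xs[0].shape[1]
    H = np.zeros((Whh.shape[0], M))           # hidden states for all M examples
    for X in Xs:                              # time steps remain sequential
        H = np.tanh(Wxh @ X + Whh @ H + bh)   # bh broadcasts across the M columns
    Z = Why @ H + by                          # (1, M) logits
    return 1.0 / (1.0 + np.exp(-Z))           # (1, M) probabilities
```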
We’ll use a temporary variable P to store our intermediate results.
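One plausible reading of P's role (an assumption on our part, reusing names from the BPTT sketch above) is that, inside the backward loop, it caches the term that every parameter gradient reuses at a given time step:

```python
# Hypothetical role of P inside the backward loop of the BPTT sketch above:
P = (1.0 - hs[t + 1] ** 2) * dh   # tanh derivative times the upstream gradient
dWxh += P @ xs[t].T               # the same P feeds all three parameter gradients
dWhh += P @ hs[t].T
dbh  += P
```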