Deep State Space Models (SSMs) are bringing physics-grounded compute paradigms back into the spotlight.
Recurrent Hamiltonian Echo Learning (RHEL) is a newly proposed algorithm for efficiently computing loss gradients in non-dissipative (Hamiltonian) systems.
RHEL requires only three forward passes regardless of model size and avoids explicit Jacobian computation, while still yielding consistent gradient estimates.
RHEL has been shown to match the performance of Backpropagation Through Time (BPTT) in training Hamiltonian SSMs on time-series tasks, demonstrating scalability and efficiency.
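The echo mechanism exploits a basic property of non-dissipative Hamiltonian dynamics: the flow is time-reversible, so replaying the same dynamics with negated momentum retraces the trajectory back to its starting state. The sketch below illustrates only this reversibility property, not the RHEL algorithm itself; the harmonic-oscillator Hamiltonian and leapfrog integrator are illustrative assumptions.

```python
import numpy as np

def leapfrog(q, p, grad_V, dt, n_steps):
    """Symplectic leapfrog integration of dq/dt = p, dp/dt = -grad_V(q)."""
    for _ in range(n_steps):
        p = p - 0.5 * dt * grad_V(q)  # half momentum kick
        q = q + dt * p                # full position drift
        p = p - 0.5 * dt * grad_V(q)  # half momentum kick
    return q, p

# Illustrative system: harmonic oscillator, H(q, p) = p**2/2 + q**2/2,
# so the potential gradient is grad_V(q) = q.
grad_V = lambda q: q

q0, p0 = np.array([1.0]), np.array([0.0])
qT, pT = leapfrog(q0, p0, grad_V, dt=0.01, n_steps=1000)

# "Echo" pass: flip the momentum and run the SAME forward dynamics.
# The trajectory retraces itself back to the initial state.
q_back, p_back = leapfrog(qT, -pT, grad_V, dt=0.01, n_steps=1000)

print(np.allclose(q_back, q0), np.allclose(-p_back, p0))  # True True
```

Because the reversed trajectory is produced by an ordinary forward pass, an echo-based scheme never needs to store the full trajectory or form Jacobians, which is the property that keeps the cost at a small, fixed number of forward passes.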