The reinforcement learning (RL) and model predictive control (MPC) communities have developed vast ecosystems of theoretical approaches and computational tools for solving optimal control problems.
MPCritic is a machine-learning-friendly architecture that interfaces seamlessly with MPC tools.
MPCritic exploits the loss landscape defined by a parameterized MPC problem, performing "soft" optimization over batched training steps rather than solving the MPC problem exactly at each step.
The versatility of MPCritic is demonstrated on classic control benchmarks, in terms of the MPC architectures and RL algorithms it can accommodate.
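To illustrate the core idea at a high level, the following is a minimal, hypothetical sketch (not the paper's actual API or implementation) of a critic whose value is the negative of a parameterized one-step MPC objective, with the "soft" optimization realized as a few batched gradient steps on the actions instead of an exact MPC solve. All matrix names (`A`, `B`, `R`, `P`) and the quadratic form of the cost are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: Q_theta(s, a) = -( 0.5 a^T R a + 0.5 s'^T P s' ),
# where s' = A s + B a is the predicted next state, R is a stage-cost
# weight, and P is a learned terminal-cost weight. This mimics using a
# parameterized MPC loss landscape as the critic; it is NOT the paper's code.

def mpc_critic(s, a, A, B, R, P):
    """Negative parameterized MPC objective, evaluated on a batch (rows)."""
    s_next = s @ A.T + a @ B.T
    stage = 0.5 * np.sum(a * (a @ R.T), axis=1)          # 0.5 a^T R a
    terminal = 0.5 * np.sum(s_next * (s_next @ P.T), axis=1)  # 0.5 s'^T P s'
    return -(stage + terminal)

def soft_improve(s, a, A, B, R, P, lr=0.1, steps=20):
    """'Soft' optimization: batched gradient ascent on Q w.r.t. actions,
    in place of solving the MPC minimization exactly."""
    for _ in range(steps):
        s_next = s @ A.T + a @ B.T
        grad_q = -(a @ R.T + s_next @ P.T @ B)  # dQ/da (R, P symmetric)
        a = a + lr * grad_q
    return a
```

In this toy setting the soft steps converge toward the exact MPC minimizer; the point of the sketch is only that each step is a cheap, batched gradient update, which is what makes the construction friendly to standard RL training loops.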