Introducing the Hamster Optimization Protocol (HOP): a new way to optimize deep learning models, inspired by a metaphorical game in which curious hamsters scurry through the loss landscape, adapt their strategies, and learn from each other.
In HOP's game-inspired approach, each hamster is an independent optimizer that explores the loss landscape using its own learning rate, momentum, exploration noise, and a personal best (PBest) record, searching for deeper valleys.
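The per-hamster update is easy to sketch. Below is a minimal, illustrative implementation of a single hamster in plain NumPy; the class name `Hamster` and hyperparameters such as `noise_scale` are placeholders for this sketch, not the project's published API.

```python
import numpy as np

class Hamster:
    """One independent optimizer ("hamster") exploring the loss landscape."""

    def __init__(self, params, lr=0.01, momentum=0.9, noise_scale=0.01, seed=0):
        self.params = params.astype(float)
        self.lr = lr
        self.momentum = momentum
        self.noise_scale = noise_scale
        self.velocity = np.zeros_like(self.params)
        self.rng = np.random.default_rng(seed)
        # Personal best (PBest): lowest loss seen so far, and where it was found.
        self.pbest_loss = np.inf
        self.pbest_params = self.params.copy()
        self.steps_since_improvement = 0

    def step(self, grad, loss):
        # Momentum update plus exploration noise, so the hamster can scurry
        # off the pure gradient direction.
        noise = self.rng.normal(0.0, self.noise_scale, size=self.params.shape)
        self.velocity = self.momentum * self.velocity - self.lr * grad + noise
        self.params += self.velocity
        # Track the personal best and how long it has been since it improved.
        if loss < self.pbest_loss:
            self.pbest_loss = loss
            self.pbest_params = self.params.copy()
            self.steps_since_improvement = 0
        else:
            self.steps_since_improvement += 1
```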
If a hamster gets stuck in a suboptimal solution while another hamster has found a better one, it boosts its learning rate, improving its chances of escaping the local minimum.
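This peer-aware boost can be written as a small coordination step run over the whole pack after each round of updates. The sketch below builds on the `Hamster` class above; the `patience`, `boost`, and `max_lr` parameters are illustrative assumptions, not values from the project.

```python
def share_and_boost(hamsters, patience=20, boost=2.0, max_lr=1.0):
    """Boost the learning rate of any hamster that has stalled while a
    peer holds a strictly better personal best."""
    global_best = min(h.pbest_loss for h in hamsters)
    for h in hamsters:
        stalled = h.steps_since_improvement >= patience
        if stalled and h.pbest_loss > global_best:
            h.lr = min(h.lr * boost, max_lr)  # bounded so the boost can't diverge
            h.steps_since_improvement = 0     # give the boosted rate time to act
```

Capping the boosted rate at `max_lr` and resetting the stall counter are deliberate choices in this sketch: without them, a hamster stuck in a deep basin would keep doubling its learning rate every step and blow up.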
Benchmark results show that HOP converges at a rate comparable to other popular optimizers while escaping local minima and adapting to shifting loss landscapes more reliably.