Learning in environments with sparse rewards remains a fundamental challenge in reinforcement learning. Artificial curiosity addresses this limitation by providing intrinsic rewards that guide exploration. Leveraging information geometry, such intrinsic rewards can be constrained to concave functions of the reciprocal occupancy. This framework integrates foundational exploration methods into a single, cohesive model.
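As a sketch of what this constraint might look like, one plausible reading is the following, where $d^{\pi}(s)$ denotes an assumed state-occupancy measure under policy $\pi$ and $f$ is concave; both symbols, and the example choices of $f$, are illustrative assumptions rather than notation taken from this work:
\[
  r_{\mathrm{int}}(s) \;=\; f\!\left(\frac{1}{d^{\pi}(s)}\right),
  \qquad f \text{ concave and increasing.}
\]
Under this reading, $f(x) = \log x$ would recover an entropy-style (log-occupancy) bonus, while $f(x) = \sqrt{x}$ would recover a count-based bonus proportional to $1/\sqrt{d^{\pi}(s)}$, illustrating how such classical exploration methods could fall under one concave family.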