Researchers have proposed a method called selective uncertainty propagation for constructing confidence intervals in offline reinforcement learning. The method is designed to address the challenges of estimating treatment effects and handling distributional shift in real-world RL problems. Selective uncertainty propagation adapts to how difficult the distribution-shift challenge actually is, rather than treating every part of the problem as equally hard. The technique has shown promising results in toy environments and is beneficial for offline policy learning.
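To make the general idea concrete, here is a minimal, hedged sketch in Python (not the authors' algorithm): it builds a confidence interval for an importance-sampling estimate of a target policy's value over a finite horizon, and propagates a step's sampling uncertainty into the interval only when that step looks hard, using effective sample size as a crude proxy for distribution shift. The function name `selective_ci_sketch`, the `ess_threshold` parameter, and the ESS-based criterion are illustrative assumptions, not details from the paper.

```python
import numpy as np

def selective_ci_sketch(behavior_probs, target_probs, rewards, z=1.96, ess_threshold=0.5):
    """Illustrative sketch only: confidence interval for a target policy's value
    from logged trajectories, propagating per-step uncertainty selectively.

    behavior_probs, target_probs, rewards: arrays of shape (n_trajectories, horizon)
    with the behavior/target action probabilities and observed reward at each step.
    """
    n, horizon = rewards.shape
    ratios = target_probs / behavior_probs            # per-step importance ratios
    point, half_width = 0.0, 0.0
    for t in range(horizon):
        w = np.prod(ratios[:, :t + 1], axis=1)        # cumulative importance weight up to step t
        est = np.mean(w * rewards[:, t])               # importance-sampled reward at step t
        point += est
        # Normalized effective sample size in (0, 1]: small values indicate severe shift.
        ess = (w.sum() ** 2) / (np.sum(w ** 2) * n + 1e-12)
        if ess < ess_threshold:
            # Hard step: add its sampling uncertainty to the interval width.
            half_width += z * np.std(w * rewards[:, t], ddof=1) / np.sqrt(n)
        # Easy step (little shift): its uncertainty is treated as negligible and not propagated.
    return point - half_width, point + half_width

# Tiny usage example on synthetic logged data.
rng = np.random.default_rng(0)
n, horizon = 500, 3
behavior = rng.uniform(0.3, 0.7, size=(n, horizon))
target = rng.uniform(0.3, 0.7, size=(n, horizon))
rewards = rng.normal(1.0, 0.5, size=(n, horizon))
lo, hi = selective_ci_sketch(behavior, target, rewards)
print(f"CI for target policy value: [{lo:.3f}, {hi:.3f}]")
```

The design choice being illustrated is the "selective" part: uncertainty from well-covered steps is not allowed to inflate the interval, so the bound's width is driven only by the portions of the horizon where the logged data genuinely under-covers the target policy.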