<ul data-eligibleForWebStory="true">
<li>Prioritized experience replay is a crucial component of value-based deep reinforcement learning models.</li>
<li>Transitions are typically prioritized based on temporal-difference (TD) error, but this can favor noisy transitions.</li>
<li>Epistemic uncertainty estimation is proposed instead to guide the prioritization of transitions sampled from the replay buffer (a sketch follows the list).</li>
<li>Epistemic uncertainty quantifies the uncertainty that can be reduced by learning, so prioritizing by it reduces the sampling of inherently unpredictable transitions.</li>
<li>The benefits of epistemic-uncertainty-prioritized replay are illustrated in tabular toy models and evaluated on the Atari suite.</li>
<li>The approach outperformed quantile regression deep Q-learning benchmarks.</li>
<li>This method paves the way for uncertainty-prioritized replay in reinforcement learning agents.</li>
</ul>
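<p>As a minimal sketch of the idea (not the authors' implementation), epistemic uncertainty can be approximated by the disagreement across an ensemble of Q-functions: members agree on well-learned values but diverge where more learning is possible, whereas irreducible noise inflates TD error without inflating ensemble disagreement. The tabular setup, the <code>epistemic_priority</code> helper, and all sizes below are hypothetical illustrations.</p>
<pre><code class="language-python">import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, n_members = 10, 4, 5

# Hypothetical ensemble of tabular Q-functions; disagreement across
# members serves as a proxy for epistemic (reducible) uncertainty.
q_ensemble = rng.normal(size=(n_members, n_states, n_actions))

def epistemic_priority(state, action):
    """Priority = ensemble standard deviation of Q(s, a).

    High disagreement suggests the value is still learnable, so the
    transition is worth replaying; purely unpredictable transitions
    do not raise this signal the way they raise TD error.
    """
    return q_ensemble[:, state, action].std()

# Toy replay buffer of (s, a, r, s') transitions.
buffer = [(rng.integers(n_states), rng.integers(n_actions),
           rng.normal(), rng.integers(n_states)) for _ in range(100)]

# Sample a minibatch with probability proportional to priority.
priorities = np.array([epistemic_priority(s, a) for s, a, _, _ in buffer])
probs = priorities / priorities.sum()
batch_idx = rng.choice(len(buffer), size=32, p=probs)
</code></pre>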