The article discusses the importance of rule generalization for Reinforcement Learning (RL) agents, which must perform well on both seen and unseen entities. It highlights the challenge that excessive generalization leads to false positives, so a balance must be struck. A novel approach to dynamic rule generalization is proposed using WordNet's hypernym-hyponym relations: the algorithm dynamically generates generalized rules based on the information gain computed over positive and negative examples. The study demonstrates the benefit of integrating symbolic and neural reasoning in RL agents for text-based games such as TW-Cooking. Experiments show that the EXPLORER agent outperforms neural-only agents and SOTA models such as GATA and CBR on the TW-Cooking and TWC games. The study also compares different generalization settings in EXPLORER, such as Exhaustive Rule Generalization and IG-based generalization. Agents are trained for 100 episodes on the TW-Cooking domain, without any pretraining advantage to boost performance. The paper is available on arxiv under a CC BY 4.0 DEED license.
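To make the IG-based generalization concrete, the sketch below shows one plausible way to lift a rule over a specific entity (e.g., `apple`) to a WordNet hypernym (e.g., `fruit`) by scoring each candidate hypernym with information gain over positive and negative examples. This is a minimal illustration, not the paper's exact algorithm: the function names `best_generalization` and `covers`, the example entities, and the particular entropy-based IG formulation are assumptions for demonstration, using NLTK's standard WordNet interface.

```python
import math

from nltk.corpus import wordnet as wn  # one-time setup: nltk.download('wordnet')


def entropy(pos: int, neg: int) -> float:
    """Shannon entropy of a set with `pos` positive and `neg` negative examples."""
    total = pos + neg
    if total == 0 or pos == 0 or neg == 0:
        return 0.0
    p = pos / total
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)


def covers(hypernym, entity: str) -> bool:
    """True if `hypernym` is a (transitive) hypernym of any noun synset of `entity`."""
    for syn in wn.synsets(entity, pos=wn.NOUN):
        if hypernym in syn.closure(lambda s: s.hypernyms()):
            return True
    return False


def best_generalization(entity: str, pos_examples, neg_examples):
    """Pick the hypernym of `entity` whose coverage of the examples yields the
    highest information gain -- a candidate concept for generalizing a rule
    about `entity` (hypothetical helper, not the paper's implementation)."""
    n = len(pos_examples) + len(neg_examples)
    if n == 0:
        return None, 0.0
    base = entropy(len(pos_examples), len(neg_examples))
    best, best_gain = None, 0.0
    for syn in wn.synsets(entity, pos=wn.NOUN):
        for hyper in syn.closure(lambda s: s.hypernyms()):
            # Split examples by whether the hypernym covers them.
            pos_in = sum(covers(hyper, e) for e in pos_examples)
            neg_in = sum(covers(hyper, e) for e in neg_examples)
            pos_out, neg_out = len(pos_examples) - pos_in, len(neg_examples) - neg_in
            split = ((pos_in + neg_in) / n) * entropy(pos_in, neg_in) \
                  + ((pos_out + neg_out) / n) * entropy(pos_out, neg_out)
            gain = base - split
            if gain > best_gain:
                best, best_gain = hyper, gain
    return best, best_gain


# Illustrative usage: in a cooking-game setting, positives might be entities on
# which an action (say, "cut") succeeded, negatives those on which it failed.
hyper, gain = best_generalization(
    "apple",
    pos_examples=["banana", "pear"],
    neg_examples=["knife", "table"],
)
print(hyper, gain)
```

Under this formulation, a hypernym that covers most positives and few negatives (plausibly something like `fruit.n.01` here) scores the highest gain, while an overly broad hypernym such as `entity.n.01` covers everything and earns no gain, which is one way to curb the excessive generalization the article warns about.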