Large Language Models (LLMs) struggle with tasks that require external knowledge. Knowledge Graphs (KGs) can enhance LLM reasoning, but existing methods either demand costly fine-tuning or retrieve noisy KG information. Question-Aware Knowledge Graph Prompting (QAP) addresses these limitations by dynamically assessing KG relevance and incorporating question embeddings into the reasoning process. Experimental results demonstrate that QAP outperforms state-of-the-art methods on Multiple Choice Question Answering tasks.