Large Language Models (LLMs) have improved our ability to process complex language, but detecting logical fallacies remains a challenge. A study introduces a novel prompt formulation approach for logical fallacy detection that enriches the input text with counterarguments, explanations, and goals. The method shows substantial improvements over previous models in both supervised and unsupervised settings.
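To make the enrichment idea concrete, the sketch below assembles a fallacy-detection prompt from the three extra components the summary mentions. This is a minimal illustration, not the study's actual template: the function name, field labels, and instruction wording are all assumptions.

```python
def build_enriched_prompt(argument: str, counterargument: str,
                          explanation: str, goal: str) -> str:
    """Assemble a fallacy-detection prompt enriched with a
    counterargument, an explanation, and a goal.

    Hypothetical template: the field labels and instructions are
    illustrative, not taken from the study itself.
    """
    return (
        "Determine whether the following argument contains a logical fallacy.\n"
        f"Argument: {argument}\n"
        f"Counterargument: {counterargument}\n"
        f"Explanation: {explanation}\n"
        f"Goal: {goal}\n"
        "Answer with the fallacy type, or 'none'."
    )

# Example of enriching a single input before sending it to an LLM.
prompt = build_enriched_prompt(
    argument="Everyone I know likes this policy, so it must be good.",
    counterargument="Popularity does not establish correctness.",
    explanation="The claim infers truth from widespread approval.",
    goal="Decide whether the reasoning is logically valid.",
)
print(prompt)
```

The key point is that the model no longer sees the raw argument alone; the added context gives it explicit material to reason against when classifying the fallacy.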