- Large Language Models (LLMs) and Large Reasoning Models (LRMs) have transformed AI with their text-generation and problem-solving capabilities.
- LLMs excel at tasks such as text generation, while LRMs are built to reason through multi-step problems.
- A recent study by Apple researchers examines how both model families behave as problem complexity increases.
- LRMs tend to overthink simple problems, where standard LLMs are often more accurate, but outperform LLMs on problems of medium complexity.
- Both LLMs and LRMs break down on highly complex puzzles, with LRMs exhibiting a "giving up" behavior: their reasoning effort shrinks just as the problems get harder.
- The overthinking of simple puzzles by LRMs may stem from exaggerated explanations mimicked from training data.
- LRMs' failure to scale reasoning effort with problem difficulty points to a limitation in how well their reasoning generalizes.
- The study has sparked debate over what actually constitutes effective reasoning in AI.
- Implications include the need for better evaluation methods, such as measuring accuracy across a sweep of complexity levels (a sketch follows this list), and for models whose reasoning adapts to problem difficulty.
- Future research should focus on models that execute logical steps accurately across the full range of problem complexity.
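
To make the evaluation point concrete, here is a minimal, hypothetical sketch of a complexity-sweep evaluation in Python. It assumes Tower of Hanoi as the test puzzle and a placeholder `solve_with_model` hook standing in for a real LLM/LRM call; these names are illustrative assumptions, not details from the study. The harness only shows the idea: verify a model's proposed plan exactly, and record accuracy as complexity (here, the number of disks) grows.

```python
"""Hypothetical complexity-sweep evaluation sketch (not the study's actual harness)."""

from typing import Callable, List, Tuple

Move = Tuple[int, int]  # (from_peg, to_peg), pegs numbered 0..2


def optimal_hanoi(n: int, src: int = 0, aux: int = 1, dst: int = 2) -> List[Move]:
    """Reference solver: the classic 2^n - 1 move sequence."""
    if n == 0:
        return []
    return (optimal_hanoi(n - 1, src, dst, aux)
            + [(src, dst)]
            + optimal_hanoi(n - 1, aux, src, dst))


def is_valid_solution(n: int, moves: List[Move]) -> bool:
    """Simulate the moves and check every disk ends up on the target peg."""
    pegs = [list(range(n, 0, -1)), [], []]  # peg 0 holds disks n..1 (top = 1)
    for src, dst in moves:
        if not pegs[src]:
            return False                      # moving from an empty peg
        disk = pegs[src][-1]
        if pegs[dst] and pegs[dst][-1] < disk:
            return False                      # larger disk placed on smaller one
        pegs[dst].append(pegs[src].pop())
    return pegs[2] == list(range(n, 0, -1))


def accuracy_by_complexity(
    solve_with_model: Callable[[int], List[Move]],
    max_disks: int = 10,
) -> dict:
    """Score one instance per complexity level; 1.0 if the plan is valid, else 0.0."""
    return {n: float(is_valid_solution(n, solve_with_model(n)))
            for n in range(1, max_disks + 1)}


if __name__ == "__main__":
    # Stand-in "model": solves small instances, returns an empty plan on larger
    # ones, mimicking the collapse on high-complexity puzzles described above.
    def toy_model(n: int) -> List[Move]:
        return optimal_hanoi(n) if n <= 4 else []

    print(accuracy_by_complexity(toy_model))
```

The `toy_model` stub simply mirrors the reported pattern for illustration; in a real harness it would be replaced by a call to the model under test, with multiple instances per complexity level rather than one.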