Source: Livescience
AI reasoning models aren’t as smart as they were cracked up to be, Apple study claims

  • Artificial intelligence (AI) reasoning models like Anthropic's Claude and OpenAI's o3 don't actually reason, Apple researchers argue.
  • These models, including DeepSeek's R1, focus on accuracy but fail when tasks become complex.
  • The study shows that frontier large language models face accuracy collapses at higher complexities.
  • Like other large language models, reasoning models learn statistical patterns from their training data via neural networks.
  • However, they tend to 'hallucinate,' providing erroneous responses due to statistical guesswork.
  • Reasoning bots attempt to boost accuracy using 'chain-of-thought' processes for complex tasks.
  • The study found generic models outperform reasoning models in low-complexity tasks.
  • As task complexity grew, reasoning models' accuracy collapsed to zero, indicating they fail to sustain coherent 'chains of thought' on harder problems.
  • Apple's study highlights limitations in current evaluation paradigms of AI reasoning models.
  • The study challenges claims of imminent artificial general intelligence (AGI) advancement in AI.
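The 'chain-of-thought' approach mentioned above is, at its core, a prompting pattern: rather than asking for an answer in one shot, the prompt instructs the model to write out intermediate steps first. A minimal sketch of the idea, with illustrative function names that are not from the Apple study:

```python
# Minimal sketch of chain-of-thought vs. direct prompting.
# These prompt templates are illustrative assumptions, not the
# exact prompts used in the study.

def direct_prompt(question: str) -> str:
    """Plain prompt: the model must produce the answer in one shot."""
    return f"Q: {question}\nA:"

def chain_of_thought_prompt(question: str) -> str:
    """Chain-of-thought prompt: the model is asked to reason step by
    step before answering, which tends to boost accuracy on multi-step
    tasks (until, per the study, complexity grows too high)."""
    return (
        f"Q: {question}\n"
        "Let's think step by step, showing each intermediate result, "
        "then state the final answer.\nA:"
    )

question = "If a train travels 60 km in 40 minutes, what is its speed in km/h?"
print(chain_of_thought_prompt(question))
```

Either prompt is then sent to a model as-is; the only difference is the instruction to externalize intermediate reasoning, which is what the study found breaks down at high complexity.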
