Microsoft has launched Correction, a service powered by small and large language models that automatically revises AI-generated text that is factually incorrect. The service flags text that may be erroneous, then fact-checks it by comparing the text against a grounding source of truth. In effect, it is a cross-referencing, copy-editor-style service that highlights and rewrites hallucinations.

However, experts say the system does not address the root cause of hallucinations: text-generating models are statistical systems that identify patterns in sequences of words and predict which words come next based on the examples they were trained on.
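To see why prediction and truth can diverge, consider a deliberately tiny sketch (not Microsoft's technology, just an illustration): a bigram model that counts which word follows which in its training text and always emits the most frequent continuation. If the training data contains errors, the model happily reproduces the dominant pattern with no notion of whether it is factually correct.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it in the training text."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the statistically most likely next word, with no regard for truth."""
    followers = model.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# A made-up toy corpus; the second line is a factual error in the training data.
corpus = [
    "the capital of france is paris",
    "the capital of france is lyon",
    "the capital of france is paris",
]

model = train_bigram(corpus)
print(predict_next(model, "is"))  # "paris" wins only because it is the more frequent pattern
```

The model answers correctly here purely because the right answer happens to dominate its training counts; shift the frequencies and it would confidently emit "lyon" instead. Real language models operate at vastly greater scale, but the underlying mechanism, pattern frequency rather than fact-checking, is the same, which is why a post-hoc service like Correction can catch hallucinations without preventing them.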