Microsoft has introduced a new service, called Correction, aimed at a significant flaw in AI models: hallucinations, or factually incorrect responses. The tool identifies and revises erroneous AI-generated content by cross-referencing it against trusted data sources, such as transcripts. Correction, available through Microsoft's Azure AI Content Safety API, works with various models, including OpenAI's GPT-4.
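To make the mechanism concrete, the sketch below shows roughly what a groundedness-check request of this kind could look like: the caller submits the model's output together with the trusted source documents, and the service flags (and optionally rewrites) claims the sources do not support. This is an illustration only; the endpoint path, API version, and field names are assumptions modeled on typical Azure REST conventions, not the documented Azure AI Content Safety contract.

```python
import json

# Assumed API version string, for illustration only.
API_VERSION = "2024-02-15-preview"


def build_groundedness_request(endpoint, generated_text, grounding_sources):
    """Assemble the URL and JSON body for a Correction-style check.

    The service is asked to compare `generated_text` against the
    supplied grounding documents (e.g. transcripts) and to rewrite
    any unsupported claims. All field names here are hypothetical.
    """
    url = (
        f"{endpoint}/contentsafety/text:detectGroundedness"
        f"?api-version={API_VERSION}"
    )
    body = {
        "text": generated_text,           # the AI output to verify
        "groundingSources": grounding_sources,  # trusted reference texts
        "correction": True,               # ask for a revision, not just a flag
    }
    return url, json.dumps(body)


# Example: the transcript contradicts the generated claim, so a
# correction-style service would be expected to revise it.
url, body = build_groundedness_request(
    "https://example.cognitiveservices.azure.com",
    "The meeting was held on Tuesday.",
    ["Transcript: the meeting took place on Wednesday."],
)
```

In practice the returned payload would indicate whether the text is grounded and, when correction is requested, include a revised version; consult Microsoft's official API reference for the actual request and response schema.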
While Microsoft promotes Correction as a way to boost AI reliability, experts remain skeptical. Researchers warn that hallucinations are deeply ingrained in how AI models operate: because these systems rely on statistical patterns rather than actual knowledge, completely eliminating false outputs may be impossible. They also caution that the tool could create new problems, such as giving users a false sense of security about AI outputs.
Despite these concerns, Microsoft is pushing to demonstrate the value of its AI tools, having invested billions in the technology. However, concerns about performance and cost are mounting, with some clients already pausing AI deployments due to inaccuracies and high expenses. Experts argue that AI, still in its developmental stages, is being rushed into industries without fully addressing its flaws.