In a report released on 18 September, the UK's Competition and Markets Authority (CMA) set out guiding principles for the responsible development and use of AI foundation models (‘FMs’) across all sectors of the economy. FMs are large AI models, such as the large language models that underpin Google’s Bard, OpenAI’s ChatGPT, and Microsoft’s Office 365 Copilot. The aim is to prevent a few large tech companies from dominating generative AI and to ensure that AI applications comply with existing UK laws.

The CMA’s seven guiding principles are: accountability, access, diversity, choice, flexibility, fair dealing, and transparency.

The new framework is intended to bring transparency and consistency to the AI regulatory landscape, increasing public trust in AI, facilitating responsible innovation, and helping UK companies become global AI players.

Why does it matter?

Like other regulators around the globe, the CMA, Britain’s antitrust body, is seeking to mitigate the potential harms of AI without hampering innovation. In April, G7 ministers approved a ‘risk-based’ approach intended to preserve an open environment for AI innovation. The EU is finalising its AI Act, and the US is working on AI regulation from multiple angles.

The CMA’s principles are designed to address future opportunities and risks, so that companies know how to design and operate AI models responsibly and users can trust that those models are safe and reliable.
The new guidelines also come a few weeks before the UK hosts a global AI safety summit, underscoring the country’s approach to AI policy as it takes on new regulatory powers to supervise digital markets. Rather than creating a new watchdog, Britain has chosen to divide responsibility for regulating AI between the CMA and other bodies that oversee health and safety and human rights.
