OpenAI CEO Sam Altman has spent the past month touring world capitals and advocating for global AI regulation. However, OpenAI has reportedly lobbied for significant elements of the world’s most comprehensive AI legislation, the EU’s AI Act, to be amended in ways that would reduce the regulatory burden on the company. TIME obtained documents about OpenAI’s engagement with EU officials through freedom of information requests.

OpenAI proposed amendments that appeared in the final text of the EU law, which was approved by the European Parliament on 14 June and will proceed to a final round of negotiations before being finalised as soon as January. Throughout 2022, the company repeatedly argued to European officials that the forthcoming AI Act should not classify its general-purpose AI systems as ‘high risk’, a designation that would impose stringent legal requirements including transparency, traceability, and human oversight. OpenAI argued that a system should not be considered high-risk merely because it is capable of being used in high-risk applications. This position echoes earlier lobbying by Microsoft and Google, which argued that stringent regulation should fall only on companies that explicitly apply AI to high-risk use cases, not on companies that build general-purpose AI systems.

OpenAI’s lobbying efforts appear to have succeeded: the final AI Act draft approved by EU legislators did not include language suggesting that general-purpose AI systems are intrinsically high risk. Instead, the approved text requires producers of powerful AI systems known as ‘foundation models’ to adhere to a more limited set of requirements. OpenAI encouraged the inclusion of foundation models as a distinct category in the legislation.

TIME acquired a copy of the White Paper on the European Union’s Artificial Intelligence Act that OpenAI sent to EU Commission and Council officials in September 2022. In the white paper, OpenAI argued against a proposed change to the AI Act that would have labelled generative AI systems like ChatGPT and DALL-E as high-risk.