Governments vs ChatGPT: Investigations around the world

ChatGPT, the AI-powered tool that allows you to chat and get answers to almost any question, has taken the world by storm. Now, governments around the world are starting to take notice of these tools and have launched investigations into OpenAI’s ChatGPT.

Here are the countries that are probing OpenAI. They’re in alphabetical order, but the list includes dates for all the official announcements and media releases if you prefer chronological order.


CANADA

Canada’s Office of the Privacy Commissioner opened an investigation on 4 April following a complaint alleging that OpenAI collected, used, and disclosed personal information without consent. 

Privacy Commissioner Philippe Dufresne said that the impact of AI on privacy is a crucial concern and that his office must keep pace with rapidly evolving technology. Dufresne’s office has not disclosed further details, as the investigation is ongoing.

EUROPEAN UNION

Prompted by Italy’s ban and Spain’s request to look into privacy concerns surrounding ChatGPT (see further down), the European Data Protection Board (EDPB) agreed on 13 April to launch a task force to coordinate the work of European data protection authorities.

So far, there’s little information about the EDPB’s new task force other than a decision to tackle ChatGPT-related action during the board’s next plenary, scheduled for 26 April. The minutes of the 13 April plenary session are not yet available.

FRANCE

As reported in the media on 14 April, France’s data protection regulator (CNIL) opened a formal investigation into ChatGPT after receiving five complaints, three of which were made public. The CNIL did not make any official announcements.

The first complaint, revealed by French news outlet L’Informé (paywalled), came from lawyer Zoé Villain, president of Janus International (an association for raising awareness of digital issues). In her complaint to the CNIL, sent on 4 April, she explained that OpenAI did not ask her to consent to any terms of use or privacy policy when she signed up. She was also unable to access the personal data the company retains about her, in breach of her right of access to such data.

The second complaint to the CNIL, also on 4 April, came from developer David Libeau, who explained in a blog post that OpenAI lacks transparency and fairness, and fails to safeguard people’s right to data protection.

The third CNIL complaint was initiated on 12 April by member of parliament Éric Bothorel after noticing that ChatGPT often gives erroneous information. To test the tool, he asked ChatGPT for information about himself, which Bothorel says was largely inaccurate (including his date of birth!).

Bothorel also took the initiative to organise a seminar on ChatGPT for French members of parliament. The event will take place at the National Assembly on 9 May.

Meanwhile, the French city of Montpellier has banned its officials from using ChatGPT. The decision was taken after deputy mayor Manu Reynaud recommended the ban as a precaution.

GERMANY

Germany’s Data Protection Conference (DSK), the joint body of the country’s independent federal and state data protection supervisory authorities, opened an investigation into ChatGPT (most likely on 10 April; the announcement is undated).

The announcement was made by the North Rhine-Westphalia watchdog; a similar announcement was made by the Commissioner for Data Protection and Freedom of Information of Hesse. Details are otherwise scarce, as the DSK itself has been mum about it.

IRELAND

Ireland’s data protection commissioner is in touch with Italy’s regulator over the temporary ban on ChatGPT in Italy, according to a media report: ‘We will coordinate with all EU data protection authorities in relation to this matter.’ No other details have been made available so far.

ITALY

Italy was the first Western country to impose a temporary limit on OpenAI’s ChatGPT. On 31 March, the Italian Data Protection Authority (the Garante) cited four reasons: a data breach reported on 20 March, unlawful data collection, inaccurate results, and the lack of an age verification system to keep children safe. Read more: Italy’s rage against the machine

In compliance, OpenAI geo-blocked access to ChatGPT for anyone residing in Italy. But its API (the interface that allows other applications to interact with it) and Microsoft Bing (which also uses ChatGPT’s underlying technology) remained accessible.

The Garante has now provided the company with a list of demands it must comply with by 30 April before the authority may lift its temporary ban. Among them, the authority wants OpenAI to let people know how personal data will be used to train the tool and to request consent from users before processing their personal data.

But a more challenging request is for the company to introduce measures for identifying accounts used by children by 30 September and to implement an age-gating system for underage users. The age-verification request coincides with efforts by the EU to improve how platforms confirm their users’ age. The EU’s new eID proposal, for instance, will introduce a much-needed framework of certification and interoperability for age-verification measures. The way OpenAI tackles this issue will be a testbed for new measures.

SPAIN

Spain’s Data Protection Agency (AEPD) announced an independent investigation on 13 April to examine OpenAI’s practices for possible breaches.

The AEPD also said that, the week before, it had asked the EU’s data protection watchdog to include ChatGPT on the agenda of its next plenary meeting (see the European Union section above).

SWITZERLAND

On 4 April, the Swiss Federal Data Protection and Information Commissioner (FDPIC) said it was in communication with the Italian Garante to obtain more information about its ban on ChatGPT.

The FDPIC hasn’t opened a formal investigation, so for the time being, it is advising users to find out how the company processes their data before submitting queries or uploading images. The same goes for companies using other AI tools: they must inform their users about how their data is processed and for which purposes.

UK

The UK has not initiated any investigation either, but on 3 April, the Information Commissioner’s Office reminded organisations using generative AI software that there are no exceptions to the rules governing personal data. 

USA

On 30 March, the Center for Artificial Intelligence and Digital Policy (CAIDP) filed a complaint with the US Federal Trade Commission (FTC), requesting it to open an investigation into OpenAI’s practices and to stop the company from issuing new commercial releases of GPT-4.

The concerns are broad: In its 47-page complaint, CAIDP argues that OpenAI’s practices are unfair and deceptive and pose numerous privacy risks; that the company provides no evidence of safety checks to keep children safe from harmful content; and that, overall, its practices violate emerging legal norms on AI governance.

CAIDP’s complaint was a long time coming. In March, the organisation’s president, Marc Rotenberg, and its chair and research director, Merve Hickok, appealed to US policymakers to introduce guardrails ensuring algorithmic transparency, fairness, accountability, and traceability across the entire AI lifecycle. A fortnight later, they hinted they would file a complaint with the FTC.

Read next: Governments vs ChatGPT: Regulation around the world

Sign up for the Digital Watch Weekly to receive the latest analysis and updates on global digital policy. It’s delivered to your inbox every Monday.

Governments vs ChatGPT: Regulation around the world

ChatGPT, the AI-powered tool that has taken the world by storm, has now caught governments’ eyes. In reaction to user complaints and worrying reports (such as those by Europol and the OECD) over data privacy, security, transparency, and other risks, several countries have initiated investigations into the practices of OpenAI, the company behind ChatGPT. 

Governments are also ramping up efforts to introduce regulation that can tackle AI tools such as ChatGPT, collectively known as general-purpose AI. These tools might not be designed with a high-risk use in mind, but could nonetheless be deployed in settings that lead to unintended consequences.

The three main hotspots for new rules are the EU, China, and the USA. Here are the latest developments:

CHINA

Right now, China is at the front of the regulation race. The Cyberspace Administration of China (CAC) wasted no time in proposing new measures for regulating generative AI services on 11 April, which are open for public comments until 10 May. 

The rules. Providers must ensure that content reflects the country’s core values and does not include anything that might disrupt the economic and social order. Content must not discriminate, spread false information, or infringe intellectual property rights. Tools must undergo a security assessment before being launched.

Who they apply to. The onus of responsibility falls on organisations and individuals that use these tools to generate text, images, and sounds for public consumption. They are also responsible for ensuring that pre-training data is lawfully sourced and correctly cited.

China’s no-nonsense approach to regulating tech companies, most evident in recent years, is a good indication that its rules will be the first to be implemented.

The industry is also calling for prudence. For example, the Payment & Clearing Association of China has advised its industry members to avoid uploading confidential information to ChatGPT and similar AI tools, over risks of cross-border data leaks.

EUROPE

Well-known for its tough rules on data protection and digital services and markets, the EU is inching closer to seeing its AI Act – proposed by the European Commission two years ago – materialise. While the Council of the EU has adopted its negotiating position, the draft is still being debated by the European Parliament. The act will then be negotiated between the three EU institutions (Parliament, Council, and Commission) in the so-called trilogues. Progress is slow but sure.

As policymakers debate the text, a group of international AI experts argue that general-purpose AI systems carry serious risks and must not be exempt under the new EU legislation. Under the proposed rules, certain accountability requirements apply only to high-risk systems. The experts argue that software such as ChatGPT needs to be assessed for its potential to cause harm and must also have commensurate safety measures in place. The rules must also look at the entire life cycle of a product.

What does this mean? If the rules are updated to cover, for instance, the development phase of a product, regulators won’t have to wait until after the fact to examine whether an AI model was trained on copyrighted material or private data. Rather, a product would be audited before its launch. This is quite similar to what China is proposing and what the USA will soon consider.

The draft rules on general-purpose AI are still up for debate at the European Parliament, so things might still change. 

USA

Well-known for its laissez-faire approach to regulating technological innovation, the USA is taking (baby) steps towards new AI rules.

The Biden administration is studying potential accountability measures for AI systems, such as ChatGPT. In its request for public feedback (which runs until 10 June), the National Telecommunications and Information Administration (NTIA) of the Department of Commerce is looking into new policies for AI audits and assessments that tackle bias, discrimination, data protection, privacy, and transparency. 

What this exercise covers. Everything and anything that falls under the definition of AI systems and automated systems, including technology that can ‘generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments’ (a definition by the US National Institute of Standards and Technology).

What’s next? There’s already a growing interest in regulating AI governance tools, the NTIA writes, so this exercise will help it advise the White House on how to develop an ecosystem of accountability rules. 

Separately, US Senate Democrats are working on new legislation spearheaded by Majority Leader Chuck Schumer. A draft is circulating, but don’t expect anything tangible anytime soon unless the initiative secures bipartisan support.

In a bid to prevent intellectual property infringements, music company Universal Music Group has asked streaming platforms, including Spotify and Apple, to block AI services from scraping melodies and lyrics from copyrighted songs, according to the Financial Times. The company fears AI systems are being trained on artists’ intellectual property. IPR lawsuits are looming.

And if you’re a music fan, here’s Heart on My Sleeve, an AI-generated song created by someone known only as ‘ghostwriter’ on YouTube, which simulates the voices of Canadian singers Drake and The Weeknd. You can guess what Universal’s reaction was.


Have you read this? Governments vs ChatGPT: Investigations around the world
