Prompted by the rise of generative artificial intelligence (AI) systems such as OpenAI’s ChatGPT, US lawmakers are currently exploring a variety of approaches to govern AI. President Joe Biden held a meeting with the CEOs of OpenAI, Microsoft Corp and Alphabet Inc at the White House. Congress is said to be similarly engaged. ‘People want to get ahead of AI, partly because they feel like they did not get ahead of social media,’ said Jack Clark, co-founder of high-profile AI startup Anthropic, whose CEO also attended the White House meeting.

Interviews with various stakeholders, including a US senator, congressional staffers, AI companies and interest groups, reveal the different options under discussion. The positions differ widely. Some Republicans reject the regulation of AI altogether, warning that it might ‘become the mechanism for government micromanagement of computer code like search engines and algorithms.’ Overall, stakeholders are debating the scope, focus and subjects of AI regulation.

A risk-based approach, similar to that of the European Union, is currently favoured by the business community, including IBM and the US Chamber of Commerce. A risk-based approach differentiates between the possible risks that AI poses to people’s lives and livelihoods. For example, AI used to diagnose cancer would be scrutinised by the Food and Drug Administration, whereas AI for entertainment would remain unregulated, because a ‘video recommendation may not pose as high of a risk as decisions made about health or finances,’ stated Jordan Crenshaw of the Chamber’s Technology Engagement Center.
Democratic Senator Michael Bennet, on the other hand, advocates a ‘value-based approach’ that would give priority to privacy and civil-liberty rights, stressing that AI could be used to recommend harmful content, such as white supremacist material, or lead to discriminatory mortgage practices. Senator Bennet recently introduced a bill to create a governmental AI task force. Regarding the subject of regulation, the debate ranges from regulating the developers of AI to regulating the companies that use it to interact with their customers.

Lawmakers like Senate Majority Leader Chuck Schumer are determined to tackle AI issues in a bipartisan way. Independent experts could test new AI technologies before their release, and transparency and access to relevant data should enable the government to avert harm. However, Congress is polarised, a presidential election is next year, and lawmakers are occupied with other major issues. For Big Tech companies, the priority is to avoid a ‘premature overreaction’, said Adam Kovacevich, head of the pro-tech Chamber of Progress.

OpenAI has proposed a standalone AI regulator, which could be called the Office for AI Safety and Infrastructure Security, or OASIS. This agency would require companies to obtain licences before training powerful AI models or operating the data centres that support them. Such a trustworthy body could ‘hold developers accountable’ to safety standards and, above all, provide agreement on the standards themselves and on which risks should be mitigated. The last major regulator to be created, the Consumer Financial Protection Bureau, was set up after the 2007-2008 financial crisis.