The European Commission is poised to oppose a US-led effort to exclude the private sector from the first international treaty on Artificial Intelligence (AI). Drafted by the Council of Europe, the treaty seeks to establish a comprehensive framework for AI governance, human rights, democracy, and the rule of law. The US has proposed an automatic exemption for companies, leaving it to participating countries to decide whether to cover the private sector. The Commission rejects this approach, contending that it would erode human rights safeguards in private industry and run counter to international law.
Additionally, the EU executive is striving to align the treaty with the EU's AI Act, advocating a wide-ranging scope that covers the entire lifecycle of AI systems, with carve-outs for research and development. The Commission also endorses excluding AI systems developed exclusively for national security and defense, consistent with existing EU legislation. While seeking alignment, the Commission is cautious about going beyond the AI Act's provisions unless doing so strengthens human rights protection, with the exception of endorsing whistleblower safeguards.
Why does this matter?
The disagreement between the EU and the US over exempting the private sector has significant implications for the future regulation and accountability of AI technologies. The outcome will shape the level of human rights protection in the rapidly advancing AI sector, influencing how nations navigate the ethical challenges posed by AI development and deployment. There have been growing concerns that the private sector is exerting too much influence over digital policies, including the AI Act.