As the EU finalises its groundbreaking AI Act, major technology firms are lobbying for lenient rules to minimise the risk of multibillion-dollar fines. The AI Act, agreed in May, is the world’s first comprehensive legislation governing AI, but the details of how general-purpose AI systems such as ChatGPT will be regulated remain unclear. The EU has opened the drafting of the accompanying code of practice to companies, academics, and other stakeholders, drawing a surge of interest with nearly 1,000 applications.
A key issue at stake is how AI companies, including OpenAI and Stability AI, use copyrighted content to train their models. While the AI Act requires companies to disclose summaries of the data they use, businesses are divided over how much detail to include: some advocate protecting trade secrets, while others, notably content creators, demand fuller transparency about whether their work has been used. Major players such as Google and Amazon have expressed their commitment to the process, but concerns about transparency are growing, with some accusing the tech giants of trying to avoid scrutiny.
The debate over transparency and copyright has sparked a broader discussion about the balance between regulation and innovation. Critics argue that the EU’s focus on regulation could stifle technological advancement, while others stress the importance of oversight in preventing abuse. Former European Central Bank chief Mario Draghi recently urged the EU to strengthen its industrial policy to compete with China and the US, emphasising the need for swift decision-making and significant investment in the tech sector.
The finalised code of practice, expected next year, will not be legally binding but will serve as a guideline for compliance, and companies will have until August 2025 to meet the new standards. Non-profits and startups are also playing a role in the drafting, yet some fear that big tech firms could weaken essential transparency measures, underscoring the ongoing tension between innovation and regulation in the digital era.