Meta and Spotify criticise EU decisions on AI
Several tech companies, including Meta and Spotify, have criticised the European Union for what they describe as inconsistent decision-making on data privacy and AI. A joint letter from firms, researchers, and industry bodies warned that Europe risks losing competitiveness because of fragmented regulation, and urged data privacy regulators to deliver clear, harmonised decisions so that European data can be used in AI training for the region's benefit.
The companies voiced concern about the unpredictability of recent decisions made under the General Data Protection Regulation (GDPR). Meta, the parent company of Facebook and Instagram, recently paused plans to use European users' data for AI development following pressure from EU privacy authorities. Uncertainty over which data can be used to train AI models has become a major issue for businesses.
Tech firms have delayed product releases in Europe while seeking legal clarity. Meta postponed the EU launch of its Twitter-like app Threads, and Google has delayed the rollout of AI tools in the EU market. The introduction of Europe's AI Act earlier this year added further regulatory requirements, which firms argue complicate innovation.
The European Commission insists that all companies must comply with data privacy rules, and Meta has already faced significant penalties for breaches. The letter stresses the need for swift regulatory decisions to ensure Europe can remain competitive in the AI sector.
EU’s AI Act faces tech giants’ resistance
As the EU finalises its groundbreaking AI Act, major technology firms are lobbying for lenient implementing rules to minimise the risk of multi-billion-dollar fines. The AI Act, agreed in May, is the world's first comprehensive legislation governing AI, but the details of how general-purpose AI systems such as ChatGPT will be regulated remain unclear. The EU has opened the drafting of the accompanying codes of practice to companies, academics, and other stakeholders, drawing strong interest with nearly 1,000 applications.
A key issue at stake is how AI companies, including OpenAI and Stability AI, use copyrighted content to train their models. The AI Act requires companies to disclose summaries of the data used in training, but businesses are divided over how much detail to include: some advocate protecting trade secrets, while others, including content creators, demand fuller disclosure of whether their work has been used. Major players such as Google and Amazon have expressed commitment to the process, but concerns about transparency are growing, with some accusing tech giants of trying to avoid scrutiny.
The debate over transparency and copyright has sparked a broader discussion on the balance between regulation and innovation. Critics argue that the EU’s focus on regulation could stifle technological advancements, while others stress the importance of oversight in preventing abuse. Former European Central Bank chief Mario Draghi recently urged the EU to improve its industrial policy to compete with China and the US, emphasising the need for swift decision-making and significant investment in the tech sector.
The finalised code of practice, expected next year, will not be legally binding but will serve as a compliance guideline. Companies will have until August 2025 to meet the new standards, and non-profits and startups are also involved in the drafting. Some fear that big tech firms could weaken essential transparency measures, underscoring the ongoing tension between innovation and regulation in the digital era.