Google faces potential breakup as DOJ targets search monopoly

The US Department of Justice has proposed remedies to dismantle Google’s dominance of the search market, which analysts warn could undermine the company’s primary profit source and slow its progress in AI. The DOJ may seek to compel Google to divest parts of its business, including the Chrome browser and the Android operating system, and is also weighing measures such as barring the collection of sensitive user data, requiring transparency in search results, and allowing websites to opt out of having their content used for AI training.
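
One remedy on the list, letting websites opt out of AI training, loosely resembles a convention that already exists on the open web: sites can disallow known AI crawlers, such as OpenAI’s GPTBot, in their robots.txt file. As a minimal sketch of that general idea (not of anything the DOJ has specified), the snippet below uses Python’s standard urllib.robotparser and a placeholder example.com to show how a well-behaved crawler would check for such an opt-out before fetching a page.

    # Sketch only: checks whether a site's robots.txt allows the GPTBot
    # crawler. example.com is a placeholder; this illustrates today's
    # voluntary opt-out convention, not the DOJ's proposed remedy.
    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()

    # False means the site has disallowed GPTBot for this path,
    # i.e. it has opted out of that crawler.
    print(rp.can_fetch("GPTBot", "https://example.com/some-article"))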

The proposed changes have already affected Alphabet’s stock, which fell 1.5% after the announcement. Analysts say that, if implemented, the remedies could cut into Google’s revenue while opening opportunities for competitors such as DuckDuckGo and Microsoft Bing, as well as AI players such as Meta and Amazon. With Google’s share of the US search ad market expected to fall below 50% by 2025 for the first time in over a decade, the remedies are viewed as essential to creating a more competitive landscape.

Despite the ambitious nature of the DOJ’s proposals, some experts are sceptical about their feasibility. Adam Kovacevich of the Chamber of Progress argues that the remedies could face legal challenges and may not survive the appeals process. While investors appear doubtful that a forced breakup of Google will take place, the situation underscores the growing scrutiny and pressure the tech giant faces in a fast-changing market.

California halts AI bill amid industry concerns

California Governor Gavin Newsom has vetoed a contentious AI safety bill, citing concerns that it might stifle innovation and drive companies out of the state. The bill, proposed by Senator Scott Wiener, aimed to impose strict regulations on AI systems, including safety testing and methods for deactivating advanced AI models. Newsom acknowledged the need for oversight but criticised the bill for applying uniform standards to all AI systems, regardless of their specific risk levels.

Despite the veto, Newsom emphasised his commitment to AI safety, directing state agencies to assess the risks of potential catastrophic events tied to AI use. He has also called on AI experts to help develop regulations that are science-based and focus on actual risks. With AI technology advancing rapidly, he plans to work on a more tailored approach with the legislature in the next session.

The AI bill faced mixed reactions from both the tech industry and lawmakers. While companies like Google, Microsoft, and Meta opposed the measure, Tesla’s Elon Musk supported it, arguing that stronger regulations are essential before AI becomes too powerful. The tech industry praised Newsom’s decision, stating that California’s tech economy thrives on competition and openness.

Newsom’s veto has raised questions about the future of AI regulation, both in California and across the US. With federal efforts to regulate AI still stalled, the debate over how best to balance innovation and safety continues.

Super Micro faces US investigation after Hindenburg allegations

The United States Department of Justice is investigating Super Micro Computer, according to a Wall Street Journal report citing sources familiar with the matter. Following the news, shares of the AI server maker fell by about 5%.

Earlier in the month, Super Micro had denied allegations made by short-seller Hindenburg Research, which accused the company of ‘accounting manipulation’ and cited issues like undisclosed related-party transactions and failure to comply with export controls.

Hindenburg revealed its short position in Super Micro in August, prompting closer scrutiny of the company’s financial practices. Super Micro has dismissed the report as containing ‘false or inaccurate statements’. The server maker did not immediately respond to Reuters’ requests for comment.

Meta and Spotify criticise EU decisions on AI

Several tech companies, including Meta and Spotify, have criticised the European Union for what they describe as inconsistent decision-making on data privacy and AI. A collective letter from firms, researchers, and industry bodies warned that Europe risks losing competitiveness because of fragmented regulation, and urged data privacy regulators to deliver clear, harmonised decisions so that European data can be used in AI training for the region’s benefit.

The companies voiced concerns about the unpredictability of recent decisions made under the General Data Protection Regulation (GDPR). Meta, owner of Facebook and Instagram, recently paused plans to collect European user data for AI development following pressure from EU privacy authorities. Uncertainty over which data can be used to train AI models has become a major issue for businesses.

Tech firms have delayed product releases in Europe while seeking legal clarity. Meta postponed the European launch of its Twitter-like app Threads, and Google has delayed rolling out AI tools in the EU market. The introduction of Europe’s AI Act earlier this year added further regulatory requirements, which firms argue complicate innovation.

The European Commission insists that all companies must comply with data privacy rules, and Meta has already faced significant penalties for breaches. The letter stresses the need for swift regulatory decisions to ensure Europe can remain competitive in the AI sector.

EU’s AI Act faces tech giants’ resistance

As the EU finalises its groundbreaking AI Act, major technology firms are lobbying for lenient regulations to minimise the risk of multi-billion dollar fines. The AI Act, agreed upon in May, is the world’s first comprehensive legislation governing AI. However, the details of how general-purpose AI systems like ChatGPT will be regulated remain unclear. The EU has opened the process to companies, academics, and other stakeholders to help draft the accompanying codes of practice, drawing a surge of interest with nearly 1,000 applications.

A key issue at stake is how AI companies, including OpenAI and Stability AI, use copyrighted content to train their models. While the AI Act requires companies to disclose summaries of the data they use, businesses are divided over how much detail to include: some advocate protecting trade secrets, while content creators demand transparency about how their work is used. Major players like Google and Amazon have expressed their commitment to the process, but there are growing concerns about transparency, with some accusing tech giants of trying to avoid scrutiny.

The debate over transparency and copyright has sparked a broader discussion on the balance between regulation and innovation. Critics argue that the EU’s focus on regulation could stifle technological advancements, while others stress the importance of oversight in preventing abuse. Former European Central Bank chief Mario Draghi recently urged the EU to improve its industrial policy to compete with China and the US, emphasising the need for swift decision-making and significant investment in the tech sector.

The finalised code of practice, expected next year, will not be legally binding but will serve as a guideline for compliance. Companies will have until August 2025 to meet the new standards, with non-profits and startups also playing a role in drafting. Some fear that big tech firms could weaken essential transparency measures, underscoring the ongoing tension between innovation and regulation in the digital era.

Facebook and Instagram data to power Meta’s AI models

Meta Platforms will soon start using public posts on Facebook and Instagram to train its AI models in the UK. The company had paused its plans after regulatory concerns from the Irish privacy regulator and Britain’s Information Commissioner’s Office (ICO). The AI training will involve content such as photos, captions, and comments but will exclude private messages and data from users under 18.
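
Taken together, those reported rules amount to a simple eligibility test. The sketch below encodes them in Python purely as an illustration; the Post record and the is_training_eligible function are hypothetical, not Meta’s actual pipeline.

    # Hypothetical sketch of the reported inclusion rules, not Meta's code:
    # train only on public photos, captions, and comments; never on private
    # messages or content from users under 18; honour filed objections.
    from dataclasses import dataclass

    @dataclass
    class Post:
        author_age: int
        kind: str              # "photo", "caption", "comment", or "message"
        is_public: bool
        author_objected: bool  # the user filed an objection

    def is_training_eligible(post: Post) -> bool:
        if post.author_objected or post.author_age < 18:
            return False
        if not post.is_public or post.kind == "message":
            return False
        return post.kind in {"photo", "caption", "comment"}

    # Example: a public comment by an adult who has not objected is eligible.
    print(is_training_eligible(
        Post(author_age=30, kind="comment", is_public=True,
             author_objected=False)))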

Meta faced privacy-related backlash earlier in the year, leading to its decision to halt the AI model launch in Europe. The company has since engaged with UK regulators, resulting in a clearer framework that allows the AI training plans to proceed. The new strategy simplifies the way users can object to their data being processed.

From next week, Facebook and Instagram users in the UK will receive in-app notifications explaining how their public posts may be used for AI training. Users will also be informed on how to object to the use of their data. Meta has extended the window in which objections can be filed, aiming to address transparency concerns raised by both the ICO and advocacy groups.

In June, Meta’s AI plans faced opposition from privacy advocacy groups such as NOYB, which urged regulators to intervene. These groups argued that Meta’s notifications did not fully meet the EU’s privacy and transparency standards. Meta’s latest updates are seen as an effort to align with these regulatory demands.

Major data centre investment by Amazon in the UK

Amazon has announced plans to invest £8 billion in the UK to expand its data centre operations. The investment will be made by Amazon Web Services (AWS) over the next five years, aiming to meet growing demand for cloud computing, largely driven by AI advancements.

The new investment adds to the £3 billion AWS has put into the UK since 2022, with facilities already operating in London and Manchester. The company expects the project to contribute £14 billion to the UK economy and to support more than 14,000 jobs by the end of 2028.

AWS’s investment follows significant European cloud computing expansions, including substantial projects in Spain and Germany. After a pause last year, many corporate clients have resumed cloud spending, driven by a renewed interest in AI.

The announcement has been welcomed by the UK government, with Finance Minister Rachel Reeves highlighting its importance ahead of an upcoming investment summit. The exact locations of the new data centres will not be disclosed for security reasons, but they are expected to serve growing demand around London.