US tech leaders oppose proposed export limits

A prominent technology trade group has urged the Biden administration to reconsider a proposed rule that would restrict global access to US-made AI chips, warning that the measure could undermine America’s leadership in the AI sector. The Information Technology Industry Council (ITI), representing major companies like Amazon, Microsoft, and Meta, expressed concerns that the restrictions could unfairly limit US companies’ ability to compete globally while allowing foreign rivals to dominate the market.

The proposed rule, expected to be released as soon as Friday, is part of the Commerce Department’s broader strategy to regulate AI chip exports and prevent misuse, particularly by adversaries like China. The restrictions aim to curb the potential for AI to enhance China’s military capabilities. However, in a letter to Commerce Secretary Gina Raimondo, ITI CEO Jason Oxman criticised the administration’s urgency in finalising the rule, warning of ‘significant adverse consequences’ if implemented hastily. Oxman called for a more measured approach, such as issuing a proposed rule for public feedback rather than enacting an immediate policy.

Industry leaders have been vocal in their opposition, describing the draft rule as overly broad and damaging. The Semiconductor Industry Association raised similar concerns earlier this week, and Oracle’s Executive Vice President Ken Glueck slammed the measure as one of the most disruptive ever proposed for the US tech sector. Glueck argued the rule would impose sweeping regulations on the global commercial cloud industry, stifling innovation and growth.

While the administration has yet to comment on the matter, the growing pushback highlights the tension between safeguarding national security and maintaining US dominance in the rapidly evolving field of AI.

Grok introduces AI-powered features to wider audience

Elon Musk’s AI venture, xAI, has unveiled a standalone iOS app for its chatbot, Grok, marking its first major expansion beyond the X platform. The app, currently in beta testing across Australia and a few other regions, offers users an array of generative AI features, including real-time web access, text rewriting, summarisation, and even image generation from text prompts.

Grok, described as a ‘maximally truthful and curious’ assistant, is designed to provide accurate answers, create photorealistic images, and analyse uploaded pictures. While previously restricted to paying X subscribers, a free version of the chatbot was launched in November and has recently been made accessible to all users.

The app also serves as a precursor to a dedicated web platform, Grok.com, which is in the works. xAI has touted the chatbot’s ability to produce detailed and unrestricted image content, even allowing creations involving public figures and copyrighted material. This open approach sets Grok apart from other AI tools with stricter content policies.

As the beta rollout progresses, Grok is poised to become a versatile tool for users seeking generative AI capabilities in a dynamic and user-friendly interface.

Blade Runner producer takes legal action over AI image use

Alcon Entertainment, the producer behind Blade Runner 2049, has filed a lawsuit against Tesla and Warner Bros, accusing them of misusing AI-generated images that resemble scenes from the movie to promote Tesla’s new autonomous cybercab. Filed in California, the lawsuit alleges violations of US copyright law and claims Tesla falsely implied a partnership with Alcon through the use of the imagery.

Alcon stated that it had rejected Warner Bros’ request to use official Blade Runner images for Tesla’s cybercab event on October 10. Despite this, Tesla allegedly proceeded with AI-created visuals that mirrored the film’s style. Alcon is concerned this could confuse its brand partners, especially ahead of its upcoming Blade Runner 2099 series for Amazon Prime.

Though no specific damages were mentioned, Alcon emphasised that it has invested hundreds of millions in the Blade Runner brand and argued that Tesla’s actions had caused substantial financial harm.

London-based company faces scrutiny for AI models misused in propaganda campaigns

A London-based company, Synthesia, known for its lifelike AI video technology, is under scrutiny after its avatars were used in deepfake videos promoting authoritarian regimes. These AI-generated videos, featuring the likenesses of models such as Mark Torres and Connor Yeates, falsely showed them endorsing the military leader of Burkina Faso, causing distress to the models involved. Despite the company’s claims of strengthened content moderation, many affected models were unaware their images had been misused until journalists informed them.

In 2022, actors including Torres and Yeates were hired to take part in Synthesia’s AI model shoots for corporate projects. They later discovered their avatars had been used in political propaganda, a use to which they had not consented. The discovery caused emotional distress, as they feared personal and professional damage from the fake videos. Despite Synthesia’s efforts to ban accounts using its technology for such purposes, the harmful content spread online, including on platforms like Facebook.

Synthesia has expressed regret, stating it will continue to improve its processes. However, the long-term impact on the actors remains, with some questioning the lack of safeguards in the AI industry and warning of the dangers of handing over one’s likeness to companies without adequate protections.