FBI warns of AI-driven fraud
The FBI has raised alarms about the growing use of artificial intelligence in scams, particularly through deepfake technology. These AI-generated videos and audio clips can convincingly imitate real people, allowing criminals to impersonate family members, executives, or even law enforcement officials. Victims are often tricked into transferring money or disclosing personal information.
Deepfake scams are becoming more prevalent in the US due to the increasing accessibility of generative AI tools. Criminals exploit these technologies to craft realistic phishing emails, fake social media profiles, and fraudulent investment opportunities. Some have gone as far as generating real-time video calls to enhance their deception.
To protect against these threats, experts recommend limiting the personal information shared online, enabling two-factor authentication, and verifying any unusual or urgent communications. The FBI stresses the importance of vigilance, especially as AI-driven scams become more sophisticated and harder to detect. By understanding these risks and adopting stronger security practices, individuals can safeguard themselves against the growing menace of deepfake fraud.
AI and speed cameras to tackle dangerous Devon road
A notorious stretch of the A361 in Devon will receive £1 million in AI and speed camera technology to improve road safety. The investment, part of a £5 million grant from the Department for Transport (DfT), comes after the road was identified as ‘high risk,’ with three fatalities and 30 serious injuries recorded between 2018 and 2022. AI-powered cameras will detect offences such as drivers using mobile phones and failing to wear seatbelts, while speed cameras will be installed at key locations.
A pilot scheme last August recorded nearly 1,800 potential offences along the route, highlighting the need for stricter enforcement. The latest plans include three fixed speed cameras at Ilfracombe, Knowle, and Ashford, as well as two average speed camera systems covering longer stretches of the road. AI cameras will be rotated between different locations to monitor driver behaviour more effectively.
Councillor Stuart Hughes, Devon County Council’s cabinet member for highways, expressed pride in the region’s adoption of AI for road safety improvements. The remaining £4 million from the DfT grant will be allocated to upgrading junctions and improving access for pedestrians and cyclists along the A361.
Faculty AI develops AI for military drones
Faculty AI, a consultancy with significant AI expertise, has been developing AI technologies for both civilian and military applications. Known for its close work with the UK government on AI safety, the NHS, and education, Faculty is also exploring the use of AI in military drones. The company has been involved in testing AI models for the UK’s AI Safety Institute (AISI), which was established to study the safety implications of advanced AI.
While Faculty has worked extensively on AI in non-lethal areas, its military work raises concerns because of the potential use of autonomous systems in weapons, including drones. Though Faculty has not disclosed whether its AI work extends to lethal drones, it continues to face scrutiny over its dual roles: advising the government on AI safety while also working with defence clients.
The company has also generated some controversy because of its growing influence in both the public and private sectors. Some experts, including Green Party members, have raised concerns about potential conflicts of interest due to Faculty’s widespread government contracts and its private sector involvement in AI, such as its collaborations with OpenAI and defence firms. Faculty’s work on AI safety is seen as crucial, but critics argue that its broad portfolio could create a risk of bias in the advice it provides.
Despite these concerns, Faculty maintains that its work is guided by strict ethical policies, and it has emphasised its commitment to ensuring AI is used safely and responsibly, especially in defence applications. As AI continues to evolve, experts call for caution, with discussions about the need for human oversight in the development of autonomous weapons systems growing more urgent.
US tech leaders oppose proposed export limits
A prominent technology trade group has urged the Biden administration to reconsider a proposed rule that would restrict global access to US-made AI chips, warning that the measure could undermine America’s leadership in the AI sector. The Information Technology Industry Council (ITI), representing major companies like Amazon, Microsoft, and Meta, expressed concerns that the restrictions could unfairly limit US companies’ ability to compete globally while allowing foreign rivals to dominate the market.
The proposed rule, expected to be released as soon as Friday, is part of the Commerce Department’s broader strategy to regulate AI chip exports and prevent misuse, particularly by adversaries like China. The restrictions aim to curb the potential for AI to enhance China’s military capabilities. However, in a letter to Commerce Secretary Gina Raimondo, ITI CEO Jason Oxman criticised the administration’s urgency in finalising the rule, warning of ‘significant adverse consequences’ if implemented hastily. Oxman called for a more measured approach, such as issuing a proposed rule for public feedback rather than enacting an immediate policy.
Industry leaders have been vocal in their opposition, describing the draft rule as overly broad and damaging. The Semiconductor Industry Association raised similar concerns earlier this week, and Oracle’s Executive Vice President Ken Glueck slammed the measure as one of the most disruptive ever proposed for the US tech sector. Glueck argued the rule would impose sweeping regulations on the global commercial cloud industry, stifling innovation and growth.
While the administration has yet to comment on the matter, the growing pushback highlights the tension between safeguarding national security and maintaining US dominance in the rapidly evolving field of AI.
Amazon invests $11 billion in Georgia
Amazon Web Services (AWS) has announced an $11 billion investment to build new data centres in Georgia, aiming to support the growing demand for cloud computing and AI technologies. The facilities, located in Butts and Douglas counties, are expected to create at least 550 high-skilled jobs and position Georgia as a leader in digital innovation.
The move highlights a broader trend among tech giants investing heavily in AI-driven advancements. Last week, Microsoft revealed an $80 billion plan for fiscal 2025 to expand data centres for AI training and cloud applications. These facilities are critical for supporting resource-intensive AI technologies like machine learning and generative models, which require vast computational power and specialised infrastructure.
The surge in AI infrastructure has also raised concerns about energy consumption. A report from the Electric Power Research Institute suggests data centres could account for up to 9% of US electricity usage by 2030. To address this, Amazon has secured energy supply agreements with utilities like Talen Energy in Pennsylvania and Entergy in Mississippi, ensuring reliable power for its expanding operations.
Amazon’s commitment underscores the growing importance of AI and cloud services, as companies race to meet the demands of a rapidly evolving technological landscape.
Debate over AI regulation intensifies amidst innovation and safety concerns
In recent years, debates over AI have intensified, oscillating between catastrophic warnings and optimistic visions. Technologists, once at the forefront of calling for caution, have been overshadowed by the tech industry’s emphasis on generative AI’s lucrative potential.
Dismissed as ‘AI doomers,’ critics warn of existential threats—from mass harm to societal destabilisation—while Silicon Valley champions the transformative benefits of AI, urging fewer regulations to accelerate innovation. 2023 marked a pivotal year for AI awareness, with luminaries like Elon Musk and over 1,000 experts calling for a development pause, citing profound risks.
US President Biden’s AI executive order aimed to safeguard Americans, and regulatory discussions gained mainstream traction. However, 2024 saw this momentum falter as investment in AI skyrocketed and safety-focused voices dwindled.
High-profile debates, like California’s SB 1047—a bill addressing catastrophic AI risks—ended in a veto, highlighting resistance from powerful tech entities. Critics argued that such legislation stifled innovation, while proponents lamented the lack of long-term safety measures.
Amid this tug-of-war, optimistic narratives, like Marc Andreessen’s essay ‘Why AI Will Save the World,’ gained prominence. Advocating rapid, unregulated AI development, Andreessen and others argued this approach would bolster competitiveness and prevent monopolisation.
Yet, detractors questioned the ethics of prioritising profit over societal concerns, especially as cases like AI-driven child safety failures underscored emerging risks.
Why does it matter?
Looking ahead to 2025, the AI safety movement faces an uphill battle. Policymakers hint at revisiting stalled regulations, signalling hope for progress. However, with influential players opposing stringent oversight, the path to balanced AI governance remains uncertain. As society grapples with AI’s rapid evolution, the challenge lies in addressing its vast potential and inherent risks.
AI data centres strain US power grid
The increasing number of data centres powering AI could pose significant challenges for the US power grid, as reported by Bloomberg. Findings indicate a connection between data centre activity and ‘bad harmonics,’ a term describing electrical power distortions that can damage appliances, heighten fire risks, and lead to power outages.
Bloomberg’s analysis, using data from Whisker Labs and DC Byte, revealed that over half of homes with the worst power distortions are located within 20 miles of active data centres. AI-driven centres, with their unpredictable energy needs, exacerbate these grid strains, pushing infrastructure beyond its designed limits.
Experts, including Aman Joshi of Bloom Energy, warn that no current grid can handle such intense load fluctuations from multiple data centres. While some utility companies question these findings, the report underscores the urgent need to address the interplay between technological expansion and energy stability.
Goodman Group surges as AI boom fuels data centre demand
Goodman Group has emerged as a standout performer in Australia’s real estate sector this year, with its stock soaring 45.8%, marking its strongest run since 2006. The surge is driven by a boom in AI, which has sparked frenzied demand for data centres. Global tech giants like Amazon, Microsoft, and Meta have poured billions into expanding their data centre capacity, fuelling growth for developers like Goodman.
At the end of September, 42% of Goodman’s A$12.8 billion ($7.96 billion) development portfolio was dedicated to data centres, a jump from 37% last year. Analysts like John Lockton of Sandstone Insights see this focus as a key strength, noting the company’s access to land with power supply, a critical factor for future data-centre projects.
Despite the optimism, some caution remains. Analysts warn that soaring valuations in the data-centre sector could cool investor enthusiasm. Goodman’s high stock prices and concerns over risks like obsolescence and increased competition raise questions about long-term returns. Nonetheless, with robust demand for AI infrastructure, Goodman’s pipeline and strategic positioning keep it well-poised for continued growth.