Google enhances Android security with new anti-theft tools

Google is gradually rolling out new security features to protect user data, focusing on preventing unauthorised access in cases of theft. The latest tools, which include Theft Detection Lock, Offline Device Lock, and Remote Lock, were announced in May and are becoming available on various Android devices.

Theft Detection Lock uses AI to lock the screen when it detects movement commonly associated with theft, such as someone snatching the phone. Offline Device Lock automatically secures the screen if a phone stays offline for an extended period, while Remote Lock allows users to lock their phone remotely using only their phone number, even if they can’t log into Find My Device.

Some users have reported seeing the features on devices like the Xiaomi 14T Pro, though others may need to wait as Google rolls out these updates over time. Users are encouraged to ensure Google Play Services is updated to potentially access these features sooner.

Theft Detection Lock and Offline Device Lock are supported on Android 10 and later, while Remote Lock works on devices running Android 5 and later.

EU enlists experts to draft AI regulation rules

The European Union has chosen a team of AI experts to help shape the guidelines for compliance with its upcoming AI Act. On 30 September 2024, the European Commission convened the first meeting of working groups responsible for drafting a ‘code of practice’ to guide how companies should meet the law’s requirements. The selected experts include figures like AI pioneer Yoshua Bengio, former UK policy adviser Nitarshan Rajkumar, and Marietje Schaake from Stanford University.

These working groups, which also feature representatives from major tech companies such as Google and Microsoft, will address issues like copyright and risk management. Although the code of practice won’t be legally binding, it will serve as a checklist for companies to prove compliance with the AI Act, which takes full effect in 2025. Firms that claim to follow the law but ignore the code may face legal challenges.

A key focus will be on the transparency of AI training data, a contentious issue in the industry. Some AI companies resist sharing details about the data used to train their models, citing trade secrets. The code of practice is expected to clarify how much information companies will need to disclose, with the potential for increased legal scrutiny over the use of copyrighted content.

Spotify enhances AI-powered playlists for premium users

Spotify is expanding its AI Playlist tool, which helps premium users create personalised playlists using generative AI, to four new markets, including the United States and Canada. Currently in beta, the feature allows subscribers to tailor their playlists with additional text prompts, enhancing the listening experience.

Launched earlier in the United Kingdom and Australia, AI Playlist is now being extended to Ireland and New Zealand as part of Spotify’s strategy to attract new subscribers. The company aims to differentiate itself from growing competition with Apple and Amazon by integrating more AI-driven features into its platform.

While the tool offers users customisable music choices, it is currently limited to music-related prompts and will not respond to queries about current events or brands. Spotify also provides other AI-powered tools like ‘daylist’, a playlist that updates daily, and ‘AI DJ’, which recommends music based on individual listening habits.

Spotify’s paying subscribers rose 12% year-on-year, reaching 246 million in the second quarter. The company’s continued focus on AI innovations reflects its commitment to offering unique features to its global user base.

Voiceitt brings personalised AI speech recognition to remote work

Israeli company Voiceitt aims to revolutionise communication for people with speech impairments through its AI-powered speech recognition system. Using personalised voice models, Voiceitt helps those affected by conditions like cerebral palsy, Parkinson’s, and Down syndrome to communicate more effectively with both people and digital devices.

Voiceitt, launched in 2021 as a vocal translator app, is now integrated with platforms such as WebEx, Zoom, and Microsoft Teams. It allows users to convert non-standard speech into captions and text for video calls and written documents, opening up new opportunities for remote work and communication.

Co-founder Sara Smolley views the project as a personal mission, inspired by her grandmother’s struggle with Parkinson’s disease. Voiceitt is designed to offer accessibility in the workplace and beyond, with users like accessibility advocate Colin Hughes praising its accuracy but also advocating for more features.

As the field of speech recognition advances, Voiceitt partners with major platforms and complies with strict privacy regulations to protect user data. Smolley believes the technology will significantly improve users’ independence and enjoyment of modern technology.

UK user data pulled from LinkedIn’s AI development

LinkedIn has paused the use of UK user data to train its AI models after concerns were raised by the Information Commissioner’s Office (ICO). The Microsoft-owned social network had quietly opted users worldwide into data collection for AI purposes but has now responded to the UK regulator’s scrutiny. LinkedIn acknowledged the concerns and expressed willingness to engage with the ICO further.

The decision to halt AI training with UK data follows growing privacy regulations in the UK and the European Union. These rules limit how tech companies, including LinkedIn, can use personal data to develop generative AI tools like chatbots and writing assistants. Like other platforms, LinkedIn had been leveraging user-generated content to enhance these AI models but has now introduced an opt-out mechanism for UK users to regain control over their data.

Regulatory bodies like the ICO continue to monitor big tech companies, emphasising the importance of privacy rights in the development of AI. As a result, LinkedIn and other platforms may face extended reviews before resuming AI-related activities that involve user data in the UK.

Digital Skills Forum in Bahrain highlights global need for digital education, unveils new toolkit

The International Telecommunication Union (ITU) recently hosted the Digital Skills Forum in Manama, Bahrain, addressing the pressing need for digital skills in today’s technology-driven society. With nearly 700 participants from 44 countries, the forum emphasised urgent calls to action aimed at bridging the digital skills gap that affects billions around the globe.

‘Digital skills have the power to change lives,’ asserted Doreen Bogdan-Martin, ITU Secretary-General, highlighting the union’s dedication to fostering an inclusive digital society. In response to this challenge, ITU introduced the ‘Digital Skills Toolkit 2024,’ a comprehensive resource to support policymakers and stakeholders in crafting effective national strategies to close digital skills gaps.

The toolkit seeks to empower diverse sectors, including private enterprises and academic institutions, by providing essential insights and resources within an ever-evolving technological landscape. Furthermore, the forum underscored the importance of lifelong learning and continuous upskilling, particularly in advanced fields such as AI and cybersecurity. ‘Addressing the digital skills gap requires strong partnerships and a commitment to investing in digital education,’ emphasised Cosmas Luckyson Zavazava, Director of ITU’s Telecommunication Development Bureau.

Bahrain’s leadership in promoting digital skills was prominently featured, reflecting its dedication to international cooperation and innovation. Young entrepreneurs showcased their innovative approaches to digital education, demonstrating the transformative potential of technology in shaping the future.

UN issues final report with key recommendations on AI governance

In a world where AI is rapidly reshaping industries, societies, and geopolitics, the UN advisory body has stepped forward with its final report – ‘Governing AI for Humanity,’ presenting seven strategic recommendations for responsible AI governance. The report highlights the urgent need for global coordination in managing AI’s opportunities and risks, especially in light of the swift expansion of AI technologies like ChatGPT and the varied international regulatory approaches, such as the EU’s comprehensive AI Act and the contrasting regulatory policies of the US and China.

One of the primary suggestions is the establishment of an International Scientific Panel on AI. The body, modelled after the Intergovernmental Panel on Climate Change, would bring together leading experts to provide timely, unbiased assessments of AI’s capabilities, risks, and uncertainties. The panel would ensure that policymakers and civil society have access to the latest scientific understanding, helping to cut through the hype and misinformation that can surround new technological advances.

The proposed AI Standards Exchange would bring together global stakeholders, including national and international organisations, to debate and develop AI standards, ensuring AI systems are aligned with global values like fairness and transparency.

An AI Capacity Development Network is another of the seven recommendations, aimed at addressing disparities between nations. The UN proposes a capacity network that would link centres of excellence globally, provide training and resources, and foster collaboration to empower countries that lack AI infrastructure.

Another key proposal is the creation of a Global AI Data Framework, which would provide a standardised approach to the governance of AI training data. Given that data is the lifeblood of AI systems, this framework would ensure the equitable sharing of data resources, promote transparency, and help balance the power dynamics between big AI companies and smaller emerging economies. The framework could also spur innovation by making AI development more accessible across different regions of the world.

The report further recommends forming a Global Fund for AI to bridge the AI divide between nations. The fund would provide financial and technical resources to countries lacking the infrastructure or expertise to develop AI technologies. The goal is to ensure that AI’s benefits are distributed equitably and not just concentrated in a few technologically advanced nations.

In tandem with these recommendations, the report advocates for a Policy Dialogue on AI Governance, emphasising the need for international cooperation to create harmonised regulations and avoid regulatory gaps. With AI systems impacting multiple sectors across borders, coherent global policies are necessary to prevent a ‘race to the bottom’ in safety standards and human rights protections.

Lastly, the UN calls for establishing an AI Office within the Secretariat, which would serve as a central hub for coordinating AI governance efforts across the UN and with other global stakeholders. This office would ensure that the recommendations are implemented effectively and that AI governance remains agile amid rapid technological change.

Through these initiatives, the UN seeks to foster a world where AI can flourish while safeguarding human rights and promoting global equity. The report implies that the stakes are high, and only through coordinated global action can we harness AI’s potential while mitigating its risks.