AV1 robot bridges gap for children unable to attend school

Children who are chronically ill and unable to attend school can now stay connected to the classroom using the AV1 robot, developed by the Norwegian company No Isolation. The robot serves as the child’s eyes and ears in class, allowing them to follow lessons and interact with friends remotely. Controlled via an app, it sits on a classroom desk, enabling students to rotate its view, speak to classmates, and even signal when they want to participate.

The AV1 has been especially valuable for children undergoing long-term treatment or experiencing mental health challenges, helping them maintain a connection with their peers and stay socially included. In the UK, schools can rent or purchase the AV1, and the robot has been widely adopted, particularly in the UK and Germany, where over 1,000 units are active. For many students, the robot has become a lifeline during extended absences from school.

Though widely praised, the AV1 faces logistical challenges in schools and hospitals, including administrative hurdles and technical issues such as weak Wi-Fi. Despite these obstacles, teachers and families have found the robot highly effective, with privacy protections and features tailored to students’ needs, including the option not to show their face on screen.

Research has highlighted the AV1’s potential to keep children both socially and academically connected, and No Isolation has rolled out a training resource, AV1 Academy, to support teachers and schools in using the technology effectively. With its user-friendly design and robust privacy features, the AV1 continues to make a positive impact on the lives of children facing illness and long absences from school.

AI at Europe’s borders sparks human rights concerns

As the European Union implements the world’s first comprehensive regulations on artificial intelligence (AI), human rights groups are raising alarms over exemptions for AI use at Europe’s borders. The EU’s AI Act, which categorises AI systems by risk level and imposes stricter rules on those with higher potential for harm, applies in stages, with its first provisions taking effect in February 2025. While it promises to regulate AI across industries, controversial technologies like facial and emotion recognition are still permitted for border and police authorities, sparking concern over surveillance and discrimination.

With Europe investing heavily in border security, deploying AI-driven watchtowers and algorithms to monitor migration flows, critics argue these technologies could criminalise migrants and violate their rights. Human rights activists warn that AI may reinforce biases and lead to unlawful pushbacks of asylum seekers. Countries like Greece serve as testing grounds for these technologies and have been accused of using AI for surveillance and discrimination, despite denials from the government.

Campaigners also point out that the EU’s regulations allow European companies to develop and export harmful AI systems abroad, potentially fuelling human rights abuses in other countries. While the AI Act represents a step forward in global regulation, activists believe it falls short of protecting vulnerable groups at Europe’s borders and beyond. They anticipate that legal challenges and public opposition will eventually close these regulatory gaps.

AI and ethics in modern society

Humanity’s rapid advancements in robotics and AI have shifted many ethical and philosophical dilemmas from the realm of science fiction into pressing real-world issues. AI technologies now permeate areas such as medicine, public governance, and the economy, making it critical to ensure their ethical use. Multiple actors, including governments, multinational corporations, international organisations, and individual citizens, share the responsibility to navigate these developments thoughtfully.

What is ethics?

Ethics refers to the moral principles that guide individual behaviour or the conduct of activities, determining what is considered right or wrong. In AI, ethics ensures that technologies are developed and used in ways that respect societal values, human dignity, and fairness. For example, one ethical principle is respect for others, which means ensuring that AI systems respect the rights and privacy of individuals.

What is AI?

Artificial Intelligence (AI) refers to systems that analyse their environment and make decisions autonomously to achieve specific goals. These systems can be software-based, like voice assistants and facial recognition software, or hardware-based, such as robots, drones, and autonomous cars. AI has the potential to reshape society profoundly. Without an ethical framework, AI could perpetuate inequalities, reduce accountability, and pose risks to privacy, security, and human autonomy. Embedding ethics in the design, regulation, and use of AI is essential to ensuring that this technology advances in a way that promotes fairness, responsibility, and respect for human rights.

AI ethics and its importance

AI ethics focuses on minimising risks related to poor design, inappropriate applications, and misuse of AI. Problems such as surveillance without consent and the weaponisation of AI have already emerged. This calls for ethical guidelines that protect individual rights and ensure that AI benefits society as a whole.

Global and regional efforts to regulate AI ethics

There are international initiatives to regulate AI ethically. For example, UNESCO’s 2021 Recommendation on the Ethics of AI offers guidelines for countries to develop AI responsibly, focusing on human rights, inclusion, and transparency. The European Union’s AI Act is another pioneering legislative effort, which categorises AI systems by their risk level. The higher the risk, the stricter the regulatory requirements.

The Collingridge dilemma and AI

The Collingridge dilemma points to the challenge of regulating new technologies like AI. Early regulation is difficult due to limited knowledge of the technology’s long-term effects, but once the technology becomes entrenched, regulation faces opposition from stakeholders. AI is currently in a dual phase: while its long-term implications are uncertain, we already have enough examples of its immediate impact—such as algorithmic bias and privacy violations—to justify regulation in key areas.

Asimov’s Three Laws of Robotics: Ethical inspiration for AI

Isaac Asimov’s Three Laws of Robotics, while fictional, resonate with many of the ethical concerns that modern AI systems face today. These laws, which require a robot to avoid harming humans, to obey human commands unless that would cause harm, and to preserve itself only where doing so conflicts with neither of the first two laws, provide a foundational, if simplistic, framework for responsible AI behaviour.
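To see why the laws amount to a strict priority ordering rather than three independent rules, consider a minimal Python sketch. Everything here (the Action fields, the permitted function) is hypothetical and invented for illustration; no real system reduces safety to boolean flags like this.

```python
# A toy encoding of Asimov's Three Laws as a strict priority hierarchy.
# All names and fields are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool       # would executing this action harm a human?
    ordered_by_human: bool  # was this action commanded by a human?
    endangers_robot: bool   # does this action risk the robot itself?

def permitted(action: Action) -> bool:
    # First Law: a robot may not injure a human being.
    if action.harms_human:
        return False
    # Second Law: obey human orders, except where that would
    # conflict with the First Law (already ruled out above).
    if action.ordered_by_human:
        return True
    # Third Law: protect its own existence, except where that would
    # conflict with the First or Second Law.
    return not action.endangers_robot

# The Second Law outranks the Third: an ordered action is permitted
# even though it endangers the robot.
print(permitted(Action("fetch tool from fire", harms_human=False,
                       ordered_by_human=True, endangers_robot=True)))  # True
```

Even this tiny sketch hints at the gap the next section describes: real situations rarely arrive with clean, pre-labelled judgements about harm.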

Modern ethical challenges in AI

However, real-world AI introduces a range of complex challenges that cannot be adequately managed by simple rules. Issues such as algorithmic bias, privacy violations, accountability in decision-making, and unintended consequences complicate the ethical landscape, necessitating more nuanced and adaptive strategies for effectively governing AI systems.

As AI continues to develop, it raises new ethical dilemmas, including the need for transparency in decision-making, accountability in cases of accidents, and the possibility of AI systems acting in ways that conflict with their initial programming. Additionally, there are deeper questions regarding whether AI systems should have the capacity for moral reasoning and how their autonomy might conflict with human values.

Categorising AI and ethics

Modern AI systems exhibit a spectrum of ethical complexities that reflect their varying capabilities and applications. Basic AI operates by executing tasks based purely on algorithms and pre-programmed instructions, devoid of any moral reasoning or ethical considerations. These systems may efficiently sort data, recognise patterns, or automate simple processes, yet they do not engage in any form of ethical deliberation.

In contrast, more advanced AI systems are designed to incorporate limited ethical decision-making. These systems are increasingly being deployed in critical areas such as healthcare, where they help diagnose diseases, recommend treatments, and manage patient care. Similarly, in autonomous vehicles, AI must navigate complex moral scenarios, such as how to prioritise the safety of passengers versus pedestrians in unavoidable accident situations. While these advanced systems can make decisions that involve some level of ethical consideration, their ability to fully grasp and navigate complex moral landscapes remains constrained.
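As a toy illustration of what ‘limited ethical decision-making’ can mean in practice, the sketch below scores candidate manoeuvres by expected harm, with hand-set weights encoding a stance on passengers versus pedestrians. The scenario, numbers, and weights are all invented; real autonomous-driving planners do not reduce ethics to a single formula.

```python
# A deliberately simplistic sketch of harm-weighted manoeuvre selection.
# All manoeuvres, numbers, and weights below are invented for illustration.

candidate_manoeuvres = {
    # manoeuvre: (expected harm to passengers, expected harm to pedestrians)
    "brake_hard":  (0.2, 0.1),
    "swerve_left": (0.6, 0.0),
    "stay_course": (0.0, 0.9),
}

# The weights themselves are an ethical judgement: setting the pedestrian
# weight above the passenger weight prioritises people outside the car.
PASSENGER_WEIGHT = 1.0
PEDESTRIAN_WEIGHT = 1.5

def total_harm(harms: tuple[float, float]) -> float:
    passengers, pedestrians = harms
    return PASSENGER_WEIGHT * passengers + PEDESTRIAN_WEIGHT * pedestrians

best = min(candidate_manoeuvres, key=lambda m: total_harm(candidate_manoeuvres[m]))
print(best)  # "brake_hard" under these invented numbers
```

The point of the sketch is that the ethical stance lives entirely in weights a human chose; the system itself grasps nothing of the moral landscape it navigates.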

The variety of ethical dilemmas

Legal impacts

The question of AI accountability is increasingly relevant in our technologically driven society, particularly in scenarios involving autonomous vehicles, where determining liability in the event of an accident is fraught with complications. For instance, if an autonomous car is involved in a collision, should the manufacturer, software developer, or vehicle owner be held responsible? As AI systems become more autonomous, existing legal frameworks may struggle to keep pace with these advancements, leading to legal grey areas that can result in injustices. Additionally, AI technologies are vulnerable to misuse for criminal activities, such as identity theft, fraud, or cyberattacks. This underscores the urgent need for comprehensive legal reforms that not only address accountability issues but also develop robust regulations to mitigate the potential for abuse.

Financial impacts

The integration of AI into financial markets introduces significant risks, including the potential for market manipulation and exacerbation of financial inequalities. For instance, algorithms designed to optimise trading strategies may inadvertently favour wealthy investors, perpetuating a cycle of inequality. Furthermore, biased decision-making algorithms can lead to unfair lending practices or discriminatory hiring processes, limiting opportunities for marginalised groups. As AI continues to shape financial systems, it is crucial to implement safeguards and oversight mechanisms that promote fairness and equitable access to financial resources.

Environmental impacts

The environmental implications of AI cannot be overlooked, particularly given the substantial energy consumption associated with training and deploying large AI models. The computational power required for these processes contributes significantly to carbon emissions, raising concerns about the sustainability of AI technologies. In addition, the rapid expansion of AI applications in various industries may lead to increased electronic waste, as outdated hardware is discarded in favour of more advanced systems. To address these challenges, stakeholders must prioritise the development of energy-efficient algorithms and sustainable practices that minimise the ecological footprint of AI technologies.

Social impacts

AI-driven automation poses a profound threat to traditional job markets, particularly in sectors that rely heavily on routine tasks, such as manufacturing and customer service. As machines become capable of performing these jobs more efficiently, human workers may face displacement, leading to economic instability and social unrest. Moreover, the deployment of biased algorithms can deepen existing social inequalities, especially when applied in sensitive areas like hiring, loan approvals, or criminal justice. The use of AI in surveillance systems also raises significant privacy concerns, as individuals may be monitored without their consent, leading to a chilling effect on free expression and civil liberties.

Psychological impacts

The interaction between humans and AI systems can have far-reaching implications for emotional well-being. For example, AI-driven customer service chatbots may struggle to provide the empathetic responses that human agents can offer, leading to frustration among users. Additionally, emotionally manipulative AI applications in marketing may exploit psychological vulnerabilities, promoting unhealthy consumer behaviours or contributing to feelings of inadequacy. As AI systems become more integrated into everyday life, understanding and mitigating their psychological effects will be essential for promoting healthy human-computer interactions.

Trust issues

Public mistrust of AI technologies is a significant barrier to their widespread adoption. This mistrust is largely rooted in the opacity of AI systems and the potential for algorithmic bias, which can lead to unjust outcomes. To foster trust, it is crucial to establish transparent practices and accountability measures that ensure AI systems operate fairly and ethically. This can include the development of explainable AI, which allows users to understand how decisions are made, as well as the implementation of regulatory frameworks that promote responsible AI development. By addressing these trust issues, stakeholders can work toward creating a more equitable and trustworthy AI landscape.
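One concrete ingredient of explainable AI is showing how each input contributed to a decision. The sketch below does this for a hypothetical linear credit-scoring model; the feature names, weights, and threshold are all invented for illustration, and production explainability tools handle far more complex models.

```python
# Per-feature explanation for a hypothetical linear scoring model.
# Weights, features, and values are invented for illustration.

weights = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
bias = 0.1
applicant = {"income": 1.2, "debt_ratio": 0.8, "years_employed": 0.5}

# Each feature's signed contribution to the final score.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = bias + sum(contributions.values())
decision = "approve" if score > 0 else "decline"

print(f"decision: {decision} (score={score:.2f})")
for feature, c in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"  {feature:>15}: {c:+.2f}")
```

Listing the signed contributions lets an applicant see which factors pushed the decision each way, which is exactly the kind of transparency the paragraph above calls for.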

These complex ethical challenges require global coordination and thoughtful, adaptable regulation to ensure that AI serves humanity’s best interests, respects human dignity, and promotes fairness across all sectors of society. The ethical considerations around AI extend far beyond individual technologies or industries, impacting fundamental human rights, economic equality, environmental sustainability, and societal trust.

As AI continues to advance, the collective responsibility of governments, corporations, and individuals is to build robust, transparent systems that not only push the boundaries of innovation but also safeguard society. Only through an ethical framework can AI fulfil its potential as a transformative force for good rather than deepening existing divides or creating new dangers. The journey towards creating ethically aware AI systems necessitates ongoing research, interdisciplinary collaboration, and a commitment to prioritising human well-being in all technological advancements.

Google’s NotebookLM now supports YouTube and audio file analysis

Google has introduced a major update to its AI-powered note-taking platform, NotebookLM. Users will soon be able to upload YouTube URLs and audio files, such as MP3 and WAV formats, for analysis by the Gemini AI.

Previously, NotebookLM allowed users to interact with documents like Google Docs, PDFs, and web pages. Now, a new sharing feature enables public URL generation for Audio Overviews, enhancing collaboration.

Google says all uploaded data remains private and secure and is not used to train its AI models. NotebookLM, powered by the Gemini 1.5 Pro model, offers summarisation and idea generation across various content types.

NotebookLM’s latest features position it as a strong rival to Microsoft OneNote’s Copilot and Notion AI. Gemini is also integrated into Google Workspace, offering business customers enterprise-grade data protection.

AI-written police reports spark efficiency debate

Several police departments in the United States have begun using AI to write incident reports, aiming to reduce time spent on paperwork. Oklahoma City’s police department was an early adopter of the AI-powered Draft One software, but paused its use to address concerns raised by the District Attorney’s office. The software analyses bodycam footage and radio transmissions to draft reports, potentially speeding up processes, although it may raise legal concerns.

Paul Mauro, a former NYPD inspector, noted that the technology could significantly reduce the burden on officers, who often spend hours writing various reports. However, he warned that officers must still review AI-generated reports carefully to avoid errors. The risk of inaccuracies or ‘AI hallucinations’ means oversight remains crucial, particularly when reports are used as evidence in court.

Mauro suggested that AI-generated reports could help standardise police documentation and assist in data analysis across multiple cases. This could improve efficiency in investigations by identifying patterns more quickly than manual methods. He also recommended using the technology for minor crimes while legal experts ensure compliance with regulations.

The potential for AI to transform police work has drawn comparisons to the initial resistance to bodycams, which are now widely accepted. While there are challenges, the introduction of AI in police reporting may offer long-term benefits for law enforcement, if implemented thoughtfully and responsibly.

Voiceitt brings personalised AI speech recognition to remote work

Israeli company Voiceitt aims to revolutionise communication for people with speech impairments through its AI-powered speech recognition system. Using personalised voice models, Voiceitt helps those affected by conditions like cerebral palsy, Parkinson’s, and Down syndrome to communicate more effectively with both people and digital devices.

Voiceitt, launched in 2021 as a vocal translator app, is now integrated with platforms such as WebEx, Zoom, and Microsoft Teams. It allows users to convert non-standard speech into captions and text for video calls and written documents, opening up new opportunities for remote work and communication.

Co-founder Sara Smolley views the project as a personal mission, inspired by her grandmother’s struggle with Parkinson’s disease. Voiceitt is designed to offer accessibility in the workplace and beyond, with users like accessibility advocate Colin Hughes praising its accuracy but also advocating for more features.

As the field of speech recognition advances, Voiceitt partners with major platforms and complies with strict privacy regulations to protect user data. Smolley believes the technology will significantly improve users’ independence and enjoyment of modern technology.

UN adopts ‘Pact for the Future’

On 22 September 2024, world leaders convened in New York to adopt the ‘Pact for the Future’ – a comprehensive agreement designed to reimagine global governance in response to contemporary and future challenges.

The ground-breaking Pact includes a Global Digital Compact and a Declaration on Future Generations, aiming to update the international system established by previous generations. The Secretary-General stressed the importance of aligning global governance structures with the realities of today’s world, fostering a more inclusive and representative international system.

The Pact covers many critical areas, including peace and security, sustainable development, climate change, digital cooperation, human rights, and gender equality. It marks a renewed multilateral commitment to nuclear disarmament and advocates for strengthened international frameworks to govern outer space and prevent the misuse of new technologies. To bolster sustainable development, the Pact aims to accelerate the Sustainable Development Goals (SDGs), reform international financial architecture, and enhance measures to tackle climate change by committing to net-zero emissions by 2050.

Digital cooperation is notably addressed through the Global Digital Compact, which outlines commitments to connect all people to the internet, safeguard online spaces, and govern AI. The Compact promotes open-source data and sets the stage for global data governance. It also ensures increased investment in digital public goods and infrastructure, especially in developing countries.

Why does it matter?

The ‘Pact for the Future’ encapsulates a detailed, optimistic vision geared toward creating a sustainable, just, and peaceful global order. The Summit of the Future, which facilitated the adoption of the Pact through an extensively inclusive process, drew on millions of voices and contributions from diverse stakeholders. The event was attended by over 4,000 participants, including global leaders and representatives from various sectors, and was preceded by Action Days, which drew more than 7,000 attendees. The forum also produced firm global commitments to action, including pledges amounting to USD 1.05 billion to advance digital inclusion.

UN issues final report with key recommendations on AI governance

In a world where AI is rapidly reshaping industries, societies, and geopolitics, the UN advisory body has stepped forward with its final report – ‘Governing AI for Humanity,’ presenting seven strategic recommendations for responsible AI governance. The report highlights the urgent need for global coordination in managing AI’s opportunities and risks, especially in light of the swift expansion of AI technologies like ChatGPT and the varied international regulatory approaches, such as the EU’s comprehensive AI Act and the contrasting regulatory policies of the US and China.

One of the primary suggestions is the establishment of an International Scientific Panel on AI. The body, modelled after the Intergovernmental Panel on Climate Change, would bring together leading experts to provide timely, unbiased assessments of AI’s capabilities, risks, and uncertainties. The panel would ensure that policymakers and civil society have access to the latest scientific understanding, helping to cut through the hype and misinformation that can surround new technological advances.

The proposed AI Standards Exchange would bring together global stakeholders, including national and international organisations, to debate and develop AI standards. It would ensure AI systems are aligned with global values like fairness and transparency.

An AI Capacity Development Network is another of the seven key recommendations, aimed at addressing disparities. The UN proposes building a capacity network that would link centres of excellence globally, provide training and resources, and foster collaboration to empower countries that lack AI infrastructure.

Another key proposal is the creation of a Global AI Data Framework, which would provide a standardised approach to the governance of AI training data. Given that data is the lifeblood of AI systems, this framework would ensure the equitable sharing of data resources, promote transparency, and help balance the power dynamics between big AI companies and smaller emerging economies. The framework could also spur innovation by making AI development more accessible across different regions of the world.

The report further recommends forming a Global Fund for AI to bridge the AI divide between nations. The fund would provide financial and technical resources to countries lacking the infrastructure or expertise to develop AI technologies. The goal is to ensure that AI’s benefits are distributed equitably and not just concentrated in a few technologically advanced nations.

In tandem with these recommendations, the report advocates for a Policy Dialogue on AI Governance, emphasising the need for international cooperation to create harmonised regulations and avoid regulatory gaps. With AI systems impacting multiple sectors across borders, coherent global policies are necessary to prevent a ‘race to the bottom’ in safety standards and human rights protections.

Lastly, the UN calls for establishing an AI Office within the Secretariat, which would serve as a central hub for coordinating AI governance efforts across the UN and with other global stakeholders. This office would ensure that the recommendations are implemented effectively and that AI governance remains agile amid rapid technological change.

Through these initiatives, the UN seeks to foster a world where AI can flourish while safeguarding human rights and promoting global equity. The report implies that the stakes are high, and only through coordinated global action can we harness AI’s potential while mitigating its risks.