Seeing, moving, living: AI’s promise for accessible technology

Published on February 10, 2026

At Expo 2025 in Osaka, a bionic hand called RYO demonstrated movements so natural that it could handle tofu without crushing it. Developed by Kawatec, a Japanese company, RYO replicates 95% of natural hand movements through a combination of myoelectric sensors and machine learning. Watching it pick up an egg, tie a shoelace, or turn a page revealed something remarkable: technology that doesn’t just compensate for loss but expands what’s possible.


This is one example of a broader transformation happening across accessible technology. For decades, assistive devices operated on a simple principle: help people work around their limitations. Screen readers converted text to speech. Wheelchairs provided mobility. Hearing aids amplified sound. These tools were essential, but they required users to adapt to the technology’s constraints.

AI is changing that equation. Instead of asking people to adapt, the technology adapts to them. It learns individual patterns, anticipates needs, and responds to context in ways that earlier generations of assistive devices simply could not. The shift is from ‘assistive’ (compensating for limitations) to ‘empowering’ (expanding possibilities).

Accessibility technology is where AI’s humanistic promise is most visible and immediate. The applications are concrete, the benefits measurable, and the impact life-changing. But as these technologies advance, they also raise urgent questions about equity, privacy, and governance that the international community cannot ignore.

Seeing: AI for visual accessibility

In September 2025, the New York Commission for the Blind began providing Meta Ray-Ban Smart Glasses to blind and visually impaired high school seniors, college students, and employed adults. The glasses, which retail for USD 300 to USD 379, look like ordinary sunglasses but contain a camera and microphone connected to Meta’s multimodal AI.

The image shows Ray-Ban Meta smart glasses

Users can ask the glasses to describe what they’re looking at. The AI processes the camera feed in real time, identifying objects, reading text, recognising scenes, and providing context. Ed, an adaptive technology instructor who tested the glasses, described using them to read a book to his three-year-old grandson. Lachi, a recording artist who is legally blind, called them ‘promising’ and praised their ‘fresh and stylish’ accessibility approach.

The technology isn’t perfect. Meta AI often refuses to provide detailed descriptions of people due to privacy concerns, a friction point in the broader debate over how assistive technology should balance user needs with other people’s rights. The glasses require an internet connection and have limited battery life, but users consistently report a short learning curve and practical utility in daily life.

Compare this to Envision Glasses, which apply a similar concept but target professional and institutional markets. The Home Edition costs USD 2,499, between six and eight times the price of Meta’s consumer product. Both devices use AI to interpret visual scenes, but the price difference highlights a critical tension in accessible technology: advanced features often come at a cost that puts them out of reach for many who need them.

The image shows Envision Glasses

What makes these devices genuinely new isn’t the camera or the speech output. Basic visual assistance tools have existed for years. The breakthrough is the AI’s ability to understand context. Earlier systems could recognise individual objects or read text in controlled environments. Multimodal AI can look at a refrigerator shelf and explain not just what’s there but which milk carton is about to expire. It can distinguish between a stop sign and a yield sign, describe the expression on someone’s face, or explain why a crowd has gathered on a street corner.
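
The interaction pattern behind such devices can be sketched in a few lines of Python. The snippet below is a simplified illustration under assumed names, not Meta’s actual pipeline (which is not public): capture a camera frame, pass it with the user’s question to a multimodal model, and return the answer as text to be spoken aloud. The query_multimodal_model function is a placeholder for whichever vision-language API a given device relies on.

```python
# Minimal sketch of a "describe what I'm looking at" query, assuming a generic
# multimodal model behind query_multimodal_model(). This illustrates the pattern,
# not Meta's actual on-device pipeline (which is not public).
import base64

import cv2  # pip install opencv-python


def capture_frame(camera_index: int = 0) -> bytes:
    """Grab one JPEG-encoded frame from the device camera."""
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("could not read a frame from the camera")
    ok, jpeg = cv2.imencode(".jpg", frame)
    if not ok:
        raise RuntimeError("JPEG encoding failed")
    return jpeg.tobytes()


def query_multimodal_model(image_b64: str, question: str) -> str:
    """Placeholder: send the image and question to whichever vision-language
    model the device uses and return its text answer."""
    raise NotImplementedError("connect this to a multimodal model API")


def describe_scene(question: str = "Which of these milk cartons expires first?") -> str:
    """End-to-end loop: frame in, contextual answer out (spoken via TTS on a real device)."""
    image_b64 = base64.b64encode(capture_frame()).decode("ascii")
    return query_multimodal_model(image_b64, question)
```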

This shift from ‘reading text aloud’ to ‘understanding scenes’ represents a fundamental expansion of what visual assistance technology can do. But it also raises new concerns. These devices require constant camera use, creating privacy implications not only for users but also for those around them. Who consents to being recorded and analysed by someone else’s assistive AI? How do we balance the user’s need for information with bystanders’ right to privacy?

Moving: AI-powered prosthetics

Bionic prosthetics are not new. The Belgrade Hand, developed in Yugoslavia between 1963 and 1969, was already using myoelectric signals to control artificial limbs. Users could contract specific muscles, and the hand would respond with pre-programmed movements. It was brilliant engineering that helped hundreds of people regain function.

The image shows a photograph of the Belgrade Hand

So what makes current prosthetics different? The answer lies in what AI adds to the equation: learning, prediction, and adaptation.

RYO’s INTELIHAND system learns the user’s personal rhythm. Every movement teaches the system how that specific person moves, trimming the learning curve that frustrates many prosthetic users. The electrodes read micro-signals from muscle activity, each flicker of intent is turned into digital data, and AI interprets patterns that would be too subtle or complex for rule-based systems.

The Belgrade Hand had fixed responses: specific muscle contraction produced specific movement. RYO adapts to individual users, learning their unique patterns and adjusting over time. The difference is between a translator who knows the grammar rules and one who understands your personal idioms and speech patterns.
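
Kawatec has not published INTELIHAND’s internals, but the general shape of per-user pattern learning can be illustrated with a short, hypothetical Python sketch: compute simple time-domain features over windows of the user’s EMG signal, then train a classifier on that person’s labelled grip intents, retraining as new sessions accumulate.

```python
# Generic sketch of per-user myoelectric pattern learning; this is NOT Kawatec's
# INTELIHAND algorithm, only an illustration of the idea: extract simple features
# from windows of one user's EMG signal, then train a classifier that maps those
# features to that user's intended grips.
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def emg_features(window: np.ndarray) -> np.ndarray:
    """Classic time-domain EMG features for one multi-channel window
    of shape (samples, channels)."""
    mav = np.mean(np.abs(window), axis=0)                        # mean absolute value
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)         # waveform length
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)   # zero crossings
    return np.concatenate([mav, wl, zc])


def train_user_model(windows: list, intents: list) -> RandomForestClassifier:
    """Fit a classifier on one user's recorded windows and labelled intents
    (e.g. 'rest', 'pinch', 'power_grip'). Retraining on new sessions is what
    lets the device keep adapting to that specific user over time."""
    X = np.vstack([emg_features(w) for w in windows])
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X, intents)
    return model


def predict_intent(model: RandomForestClassifier, window: np.ndarray) -> str:
    """Map a new window of muscle activity to the most likely intended grip."""
    return model.predict(emg_features(window).reshape(1, -1))[0]
```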

The Utah Bionic Leg, developed by researchers at the University of Utah and licensed to Ottobock (the worldwide leader in prosthetics), shows this even more clearly. It uses sensors on the residual hip muscle to determine the user’s intended movement, then uses AI to bend the prosthetic knee and adjust swing duration accordingly. The system adapts to each user’s specific stride pattern.

More significantly, the AI-based control switches between activities automatically. Going from walking to climbing stairs to descending a ramp happens seamlessly, using a combination of muscle contraction signals and sensor data from within the bionic leg to predict the movement required for the next step. Earlier prosthetics required manual switching between modes or worked well for one activity but poorly for others.
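
A generic illustration of this control idea, under assumed feature inputs rather than the Utah team’s published controller, looks something like this: once per stride, fuse muscle-signal features with onboard sensor features, predict the activity for the next step, and smooth the output over recent strides so the leg does not flip modes on a single noisy prediction.

```python
# Hypothetical sketch of automatic activity switching for a bionic leg: per stride,
# fuse hip-muscle EMG features with onboard kinematic features, predict the mode
# for the next step, and majority-vote over recent strides so the mode stays stable.
# Illustrative only; this is not the Utah Bionic Leg's actual control software.
from collections import Counter, deque

import numpy as np
from sklearn.linear_model import LogisticRegression

MODES = ["level_walk", "stairs_up", "stairs_down", "ramp_descent"]  # assumed label set


class ActivityModeSwitcher:
    def __init__(self, model: LogisticRegression, history: int = 5):
        # `model` is assumed to be fitted offline on labelled strides.
        self.model = model
        self.recent = deque(maxlen=history)  # last few per-stride predictions

    def update(self, emg_feats: np.ndarray, imu_feats: np.ndarray) -> str:
        """Call once per gait cycle; returns the smoothed activity mode."""
        x = np.concatenate([emg_feats, imu_feats]).reshape(1, -1)
        self.recent.append(self.model.predict(x)[0])
        return Counter(self.recent).most_common(1)[0][0]
```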

Separately, BionicM, a Japanese company, developed the Bio Leg and in 2024 became the first Asian manufacturer to receive a US PDAC reimbursement code for its prosthetic. This matters enormously, since advanced prosthetics can cost tens of thousands of dollars and, without insurance reimbursement, remain inaccessible to most people who need them. The reimbursement code validates the technology and also opens a pathway for broader adoption.

Italian researchers have developed magnet-controlled prosthetics that use a different approach to the same problem: interpreting user intent and translating it into natural movement. Across the field, the trend is consistent. The hardware, robotics, and myoelectric sensors are all built on decades of prior work. What AI contributes is the intelligence layer that makes these devices responsive, adaptive, and increasingly intuitive.

This shift has been called ‘embodied integration’, a move from ‘prosthetic as tool’ to ‘prosthetic as extension of self’. When a device learns your patterns, anticipates your needs, and responds without conscious effort, it becomes less like operating a machine and more like moving your own limb.

Beyond prosthetics, Harvard’s soft robotics researchers are developing AI-powered devices for Parkinson’s patients, exoskeletons for mobility assistance, and wearable systems that adapt to changing physical conditions. The pattern is the same: AI enables personalisation and adaptation at a scale that wasn’t possible before.

The diplomatic and governance dimensions

These technologies are expensive. Meta Ray-Ban glasses at USD 300 are relatively affordable, but they still require a smartphone, data plan, and stable internet connection. BionicM’s leg costs significantly more. Envision Glasses at USD 2,499 are out of reach for most people in low-income countries. The World Health Organisation reports that nearly 1 billion people who need assistive products are denied access, with access rates as low as 3% in some low-income countries compared with up to 90% in high-income countries. This access gap has become a central concern of digital diplomacy.

Standards and interoperability present another challenge. Who sets the rules for accessible AI? Organisations such as ISO, IEEE, and the W3C have developed accessibility guidelines over the decades, but AI introduces new complexities. Should there be minimum performance standards for AI systems that describe visual scenes? What happens when an AI misidentifies something critical, like a medication label or a street sign? Who is liable?

These questions require international coordination and inclusive decision-making. Standard-setting bodies cannot be dominated by a handful of wealthy countries and large tech companies. Disability advocates and users from diverse contexts must be at the table, not just consulted afterwards. The principle ‘nothing about us without us’ applies with particular force here.


Data and privacy create further tensions. Accessible technology often requires extensive personal data. An AI learning your walking pattern needs to track every step. Glasses that describe your environment are recording everything you see. This personalisation is essential to the technology’s effectiveness, but it also creates vulnerabilities.

How do we balance the user’s need for a device that adapts to them with their right to privacy and data protection? The EU AI Act, which came into force in 2024, classifies many AI systems used in accessibility contexts as ‘high-risk’, requiring conformity assessments, human oversight mechanisms, and transparency documentation before deployment. But implementation varies across jurisdictions, and enforcement remains uneven.

Moreover, accessible technology often collects data not just about users but about everyone around them. The privacy concerns about Meta glasses aren’t hypothetical. Users are recording and analysing public spaces, other people’s faces, and private conversations, often without explicit consent from those being observed. Regulatory frameworks designed for static assistive devices don’t easily accommodate these new capabilities.

The Global Disability Inclusion Report, published in March 2025 for the Global Disability Summit, emphasised that technology alone cannot solve systemic problems. A blind person with Meta glasses still faces barriers if a city lacks inclusive infrastructure.

There is a risk of ‘tech solutionism’ here, the belief that better gadgets will solve systemic problems. They won’t. Accessible AI can expand what individuals can do, but it cannot replace the need for inclusive design in public spaces, anti-discrimination laws in employment, or universal healthcare that includes assistive technology.

Promise, caution, and the work ahead

What’s on the horizon looks even more ambitious. Researchers are working on multi-sensory integration, combining visual, audio, and haptic AI into unified systems. Brain-computer interfaces are entering the accessibility space, offering new possibilities for people with severe mobility limitations. Cost curves, particularly for consumer products like Meta glasses, are declining as technology matures and competition intensifies.

Unlike some AI applications, whose benefits are speculative or unevenly distributed, accessible technology has a clear, measurable, positive impact. Lives are genuinely transformed when Ed can read to his grandson or prosthetic users can climb stairs without manually switching modes. These are not abstract gains; they are concrete improvements in daily life that reinforce a vision of AI as an augmentation tool, expanding human capability rather than replacing it.

Privacy frameworks must evolve to account for technologies that are simultaneously personal and public. A blind person using AI glasses is exercising their right to access information, but they are also potentially infringing on others’ privacy. Finding the balance requires nuanced regulation that neither bans beneficial technology nor allows unchecked surveillance.

Human oversight and choice must remain central. Users should control when their devices collect data, how that data is used, and who has access to it. They should be able to understand, at least in broad terms, how their AI-assisted devices make decisions. The EU’s requirement for transparency and explainability in high-risk AI systems is a starting point, but implementation will determine whether these principles translate into real user protections.

Beyond national regulation, the role of diplomacy is clear. The UN Convention on the Rights of Persons with Disabilities (CRPD) provides a framework, but it must be updated to address the nuances of AI.


The RYO bionic hand demonstrated what 95% of natural movement looks like. Visitors watched it handle delicate objects, perform fine motor tasks, and respond to user intent with minimal conscious effort. It was an impressive technical achievement, but it was also something more: a demonstration of AI doing what it should, expanding human capability rather than replacing it.

This revolution is not happening in labs alone. It is happening in rehabilitation centres, homes, city streets, and classrooms. But revolutions can be unequal. Right now, access to these technologies depends heavily on geography, income, and insurance status. The question before policymakers, technologists, and the international community is whether we will shape this transformation to be inclusive or allow it to deepen existing inequalities.

Accessible technology shows us what human-centred AI actually looks like in practice. The challenge is ensuring this revolution reaches everyone who needs it, not just those who can afford it. The technology is ready. The governance frameworks are not. Closing that gap is the work ahead.

Author: Slobodan Kovrlija

