The headlines have been relentless. Amazon announces thousands of layoffs. UPS cuts jobs as automation accelerates in logistics. Accenture, the consulting giant, plans to eliminate 11,000 positions spanning IT services to business analysis, the kinds of white-collar work once considered immune to automation. Meanwhile, Tesla showcases its Optimus humanoid robot performing tasks that, until recently, seemed firmly in the domain of human workers. We’re watching a transformation unfold in real time, and the anxiety is palpable.

But if we let this conversation remain solely about job numbers and unemployment rates, we miss something fundamental. What matters most is whether we can navigate this transition in a way that preserves human dignity, purpose, and the very things that make work meaningful beyond a paycheck.

Yes, job displacement is real. The fear is legitimate. When someone loses their livelihood, the immediate concern is understandably financial: how to pay rent, feed a family, and maintain health insurance. These material concerns cannot be dismissed.

Yet work provides us with so much more than income. It gives us identity. When we meet someone new, one of the first questions we ask is ‘What do you do?’ Our work shapes how we see ourselves and how others see us. It provides structure to our days, purpose to our efforts, and often serves as our primary source of social connection. For many, work is where they experience mastery, growth, and the satisfaction of contributing something valuable to the world.

When AI and automation threaten to displace workers, they threaten all of these dimensions of human experience. Recent research published in Nature confirms that AI-induced changes to the workplace affect not just economic well-being but also stress, health, and the sense of purpose that work provides. A robust social safety net might help address the financial dimension, but the challenge goes deeper.
What happens to human dignity when the market deems your skills obsolete? How do we maintain a sense of purpose when machines can do what we spent years learning to do, only faster, cheaper, and without breaks? If we focus exclusively on the economics of AI displacement, we risk implementing solutions that keep people financially afloat while leaving them adrift in terms of meaning, identity, and social connection.

Beyond economic anxiety

Throughout history, humans have adapted to technological change. The agricultural revolution, the industrial revolution, and the digital revolution each displaced certain types of work while creating new opportunities. ‘We’ve always figured it out before’ is a common refrain, often delivered with reassuring confidence.
But this moment feels different, and not just because of the pace of change. Studies in technology and social change suggest that workers at high risk of automation experience not only economic uncertainty but also a troubling erosion of autonomy and meaning, as researchers at Bates College have detailed. Meaningful work and human dignity can be compromised if AI replaces not just labour but the human experience embedded in it. Previous technological revolutions primarily automated physical labour or augmented human cognitive abilities. AI does both at once: it automates physical tasks (through robotics) and increasingly sophisticated cognitive work (through machine learning, language models, and decision-making systems). When humanoid robots can walk warehouse floors and AI can draft legal documents, what remains distinctly, irreplaceably human? The answer matters enormously because it shapes which work we value and, by extension, which humans we value. There’s a real risk of creating a new hierarchy in which only certain ‘elite’ forms of human work, perhaps creative direction, high-level strategy, or uniquely human emotional intelligence, are deemed valuable, while everyone else is positioned as either redundant or in competition with machines.
Care work offers an instructive example. Despite being challenging, skilled, and essential, care work, whether childcare, elder care, or nursing, has historically been undervalued economically, in part because it’s been seen as ‘naturally’ human and feminine. Now, as we face the possibility of AI companions and care robots, we’re forced to articulate what makes human care distinct and valuable. Is it just that humans do it ‘better’, or is there something about human presence, empathy, and connection that matters regardless of efficiency metrics? The same question applies across domains. What should remain human, not because machines can’t do it, but because human involvement itself matters? Where do we draw these lines, and who gets to draw them?
Even if we successfully identify which work should remain human and create new forms of meaningful work, there’s another challenge: not everyone can make the transition equally. The standard response to AI displacement is often ‘reskilling’ or ‘upskilling’. Learn to work with AI. Become an AI trainer, prompt engineer, or data annotator. Pivot to fields AI can’t easily automate. This advice isn’t wrong, but it’s incomplete and, frankly, privileged. As analyses from the World Economic Forum highlight, for millions facing job loss in the age of AI, the threat is not simply economic but existential: a disruption to identity, belonging, and one’s place in society.
Consider a 55-year-old warehouse worker who’s spent three decades mastering logistics and physical coordination. When a humanoid robot can do those tasks, telling them to ‘learn to code’ or become an ‘AI ethics consultant’ isn’t just unhelpful, it’s insulting. The barriers are real: age discrimination in hiring, financial constraints that make extended retraining difficult, geographic limitations if new opportunities are concentrated in tech hubs, and the simple human reality that not everyone has the same aptitudes or desires.
Young people entering the workforce face their own challenges. What career path makes sense when entire fields might transform radically within a decade? How do you invest in expertise when the half-life of skills keeps shrinking?
There’s a bitter irony in humanoid robots like Optimus. They’re being developed to handle ‘dangerous, repetitive, or boring’ tasks, precisely the kind of work that employs millions of people who often have the least economic mobility and fewest alternative options. The people most vulnerable to displacement are the least equipped to transition to whatever comes next.
Current reskilling initiatives, while well-intentioned, rarely address these structural inequalities. They tend to be designed by and for people who already have educational privilege, economic security, and social capital. A truly humane approach to the AI transition would need to be radically more accessible, more dignified, and more responsive to the actual circumstances of displaced workers’ lives.
So what does a humane approach to AI and automation actually look like? There’s no single answer, but we can identify some principles.
Dignity-first design means considering human impact from the start, not as an afterthought. When companies deploy automation, the question shouldn’t only be ‘Will this increase efficiency?’ but also ‘How does this affect the dignity and wellbeing of workers?’ This might mean slower implementation, more investment in transition support, or choosing not to automate certain roles even when technically possible.
Universal access to adaptation recognises that reskilling cannot be a privilege available only to those with time, money, and existing credentials. If we are serious about helping people adapt, we need programs that provide financial support during training, are accessible regardless of prior education, and recognise that different people will need different pathways forward. This isn’t charity; it’s an investment in ensuring that technological progress doesn’t create a permanent underclass.
Redefining contribution challenges us to expand how we think about valuable human activity beyond market productivity. If AI creates economic abundance by making production radically more efficient, how do we ensure that abundance is shared? How do we value contributions that don’t fit neatly into traditional employment, community building, creative expression, care for others, and civic participation? This connects to long-running debates about universal basic income, reduced workweeks, and other proposals to decouple survival from employment.
Safety nets with dignity mean that whatever systems we create to support people, whether unemployment insurance, retraining programs, or new social contracts, must preserve agency and respect. Too often, social support comes with surveillance, stigma, and bureaucratic humiliation. A humane safety net would provide security without subjugation.
Preserving human choice recognises that people should have agency in how much they interact with and depend on AI systems. Not everyone will want to work alongside robots or have their performance monitored by algorithms. As we automate, we should create space for people to opt for more human-centred work arrangements, even if they’re less ‘efficient’ by narrow metrics.
AI advancement is inevitable. The technology will continue to improve, costs will continue to fall, and adoption will continue to accelerate. But how we advance, the values we embed, the priorities we set, the people we include in decision-making, none of that is predetermined.
This is the central challenge of humAInism: Can we build a future where technology serves human flourishing for all people, not just those who own the robots or write the algorithms? Can we ensure that increased productivity translates to improved quality of life broadly shared, rather than concentrated wealth alongside mass displacement?
These questions cannot be answered by technologists alone. They require input from workers whose jobs are being transformed, from ethicists thinking about dignity and purpose, from policymakers balancing innovation with social stability, and from communities experiencing these changes firsthand.
The layoffs making headlines and the robots rolling off production lines represent inflexion points in the human story, moments that will shape what kind of society we become. How we respond will define whether we let the pursuit of efficiency override human dignity, or insist that our tools, no matter how sophisticated, remain in service of human wellbeing.
The choice is ours, but the window to make it thoughtfully rather than reactively is narrowing. The work of humAInism, ensuring that AI advancement is guided by human values and directed toward human flourishing, has never been more urgent. Ultimately, it’s possible to align innovation with dignity if we insist on it. The question is not whether technology advances, but whether we make dignity a core design principle as we shape the AI-powered future.
Author: Slobodan Kovrlija