
Do we really need frontier AI for everyday work?

Jovan Kurbalija
Published on February 2, 2026

We’re bombarded with news about the latest frontier AI models and their ever-expanding capabilities. But the real question is whether these advances matter for most of us, most of the time. In many everyday tasks, they don’t.

A large share of our professional and personal lives centres on simple, repetitive routines. Think of accounting, law, medicine, public administration, and corporate management, to name a few professions. In language terms, much of daily communication runs on a relatively small ‘working vocabulary’ of no more than 2,000 unique words. Our cognitive framings are similarly repetitive, drawing on at most around 30 typical thinking routines.

This practical simplicity leads to a clear conclusion: for many common tasks, we don’t need frontier models. We need right-sized models: smaller, cheaper, faster systems trained and adapted for specific contexts and workflows. The AI we need is technically available, financially affordable, and ethically desirable, because it preserves the knowledge that defines our organisations and ourselves.

The sledgehammer problem

Using a frontier model for routine office tasks is often like using a sledgehammer to crack a nut. Frontier models are built to be broadly capable across domains, which comes with trade-offs: higher inference costs, more complex governance and risk management, and greater dependency on large platforms for compute and updates.

By contrast, small language models (SLMs) and other compact architectures can be excellent when paired with clear scope, curated knowledge bases, and well-designed processes. The research and product ecosystem around ‘small but strong’ models has matured quickly, showing that smaller models can be surprisingly capable, especially when trained carefully or specialised to a domain. 

Concise work manuals instead of Britannica

Most professions didn’t need Encyclopaedia Britannica open on the desk to do their jobs. Expertise lives in people, institutions, procedures, and professional communities, not only in giant repositories of general knowledge.

AI is similar. General-purpose frontier models are impressive, but the bulk of real value often comes from smaller, well-integrated tools that fit the job: drafting standard documents, summarising meeting notes, classifying incoming requests, extracting structured fields, or answering questions grounded in a specific organisational knowledge base.

Diplomacy is a good example of where AI’s limits show

Diplomacy, an area in which I’m personally involved, illustrates this ‘right tool for the job’ logic.

A great deal of diplomatic work is built around repetitive rituals: protocol, credentials, formal correspondence, event organisation, reporting cycles, and administrative requirements. Much of that is already codified in manuals and guidance, including at the United Nations and in national diplomatic services. These routines can often be automated with standard software, rules-based systems, or smaller models tuned to specific forms and workflows.
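The rules-based point can be made concrete. Below is a minimal sketch in Python of a keyword router that triages incoming correspondence before any model is involved; the categories and keywords are entirely hypothetical, for illustration only, not drawn from any real diplomatic manual.

```python
# Hypothetical triage categories for routine correspondence.
# In practice these would come from an organisation's own manuals.
CATEGORIES = {
    "protocol": ["credentials", "precedence", "ceremony"],
    "events": ["reception", "venue", "invitation"],
    "reporting": ["cable", "summary", "quarterly report"],
}

def route(message: str) -> str:
    """Return the first category whose keywords appear in the message."""
    text = message.lower()
    for category, keywords in CATEGORIES.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "general"  # anything unmatched falls through to a human inbox

print(route("Please confirm the venue for Thursday's reception."))  # events
```

Deterministic logic of this kind is cheap, auditable, and easy to govern; a small language model only needs to enter the picture for the messages that fall through to the ‘general’ bucket.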

But diplomacy is also an art: persuasion, negotiation, signalling, trust-building, empathy, and cultural interpretation. These are deeply human activities that AI cannot easily mimic or automate.

So the ‘sweet spot’ is clear: use smaller, more controllable tools to automate diplomatic routines, and remain aware of AI’s limitations in the human-centred work of representation, persuasion, and negotiation.

The paradox: we need big AI platforms less than they need us

Here’s the uncomfortable paradox. For many everyday tasks, the benefit of frontier models is limited compared to cheaper, smaller, specialised systems. Yet frontier platforms can thrive because they sit in the middle of our workflows: they capture prompts, organisational know-how, and patterns of behaviour that become valuable data for product improvement, market power, and lock-in.

In other words, big AI platforms may need our data and tacit knowledge more than we need their frontier capabilities for daily work. Big LLMs are often overkill for our concrete needs and use cases. This common-sense reality is overshadowed by the prevailing view that ‘bigger AI is better,’ overlooking the fact that capable, affordable alternatives are just a few clicks away.

And those alternatives are not theoretical. Open-weight models and compact systems are widely available, making it feasible for organisations (and even individuals) to deploy models with greater control over privacy, cost, and customisation.

A common-sense approach for 2026

In 2026, citizens, companies, and countries should step back and regain common sense in how they use and deploy AI.

Frontier AI will continue to matter, especially for science, advanced reasoning, and broad capability leaps. But for the millions of ‘ordinary’ professional tasks that make up the day-to-day economy, the future is likely to be smaller, more local, more specialised, and more aligned with real workflows than with AI model hype.
