From agentic AI to agreement technologies: LLMs as a new layer in diplomatic negotiation

Published on January 9, 2026
While "Agentic AI" is often framed as a radical break from the past, its core principles autonomy, negotiation, and trust have been studied for decades under the banner of Agreement Technologies. Today’s breakthrough isn't just about autonomous action; it’s about the arrival of Large Language Models (LLMs) as a flexible orchestration layer on top of these established foundations.

Agentic AI is often presented as something radically new: systems that can plan, act, and even negotiate on our behalf. In diplomatic and policy circles, this framing can create the impression of a sharp break with earlier forms of artificial intelligence. Yet many of the ideas behind so-called ‘agentic’ behaviour (autonomy, interaction, negotiation, trust) have been studied and applied for decades in the field of Multi-Agent Systems (MAS) and what is known as Agreement Technologies.

Decision-support software and early AI tools have long played a quiet role in international decision-making, from logistics planning to scenario modeling. What is new is not the idea of software agents interacting or negotiating, but the arrival of large language models (LLMs) as a new layer on top of these established foundations. This layer enables more natural interaction, more flexible orchestration, and broader accessibility, while still stopping well short of replacing human diplomats.

Seen this way, today’s agentic AI systems are best understood not as autonomous diplomatic actors, but as tools that can extend and change how negotiation support, simulations, and trust management are designed and used.

The agentic AI narrative

In current discourse, agentic AI usually refers to systems that can pursue goals with limited supervision. Such systems may break complex objectives into subtasks, call external tools or APIs, and coordinate multiple AI components to complete a workflow. In enterprise settings, examples include assistants that manage calendars or travel, systems that monitor transactions and trigger investigations, or software that dynamically reconfigures supply chains.

From there, it is a short conceptual step to more sensitive domains. If AI systems can coordinate actions and manage trade-offs in business contexts, could similar approaches support complex international negotiations, or help explore diplomatic options before humans engage directly?


For observers outside computer science, this can feel like a genuine shift: AI that no longer only responds to queries, but initiates actions and coordinates with other systems. For researchers in autonomous agents and multi-agent systems, however, much of this terrain is familiar. The main challenge has never been intelligence in isolation, but managing interaction among multiple actors with different goals, information, and constraints.

Agreement technologies: familiar logic in technical form

Multi-agent systems can be understood, in simple terms, as societies of software actors. Each agent has its own objectives and partial view of the world, but must communicate, coordinate, or sometimes compete with others in a shared environment. Early research in this area quickly revealed that many hard problems are not about individual decision-making, but about reaching and sustaining agreements among diverse actors.

‘Agreement technologies’ emerged as an umbrella term for methods that address this challenge. At their core are mechanisms for automated negotiation and auctions, where agents exchange offers and counter-offers according to defined protocols in order to allocate resources or tasks under constraints. Over time, this work expanded beyond purely numerical bargaining.
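Before turning to those richer mechanisms, it is worth seeing how simple the core of such a negotiation protocol can be. The Python sketch below shows a minimal alternating-offers loop; the propose and accepts methods are invented placeholders for whatever offer-generation and evaluation logic an agent might use, not part of any specific framework.

```python
def alternating_offers(agent_a, agent_b, rounds: int = 10):
    """Minimal alternating-offers protocol: two agents take turns proposing
    until one side accepts or the round limit is reached."""
    proposer, responder = agent_a, agent_b
    for r in range(rounds):
        offer = proposer.propose(r)            # proposer generates an offer
        if responder.accepts(offer, r):
            return offer                       # agreement reached
        proposer, responder = responder, proposer  # swap roles next round
    return None                                # no deal within the limit
```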

Computational argumentation allows agents to exchange structured reasons for and against proposals, making it possible to represent debates about fairness, risk, or feasibility in a formal way. Normative systems and electronic contracts encode obligations, permissions, and prohibitions so that agents can reason about compliance and sanctions. Trust and reputation models aggregate past interactions, such as whether commitments were honoured, to inform future decisions about cooperation.
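As a rough illustration of the last of these, a toy trust model might look like the Python sketch below: it records whether past commitments were honoured and aggregates them into a score, weighting recent behaviour more heavily. The agent names and the decay-based weighting are illustrative assumptions, not drawn from any particular system.

```python
from collections import defaultdict

class ReputationModel:
    """Toy trust model: aggregate past interactions into a score per partner.
    More recent interactions carry more weight (exponential decay)."""

    def __init__(self, decay: float = 0.9):
        self.decay = decay
        self.history = defaultdict(list)  # partner -> list of True/False outcomes

    def record(self, partner: str, commitment_honoured: bool) -> None:
        self.history[partner].append(commitment_honoured)

    def trust(self, partner: str) -> float:
        """Return a score in [0, 1]; 0.5 if there is no history yet."""
        outcomes = self.history[partner]
        if not outcomes:
            return 0.5
        weights = [self.decay ** i for i in range(len(outcomes) - 1, -1, -1)]
        score = sum(w * (1.0 if ok else 0.0) for w, ok in zip(weights, outcomes))
        return score / sum(weights)

model = ReputationModel()
model.record("agent_a", True)
model.record("agent_a", False)
print(round(model.trust("agent_a"), 2))  # recent breach pulls the score below 0.5
```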

For a diplomatic audience, these concepts should sound immediately familiar. Offers and counter-offers, argumentation, norms, obligations, and credibility are not inventions of AI research; they are central features of diplomacy itself. What agreement technologies add is formal structure. They translate these ideas into representations and protocols that software can use to support, simulate, and analyze complex interactions.

Three diplomacy use cases

Climate and climate‑finance pre‑negotiation

Climate diplomacy involves complex trade-offs across emissions targets, timelines, finance, and technology transfer, often among dozens of actors with different capacities and vulnerabilities. In principle, multi-agent models can represent these actors using AI agents configured with emissions profiles, economic structures, and political constraints.

Negotiation and auction mechanisms can then be used to explore combinations of commitments, finance flows, and timelines that satisfy basic constraints across parties. Such systems would not produce agreements, but they could expand the range of options visible to negotiators before formal talks begin.

Argumentation components can attach structured explanations to proposed packages, such as fairness metrics or cost-benefit analyses, making it clearer why certain configurations are attractive or unacceptable to specific actors. Used carefully, these tools could help surface non-obvious compromises or identify proposals that appear viable on paper but are politically fragile in practice.

Cyber norms and escalation scenarios

In cyberspace, diplomats are working to develop norms that reduce escalation risks while preserving states’ sense of security and deterrence. Here, agreement technologies can serve as a testing ground rather than a decision-maker.

Normative multi-agent models can encode alternative rule sets such as limits on targets, proportional responses, or thresholds for public attribution, and simulate how state and non-state actors might behave in stylized incident scenarios. Agents representing states, alliances, or other stakeholders can be given strategic preferences and domestic constraints, then allowed to interact under different normative assumptions.

The objective is not prediction, but stress-testing. By observing how proposed norms perform when confronted with incentives, misperceptions, or ambiguity, negotiators can better understand where disagreements are likely to arise. When combined with argumentation modules, these models can also highlight recurring points of contention around attribution, proportionality, or responsibility, helping negotiators craft more resilient language.
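A stylized version of such a stress test, with invented norms and randomly drawn incident parameters, could be as simple as the Python sketch below, which counts how often simulated responses would breach each candidate rule set:

```python
import random

random.seed(0)  # reproducible illustration

# Two invented candidate norm sets; each norm flags a violating response.
NORM_SETS = {
    "strict": [
        lambda act: act["target"] == "civilian_infrastructure",
        lambda act: act["severity"] > act["provocation"] + 1,  # disproportionate
    ],
    "lenient": [
        lambda act: act["target"] == "civilian_infrastructure",
    ],
}

def violation_rate(norms, episodes=1000):
    """Share of randomly drawn responses that would breach a given norm set."""
    violations = 0
    for _ in range(episodes):
        response = {
            "target": random.choice(["military", "civilian_infrastructure"]),
            "severity": random.randint(1, 5),
            "provocation": random.randint(1, 5),
        }
        if any(norm(response) for norm in norms):
            violations += 1
    return violations / episodes

for name, norms in NORM_SETS.items():
    print(name, round(violation_rate(norms), 2))
```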

Trade, logistics, and access to critical routes

Trade and logistics are domains where multi-agent allocation and scheduling techniques are already widely used in industry. Similar approaches could support policy decisions when access to ports, transport corridors, or airspace becomes constrained, whether due to crises, sanctions, or surges in demand.

In such scenarios, agents representing shipping companies, port authorities, or states could participate in structured allocation processes under explicit policy constraints, such as humanitarian priorities or security requirements. Trust and reputation systems could add context by tracking past behaviour, for example repeated violations of schedules or conditions.
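In very schematic terms, and with invented carriers and policy rules, such an allocation step might look like the sketch below: humanitarian cargo is prioritised first, with reputation (for example, past schedule compliance) used to rank the remaining requests.

```python
# Invented requests for scarce berthing slots at a congested port.
requests = [
    {"agent": "carrier_a", "cargo": "commercial", "reputation": 0.9},
    {"agent": "carrier_b", "cargo": "humanitarian", "reputation": 0.6},
    {"agent": "carrier_c", "cargo": "commercial", "reputation": 0.4},
]

def allocate(requests, slots=2):
    """Allocate slots under an explicit policy: humanitarian cargo first,
    then higher-reputation carriers (e.g. better past schedule compliance)."""
    ranked = sorted(
        requests,
        key=lambda r: (r["cargo"] != "humanitarian", -r["reputation"]),
    )
    return [r["agent"] for r in ranked[:slots]]

print(allocate(requests))  # ['carrier_b', 'carrier_a']
```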

These tools would not determine outcomes on their own. Their value lies in making trade-offs explicit, revealing bottlenecks, and providing structured input to political judgement, while leaving final decisions and exceptions firmly in human hands.

What LLM‑based agentic AI actually adds

From a technical standpoint, today’s agentic AI systems fit comfortably within existing definitions of intelligent agents and multi-agent systems. They still involve autonomous components that perceive their environment, decide on actions, and interact through defined mechanisms. The continuity with earlier research lies in the use of negotiation protocols, norms, and trust models to manage complex interactions.

What has changed is the engine inside many of these agents. Large language models can interpret natural-language instructions, generate text and code, and decompose loosely defined goals into actionable steps. This affects both the human interface, making systems easier to use, and the flexibility of the agents themselves, which can be re-prompted to adopt new roles or strategies with minimal re-engineering.

A plausible next step is not the emergence of fully autonomous ‘AI diplomats’, but hybrid systems. In these setups, LLMs handle interaction, drafting, and option generation, while symbolic components and agreement technologies enforce constraints, track commitments, and maintain audit trails. This division of labour aligns with the strengths of each approach: language models excel at communication and pattern recognition, while symbolic systems provide structure, consistency, and verifiability.
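A minimal sketch of this division of labour, with the language-model call stubbed out and the mandate constraints invented for illustration, might look as follows: the generative component drafts options, while a symbolic check and an audit log record whether each draft stays within the delegation's red lines.

```python
import json
from datetime import datetime, timezone

def draft_options(instruction: str) -> list[dict]:
    """Stand-in for a language-model call that drafts candidate proposals.
    A real system would query an LLM here; this stub returns fixed examples."""
    return [
        {"cut_pct": 30, "finance_bn": 60},
        {"cut_pct": 50, "finance_bn": 10},
    ]

def within_mandate(proposal: dict) -> bool:
    """Symbolic constraint check: the delegation's red lines, encoded explicitly."""
    return proposal["cut_pct"] <= 40 and proposal["finance_bn"] >= 50

audit_log = []
for proposal in draft_options("draft balanced climate packages"):
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "proposal": proposal,
        "within_mandate": within_mandate(proposal),
    })

print(json.dumps(audit_log, indent=2))  # auditable trail of drafts and checks
```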

Governance questions for diplomatic practice

If agentic AI and agreement technologies become embedded in diplomatic workflows, governance questions move quickly to the foreground. One concerns delegation: which tasks can be safely assigned to AI systems, and under what form of oversight? Exploring options or stress-testing agreements carries different risks from operational roles that might trigger concrete actions, such as notifications, sanctions, or allocations.


Another concerns transparency and accountability. For negotiation support, it is not enough that a system produces a plausible recommendation. Diplomatic services may require records that show how a proposal was generated, which assumptions it relied on, and how alternatives were evaluated. While LLMs pose challenges in this respect, agreement technologies can help by imposing structured protocols and logging requirements around generative components.

Finally, interoperability matters. If states and international organizations deploy their own AI-supported negotiation tools, shared standards will be needed to govern how proposals, constraints, and justifications are exchanged. Earlier work on multi-agent systems developed communication standards such as FIPA-ACL; analogous efforts may be required in diplomatic contexts to ensure security, traceability, and mutual trust.
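For readers unfamiliar with FIPA-ACL, a message in that standard is essentially a speech act (such as propose or accept-proposal) plus a set of standard parameters. The sketch below mirrors that structure in plain Python; the content, ontology, and conversation values are invented for illustration.

```python
# A FIPA-ACL message pairs a performative (speech act) with standard parameters.
acl_propose = {
    "performative": "propose",                # e.g. propose, accept-proposal, reject-proposal
    "sender": "delegation_a",
    "receiver": "delegation_b",
    "content": "(commit :emissions-cut 30 :finance 60)",  # invented content
    "language": "fipa-sl",                    # content language defined by FIPA
    "ontology": "climate-packages",           # invented domain vocabulary
    "protocol": "fipa-contract-net",          # a standard FIPA interaction protocol
    "conversation-id": "pre-negotiation-001",
}
```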

A support layer, not a substitute

Viewed in this light, agentic AI is less a rupture than an invitation to revisit a mature body of work on multi-agent systems and agreement technologies. The central question for diplomacy is not whether AI will appear in negotiation processes, but whether diplomatic communities will actively shape how and where it is used.

By grounding new tools in established mechanisms for negotiation, norms, and trust, and by keeping humans firmly in control, diplomacy can draw on AI as a support layer rather than a substitute. The opportunity lies in deliberate integration, not technological surprise.

Author: Slobodan Kovrlija

