For much of the public debate around artificial intelligence, attention has been fixed on capability: how powerful models are becoming, what tasks they can automate, and how close they are to matching or surpassing human performance. Recent evidence, however, suggests that this focus may be misplaced. In many domains, AI is still uneven, brittle, and far from the revolutionary force it is often portrayed to be. Yet something meaningful has already changed. Even before generative AI reaches the level many predict, our relationship with information has begun to shift. The change is not driven by machines passing as human, but by how easily large volumes of content can now introduce doubt into what we see and hear. Readers, viewers, and institutions hesitate before trusting what they encounter, not because it is obviously false, but because verifying authenticity has become impractical at scale. One consequence of this change is a significant idea now gaining traction: the notion that ‘human-made’ or ‘AI-free’ content may soon require explicit labelling.

The erosion of trust did not begin when AI became highly intelligent. It began when synthetic content became abundant. Text, images, audio, and video can now be generated at negligible cost and near-instant speed. Most of this material is not malicious, and much of it is clearly identifiable. The issue is not that synthetic content is perfect, but that it is widespread and often believable. Faced with this volume, people increasingly assume uncertainty rather than authenticity.
As a result, trust is no longer formed through close inspection. Few readers have the time, expertise, or tools to verify sources individually. Instead, trust becomes indirect, shaped by platforms, signals, and assumptions rather than by personal verification.
This helps explain why debates about AI often feel disconnected from measured assessments of its actual performance. Even if models remain imperfect, the social cost of uncertainty has already arrived. Institutions that depend on credibility, such as journalism, education, and diplomacy, are particularly affected by this shift.
Early responses to this challenge focused on detection. A number of tools emerged to analyse micro-expressions, lighting inconsistencies, audio artefacts, and other forensic signals. Multi-layered detection pipelines proved more effective than any single method, and platform-level interventions helped slow the spread of synthetic media when properly deployed.
Over time, however, it became clear that detection alone would not be sufficient. As generative techniques improve, distinguishing authentic from synthetic content based on appearance alone becomes harder.
This led to a shift from perception toward provenance. Instead of asking whether content looks real, the question became where it came from and how it moved. Cryptographic signing at the moment of creation, secure metadata, and verifiable audit trails offer a way to give genuine content an additional signal of authenticity. Unverified material is not automatically false, but verified media can carry greater weight in high-stakes contexts such as journalism, legal proceedings, or diplomatic communication.
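To make the mechanism concrete, here is a minimal sketch in Python, using the third-party `cryptography` package and Ed25519 keys, of what signing at the moment of creation and later verification can look like. The metadata fields, envelope format, and `newsroom-camera-01` identity are simplifying assumptions for illustration, not a description of any deployed standard.

```python
# Minimal provenance sketch: sign a content hash and basic metadata at the
# moment of creation, then verify both later. Illustrative only; deployed
# systems such as C2PA use standardised manifests and certificate chains.
# Requires: pip install cryptography
import hashlib
import json
from datetime import datetime, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_at_creation(content: bytes, creator: str, key: Ed25519PrivateKey) -> dict:
    """Bind a content hash and metadata to a signature at capture time.
    The 'creator' field is a hypothetical metadata entry, not a standard."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    return {"record": record, "signature": key.sign(payload).hex()}

def verify(content: bytes, envelope: dict, pub: Ed25519PublicKey) -> bool:
    """Return True only if the content still matches the signed record."""
    record = envelope["record"]
    if hashlib.sha256(content).hexdigest() != record["sha256"]:
        return False  # the bytes were altered after signing
    payload = json.dumps(record, sort_keys=True).encode()
    try:
        pub.verify(bytes.fromhex(envelope["signature"]), payload)
        return True
    except InvalidSignature:
        return False  # the metadata was tampered with, or the key is wrong

if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    photo = b"...raw image bytes..."
    envelope = sign_at_creation(photo, "newsroom-camera-01", key)
    print(verify(photo, envelope, key.public_key()))         # True
    print(verify(photo + b"!", envelope, key.public_key()))  # False
```

The essential property is the ordering: because the signature is produced alongside the content, any later alteration of either the bytes or the metadata breaks verification, which is what gives downstream audiences something concrete to check.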
Even provenance systems have clear limits. They work best in formal or controlled environments and only when many actors consistently adopt them. Once content moves across platforms or informal channels, those signals are often lost or ignored. In response, people fall back on simpler cues. Instead of verifying everything, they rely on labels and other visible signals to make quick judgments about what they see.
When systems become too complex for individuals to evaluate on their own, people rely on labels. Consumers do not test food for chemical composition; they look for labels such as organic or fair trade. Products are marked BPA-free. Dietary practices rely on certifications like halal or kosher. These labels do not eliminate all risk and do not guarantee quality. They exist to make everyday decisions manageable.
The idea of ‘AI-free’ or ‘human-made’ content follows the same pattern. It is not a rejection of technology, but a practical response to information environments that exceed personal verification capacity. Rather than checking every source, people look for clear signals, such as Content Credentials, to help them decide what to rely on.
In this context, labelling is not about purity. It is about reducing uncertainty and making decisions possible without constant doubt.

If human authorship begins to function as a label, it also becomes a differentiator and potentially a premium. This raises uncomfortable questions: who can afford certification, whose uncertified work loses standing, and whether authenticity itself becomes something that is priced.
These dynamics matter beyond media markets. In diplomacy and governance, credibility is a form of power. If trust becomes something that must be certified, then access to certification and the standards behind it will shape whose voices carry authority.
The issue is not nostalgia for a pre-AI past, but equity in future information systems. A world in which authenticity is priced risks reinforcing existing inequalities rather than addressing them.
These debates are no longer theoretical. Provenance-based initiatives such as the Content Authenticity Initiative and the related C2PA standard, supported by companies including Adobe, Microsoft, and the BBC, already allow creators to attach cryptographically signed metadata to media. At the regulatory level, frameworks such as the European Union’s AI Act introduce obligations to label certain forms of synthetic media, including deepfakes.
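To give a rough sense of what such signed metadata carries, the sketch below shows a simplified, hypothetical manifest loosely inspired by C2PA-style Content Credentials. The field names and structure are illustrative assumptions only, not the actual C2PA schema.

```python
# Hypothetical provenance manifest, loosely inspired by C2PA-style
# Content Credentials. Field names are illustrative, not the real schema.
manifest = {
    "claim_generator": "ExampleCam/2.1",  # tool that produced the claim
    "assertions": [
        {"label": "creation", "data": {"when": "2025-03-14T09:30:00Z"}},
        {"label": "edits", "data": {"actions": ["crop", "colour-correct"]}},
        {"label": "content_hash", "data": {"sha256": "9f86d081..."}},
    ],
    "signature": {
        "alg": "Ed25519",
        "value": "<base64 signature over the assertions>",
        "cert_chain": ["..."],  # ties the signing key to a named identity
    },
}
```

The design choice worth noticing is that the manifest travels with the media itself, so an edit history and a verifiable identity can, in principle, survive republication, provided intermediaries preserve the metadata.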
At the same time, these systems face clear practical limits: adoption is voluntary and uneven, signed metadata can be stripped or lost as content moves across platforms, and verification signals mean little to audiences who never check them.
Labelling also raises legitimate concerns. Critics argue that marking content as ‘human-made’ risks stigmatising AI-assisted creativity or ignores the fact that most media has long involved layers of editing, automation, and collaboration. Others worry that certification schemes could privilege large platforms and well-resourced creators, while smaller or low-income producers struggle to comply. These concerns are particularly salient in the Global South, where locally produced content may be subject to standards defined elsewhere, or where verification infrastructure is unevenly available.
In practice, this has led to hybrid approaches rather than rigid distinctions. Some platforms and creators already experiment with labels such as ‘AI-assisted’ or ‘human-curated’, acknowledging the growing spectrum between fully human and fully synthetic production. In this sense, labelling is less about enforcing purity and more about preserving meaningful signals in contexts such as elections, journalism, and diplomatic communication, where provenance still matters even if perfection is unattainable.
Certification is never neutral. Deciding what counts as ‘AI-free’ immediately raises boundary questions. Is spellcheck allowed? Translation tools? Accessibility aids? What about human-AI collaboration that assists but does not generate? Recent debates in 2025, such as the Irish Data Protection Commission’s stance on AI in public communications and C2PA pilots for hybrid labelling standards (built on W3C Verifiable Credentials), highlight that these are active governance choices, not merely technical details.
Standards shape what counts as legitimate, and those who enforce them gain influence. If regions adopt different rules, labelling may become fragmented instead of helpful. Voluntary labels and legal requirements may also overlap, creating confusion.
These issues can’t be sidestepped. The real challenge is to acknowledge them and design labelling systems that are transparent, inclusive, and able to evolve as technology and norms change.
The rise of ‘human-made’ labels does not mean that artificial intelligence has failed, nor that its benefits should be dismissed. It reflects a practical response to an information environment where content is produced at unprecedented speed and scale, and where certainty is harder to maintain.
As AI becomes more common in writing and media production, the more relevant question is how clearly we define the role we still expect humans to play. Authorship, responsibility, and accountability do not disappear automatically, but they do require conscious choices about when and how they matter. For diplomatic actors, the priority is not flawless verification, but interoperable standards that safeguard discourse in multilingual, multipolar contexts.
From this perspective, labelling is less about drawing a final line and more about signalling intent. It shows that societies are already setting boundaries and priorities, even as the technology itself continues to develop.
Author: Slobodan Kovrlija