AI regulation meets enforcement reality: How the rules actually work

Published on March 16, 2026

There is endless chatter about regulating AI and setting rules and boundaries to keep things safe. But it often ends up sounding vague, buried in jargon, or too abstract. What does it actually mean in practice? How will such rules be enforced? How will someone check if rules are followed?

Recently, a wave of concrete actions has hit: the International AI Safety Report, the EU AI Act entering its enforcement phase, and California’s new laws on powerful frontier AI models and companion chatbots. To cut through the fog, we’ll examine these rules closely, what they mean in practice, and how they affect everyday users.

Recent moves: What’s happening now

In early 2026, over 100 experts from more than 30 countries released the second International AI Safety Report. It is something like a global health check on advanced AI. The report flags risks from models that could outthink or deceive humans in hard-to-detect ways and offers shared scientific input to guide governments that are writing rules. There are no fines or legal obligations directly attached to this document, but it clearly shapes the direction of regulation in places like the EU and the United States.

Europe’s AI Act, in force since 2024, enters a new phase this summer. Think of it as traffic laws for AI: it bans the most dangerous ‘vehicles’, such as social-scoring systems; requires safety checks for high-risk ones, such as hiring tools or biometric systems; and requires clear labels for chatbots.
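
To make that tiered logic concrete, here is a minimal sketch in Python of how the Act’s risk-based structure sorts systems. The tier names follow the Act’s broad categories, but the mapping and examples are illustrative simplifications, not legal classifications.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified view of the EU AI Act's risk-based tiers."""
    UNACCEPTABLE = "banned outright"          # e.g. social scoring
    HIGH = "safety checks before market"      # e.g. hiring, biometrics
    LIMITED = "transparency duties"           # e.g. chatbots must say they are AI
    MINIMAL = "no specific obligations"       # e.g. spam filters

# Illustrative mapping only; real classification follows the Act's annexes.
EXAMPLE_SYSTEMS = {
    "social-scoring system": RiskTier.UNACCEPTABLE,
    "CV-screening tool": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} ({tier.value})")
```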

California’s rules, in force since January 2026, target big labs that train cutting-edge frontier models, as well as apps that mimic human companions. The latter include emotional chatbots that could mislead lonely or vulnerable users, especially children and teenagers.

These steps are not just ideas on paper. They are starting to shape how large companies design, test, and release AI tools, and they will slowly spill over into markets and regulators far from Brussels and Sacramento.

Image: Yoshua Bengio, chair of the International AI Safety Report 2026, which informs many of today’s AI governance debates.

How it works in practice: No AI police chasing every app

It is tempting to imagine a kind of global AI police force that scans every app and website in real time. Reality is much more familiar and much less dramatic. The system looks closer to how food safety or product safety works today: inspectors do not raid every kitchen or workshop every day. They react to complaints, spot-check big suppliers, and pull bad batches or dangerous products when they find them.

There is no blanket monitoring of every tool, every day. The system relies on complaints, reports, and targeted checks. It also relies heavily on self-disclosure, such as public safety frameworks, model cards, or impact assessments, which give outside experts and watchdogs a starting point to question what a company is doing.
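
To illustrate what self-disclosure can look like in practice, here is a minimal sketch of a model card as a structured record. The field names are a hypothetical simplification inspired by common model-card formats, not a regulatory template, and the example system is imaginary.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal, illustrative model card: the public record auditors start from."""
    model_name: str
    intended_use: str
    out_of_scope_uses: list[str]
    evaluation_summary: str
    known_limitations: list[str] = field(default_factory=list)

# A hypothetical disclosure for an imaginary hiring tool.
card = ModelCard(
    model_name="example-cv-screener-v2",
    intended_use="Rank CVs for recruiter review, with human sign-off on decisions",
    out_of_scope_uses=["fully automated rejections", "use on minors"],
    evaluation_summary="Selection-rate parity tested across gender and age groups",
    known_limitations=["trained mostly on English-language CVs"],
)
print(card)
```

Even a simple record like this gives a watchdog something concrete to challenge: if the stated intended use and the deployed behaviour diverge, that gap is where an investigation starts.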

Fines, market withdrawals, and user impact

Consequences for AI companies vary by severity and risk level. Under the EU AI Act, banned AI systems, such as manipulative social-scoring tools, face immediate market withdrawal and fines of up to EUR 35 million or 7% of global annual revenue, whichever is higher. High-risk failures, such as biased hiring algorithms, trigger audits, mandated fixes, and penalties of up to EUR 15 million or 3% of revenue. In California, companion chatbot violations, such as failing to disclose AI identity, can lead to private lawsuits or Attorney General actions, with damages of at least USD 1,000 per violation. The message regulators are sending is simple: experimentation is allowed, negligence is not.
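
As a back-of-the-envelope illustration of how those ceilings scale, here is a minimal sketch; it applies the Act’s ‘whichever is higher’ rule to made-up revenue figures, so the numbers are purely hypothetical.

```python
def eu_ai_act_max_fine(annual_revenue_eur: float, prohibited: bool) -> float:
    """Upper bound on an AI Act fine: a fixed cap or a share of global
    annual revenue, whichever is higher (Article 99, simplified)."""
    if prohibited:  # banned practices, e.g. social scoring
        return max(35_000_000, 0.07 * annual_revenue_eur)
    return max(15_000_000, 0.03 * annual_revenue_eur)  # most other violations

# Made-up revenues: for big labs, the percentage term dwarfs the fixed cap.
for revenue in (100e6, 10e9, 500e9):
    fine = eu_ai_act_max_fine(revenue, prohibited=True)
    print(f"revenue EUR {revenue:,.0f} -> fine up to EUR {fine:,.0f}")
```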

As of March 2026, enforcement remains mostly precautionary, with warnings and preparation rather than widespread fines. However, the pattern mirrors Europe’s GDPR (General Data Protection Regulation) privacy rules, where early high-profile cases against companies like Meta established precedents. For everyday users, the benefits include clearer labelling of AI-generated content, built-in crisis referrals within chatbots, and a reduced risk of biased decisions in critical areas such as employment or lending.

Funding the watchdogs?

Designing rules on paper is relatively easy compared to building the institutions that will apply them. The AI Act in Europe, California’s state laws, and similar efforts in other regions all assume that someone will actually pick up the phone, read complaints, open investigations, and write decisions. That means new bodies will be created and existing regulators will need more people, more skills, and more money.

This is where many countries hit practical limits. Wealthy countries in Western Europe can often expand an existing authority, hire technical staff, and send people to international meetings. Smaller or less wealthy countries may still be struggling with basic digitalization, let alone specialized AI oversight. Some do not yet have a clear idea of which ministry or agency should be in charge of AI at all. In those contexts, simply copying the EU or California on paper risks creating rules that exist in law but cannot be enforced in daily life.

To make this more realistic, countries can pool efforts. They can share regional expert teams, rely on joint investigation units, or agree that one country will host a specialized lab that serves several neighbours. Cross-border networks of regulators, perhaps supported by the EU, the Council of Europe, or UN agencies, can reduce costs and help smaller states avoid being completely dependent on technical advice from the same companies they are supposed to regulate. Without some form of cooperation and capacity building, enforcement will remain uneven, allowing the most powerful players to face minimal scrutiny in weaker jurisdictions.


A new job market around AI rules

There is another story behind these AI regulations: they create an entirely new field of work. Once governments commit to enforcing AI rules, they need people who can understand both technology and policy. They need engineers who can read model documentation, lawyers who understand the basics of machine learning, social scientists who can map harms, and auditors who can test systems in a structured way.

EU countries will have to staff new AI oversight teams with experts in technology, law, ethics, and sector-specific knowledge such as health, finance, or energy. California and other US states will need investigators and analysts who can handle technical evidence in court-like settings. International organizations and think tanks will need specialists who can compare regulatory models across regions and support countries that are just starting to think about AI. New bodies, such as European-level boards, national AI councils, or specialized testing labs, can create thousands of roles that sit somewhere between classic IT work and public administration.

These roles will also demand hybrid expertise that current education systems are only beginning to produce. For professionals from many different fields, this opens an additional career path: instead of building only new AI tools, they can help check the safety, fairness, or security of existing ones. It could also enable some countries to position themselves as regional hubs that provide expertise and implementation capacity to neighbours that share similar legal frameworks but lack specialist staff.

The power check

AI giants today hold unusual sway over what we see, how we work, and how we interact. Their models encode business assumptions and cultural values that are often more aligned with corporate interests than with collective human needs. It is not healthy to let a handful of companies set the default mental environment for large parts of the planet without any meaningful counterweight.

That is why these regulations are not just technical details for lawyers and engineers. They are one way for societies to push back and say that nobody, no matter how rich or innovative, operates without independent oversight. At their best, these rules introduce some friction and transparency into a space that has been running on speed and hype. They do not stop AI development. They simply ask it to share the road, respect some common rules, and slow down when lives or basic rights are at stake.

AI regulation is still young and imperfect, and enforcement will be uneven for some time. But it is moving from talk to traction. Just as seatbelts and crash tests changed cars without killing mobility, clear rules on AI can protect people while keeping the ride going. The real challenge now is to give regulators the people and tools they need, so that these promises become more than just another set of buzzwords.

Author: Slobodan Kovrlija

