When AI writes the rules: How to avoid fake laws governing real life

Jovan Kurbalija
Published on 28 April 2026
AI systems are increasingly capable of producing legal language and rules that look authoritative, even when they echo or fabricate legal references, as a recent case in South Africa highlights. The real question is how societies can distinguish between useful AI assistance and 'fake laws', and why human institutions must remain the final gatekeepers of legitimacy and enforcement.

Last week, South Africa unveiled its first draft national AI policy, aiming to position the country as a continental leader in innovation. The plan included ambitious new institutions: a National AI Commission, an Ethics Board, and tax breaks for private sector collaboration.

But just days later, the celebration turned into embarrassment.

According to Reuters, South Africa’s government was forced to withdraw the draft after reviewers discovered a fatal flaw: the policy was riddled with citations to sources that do not exist. The research supporting the country’s AI strategy had likely been generated by an AI.

The hallucination crisis in lawmaking

This isn’t a minor typo. The AI hallucinated policy content and the resources cited to support it. That is hardly surprising: large language models are advanced guessing machines, not providers of verified facts. Even entirely fabricated text can look perfectly correct and legitimate.

South Africa’s Minister of Communications and Digital Technologies, Solly Malatsi, acknowledged the failure with refreshing honesty:

The most plausible explanation is that AI-generated citations were included without proper verification. This should not have happened.

He noted that this lapse ‘has compromised the integrity and credibility of the draft policy.’

The problem with AI-generated laws

South Africa’s incident is not an isolated case. As policymakers rush to keep up with technology, we are seeing more examples of AI-drafted regulations being submitted for review. In the United States, a federal judge in California sanctioned two law firms for submitting a legal brief containing a fake citation generated by AI.

The problem isn’t that AI is used; AI is a useful tool and should be used. The danger lies in how it is used.

Legal documents and policies require precision, grounding, and contextualisation. Generic AI models often fail at all three:

Precision: legal drafting depends on exact wording and exact references, yet generative models routinely invent both.

Grounding: model outputs are not anchored in verified, authoritative sources unless a system is deliberately built that way.

Contextualisation: generic models know little about a country’s legal traditions, institutions, and current policy debates.

How to fix the problem (without banning AI)

Precision, grounding, and contextualisation are the three guidelines that should help avoid AI incidents like the one that embarrassed South African policymakers. The solution, as Diplo’s experience suggests, lies in a two-pronged approach: developing institutional AI and increasing AI literacy.

If South Africa had had an institutional AI anchored in local knowledge and context, such a hallucination could have been avoided. Moreover, AI would then be a genuinely useful tool, reflecting the topical and temporal context of policy development and law drafting.
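
To make ‘anchoring’ concrete, here is a minimal sketch in Python of the grounding idea: the system may only answer from a curated institutional corpus, and every answer carries pointers back to verified sources. The corpus entries, the retrieve helper, and the scoring are hypothetical illustrations under simplified assumptions, not a description of Diplo’s or any real system.

    # Sketch of "institutional AI" grounding: answers may only cite
    # documents from a curated, verified local corpus.
    # All document IDs and texts below are hypothetical placeholders.

    CORPUS = {
        "POL-2024-01": "National data protection framework adopted in 2024 ...",
        "ACT-108-1996": "Constitution of the Republic of South Africa, 1996 ...",
        "WP-AI-DRAFT": "Draft white paper on AI governance and oversight ...",
    }

    def retrieve(query, corpus, top_k=2):
        """Rank documents by naive keyword overlap with the query."""
        terms = set(query.lower().split())
        scores = {
            doc_id: len(terms & set(text.lower().split()))
            for doc_id, text in corpus.items()
        }
        ranked = sorted(scores, key=scores.get, reverse=True)
        return [doc_id for doc_id in ranked if scores[doc_id] > 0][:top_k]

    def grounded_answer(query):
        """Answer only from retrieved institutional sources; refuse otherwise."""
        sources = retrieve(query, CORPUS)
        if not sources:
            return "No verified institutional source found; a human must research this."
        # A production system would pass the retrieved passages to the model
        # and tie every sentence of the output back to a source ID.
        return f"Drafted from verified sources: {', '.join(sources)}"

    print(grounded_answer("What does the draft white paper say about AI oversight?"))

The point of the design is the refusal branch: when nothing in the verified corpus matches, the system says so instead of guessing, which is exactly the behaviour a generic model lacks.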

But more importantly, we need to build AI competencies among policymakers. This requires a shift in pedagogy. We cannot teach policymakers to simply use AI; they must understand how it works.

One promising approach is an AI apprenticeship, a method where learners build AI tools themselves. By developing models, they learn about technical limitations (such as hallucinations) firsthand. They learn to verify data, demand citations, and understand the policy impacts of the code.
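
For example, even a very small script can teach the first lesson of verification: checking whether a draft’s cited sources exist at all. The sketch below uses placeholder URLs and only catches references that do not resolve; confirming that a source actually supports a claim still requires a human reader.

    # Illustrative citation check: flag references whose URLs do not resolve.
    import urllib.request

    DRAFT_CITATIONS = [
        "https://www.gov.za/documents",                # placeholder, real site
        "https://example.invalid/made-up-study-2025",  # deliberately fake
    ]

    def url_resolves(url, timeout=5.0):
        """Return True if the URL answers an HTTP HEAD request without error."""
        try:
            request = urllib.request.Request(url, method="HEAD")
            with urllib.request.urlopen(request, timeout=timeout) as response:
                return response.status < 400
        except (OSError, ValueError):  # URLError is a subclass of OSError
            return False

    for url in DRAFT_CITATIONS:
        verdict = "ok" if url_resolves(url) else "FAILED - check by hand"
        print(f"{url}: {verdict}")

Building and running even such a trivial check makes the abstract advice ‘verify your citations’ tangible, which is the spirit of the apprenticeship approach.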

Conclusion

South Africa’s withdrawal of its AI policy is a ‘canary in the coal mine.’ It shows that AI-generated text is seductive but dangerous for high-stakes negotiations and lawmaking.

As Minister Malatsi stated: 

This unacceptable lapse proves why vigilant human oversight over the use of artificial intelligence is critical. It’s a lesson we take with humility.

If we fail to build precise, grounded AI tools and train policymakers to use them properly, we won’t just have fake citations in a draft. We will have fake laws governing real people.
