Policymakers worldwide are caught between awe and apprehension over AI. They recognise its potential to accelerate productivity and scientific progress while worrying about threats to jobs, human rights, and social cohesion. Yet they're missing a critical risk: AI is becoming a code of opacity within government. Without adequate oversight, AI systems can facilitate corruption, eroding public trust in both the technology and the institutions deploying it.

Many governments use AI to inform decisions, prepare budgets and analyses, and allocate public resources. The Dutch government, for example, relied on a self-learning algorithm to evaluate childcare benefit claims. The system was built on an inaccurate and incomplete dataset, and over time the algorithm "learned" to flag immigrants, lower-income individuals, and people with non-Western appearances as fraudsters. Civil servants approved these determinations without careful review and offered affected families no mechanism to challenge them. After years of complaints, a parliamentary inquiry halted this misuse of AI. Soon thereafter, the Dutch government collapsed.

The Netherlands' misuse of AI was a form of AI-enabled corruption. While there is no universal definition of corruption, many international organisations define it as the abuse of entrusted power for private gain, including non-financial gain for oneself or others. AI and corruption are linked in many ways, and those links can produce unfair processes or uneven outcomes. Moreover, individuals, firms, and governments that misuse AI can deny responsibility, blaming problems on AI's opacity and complexity.

The Netherlands' experience is both a cautionary tale and an early signal of what other countries could soon face. According to a 2023 review of policies and programmes reported to the OECD, more than 60 countries are using taxpayer funds to create, disseminate, or conduct research on AI. Many of these same governments increasingly rely on AI systems to allocate resources such as access to credit, education, and information. If such systems rely on biased, inaccurate, incomplete, or unrepresentative data, or if the algorithm is poorly designed or based on pseudoscience, their use can threaten both human rights and human autonomy.

Governments are also using AI for national security purposes, such as targeting support, intelligence analysis, proactive forecasting, and streamlined command and control in conflicts in both Europe and the Middle East. Policymakers in the US and China increasingly describe competition in AI as an existential race that each side must win, prompting huge sums of private and public investment.

Even as governments seek to expand AI's role across sensitive domains, a different set of risks has begun to draw attention. In 2025, the UN Office on Drugs and Crime and the Organization of American States co-convened a panel to examine how AI could enable corruption. AI deployers and users deserve a better understanding of how AI misuse might yield corruption.

First, AI systems are already corrupting political processes. In Indonesia, the political party Golkar used AI in 2024 to reanimate Suharto, the longtime dictator who died in 2008. The deepfake Suharto endorsed several candidates, including his son-in-law, who won.

Second, governments must exercise caution when integrating AI systems into their decision-making.
As the Dutch government found, relying on inaccurate, incomplete, or unrepresentative datasets can cause real harm and further erode trust. Individuals, too, should be cautious about integrating AI into their lives. AI systems can share non-factual information and deceive or manipulate people. The US National Institute of Standards and Technology recently noted that generative AI presents a wide range of risks and that there is no foolproof method for protecting AI from misdirection, a form of corruption of the technology itself.

Third, AI's opacity can undermine trust in both the technology and the policies and institutions that govern it. AI, like corruption, operates as a black box. Developers often cannot explain how systems reach certain conclusions and struggle to correct unwanted outcomes, such as when a driverless car killed a pedestrian. The OECD warns that "insufficient transparency, explainability and public understanding of AI can erode accountability and cause public resistance; and the over-reliance on AI can widen digital divides and allow systemic errors to propagate, weakening citizen trust in government." Recent polls signal a decline in trust in AI, AI companies, and AI governance around the world.

Policymakers and their constituents must break through this code of opacity and pay closer attention to the risks AI poses to democracy and good governance. They should collaborate to draft clear guidelines for developing, procuring, and deploying AI that address areas of vulnerability, including corruption. They should incentivise transparency in model development, which can facilitate external audits of government use of AI. Finally, they should empower individuals and civil society organisations to challenge AI misuse through accessible appeal mechanisms.

Susan Ariel Aaronson is a professor at George Washington University and co-PI of the NSF-NIST Institute for Trustworthy AI in Law and Society.