The NTIA’s AI Accountability Policy Report calls for greater transparency in AI systems, independent evaluations, and consequences for imposing unacceptable risks or making unfounded claims. According to the NTIA press release, “Given their influence and ubiquity, these systems must be subject to security and operational mechanisms that mitigate risk and warrant stakeholder trust that they will not cause harm.”
Why does it matter?
Following President Biden’s executive order on AI last October and the administration’s efforts to harness AI’s potential while mitigating its risks, the NTIA issued eight sets of policy recommendations to support safe, secure, and trustworthy AI innovation. These include guidance and standards for audits and auditors, support for researchers and research tools, and regulation of high-risk AI systems through independent audits and regulatory inspections, strengthening the government’s capacity to address AI-related risks and practices across sectors of the economy.
The federal government will invest in the resources necessary for independent assessment of AI systems, including by further supporting the newly established AI Safety Institute housed at the National Institute of Standards and Technology (NIST) and by creating and funding a National AI Research Resource that would provide datasets for testing equity and efficacy, the computing and cloud infrastructure required to perform stringent and independent evaluations, and a workforce development program.
The NTIA will also collaborate with private sector partners to develop accountability mechanisms, and with other parts of the federal government to advance the policy recommendations included in the report.