On 16 January 2024, Singapore's Infocomm Media Development Authority (IMDA) and the AI Verify Foundation released a draft Model AI Governance Framework for Generative AI. It calls on all major stakeholders, including policymakers, industry leaders, the research community, and the general public, to contribute collaboratively to a responsible AI landscape.
This framework introduces nine key dimensions:
- Accountability: Incentivize responsibility along the AI development chain.
- Data: Ensure quality and fairness in data use, with particular attention to contentious data such as personal data and copyrighted material.
- Trusted Development and Deployment: Prioritize transparency and industry best practices.
- Incident Reporting: Establish structures for timely reporting and continuous improvement.
- Testing and Assurance: Encourage third-party testing for trust-building.
- Security: Adapt existing frameworks to address new threat vectors.
- Content Provenance: Enhance transparency in AI-generated content.
- Safety and Alignment R&D: Accelerate global cooperation for improved model alignment.
- AI for Public Good: Focus on uplifting individuals and businesses through responsible AI access, public sector adoption, worker upskilling, and sustainable development.
This framework underscores Singapore's commitment to responsible governance, emphasizing transparency, accountability, and security amid the rapid evolution of generative AI.