Companies that want to demonstrate they develop responsible AI can do so with Singapore’s recently launched AI Verify, a governance testing framework and toolkit being piloted in the country.

The framework defines technical tests and process checks that allow AI developers and owners to verify the performance of their AI systems against a set of principles, including transparency, safety and resilience, accountability, and human agency and oversight. While it relies on self-testing and does not guarantee that tested AI systems are completely safe, the framework is expected to help foster public trust in AI.

Google, Meta, and Microsoft are among the handful of companies that have already tested the framework or provided feedback on it.
