The Australian Human Rights Commissioner has released a Human Rights and Technology report outlining a series of recommendations to ensure that human rights are upheld in laws, policies, funding, and education on artificial intelligence (AI). Among them is a recommendation for a moratorium on certain high-risk uses of facial recognition technology (FRT) and on the use of ‘black box’ or opaque AI in decision making by corporations and government agencies. Other recommendations envision stronger protections against harmful uses of AI, especially in high-risk areas such as policing, social security, and banking.

The report also outlines five key principles for a human rights approach to AI and other new technologies: participation of all affected stakeholders in decision making about new technologies; accountability and effective monitoring of compliance with human rights standards by government and non-state actors; non-discrimination and equality in the development and use of technologies; empowerment and capacity building, so that the community can understand the impact of new technologies on their lives; and legality, ensuring that human rights remain legally enforceable, including in the use of new technologies.