The US Defense Advanced Research Projects Agency (DARPA) has selected 17 organisations to work on its Guaranteeing Artificial Intelligence Robustness against Deception (GARD) programme. GARD aims to develop a new generation of defences against adversarial deception attacks on machine learning (ML) models, and will focus on: (a) the development of theoretical foundations for defensible ML, and a lexicon of new defence mechanisms based on them; (b) the creation and testing of defensible systems in a diverse range of settings; and (c) the construction of a new testbed for characterising ML defensibility relative to threat scenarios. Intel and Georgia Tech have been selected to lead the programme, which will also involve entities such as Carnegie Mellon University, IBM Research – Almaden, and the Massachusetts Institute of Technology (MIT).
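To illustrate the kind of adversarial deception attack GARD is meant to defend against, the sketch below applies one classic technique, the fast gradient sign method (FGSM), to a toy logistic-regression classifier. This is a minimal, hedged illustration, not part of the GARD programme: the classifier, its weights, the inputs, and the `fgsm_perturb` helper are all illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1."""
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(w, b, x, y, eps):
    """Shift x by eps along the sign of the loss gradient w.r.t. x.

    For logistic regression with cross-entropy loss, the gradient of
    the loss with respect to the input is (p - y) * w, where p is the
    model's predicted probability for class 1.
    """
    p = predict(w, b, x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy classifier that confidently labels the clean input as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.5, -0.5])   # clean input; true label y = 1
y = 1.0

clean_p = predict(w, b, x)               # high confidence for class 1
x_adv = fgsm_perturb(w, b, x, y, eps=1.2)
adv_p = predict(w, b, x_adv)             # confidence collapses after the attack

print(round(float(clean_p), 3), round(float(adv_p), 3))
```

A small, bounded change to the input (here at most 1.2 per feature) flips the classifier's decision; GARD's defensible-ML foundations, systems, and testbed target exactly this kind of fragility at scale.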