As the European Union implements the world’s first comprehensive regulations on artificial intelligence (AI), human rights groups are raising alarms over exemptions for AI use at Europe’s borders. The EU’s AI Act, which categorises AI systems by risk level and imposes stricter rules for those with higher potential for harm, is set to take full effect by February 2025. While it promises to regulate AI across industries, controversial technologies like facial and emotion recognition are still permitted for border and police authorities, sparking concern over surveillance and discrimination.
With Europe investing heavily in border security, deploying AI-driven watchtowers and algorithms to monitor migration flows, critics argue these technologies could criminalise migrants and violate their rights. Human rights activists warn that AI may reinforce biases and lead to unlawful pushbacks of asylum seekers. Countries like Greece have become testing grounds for these technologies and have been accused of using AI for surveillance and discrimination, despite denials from the government.
Campaigners also point out that the EU's regulations allow European companies to develop and export harmful AI systems abroad, potentially fuelling human rights abuses in other countries. While the AI Act represents a step forward in global regulation, activists believe it falls short of protecting vulnerable groups at Europe's borders and beyond. They anticipate that legal challenges and public opposition will eventually close these regulatory gaps.