Shortly after Colombian presidential candidate Miguel Uribe Turbay was shot at a political rally in June, hundreds of videos of the attack flooded social media. Some of these turned out to be deepfakes made with artificial intelligence, forcing police and prosecutors to spend hours checking and debunking them during the investigation. A teenager was eventually charged.

Increasing adoption of AI is transforming Latin America’s justice system, helping tackle case backlogs and improve access to justice for victims. But rampant misuse, bias, and weak oversight are also exposing deep vulnerabilities, as regulators struggle to keep up with the pace of innovation.

Law enforcement doesn’t yet “have the capacity to look at these judicial matters beyond just asking whether a piece of evidence is real or not,” Lucia Camacho, public policy coordinator of Derechos Digitales, a digital rights group, told Rest of World. This may prevent victims from accessing robust legal frameworks and judges with knowledge of the technology, she said.

Justice systems across the world are struggling to address harms from deepfakes, which are increasingly used for financial scams, in elections, and to spread nonconsensual sexual imagery. There are currently over 1,300 initiatives across 80 countries and international organizations to regulate AI, but not all of these are laws, nor do they all cover deepfakes, according to the Organisation for Economic Co-operation and Development.