
EPS is using AI for all the wrong reasons

AI is now being used in police bodycams in Edmonton. The project, though, seems to have more downsides than upsides.

Is the Edmonton Police Service (EPS) heading in the right direction by using artificial intelligence (AI) in its bodycams? Since December of last year, EPS has been testing AI in its bodycams. It claims the technology is meant to help identify what it deems “high-risk offenders.” It still feels bizarre, though, that there is any possibility of being identified by a machine-learning algorithm regardless of criminal status. The system is only trained to flag those on a watchlist, but no one wants Edmonton to become a surveillance state.

How this AI facial recognition system works has not been fully explained. Earlier this month, a document was released outlining how the AI will be used. EPS, though, was rather unhelpful, declining to answer any questions about the document. But there are parts we do know. As most will know, police officers in Alberta are required to wear bodycams, both to ensure the safety of police officers and of civilians. That is nothing new. The AI, however, is.

Facial recognition is only supposed to work for those on the watchlist, but proactive surveillance wouldn’t be impossible. The AI feature also only works when the bodycam is on, in good lighting, and within four metres of the person. The Office of the Information and Privacy Commissioner (OIPC) also assessed that EPS was within its legal means to use this technology. Because those conditions are needed, EPS argued, automatic identification of bystanders is less likely. But every month, even every week, AI seems to be getting better.

The biggest concern for everyone right now should be privacy. No one should have to fear being put in an AI facial recognition database just for existing. Since OpenAI’s big boom in late 2022, there have been nothing but issues.

There are simply too many issues with AI to name them all, but it was recently found that AI can speed up cyber attacks. Anthropic, an AI company based out of San Francisco, has been shown to enhance the accuracy of cyber attacks, which would make it easier for hackers to plan and execute them. Agencies such as the Ontario Securities Commission (OSC) have questioned whether a new approach to AI security is needed, as the technology keeps transforming by the day. And with the Government of Canada announcing in the 2025 budget its plan to invest in AI, is this all moving too fast? The answer seems to be almost definitely yes.

The company behind these police bodycams, though, is not speeding up cyber attacks; it is instead a supplier of weapons. Axon, which manufactures the bodycams, is based out of the United States (U.S.). Much of what it is known for is supplying weapons and technology to law enforcement, the military, and civilians. That is a noteworthy piece of information, but it should not scare anyone away on its own.

Police brutality and racial stereotyping still exist in Canada, making the inclusion of AI feel like a step in the wrong direction. As much as AI has become a common part of many people’s lives, it’s hard to say anyone really knows exactly what it’s doing. AI is continually being used for harm, as in the case of sped-up cyber attacks. Letting EPS use a similar technology under these circumstances should not even be up for consideration. Maybe there’s a future where this technology can be used for good alongside EPS, but for the moment, it should be left alone.

Mackenzie Bengtsson

Mackenzie Bengtsson is the 2025-26 Deputy Opinion Editor.
