Balancing promise and risk: Why Alberta needs its own AI law
Alberta needs to implement AI regulation to protect its citizens and industries.

In the past few years, artificial intelligence (AI) has enveloped us, quietly shaping everything from our university classes to the posts on our social media feeds. Tech leaders and policymakers tell us it is inevitable, an unavoidable part of technological progress, and that any pushback unnecessarily hinders growth. It has become accepted as our new norm, while the risks remain hidden beneath the guise of innovation and convenience.
AI does have the potential to drive real advances, but without safeguards it can cause far more harm than good. Alberta must regulate AI through its own provincial law, tailored to the specific needs of the province. Doing so would maximize the benefits while avoiding the pitfalls of an uncontrolled technology.
AI governance should function as one of these safeguards. University of Alberta professor Blair Attard-Frost describes it as a practice meant to maximize the benefits and minimize the harms that AI systems can cause to individuals and groups. In practice, however, much of AI governance fails to address the needs of civil society, often excluding citizen input from the conversation entirely.
Attard-Frost, through her work on counter-governance, emphasizes that communities must have the power to question and reshape AI systems that perpetuate harm. Without these checks, AI can reinforce inequalities, exploit personal data, and make decisions without regard for their effect on people’s lives. This has already become clear in creative industries, where generative AI has been trained on millions of artists’ works without consent or compensation, raising serious ethical questions about intellectual property.
Alberta faces unique circumstances that make provincial legislation necessary. One clear example is the Edmonton Police Service’s use of generative AI to create a facial image of a suspect based on DNA phenotyping. The image was widely criticized for reinforcing racial profiling and was subsequently taken down. This illustrates how even a single unregulated AI decision can have significant consequences, especially for marginalized communities.
Canada’s federal government took initial steps to regulate AI through Bill C-27, which contained the proposed Artificial Intelligence and Data Act (AIDA). The act never became law: the bill died when the 2025 snap election terminated all pending legislation. Even so, AIDA left too many gaps, with vague definitions of what counted as a “high-impact” system. Furthermore, it excluded much of the public sector, including health care, education, and policing, essential sectors that need regulation. A similar gap appears in the European Union’s AI Act.
Although the EU act is a step in the right direction, critics rightly point out that lobby groups managed to weaken parts of it and that countless loopholes remain. These include letting companies determine their own classifications to avoid the stricter rules for high-risk systems, relying on internal self-assessments instead of outside audits, and carving out exceptions for law enforcement.
The gaps larger governments have left in their legislation show the mistakes Alberta must avoid. The Office of the Information and Privacy Commissioner of Alberta (OIPC) has also pointed out, in its report, that relying on AIDA alone leaves Alberta’s intra-provincial and public-sector AI use uncovered.

The OIPC has already recommended safeguards that a provincial AI law could adopt. These include opt-out options for individuals, transparency about when and how AI is used, and clear lines of accountability for decisions made by algorithms. Implementing these measures would not slow innovation; rather, it would foster public trust and set standards for developers and companies. When people know organizations handle their data responsibly, they are more likely to adopt and rely on AI tools. Businesses also benefit from consistent rules that reduce uncertainty and lower the chance of future disputes.
AI is already shaping our daily lives, and its presence in government policy and public services will only grow. That is why we need strong local laws that protect privacy rights and guard against discrimination. Alberta has a unique opportunity to build a framework that protects citizens while still enabling innovation. Only through such legislation will Albertans be able to fully benefit from what AI has to offer.