Artificial intelligence (AI) can help solve some of the greatest problems humanity faces today, but it also brings new challenges. To address these challenges, the advantages and drawbacks of AI must be weighed, and its use regulated.
AI refers to computer systems capable of performing tasks that normally require human intelligence, such as speech recognition, decision-making, and translation.
One of the earliest AI programs was written in 1951, when Dietrich Prinz created a limited chess program. On May 11, 1997, AI gained further global attention when Deep Blue, an IBM computer, beat world chess champion Garry Kasparov. The match showed that AI could solve certain problems better and faster than humans can.
Today, AI is everywhere in our daily lives, from self-driving cars to voice assistants on our smartphones. According to the Council of Canadian Academies, “AI has both the potential to spur innovation and further scientific understanding.” Students at the University of Alberta even developed a mobile app that tracks the progress of a healing wound.
Recently, Google’s DeepMind announced Gato, a generalist AI capable of performing hundreds of tasks, including writing poetry, showing that we are getting ever closer to AI with human-level intelligence.
But this level of advanced AI raises questions about morality, ethics, and adverse effects.
One problem is AI’s ability to complete repetitive tasks, raising the possibility of AI replacing employees in labour-based jobs. This is especially true in jobs involving assembly lines. AI-controlled robots are more efficient than human beings since they can move faster and don’t need to rest. This is already happening in Amazon fulfillment centres, where AI-controlled robots pack items, sort packages, and move cargo around the warehouse.
Some estimates suggest that one billion jobs could be lost globally to AI over the next decade, leading to a permanent shift in the global job market.
There is also potential for companies to manipulate AI for higher profitability by using vast amounts of data to target users’ personal biases and sell their products. Uber customers have complained that rides are more expensive when their phone’s battery is low. The AI recognizes that the customer does not want to be stranded with a dead phone, so it charges a premium.
But AI can also lead to humanity solving some of its most pressing problems.
AI is already being used to track deforestation in the Amazon and to design green, intelligent cities in China. AI is also useful in the fight against cancer: improving diagnosis and patient care, developing cancer-fighting drugs faster than normally possible, and predicting where tumours might regrow in the future.
To use AI to its full potential, we need to protect users. Whenever humans interact with AI, people must retain the autonomy to make their own decisions. AI should be designed to augment and amplify human skills, not to manipulate or deceive.
To ensure this, there needs to be more regulation in AI.
In 2018, the European Union passed guidelines on AI development within Europe. Canada, however, has no similar framework.
Since developing AI is very costly, requiring compliance with regulations before granting funding is one of the most effective ways to govern how AI is developed and used.
Additionally, we need better safeguards for data privacy. Bill C-11, currently before the Canadian Parliament, is a start. The bill aims to define AI transparency requirements and to ensure that users’ data stays private. However, there is still room for improvement.
These privacy laws should be expanded to restrict AI systems’ access to personal data. As the Uber example shows, many companies use private data to raise profits. Public education is equally vital, so that people are aware of the dangers and risks of AI. This ensures that we understand how AI can be used to exploit people and how to protect our personal information.
AI should also always have human oversight, to ensure that ethical and moral obligations are met.
The most important step is to make the development of AI transparent and open. Developers should regularly brief the public on their systems’ features and uses, and explain how they intend to use the data they collect. Transparency builds public trust and ensures that developers remain accountable for the products they create.
With the proper framework and guidelines, AI can become a tool that fuels innovation and growth rather than just being used to manipulate users.