Province mulls ban on ‘significant’ AI decisions without Ontarians’ consent
Ontario is considering a prohibition on the use of artificial intelligence to make decisions that would significantly impact someone’s life without their prior consent.
An AI “consent requirement” is among the proposals contained in a white paper released by the province last week that aims to “implement a fundamental right to privacy for Ontarians” and “introduce more safeguards for artificial intelligence.”
The document goes further than the measures contained in the federal government’s Bill C-11, the Digital Charter Implementation Act, and the European Union’s proposed AI regulations released earlier this month.
At issue in the white paper is businesses’ use of AI for “automated decision making,” in which a computer takes information, often from a wide variety of sources, and provides a recommendation based on that data.
The technology has become sophisticated enough for use in judging job candidates, assessing creditworthiness and selecting medical treatments.
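For readers unfamiliar with the term, the sketch below shows the basic shape of such a system: personal data goes in, a score and a recommendation come out, with no human in the loop. It is a minimal illustration only, using hypothetical feature names and hand-set weights rather than any real lender’s model.

```python
# Minimal sketch of an "automated decision" system (illustrative only):
# data about a person in, a score and a recommendation out.
import math

# Hypothetical, hand-set weights standing in for a trained credit model.
WEIGHTS = {"income_thousands": 0.04, "years_employed": 0.3, "missed_payments": -0.9}
BIAS = -1.5

def credit_recommendation(applicant: dict) -> tuple[float, str]:
    """Return an approval probability and a recommendation for one applicant."""
    score = BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    probability = 1 / (1 + math.exp(-score))  # logistic squashing to the 0..1 range
    return probability, ("approve" if probability >= 0.5 else "decline")

if __name__ == "__main__":
    applicant = {"income_thousands": 62, "years_employed": 4, "missed_payments": 2}
    prob, decision = credit_recommendation(applicant)
    print(f"approval probability {prob:.2f} -> {decision}")
```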
Jimmy Lin, co-director of the University of Waterloo’s Artificial Intelligence Institute, told Queen’s Park Today in an interview that a serious policy discussion is needed about reasonable safeguards for the technology, cautioning that a careful balance should be struck between privacy and innovation.
Lin worries an advance consent requirement may be too onerous, leaving Ontarians to miss out on the potential life improvements AI could offer.
“Google understands the search behaviour of millions of individuals, which they use to improve searches for everybody’s benefit … What would happen if Google had to ask each searcher to use their behavioural data?” he asked. “So protection of the individual needs to be balanced with the collective interests of society.”
Privacy watchdog calls for risk assessment
The proposed EU regulations, which are expected to influence policies worldwide, sort applications of AI into risk categories — something Ontario’s privacy watchdog is also pushing for.
Ontario privacy commissioner Patricia Kosseim’s office has also recommended AI never be used by the government in secret and that all Ontarians should benefit “economically and socially” from its use. Kosseim’s team is reviewing the white paper and will give a formal response in the coming weeks.
“The use of AI, particularly by governments, to make or guide decisions raises serious implications for access, privacy and other human rights,” the privacy watchdog’s office told Queen’s Park Today. “Ontarians need to be able to trust that organizations — public and private alike — will protect their personal data and use it ethically.”
Under the EU framework, automated decision-making with a “significant” impact on people’s lives would be classified as “high-risk” and subject to the strictest standards. The framework also gives EU citizens the right not to be subjected to automated decisions that could produce legal or other lasting consequences.
Ottawa’s Bill C-11, which is currently at second reading, would require organizations that use automated decision systems to provide Canadians, upon request, with an explanation of the AI’s decision and of what data was used and how.
Lin said that while he agrees with the EU’s approach of classifying AI by risk level, providing a clear understanding of why an AI made a decision one way or another can be very difficult.
“Obviously, we want our models to be transparent, but if you ask a brain surgeon how they arrived at a particular diagnosis, they may not be able to offer you a satisfying explanation either,” said Lin.
There is an active debate in the AI field, said Lin, about whether it’s possible to design AIs to make their algorithmic decisions comprehensible to the average person without sacrificing accuracy. If not, there will need to be a discussion about tradeoffs between effectiveness and transparency.
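To make the contrast concrete: for a simple linear scorer like the one sketched earlier, an “explanation” can be as plain as listing how much each input pushed the score up or down, as in the hypothetical Python snippet below. The more complex models that often deliver the best accuracy generally offer no such direct readout, which is the tradeoff Lin describes.

```python
# Illustrative only: a per-feature breakdown of the hypothetical credit score above.
# For a simple linear model, this kind of readout is the whole "explanation".
WEIGHTS = {"income_thousands": 0.04, "years_employed": 0.3, "missed_payments": -0.9}

def explain(applicant: dict) -> list[tuple[str, float]]:
    """Per-feature contributions to the raw score, largest impact first."""
    contributions = [(k, round(WEIGHTS[k] * applicant[k], 2)) for k in WEIGHTS]
    return sorted(contributions, key=lambda kv: abs(kv[1]), reverse=True)

print(explain({"income_thousands": 62, "years_employed": 4, "missed_payments": 2}))
# e.g. [('income_thousands', 2.48), ('missed_payments', -1.8), ('years_employed', 1.2)]
```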
Federal protections don’t go far enough, says Ontario white paper
The Ontario white paper argues that Bill C-11 does not do enough to protect Ontarians from the dangers of AI automated decision-making.
“Explanations are not sufficient to restore control to Ontarians. Individuals must also be protected from these systems,” reads the white paper.
“While these technologies offer valuable innovations, they have also increased the capabilities for surveillance in modern society, and therefore heightened the associated risks for individual rights.”
In addition to the recommended consent requirement for “significant” decision-making, the paper says Ontarians should have the right to know what personal information was used in a decision and the parameters behind it, to have the decision reviewed by a human being, and to contest it.
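The white paper does not prescribe an implementation, but a hypothetical record like the Python sketch below suggests what honouring those rights might require an organization to track for each automated decision. All names here are illustrative, not drawn from the white paper or any legislation.

```python
# Hypothetical record an organization might keep to honour the proposed rights:
# disclose the personal data and parameters behind an automated decision,
# route it to a human reviewer, and log a contest by the individual.
from dataclasses import dataclass, field

@dataclass
class AutomatedDecisionRecord:
    subject_id: str
    decision: str                    # e.g. "decline"
    personal_data_used: dict         # inputs drawn from the individual
    model_parameters: dict           # weights/thresholds applied
    human_reviewed: bool = False
    contested: bool = False
    review_notes: list = field(default_factory=list)

    def request_human_review(self, note: str) -> None:
        """Mark the decision for review by a person, per the proposed right."""
        self.human_reviewed = True
        self.review_notes.append(note)

    def contest(self, note: str) -> None:
        """Record that the individual has contested the decision."""
        self.contested = True
        self.review_notes.append(note)
```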
The province is hoping its recommendations will be incorporated into Bill C-11, but with the House of Commons about to rise for the summer this week and a possible federal election looming this fall, time is running out to get the legislation passed.
The Ministry of Government and Consumer Services told Queen’s Park Today that Bill C-11 is “flawed and would strip back protections.” It has invited federal Innovation, Science and Industry Minister François-Philippe Champagne to discuss potential amendments. (Canada’s privacy commissioner also has significant qualms with the legislation.)
A harmonized national policy would be preferable to a jurisdictional approach, said the ministry.
“However, if our province finds that a made-in-Ontario privacy law is our only option, we will be ready.”