I wrote this Op-Ed for my Law 432.D course titled ‘Accountable Computer Systems.’ This blog will likely be posted on the course website, but as I am presenting on a few related topics, I wanted it to be available to the general public in advance. I do note that after writing this blog, my more in-depth literature review uncovered many more administrative law scholars discussing accountability. However, I still believe we need to properly define accountability and can take lessons from Joshua Kroll’s work to do so.
Introduction
Canadian administrative law, through judicial review, examines whether decisions made by Government decision-makers (e.g. government officials, tribunals, and regulators) are reasonable, fair, and lawful.[i]
Administrative law governs the Federal Court’s review of whether an Officer has acted in a reasonable[ii] or procedurally fair[iii] way. In the context of Canadian immigration and citizenship law, for example, this may involve an Officer’s decision to deny a Jamaican mother’s permanent residence application on humanitarian and compassionate grounds[iv] or to strip Canadian citizenship from a Canadian-born child of Russian foreign intelligence operatives charged with espionage in the United States.[v]
Through judicial review and subsequent appellate Court processes, the term accountability has yet to be meaningfully engaged with in Canadian administrative case law.[vi] In computer science, by contrast, accountability is quickly becoming a central organizing principle and governance mechanism.[vii] Technical and computer science specialists are designing technological tools based on accountability principles that justify their use and perceived sociolegal impacts.
Accountability will need to be better interrogated within the Canadian administrative law context, especially as Government bodies increasingly render decisions utilizing computer systems (such as AI-driven decision-making systems)[viii] that are becoming subject to judicial review.[ix]
An example of this is the growing litigation around Immigration, Refugees and Citizenship Canada’s (“IRCC”) use of decision-making systems utilizing machine-learning and advanced analytics.[x]
Legal scholarship is just starting to scratch the surface of exploring administrative and judicial accountability and has done so largely as a reaction to AI systems challenging traditional human decision-making processes. In the Canadian administrative law literature I reviewed, the discussion of accountability has not involved defining the term beyond stating it is a desirable system aim.[xi]
So, how will Canadian courts perform judicial review and engage with a principle (accountability) that they hardly know?
There are a few takeaways from Joshua Kroll’s 2020 article, “Accountability in Computer Systems” that might be good starting points for this collaboration and conversation.
Defining Accountability – and the Need to Broaden Judicial Review’s Considerations
Kroll defines “accountability” to an entity as “a relationship that involves reporting information to that entity and in exchange receiving praise, disapproval, or consequences when appropriate.”[xii]
Kroll’s definition is important as it goes beyond thinking of accountability only as a check-and-balance oversight and review system,[xiii] framing it also as a relationship that requires mutual reporting in a variety of positive and negative situations. His definition embraces, rather than sidesteps, the role of normative standards and moral responsibility.[xiv]
This contrasts with administrative judicial review, a process that is usually only engaged when an individual or party is subject to a negative Government decision (often a refusal or denial of a benefit or service, or the finding of wrongdoing against an individual).[xv]
As a general principle that is subject to a few exceptions, judicial review limits the Court’s examination to the ‘application’ record that was before the final human officer when rendering their negative decision.[xvi] Judicial review is therefore a poor vehicle for seeking clarity from the Government about the underlying data, triaging systems, and biases that may form the context for the record itself.
I argue that Kroll’s definition of accountability provides room for this missing context and extends accountability to reporting the experiences of groups or individuals who receive the positive benefits of Government decisions when others do not. The Government currently holds this information as private institutional knowledge, fearing that broader disclosure could lead to scrutiny that might expose fault-lines such as discrimination and Charter[xvii] breaches/non-compliance.[xviii]
Consequently, I do not see accountability’s language fitting perfectly into our currently existing administrative law context, judicial review processes, and legal tests. Indeed, even the process of engaging with accountability’s definition in law and tools for implementation will challenge the starting point of judicial review’s deference and culture of reasons-based justification[xix] as being sufficient to hold Government to account.
Rethinking Transparency in Canadian Administrative Law
Transparency is a cornerstone concept in Canadian administrative law. Like accountability, this term is also not well-defined in operation, beyond the often-repeated phrase of a reasonable decision needing to be “justified, intelligible and transparent.”[xx] Kroll challenges the equivalency of transparency with accountability. He defines transparency as “the concept that systems and processes should be accessible to those affected either through an understanding of their function, through input into their structure, or both.”[xxi] Kroll argues that transparency is a possible vehicle or instrument for achieving accountability, but also one that can be both insufficient and undesirable,[xxii] especially where it can still admit illegitimate participants or lead actors to alter their behaviour to violate an operative norm.[xxiii]
The shortcomings of transparency as a reviewing criterion in Canadian administrative law are becoming apparent in IRCC’s use of automated decision-making (“ADM”) systems. Judicial reviews before the Federal Court are asking judges to consider the reasonableness, and by extension the transparency, of decisions made by systems that are themselves non-transparent, such as security screening automation[xxiv] and advanced analytics-based immigration application triaging tools.[xxv]
Consequently, IRCC and the Federal Court have instead defended and deconstructed pro forma template decisions generated by computer systems[xxvi] while ignoring the role of concepts such as bias, itself a concept under-explored and under-theorized in administrative law.[xxvii] Meanwhile, IRCC has denied applicants and Courts access to mechanisms of accountability such as audit trails and the findings of the technical and equity experts required to review these systems for gender and equity-based bias.[xxviii]
One therefore must ask: even if full technical system transparency were available, would it be desirable for Government decision-makers to be transparent about their ADM systems,[xxix] particularly given outstanding fears of individuals gaming the system[xxx] or, worse yet, perceived external threats to infrastructure or national security in certain applications?[xxxi] Where Baker viscerally exposed an Officer’s discrimination and racism in transparent written text, ADM systems threaten to erase the words from the page and provide only a non-transparent result.
Accountability as Destabilizing Canadian Administrative Law
Adding the language of accountability will be destabilizing for administrative judicial review.
Courts often repeat in Federal Court cases that it is “not the role of the Court to make its own determinations of fact, to substitute its view of the evidence or the appropriate outcome, or to reweigh the evidence.”[xxxii] The seeking of accountability may ask Courts to go behind and beyond an administrative decision, to function in ways and to ask questions they may not feel comfortable asking, possibly out of fear of overstepping the legislation’s intent.
A liberal conception of the law gravitates towards taxonomies, neat boxes, clean definitions, and coherent rules for consistency.[xxxiii] On the contrary, accountability acknowledges the existence of essentially contested concepts,[xxxiv] the layers of interpretation needed to parse out various accountability types,[xxxv] and the need for consensus-building. Adding accountability to administrative law will inevitably make law-making more complex. It may also suggest that judicial review, as an ex-post remedy, may not be as effective as ex-ante tools,[xxxvi] and that a more robust, frontline, regulatory regime may be needed for ADMs.
Conclusion: The Need for Administrative Law to Develop Accountability Airbags
The use of computer systems to render administrative decisions, more specifically the use of AI, which Kroll highlights as engaging many types of accountability,[xxxvii] puts accountability and Canadian administrative law on an inevitable collision course. Much like the design of airbags for a vehicle, building these safeguards will require both technical and legal expertise, as well as public education and awareness of both what accountability is and how it works in practice.
It is also becoming clearer that those impacted and engaging legal systems want the same answerability that Kroll speaks to for computer systems, such as ADMs used in Canadian immigration.[xxxviii] As such, multi-disciplinary experts will need to examine computer science concepts and accountable AI terminology such as explainability[xxxix] or interpretability[xl] alongside their administrative law conceptual counterparts, such as intelligibility[xli] and justification.[xlii]
As this op-ed suggests, there are already points of contention (but also likely underexplored synergies) around the definition of accountability, the role of transparency, and whether the normative or multi-faceted considerations of computer systems are even desirable in Canadian administrative law.
References
[i] Government of Canada, “Definitions” in Canada’s System of Justice. Last Modified: 01 September 2021. Accessible online <https://www.justice.gc.ca/eng/csj-sjc/ccs-ajc/06.html> See also: Legal Aid Ontario, “Judicial Review” (undated). Accessible online: <https://www.legalaid.on.ca/faq/judicial-review/>
[ii] The Supreme Court of Canada in Canada (Minister of Citizenship and Immigration) v. Vavilov, 2019 SCC 65 (CanLII), [2019] 4 SCR 653, <https://canlii.ca/t/j46kb> [“Vavilov”] set out the following about reasonableness review:
[15] In conducting a reasonableness review, a court must consider the outcome of the administrative decision in light of its underlying rationale in order to ensure that the decision as a whole is transparent, intelligible and justified. What distinguishes reasonableness review from correctness review is that the court conducting a reasonableness review must focus on the decision the administrative decision maker actually made, including the justification offered for it, and not on the conclusion the court itself would have reached in the administrative decision maker’s place.
[iii] The question for the Court to determine is whether “the procedure was fair having regard to all of the circumstances” and “whether the applicant knew the case to meet and had a full and fair chance to respond”. See: Ahmed v. Canada (Citizenship and Immigration), 2023 FC 72 at para 5; Canadian Pacific Railway Company v. Canada (Attorney General), 2018 FCA 69 at paras 54-56.
[iv] Baker v. Canada (Minister of Citizenship and Immigration), 1999 CanLII 699 (SCC), [1999] 2 SCR 817, <https://canlii.ca/t/1fqlk>. In Baker, a Canadian immigration officer refused the permanent residence application, made on humanitarian and compassionate grounds, of Ms. Mavis Baker, a Jamaican citizen and mother of eight children. The Officer’s notes contained inappropriate comments relating to the Applicant’s attempts to stay in Canada and her personal circumstances as a mother with mental health challenges. Among other important findings, the Court found that the Officer’s notes gave rise to a reasonable apprehension of bias and that the decision was made contrary to the duty of procedural fairness. Justice L’Heureux-Dubé formulated a non-exhaustive five-part test for procedural fairness at paras 23-27:
- Nature of decision made and the process followed in making it;
- Nature of the statutory scheme;
- Importance of the decision to the individual or individuals affected;
- The legitimate expectations of the person challenging the decision; and
- Deference to the decision-maker’s choice of procedures.
[v] In Vavilov, the Supreme Court of Canada heard the appeal of Alexander Vavilov, born in Canada to foreign nationals who were working on assignment in Canada as Russian intelligence agents. The Canadian Registrar of Citizenship cancelled his citizenship certificate, finding that he was the child of representatives of the Russian government. As such, the Registrar found that Mr. Vavilov was exempt from the general rule that individuals born in Canada are automatically granted Canadian citizenship. The Supreme Court of Canada found the Registrar’s interpretation unreasonable and ruled that Mr. Vavilov is a Canadian citizen. The Supreme Court of Canada heard this case as part of a trilogy of cases which re-examined the nature and scope of administrative judicial review. The decision focused on developing a revised framework for a presumptive reasonableness review of administrative decisions.
[vi] In the Supreme Court of Canada’s leading administrative law decision, Vavilov, there is only one mention of the word “accountability” at para 13 which cautions decision-makers to be accountable for their analysis. From an operationalization perspective this tells us little about how accountability is applied or analyzed in a legal context. There is no mention of accountability in the previous leading precedential case Dunsmuir v. New Brunswick, 2008 SCC 9 (CanLII), [2008] 1 SCR 190, <https://canlii.ca/t/1vxsm> [“Dunsmuir”] nor in the leading case on procedural fairness, Baker.
[vii] Joshua A. Kroll, “Accountability in Computer Systems” in The Oxford Handbook of Ethics of AI (2020): 181-196. Accessed online: <https://academic.oup.com/edited-volume/34287/chapter-abstract/290661049?redirectedFrom=fulltext&login=false>
[viii] Various Canadian Government agencies have published Algorithmic Impact Assessments (“AIAs”) for their implementations of algorithmic decision-making systems in areas such as social benefits and immigration. See: Government of Canada, Open Government Portal. Accessed online: <https://search.open.canada.ca/opendata/?sort=metadata_modified+desc&search_text=Algorithmic+Impact+Assessment&page=1>
[ix] Kiss v. Canada (Citizenship and Immigration), 2023 FC 1147 (CanLII), <https://canlii.ca/t/jzwtx>
[x] IRCC is using these AI-based ADM systems to help automate positive eligibility findings for certain temporary and permanent resident applications and to flag high-risk files.
[xi] Paul Daly, “Artificial Administration: Administrative Law, Administrative Justice and Accountability in the Age of Machines”, [Source not specified], 2023 CanLIIDocs 1258, Accessed online: <https://canlii.ca/t/7n4jw> at 18-26. See also, in the Australian context on judicial accountability: Felicity Bell, Lyria Bennett Moses, et al, “AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators” The Australian Institute of Judicial Administration Inc., at 46-49.
[xii] Kroll at 184.
[xiii] Kroll discusses these concepts in his piece at 184, 186-187.
[xiv] Ibid at 184 and 192.
[xv] In addition to matters of judicial review in Federal jurisdiction (e.g., immigration, Indigenous, tax, and intellectual property decisions), there are also judicial reviews before provincial superior courts. The Supreme Court of British Columbia, for example, reviews residential tenancy, motor vehicle, and workers’ compensation decisions, among others. See: “What is Judicial Review”, Supreme Court BC Online Help Guide, Accessed online: <https://supremecourtbc.ca/civil-law/getting-started/what-is-jr>
[xvi] For an explanation of this rule and the context, see Stratas J.A.’s decision in Bernard v. Canada (Revenue Agency), 2015 FCA 263 (CanLII), <https://canlii.ca/t/gmb0m> at paras 13-28.
[xvii] Canadian Charter of Rights and Freedoms, s 7, Part 1 of the Constitution Act, 1982, being Schedule B to the Canada Act 1982 (UK), 1982, c 11.
[xviii] Immigration, Refugees and Citizenship Canada, “Guide de politique sur le soutien automatisé à la prise de décision version de 2021/Policy Playbook on Automated Support for Decision-making 2021 edition (Bilingual)” as made available on Will Tao, Vancouver Immigration Blog, (11 May 2023) [“Policy Playbook”], Accessed online: <https://vancouverimmigrationblog.com/guide-de-politique-sur-le-soutien-automatise-a-la-prise-de-decision-version-de-2021-policy-playbook-on-automated-support-for-decision-making-2021-edition-bilingual/> at 5.
[xix] Vavilov at paras 2 and 26.
[xx] Vavilov at paras 15, 95-96; Dunsmuir at para 47.
[xxi] Kroll at 193-194.
[xxii] Ibid.
[xxiii] Ibid.
[xxiv] Canada Border Services Agency, “Algorithmic Impact Assessment for Security Screening Automation,” Interim Release of Access to Information Act Request A-2023-18296.
[xxv] See, e.g.: Kiss
[xxvi] See e.g. Haghshenas v. Canada (Citizenship and Immigration), 2023 FC 464 (CanLII), <https://canlii.ca/t/jwhkd>
[xxvii] One of the challenges that has arisen in the context of Baker, supra, is Officers being careful not to make explicitly biased statements and sticking to template reasons that become difficult to challenge for bias. Furthermore, there has been little exploration of the definition of bias, other than providing for a high threshold test for reasonable apprehension of bias.
[xxviii] Bias is a consideration in the Directive on Automated Decision-Making (“DADM”). See: Government of Canada, Treasury Board Secretariat, Directive on Automated Decision-Making (Ottawa: Treasury Board Secretariat, 2019), Accessible online: <https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592> (Last modified: 25 April 2023) (“DADM”). The AIA questionnaire also asks questions about data.
[xxix] A reviewer asked a very engaging question about whether ADM systems inherently lack transparency or if there is a lack of a mandate for ADMs to be transparent. This piece does not purport to answer this question but highlights an example of IRCC not wanting their ADM system to be transparent.
[xxx] IRCC raises this concern in their Policy Playbook at page 12.
[xxxi] National security concerns are raised in the Policy Playbook (at page 32). They are also cited in redactions to the Algorithmic Impact Assessment for Security Screening Automation. It has also led to the Government’s motions to redact portions of the Certified Tribunal Records (see e.g. Kiss at paras 25-30).
[xxxii] See e.g. Li v. Canada (Citizenship and Immigration), 2023 FC 1753 (CanLII), <https://canlii.ca/t/k2123> at para 28.
[xxxiii] See similar critique in Patricia J. Williams, The Alchemy of Race and Rights (Cambridge MA: Harvard University Press, 1998), at 6.
[xxxiv] Kroll at 16.
[xxxv] Ibid at 184.
[xxxvi] Kroll discusses the ex-ante approach at 186.
[xxxvii] Ibid at 184.
[xxxviii] Ibid at 185-186, 189-192.
[xxxix] A helpful reviewer recommended looking at different perspectives of accountability. They specifically recommended looking at Finale Doshi-Velez, Mason Kortz, et al, “Accountability of AI Under the Law: The Role of Explanation” 3 November 2017, Berkman Center Research Publication, Forthcoming, Available at SSRN: <https://ssrn.com/abstract=3064761> or <http://dx.doi.org/10.2139/ssrn.3064761>. While it is beyond the scope of this paper to do a full comparison between the Kroll and Doshi-Velez et al. perspectives, I note that Doshi-Velez et al. consider explanation “but one tool” to hold AI systems to account (at page 10). Doshi-Velez et al. define explanation as “a human-interpretable description of the process by which a decision-maker took a particular set of inputs and reached a particular conclusion,” and note that “an explanation should permit an observer to determine the extent to which a particular input was determinative of or influential on the output.” Similarly, Kroll discusses some of the challenges of demanding full causal explanations of human functionaries within a system (at page 187) as well as the scientific approach’s focus on full, mechanistic explanations (at page 189). I have decided to centre Kroll’s discussion, both because it was mandatory course reading material and, more importantly, because it focused on attempting to define accountability. I note here, however, that Kroll’s discussion of explanations and of answerability (at page 184) shares attributes with the way Doshi-Velez et al. discuss explanation in the societal, legal, and technical contexts (at pages 4-8). There may also be other definitions of accountability in computer science, other areas of the law (such as tort and contract), and other disciplines that should be engaged in a longer, more thorough study of accountability.
[xl] The same reviewer who recommended that I consider transparency also recommended a reading on interpretable machine learning systems. See: Carnegie Mellon University, CMU ML Blog, Accessed online: <https://blog.ml.cmu.edu/2020/08/31/6-interpretability/>. Kroll does not specifically discuss or use the word interpretability. This again highlights the lack of definitional and terminological alignment, possibly not only between law and computer science, but within computer science itself. We see similar issues in law, with the way Courts interchange terminology and descriptors, particularly as it pertains to the reasonableness standard discussed earlier.
[xli] Intelligibility is another term that I would argue is not well-defined in Canadian administrative law but has become a commonly used moniker for a reasonable decision (alongside transparency and justification). It appears six times in Vavilov. This term may be related to the “intelligible standard” that legislatures are held to, see: Canadian Foundation for Children, Youth and the Law v. Canada (Attorney General), 2004 SCC 4 (CanLII), [2004] 1 SCR 76, <https://canlii.ca/t/1g990> at para 16, where the Court states:
A law must set an intelligible standard both for the citizens it governs and the officials who must enforce it. The two are interconnected. A vague law prevents the citizen from realizing when he or she is entering an area of risk for criminal sanction. It similarly makes it difficult for law enforcement officers and judges to determine whether a crime has been committed. This invokes the further concern of putting too much discretion in the hands of law enforcement officials, and violates the precept that individuals should be governed by the rule of law, not the rule of persons. The doctrine of vagueness is directed generally at the evil of leaving “basic policy matters to policemen, judges, and juries for resolution on an ad hoc and subjective basis, with the attendant dangers of arbitrary and discriminatory application”: Grayned v. City of Rockford, 408 U.S. 104 (1972), at p. 109.
[xlii] Justification is a common theme in Vavilov, especially around the “culture of justification” discussed by the majority. This manifests in focusing on the decision and reasons actually made by the decision-maker (Vavilov at paras 14-15). The majority also highlighted the concept of responsive justification, where if a decision has a particularly harsh consequence for the affected individual, the decision maker must explain why the decision best reflects the legislature’s intent (Vavilov at para 133).