Artificial Intelligence


Why the 30-Year-Old Florea Presumption Should Be Retired in the Face of Automated Decision-Making in Canadian Immigration

In the recent Federal Court decision of Hassani v. Canada (Citizenship and Immigration), 2023 FC 734, Justice Gascon writes a paragraph that I thought would be an excellent starting point for a blog. Not only does it capture the state of administrative decision-making in immigration and highlight some of its foundational pieces, but it also contains one part that, I respectfully suggest, needs a rethink.

Hassani involved an Iranian international student who was refused a study permit to attend a Professional Photography program at Langara College. She was refused on two factors: [1] that she did not have significant family ties outside Canada, and [2] that her purpose of visit was not consistent with a temporary stay given the details she had provided in her application. On the facts, it is definitely questionable that this case even went to hearing, given that the Applicant had no family ties in Canada and all her family ties were indeed outside Canada, in Iran. Nevertheless, Justice Gascon did a very good job analyzing the flaws within the Officer’s two findings.

There is one paragraph, paragraph 26, that is worth breaking down further – and there is one foundational principle cited in it that I think needs a major rethink.

Justice Gascon writes:

[26] I do not dispute that a decision maker is generally not required to make an explicit finding on each constituent element of an issue when reaching its final decision. I also accept that a decision maker is presumed to have weighed and considered all the evidence presented to him or her unless the contrary is shown (Florea v Canada (Minister of Employment and Immigration), [1993] FCJ No 598 (FCA) (QL) at para 1). I further agree that failure to mention a particular piece of evidence in a decision does not mean that it was ignored and does not constitute an error (Cepeda-Gutierrez v Canada (Minister of Citizenship and Immigration), 1998 CanLII 8667 (FC), [1998] FCJ No 1425 (QL) [Cepeda-Gutierrez] at paras 16–17). Nevertheless, it is also well established that a decision maker should not overlook contradictory evidence. This is particularly true with respect to key elements relied upon by the decision maker to reach its conclusion. When an administrative tribunal is silent on evidence clearly pointing to an opposite conclusion and squarely contradicting its findings of fact, the Court may intervene and infer that the tribunal ignored the contradictory evidence when making its decision (Ozdemir v Canada (Minister of Citizenship and Immigration), 2001 FCA 331 at paras 9–10; Cepeda-Gutierrez at para 17). The failure to consider specific evidence must be viewed in context, and it will lead to a decision being overturned when the non-mentioned evidence is critical, contradicts the tribunal’s conclusion and the reviewing court determines that its omission means that the tribunal disregarded the material before it (Penez at paras 24–25). This is precisely the case here with respect to Ms. Hassani’s family ties in Iran. (emphasis added)

 

What is the Florea Presumption?

As stated by Justice Gascon, the principle in Florea v Canada (Minister of Employment and Immigration), [1993] FCJ No 598 (FCA) pertains to a Tribunal’s weighing of evidence and the presumption that it has considered all the evidence before it. It puts the onus on the Applicant asserting otherwise to establish the contrary.

As the Immigration and Refugee Board Legal Services chapter on Weighing Evidence states:

Rather, the panel is presumed on judicial review to have weighed and considered all of the evidence before it, unless the contrary is established. (see: https://irb.gc.ca/en/legal-policy/legal-concepts/Documents/Evid%20Full_e-2020-FINAL.pdf)

This case and principle are often cited in refugee matters, humanitarian and compassionate grounds matters, inadmissibility cases, and IRB matters.

Reviewing case law from the last two years (since 2021), I found that a handful of the thirty cases I reviewed engaged this case and principle in a temporary resident context.

See e.g.: study permit JR – Marcelin v. Canada (Citizenship and Immigration), 2021 FC 761 at para 16, Madam Justice Roussel [JR dismissed]; PNP work permit – Shang v. Canada (Citizenship and Immigration), 2021 FC 633 at para 65, citing Basanti v Canada (Citizenship and Immigration), 2019 FC 1068 at para 24, Madam Justice Kane [JR allowed]; minor child TRV refusal – Dardari v. Canada (Citizenship and Immigration), 2021 FC 493 at para 39 (adding the portion “and is not obliged to refer to each piece of evidence submitted by the applicant”), Madam Justice St-Louis [JR dismissed].

Related to this is the long-standing and oft-cited decision of Cepeda-Gutierrez v. Canada (Citizenship and Immigration), [1998] FCJ No 1425, in which Justice Evans reiterated that a statement by an agency that it considered all the evidence before it (even as a boilerplate statement) will often suffice to assure the parties and the Court of this. He writes:

[16]      On the other hand, the reasons given by administrative agencies are not to be read hypercritically by a court (Medina v. Canada (Minister of Employment and Immigration) (1990), 12 Imm. L.R. (2d) 33 (F.C.A.)), nor are agencies required to refer to every piece of evidence that they received that is contrary to their finding, and to explain how they dealt with it (see, for example, Hassan v. Canada (Minister of Employment and Immigration) (1992), 147 N.R. 317 (F.C.A.). That would be far too onerous a burden to impose upon administrative decision-makers who may be struggling with a heavy case-load and inadequate resources. A statement by the agency in its reasons for decision that, in making its findings, it considered all the evidence before it, will often suffice to assure the parties, and a reviewing court, that the agency directed itself to the totality of the evidence when making its findings of fact.

(emphasis added)

 

Why the Florea Presumption Should Be Reversed For Temporary Resident Applications and Any Decision Utilizing Advanced Analytics/AI/Chinook/Cumulus/Harvester

My argument is that this presumption that all evidence has been considered, as well as the boilerplate template language stating that it was considered, should not apply universally in 2023.

We know enough (though still not enough about the system writ large) to know that systems such as Chinook were created to facilitate the processing of temporary resident applications in hundreds of seconds, to extract data into Excel-style tables for bulk processing, and to automate eligibility approvals. These features were designed specifically to allow Officers to spend less time on each file and to consider enough, not all, of the evidence before them to render a decision.

I think the fact that applications are being auto-approved on eligibility, based simply on a set of rules driven primarily by an applicant’s biometric information, should be enough to raise concerns about whether these systems even require consideration of most of the evidence an applicant submits.
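To make this concern concrete, here is a minimal, purely hypothetical sketch (in Python) of what rule-based eligibility triage can look like. None of the field names, rules, or thresholds below are drawn from IRCC systems or documents; the point is simply that a rules engine of this kind can approve eligibility without ever opening most of the evidence an applicant uploads.

```python
# Hypothetical illustration only: a toy rules-based eligibility triage.
# Field names and thresholds are invented for demonstration; they are not
# drawn from any IRCC system or document.

def triage_eligibility(application: dict) -> str:
    """Return 'auto-approve' or 'route-to-officer' using a few structured fields."""
    # Note what is *not* consulted: study plans, financial documents,
    # letters of explanation, or any other uploaded evidence.
    rules_passed = (
        application.get("biometrics_match") is True
        and application.get("prior_refusals", 0) == 0
        and application.get("passport_valid") is True
    )
    return "auto-approve" if rules_passed else "route-to-officer"


sample = {"biometrics_match": True, "prior_refusals": 0, "passport_valid": True,
          "uploaded_documents": ["study_plan.pdf", "bank_statements.pdf"]}
print(triage_eligibility(sample))  # -> "auto-approve", without opening any uploaded document
```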

All the materials on bulk processing that IRCC has released in the past few years have focused on the fact that not all documents need to be reviewed (rather than wording that states: review Additional Documents, as required).

 

IRCC Officer Training Guide Obtained Through ATIP

 

IRCC Visa Office Training Guide Obtained Through ATIP

If you look at the Daponte Affidavit and the original Module 3 Prompt that was created, it does not add confidence that there was any requirement that all documents necessarily be reviewed:

Daponte Affidavit from Ocran

We learned that in response to concerns, they added to Chinook a prompt reminder for Officers to review all materials, but it is clear Chinook has gone far beyond ‘review and initial assessment’ to bulk processing.

Even with Cumulus, it is clear that documents not converted to e-Docs have to be pulled up separately in GCMS – the very tedious process that tools such as Cumulus seek to avoid.

Cumulus Training Manual Obtained Through ATIP

I would presume that it would be much easier for an Officer to make a decision based on these summary extractions than to go into the documents themselves.

Cumulus Training Guide Obtained Through ATIP

The documents are viewed below in what is much more akin to a ‘preview’ mode.

Cumulus Training Guide Obtained Through ATIP

Harvester, a tool that facilitates the conversion of documents into a reviewable format, is similarly limited by which documents can be extracted.

Harvester User Guide Obtained Via ATIP

The way it is described, and the fact that some offices can exclude certain documents, already suggests that not all documents make it into the Officer’s purview.

Most important, as a constraint, is time. As Andrew Koltun has uncovered, IRCC spends an average of 101 seconds per application when processing with Chinook. https://theijf.org/nearly-40-per-cent-of-student-visa-applications-from-india-rejected-for-vague-reasons#

Respectfully, 101 seconds is only enough to consider one or two documents – at most – before rendering a decision (at that pace, an application with, say, ten supporting documents would allow roughly ten seconds per document). The future use of Large Language Models and OCR to extract key […]


Cautious Concern But Missing Crucial Context – Justice Brown’s Decision in Haghshenas

After the Federal Court’s decision in Ocran v. MCI (Canada), 2022 FC 175, it was almost inevitable that we would be talking again about Chinook. Counsel (including ourselves) have been raising the use of Chinook and concerns about Artificial Intelligence in memorandums of argument and accompanying affidavits, arguing – for example – that much of the standard template language used falls short of the Vavilov standard and in many cases is non-responsive to, or not reflective of, the Applicant’s submissions.

We have largely been successful in getting cases consented on using this approach, yet I cannot say our overall success in resolving judicial reviews has followed suit. Indeed, recently we have been stuck at the visa office on re-opening more than we have been in the past.

Today, the Federal Court rendered a decision that again engaged with Chinook and, in this case, also touched on Artificial Intelligence. Many took to Twitter and LinkedIn to express concern about bad precedent. Scholars such as Paul Daly also weighed in on Justice Brown’s decision, highlighting that there is simply a lot we do not know about how Chinook is deployed.

I might take a different view than many on this case. While I think it might be read (and could be pointed to as precedent by the Department of Justice) as a decision upholding the reasonableness and fairness of utilizing Chinook and AI, I also think there was no record tying how the process affects the outcome – which was clearly the link Justice Brown was concerned about.

Haghshenas v. Canada (MCI) 2023 FC 464

Mr. Haghshenas had his C-11 (LMIA-exempt) work permit refused on the basis that he would not leave Canada at the end of his authorized stay, pursuant to subsection 200(1) of the IRPR. It is interesting that in the Certified Tribunal Record, and specifically the GCMS notes, there is no mention of Chinook 3+ as is commonly disclosed now. However, there is wording showing Indicators (meaning risk indicators) as N/A and Processing Word Flag as N/A. These are Module 5 flags that make up one of the columns in the Chinook spreadsheet, so it is presumable that Chinook could have been used. However, we do note the screenshots that were part of the CTR do not appear to include the Chinook tab or any screenshot of what Chinook looked at. From the record, this lack of transparency about what tool was actually used did not appear to be challenged.

Ultimately, the refusal decision itself is actually quite personalized – not carrying the usual pure template characteristics of the Module 4 Refusal Notes generator. There is a personalized assessment of the actual business plan, the profits considered (and labelled speculative by the Officer), and concerns about whether registration under the licensed contractor process has been done. From my own experience, this decision seems quite removed from the usual Module 3 and perhaps suggests either that Chinook was not fully engaged OR that the functionality of Chinook has gotten much better, to the point where its use becomes blurred. It could reasonably be both.

In upholding the procedural fairness and reasonableness of the decision, Justice Brown does engage in a discussion of Chinook and AI in two areas.

In dismissing the Applicant’s argument on procedural fairness, Justice Brown writes:

[24] As to artificial intelligence, the Applicant submits the Decision is based on artificial intelligence generated by Microsoft in the form of “Chinook” software. However, the evidence is that the Decision was made by a Visa Officer and not by software. I agree the Decision had input assembled by artificial intelligence, but it seems to me the Court on judicial review is to look at the record and the Decision and determine its reasonableness in accordance with Vavilov. Whether a decision is reasonable or unreasonable will determine if it is upheld or set aside, whether or not artificial intelligence was used. To hold otherwise would elevate process over substance.

He writes later, under the reasonableness of decision, heading:

[28] Regarding the use of the “Chinook” software, the Applicant suggests that there are questions about its reliability and efficacy. In this way, the Applicant suggests that a decision rendered using Chinook cannot be termed reasonable until it is elaborated to all stakeholders how machine learning has replaced human input and how it affects application outcomes. I have already dealt with this argument under procedural fairness, and found the use of artificial intelligence is irrelevant given that (a) an Officer made the Decision in question, and that (b) judicial review deals with the procedural fairness and or reasonableness of the Decision as required by Vavilov.

Justice Brown appeared to be concerned with the Applicant’s failure to tie the process of utilizing artificial intelligence or Chinook to how it actually impacted the reasonableness or fairness of the decision. Justice Brown looks at the final decision and correctly suggests: an Officer made it, and the Record justifies it. How it got from A to C is not the reviewable decision; what is reviewable is the A of the input provided to the Officer and the C of the Officer’s decision.

I want to ask about the missing B – the context.

It is also interesting to note, in looking at the Record, that the Respondent (Minister) did not engage in any discussion of Chinook or AI. The argument was raised solely by the Applicant – in two paragraphs of the written memorandum of argument and one paragraph of the reply. The Applicant’s argument, rejected by Justice Brown, was that uncertainty about the reliability and efficacy of these tools, and the lack of communication about them, created uncertainty about how they were used, which ultimately impacted the fairness/reasonableness of the decision.

The Applicant captures these arguments in paragraphs 9, 10, and 32 of their memorandum, writing:

The nature of the decision and the process followed in making it

9. While the reason originally given to the Applicant was that the visa officer (the decision maker) believed that the Applicant would not leave Canada based on the purpose of visit, the reasons now given during these proceedings reveal that the background rationale of the decision maker does not support refusal based on purpose of visit. In fact, the application was delayed for nearly five months and in the end the decision was arrived at with the help of Artificial Intelligence technology of Chinook 3+. It is not certain as to what information was analysed by the aforesaid software and what was presented to the decision maker to make up a decision. It can be presumed that not enough of human input has gone into it, which is not appropriate for a complicated case involving business immigration. It is also not apt in view of the importance of the decision to the individual, who has committed a great deal of funds for this purpose. (emphasis added)

10. Chinook is a processing tool that it developed to deal with the higher volume of applications. This tool allows DMs to review applications more quickly. Specifically, the DM is able to pull information from the GCMS system for many applications at the same time, review the information and make decisions and generate notes using a built-in note generator, in a fraction of the time it previously took to review the same number of applications. It can be presumed that not enough human input has gone into it, which is not appropriate for a complicated case involving business immigration. In the case at hand, Chinook Module 5 – indicator management tool was used, which consists of risk indicators and local word flags. A local word flag is used to assist in prioritizing applications. It is left up to Chinook to search for these indicators and flags and create a report, which is then copy and pasted into GCMS by the DM. The present case is one that deserved priority processing being covered by GATS. Since the appropriate inputs may not have been fed into the mechanised processes of Chinook, which would flag priority in suchlike GATS cases, the DM’s GCMS notes read “processing priority word flag: N/A”. This is clearly wrong and betrays the fallout in using technology to supplant human input. The use of Chinook has caused there to be a lack of effective oversight on the decisions being generated. It is also not apt in view of the importance of the decision to the individual, who has committed a great deal of funds for this purpose (Baker supra). (emphasis added)

32. On the issue of Chinook, while it can be believed that faced with a large volume of cases, IRCC has been working to develop efficiency-enhancing tools to assist visa officers in the decision-making process. Chinook is one such tool. IRCC has been placing heavy reliance on it for more than a year now. However, as always with use of any technology, there are questions about its reliability and efficacy for the purpose it sets out to achieve. There are concerns about the manner in which information is processed and analysed. The working of the system is still unclear to the general public. A decision rendered using it cannot be termed reasonable until it is elaborated to all stakeholders to what extent has machine replaced human input and how it impacts the final outcome. The test set by the Supreme Court in Vavilov has not been met.

The Applicant appeared to be almost making an argument that the complexity of the case suggested Chinook should not have been used and therefore a human should have reviewed it. However, there seemed to be a gap in engaging both the fact that IRCC did not indicate it had used Chinook and the fact that the reasons were actually more responsive to the facts than usual. I also think the argument that a positive word flag should have been implemented but was not ultimately did not get picked up by the Court – but it lacked a record of affidavit evidence or a challenge to the CTR […]


Coach Will: New Vocabulary Words Tomorrow’s Immigration Practitioners Will Need To Know

As a resource, and to buy time as I am writing more substantive blogs, I wanted to share a #CoachWill blog on new vocabulary – terminology that tomorrow’s immigration practitioners will need to know, learn, advise their clients on, and spend time with. I am still very much learning these terms and their impact, but this gives us a mutual starting point to grow our knowledge of how Canadian immigration law will be impacted moving forward:

 

Advanced Analytics: which is composed of both Predictive and Prescriptive components, consists of using computer technology to analyze past behaviours, with the goal of discovering patterns that enable predictions of future behaviours. With the aid of a team of computer science, data, IT, and program specialists, AA may result in the creation of a model that can perform risk triage and enable automated approvals on a portion of cases, thereby achieving significant productivity gains and reducing processing times. [As defined in IRCC’s China-Advanced Analytics TRV Privacy Impact Assessment]
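For illustration only, here is a minimal sketch of what “risk triage” by a predictive model might look like, assuming a model trained on past outcomes with invented features, labels, and tier thresholds. This is not IRCC’s model or data; it simply shows the mechanic of learning from past decisions and sorting new applications into tiers.

```python
# Hypothetical sketch of predictive risk triage: a model trained on past
# outcomes assigns new applications to tiers. All features, labels, and
# thresholds are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy historical data: each row is [age, prior_travel, funds_in_thousands]
X_past = np.array([[24, 0, 5], [35, 3, 40], [29, 1, 12], [41, 5, 60], [22, 0, 3], [33, 2, 25]])
y_past = np.array([0, 1, 0, 1, 0, 1])  # 1 = approved historically, 0 = refused

model = LogisticRegression().fit(X_past, y_past)

def assign_tier(features) -> str:
    """Map the model's predicted approval probability to a triage tier."""
    p_approve = model.predict_proba([features])[0][1]
    if p_approve > 0.8:
        return "Tier 1 (low complexity - candidate for automated approval)"
    if p_approve > 0.4:
        return "Tier 2 (routine officer review)"
    return "Tier 3 (enhanced scrutiny)"

print(assign_tier([30, 2, 20]))
```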

Artificial Intelligence: Encompassing a broad range of technologies and approaches, AI is essentially the field of computer science dedicated to solving cognitive problems commonly associated with human intelligence, such as learning, problem solving, and pattern recognition. [As defined in IRCC’s Policy Playbook on Automation]

 

Automated decision support system: Includes any information technology designed to directly support a human decision-maker on an administrative decision (for example, by providing a recommendation), and/or designed to make an administrative decision in lieu of a human decision-maker. This includes systems like eTA or Visitor Record and Study Permit Extension automation in GCMS. [As defined in IRCC’s Policy Playbook on Automation]

 

Black Box: Opaque software tools working outside the scope of meaningful scrutiny and accountability. Usually deep learning systems. Their behaviour can be difficult to interpret and explain, raising concerns over explainability, transparency, and human control. [As defined in IRCC’s Policy Playbook on Automation]

 

Deep learning/neural network is a subset of machine learning, which is essentially a neural network with three or more layers. These neural networks attempt to simulate the behavior of the human brain—albeit far from matching its ability—allowing it to “learn” from large amounts of data. While a neural network with a single layer can still make approximate predictions, additional hidden layers can help to optimize and refine for accuracy. [As defined by IBM: https://www.ibm.com/cloud/learn/deep-learning#:~:text=Deep%20learning%20is%20a%20subset,from%20large%20amounts%20of%20data]
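A minimal sketch of the “three or more layers” idea from this definition, using plain numpy to show only the layered structure (random weights, no training, and nothing specific to any IRCC system):

```python
# Toy feed-forward network with two hidden layers plus an output layer,
# illustrating the layered structure behind the "deep learning" definition.
import numpy as np

def relu(x):
    return np.maximum(0, x)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))            # input: 4 features
W1, W2, W3 = rng.normal(size=(4, 8)), rng.normal(size=(8, 8)), rng.normal(size=(8, 1))

h1 = relu(x @ W1)                      # hidden layer 1
h2 = relu(h1 @ W2)                     # hidden layer 2
output = 1 / (1 + np.exp(-(h2 @ W3)))  # output layer: a probability-like score

print(output)
```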

 

Exploration zone: The exploration zone – also referred to as a “sandbox” – is the environment used for research, experimentation and testing related to advanced analytics and AI. Data, codes and software are isolated from those in production so that they can be tested securely.

“Fettering” of a decision-maker’s discretion: Fettering occurs when a decision-maker does not genuinely exercise independent judgment in a matter. This can occur when a decision-maker binds him/herself to a fixed rule of policy, another person’s opinion, or the outputs of a decision support system. Although an administrative decision-maker may properly be influenced by policy considerations and other factors, he or she must put his or her mind to the specific circumstances of the case and not focus blindly on one input (e.g. a risk score provided by an algorithmic system) to the exclusion of other relevant factors. [As defined in IRCC’s Policy Playbook on Automation]

 


 

Machine learning: A sub-category of artificial intelligence, machine learning refers to algorithms and statistical models that learn and improve from examples, data, and experience, rather than following pre-programmed rules. Machine learning systems effectively perform a specific task without using explicit instructions, relying on models and inference instead. [As defined in IRCC’s Policy Playbook on Automation]
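To illustrate the contrast in this definition, here is a short, hypothetical sketch comparing a pre-programmed rule with a model that learns its behaviour from example data (toy numbers only, not any real system):

```python
# Contrast between a pre-programmed rule and a model that learns from examples.
from sklearn.tree import DecisionTreeClassifier

# Pre-programmed rule: written by a human, never changes with data.
def rule_based(funds_in_thousands: float) -> int:
    return 1 if funds_in_thousands >= 10 else 0

# Learned model: its behaviour comes from example data, not explicit instructions.
X = [[5], [8], [12], [20], [3], [15]]   # toy feature: funds in thousands
y = [0, 0, 1, 1, 0, 1]                  # toy labels from past outcomes
learned = DecisionTreeClassifier().fit(X, y)

print(rule_based(9), learned.predict([[9]])[0])
```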

 

A minimum viable product (MVP) is a development technique in which a new product or website is developed with sufficient features to satisfy early adopters. The final, complete set of features is only designed and developed after considering feedback from the product’s initial users. [As defined by Techopedia – https://www.techopedia.com/definition/27809/minimum-viable-product-mvp]

 

Predictive Analytics: brings together advanced analytics capabilities spanning ad-hoc statistical analysis, predictive modeling, data mining, text analysis, optimization, real-time scoring and machine learning. These tools help organizations discover patterns in data and go beyond knowing what has happened to anticipating what is likely to happen next. [As defined in IRCC’s China-Advanced Analytics TRV Privacy Impact Assessment]

 

Prescriptive Analytics: Prescriptive Analytics is an advanced analytics technology that can provide recommendations to decision-makers and help them achieve business goals by solving complicated optimization problems. [As defined in IRCC’s China-Advanced Analytics TRV Privacy Impact Assessment]

 

Process automation: Also called “business automation” (and sometimes even “digital transformation”), process automation is the use of digital technology to perform routine business processes in a workflow. Process automation can streamline a business for simplicity and improve productivity by taking mundane repetitive tasks from humans and giving them to machines that can do them faster. A wide variety of activities can be automated, or more often, partially automated, with human intervention maintained at strategic points within workflows. In the domain of administrative decision-making at IRCC, “process automation” is used in contrast with “automated decision support,” the former referring to straightforward administrative tasks and the latter reserved for activities involving some degree of judgment. [As defined in IRCC’s Policy Playbook on Automation]

[Last Updated: 19 April 2022 – we will continue to update as new terms get updated]


Chinook is AI – IRCC’s Own Policy Playbook Tells Us Why

One of the big debates around Chinook is whether or not it is Artificial Intelligence (“AI”). IRCC’s position has been that Chinook is not AI because there is a human ultimately making decisions.

In this piece, I will show how the engagement of a human in the loop is a red herring, but also how the debate skews the real issue: that automation, whether for business functions only or to support administrative decision-making, can have adverse impacts if unchecked by independent review.

The main source of my argument that Chinook is AI is from IRCC itself – the Policy Playbook on Automated Support on Decision-Making 2021. This is an internal document, which has been updated yearly, but it likely captures the most accurate ‘behind the scenes’ snapshot of where IRCC is heading. More on that in future pieces.

AI’s Definition per IRCC

The first, and most important, thing is to start with the definition of Artificial Intelligence within the Playbook.

You will notice that Artificial Intelligence is defined very broadly by IRCC, which seems to go against the narrow picture it paints when defining Chinook.

Per IRCC, AI is “essentially the field of computer science dedicated to solving cognitive problems commonly associated with human intelligence, such as learning, problem solving, and pattern recognition.”

If you think of Chinook as dealing with the cognitive problem of issuing bulk refusals – utilizing computer science (technology) and applying it to learning, problem solving and pattern recognition – it is hard to imagine the system would even be needed if it weren’t AI.

Emails among IRCC staff actively discuss the use of Chinook to monitor approval and refusal rates utilizing “Module 6”

Looking at the Chinook Modules themselves, Quality Assurance (“QA”) is built in as a module. It is hard to imagine a QA system that looks at refusal and approval rates and automates processes, yet is not AI.

As this article points out:

Software QA is typically seen as an expensive necessity for any development team; testing is costly in terms of time, manpower, and money, while still being an imperfect process subject to human error. By introducing artificial intelligence and machine learning into the testing process, we not only expand the scope of what is testable, but also automate much of the testing process itself.

Given the volume of files that IRCC is dealing with, it is unlikely that the QA process relies only on humans and not technology (else why would Chinook be implemented). And if it involves technology and automation (a word that shows up multiple times in the Chinook Manual) to aid the monitoring of a subjective administrative decision – guess what – it is AI.
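To illustrate what automated QA monitoring of approval and refusal rates could look like in principle, here is a purely hypothetical sketch; the offices, data, and deviation threshold are invented and do not reflect how Module 6 or any IRCC tool actually works:

```python
# Hypothetical sketch: aggregate decisions by office and flag offices whose
# refusal rate deviates sharply from the overall rate. Invented data/threshold.
from collections import Counter

decisions = [("Office A", "refused"), ("Office A", "refused"), ("Office A", "approved"),
             ("Office B", "approved"), ("Office B", "approved"), ("Office B", "refused")]

totals, refusals = Counter(), Counter()
for office, outcome in decisions:
    totals[office] += 1
    refusals[office] += outcome == "refused"

overall_rate = sum(refusals.values()) / sum(totals.values())
for office in totals:
    rate = refusals[office] / totals[office]
    if abs(rate - overall_rate) > 0.15:          # invented deviation threshold
        print(f"{office}: refusal rate {rate:.0%} flagged (overall {overall_rate:.0%})")
```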

We also know that Chinook is underpinned by ways to process data, look at historical approval and refusal rates, and flag risks. It also integrates with Watchtower to review the risk of applicants.

It is important to note that even in the Daponte Affidavit in Ocran – which, alongside ATIPs, is the only information we have about Chinook – the focus has always been on the first five modules. Without knowledge of the true nature of something like Module 7, titled ‘ToolBox’, it is certainly premature to label the whole system as not AI.

 

Difficult to Argue Chinook is Purely Process Automation Given Degree of Judgment Exercised by System in Setting Up Findecs (Final Decisions)

Where IRCC might be trying to carve a distinction is between process automation/digital transformation and automated decision support systems.

One could argue, for example, that most of Chinook is process automation.

For example, the very underpinning of Chinook is that it allows the entire application to be made available to the Officer in one centralized location, without opening the many windows that GCMS required. Data points and fields auto-populate from an application and GCMS into the Chinook software, allowing the Officer to render decisions more easily. We get this. It is not debatable.

But does it cross into an automated decision support system? Is there some degree of judgment, traditionally exercised by humans, that is passed on to technology when Chinook is applied?

As IRCC defines:

Chinook directly assists an Officer in approving or refusing a case. Officers still have to apply discretion in refusing, but Chinook presents and automates the process. Furthermore, it has fundamentally reversed the decision-making process, making it a decide-first, justify-later approach with the refusal notes generator. Chinook without AI generating the framework, setting up the bulk categories, and automating an Officer’s logical reasoning process simply does not exist.
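To illustrate the ‘decide first, justify later’ concern, here is a toy, hypothetical version of a refusal notes generator; the template wording and structure are invented for illustration and are not IRCC’s actual Module 4 text or code:

```python
# Hypothetical toy version of a "refusal notes generator": refusal grounds are
# ticked and canned template sentences are assembled into notes.
# The template text below is invented, not IRCC's wording.

TEMPLATES = {
    "family_ties": "I am not satisfied the applicant has sufficient ties that would motivate departure from Canada.",
    "purpose_of_visit": "The purpose of visit does not appear reasonable given the applicant's circumstances.",
    "financials": "I am not satisfied the applicant has sufficient funds for the proposed stay.",
}

def generate_refusal_notes(selected_grounds: list[str]) -> str:
    """Assemble notes from pre-written sentences; no case-specific facts required."""
    return " ".join(TEMPLATES[g] for g in selected_grounds)

# The decision (refuse) comes first; the justification is assembled afterwards.
print(generate_refusal_notes(["family_ties", "purpose_of_visit"]))
```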

These systems replace the process of Officers needing to manually review documents, render a final decision, and take notes to file to justify their decision. It is to be noted that this is still the process at low-volume/Global North visa offices, where decisions are made this way and are reflected in extensive GCMS notes.

In Chinook, any notes taken are hidden and deleted by the system, and a template of bulk refusal reasons auto-populates, replacing and shielding the actual factual context of the matter from scrutiny.

It is hard to see how this is not AI. Indeed, if you look at the comparables provided – the eTA, Visitor Record and Study Permit Extension automation in GCMS – similar automations with GCMS underpin Chinook. There may be a little more human interaction, but as discussed below, a human monitoring or implementing an AI/advanced analytics/triage system does not remove the AI elements.

 

Human in the Loop is Not the Defining Feature of AI

The defense we have been hearing from IRCC is that there is a human ultimately making a decision, therefore it cannot be AI.

This obscures a different concept called human-in-the-loop, which the Policy Playbook suggests actually needs to be part of all automated decision-making processes. If you are following, this means the defence that a human is involved (therefore not AI) is actually a key defining requirement IRCC has placed on AI systems.

It is important to note that there certainly is a spectrum of application of AI at IRCC, and it appears to be leaning away from human-in-the-loop. For example, IRCC has disclosed in their Algorithmic Impact Assessment (“AIA”) for the Advanced Analytics Triage of Overseas Temporary Resident Visa (“TRV”) Applications that there is no human in the loop for the automation of Tier 1 approvals. The same approach, without a human in the loop, is used for automating eligibility approvals in the Spouse-in-Canada program, which I will write about shortly.

 

Why the Blurred Line Between Process Automation and Automated Decision-Making Process Should Not Matter – Both Need Oversight and Review

Internally, this is an important distinguishing characteristic for IRCC because it appears that at least internal/behind-the-scenes strategizing and oversight (if that is what the Playbook represents) applies only to automated decision-support systems and not business automations. Presumably such a classification may allow for less need for review and more autonomy by the end user (Visa Officer).

From my perspective, we should focus on the last part of what IRCC states in their playbook – namely that ‘staff should consider whether automation that seems removed from final decisions may inadvertently contribute to an approval or a refusal.’

To recap and conclude, the whole purpose of Chinook is to be able to render approvals and refusals in a quicker and bulk fashion to save Officers’ time. Automation of all functions within Chinook therefore contributes to a final decision – and not inadvertently but directly. The very manner in which decisions are made in immigration shifts as a result of the use of Chinook.

Business automation cannot and should not be used as a cover for the ways in which seemingly routine automations actually affect processing that would otherwise have been done by humans – providing them with data, displayed on screen, in a manner that can fetter their discretion and alter the business of old.

That use of computer technology – the creation of Chinook – is 100% definable as the implementation of AI.

 

