Artificial Intelligence


Chinook is AI – IRCC’s Own Policy Playbook Tells Us Why

One of the big debates around Chinook is whether or not it is Artificial Intelligence (“AI”). IRCC’s position has been that Chinook is not AI because there is a human ultimately making decisions.

In this piece, I will show that the involvement of a human in the loop is a red herring, and that the debate obscures the real issue: automation, whether it serves a business function only or helps administer an administrative decision, can have adverse impacts if left unchecked by independent review.

The main source of my argument that Chinook is AI is IRCC itself: the Policy Playbook on Automated Support for Decision-Making 2021. This is an internal document, updated yearly, and it likely captures the most accurate ‘behind the scenes’ snapshot of where IRCC is heading. More on that in future pieces.

AI’s Definition per IRCC

The first, and most important, step is to start with the definition of Artificial Intelligence within the Playbook.

You will notice right away that IRCC defines Artificial Intelligence very broadly, which sits uneasily with the narrow characterization it paints when describing Chinook.

Per IRCC, AI is:

If you think of Chinook as tackling the cognitive problem of issuing refusals in bulk, and as utilizing computer science (technology) applied to learning, problem solving, and pattern recognition, it is hard to imagine why the system would even be needed if it were not AI.

Emails among IRCC staff actively discuss the use of Chinook to monitor approval and refusal rates utilizing “Module 6”.

Looking at the Chinook modules themselves, Quality Assurance (“QA”) is built in as its own module. It is hard to imagine a QA system that looks at refusal and approval rates and automates processes, yet is not AI.

As this article points out:

Software QA is typically seen as an expensive necessity for any development team; testing is costly in terms of time, manpower, and money, while still being an imperfect process subject to human error. By introducing artificial intelligence and machine learning into the testing process, we not only expand the scope of what is testable, but also automate much of the testing process itself.

Given the volume of files that IRCC is dealing with, it is unlikely that the QA process relies only on humans and not technology (else, why would Chinook be implemented?). And if it involves technology and automation (a word that shows up multiple times in the Chinook Manual) to aid the monitoring of a subjective administrative decision, then, guess what, it is AI.
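
To make this concrete, here is a rough, purely hypothetical sketch of what automated QA monitoring of approval and refusal rates could look like. The field names, thresholds, and logic are my own assumptions for illustration; they are not taken from Chinook, Module 6, or any IRCC document.

```python
# Hypothetical sketch only: how an automated QA check might flag offices whose
# approval rate drifts from a historical baseline. All names and thresholds
# are invented for illustration, not drawn from Chinook or IRCC.
from dataclasses import dataclass

@dataclass
class Decision:
    office: str   # visa office that processed the file
    outcome: str  # "approved" or "refused"

def flag_rate_deviations(decisions: list[Decision],
                         baseline: dict[str, float],
                         tolerance: float = 0.05) -> dict[str, float]:
    """Return offices whose current approval rate deviates from baseline by
    more than the tolerance (the kind of check a QA module could automate)."""
    flagged = {}
    for office in {d.office for d in decisions}:
        subset = [d for d in decisions if d.office == office]
        rate = sum(d.outcome == "approved" for d in subset) / len(subset)
        if abs(rate - baseline.get(office, rate)) > tolerance:
            flagged[office] = rate
    return flagged

# Example: a batch whose approval rate sits well below the office baseline
# would be surfaced for review.
sample = [Decision("Office A", "refused")] * 70 + [Decision("Office A", "approved")] * 30
print(flag_rate_deviations(sample, baseline={"Office A": 0.55}))  # {'Office A': 0.3}
```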

We also know that Chinook is underpinned by ways to process data, look at historical approval and refusal rates, and flag risks. It also integrates with Watchtower to review the risk posed by applicants.

It is important to note that even in the Daponte Affidavit in Ocran, which alongside ATIPs is the only information we have about Chinook, the focus has always been on the first five modules. Without knowledge of the true nature of something like Module 7, titled ‘ToolBox’, it is certainly premature to label the whole system as not AI.

 

Difficult to Argue Chinook is Purely Process Automation Given Degree of Judgment Exercised by System in Setting Up Findecs (Final Decisions)

Where IRCC might be trying to carve out a distinction is between process automation/digital transformation and automated decision-support systems.

One could argue, for example, that most of Chinook is process automation.

For example, the very underpinning of Chinook is that it allows the entire application to be made available to the Officer in one centralized location, without opening the many windows that GCMS required. Data points and fields auto-populate from an application and GCMS into the Chinook software, allowing the Officer to render decisions more easily. We get this. It is not debatable.

But does it cross into an automated decision-support system? Is there some degree of judgment involved in applying Chinook that is passed on to technology, judgment that would traditionally be exercised by humans?

As IRCC defines:

Chinook directly assists an Officer in approving or refusing a case. Officers still have to apply discretion in refusing, but Chinook presents and automates the process. Furthermore, it has fundamentally reversed the decision-making process, making it a decide-first, justify-later approach with the refusal notes generator. Chinook without AI generating the framework, setting up the bulk refusal categories, and automating an Officer’s logical reasoning process simply does not exist.

These systems replace the process of Officers needing to manually review documents, render a final decision, and take notes to file to justify that decision. It is to be noted that this is still the process at low-volume/Global North visa offices, where decisions are made this way and are reflected in extensive GCMS notes.

In Chinook, any notes taken are hidden and deleted by the system, and a template of bulk refusal reasons auto-populates, replacing and shielding the actual factual context of the matter from scrutiny.
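
To illustrate the mechanics being described, here is a minimal, hypothetical sketch of how a template-based refusal notes generator could work. The reason codes and template wording below are simplified illustrations of my own, not text taken from Chinook.

```python
# Hypothetical sketch only: assembling canned refusal reasons into decision
# notes from checkbox-style selections. Reason codes and wording are
# simplified illustrations, not Chinook's actual templates.
REFUSAL_TEMPLATES = {
    "purpose_of_visit": "I am not satisfied the applicant would leave Canada at "
                        "the end of their stay, based on the purpose of visit.",
    "travel_history": "I am not satisfied the applicant would leave Canada at "
                      "the end of their stay, based on their travel history.",
    "personal_assets": "I am not satisfied the applicant would leave Canada at "
                       "the end of their stay, based on their personal assets "
                       "and financial status.",
}

def generate_refusal_notes(selected_reasons: list[str]) -> str:
    """Concatenate pre-written paragraphs for each selected reason code.
    The applicant's individual facts never enter the generated text."""
    return "\n".join(REFUSAL_TEMPLATES[reason] for reason in selected_reasons)

# Two ticked boxes produce boilerplate notes in one step: decide first,
# justify later.
print(generate_refusal_notes(["purpose_of_visit", "personal_assets"]))
```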

It is hard to see how this is not AI. Indeed, if you look at the comparables provided (the eTA, Visitor Record, and Study Permit Extension automation in GCMS), similar automations with GCMS underpin Chinook. There may be a little more human interaction, but as discussed below, a human monitoring or implementing an AI/advanced analytics/triage system does not remove the AI elements.

 

Human in the Loop is Not the Defining Feature of AI

The defense we have been hearing from IRCC is that there is a human ultimately making a decision, and therefore it cannot be AI.

This obscures a different concept called human-in-the-loop, which the Policy Playbook suggests actually needs to be part of all automated decision-making processes. If you are following, this means that the defense that a human is involved (and therefore it is not AI) is actually a key defining requirement IRCC has placed on AI systems.

It is important to note that there certainly is a spectrum of AI application at IRCC that appears to be leaning away from human-in-the-loop. For example, IRCC has disclosed in its Algorithmic Impact Assessment (“AIA”) for the Advanced Analytics Triage of Overseas Temporary Resident Visa (“TRV”) Applications that there is no human in the loop for the automation of Tier 1 approvals. The same approach, without a human in the loop, is used to automate eligibility approvals in the Spouse-in-Canada program, which I will write about shortly.

 

Why the Blurred Line Between Process Automation and Automated Decision-Making Process Should Not Matter – Both Need Oversight and Review

Internally, this is an important distinguishing characteristic for IRCC because it appears that at least internal/behind-the-scenes strategizing and oversight (if that is what the Playbook represents) applies only to automated decision-support systems and not to business automation. Presumably, such a classification may allow for less need for review and more autonomy for the end user (the Visa Officer).

From my perspective, we should focus on the last part of what IRCC states in their playbook – namely that ‘staff should consider whether automation that seems removed from final decisions may inadvertently contribute to an approval or a refusal.’

To recap and conclude, the whole purpose of Chinook is to render approvals and refusals more quickly and in bulk to save Officers’ time. The automation of all functions within Chinook therefore contributes to a final decision, and not inadvertently but directly. The very manner in which decisions are made in immigration shifts as a result of the use of Chinook.

Business automation cannot and should not be used as a cover for the ways in which seemingly routine automations actually affect processing that would otherwise have been done by humans, providing Officers with a particular type of data, displayed on screen in a manner that can fetter their discretion and alter the business of old.

That use of computer technology, the creation of Chinook, is 100% definable as the implementation of AI.

 


The Play is Under Review: A Closer Look at IRCC’s Policy Playbook on Automated Decision Making (Pending Feature)

Over the next several weeks, I’ll be doing a series of shorter blog posts on IRCC’s Policy Playbook on Automated Support for Decision-making (2021 edition).

The first one (hopefully released this week or by the weekend) will be about IRCC’s concerns that applicants are “gaming by claiming” and their preference for “objective evidence” for the inputs of IRCC’s Chinook system.

We will focus our attention on this manual, which we find could drastically change the landscape for applicants, practitioners, and the courts reviewing decisions. We will take a critical look at the ways in which we expect transparency in the use of AI as we move forward.

I am also doing two parallel judicial reviews of AI decisions as part of my practice right now, and will keep everyone informed as to how those cases are going and what we are learning.

Should be exciting. Welcome to this space, and looking forward to the conversation.


Predictive/Advanced Analytics + Chinook – Oversight = ?

In the September 2021 issue of Lexbase, my mentor Richard Kurland provides further insight into what happens behind the scenes of Immigration, Refugees and Citizenship Canada (“IRCC”) processing, specifically in a section titled “Overview of the Analytics-Based Triage of Temporary Resident Visa Applications.”

At the outset, a big thank you to the “Insider” Richard Kurland for the hard digging that allows us to provide this further analysis.

 

What the Data Suggests

I encourage all of you to check out the first two pages of the Lexbase issue, as they contain direct disclosure from IRCC’s Assistant Director, Admissibility, opening up the process by which Artificial Intelligence is implemented for Temporary Resident Visas (“TRVs”), specifically in China and India, the two countries where it has been implemented so far. By way of this June 2020 disclosure, we confirm that IRCC has been utilizing these systems for online applications since April 2018 for China and August 2018 for India, and for Visa Application Centre (“VAC”) based applications since January 2020.

To summarize (again, go read Lexbase and contact Richard Kurland for all the specific details and helpful tables), we learn that there is a three-Tier processing system in play. This filters the simplest applications (Tier 1), medium-complexity applications (Tier 2), and higher-complexity applications (Tier 3). While human officers are involved in all three Tiers, Tier 1 allows a model to recommend approval based on analytics, whereas Tier 2 and Tier 3 are flagged for manual processing. IRCC claims that the process is only partially automated.
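
As a rough illustration of how such a tiered triage could be wired up, consider the hypothetical sketch below. The features, scoring, and cut-offs are my own assumptions for illustration; they are not IRCC's actual model or rules.

```python
# Hypothetical sketch only: a tiered triage router in the spirit of the
# disclosure described above. Features, scoring, and cut-offs are invented
# for illustration and are not IRCC's actual model or rules.
def complexity_score(application: dict) -> float:
    """Toy stand-in for an analytics model that scores file complexity."""
    score = 0.0
    if not application.get("prior_travel"):
        score += 0.4
    if application.get("funds", 0) < 5000:
        score += 0.3
    if application.get("ties_to_home_country") == "weak":
        score += 0.3
    return score

def triage(application: dict) -> str:
    score = complexity_score(application)
    if score < 0.3:
        # Tier 1: the model recommends approval; an officer still signs off.
        return "Tier 1 - recommend approval"
    if score < 0.7:
        return "Tier 2 - manual processing"
    return "Tier 3 - manual processing (high complexity)"

print(triage({"prior_travel": True, "funds": 12000, "ties_to_home_country": "strong"}))
# -> Tier 1 - recommend approval
```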

The interesting factor, given that we as a law firm have been focusing a lot on India, is how the designation of a file as Tier 2 drives the approval rate from the high nineties (%) down to 63% for online India applications and 37% for India VAC applications. Moving to Tier 3, it is only 13% for online India and 5% for India VAC. The deeming of a file as Tier 3 appears to make refusal a near certainty.

What is fascinating is how this information blends in the usage of “Officer Rules,” the first-stage filter which actually precedes the computerized three-Tier triage and is targeted at cases with a higher likelihood of ineligibility or inadmissibility.

The Officer Rules system would be the system utilized at other global visa offices that do not use the computerized AI decision-making of India and China. Looking specifically at the case of India, the Officer Rules system actually approves cases at a much higher rate (53% for online India, and 38% for India VAC).

These rates are in fact comparable to Tier 2 moderately complex cases, ones that presumably do not contain the serious ineligibility and inadmissibility concerns of Officer Rules or Tier 3 cases. It suggests that the addition of technology can sway even a moderately complex case toward the same outcomes as a hand-pulled complex case.

Ultimately, this suggests that full human discretion, or time spent assessing factors, can produce much more favourable outcomes than when machines contribute to overall decision-making.

It Comes Down to Oversight and How These Systems Converge

Recently, we have been discussing IRCC’s Chinook system for processing applications in YouTube videos (here and here), podcasts, and articles. Using an Excel-based model (although now moving to an Amazon-based model in its latest version), applicant data is extracted into rows that contain batch information for several applicants, presumably allowing all the analytics to be assessed.
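
For readers less familiar with what “extracted into rows” looks like in practice, here is a hypothetical sketch of batching selected applicant fields into a spreadsheet-style grid. The column names are my own assumptions for illustration, not Chinook’s actual schema.

```python
# Hypothetical sketch only: flattening each application into a single row of
# pre-selected fields so a whole batch can be reviewed on one screen.
# Column names are invented for illustration, not Chinook's actual columns.
import csv

COLUMNS = ["application_no", "age", "country", "purpose_of_visit", "marital_status"]

def to_batch_rows(applications: list[dict]) -> list[list]:
    """Flatten each application dict into one spreadsheet row."""
    return [[app.get(col, "") for col in COLUMNS] for app in applications]

batch = [
    {"application_no": "V0001", "age": 34, "country": "India",
     "purpose_of_visit": "wedding", "marital_status": "single"},
    {"application_no": "V0002", "age": 27, "country": "China",
     "purpose_of_visit": "tourism", "marital_status": "married"},
]

with open("batch.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(COLUMNS)
    writer.writerows(to_batch_rows(batch))
```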

Given that we know IRCC takes historic approval rates and data as a main driving factor, it is reasonable to think Immigration Officers are given these numbers as internal targets. I am sure, as well, that with major events like COVID and the general discouragement of travel to Canada, these goalposts can be moved and expanded as directed.

An Excel-based system tracking approvals and refusals likely puts these stats front and centre in an officer’s (or a machine’s) exercise of discretion on an application. Again, to use a teaching analogy (clearly I miss teaching), I used a similar ‘Speedgrader’-type app which often forced me, mid-marking, to revisit exams that I had already graded because the class-average marks I had awarded were too high. I have no doubt a parallel system exists at IRCC.

What this all means, as my colleague Zeynab Ziaie has pointed out in our discussions, is that there are major concerns that Chinook and the AI systems have not been developed and rolled out with adequate lawyer/legal input and oversight, which leads to questions about accountability. Using the Chinook example, what if the working notes that are deleted contain the very information needed to justify or shed light on how an application was processed?

My follow-up question is: how are the predictive/advanced analytics systems utilized for India and China TRVs influencing Chinook? Where is the notation to know whether one’s file was pre-assessed through “Officer Rules” or through the Tiers? I quickly reviewed a few GCMS notes prior to this call, and though we know whether a file was pre-assessed, we have no clue which Tier it landed in.

Furthermore, how do we ensure that the visa office’s subjective “Officer Rules,” or the analytical factors that make up the AI system, are not being applied in a discriminatory manner to filter cases into a more complex stream? For example, back in 2016 I pointed out how the visa-office training guides in China regionally and geographically discriminate against those applying from certain provinces, assigning character traits and misrepresentation risks. We know in India, thanks to the work of my mentor Raj Sharma, that the Indian visa offices have a training guide on genuine relationships and marriage fraud that may not accord with realities.

Assuming that this AI processing system is still being used only for TRVs and not for any other permits, it must be catching (with the assistance of Chinook’s key word indicators no less) words such as marriage, the names of rural communities, marital status, perhaps the addresses of unauthorized agents, and businesses that often have been used as a cover for support letters. Within that list there’s a mix of good local knowledge, but also the very stereotypes that have historically kept families apart and individuals from being able to visit without holding a study permit or work permit.

If we find out, for example, that filtering for complex cases only happens at visa offices with high refusal rates or in the Global South, does that make the system unduly discriminatory?

We acknowledge, of course, that the very process of having to apply to cross borders, and the division between TRV-requiring and electronic Travel Authorization (eTA)-requiring countries, is discriminatory by nature, but what happens when outcomes on similar facts are so discrepant?

In other areas of national bureaucracy, governments have moved to blind processing to try to limit discrimination around ethnic names, to avoid basing decisions on certain privileges (the ability to travel and engage in previous work), and to remove identifying features that might lead to bias. For immigration it is the opposite: you see the applicant’s picture, their age, where they are from, and why they want to come (purpose of visit). As we have learned from Chinook, that is the baseline information being extracted for Officers to base their decisions on.
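
By way of contrast, the blind-processing approach described above is simple to express in code. Here is a hypothetical sketch of stripping identifying fields before a file reaches a reviewer; the field list is my own assumption, not any government’s actual practice.

```python
# Hypothetical sketch only: 'blind processing' by redacting identifying fields
# before review. The field list is an assumption for illustration.
IDENTIFYING_FIELDS = {"name", "photo", "date_of_birth", "country_of_birth", "address"}

def redact(application: dict) -> dict:
    """Return a copy of the application with identifying fields removed."""
    return {k: v for k, v in application.items() if k not in IDENTIFYING_FIELDS}

file_data = {"name": "Applicant A", "date_of_birth": "1990-01-01",
             "country_of_birth": "India", "purpose_of_visit": "conference",
             "funds": 15000}
print(redact(file_data))  # {'purpose_of_visit': 'conference', 'funds': 15000}
```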

When, as a society, do we decide to move away (as we have before) from what were once harmful norms toward new realities? Who makes the call, or calls for reviews, on matters such as consistency, or on whether a particular discriminatory input in the AI system is no longer consistent with Charter values?

Right now, it is all within the Officer’s discretion and, by extension, the Visa Office’s, but I would recommend that a unified committee of legal experts and race/equity scholars advise on the strings of the future, inevitable AI systems. This would also unify things across visa offices so that there is less discrepancy in the way systems render decisions. While it makes sense that heavier-volume visa offices have more tools at their disposal, access to human decision-makers and to an equal standard of decision-making should not depend on where you live. We do not want to get to a place where immigration applicants are afraid to present their stories or speak their truths for fear of being filtered by artificial intelligence. From my perspective, we are better off being transparent and setting legitimate expectations.

What are your thoughts on the introduction of AI, the interaction with Chinook, and the need for oversight? Feel free to engage in the comments below or on social media!

Thanks again for reading.

About Us
Will Tao is an Award-Winning Canadian Immigration and Refugee Lawyer, Writer, and Policy Advisor based in Vancouver. Vancouver Immigration Blog is a public legal resource and social commentary.
