Artificial Intelligence

Award-Winning Canadian Immigration and Refugee Law and Commentary Blog

Blog Posts

Coach Will: New Vocabulary Words Tomorrow’s Immigration Practitioners Will Need To Know

As a resource, and to buy time as I am writing more substantive blogs, I wanted to share a #CoachWill blog on new vocabulary, terminology that tomorrow’s immigration practitioners will need to know, learn, advise their clients on, and spend time with. I am still very much learning these terms and their impact, but it gives us a mutual starting point to grow our knowledge of how Canadian immigration law will be impacted moving forward:

 

Advanced Analytics: Composed of both Predictive and Prescriptive components, advanced analytics (AA) uses computer technology to analyze past behaviours, with the goal of discovering patterns that enable predictions of future behaviours. With the aid of a team of computer science, data, IT, and program specialists, AA may result in the creation of a model that can perform risk triage and enable automated approvals on a portion of cases, thereby achieving significant productivity gains and reducing processing times. [As defined in IRCC’s China-Advanced Analytics TRV Privacy Impact Assessment]
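
The "risk triage and automated approvals" idea in this definition can be sketched in a few lines of Python. This is an illustrative toy only: the features, weights, and threshold below are invented for this example and are not IRCC's actual model, whose inputs are not public.

```python
# Toy risk-triage sketch in the spirit of the definition above.
# All features, weights, and thresholds are hypothetical.

def risk_score(application: dict) -> float:
    """Combine weighted risk features into a single score (0 = lowest risk)."""
    weights = {               # invented feature weights
        "prior_refusal": 0.5,
        "incomplete_docs": 0.3,
        "unverified_funds": 0.2,
    }
    return sum(w for feature, w in weights.items() if application.get(feature))

def triage(application: dict, auto_approve_threshold: float = 0.1) -> str:
    """Auto-approve the lowest-risk portion; route everything else to an officer."""
    if risk_score(application) <= auto_approve_threshold:
        return "automated approval recommended"
    return "manual review"

print(triage({"prior_refusal": False}))   # low risk: automated path
print(triage({"prior_refusal": True}))    # flagged: routed to a human
```

The productivity gain in the definition comes from the first branch: the portion of cases falling under the threshold never consumes officer time.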

Artificial Intelligence: Encompassing a broad range of technologies and approaches, AI is essentially the field of computer science dedicated to solving cognitive problems commonly associated with human intelligence, such as learning, problem solving, and pattern recognition. [As defined in IRCC’s Policy Playbook on Automation]

 

Automated decision support system: Includes any information technology designed to directly support a human decision-maker on an administrative decision (for example, by providing a recommendation), and/or designed to make an administrative decision in lieu of a human decision-maker. This includes systems like eTA or Visitor Record and Study Permit Extension automation in GCMS. [As defined in IRCC’s Policy Playbook on Automation]

 

Black Box: Opaque software tools working outside the scope of meaningful scrutiny and accountability. Usually deep learning systems. Their behaviour can be difficult to interpret and explain, raising concerns over explainability, transparency, and human control. [As defined in IRCC’s Policy Playbook on Automation]

 

Deep learning/neural network is a subset of machine learning, which is essentially a neural network with three or more layers. These neural networks attempt to simulate the behavior of the human brain—albeit far from matching its ability—allowing it to “learn” from large amounts of data. While a neural network with a single layer can still make approximate predictions, additional hidden layers can help to optimize and refine for accuracy. [As defined by IBM: https://www.ibm.com/cloud/learn/deep-learning#:~:text=Deep%20learning%20is%20a%20subset,from%20large%20amounts%20of%20data]
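
The "layers" language in this definition can be made concrete with a minimal forward pass in pure Python: an input layer, one hidden layer, and an output layer. The weights here are arbitrary numbers chosen for illustration, not a trained model.

```python
import math

# Minimal forward pass through a tiny three-layer network (illustrative only).

def sigmoid(x: float) -> float:
    """Squash a weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sums passed through sigmoid."""
    return [sigmoid(sum(i * w for i, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

x = [0.5, 0.8]                                              # input layer (2 features)
hidden = layer(x, [[0.4, -0.6], [0.7, 0.1]], [0.0, -0.2])   # hidden layer (2 units)
output = layer(hidden, [[1.2, -0.9]], [0.1])                # output layer (1 unit)
print(round(output[0], 3))                                  # a prediction in (0, 1)
```

Stacking more `layer` calls is exactly the "additional hidden layers" the definition refers to; each extra layer lets the network represent more complex patterns.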

 

Exploration zone: The exploration zone – also referred to as a “sandbox” – is the environment used for research, experimentation and testing related to advanced analytics and AI. Data, codes and software are isolated from those in production so that they can be tested securely.

 

“Fettering” of a decision-maker’s discretion: Fettering occurs when a decision-maker does not genuinely exercise independent judgment in a matter. This can occur when a decision-maker binds him/herself to a fixed rule of policy, another person’s opinion, or the outputs of a decision support system. Although an administrative decision-maker may properly be influenced by policy considerations and other factors, he or she must put his or her mind to the specific circumstances of the case and not focus blindly on one input (e.g. a risk score provided by an algorithmic system) to the exclusion of other relevant factors. [As defined in IRCC’s Policy Playbook on Automation]

 

Machine learning: A sub-category of artificial intelligence, machine learning refers to algorithms and statistical models that learn and improve from examples, data, and experience, rather than following pre-programmed rules. Machine learning systems effectively perform a specific task without using explicit instructions, relying on models and inference instead. [As defined in IRCC’s Policy Playbook on Automation]
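
The contrast this definition draws between pre-programmed rules and learning from examples can be shown with a toy one-nearest-neighbour classifier: instead of hand-written if/else rules, the label comes from whichever labelled example is closest. The training data and labels below are invented for illustration.

```python
# Toy "learning from examples" sketch: no explicit rules, only labelled data.

def nearest_neighbour(examples, query):
    """Return the label of the training example closest to the query point."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(examples, key=lambda ex: distance(ex[0], query))
    return label

# (feature vector, label) pairs the system "learns" from — entirely invented
training = [((1.0, 1.0), "low complexity"),
            ((9.0, 8.0), "high complexity"),
            ((1.5, 0.5), "low complexity")]

print(nearest_neighbour(training, (1.2, 0.9)))   # falls near the "low" examples
```

Nothing in `nearest_neighbour` encodes what "complexity" means; swap in different examples and the same code infers different behaviour, which is the core of the definition above.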

 

A minimum viable product (MVP) is a development technique in which a new product or website is developed with sufficient features to satisfy early adopters. The final, complete set of features is only designed and developed after considering feedback from the product’s initial users. [As defined by Techopedia – https://www.techopedia.com/definition/27809/minimum-viable-product-mvp]

 

Predictive Analytics: brings together advanced analytics capabilities spanning ad-hoc statistical analysis, predictive modeling, data mining, text analysis, optimization, real-time scoring and machine learning. These tools help organizations discover patterns in data and go beyond knowing what has happened to anticipating what is likely to happen next. [As defined in IRCC’s China-Advanced Analytics TRV Privacy Impact Assessment]

 

Prescriptive Analytics: Prescriptive Analytics is an advanced analytics technology that can provide recommendations to decision-makers and help them achieve business goals by solving complicated optimization problems. [As defined in IRCC’s China-Advanced Analytics TRV Privacy Impact Assessment]

 

Process automation: Also called “business automation” (and sometimes even “digital transformation”), process automation is the use of digital technology to perform routine business processes in a workflow. Process automation can streamline a business for simplicity and improve productivity by taking mundane repetitive tasks from humans and giving them to machines that can do them faster. A wide variety of activities can be automated, or more often, partially automated, with human intervention maintained at strategic points within workflows. In the domain of administrative decision-making at IRCC, “process automation” is used in contrast with “automated decision support,” the former referring to straightforward administrative tasks and the latter reserved for activities involving some degree of judgment. [As defined in IRCC’s Policy Playbook on Automation]

[Last Updated: 19 April 2022 – we will continue to update as new terms emerge]

Chinook is AI – IRCC’s Own Policy Playbook Tells Us Why

One of the big debates around Chinook is whether or not it is Artificial Intelligence (“AI”). IRCC’s position has been that Chinook is not AI because there is a human ultimately making decisions.

In this piece, I will show how the engagement of a human in the loop is a red herring, and also how the debate skews the real issue: that automation, whether for business functions only or to help administer administrative decisions, can have adverse impacts if unchecked by independent review.

The main source of my argument that Chinook is AI is IRCC itself – the Policy Playbook on Automated Support on Decision-Making 2021. This is an internal document, updated yearly, that likely captures the most accurate ‘behind the scenes’ snapshot of where IRCC is heading. More on that in future pieces.

AI’s Definition per IRCC

The first, and most important thing is to start with the definition of Artificial intelligence within the Playbook.

The first thing you will notice is that Artificial Intelligence is defined very broadly by IRCC, which seems to cut against the narrow definition it paints when characterizing Chinook.

Per IRCC, AI is:

If you think of Chinook as dealing with the cognitive problem of attempting to issue bulk refusals – utilizing computer science (technology) applied to learning, problem solving, and pattern recognition – it is hard to imagine that such a system would even be needed if it weren’t AI.

Emails among IRCC staff actively discuss the use of Chinook to monitor approval and refusal rates utilizing “Module 6.”

Looking at the Chinook Modules themselves, Quality Assurance (“QA”) is built in as a module. It is hard to imagine a QA system that looks at refusal and approval rates and automates processes yet is not AI.

As this article points out:

Software QA is typically seen as an expensive necessity for any development team; testing is costly in terms of time, manpower, and money, while still being an imperfect process subject to human error. By introducing artificial intelligence and machine learning into the testing process, we not only expand the scope of what is testable, but also automate much of the testing process itself.

Given the volume of files IRCC is dealing with, it is unlikely that the QA process relies only on humans and not technology (else why would Chinook be implemented?). And if it involves technology and automation (a word that shows up multiple times in the Chinook Manual) to aid the monitoring of a subjective administrative decision – guess what – it is AI.

We also know also that Chinook is underpinned with ways to process data, look at historical approval and refusal rates, and flag risks. It also integrates with Watchtower to review the risk of applicants.

It is important to note that the Daponte Affidavit in Ocran – which, alongside ATIPs, is the only information we have about Chinook – has always focused on the first five modules. Without knowledge of the true nature of something like Module 7, titled ‘ToolBox’, it is certainly premature to label the whole system as not AI.

 

Difficult to Argue Chinook is Purely Process Automation Given Degree of Judgment Exercised by System in Setting Up Findecs (Final Decisions)

Where IRCC might be trying to carve a distinction is between process automation/digital transformation and automated decision support systems.

One could argue, for example, that most of Chinook is process automation.

For example, the very underpinning of Chinook is that it makes the entire application available to the Officer in one centralized location, without opening the many windows that GCMS required. Data points and fields auto-populate from an application and GCMS into the Chinook software, allowing the Officer to render decisions more easily. We get this. It is not debatable.

But does it cross into an automated decision support system? Is there some degree of judgment – traditionally exercised by humans – that is passed on to technology when applying Chinook?

As IRCC defines:

Chinook directly assists an Officer in approving or refusing a case. Indeed, Officers have to apply discretion in refusing, but Chinook presents and automates the process. Furthermore, it has fundamentally reversed the decision-making process, making it a decide-first, justify-later approach with the refusal notes generator. Chinook simply does not exist without AI generating the framework, setting up the bulk categories, and automating an Officer’s logical reasoning process.

These systems replace the process of Officers needing to manually review documents, render a final decision, and take notes to file to justify their decision. It is to be noted that this is still the process at low-volume/Global North visa offices, where decisions are made this way and are reflected in the extensive GCMS notes.

In Chinook, any notes taken are hidden and deleted by the system, and a template of bulk refusal reasons auto-populates, replaces, and shields the actual factual context of the matter from scrutiny.

Hard to see how this is not AI. Indeed, if you look at the comparables provided – the eTA, Visitor Record, and Study Permit Extension automation in GCMS – similar automations with GCMS underpin Chinook. There may be a little more human interaction, but as discussed below, a human monitoring or implementing an AI/advanced analytics/triage system does not remove the AI elements.

 

Human in the Loop is Not the Defining Feature of AI

The defense we have been hearing from IRCC is that there is a human ultimately making a decision, therefore it cannot be AI.

This obscures a different concept called human-in-the-loop, which the Policy Playbook suggests actually needs to be part of all automated decision-making processes. If you are following, this means that the defense that a human is involved (therefore not AI) is actually a key defining requirement IRCC has placed on AI systems.

It is important to note that there is certainly a spectrum of application of AI at IRCC that appears to be leaning away from human-in-the-loop. For example, IRCC has disclosed in its Algorithmic Impact Assessment (“AIA”) for the Advanced Analytics Triage of Overseas Temporary Resident Visa (“TRV”) Applications that there is no human in the loop for the automation of Tier 1 approvals. The same approach, without a human in the loop, is used for automating eligibility approvals in the Spouse-in-Canada program, which I will write about shortly.

 

Why the Blurred Line Between Process Automation and Automated Decision-Making Process Should Not Matter – Both Need Oversight and Review

Internally, this is an important distinguishing characteristic for IRCC because it appears that internal/behind-the-scenes strategizing and oversight (if that is what the Playbook represents) applies only to automated decision-support systems and not to business automations. Presumably such a classification may allow for less review and more autonomy for the end user (the Visa Officer).

From my perspective, we should focus on the last part of what IRCC states in its playbook – namely, that ‘staff should consider whether automation that seems removed from final decisions may inadvertently contribute to an approval or a refusal.’

To recap and conclude: the whole purpose of Chinook is to render approvals and refusals in a quicker, bulk fashion to save Officers’ time. Automation of all functions within Chinook therefore contributes to a final decision – and not inadvertently but directly. The very manner in which decisions are made in immigration shifts as a result of the use of Chinook.

Business automation cannot and should not be used as a cover for the ways that seemingly routine automations actually affect processing that would otherwise have been done by humans – feeding Officers a particular type of data, displayed on screen in a manner that can fetter their discretion and alter the business of old.

That use of computer technology – the creation of Chinook – is 100% definable as the implementation of AI.

 


The Play is Under Review: A Closer Look at IRCC’s Policy Playbook on Automated Decision Making (Pending Feature)

Over the next several weeks, I’ll be doing a series of shorter blog posts on IRCC’s Policy Playbook on Automated Support for Decision-making (2021 edition).

The first one (hopefully released this week or by the weekend) will be about IRCC’s concerns that applicants are “gaming by claiming” and their preference for “objective evidence” for the inputs of IRCC’s Chinook system.

We will focus our attention on this manual, which we find could drastically change the landscape for applicants, practitioners, and the courts reviewing decisions. We will be critical about the ways we expect transparency in the use of AI as we move forward.

I am also doing two parallel judicial reviews of AI decisions as part of my practice right now, and will keep everyone informed as to how those cases are going and what we are learning.

Should be exciting. Welcome to this space, and looking forward to the conversation.


Predictive/Advanced Analytics + Chinook – Oversight = ?

In September 2021’s issue of Lexbase, my mentor Richard Kurland provides further insight into what happens behind the scenes of Immigration, Refugees and Citizenship Canada (“IRCC”) processing, specifically providing a section titled “Overview of the Analytics-Based Triage of Temporary Resident Visa Applications.”

At the outset, a big thank you to the “Insider” Richard Kurland for the hard digging that allows for us to provide this further analysis.

 

What the Data Suggests

I encourage all of you to check out the first two pages of the Lexbase issue, as they contain direct disclosure from IRCC’s Assistant Director, Admissibility, opening up the process by which Artificial Intelligence is implemented for Temporary Resident Visas (‘TRVs’), specifically in China and India, the two countries that have implemented it so far. By way of this June 2020 disclosure, we confirm that IRCC has been utilizing these systems for online applications since April 2018 for China and August 2018 for India, and for Visa Application Centre (“VAC”) based applications since January 2020.

To summarize (again – go read Lexbase and contact Richard Kurland for the specific details and helpful tables), we learn that there is a three-tier processing system in play. This filters the simplest applications (Tier 1), medium-complexity applications (Tier 2), and higher-complexity applications (Tier 3). While human officers are involved in all three Tiers, Tier 1 allows a model to recommend approval based on analytics, whereas Tier 2 and Tier 3 are flagged for manual processing. IRCC claims that the process is only partially automated.
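
The tier routing described in the disclosure can be sketched as follows. The complexity scoring and the threshold values are hypothetical placeholders, since the actual model's inputs and cutoffs are not public; what the sketch shows is the routing structure itself.

```python
# Sketch of the three-tier triage described above. Thresholds are invented;
# per the disclosure, only Tier 1 carries a model recommendation of approval.

def assign_tier(complexity_score: float) -> int:
    """Map a hypothetical complexity score onto the three tiers."""
    if complexity_score < 0.3:
        return 1          # simplest applications
    if complexity_score < 0.7:
        return 2          # medium complexity
    return 3              # higher complexity

def route(complexity_score: float) -> str:
    tier = assign_tier(complexity_score)
    if tier == 1:
        return "Tier 1: model recommends approval; officer finalizes"
    return f"Tier {tier}: flagged for manual processing"

print(route(0.1))   # simplest stream
print(route(0.9))   # flagged stream
```

Note that in this structure the consequential judgment – where the cutoffs sit and which features feed the score – lives entirely inside `assign_tier`, which is exactly the part applicants and reviewing courts cannot see.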

The interesting factor – given that as a law firm we have been focusing a lot on India – is how the designation of a file as Tier 2 drives the approval rates down from the high nineties (%) to 63% for online India applications and 37% for India VAC applications. Moving to Tier 3, it is only 13% for online India and 5% for India VAC. The deeming of a file as Tier 3 appears to make refusal a near surety.

What is fascinating is how this information blends with the usage of “Officer Rules,” the first-stage filter which actually precedes the computerized three-Tier triage and is targeted at cases with a higher likelihood of ineligibility or inadmissibility.

The Officer Rules system would be the system utilized at other global visa offices that do not use the computerized AI decision-making of India and China. Looking specifically at the case of India, the Officer Rules system actually approves cases at a much higher rate (53% for online India, and 38% for India VAC).

These rates are in fact comparable to Tier 2 moderately complex cases – ones that presumably do not contain the serious ineligibility and inadmissibility concerns of Officer Rules or Tier 3. It suggests that the addition of technology can sway even a moderately complex case into the same outcomes as a hand-pulled complex case.

Ultimately, this suggests that complete human discretion or time spent assessing factors can be much more favourable than when machines contribute to overall decision-making.

It Comes Down to Oversight and How These Systems Converge

Recently, we’ve been discussing IRCC’s Chinook system for processing applications in YouTube videos (here and here), podcasts, and articles. Using an Excel-based model (although moving now to an Amazon-based model in the latest version), applicants’ data are extracted into rows that contain batch information for several applicants, presumably allowing for all the analytics to be assessed.

Given we know IRCC takes historic approval rates and data as a main driving factor, it is reasonable to think Immigration Officers are given these numbers as internal targets. I am sure, as well, that with major events like COVID and the general dissuasion of travel to Canada, these goalposts can be moved and expanded on direction.

An Excel-based system tracking approvals and refusals likely puts these stats front and centre in an officer’s (or a machine’s) exercise of discretion on an application. To utilize a teaching analogy again (clearly I miss teaching), I used a similar ‘Speedgrader’-type app which often forced me, mid-marking, to revisit exams I had already graded because I had awarded marks that pushed the class average too high. I have no doubt a parallel system exists at IRCC.
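
The concern about stats being put front and centre can be illustrated with a toy tracker that flags a running approval rate drifting above a historical target – the Speedgrader dynamic described above. The target, outcomes, and 10% tolerance are all invented for illustration; nothing here is drawn from IRCC's actual tooling.

```python
# Hypothetical sketch of a stats-front-and-centre tracker. If a decision-maker
# reacts to this flag case by case, the target itself starts steering outcomes.

class ApprovalTracker:
    def __init__(self, historical_target: float):
        self.target = historical_target
        self.approved = 0
        self.total = 0

    def record(self, approved: bool) -> None:
        """Log one decision's outcome."""
        self.total += 1
        self.approved += int(approved)

    def deviation_flag(self) -> bool:
        """Flag when the running rate drifts well above the historical target."""
        if self.total == 0:
            return False
        return self.approved / self.total > self.target + 0.10

tracker = ApprovalTracker(historical_target=0.40)
for outcome in [True, True, True, False, True]:   # 80% approvals so far
    tracker.record(outcome)
print(tracker.deviation_flag())                   # drifted above target
```

This is the fettering risk in miniature: the flag says nothing about the merits of the next file, yet it sits in front of the decision-maker as that file is decided.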

What this all means, as my colleague Zeynab Ziaie has pointed out in our discussions, is that there are major concerns that Chinook and the AI systems have not been developed and rolled out with adequate lawyer/legal input and oversight, which leads to questions about accountability. Utilizing the Chinook example: what if the working notes that are deleted contain the very information needed to justify or shed light on how an application was processed?

My question, in follow-up, is how are the predictive/advanced analytics systems utilized by India and China for TRVs influencing Chinook? Where is the notation to know whether one’s file was pre-assessed by “Officer Rules” or through the Tiers? I quickly reviewed a few GCMS notes prior to this call, and though we know whether a file was pre-assessed, we have no clue which Tier it landed on.

Furthermore, how do we ensure that the visa office’s subjective “Officer Rules,” or the analytical factors that make up the AI system, are not being applied in a discriminatory manner to filter cases into a more complex stream? For example, back in 2016 I pointed out how the visa-office training guides in China regionally and geographically discriminate against those applying from certain provinces, assigning character traits and misrepresentation risks. We know in India, thanks to the work of my mentor Raj Sharma, that the Indian visa offices have a training guide on genuine relationships and marriage fraud that may not accord with realities.

Assuming that this AI processing system is still being used only for TRVs and not for any other permits, it must be catching (with the assistance of Chinook’s keyword indicators, no less) words such as marriage, the names of rural communities, marital status, perhaps the addresses of unauthorized agents, and businesses that have often been used as a cover for support letters. Within that list there is a mix of good local knowledge, but also the very stereotypes that have historically kept families apart and kept individuals from being able to visit without holding a study permit or work permit.

If we find out, for example, that filtering for complex cases only happens at visa offices with high refusal rates or in the Global South, does that make the system unduly discriminatory?

We acknowledge, of course, that the very process of having to apply to enter the borders – the division between TRV-requiring and electronic Travel Authorization (eTA)-requiring countries – is discriminatory by nature, but what happens when outcomes on similar facts are so discrepant?

In other areas of national bureaucracy, governments have moved to blind processing to try and limit discrimination around ethnic names, to avoid basing decisions on certain privileges (the ability to travel and engage in previous work), and to remove identifying features that might lead to bias. For immigration it is the opposite: you see the applicant’s picture, their age, where they are from, and why they want to come (purpose of visit). As we have learned from Chinook, that is the baseline information being extracted for Officers to base their decisions on.
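
The blind-processing approach used elsewhere can be sketched as a simple redaction step applied before review. The field names here are hypothetical examples, not IRCC's actual data schema.

```python
# Illustrative sketch of blind processing: strip identifying fields before
# a file reaches the decision-maker. Field names are invented for this example.

IDENTIFYING_FIELDS = {"name", "photo", "age", "country_of_origin"}

def redact(application: dict) -> dict:
    """Return a copy of the application with identifying fields removed."""
    return {k: v for k, v in application.items() if k not in IDENTIFYING_FIELDS}

app = {"name": "example name", "age": 34, "country_of_origin": "example",
       "purpose_of_visit": "conference", "funds_documented": True}
print(redact(app))   # only the non-identifying fields remain
```

The contrast with Chinook's row-based display is the point: the current design surfaces precisely the fields a blind process would strip out.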

When – as a society – do we decide to move away, as we have with other once-harmful norms, toward new realities? Who makes the call, or calls for reviews, on things such as consistency, or on whether a particular discriminatory input in the AI system is no longer consistent with Charter values?

Right now, it is all in the Officer’s discretion and, by extension, the Visa Offices’. But I would recommend that a unified committee of legal experts and race/equity scholars advise on the strings of the future, inevitable AI systems. This would also unify things across visa offices so that there is less discrepancy in the way systems render decisions. While it makes sense that heavier-volume visa offices have more tools at their disposal, access to human decision-makers and an equal standard of decision-making should not depend on where you live. We do not want to get to a place where immigration applicants are afraid to present their stories or speak their truths for fear of being filtered by artificial intelligence. From my perspective, we are better off being transparent and setting legitimate expectations.

What are your thoughts on the introduction of AI, the interaction with Chinook, and the need for oversight? Feel free to engage in the comments below or on social media!

Thanks again for reading.

About Us
Will Tao is an Award-Winning Canadian Immigration and Refugee Lawyer, Writer, and Policy Advisor based in Vancouver. Vancouver Immigration Blog is a public legal resource and social commentary.

