IRCC Artificial Intelligence

Award-Winning Canadian Immigration and Refugee Law and Commentary Blog

Blog Posts

Five AI Decision-Making Questions We Need Answers To From IRCC

In this short post, I will canvass five relatively urgent questions to which we collectively need answers as we represent clients whose applications are now being handled by decision-making systems built on artificial intelligence. For clarity, and to adopt IRCC's status quo position, I will not treat Chinook as one of those systems, BUT it is clear that Chinook interacts with AI, and Chinook's role in decision-making will become increasingly important, especially as advanced analytics allows eligibility assessment to be skipped.

1) If IRCC is basing Advanced Analytics decisions on historical data, what historical data is being utilized? Does it represent a reasonable/ideal officer, and how can it be re-programmed?

How do we ensure the data represents an ideal period (not one of stressed, overburdened officers)? IRCC has been overburdened with applications for the last decade, has had to create systems to shortcut decision-making, and has openly acknowledged its resource crunch. If historical data does not represent what we want for future processing, how can the projections be changed? How, in practice, does bias get stripped or de-programmed out of the data? We have seen positive impacts since recent advocacy (for example, on Nigerian study permit approval rates), but is that programmed in manually by a human? And how?
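
To make the concern concrete, here is a deliberately simplified sketch in Python. The countries, numbers, and threshold are all invented for illustration and reflect nothing about IRCC's actual data or model; the point is only that a system calibrated on historical decisions reproduces past approval patterns, bias included:

```python
# Hypothetical sketch only -- invented data, not IRCC's actual model.
# Shows how a system calibrated on historical officer decisions simply
# reproduces past approval patterns.

from collections import defaultdict

# Toy historical decisions: (country_of_residence, approved)
historical = [
    ("Country A", True), ("Country A", True), ("Country A", True), ("Country A", False),
    ("Country B", False), ("Country B", False), ("Country B", True), ("Country B", False),
]

# "Training" here is just memorizing past approval rates per country.
outcomes = defaultdict(list)
for country, approved in historical:
    outcomes[country].append(approved)
approval_rate = {c: sum(v) / len(v) for c, v in outcomes.items()}

def triage(country: str, threshold: float = 0.7) -> str:
    """Route to auto-eligibility only where history was favourable."""
    if approval_rate.get(country, 0.0) >= threshold:
        return "auto-eligibility"
    return "manual review"

print(approval_rate)          # {'Country A': 0.75, 'Country B': 0.25}
print(triage("Country A"))    # auto-eligibility
print(triage("Country B"))    # manual review: past refusal patterns persist
```

On this framing, bias does not get "de-programmed" on its own: someone has to deliberately re-weight, exclude, or override the learned rates, which is exactly the "by whom, and how?" question.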

2) How does Advanced Analytics interact with Chinook?

In the past, Chinook was utilized for only a portion of cases, and we understand it was used to both bulk-approve and bulk-refuse them. If Advanced Analytics serves to provide auto-positive eligibility, why is Chinook even needed to sort the Applicant's information and decide whether to approve or refuse? Is there a column in Chinook that allows an Officer to see whether eligibility has already been met (i.e., the case was AA'd), thereby altering how they apply and use Chinook? The fear is that Chinook becomes just a refusal tool and is no longer needed for approvals.

Furthermore, what does an Officer see when they have to perform an eligibility assessment? Are they given any information about data trends, key risk indicators, etc. that Advanced Analytics presumably helped generate during triage? Is it something the Officer has to dig for in a separate module of Chinook, or is it displayed front and centre as a reminder while they render a decision?

Are Officers made aware if a case goes into manual review, for example as quality assurance for an automated decision? How are those cases tracked?

3) What is the incentive to actually process non-AA decisions if AA decisions can be processed more quickly and accurately?

For files triaged to the non-Green/Human bin, if it becomes a numbers game and processing is no longer 'first in, first out', why even process the complex cases anymore? Why not fill the slots with newer AA/low-risk cases that create fewer challenges, and let decisions that are complicated or require human intervention sit for one or two years until the Applicant seeks a withdrawal? Other than mandamus, what remedies will Applicants have to resolve their cases? Is it simply about complaining hard enough to get pulled out of review, only to face an eventual refusal? How do we ensure we do not refuse all Tier 2/3 cases as a matter of general practice as more Tier 1 applications (likely from visa-exempt, Global North countries) come in the door?

4) What does counsel for the Department of Justice see in GCMS/Rule 9 Reasons versus what we see?

Usually, the idea of a tribunal record or GCMS is that it is a central record of an Applicant's file, but with increasing redactions, it is becoming less and less clear who has access to what information. Clients are triaged utilizing "bins", but those bins are stripped from the GCMS notes we get. Are they also stripped for the DOJ? Right now, local word flags and risk indicators are stripped for applicants, but are they also stripped for the DOJ? What about the audit trail that exists for each applicant, which we have not been able to obtain via ATIP?

Taking it a step further: what constitutes a Tribunal Record anymore? Is it only what the Applicant submitted and what is in the Officer's final decision? I know my colleague Steven Meurrens has started to obtain even email records between Officers, but there is a lack of clarity on what the Tribunal Record consists of and whether it must necessarily include the audit trail, risk indicators, and local word flags. Should it include the algorithms?

How does one even try to make fettering arguments if we do not know what the Officer had access to before rendering a decision (i.e., how they may have been fettered)?

The other question becomes: how do we let the judiciary know about these systems? Does it go up as a DOJ-led reference (and who can intervene and be on the other side)? Strategic litigation will likely again be brought on a weak-fact case. How do we ensure counsel on the other side is prepared for this, so they can not only fight back but also provide a counternarrative to the judiciary on these issues?

5) Will the Triaging Rules Ever Be Made Public?

Currently, from our understanding, the AI is quite basic. Key rules are inputted, and applications that meet the requirements go through a decision tree that leads to auto-eligibility approvals. However, as these AA programs adopt more machine learning components, allowing them to sniff out new flags, new rules, and new issues, will there be some transparency around what the rules are? Should rules on the security/intelligence/system integrity side be treated differently from more black-and-white rules, such as: only individual applicants can get Tier 1 processing; applicants must not have had a previous refusal to benefit from X; or holding a U.S. visa or a previous Canadian visa within the past ten years is a Tier 1 factor?
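
For illustration only, here is a minimal sketch in Python of the kind of plain, rules-based decision tree described above. Every rule, field name, and tier in it is an assumption loosely built from the black-and-white examples in this post, not IRCC's actual triage logic:

```python
# Hypothetical illustration only: a toy rules-based triage decision tree.
# Rules, field names, and tiers are invented, loosely mirroring the
# "black and white" examples above; this is not IRCC's actual logic.

from dataclasses import dataclass

@dataclass
class Application:
    is_individual: bool          # assumed rule: only individual applicants get Tier 1
    previous_refusal: bool       # assumed rule: a past refusal blocks Tier 1
    held_us_or_cdn_visa: bool    # assumed rule: U.S./Canadian visa in past 10 years

def triage(app: Application) -> str:
    """Walk the decision tree: every rule must pass for Tier 1 (auto-eligibility)."""
    if not app.is_individual:
        return "Tier 2/3: manual review"
    if app.previous_refusal:
        return "Tier 2/3: manual review"
    if app.held_us_or_cdn_visa:
        return "Tier 1: auto-eligibility approval"
    return "Tier 2/3: manual review"

print(triage(Application(True, False, True)))   # Tier 1: auto-eligibility approval
print(triage(Application(True, True, True)))    # Tier 2/3: manual review
```

The transparency question, then, is whether publishing rules of this black-and-white kind would really compromise anything, as opposed to the separate security/intelligence rules that sit alongside them.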

If the ultimate goal is also to use these rules to affect processing (lowering the number of applicants and raising approval rates), then presumably there could be a benefit to telling the public these factors, so that those without a strong case may be dissuaded from applying.

Just some random Monday morning musings as we dig further. Stay tuned.


The Play is Under Review: A Closer Look at IRCC’s Policy Playbook on Automated Decision Making (Pending Feature)

Over the next several weeks, I’ll be doing a series of shorter blog posts on IRCC’s Policy Playbook on Automated Support for Decision-making (2021 edition).

The first one (hopefully released this week or by the weekend) will be about IRCC’s concerns that applicants are “gaming by claiming” and their preference for “objective evidence” for the inputs of IRCC’s Chinook system.

We will focus our attention on this manual, which we find could drastically change the landscape for applicants, practitioners, and the courts reviewing decisions. We will take a critical look at the ways we expect transparency in the use of AI as we move forward.

I am also doing two parallel judicial reviews of AI decisions as part of my practice right now, and I will keep everyone informed about how those cases are going and what we are learning.

Should be exciting. Welcome to this space, and looking forward to the conversation.

About Us
Will Tao is an Award-Winning Canadian Immigration and Refugee Lawyer, Writer, and Policy Advisor based in Vancouver. Vancouver Immigration Blog is a public legal resource and social commentary.
