One of the more fascinating modules in Chinook is Module 5 – Indicator Management.
Many of you who have received ATIPs for Officers’ GCMS notes or received Rule 9 Reasons from the Federal Court have probably seen this in your GCMS notes:
But what if this is missing? Folks have yet to see any actual risk indicators or processing priority word flags actually show up in ATIP disclosures. Here is probably why.
Indicators and Word Flags are Deleted If There is any Indicator/Word Flag
This email exchange from October 2020 between IRCC program officers and ATIP Officers (I won’t get into why I find this problematic in this piece) tells you why.
In this email, guidance is being provided to use the wording “Indicator: N/A Processing Priority Word Flag: N/A” only where there is no Indicator or Priority Word Flag. In other words, the entire section is omitted where there is an Indicator or Priority Word Flag.
Hence the title of this piece.
The question then becomes: how does one actually challenge the lack of disclosure of a risk flag or priority word flag in a decision, for example in Federal Court? In litigation, reverse-engineered explanations will be put forward for why the decision was reasonable, but without the actual indicator/word flag, a large chunk of the decision – or perhaps the impetus behind a fettered decision – will be missing.
Furthermore, it is one-way access. A big defence of the transparency and fairness of Chinook is that the same information is available in GCMS as in Chinook, minus the deletions of working notes (which are apparently not substantive). However, as we have discovered, these notes can be substantive, and if Officers are recommended to use standard-form wording in refusing cases, we might only be able to rely on things such as risk flags/word flags – but these are being deleted from GCMS notes and Rule 9 reasons. What if the Department of Justice has access to them (from their client) but we do not? Does that create a procedural fairness issue?
Let’s take a step back and look at what we know so far about Module 5.
Below I will write a running commentary on paras 38-53 of the Daponte Affidavit.
Module 5: Indicator Management (Risk Indicators and Local Word Flags)
38. As described above, Module 5 allows a Chinook user to submit requests to a Chinook administrator to add, renew, or modify “risk indicators” and “local word flags”. “Risk indicators” and “local word flags” are intended to assist Decision-Makers in their review of Applications.
It is to be noted that we still do not know how the system flags/indicates these words on a case, or where they show up (in what module) to trigger action.
39. “Risk indicators” are used to notify Decision-Makers of trends that IRCC has detected, such as a trend that a falsified document was submitted by a certain company in a high number of Applications from different clients or otherwise to highlight a particular factor of concern.
40. “Risk indicators” are also utilized to notify Decision-Makers of potentially low risk Applications; for example, if an international medical conference is being held in Canada, a “risk indicator” may be created to identify entry for such purpose to be of low risk to program integrity.
41. “Risk indicators” may apply to all Applications or to a specific migration office. The inclusion of “risk indicators” within Chinook allows Decision-Makers to view applicable indicators in a centralized manner when determining an Application.
While it is presumed that some of the larger “risk indicators” are big-picture anti-fraud pieces, what about the local office ones? What if something like “single, older woman going to attend a wedding” is an indicator at one visa office but not at another? Is local knowledge and an Officer’s expertise enough of a justification? Does there need to be oversight?
42. An approved “risk indicator” within Chinook is linked to set criteria. For example, a “risk indicator” may be linked to a client’s declared occupation, such as “petroleum geologist”, or intended employer, such as “Acme Oil”, or a specified combination of criteria, such as “petroleum geologist” for “Acme Oil”.
Again, the specific combination I understand – but the broader flag of “petroleum geologist” alone seems to carry the possibility of discriminating, and I would want it subject to independent oversight.
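To see why the breadth of a criterion matters, the criteria-linking described in para 42 can be sketched as a simple matching rule. To be clear, this is my own illustration: every name, field, and structure below is an assumption, since Chinook’s actual implementation has never been disclosed.

```python
# Hypothetical sketch of criteria-based "risk indicator" matching (para 42).
# All function names, field names, and data structures are illustrative
# assumptions only; Chinook's real implementation is not public.

def indicator_matches(criteria: dict, application: dict) -> bool:
    """An indicator fires only when every one of its criteria matches."""
    return all(application.get(field) == value for field, value in criteria.items())

# A combined-criteria indicator: "petroleum geologist" working for "Acme Oil".
combined = {"occupation": "petroleum geologist", "employer": "Acme Oil"}
# A broad, single-criterion indicator: any "petroleum geologist" at all.
broad = {"occupation": "petroleum geologist"}

app = {"occupation": "petroleum geologist", "employer": "Other Co"}
print(indicator_matches(combined, app))  # False: employer does not match
print(indicator_matches(broad, app))     # True: the broad flag sweeps in far more applicants
```

The sketch makes the oversight concern concrete: dropping one criterion turns a targeted anti-fraud flag into one that attaches to every applicant in an occupation.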
43. Approved “risk indicators” are presented in the Module 3 Report, along with a recommendation that Decision-Makers perform an activity in assessing an Application, such as a review of proof of credentials or an employment offer letter. The recommendation, however, does not direct Decision-Makers to arrive at any specific conclusion in conducting their assessment, but rather suggests steps to be taken to ascertain information.
I would be interested to see what the approval and refusal rates are for cases that are flagged. It would seem to be a lower-tier flag that could create major challenges. Even though it does not direct a decision, it is hard to see how a word such as ‘flag’ does not fetter discretion.
Local Word Flags
44. A “local word flag” is used to assist in triaging an Application in order to ensure priority processing of time-sensitive Applications, such as an Application to attend a wedding or a funeral.
45. A “local word flag” is specific to a particular migration office. For example, the Beijing migration office may obtain approval from the Chinook administrator to include words associated to a wedding, such as “wedding”, “marriage”, or “ceremony”. The matched word found in any Application at the Beijing migration office is then presented in the Module 3 Report.
What separates a risk flag from a word flag? A local word flag seems to support ‘priority processing’, but how many of these flagged decisions are ultimately positive versus negative?
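The office-specific triage described in paras 44-45 reads like straightforward keyword scanning, which can be sketched as follows. Again, this is an assumption for illustration: the dictionary of office keyword lists, the function name, and the matching logic are all mine, not IRCC’s.

```python
# Hypothetical sketch of office-specific "local word flag" triage (paras 44-45).
# The office/keyword mapping and all names here are illustrative assumptions.

OFFICE_WORD_FLAGS = {
    "Beijing": {"wedding", "marriage", "ceremony"},
}

def matched_word_flags(office: str, application_text: str) -> set:
    """Return any office-specific flag words found in the application's free text."""
    words = set(application_text.lower().split())
    return OFFICE_WORD_FLAGS.get(office, set()) & words

flags = matched_word_flags("Beijing", "attending a family wedding ceremony in Toronto")
print(sorted(flags))  # ['ceremony', 'wedding']
```

On this sketch, the same word list produces no matches at any other office, which is consistent with the affidavit’s statement that a local word flag is specific to a particular migration office.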
46. There is a process to create a “risk indicator” or “local word flag” within Chinook. An IRCC Risk Assessment Officer (“RAO”) or other approved user may submit requests to create such an indicator. A Chinook administrator then reviews requests for approval within Module 5. Each submission must be justified through rationale statements and are subject to modification or denial by the administrator.
This is not surprising. We are aware of this process, although I would mention that from an ATIP on the RAO email account I saw only one Module 5 request (perhaps others were redacted); you can see it below. I also share a copy of what types of flags can be raised.
47. Following the above example, a RAO may find that a number of WP applications have included falsified letters of offer under the name of a specific company, such as “Acme Oil”. The RAO may then submit a request that the company name be included as a “risk indicator” due to concerns of falsified documentation.
This is by all accounts a very positive use of risk indicators. Why not let those who have applied know they have been flagged and perhaps these flags can be accumulated (and some even publicly shared) so we do not have repeat applicants falling for the same trap?
48. Chinook searches for “risk indicators” and “local word flags” in all Applications that are contained in a Module 3 Report. However, such indicators appear in the Module 3 Report only when they may be relevant to a particular Application.
Hence the N/A on several applications. That makes sense.
49. “Risk indicators” and “local word flags” are valid for four months from the date of approval, after which a Chinook administrator may renew or modify the indicator.
What oversight is there of this individual? Their role? Their anti-racism training? Is there a committee or only ONE administrator?
50. As noted above, Decision-Makers or other assigned Chinook users are to “copy and paste” any “risk indicators” or “local word flags” presented in the Module 3 Report into GCMS, where they will be retained. If there are no such indicators, Decision-Makers are to note that these are not applicable to an Application by recording “N/A” in GCMS. I expand on this process immediately below.
Again – this is why the language of N/A shows up in GCMS.
COMPLETION OF APPLICATION PROCESSING WITHIN CHINOOK
51. Once Decision-Makers finalize decisions for all Applications in a given Module 3 Report, they are to ensure that the decision, reasons, and any “risk indicators” or “local word flags” in the Module 3 Report are recorded in GCMS using the steps described in the paragraphs that follow.
Again, the problem is that it is recorded in GCMS but it disappears for the Applicant trying to access their own GCMS notes. Is this fair?
52. Decision-Makers are to click a button labelled “Action List” located within Column A of the Module 3 Report, which organizes data for ease of transfer into GCMS. The created “Action List” presents the decision, reasons for refusal if applicable, and any “risk indicators” or “local word flags” for each Application. If there were no “risk indicators” or “local word flags” associated with a given Application, then Decision-Makers must populate the corresponding GCMS “Notes” field with “N/A” to reflect that no such terms were present in the Module 3 Report.
Which is what we saw with the Rule 9 excerpt I took out. Again, we’ve seen this.
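The note-population rule described in paras 50 and 52 is simple enough to sketch. Everything below (the function name, the note format) is my own illustrative assumption, pieced together from the affidavit’s description and the “Indicator: N/A Processing Priority Word Flag: N/A” wording we see in disclosures.

```python
# Hypothetical sketch of the GCMS "Notes" population rule (paras 50 and 52):
# matched indicators/word flags are copied in; otherwise "N/A" is recorded.
# The function name and note format are illustrative assumptions only.

def gcms_notes_entry(indicators: list, word_flags: list) -> str:
    """Build the note an Officer would record in GCMS for one application."""
    if not indicators and not word_flags:
        return "Indicator: N/A Processing Priority Word Flag: N/A"
    parts = []
    if indicators:
        parts.append("Indicator: " + "; ".join(indicators))
    if word_flags:
        parts.append("Processing Priority Word Flag: " + "; ".join(word_flags))
    return " ".join(parts)

print(gcms_notes_entry([], []))  # Indicator: N/A Processing Priority Word Flag: N/A
print(gcms_notes_entry(["Acme Oil"], []))  # Indicator: Acme Oil
```

Of course, the whole point of this piece is that while the non-N/A version of this note is recorded in GCMS, it is then removed from what the Applicant receives through ATIP or Rule 9 disclosure.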
53. Decision-Makers are then required to “copy and paste” the final decision from Chinook into the “Final” field contained in GCMS. Decision-Makers, or assigned Chinook users on their behalf, are also required to “copy and paste” any reasons for decision and the field contents for “risk indicators” and “local word flags” from Chinook into the “Notes” field of GCMS.
So, as counsel, we need to really figure out how to get our hands on these risk indicators, because oftentimes we may be trapped against a flag on our clients without us even knowing, and with the bulk nature by which these flags are being triggered, that will limit the transparency of the final decision. Clients may […]
One of the big debates around Chinook is whether or not it is Artificial Intelligence (“AI”). IRCC’s position has been that Chinook is not AI because there is a human ultimately making decisions.
In this piece, I will show how the engagement of a human in the loop is a red herring, but also how the debate skews the real issue: that automation, whether for business functions only or to help administer administrative decisions, can have adverse impacts if unchecked by independent review.
The main source of my argument that Chinook is AI is from IRCC itself – the Policy Playbook on Automated Support on Decision-Making 2021. This is an internal document, which has been updated yearly, but likely captures the most accurate ‘behind the scenes’ snapshot of where IRCC is heading. More on that in future pieces.
AI’s Definition per IRCC
The first and most important thing is to start with the definition of Artificial Intelligence within the Playbook.
You will notice that Artificial Intelligence is defined very broadly by IRCC, which seems to go against the narrow definition it paints with respect to Chinook.
Per IRCC, AI is:
If you think of Chinook dealing with the cognitive problem of attempting to issue bulk refusals, and utilizing computer science (technology) to apply learning, problem solving, and pattern recognition, it is hard to imagine that the system would even be needed if it weren’t AI.
Emails among IRCC staff actively discuss the use of Chinook to monitor approval and refusal rates utilizing “Module 6”.
Looking at the Chinook Modules themselves, Quality Assurance (“QA”) is built in as a module. It is hard to imagine a QA system that looks at refusal and approval rates and automates processes and is not AI.
As this article points out:
Software QA is typically seen as an expensive necessity for any development team; testing is costly in terms of time, manpower, and money, while still being an imperfect process subject to human error. By introducing artificial intelligence and machine learning into the testing process, we not only expand the scope of what is testable, but also automate much of the testing process itself.
Given the volume of files that IRCC is dealing with, it is unlikely that the QA process relies only on humans and not technology (else why would Chinook be implemented?). And if it involves technology and automation (a word that shows up multiple times in the Chinook Manual) to aid the monitoring of a subjective administrative decision – guess what – it is AI.
We also know that Chinook is underpinned with ways to process data, look at historical approval and refusal rates, and flag risks. It also integrates with Watchtower to review the risk of applicants.
It is important to note that even in the Daponte Affidavit in Ocran – which, alongside ATIPs, is the only information we have about Chinook – the focus has always been on the first five modules. Without knowledge of the true nature of something like Module 7, titled ‘ToolBox’, it is certainly premature to label the whole system as not AI.
Difficult to Argue Chinook is Purely Process Automation Given Degree of Judgment Exercised by System in Setting Up Findecs (Final Decisions)
Where IRCC might be trying to carve a distinction is between process automation/digital transformation and automated decision support systems.
One could argue, for example, that most of Chinook is process automation.
For example, the very underpinning of Chinook is that it allows the entire application to be made available to the Officer in one centralized location, without opening the many windows that GCMS required. Data points and fields auto-populate from an application and GCMS into the Chinook software, allowing the Officer to render decisions more easily. We get this. It is not debatable.
But does it cross into an automated decision support system? Is there some degree of judgment, traditionally exercised by humans, that is passed on to technology when applying Chinook?
As IRCC defines:
Chinook directly assists an Officer in approving or refusing a case. Indeed, Officers have to apply discretion in refusing, but Chinook presents and automates the process. Furthermore, it has fundamentally reversed the decision-making process, making it a decide-first, justify-later approach with the refusal notes generator. Chinook, without AI generating the framework, setting up the bulk categories, and automating an Officer’s logical reasoning process, simply does not exist.
These systems replace the process of Officers manually reviewing documents and rendering a final decision, taking notes to file to justify their decision. It is to be noted that this is still the process at low-volume/Global North visa offices, where decisions are made this way and are reflected in extensive GCMS notes.
In Chinook, any notes taken are hidden and deleted by the system, and a template of bulk refusal reasons auto-populate, replace, and shield the actual factual context of the matter from scrutiny.
Hard to see how this is not AI. Indeed, if you look at the comparables provided – the eTA, Visitor Record and Study Permit Extension automation in GCMS, similar automations with GCMS underpin Chinook. There may be a little more human interaction, but as discussed below – a human monitoring or implementing an AI/advanced analytics/triage system doesn’t remove the AI elements.
Human in the Loop is Not the Defining Feature of AI
The defense we have been hearing from IRCC is that there is a human ultimately making a decision, therefore it cannot be AI.
This obscures a different concept called human-in-the-loop, which the Policy Playbook suggests actually needs to be part of all automated decision-making processes. If you are following, this means that the defence that a human is involved (therefore not AI) is actually a key defining requirement IRCC has placed on AI systems.
It is important to note that there certainly is a spectrum of application of AI at IRCC that appears to be leaning away from human-in-the-loop. For example, IRCC has disclosed in its Algorithmic Impact Assessment (“AIA”) for the Advanced Analytics Triage of Overseas Temporary Resident Visa (“TRV”) Applications that there is no human in the loop in the automation of Tier 1 approvals. The same approach, without a human-in-the-loop, is used for automating eligibility approvals in the Spouse-in-Canada program, which I will write about shortly.
Why the Blurred Line Between Process Automation and Automated Decision-Making Process Should Not Matter – Both Need Oversight and Review
Internally, this is an important distinguishing characteristic for IRCC because it appears that at least internal/behind-the-scenes strategizing and oversight (if that is what the Playbook represents) applies only to automated decision-support systems and not business automations. Presumably such a classification may allow for less need for review and more autonomy by the end user (Visa Officer).
From my perspective, we should focus on the last part of what IRCC states in their playbook – namely that ‘staff should consider whether automation that seems removed from final decisions may inadvertently contribute to an approval or a refusal.’
To recap and conclude, the whole purpose of Chinook is to be able to render approvals and refusals in a quicker and bulk fashion to save Officers’ time. Automation of all functions within Chinook therefore contributes to a final decision – and not inadvertently but directly. The very manner in which decisions are made in immigration shifts as a result of the use of Chinook.
Business automation cannot and should not be used as a cover for the ways that seemingly routine automations actually affect processing that would have had to be done by humans – providing them the type of data, displayed on the screen, in a manner that can fetter their discretion and alter the business of old.
That use of computer technology – the creation of Chinook – is 100% definable as the implementation of AI.
The Play is Under Review: A Closer Look at IRCC’s Policy Playbook on Automated Decision Making (Pending Feature)
Over the next several weeks, I’ll be doing a series of shorter blog posts on IRCC’s Policy Playbook on Automated Support for Decision-making (2021 edition).
The first one (hopefully released this week or by the weekend) will be about IRCC’s concerns that applicants are “gaming by claiming” and their preference for “objective evidence” for the inputs of IRCC’s Chinook system.
We will focus our attention on this manual, which we find could drastically change the landscape for applicants, practitioners, and the courts reviewing decisions. We will get critical on the ways we expect transparency in the use of AI as we move forward.
I am also doing two parallel judicial reviews of AI decisions as part of my practice right now, and will keep everyone informed as to how those cases are going and what we are learning.
Should be exciting. Welcome to this space, and looking forward to the conversation.
I have a tradition every year of listening to the same Death Cab for Cutie song, The New Year.
“So this is the new year
And I have no resolutions
For self assigned penance
For problems with easy solutions”
The pursuit of ‘easy’ seems to be the antithesis of my current path. In 2021 (after a late 2020 move), I started a new Firm and had a new baby, each of which has taken its relative toll. I’m ready for a reset, a change of focus, and a quieter year. I look forward to announcing those details in early February.
Gratitude for another Clawbies Win
I was definitely pleasantly surprised to receive another Clawbies (my third!) for Best Law and Commentary Blog in Canada. This year’s award is dedicated to my readers. Without the engagement I’ve received on topics such as Chinook and our broader policy discussions, I would not have had the motivation to write. This year, my writing was split largely between this blog and my Firm’s blog.
I expect 2022 to bring similar things, but I definitely realize how much I missed regular writing during my brief hiatus. I am going to try my best to spend my mornings writing – as regularly as I can amid my year focused on system-building, conference organizing, and too much creative day-dreaming (more on that to come too).
Question of 2022: Question of Inequity, Technology, and If (or How) the Courts Will Respond
If I were to crystal ball the central and most pressing issue in 2022, I would suggest it is that of the inequity, particularly technology facilitated inequity, that the current Canadian immigration systems have created. The follow-up question will be how (if at all) the Courts will choose to respond to these arguments, which should be brought forward more.
The Supreme Court of Canada in Vavilov has emphasized the importance of individuals affected by a decision being able to present their case fully and fairly. What does that mean within a system that appears to be molding what that means?
 The principles of justification and transparency require that an administrative decision maker’s reasons meaningfully account for the central issues and concerns raised by the parties. The principle that the individual or individuals affected by a decision should have the opportunity to present their case fully and fairly underlies the duty of procedural fairness and is rooted in the right to be heard: Baker, at para. 28. The concept of responsive reasons is inherently bound up with this principle, because reasons are the primary mechanism by which decision makers demonstrate that they have actually listened to the parties.
Let me give just a few examples of where I think there is clear system-built inequity. Study plans, for instance, are effectively required of many of my clients in the Global South, yet they are not required documents for all applicants. Indeed, my colleague Patrick Bissonnette and I are preparing for a webinar in March where we will explore how there appears to be quite a discrepancy between the instructions directed at applicants depending on visa office. Even more troubling, some applicants from high-refusal visa offices are not given clear and complete instructions on what such a letter should even include, or are ultimately recommended to keep their plans to 1 or 2 pages. On the back end, cases (both where IRCC was successful and unsuccessful) are increasingly going after the ‘vague’ nature of the study plans submitted. This vagueness is entirely created by the system, but the ultimate consequences are borne by the Applicant.
I would suggest the same concern is raised about IRCC’s temporary resident portals, which limit uploads to 2MB for applicants. The reality is that 2MB isn’t fair when each visa office has vastly different requirements. In addition to study plans, many applicants from high-refusal countries also need to add additional documents about their parents, sources of income, and ties. As we uncovered in our discussion of VESPA for TRV-exempt countries, cases are prima facie approved at a rate of 95+%. Clients from high-refusal countries struggle to legibly combine documents and even properly categorize them under the new portal. I have spent much of the latter part of 2021 having to re-apply and pursue legal remedies for folks who used the temporary resident portals, where their submissions were reduced and attachments had to be submitted haphazardly in a way a visa officer would likely have missed.
The other big question comes with the rollout of the use of AI (the China and India TRV model) to other visa offices and lines of work. For IRCC these systems have been working great, but on the other side we’re seeing only the back end of either quick approvals or refusals with very limited justification (as a result of Chinook’s use on the back end). My hope is that in addition to a bit more transparency (and independent oversight) on the AI system expansion process, IRCC can do proper outreach on the ongoing use of Chinook or Chinook’s pending replacement.
We have to remember that the Courts too are making the recent move to technology (and I have to say I am very pleasantly surprised by how they are, somewhat, crushing it). Still, AI and the administrative choices surrounding the use of technology will be a whole new conversation to be had. My hope is that this conversation is not simply about deference to the experts. The experts themselves need to ensure their systems do not reproduce yesterday’s inequities.
I will be doing a lot of writing on this in 2022 and cannot wait to share what I uncover!
Ttfn. 2022 let’s go.
My Value Proposition
My Canadian immigration/refugee legal practice is based on trust, honesty, hard-work, and communication. I don’t work for you. I work with you.
You know your story best, I help frame it and deal with the deeper workings of the system that you may not understand. I hope to educate you as we work together and empower you.
I aim for that moment in every matter, big or small, when a client tells me that I have become like family to them. This is why I do what I do.
I am a social justice advocate and a BIPOC. I stand with brothers and sisters in the LGBTQ2+ and Indigenous communities. I don’t discriminate based on the income-level of my clients – and open my doors to all. I understand the positions of relative privilege I come from and wish to never impose them on you. At the same time, I also come from vulnerability and can relate to your vulnerable experiences.
I am a fierce proponent of diversity and equality. I want to challenge the racist/prejudiced institutions that still underlie our Canadian democracy and still simmer in deep-seated mistrusts between cultural communities. I want to shatter those barriers for the next generation – our kids.
I come from humble roots, the product of immigrant parents with an immigrant spouse. I know that my birth in this country does not entitle me to anything here. I am a settler on First Nations land. Reconciliation is not something we can stick on our chests but something we need to open our hearts to. It involves acknowledging wrongdoing for the past but an optimistic hope for the future.
I love my job! I get to help people for a living through some of their most difficult and life-altering times. I am grateful for my work and for my every client.