
A Closer Look at How IRCC’s Officer and Model Rules Advanced Analytics Triage Works

As IRCC ramps up efforts to bring advanced analytics to all of its Lines of Business (LOBs), it is important to take a closer look at the foundational model: the China TRV application process. Indeed, we know that this model will become the TRV model for the rest of the world sometime this year (if not already).

While this chart is from a few years back (reflecting, as I have discussed in many recent presentations and podcasts, how far behind we are in this area), my understanding is that this three-Tier system is still the model in place.

Over the next few posts, I'll try to break down the model in more detail.

This first post will serve as an overview to the process.

I have included a helpful chart explaining how an application goes from intake to a decision and passport request.

While I will have blog posts that go into more detail about what 'Officer Rules' and 'Model Rules' are, here is the basic gist. A reminder: the chart only represents the process to approval, NOT refusal, and no similar chart was provided for refusals.

Step 1) Officer’s Rules Extract Applications Out Based on Visa Office-Specific Rules

Each Visa Office has it’s own Officer’s Rules. If an application triggers one of those rules, it no longer gets processed via the Advanced Algorithm/AI model. Think about it as a first filter, likely for those complex files that need a closer look at by IRCC.

You will recall, from our discussion of Chinook, the presence of "local word flags" and "risk indicators." I do not yet have evidence linking these two pieces together, but presumably the Officer Rules are also triggered by certain words and flags.

Beyond this, we are uncertain what Officer's Rules contain, and we should not expect to find out. However, we do know that once an application is extracted, the SOPs (Standard Operating Procedures) at each Visa Office apply rather than the AA/AI model. This suggests that the SOPs (and access to those documents) may hold the triggers for the word flags.
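To make the mechanics concrete, here is a minimal sketch in Python of how a visa-office-specific first filter of this kind might behave. Everything here (the rule contents, field names, and function shape) is my assumption; IRCC has not disclosed how Officer's Rules are actually implemented.

```python
# Hypothetical sketch of an "Officer's Rules" first filter.
# Rule contents and field names are invented for illustration;
# IRCC has not disclosed its implementation.

def apply_officer_rules(application: dict, office_rules: list) -> bool:
    """Return True if the application is extracted for manual SOP processing."""
    for rule in office_rules:
        if rule(application):
            return True  # extracted: handled under local SOPs, not the AA/AI model
    return False

# Example visa-office-specific rules (entirely invented)
example_office_rules = [
    lambda app: app.get("prior_refusals", 0) > 0 and app.get("purpose") == "business",
    lambda app: "flagged_employer" in app.get("flags", []),
]
```

The key point the chart suggests is captured in the return value: a triggered rule takes the file out of the automated stream entirely.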

Step 2) Application of Model Rules

This is where the AA/AI kicks in. Model Rules (which I will discuss in a future blog post) are created by IRCC data experts to separate applications into Tiers with high confidence. Tier 1 contains the applications that, to a high level of confidence, should lead the Applicant to positive eligibility findings. Indeed, Tier 1 applications are decided with no human in the loop; the computer system approves them. If an application is likely to fail the eligibility process and lead to a negative outcome, it goes to Tier 3. Tier 3 requires Officer review and, unsurprisingly, has the highest refusal rate, as we have discussed in this previous piece.

It is the files that fall between positive and negative (the 'maybe' files), as well as those that fit neither the Model Rules nor the Officer Rules, that become Tier 2. Officers also have to review these cases, but the approval rates are better than for Tier 3.
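As a rough mental model (and only that), the tiering described above can be sketched as simple thresholding of whatever confidence measure the Model Rules produce. The single "score" and the thresholds below are my assumptions, not disclosed IRCC values:

```python
# Hypothetical sketch of the three-Tier triage. The single confidence
# score and the thresholds are assumptions; IRCC's Model Rules are not public.

def assign_tier(eligibility_confidence: float) -> int:
    HIGH, LOW = 0.9, 0.3  # invented thresholds
    if eligibility_confidence >= HIGH:
        return 1  # high-confidence positive: approved with no human in the loop
    if eligibility_confidence <= LOW:
        return 3  # likely negative: Officer review, highest refusal rate
    return 2      # the "maybe" files (and those fitting no rules): Officer review
```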

Step 3) Quality Assurance

The Quality Assurance portion of this model filters 10% of all files into Tier 2 to verify the accuracy of the model.
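If the 10% figure is taken at face value, the QA step can be pictured as a sample layered on top of the tiering, along these lines. This is a sketch only; whether the sample is random, stratified, or rule-based has not been disclosed:

```python
import random

# Sketch of the QA step: roughly 10% of files are routed to Tier 2 for
# Officer review regardless of the model's own tier, to verify accuracy.

def route_with_qa(model_tier: int, qa_rate: float = 0.10) -> int:
    """Return the final tier: the model's tier, unless sampled for QA."""
    if random.random() < qa_rate:
        return 2  # QA sample: a human reviews what the model would have decided
    return model_tier
```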

The models themselves become 'production models' when a high level of confidence is met and they are finalized, such as the ones we have seen for the China TRV and India TRV, we believe also the China and India Study Permits, and likely also cases such as VESPA (though this part has not been confirmed). Before it becomes a Production Model, a model sits in the Exploratory zone.

How do we know a high level of confidence is met? This is where we look at the scoring of the file.

I will break this down later (frankly, it needs more research) and it will be the subject of a later piece, but the gist is that applications are scored to ensure the model is working effectively.

It is interesting that Chinook also has a QA function (an entire QA module, Module 6), so it appears there is even more overlap between the two systems, probably akin to a front-end/back-end relationship.

Step 4) Pre-Assessment

Tier 1 applications go straight to admissibility review, but those in Tiers 2 and 3 go to pre-assessment review by a Clerk.

It is important to note, here and in the module, that these clerks and officers appear to be sitting in CPC-O, not the local visa offices abroad. This may also explain why so many more decisions are being made by decision-makers in Canada, even though the decision may ultimately be delivered by, or associated with, a primary visa office abroad.

But herein lies a bit of our confusion.

Based on a 2018 ATIP we did, we know that they were triaging cases by case type into "Bins" so that certain officers (or at least certain lettered numbers) would handle like cases. Yet this appears to have been the India model at the time; the China TRV model seems to centralize things more in Ottawa. Where does the local knowledge and expertise come in? Are there alternative models now that send decisions to the local visa office, or is it only Officer's Rules? Is this perhaps why decisions rendered on TRVs from India and China lack the actual local knowledge we used to see, because they have been taken out of the hands of those individuals?

Much of the local work used to involve verifying employers and confirming certain elements. Is that now reserved for files that are taken out of the triage and flagged as possible admissibility concerns? Much to think about here.
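For what it is worth, the "Bin" triage suggested by the 2018 ATIP is conceptually simple; a sketch (with invented case types and field names) might look like this:

```python
from collections import defaultdict

# Hypothetical sketch of the "Bins" triage from the 2018 ATIP: like cases
# grouped by case type so designated officers handle them together.
# Case-type labels and field names are invented for illustration.

def bin_by_case_type(applications: list[dict]) -> dict[str, list[dict]]:
    bins = defaultdict(list)
    for app in applications:
        bins[app.get("case_type", "other")].append(app)
    return dict(bins)
```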

Again, note that Chinook also has a pre-assessment module that seems responsible for many of the same things. Perhaps Chinook is responsible for presenting the results of that analysis in a more Officer-friendly way, but why is it also directing the pre-assessment if the pre-assessment is being done by officers?

Step 5) Eligibility Assessment

What is important to note is that at this stage, any eligibility assessment without an automated approval is still being done by Officers. What we do not know is whether there is any guidance directing Officers to approve or refuse a certain number of Tier 2 or Tier 3 applicants. This information would be crucial. We also know IRCC is trying to automate refusals, so we need to track carefully what that might look like down the road as it intersects with negative eligibility assessments.

Step 6) Admissibility Review + Admissibility Hits

While this will likely be the last portion to be automated, given the need to cross-verify many different sources, we also know that IRCC has programs in place such as Watchtower and, again, the Risk Flags, which may or may not trigger admissibility review. Interestingly, even in cases where admissibility (misrepresentation) seems to be at play, the result often appears to be an eligibility refusal or concern. I would be interested in knowing whether the flagging system also operates at the eligibility level, or whether there is a feedback/pushback system so a decision can be re-routed to eligibility (on an A16 IRPA issue, for example).

KEY: Refusals Not Reflected in Chart

What does the refusal system look like? This becomes another key question, as decisions often skip even the biometrics or verifications and go straight to refusal. The chart would obviously look much more complicated, with probably many more steps at which a refusal can be rendered without the full eligibility assessment being completed.

Is there a similar map? Can we get access to it?


Conclusion – we know nothing yet, but this also changes everything

This model, and this idea of an application being taken off the assembly line at various places and run through different systems of assessment, suggests to my mind that we as applicants' counsel know very little about how our applications will be processed in the future. These systems do not support cookie-cutter lawyering; they suggest flags may be outside our control and knowledge, and they ultimately lead us to question what, and who, makes up a perfect Tier 1 application.

Models like this also give credence to IRCC's determination to keep things private and to keep the algorithms and code away from prying investigators and researchers, and ultimately from those who may want to take advantage of the systems.

Yet the lack of transparency, and the concerns we have about how these systems filter and sort, appear well founded. Chinook mirrors much of what is in the AA model. We have our work cut out for us.


Three Belated Crystal Ball Predictions for Canadian Immigration in 2022

While March may seem to some a little late to be predicting a year's events (given Q1 is nearing its end), I will take the contrarian position that it is not. Right now is perhaps the perfect time to make predictions. All the big-picture pieces are out of the way. We know what the levels plan looks like, especially in terms of the reduction of CECs landed in 2022.

Prediction 1: 2022 will be about AI vs. IA

I believe IRCC is going full throttle to implement AI (Advanced Analytics) across all of its Lines of Business (LOBs), from temporary to permanent residence to citizenship. The speed at which artificial intelligence can be implemented, with public support, to process high volumes of applications to Canada will be pitted against the impact of international affairs, crises, and refugee-producing situations. If Ukraine is the new precedent set by IRCC for tackling refugee/humanitarian wars and crises, which politically it appears it will have to be so that the Government can appear anti-racist (see Prediction 3), this will inevitably delay processing and shift resources. If AI can be quickly implemented to deal with the quick decisions (both approvals and refusals), this might be the best solution for the Government. Meanwhile, those who are more critical of AI systems (myself included) might ask for more caution in the process.

Email headings between senior A2SC (Advanced Analytics) folks, received via ATIP


Prediction 2: TEERing Up the Economic Immigration System Will Leave Some Behind 

The new TEER system replaces the NOC skill levels in a year where economic permanent resident applications, largely filled by NOC B positions, are backlogged and paused. How will IRCC adapt and change the rules of the game with the implementation of TEER? What does this mean for the future of FSW/CEC?

If the math is as set out above, we could see a shrinking of NOC 0/A/B, so that the 70% of unit groups once eligible (NOC 0, A, B) turns into 59% (TEER 0, 1, 2). While it seems like much of what will occur will be 'mergers', I am eager to see what happens to tweener jobs such as administrative assistant and retail sales supervisor. I suspect the first place we will see a major impact will be the Federal Skilled Worker program, where we may move to exclusion lists or targeted draws for specific TEER categories.


Prediction 3: IRCC Will Be Forced/Asked to Clean Up the House on Anti-Racism

IRCC's Anti-Racism Polaris Report and recent concerns (including the next Parliamentary Study) about discrepant processing rates will lead the Department to try to address this in policy options and offerings.

The emails between IRCC staff looking into preventing bias and racism in their systems show good work in the right direction, but there will be growing calls for an independent oversight commission or ombudsperson.

Immigration is so deeply entrenched in the racist roots of our history of exclusion, now manifested in explicit and implicit biases, two-tiered systems, secret programs, and differing criteria, that I really do not see how we can build an anti-racist system without first tearing down the existing one. Economically (in terms of investment in things such as technology) and politically (given we are still considered globally to have a decent/attractive system), I do not see us doing that.

What you will likely see is a greater platforming and emphasis of the Gender-Based Analysis Plus (GBA+) work, as well as projects taken up that give at least a cover or presentation of progress. Yet I and other critics remain hopeful that the Government does not shy away from a hard, introspective look at the systems that have already been developed and paid for, to see where key fixes are needed.

I do see that those on the other side (advocates, lawyers, etc.) are shifting away from their own Whiteness, and once those litigation skills and experiences are transferred to a new generation of racialized lawyers who have a keen sense of justice and have lived and felt the discrepancy, they will start attacking the foundations. I think right now is the perfect time for IRCC to do some public relations/communication work around anti-racism, to pad the intention piece, and to build in justifications/explanations/evidence for when these matters eventually get litigated.

As I have presented and said – immigration is itself state-sponsored discrimination. I don’t think we will ever eliminate it to a point where Applicants are happy and Immigration loses its role as a filtering mechanism based on race + citizenship, as a defining feature. Yet, I definitely see a bigger role for those who advocate for safeguards and 2022 as the year some of those safeguards start being introduced.


Why If There’s No “N/A” Risk Flag on Your GCMS Notes, You May Have Been Risk Flagged

One of the more fascinating modules in Chinook is Module 5 – Indicator Management.

Many of you who have received ATIPs for Officer’s GCMS notes or received Rule 9 Reasons from the Federal Court probably see this in your GCMS notes:

But what if this is missing? Folks have yet to see any actual risk indicators or processing priority word flags show up in an ATIP. Here is probably why.

Indicators and Word Flags Are Deleted If There Is an Indicator/Word Flag

This email exchange from October 2020 between IRCC program officers and ATIP Officers (I won’t get into why I find this problematic in this piece) tells you why.

In this email, guidance is provided to use the wording "Indicator: N/A Processing Priority Word Flag: N/A" only where there is no Indicator or Priority Word Flag. In other words, the entire section is omitted where there is an Indicator or Priority Word Flag.

Hence the title of this piece.
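To spell out the logic the email describes (the field names below are mine, but the behaviour tracks the guidance): the Indicator/Word Flag section appears in released notes only when there is nothing to disclose.

```python
# Sketch of the disclosure logic in the October 2020 email guidance.
# Field names are invented; the behaviour follows the guidance described above.

def gcms_disclosure_lines(indicator: str | None, word_flag: str | None) -> list[str]:
    if indicator is None and word_flag is None:
        return ["Indicator: N/A", "Processing Priority Word Flag: N/A"]
    return []  # section omitted entirely when a flag exists

# Consequence: an ATIP or Rule 9 release with no "N/A" line may mean
# the file WAS flagged.
```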

The question then becomes: how does one actually challenge the lack of disclosure of a risk flag or priority word flag in a decision, for example in Federal Court? In litigation, reverse-engineered explanations will be put forward for why the decision was reasonable, but without the actual indicator/word flag, a large chunk of the decision, or perhaps the impetus behind a fettered decision, will be missing.

Furthermore, it is one-way access. A big defense of the transparency and fairness of Chinook is that the same information is available in GCMS as in Chinook, minus the deletions of working notes (which apparently are not substantive). However, as we have discovered, these notes can be substantive, and if Officers are recommended to use standard-form wording in refusing cases, we might only be able to rely on things such as risk flags/word indicators – but these are being deleted from GCMS notes and Rule 9 reasons. What if the Department of Justice has access to them (from their client) but we do not? Does that create a procedural fairness issue?


Let’s take a step back and look at what we know so far about Module 5.

Below I will write a running commentary on paras 38-53 of the Daponte Affidavit.

Module 5: Indicator Management (Risk Indicators and Local Word Flags)

38. As described above, Module 5 allows a Chinook user to submit requests to a Chinook administrator to add, renew, or modify “risk indicators” and “local word flags”. “Risk indicators” and “local word flags” are intended to assist Decision-Makers in their review of Applications.

It is to be noted that we still do not know how the system flags/indicates these words on a case, or where the flag shows up (in which module) to trigger action.

Risk Indicators

39. “Risk indicators” are used to notify Decision-Makers of trends that IRCC has detected, such as a trend that a falsified document was submitted by a certain company in a high number of Applications from different clients or otherwise to highlight a particular factor of concern.

40. “Risk indicators” are also utilized to notify Decision-Makers of potentially low risk Applications; for example, if an international medical conference is being held in Canada, a “risk indicator” may be created to identify entry for such purpose to be of low risk to program integrity.

41. “Risk indicators” may apply to all Applications or to a specific migration office. The inclusion of “risk indicators” within Chinook allows Decision-Makers to view applicable indicators in a centralized manner when determining an Application.

While it is presumed that some of the larger "risk indicators" are big-picture anti-fraud pieces, what about the local office ones? What if something – say, a single, older woman going to attend a wedding – is an indicator at one visa office but not at another? Is local knowledge and an Officer's expertise enough of a justification? Does there need to be oversight?

42. An approved “risk indicator” within Chinook is linked to set criteria. For example, a “risk indicator” may be linked to a client’s declared occupation, such as “petroleum geologist”, or intended employer, such as “Acme Oil”, or a specified combination of criteria, such as “petroleum geologist” for “Acme Oil”.

Again, the specific combination I understand, but the broader flag of "petroleum geologist" on its own seems to carry the possibility of discriminating, and I would want it subject to independent oversight.
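Mechanically, the combination criteria in para 42 amount to simple field matching. A sketch, using the affidavit's own example values but with invented field names and structure:

```python
# Sketch of para 42: risk indicators linked to one criterion or a
# combination of criteria. Example values come from the affidavit;
# field names and data structure are invented.

RISK_INDICATORS = [
    {"occupation": "petroleum geologist", "employer": "Acme Oil"},  # combination
    {"occupation": "petroleum geologist"},                          # broad single criterion
]

def matched_indicators(application: dict) -> list[dict]:
    return [
        indicator for indicator in RISK_INDICATORS
        if all(application.get(field) == value for field, value in indicator.items())
    ]
```

Note how much wider the single-criterion flag sweeps than the combination: that breadth is precisely the discrimination concern.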

43. Approved “risk indicators” are presented in the Module 3 Report, along with a recommendation that Decision-Makers perform an activity in assessing an Application, such as a review of proof of credentials or an employment offer letter. The recommendation, however, does not direct Decision-Makers to arrive at any specific conclusion in conducting their assessment, but rather suggests steps to be taken to ascertain information.

I would be interested to see the approval and refusal rates for cases that are flagged. This seems like a lower-Tier flag that could create major challenges. Even though it does not direct a decision, it is hard to see how a word such as 'flag' does not fetter discretion.

Local Word Flags

44. A “local word flag” is used to assist in triaging an Application in order to ensure priority processing of time-sensitive Applications, such as an Application to attend a wedding or a funeral.

45. A “local word flag” is specific to a particular migration office. For example, the Beijing migration office may obtain approval from the Chinook administrator to include words associated to a wedding, such as “wedding”, “marriage”, or “ceremony”. The matched word found in any Application at the Beijing migration office is then presented in the Module 3 Report.

What separates a risk flag from a word flag? A local word flag seems to support 'priority processing', but how many of these decisions are ultimately positive per word versus negative?
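Based on paras 44-45, the word-flag mechanism seems to reduce to matching an office-specific word list against an application's free-text fields. A sketch (the data structure and matching are my assumptions; a real matcher would presumably normalize punctuation and phrases):

```python
# Sketch of paras 44-45: office-specific word flags matched against an
# application's free text. The words come from the affidavit's Beijing
# example; everything else is invented.

LOCAL_WORD_FLAGS = {
    "Beijing": {"wedding", "marriage", "ceremony"},
}

def matched_word_flags(office: str, free_text: str) -> set[str]:
    # naive tokenization; a production matcher would handle punctuation, stems, etc.
    words = set(free_text.lower().split())
    return LOCAL_WORD_FLAGS.get(office, set()) & words
```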

Indicator Management

46. There is a process to create a “risk indicator” or “local word flag” within Chinook. An IRCC Risk Assessment Officer (“RAO”) or other approved user may submit requests to create such an indicator. A Chinook administrator then reviews requests for approval within Module 5. Each submission must be justified through rationale statements and are subject to modification or denial by the administrator.

This is not surprising. We are aware of this process, although I would mention that from an ATIP on the RAO email account I only saw one Mod5 request (perhaps others were redacted); you can see it below. I also share a copy of the types of flags that can be raised.

47. Following the above example, a RAO may find that a number of WP applications have included falsified letters of offer under the name of a specific company, such as “Acme Oil”. The RAO may then submit a request that the company name be included as a “risk indicator” due to concerns of falsified documentation.

This is by all accounts a very positive use of risk indicators. Why not let those who have applied know they have been flagged? Perhaps these flags could even be accumulated (and some publicly shared) so we do not have repeat applicants falling for the same trap.

48. Chinook searches for “risk indicators” and “local word flags” in all Applications that are contained in a Module 3 Report. However, such indicators appear in the Module 3 Report only when they may be relevant to a particular Application.

Hence the N/A on several applications. That makes sense.

49. “Risk indicators” and “local word flags” are valid for four months from the date of approval, after which a Chinook administrator may renew or modify the indicator.

What oversight is there of this individual? Their role? Their anti-racism training? Is there a committee or only ONE administrator?
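The four-month validity in para 49 is at least mechanically checkable; a sketch (the ~30-day month is my approximation):

```python
from datetime import date, timedelta

# Sketch of para 49: a flag is valid for four months from approval,
# after which an administrator must renew or modify it.

def is_flag_active(approved_on: date, today: date | None = None) -> bool:
    today = today or date.today()
    return today <= approved_on + timedelta(days=4 * 30)  # ~four months
```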

50. As noted above, Decision-Makers or other assigned Chinook users are to “copy and paste” any “risk indicators” or “local word flags” presented in the Module 3 Report into GCMS, where they will be retained. If there are no such indicators, Decision-Makers are to note that these are not applicable to an Application by recording “N/A” in GCMS. I expand on this process immediately below.

Again – this is why the language of "N/A" shows up in GCMS.

COMPLETION OF APPLICATION PROCESSING WITHIN CHINOOK

51. Once Decision-Makers finalize decisions for all Applications in a given Module 3 Report, they are to ensure that the decision, reasons, and any “risk indicators” or “local word flags” in the Module 3 Report are recorded in GCMS using the steps described in the paragraphs that follow.

Again, the problem is that while it is recorded in GCMS, it disappears for the Applicant trying to access their own GCMS notes. Is this fair?

52. Decision-Makers are to click a button labelled “Action List” located within Column A of the Module 3 Report, which organizes data for ease of transfer into GCMS. The created “Action List” presents the decision, reasons for refusal if applicable, and any “risk indicators” or “local word flags” for each Application. If there were no “risk indicators” or “local word flags” associated with a given Application, then Decision-Makers must populate the corresponding GCMS “Notes” field with “N/A” to reflect that no such terms were present in the Module 3 Report.

Which is what we saw with the Rule 9 excerpt I pulled out above. Again, we've seen this.

53. Decision-Makers are then required to “copy and paste” the final decision from Chinook into the “Final” field contained in GCMS. Decision-Makers, or assigned Chinook users on their behalf, are also required to “copy and paste” any reasons for decision and the field contents for “risk indicators” and “local word flags” from Chinook into the “Notes” field of GCMS.

So, as counsel, we really need to figure out how to get our hands on these risk indicators, because oftentimes we may be up against a flag on our clients without even knowing it, and with the bulk nature by which these flags are being triggered, that will limit the transparency of the final […]


Chinook is AI – IRCC’s Own Policy Playbook Tells Us Why

One of the big debates around Chinook is whether or not it is Artificial Intelligence (“AI”). IRCC’s position has been that Chinook is not AI because there is a human ultimately making decisions.

In this piece, I will show how the engagement of a human in the loop is a red herring, and also how the debate skews the real issue: that automation, whether for business functions only or to help administer administrative decisions, can have adverse impacts if unchecked by independent review.

The main source of my argument that Chinook is AI is from IRCC itself: the Policy Playbook on Automated Support on Decision-Making 2021. This is an internal document, updated yearly, that likely captures the most accurate behind-the-scenes snapshot of where IRCC is heading. More on that in future pieces.

AI’s Definition per IRCC

The first, and most important, thing is to start with the definition of artificial intelligence within the Playbook.

The first thing you will notice is that artificial intelligence is defined broadly by IRCC, which seems to go against the narrow definition it paints when characterizing Chinook.

Per IRCC, AI is:

If you think of Chinook as dealing with the cognitive problem of issuing bulk refusals, and utilizing computer science (technology) applied to learning, problem solving, and pattern recognition, it is hard to imagine the system would even be needed if it weren't AI.

Emails among IRCC staff actively discussing the use of Chinook to monitor approval and refusal rates utilizing "Module 6"

Looking at the Chinook Module’s themselves, Quality Assurance (“QA”) is built in as a module. It is hard to imagine a QA system that looks at refusal and approval rates, and automates processes and is not AI.

As this article points out:

Software QA is typically seen as an expensive necessity for any development team; testing is costly in terms of time, manpower, and money, while still being an imperfect process subject to human error. By introducing artificial intelligence and machine learning into the testing process, we not only expand the scope of what is testable, but also automate much of the testing process itself.

Given the volume of files that IRCC is dealing with, it is unlikely that the QA process relies only on humans and not technology (else why would Chinook be implemented). And if it involves technology and automation (a word that shows up multiple times in the Chinook Manual) to aid the monitoring of a subjective administrative decision – guess what – it is AI.

We also know that Chinook is underpinned by ways to process data, look at historical approval and refusal rates, and flag risks. It also integrates with Watchtower to review the risk of applicants.

It is important to note that even in the Daponte Affidavit in Ocran, which alongside ATIPs is the only information we have about Chinook, the focus has always been on the first five modules. Without knowledge of the true nature of something like Module 7, titled 'ToolBox', it is certainly premature to label the whole system as not AI.


Difficult to Argue Chinook is Purely Process Automation Given Degree of Judgment Exercised by System in Setting Up Findecs (Final Decisions)

Where IRCC might be trying to carve a distinction is between process automation/digital transformation and automated decision support systems.

One could argue, for example, that most of Chinook is process automation.

For example, the very underpinning of Chinook is that it allows the entire application to be made available to the Officer in one centralized location, without opening the many windows that GCMS required. Data points and fields auto-populate from an application and GCMS into the Chinook software, allowing the Officer to render decisions more easily. We get this. It is not debatable.

But does it cross into an automated decision support system? Is some degree of judgment that would traditionally be exercised by humans passed on to technology when Chinook is applied?

As IRCC defines:

Chinook directly assists an Officer in approving or refusing a case. Officers do have to apply discretion in refusing, but Chinook presents and automates the process. Furthermore, it has fundamentally reversed the decision-making process, making it a decide-first, justify-later approach with the refusal notes generator. Chinook simply does not exist without AI generating the framework, setting up the bulk categories, and automating an Officer's logical reasoning process.

These systems replace the process of Officers needing to manually review documents, render a final decision, and take notes to file to justify their decision. It is to be noted that this is still the process at low-volume/Global North visa offices, where decisions are made this way and are reflected in extensive GCMS notes.

In Chinook, any notes taken are hidden and deleted by the system, and a template of bulk refusal reasons auto-populates, replacing and shielding the actual factual context of the matter from scrutiny.
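As an illustration of why this feels like decide-first, justify-later, consider what a template-based refusal notes generator reduces to. The template wording below is paraphrased boilerplate of my own; the actual Chinook templates are not public:

```python
# Hypothetical sketch of a refusal notes generator: canned reasons are
# selected and pasted as the written rationale. Template text is
# paraphrased boilerplate, not IRCC's actual templates.

REFUSAL_TEMPLATES = {
    "purpose_of_visit": ("I am not satisfied the applicant will leave Canada at "
                         "the end of their stay, based on the purpose of visit."),
    "personal_assets": ("I am not satisfied the applicant will leave Canada at "
                        "the end of their stay, based on personal assets and "
                        "financial status."),
}

def generate_refusal_notes(selected_grounds: list[str]) -> str:
    # Nothing case-specific enters the notes: the applicant's actual facts
    # never reach the written reasons.
    return "\n".join(REFUSAL_TEMPLATES[g] for g in selected_grounds)
```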

It is hard to see how this is not AI. Indeed, if you look at the comparables provided (the eTA, Visitor Record, and Study Permit Extension automations in GCMS), similar GCMS automations underpin Chinook. There may be a little more human interaction, but as discussed below, a human monitoring or implementing an AI/advanced analytics/triage system does not remove the AI elements.


Human in the Loop is Not the Defining Feature of AI

The defense we have been hearing from IRCC is that there is a human ultimately making a decision, therefore it cannot be AI.

This obscures a different concept called human-in-the-loop, which the Policy Playbook suggests actually needs to be part of all automated decision-making processes. If you are following, this means that the defense that a human is involved (therefore not AI) is actually a key defining requirement IRCC has placed on AI systems.

It is important to note that there certainly is a spectrum of AI application at IRCC that appears to be leaning away from human-in-the-loop. For example, IRCC has disclosed in its Algorithmic Impact Assessment ("AIA") for the Advanced Analytics Triage of Overseas Temporary Resident Visa ("TRV") Applications that there is no human in the loop for the automation of Tier 1 approvals. The same approach, without a human in the loop, is used for automating eligibility approvals in the Spouse-in-Canada program, which I will write about shortly.


Why the Blurred Line Between Process Automation and Automated Decision-Making Process Should Not Matter – Both Need Oversight and Review

Internally, this is an important distinguishing characteristic for IRCC because it appears that at least internal/behind-the-scenes strategizing and oversight (if that is what the Playbook represents) applies only to automated decision-support systems and not business automations. Presumably such a classification may allow for less need for review and more autonomy by the end user (Visa Officer).

From my perspective, we should focus on the last part of what IRCC states in their playbook – namely that ‘staff should consider whether automation that seems removed from final decisions may inadvertently contribute to an approval or a refusal.’

To recap and conclude: the whole purpose of Chinook is to render approvals and refusals in a quicker, bulk fashion to save Officers' time. The automation of all functions within Chinook therefore contributes to the final decision, not inadvertently but directly. The very manner in which decisions are made in immigration shifts as a result of the use of Chinook.

Business automation cannot and should not be used as a cover for the ways that seemingly routine automations actually affect processing that would otherwise have been done by humans: feeding decision-makers a particular type of data, displayed on the screen in a manner that can fetter their discretion and alter the business of old.

That use of computer technology – the creation of Chinook – is 100% definable as the implementation of AI.

