Predictive/Advanced Analytics + Chinook – Oversight = ?

In the September 2021 issue of Lexbase, my mentor Richard Kurland provides further insight into what happens behind the scenes of Immigration, Refugees and Citizenship Canada (“IRCC”) processing, specifically in a section titled “Overview of the Analytics-Based Triage of Temporary Resident Visa Applications.”

At the outset, a big thank you to the “Insider” Richard Kurland for the hard digging that allows us to provide this further analysis.


What the Data Suggests

I encourage all of you to check out the first two pages of the Lexbase issue, as they contain direct disclosure from IRCC’s Assistant Director, Admissibility, describing how artificial intelligence is implemented for Temporary Resident Visas (“TRVs”), specifically in China and India, the two countries where it has been rolled out so far. By way of this June 2020 disclosure, we confirm that IRCC has been utilizing these systems for online applications since April 2018 for China and August 2018 for India, and for Visa Application Centre (“VAC”) based applications since January 2020.

To summarize (again – go read Lexbase and contact Richard Kurland for all the specific details and helpful tables), we learn that there is a three-tier processing system in play. It filters applications into the simplest (Tier 1), medium complexity (Tier 2), and higher complexity (Tier 3). While human officers are involved in all three tiers, Tier 1 allows a model to recommend approval based on analytics, whereas Tier 2 and Tier 3 files are flagged for manual processing. IRCC claims that the process is only partially automated.

The interesting factor, given that our law firm has been focusing heavily on India, is how the designation of a file as Tier 2 drives approval rates from the high nineties (%) down to 63% for online India applications and 37% for India VAC applications. Moving to Tier 3, the rates are only 13% for online India and 5% for India VAC. Deeming a file Tier 3 appears to make refusal a near certainty.

What is fascinating is how this information blends in the usage of “Officer Rules,” the first-stage filter that actually precedes the computerized three-tier triage and targets cases with a higher likelihood of ineligibility or inadmissibility.

The Officer Rules system would be the system utilized at other global visa offices that do not use the computerized AI decision-making applied in India and China. Looking specifically at the case of India, the Officer Rules system actually approves cases at a much higher rate (53% for online India and 38% for India VAC).

These rates are in fact comparable to those of Tier 2 moderately complex cases – ones that presumably do not contain the serious ineligibility and inadmissibility concerns of Officer Rules or Tier 3 cases. It suggests that the addition of technology can push even a moderately complex case toward the same outcomes as a hand-pulled complex case.

Ultimately, this suggests that complete human discretion, or time spent assessing factors, can produce much more favourable outcomes than when machines contribute to overall decision-making.

It Comes Down to Oversight and How These Systems Converge

Recently, we’ve been discussing in YouTube videos (here and here), podcasts, and articles IRCC’s Chinook system for processing applications. Using an Excel-based model (although now moving to an Amazon-based model in its latest version), applicant data are extracted into rows containing batch information for several applicants, presumably allowing all the analytics to be assessed.

Given that we know IRCC takes historic approval rates and data as a main driving factor, it is reasonable to think Immigration Officers are given these numbers as internal targets. I am sure, as well, that with major events like COVID and the general dissuasion of travel to Canada, these goalposts can be moved and expanded as directed.

An Excel-based system tracking approvals and refusals likely puts these stats front and centre in an officer’s (or a machine’s) exercise of discretion on an application. To use a teaching analogy again (clearly I miss teaching), I used a similar ‘Speedgrader’-type app which often forced me, mid-marking, to revisit exams I had already graded because I had awarded class-average marks that were too high. I have no doubt a parallel system exists at IRCC.

What this all means, as my colleague Zeynab Ziaie has pointed out in our discussions, is that there are major concerns that Chinook and the AI systems have not been developed and rolled out with adequate lawyer/legal input and oversight, which leads to questions about accountability. To use the Chinook example, what if the working notes that are deleted contain the very information needed to justify or shed light on how an application was processed?

My follow-up question is: how are the predictive/advanced analytics systems utilized for India and China TRVs influencing Chinook? Where is the notation to know whether one’s file was pre-assessed through “Officer Rules” or through the tiers? I quickly reviewed a few GCMS notes prior to this call, and though we can tell whether a file was pre-assessed, we have no clue which tier it landed in.

Furthermore, how do we ensure that the visa office’s subjective “Officer Rules,” or the analytical factors that make up the AI system, are not being applied in a discriminatory manner to filter cases into a more complex stream? For example, back in 2016 I pointed out how the visa-office training guides in China discriminate regionally and geographically against those applying from certain provinces, assigning them character traits and misrepresentation risks. We know in India, thanks to the work of my mentor Raj Sharma, that the Indian visa offices have a training guide on genuine relationships and marriage fraud that may not accord with realities.

Assuming that this AI processing system is still being used only for TRVs and not for any other permits, it must be catching (with the assistance of Chinook’s keyword indicators, no less) words such as “marriage,” the names of rural communities, marital status, perhaps the addresses of unauthorized agents, and businesses that have often been used as cover for support letters. Within that list there is a mix of good local knowledge, but also the very stereotypes that have historically kept families apart and prevented individuals from visiting without holding a study permit or work permit.

If we find out, for example, that filtering for complex cases only happens at visa offices with high refusal rates or in the Global South, does that make the system unduly discriminatory?

We acknowledge, of course, that the very process of having to apply to enter the border, with its division between TRV-requiring and electronic Travel Authorization (eTA)-requiring countries, is discriminatory by nature. But what happens when outcomes on similar facts are so discrepant?

In other areas of national bureaucracy, governments have moved to blind processing to try to limit discrimination around ethnic names, avoid basing decisions on certain privileges (such as the ability to travel and engage in previous work), and remove identifying features that might lead to bias. Immigration is the opposite: the decision-maker sees the applicant’s picture, their age, where they are from, and why they want to come (purpose of visit). As we have learned from Chinook, that is the baseline information being extracted for officers to base their decisions on.

When, as a society, do we decide to move away (as we have before) from what were once harmful norms toward new realities? Who makes the call, or calls for reviews, on things such as consistency, or on whether a particular discriminatory input in the AI system is no longer consistent with Charter values?

Right now, it is all within the officer’s discretion and, by extension, the visa offices’, but I would recommend that a unified committee of legal experts and race/equity scholars advise on the strings of the future, inevitable AI systems. This would also unify practice across visa offices so that there is less discrepancy in the way systems render decisions. While it makes sense that higher-volume visa offices have more tools at their disposal, access to human decision-makers, and to an equal standard of decision-making, should not depend on where you live. We do not want to reach a place where immigration applicants are afraid to present their stories or speak their truths for fear of being filtered out by artificial intelligence. From my perspective, we are better off being transparent and setting legitimate expectations.

What are your thoughts on the introduction of AI, the interaction with Chinook, and the need for oversight? Feel free to engage in the comments below or on social media!

Thanks again for reading.