Advanced Analytics


Blog Posts

Why the 30-Year-Old Florea Presumption Should Be Retired in the Face of Automated Decision Making in Canadian Immigration

In the recent Federal Court decision of Hassani v. Canada (Citizenship and Immigration), 2023 FC 734, Justice Gascon writes a paragraph that I thought would be an excellent starting point for a blog. Not only does it capture the state of administrative decision-making in immigration and highlight some of the foundational pieces, but it also contains one part that, I respectfully suggest, needs a rethink.

Hassani involved an Iranian international student who was refused a study permit to attend a Professional Photography program at Langara College. She was refused on two grounds: [1] that she did not have significant family ties outside Canada; and [2] that her purpose of visit was not consistent with a temporary stay, given the details she had provided in her application. On the facts, it is definitely questionable that this case even went to hearing, given the Applicant had no family ties in Canada and all her family ties were indeed outside Canada, in Iran. Nevertheless, Justice Gascon did a very good job analyzing the flaws within the Officer’s two findings.

There is one paragraph, paragraph 26, that is worth breaking down further – and there is one foundational principle cited in it that I think needs a major rethink.

Justice Gascon writes:

[26] I do not dispute that a decision maker is generally not required to make an explicit finding on each constituent element of an issue when reaching its final decision. I also accept that a decision maker is presumed to have weighed and considered all the evidence presented to him or her unless the contrary is shown (Florea v Canada (Minister of Employment and Immigration), [1993] FCJ No 598 (FCA) (QL) at para 1). I further agree that failure to mention a particular piece of evidence in a decision does not mean that it was ignored and does not constitute an error (Cepeda-Gutierrez v Canada (Minister of Citizenship and Immigration), 1998 CanLII 8667 (FC), [1998] FCJ No 1425 (QL) [Cepeda-Gutierrez] at paras 16–17). Nevertheless, it is also well established that a decision maker should not overlook contradictory evidence. This is particularly true with respect to key elements relied upon by the decision maker to reach its conclusion. When an administrative tribunal is silent on evidence clearly pointing to an opposite conclusion and squarely contradicting its findings of fact, the Court may intervene and infer that the tribunal ignored the contradictory evidence when making its decision (Ozdemir v Canada (Minister of Citizenship and Immigration), 2001 FCA 331 at paras 9–10; Cepeda-Gutierrez at para 17). The failure to consider specific evidence must be viewed in context, and it will lead to a decision being overturned when the non-mentioned evidence is critical, contradicts the tribunal’s conclusion and the reviewing court determines that its omission means that the tribunal disregarded the material before it (Penez at paras 24–25). This is precisely the case here with respect to Ms. Hassani’s family ties in Iran. (emphasis added)

 

What is the Florea Presumption?

As stated by Justice Gascon, the principle in Florea v Canada (Minister of Employment and Immigration), [1993] FCJ No 598 (FCA) pertains to a Tribunal’s weighing of evidence and the presumption that it has considered all the evidence before it. It places the onus on the Applicant who asserts otherwise to establish the contrary.

As the Immigration and Refugee Board Legal Services chapter on Weighing Evidence states:

Rather, the panel is presumed on judicial review to have weighed and considered all of the evidence before it, unless the contrary is established. (see: https://irb.gc.ca/en/legal-policy/legal-concepts/Documents/Evid%20Full_e-2020-FINAL.pdf)

This case and principle are often cited in refugee matters, humanitarian and compassionate grounds matters, inadmissibility cases, and IRB matters.

Reviewing case law from the last two years (since 2021), I found that a handful of the thirty cases I reviewed engaged this case and principle in a temporary resident context.

See e.g. study permit JR – Marcelin v. Canada (Citizenship and Immigration), 2021 FC 761 at para 16, per Madam Justice Roussel [JR dismissed]; PNP Work Permit – Shang v. Canada (Citizenship and Immigration), 2021 FC 633 at para 65, citing Basanti v Canada (Citizenship and Immigration), 2019 FC 1068 at para 24, per Madam Justice Kane [JR allowed]; Minor Child TRV Refusal – Dardari v. Canada (Citizenship and Immigration), 2021 FC 493 at para 39, adding the portion “and is not obliged to refer to each piece of evidence submitted by the applicant”, per Madam Justice St-Louis [JR dismissed].

Related to this is the long-standing and oft-cited decision of Cepeda-Gutierrez v. Canada (Citizenship and Immigration), [1998] FCJ No 1425 (FC), in which Justice Evans reiterated that an Agency stating it considered all the evidence before it (even as a boilerplate statement) would usually suffice to assure the parties and the Court of this. He writes:

[16] On the other hand, the reasons given by administrative agencies are not to be read hypercritically by a court (Medina v. Canada (Minister of Employment and Immigration) (1990), 12 Imm. L.R. (2d) 33 (F.C.A.)), nor are agencies required to refer to every piece of evidence that they received that is contrary to their finding, and to explain how they dealt with it (see, for example, Hassan v. Canada (Minister of Employment and Immigration) (1992), 147 N.R. 317 (F.C.A.)). That would be far too onerous a burden to impose upon administrative decision-makers who may be struggling with a heavy case-load and inadequate resources. A statement by the agency in its reasons for decision that, in making its findings, it considered all the evidence before it, will often suffice to assure the parties, and a reviewing court, that the agency directed itself to the totality of the evidence when making its findings of fact.

(emphasis added)

 

Why the Florea Presumption Should Be Reversed For Temporary Resident Applications and Any Decision Utilizing Advanced Analytics/AI/Chinook/Cumulus/Harvester

My argument is that this presumption that all evidence has been considered, as well as the boilerplate template language stating that it was considered, should not apply universally in 2023.

We know enough – again, not enough about the system writ large, but enough – to know that systems such as Chinook were created to facilitate the processing of temporary resident applications in hundreds of seconds, to extract data into Excel tables for bulk processing, and to automate eligibility approvals. These features were designed specifically to allow Officers to spend less time and to consider enough, not all, of the evidence before them to render a decision.
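
To make that concern concrete, here is a minimal sketch (in Python) of what extracting applications into a spreadsheet-style table for bulk review might look like. The field names, sample data, and bulk_extract function are my own hypothetical illustration, not IRCC’s actual Chinook schema or code:

```python
import csv

# Hypothetical illustration only: the field names, sample data, and function
# below are assumptions, not IRCC's actual Chinook schema or code.
APPLICATIONS = [
    {"uci": "00-0001", "country": "IR", "program": "Study Permit",
     "family_ties_abroad": True,
     "documents": ["acceptance letter", "bank statement", "study plan"]},
    {"uci": "00-0002", "country": "IN", "program": "TRV",
     "family_ties_abroad": True,
     "documents": ["invitation letter", "employment letter"]},
]

def bulk_extract(applications, path):
    """Flatten each application into a single spreadsheet row for bulk review.

    Note what is lost in the flattening: only a few summary fields survive,
    and the underlying documents are reduced to a count - which is exactly
    why the presumption that 'all evidence was considered' becomes strained.
    """
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["UCI", "Country", "Program",
                         "Family ties abroad", "Document count"])
        for app in applications:
            writer.writerow([app["uci"], app["country"], app["program"],
                             app["family_ties_abroad"], len(app["documents"])])

bulk_extract(APPLICATIONS, "bulk_review.csv")
```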

I think the fact that applications are being auto-approved on eligibility, based simply on a set of inputted rules drawing primarily on an applicant’s biometric information, should be enough to raise doubts about whether these systems even require consideration of most of the evidence an applicant submits.

All the materials on bulk processing that IRCC has released in the past few years have focused on the fact that not all documents need to be reviewed (note the wording that states: review Additional Documents, as required).

 

IRCC Officer Training Guide Obtained Through ATIP

 

IRCC Visa Office Training Guide Obtained Through ATIP

If you look at the Daponte Affidavit and the original Module 3 Prompt that was created, it does not inspire confidence that all documents necessarily needed to be reviewed:

Daponte Affidavit from Ocran

We learned that, in response to concerns, a prompt was added to Chinook reminding Officers to review all materials, but it is clear Chinook has gone far beyond ‘review and initial assessment’ into bulk processing.

Even with Cumulus, it is clear that documents not converted to e-Docs have to be pulled up separately in GCMS, the very tedious process that tools such as Cumulus seek to avoid.

Cumulus Training Manual Obtained Through ATIP

I would presume that it would be much easier for an Officer to make a decision based on these summary extractions than to go into the documents themselves.

Cumulus Training Guide Obtained Through ATIP

The documents are viewed, as shown below, in something much more akin to a ‘preview’ mode.

Cumulus Training Guide Obtained Through ATIP

Harvester, a tool that facilitates the conversion of documents into a reviewable format, is similarly based on what documents can be extracted.

Harvester User Guide Obtained Via ATIP

The way it is described, and the fact that some offices can exclude certain documents, already suggests that not all documents make it into the Officer’s purview.

Most important is the constraint of time. As Andrew Koltun has uncovered, IRCC spends 101 seconds on average per application when processing with Chinook. https://theijf.org/nearly-40-per-cent-of-student-visa-applications-from-india-rejected-for-vague-reasons#

Respectfully, 101 seconds cannot be enough to consider more than one or two documents – max – before rendering a decision. The future use of Large-Language […]


A Closer Look at How IRCC’s Officer and Model Rules Advanced Analytics Triage Works

As IRCC ramps up to bring advanced analytics to all of its Lines of Business (LOBs), it is important to take a closer look at the foundational model: the China TRV Application Process. Indeed, we know that this TRV model will become the TRV model for the rest of the world sometime this year (if not already).

While this chart is from a few years back (reflecting, as I have discussed in many recent presentations and podcasts, how behind we are in this area), my understanding is that this three-Tier system is still the model in place.

Over the next few posts, I’ll try to break down the model in more detail.

This first post will serve as an overview of the process.

I have included a helpful chart explaining how an application goes from intake to decision made and passport request.

While I will have blog posts that go into more detail about what ‘Officer Rules’ and ‘Model Rules’ are, here is the basic gist. A reminder: the chart only represents the process to approval, NOT refusal, and a similar chart for refusals was not provided.

Step 1) Officer’s Rules Extract Applications Out Based on Visa Office-Specific Rules

Each Visa Office has its own Officer’s Rules. If an application triggers one of those rules, it no longer gets processed via the Advanced Algorithm/AI model. Think of it as a first filter, likely for those complex files that need a closer look by IRCC.

You will recall, in our discussion of Chinook, the presence of “local word flags” and “risk indicators.” I do not yet have evidence linking these two pieces together, but presumably the Officer Rules must also be triggered by certain words and flags.

Other than this, we are uncertain about what Officer’s Rules are, and we should not expect to know. However, we do know that the SOPs (Standard Operating Procedures) at each Visa Office then apply, rather than the AA/AI model. This suggests that the SOPs (and access to these documents) may hold the triggers for the word flags.
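
As a thought experiment only, a first-pass word-flag filter of the kind described above might look something like the following sketch. The office names, flag words, and triggers_officer_rules function are invented placeholders; we do not know what any Visa Office’s actual Officer Rules contain:

```python
# Hypothetical sketch of a visa-office-specific first filter. The offices,
# flag words, and structure are invented for illustration; the content of
# actual Officer Rules is not public.
OFFICE_WORD_FLAGS = {
    "Visa Office A": {"unauthorized agent", "example flag"},
    "Visa Office B": {"another example flag"},
}

def triggers_officer_rules(application_text, visa_office):
    """Return True if the application should be pulled out of the AA/AI
    stream and handled under the visa office's own SOPs instead."""
    flags = OFFICE_WORD_FLAGS.get(visa_office, set())
    text = application_text.lower()
    return any(flag in text for flag in flags)
```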

Step 2) Application of Model Rules

This is where the AA/AI kicks in. Model Rules (which I will discuss in a future blog post) are created by IRCC data experts to separate applications into Tiers with a high degree of confidence. Tier 1 comprises the applications that, to a high level of confidence, should lead the Applicant to obtain positive eligibility findings. Indeed, Tier 1 applications are decided with no human in the loop: the computer system approves them. If an application is likely to fail the eligibility process and lead to a negative outcome, it goes to Tier 3. Tier 3 requires Officer review and – unsurprisingly – has the highest refusal rate, as we have discussed in this previous piece.

It is those files that sit between positive and negative (the ‘maybe’ files), as well as the ones that fit neither the Model Rules nor the Officer Rules, that become Tier 2. Officers also have to review these cases, but the approval rates are better than Tier 3.
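
Purely as an illustration of the logic described above – not IRCC’s actual model – the Tier split can be thought of as thresholding a model’s confidence score, with files the Model Rules cannot place defaulting to Tier 2. The assign_tier function and its thresholds are my own assumptions:

```python
def assign_tier(approval_confidence):
    """Hypothetical tiering logic mirroring the description above.

    Tier 1: high confidence of positive eligibility -> approved with no human in the loop.
    Tier 3: likely negative outcome -> mandatory Officer review, highest refusal rate.
    Tier 2: the 'maybe' files, and anything the Model Rules cannot score -> Officer review.
    The 0.9 / 0.1 thresholds are invented for illustration only.
    """
    if approval_confidence is None:      # does not fit the Model Rules
        return 2
    if approval_confidence >= 0.9:
        return 1
    if approval_confidence <= 0.1:
        return 3
    return 2
```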

Step 3) Quality Assurance

The Quality Assurance portion of this model filters 10% of all files to Tier 2 to verify the accuracy of the model.
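
In code terms, the QA step described above amounts to diverting a random ten percent of files into the human-reviewed Tier 2 stream so that model outputs can be checked against Officer decisions. Again, this is a hedged sketch under my own assumptions, not IRCC’s implementation:

```python
import random

QA_SAMPLE_RATE = 0.10  # "10% of all files" per the description above

def route_with_qa(model_tier):
    """Divert a random 10% of files to Tier 2 so the model's calls can be
    checked against human review. A sketch only; the real mechanism is not public."""
    return 2 if random.random() < QA_SAMPLE_RATE else model_tier
```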

The models themselves become ‘production models’ when a high level of confidence is met and they are finalized – such as the ones we have seen for China TRVs and India TRVs, we believe also for China and India Study Permits, and likely also for cases such as VESPA (though this part has not been confirmed). Before it becomes a Production Model, a model sits in the Exploratory model zone.

How do we know there is a high level of QA confidence? Well, this is where we look at the scoring of the file.

I will break down this particular model later (and frankly I need to do more research); it will be the subject of a later piece. In short, applications are scored to ensure the model is working effectively.

It is interesting that Chinook also has a QA function (and a whole Chinook QA module, Module 6), so it appears there is even more overlap between the two systems, probably akin to a front-end/back-end type relationship.

Step 4) Pre-Assessment

Tier 1 applications go straight to admissibility review, but those in Tiers 2 and 3 go to a pre-assessment review by a Clerk.

It is important to note, here and in the module, that these clerks and officers appear to be sitting in CPC-O, not the local visa offices abroad. This may also explain why so many more decisions are being made by Canadian decision-makers, even though a decision may ultimately be delivered by, or associated with, a primary visa office abroad.

But herein lies a bit of our confusion.

Based on a 2018 ATIP we did, we know that they were triaging cases based on case types into “Bins” so that certain officers – or at least certain lettered numbers – would handle like cases. Yet this appears to have been the India model at the time, and the China TRV model seems to centralize things more in Ottawa. Where does the local knowledge and expertise come in? Are there alternative models now that send decisions to the local visa office, or is it only through Officer’s Rules? Is this perhaps why decisions rendered on TRVs from India and China lack the actual local knowledge we used to see in decisions – because they have been taken out of the hands of those individuals?

Much of the local work used to involve verifying employers and confirming certain elements, but is that now reserved for those files that are pulled out of the triage and flagged as possible admissibility concerns? Much to think about here.

Again, note that Chinook has a pre-assessment module that also seems to be responsible for many of the same things, so perhaps Chinook is responsible for presenting the results of that analysis in a more Officer-friendly way – but why is it also directing the pre-assessment if that work is being done by clerks and officers?

Step 5) Eligibility Assessment

What is important to note is that at this stage – Eligibility – where there is no automated approval, the assessment is still being done by Officers. What we do not know is whether there is any guidance directing Officers to approve/refuse a certain number of Tier 2 or Tier 3 applicants. This information would be crucial. We also know IRCC is trying to automate refusals, so we need to track carefully what that might look like down the road as it intersects with negative eligibility assessments.

Step 6) Admissibility Review + Admissibility Hits

While this will likely be the last portion to be automated, given the need to cross-verify many different sources, we also know that IRCC has programs in place such as Watchtower and, again, the Risk Flags, which may or may not trigger admissibility review. Interestingly, even cases where admissibility (misrepresentation) may be at play seem also to lead to eligibility refusals or concerns. I would be interested in knowing whether the flagging also occurs at the eligibility level, or whether there is a feedback/pushback system so a decision can be re-routed to eligibility (on an A16 IRPA issue, for example).

KEY: Refusals Not Reflected in Chart

What does the refusal system look like? This becomes another key question, as decisions often skip even biometrics or verifications and go straight to refusal. Such a chart would obviously look much more complicated, with probably many more steps at which a refusal can be rendered without having to complete the full eligibility assessment.

Is there a similar map? Can we get access to it?

 

Conclusion – we know nothing yet, but this also changes everything

This model, and this idea of an application being taken out of the assembly line at various places and going through different systems of assessment, really suggests to my mind that we, as applicants’ counsel, know very little about how our applications will be processed in the future. These systems do not support cookie-cutter lawyering, suggest flags may be out of our control and knowledge, and ultimately lead us to question what and who makes up a perfect Tier 1 application.

Models like this also give credence to IRCC’s determination to keep things private and to keep the algorithms and code away from prying investigators and researchers, and ultimately away from those who may want to take advantage of the systems.

Yet the lack of transparency, and the concerns we have about how these systems filter and sort, appear well-founded. Chinook mirrors much of what is in the AA model. We have our work cut out for us.


Predictive/Advanced Analytics + Chinook – Oversight = ?

In the September 2021 issue of Lexbase, my mentor Richard Kurland provides further insight into what happens behind the scenes of Immigration, Refugees and Citizenship Canada (“IRCC”) processing, specifically in a section titled “Overview of the Analytics-Based Triage of Temporary Resident Visa Applications.”

At the outset, a big thank you to the “Insider,” Richard Kurland, for the hard digging that allows us to provide this further analysis.

 

What the Data Suggests

I encourage all of you to check out the first two pages of the Lexbase issue, as they contain direct disclosure from IRCC’s Assistant Director, Admissibility, opening up the process by which Artificial Intelligence is implemented for Temporary Resident Visas (“TRVs”), specifically in China and India, the two countries where it has been implemented so far. By way of this June 2020 disclosure, we confirm that IRCC has been utilizing these systems for online applications since April 2018 for China, since August 2018 for India, and for Visa Application Centre (“VAC”) based applications since January 2020.

To summarize (again – go read Lexbase and contact Richard Kurland for all the specific details and helpful tables), we learn that there is a three-Tier processing system in play. It filters applications into the simplest applications (Tier 1), medium-complexity applications (Tier 2), and higher-complexity applications (Tier 3). While human officers are involved in all three Tiers, Tier 1 allows a model to recommend approval based on analytics, whereas Tier 2 and Tier 3 are flagged for manual processing. IRCC claims that the process is only partially automated.

The interesting factor – particularly given that we as a law firm have been focusing a lot on India – is how the designation of a file as Tier 2 drives the approval rates down from the high nineties (%) to 63% for online India applications and 37% for India VAC applications. Moving to Tier 3, the approval rate is only 13% for online India and 5% for India VAC. Deeming a file Tier 3 appears to make refusal a near certainty.

What is fascinating is how this information blends with the usage of “Officer Rules,” the first-stage filter which actually precedes the computerized three-Tier triage and is targeted at cases with a higher likelihood of ineligibility or inadmissibility.

The Officer Rules system would be the system utilized at other global visa offices that do not use the computerized AI decision-making applied in India and China. Looking specifically at the case of India, the Officer Rules system actually approves cases at a much higher rate (53% for online India and 38% for India VAC).

These rates are in fact comparable to Tier 2 moderately complex cases – ones that presumably do not contain the serious ineligibility and inadmissibility concerns of Officer Rules or Tier 3 files. It suggests that the addition of technology can sway even a moderately complex case into the same outcomes as a hand-pulled complex case.
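
Collecting the percentages reported above into one place makes the comparison easier to see. The figures are simply those quoted from the Lexbase disclosure; the data structure and printout are mine:

```python
# Approval rates for India TRV applications as quoted above from the
# Lexbase disclosure; only the data structure and printout are mine.
INDIA_APPROVAL_RATES = {
    "Tier 1":                 "high 90s %",
    "Tier 2 (online)":        "63%",
    "Tier 2 (VAC)":           "37%",
    "Tier 3 (online)":        "13%",
    "Tier 3 (VAC)":           "5%",
    "Officer Rules (online)": "53%",
    "Officer Rules (VAC)":    "38%",
}

for stream, rate in INDIA_APPROVAL_RATES.items():
    print(f"{stream:<24} {rate}")
```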

Ultimately, this suggests that complete human discretion, or time spent assessing factors, can be much more favourable to applicants than when machines contribute to overall decision-making.

It Comes Down to Oversight and How These Systems Converge

Recently, we have been discussing IRCC’s Chinook system for processing applications in YouTube videos (here and here), podcasts, and articles. Using an Excel-based model (although now moving to an Amazon-based model in the latest version), applicants’ data are extracted into rows that contain batch information for several applicants, presumably allowing for all the analytics to be assessed.

Given we know IRCC takes historic approval rates and data as a main driving factor, it is reasonable to think Immigration Officers are given these numbers as internal targets. I am sure, as well, that with major events like COVID and the general dissuasion of travel to Canada, these goalposts can be moved and expanded as directed.

An Excel-based system tracking approvals and refusals likely puts these stats front and centre as an officer (or a machine) exercises discretion on an application. Again, to use a teaching analogy (clearly I miss teaching), I used a similar ‘SpeedGrader’-type app which often forced me, mid-marking, to revisit exams I had already graded because the class average marks I had awarded were too high. I have no doubt a parallel system exists at IRCC.

What this all means, as my colleague Zeynab Ziaie has pointed out in our discussions, is that there are major concerns that Chinook and the AI systems have not been developed and rolled out with adequate lawyer/legal input and oversight, which leads to questions about accountability. Using the Chinook example, what if the working notes that are deleted contain the very information needed to justify or shed light on how an application was processed?

My follow-up question is: how are the predictive/advanced analytics systems utilized for TRVs in India and China influencing Chinook? Where is the notation to tell us whether a file was pre-assessed through “Officer’s Rules” or through the Tiers? I quickly reviewed a few GCMS notes prior to this call, and though we know whether a file was pre-assessed, we have no clue which Tier it landed in.

Furthermore, how do we ensure that the visa-office-specific, subjective “Officer Rules” or the analytical factors that make up the AI system are not being applied in a discriminatory manner to filter cases into a more complex stream? For example, back in 2016 I pointed out how the Visa Office training guides in China regionally and geographically discriminate against those applying from certain provinces, assigning them character traits and misrepresentation risks. We know in India, thanks to the work of my mentor Raj Sharma, that the Indian visa offices have a training guide on genuine relationships and marriage fraud that may not accord with realities.

Assuming that this AI processing system is still being used only for TRVs and not for any other permits, it must be catching (with the assistance of Chinook’s keyword indicators, no less) words such as marriage, the names of rural communities, marital status, perhaps the addresses of unauthorized agents, and businesses that have often been used as cover for support letters. Within that list there is a mix of good local knowledge, but also the very stereotypes that have historically kept families apart and kept individuals from being able to visit without holding a study permit or work permit.

If we find out, for example, that filtering for complex cases only happens at visa offices with high refusal rates or in the Global South, does that make the system unduly discriminatory?

We acknowledge, of course, that the very process of having to apply to enter the borders, and the division between TRV-requiring and electronic Travel Authorization (eTA)-requiring countries, is discriminatory by nature – but what happens when outcomes on similar facts are so discrepant?

In other areas of national bureaucracy, governments have moved to blind processing to try to limit discrimination around ethnic names or decisions based on certain privileges (ability to travel and engage in previous work), and to remove identifying features that might lead to bias. For immigration, it is the opposite: the decision-maker sees the applicant’s picture, their age, where they are from, and why they want to come (purpose of visit). As we have learned from Chinook, that is the baseline information being extracted for Officers to base their decisions on.

When – as a society – do we decide to move away (as we have before) from what were once harmful norms toward new realities? Who makes the call, or calls for reviews, on things such as consistency, or on whether a particular discriminatory input in the AI system is no longer consistent with Charter values?

Right now, it is all within the Officer’s discretion and, by extension, the Visa Offices’, but I would recommend that some unified committee of legal experts and race/equity scholars advise on the strings of the future, inevitable AI systems. This would also unify things across visa offices so that there is less discrepancy in the way systems render decisions. While it makes sense that heavier-volume visa offices have more tools at their disposal, access to human decision-makers and an equal standard of decision-making should not depend on where you live. We do not want to get to a place where immigration applicants are afraid to present their stories or speak their truths for fear of being filtered out by artificial intelligence. From my perspective, we are better off being transparent and setting legitimate expectations.

What are your thoughts on the introduction of AI, the interaction with Chinook, and the need for oversight? Feel free to engage in the comments below or on social media!

Thanks again for reading.

About Us
Will Tao is an Award-Winning Canadian Immigration and Refugee Lawyer, Writer, and Policy Advisor based in Vancouver. Vancouver Immigration Blog is a public legal resource and social commentary.
