Award-Winning Canadian Immigration and Refugee Law and Commentary Blog

Recent Blog Posts

The DADM’s Noticeable Silence: Clarifying the Human Role in the Canadian Government’s Hybrid Decision-Making Systems [Law 432.D – Op-Ed 2]

This is part 2 of a two-part series sharing Op-Eds I wrote for my Law 432.D course titled “Accountable Computer Systems.” This blog will likely go up on the course website in the near future, but as I am hoping to speak to and reference things I have written in upcoming presentations, I am sharing it here first. This blog discusses the hot topic of ‘humans in the loop’ for automated decision-making systems [ADM]. As you will see from this Op-Ed, I am quite critical of our current Canadian Government self-regulatory regime’s treatment of this concept.

As a side note, there’s a fantastic new resource called TAG (Tracking Automated Government) that I would suggest those researching this space add to their bookmarks. I found it on X/Twitter through Professor Jennifer Raso’s post. For those who are newer to the space or coming to it through immigration, Jennifer Raso’s research on automated decision-making, particularly in the context of administrative law and frontline decision-makers, is exceptional. We are leaning on her research as we develop our own work in the immigration space.

Without further ado, here is the Op-Ed.

The DADM’s Noticeable Silence: Clarifying the Human Role in the Canadian Government’s Hybrid Decision-Making Systems[i]

Who are the humans involved in hybrid automated decision-making (“ADM”)? Are they placed into the system (or loop) to provide justification for the machine’s decisions? Are they there to assume legal liability? Or are they merely there to ensure humans still have a job to do?

Effectively regulating hybrid ADM systems requires an understanding of the various roles played by the humans in the loop and clarity as to the policymaker’s intentions when placing them there. This is the argument made by Rebecca Crootof et al. in their article, “Humans in the Loop” recently published in the Vanderbilt Law Review.[ii]

In this Op-Ed, I discuss the nine roles that humans play in hybrid decision-making loops as identified by Crootof et al. I then turn to my central focus, reviewing Canada’s Directive on Automated Decision-Making (“DADM”)[iii] for its discussion of human intervention and humans in the loop to suggest that Canada’s main Government self-regulatory AI governance tool not only falls short, but supports an approach of silence towards the role of humans in Government ADMs.


What is a Hybrid Decision-Making System? What is a Human in the Loop?

A hybrid decision-making system is one where machine and human actors interact to render a decision.[iv]

The oft-used regulatory definition of humans in the loop is “an individual who is involved in a single, particular decision made in conjunction with an algorithm.”[v] Hybrid systems are purportedly differentiable from “human off the loop” systems, where the processes are entirely automated and humans have no ability to intervene in the decision.[vi]

Crootof et al. challenges the regulatory definition and understanding, labelling it as misleading because its “focus on individual decision-making obscures the role of humans everywhere in ADMs.”[vii] They suggest instead that machines cannot exist or operate independent of humans, and that regulators must therefore take a broader definition and framework for what constitutes a system’s tasks.[viii] Their definition concludes that each human in the loop, embedded in an organization, constitutes a “human in the loop of complex socio-technical systems for regulators to target.”[ix]

In discussing the law of the loop, Crootof et al. describes the numerous ways in which the law requires, encourages, discourages, and even prohibits humans in the loop.[x]

Crootof et al. then labels the MABA-MABA (Men Are Better At, Machines Are Better At) trap,[xi] a common policymaker position that erroneously assumes the best of both worlds in the division of roles between humans and machines, without considering how they can also amplify each other’s weaknesses.[xii] Crootof et al. finds that the myopic MABA-MABA framing “obscures the larger, more important regulatory question animating calls to retain human involvement in decision-making.”

As Crootof et al. summarizes:

“Namely, what do we want humans in the loop to do? If we don’t know what the human is intended to do, it’s impossible to assess whether a human is improving a system’s performance or whether regulation has accomplished its goals by adding a human”[xiii]


Crootof et al.’s Nine Roles for Humans in the Loop and Recommendations for Policymakers

Crootof et al. sets out nine non-exhaustive but illustrative roles for humans in the loop: (1) corrective; (2) resilience; (3) justificatory; (4) dignitary; (5) accountability; (6) stand-in; (7) friction; (8) warm-body; and (9) interface.[xiv] For ease of summary, they have been briefly described in a table attached as an appendix to this Op-Ed.

Crootof et al. discusses how these nine roles are not mutually exclusive and indeed humans can play many of them at the same time.[xv]

One of Crootof et al.’s three main recommendations is that policymakers should be intentional and clear about what roles the humans in the loop serve.[xvi] In another recommendation, they suggest that context matters with respect to the roles’ complexity, the aims of regulators, and the ability to regulate ADMs only once those complex roles are known.[xvii]

Applying this to the EU Artificial Intelligence Act (as it then was[xviii]) [“EU AI Act”], Crootof et al. is critical of how the Act separates the human roles of providers and users, leaving nobody responsible for the human-machine system as a whole.[xix]  Crootof et al. ultimately highlights a core challenge of the EU AI Act and other laws – how to “verify and validate that the human is accomplishing the desired goals” especially in light of the EU AI Act’s vague goals.

Having briefly summarized Crootof et al.’s position, the remainder of this Op-Ed examines a key Canadian regulatory framework, the DADM, and its silence on the question of the human role that Crootof et al. raises.


The Missing Humans in the Loop in the Directive on Automated Decision-Making and Algorithmic Impact Assessment Process

Directive on Automated Decision-Making

Canada’s DADM and its companion tool, the Algorithmic Impact Assessment (“AIA”), are soft-law[xx] policies aimed at ensuring that “automated decision-making systems are deployed in a manner that reduces risks to clients, federal institutions and Canadian society and leads to more efficient, accurate, and interpretable decisions made pursuant to Canadian law.”[xxi]

One of the areas addressed in both the DADM and AIA is that of human intervention in Canadian Government ADMs. The DADM states:[xxii]

Ensuring human intervention


Ensuring that the automated decision system allows for human intervention, when appropriate, as prescribed in Appendix C.


Obtaining the appropriate level of approvals prior to the production of an automated decision system, as prescribed in Appendix C.

Per Appendix C of the DADM, the requirement for a human in the loop depends on the impact level that the agency itself self-assesses through the AIA’s scoring system. For level 1 and 2 (low and moderate impact)[xxiii] projects, there is no requirement for a human in the loop, let alone any explanation of the human intervention points (see table below extracted from the DADM).

I would argue that to avoid explaining further about human intervention, which would then require explaining the role of the humans in making the decision, it is easier for the agency to self-assess (score) a project as one of low to moderate impact. The AIA creates few barriers, and no arm’s-length review mechanism, to prevent an agency from strategically self-scoring a project below the high-impact threshold.[xxiv]

Looking at the published AIAs themselves, this concern that the agency is able to avoid discussing the human in the loop appears to play out in practice.[xxv] Of the fifteen published AIAs, fourteen are self-declared as moderate impact, with only one declared as little-to-no impact. Yet these AIAs are situated in high-impact areas such as mental health benefits, access to information, and immigration.[xxvi] Each of the AIAs contains the same standard language stating that a human in the loop is not required.[xxvii]

In the AIA for the Advanced Analytics Triage of Overseas Temporary Resident Visa Applications, for example, IRCC further rationalizes that “All applications are subject to review by an IRCC officer for admissibility and final decision on the application.”[xxviii] This seems to suggest that a human officer plays a corrective role, but this is not explicitly spelled out. Indeed, it is open to contestation from critics who see the Officer’s role more as a rubber-stamp (dignitary) role subject to the influence of automation bias.[xxix]


Recommendation: Requiring Policymakers to Disclose and Discuss the Role of the Humans in the Loop

While I have fundamental concerns with the DADM itself, namely that it lacks any regulatory teeth, lacks the input of public stakeholders through a comment and due process challenge period,[xxx] and is driven by efficiency interests,[xxxi] I will set aside those concerns for a tangible recommendation for the current DADM and AIA process.[xxxii]

I would suggest that, beyond the question around impact, in all cases of hybrid systems where a human will be involved in ADMs, the policymaker should provide a detailed explanation of what roles these humans will play. While I am not naïve to the fact that policymakers will not proactively admit to engaging a “warm-body” or “stand-in” human in the loop, it at least starts a shared dialogue and puts some onus on the policymaker to substantiate, or rule out, a particular role that it may be assigning.

The specific recommendation I have is to require as part of an AIA, a detailed human capital/resources plan that requires the Government agency to identify and explain the roles of the humans in the entire ADM lifecycle, from initiation to completion.

This idea also seems consistent with best practices in our key neighbouring jurisdiction, the United States. On 28 March 2024, a U.S. Presidential Memorandum aimed at Federal Agencies titled “Advancing […]

Read More »

Filling in Three Missing Peer Reviews for IRCC’s Algorithmic Impact Assessments

As a public service, and transparently because I also need to refer to these in my own work in the area, I am sharing three peer reviews that have not yet been published by Immigration, Refugees and Citizenship Canada (“IRCC”) nor made available on the published Algorithmic Impact Assessment (“AIA”) pages from the Treasury Board Secretariat (“TBS”).

First, a recap. Following the third review of the Directive on Automated Decision-Making (“DADM”), and feedback from stakeholders, it was proposed to amend the peer review section to require the completion and publication of a peer review prior to the system’s production.

The previous iteration of the DADM did not require publication nor specify the timeframe for the peer review. The motivation for this amendment was to increase public trust around automated decision-making systems (“ADM”). As stated in the proposed amendment summary at page 15:

The absence of a mechanism mandating the release of peer reviews (or related information) creates a missed opportunity for bolstering public trust in the use of automated decision systems through an externally sourced expert assessment. Releasing at least a summary of completed peer reviews (given the challenges of exposing sensitive program data, trade secrets, or information about proprietary systems) can strengthen transparency and accountability by enabling stakeholders to validate the information in AIAs. The current requirement is also silent on the timing of peer reviews, creating uncertainty for both departments and reviewers as to whether to complete a review prior to or during system deployment. Unlike audits, reviews are most effective when made available alongside an AIA, prior to the production of a system, so that they can serve their function as an additional layer of assurance. The proposed amendments address these issues by expanding the requirement to mandate publication and specify a timing for reviews. Published peer reviews (or summaries of reviews) would complement documentation on the results of audits or other reviews that the directive requires project leads to disclose as part of the notice requirement (see Appendix C of the directive) (emphasis added)

Based on Section 1 of the DADM, with the 25 April 2024 date approaching, we should see more peer reviews posted for past AIAs.

This directive applies to all automated decision systems developed or procured after . However,

  • 1.2.1 existing systems developed or procured prior to  will have until  to fully transition to the requirements in subsections 6.2.3, 6.3.1, 6.3.4, 6.3.5 and 6.3.6 in this directive;
  • 1.2.2 new systems developed or procured after  will have until  to meet the requirements in this directive. (emphasis added)

The impetus behind the grace period was set out in their proposed amendment summary at page 8:

TBS recognizes the challenge of adapting to new policy requirements while planning or executing projects that would be subject to them. In response, a 6-month ‘grace period’ is proposed to provide departments with time to plan for compliance with the amended directive. For systems that are already in place on the release date, TBS proposes granting departments a full year to comply with new requirements in the directive. Introducing this period would enable departments to plan for the integration of new measures into existing automation systems. This could involve publishing previously completed peer reviews or implementing new data governance measures for input and output data.

During this period, these systems would continue to be subject to the current requirements of the directive. (emphasis added)

The new DADM section states:

Peer review

  • 6.3.5 Consulting the appropriate qualified experts to review the automated decision system and publishing the complete review or a plain language summary of the findings prior to the system’s production, as prescribed in Appendix C. (emphasis added)

Per Appendix C, for Level 2 (Moderate Impact) projects, the level at which all eight of IRCC’s AIA projects are self-classified, the requirement is as follows:

Consult at least one of the following experts and publish the complete review or a plain language summary of the findings on a Government of Canada website:

  • Qualified expert from a federal, provincial, territorial or municipal government institution
  • Qualified members of faculty of a post-secondary institution
  • Qualified researchers from a relevant non-governmental organization
  • Contracted third-party vendor with a relevant specialization
  • A data and automation advisory board specified by Treasury Board of Canada Secretariat

Publish specifications of the automated decision system in a peer-reviewed journal. Where access to the published review is restricted, ensure that a plain language summary of the findings is openly available.

We should then expect movement in the next two weeks.

As I wrote about here, IRCC has posted one of their Peer Reviews, this one for the International Experience Canada Work Permit Eligibility Model. I will analyze this (alongside other peer reviews) in a future blog and explain why I think it is important for the questions it raises about automation bias.

In light of the above, I am sharing three Peer Reviews for IRCC AIAs. These may or may not be the final ones that IRCC eventually posts, presumably before 25 April 2024.

I have posted the document below the corresponding name of the AIA. Please note that the PDF viewer does not work on mobile devices. As such I have also added a link to a shared Google doc for your viewing/downloading ease.

(1) Spouse Or Common-Law in Canada Advanced Analytics [Link]

A-2022-00374_ – Stats Can peer review on Spousal AI Model

Note: We know that IRCC has also been utilizing advanced analytics (“AA”) for Family Class Spousal-Overseas, but as this is a ‘triage only’ model, it appears IRCC has not published a separate AIA for it.

(2) Advanced Analytics Triage for Overseas Temporary Resident Visa Applications [Link]

NRC Data Centre – 2018 Peer Review from A-2022-70246

Note: there is a good chance this was a preliminary peer review before the current model. The core of the analysis is also in French (which I will break down in a further blog).

(3) Integrity Trends Analysis Tool (previously known as Watchtower) [Link]

Pages from A-2022-70246 – Peer Review – AA – Watchtower Peer Review

Note: the ITAT was formerly known as Watchtower, and also Lighthouse. This project has undergone some massive changes in response to peer review and other feedback, so I am not sure if there was a more recent peer review before the ITAT was officially published.

I will share my opinions on these peer reviews in future writing, but I wanted to first put them out there, as their contents will be relevant to work and presentations I am doing over the coming months. Hopefully, IRCC itself publishes these documents so scholars can dialogue on them.

My one takeaway/recommendation in this context is that we should follow what Dillon Reisman, Jason Schultz, Kate Crawford, and Meredith Whittaker suggest in their 2018 report, “Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability,” and allow for meaningful access and a comment period.

If the idea is truly public trust, my ideal process flow sees an entity (say IRCC) publish a draft AIA with peer review and GBA+ report, and allow the public (external stakeholders/experts/the Bar, etc.) to provide comments BEFORE system production starts. I have reviewed several emails between TBS and IRCC, for example, and I am not convinced of the vetting process for these projects. Many of the questions that need to be asked require interdisciplinary subject expertise (particularly in immigration law and policy) that I do not see in the AIA approval process or the peer reviews.

What are your thoughts? I will break down the peer reviews in a blog post to come.

Read More »

Colliding Concepts and an Immigration Case Study: Lessons on Accountability for Canadian Administrative Law from Computer Systems [Op-Ed 1 for Law 432.D Course]

I wrote this Op-Ed for my Law 432.D course titled ‘Accountable Computer Systems.’ This blog will likely be posted on the course website but as I am presenting on a few topics related, I wanted it to be available to the general public in advance. I do note that after writing this blog, my more in-depth literature review uncovered many more administrative lawyers talking about accountability. However, I still believe we need to properly define accountability and can take lessons from Joshua Kroll’s work to do so.



Canadian administrative law, through judicial review, examines whether decisions made by Government decision-makers (e.g. government officials, tribunals, and regulators) are reasonable, fair, and lawful.[i]

Administrative law governs the Federal Court’s review of whether an Officer has acted in a reasonable[ii] or procedurally fair[iii] way, for example in the context of Canadian immigration and citizenship law, where an Officer has decided to deny a Jamaican mother’s permanent residence application on humanitarian and compassionate grounds[iv] or to strip Canadian citizenship from a Canadian born to Russian foreign intelligence operatives charged with espionage in the United States.[v]

Through judicial review and subsequent appellate Court processes, the term accountability has yet to be meaningfully engaged with in Canadian administrative case law.[vi] On the contrary, in computer science, accountability is quickly becoming a central organizing principle and governance mechanism.[vii] Technical and computer science specialists are designing technological tools based on accountability principles that justify their use and perceived sociolegal impacts.

Accountability will need to be better interrogated within the Canadian administrative law context, especially as Government bodies increasingly render decisions utilizing computer systems (such as AI-driven decision-making systems) [viii] that are becoming subject to judicial review.[ix]

An example of this is the growing litigation around Immigration, Refugees and Citizenship Canada’s (“IRCC”) use of decision-making systems utilizing machine-learning and advanced analytics.[x]

Legal scholarship is just starting to scratch the surface of exploring administrative and judicial accountability and has done so largely as a reaction to AI systems challenging traditional human decision-making processes. In the Canadian administrative law literature I reviewed, the discussion of accountability has not involved defining the term beyond stating it is a desirable system aim.[xi]

So, how will Canadian courts perform judicial review and engage with a principle (accountability) that they hardly know?

There are a few takeaways from Joshua Kroll’s 2020 article, “Accountability in Computer Systems” that might be good starting points for this collaboration and conversation.


Defining Accountability – and the Need to Broaden Judicial Review’s Considerations

Kroll defines “accountability” as “a relationship that involves reporting information to that entity and in exchange receiving praise, disapproval, or consequences when appropriate.”[xii]

Kroll’s definition is important as it goes beyond thinking of accountability only as a check-and-balance oversight and review system,[xiii] but also one that requires mutual reporting in a variety of positive and negative situations. His definition embraces, rather than sidesteps, the role of normative standards and moral responsibility.[xiv]

This contrasts with administrative judicial review, a process that is usually only engaged when an individual or party is subject to a negative Government decision (often a refusal or denial of a benefit or service, or the finding of wrongdoing against an individual).[xv]

As a general principle subject to a few exceptions, judicial review limits the Court’s examination to the ‘application’ record that was before the final human officer when rendering their negative decision.[xvi] This makes judicial review a poor vehicle for seeking clarity from the Government about the underlying data, triaging systems, and biases that may form the context for the record itself.

I argue that Kroll’s definition of accountability provides room for this missing context and extends accountability to reporting the experiences of groups or individuals who receive the positive benefits of Government decisions when others do not. The Government currently holds this information as private institutional knowledge, fearing that broader disclosure could lead to scrutiny that might expose fault-lines such as discrimination and Charter[xvii] breaches/non-compliance.[xviii]

Consequently, I do not see accountability’s language fitting perfectly into our existing administrative law context, judicial review processes, and legal tests. Indeed, even the process of engaging with accountability’s definition in law, and the tools for implementing it, will challenge the starting point of judicial review’s deference and culture of reasons-based justification[xix] as being sufficient to hold Government to account.


Rethinking Transparency in Canadian Administrative Law

Transparency is a cornerstone concept in Canadian administrative law. Like accountability, this term is also not well-defined in operation, beyond the often-repeated phrase of a reasonable decision needing to be “justified, intelligible, and transparent.”[xx] Kroll challenges the equivalency of transparency with accountability. He defines transparency as “the concept that systems and processes should be accessible to those affected either through an understanding of their function, through input into their structure, or both.”[xxi] Kroll argues that transparency is a possible vehicle or instrument for achieving accountability but also one that can be both insufficient and undesirable,[xxii] especially where it can still lead to illegitimate participants or lead actors to alter their behaviour to violate an operative norm.[xxiii]

The shortcomings of transparency as a reviewing criterion in Canadian administrative law are becoming apparent in IRCC’s use of automated decision-making (“ADM”) systems. Judicial review applications to the Federal Court are asking judges to consider the reasonableness, and by extension the transparency, of decisions made by systems that are non-transparent – such as security screening automation[xxiv] and advanced analytics-based immigration application triaging tools.[xxv]

Consequently, IRCC and the Federal Court have instead defended and deconstructed pro forma template decisions generated by computer systems[xxvi] while ignoring the role of concepts such as bias, itself a concept under-explored and under-theorized in administrative law.[xxvii] Meanwhile, IRCC has denied applicants and Courts access to mechanisms of accountability such as audit trails and the results of the technical and equity experts who are required to review these systems for gender and equity-based bias considerations.[xxviii]

One therefore must ask: even if full technical system transparency were available, would it be desirable for Government decision-makers to be transparent about their ADM systems,[xxix] particularly with outstanding fears of individuals gaming the system,[xxx] or, worse yet, perceived external threats to infrastructure or national security in certain applications?[xxxi] Where Baker viscerally exposed an Officer’s discrimination and racism in transparent written text, ADM systems threaten to erase the words from the page and provide only a non-transparent result.


Accountability as Destabilizing Canadian Administrative Law

Adding the language of accountability will be destabilizing for administrative judicial review.

Courts often recite in Federal Court cases that it is “not the role of the Court to make its own determinations of fact, to substitute its view of the evidence or the appropriate outcome, or to reweigh the evidence.”[xxxii] The seeking of accountability may ask Courts to go behind and beyond an administrative decision, to function in ways and to ask questions they may not feel comfortable with, possibly out of fear of overstepping the legislation’s intent.

A liberal conception of the law seeks and gravitates towards taxonomies, neat boxes, clean definitions, and coherent rules for consistency.[xxxiii] On the contrary, accountability acknowledges the existence of essentially contested concepts,[xxxiv] the layers of interpretation needed to parse out various accountability types,[xxxv] and the consensus-building this requires. Adding accountability to administrative law will inevitably make law-making more complex. It may also suggest that judicial review may not be as effective as an ex-ante tool,[xxxvi] and that a more robust, frontline regulatory regime may be needed for ADMs.


Conclusion: The Need for Administrative Law to Develop Accountability Airbags

The use of computer systems to render administrative decisions, more specifically the use of AI, which Kroll highlights as engaging many types of accountability,[xxxvii] puts accountability and Canadian administrative law on an inevitable collision course. Much like the design of airbags for a vehicle, there needs to be both technical/legal expertise and public education/awareness of what accountability is and how it works in practice.

It is also becoming clearer that those impacted and engaging legal systems want the same answerability that Kroll speaks to for computer systems, such as ADMs used in Canadian immigration.[xxxviii] As such, multi-disciplinary experts will need to examine computer science concepts and accountable AI terminology such as explainability[xxxix] or interpretability[xl] alongside their administrative law conceptual counterparts, such as intelligibility[xli] and justification.[xlii]

As this op-ed suggests, there are already points of contention (but also likely underexplored synergies) around the definition of accountability, the role of transparency, and whether the normative or multi-faceted considerations of computer systems are even desirable in Canadian administrative law.



[i] Government of Canada, “Definitions” in Canada’s System of Justice. Last Modified: 01 September 2021. Accessible online <> See also: Legal Aid Ontario, “Judicial Review” (undated). Accessible online: <>

[ii] The Supreme Court of Canada in Canada (Minister of Citizenship and Immigration) v. Vavilov, 2019 SCC 65 (CanLII), [2019] 4 SCR 653, <> [“Vavilov”] set out the following about reasonableness review:

[15] In conducting a reasonableness review, a court must consider the outcome of the administrative decision in light of its underlying rationale in order to ensure that the decision as a whole is transparent, intelligible and justified. What distinguishes reasonableness review from correctness review is that the court conducting a reasonableness review must focus on the decision the administrative decision maker actually made, including the justification offered for it, and not on the conclusion the court itself would have reached in the administrative decision maker’s place.

[iii]The question for the Court to determine is whether “the procedure was fair having regard to all of the circumstances” and “whether the applicant knew the case to meet and had a full and fair chance to respond”.  See: Ahmed v. Canada (Citizenship and Immigration), 2023 FC 72 at para […]

Read More »

What is an AI Hype Cycle and How Is it Relevant to Canadian Immigration Law?

Recently I have been reading and learning more about AI Hype Cycles.

I first learned this term from Professor Kristen Thomasen when she did a guest lecture for our Legal Methodologies graduate class and discussed it with respect to her own research on drone technology and writing/researching during hype cycles. Since then, in almost every AI-related seminar I have attended, the term has come up with respect to the current buzz and attention being paid to AI. For example, Timnit Gebru, in her talk for the GC Data Conference which I recently attended, noted that a lot of what is being repackaged as new AI today is the same work in ‘big data’ that she studied many years back. For my own research, it is important to understand hype cycles so I can ground my work in more principled and foundational approaches, writing about and exploring changes in technology while doing slow scholarship, notwithstanding changing public discourse and the legislative/regulatory changes that might follow.

A good starting point for understanding hype cycles, especially in the AI market, is the Gartner Hype Cycle. For those who have not heard the term yet, I would recommend checking out the following video:

Gartner reviews technological hype cycles through five phases: (1) innovation trigger; (2) peak of inflated expectations; (3) trough of disillusionment; (4) slope of enlightenment; and (5) plateau of productivity.

It is interesting to see how Gartner has labelled the current cycles:

One of the most surprising things to me on first view is how automated systems and decision intelligence are still at the innovation trigger, the early phase of the hype cycle. The other is how many different types of AI technology are on the hype cycle, and how few of them the general public actually knows or engages with. I would suggest at most 50% of this list is in the vocabulary and use of even the most educated folks. I also find, from a layperson’s perspective (which I consider my own on AI), that there are challenges in classifying whether certain AI concepts fit one category or another, or are a hybrid. This suggests societal knowledge of AI is low, even for some of the items that are purportedly on the slope of enlightenment or plateau of productivity.

It is important to note before I move on that the AI hype cycle has also been discussed in terms outside of the Gartner definition, in a more critical sense of technologies in a ‘hype’ phase that will eventually ebb and flow. A great article on this, and how it affects AI definitions, is the piece by Eric Siegel in the Harvard Business Review on how the hype around supervised machine learning has been rebranded into a hype around AI and spun into a push for Artificial General Intelligence that may or may not be achievable.


Relevance to the Immigration Law Space

The hype cycle is relevant to Canadian immigration law in a variety of ways.

First, on its face, Gartner is a contracting partner of IRCC, which means it is probably bringing the hype cycle into its work and its advice to the Department.

Second, it brings into question again how AI-based automated decision-making (ADM) systems are still at the beginning of the hype cycle. Utilizing this framework, it makes sense why these systems are being so heralded by Government in their policy guides and presentations, but also that there could be a peak of inflated expectations on the horizon that may lead to more hybrid decision-making or perhaps a step back from use.

The other question is whether we are (and I am a primary perpetrator of this) overly focused on automated decision-making systems without considering the larger AI supply chain with which they will likely interact. Jennifer Cobbe et al. talk about this in their paper “Understanding accountability in algorithmic supply chains,” which was assigned reading in my Accountable Computer Systems course. There are many different AI components, providers, downstream/upstream uses, and actors that may be involved in the AI development and application process.

Using immigration as an example, there may be one third-party SaaS that checks photos, another software using black-box AI that engages in facial recognition, and ultimately internal software that does machine-learning triaging or automates the generation of refusal notes. The question of how we hold these systems and their outputs accountable will be important, especially if various components of the system are at different stages of the hype cycle or are not disclosed in the final decision to the end user (or immigration applicant).

Third, I think the idea of hype cycles is very relevant to my many brave colleagues who are investing their time and energy into building their own AI tools or implementing software solutions for private sector applicants. The hype cycle may give some guidance as to the innovation they are trying to bring and the timeframe they have to make a splash in the market. Furthermore, immigration (as a dynamic and rapidly changing area of law) and immigrants (as perhaps needing different considerations with respect to technological use, access, or norms) may have their own considerations that may alter Gartner’s timelines.

It will be very interesting to continue to monitor how AI hype cycles drive both private and public innovation in this emerging space of technologies that will significantly impact migrant lives.

Read More »

Part 2B – An Annotated Review of Li and the Unforeseen and Unsettled Legal Consequences of Expanding the Definition of Espionage



Welcome back folks!

I have had a busy several weeks since my last post – I am taking an accountable computer systems course, learning about encryption, blockchain, Tor, and all the cool things I wish I knew earlier!

I have not forgotten about the Li decision. I will admit I have lost sleep over it, been confused by it, and had numerous client consultations about it. The recent development of the Named Research Organizations list, although in a much different context, has started to shed light on what institutions may be targeted and flagged. I presume many of these institutions (if not all) are risk indicators in the Integrity Trends Analysis Tool and may trigger the Security Screening Automation process.

Also, I should be on a podcast with Steven Meurrens and Deanna Okun-Nachoff talking about this decision shortly. I think it will be a fascinating conversation. I will share the link!

For the purposes of this blog, however, let us jump back into the Federal Court’s decision in Li v. Canada (MCI) 2023 FC 1753 to get to the heart of the Chief Justice’s analysis.

Let’s start now with VI. Issues at paragraph 24. I do not yet have the benefit of the parties’ submissions to determine how the issues were framed in the factums. Based on what the Chief Justice writes later in his VIII. Analysis at para 29, it seems this question was framed by the Applicant.

Nevertheless, I think there might be some tension between the framing of the issues and the setting out of the standard of review in VII. Standard of Review, where the Chief Justice reiterates the Court’s limited role within the judicial review context, the introduction of the case itself, and the eventual function of attempting to carve out a definition for espionage.

I will note that this is not rare, however. We have seen it in many contexts, and indeed the Chief Justice has also engaged in a similar discussion of the role of a comparative approach in the s.25 H&C test in Huang v. Canada (MCI), 2019 FC 265.

Finally, to keep this blog shorter, I will focus only on paragraphs 29 to 50 and leave the “Application to the Decision” section for the next blog.

Moving to the Analysis in Section VIII.

The first issue is whether the Officer erred in applying an overly broad definition of the term ‘espionage’ under s.34(1)(a) IRPA (see para 29). The Chief Justice notes that there is no definition of the term “espionage” in IRPA or, it appears, in any Act of Parliament. This is crucial because I think it highlights a clear legislative/policy gap that IRCC will need to look to fill.

Some legislation does engage with a definition (for example, economic espionage in the Security of Information Act), but the context of that Act and who it has been used to prosecute makes it very different and difficult to translate to the immigration setting. I see this omission as an emerging gap for legislators to step in and fill.

There are two key paragraphs in Li involving the definition of espionage that frame the decision. The Chief Justice writes at paragraphs 31 and 32:

[31] However, Mr. Li submits that the term “espionage” has the following five characteristics:

(1) There is an aspect of secrecy, clandestineness, surreptitiousness, or covertness in the way the information in question is gathered.

(2) The information is collected without the other parties’ knowledge and consent.

(3) The collector, by the time they are actively engaging in information gathering, does so under the control and direction of a foreign entity.

(4) The information is regarded as secretive, as opposed to simply private.

(5) The act is against Canada or contrary to Canada’s interests.

[32] I disagree. In my view, the jurisprudence supports a broader definition of “espionage.” At its most basic level, the concept of “espionage” contemplates the secret, clandestine, surreptitious or covert gathering or reporting of information to a foreign state or other foreign entity or person. When such activity is against Canada or is contrary to Canada’s interests, it falls within the purview of paragraph 34(1)(a).

There are several complications created by this definition-generating process: (1) what constitutes reporting? (2) what constitutes information? (3) what constitutes a foreign entity? (think of foreign-controlled companies operating in Canada, for example) (4) who is a foreign person? (is it entirely immigration-status related or more than that?) and (5) we also return again to what Canada’s interests are, and what the relevant times are at which actions occurred and interests are considered material.

Also, given the way the definition is grammatically structured, does the reporting of information to a foreign entity/person have any modifier? It appears from the Chief Justice’s decision that it can be public information, but surely the gathering or reporting of any public information to a foreign person would be an overbroad definition.

The Chief Justice summarizes at paragraph 47:

[47] In summary, and having regard to the foregoing, I consider that the term “espionage” contemplates (i) the secret, clandestine, surreptitious or covert gathering of information on behalf of a foreign government or other foreign entity or person, or (ii) the reporting or communication of information, whether surreptitiously or publicly gathered, to such a recipient. I further consider it reasonable to include within the definition of “espionage” the unauthorized reporting or communication of such information to a third party acting as an intermediary for the transmission of the information to such a recipient. When such activity is against Canada or is contrary to Canada’s interests, it falls within the purview of paragraph 34(1)(a). This is so even if the information in question was gathered in public.

This is interesting as it then adds a modifier of “unauthorized,” but is it a necessary condition? How does one seek authorization? Does it have to be in writing, or could it be oral? If knowledge and consent are provided, is this information not then authorized for disclosure?

It seems the words on behalf of have significant play, but do they apply only to foreign governments or also to foreign entities or persons?

I think we will also eventually need to get some clarity as to what ‘such a recipient‘ means.

For example, if a permanent resident or international student goes home from a day of work to discuss with their foreign national spouse a public university research project/grant they are working on, would that constitute espionage if the information transmitted is potentially contrary to Canada’s interests? For example, if the spouse asks how much money the project is worth financially and how much they will get paid, could that constitute espionage under a specific fact pattern?

What if the information being collected or gathered is on behalf of oneself, but at risk of disclosure in the future (either intentionally or not) to a foreign entity or person that may benefit that entity or individual? What if it is written in a resume or spoken of in a job interview with a potential foreign employer?

If a journalist, a foreign investigative correspondent paid by a foreign entity, is looking into the Canadian Government’s international policy through publicly accessible ATIP information, would that constitute espionage?

The only things linking it all would be the act being contrary to Canada’s interests and the requirement of some sort of intent to actually gather the information.

As the Chief Justice writes at paragraph 48:

It will suffice if that information, even if publicly available, was communicated or reported upon to a foreign state or other foreign entity or person, without any authorization. 

This suggests that a lack of authorization is a key part of the espionage test, and that the other modifiers of secrecy, clandestineness, surreptitiousness, or covertness are not needed; neither is control or direction by a foreign entity or person, nor a lack of knowledge or consent (para 48).

I will surmise that I am not certain what constitutes espionage after reading this section of the case. For one, I think commas, subsections, and a list are needed to avoid misinterpretation or an incorrect reading.

I also wonder – had I been a permanent resident or foreign national (and not a Canadian citizen) – whether my own advice to my foreign national clients, based on the information I have gathered from my investigative research of Canadian immigration practice and policy (publicly available information), might constitute espionage. I received releases from ATIP which were releasable to me, but which certainly did not give me a broad authorization to share them online, to be read potentially by foreign entities or persons. I use this information in the interests of access to justice and to critique the system I work in and hopefully transform it for the public good. Is it, however, arguably contrary to Canada’s interests to have knowledge of things such as Chinook and triage made public?

What about my colleagues who practice in immigration who are permanent residents? Are they committing espionage by advising their clients utilizing information they have gathered through ATIP and information requests?

By writing blogs and sharing them on online platforms am I communicating and reporting? What about posting a video on TikTok or WeChat or another foreign state-owned entity – is this considered communicating and reporting? What if one of my clients were a foreign entity or individual in a country deemed hostile by Canada?

Perhaps I am confused and missing the boat, but I feel like I cannot competently advise a client right now – for example, on whether or not to try to seek entry into Canada at this time, in the event any past, current, or future action they may take could be deemed espionage.

Did it matter that Li came to study? What if he came to work at McDonalds? What if he came to see a loved one in Canada for two weeks? Is he still at risk to gather information and pass it on to a foreign entity just by his very presence in Canada?

While I will save the heart of my analysis on the reasonable grounds to believe standard to the next blog part where the Chief Justice looks at the application of the law to the facts and […]

Read More »

Part 2A – An Annotated Review of Li and the Unforeseen and Unsettled Legal Consequences of Expanding the Definition of Espionage



As promised, it is time for Part 2 of my multi-part blog series on the Federal Court decision of Li v. Canada (Citizenship and Immigration) 2023 FC 1753.

I will write this blog over several days. Today represents Part 2A, which covers Sections I-V of Chief Justice Crampton’s decision. Sections VI to VIII, which include the issues and analysis, will form Part 2B. To keep this accessible to a general audience, and given the broad implications of this decision, I will try my best to keep it in as plain language as possible.

Today, I set the scene with a number of interesting preliminary discussions and factual/legal framing; in the next instalment, I engage the core of the legal analysis with a review of the issues, standard of review, and analysis sections of the decision (the aforementioned Part 2B); and if you stay until the third part (Part 2C), I will highlight some of the unforeseen and unsettled legal consequences created by the decision. This decision is simply a game-changer for Canadian immigration law as we head into uncertain times.

I want to be clear at the outset that I will focus more on the substantive nature of the security regime and inadmissibility, rather than trying to analyze the judgment from a purely administrative law lens of fairness and reasonableness. As such, my concerns are centered on the uncertainties created by an inadmissibility regime that punishes individuals not necessarily for what they have done, but for what they may do – and my call for the greater personalized and individualized assessment needed for such a finding to be made, given the severe consequences of being labelled inadmissible for espionage.

As a further prelude, I will say that what I have noticed from the Chief Justice’s last two major decisions – Li and his decision in Sidhu, involving the horrible Humboldt Broncos tragedy – is a willingness to engage with the broader societal impacts of immigration’s administrative law consequences. I am aware that the triage system for selecting cases involves the Chief Justice choosing the assignment of certain cases among the judges. It is not surprising to my mind that he chose to take on these two cases, rendered in close succession, which have generated significant outside attention.

I suspect administrative law will receive more of this “public” attention moving forward, and will be asked to interrogate larger societal questions – involving issues such as racism, bias, technological developments, inadmissibility, Indigenous sovereignty, among other hot button issues. As the Federal Court becomes more accessible and even more relied upon, folks will pay more attention. Decisions that are more responsive, written for losing parties, and aware of the potential consequences of either trying to establish or avoid establishing precedent/precedential value will be very crucial.

Now without further ado, let’s get into Li.


The Li Decision

I. Overview

Similarly to Sidhu, the Chief Justice starts the decision off in paragraph 1 with quite a bold statement. After I read it the first time, I knew this decision would be impactful, but I also had a gut sense, before even reading the facts, that this probably was a decision favouring the Government.

[1] As hostile state actors increasingly make use of non-traditional methods to obtain sensitive information in Canada or abroad, contrary to Canada’s interests, the Court’s appreciation of what constitutes espionage must evolve.

A couple things to note in this first paragraph.

First, the word hostile definitely raises flags. One asks: what countries are currently hostile? What is the timeframe considered for the hostility? One also wonders about Canada’s interests. A few years ago, Canada’s interests were apparently economic and trade-driven with certain countries. Those interests could change depending on the window considered. In the criminality and equivalency context, Tran advised us about retroactivity/retrospectivity and ensuring individuals know the consequences at the time they commit an action. I think the national security context arguably skews this significantly, but here we have now seen individuals punished (in a non-criminal sense) for associations they may have had in the past, tied to possibly foreseeable events that may occur in the future, without their even having committed any action per se. We have seen cases like Geng, from last year, where individuals who were once permanent residents of Canada, having cleared security checks years prior, are being re-engaged by the system as the investigative goalposts and geopolitics have shifted.

A reminder and as a framing point, this idea of “Canada’s interests” is from the legislation itself in s.34(1)(a) of IRPA.


  •  (1) A permanent resident or a foreign national is inadmissible on security grounds for

    • (a) engaging in an act of espionage that is against Canada or that is contrary to Canada’s interests;

Second, it is quite telling that the Chief Justice utilized the wording “Court’s appreciation.” To me it represents, and quite correctly so, at least a stated intention not to step into the role of the legislature or to re-litigate the case. He wants to portray this as a case about judicial interpretation. We can assess later how well the decision reflects this, in situ.

The following three paragraphs complete the overview, including summarizing the Applicant’s arguments – namely, that the Officer adopted an overly broad definition of the word “espionage” and that evidence was misapprehended and ignored – and stating his decision to reject the application (paras 4-5).

II. Background

Starting at paragraph 6, we learn more about Mr. Li. He is a PRC citizen. He applied to the University of Waterloo for a PhD program in Mechanical and Mechatronics Engineering. There were delays in background checks (a common issue I have recently commented on). The Applicant was given a last extension to obtain a study permit for the PhD project (para 7). This last-extension nature may have eventually become a double-edged sword, both when the matter converted into a JR of the final decision and in the parsing of the non-need to certify a question (as we will discuss in the next blog). We learn through the judgment as well that the proceeding was started by way of what was likely mandamus (para 8).

We learn also that the Minister applied for non-disclosure of certain information in the Certified Tribunal Record (“CTR”) under s.87 of the IRPA. Let us pause here to take a look at that provision.

Application for non-disclosure — judicial review and appeal

 The Minister may, during a judicial review, apply for the non-disclosure of information or other evidence. Section 83 — other than the obligations to appoint a special advocate and to provide a summary — applies in respect of the proceeding and in respect of any appeal of a decision made in the proceeding, with any necessary modifications.

2001, c. 27, s. 87

2008, c. 3, s. 4

2015, c. 20, s. 60

For those interested in another s.87 redaction case, where the matter got a bit more complex with the applicant actually succeeding in removing the redactions, check out Kiss v. Canada (Citizenship and Immigration), 2023 FC 1147 (CanLII) at paras 21-34.

We learn that Mr. Li discovered from the redacted CTR that the Center for Immigration National Security Screening recommended that there are reasonable grounds to believe he is inadmissible under s.34(1)(a) of IRPA.

Another pause: what is the Center for Immigration National Security Screening? I will not go into too much detail here (a whole other blog topic), but for some light background reading I would suggest looking at the “Evaluation of the Immigration National Security Screening Program” posted by the CBSA.

I also have knowledge that they are utilizing technological automation in these cases through the Security Screening Automation (“SSA”) project, per the unreleased draft Algorithmic Impact Assessment (“AIA”).


This is what we think likely happened; we know of several other institutions that have been tagged with risk indicators (using tools such as the Integrity Trends Analysis Tool).

Paragraphs 11 and 13 of the decision then provide some interesting context. We learn that the Respondent represented that it would not rely on redacted information for the purpose of responding to the application for judicial review, and also that the Officer did not rely on any redacted information in making the Decision.

I am still awaiting a copy of the file record from the Federal Court, but, especially given the information we got about risk indicators in Kiss through the preliminary decision on the Minister’s s.87 motion, I do question how the redacted information could not have been relied on in some way. Presumably, the redacted information was indicator information showing how the particular institution was flagged, which led to the investigation. How the school (Beihang, we learn in paragraph 15) was flagged, what information was provided to the flaggers, and the technology utilized are things I foresee becoming points of legal conflict moving forward.

The other point to take from this section is that the mandamus application, it appears, triggered the steps taken, and in this case the Chief Justice actually ordered a decision to be made within three weeks (see para 12).

While I have had mandamus claims trigger negative action (a possibility that is often, in my opinion, under-discussed), I have not yet had a Federal Court judge direct a decision within one of my proceedings. What we learn is that this led to a procedural fairness interview four days before the deadline and, it appears, a refusal shortly thereafter.

I question whether there was an opportunity (beyond an interview) for the Applicant to put in evidence, such as expert affidavits, to counter the Government’s position. However, I will note – based on my own experience as counsel – that attempting to gather evidence to counter […]

Read More »

International Students Making Refugee Claims: How Data + Chinook Might Meet

As a recent tweet from Steven Meurrens shows, the relationship between refugee claimants and international students is one that IRCC actively tracks.

Thanks to an Access to Information and Privacy (“ATIP”) request received earlier this year, we finally have our first look into how this data might be operationalized and related to the use of the Chinook application processing system.

The example we have is from January 2021, in a document titled “By the Books: Analysis of Sri Lankan Student Claimants” from IRCC’s Migration Integrity, Integrity Risk Management Branch. The document is highly redacted, so we can only share what we know and speculate about what the document might otherwise say. I have provided the document for review below.

Pages from A-2021-18633AdditionalRelease (003) – Sri Lankan Students

What the Document Says

The first part of the document summarizes four key takeaways, with part of the first and the last two being redacted. Based on what we see, the summary captures the change in asylum claimants alongside changes in temporary residence and temporary resident visa issuance. As this document coincides with the pandemic, that also factors in. It is likely that the missing points pertain specifically to study permit holders, given the nature of the document and the redactions.

The Background section then delves into a country-conditions summary of Sri Lanka, coinciding with its entry as a top-20 source country for asylum claims in Canada. Those who practice refugee law will draw similarities to some of the reports found in National Documentation Packages. One of the pinpoints of this particular document is the Tamil diaspora, tying into the entry of Sri Lankan nationals in 2009-2010 on the much-documented Ocean Lady and Sun Sea marine vessels.

Keeping in mind this was nearly three years ago, the amount of data in this document is staggering – from tracking the mode of arrival (air versus other methods) to the documentation of those with status in Canada. The document provides a graph comparing the number of TR approvals and asylum claims before pinpointing students as having the highest claim rate among TR business lines from Sri Lanka.

The data even tracks the time frame from TR (temporary resident) issuance to asylum claim date, showing the Government tracking that a majority of students intended to claim when they acquired their study permit. The data further delves into what level of study the students are in. In a redacted section, this data also goes into which select educational institutions they are coming from, noting over half are at select universities.

Based on an unredacted footnote, there is a reference to “Fraudulently obtained S-1 TRVs for Cape Breton University by CBSA, November 2019.”

This data is then combined with non-compliance and IRCC’s compliance reporting history, aligning with key indicators of potential non-compliance. While it is redacted, it would be interesting to see how they tracked compliance alongside actual claims made, given that once individuals make a claim, they acknowledge their inadmissibility and may be motivated to discontinue studies. The redactions in this section make it difficult to parse whether non-compliant studies led to claims, or claims led to non-compliant studies.

A fully redacted section called “Additional Observation: Address Clustering” presumably talks about the cities in which these claimants are living. This is another further factor that is likely built into the recommendation below.

The Recommendation of Chinook Module 5 Indicators

The Next Steps section is nearly fully redacted, but the first discussed step is not and is very telling in what it states:

Immediate Actions

  1. Creation and Distribution of Sri Lankan Student Indicators

Action: The MIT recommends the construction of indicators for use by visa processing officers in Chinook Module 5. Similar to indicators used in other lines of business in other source countries for claims, these indicators will assist officers in identifying potential high risk cases in the Sri Lankan cohort.

Based on the above document, one would suggest that the redactions might even pinpoint what those risk indicators look like. I am wondering also how the risk indicators would pick up an individual as Tamil (likely through language), but then also layer on the educational institution they are attending, the city they are living in (the pockets), and possibly other factors discussed in the redacted sections.


Implications: Understanding How Risk Indicators are Created

While we have known for a while that refugee claims are considered adverse outcomes for international students, this document truly shows the breadth and scope of the type of data being used to back the data-based systems within Chinook’s Module 5 Risk Indicators. It suggests that data-based calculations and equations are being drawn and made of all applicants and form factors outside of an Applicant’s, and currently a reviewing court’s, control.

We have written about risk indicators in the past:

Why If There’s No “N/A” Risk Flag on Your GCMS Notes, You May Have Been Risk Flagged

In this blog, we talk about how these indicators feature prominently in the spreadsheet presentation used by Officers to determine cases.

We’ve also shared that these risk indicators are being flagged through automation and AI on files (see: Integrity Trends Analysis Tool Algorithmic Impact Assessment) and discussed our own experience litigating a case involving risk flags that were made visible (accidentally) in GCMS, and which may have contributed to a refusal rendered on entirely different grounds than those the risk indicators suggested. Through my discussion of bulk decision-making, we discussed how refusals are grouped into buckets that could be informed by the presence of things such as risk indicators.

We know the following from IRCC’s response to CIMM Study 8:

  • For indicator management within the Chinook tool, risk indicators are used to notify officers of trends that IRCC has detected or highlight a particular factor of concern, not to sort visa applications. Keywords are also used to identify positive considerations such as applications that may require expedited processing (e.g. conferences, weddings).

  • Risk indicators are identified and submitted for entry into Chinook by IRCC officers. Indicators and keywords are not created by the Chinook tool.

  • The release of specific keywords connected to investigative techniques, trends, and risk profiles could encourage fraud or facilitate the commission of an offence, and are therefore not released per section 16(1)(b) of the Access to Information Act.

  • Statistics on the use of indicators and keywords are not tracked globally. If indicators or keywords are present on an individual application, they would appear in notes in GCMS. Where there are no indicators or word flags on a case processed with Chinook, a “N/A” (not applicable) would appear in the relevant field in GCMS.

However, because this type of information is redacted, we have only seen what these indicators look like in rare ‘accidental’ disclosure cases, and so far without the detail of the actual words or combinations flagged.

As far as I am aware, this is the first case where we have seen the actual directive or recommendation to create risk indicators as a function of data, in a context where the risk (or adverse outcome) is a student making a refugee claim.

My big question is about the data. I provided one example above where I challenge the alleged causation between refugee claims and non-compliant studies. I can think of other issues, such as how IRCC collects data on whether someone is Tamil or not (presumably through language). From my own personal knowledge of agents playing a large role in applications arising from Sri Lanka, are there possible assumptions being made on forms that are not competently filled out? Are there missing disaggregations?

The follow-up question is data vetting. If these indicators are not being tracked, but there is simultaneously a six-month review period for them, who (if anyone) is in the room to interrogate this data or allege possible bias or problematic collection?

Given the power of these indicators to essentially pull applications off the assembly line to approval, I would suggest that much more transparency, and a robust and publicly explainable data review process, needs to be published by IRCC to alleviate the concerns that my colleagues and I have about this process.
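To make concrete what interrogating this data might look like, here is a minimal hypothetical sketch of the kind of disparity check a publicly explainable review process could run on risk-indicator flag rates. Everything here is an assumption for illustration: the group names, the counts, and the 1.25x review threshold are all invented, since IRCC does not publish these statistics.

```python
# Hypothetical sketch of a disparity check on risk-indicator flag rates.
# All figures below are invented for illustration; IRCC does not publish
# these statistics, and the 1.25x threshold is an assumed rule of thumb.
from collections import namedtuple

Group = namedtuple("Group", ["name", "applications", "flagged"])

# Invented application and flag counts by (hypothetical) applicant group
groups = [
    Group("Group A", 10000, 300),
    Group("Group B", 8000, 960),
    Group("Group C", 12000, 420),
]

def flag_rate(g):
    """Share of a group's applications that received a risk indicator."""
    return g.flagged / g.applications

# Use the lowest-flagged group as the comparison baseline
baseline = min(flag_rate(g) for g in groups)

for g in groups:
    ratio = flag_rate(g) / baseline
    status = "REVIEW" if ratio > 1.25 else "ok"
    print(f"{g.name}: rate={flag_rate(g):.1%}, ratio={ratio:.2f}x ({status})")
```

Even a simple check like this depends on the very tracking that IRCC says does not occur globally, which is precisely the gap a six-month review period cannot fill on its own.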

Read More »

[VIB x HLO] 2023 Federal Court Year in Review (non-IRB Immigration cases) + Policy Insights

I have been reading some excellent ‘year in reviews’ this year in various spaces across the law. These are fantastic for getting a snapshot of where things have been in an area of law and where they are going.

I would like to humbly add my submission, via a paper I wrote for the Continuing Legal Education Society of British Columbia (“CLEBC”) Immigration Issues in Depth conference. For those who have not attended one, I highly recommend it, particularly for those engaged in Tribunal (IRB) and Federal Court work. Many senior government lawyers and tribunal leadership attend, and I find it the most intellectual/technical of the many conferences on offer.

This year, I had the privilege of presenting at the jurisprudence panel. My specific topic was non-IRB Federal Court judicial review cases in 2023. Within these limitations, I made a further effort to narrow the scope to cases with a potential policy impact (hence the title: Broader Application and Implications).

I hope you will all enjoy the paper. It took a lot longer than I expected (a lot of sleepless nights, including two while I was at a conference in Montreal) to tie together the various strands, but hopefully it offers a bit of a roadmap for what was and what will be. The three recommendations near the end are my attempt to be thought-provoking.

Check it out!

Broader Applications and Implications – FINAL

Read More »

A Window into the Humanitarian and Compassionate Grounds Judicial Review Outcomes 2018 – 2022

As we posted about in this blog below, we have received a recent data set from IRCC that appears to be the first of its kind in tracking litigation at the Federal Court.

An Early “Lens” Into Predicting JR Outcomes by Country of Citizenship

Today I am going to look at Humanitarian and Compassionate Grounds applications specifically, and how the outcomes for judicial review have shifted over time since 2018.

In 2018, there were pretty much two major outcomes – Dismissed at Leave and Allowed – with these two outcomes making up nearly 75% of all cases. 14.8% of all decisions ended in a discontinuance or a consent at leave. The leave dismissal rate was the highest of the five-year period at 58.82%.

In 2019, we saw similar results, with 80% of decisions ending in these outcomes and the other 20% ending in discontinuances. Leave dismissals were still high at 52.45%.

2020’s COVID-19 year saw a major statistical shift. While the rate of leave dismissals went down, so too, drastically, did the allowed rate. Was this a result of compassion fatigue from COVID itself? Or did the discontinuances (including consents) leave weaker cases to be heard by the Court? The motivation to consent or seek discontinuance during COVID-19 could also have been spurred by trying to limit the number of cases requiring a hearing. It is a very interesting year to study and break down, especially in light of how a future global pandemic could impact Court processes.

In 2021, the leave dismissal rate stayed consistent at around 45.96%, but the Allowed rate went back up, higher than pre-pandemic numbers. Discontinuance, withdrawal, and consent rates also returned to pre-pandemic numbers.

The year 2022 is when it starts getting really interesting. In this year, for the first time, the percentage of Allowed cases was higher than in any other year. If we are looking at pandemic impact, perhaps this is when many of the Applicants who had stayed in Canada during 2020–2021 and made applications finally had their decisions made and challenged at Court.

Dismissal at leave was at a five-year low and the discontinuance rate rose slightly.

So what happened in 2023? Interestingly enough, another type of outcome – withdrawn at leave – has been number one (up to June 2023). Is this because folks are no longer interested in pursuing JRs? Are they leaving Canada, or are they being removed? These numbers are even higher than in 2020, during the pandemic, when travel restrictions were in place. How many of these are from successful reconsideration requests?

This year’s data (to-date) raises several correspondingly interesting issues.

To me, these stats highlight the inconsistencies and the ebbs and flows. They further suggest that, in terms of automated data-based decision-making, humanitarian and compassionate grounds decisions and judicial reviews probably should not be automated until there is a greater understanding of why the numbers have varied so much from year to year.
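To illustrate how those ebbs and flows could at least be quantified before any automation is contemplated, here is a minimal sketch. The raw counts are invented (chosen only so that the 2018 leave-dismissal share matches the 58.82% figure quoted above); nothing here reflects actual IRCC totals.

```python
# Hypothetical sketch: turning raw outcome counts into percentage shares per
# year, plus a simple volatility measure per outcome. All counts are invented
# for illustration; only the 58.82% share is engineered to match the post.
outcomes_by_year = {
    2018: {"dismissed_at_leave": 600, "allowed": 165, "discontinued_or_consent": 151, "other": 104},
    2020: {"dismissed_at_leave": 450, "allowed": 60, "discontinued_or_consent": 320, "other": 170},
}

def shares(counts):
    """Convert raw outcome counts into each outcome's share of the year's total."""
    total = sum(counts.values())
    return {outcome: n / total for outcome, n in counts.items()}

for year in sorted(outcomes_by_year):
    s = shares(outcomes_by_year[year])
    print(year, {outcome: f"{share:.2%}" for outcome, share in s.items()})

# Volatility per outcome: the spread between its highest and lowest share
# across the years on record. Large spreads argue against automating yet.
all_shares = {year: shares(c) for year, c in outcomes_by_year.items()}
for outcome in outcomes_by_year[2018]:
    values = [all_shares[y][outcome] for y in all_shares]
    print(outcome, f"spread={max(values) - min(values):.2%}")
```

A review process that published even this much, year over year, would let outsiders see whether a swing like 2020’s reflects the pandemic, a policy change, or something in the data itself.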

What are your thoughts? What stands out with respect to the data for you?

Read More »

Why the 30-Year Old Florea Presumption Should Be Retired in Face of Automated Decision Making in Canadian Immigration

In the recent Federal Court decision of Hassani v. Canada (Citizenship and Immigration), 2023 FC 734, Justice Gascon writes a paragraph that I thought would be an excellent starting point for a blog. Not only does it capture the state of administrative decision-making in immigration and highlight some of the foundational pieces, but I also want to focus on one part of it that, I would respectfully suggest, needs a rethink.

Hassani involved an Iranian international student who was refused a study permit to attend a Professional Photography program at Langara College. She was refused on two factors: [1] that she did not have significant family ties outside Canada, and [2] that her purpose of visit was not consistent with a temporary stay given the details she had provided in her application. On the facts, it is definitely questionable that this case even went to hearing, given the Applicant had no family ties in Canada and all her family ties were indeed outside Canada, in Iran. Nevertheless, Justice Gascon did a very good job analyzing the flaws in the Officer’s two findings.

There is one paragraph, 26, that is worth breaking down further – and there’s one foundational principle cited that I think needs a major rethink.

Justice Gascon writes:

[26] I do not dispute that a decision maker is generally not required to make an explicit finding on each constituent element of an issue when reaching its final decision. I also accept that a decision maker is presumed to have weighed and considered all the evidence presented to him or her unless the contrary is shown (Florea v Canada (Minister of Employment and Immigration), [1993] FCJ No 598 (FCA) (QL) at para 1). I further agree that failure to mention a particular piece of evidence in a decision does not mean that it was ignored and does not constitute an error (Cepeda-Gutierrez v Canada (Minister of Citizenship and Immigration), 1998 CanLII 8667 (FC), [1998] FCJ No 1425 (QL) [Cepeda-Gutierrez] at paras 16–17). Nevertheless, it is also well established that a decision maker should not overlook contradictory evidence. This is particularly true with respect to key elements relied upon by the decision maker to reach its conclusion. When an administrative tribunal is silent on evidence clearly pointing to an opposite conclusion and squarely contradicting its findings of fact, the Court may intervene and infer that the tribunal ignored the contradictory evidence when making its decision (Ozdemir v Canada (Minister of Citizenship and Immigration), 2001 FCA 331 at paras 9–10; Cepeda-Gutierrez at para 17). The failure to consider specific evidence must be viewed in context, and it will lead to a decision being overturned when the non-mentioned evidence is critical, contradicts the tribunal’s conclusion and the reviewing court determines that its omission means that the tribunal disregarded the material before it (Penez at paras 24–25). This is precisely the case here with respect to Ms. Hassani’s family ties in Iran. (emphasis added)


What is the Florea Presumption?

As stated by Justice Gascon, the principle in Florea v Canada (Minister of Employment and Immigration), [1993] FCJ No 598 (FCA) pertains to a Tribunal’s weighing of evidence and the presumption that it has considered all the evidence before it. It puts the onus on an Applicant asserting otherwise to establish the contrary.

As the Immigration and Refugee Board Legal Services chapter on Weighing Evidence states:

Rather, the panel is presumed on judicial review to have weighed and considered all of the evidence before it, unless the contrary is established.

This case and its principle are often cited in refugee matters, humanitarian and compassionate grounds matters, inadmissibility cases, and IRB matters.

Reviewing case law from the last two years (since 2021), I did find a handful among the thirty cases I reviewed that engaged this case and principle in a temporary resident context.

See e.g. study permit JR – Marcelin v. Canada (Citizenship and Immigration), 2021 FC 761 at para 16, Madam Justice Roussel [JR dismissed]; PNP work permit – Shang v. Canada (Citizenship and Immigration), 2021 FC 633 at para 65, citing Basanti v Canada (Citizenship and Immigration), 2019 FC 1068 at para 24, Madam Justice Kane [JR allowed]; minor child TRV refusal – Dardari v. Canada (Citizenship and Immigration), 2021 FC 493 at para 39, adding the portion “and is not obliged to refer to each piece of evidence submitted by the applicant”, Madam Justice St-Louis [JR dismissed].

Related to this is the long-standing and oft-cited decision of Cepeda-Gutierrez v. Canada (Citizenship and Immigration), [1998] FCJ No 1425, in which Justice Evans reiterated that an agency’s statement that it considered all the evidence before it (even as a boilerplate statement) would usually suffice to assure the parties and the Court of this. He writes:

[16]      On the other hand, the reasons given by administrative agencies are not to be read hypercritically by a court (Medina v. Canada (Minister of Employment and Immigration) (1990), 12 Imm. L.R. (2d) 33 (F.C.A.)), nor are agencies required to refer to every piece of evidence that they received that is contrary to their finding, and to explain how they dealt with it (see, for example, Hassan v. Canada (Minister of Employment and Immigration) (1992), 147 N.R. 317 (F.C.A.)). That would be far too onerous a burden to impose upon administrative decision-makers who may be struggling with a heavy case-load and inadequate resources. A statement by the agency in its reasons for decision that, in making its findings, it considered all the evidence before it, will often suffice to assure the parties, and a reviewing court, that the agency directed itself to the totality of the evidence when making its findings of fact.

(emphasis added)


Why the Florea Presumption Should Be Reversed For Temporary Resident Applications and Any Decision Utilizing Advanced Analytics/AI/Chinook/Cumulus/Harvester

My argument is that this presumption that all evidence has been considered, as well as the boilerplate template language stating that it was considered, should not apply universally in 2023.

We know enough (again, not enough about the system writ large, but enough) to know that systems such as Chinook were created to facilitate the processing of temporary resident applications in hundreds of seconds, to extract data into Excel tables for bulk processing, and to automate eligibility approvals. These were done specifically to allow Officers to spend less time and consider enough, not all, of the evidence before them to render a decision.

I think the fact that applications are being auto-approved for eligibility, simply on a set of rules inputted primarily based on an applicant’s biometric information, should be enough to raise concerns about whether the systems even require consideration of most of the evidence submitted by an applicant.

All the materials on bulk processing that IRCC has released in the past few years have been focused on the fact that not all documents need to be reviewed (note the wording that states: review Additional Documents, as required).


IRCC Officer Training Guide Obtained Through ATIP


IRCC Visa Office Training Guide Obtained Through ATIP

If you look at the Daponte Affidavit and the original Module 3 Prompt that was created, it does not inspire confidence that all documents necessarily needed to be reviewed:

Daponte Affidavit from Ocran

We learned that, in response to concerns, a prompt reminder for Officers to review all materials was added to Chinook, but it is clear Chinook has gone far beyond ‘review and initial assessment’ to bulk processing.

Even with Cumulus, it is clear that documents not converted to e-Docs have to be pulled up separately in GCMS – the very tedious process that tools such as Cumulus seek to avoid.

Cumulus Training Manual Obtained Through ATIP

I would presume that it is much easier for an Officer to make a decision based on these summary extractions than to go into the documents.

Cumulus Training Guide Obtained Through ATIP

The documents are viewed below in something much more akin to a ‘preview’ mode.

Cumulus Training Guide Obtained Through ATIP

Harvester, a tool that facilitates the conversion of documents into a reviewable format, is similarly based on what documents can be extracted.

Harvester User Guide Obtained Via ATIP

Based on the way it is described, and how some offices can exclude certain documents, this already suggests that not all documents make it into the purview of the Officer.

Most important as a constraint is time. As Andrew Koltun has uncovered, IRCC spends 101 seconds on average on an application processed with Chinook.

Respectfully, 101 seconds cannot be enough to consider more than one or two documents – max – before rendering a decision. The future use of Large-Language Models and OCR to extract key […]

Read More »

Could the Federal Court Have Avoided the Chinook Abuse of Court Process Tetralogy?

Image credit: Mariana Ruiz (LadyofHats), Public Domain.

What Happened?

In the recent decision of [1] Ardestani v. Canada (Citizenship and Immigration), 2023 FC 874, part of a tetralogy of cases in which Federal Court justices were critical of attacks on IRCC’s Chinook system, Justice Aylen did not mince words.

She writes:

II. Preliminary Issue

[8] At the commencement of the hearing, counsel for the Applicant advised that he was relying on his written representations but requested that counsel for the Respondent answer five questions related to this matter. As I advised counsel for the Applicant, a hearing of an application for judicial review is not an examination for discovery. Counsel for the Respondent was under no obligation to answer his questions. Moreover, it was not open to the Applicant to raise new issues at the hearing of the application.

[9] I also raised with counsel for the Applicant the fact that two decision of this Court have recently been issued – Raja v Canada (Minister of Citizenship and Immigration), 2023 FC 719 and Haghshenas v Canada (Minister of Citizenship and Immigration), 2023 FC 464 – in which counsel for the Applicant made a number of the same arguments as raised in this application and which were all dismissed by this Court, twice. I asked counsel for the Applicant if he was continuing to pursue these issues notwithstanding the earlier findings of this Court and he indicated that he was.

[10] I find that counsel for the Applicant’s attempt to re-litigate such issues and to transform the hearing of this application into an examination for discovery constitutes an abuse of this Court’s processes. (emphasis added)

In the decision, Justice Aylen comments on the arguments made by the applicant against Chinook:

[26] The Applicant asserts that his work permit application was processed using Chinook, which in and of itself is a breach of procedural fairness. Moreover, he asserts that the use of Chinook was improper given the importance of the decision at issue and the degree of complexity of the decision at issue (which involved business immigration). There is also no merit to these assertions. I am not satisfied that the use of Chinook, on its own, constitutes a breach of procedural fairness or that the nature of the application itself has any bearing on the use of Chinook. The evidence before the Court is that the decision was made by an Officer, with the assistance of Chinook. Whether or not there has been a breach of procedural fairness will turn on the particular facts of the case, with reference to the procedure that was followed and the reasons for decision [see Haghshenas, supra].


[34] The Applicant further asserts that the use of Chinook is “concerning”, suggesting essentially that any decision rendered in which Chinook was used cannot be reasonable. I see no merit to this suggestion. The burden rests on the Applicant to demonstrate that the decision itself lacks transparency, intelligibility and/or justification, and baseless musings about how Chinook was developed and operates does not, on its own, meet that threshold. (emphasis added)

While Justice Aylen discusses two other cases, Raja and Haghshenas, there was also a third, Zargar, released just before Ardestani. All of these cases involve an essentially identical fact pattern of Iranian C11 applicants being refused work permits.

In the interests of summarizing the discussion of Chinook in each, and notwithstanding that I have only seen the full file record in Haghshenas (and have also written a past post, see: here), I will extract block quotes of what the judges said about Chinook in each of the remaining three decisions.


[2] Zargar v. Canada (Citizenship and Immigration), 2023 FC 905 (CanLII) – Justice McDonald, Dismissed

[12] Firstly, the allegations regarding: (a) the use of Chinook, (b) reasons only being provided after the judicial review Application was filed, and (c) the length of the processing time were fully canvassed in both Haghshenas v Canada (Citizenship and Immigration), 2023 FC 464 [Haghshenas] and Raja v Canada (Citizenship and Immigration), 2023 FC 719 [Raja]. In the absence of any specific evidence to support these allegations in this case, I adopt the analysis from those cases (Haghshenas at paras 22-25, 28; Raja at paras 28-38) and can likewise conclude that the Applicant has not established any breach of procedural fairness on these grounds. (emphasis added)

Note that there was an apparent lack of evidence filed in Zargar.


[3] Raja v. Canada (Citizenship and Immigration), 2023 FC 719 (CanLII) – Justice Ahmed, Dismissed

[24] The Applicant submits that the Officer assessed his work permit application on the basis of irrelevant and extraneous criteria, but does not specify which criteria. The Applicant also submits that the IRCC’s reliance on Chinook, an efficiency-enhancing tool used to organize information related to applicants for temporary residence, undermines the reasonableness of the Officer’s decision.


B. Procedural Fairness

(1) Use of Chinook Processing Tool

[28] The Applicant submits that the Officer’s use of the Chinook processing tool to assist in the assessment of the application is procedurally unfair. The Applicant contends that the tool, which he claims is able to extract information from the GCMS for many applications at a time and generate notes about these applications in “a fraction of the time” it would take to review an application otherwise, results in a lack of adequate assessment of the Applicant’s work permit application.

[29] The Respondent submits that IRCC’s use of the Chinook tool to improve efficiency in addressing a voluminous number of temporary residence applications does not amount to a specific failure of procedural fairness in the Applicant’s case. The Respondent notes that the Applicant has failed to point to any evidence to support that the Officer’s use of the Chinook tool resulted in the omission of a key consideration in the assessment of his application or deprived him of the right to have his case heard. The Respondent contends that the Applicant’s submissions appear to be little more than an objection to IRCC’s use of this tool.


[30] I agree with the Respondent. While it was open to the Applicant to raise the ways that the Chinook processing tool specifically resulted in a breach of procedural fairness in the Officer’s assessment of his case, he has not provided any evidence of such a connection. I would also note that the Chinook tool is not intended to process, assess evidence, or make decisions on applications, and the Applicant has failed to raise any evidence countering this or demonstrating that the tool impacts the fairness of the decision-making process. (emphasis added)

Note – again, there appears to be a lack of evidence filed. However, I do take issue with the “Chinook tool is not intended to process, assess evidence” portion. I think there is not enough on the record, or in what IRCC has publicly shared, to support that statement. It is a processing tool at the end of the day, so it does process; and based on what we know about how the modules work (especially Module 5’s risk indicators and local word flags), it definitely assesses and ‘flags’ the evidence, at the very least.

[4] Haghshenas v. Canada (Citizenship and Immigration), 2023 FC 464 (CanLII) – Justice Brown, Dismissed

[24] As to artificial intelligence, the Applicant submits the Decision is based on artificial intelligence generated by Microsoft in the form of “Chinook” software. However, the evidence is that the Decision was made by a Visa Officer and not by software. I agree the Decision had input assembled by artificial intelligence, but it seems to me the Court on judicial review is to look at the record and the Decision and determine its reasonableness in accordance with Vavilov. Whether a decision is reasonable or unreasonable will determine if it is upheld or set aside, whether or not artificial intelligence was used. To hold otherwise would elevate process over substance. (emphasis added)


[28] Regarding the use of the “Chinook” software, the Applicant suggests that there are questions about its reliability and efficacy. In this way, the Applicant suggests that a decision rendered using Chinook cannot be termed reasonable until it is elaborated to all stakeholders how machine learning has replaced human input and how it affects application outcomes. I have already dealt with this argument under procedural fairness, and found the use of artificial intelligence is irrelevant given that (a) an Officer made the Decision in question, and that (b) judicial review deals with the procedural fairness and or reasonableness of the Decision as required by Vavilov. (emphasis added)

What arises from the above is that precious court resources were spent on four identical cases from the same counsel, making identical arguments and rendering nearly identical judgments, all summing up to ‘we do not have enough in front of us.’

One wonders if the Department of Justice should have just heeded Justice Little’s comments in Ocran v. Canada (Citizenship and Immigration), 2022 FC 175 (CanLII) and asked for a reference, but as these tools are constantly evolving, I do understand the trepidation and costs:

V. Matters Raised by the Respondent

[57] The respondent raised additional matters for resolution by this Court about the preparation of GCMS notes generally by visa officers using spreadsheets made with a software-based tool known as the “Chinook Tool”. The respondent sought to resolve an issue about whether contents of Certified Tribunal Record (“CTRs”) were deficient because the spreadsheets are not retained and therefore do not appear in the CTRs prepared for matters such as this application. The respondent also purported to file an affidavit in an effort to provide a factual foundation; the applicant objected to its admissibility and relevance to the proceeding.

[58] In my view, the Court should not resolve the additional matters raised by the respondent on this application. There is no dispute or controversy between these parties […]

Read More »

The Problem with Khaleel: Extrinsic Evidence Versus Applying Local Knowledge

In this post, I am going to offer a gentle critique of a Federal Court decision from last year, Khaleel v. Canada (MCI), 2022 FC 1385, and highlight the case as an example of the Court showing too much deference to an Officer’s application of local knowledge without scrutinizing the reasonableness of its evidentiary foundation.


In Khaleel, a Pakistani citizen and Kingdom of Saudi Arabia (KSA) temporary resident was refused a temporary resident visa (TRV). Khaleel had a long (and largely negative) immigration history in Canada prior to this TRV refusal, but had applied for a business visa to visit Quesnel, B.C. as part of a required exploratory visit.

The key to this decision is Madam Justice Elliot’s upholding of the refusal based on the Officer’s analysis of Saudization. She upheld as reasonable IRCC’s analysis that the Applicant’s future employment prospects were negatively impacted by Saudization. The Applicant served as a sales manager for a bakery in KSA and disclosed this as part of his TRV application.

The Officer writes in the GCMS notes for the refusal (reproduced at para 22 of the decision):

Considering the current economic reforms in KSA (Saudization), PA’s occupation (sales manager) is subject to plans for Saudization reforms. I am not satisfied that PA has strong future employment prospects in KSA. The saudization reforms are ongoing and due to the COVID-19 pandemic, reduction in the foreign workforce and layoffs are fast-tracking.

Khaleel argued that the Officer ignored his evidence, including a letter from his employer in Saudi Arabia speaking to the fact that the position was not impacted by COVID and that, indeed, the business remained open and demand had increased.

Madam Justice Elliot writes:

[27] While the employer spoke to the increase in business they have experienced, the Officer is concerned with the national push to reduce foreign workers in KSA.

[28] The Applicant is part of the foreign workforce in KSA. The Officer’s notes indicate the Applicant’s employment is not a strong tie given the instability of foreign workers and the push to replace them with citizens of KSA which push is accelerating due to COVID-19.

[29] Regardless of the bakery’s success and reliance on the Applicant’s employment, the business like all others in KSA, is equally subject to the government’s policies to prioritize the employment of Saudi nationals. While the Applicant is correct in stating that the employer’s letter was not explicitly cited in the GCMS notes, I find that was reasonable as the letter does not address the Officer’s concerns about the nation-wide Saudization policies targeting foreign workers with temporary status in KSA


[32] As before, the Officer’s concern was the Saudization policies targeting foreign workers with temporary status in KSA. The Applicant’s responsibility for operations, in addition to sales, did not need to be discussed specifically in the reasons as it did not alter the fact that he was at risk as a foreign worker in KSA.

(emphasis added)

The Applicant also challenged, as a matter of procedural fairness reviewable on the correctness standard, the use of extrinsic evidence. Madam Justice Elliot reviewed case law for TRVs emphasizing an Officer’s ability to use general experience and knowledge of local conditions to draw inferences and reach conclusions without necessarily putting any concerns that may arise to the applicant (at para 57, citing Mohammed v Canada (Citizenship and Immigration), 2017 FC 992). Again, and as in many decisions involving visitors, students, and workers (temporary residents), Madam Justice Elliot emphasized the lack of a qualified right to enter Canada and the correspondingly low procedural fairness owed.

Madam Justice Elliot writes:

[59] The Officer considered the Applicant only had temporary status in KSA. It is entirely reasonable to expect an applicant for a TRV to anticipate concerns of this sort in relation to their likelihood of return at the end of an authorized visit to Canada.

[60] The Officer was not required to notify the Applicant that he would be relying on public sources regarding general country conditions in KSA and conducting his own research: Chandidas v Canada (Minister of Citizenship and Immigration), 2013 FC 257 at paras 25, 29-30.

[61] I do not find that the Officer’s reliance on their general experience and knowledge of local conditions in KSA gave rise to a duty of procedural fairness.

Finding the Separation Between Reasonable Analysis Based on Local Knowledge and Erred Analysis Based on Speculation and Undisclosed Extrinsic Evidence

Accepting again the premise that an applicant should be aware that country conditions may be applied (a premise I find problematic, as open-source searches and unpublished/unvetted reports can pull up a whole slew of different findings and can often be subject to either partisan politics or propaganda), I think an Applicant should be able to challenge on judicial review the reasonableness of the local knowledge without necessarily having to predict its application. For example, a temporary resident like Khaleel, who has been working and travelling between his countries of citizenship and residence for many years and has worked many jobs, may not view it as a future concern (on the ground), but global news articles/studies may highlight it as a major problem/characteristic/push factor (on a macro level).

In this case, there are two findings by Madam Justice Elliot that are worth re-examining.

First, Saudization does not apply equally to all individuals (para 29). Open-source information makes it clear that it is very industry-dependent, position-dependent, and timing-dependent. See e.g. Saudi Arabia: Saudization Requirements Announced for Several Activities and Professions | Fragomen, Del Rey, Bernsen & Loewy LLP

For example, if the Applicant was seeking a short trip to Canada of several weeks, but the changes would not kick in for another year or two, this could be a relevant factor that appears to have been missed in the Officer’s analysis.

Second, whether someone is in sales or operations (para 32) could be relevant, as there are different levels for different industries, as discussed, and there is no discussion in the decision of his Iqama (permit-holding) industry. It is also common practice in KSA for permits to be issued for one profession but for applicants to take on jobs in others, with the future possibility of switching.

There is a third issue I can think of: whether the percentages even mean much (for example, if an industry is going from 10 percent to 25 percent) if, ultimately, the expansion of hiring writ large would lead to increased hireability for both foreign and domestic workers. Khaleel arose during the pandemic, but I could see, in another context, an Applicant providing evidence that the percentage of Saudization itself is not determinative of the number of opportunities.

Of course, Madam Justice Elliot is not tasked in judicial review with stepping in as the Officer to re-evaluate the facts or evidence and decide for herself (Vavilov at para 83), but I am concerned that blanket acceptance of Officers’ ability to do their own research, without even citing its source, can very easily lead to misinformation – particularly as we head into the age of digital misinformation. Furthermore, as data is increasingly relied upon as the source of reasoning, it could also lead to the shielding of the actual impetus for a decision (internal statistics) behind boilerplate recitations of an Officer claiming to rely on country conditions. I also feel that, at a minimum, an Officer should mention – rather than leave implied – where local knowledge and experience has led to a specific finding.

This concern about boilerplate recitations was also expressed by Justice Sadrehashemi in Mundangepfupfu v. Canada (Citizenship and Immigration), 2022 FC 1220, who writes:

[18] The personal circumstances of Ms. Mundangepfupfu were not considered. It is not clear how the country conditions set out by the Officer would affect Ms. Mundangepfupfu, given her living conditions and family support that were described in her applications. The Officer failed to meaningfully account for and respond to key issues and evidence raised by the Applicants, as required (Vavilov at paras 127-128). I agree with the Applicants that this kind of boilerplate recitation of country conditions without an application to the personal circumstances of an applicant could provide the basis for refusing every application for temporary resident status made by a citizen of Zimbabwe. This approach is unreasonable. (emphasis)

Mundangepfupfu at para 18.

Is stating that Saudization applies to all applicants who have temporary resident status in KSA akin to boilerplate recitation? Or is it a reasonable application of an Officer’s local knowledge?

Now let’s assume the Officer actually received facts from the applicant proactively – disputing the application of Saudization – but the Officer still suggests that Saudization will limit the future opportunities of the applicant regardless of the facts, just as a broad application of Saudization.

Justice Roy states in Demyati v. Canada (Minister of Citizenship and Immigration) 2018 FC 701:

[16] A visa officer is certainly entitled to rely on common sense and rationality. As I have said before, we do not check common sense at the door when entering a courtroom. What is not allowed is to make a decision based on intuition or a hunch; if a decision is not sufficiently articulated, it will lack transparency and intelligibility required to meet the test of reasonableness. That, I am afraid, is what we are confronted with here.


[20] What appears to have been the most important factor in the refusal was the fact that the applicant is a Syrian national who has been living outside of Syria for most of his life. The decision-maker seems to have concluded that given the situation in his country of origin, he would not be inclined to go back to his country of nationality if his residence status in the United Arab Emirates were to change. […]


Guide de politique sur le soutien automatisé à la prise de décision version de 2021/Policy Playbook on Automated Support for Decision-making 2021 edition (Bilingual)

Hi Folks:

I wanted to share a copy of IRCC’s Policy Playbook on Automated Support for Decision-making. We learned from ATIP that even though there is language around the need to frequently update this document to adapt to the changing times and applications of automation and AI, it has not been updated since February 2021.

1A-2023-05333 – Policy Playbook on Automated Decision-Making February 2021 – bilingual

I think this is also the first time I have seen a bilingual version, which is very crucial, as one of the big critiques of IRCC’s Chinook 101 training materials was the apparent lack of (or lack of access to) a French version.

This document is foundational to our understanding of where IRCC believed they were going, at least as of two years ago. Is this document still good? Has the plan migrated to a new document?

Lots of questions raised, and so far few answers.

Still on the beat…..


The Time the Korean Church Congregation Came Out to Our Immigration Appeal

Created via DALL-E

Having not blogged on here for a while (admittedly struggling with writer’s block/half-written blogs – the usual), I wanted to take a short trip down memory lane through one of my more memorable cases.

I was representing an older Korean Appellant. He had gone through some traumatic injuries and, as a result, spent too much time with a family member in the United States (as a Green Card holder), thereby breaching the residency obligation. It was not an insignificant breach.

The case started off with strong documentary evidence. This was pre-amendments to the IAD Rules, which now make it even more crucial to ensure front-end evidence and letters are provided. We made a very strong paper-based case which supported what occurred at the hearing.

Remember, this was back when there were in-person hearings. It seems like a lifetime ago, but up on the 16th floor of 300 West Georgia there are several hearing rooms. Ironically, we were assigned the smallest one – I believe just a handful of seats, no more than eight, beyond the witness booth.

The case already had several witnesses lined up. The Appellant’s several children, his spouse, and even a best friend were willing to testify.

One of my strategies, which I think is not only effective but also very necessary, is to ensure the Appellant has enough time to present their case. Back then, many residency appeals were scheduled for only 2 hours. This was in large part due to backlogs, but also an assumption that removal order (residency obligation) cases were easier – required fewer witnesses, were less complex. This matter, contrary to that presumption, was quite complex, with many layers, a long history, a vulnerable person, and a narrative that needed time to tell through multiple witnesses.

However, at this hearing we also had another advantage – the entire Korean church congregation that the Appellant belonged to. The family had put the word out and, unexpectedly to me, twenty ajummas and ajusshis showed up at the hearing.

As the Member was about to address preliminary matters, he looked up and saw them all form a semi-circle around my client like a choir around a conductor. He saw that there were members of the congregation who could not even fit in the room and that the door was half propped open.

He respectfully gave everyone a chance to state their name, addressed everyone, and thanked them for coming out. He ultimately suggested that they could go home, as there was simply not enough space. After the room cleared out, he took in the voluminous disclosure, turned to the Minister, and in essence suggested that this appeared to be a very strong case on paper, asking whether the Minister still wanted to proceed.

The Minister was not ready to consent yet. We proceeded through direct examination and a cross-examination of the Appellant before we were able to reach a consent. Following this, I shared a lovely lunch with the family in Chinatown, nearby.

It was – to date – probably my most memorable IAD experience. It also goes to show something I often mentor young lawyers and practitioners on: the importance of the factual, and beyond that the visceral, argument. There is a role for compassion and humanity, even amidst the growing boilerplate application of laws and principles.

I wanted to share this story. Perhaps more to re-inspire myself more than anything else.


Cautious Concern But Missing Crucial Context – Justice Brown’s Decision in Haghshenas

After the Federal Court’s decision in Ocran v. MCI (Canada) 2022 FC 175, it was almost inevitable that we would be talking again about Chinook. Counsel (including ourselves) have been raising the use of Chinook and concerns about Artificial Intelligence in memorandums of argument and accompanying affidavits, arguing – for example – that much of the standard template language used falls short of the Vavilov standard and in many cases is non-responsive to, or unreflective of, the Applicant’s submissions.

We have largely been successful in getting cases consented to using this approach, yet I cannot say our overall success in resolving judicial reviews has followed suit. Indeed, recently we have been stuck at the visa office more often on re-opening than we have been in the past.

Today, the Federal Court rendered a decision that again engaged with Chinook and, in this case, also touched on Artificial Intelligence. Many took to Twitter and LinkedIn to express concern about bad precedent. Scholars such as Paul Daly also weighed in on Justice Brown’s decision, highlighting that there is simply a lot we do not know about how Chinook is deployed.

I might take a different view than many on this case. While I think it might be read (and could be pointed to as precedent by the Department of Justice) as a decision upholding the reasonableness and fairness of utilizing Chinook and AI, I also think there was no record that tied how the process affects the outcome – clearly the link that Justice Brown was concerned about.

Haghshenas v. Canada (MCI) 2023 FC 464

Mr. Haghshenas had his C-11 (LMIA-exempt) work permit refused on the basis that he would not leave Canada at the end of his authorized stay, pursuant to subsection 200(1) of the IRPR. It is interesting that in the Certified Tribunal Record, and specifically the GCMS notes, there is no mention of Chinook 3+ as is commonly disclosed now. However, there is the wording of Indicators (meaning risk indicators) as N/A and Processing Word Flag as N/A. These are Module 5 flags that make up one of the columns in the Chinook spreadsheet, so it is presumable that Chinook could have been used. However, we note that the screenshots that were part of the CTR do not appear to include the Chinook tab or any screenshot of what Chinook looked like. From the record, this lack of transparency about what tool was actually used did not appear to be challenged.

Ultimately, the refusal decision itself is actually quite personalized – not carrying the usual pure template characteristics of the Module 4 Refusal Notes generator. There is a personalized assessment of the actual business plan, the profits considered (and labelled speculative by the Officer), and concerns about whether registration under the licensed contractor process has been done. From my own experience, this decision seems quite removed from the usual Module 3 and perhaps suggests either that Chinook was not fully engaged OR that the functionality of Chinook has gotten much better, to the point where its use becomes blurred. It could reasonably be both.

In upholding the procedural fairness and reasonableness of the decision, Justice Brown does engage in two areas about a discussion of Chinook and AI.

In dismissing the Applicant’s argument on procedural fairness, Justice Brown writes:

[24] As to artificial intelligence, the Applicant submits the Decision is based on artificial intelligence generated by Microsoft in the form of “Chinook” software. However, the evidence is that the Decision was made by a Visa Officer and not by software. I agree the Decision had input assembled by artificial intelligence, but it seems to me the Court on judicial review is to look at the record and the Decision and determine its reasonableness in accordance with Vavilov. Whether a decision is reasonable or unreasonable will determine if it is upheld or set aside, whether or not artificial intelligence was used. To hold otherwise would elevate process over substance.

He writes later, under the reasonableness of decision, heading:

[28] Regarding the use of the “Chinook” software, the Applicant suggests that there are questions about its reliability and efficacy. In this way, the Applicant suggests that a decision rendered using Chinook cannot be termed reasonable until it is elaborated to all stakeholders how machine learning has replaced human input and how it affects application outcomes. I have already dealt with this argument under procedural fairness, and found the use of artificial intelligence is irrelevant given that (a) an Officer made the Decision in question, and that (b) judicial review deals with the procedural fairness and or reasonableness of the Decision as required by Vavilov.

Justice Brown appeared to be concerned with the Applicant’s failure to tie the process of utilizing artificial intelligence or Chinook to how it actually impacted the reasonableness or fairness of the decision. Justice Brown is looking at the final decision and correctly suggests: an Officer made it, and the Record justifies it. How it got from A to C is not the reviewable decision; what is reviewable is the A of the input provided to the Officer and the C of the Officer’s decision.

I want to ask about the missing B – the context.

It is interesting to note also, in looking at the Record, that the Respondent (Minister) did not engage in any discussion of Chinook or AI. The argument was solely raised by the Applicant – in two paragraphs in the written memorandum of argument and one paragraph in the reply. The Applicant’s argument, one rejected by Justice Brown, was that the uncertainty of the reliability, efficacy, and lack of communication created an uncertainty of how these tools were used, which ultimately impacted the fairness/reasonableness.

The Applicant captures these arguments in paragraphs 9, 10, and 32 of their memorandum, writing:

The nature of the decision and the process followed in making it

9. While the reason originally given to the Applicant was that the visa officer (the decision maker) believed that the Applicant would not leave Canada based on the purpose of visit, the reasons now given during these proceedings reveal that the background rationale of the decision maker does not support refusal based on purpose of visit. In fact, the application was delayed for nearly five months and in the end the decision was arrived at with the help of Artificial Intelligence technology of Chinook 3+. It is not certain as to what information was analysed by the aforesaid software and what was presented to the decision maker to make up a decision. It can be presumed that not enough of human input has gone into it, which is not appropriate for a complicated case involving business immigration. It is also not apt in view of the importance of the decision to the individual, who has committed a great deal of funds for this purpose. (emphasis added)

10. Chinook is a processing tool that it developed to deal with the higher volume of applications. This tool allows DMs to review applications more quickly. Specifically, the DM is able to pull information from the GCMS system for many applications at the same time, review the information and make decisions and generate notes using a built-in note generator, in a fraction of the time it previously took to review the same number of applications. It can be presumed that not enough human input has gone into it, which is not appropriate for a complicated case involving business immigration. In the case at hand, Chinook Module 5 – indicator management tool was used, which consists of risk indicators and local word flags. A local word flag is used to assist in prioritizing applications. It is left up to Chinook to search for these indicators and flags and create a report, which is then copy and pasted into GCMS by the DM. The present case is one that deserved priority processing being covered by GATS. Since the appropriate inputs may not have been fed into the mechanised processes of Chinook, which would flag priority in suchlike GATS cases, the DM’s GCMS notes read “processing priority word flag: N/A”. This is clearly wrong and betrays the fallout in using technology to supplant human input. The use of Chinook has caused there to be a lack of effective oversight on the decisions being generated. It is also not apt in view of the importance of the decision to the individual, who has committed a great deal of funds for this purpose (Baker supra). (emphasis added)

32. On the issue of Chinook, while it can be believed that faced with a large volume of cases, IRCC has been working to develop efficiency-enhancing tools to assist visa officers in the decision-making process. Chinook is one such tool. IRCC has been placing heavy reliance on it for more than a year now. However, as always with use of any technology, there are questions about its reliability and efficacy for the purpose it sets out to achieve. There are concerns about the manner in which information is processed and analysed. The working of the system is still unclear to the general public. A decision rendered using it cannot be termed reasonable until it is elaborated to all stakeholders to what extent has machine replaced human input and how it impacts the final outcome. The test set by the Supreme Court in Vavilov has not been met.

The Applicant appeared to be almost making an argument that the complexity of the Applicant’s case suggested Chinook should not have been used and that therefore a human should have reviewed it. However, there seemed to be a gap in engaging with both the fact that IRCC did not indicate it had used Chinook and the fact that the reasons actually were more than normally responsive to the facts. I think, also, the argument that a positive word flag should have been implemented but was not ultimately did not get picked up by the Court – but it lacked a record of affidavit evidence or a challenge to the CTR […]


OPINION: IRCC Should Prioritize Work-Permit Holding Self-Employed/Contractors Seeking PR via Express Entry

As the debate goes on over the changes to Express Entry allowing the Minister of Immigration, Refugees and Citizenship to tweak invitations and draws to target specific occupations or groups, I have a suggestion in case it does go that way.

The current system of Canadian Experience Class (“CEC”) and Federal Skilled Worker (“FSW”) (and how the points are divided up) is at odds with the way the economy and workforce are going, around the issue of self-employment/contract work. Anecdotally (I do not yet have stats on this), individuals are now more interested in the gig economy, the ability to pursue multiple opportunities, and working virtually. Many of these types of opportunities are provided on a contractor/self-employed basis.

Canada’s much-maligned self-employed program is limited both in scope (with a focus on athletics, arts, and music, requiring a certain level of cultural activity/world-class performance, plus farmers) and by excessive processing delays and a lack of regulation to ensure carry-through by successful applicants. Spots are few, and those clients of mine who have gone through the program recently have taken many years of precarious status to get to the finish line.

What I have seen a trend in at my offices, over the pandemic and into the post-pandemic period, is individuals who are self-employed/contractors in Canada – many doing work that does not meet the requirements of the self-employed program but is in other areas, such as research (on grants) or contractual work (as entrepreneurs and small business owners) – who are excluded from the CEC. While their work counts towards the FSW, because it does not count towards the Canadian Work Experience points, they often fall short in the draws.

If IRCC does choose this model of micro-managing and selecting occupations and subgroups, perhaps one group that could get early attention would be these individuals. They would not be hard to find in the system. Ask that applicants update their profiles to also include self-employed/contractual history in Express Entry, and to put it in the work history section (rather than just in personal history). Based on these submissions, scoop a portion of them through an FSW draw specifically aimed at those who have Canadian contractual/self-employment experience in the past three years.

I really hope we shed light on this group. A recent consultation I had was with a PhD researcher who has been in Canada since high school, but because they are performing their work (equivalent to full-time hours) on a grant rather than as an employee, they cannot get the extra points to be selected at current Comprehensive Ranking Score (“CRS”) thresholds. Because they are older (having chosen to go the PhD route), they also lose points for age. An individual like this is forced by our Economic immigration options to abandon the research they are doing – which significantly benefits Canada – in order to likely hold a survival skilled employment position for a year, only to return after becoming a PR. This defies logic and does not support our overall goal. Employers – I can even draw an example from my own legal industry – are increasingly relying on contractual arrangements to keep doors open, and indeed the flexibility of choosing hours and balancing hybridity (not to mention the potential tax benefits for contractors/self-employed individuals) makes these models attractive for those we contract with.

I hope we shed light on how we are falling short and find solutions to help this important subset of migrants seeking permanency and support in Canada.


Harvester: Why IRCC is Harvesting Your Submitted Application Documents With Their Latest Automation Tool


We have re-produced IRCC’s Harvester user guide from 2021 below (with additional redactions added to preserve passwords that were likely erroneously disclosed).

Harvester Program Guide_Redacted 2_Redacted FINAL


What is Harvester?

Per page 5 of the PDF, it is an automation tool that downloads eDOCs from GCMS and organizes (read: reorganizes) the file using clear, detailed names. The use of Harvester has improved productivity in pre-assessment by over 25% with minimal training.

Like Chinook (and compatible with Chinook), it also uses an Excel interface and Microsoft Access. Documents are harvested in silos, allowing an Officer to secure, control, and monitor access to a file. Reading between the lines, the use of Microsoft Access also allows all documentation to be displayed on one horizontal screen (to be used, alongside GCMS and Chinook, in a streamlined way). 7-zip is used to encrypt the documentation and, similar to Chinook, there is a deletion system after use. Importantly, there appear to be added security functions around who can access the documents and also a trail of records for auditing. I suspect that this could come in handy in future litigation with respect to whether documentation was considered or not. Some docs are excluded from Harvester – either purposely by an Officer, where the visa officer does not need to review said doc, OR where the harvest does not succeed. I was not able to glean from my reading when harvests are unsuccessful, but one must assume there would be some technical explanation.

Much like Chinook, it appears quite innocuous on its face. It speeds up assessment – heck, even I could use Harvester to download and save (automating the organization of) a file before I review it – tasks we often leave to legal assistants and case managers.

However, there may be more than meets the eye. We are getting a clearer picture of what the Officer actually sees in front of them when they render a decision: what the Chinook 3+ Platform looks like, and the various tools and prompts that may or may not be providing information to guide the decision being rendered. Harvester is another one.



I would love feedback from our readers to see if they have any ideas, but at this stage I am looking at a couple of major questions.

  1. Does the way we name and number our files mean anything anymore? We are often creative with the way we try to flag specific names or combine documents, but how does Harvester extract or parse this apart? Is Harvester used (usable) on all apps, or just select types that are already streamlined online?
  2. How meaningful is the ability to view documents in Microsoft Access? From my understanding, Harvester replaces the need to utilize other applications such as a PDF reader, Word, or an image viewer. What does that mean for the way an Officer scrolls through various documents? What other tools does Microsoft Access provide in this regard? (I’ve only watched a few online videos, so maybe some of the tech-minded can advise);
  3. Why are silos created for multiple applications? I am concerned, again, about this ability to string together various applications and harvest them all at the same time. Is there a purpose to this? It would make a lot of sense within a family of applicants, but why would multiple unrelated applications be harvested unless it’s simply to get the files ‘set up’ for review?

Would love for some of you to take a look at Harvester and let us know what you think!



Delaying Decisions on Post-Graduate Work Permit Refusals Has Cruel Implications + Creates Backlogs

(First of all – Happy New Year! I might not be very happy in this post, so I’ll get it out of the way first).

For the past month, we have been dealing with several inquiries from folks who have been refused Post-Graduate Work Permits (PGWPs).

These have mostly been for small administrative issues, such as a failure to send a proper final transcript, pay appropriate fees, or uploading issues. Some other cases are those where one semester that was not a final semester was part-time, usually due to some school scheduling or academic issue. Many had increasing mental health challenges due to COVID-19-related issues, such as the passing of family members and the need to travel back for those arrangements.

The crux of the problem is that these applications are being refused more than 180 days after the Applicant completes studies. Why is this significant? Well, even though an Applicant is able to restore their status within 90 days of losing it, at 180 days after the completion of studies the restoration-to-PGWP option ceases to exist. Applicants are required to apply for a PGWP within 180 days of completing studies.
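As a rough sketch (the rules are simplified, and the dates, function name, and scenario below are illustrative assumptions on my part – not legal advice), the timing math behind this trap can be illustrated:

```python
from datetime import date, timedelta

# Illustrative only: simplified rules and hypothetical dates.
# An applicant must apply for a PGWP within 180 days of completing studies;
# restoration of status is only available within 90 days of losing status.

def pgwp_application_deadline(completion_date: date) -> date:
    """Last day to apply (including re-applying via restoration) for a PGWP."""
    return completion_date + timedelta(days=180)

completion = date(2022, 6, 30)   # hypothetical completion of studies
deadline = pgwp_application_deadline(completion)
refusal = date(2023, 1, 15)      # refusal issued more than 180 days later

print(deadline)            # 2022-12-27
print(refusal > deadline)  # True: restoration to a PGWP is no longer possible
```

The point of the sketch is that once a refusal arrives after the 180-day mark, the 90-day restoration window is useless for PGWP purposes, no matter how trivial the underlying error was.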

Restoration therefore becomes meaningless as an option outside of the 180-day window. This leads to three types of applications flooding the system.

  1. Reconsiderations – many of which (time and time again, I find) fail to address the legal test for reconsideration as set out by IRCC and as I have discussed in this past blog.
  2. Temporary Resident Permits – we have been retained for several of these of late and, unfortunately, the wait is heading into the 8-month-plus range for a just-graduated student, which is simply not feasible for most.
  3. Unnecessary Returns to Studies, with Unclear Implications for Past Studies – many students go back to school, which makes sense (perhaps) going from a Diploma to a Bachelor’s, but for many who graduated from a Bachelor’s or higher, it really makes little sense to force them to take another program. These decisions are being made in a rush and finances are being secured urgently (with huge impact on families) – all to remedy a small administrative or one part-time-semester issue. It truly is overdoing things.

IRCC needs to urgently render timely decisions on PGWP applications – I would argue 90 days from a student’s completion of studies (i.e. less time if the student applies later) is an absolute maximum (freeing up another 90 days for restoration in a feasible timeframe). Given the use of Artificial Intelligence (“AI”) in this space, it should free up Officers to consider some of these cases where there may be an admin issue, to see if it can be addressed on reconsideration or by applying discretion, rather than leaving students in limbo. Right now, the Courts are taking the position that there is no discretion, so litigation is of limited use to force change.

If, in fact, the refusal of PGWPs is now a policy directive to try to tackle the backlog or filter the number of PGWP holders, perhaps this should be communicated. Students could then choose to transition out of classes back home, or return home after graduation, rather than stick around in limbo waiting for a TRP.

Too many mental health burdens are being carried by students who simply are going through things that students go through, such as taking part-time classes to better their education outcomes or to save money. Students are making honest mistakes while following confusing immigration application instructions. They should not be punished the way they currently are under our Canadian immigration system.

Agree? Disagree? Feel free to engage with me on Twitter or email me with your thoughts.

#intled #cdnimm



Five AI-Decision Making Questions We Need Answers To From IRCC

In this short post, I will canvass five relatively urgent questions we need collective answers to as we represent clients who are now being assessed by AI-built decision-making systems. For clarity, and to adopt IRCC’s status quo, I will not consider Chinook to be one of those systems, BUT it is clear Chinook interacts with AI, and the role of Chinook as it pertains to decisions becomes increasingly important, especially as advanced analytics skips eligibility assessment.

1) If IRCC is basing Advanced Analytics decisions on historical data, what historical data is being utilized? Does it represent a reasonable/ideal officer, and how can it be re-programmed?

How do we ensure it represents an ideal period (not a stressed/overburdened officer)? IRCC has been overburdened with applications for the last decade, having to create systems to shortcut decision-making, and has openly acknowledged its resource crunch. If historical data does not represent what we want for future processing, how can projections be changed? How, in practice, does bias get stripped or de-programmed out of data? We have seen positive impacts (for example, Nigerian study permit approval rates) since recent advocacy, but is that programmed in manually by a human, and how?

2) How does Advanced Analytics interact with Chinook?

In the past, Chinook was utilized for only a portion of cases – we understand, to both bulk approve and bulk refuse cases. If Advanced Analytics serves to provide auto-positive eligibility, why is Chinook even needed to sort the Applicant’s information to decide whether to approve or refuse? Is there a column in Chinook that allows an Officer to see if Eligibility has already been met (i.e. it was AA’d), thereby altering their application and use of Chinook? The fear is that Chinook becomes just a refusal tool and is no longer needed for approvals.

Furthermore, what does an Officer see when they have to perform an eligibility assessment? Are they given any information about data trends/key risk indicators/etc. that Advanced Analytics helped generate, presumably during triage? Is it something the Officer has to dig for in a separate module of Chinook, or is it displayed right in their face as they render a decision, to remind them?

Are Officers made aware if a case goes into manual review, for example as QA for an Automated Decision? How are those cases tracked?

3) What is the incentive to actually process a non-AA decision if AA decisions can be processed more accurately/quickly?

For those files that are triaged into the non-Green/Human bin, if it becomes a numbers game and the situation is no longer ‘first in, first out’, why even process the complex cases anymore? Why not fill the slots with newer AA/low-risk cases that will create fewer challenges, and just let decisions that are complicated or require human intervention sit for one or two years until the Applicant seeks a withdrawal? Other than mandamus, what remedies will Applicants have to resolve their cases? Is it simply about complaining hard enough to get pulled out of review, only for an eventual refusal? How do we ensure we do not refuse all Tier 2/3 cases as a matter of general practice as we get more Tier 1 applications in the door (likely from visa-exempt, Global North countries)?

4) What does counsel for the Department of Justice see in GCMS/Rule 9 Reasons versus what we see?

Usually, the idea of a tribunal record or GCMS is that it is a central record of an Applicant’s file, but with increasing redactions it is becoming less and less clear who has access to what information. Clients are triaged utilizing “bins”, but those bins are stripped from the GCMS notes we get. Are they also stripped for DOJ, or not? Right now, local word flags and risk indicators are stripped for applicants, but are they also stripped for DOJ? What about the audit trail that exists for each applicant, which we have not been able to obtain via ATIP?

Taking it a step further – what constitutes a Tribunal Record anymore? Is it only what was submitted by the Applicant and what is in the Officer’s final decision? I know my colleague, Steven Meurrens, has even started to obtain email records between Officers, but there is a lack of clarity on what the Tribunal Record consists of and whether it must necessarily include the audit trail, risk indicators, and local word flags. Should it include the algorithms?

How does one even make fettering arguments if we do not know what the Officer had access to before rendering a decision (i.e., how they may have been fettered)?

The other question becomes: how do we inform the judiciary about these systems? Does it go up as a DOJ-led reference (and who can intervene and be on the other side)? Strategic litigation will likely again be pursued on a case with weak facts. How do we ensure counsel on the other side is prepared, so they can not only fight back but provide a counternarrative to the judiciary on these issues?

5) Will the Triaging Rules ever be Made Public? 

Currently, from our understanding, the AI is quite basic. Key rules are inputted, and applications that meet the requirements go through a decision tree that leads to auto-eligibility approvals. However, as these AA programs adopt more machine-learning components, allowing them to sniff out new flags, new rules, and new issues, will there be any transparency around what the rules are? Should there be different treatment between rules on the security/intelligence/system-integrity side and more black-and-white rules, such as that only individual applicants can get Tier 1 processing, that applicants must not have had a previous refusal to benefit from X, or that holding a U.S. visa or a previous Canadian visa within the past ten years is a Tier 1 factor?
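To make the “key rules plus decision tree” idea concrete, here is a hypothetical sketch in Python. The rules, tier labels, and field names below are invented for illustration only (loosely echoing the examples in this paragraph); they are not IRCC’s actual triage criteria, which remain undisclosed:

```python
# Hypothetical illustration of rule-based triage of the kind described above.
# These rules and tier labels are invented for exposition; they are NOT
# IRCC's actual (undisclosed) triage criteria.
from dataclasses import dataclass

@dataclass
class Application:
    is_individual: bool             # not a family/group application
    has_previous_refusal: bool      # any prior refusal on file
    has_us_or_cdn_visa_10yr: bool   # U.S. visa or Canadian visa in past 10 years

def triage(app: Application) -> str:
    """Walk a simple decision tree down to a processing tier."""
    if not app.is_individual or app.has_previous_refusal:
        return "Tier 3"  # routed to full human review
    if app.has_us_or_cdn_visa_10yr:
        return "Tier 1"  # candidate for auto-eligibility approval
    return "Tier 2"      # partial human review

print(triage(Application(True, False, True)))   # prints "Tier 1"
```

Even a toy tree like this shows why the transparency question matters: a single boolean (a previous refusal, a group application) can silently divert a file away from the fast lane.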

If the ultimate goal is also to use these rules to shape processing (lowering the number of applications and raising approval rates), then presumably telling the public what these factors are, so that those without strong cases may be dissuaded from applying, could be of benefit.

Just some random Monday morning musings as we dig further. Stay tuned.

How Much More Likely is an SDS Study Permit to Get Approved Than a Non-SDS Study Permit? – A Stats Look

One of the common questions we get asked by applicants (and one around which rumours constantly fly) is whether it makes sense to pursue IRCC’s Student Direct Stream or just go the regular route.

I recently obtained data from an IRCC request that helps contextualize this question a bit. I decided (in the interest of making the data easier to understand) to look only at January to August 2022. This sample size necessarily limits our analysis, but I think it gives us a good microcosm to examine. January to August 2022 is not hindered (as much) by the COVID-19 restrictions of 2019–2020, and 2021 was for all intents and purposes a ‘straddle’ year.

This investigation is important because there have been rumours and allegations, for example, that India SDS is not worth the effort (and that locally decided non-SDS cases have a lower refusal rate), or that for Philippines applicants, SDS is a largely ineffective process.

Without further ado, here is the raw data. Note that (for purposes of visualization) I did not break down the actual numbers of applications and did not compute an overall average, because that depends on the underlying totals, which will take more time to calculate given the way the data was presented.

Approval % of SDS/NSE by Country of Residence/Citizenship (via IRCC CDO)

| Country | Jan-22 | Feb-22 | Mar-22 | Apr-22 | May-22 | Jun-22 | Jul-22 | Aug-22 |
|---|---|---|---|---|---|---|---|---|
| India | 72% | 67% | 69% | 64% | 60% | 55% | 57% | 62% |
| Nigeria | 61% | 60% | 68% | 76% | 91% | 92% | 63% | 91% |
| China | 86% | 61% | 58% | 77% | 83% | 83% | 76% | 88% |
| Philippines | 40% | 40% | 38% | 50% | 53% | 40% | 46% | 48% |
| Vietnam | 79% | 82% | 82% | 66% | 62% | 74% | 71% | 82% |
| Pakistan | 25% | 40% | 43% | 40% | 59% | 56% | 77% | 67% |

Approval % of Non-SDS/NSE by Country of Residence/Citizenship (via IRCC CDO)

| Country | Jan-22 | Feb-22 | Mar-22 | Apr-22 | May-22 | Jun-22 | Jul-22 | Aug-22 |
|---|---|---|---|---|---|---|---|---|
| India | 12% | 25% | 24% | 20% | 36% | 38% | 35% | 42% |
| Nigeria | 44% | 34% | 26% | 30% | 31% | 34% | 69% | 63% |
| China | 74% | 48% | 72% | 78% | 82% | 84% | 90% | 82% |
| Philippines | 57% | 58% | 55% | 82% | 75% | 84% | 77% | 76% |
| Vietnam | 58% | 51% | 59% | 79% | 72% | 61% | 81% | 55% |
| Pakistan | 24% | 17% | 44% | 17% | 36% | 48% | 37% | 38% |
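For readers who want to reproduce the comparison, here is a quick Python sketch that averages the monthly figures above for each country and prints the SDS-minus-non-SDS gap. Per the caveat earlier, this is an unweighted average of monthly rates, not a volume-weighted one:

```python
# Monthly approval rates (%) transcribed from the two tables above,
# Jan-22 through Aug-22. Unweighted averages only -- a proper average
# would need the underlying application volumes.
sds = {
    "India":       [72, 67, 69, 64, 60, 55, 57, 62],
    "Nigeria":     [61, 60, 68, 76, 91, 92, 63, 91],
    "China":       [86, 61, 58, 77, 83, 83, 76, 88],
    "Philippines": [40, 40, 38, 50, 53, 40, 46, 48],
    "Vietnam":     [79, 82, 82, 66, 62, 74, 71, 82],
    "Pakistan":    [25, 40, 43, 40, 59, 56, 77, 67],
}
non_sds = {
    "India":       [12, 25, 24, 20, 36, 38, 35, 42],
    "Nigeria":     [44, 34, 26, 30, 31, 34, 69, 63],
    "China":       [74, 48, 72, 78, 82, 84, 90, 82],
    "Philippines": [57, 58, 55, 82, 75, 84, 77, 76],
    "Vietnam":     [58, 51, 59, 79, 72, 61, 81, 55],
    "Pakistan":    [24, 17, 44, 17, 36, 48, 37, 38],
}

def mean(xs):
    return sum(xs) / len(xs)

# Positive gap = SDS outperformed non-SDS on average over Jan-Aug 2022.
gap = {c: mean(sds[c]) - mean(non_sds[c]) for c in sds}
for country, g in sorted(gap.items(), key=lambda kv: -kv[1]):
    print(f"{country:12s} {g:+.1f} points")
```

On these figures, the Philippines is the only country with a negative gap (SDS underperforming non-SDS), while India shows the largest positive gap, which lines up with the takeaways below.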

I have a few big takeaways:

  1. Philippines SDS is the only SDS stream with an approval rate that is significantly and consistently below non-SDS. LJ Dangzalan has been talking about this at length, and the numbers back it up;
  2. The India non-SDS rumour appears to be just that: a rumour. It may stem from select cases or the ‘overselling’ of local services, but the numbers do not support it;
  3. Pakistan SDS makes a big difference, and the last four months show it; and
  4. The Nigerian student advocacy (and the Nigerian Student Express) is trending well.


Is there anything else interesting in the data that catches your eye?

About Us
Will Tao is an Award-Winning Canadian Immigration and Refugee Lawyer, Writer, and Policy Advisor based in Vancouver. Vancouver Immigration Blog is a public legal resource and social commentary.
