
What is an AI Hype Cycle and How Is it Relevant to Canadian Immigration Law?

Recently I have been reading and learning more about AI Hype Cycles.

I first learned this term from Professor Kristen Thomasen when she gave a guest lecture for our Legal Methodologies graduate class and discussed it with respect to her own research on drone technology and the challenge of writing and researching during hype cycles. Since then, in almost every AI-related seminar I have attended, the term has come up in connection with the current buzz and attention being paid to AI. For example, Timnit Gebru, in her talk for the GC Data Conference I recently attended, noted that a lot of what is being repackaged as new AI today is the same 'big data' work she studied many years back. For my own research, understanding hype cycles is important to grounding my work in more principled and foundational approaches, so that I can write about and explore changes in technology while doing slow scholarship, notwithstanding shifting public discourse and the legislative and regulatory changes that might follow.

A good starting point for understanding hype cycles, especially in the AI market, is the Gartner Hype Cycle. For those who have not heard the term yet, I would recommend checking out the following video:

Gartner reviews technological hype cycles through five phases: (1) innovation trigger; (2) peak of inflated expectations; (3) trough of disillusionment; (4) slope of enlightenment; and (5) plateau of productivity.

It is interesting to see how Gartner has labelled the current cycles:

One of the most surprising things to me on first view is how automatic systems and decision intelligence are still at the innovation trigger – the earliest phase of the hype cycle. The other is how many different types of AI technology are on the hype cycle, and how few of them the general public actually knows or engages with. I would suggest at most 50% of this list is in the vocabulary and use of even the most educated folks. From a layperson's perspective (which I consider myself to hold on AI), I also find it challenging to classify whether certain AI concepts fit one category or another, or are a hybrid. This suggests that societal knowledge of AI is low, even for some of the items that are purportedly on the Slope of Enlightenment or the Plateau of Productivity.

It is important to note, before I move on, that the term AI Hype Cycle has also been used outside of the Gartner definition, in a more critical sense, to describe technologies in a 'hype' phase whose attention will eventually ebb and flow. A great article on this, and how it affects AI definitions, is the piece by Eric Siegel in the Harvard Business Review on how the hype around supervised machine learning has been rebranded into a hype around AI and spun into a push for Artificial General Intelligence that may or may not be achievable.


Relevance to the Immigration Law Space

The hype cycle is relevant to Canadian immigration law in a variety of ways.

First, on its face: Gartner is a contracting partner of IRCC, which means it is probably bringing the hype cycle into its work and its advice to the Department.

Second, it raises the question of how much AI-based automated decision-making (ADM) is still at the beginning of the hype cycle. Using this framework, it makes sense why these systems are being so heralded by the Government in its policy guides and presentations, but also why there could be a peak of inflated expectations on the horizon that may lead to more hybrid decision-making, or perhaps a step back from use.

The other question is whether we are (and I am a primary perpetrator of this) overly focused on automated decision-making systems without considering the larger AI supply chain they will likely interact with. Jennifer Cobbe et al. discuss this in their paper "Understanding accountability in algorithmic supply chains," which was assigned reading in my Accountable Computer Systems course. There are many different AI components, providers, downstream/upstream uses, and actors that may be involved in the AI development and application process.

Using immigration as an example, there may be one third-party SaaS that checks photos, another piece of black-box AI software that performs facial recognition, and, ultimately, internal software that does machine-learning triaging or automates the generation of refusal notes. The question of how we hold these systems and their outputs accountable will be important, especially if various components of the system are at different stages of the hype cycle, or are not disclosed in the final decision to the end user (or immigration applicant).
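To make this supply-chain concern concrete, here is a minimal sketch in Python. The component names, providers, and outputs are entirely hypothetical – this illustrates the accountability problem, not IRCC's actual systems:

```python
from dataclasses import dataclass

@dataclass
class ComponentResult:
    """Output of one link in a hypothetical AI supply chain."""
    provider: str    # who built/operates this component
    disclosed: bool  # is this step visible in the final decision?
    output: dict

def photo_check(application: dict) -> ComponentResult:
    # Hypothetical third-party SaaS that validates photo specifications.
    return ComponentResult("ThirdPartyPhotoSaaS", False, {"photo_ok": True})

def face_match(application: dict) -> ComponentResult:
    # Hypothetical black-box facial recognition vendor.
    return ComponentResult("BlackBoxFRVendor", False, {"match_score": 0.93})

def ml_triage(application: dict, upstream: list) -> ComponentResult:
    # Hypothetical internal machine-learning triage consuming upstream outputs.
    all_clear = all(r.output.get("photo_ok", True) for r in upstream)
    return ComponentResult("InternalTriageModel", False,
                           {"tier": 1 if all_clear else 3})

application = {"applicant": "example"}
chain = [photo_check(application), face_match(application)]
chain.append(ml_triage(application, chain))

# The applicant sees only the end result; none of the providers or
# intermediate outputs in `chain` are disclosed in the final decision.
print(chain[-1].output)
```

Even in this toy version, the final output gives no hint of which provider produced which signal – which is exactly the accountability gap Cobbe et al. describe.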

Third, I think the idea of hype cycles is very relevant to my many brave colleagues who are investing their time and energy into building their own AI tools or implementing software solutions for private-sector applicants. The hype cycle may give some guidance as to the innovation they are trying to bring and the timeframe they have to make a splash in the market. Furthermore, immigration (as a dynamic and rapidly changing area of law) and immigrants (as perhaps needing different considerations with respect to technological use, access, or norms) may have their own considerations that could alter Gartner's timelines.

It will be very interesting to continue to monitor how AI hype cycles drive both private and public innovation in this emerging space of technologies that will significantly impact migrant lives.


Harvester: Why IRCC is Harvesting Your Submitted Application Documents With Their Latest Automation Tool


We have reproduced IRCC's Harvester user guide from 2021 below (with additional redactions added to preserve passwords that were likely erroneously disclosed).

Harvester Program Guide_Redacted 2_Redacted FINAL


What is Harvester?

Per page 5 of the PDF, Harvester is an automation tool that downloads eDOCs from GCMS and organizes (read: reorganizes) the file using clear, detailed names. According to the guide, the use of Harvester has improved productivity in pre-assessment by over 25% with minimal training.

Like Chinook (and compatible with Chinook), it uses an Excel interface and Microsoft Access. Documents are harvested in silos, allowing an Officer to secure, control, and monitor access to a file. Reading between the lines, the use of Microsoft Access also allows all documentation to be displayed on one horizontal screen (to be used alongside GCMS and Chinook in a streamlined way). 7-Zip is used to encrypt the documentation and, similar to Chinook, there is a deletion system after use. Importantly, there appear to be added security functions governing who can access the documents, as well as a trail of records for auditing. I suspect this could come in handy in future litigation over whether documentation was considered or not. Some documents are excluded from Harvester – either purposely by an Officer, where the visa officer does not need to review the document, OR where the harvest does not succeed. I was not able to glean from my reading when harvests are unsuccessful, but one must assume there is some technical explanation.
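Based purely on what the guide describes – download, rename with detailed names, silo per application, encrypt with 7-Zip, delete after use – a Harvester-like run might look something like the following sketch. Every function, folder, and naming convention here is my own guess for illustration; the guide does not disclose implementation details (and running this requires the 7z command-line tool to be installed):

```python
import shutil
import subprocess
from pathlib import Path

def harvest(app_id: str, edocs: list, silo_root: Path) -> Path:
    """Hypothetical reconstruction of a Harvester-style run for one file:
    copy eDOCs into a per-application silo, rename them with clear,
    detailed names, encrypt the silo with 7-Zip, then delete the originals."""
    silo = silo_root / app_id  # documents are siloed per application
    silo.mkdir(parents=True, exist_ok=True)

    for i, doc in enumerate(edocs, start=1):
        # "Clear detailed names": application number + sequence + doc type.
        shutil.copy2(doc, silo / f"{app_id}_{i:02d}_{doc.stem}{doc.suffix}")

    # The guide says 7-Zip is used for encryption; -p sets a password.
    archive = silo_root / f"{app_id}.7z"
    subprocess.run(["7z", "a", "-pREDACTED", str(archive), str(silo)],
                   check=True)

    # Like Chinook, working copies are deleted after use.
    shutil.rmtree(silo)
    return archive
```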

Much like Chinook, it appears quite innocuous on its face. It speeds up assessment – heck, even I could use Harvester to download and automatically organize a file before I review it, tasks we often leave to legal assistants and case managers.

However, there may be more than meets the eye. We are getting a clearer picture of what the Officer actually sees in front of them when they render a decision: what the Chinook 3+ platform looks like, and the various tools and prompts that may or may not be providing information to guide the decision being rendered. Harvester is another piece of that picture.


Takeaways

I would love feedback from our readers to see if they have any ideas, but at this stage I am looking at a few major ones.

  1. Does the way we name and number our files mean anything anymore? We are often creative with the way we try to flag specific names or combine documents, but how does Harvester extract or parse this apart? Is Harvester used (or usable) on all applications, or just select types that are already streamlined online?
  2. How meaningful is the ability to view the documents in Microsoft Access? From my understanding, Harvester replaces the need to use other applications such as a PDF reader, Word, or an image viewer. What does that mean for the way an Officer scrolls through various documents? What other tools does Microsoft Access provide in this regard? (I have only watched a few online videos, so maybe some of the tech-minded can advise.)
  3. Why are silos created for multiple applications? I am concerned, again, about this ability to string together various applications and harvest them all at the same time. Is there a purpose to this? It would make sense within a family of applicants, but why would multiple unrelated applications be harvested unless it is simply to get the files 'set up' for review?

Would love for some of you to take a look at Harvester and let us know what you think!



A Closer Look at How IRCC’s Officer and Model Rules Advanced Analytics Triage Works

As IRCC ramps up to bring advanced analytics to all of its Lines of Business (LOBs), it is important to take a closer look at the foundational model: the China TRV application process. Indeed, we know that this TRV model will become the TRV model for the rest of the world sometime this year (if not already).

While this chart is from a few years back – reflecting, as I have discussed in many recent presentations and podcasts, how far behind we are in this area – my understanding is that this three-Tier system is still the model in place.

Over the next few posts, I'll try to break down the model in more detail.

This first post will serve as an overview of the process.

I have included a helpful chart explaining how an application goes from intake to a decision and passport request.

While I will have blog posts that go into more detail about what 'Officer Rules' and 'Model Rules' are, here is the basic gist. A reminder: this chart only represents the process to approval, NOT refusal, and a similar chart for refusals was not provided.

Step 1) Officer's Rules Filter Out Applications Based on Visa Office-Specific Criteria

Each Visa Office has its own Officer's Rules. If an application triggers one of those rules, it is no longer processed via the Advanced Analytics/AI (AA/AI) model. Think of it as a first filter, likely for those complex files that need a closer look by IRCC.

You will recall, from our discussion of Chinook, the presence of "local word flags" and "risk indicators." I have no evidence yet linking these two pieces together, but presumably the Officer's Rules must also be triggered by certain words and flags.

Beyond this, we are uncertain about what Officer's Rules are, and we should not expect to know. However, we do know that the SOPs (Standard Operating Procedures) at each Visa Office then apply, rather than the AA/AI model. This suggests the SOPs (and access to those documents) may hold the triggers for the word flags.

Step 2) Application of Model Rules

This is where the AA/AI kicks in. Model Rules (which I will discuss in a future blog post) are created by IRCC data experts to separate applications into Tiers with high confidence. Tier 1 applications are those that, to a high level of confidence, should lead to positive eligibility findings. Indeed, Tier 1 applications are decided with no human in the loop; the computer system approves them. If an application is likely to fail the eligibility process and lead to a negative outcome, it goes to Tier 3. Tier 3 requires Officer review and – unsurprisingly – has the highest refusal rate, as we have discussed in this previous piece.

It is the files that fall between positive and negative (the 'maybe' files), as well as those that fit neither the Model Rules nor the Officer's Rules, that become Tier 2. Officers also have to review these cases, but the approval rates are better than Tier 3's.

Step 3) Quality Assurance

For the Quality Assurance portion of this model, 10% of all files are filtered to Tier 2 to verify the accuracy of the model.
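Pulling Steps 1 through 3 together, here is a minimal sketch of the triage logic as I understand it from the chart. The rule contents, the confidence thresholds, and the exact QA sampling mechanics are all my assumptions – IRCC has not disclosed any of these:

```python
import random

# Step 1: hypothetical visa-office-specific Officer's Rules. The real rules
# are not disclosed; word flags and risk indicators are my assumption.
OFFICER_RULES = [
    lambda app: app.get("local_word_flag", False),
    lambda app: app.get("risk_indicator", False),
]

def triage(app: dict, model_score: float) -> str:
    """Assign an application to manual SOP processing or a Tier.
    model_score is a hypothetical eligibility confidence in [0, 1]."""
    # Step 1: Officer's Rules pull complex files out of the AA/AI stream;
    # those files are processed under the visa office's SOPs instead.
    if any(rule(app) for rule in OFFICER_RULES):
        return "SOP review (outside the model)"

    # Step 2: Model Rules split the rest into Tiers (thresholds invented).
    if model_score >= 0.95:
        tier = "Tier 1 (automated eligibility approval, no human in the loop)"
    elif model_score <= 0.30:
        tier = "Tier 3 (Officer review; highest refusal rate)"
    else:
        tier = "Tier 2 (Officer review; the 'maybe' files)"

    # Step 3: QA reportedly routes 10% of all files to Tier 2 so the model's
    # tiering can be checked against human review (mechanics are a guess).
    if random.random() < 0.10:
        return "Tier 2 (QA sample)"
    return tier

print(triage({"local_word_flag": False}, model_score=0.97))
```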

The models themselves become 'production models' when a high level of confidence is met and they are finalized – such as the ones we have seen for China TRV and India TRV, what we believe are also China and India Study Permits, and likely programs such as VESPA (though this part has not been confirmed). Before it becomes a Production Model, a model sits in the exploratory model zone.

How do we know there is a high level of QA confidence? Well, this is where we look at the scoring of the file.

I will break this particular model down later (frankly, I need to do more research on it), and it will be the subject of a later piece, but the gist is that applications are scored to ensure the model is working effectively.

It is interesting that Chinook also has a QA function (a whole QA module, Module 6), so it appears there is even more overlap between the two systems – probably akin to a front-end/back-end type relationship.

Step 4) Pre-Assessment

Tier 1 applications go straight to admissibility review, but those in Tiers 2 and 3 go to a pre-assessment review by a Clerk.

It is important to note, here and in the module, that these clerks and officers appear to be sitting in CPC-O (the Case Processing Centre in Ottawa), not the local visa offices abroad. This may also explain why so many more decisions are being made by Canada-based decision-makers, even though a decision may ultimately be delivered by, or associated with, a primary visa office abroad.

But herein lies a bit of our confusion.

Based on a 2018 ATIP we did, we know that IRCC was triaging cases by case type into "Bins," so that certain officers – or at least certain lettered numbers – would handle like cases. Yet this appears to have been the India model at the time; the China TRV model seems to centralize things more in Ottawa. Where does the local knowledge and expertise come in? Are there alternative models now that send decisions to the local visa office, or is it only the Officer's Rules? Is this perhaps why decisions rendered on TRVs from India and China lack the actual local knowledge we used to see – because they have been taken out of the hands of those individuals?

Much of the local work used to involve verifying employers and confirming certain elements; is that now reserved only for files taken out of the triage and flagged as possible admissibility concerns? Much to think about here.

Again, note that Chinook has a pre-assessment module as well, which seems to be responsible for many of the same things. Perhaps Chinook is also responsible for presenting the results of that analysis in a more Officer-friendly way – but why is it also directing the pre-assessment, if pre-assessment is being done by officers?

Step 5) Eligibility Assessment

What is important to note at this stage is that eligibility, where there is no automated approval, is still being assessed by Officers. What we do not know is whether there is any guidance directing Officers to approve or refuse a certain number of Tier 2 or Tier 3 applicants. This information would be crucial. We also know IRCC is trying to automate refusals, so we need to track carefully what that might look like down the road as it intersects with negative eligibility assessments.

Step 6) Admissibility Review + Admissibility Hits

While this will likely be the last portion to be automated, given the need to cross-verify many different sources, we also know that IRCC has programs in place such as Watchtower – again, the Risk Flags – which may or may not trigger admissibility review. Interestingly, even cases where admissibility (misrepresentation) seems to be at play appear to lead to eligibility refusals or concerns. I would be interested in knowing whether the flagging system also operates at the eligibility level, or whether there is a feedback/pushback system so a decision can be re-routed to eligibility (on an A16 IRPA issue, for example).

KEY: Refusals Not Reflected in Chart

What does the refusal system look like? This becomes another key question, as decisions often skip even the biometrics or verification stages and go straight to refusal. The refusal chart would obviously look much more complicated, with probably many more steps at which a refusal can be rendered without the full eligibility assessment being completed.

Is there a similar map? Can we get access to it?


Conclusion – we know nothing yet, but this also changes everything

This model, and this idea of an application being taken off the assembly line at various places and run through different systems of assessment, really suggests to my mind that we, as applicants' counsel, know very little about how our applications will be processed in the future. These systems do not support cookie-cutter lawyering, suggest flags may be outside of our control and knowledge, and ultimately lead us to question what, and who, makes up a perfect Tier 1 application.

Models like this also give credence to IRCC's determination to keep things private and to keep the algorithms and code away from prying investigators and researchers – and, ultimately, from those who may want to take advantage of the systems.

Yet the lack of transparency, and our concerns about how these systems filter and sort, appear well founded. Chinook mirrors much of what is in the AA model. We have our homework cut out for us.


Chinook is AI – IRCC’s Own Policy Playbook Tells Us Why

One of the big debates around Chinook is whether or not it is Artificial Intelligence (“AI”). IRCC’s position has been that Chinook is not AI because there is a human ultimately making decisions.

In this piece, I will show how the engagement of a human in the loop is a red herring, and also how the debate skews the real issue: that automation, whether for business functions only or to help administer an administrative decision, can have adverse impacts if unchecked by independent review.

The main source for my argument that Chinook is AI is IRCC itself – the Policy Playbook on Automated Support for Decision-Making (2021). This is an internal document, updated yearly, that likely captures the most accurate 'behind the scenes' snapshot of where IRCC is heading. More on that in future pieces.

AI’s Definition per IRCC

The first, and most important, thing is to start with the definition of Artificial Intelligence within the Playbook.

The first thing you will notice is that Artificial Intelligence is defined very broadly by IRCC, which seems to cut against the narrow definition it paints when discussing Chinook.

Per IRCC, AI is:

If you think of Chinook as dealing with the cognitive problem of attempting to issue bulk refusals – utilizing computer science (technology) applied to learning, problem solving, and pattern recognition – it is hard to imagine that the system would even be needed if it weren't AI.

Emails among IRCC staff actively discuss the use of Chinook to monitor approval and refusal rates utilizing "Module 6."

Looking at the Chinook Modules themselves, Quality Assurance ("QA") is built in as a module. It is hard to imagine a QA system that looks at refusal and approval rates and automates processes, yet is not AI.

As this article points out:

Software QA is typically seen as an expensive necessity for any development team; testing is costly in terms of time, manpower, and money, while still being an imperfect process subject to human error. By introducing artificial intelligence and machine learning into the testing process, we not only expand the scope of what is testable, but also automate much of the testing process itself.

Given the volume of files IRCC is dealing with, it is unlikely that the QA process relies only on humans and not technology (else why would Chinook be implemented?). And if it involves technology and automation (a word that shows up multiple times in the Chinook manual) to aid the monitoring of a subjective administrative decision – guess what – it is AI.

We also know that Chinook is underpinned by ways to process data, look at historical approval and refusal rates, and flag risks. It also integrates with Watchtower to review the risk profile of applicants.

It is important to note that the Daponte Affidavit in Ocran – which, alongside ATIPs, is the only information we have about Chinook – focuses on the first five modules. Without knowledge of the true nature of something like Module 7, titled 'ToolBox,' it is certainly premature to label the whole system as not AI.


Difficult to Argue Chinook is Purely Process Automation Given the Degree of Judgment Exercised by the System in Setting Up Findecs (Final Decisions)

Where IRCC might be trying to carve out a distinction is between process automation/digital transformation and automated decision support systems.

One could argue, for example, that most of Chinook is process automation.

For example, the very underpinning of Chinook is that it makes the entire application available to the Officer in one centralized location, without opening the many windows that GCMS requires. Data points and fields auto-populate from an application and GCMS into the Chinook software, allowing the Officer to render decisions more easily. We get this. It is not debatable.
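As a rough illustration of what pure process automation looks like, here is a sketch that merely flattens fields from several hypothetical GCMS screens into a single row for display. The screen and field names are invented; the point is that no judgment is exercised – data is only re-arranged:

```python
def flatten_for_display(gcms_screens: dict) -> dict:
    """Hypothetical process automation: merge fields that GCMS spreads
    across many windows into one flat row for a spreadsheet interface.
    Nothing here weighs in on the decision itself."""
    row = {}
    for screen, fields in gcms_screens.items():
        for name, value in fields.items():
            row[f"{screen}.{name}"] = value
    return row

# One consolidated row instead of many separate GCMS windows.
print(flatten_for_display({
    "applicant": {"name": "Example", "age": 34},
    "travel_history": {"prior_visits": 2},
}))
```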

But does it cross into an automated decision support system? Is there some degree of judgment, traditionally exercised by humans, that is passed on to technology when Chinook is applied?

As IRCC defines it:

Chinook directly assists an Officer in approving or refusing a case. Officers do have to apply discretion in refusing, but Chinook presents and automates the process. Furthermore, it has fundamentally reversed the decision-making process, making it a decide-first, justify-later approach with the refusal notes generator. Chinook simply does not exist without AI generating the framework, setting up the bulk categories, and automating an Officer's logical reasoning process.

These systems replace the process of Officers needing to manually review documents, render a final decision, and take notes to file justifying that decision. It should be noted that this is still the process at low-volume/Global North visa offices, where decisions are made this way and are reflected in extensive GCMS notes.

In Chinook, any notes taken are hidden and deleted by the system, and a template of bulk refusal reasons auto-populates, replacing and shielding the actual factual context of the matter from scrutiny.
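Even a simple template-driven sketch shows how decide-first, justify-later works in practice: the Officer ticks refusal categories, and boilerplate text – not the file's facts – becomes the justification. The categories and wording below are my own paraphrase of commonly seen refusal-letter language, not Chinook's actual reasons bank:

```python
# Hypothetical reasons bank; Chinook's actual refusal wording is not public.
REASONS_BANK = {
    "purpose_of_visit": ("I am not satisfied that the applicant will leave "
                         "Canada at the end of their stay, based on the "
                         "purpose of the visit."),
    "personal_assets": ("I am not satisfied that the applicant will leave "
                        "Canada, based on their personal assets and "
                        "financial status."),
}

def generate_refusal_notes(ticked: list) -> str:
    """Decide first, justify later: the Officer selects categories and the
    notes are assembled from boilerplate. The factual context of the file
    never enters the record."""
    return "\n".join(REASONS_BANK[key] for key in ticked)

print(generate_refusal_notes(["purpose_of_visit", "personal_assets"]))
```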

It is hard to see how this is not AI. Indeed, if you look at the comparables provided – the eTA, Visitor Record, and Study Permit Extension automations in GCMS – similar GCMS automations underpin Chinook. There may be a little more human interaction, but as discussed below, a human monitoring or implementing an AI/advanced analytics/triage system does not remove the AI elements.


Human in the Loop is Not the Defining Feature of AI

The defence we have been hearing from IRCC is that a human is ultimately making the decision, therefore it cannot be AI.

This obscures a different concept called human-in-the-loop, which the Policy Playbook suggests actually needs to be part of all automated decision-making processes. If you are following along, this means the defence – a human is involved, therefore it is not AI – points to something IRCC itself treats as a key defining requirement of its AI systems.

It is important to note that there is certainly a spectrum of AI application at IRCC, one that appears to be leaning away from human-in-the-loop. For example, IRCC has disclosed in its Algorithmic Impact Assessment ("AIA") for the Advanced Analytics Triage of Overseas Temporary Resident Visa ("TRV") Applications that there is no human in the loop for the automation of Tier 1 approvals. The same no-human-in-the-loop approach is used for automating eligibility approvals in the Spouse-in-Canada program, which I will write about shortly.


Why the Blurred Line Between Process Automation and Automated Decision-Making Process Should Not Matter – Both Need Oversight and Review

Internally, this is an important distinguishing characteristic for IRCC because it appears that internal/behind-the-scenes strategizing and oversight (if that is what the Playbook represents) applies only to automated decision-support systems and not to business automations. Presumably, such a classification allows for less review and more autonomy for the end user (the Visa Officer).

From my perspective, we should focus on the last part of what IRCC states in its Playbook – namely, that 'staff should consider whether automation that seems removed from final decisions may inadvertently contribute to an approval or a refusal.'

To recap and conclude: the whole purpose of Chinook is to render approvals and refusals in a quicker, bulk fashion to save Officers time. Automation of all functions within Chinook therefore contributes to the final decision – not inadvertently, but directly. The very manner in which immigration decisions are made shifts as a result of the use of Chinook.

Business automation cannot and should not be used as cover for the ways in which seemingly routine automations actually affect processing that would previously have been done by humans – providing Officers a particular type of data, displayed on screen in a manner that can fetter their discretion and alter the business of old.

That use of computer technology – the creation of Chinook – is 100% definable as the implementation of AI.

