Cautious Concern But Missing Crucial Context – Justice Brown’s Decision in Haghshenas


After the Federal Court's decision in Ocran v. MCI (Canada), 2022 FC 175, it was almost inevitable that we would be talking again about Chinook. Counsel (including ourselves) have been raising the use of Chinook and concerns about Artificial Intelligence in memorandums of argument and accompanying affidavits, arguing – for example – that much of the standard template language used falls short of the Vavilov standard and in many cases is non-responsive or unreflective of the Applicant's submissions.

We have largely been successful in getting cases consented to using this approach, yet I cannot say our overall success in resolving judicial reviews has followed suit. Indeed, recently we have been getting stuck at the visa office re-opening stage more than we have been in the past.

Today, the Federal Court rendered a decision that again engaged with Chinook and, in this case, also touched on Artificial Intelligence. Many took to Twitter and LinkedIn to express concern about bad precedent. Scholars such as Paul Daly also weighed in on Justice Brown's decision, highlighting that there is simply a lot we do not know about how Chinook is deployed.

I might take a different view than many on this case. While I think it might be read (and could be pointed to as precedent by the Department of Justice) as a decision upholding the reasonableness and fairness of utilizing Chinook and AI, I also think there was no record that tied the process to how it affects the outcome – clearly the link that Justice Brown was concerned about.

Haghshenas v. Canada (MCI) 2023 FC 464

Mr. Haghshenas had his C-11 (LMIA-exempt) work permit refused on the basis that he would not leave Canada at the end of his authorized stay, pursuant to subsection 200(1) of the IRPR. It is interesting that in the Certified Tribunal Record, and specifically the GCMS notes, there is no mention of Chinook 3+ as is commonly disclosed now. However, there is the wording of Indicators (meaning risk indicators) as N/A and Processing Word Flag as N/A. These are Module 5 flags, which make up one of the columns in the Chinook spreadsheet, so it is presumable that Chinook could have been used. However, we do note that the screenshots that were part of the CTR do not appear to include the Chinook tab or any screenshot of what Chinook looked at. From the record, this lack of transparency about what tool was actually used did not appear to be challenged.
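To make those "N/A" entries concrete: a word-flag column is, at bottom, the output of a keyword scan. Here is a purely hypothetical Python sketch – IRCC has not published Chinook's code, and the flag list, function name, and logic below are invented for illustration only – of how such a column could read "N/A" simply because no configured keyword matched the inputs it was given.

```python
# Hypothetical illustration only – IRCC has not published Chinook's code.
# The flag list and logic are invented to show the concept of a "word flag"
# column that reads "N/A" when no configured keyword matches the inputs.

WORD_FLAGS = ["urgent", "medical", "GATS"]  # invented, illustrative list

def word_flag_column(application_text: str) -> str:
    """Return matched flag words as a column value, or 'N/A' if none match."""
    text = application_text.lower()
    matches = [flag for flag in WORD_FLAGS if flag.lower() in text]
    return ", ".join(matches) if matches else "N/A"

# If the triage inputs never mention a configured keyword, the column shows
# "N/A" – regardless of whether the flag *should* have applied to the case.
print(word_flag_column("C-11 work permit application, business plan attached."))  # N/A
print(word_flag_column("Applicant relies on GATS commitments."))  # GATS
```

If something like this sits behind the column, an "N/A" says nothing about whether the right inputs or keyword lists were ever configured for the case – which is precisely the gap the Applicant alleged but, as discussed below, could not prove on this record.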

Ultimately, the refusal decision itself is actually quite personalized – it does not carry the usual pure-template characteristics of the Module 4 Refusal Notes Generator. There is a personalized assessment of the actual business plan, the profits considered (and labelled speculative by the Officer), and concerns about whether registration under the licensed contractor process had been done. From my own experience, this decision seems quite removed from the usual Module 3 and perhaps suggests either that Chinook was not fully engaged OR that the functionality of Chinook has gotten much better, to the point where its use becomes blurred. It could reasonably be both.

In upholding the procedural fairness and reasonableness of the decision, Justice Brown does engage with Chinook and AI in two places.

In dismissing the Applicant’s argument on procedural fairness, Justice Brown writes:

[24] As to artificial intelligence, the Applicant submits the Decision is based on artificial intelligence generated by Microsoft in the form of “Chinook” software. However, the evidence is that the Decision was made by a Visa Officer and not by software. I agree the Decision had input assembled by artificial intelligence, but it seems to me the Court on judicial review is to look at the record and the Decision and determine its reasonableness in accordance with Vavilov. Whether a decision is reasonable or unreasonable will determine if it is upheld or set aside, whether or not artificial intelligence was used. To hold otherwise would elevate process over substance.

He writes later, under the heading on the reasonableness of the decision:

[28] Regarding the use of the “Chinook” software, the Applicant suggests that there are questions about its reliability and efficacy. In this way, the Applicant suggests that a decision rendered using Chinook cannot be termed reasonable until it is elaborated to all stakeholders how machine learning has replaced human input and how it affects application outcomes. I have already dealt with this argument under procedural fairness, and found the use of artificial intelligence is irrelevant given that (a) an Officer made the Decision in question, and that (b) judicial review deals with the procedural fairness and or reasonableness of the Decision as required by Vavilov.

Justice Brown appeared to be concerned with the Applicant's failure to tie the process of utilizing artificial intelligence or Chinook to how it actually impacted the reasonableness or fairness of the decision. Justice Brown is looking at the final decision and correctly suggests that an Officer made it and the Record justifies it. How it got from A to C is not the reviewable decision; what is reviewable is the A of the input provided to the Officer and the C of the Officer's decision.

I want to ask about the missing B – the context.

It is also interesting to note, in looking at the Record, that the Respondent (Minister) did not engage in any discussion of Chinook or AI. The argument was raised solely by the Applicant – in two paragraphs of the written memorandum of argument and one paragraph of the reply. The Applicant's argument, one rejected by Justice Brown, was that the uncertain reliability and efficacy of these tools, and the lack of communication about them, created uncertainty about how they were used, which ultimately impacted the fairness/reasonableness of the decision.

The Applicant captures these arguments in paragraphs 9, 10, and 32 of their memorandum, writing:

The nature of the decision and the process followed in making it

9. While the reason originally given to the Applicant was that the visa officer (the decision maker) believed that the Applicant would not leave Canada based on the purpose of visit, the reasons now given during these proceedings reveal that the background rationale of the decision maker does not support refusal based on purpose of visit. In fact, the application was delayed for nearly five months and in the end the decision was arrived at with the help of Artificial Intelligence technology of Chinook 3+. It is not certain as to what information was analysed by the aforesaid software and what was presented to the decision maker to make up a decision. It can be presumed that not enough of human input has gone into it, which is not appropriate for a complicated case involving business immigration. It is also not apt in view of the importance of the decision to the individual, who has committed a great deal of funds for this purpose. (emphasis added)

10. Chinook is a processing tool that it developed to deal with the higher volume of applications. This tool allows DMs to review applications more quickly. Specifically, the DM is able to pull information from the GCMS system for many applications at the same time, review the information and make decisions and generate notes using a built-in note generator, in a fraction of the time it previously took to review the same number of applications. It can be presumed that not enough human input has gone into it, which is not appropriate for a complicated case involving business immigration. In the case at hand, Chinook Module 5 – indicator management tool – was used, which consists of risk indicators and local word flags. A local word flag is used to assist in prioritizing applications. It is left up to Chinook to search for these indicators and flags and create a report, which is then copy and pasted into GCMS by the DM. The present case is one that deserved priority processing being covered by GATS. Since the appropriate inputs may not have been fed into the mechanised processes of Chinook, which would flag priority in suchlike GATS cases, the DM's GCMS notes read "processing priority word flag: N/A". This is clearly wrong and betrays the fallout in using technology to supplant human input. The use of Chinook has caused there to be a lack of effective oversight on the decisions being generated. It is also not apt in view of the importance of the decision to the individual, who has committed a great deal of funds for this purpose (Baker supra). (emphasis added)

32. On the issue of Chinook, while it can be believed that faced with a large volume of cases, IRCC has been working to develop efficiency-enhancing tools to assist visa officers in the decision-making process. Chinook is one such tool. IRCC has been placing heavy reliance on it for more than a year now. However, as always with use of any technology, there are questions about its reliability and efficacy for the purpose it sets out to achieve. There are concerns about the manner in which information is processed and analysed. The working of the system is still unclear to the general public. A decision rendered using it cannot be termed reasonable until it is elaborated to all stakeholders to what extent has machine replaced human input and how it impacts the final outcome. The test set by the Supreme Court in Vavilov has not been met.

The Applicant appeared to be almost making an argument that the complexity of the Applicant's case suggested Chinook should not have been used and therefore a human should have reviewed it. However, there seemed to be a gap in engaging with both the fact that IRCC did not indicate it had used Chinook and the fact that the reasons were actually more responsive to the facts than usual. I think, also, that the argument that a positive word flag should have been applied but was not ultimately did not get picked up by the Court – it lacked a record of affidavit evidence, or a challenge to the CTR, to obtain supporting evidence.

In hindsight, to fight this case, an Applicant arguing Chinook/AI was inappropriately applied needs to educate the Court on how (assuming proof can be obtained) their client was prejudiced by the use of the tool, what elements of the tool applied to the Applicant's case, or how the Officer's discretion was possibly fettered by the inputs that were put before them – specifically tying it to the reasons that were generated. The challenging thing is, without knowledge (or education for the Court) of whether this application was triaged into Tiers and how, if at all, Chinook was applied to generate the reasons here, there was no factual context on which to lay the argument.

I think the Applicant here, by arguing both that Chinook and AI's application was unclear and that it should have prioritized him, went too far down the path of asking the Court to play policymaker and substitute what should have been done, rather than emphasizing dismantling and breaking down the actual process of how a decision is rendered, for the Court's understanding and possible substantive/procedural concern.

However, again, this decision does not display the terminology that suggests the usual bulk processing (no factual analysis, pure template language), so ultimately it may have been the wrong case in which to launch this systemic argument.

The Applicant also highlighted Chinook in paragraph 32 of their reply, going back to the argument that the working of the system was unclear, its reliability and efficacy unknown, and that therefore the decision could not have been reasonable.

Finally, in paragraph 18 of their reply, the Applicant doubled down on the concerns about the software, arguing for an onus on the Respondent to explain and justify the use of the tool. He writes:

As supplemental response to the Respondent's Memorandum of Arguments, it is submitted that the Artificial Intelligence tool of Chinook has been relied on to pass the decision. It is not clear to what extent there was involvement of the said software in the decision-making process. The contentions and concerns of the applicant as regards the use of this software have not been adequately addressed by the respondent. Since it is the respondent that is using the technology, the onus is on it to explain its methodology. It has to be clarified by the respondent if the visa officer was only assisted, or supplanted by this technology. What appears on the face of it is that the AI tool has done the entire processing and the visa officer has simply rephrased the findings in his/her reasons. (emphasis added)

I think it would have been helpful to engage with the materials that IRCC has made available on the use of Chinook. IRCC has put out information on Chinook (https://www.canada.ca/en/immigration-refugees-citizenship/corporate/transparency/committees/cimm-feb-15-17-2022/chinook-development-implementation-decision-making.html), and there are the Daponte Affidavit from Ocran, the cross-examination on it, and various ATIPs and training manuals whose contents create concerns about the factual record. In my opinion, this has to be entered as affidavit evidence in order to even begin educating the Court – even if it creates more documentation than the Court may want to engage with until after a leave decision.

Even in posting about Chinook, IRCC has, I believe, opened up the place where battles are to be fought – at the Certified Tribunal Record (CTR) stage, over what constitutes a complete CTR.

  • In the Ocran test case chosen to address the issue of completeness of Certified Tribunal Records (CTR) in Chinook cases, the Court dismissed the application for judicial review on February 10, 2022. The Court did not rule on the CTR issue specifically, as it was not in direct dispute between the parties. (emphasis added)

The Department of Justice tried to argue in Ocran that there would be deficiencies because of the way Chinook captures multiple clients' information and how working notes are deleted and cleared – and this is where disputes over whether the Applicant has full, meaningful access to the Record and to the relevant information that was before the Officer, or before the machine triaging the case, should be fought.

Ultimately, the argument that I am not seeing right now is the one on fettering of discretion. If an Officer handles a case and knows (even if they are not told) that they have to perform certain tasks suggesting it is a medium/high-risk (Tier 2/3 or Standard Bin) case, how does that impact their analysis? What data or directives are presented to the Officer within the tools used that may impact their decision-making?

Ultimately, this needs to be fought through affidavits, CTR challenges, possibly even actions – but on an evidence-based footing, rather than purely on a policy objection to the use of these tools. The latter, humbly, is an inevitability we will have to deal with.

I hope that, as immigration counsel, we can collaborate moving forward. We are always eager and willing to help Counsel develop the record and consider unique ways to challenge automated decision-making systems. However, we have to be careful: if we argue it, we have to have the right facts, a solid underlying factual record, and the preparedness to go all the way.
