In this short post, I will canvass five relatively urgent questions we collectively need answers to as we represent clients whose applications are now being assessed by AI-built decision-making systems. For clarity, and to adopt IRCC’s status quo position, I will not treat Chinook as one of those systems. But it is clear that Chinook interacts with AI, and Chinook’s role in decisions will become increasingly important, especially as Advanced Analytics allows eligibility assessment to be skipped.
1) If IRCC is basing Advanced Analytics decisions on historical data, what historical data is being utilized? Does it represent a reasonable/ideal officer, and how can it be re-programmed?
How do we ensure it represents an ideal period (not a stressed or overburdened officer)? IRCC has been overburdened with applications for the last decade, has had to create systems to shortcut decision-making, and has openly acknowledged its resource crunch. If historical data does not represent what we want for future processing, how can the projections be changed? How, in practice, does bias get stripped or de-programmed out of the data? We have seen positive impacts (for example, in Nigerian study permit approval rates) following recent advocacy, but is that change programmed in manually by a human, and how?
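None of this is public, but the core problem can be illustrated with a toy sketch (all data, field names, and thresholds here are invented, not IRCC's actual system): a triage rule "learned" from historical approval rates simply reproduces whatever skew, fair or unfair, exists in that history.

```python
# Hypothetical illustration only: invented data, not IRCC's actual system.
# A "rule" derived from historical approval rates inherits historical bias.

from collections import defaultdict

# Toy historical outcomes (country, approved?) from an imagined stressed period.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),   # 75% approved
    ("B", True), ("B", False), ("B", False), ("B", False), # 25% approved
]

def learn_approval_rates(records):
    """Compute per-country approval rates from historical decisions."""
    counts = defaultdict(lambda: [0, 0])  # country -> [approved, total]
    for country, approved in records:
        counts[country][0] += int(approved)
        counts[country][1] += 1
    return {c: approved / total for c, (approved, total) in counts.items()}

def triage(country, rates, threshold=0.5):
    """Route to the 'green' (auto-positive) bin only if the historical rate is high."""
    return "green" if rates.get(country, 0.0) >= threshold else "manual"

rates = learn_approval_rates(history)
# Otherwise-identical applications are binned differently purely because of history.
print(triage("A", rates))  # green
print(triage("B", rates))  # manual
```

De-biasing in this sketch would mean either curating the training window (an "ideal period") or manually overriding the learned rates, which is exactly the question: who does that, and how is it audited?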
2) How does Advanced Analytics interact with Chinook?
In the past, Chinook was utilized for only a portion of cases, we understand, to both bulk-approve and bulk-refuse. If Advanced Analytics serves to provide auto-positive eligibility, why is Chinook even needed to sort the Applicant’s information to decide whether to approve or refuse? Is there a column in Chinook that allows an Officer to see whether Eligibility has already been met (i.e. the case was AA’d), thereby altering how they apply and use Chinook? The fear is that Chinook becomes just a refusal tool and is no longer needed for approvals.
Furthermore, what does an Officer see when they have to perform an eligibility assessment? Are they given any information about data trends, key risk indicators, etc. that Advanced Analytics helped generate, presumably during triage? Is it something the Officer has to dig for in a separate module of Chinook, or is it displayed prominently as they render a decision?
Are Officers made aware if a case goes into manual review, for example as QA for an automated decision? How are those cases tracked?
3) What is the incentive to actually process a non-AA file if AA decisions can be processed more quickly and accurately?
For those files that are triaged to the non-Green/Human bin, if it becomes a numbers game and the situation is no longer ‘first in, first out’, why even process the complex cases anymore? Why not fill the slots with newer AA/low-risk cases that will create fewer challenges, and just let decisions that are complicated or require human intervention sit for one or two years until the Applicant seeks a withdrawal? Other than mandamus, what remedies will Applicants have to resolve their cases? Is it simply about complaining hard enough to get pulled out of review, only for an eventual refusal? How do we ensure we do not refuse all Tier 2/3 cases as a matter of general practice as we get more Tier 1 applications in the door (likely from visa-exempt, Global North countries)?
4) What does counsel for the Department of Justice see in GCMS/Rule 9 Reasons versus what we see?
Usually, the idea of a tribunal record or GCMS is that it is a central record of an Applicant’s file, but with increasing redactions, it is becoming less and less clear who has access to what information. Clients are triaged utilizing “bins,” but those bins are stripped from the GCMS notes we get. Right now, local word flags and risk indicators are also stripped for applicants, but are the bins and flags likewise stripped for DOJ? What about the audit trail that exists for each applicant, which we have not been able to obtain via ATIP?
Taking it a step further – what constitutes a Tribunal Record anymore? Is it only what was submitted by the Applicant and what is in the Officer’s final decision? I know my colleague Steven Meurrens has even started to obtain email records between Officers, but there is a lack of clarity on what the Tribunal Record consists of and whether it must necessarily include the audit trail, risk indicators, and local word flags. Should it include the algorithms?
How does one even try to make fettering arguments if we do not know what the Officer had access to before rendering a decision (i.e. how they may have been fettered)?
The other question becomes: how do we let the judiciary know about these systems? Does it go up as a DOJ-led reference (and who can intervene and be on the other side)? Strategic litigation will likely again be mounted on a weak-facts case. How do we ensure counsel on the other side is prepared for this, so they can not only fight back but provide a counternarrative to the judiciary on these issues?
5) Will the Triaging Rules ever be Made Public?
Currently, the AI is quite basic from our understanding. There are key rules inputted, and applications that meet the requirements go through a decision tree that leads to auto-eligibility approvals. However, as these AA programs adopt more machine-learning components, allowing them to sniff out new flags, new rules, and new issues – will there be some transparency around what the rules are? Should there be different treatment between rules that are more on the security/intelligence/system-integrity side versus more black-and-white rules, such as “only individual applicants can get Tier 1 processing,” “applicants must not have had a previous refusal to benefit from X,” or “holding a U.S. visa or a previous Canadian visa within the past ten years is a Tier 1 factor”?
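For illustration, the black-and-white rules described above can be sketched as a simple decision tree. All rule names and tiers here are hypothetical, paraphrased only from the examples in this post, not IRCC's actual triage rules:

```python
# Hypothetical sketch of rule-based triage; the rules are invented examples
# paraphrased from this post, not IRCC's actual rules.

def triage_tier(app: dict) -> int:
    """Return a triage tier: 1 = low risk / auto-eligible, 3 = full manual review."""
    # Hard stops: these cases go straight to the human bin.
    if not app.get("individual_applicant", False):
        return 3
    if app.get("previous_refusal", False):
        return 3
    # Positive factor from the post: U.S. or Canadian visa in the past ten years.
    if app.get("held_us_or_canadian_visa_past_10_years", False):
        return 1
    return 2  # everything else gets a closer look

print(triage_tier({"individual_applicant": True,
                   "previous_refusal": False,
                   "held_us_or_canadian_visa_past_10_years": True}))  # 1
print(triage_tier({"individual_applicant": True,
                   "previous_refusal": True}))  # 3
```

Security and intelligence indicators would be harder to publish, but a tree of black-and-white rules like this could arguably be disclosed without compromising system integrity – which is exactly the distinction the transparency question turns on.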
If the ultimate goal is also to use these rules to affect processing (lowering the number of applicants and raising approval rates), then presumably telling the public what these factors are, so that those without a strong case may be dissuaded from applying, could be of benefit.
Just some random Monday morning musings as we dig further. Stay tuned.