Artificial Intelligence


What is an AI Hype Cycle and How Is it Relevant to Canadian Immigration Law?

Recently I have been reading and learning more about AI Hype Cycles.

I first learned this term from Professor Kristen Thomasen when she gave a guest lecture for our Legal Methodologies graduate class and discussed it with respect to her own research on drone technology and writing/researching during hype cycles. Since then, in almost every AI-related seminar I have attended, the term has come up with respect to the current buzz and attention being paid to AI. For example, Timnit Gebru, in her talk for the GC Data Conference which I recently attended, noted that a lot of what is being repackaged as new AI today is the same work in ‘big data’ that she studied many years back. For my own research, it is important to understand hype cycles so that I can ground my work in more principled and foundational approaches – writing about and exploring changes in technology while doing slow scholarship, notwithstanding shifting public discourse and the legislative/regulatory changes that might follow.

A good starting point for understanding hype cycles, especially in the AI market, is the Gartner Hype Cycle. For those who have not yet heard the term, I would recommend checking out the following video:

Gartner reviews technological hype cycles through five phases: (1) innovation trigger; (2) peak of inflated expectations; (3) trough of disillusionment; (4) slope of enlightenment; and (5) plateau of productivity.
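
For readers who find a concrete representation helpful, the five phases can be modelled as a simple ordered progression. This is purely my own illustrative sketch in Python – the phase names are Gartner’s, but nothing else here comes from Gartner:

```python
from enum import IntEnum

class HypeCyclePhase(IntEnum):
    """Gartner's five phases, in the order a technology moves through them."""
    INNOVATION_TRIGGER = 1
    PEAK_OF_INFLATED_EXPECTATIONS = 2
    TROUGH_OF_DISILLUSIONMENT = 3
    SLOPE_OF_ENLIGHTENMENT = 4
    PLATEAU_OF_PRODUCTIVITY = 5

def next_phase(phase: HypeCyclePhase) -> HypeCyclePhase | None:
    """Return the phase that follows, or None once the plateau is reached."""
    if phase == HypeCyclePhase.PLATEAU_OF_PRODUCTIVITY:
        return None
    return HypeCyclePhase(phase + 1)

# A technology at the peak still has the trough ahead of it:
print(next_phase(HypeCyclePhase.PEAK_OF_INFLATED_EXPECTATIONS))
# -> HypeCyclePhase.TROUGH_OF_DISILLUSIONMENT
```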

It is interesting to see how Gartner has labelled the current cycles:

One of the most surprising things to me on first view is that automated systems and decision intelligence are still at the innovation trigger – the earliest phase of the hype cycle. The other is how many different types of AI technology appear on the hype cycle and how few of them the general public actually knows or engages with. I would suggest that at most 50% of this list is in the vocabulary and use of even the most educated folks. I also find, from a layperson’s perspective (which I consider mine on AI), that there are challenges in classifying whether certain AI concepts fit one category or another or are a hybrid. This suggests that societal knowledge of AI is low, even for some of the items that are purportedly on the Slope of Enlightenment or Plateau of Productivity.

It is important to note, before I move on, that the term AI Hype Cycle has also been used outside of the Gartner definition, in a more critical sense, to describe technologies in a ‘hype’ phase whose attention will eventually ebb and flow. A great article on this, and on how it affects AI definitions, is the piece by Eric Siegel in the Harvard Business Review on how the hype around supervised machine learning has been rebranded into a hype around AI and spun into a push for Artificial General Intelligence that may or may not be achievable.


Relevance to the Immigration Law Space

The hype cycle is relevant to Canadian immigration law in a variety of ways.

First, on its face, Gartner is a contracting partner of IRCC, which means it is probably bringing the hype cycle into its work and its advice to the Department.

Second, it again raises the question of how AI-based automated decision-making (ADM) systems are still at the beginning of the hype cycle. Utilizing this framework, it makes sense why these systems are so heralded by Government in its policy guides and presentations, but also why there could be a peak of inflated expectations on the horizon that may lead to more hybrid decision-making or perhaps a step back from use.

The other question is whether we are (and I am a primary perpetrator of this) overly focused on automated decision-making systems without considering the larger AI supply chain they will likely interact with. Jennifer Cobbe et al. discuss this in their paper “Understanding accountability in algorithmic supply chains,” which was assigned reading in my Accountable Computer Systems course. There are many different AI components, providers, downstream/upstream uses, and actors that may be involved in the AI development and application process.

Using immigration as an example, there may be one third-party SaaS tool that checks photos, another piece of software using black-box AI that engages in facial recognition, and ultimately internal software that does machine-learning triaging or automates the generation of refusal notes. The question of how we hold these systems and their outputs accountable will be important, especially if various components of the system are at different stages of the hype cycle or are not disclosed in the final decision to the end user (or immigration applicant).
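
To make the supply-chain concern concrete, here is a minimal sketch of how several independently sourced components might feed one refusal decision. Every name and field here is hypothetical; nothing is drawn from IRCC’s actual systems:

```python
from dataclasses import dataclass

@dataclass
class ComponentOutput:
    """One link in a hypothetical AI supply chain, with its provenance."""
    component: str   # which tool produced this output
    vendor: str      # who supplies/operates the tool
    disclosed: bool  # is its involvement visible in the final decision?
    result: dict

def photo_check(app: dict) -> ComponentOutput:
    # Hypothetical third-party SaaS photo validity check.
    return ComponentOutput("photo_check", "third-party SaaS", False, {"photo_ok": True})

def face_match(app: dict) -> ComponentOutput:
    # Hypothetical black-box facial recognition component.
    return ComponentOutput("face_match", "black-box vendor", False, {"match_score": 0.97})

def ml_triage(upstream: list[ComponentOutput]) -> ComponentOutput:
    # Hypothetical internal triage that consumes outputs it did not produce.
    scores = {o.component: o.result for o in upstream}
    risk = "low" if scores["face_match"]["match_score"] > 0.9 else "review"
    return ComponentOutput("ml_triage", "internal", True, {"risk": risk})

app = {"applicant_id": "A-123"}
chain = [photo_check(app), face_match(app)]
chain.append(ml_triage(chain))

# The accountability gap: which links would an applicant ever see?
print([o.component for o in chain if not o.disclosed])  # ['photo_check', 'face_match']
```

The point of the sketch is that the final, disclosed triage output depends on upstream components whose vendors, behaviour, and very existence never surface in the decision itself.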

Third, I think the idea of hype cycles is very relevant to my many brave colleagues who are investing their time and energy into building their own AI tools or implementing software solutions for private sector applicants. The hype cycle may give some guidance as to the innovation they are trying to bring and the timeframe they have to make a splash in the market. Furthermore, immigration (as a dynamic and rapidly changing area of law) and immigrants (as perhaps needing different considerations with respect to technological use, access, or norms) may have their own considerations that could alter Gartner’s timelines.

It will be very interesting to continue to monitor how AI hype cycles drive both private and public innovation in this emerging space of technologies that will significantly impact migrant lives.


Why the 30-Year-Old Florea Presumption Should Be Retired in the Face of Automated Decision-Making in Canadian Immigration

In the recent Federal Court decision of Hassani v. Canada (Citizenship and Immigration), 2023 FC 734, Justice Gascon writes a paragraph that I thought would be an excellent starting point for a blog. Not only does it capture the state of administrative decision-making in immigration and highlight some of the foundational pieces, but there is one part of it that, I may respectfully suggest, needs a rethink.

Hassani involved an Iranian international student who was refused a study permit to attend a Professional Photography program at Langara College. She was refused on two factors: [1] that she did not have significant family ties outside Canada, and [2] that her purpose of visit was not consistent with a temporary stay given the details she had provided in her application. On the facts, it is definitely questionable that this case even went to hearing, given the Applicant had no family ties in Canada and all her family ties were indeed outside Canada, in Iran. Nevertheless, Justice Gascon did a very good job analyzing the flaws within the Officer’s two findings.

There is one paragraph, paragraph 26, that is worth breaking down further – and there is one foundational principle cited in it that I think needs a major rethink.

Justice Gascon writes:

[26] I do not dispute that a decision maker is generally not required to make an explicit finding on each constituent element of an issue when reaching its final decision. I also accept that a decision maker is presumed to have weighed and considered all the evidence presented to him or her unless the contrary is shown (Florea v Canada (Minister of Employment and Immigration), [1993] FCJ No 598 (FCA) (QL) at para 1). I further agree that failure to mention a particular piece of evidence in a decision does not mean that it was ignored and does not constitute an error (Cepeda-Gutierrez v Canada (Minister of Citizenship and Immigration), 1998 CanLII 8667 (FC), [1998] FCJ No 1425 (QL) [Cepeda-Gutierrez] at paras 16–17). Nevertheless, it is also well established that a decision maker should not overlook contradictory evidence. This is particularly true with respect to key elements relied upon by the decision maker to reach its conclusion. When an administrative tribunal is silent on evidence clearly pointing to an opposite conclusion and squarely contradicting its findings of fact, the Court may intervene and infer that the tribunal ignored the contradictory evidence when making its decision (Ozdemir v Canada (Minister of Citizenship and Immigration), 2001 FCA 331 at paras 9–10; Cepeda-Gutierrez at para 17). The failure to consider specific evidence must be viewed in context, and it will lead to a decision being overturned when the non-mentioned evidence is critical, contradicts the tribunal’s conclusion and the reviewing court determines that its omission means that the tribunal disregarded the material before it (Penez at paras 24–25). This is precisely the case here with respect to Ms. Hassani’s family ties in Iran. (emphasis added)


What is the Florea Presumption?

As stated by Justice Gascon, the principle in Florea v Canada (Minister of Employment and Immigration), [1993] FCJ No 598 (FCA) pertains to a Tribunal’s weighing of evidence and the presumption that it has considered all the evidence before it. It puts the onus on the Applicant asserting otherwise to establish the contrary.

As the Immigration and Refugee Board Legal Services chapter on Weighing Evidence states:

Rather, the panel is presumed on judicial review to have weighed and considered all of the evidence before it, unless the contrary is established. (see: https://irb.gc.ca/en/legal-policy/legal-concepts/Documents/Evid%20Full_e-2020-FINAL.pdf)

This case and its principle are often cited in refugee matters, humanitarian and compassionate grounds matters, inadmissibility cases, and IRB matters.

Reviewing case law from the last two years (since 2021), I found a handful of cases among the thirty I reviewed that engaged this case and principle in a temporary resident context.

See e.g.: study permit JR – Marcelin v. Canada (Citizenship and Immigration), 2021 FC 761 at para 16, Madam Justice Roussel [JR dismissed]; PNP work permit – Shang v. Canada (Citizenship and Immigration), 2021 FC 633 at para 65, citing Basanti v Canada (Citizenship and Immigration), 2019 FC 1068 at para 24, Madam Justice Kane [JR allowed]; minor child TRV refusal – Dardari v. Canada (Citizenship and Immigration), 2021 FC 493 at para 39, adding the portion “and is not obliged to refer to each piece of evidence submitted by the applicant”, Madam Justice St-Louis [JR dismissed].

Related to this is the long-standing and oft-cited decision of Cepeda-Gutierrez v. Canada (Citizenship and Immigration), [1998] FCJ No 1425 (FC), in which Justice Evans reiterated that an agency stating it considered all the evidence before it (even as a boilerplate statement) would usually suffice to assure the parties and the Court of this. He writes:

[16] On the other hand, the reasons given by administrative agencies are not to be read hypercritically by a court (Medina v. Canada (Minister of Employment and Immigration) (1990), 12 Imm. L.R. (2d) 33 (F.C.A.)), nor are agencies required to refer to every piece of evidence that they received that is contrary to their finding, and to explain how they dealt with it (see, for example, Hassan v. Canada (Minister of Employment and Immigration) (1992), 147 N.R. 317 (F.C.A.)). That would be far too onerous a burden to impose upon administrative decision-makers who may be struggling with a heavy case-load and inadequate resources. A statement by the agency in its reasons for decision that, in making its findings, it considered all the evidence before it, will often suffice to assure the parties, and a reviewing court, that the agency directed itself to the totality of the evidence when making its findings of fact.

(emphasis added)


Why the Florea Presumption Should Be Reversed For Temporary Resident Applications and Any Decision Utilizing Advanced Analytics/AI/Chinook/Cumulus/Harvester

My argument is that this presumption that all evidence has been considered, as well as the boilerplate template language stating that it was considered, should not apply universally in 2023.

We still do not know enough about the system writ large, but we know enough to say that systems such as Chinook were created to facilitate the processing of temporary resident applications in hundreds of seconds, to extract data into Excel tables for bulk processing, and to automate eligibility approvals. These were done specifically to allow Officers to spend less time and to consider enough, not all, of the evidence before them when rendering a decision.

I think the fact that applications are being auto-approved for eligibility, simply on a set of rules inputted primarily based on an applicant’s biometric information, should be enough to raise concerns about whether the systems even require consideration of most of the evidence an applicant submits.
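
As a thought experiment only – the actual rule set is not public – an automated eligibility pass of this kind reduces to something like the sketch below, where an approval can issue without a single supporting document being opened:

```python
# A deliberately crude sketch of rules-based eligibility triage.
# The fields and rules are hypothetical; IRCC's actual criteria are not public.

def auto_eligibility(application: dict) -> str:
    rules = [
        application.get("biometrics_match") is True,
        application.get("prior_refusals", 0) == 0,
        application.get("passport_valid") is True,
    ]
    # Note what is absent: nothing here requires reading the study plan,
    # the financial documents, or the submission letter.
    return "eligibility passed" if all(rules) else "route to officer review"

print(auto_eligibility({"biometrics_match": True, "prior_refusals": 0, "passport_valid": True}))
# -> eligibility passed, with zero documents considered
```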

All the materials on bulk processing that IRCC has released in the past few years have focused on the fact that not all documents need to be reviewed (note the wording that states: review Additional Documents, as required).


IRCC Officer Training Guide Obtained Through ATIP


IRCC Visa Office Training Guide Obtained Through ATIP

If you look at the Daponte Affidavit and the original Module 3 prompt that was created, it does not add confidence that all documents necessarily needed to be reviewed:

Daponte Affidavit from Ocran

We learned that, in response to concerns, a prompt was added to Chinook reminding Officers to review all materials, but it is clear Chinook has gone far beyond ‘review and initial assessment’ into bulk processing.

Even with Cumulus, it is clear that documents not converted to e-Docs have to be pulled up separately in GCMS – the very tedious process that tools such as Cumulus seek to avoid.

Cumulus Training Manual Obtained Through ATIP

I would presume that it is much easier for an Officer to make a decision based on these summary extractions than to go into the documents themselves.
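
A rough sketch of what such a summary view surfaces – and what it leaves unopened – might look like this (every field below is invented for illustration):

```python
# Hypothetical sketch: the officer-facing "summary" is a flat row of fields,
# while the underlying documents sit unopened unless deliberately pulled up.
application = {
    "name": "A. Applicant",
    "age": 29,
    "program": "Professional Photography",
    "documents": ["study_plan.pdf", "bank_statements.pdf", "submission_letter.pdf"],
}

summary_row = {k: v for k, v in application.items() if k != "documents"}
print(summary_row)  # what a spreadsheet-style view shows the Officer
print(f"{len(application['documents'])} documents not opened in this view")
```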

Cumulus Training Guide Obtained Through ATIP

The documents are viewed, as shown below, in something much more akin to a ‘preview’ mode.

Cumulus Training Guide Obtained Through ATIP

Harvester, a tool that facilitates the conversion of documents into a reviewable format, is similarly limited by what documents can be extracted.

Harvester User Guide Obtained Via ATIP

Based on the way it is described, and on how some offices can exclude certain documents, it already suggests that not all documents make it into the purview of the Officer.

Most importantly, there is the constraint of time. As Andrew Koltun has uncovered, IRCC spends 101 seconds on average per application when processing with Chinook: https://theijf.org/nearly-40-per-cent-of-student-visa-applications-from-india-rejected-for-vague-reasons#

Respectfully, 101 seconds is enough to consider only one or two documents – at most – before rendering a decision. To take a hypothetical but conservative figure, if a study permit application contains ten supporting documents, 101 seconds works out to roughly ten seconds per document. The future use […]


Cautious Concern But Missing Crucial Context – Justice Brown’s Decision in Haghshenas

After the Federal Court’s decision in Ocran v. MCI (Canada), 2022 FC 175, it was almost inevitable that we would be talking again about Chinook. Counsel (including ourselves) have been raising the use of Chinook and concerns about Artificial Intelligence in memorandums of argument and accompanying affidavits, arguing, for example, that much of the standard template language used falls short of the Vavilov standard and in many cases is non-responsive to the Applicant’s submissions.

We have largely been successful in getting cases consented to using this approach, yet I cannot say our overall success in resolving judicial reviews has followed suit. Indeed, recently we have been stuck at the visa office more on re-opening than we have been in the past.

Today, the Federal Court rendered a decision that again engaged with Chinook and, in this case, also touched on Artificial Intelligence. Many took to Twitter and LinkedIn to express concern about bad precedent. Scholars such as Paul Daly also weighed in on Justice Brown’s decision, highlighting that there is simply a lot we do not know about how Chinook is deployed.

I might take a different view than many on this case. While I think it might be read (and could be pointed to as precedent by the Department of Justice) as a decision upholding the reasonableness and fairness of utilizing Chinook and AI, I also think there was no record tying how the process affects the outcome – clearly the link that Justice Brown was concerned about.

Haghshenas v. Canada (MCI), 2023 FC 464

Mr. Haghshenas had his C-11 (LMIA-exempt) work permit refused on the basis that he would not leave Canada at the end of his authorized stay, pursuant to subsection 200(1) of the IRPR. It is interesting that in the Certified Tribunal Record, and specifically the GCMS notes, there is no mention of Chinook 3+ as is commonly disclosed now. However, there is the wording of Indicators (meaning risk indicators) as N/A and Processing Word Flag as N/A. These are Module 5 flags that make up one of the columns in the Chinook spreadsheet, so it is presumable that Chinook could have been used. However, we do note the screenshots that were part of the CTR do not appear to include the Chinook tab or any screenshot of what Chinook looked at. From the record, this lack of transparency on what tool was actually used did not appear to be challenged.

Ultimately, the refusal decision itself is actually quite personalized – it does not carry the usual pure template characteristics of the Module 4 refusal notes generator. There is a personalized assessment of the actual business plan, the profits considered (and labelled speculative by the Officer), and concerns about whether registration under the licensed contractor process had been done. From my own experience, this decision seems quite removed from the usual Module 3 output and perhaps suggests either that Chinook was not fully engaged OR that the functionality of Chinook has gotten much better, to the point where its use becomes blurred. It could reasonably be both.

In upholding the procedural fairness and reasonableness of the decision, Justice Brown does engage in a discussion of Chinook and AI in two areas.

In dismissing the Applicant’s argument on procedural fairness, Justice Brown writes:

[24] As to artificial intelligence, the Applicant submits the Decision is based on artificial intelligence generated by Microsoft in the form of “Chinook” software. However, the evidence is that the Decision was made by a Visa Officer and not by software. I agree the Decision had input assembled by artificial intelligence, but it seems to me the Court on judicial review is to look at the record and the Decision and determine its reasonableness in accordance with Vavilov. Whether a decision is reasonable or unreasonable will determine if it is upheld or set aside, whether or not artificial intelligence was used. To hold otherwise would elevate process over substance.

He writes later, under the reasonableness of the decision heading:

[28] Regarding the use of the “Chinook” software, the Applicant suggests that there are questions about its reliability and efficacy. In this way, the Applicant suggests that a decision rendered using Chinook cannot be termed reasonable until it is elaborated to all stakeholders how machine learning has replaced human input and how it affects application outcomes. I have already dealt with this argument under procedural fairness, and found the use of artificial intelligence is irrelevant given that (a) an Officer made the Decision in question, and that (b) judicial review deals with the procedural fairness and or reasonableness of the Decision as required by Vavilov.

Justice Brown appeared to be concerned with the Applicant’s failure to tie the process of utilizing artificial intelligence or Chinook to how it actually impacted the reasonableness or fairness of the decision. Justice Brown is looking at the final decision and correctly suggests: an Officer made it, and the Record justifies it. How it got from A to C is not the reviewable matter; what is reviewed is the A of the input provided to the Officer and the C of the Officer’s decision.

I want to ask about the missing B – the context.

It is interesting to note also, in looking at the Record, that the Respondent (Minister) did not engage in any discussion of Chinook or AI. The argument was solely raised by the Applicant – in two paragraphs in the written memorandum of argument and one paragraph in the reply. The Applicant’s argument, one rejected by Justice Brown, was that uncertainty about the reliability and efficacy of these tools, and the lack of communication about them, created uncertainty as to how they were used, which ultimately impacted the fairness/reasonableness.

The Applicant captures these arguments in paragraphs 9, 10, and 32 of their memorandum, writing:

The nature of the decision and the process followed in making it

9. While the reason originally given to the Applicant was that the visa officer (the decision maker) believed that the Applicant would not leave Canada based on the purpose of visit, the reasons now given during these proceedings reveal that the background rationale of the decision maker does not support refusal based on purpose of visit. In fact, the application was delayed for nearly five months and in the end the decision was arrived at with the help of Artificial Intelligence technology of Chinook 3+. It is not certain as to what information was analysed by the aforesaid software and what was presented to the decision maker to make up a decision. It can be presumed that not enough of human input has gone into it, which is not appropriate for a complicated case involving business immigration. It is also not apt in view of the importance of the decision to the individual, who has committed a great deal of funds for this purpose. (emphasis added)

10. Chinook is a processing tool that it developed to deal with the higher volume of applications. This tool allows DMs to review applications more quickly. Specifically, the DM is able to pull information from the GCMS system for many applications at the same time, review the information and make decisions and generate notes using a built-in note generator, in a fraction of the time it previously took to review the same number of applications. It can be presumed that not enough human input has gone into it, which is not appropriate for a complicated case involving business immigration. In the case at hand, Chinook Module 5 – indicator management tool was used, which consists of risk indicators and local word flags. A local word flag is used to assist in prioritizing applications. It is left up to Chinook to search for these indicators and flags and create a report, which is then copy and pasted into GCMS by the DM. The present case is one that deserved priority processing being covered by GATS. Since the appropriate inputs may not have been fed into the mechanised processes of Chinook, which would flag priority in suchlike GATS cases, the DM’s GCMS notes read “processing priority word flag: N/A”. This is clearly wrong and betrays the fallout in using technology to supplant human input. The use of Chinook has caused there to be a lack of effective oversight on the decisions being generated. It is also not apt in view of the importance of the decision to the individual, who has committed a great deal of funds for this purpose (Baker supra). (emphasis added)

32. On the issue of Chinook, while it can be believed that faced with a large volume of cases, IRCC has been working to develop efficiency-enhancing tools to assist visa officers in the decision-making process. Chinook is one such tool. IRCC has been placing heavy reliance on it for more than a year now. However, as always with use of any technology, there are questions about its reliability and efficacy for the purpose it sets out to achieve. There are concerns about the manner in which information is processed and analysed. The working of the system is still unclear to the general public. A decision rendered using it cannot be termed reasonable until it is elaborated to all stakeholders to what extent has machine replaced human input and how it impacts the final outcome. The test set by the Supreme Court in Vavilov has not been met.

The Applicant appeared to be almost making an argument that the complexity of the case suggested Chinook should not have been used and that a human should therefore have reviewed it. However, there seemed to be a gap in engaging both the fact that IRCC did not indicate it had used Chinook and the fact that the reasons were actually more than normally responsive to the facts. I also think the argument that a positive word flag should have been applied but was not ultimately did not get picked up by the Court – it lacked a record of affidavit evidence or a challenge to the CTR […]


Coach Will: New Vocabulary Words Tomorrow’s Immigration Practitioners Will Need To Know

As a resource, and to buy time as I write more substantive blogs, I wanted to share a #CoachWill blog on new vocabulary – terminology that tomorrow’s immigration practitioners will need to know, learn, advise their clients on, and spend time with. I am still very much learning these terms and their impact, but this gives us a mutual starting point to grow our knowledge of how Canadian immigration law will be impacted moving forward:


Advanced Analytics: which is composed of both Predictive and Prescriptive components, consists of using computer technology to analyze past behaviours, with the goal of discovering patterns that enable predictions of future behaviours. With the aid of a team of computer science, data, IT, and program specialists, AA may result in the creation of a model that can perform risk triage and enable automated approvals on a portion of cases, thereby achieving significant productivity gains and reducing processing times. [As defined in IRCC’s China-Advanced Analytics TRV Privacy Impact Assessment]

Artificial Intelligence: Encompassing a broad range of technologies and approaches, AI is essentially the field of computer science dedicated to solving cognitive problems commonly associated with human intelligence, such as learning, problem solving, and pattern recognition. [As defined in IRCC’s Policy Playbook on Automation]


Automated decision support system: Includes any information technology designed to directly support a human decision-maker on an administrative decision (for example, by providing a recommendation), and/or designed to make an administrative decision in lieu of a human decision-maker. This includes systems like eTA or Visitor Record and Study Permit Extension automation in GCMS. [As defined in IRCC’s Policy Playbook on Automation]


Black Box: Opaque software tools working outside the scope of meaningful scrutiny and accountability. Usually deep learning systems. Their behaviour can be difficult to interpret and explain, raising concerns over explainability, transparency, and human control. [As defined in IRCC’s Policy Playbook on Automation]


Deep learning/neural network is a subset of machine learning, which is essentially a neural network with three or more layers. These neural networks attempt to simulate the behavior of the human brain—albeit far from matching its ability—allowing it to “learn” from large amounts of data. While a neural network with a single layer can still make approximate predictions, additional hidden layers can help to optimize and refine for accuracy. [As defined by IBM: https://www.ibm.com/cloud/learn/deep-learning#:~:text=Deep%20learning%20is%20a%20subset,from%20large%20amounts%20of%20data]
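
To illustrate the “three or more layers” in this definition, here is a toy three-layer forward pass. The weights are random and the example is mine, not IBM’s; it shows only the layered structure:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(4)                          # input features
W1, W2, W3 = rng.random((8, 4)), rng.random((8, 8)), rng.random((1, 8))

h1 = np.maximum(0, W1 @ x)                 # hidden layer 1 (ReLU)
h2 = np.maximum(0, W2 @ h1)                # hidden layer 2 (ReLU)
out = 1 / (1 + np.exp(-(W3 @ h2)))         # output layer (sigmoid)
print(out)  # untrained, so the number is meaningless until the weights "learn"
```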


Exploration zone: The exploration zone – also referred to as a “sandbox” – is the environment used for research, experimentation and testing related to advanced analytics and AI. Data, codes and software are isolated from those in production so that they can be tested securely.

“Fettering” of a decision-maker’s discretion: Fettering occurs when a decision-maker does not genuinely exercise independent judgment in a matter. This can occur when a decision-maker binds him/herself to a fixed rule of policy, another person’s opinion, or the outputs of a decision support system. Although an administrative decision-maker may properly be influenced by policy considerations and other factors, he or she must put his or her mind to the specific circumstances of the case and not focus blindly on one input (e.g. a risk score provided by an algorithmic system) to the exclusion of other relevant factors. [As defined in IRCC’s Policy Playbook on Automation]


Machine learning: A sub-category of artificial intelligence, machine learning refers to algorithms and statistical models that learn and improve from examples, data, and experience, rather than following pre-programmed rules. Machine learning systems effectively perform a specific task without using explicit instructions, relying on models and inference instead. [As defined in IRCC’s Policy Playbook on Automation]
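
A toy contrast may help make this definition concrete. The first function below follows a pre-programmed rule; the second infers its threshold from labelled examples. All numbers are invented:

```python
# Pre-programmed rule: a human fixes the threshold in advance.
def rule_based(funds: float) -> bool:
    return funds >= 10_000

# Machine learning in miniature: the threshold is inferred from labelled
# examples rather than written into the code.
def learn_threshold(examples: list[tuple[float, int]]) -> float:
    approved = [x for x, label in examples if label == 1]
    refused = [x for x, label in examples if label == 0]
    return (max(refused) + min(approved)) / 2  # midpoint between the classes

data = [(4_000, 0), (6_000, 0), (9_000, 0), (12_000, 1), (15_000, 1)]
threshold = learn_threshold(data)  # 10_500.0, derived purely from the data

print(rule_based(10_200))      # True under the hand-written rule
print(10_200 >= threshold)     # False under the learned rule
```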


A minimum viable product (MVP) is a development technique in which a new product or website is developed with sufficient features to satisfy early adopters. The final, complete set of features is only designed and developed after considering feedback from the product’s initial users. [As defined by Techopedia – https://www.techopedia.com/definition/27809/minimum-viable-product-mvp]


Predictive Analytics: brings together advanced analytics capabilities spanning ad-hoc statistical analysis, predictive modeling, data mining, text analysis, optimization, real-time scoring and machine learning. These tools help organizations discover patterns in data and go beyond knowing what has happened to anticipating what is likely to happen next. [As defined in IRCC’s China-Advanced Analytics TRV Privacy Impact Assessment]


Prescriptive Analytics: Prescriptive Analytics is an advanced analytics technology that can provide recommendations to decision-makers and help them achieve business goals by solving complicated optimization problems. [As defined in IRCC’s China-Advanced Analytics TRV Privacy Impact Assessment]


Process automation: Also called “business automation” (and sometimes even “digital transformation”), process automation is the use of digital technology to perform routine business processes in a workflow. Process automation can streamline a business for simplicity and improve productivity by taking mundane repetitive tasks from humans and giving them to machines that can do them faster. A wide variety of activities can be automated, or more often, partially automated, with human intervention maintained at strategic points within workflows. In the domain of administrative decision-making at IRCC, “process automation” is used in contrast with “automated decision support,” the former referring to straightforward administrative tasks and the latter reserved for activities involving some degree of judgment. [As defined in IRCC’s Policy Playbook on Automation]

[Last Updated: 19 April 2022 – we will continue to update as new terms emerge]
