A Face to Open Doors

Welcome to a world where human movement is policed by intelligent machines.

An interactive installation commissioned by IWM London for the Refugee Season (September 2020 to June 2021). 

 

'A brilliantly Orwellian artificial intelligence'  — Londonist 

You are invited to enter a pop-up immigration booth that is an artefact from a future society.

The booth offers both sanctuary for those leaving their homes and innovative efficiency for those running the system; inside, you will be assessed by an AI immigration officer.

While a series of videos play, the camera tracks your facial responses and you are asked to perform specific emotions to determine your credibility. Rather than answering any pesky questions, you let your face do all the talking here.
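To make the interaction concrete, here is a minimal sketch (not the installation's actual code) of the kind of camera-based face tracking described above, using OpenCV's bundled Haar-cascade face detector. The emotion check is a hypothetical placeholder; a real system would plug in a trained expression classifier at that point.

```python
# Minimal sketch of webcam face tracking, assuming the opencv-python package.
import cv2

# Load OpenCV's pre-trained frontal-face detector (ships with opencv-python).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def assess_frame(frame, requested_emotion):
    """Detect a face in one webcam frame and return a placeholder verdict."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return "no face found - applicant not tracked"
    # Hypothetical step: a trained classifier would score the cropped face
    # against the requested emotion (e.g. "remorse", "gratitude").
    x, y, w, h = faces[0]
    return f"face tracked ({w}x{h} px); emotion '{requested_emotion}' unverified"

# Usage example: read a single frame from the default webcam.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(assess_frame(frame, "gratitude"))
cap.release()
```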

Entertaining, smooth, efficient, portable; this is bureaucracy at the end of the world.

Inspired by technological trials in the real-world immigration industry, the installation invites us to reflect on the role of emotion and bias in the systems used to process people fleeing conflict, and on the consequences of automation within the asylum machine.

A Face to Open Doors was installed and free to visit at IWM in Lambeth, London between September 24th 2020 and May 24th 2021. We are currently looking for tour opportunities.


REAL WORLD CONTEMPORARY CONTEXT

The decision on whether to grant asylum to people fleeing conflict or persecution can be a matter of life or death. In an asylum case, the burden of proof lies with the asylum seeker. They need to prove that they have a well-founded fear of persecution and that they are unable to seek protection from the authorities within their home country.

Usually, there are no witnesses. And often, due to the manner in which they have fled their home country, there is little or no evidence that they can present. It is just the individual - and their story. So the decision comes down to a question of whether or not they are believed.

Any discrepancy in details - even when it does not directly relate to the claim - can cast the entire case in a dubious light, implying that the applicant may be dishonest. Asylum seekers are often refused for expressing either too much emotion or too little - and thus being considered either manipulative or unconvincing. While not looking directly at people in authority is a cultural norm in many countries, avoiding eye contact is often seen as a marker of deception.

New technologies offer to make the complex process of asylum decisions more efficient. Yet AI and machine learning work in ways that are opaque and untraceable - even to those who design the systems. If such machines were introduced into the process of deciding who should be granted asylum, the decisions would become harder to explain - and harder to challenge.

If this is the system presently in operation, beset with bias and common-sense failings, what will happen if new systems using AI and machine learning are programmed with the same failings?

“In a world full of wars, we have an asylum procedure built around officials’ belief that asylum seekers are likely to be opportunistic liars, that the scars they bear are self-inflicted and that suicide bids are blackmail attempts. This is a covert war against asylum. It is resisted every day by refugees and their lawyers - and is a crime against humanity which shames us.”

Frances Webber
Vice-Chair of the Institute of Race Relations, with twenty-five years' experience as a practising immigration lawyer


In fact, machine learning is already being integrated into the policing of asylum.

In 2014, at Bucharest Airport and on the US-Mexico border, a series of trials was carried out with a machine called AVATAR.

“All of our solutions for detecting deception are straight forward to deploy, user friendly to operate and proven reliable by rigorous science.”
discernscience.com

AVATAR stands for “Automated Virtual Agent for Truth Assessments in Real-Time” and relies on software developed by Discern Science. The machine has an infrared eye-tracking camera and a sensor to measure body movement - as well as a microphone, an e-passport scanner and a fingerprint reader. It claims to use “linguistic, vocalistic, kinesic, ocular and physiological inputs” to assess deception.

The EU has funded an AI-based lie-detector project called iBorderCtrl, which uses a UK-developed software tool called Silent Talker. Annie™, the world’s first software program that aims to organise the resettlement of refugees, was trialled in 2019.

CREDITS

Concept and direction: May Abdalla
Creative production: Amy Rose
Interactive design and programming of video: Michael Golembewski 
Interactive design and programming of set: Jack Ratcliffe
Set design and build: Brendan Chitty and Ruth Shepherd
Lighting: Cecilia Gonzalez Barragan and Ana Vilar
Animation of video: Tony Comley 
Animation of hand scanner: Katerina Athanasopoulou
Sound design: Chu-Li Shewring
Voice: Gemma Painton
Graphic Design: Antonis Papamichael and Will Brady
Set dressing and paint: Charlotte Northall and Jane Wheeler 

With thanks to:
Denis Kierans
Frances Webber
Professor Amina Memon
Dr Zoe Given-Wilson
Dr Andy Balmer
Amit Katwala
Theo Middleton 

“There simply is no evidence that machines are able to discern lies from truth because there is no direct nor predictable correlation between lying and any physiological phenomena. We cannot see lies in the body. These decisions remain social, even when technologies are being used. 

We must ask - ‘Who is the person making the decision and how?’ - even when there appears to be no person around.”

Dr Andy Balmer
University of Manchester