Artificial Intelligence

Legal action launched over UK gov’t sham marriage algorithm

7th March 2023
Kristian McCann

Sham marriages are under scrutiny… or at least, the UK government’s method of detecting them is. An AI system implemented by the UK Home Office that decides on the authenticity of the country’s ‘I dos’ is facing a legal challenge over its methods.

Its use by the governmental department began as early as 2015, with the current iteration being a triage system implemented in April 2019. But a charity claims the system could discriminate against people from certain countries.

The case against the AI 

The Public Law Project (PLP), a legal charity, sent a pre-action letter to the Home Office in February 2023 over the use of the algorithm to decide which married couples should be flagged for further investigation.

The system works as follows: once a couple, where one or both partners are not citizens of the UK, Switzerland or the EEA, give notice with a registrar, their data is sent to the Home Office, where the system processes it and allocates the couple either a green or a red light. If flagged red, the couple will then be interviewed by immigration officials to further scrutinise the relationship, or be asked to provide additional documents to show they are in a genuine relationship.

To decide who it flags, the algorithm assesses couples against eight risk factors. These include the age difference between the couple and any shared travel history, as well as other criteria which have not been disclosed. The Home Office has denied that the algorithm uses nationality as a factor, yet an equality assessment disclosed to the charity showed that Bulgarian, Greek, Romanian and Albanian people are more likely to be referred for investigation.

The charity claims this disparity shows the system could be indirectly discriminating against people on the basis of nationality, something that requires justification in order to be lawful.
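
The Home Office has not published how the eight factors are combined, so any concrete model is guesswork. Purely as an illustration of what a rule-based triage of this kind can look like, the Python sketch below scores a couple against the two disclosed factors; the cut-offs, weights and threshold are invented for the example and are not the department’s actual criteria.

# Illustrative sketch only: the real triage criteria, weights and threshold
# have not been disclosed. Factor names and numbers here are assumptions.

def triage(couple: dict) -> str:
    """Return 'red' (refer for investigation) or 'green' (no referral)."""
    score = 0

    # Two factors reported in the article; the remaining criteria are
    # undisclosed, so this sketch simply stops here.
    if couple.get("age_difference_years", 0) >= 10:   # assumed cut-off
        score += 1
    if not couple.get("shared_travel_history", False):
        score += 1

    # Assumed threshold: any flagged factor triggers a red light.
    return "red" if score >= 1 else "green"


print(triage({"age_difference_years": 15, "shared_travel_history": False}))  # red
print(triage({"age_difference_years": 2, "shared_travel_history": True}))    # green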

Does AI discriminate?  

The subject of AI discrimination has long been debated. Dubbed AI bias, the term describes what happens when a system is built with data that does not illustrate the full picture of a situation.

There is algorithmic or data bias, where algorithms are trained using biased data (for example, feeding an AI system only photos taken at night, so it never learns to account for daytime), and societal AI bias, where the person who inputs the data is influenced by their own beliefs and therefore transfers those beliefs into the system.

Because the AI system in question was developed through machine learning trained on Home Office marriage referral cases from an unspecified three-year period, any biases within that source data are likely to be carried forward by the algorithm and influence the ‘patterns’ it spots going forward.
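
To see why that matters, consider a deliberately simplified, entirely synthetic illustration (the real system is not public and need not work this way): if one group was referred more often in the historical cases, a model that simply learns historical referral rates will keep flagging new couples from that group, regardless of the individual couple.

from collections import defaultdict

# Synthetic, made-up stand-in for past referral decisions:
# (group, was_referred). Group "B" was historically referred more often.
history = [
    ("A", False), ("A", False), ("A", True),  ("A", False),
    ("B", True),  ("B", True),  ("B", False), ("B", True),
]

# "Training": learn the historical referral rate per group.
referrals = defaultdict(list)
for group, referred in history:
    referrals[group].append(referred)

learned_risk = {g: sum(v) / len(v) for g, v in referrals.items()}
print(learned_risk)  # {'A': 0.25, 'B': 0.75}

# "Prediction": new couples from group B inherit the historical skew,
# even though nothing about the individual couple has changed.
threshold = 0.5
for group in ("A", "B"):
    decision = "red" if learned_risk[group] >= threshold else "green"
    print(group, decision)  # A green, B red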

Egregious algorithm use? 

The Home Office has declined requests from PLP to disclose more information about the algorithm, on the grounds that doing so could undermine its attempts to crack down on sham marriages.

However, it is this lack of transparency that the group is challenging, as it campaigns for public bodies’ use of automated decision-making to be more open so that such systems can be scrutinised.

Public bodies have used algorithms before: the UK’s Department for Education used one to determine A-level and GCSE results for students unable to sit exams due to the Covid pandemic. It was later scrapped after a public outcry over the results and a widespread belief that the algorithm had got them wrong.

In 2020, the Home Office had to drop an algorithm used to decide visa applications, after a legal challenge from a different charity alleged it was biased against certain nationalities.
