AI-powered Facial Recognition Technology and Privacy Concerns (Part 1)

Created with Microsoft Bing Image Creator powered by DALL-E


The issue

        Ahead of the 2024 Paris Olympics, France's data protection agency, the CNIL, warned against using facial recognition technology (FRT), citing concerns that its surveillance capabilities intrude upon privacy rights. Although direct use of FRT is not permitted, using artificial intelligence (AI) to facilitate security, such as experimental AI-powered surveillance cameras, is allowed. The CNIL has stated that it is monitoring these experiments to minimize bias and to guarantee that footage is deleted in due time. The French parliament is currently debating whether to introduce new surveillance powers for improved security or to enable more privacy safeguards against surveillance. France's debate over the use of AI and FRT comes amid a recent wave of legislation proposing bans on FRT, as well as lawsuits and fines levied against companies that utilize AI-powered FRT. 

        This blog post will be published in two parts. The first part introduces how artificial intelligence and facial recognition technology are being used today and what privacy concerns they raise, both in theory and in practice. The second part will look at existing and developing legislation that addresses the privacy and security concerns raised in Part 1. 

PART 1: Introduction to Facial Recognition Technology: what is it, and why is it an issue?

        Facial Recognition Technology ('FRT') identifies an individual from their face by mapping facial features in a photo or video and comparing or analyzing those biometrics against a database of information. FRT itself has been in use for many years; it has become an issue because of the rapid development of Artificial Intelligence ('AI'), whose machine learning algorithms have been incorporated into FRT alongside the growth of big data. Consequently, AI has been incorporated into many aspects of business and government, from automating processes to predicting future changes. The global market for FRT has grown immensely, with estimates putting it at $12.67 billion by 2028, compared to $5 billion in 2021. These markets include verification and identification for access to online accounts, authorization of payments, employee tracking and monitoring, and targeted advertisements. 
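To make the matching step concrete, here is a minimal Python sketch of how an FRT system might compare a probe face against a database of enrolled faceprints. The embed() function is a naive stand-in (a real system uses a trained neural network whose output vectors cluster by identity), and the names and 0.6 threshold are purely illustrative:

```python
# Minimal sketch of FRT's core matching step. embed() is a naive stand-in
# for the trained neural network a real system would use; all images are
# assumed to share the same shape.
import numpy as np

def embed(face_image: np.ndarray) -> np.ndarray:
    # Stand-in: flatten and normalize pixels. A real embedding model maps
    # a face to a vector where distance reflects identity, not raw pixels.
    vec = face_image.astype(np.float64).ravel()
    return vec / (np.linalg.norm(vec) + 1e-12)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def identify(probe: np.ndarray, database: dict[str, np.ndarray],
             threshold: float = 0.6) -> str | None:
    # Compare the probe's faceprint against every enrolled template and
    # return the best match above the threshold; None means "no match".
    probe_vec = embed(probe)
    best_name, best_score = None, threshold
    for name, template in database.items():
        score = cosine_similarity(probe_vec, template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

In practice the quality of embed() is everything: it is the machine-learning component, trained on large face datasets, that turned FRT from a niche tool into the AI-powered systems discussed below.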

The risks of AI-powered FRT: 

        The flaws of AI are well known, chief among them its dependence on complete, high-quality data to 'learn' from in order to produce good results: such data has historically been lacking for minority groups and often carries hidden biases. This creates a privacy quandary: failures in AI are caused by inadequate data, but getting more and better data requires collecting and processing more personal data, which is difficult to fully anonymize and therefore falls under privacy laws. The privacy risks include the absence of consent in collecting and processing individuals' biometric data, which can be used to identify them (violating the consent provisions of most privacy laws, such as the GDPR's concept of consent). There is also a lack of transparency: the data subject is often unaware that their biometric information has been collected into a database, and for FRT that incorporates big data, the data collectors and processors are difficult to identify, as are the purpose and amount of information collected. Without such transparency, it is difficult for data subjects to exercise their rights to access, correct, control, or delete their personal data (fundamental rights in most countries' privacy laws, such as the GDPR's Right of Access). 

        FRT also raises concerns over its potential abuse in surveillance and the threats it poses to security. Aside from the privacy violation of monitoring and recording people without their consent or knowledge, FRT can chill freedom of speech and association: the fear of being monitored causes people to self-police and withdraw from public spaces. Moreover, surveillance can extend beyond tracking movement or behavior to inferring a person's mood through AI analysis of facial expressions captured by FRT. As for security, the large biometric databases accumulated by companies that utilize FRT and AI represent a significant privacy and security hazard if breached, with consequences such as identity theft, robbery, and harassment. As US Senator Edward Markey stated, 'while you can change your password if they get hacked, you cannot change biometric information'. 

        Related to these surveillance concerns is the possibility of "mission creep" when FRT is used by governments. For example, Big Brother Watch (a privacy watchdog) reported that a newly published guide on the ethical and legal use of FRT by UK police officers (created after a 2020 court ruling found the police's then-current use of FRT to breach privacy rights) allowed police to place on watchlists people who were "victims of an offence" or whom police may "have reasonable grounds to suspect of having more information". In other words, innocent people could be put on facial-recognition watchlists, an example of what Big Brother Watch calls 'mission creep': the legitimate use of FRT encroaching beyond its permitted scope to affect innocent people, not just criminals or suspects. 

        In line with these fears, many US states have temporarily banned or restricted law enforcement use of FRT until better regulation is in place. Restrictive bills on FRT are also in development, such as the bipartisan Wyden/Paul bills, which would forbid US government and law enforcement agencies from buying user data (including biometric data) without a warrant and seem aimed particularly at Clearview AI. Big Tech companies such as Microsoft, IBM, and Amazon have refused to sell FRT to government agencies until proper regulation is in place. Even where private companies did provide their services, as in ID.me's case, government agencies were sometimes pressured not to use them over the ethical problem of putting a private company in control of people's access to government services, as happened when the IRS tried to make taxpayers verify their identities through ID.me. Further issues arose from possible economic bias against those without access to phones or cameras, and surveys found a lack of privacy-risk awareness among government employees and inadequate compliance even where awareness existed. Many agencies (13 of the 42 federal agencies surveyed) did not track employee use of FRT systems at all. 

        That being said, the privacy and security risks of FRT do not negate its many benefits. Aside from the general benefits of better crime prevention, faster and more accurate digital identification, and improved consumer service, understaffed agencies can use AI and FRT to provide government services more accurately and efficiently. Moreover, FRT is already widespread in consumer devices, such as the iPhone's Face ID, and the resulting public acceptance of and familiarity with such technology has in turn enabled greater adoption of FRT by government agencies despite repeated calls for bans. For example, ID.me is already used by 27 US states to help verify identities for unemployment benefits. 


Some recent privacy-related cases involving AI-powered Facial Recognition Technology: 

        These examples demonstrate the wide-ranging potential for abuse and violations of privacy law inherent in FRT, even when the original intent was benign. 

1) Clearview AI 

        Clearview AI is likely the most famous controversy regarding FRT and AI, with the CNIL recently fining the company 20 million euros for unlawfully using FRT without consent or legal basis and without accounting for individuals' rights to access and delete their data. The fine came after Clearview AI ignored warnings about its privacy violations and orders to comply with the GDPR. Clearview AI uses a technique called 'scraping' to collect facial pictures from the internet and compile them with other pieces of information, allowing clients to identify people merely by uploading a picture for the AI to compare against Clearview AI's database. In addition to the fine, Clearview AI is facing a lawsuit for invading US citizens' privacy through the "covert" collection of biometric data for its own profit, without consent. Compounding this, the Ukrainian military's use of Clearview AI to identify dead Russian soldiers in a propaganda effort has raised serious concerns about mistaking civilians for soldiers, or about misidentification causing great emotional harm to family members mistakenly told their child had died. 
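As a conceptual illustration only (this is not Clearview AI's actual code or API), a scraping-based face search engine boils down to two operations: ingesting public images with their source URLs, and searching that index with an uploaded photo. The sketch below assumes the hypothetical embed() and cosine_similarity() helpers from the earlier matching example are in scope:

```python
# Conceptual sketch of a scraping-based face search engine. Illustration
# only, NOT Clearview AI's actual code; assumes the hypothetical embed()
# and cosine_similarity() helpers from the earlier sketch are in scope.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class FaceIndex:
    # Each entry pairs a stored faceprint with the URL it was scraped from.
    entries: list[tuple[np.ndarray, str]] = field(default_factory=list)

    def ingest(self, face_image: np.ndarray, source_url: str) -> None:
        # Scraping step: compute a faceprint and keep it with its
        # provenance, without the pictured person's knowledge or consent.
        self.entries.append((embed(face_image), source_url))

    def search(self, probe_image: np.ndarray,
               threshold: float = 0.6) -> list[str]:
        # Client step: upload a photo and get back every source URL whose
        # stored faceprint matches above the (illustrative) threshold.
        probe = embed(probe_image)
        return [url for vec, url in self.entries
                if cosine_similarity(probe, vec) > threshold]
```

The privacy problem is visible in the structure itself: faceprints and their sources are retained indefinitely, and nothing in the pipeline offers the data subject a point at which to consent, access, or delete.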

2) Australian retailers Bunnings and Kmart

        The Office of the Australian Information Commissioner is investigating the use of FRT by Bunnings and Kmart. Although these companies claim FRT was used to identify banned customers and prevent theft, the mere fact that people's facial imagery was captured, stored, and compared against a database of faceprints without their proper awareness or consent means there is a high risk of violating Australia's privacy laws. 

3) PimEyes, another facial-recognition search engine

        Big Brother Watch filed a complaint with the UK Information Commissioner's Office in November 2022 over concerns that PimEyes' facial-recognition database facilitates abuse by people such as employers, university admissions officers, and stalkers. PimEyes claimed its tool was not intended to enable surveillance and that a 'data security unit' monitored usage for suspicious activity, such as the uploading of children's photos, but the privacy watchdog argued that these safeguards were lacking or inadequate. 


To see what legislation is currently in place or being developed to address these privacy and security concerns, and how lawmakers have responded to the landmark cases on AI and FRT described above, read Part 2 of this short blog series on AI and FRT. 




