AI-powered Facial Recognition Technology and Privacy concerns (Part 2)

Created with Microsoft Bing Image Creator powered by DALL-E

The issue

        Ahead of the 2024 Paris Olympics, France's data protection agency, the CNIL, warned against using facial recognition technology (FRT) due to concerns that FRT's surveillance capabilities would intrude upon privacy rights. Although direct use of FRT is not permitted, using artificial intelligence (AI) to facilitate security, such as experimental AI-powered surveillance cameras, is allowed. For these experiments, the CNIL has assured the public that it is monitoring them to minimize bias and guarantee the timely deletion of footage. The French parliament is currently debating whether to introduce new surveillance powers for improved security or to enact more privacy safeguards against surveillance. France's debate over the use of AI and FRT comes amid a recent wave of legislation proposing bans on FRT, as well as lawsuits and fines levied against companies that utilize AI-powered FRT. 

        This blog post is published in two separate parts: the first part gives an introduction to how artificial intelligence and facial recognition technology are being used today, and what privacy concerns they have raised, both in theory and in practice. The second part looks at existing and developing legislation that addresses the privacy and security concerns raised in Part 1. 

PART 1: Introduction to AI and FRT, and some cases: see here.

PART 2: Ongoing developments in AI and FRT related Legislation around the world: 

Australia
        In Australia, pressure to regulate FRT is growing. One piece of draft legislation, created by the Human Technology Institute, proposes new obligations to promote transparency and reduce the risks of bias and discrimination based on a risk-tier approach. The goal is not to ban FRT, but to regulate it in proportion to its risk of human rights abuses: the higher the potential for and impact of abuse, the stricter the regulations would be. These goals stemmed from the Australian Federal Police's admission that it had used Clearview AI in its investigations, which breached the privacy of Australian citizens. Other factors included rising concern over the use of FRT in workplaces, which some feared could lead to surveillance and punishment of workers even in private settings, such as monitoring toilet breaks or personal phone calls. 

European Union: 
        Support among European parties for a total ban on FRT continues to grow, now including Renew Europe, the Greens, and the Socialists and Democrats. They focus in particular on indiscriminate real-time scanning of crowds, citing concerns that authoritarian governments could abuse it to persecute dissidents and minorities, as in Russia and China. Italy has already begun: its Data Protection Authority, the Garante, ordered an immediate halt to the use of FRT systems that process biometric data until a new law is passed (except for use in judicial investigations or the fight against crime). Italy's ban was a response to municipalities experimenting with FRT. However, there is equal opposition from proponents of FRT, especially in France currently, who cite its use in improving efficiency and security, such as counter-terrorism, locating armed criminals, and searching for missing children. 

United States:
        In the US, several pieces of legislation and guidance on the use of FRT are underway or planned. Ted Lieu, a California Congressman, proposed the Facial Recognition Act of 2022, which emphasizes restricting the use of FRT to what is necessary, mandating transparency, and requiring annual assessments and reporting on law enforcement's use of FRT. NIST (the National Institute of Standards and Technology) was scheduled to release its AI Risk Management Framework in 2023 (update: it was released on January 26). There are also discussions of joint ventures between the EU and US to build a mutual understanding of AI concepts based upon the EU's AI Act and the US's AI Bill of Rights. The US Algorithmic Accountability Act and the EU Artificial Intelligence Act are both currently in development, with the US bill focusing more on 'automated decision systems' while the EU bill is more general in scope, covering all 'high-risk AI systems'. Christina Montgomery, the chief privacy officer and AI ethics board chair at IBM, predicted much more discussion on operationalizing AI ethics and improving testing for bias in the coming year. 

International conferences: 

        International conferences are also playing a pivotal role in shaping regulation of AI and FRT. At the 44th Global Privacy Assembly, data protection regulators from around the world agreed to a framework for using personal data with FRT, focusing on establishing a clear legal basis, human rights assessments, transparency, accountability, and respect for general privacy and data protection principles. 

Canada

        In line with the developments at the 44th Global Privacy Assembly, Canada's Standing Committee on Access to Information, Privacy, and Ethics (ETHI) advised halting police use of FRT until approved by the Privacy Commissioner, and modernizing existing laws to account for FRT, such as through the currently developing Bill C-27, which is set to replace PIPEDA in the private sector. The Office of the Privacy Commissioner of Canada (OPC) also called for a new legal framework for police use of FRT in Canada. However, as the Yukon education department's rejection of its privacy commissioner's recommendation (to stop video surveillance of students in schools) demonstrated, the lack of enforcement power held by Canada's privacy commissioners presents a significant roadblock to effectively protecting privacy rights against rapid FRT adoption. Fortunately, the current direction of Bill C-27 is to invest far greater enforcement authority in privacy offices, similar to the regulatory powers under the GDPR. 

Korea: 

        In Korea, the government has increasingly pushed the development of AI and big data, including FRT, sparking concerns among privacy watchdogs that basic privacy rights were being neglected. During the Covid-19 pandemic, the Korean government provided 170 million facial images of Koreans and foreigners to private companies to develop immigration-screening AI. This collection and provision of biometric data was done without the data subjects' consent, and it is merely one of many similar AI and FRT projects conducted by municipal governments in Korea. FRT was also used by Bucheon's municipal government to track the movements of people who tested positive for the disease. In response, the National Human Rights Commission of Korea (NHRCK) recommended a moratorium on the use of real-time FRT until relevant laws are developed, while simultaneously emphasizing the urgency of legislation to regulate FRT. Although government agencies in Korea defended such use by claiming that the images obtained through CCTV and used to develop AI and FRT were de-identified, it remains uncertain whether biometric data can truly be processed anonymously.


Summary

        Artificial intelligence and facial recognition technology are still developing, and legislation in most countries is consequently struggling to keep up in protecting citizens' privacy rights. Moreover, because of the many potential benefits that FRT can provide, such as surveillance and security, government agencies are reluctant to limit their own development and use of FRT, as in the cases of France and Korea. On the other hand, others, such as some EU parties, are pushing for total bans on the use of FRT. Nevertheless, on the whole, most jurisdictions leading in privacy legislation, such as the US and the EU, appear to be pushing for greater transparency and safeguards to prevent the abuse of FRT, especially its possible abuse for surveillance and the devastating consequences that can arise from misidentification or embedded bias. It seems likely that AI and FRT will be considered 'too useful' for most countries to ban outright, much as data processing continues to be used by targeted advertisers. AI technology is the next stage of development in our society, and it has already become entrenched in daily use; facial recognition to log into phones and computers is commonplace. Instead, erecting stronger safeguards, raising awareness, and imposing severe fines on those, whether companies or government agencies, that attempt to abuse this technology is the probable goal of most legislatures in the near future, especially given the recent surge of interest in AI technology such as ChatGPT. 





