New Developments in AI Legislation: The EU AI Act

Created with Microsoft Bing Image Creator powered by DALL-E

        On December 6, 2022, the Council of the European Union adopted a common position on the Artificial Intelligence Act, which was drafted to ensure that AI systems placed on the EU market or used in the EU are safe and respect existing EU laws and regulations. The Act is not yet in force: now that the Council has agreed on a 'general approach', it must still negotiate with the European Parliament before the proposed regulation can be enacted. With a parliamentary election coming in 2024, proponents of the Act hope to see it finalized by the end of 2023.

        The Act is significant as one of the first major proposed regulations of the AI industry, which is still at an early stage of development, with much room for further innovation and adaptation.


What is the Artificial Intelligence Act (see here for more details)? 

        The proposal uses a risk-based approach together with a horizontal legal framework for AI, intended to ensure legal certainty. This 'horizontal' layer sits above the high-risk classification, so that the regulations apply only to 'high-risk AI systems' and avoid unintentionally capturing non-high-risk AI. This does mean that 80-90% of AI activity would not be regulated by the Act, which MEP Tudorache states is reasonable in light of the proposed regulation's goal of promoting AI innovation in a safe manner.

        The Act is also meant to work alongside other initiatives, such as the Coordinated Plan on Artificial Intelligence, to accelerate investment in AI and to help create a single market for AI applications. "AI systems" are defined here as systems developed through machine learning approaches or through logic- and knowledge-based approaches.

        The proposal would prohibit the use of AI systems for 'social scoring' and for exploiting the vulnerabilities of specific groups of persons, while limiting biometric identification and predictive policing to 'strictly necessary' law enforcement purposes. Notably, the proposed law specifically excludes national security, defence, and military purposes. Additionally, the Act provides greater autonomy and authority for the AI Board, including the power to impose fines. Transparency is emphasized as a key principle, especially for high-risk AI systems. At the same time, the Act provides for real-world testing of AI systems in 'regulatory sandboxes', including unsupervised testing, to promote innovation.


What are key points of debate in the EU Parliament regarding AI legislation? 

        The definition of 'artificial intelligence' is a major point of debate in the European Parliament: some (including a German MEP) believe a narrow definition of AI is necessary to prevent overlap with existing regulation, while others (such as Access Now EU Policy Analyst Rodelli) believe that a narrow definition would create legal uncertainty and exclude many harmful uses of AI. The definition of 'high-risk' raises a similar concern, as an overly broad definition of 'high-risk' combined with a wide definition of AI could lead to the regulation of virtually all software.

        The merging of fundamental-rights and consumer-protection obligations in the AI Act also raises some legal quandaries. One example is elevators, which are classified as high-risk systems even though they pose little risk to fundamental human rights. The absence of global standards for AI - legal frameworks for AI are still being developed - further complicates compliance for the AI industry.

        There is also the question of how the AI Act will interact with the GDPR and other privacy laws. The GDPR generally prohibits the processing of personal data unless specific requirements, such as consent, a legitimate purpose, or anonymization, are met; it remains an open question whether the regulatory sandboxes proposed by the AI Act would allow those GDPR rules to be 'bent', permitting the processing of personal data under less stringent conditions.


Why is this relevant now? 

        A survey of the state of AI in 2022 shows that business adoption of AI has doubled in the last five years and that AI is embedded in a wider range of business capabilities. Moreover, the businesses leading in AI development and adoption, which gained the most financial returns, are continuing to invest more in AI and are pulling ahead of their competitors. In 2022, 52% of surveyed organizations that use AI reported investing more than 5% of their digital budget in AI, compared to 40% in 2018. Their growth is further aided by feeding more high-quality data into their AI algorithms, as well as by the emergence of low-code and no-code tools, which have made creating and applying AI far more accessible. Despite this increase in the use of AI, however, the survey found no corresponding increase in the mitigation of AI risks over the past few years (click here for more detailed information and graphs).

        Such increases in the adoption and application of AI by businesses demonstrate that the potential impact of any risks in AI is rising, much as privacy risks became an increasing concern as the collection and use of personal data grew with the Internet. There is therefore a corresponding need for regulations and standards on AI, both to protect businesses and consumers and to provide a means of recourse and liability when parties suffer harm from AI.


Are there similar laws for AI existing or being developed elsewhere? 

        On Dec 7, the EU-US Trade and Technology Council announced a Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management (the 'AI Roadmap'). The goal of the roadmap is to standardize measurements of AI trustworthiness and risk management methods, grounded in a shared commitment to democratic values and human rights.

        In October, the US also announced its Blueprint for an AI Bill of Rights, which sets out a rights-based framework to enable equitable access to and use of AI systems and to promote transparent and trustworthy automated systems (see here for a more detailed analysis/discussion of this).

        In Canada, the ongoing Bill C-27 (the Digital Charter Implementation Act, 2022) - which is aimed at reforming federal privacy law - proposes to create a new Artificial Intelligence and Data Act (AIDA) that, if passed, would regulate the development and use of AI in Canada. Canada currently has no comprehensive legal framework regulating AI; privacy laws are generally applied to situations involving AI. As it currently stands, Bill C-27 would introduce greater regulation of governance and transparency regarding AI, and would empower the Minister administering the AIDA to make orders and regulations and to impose penalties for non-compliance, including monetary fines.


What are some recent concerns regarding AI? 

        One significant hazard is the possibility of AI systems causing discrimination. For instance, an EU rights watchdog (the Agency for Fundamental Rights) warned of bias in AI-based detection of crime and hate speech. It gave the example that crime-prediction AI may be biased towards crimes that are easier for the police to record (since AI algorithms depend on learning from data), and may thus become biased against demographic groups that are more often linked to such 'simpler crimes'. The sketch below illustrates this recording-bias mechanism. In a similar vein, on Nov 30, Amazon warned that some of its AI-based cloud software that utilizes facial recognition and audio transcription could potentially be discriminatory.
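
        To make the watchdog's point concrete, here is a minimal, hypothetical Python sketch (not drawn from the FRA report or any real policing system; all numbers are invented for illustration) of how uneven crime recording rates alone can make a model trained on recorded data score one group as riskier, even when the true underlying rates are identical:

```python
# Hypothetical illustration: two groups with identical true offense rates,
# but offenses linked to group 1 are recorded by police more often.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

group = rng.integers(0, 2, size=n)       # group label: 0 or 1
true_offense = rng.random(n) < 0.05      # same 5% true rate in both groups

# Assumed recording skew: group 1's offenses are "simpler" to record,
# so they enter the training data twice as often as group 0's.
record_prob = np.where(group == 1, 0.8, 0.4)
recorded = true_offense & (rng.random(n) < record_prob)

# A naive risk model "learns" each group's rate from recorded data only.
for g in (0, 1):
    true_rate = true_offense[group == g].mean()
    learned = recorded[group == g].mean()
    print(f"group {g}: true rate {true_rate:.3f}, learned risk {learned:.3f}")
# Prints roughly 0.050 vs 0.020 for group 0 and 0.050 vs 0.040 for group 1:
# the model rates group 1 about twice as "risky" purely from recording bias.
```

        Any model trained on such recorded data would reproduce this skew in its risk scores, which is precisely the kind of data-driven bias the FRA warns about.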

        A joint study by the EU-US Trade and Technology Council further examined the impact of AI on the workforce. In particular, the study noted that AI systems may unintentionally violate laws relating to bias, fraud, antitrust, and economic harm, exposing their operators to legal and financial risk. The study recommended investing in and incentivizing the development of AI that is beneficial for labour markets, such as AI that augments workers' abilities and productivity, while also improving the capacity of regulatory agencies to ensure transparent and fair practices by AI systems - especially regarding algorithmic hiring and algorithmic workplace management.


Overall: 

        As AI systems develop and are implemented ever more widely in businesses and in people's daily lives, there is a correspondingly growing risk (aside from sci-fi fears of an evil AI taking over the world!) to fundamental human rights and consumer protection. Because the AI industry is such a recent development, there are few if any existing laws that regulate AI, although many are being proposed, such as Canada's Bill C-27 and the US's Blueprint for an AI Bill of Rights. Therefore, while many questions have yet to be resolved - such as the exact definition and scope of 'AI' and 'risk' - the EU's proposed AI Act is a welcome and much-needed development in standardizing regulations and legal expectations regarding AI systems.




