ChatGPT - the future path of Legal Tech?

Created with Microsoft Bing Image Creator powered by DALL-E

The Issue

        OpenAI's new chatbot, ChatGPT, has recently gone viral thanks to its powerful ability to gather and present information in a human-like, readable format. This, combined with the now-cancelled attempt to use an AI chatbot in the courtroom as a legal assistant, demonstrates the growing role 'legal tech' is beginning to play in the legal profession. However, it also raises the question of whether sufficient laws are in place to regulate the use and development of such AI in the legal profession. 

Why are ChatGPT and similar chatbots attracting so much attention? 

        ChatGPT is a game-changer in AI technology because of its accessibility and ease of use for non-technical users, as well as its ability to provide detailed and 'intelligent' answers to prompts across a vast variety of fields. Such a chatbot has wide-ranging potential applications in the legal industry, in particular in the field of 'people law' - which involves individuals and SMEs - where the cost of and access to legal services are significant and long-standing problems. ChatGPT is arguably the next step for companies that seek to provide cheap, fast legal assistance in an easy-to-understand manner, such as LegalZoom, Rocket Lawyer, and DoNotPay. 

        However, significant pushback is to be expected from the legal profession, which is resistant to change and may view AI tools such as ChatGPT as "billable hour killers" that impair "client-subsidized junior lawyer training". Although some argue this is an opportunity for firms to market themselves to clients and new talent as cutting-edge, efficient, and possibly more profitable, it is still possible that the majority of the profession will successfully marginalize and limit the adoption of AI tools, perhaps even making their use illegal. The supposed first use of an AI-powered "robot lawyer" in a courtroom is one such example of pushback: the entire plan was scrapped after "state bar prosecutors" warned DoNotPay (the company behind the chatbot) of possible imprisonment if the chatbot was used in court. Although some courts allow defendants to wear Bluetooth-enabled hearing aids, most courts do not permit the kind of technology DoNotPay relies on, and any attempted use would rest more on a technicality of the law than on its 'spirit'. 

What are the benefits and dangers of such AI technology in legal professions? 

        That being said, there are many potential benefits of using AI technology in the legal profession, such as increased productivity, fewer mistakes, and faster research and decision-making. Currently, AI technology - in particular, machine learning - is already being used to review contracts, conduct legal research, and assist in the eDiscovery process (identifying the documents relevant to a lawsuit from the opposing party's production). AI is also used to aid judgments, such as COMPAS, which predicts recidivism rates (the likelihood of reoffending). AI tools that predict judgments are also valuable, as knowing the likelihood of success or failure of a lawsuit greatly impacts decisions on whether to pursue a settlement, or even whether to attempt a lawsuit at all. 
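        For readers curious what this looks like in practice, below is a minimal sketch of the kind of 'predictive coding' used in technology-assisted document review. It assumes a Python environment with scikit-learn, and the documents, labels, and scores are invented purely for illustration - it is not the workflow of any particular eDiscovery product.

```python
# A minimal sketch of ML-assisted document review in eDiscovery (assumed setup,
# not any vendor's actual pipeline). Requires scikit-learn; data is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A handful of documents already labelled by attorneys (1 = relevant to the dispute).
reviewed_docs = [
    "Email discussing the disputed supply contract terms",
    "Lunch menu circulated to the whole office",
    "Memo on delivery delays under the supply agreement",
    "Company newsletter about the charity run",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: a common baseline for predictive coding.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reviewed_docs, labels)

# Rank the unreviewed corpus so attorneys read likely-relevant documents first.
unreviewed = ["Invoice referencing the supply contract", "Birthday party reminder"]
for doc, p in zip(unreviewed, model.predict_proba(unreviewed)[:, 1]):
    print(f"{p:.2f}  {doc}")
```

        The point is not the specific model but the workflow: a small set of human-reviewed documents trains a classifier that prioritizes the rest, which is where the speed and cost savings come from.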

        However, there is significant criticism of using AI in this manner (besides pushback from those in the legal profession concerned about job encroachment), particularly over issues of fairness and accuracy. For instance, a ProPublica study found that COMPAS appeared to show bias against black prisoners, rating them as more likely to reoffend, possibly due to bias embedded in the training data. Additionally, it is difficult if not impossible to understand the rationale behind machine-learning-based decisions. This is the 'black box' problem: models that are too complex for humans to interpret can undermine trust in their decision-making. Given that the possible consequences of such black-box decisions in the legal profession are literally life-and-death in some situations (such as criminal law in states with the death penalty), the lack of transparency and explainability means AI is unlikely to replace human judgment in legal rulings yet. 
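        To illustrate the 'black box' problem concretely, here is a small, hypothetical sketch (assuming Python with scikit-learn and entirely synthetic data): a shallow decision tree can print out the exact rules behind its predictions, while an ensemble of hundreds of trees offers no comparable human-readable rationale.

```python
# A minimal sketch contrasting an interpretable model with a "black box".
# Assumes scikit-learn; the data is synthetic and purely illustrative.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# Shallow decision tree: its full reasoning can be printed and audited.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["f0", "f1", "f2", "f3"]))

# Forest of hundreds of trees: often more accurate, but no single readable
# rule explains why any one prediction was made - the "black box" problem.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
print(forest.predict(X[:1]))  # a prediction with no human-readable rationale
```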

How is privacy related to AI technology? 

        There are also privacy considerations when training machine-learning models on data. Since machine-learning models require large datasets to be 'trained' on, they raise concerns about possible infringement of privacy law if the datasets used contain personal information protected by law. Such concerns are further compounded in the legal profession, where much of the relevant information carries an additional layer of lawyer-client confidentiality. In the field of eDiscovery, professional awareness of and responsibility for upholding security and privacy have risen in priority, with organizations such as the Electronic Discovery Reference Model (EDRM) publishing guidance on how to handle ethical issues and privacy considerations while using AI. An example of an ethical quandary is using the results of machine learning trained on one client's data for a different client. Chatbots such as ChatGPT may also "say the darndest things" based on the dataset they were trained on, which raises concerns about training such AI on client data. 

        Currently, no country has developed comprehensive regulation of AI's handling of data that may involve privacy. Although the EU has begun talks on an act to regulate AI, it is still in its early stages (see more details in a previous post on New Developments in AI Legislation). Instead, most countries seem to be applying existing privacy laws where applicable - for instance, the GDPR regulates the processing of personal information, which AI may fall under - and are also encouraging the use of technical methods that 'anonymize' data, such as pseudonymization (sketched below). Most seem to be relying on examples of action taken by regulatory agencies to serve as guidance on the expected standards and solutions while awaiting the development of a proper regulatory framework, which does raise the danger of large-scale privacy violations and damages in the meantime. 
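        As a rough illustration of what pseudonymization can look like technically, the sketch below (in Python; the field names, key, and record are hypothetical) replaces direct identifiers with keyed-hash pseudonyms while the secret key - the 'additional information' needed to link pseudonyms back to people - is kept separately, in the spirit of the GDPR's definition.

```python
# A minimal, hypothetical sketch of pseudonymization with a keyed hash (HMAC).
# The key, field names, and record are invented for illustration only.
import hmac
import hashlib

SECRET_KEY = b"store-this-key-separately"  # the "additional information" kept apart

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a stable pseudonym."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"name": "Hong Gildong", "email": "gildong@example.com", "claim_amount": 5_000_000}

# Direct identifiers are replaced; non-identifying fields pass through,
# so the record can still be analysed without naming the data subject.
pseudonymized = {
    "name": pseudonymize(record["name"]),
    "email": pseudonymize(record["email"]),
    "claim_amount": record["claim_amount"],
}
print(pseudonymized)
```

        Note that pseudonymized data of this kind is still personal data under most regimes (including the GDPR and Korea's PIPA), since re-identification remains possible for whoever holds the key; it merely lowers, rather than eliminates, the privacy risk.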

        South Korea, one of the leading countries in AI development, has encouraged pseudonymization in its plans to promote safe AI development, while also cracking down on AI that indiscriminately processes personal information without proper safeguards, as it did with the chatbot Iruda in 2021. However, a recent court ruling in Korea determined that a data subject (the person the identifiable information is about) can request the cessation of pseudonymization of their data under Article 37 of Korea's PIPA. This ruling greatly restricts the ability of companies - including AI developers - to process pseudonymized data under PIPA's 'no consent required for processing pseudonymized data' rule, since an individual can request that their data not be pseudonymized, rendering it unavailable for processing without specific consent. This seemingly contradictory relationship between Article 28 (allowing the processing of pseudonymized data without consent) and Article 15 (collection and use of personal information) is a flaw in the regulation that needs to be properly examined by the courts. The ruling has a significant impact on companies that anonymize or pseudonymize and process data at scale, and there have been calls to rethink the ruling and the workings of the relevant PIPA provisions so that personal data is protected in a way that does not threaten the development of big data analysis. 
