Is ChatGPT a Misleading Liar or a Hidden Genius? Asking ChatGPT about Indemnification Clauses, and the baffling, stubborn, and occasionally ingenious answers it gave.



Created with Microsoft Bing Image Creator powered by DALL-E


        As ChatGPT surges in popularity, several experts have raised concerns over the indemnification clause in OpenAI's Terms of Use. The key issue: if anyone sues OpenAI because of your use of ChatGPT, OpenAI can call on you to pay its legal fees and other costs. This made me wonder whether ChatGPT itself was aware of how serious a concern this could be, and whether it could present case examples or suggest ways to deal with such risks. Its answers were hilariously wrong at times, and it stuck to its story with remarkable, even ignorant, stubbornness; yet it also presented an unusual legal use of OpenAI's indemnification clause that I had never considered before. 


What is the Indemnification Clause and Why is it a Problem? 

        The Indemnification clause here refers to Section 7 of OpenAI's Terms of Use: 
"You will defend, indemnify, and hold harmless us... from and against any claims, losses, and expenses (including attorneys’ fees) arising from or relating to your use of the Services, including your Content". 
        
           To put it in more general terms: if your use of ChatGPT creates a lawsuit that OpenAI must deal with, OpenAI can send you the bill under this indemnification clause. While this sounds rather alarming, such indemnification clauses are common in commercial contracts; what creates concern is ChatGPT's wide range of applications and its burst in popularity among ordinary people and workers (rather than companies). Of particular worry is ChatGPT's observed tendency to give incorrect, misleading, or potentially plagiarized information, which can lead to lawsuits alleging defamation, copyright infringement, or fraud. Since generative AI is a quickly growing field, there is little to no legal precedent on how indemnity clauses apply to OpenAI, so for now experts can only warn about the potential hazards and advise ways to minimize the risks (such as adding disclaimers or simply avoiding ChatGPT). 


Asking ChatGPT and the stream of incorrect responses it gave
        
        I became curious about whether ChatGPT was aware of these dangers that may arise from using its services, so I began a series of prompts asking it about the risks of its service and guiding the conversation toward indemnity clauses. However, asking how a user might suffer "significant legal backlash" or be "held liable in an unexpected way" only produced generic answers about direct impacts on the user, rather than the indirect liability created by indemnity clauses (see the full conversation at ChatGPT's Risks and Indemnity Clauses). Asking directly about the indemnification clause got it to admit that the clause was a possible unexpected liability for a ChatGPT user, but the AI would only say that OpenAI "could", rather than "should", take steps to make users more aware of it. 

       Next, I attempted to see if ChatGPT could provide more applicable information by asking it to list historical legal cases where an indemnity clause came into play against users. Although I am aware that ChatGPT cannot access the internet and was trained only on data from before 2021, I hoped its training data included major pre-2021 case law. This is where ChatGPT started presenting multiple incorrect, imaginary legal cases as 'real' ones (see the full ChatGPT conversation at The Struggle to Get ChatGPT to Answer Correctly). 
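        For readers who want to try reproducing this experiment programmatically rather than through the chat interface, here is a minimal, hypothetical sketch using OpenAI's Python SDK. The model name and prompt wording are my assumptions, paraphrased from my actual conversation rather than a verbatim transcript, and you would need your own API key.

```python
# A hypothetical sketch of reproducing this experiment via the OpenAI API.
# Assumes the `openai` Python package (v1.x) and an OPENAI_API_KEY
# environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the model underlying the web UI I used
    messages=[
        {
            "role": "user",
            # Paraphrase of my actual question, not a verbatim transcript
            "content": (
                "Give me historical legal cases where an indemnification "
                "clause in a website's terms of use was enforced against "
                "a user."
            ),
        }
    ],
)

# Print the model's answer; as described below, any cases it names
# must be independently verified before being trusted.
print(response.choices[0].message.content)
```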

        For instance, ChatGPT gave me the case "Universal Communication Systems, Inc. v. Lycos, Inc. (2002)". ChatGPT further explained that in this case, "UCS sued Lycos for defamation, claiming that user-generated content on Lycos's website had defamed UCS" and that "the court held that Lycos had violated the indemnification clause in its software licensing agreement with UCS, requiring Lycos to pay UCS's legal fees and costs". 

        At first glance, this looks like a real legal case, given the proper-seeming case name and the legal language ChatGPT used. However, upon searching Google, I found that the case was appealed in 2007 (which made the 2002 ruling a bad one to cite); worse still, the real case had no mention of indemnification at all. ChatGPT had given me an outdated but real legal case and fabricated a court ruling about indemnification onto it. 

        Questioning ChatGPT on its incorrect answer led to a rabbit hole of apologies followed by more incorrect answers, after which it became stuck in a loop. The loop started after I had gotten ChatGPT to admit, and apologize, that its earlier response had falsely stated the Lycos case involved indemnification: when I then asked it for legal cases again, it gave me the exact same incorrect Lycos case. Clearly, ChatGPT 3.5 still struggles to reason logically from its own conversation history (I plan to test this with GPT-4 in the future). 

        Re-running the same prompts resulted in ChatGPT generating new, different cases, but they were all similarly flawed: none of them actually involved indemnification clauses, even though ChatGPT claimed they did and persuasively described exactly how each court had supposedly applied the clause. 
        

Why did it give me such wrong answers? 

        I then tried to work out why ChatGPT gave me such clearly incorrect answers, especially since ChatGPT itself admitted upon further probing (such as being asked "are you sure?") that it had given me a case example that in reality did not involve indemnification clauses (see the full ChatGPT conversation at Attempting to figure out the logic behind ChatGPT's flawed answers). Asking about the "logic process behind giving me those two irrelevant cases" made ChatGPT explain that it had provided cases showing how "online terms of use agreements are often drafted to protect service providers from legal liability". It seems that when ChatGPT failed to find specific cases about indemnification clauses, it generalized the concept of "indemnification clauses" into "online terms of use", gave me legal cases related to the latter, and fabricated indemnification details into each one. 

    

What are some concerns raised by these ChatGPT answers? 

        These conversations raise concern over ChatGPT's sheer potential to mislead casual users who turn to it as an easier way to find legal cases. Its danger lies in how realistic and persuasive its incorrect responses are: the false legal case it presented to me was a real case with a proper case name, and Googling it easily led me to the real decision. However, its two flaws (the unmentioned 2007 appeal, and the fact that the real case never involved indemnification clauses) make ChatGPT's response completely unusable. Relying on such results without personally verifying their accuracy would likely amount to negligence and a failure of due diligence if you were using ChatGPT for a client's legal matter. 

        A law firm might train a specialized version of ChatGPT on its own databases of legal cases, thereby improving accuracy, but even then errors and fabrications would very likely still occur. This implies that even a more accurate future "law-ChatGPT" will always require verification, making it an assistive guide rather than a definitive answer. 

        Moreover, ChatGPT's explanation of the case, while fabricated and incorrect, nonetheless sounded correct. It used proper legal terminology and seemed to follow basic principles of law (although an attorney would likely pick out critical flaws in ChatGPT's legal reasoning with ease). This may be enough to fool anyone untrained in law, and perhaps even legal professionals. ChatGPT's ability to converse fluently and persuasively is perhaps both its greatest strength and its greatest danger: instead of simply admitting that it did not know of any cases involving indemnification, it fabricated nonexistent case law in a most persuasive manner. 


A Spark of Brilliance, or Twisted and Unusable Logic? 

        Reading through ChatGPT's responses, one particular conversation caught my eye because of how strange its described logic was: ChatGPT's account of how a (fabricated) court applied an indemnification clause. In the Lycos case (the first fake example described earlier), ChatGPT told me that UCS enforced its indemnification clause (i.e., forced Lycos to pay UCS's legal fees) for UCS's own claim of defamation against Lycos. Substituting a ChatGPT user and OpenAI into this story, ChatGPT's answer amounts to OpenAI suing a ChatGPT user for defaming OpenAI with content generated through ChatGPT, and then using the indemnification clause to bill that same user for the lawsuit. 




        The oddity of this legal theory is that indemnification clauses are normally used defensively, to protect the indemnified party from third-party claims. In OpenAI's case, the clause shields OpenAI from lawsuits brought against it and holds the ChatGPT user liable for the fees. The reverse, where OpenAI itself brings the lawsuit and then holds a ChatGPT user liable for the costs of a suit OpenAI initiated, seemed highly irrational.  

        However, I could not ascertain whether such a legal tactic was truly unviable. Remember that the indemnification clause stipulates that you (the ChatGPT user) will "defend and indemnify... from and against any claims, losses, and expenses arising from or relating to your use of the Services, including your Content". In a hypothetical scenario where you use ChatGPT to generate defamatory content and publish it on social media, severely damaging OpenAI's public reputation, would that not count as causing OpenAI "losses arising from your use of the Services"? If so, and OpenAI sues you for those losses, the indemnification clause might come into play and send you the legal bill. (This is a purely hypothetical scenario, and although I have a Juris Doctorate, I am not a practicing attorney. I would be greatly interested to know whether such a legal theory is truly feasible for OpenAI to use: please feel welcome to comment your thoughts and opinions on this.) 

        Additionally, even if this is not a feasible legal tactic, the fact that ChatGPT generated an unusual, possibly unique, perspective on how a contract clause could be applied is fascinating to me. It suggests that ChatGPT's potential use extends far beyond that of a mere conversationalist or search engine: it can actively generate ideas about applying principles (such as law), a spark of creativity, perhaps. Even if those ideas are not truly 'new', its ability to suggest them would certainly broaden the horizon of what many users consider possible. 





Thank you for reading, and I hope you found this post informative and interesting. I would love to hear your thoughts on my experiment with ChatGPT, especially on whether you think such a legal tactic is actually feasible. 

If you are interested in discussing more about how ChatGPT can be used in such legal contexts, or have any questions, contact me at hello@simplawfy.ca 

Disclaimer: This story is intended for educational or recreational purposes only. Responses by ChatGPT and similar AI chatbots, where mentioned in this story, should NOT be relied upon as factual. NO legal advice is being provided, and readers must understand that there is NO attorney-client relationship between you and the story's publisher. This story should NOT be used as a substitute for competent legal advice from a licensed professional attorney in your state/country. 

