
OpenAI faces Lawsuit Questioning if Its Product ChatGPT Crosses Legal Boundaries



Nippon Life's $10.3 million lawsuit against OpenAI sparks a landmark debate over whether ChatGPT's legal 'advice' constitutes the unauthorized practice of law.

Nippon Life Insurance Company of America recently sued OpenAI for $10.3 million. The case stems from the conduct of Graciela Dela Torre, a US resident who filed dozens of motions against the insurer after coming to suspect that her lawyer had misled her into settling a long-term disability claim with prejudice in January 2024. When she shared her attorney's correspondence with ChatGPT, the generative AI chatbot validated her suspicion as legitimate. Nippon Life is seeking compensation for the money and resources it has spent contending with the plaintiff's haphazardly filed motions, and it holds OpenAI responsible for the legal assistance ChatGPT offered Dela Torre throughout the process.

However, the lawsuit faces several pressing challenges. The first is whether it is legally tenable at all. Unauthorized Practice of Law (UPL) rules vary across the US, and contemporary generative AI and large language models (LLMs) that offer legal advice potentially violate them: apart from representing oneself in one's own case, only a qualified legal practitioner is authorized to provide legal advice. Second, experts point out that ChatGPT does not conform to the principle of the 'uncrossable threshold', a design principle that guards against the unauthorized practice of law by separating the provision of legal information from the provision of legal advice. ChatGPT has no built-in mechanism for refusing particular categories of request, such as requests for legal advice.

Along similar lines, the third issue concerns OpenAI's October 2024 update to its terms of service, which prohibits users from relying on ChatGPT for legal advice. Some interpret this as a tacit admission of the problem, since adding a mere disclaimer does not resolve the crux of the matter: users can still generate legal advice in the first place. At the same time, UPL provisions are currently unclear on whether the developer of AI software should be treated as the actor or merely a bystander. Courts are therefore likely to have to outline a 'safe harbor' for AI legal applications.
