There is a lawsuit against OpenAI right now that nobody is really talking about. A woman asked ChatGPT for legal advice, it gave her some, and now her insurance company is suing OpenAI for practicing law without a license.
And honestly my first thought was — yeah, that tracks. We gave it the bar exam and then acted surprised when it started acting like a lawyer.
Here is what happened. Graciela Dela Torre settled a disability lawsuit against Nippon Life Insurance. Case closed, full release signed. She was unhappy about the settlement, took her attorney’s email to ChatGPT, and asked if she was being gaslit. ChatGPT said yes. She fired her lawyer, then used ChatGPT to file 21 motions trying to reopen the case. All 21 went nowhere.
Nippon is now suing OpenAI for $10 million. The core claim is unauthorized practice of law — that ChatGPT crossed the line from giving general information to giving actual legal advice on a specific active dispute.
OpenAI says the complaint has no merit. Maybe it is right on the legal technicalities. But the more interesting question is not whether OpenAI wins this case. It is whether a company can keep marketing a product as smart enough to pass the bar exam and then argue it bears zero responsibility for what happens when people believe that.
I don’t think they can have it both ways forever.