
Do CEOs Dream of Electric Lawyers? Problems with Automating the Legal Profession, Part II
AI optimizes for a response. Lawyers optimize for an outcome. Part II examines why that difference matters.
Previously, we discussed statements by Microsoft AI CEO Mustafa Suleyman about using Artificial Intelligence (AI) to automate and replace human workers, including lawyers. Suleyman predicted that "most if not all" of what these workers do could be replaced, and further predicted that "most of those tasks will be fully automated by an AI within the next 12 to 18 months."
Suleyman's claim reflects a fundamental misreading of what legal work actually involves. Lawyers don't execute tasks; they exercise judgment. They navigate competing interests, identify risks that clients haven't thought to ask about, and give advice that is sometimes difficult to hear precisely because it's necessary. That distinction matters a great deal when things go wrong.
Part I of this series established that AI operates without the professional accountability structure that makes legal advice reliable. Today's focus: intent.
When a lawyer provides you with their expertise, they do so with the intent to advise you, resolve issues, and secure the best result the circumstances allow. That result may not always be ideal. Sometimes things are bad enough that mitigating loss or reducing damage already done is the best available outcome. Either way, the goal is to use one's skills to analyze the situation and either prevent problems or solve them. The intent is to help the client, applying experience and training to the best of one's ability.
AI programs don't work like that. They generate responses from predictive models: given your question or commands, they predict what sequence of words is most likely to satisfy you. They aren't trying to solve a problem; they are trying to guess what response will make the user happy. And the companies selling AI services? They aren't primarily trying to solve a client's problems either. They are trying to sell a service they hope will make the client happy enough to keep paying them.
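To make that mechanism concrete, here is a deliberately toy sketch of next-token prediction in Python. The words and probabilities are invented for illustration; no real model works from a hand-written table like this, but the selection criterion is the same: statistical likelihood, not the client's outcome.

```python
import random

# Toy sketch of next-token prediction. Real models learn probabilities
# from enormous training corpora; these numbers are invented purely to
# illustrate the mechanism.
next_word_probs = {
    "enforceable": 0.45,    # statistically likely continuation
    "unenforceable": 0.30,  # also plausible-sounding
    "ambiguous": 0.25,
}

def predict_next(probs):
    """Pick the next word in proportion to its probability.

    Note what is absent: no check for truth, no model of the
    client's goals, no sense of downstream consequences.
    """
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print("Your non-compete clause is", predict_next(next_word_probs))
```

Nothing in that loop asks whether the answer is true or whether it serves the client. It only asks what is probable.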
This focus and intent might not seem important to some. Who cares about intent if the results are good, right? Well, no. That intent, or the lack of it, is a vital difference, especially once problems arise. When your business or livelihood is on the line and chaos and uncertainty loom, a good human attorney and an AI program respond very differently.
The attorney asks, "Okay, how do I best aid my client, protect their assets, counsel them effectively, and help them navigate these problems?" And they do it because they have real stakes in the outcome, from their client's interests to their own reputation. This may at times involve telling clients things they understandably don't want to hear but need to. It may occasionally involve disagreement and debate, with the attorney advocating for or explaining solutions the client is reluctant to consider. Or it may involve cautioning against a dangerous path that carries hidden risks.
AI doesn't do that. AI asks, "Okay, the user asked me about this. What response can I give to make them pleased with my output right now?" It isn't attached to long-term goals the way humans are. It takes no pride in its work. If it makes a decision that ruins a client, it won't feel the slightest hint of shame or regret. It won't even notice unless the user tells it. And in that case, you're far more likely to get a canned apology than an effective solution.
Now, of course, for some basic tasks AI may equal or even surpass humans. Need to total up everything on a huge spreadsheet? A machine can do that faster than any human. Need a quick answer to a commonly known question? An AI will probably give you a good result, though not always. We'll talk about that more in our third and final article on this topic.
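For a sense of what "basic task" means here, a minimal sketch: assuming the spreadsheet is exported as a CSV file (the "invoices.csv" name and "amount" column below are hypothetical), totaling it is a few lines of ordinary code, the kind of mechanical work machines have always done well.

```python
import csv

# Sum one column of a spreadsheet exported as CSV.
# "invoices.csv" and its "amount" column are placeholder names.
total = 0.0
with open("invoices.csv", newline="") as f:
    for row in csv.DictReader(f):
        total += float(row["amount"])

print(f"Total: ${total:,.2f}")
```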
In its current state, AI lacks the ability to form intent, including intent to serve a client's needs, give them peace of mind, or flag a risk they didn't think to ask about. When I asked Google's Gemini AI directly about how it processes requests, the answer was clarifying:
"I don't have feelings, I have objectives. My core programming is designed to satisfy the 'intent' of your prompt.
Your Intent: To get an answer.
My 'Intent': To predict the most helpful, accurate, and relevant sequence of words to fulfill that request."
That response captures the gap precisely. An AI is optimizing for a sequence of words that satisfies the immediate prompt. It is not asking: what does this client actually need? What are they not telling me? What are the downstream risks of this path? Those are the questions that define legal judgment, and they don't appear in any prompt.
None of this means AI has no role in legal work. It does, and it's growing. What it means is that AI optimizes for a response, while a lawyer optimizes for an outcome. That distinction matters most when the stakes are real: a disputed contract, a regulatory inquiry, a deal that has to close. In those moments, the absence of intent isn't a philosophical problem. It's a practical one.
Part III addresses the third dimension of this problem: hallucinations, and what they mean specifically in a legal context. It's the most operationally concrete of the three, and the one with the most documented real-world damage already on the record.
If your company is already using AI tools in its operations, the most immediate risk isn't philosophical; it's contractual. The AI Contract Red Flags checklist covers 13 specific provisions in AI vendor agreements, employment contracts, and customer-facing terms where founders routinely leave themselves exposed. Download it free at vidarlaw.com.
(This post is for informational purposes only and is not legal advice. Specific outcomes depend on facts and jurisdiction.)
