Welcome back, legal innovators! Today we’re diving into the murky waters of AI hype, where billion-dollar valuations can be built on smoke and mirrors. Every day, another startup claims to be the “first” to revolutionize legal practice with “groundbreaking AI,” “reasoning models,” or now “AI agents” and “agentic workflows.” But how many of these solutions are stretching the truth?
The recent Builder.ai scandal should make every tech-savvy lawyer sit up and take notice. This billion-dollar startup promised AI that could write software “like ordering pizza,” but what investigators found behind their slick marketing was something else entirely. As lawyers trained to spot deception, we somehow keep falling for the AI hype. Maybe it’s FOMO, maybe it’s fear of looking outdated, or maybe we’ve all drunk too much AI Kool-Aid. Whatever the reason, it’s time to apply our professional skepticism to the AI revolution, before we find ourselves on the wrong end of the next billion-dollar illusion. Ready to sharpen your BS detector? Let’s dig in.
This substack, LawDroid Manifesto, is here to keep you in the loop about the intersection of AI and the law. Please share this article with your friends and colleagues and remember to tell me what you think in the comments below.
When famed futurist Roy Amara said we tend to overestimate a technology’s impact in the short run and underestimate it in the long run, he wasn’t just talking about the flashy bits, he was also warning us about illusions. Today, artificial intelligence stands atop the crest of the hype cycle, accompanied by the sort of optimism that has many starry-eyed and ready to jump headfirst into new tools. But with hype comes the risk of deception, and that’s where “AI-washing” enters the scene.
Think of AI-washing as painting “lightning-fast” stripes on a 20-year-old sedan and passing it off as a brand-new sports car. It’s the marketing spin that inflates or outright fabricates AI capabilities, capitalizing on the technology’s mystique. For lawyers, especially those of us in practice areas where data privacy, diligence, and compliance are paramount, understanding AI-washing is no longer optional. “Buyer beware” has never been more relevant.
If this sounds interesting to you, please read on…
A Cautionary Tale: Builder.ai
Picture a high-flying startup, flush with hundreds of millions in venture capital. Its founders promise software development so automated that ordering an app is as easy as ordering a pizza. Their secret sauce? A slick “AI assistant” they call Natasha. Investors included recognizable names. Then came the reveal. Instead of a miracle maker, what emerged from behind the curtain was reportedly some 700 human engineers in India. Meanwhile, the marketing promised an autonomous AI, capable of revolutionizing app-building.
Then came financial misconduct allegations. Auditors discovered that the company reported around 220 million dollars in revenue for 2024, while the real figure may have been below 50 million. There were also allegations of inflating sales by round-tripping transactions with another firm, VerSe Innovation. Creditors promptly seized tens of millions of dollars, and soon the house of cards fell apart. Builder.ai declared bankruptcy in multiple jurisdictions, from India to the United States.
This meltdown highlights an uncomfortable reality. Tech watchers have often been tempted to label every new software tool as “AI-powered,” aiming to attract big checks and major partnerships. For lawyers, the legal risk is manifold: false advertising, potential securities fraud, and contract disputes linger in the aftermath. Whether you represent an investor burned by unrealistic promises or a corporate client considering acquisitions, this cautionary experience is a valuable signpost.
AI-Washing 101
AI-washing is reminiscent of greenwashing in environmental marketing. Much as some companies present themselves as more sustainable than they really are to capitalize on eco-friendly enthusiasm, AI-washing involves marketing an ordinary, rule-based system as a sophisticated AI solution. A rule-based chatbot becomes “generative intelligence.”
This practice thrives, in part, on two deep-rooted phenomena:
First, the fear of missing out is profound. Teams worry that if they do not adopt AI, they will be left behind. Psychologists call this a form of groupthink, fueled by the worry that skepticism could cause an organization to appear outdated.
Second, it taps into the mania described in Gartner’s hype cycle. Picture the peak of inflated expectations. People see potential in a technology, sometimes more than is realistic. Without strong checks and balances, both buyers and sellers can get pulled into a whirlwind of overselling.
Humans are also prone to confirmation bias. If the pitch lines up with the hype, we are inclined to believe it. But when whistleblowers or journalists confront the reality, illusions tumble. None of this means that authentic AI solutions do not exist. It simply means buyers and advisers should weigh a vendor’s claims against verifiable evidence.
How Lawyers Can Spot Potential Trouble
Certain signals indicate whether a product’s AI claims might be puffery or outright fabrication. One common giveaway involves “human-in-the-loop disguised as AI.” If a vendor insists that customers are receiving an automated service, yet user interactions show frequent manual interventions or delays, that might be a red flag. If staff must constantly override machine decisions, you are probably looking at an engine that is more Mechanical Turk than machine learning.
Another telltale sign arises when marketing collateral is filled with buzzwords like generative AI, cognitive computing, or neural nets, yet provides no tangible details about training data, the underlying model, or oversight mechanisms. A partial reliance on black-box models can be legitimate, but a complete refusal to share even the broad strokes of how the model functions should raise questions about authenticity.
Financial statements that appear inflated or rely on unusual revenue recognition methods also speak volumes. The Federal Trade Commission has penalized tech startups that claim inflated results from supposed AI breakthroughs. Excessive reliance on press releases announcing “the next big thing” without backing from real customer references is also suspicious.
Hidden Psychological Dynamics
Part of what allows AI-washing to trigger such dramatic hype is the halo effect. If a company has endorsements from prominent backers or has landed a large seed round, observers assume the product must be solid. This taps into a classic psychological principle: we are more prone to accept claims if they are associated with success or recognition. (Think of Theranos founder Elizabeth Holmes.)
Groupthink is a factor too. When multiple players in an industry all claim to have the same new innovation, professionals can feel pressured to follow the crowd. You might encounter colleagues or clients who say, “Company X and Y are on board, so we must do it too.” Open dissent can be misconstrued as ignorance.
In some corners, the Dunning-Kruger effect runs rampant. It describes the phenomenon where less knowledgeable individuals overestimate their expertise. In the realm of cutting-edge AI, novices can be so confident in their limited understanding that they do not see red flags in a vendor’s claims. Legal counsel, by contrast, has a duty to adopt a measured stance, ask the uncomfortable questions, and ensure that marketing meets reality before signing on the dotted line.
Practical Steps for Attorneys
Readers of this piece are sophisticated. You likely already apply due diligence when confronted with new technologies. Still, it helps to have a mental checklist.
One crucial starting point is to ask vendors: “What genuine AI model do you rely on, and how is it trained?” If they name-drop a large language model such as GPT-4.1 or Llama 4, follow up with questions about how they supervise, test, and refine that model. Also, keep an eye on how much intellectual property is truly proprietary. If everything is just an API call to ChatGPT, the product may be a “thin wrapper”: not an advanced platform, but a third-party integration with a pretty coat of paint.
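To make the “thin wrapper” idea concrete, here is a minimal, hypothetical sketch. Everything in it is invented for illustration: the class name, the prompt template, and the `call_llm` stand-in (which stubs out what, in a real product, would be an API call to a third-party model provider). The point is how little “proprietary AI” such a product may actually contain.

```python
# Hypothetical "thin wrapper" product, sketched for illustration.
# The only assets the vendor actually owns here are a prompt template
# and a pass-through to someone else's model.

PROMPT_TEMPLATE = (
    "You are an expert contract reviewer. "
    "Identify risky clauses in the following text:\n\n{document}"
)


def call_llm(prompt: str) -> str:
    """Stand-in for the vendor's call to a third-party model API.
    Stubbed here so the sketch runs without network access."""
    return f"[model response to {len(prompt)} chars of prompt]"


class ThinWrapperLegalTool:
    """All of the 'AI' lives in someone else's model; the wrapper
    contributes only the prompt and a pretty coat of paint."""

    def review(self, document: str) -> str:
        prompt = PROMPT_TEMPLATE.format(document=document)
        return call_llm(prompt)


tool = ThinWrapperLegalTool()
print(tool.review("The party of the first part shall indemnify..."))
```

A useful diligence question follows directly from the sketch: if you mentally delete the third-party model call, what of value remains? If the answer is a prompt template and a user interface, you are evaluating an integration, not a platform.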
Then there is the matter of disclaimers. If a company claims that its AI can do complex tasks with near-perfect accuracy and reliability, yet provides no disclaimers about mistakes or “hallucinations,” you may want to question how thoroughly they have tested the product.
Keep your eyes on the contractual fine print. Are there indemnification clauses? Detailed descriptions of what data the AI uses and how it is retained? Does the system train on submitted data? Who is accountable if a system goes off-track?
The Enduring Hype Cycle
One might ask, how is it that so many experts and investors fall for AI-washing even after repeated fiascos? Part of the explanation lies in Gartner’s hype cycle. Early adopters and media outlets spotlight the promise of a new technology, which triggers intense enthusiasm in the marketplace. The peak of inflated expectations is reached quickly. Once actual capabilities fail to meet that lofty standard, a slump sets in and companies may fold. Those that survive eventually enter a more realistic phase of productivity, but by then, the mania has often moved on to the next bright toy.
We can add a psychological dimension to this. The mania that surrounds AI can feel like an unstoppable wave. Professionals read stories about generative AI passing bar exams and writing short novels, and they believe it can handle just about anything. Marketing departments exploit that sentiment, especially when they brand normal algorithms or orchestrated manual work as “agentic AI workflows.”
Yet as the tales of Builder.ai and other exposed AI-washing cases illustrate, illusions wear off when tested by real scrutiny. For lawyers who help shape or evaluate these technologies, a guiding principle is caution tempered by curiosity.
Closing Thoughts
The shockwaves from Builder.ai’s downfall underscore something essential. Even the most promising startup, with all the right endorsements and plenty of venture capital, can unravel when it overstates its achievements. Attorneys play a pivotal role in practicing due diligence and bringing accountability.
We can glean valuable lessons by reflecting on the fiasco.
First, a robust sense of skepticism is an asset, not a drawback. Questioning the basis for claims about “transformative AI” can feel uncomfortable, especially when swirling hype suggests that everyone else has already embraced the trend. Yet the role of the lawyer demands analytical thinking that stands apart from group excitement.
Second, it’s worth remembering that illusions sustain themselves only when people are reluctant to look behind the curtain. Understanding the basics of how AI works, or at least knowing the right questions to ask, can prevent big missteps. Thinking like a lawyer invites us to stay curious and vigilant, rather than simply going along.
Finally, the key is to separate honest, evidence-based claims from illusions spun to raise fast capital or close deals prematurely. Lawyers who keep calm in this hype-laden world will shape better outcomes for clients, themselves, and society at large.
If you ever wonder whether that shiny new AI pitch is more smoke than substance, remember the cautionary tale of Builder.ai. It is a powerful reminder that behind every claim lies a testable reality. And for us, as legal professionals who champion ethics and responsibility, our duty is to be both bold and prudent in our pursuit of the future.
By the way, did you know that I now offer a daily AI news update? You get 5 🆕 news items and my take on what it all means, delivered to your inbox, every weekday.
Subscribe to the LawDroid AI Daily News and don’t miss tomorrow’s edition:
LawDroid AI Daily News is here to keep you up to date on the latest news items and analysis about where AI is going, from a local and global perspective.
If you’re an existing subscriber, you can read the daily news here. I look forward to seeing you on the inside. 😉
Cheers,
Tom Martin
CEO and Founder, LawDroid