AI Washing
The Hidden Risk Behind the Hype
Among the most pressing concerns is the phenomenon of AI washing: the practice of overstating or misrepresenting a company’s AI capabilities, or the growing acceptance of low-quality AI output. While it may be a trendy marketing tactic, AI washing can also carry significant legal liabilities and valuation distortions that demand scrutiny. The author discusses her views on AI washing and the implications for companies and professionals who view it as a panacea.
With all the Artificial Intelligence (AI) prefixes affixed to articles, software, and paywalled add-ons, buyer’s remorse and cautionary tales are slowly creeping into the algorithmic summaries of news floating around the internet. AI has become a defining feature of just about everything in content marketing and service offerings. Burgers—now powered by AI! It is not unlike the greenwashing waves of years past, when investment-chasing companies jumped on the latest, greatest content trend (green-ness) to fit an old company into a new, desirable mold. For instance, your new soda flavor was not the winner of drinker surveys; it was suggested by AI. Yum. AI did it, so it must be better. What happens when AI makes a flavor that kills people? Or, put in a less dystopian way, what happens when AI makes a flavor that is just gross but sounds nice? Or, put in a way that references the actual past, what happens when AI output is just dumb? Consider Burger King’s “Tastes like bird.” campaign.[1]
AI is not the only thing that can be dumb. Once upon a time, I was at an unnamed firm that hired a marketing firm for its rebranding effort. The marketing firm never bothered to understand the deliverables it was hired to rebrand, which resulted in the presentation of an ad campaign composed entirely of clichés.[2] It is unsurprising that the unnamed firm dissolved shortly thereafter. The lesson here is that people can also be dumb. Was it a failure of the marketing firm, or a failure of the unnamed firm in its instructions to the marketing firm? Likewise, when AI fails, is that a failure of the prompt, the person writing the prompt, the retailer of the AI service, or do we all collectively shrug, sigh, and sip some coffee?
Among the most pressing concerns is the phenomenon of AI washing, which can be the practice of overstating or misrepresenting a company’s AI capabilities or can be the increased acceptance of low-quality output (dumbness, stupidity, hallucinations … use whatever term you like, though I prefer not to disparage mental illness when describing AI’s misrepresentations, so I dislike the rising term “hallucination”). While it may be a trendy marketing tactic, AI washing can also carry significant legal liabilities and valuation distortions that demand scrutiny.
AI washing typically involves companies claiming that their products or services are powered by advanced AI technologies when they actually rely on basic automation or rule-based systems (which are not AI at all but glorified if…then logic). At its core, AI washing means presenting products or services as more technologically advanced than they truly are. A company might claim that its customer service chatbot is powered by cutting-edge machine learning when it operates on a simple decision tree. Another might boast about using AI to personalize user experiences when the underlying system is based on static filters or manually coded rules. These misrepresentations are not always intentional; sometimes they stem from a lack of understanding within the organization itself. The consequences, however, are the same: consumers, investors, and regulators are misled, and trust is eroded.

These claims appear in investor presentations, marketing materials, earnings calls, content marketing, and even regulatory filings. The ambiguity, market confusion, and lack of technical understanding surrounding what constitutes “real AI” allow companies to stretch the term to fit their narrative, creating a perception of innovation that may not be substantiated by reality. For legal professionals, this raises immediate concerns about false advertising, securities fraud, and contractual misrepresentation. For valuation professionals, it introduces distortions in assessing intangible assets, future earnings potential, and competitive positioning.
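To make the distinction concrete, here is a minimal, entirely hypothetical sketch of what often sits behind an “AI-powered” label: a keyword lookup with canned replies. The function name, keywords, and responses are invented for illustration; the point is that nothing here learns from data, so calling it machine learning would be exactly the kind of overstatement described above.

```python
# Hypothetical "AI-powered" chatbot that is really a fixed decision tree:
# plain if/then logic. No model, no training data, no learning.
def rule_based_chatbot(message: str) -> str:
    """Return a canned reply chosen by keyword matching.

    Nothing is learned from data; behavior never changes unless a
    programmer edits these rules by hand.
    """
    text = message.lower()
    if "refund" in text:
        return "Please visit our returns page."
    elif "hours" in text:
        return "We are open 9am-5pm, Monday through Friday."
    else:
        return "Sorry, I didn't understand. A human will follow up."

# A simple due-diligence question: does behavior change with data?
# Here the answer is provably no -- same input, same output, forever.
print(rule_based_chatbot("Can I get a refund?"))
```

A valuation or legal reviewer who asks to see the code, or simply asks whether the system’s behavior improves with usage data, can often distinguish this kind of static logic from a genuine learned model.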
The legal implications of AI washing were increasingly coming into focus. The Federal Trade Commission (FTC) issued guidance warning companies against making unqualified or deceptive claims about AI capabilities. Under Section 5 of the FTC Act, such practices may have been deemed unfair or deceptive, exposing companies to civil penalties and enforcement actions. The FTC’s 2023 advisory, “Keep Your AI Claims in Check,” specifically cautioned against overstating performance, implying bias elimination without evidence, and failing to disclose limitations. These guidelines were particularly relevant for legal counsel advising clients in consumer-facing industries, where misleading AI claims can trigger class actions and reputational damage. This entire paragraph is in the past tense, though. Why? Because the FTC removed this article at some point after April 26, 2023.[3]
Securities law may present another layer of risk. Public companies that exaggerate their AI capabilities in earnings calls, investor decks, or SEC filings may run afoul of Section 10(b) and Rule 10b-5 of the Securities Exchange Act under its traditional application and enforcement. These provisions generally prohibit materially false or misleading statements made in connection with the purchase or sale of securities. If a company’s stock price is inflated due to perceived AI innovation, and those claims are later revealed to be false, it could face shareholder lawsuits, SEC investigations, and significant market losses. Legal professionals involved in securities litigation or corporate governance must be vigilant in reviewing AI-related disclosures for accuracy and substantiation. The financial practitioner should consider their jurisdiction and applicable precedent for establishing materiality, perception, and falsehood.
Contractual liability is another area where AI washing can have serious consequences. Representations made during the sales process (whether in marketing materials, RFP responses, or even oral pitches) can become enforceable warranties if incorporated into contracts. If the promised AI functionality fails to materialize, the counterparty may sue for breach of contract, negligent misrepresentation, or fraud. Again, the practitioner should consider their jurisdiction and applicable precedent when assessing the likelihood of an award under these claims. This is particularly relevant in enterprise software and SaaS agreements, where clients rely heavily on vendor claims when selecting technology solutions. Legal advisors should ensure that contracts include clear definitions of AI functionality, integration clauses to limit reliance on promotional statements, and disclaimers where appropriate. Of course, this only applies if you are large enough to make demands of your vendors.
For valuation professionals, AI washing introduces a different but equally critical set of concerns. The perceived presence of AI can significantly influence a company’s valuation, especially in sectors like financial technology, health software, and enterprise services. Investors often assign premium valuations to companies that appear to be leveraging AI for scalability, efficiency, or personalization, even if that appearance is a lot of buzzwords and hand-waving without substance. If those capabilities are overstated, the valuation may be based on flawed assumptions. This can affect everything from discounted cash flow models to market comps and intangible asset assessments. During due diligence, valuation experts must probe beyond surface-level claims and seek technical validation of AI systems. This includes reviewing documentation, interviewing engineering teams, analyzing performance metrics, and pushing buttons (literally, not figuratively … although, maybe also figuratively). I recall a presentation at an oil and gas conference for some emerging technology application where the presenter still had Latin filler words (lorem ipsum placeholder text) in his software.
To mitigate these risks, companies must adopt some sort of governance framework around AI development and marketing. Do not start saying your practice is now “powered by AI” just because Microsoft added a free trial of Copilot to the personal Office software subscription you share with your mom, or just because you hired your teenager as an AI Prompt Engineer intern. Like the unnamed firm that did not bother informing the marketing firm about the level of professionalism, and the absence of clichés, expected in this industry, if you do not give good input, you do not get good output. It is true of humans, it is true of glorified if…then logic, it is true of machine learning, and it is true of AI. Garbage in = garbage out.
AI washing is not merely a marketing issue; it is a multifaceted risk that intersects with legal liability, regulatory compliance, and corporate valuation. Legal service professionals must be proactive in identifying and mitigating the legal exposures associated with exaggerated AI claims. Valuation professionals must ensure that their assessments reflect the true capabilities and limitations of AI systems, avoiding inflated valuations based on hype rather than substance. As AI continues to reshape industries, the integrity of corporate disclosures and the accuracy of valuation models will be critical to maintaining trust, ensuring compliance, and safeguarding investor interests.
AI is no longer just a technological tool; it has become a symbol of progress, a magnet for investment, and a powerful marketing asset, like popular buzzwords and prefixes before it: green, cyber, e-, and fat-free. Practical safeguards (additional contractual language where you can get it, some cynicism, and some suspension of blind acceptance) can reduce overreliance on marketing. Integration clauses, disclaimers, and clear definitions of AI functionality can help manage expectations and limit liability. Education is another key component: marketing, sales, and investor relations teams can be trained to understand the limitations of AI and the legal implications of overstating its capabilities. With a surge in popularity come the posers, per usual, and the rise of “AI washing.” Be wary; this article might have been written with the use of a free-trial AI. Or maybe not.
[1] “Tastes like bird.” is a reference to the satirical ads created for Burger King using AI in 2018. The results are comical and, importantly, are not the product of current technology or a concerted, genuine marketing effort. https://web.archive.org/web/20190821064027/https://www.restaurantbusinessonline.com/marketing/burger-king-lets-ai-do-its-new-ads-predictable-results
[2] Arguably, the sole deliverable of a litigation support firm is professional writing, which does not include cliché usage.
[3] You can still view it here: https://web.archive.org/web/20230426192636/http://www.ftc.gov/business-guidance/blog/2023/02/keep-your-ai-claims-check?source=email. You will get a page not found error if you attempt to view it directly at the FTC. Do you think it was removed by an AI-powered algorithm?
Dorothy Haraminac, MBA, MAFF, CFE, PI, provides financial forensics, digital forensics, and blockchain forensics under YBR Consulting Services, LLC, and teaches software engineering and digital forensics at Houston Christian University. Ms. Haraminac is one of the first court-qualified testifying experts on cryptocurrency tracing in the United States and provides pro bono assistance to victims of cryptocurrency investment scams to gather and summarize evidence needed to report to law enforcement, regulators, and other parties. If you or someone you know has been victimized in an investment scam, report it to local, state, and federal law enforcement as well as federal agencies such as the FTC, the FCC, and the IRS.
Ms. Haraminac can be contacted at (346) 400-6554 or by e-mail to admin@ybr.solutions.