Vendors lie about AI. Buyers pay the price.
In 2019, MMC Ventures examined 2,830 European startups claiming to be AI companies. 40% showed no evidence that AI mattered to their products. These firms raised capital and won contracts on claims they couldn’t substantiate.
Nothing has changed. The UK Advertising Standards Authority analysed 16,000 online ads in late 2024 and found AI claims everywhere – mostly in business-to-business marketing. Companies slap “AI” on products because it commands higher valuations. Many exaggerate. Some invent.
When you buy from these vendors, you inherit their problems. Their marketing claims become your operational promises. Their failures become your liability.
Real failures, real harm
Evolv Technologies sold AI weapons detectors to 800 US schools, claiming the AI could spot threats before they got inside. The system missed a seven-inch knife, which was later used to stab a student. When staff made the system more sensitive, false alarms rose to 50%. Under a November 2024 settlement with the FTC, schools can now cancel their contracts.
Nate Inc. raised $42m claiming AI completed online purchases automatically. In April 2025, prosecutors charged the founder with fraud. The “AI” was hundreds of contractors in the Philippines doing the work manually. Automation rate: zero.
Pieces Technologies told Texas hospitals its AI had hallucination rates below 0.001%. The Texas Attorney General disagreed. Settlement followed in September 2024.
You own your vendor’s mistakes
In 2022, Jake Moffatt asked Air Canada’s chatbot about bereavement fares. It gave wrong information. When he requested the promised discount, the airline refused, arguing the chatbot was “a separate legal entity responsible for its own actions.”
In February 2024, the tribunal dismissed this. Air Canada owns its website. Air Canada owns what the chatbot says.
The same applies to you. Deploy a vendor’s AI chatbot, fraud detector, or decision tool, and you cannot blame the vendor when it fails. Courts and regulators will hold you responsible for outputs you put in front of customers.
How to verify claims
Ask specific questions:
- What machine-learning techniques does this use?
- What percentage of tasks does the system automate, and what percentage do humans handle?
- What accuracy metrics exist, and how were they measured?
Vendors who can’t give clear answers warrant suspicion.
Demand evidence: technical architecture documents, model cards, performance data. Talk to the vendor’s engineers, not salespeople. Contact customers the vendor didn’t choose for you.
Run proper trials. Define success criteria before you start. Use your data, not theirs. Include difficult cases. Measure human intervention. Check outputs yourself.
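To make that concrete, here is a minimal sketch of what a trial harness might look like. It is illustrative only: `predict` stands in for whatever interface the vendor actually exposes, and the thresholds are placeholders you would agree before the trial starts, not recommended values.

```python
# Minimal sketch of an acceptance-test harness for a vendor AI trial.
# Hypothetical throughout: `predict` stands in for the vendor's real API,
# and the thresholds are illustrative placeholders, not recommendations.

from dataclasses import dataclass
from typing import Callable

# Success criteria agreed in writing BEFORE the trial starts.
MIN_ACCURACY = 0.95      # share of outputs matching your ground truth
MAX_HUMAN_RATE = 0.10    # share of cases escalated to a human

@dataclass
class Case:
    input_text: str
    expected: str          # ground truth from YOUR data, not the vendor's
    is_hard: bool = False  # mark the difficult cases you deliberately include

def run_trial(cases: list[Case],
              predict: Callable[[str], tuple[str, bool]]) -> bool:
    """Score the vendor system against pre-agreed thresholds.

    `predict` returns (output, needed_human). This assumes the vendor can
    report when a human intervened; if they can't, that is itself a finding.
    """
    correct = human = hard_correct = hard_total = 0
    for case in cases:
        output, needed_human = predict(case.input_text)
        correct += output == case.expected
        human += needed_human
        if case.is_hard:
            hard_total += 1
            hard_correct += output == case.expected

    accuracy = correct / len(cases)
    human_rate = human / len(cases)
    print(f"accuracy: {accuracy:.1%} (threshold {MIN_ACCURACY:.0%})")
    print(f"human intervention: {human_rate:.1%} (max {MAX_HUMAN_RATE:.0%})")
    if hard_total:
        print(f"hard-case accuracy: {hard_correct / hard_total:.1%}")
    return accuracy >= MIN_ACCURACY and human_rate <= MAX_HUMAN_RATE
```

The detail that matters is the order of operations: the thresholds are fixed before the vendor sees your test set, so the pass/fail line can’t move once the results come in.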
Make sure contracts protect you
Write warranties that mean something. Not “uses AI”, but specific accuracy thresholds, automation levels, and performance metrics. Include consequences for missing them.
Build in exit rights. The Evolv settlement created precedent for cancellation when AI claims prove false. Your contracts should allow the same without waiting for regulators.
Allocate liability clearly. If the AI produces wrong outputs, who pays? Consider uncapped liability for material misrepresentation.
Regulators are watching
Use this as leverage.
The SEC created its Cyber and Emerging Technologies Unit in February 2025, naming AI washing as an enforcement priority. The FTC launched “Operation AI Comply” in September 2024 with five actions. The UK Competition and Markets Authority can now fine companies up to 10% of global turnover under powers that took effect in April 2025.
Vendors face real enforcement risk. Tell them you expect compliance with AI regulations. Tell them you won’t absorb liability for their regulatory failures.
The point
Many companies claim AI but show no evidence that it materially drives their products. They want your contracts. Ask hard questions before you sign.
Sources
- MMC Ventures, “The State of AI 2019” (March 2019): CNBC
- ASA, “AI as a Marketing Term” (November 2024): ASA
- FTC v. Evolv Technologies (November 2024): FTC
- DOJ v. Albert Saniger / Nate Inc. (April 2025): DOJ
- Texas AG v. Pieces Technologies (September 2024): Texas AG
- Moffatt v. Air Canada, 2024 BCCRT 149 (February 2024): CBC
- FTC “Operation AI Comply” (September 2024): FTC
- SEC CETU (February 2025): SEC
- UK DMCCA enforcement powers (April 2025): DLA Piper
