Air Canada’s chatbot lied to a customer about bereavement fares. The airline told a tribunal it wasn’t responsible for its AI’s mistakes. The tribunal said no. Moffatt v. Air Canada, 2024 BCCRT 149 (February 2024).
This case tells you everything. Boards own their AI problems now. No hiding behind the technology.
Directors approve AI projects with less scrutiny than they’d give a new factory. Bad idea. These systems reshape customer relationships, create new liabilities, and land you in court (see the lawsuits over AI hiring discrimination).
The question isn’t whether to govern AI. It’s how to govern it without killing innovation or rubber-stamping disasters.
Who Takes the Blame?
When AI goes wrong, who’s responsible?
Not the algorithm. Regulators and courts want a human to blame. But responsibility for AI often sits nowhere specific: it gets lost between technical teams, business units, and executives.
Boards must fix this. Ask hard questions:
- Who approved the training data?
- What assumptions did we build into the model?
- Who set the acceptable error rate?
- When this harms someone, who explains it to regulators?
New job titles won’t help on their own. Chief AI officers and heads of algorithmic accountability sound impressive. They mean nothing without clear responsibility and accountability.
Require a named executive sponsor for every customer-facing AI system. They must certify they understand how it works and what could break. Put your name on it, and you’ll do better due diligence.
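One way to make that sponsorship concrete is a simple internal register of customer-facing AI systems, where a blank certification blocks deployment. Here is a minimal sketch in Python (3.10+); the record fields, the `certify` rules, and the idea of requiring documented failure modes are all illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in a hypothetical register of customer-facing AI systems."""
    name: str                       # e.g. "support-chatbot"
    executive_sponsor: str          # the named person accountable for this system
    purpose: str                    # what the system decides or recommends
    known_failure_modes: list[str] = field(default_factory=list)
    sponsor_certified_on: date | None = None   # stays blank until the sponsor signs off

    def certify(self, sponsor: str, on: date) -> None:
        """Record that the sponsor has reviewed how the system works and what could break."""
        if sponsor != self.executive_sponsor:
            raise ValueError("Only the named executive sponsor can certify this system.")
        if not self.known_failure_modes:
            raise ValueError("Certification requires at least one documented failure mode.")
        self.sponsor_certified_on = on
```

The data structure isn’t the point. The point is that someone with their name on the record has to list what could break before the system goes anywhere near a customer.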
Real Oversight vs Theatre
Most “human in the loop” oversight is rubbish.
Take a lending algorithm processing thousands of applications per hour. Yes, a human reviews flagged cases. But can they override the system? Do they understand its reasoning? Or are they just clicking “approve” on the AI’s decisions?
That’s not oversight. That’s theatre.
Real oversight sometimes means slowing down. Some decisions can’t be automated responsibly. Healthcare organisations using diagnostic AI must decide: which recommendations need a physician’s review? That decision trumps processing speed. Accept it. Some scale isn’t worth the risk.
Be honest about where you need human judgement. Then give those humans time, information, and authority to use it.
If your oversight is so cumbersome people work around it, you’ve failed. If it catches nothing, you’ve also failed.
Ask the Awkward Questions First
Before AI touches customers or employees, ask:
- What could go wrong?
- Who might get harmed?
- Could this discriminate?
- What happens when it fails publicly?
Most organisations skip this. Technical teams check accuracy. Legal checks compliance. Procurement checks the contract. Nobody asks: is this a good idea?
Build a review board with diverse voices. Include technologists who know what the system does, business leaders who know the market, and people who represent affected communities. Some organisations add frontline employees and customer advocates and ask them to imagine how the system might fail. That catches problems technical testing misses.
Keep it fast. Hours, not months. But do it before deployment, not after disaster.
The Speed Trap
Competitors are deploying AI fast. Your board feels pressure to match them. “We’ll refine it as we go” sounds tempting.
Sometimes that works. Often it doesn’t.
Distinguish between recoverable mistakes and catastrophic ones. AI generating mediocre marketing copy? Fix it iteratively. AI making hiring decisions or setting insurance premiums? Go slower.
Getting it wrong means regulatory investigations, lawsuits, and systematic unfairness that takes years to fix.
Set different approval levels for different risks. Low stakes (optimising delivery routes, suggesting meeting times)? Lightweight review. High stakes (affecting lives, livelihoods, fundamental rights)? Rigorous scrutiny. Even if competitors deploy first.
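To show what tiered approval can look like in practice, here is a minimal Python sketch. The three tiers, the classification criteria, and the review steps are hypothetical examples, not a recommended policy.

```python
# Illustrative review requirements per risk tier.
REVIEW_REQUIREMENTS = {
    "low": ["technical sign-off"],                          # e.g. route optimisation, meeting scheduling
    "medium": ["technical sign-off", "legal review"],       # e.g. marketing personalisation
    "high": ["technical sign-off", "legal review",
             "ethics board review", "named executive sponsor"],  # e.g. hiring, credit, insurance pricing
}

def risk_tier(affects_individual_rights: bool, decisions_are_automated: bool) -> str:
    """Classify a proposed AI use case into an approval tier."""
    if affects_individual_rights:
        return "high"
    if decisions_are_automated:
        return "medium"
    return "low"

def required_reviews(affects_individual_rights: bool, decisions_are_automated: bool) -> list[str]:
    """Return the review steps a use case must clear before deployment."""
    return REVIEW_REQUIREMENTS[risk_tier(affects_individual_rights, decisions_are_automated)]

# A hiring model affects individual rights, so it takes the full review path:
print(required_reviews(affects_individual_rights=True, decisions_are_automated=True))
```

However your organisation draws the lines, the principle is the same: the review burden scales with the harm a mistake can cause, not with how eager anyone is to ship.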
Being second with a reliable system beats being first with a disaster. Social media companies still struggle with AI moderation that fails on context. Workday and iTutorGroup faced lawsuits over algorithmic hiring discrimination. The market remembers these failures for years.
What to Do
Stop treating AI as a technical matter. It’s a business decision with legal, ethical, and reputational consequences.
Three actions:
- Fix accountability. Name someone responsible, and accountable, for every significant AI system.
- Make oversight real. If humans can’t intervene meaningfully, the oversight is worthless.
- Review ethically. Fast but thorough. Include people who understand the technology and its impact.
Accept this truth: going slower often gets you there faster. You avoid fixing disasters.
The days of approving AI for “efficiency gains” are over. Don’t ask whether to adopt AI. Ask whether you’re governing it properly.
Get this wrong, and your next conversation with regulators won’t be hypothetical. Get it right, and AI becomes a lasting advantage.
Sources
- Moffatt v. Air Canada, 2024 BCCRT 149 (February 2024) – https://www.canlii.org/en/bc/bccrt/doc/2024/2024bccrt149/2024bccrt149.html
- CBC News: Air Canada chatbot lawsuit – https://www.cbc.ca/news/canada/british-columbia/air-canada-chatbot-lawsuit-1.7116416
- CNN: Workday AI hiring discrimination lawsuit – https://www.cnn.com/2025/05/22/tech/workday-ai-hiring-discrimination-lawsuit
- American Bar Association: Navigating AI Employment Bias – https://www.americanbar.org/groups/business_law/resources/business-law-today/2024-april/navigating-ai-employment-bias-maze/
- Meta Oversight Board: Content Moderation in a New Era for AI and Automation (February 2025) – https://www.oversightboard.com/news/content-moderation-in-a-new-era-for-ai-and-automation/