
You Still Have to Own It

4 min read · 730 words
AI Responsibility · Legal · Governance


Why AI-generated content is still your responsibility — and what to do about it

There's a comforting thought that creeps in when you use generative AI: the machine wrote it, so it's the machine's problem. If ChatGPT gets a fact wrong or Copilot drafts something that breaches copyright, that's on the technology. Right?

Wrong. Every legal system that's looked at this question has reached the same conclusion: if you publish it, send it, or act on it, you own it.

The Courts Have Already Decided

This isn't theoretical. Courts have been ruling on this since 2023, and the pattern is clear.

In Canada, Air Canada's chatbot told a customer he could book a full-fare flight and get a bereavement discount applied afterwards. The airline argued the chatbot was a "separate legal entity" and its statements weren't binding. The tribunal disagreed: "You are responsible for all the information on your website." Air Canada paid up.

In the US, two New York lawyers used ChatGPT to prepare a court filing. It fabricated six case citations, complete with convincing-sounding judicial opinions. When the judge discovered the cases didn't exist, the lawyers and their firm were fined $5,000 and ordered to send letters to the judges whose names had been attached to the fabricated opinions. The court was blunt: there is nothing inherently improper about using technological assistance, but lawyers must still ensure the accuracy of their filings.

In the UK, barristers have been rebuked for submitting AI-generated legal arguments built on fabricated precedents. The High Court has issued formal warnings to the entire profession. The message is consistent: "The AI told me" is not a defence.

[Figure: Timeline of AI accountability rulings, 2023-2026, across the US, UK, EU and Canada, showing enforcement accelerating]

What "Owning It" Actually Means

The principle is straightforward: if you put your name to it, it's yours. That applies whether you typed every word yourself or an AI generated the first draft.

This means:

  • If you publish it, you're responsible for its accuracy, just as you would be with any other publication
  • If you send it to a client, it carries the same professional obligations as anything else you'd send
  • If you use it to make a decision, the consequences of that decision are yours
  • If it infringes someone's copyright, you can't point at the tool and walk away

The regulatory direction is the same everywhere. The UK applies existing law (data protection, professional standards, consumer protection) and says AI doesn't get a special exemption. The US has no single federal AI law, but the FTC, EEOC, and state attorneys general are all clear: using an AI tool doesn't shift your obligations. The EU AI Act, with transparency obligations binding from August 2026, requires AI-generated content to be marked and identifiable.

None of these frameworks let you off the hook. They all place the responsibility on the person or organisation that deploys the output.

The Good News: AI Can Help You Check

Here's the thing people miss: the same technology that creates the risk also helps you manage it.

AI is genuinely good at verification. You can use it to fact-check claims against reliable sources, verify that citations are real and say what you think they say, check content against copyright databases, and cross-reference facts across multiple sources before you publish.
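
To make the citation point concrete: a handful of lines can flag a case citation that returns no hits in a public case-law database before anything goes out the door. This is a minimal sketch, not a production checker; it assumes CourtListener's public v3 search API and its response shape, and a zero-hit result is a prompt for human review, not proof of fabrication.

```python
import requests

COURTLISTENER_SEARCH = "https://www.courtlistener.com/api/rest/v3/search/"

def citation_has_hits(case_name: str) -> bool:
    """Return True if a case name gets any hits in CourtListener's
    opinion search. The v3 endpoint and its JSON response shape
    ({"count": N, ...}) are assumptions in this sketch."""
    resp = requests.get(
        COURTLISTENER_SEARCH,
        params={"q": case_name, "type": "o"},  # "o" = judicial opinions
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

# One of the six citations ChatGPT fabricated in Mata v Avianca:
print(citation_has_hits("Varghese v. China Southern Airlines"))  # expect False
```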

The responsible approach isn't to avoid AI; it's to build checking into your workflow. Generate the draft, then use AI (and your own judgement) to verify it. Treat every AI output as a first draft that needs review, not a finished product.
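
What does "generate, then verify" look like in practice? The sketch below shows the shape of it: one call drafts, a second call does nothing but hunt for claims that need checking. It assumes the OpenAI Python client; the model name and prompts are illustrative, and the checklist it produces goes to a human reviewer, not straight to publication.

```python
from openai import OpenAI

client = OpenAI()   # assumes OPENAI_API_KEY is set in your environment
MODEL = "gpt-4o"    # illustrative model name, not a recommendation

def draft(prompt: str) -> str:
    """First pass: generate the draft, treated strictly as a draft."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def verify(text: str) -> str:
    """Second pass: same technology, different job. The model is told
    only to list what needs checking, never to rewrite or approve."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a fact-checker. List every factual claim, "
                    "citation, and quotation in the text that should be "
                    "independently verified before publication. Do not "
                    "rewrite the text and do not declare it accurate."
                ),
            },
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content

article = draft("Summarise the Air Canada chatbot ruling in 200 words.")
print(verify(article))  # a checklist for a human reviewer, not a sign-off
```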

This is actually how most professionals already work with human-generated content. You wouldn't publish a report without reviewing it. You wouldn't file a legal brief without checking the citations. AI output deserves exactly the same scrutiny.

The Bottom Line

Generative AI is a powerful tool. But it's a tool, and tools don't carry liability. You do.

The courts, regulators, and professional bodies have all landed in the same place: you are responsible for the content you put into the world, regardless of how it was created. Build verification into your process. Check the facts. Confirm the sources. Use the technology to help you do that.

But never forget: you still have to own it.

---

Sources

  • Moffatt v Air Canada, Civil Resolution Tribunal, BC (2024)
  • Mata v Avianca, US District Court, SDNY (2023): lawyers sanctioned for AI-fabricated citations
  • UK High Court formal warning on AI-generated submissions (2025)
  • UK Data Protection Act 2018 / UK GDPR: ICO guidance on AI accountability
  • FTC guidance on AI claims substantiation (2023)
  • EU AI Act, Article 50: transparency obligations for AI-generated content (binding August 2026)
  • Colorado AI Act SB 24-205: duty of reasonable care for AI deployers (effective 2026)
  • EEOC guidance on employer liability for AI-driven decisions
