Context is Everything is a UK-based AI consultancy specialising in private AI deployment and institutional intelligence. We build SASHA, an enterprise AI platform deployed inside your firewall and trained on your proprietary methodology. Our AI concierge Margaret demonstrates these capabilities for free on our website.
18 checks to run before you trust, use, or share AI-generated content. Use the checklist on screen or get a printable PDF.
AI tools produce confident, fluent text — even when the content is wrong. Hallucinated citations, fabricated statistics, and plausible but incorrect claims are common in AI-generated output. Without a verification process, these errors get published, shared, and acted upon.
This 18-point checklist covers four critical stages: pre-use checks (is AI appropriate for this task?), factual verification (are claims accurate?), red flag detection (does the output contain hallucinations or contradictions?), and pre-share review (is this ready for your audience?).
Many organisations use this checklist as part of their AI acceptable use policy — requiring team members to complete it before publishing or sharing AI-assisted work.
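For teams that fold the checklist into a policy workflow and want to track completion programmatically, a minimal Python sketch might look like the following. The four stage names come from this page; the example items, class names, and progress wording are illustrative assumptions, not the checklist itself.

```python
from dataclasses import dataclass, field

@dataclass
class Stage:
    """One stage of the verification checklist."""
    name: str
    items: list[str]
    checked: set[int] = field(default_factory=set)

    def check(self, index: int) -> None:
        # Mark a single item in this stage as verified.
        self.checked.add(index)

# Illustrative items only; the real checklist has 18 items across these four stages.
checklist = [
    Stage("Pre-use checks", ["Is AI appropriate for this task?"]),
    Stage("Factual verification", ["Are all statistics traceable to a source?"]),
    Stage("Red flag detection", ["Do any citations fail to resolve?"]),
    Stage("Pre-share review", ["Is this ready for your audience?"]),
]

def progress(stages):
    # Mirror the "N of 18 verified" counter shown on the page.
    done = sum(len(s.checked) for s in stages)
    total = sum(len(s.items) for s in stages)
    return f"{done} of {total} verified"

checklist[0].check(0)
print(progress(checklist))  # -> "1 of 4 verified" with these sample items
```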
Run through every item before using or sharing AI output
Pre-use checks: ask these questions before you act on any AI-generated content.
Factual verification: actively verify before trusting the output.
Red flag detection: these patterns often indicate hallucinated or unreliable output.
Pre-share review: final checks before this content reaches anyone else.
Related resources:
Prevention strategies beyond verification
Why AI output remains your responsibility
Poor accuracy governance is a common failure pattern
Learn the science behind why AI gets things wrong
Diagnose uncontrolled AI usage in your organisation
Train your team on secure AI practices
How do I verify AI-generated content?
Use a systematic process covering four areas: pre-use checks (is AI appropriate?), factual verification (are claims accurate?), red flag detection (hallucinations or contradictions?), and pre-share review (is this ready for your audience?). This checklist covers all four.
What errors should I look for in AI output?
Hallucinated citations (references that don't exist), confident but incorrect statistics, outdated information presented as current, fabricated quotes, and logical inconsistencies. These are particularly dangerous because they look authoritative.
Is the checklist free to use?
Yes. It's free to use on screen and available as a printable PDF. Many organisations use it as part of their AI acceptable use policy, requiring team members to complete it before publishing AI-assisted work.
Why do AI tools get things wrong?
Large language models generate text by predicting likely next words based on patterns in their training data. They don't retrieve facts from a database, so they can produce fluent, confident text containing fabricated information, particularly for specific claims, dates, and citations.
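To make that concrete, here is a toy Python sketch of next-word prediction. The probability table is invented purely for illustration (a real model learns such patterns across an enormous vocabulary); the key point is that the model samples a statistically plausible continuation and never checks whether it is factually true.

```python
import random

# Invented toy probabilities: for one context, how likely each next word is.
# A real large language model learns patterns like this from its training data.
next_word_probs = {
    ("the", "study", "was", "published", "in"): {
        "2019": 0.40,    # plausible, may or may not be true
        "2021": 0.35,    # equally plausible, equally unchecked
        "Nature": 0.25,  # fluent either way
    },
}

def predict_next(context):
    # Sample the next word from the learned distribution.
    # Note: no fact lookup happens anywhere in this process.
    probs = next_word_probs[context]
    return random.choices(list(probs), weights=list(probs.values()))[0]

context = ("the", "study", "was", "published", "in")
print(" ".join(context), predict_next(context))
# The sentence reads fluently every time; the date's accuracy is never verified.
```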