Confident and Wrong — Detecting LLM Hallucinations in Production

Room 1 • Tue 12 May • 11:15–12:15 • AI & Agents • Intermediate
LLMs produce fluent, confident, wrong answers. This session covers what hallucinations actually are, why the taxonomy matters for detection, and the approaches available to developers working with external API endpoints — no model internals required. We look at grounding-based and consistency-based techniques honestly, including where each fails, and build toward what a responsible detection layer looks like in production today. You will leave with a clear picture of best practice and a realistic view of its limits.
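The consistency-based family mentioned above can be sketched from the API side alone: resample the same prompt several times at nonzero temperature and treat disagreement between the answers as a warning sign (the intuition behind approaches like SelfCheckGPT). Below is a minimal, hedged illustration; sample_answers is a hypothetical stand-in for your provider's chat-completion calls, and the Jaccard token-overlap scoring and the 0.4 threshold are arbitrary placeholders, not a recommended configuration.

```python
"""Minimal sketch of consistency-based hallucination detection.

Assumes only an external chat-completion API, wrapped by the
hypothetical helper sample_answers(); no model internals required.
"""
from itertools import combinations


def sample_answers(prompt: str, n: int = 5) -> list[str]:
    """Hypothetical: call your LLM provider n times at temperature > 0.

    Wire in whatever client you actually use (e.g. n separate
    chat-completion requests) and return the n answer strings.
    """
    raise NotImplementedError("connect your LLM provider here")


def jaccard(a: str, b: str) -> float:
    """Token-set overlap between two answers -- a crude agreement proxy."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0


def consistency_score(answers: list[str]) -> float:
    """Mean pairwise agreement; low values suggest the model is guessing."""
    pairs = list(combinations(answers, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)


def looks_hallucinated(prompt: str, threshold: float = 0.4) -> bool:
    """Flag a prompt whose resampled answers disagree with each other."""
    return consistency_score(sample_answers(prompt)) < threshold
```

In practice, token overlap is the weakest agreement measure; production systems typically swap in embedding similarity or an NLI model, which is exactly the kind of trade-off the session examines.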

About the speaker

Hampton Paulk

After 25+ years of software development, Hampton researches AI applications through the lens of creative misuse. While others follow best practices, he explores what AI can actually do when you ignore the guardrails—across security, development, and everyday problems. His approach combines hacker curiosity with academic rigor, finding that the most valuable insights come from treating AI tools as raw materials rather than finished products. You'll see real examples of AI doing things it wasn't designed for, and learn why that matters for security and beyond.