Safe by Design — Guardrails and Evals for Production LLM Systems

Room 9 • Tue 12 May • 16:00–17:00 • AI & Agents • Intermediate
Guardrails are not a single component — they are an architecture. This session cuts through the confusion around what guardrails actually are, maps the distinct categories of concern, and shows what a layered, production-ready approach looks like in practice. It then tackles the question most teams skip: how do you know your guardrails are working? We cover evals as the discipline that answers that question — and why you need both to build LLM systems you can actually trust.

About the speaker

Hampton Paulk

After 25+ years of development, Hampton researches AI applications through the lens of creative misuse. While others follow best practices, he explores what AI can actually do when you ignore the guardrails—across security, development, and everyday problems. His approach combines hacker curiosity with academic rigor, finding that the most valuable insights come from treating AI tools as raw materials rather than finished products. You'll see real examples of AI doing things it wasn't designed for, and learn why that matters for security and beyond.