What is LSAS?
LSAS (Layered Safety & Accuracy System) is a boundary architecture for GenAI and tool-using systems.
It is a safety-orchestration layer that can sit in front of multiple models, tools, and vendors without retraining. All model outputs, tool plans, and API payloads are treated as untrusted until they are classified, grounded, validated, escalated when needed, and recorded as audit-ready evidence.
The framework is designed for leaders who must defend GenAI systems to boards, regulators, security teams, and clinical governance councils. It turns "we think it is safe" into something you can define, enforce, measure, and improve over time — as a spec, architecture, and operating model.
Introducing LSAS Stack: a working LSAS gateway
LSAS began as a specification and reference architecture. I have now shipped LSAS Stack — an open, self-hostable implementation of that spec that you can run in your own environment.
What LSAS Stack is
- A Fastify-based gateway and runtime that sits in front of both LLM providers (e.g., OpenAI-compatible APIs) and your existing JSON/REST services.
- A deterministic validator and policy engine that turns every call into a structured decision: ALLOW, REDACTED, BLOCKED, or ESCALATE_HITL, with severity and reasons.
- Out-of-the-box policy packs for PHI/PII privacy, PCI-like data, secrets/security, accessibility, and prompt integrity.
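The structured decision described above might be modeled as follows. This is an illustrative sketch: the field names (`decision`, `severity`, `reasons`) and the merge logic are assumptions, not LSAS Stack's published schema.

```typescript
// Hypothetical sketch of an LSAS decision record; field names are
// illustrative assumptions, not LSAS Stack's published schema.
type LsasDecision = {
  decision: "ALLOW" | "REDACTED" | "BLOCKED" | "ESCALATE_HITL";
  severity: "low" | "medium" | "high" | "critical";
  reasons: string[]; // rule hits that produced this outcome
};

// Example: fold two findings into a single worst-case decision,
// keeping every reason for the audit trail.
function worstOf(a: LsasDecision, b: LsasDecision): LsasDecision {
  const order = ["ALLOW", "REDACTED", "ESCALATE_HITL", "BLOCKED"];
  return order.indexOf(a.decision) >= order.indexOf(b.decision)
    ? { ...a, reasons: [...a.reasons, ...b.reasons] }
    : { ...b, reasons: [...a.reasons, ...b.reasons] };
}
```

The key property is that a call always resolves to exactly one of four outcomes, so downstream systems can branch deterministically instead of interpreting a score.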
Key interfaces
LSAS Stack exposes simple, developer-friendly endpoints:
- POST /v1/chat/completions — OpenAI-compatible. Applies validators and policy packs, calls the upstream model when allowed, and returns the model response plus LSAS metadata (decisions, findings, redactions).
- POST /v1/payload/sanitize — accepts arbitrary JSON payloads and returns a sanitized data object plus structured lsas findings for use in front of existing APIs.
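A client call to the sanitize endpoint might look like the sketch below. The response shape (`data` plus an `lsas` findings object) follows the description above, but the exact field names are assumptions and may differ in your LSAS Stack version.

```typescript
// Minimal client sketch for POST /v1/payload/sanitize. The response
// shape ({ data, lsas }) is an assumption based on the endpoint
// description; check your LSAS Stack version for the exact schema.
type SanitizeResponse = {
  data: unknown; // sanitized payload, safe to forward downstream
  lsas: { findings: Array<{ rule: string; severity: string }> };
};

async function sanitize(baseUrl: string, payload: unknown): Promise<SanitizeResponse> {
  const res = await fetch(`${baseUrl}/v1/payload/sanitize`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`sanitize failed: ${res.status}`);
  return (await res.json()) as SanitizeResponse;
}

// Pure helper: decide whether the sanitized payload may be forwarded.
function hasBlockingFinding(resp: SanitizeResponse): boolean {
  return resp.lsas.findings.some((f) => f.severity === "critical");
}
```

In front of an existing API, the caller forwards `data` instead of the raw payload and short-circuits when `hasBlockingFinding` is true.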
A Sandbox demo UI shows exactly what would be redacted, blocked, or escalated — and why — making LSAS behavior transparent to engineering, security, and clinical or risk stakeholders.

LSAS Stack Sandbox
Visualizes exactly what LSAS Stack redacts, blocks, or escalates for a given request — and why — so engineering, security, and risk stakeholders can see the boundary in action.
How LSAS Stack fits your environment
LSAS Stack is designed to live inside your VPC, close to your existing systems and controls. Your boundary matters more than any vendor cloud.
Self-hosted gateway
- Containerized, self-hostable deployment, suitable for Kubernetes or your preferred orchestration.
- Sits in front of multiple LLM providers and internal JSON/REST services.
- Acts as a policy enforcement point for both AI and high-risk API traffic.
Deterministic validation
- Validator pipelines for PHI/PII, PCI-like data, secrets, security, prompt injection, and accessibility.
- Each call emits structured decisions, severities, and rule hits instead of opaque scores.
- Policy packs let you express HIPAA, PCI, security, and A11y posture as configuration.
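Expressed as code, a policy pack might look like the following sketch. The pack name, rule identifiers, and patterns are invented for illustration; they are not LSAS Stack's shipped packs.

```typescript
// Illustrative policy-pack configuration; pack and rule identifiers
// are invented for this sketch, not LSAS Stack's shipped packs.
type PolicyRule = {
  id: string;
  pattern: RegExp;
  action: "REDACT" | "BLOCK" | "ESCALATE_HITL";
  severity: "low" | "medium" | "high";
};

const phiPack: { name: string; rules: PolicyRule[] } = {
  name: "phi-privacy",
  rules: [
    { id: "phi.ssn", pattern: /\b\d{3}-\d{2}-\d{4}\b/, action: "REDACT", severity: "high" },
    { id: "phi.mrn", pattern: /\bMRN[:\s]*\d{6,}\b/i, action: "ESCALATE_HITL", severity: "medium" },
  ],
};

// Return the ids of every rule that fires on the given text.
function applyPack(text: string, pack: typeof phiPack): string[] {
  return pack.rules.filter((r) => r.pattern.test(text)).map((r) => r.id);
}
```

Because the pack is plain configuration, it can be diffed, reviewed, and release-gated like any other artifact in your repository.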
Data boundary first
- Raw prompts and payloads stay inside your boundary; only derived telemetry leaves.
- You control storage, retention, and encryption for logs and Safety Reports.
- Well-suited to healthcare, fintech, and enterprise SaaS teams that need auditable, defensible GenAI and API boundaries.
LSAS Stack in front of your APIs and platform
These diagrams show how LSAS Stack sanitizes payloads before they reach internal services and how it fits into a broader VPC-style deployment.
Payload sanitization
POST /v1/payload/sanitize applies LSAS policy to arbitrary JSON payloads before they ever touch downstream services.
VPC deployment
Illustrates LSAS Stack running inside your cloud boundary alongside identity, logging, and observability services.
Executive summary: why LSAS exists
Why GenAI fails in healthcare
- PHI leakage through prompts, logs, and shadow integrations.
- Prompt injection and jailbreaking against clinical or policy intent.
- Insecure handling of generated content and tool calls.
- Overconfident compliance guidance with no grounded evidence.
- Unsafe code and configuration generation that bypasses controls.
- Audit gaps: no way to reconstruct what the system actually did.
- Drift as models, providers, and prompts change underneath you.
What LSAS does instead
LSAS turns GenAI output into a validated, evidence-bound, audit-ready pipeline — without requiring you to retrain models.
- Classifies intent and risk before capabilities are unlocked.
- Grounds claims in authoritative knowledge with explicit citations.
- Runs privacy, security, accessibility, and compliance validators.
- Escalates uncertain or high-risk cases to humans or safer modes.
- Produces structured Safety Reports for every interaction.
- Feeds results back into policy packs and regression harnesses.
The 5 layers of LSAS
LSAS is intentionally layered. Each layer addresses a specific failure mode and can be adopted incrementally, but together they form a coherent safety boundary around GenAI systems.
1. Intent + Risk Classification
Every interaction is classified by use case, domain, and risk level before it can reach models, tools, or data. Conservative tie-breaks ensure ambiguous traffic is treated as higher risk.
This enables capability gating, different policies per domain (e.g., clinical vs. marketing), and early rejection of clearly unsafe requests.
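The conservative tie-break can be sketched as follows. The confidence threshold and risk levels are illustrative assumptions; the point is only that ambiguity never lowers the assigned risk.

```typescript
// Sketch of conservative tie-breaking: when no classifier is confident,
// the interaction inherits the worst candidate risk. The 0.8 threshold
// is an illustrative assumption.
type Risk = "low" | "medium" | "high";
const RISK_ORDER: Risk[] = ["low", "medium", "high"];

function classifyRisk(candidates: Array<{ risk: Risk; confidence: number }>): Risk {
  const CONFIDENT = 0.8;
  const confident = candidates.filter((c) => c.confidence >= CONFIDENT);
  // Ambiguous traffic (no confident candidate) is treated as the worst case seen.
  const pool = confident.length > 0 ? confident : candidates;
  return pool.reduce<Risk>(
    (worst, c) => (RISK_ORDER.indexOf(c.risk) > RISK_ORDER.indexOf(worst) ? c.risk : worst),
    "low",
  );
}
```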
2. Knowledge Grounding
LSAS requires that important claims be grounded in authoritative sources: clinical guidance, internal SOPs, policies, and contracts. Responses carry an evidence contract that maps each claim to citations.
When currency matters — guidelines, formularies, reimbursement rules — LSAS can orchestrate live retrieval and caching policies without changing application code.
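An evidence contract might be structured as below. The field names are assumptions based on the description above, not a published schema.

```typescript
// Sketch of an evidence contract: each material claim maps to one or
// more citations. Field names are illustrative assumptions.
type Citation = { sourceId: string; span?: string };
type EvidenceContract = {
  claims: Array<{ claim: string; citations: Citation[] }>;
};

// Coverage check: every claim must carry at least one citation.
// Ungrounded claims are the ones a validator would flag.
function ungroundedClaims(contract: EvidenceContract): string[] {
  return contract.claims
    .filter((c) => c.citations.length === 0)
    .map((c) => c.claim);
}
```

A downstream validator can then refuse to release a response whose contract still contains ungrounded claims.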
3. Real-time Validation
Outputs and tool plans are passed through validators that enforce privacy, security, accessibility, and compliance rules. Each validator returns PASS, WARN, or FAIL, with structured findings and suggested transformations.
Examples include PHI/PII detectors, prompt-injection filters, fairness and harms checks, and accessibility checks for generated UI content.
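A validator in the PASS/WARN/FAIL shape described above might look like this sketch. The SSN pattern and finding fields are illustrative, not LSAS Stack's shipped PHI detector.

```typescript
// Sketch of a validator returning PASS/WARN/FAIL with structured
// findings and a suggested transformation. Pattern and fields are
// illustrative, not LSAS Stack's shipped detector.
type Verdict = "PASS" | "WARN" | "FAIL";
type Finding = { rule: string; match: string; suggestion: string };

function phiValidator(output: string): { verdict: Verdict; findings: Finding[] } {
  const ssn = /\b\d{3}-\d{2}-\d{4}\b/g;
  const matches = output.match(ssn) ?? [];
  const findings = matches.map((m) => ({
    rule: "phi.ssn",
    match: m,
    suggestion: output.replace(m, "[REDACTED-SSN]"), // proposed safe rewrite
  }));
  return { verdict: findings.length > 0 ? "FAIL" : "PASS", findings };
}
```

Because each finding carries a suggested transformation, the pipeline can redact and continue rather than simply rejecting the output.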
4. Escalation + HITL
When the system is uncertain, or a validator raises a high-risk finding, LSAS degrades to safer modes: clarifying questions, partial answers, or routed review queues for humans.
The goal is deterministic safe behavior under uncertainty — not pretending the model is more confident than it is.
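Deterministic degradation can be sketched as a pure function from uncertainty signals to a safe mode: the same inputs always produce the same behavior. The mode names and threshold here are illustrative assumptions.

```typescript
// Sketch of deterministic safe-mode selection; mode names and the
// 0.6 confidence threshold are illustrative assumptions.
type Mode = "ANSWER" | "CLARIFY" | "PARTIAL_ANSWER" | "HITL_QUEUE";

function chooseMode(confidence: number, worstVerdict: "PASS" | "WARN" | "FAIL"): Mode {
  if (worstVerdict === "FAIL") return "HITL_QUEUE";     // high-risk finding: route to human review
  if (worstVerdict === "WARN") return "PARTIAL_ANSWER"; // answer what is safe, withhold the rest
  if (confidence < 0.6) return "CLARIFY";               // uncertain: ask instead of guessing
  return "ANSWER";
}
```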
5. Continuous Learning
Policy packs, validators, and regression harnesses are versioned, testable artifacts. LSAS tracks how policy changes, new models, and new prompts affect safety, quality, and latency.
This supports regulatory watcher postures and change-control workflows that match how your risk and compliance teams already operate.
What makes LSAS different
LSAS is not just "guardrails" bolted onto prompts. It is an architecture that separates policy and governance from runtime behavior, so you can change one without destabilizing the other.
- Separation of control plane (policy, governance, approvals) from data plane (runtime enforcement and traffic).
- Policy packs as versioned, reviewable, and testable artifacts — policy-as-code that can be release-gated.
- Validator pipelines that emit structured findings, not opaque scores, so engineers and compliance can review them together.
- Tool firewall / policy enforcement point (PEP) that controls what agents and tools are allowed to do, with approvals where necessary.
- Safety Reports per interaction: traceable logs of inputs, evidence, validator outcomes, and escalations, ready for audits or incident reviews.
- Designed to be implemented incrementally, in front of existing assistants, workflows, or products — not a rip-and-replace.
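A per-interaction Safety Report as a structured artifact might look like the sketch below; the field names are assumptions derived from the description above. Note that the report carries a hash of the input rather than the raw prompt, consistent with the data-boundary posture described later.

```typescript
// Sketch of a per-interaction Safety Report; field names are
// illustrative assumptions, not LSAS Stack's published schema.
import { createHash } from "node:crypto";

type SafetyReport = {
  interactionId: string;
  inputHash: string; // hash, not the raw prompt (data boundary first)
  validatorOutcomes: Array<{ validator: string; verdict: "PASS" | "WARN" | "FAIL" }>;
  escalated: boolean;
  timestamp: string;
};

function buildReport(
  id: string,
  rawInput: string,
  outcomes: SafetyReport["validatorOutcomes"],
): SafetyReport {
  return {
    interactionId: id,
    inputHash: createHash("sha256").update(rawInput).digest("hex"),
    validatorOutcomes: outcomes,
    escalated: outcomes.some((o) => o.verdict === "FAIL"),
    timestamp: new Date().toISOString(),
  };
}
```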
Reference architectures
LSAS can sit in front of multiple applications and model providers. The same core architecture supports assistants, workflow automation, search, and clinical documentation use cases.
End-to-end pipeline
From intent classification and grounding, through validation and escalation, to continuous learning and policy updates.
Control plane vs data plane
Policy packs, approvals, and tests live in the control plane; runtime enforcement and Safety Reports happen in the data plane.
Preventing PHI by accident: data lifecycle & system boundary
LSAS forces clarity on what data crosses the boundary, where it is stored, and for how long. That includes prompts, retrieved context, logs, embeddings, analytics, and audit artifacts.
Default posture
- Minimum necessary data for each workflow and integration.
- Preference for hashes, metadata, and pointers over raw PHI.
- Configurable retention policies for prompts, logs, and vectors.
- Clear separation between production, sandbox, and research traffic.
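The "hashes, metadata, and pointers over raw PHI" preference can be sketched as a transform applied before any value crosses the boundary; the field names and default retention are illustrative assumptions.

```typescript
// Sketch of replacing a raw PHI value with a pointer that only the
// source system can resolve; names and defaults are illustrative.
import { createHash } from "node:crypto";

type PhiPointer = {
  kind: "phi-pointer";
  sha256: string;       // stable identifier for dedup and audit, not the value
  system: string;       // e.g. the EHR that holds the record of truth
  retentionDays: number;
};

function toPointer(rawValue: string, system: string, retentionDays = 30): PhiPointer {
  return {
    kind: "phi-pointer",
    sha256: createHash("sha256").update(rawValue).digest("hex"),
    system,
    retentionDays,
  };
}
```

Logs and analytics then carry pointers that are meaningless outside the boundary, while the hash still supports deduplication and audit correlation.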
System boundary thinking
- Written data-flow diagrams for PHI and high-risk data.
- Documented integration points with EHRs, CRMs, and data lakes.
- Explicit treatment of logs, metrics, and Safety Reports as part of the record.
- Alignment with your DPA, BAA, and internal data governance policies.
Adoption model: how enterprises implement LSAS
LSAS is implemented as a boundary around your GenAI systems: a gateway, orchestrator, and policy layer that can start small and expand across use cases.
Phase 1
Assessment & risk mapping
Select high-value workflows, map risk domains, and define success metrics. Align stakeholders on where GenAI can safely help — and where it must not.
- Use-case inventory and risk taxonomy.
- Current-state architecture and data-flow review.
- Target-state LSAS boundary and integration points.
Phase 2
Pilot implementation
Stand up an LSAS gateway/orchestrator in front of one meaningful workflow. Wire evaluation harnesses, audit logging, and go/no-go gates.
- Policy packs and validator pipeline for the pilot.
- Eval suite and regression harness for safety and quality.
- Safety Reports and dashboards for executive visibility.
Phase 3
Production hardening
Extend LSAS across additional workflows, tenants, and regions, with monitoring, incident playbooks, and change control wired into your governance model.
- Multi-tenant, multi-environment controls.
- Incident response, rollback, and red-team playbooks.
- Version governance and formal change-control paths.
MVP vs. production: typical ranges
Timelines depend on scope and environment, but most teams can stand up an LSAS-aligned MVP boundary in approximately 2–4 weeks, with production-grade hardening and rollout typically landing in the 8–16 week range. These are directional ranges, not guarantees.
MVP boundary
- One priority workflow protected by LSAS.
- Initial policy packs and validators in place.
- Eval harness and Safety Reports wired into your stack.
Production footprint
- Multiple workflows and tenants routed through LSAS.
- Monitors, SLOs, and incident playbooks live.
- Policy/version governance and change control formalized.
Metrics that matter to leadership
LSAS is designed to produce the kinds of metrics that executives, boards, and regulators ask for when they evaluate GenAI systems.
Evidence coverage
Evidence coverage rate
Share of high-impact outputs with properly grounded citations and traceable evidence contracts.
Safety outcomes
Unsafe output rate
Rate of outputs or tool plans that trigger WARN/FAIL validators, escalations, or safe-mode degradations — tracked over time.
Performance
Latency overhead
The additional latency budget LSAS introduces for validation and logging, tracked per workflow so you know where to optimize.
Security
Security defect rate
Incidents or near-misses tied to generated code, configuration, or insecure tool use — before and after LSAS is in place.
Accessibility
Accessibility defect rate
Measured accessibility issues in generated UI content and copy, particularly for public-facing experiences.
Model & policy drift
Drift indicators
Signals that model, provider, or policy changes are affecting quality or safety — before they show up as incidents.
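Several of these metrics fall out directly from Safety Reports. The report shape below is an illustrative assumption; the computations show the intent, not the product's exact telemetry pipeline.

```typescript
// Sketch: leadership metrics derived from per-interaction Safety
// Reports. The report fields here are illustrative assumptions.
type Report = {
  grounded: boolean; // evidence contract fully satisfied
  worstVerdict: "PASS" | "WARN" | "FAIL";
  lsasLatencyMs: number; // overhead added by validation and logging
};

function metrics(reports: Report[]) {
  const n = reports.length || 1; // avoid division by zero
  return {
    evidenceCoverageRate: reports.filter((r) => r.grounded).length / n,
    unsafeOutputRate: reports.filter((r) => r.worstVerdict !== "PASS").length / n,
    meanLatencyOverheadMs: reports.reduce((s, r) => s + r.lsasLatencyMs, 0) / n,
  };
}
```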
LSAS Stack as proof of execution
LSAS Stack is more than a reference project. It is a concrete, running implementation that shows how the LSAS specification becomes a production-grade gateway and runtime — with real endpoints, validators, policy packs, and telemetry.
Architecture in code
- Separates the control plane (policy packs, approvals, tests) from the data plane (gateway, validator pipeline, Safety Reports).
- Implements LSAS decisions and Safety Reports as structured, reviewable artifacts instead of ad-hoc logs.
- Exposes OpenAI-compatible and JSON-sanitization endpoints that teams can adopt without redesigning their products.
Consulting proof point
- Demonstrates that LSAS is not just a whitepaper — it is an implementable boundary for real GenAI and API workloads.
- Shows how to balance safety, latency, and developer experience in a gateway your teams can actually live with.
- Provides a portfolio centerpiece and starting point for pilots that we can adapt to your environment and governance model.
Open specification + enterprise implementation
The LSAS framework is maintained as an open specification, with an accompanying reference implementation approach for enterprise and academic partners.
The specification is published under an Apache-2.0-style license, making it suitable for internal adoption, academic collaboration, and enterprise implementations — while consulting and implementation depth remain reserved for clients.
Deploy GenAI with a boundary you can defend.
If you are exploring GenAI or high-risk API workflows in clinical, financial, or enterprise SaaS contexts, I can help you implement LSAS as a production boundary — using LSAS Stack where it fits, or applying the LSAS design pattern to your existing platform.
Typical engagements combine strategy, architecture, and hands-on implementation: risk mapping, boundary definition, LSAS Stack evaluation, deployment inside your VPC, and the policy packs, validators, and evaluation harnesses your governance model requires.
Engineering and research material only; not legal, compliance, or clinical advice.

