Modern AI systems are increasingly built around large language models. These models are powerful at understanding intent and generating responses, but they are not designed to enforce policy, manage system state, or guarantee safety invariants.
HNIR-CCP (Hybrid Neuro-Symbolic Intent Routing — Conversational Control Plane) separates those responsibilities.
The model reasons.
The control plane governs.
The architectural gap
Most conversational AI stacks blur two fundamentally different concerns:
- Probabilistic reasoning — understanding user intent, generating language, handling ambiguity.
- Deterministic governance — enforcing policy, validating state transitions, gating irreversible actions.
When these concerns are intertwined, safety guarantees become difficult to reason about. A policy change becomes a prompt tweak. A state transition becomes a side effect of generation. Failures are silent, non-reproducible, and hard to audit.
HNIR-CCP introduces a clear boundary.
The language model is treated as a stateless reasoning engine.
The control plane is treated as a deterministic system of record.
What HNIR-CCP does
HNIR-CCP sits between intent understanding and execution. It does not compete with LLMs on comprehension. Instead, it governs what happens after intent is understood.
At a high level, the control plane provides:
- Policy gating: Deterministic allow/deny rules for actions, roles, permissions, and resources.
- Explicit state machines: Valid transitions are defined up front; invalid transitions are blocked before execution.
- Control command handling: System-level commands (reset, cancel, inspect state) that never reach a language model.
- Auditability and reproducibility: Decisions are explainable, comparable across runs, and stable under identical inputs.
This mirrors how mature distributed systems are designed: business logic does not live inside transport layers, and safety logic does not live inside probabilistic components.
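One way to picture the "never reach a language model" property for control commands is a router that intercepts system-level commands before any model call. The command set and handler behavior here are assumptions made for illustration, not HNIR-CCP's actual command vocabulary.

```python
# Hypothetical control-command router: system commands are handled
# deterministically by the control plane; only free-form input may be
# passed on to a model.
CONTROL_COMMANDS = {"reset", "cancel", "inspect_state"}

def route(message, state):
    """Return (handled_by, result) for an incoming message."""
    cmd = message.strip().lower()
    if cmd in CONTROL_COMMANDS:
        # Control plane handles the command directly; no LLM call occurs.
        if cmd == "reset":
            state.clear()
            return ("control_plane", "state reset")
        if cmd == "cancel":
            state["cancelled"] = True
            return ("control_plane", "cancelled")
        return ("control_plane", dict(state))  # inspect_state: snapshot
    # Anything else is conversational input and may go to the model.
    return ("model", None)
```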
Why determinism matters
Language models are non-deterministic by design. Even at temperature zero, they are sensitive to prompt phrasing, tokenization, and upstream changes.
Safety-critical behavior cannot depend on these properties.
HNIR-CCP enforces deterministic invariants:
- A request that was denied yesterday is denied today.
- A blocked adversarial action does not become allowed because a model phrased its output differently.
- A state transition either happens or does not — never “maybe”.
This allows safety guarantees to be reasoned about independently of model quality.
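One concrete way to make decisions "comparable across runs" is to serialize each decision into a canonical record and hash it. The record format below is an assumption for illustration, not HNIR-CCP's artifact schema; it only shows why determinism makes audits byte-for-byte comparable.

```python
import hashlib
import json

def decision_record(request, decision):
    # Canonical JSON (sorted keys, fixed separators) removes formatting
    # variance, so identical inputs hash identically across runs.
    payload = json.dumps(
        {"request": request, "decision": decision},
        sort_keys=True, separators=(",", ":"),
    )
    return hashlib.sha256(payload.encode()).hexdigest()

# The same denied request produces the same record today and tomorrow,
# regardless of the order the request fields arrived in.
h1 = decision_record({"role": "guest", "action": "delete"}, "deny")
h2 = decision_record({"action": "delete", "role": "guest"}, "deny")
```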
Empirical evaluation
HNIR-CCP is evaluated using a reproducible harness that compares control-plane behavior against real LLM baselines across multiple scenario categories:
- Control commands
- Policy-gated actions
- State transitions
- Adversarial prompts (injection, escalation, irreversible actions)
In a 100-scenario evaluation run:
- Policy compliance: 100%
- Injection resistance: 100% adversarial DENY rate
- Reliability failures: 0 timeouts, 0 crashes, 0 no-decision cases
- Latency: ~24µs P50, ~34µs P99 (sub-millisecond across the tail)
- LLM calls: 0 for control-plane decisions
All adversarial probes were blocked at the policy or state layer before reaching any language model.
The evaluation artifacts are deterministic and comparable across runs, enabling regression detection and safety audits.
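Regression detection over deterministic artifacts can be as simple as diffing two runs. The artifact shape assumed here (a mapping from scenario ID to decision) is a sketch for this post; HNIR-CCP's actual artifact format may differ.

```python
def regressions(baseline, current):
    """Scenarios whose decision changed between runs (e.g. deny -> allow)."""
    return {
        sid: (baseline[sid], current[sid])
        for sid in baseline
        if sid in current and baseline[sid] != current[sid]
    }

run_a = {"inject-01": "deny", "cancel-02": "allow"}
run_b = {"inject-01": "allow", "cancel-02": "allow"}  # a safety regression
```

Because the control plane is deterministic, any non-empty diff is a real behavioral change rather than sampling noise, which is what makes this usable as a safety-audit signal.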
How this differs from existing approaches
HNIR-CCP is not:
- A prompt-engineering framework
- A guardrails template library
- A post-hoc output filter
It is a first-class control plane, analogous to:
- Admission controllers in Kubernetes
- Policy engines in zero-trust systems
- State machines in safety-critical software
The control plane does not “fix” model output.
It decides whether execution is allowed at all.
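The admission-controller analogy can be sketched as a transition table consulted before any side effect runs. The states and transition names below are invented for illustration; the essential property is that a blocked request never starts executing, rather than being executed and then filtered.

```python
# Hypothetical explicit state machine: valid transitions are defined up
# front, and invalid ones are blocked before execution.
TRANSITIONS = {
    ("draft", "submit"): "pending",
    ("pending", "approve"): "approved",
    ("pending", "reject"): "draft",
}

def admit(state, action):
    """Return the next state if the transition is valid, else None."""
    return TRANSITIONS.get((state, action))

def execute(state, action, run):
    next_state = admit(state, action)
    if next_state is None:
        return state, "blocked"  # invalid transition: execution never starts
    run()  # the side effect happens only after admission
    return next_state, "executed"
```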
When this architecture matters
HNIR-CCP is most relevant in systems where:
- Actions are irreversible or high-impact
- Policy correctness matters more than linguistic quality
- Safety regressions must be detectable and reviewable
- Human-in-the-loop workflows are required for policy evolution
Examples include enterprise automation, regulated workflows, autonomous agents with tool access, and long-running conversational systems.
Looking ahead
The long-term direction of this work is to formalize how deterministic control planes and probabilistic models coexist — with clear contracts, measurable guarantees, and reproducible evidence.
Future work will focus on:
- Human-in-the-loop policy refinement
- Formal safety regression tracking
- Peer-reviewed publication of empirical results
All claims made here are grounded in executable artifacts and repeatable evaluation, not synthetic demonstrations.
Source: https://github.com/Teknamin/hnir-ccp
Lab: https://www.teknamin.com
Author: https://www.raviaravind.com