Making AI Trustworthy in Cybersecurity: Why Hybrid Determinism Wins

I came across an interesting Norwegian startup, Reliable AI, this week, working on what they call “hallucination-free AI.” Their approach is simple: remove the final generative step of the model and work directly with embeddings.

No generation → no hallucinations.

That works well in domains like legal search where the goal is retrieval, not interpretation. But it highlights something important: 

We are starting to see three distinct directions in AI design today:

Direction 1: Remove generation

If the model can’t generate, it can’t hallucinate. This leads to deterministic, explainable systems, essentially advanced semantic retrieval.
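To make the idea concrete, here is a minimal sketch of generation-free semantic retrieval. The documents, vectors, and `retrieve` function are hypothetical illustrations, not Reliable AI’s actual system; in practice the vectors would come from an embedding model.

```python
import math

def cosine(a, b):
    # Cosine similarity between two non-zero vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy embedding index: document -> vector. Made-up numbers standing
# in for real embedding-model output.
index = {
    "ruling_2021.pdf": [0.9, 0.1, 0.0],
    "contract_A.pdf":  [0.1, 0.8, 0.2],
    "memo_B.txt":      [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=2):
    # Pure ranking: the output is always an existing document,
    # never generated text, so there is nothing to hallucinate.
    ranked = sorted(index, key=lambda d: cosine(query_vec, index[d]),
                    reverse=True)
    return ranked[:k]
```

The system can only ever return documents that exist; it is deterministic and fully explainable, but it cannot interpret or synthesize.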

Direction 2: Constrain generation

Instead of removing the LLM, you control what it sees and what it is allowed to say: retrieved context only, plus guardrails on the output.
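A minimal sketch of this pattern, assuming a hypothetical `llm` callable: the model sees only the supplied context, and a crude post-hoc guardrail rejects any answer that is not literally grounded in that context. Real guardrails are far richer; the principle is the same.

```python
def constrained_answer(question, context, llm):
    # The model sees ONLY the retrieved context, not open-ended input.
    prompt = (
        "Answer ONLY from the context below. If the answer is not "
        "present, reply 'unknown'.\n\nContext:\n" + context +
        "\n\nQuestion: " + question
    )
    answer = llm(prompt).strip()
    # Post-hoc guardrail (deliberately crude): discard any answer
    # that does not literally appear in the context.
    if answer != "unknown" and answer not in context:
        return "unknown"
    return answer
```

Even a well-behaved model can be overridden here: a fabricated answer is filtered out before it reaches the user.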

But in cybersecurity, you’re not searching documents. You’re trying to understand what is happening across systems, in time, under uncertainty.

  • A firewall log is not “text.” It is a state transition in a system, observed at a point in time.
  • It is one event in a sequence, and its meaning depends on what came before it.

There is a third design pattern that is highly applicable to cybersecurity:

Direction 3: Hybrid determinism

This is the path we’re taking with PyLog, an AI for understanding firewall and network behavior:

Logs → normalization → behavioral baselines → correlations → signals → then LLM
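The deterministic stages of that pipeline can be sketched as follows. The single-line CSV log format, the function names, and the correlation rule are simplified assumptions for illustration, not PyLog’s actual implementation:

```python
from collections import defaultdict

def normalize(raw_line):
    # Stage 1: parse a raw log line into a typed event.
    # Toy format: "host,process,dest"; real normalization
    # handles many vendor-specific schemas.
    host, process, dest = raw_line.split(",")
    return {"host": host, "process": process, "dest": dest}

# Per-host behavioral baseline: processes and destinations seen so far.
baseline = defaultdict(lambda: {"procs": set(), "dests": set()})

def update_baseline(event):
    # Stage 2: record the event and report what is new for this host.
    seen = baseline[event["host"]]
    first_proc = event["process"] not in seen["procs"]
    first_dest = event["dest"] not in seen["dests"]
    seen["procs"].add(event["process"])
    seen["dests"].add(event["dest"])
    return first_proc, first_dest

def correlate(event, first_proc, first_dest):
    # Stages 3-4: deterministic correlation into a signal.
    # Only signals -- never raw logs -- ever reach the LLM.
    if first_proc and first_dest:
        return {"severity": "high", "evidence": event}
    return None

def ingest(raw_line):
    event = normalize(raw_line)
    first_proc, first_dest = update_baseline(event)
    return correlate(event, first_proc, first_dest)
```

Each stage is ordinary, testable code; the probabilistic model enters only after this pipeline has done its work.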

Hybrid determinism doesn’t remove generation, but it makes generation explain, not guess. Example: “PowerShell execution on a host that has never used it before, followed by outbound traffic to a first-seen destination.”

By the time the LLM is involved, the ambiguity is already reduced. It’s not guessing. It’s explaining. Every conclusion can be traced back to:

  • a baseline deviation
  • a correlated event
  • a known pattern
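One way to make that traceability mechanical is to have every signal carry its own evidence, so the LLM can only narrate pre-attributed facts. The `Signal` shape and `render_prompt` helper below are a hypothetical sketch, not PyLog’s actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class Signal:
    # Every signal carries the evidence that produced it, so any
    # LLM-generated explanation can be traced back to its sources.
    summary: str
    baseline_deviation: str        # e.g. "first powershell.exe on host-7"
    correlated_events: list = field(default_factory=list)
    matched_pattern: str = ""      # e.g. a known attack-pattern label

def render_prompt(signal):
    # The LLM receives only structured, pre-attributed facts.
    # Its job is to narrate them, not to infer new ones.
    lines = [
        f"Finding: {signal.summary}",
        f"Baseline deviation: {signal.baseline_deviation}",
        "Correlated events: " + "; ".join(signal.correlated_events),
        f"Known pattern: {signal.matched_pattern}",
        "Explain this finding using ONLY the facts above.",
    ]
    return "\n".join(lines)
```

If an analyst asks “why was this flagged?”, the answer is in the signal itself, not in the model’s head.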

We aren’t purely “removing generation” (like the Norwegian startup), nor purely “constraining generation” (which still leaves a probabilistic black box at the center). We are building a deterministic front-end that feeds a probabilistic back-end. For enterprise security, that is the winning pattern.

It changes the role of AI completely. The question is no longer:

“Can we trust the model?”

But:

“Did we design the system correctly before the model ever speaks?”

Multiple models and approaches will undoubtedly coexist, but one thing is becoming clear: the goal isn’t to eliminate uncertainty (that’s impossible in security). It’s to make uncertainty visible earlier in the pipeline, before the LLM can hide it behind fluent language. That is why system design matters more than model fluency.