Are Security Certifications Still Relevant in the Age of AI?

I recently passed my CISSP exam (Certified Information Systems Security Professional). Over the past 15 months, I’ve also completed CISM and CRISC from ISACA. In my job as CIO for a Norwegian seafood company, I also hold the CISO responsibility. The triple certification seems perfect from my standpoint, but it raises a question:

Are security certifications still necessary in the age of AI?

With AI tools now capable of analyzing logs, correlating signals, and even suggesting remediation steps, it’s tempting to think that human expertise may become less important. I believe the opposite is true.

Security Is Not a Language Problem

Many AI systems are built on language models. But security is not language; it is behavior. A log entry, a process execution, a network connection—these are not sentences. They are signals within a system. AI can surface patterns, but someone still needs to ask:

  • Does this deviation matter in this environment?
  • What is the business impact?
  • Is this risk acceptable?
  • What is the correct response?

These are not statistical problems. They are accountability decisions.

The Real Role of AI in Cybersecurity

AI is extremely powerful at:

  • Processing large volumes of data
  • Finding weak signals
  • Correlating events across systems
  • Summarizing complex situations

In other words: AI finds problems and suggests paths. But it does not own the outcome. That responsibility remains with the human.

Human-in-the-Loop Security Requires Fluency

As AI becomes embedded in security operations, the role of the practitioner changes. Less time is spent finding data. More time is spent understanding and deciding. This requires fluency across domains. When an AI system says:

“Suspicious PowerShell execution on a newly joined workstation, downloading from a flagged IP,” someone must understand:

  • Endpoint behavior
  • Identity context
  • Network flows
  • Threat patterns
  • Business impact

Without that understanding, the output is just another alert.
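To make this concrete, here is a minimal sketch of that triage step, with hypothetical names (`Alert`, `Context`, `triage`) and deliberately simplified decision logic. The point it illustrates is the one above: the AI surfaces the signal, but the verdict depends on endpoint, identity, network, and business context that the model does not own.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    """Hypothetical AI-generated alert, e.g. 'Suspicious PowerShell execution'."""
    host: str
    process: str
    remote_ip: str

@dataclass
class Context:
    """Context a human (or enrichment pipeline) must supply before deciding."""
    host_is_newly_joined: bool        # endpoint behavior
    user_is_privileged: bool          # identity context
    ip_on_threat_list: bool           # network flows / threat patterns
    host_runs_critical_service: bool  # business impact

def triage(alert: Alert, ctx: Context) -> str:
    """Toy decision logic: same alert, different verdicts depending on context."""
    if ctx.ip_on_threat_list and ctx.user_is_privileged:
        return "isolate"       # high confidence, high impact
    if ctx.ip_on_threat_list or ctx.host_is_newly_joined:
        return "investigate"   # suspicious, but needs a human look
    return "monitor"           # without context, it is just another alert
```

The interesting part is not the code but the `Context` fields: none of them are in the raw alert, and all of them change the outcome.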

Certifications Provide the Mental Model

This is where certifications still matter. Not as checkboxes. Not as credentials for a CV. But as structured mental models. CISSP, CISM, and CRISC do something important:

They force you to understand how the pieces fit together. They provide a common language across:

  • Technical teams
  • Risk management
  • Leadership

In an AI-driven environment, this becomes more important, not less, because the volume of information increases and decisions must be made faster. Certifications don’t make you great at security, but they do provide you with a shared mental framework for making consistent decisions under pressure.

The Risk of “AI-Only” Security

Relying solely on AI without deep domain understanding creates a dangerous illusion of safety. While AI excels at processing volume, it lacks the contextual nuance required for high-stakes security decisions. The risks are not merely theoretical; they manifest in four critical failure modes:

1. The Accountability Vacuum 

AI can generate a recommendation, but it cannot accept liability. Under frameworks like NIS2 and ISO 27001, the ultimate responsibility for security posture rests with the organization’s leadership, not the software vendor.

  • The Audit Trap: During an incident response or regulatory audit, you must demonstrate why a decision was made. If an AI blocks a critical service or fails to detect a breach, you cannot explain its probabilistic reasoning in a court of law or to a board of directors.
  • Regulatory Reality: NIS2 explicitly mandates “human oversight” of automated systems. Delegating all decision-making to an algorithm is a direct violation of these governance requirements, exposing the organization to fines and legal liability.

2. Contextual Blind Spots 

AI operates on patterns derived from historical data, but it struggles with the unique “business logic” of your environment.

  • False Positives: An AI might flag a massive data transfer as malicious exfiltration. A human CISO knows this is a scheduled, encrypted backup for a new product launch. Acting on the AI’s recommendation to block the transfer could halt revenue-generating operations.
  • Novel Threats: AI is excellent at recognizing known threats (signatures, known IOCs). It is far less effective at identifying “zero-day” attacks that deviate from historical norms but follow a logical, albeit new, attack chain. Without human intuition to spot the anomaly in the behavior rather than the signature, these threats slip through.
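The backup example above can be sketched as a thin layer of human-encoded business logic over an AI verdict. This is an illustrative sketch, not a real product API: the rule names, the destination host, and the backup window are all hypothetical, and a production system would pull this context from a CMDB or change calendar rather than hard-coded constants.

```python
from datetime import datetime, time

# Hypothetical business context the AI model cannot infer from traffic alone:
# scheduled, encrypted backups to a known destination during a backup window.
BACKUP_DESTINATIONS = {"backup.internal.example.com"}
BACKUP_WINDOW = (time(1, 0), time(4, 0))  # 01:00-04:00 local time

def override_verdict(ai_verdict: str, destination: str, when: datetime) -> str:
    """Layer business context over an AI verdict: a flagged bulk transfer
    that matches the scheduled backup is not exfiltration."""
    in_window = BACKUP_WINDOW[0] <= when.time() <= BACKUP_WINDOW[1]
    if (ai_verdict == "exfiltration"
            and destination in BACKUP_DESTINATIONS
            and in_window):
        return "benign_backup"
    return ai_verdict
```

The same transfer to an unknown destination, or outside the window, keeps the AI’s verdict, which is exactly the judgment call the pattern-matcher cannot make on its own.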

3. The “Black Box” of Decision Making 

Modern AI models, particularly Large Language Models (LLMs) used in security, often operate as “black boxes.” They provide an answer without a clear, linear logic trail.

  • Non-Auditable Decisions: In a forensic investigation, every action must be traceable. If an AI autonomously isolates a server, the “why” is often buried in millions of parameters. This opacity makes it impossible to reconstruct the incident timeline accurately, hindering root cause analysis and future prevention.

4. Complacency and Skill Atrophy 

Perhaps the most insidious risk is the erosion of human expertise. If analysts stop engaging with the raw data and only review AI summaries, their ability to detect subtle anomalies degrades. When the AI inevitably fails or is bypassed by a sophisticated adversary, the human team may lack the fluency to intervene effectively.

A Shift in What It Means to Be Skilled

The skillset is changing. For a modern CIO or CISO, this means spending less time tuning controls and more time defending decisions. Combining a technical certification like CISSP with a management certification like CISM is useful. But security professionals are no longer just analysts, engineers, and operators. They are becoming:

  • Interpreters of machine output
  • Decision-makers under uncertainty
  • Translators between technology and risk

This requires both breadth and judgment. You need a deep, end-to-end understanding of the security architecture to make the right decisions for your environment.

Conclusion

AI changes how we operate security, but not who’s responsible for it. AI is a force multiplier, not a replacement. It provides the “what” and the “where,” but the human expert must provide the “why” and the “so what.” Without this human-in-the-loop, organizations risk automating their way into compliance failures, operational disruptions, and unmitigated breaches. Security certifications are not a legacy of the past. They are becoming more relevant. Because AI increases speed—but also complexity. And in that environment, the differentiator is not who can generate answers. It is who can understand them, challenge them, and act on them correctly.