In 1994, while finishing my last year at Penn State, I learned that a neighboring consultant was tasked with dismantling the massive Corning glass plant in State College and shipping it overseas. The prevailing mantra of the era—“Manufacturing doesn’t matter; the future is the information economy”—felt wrong to me then, and it feels just as misplaced today amid the AI‑centric headlines promising universal basic income and mass job loss.
The 1990s narrative
The dominant narrative in the 1990s was very clear:
- Manufacturing doesn’t matter
- The future is “the information economy”
- We will design, others will produce
- The real value is in ideas, not physical infrastructure
I had this discussion with many people, who told me: “You don’t get it. We don’t need to make things anymore.” It never sounded right to me, because no matter how digital the world becomes, it still runs on physical systems:
- Energy
- Food
- Infrastructure
- Hardware
- Networks
And someone has to build and control those systems. Over the next 30 years, we saw:
- Manufacturing concentration in Asia
- Deep industrial capability built outside the West
- Supply chain dependencies becoming strategic vulnerabilities
- Nations rediscovering the importance of sovereignty in production
The assumption that “we will keep the high-value work” turned out to be only partially true. Capability follows production, and once the capability is lost, it is very hard to rebuild.
The parallel to AI
Today, I hear a similar narrative about AI:
- “AI will replace most jobs”
- “Work will disappear”
- “We will live off universal income”
- “Just adopt AI or you will be left behind”
This feels very familiar. Not because AI isn’t important (it is), but because the narrative is too simple. The mistake in the 1990s was treating the economy as if it were purely informational. The mistake today is treating AI as if it replaces the system it operates within. AI does not replace:
- Physical infrastructure
- Energy systems
- Supply chains
- Security
- Governance
- Human judgment
It sits on top of these systems, and those very systems still determine power. From a technical and operational perspective, AI is best understood as:
- A compression layer for knowledge
- A decision support system
- A pattern recognition engine
It can:
- Accelerate analysis
- Reduce manual work
- Improve decision speed
But it does not eliminate the need for:
- Domain expertise
- Accountability
- Contextual understanding
- Operational ownership
Relevance to security
In cybersecurity, this is very clear. AI can surface anomalies and suggest actions, but a human still needs to decide: does this matter in this environment? That requires understanding systems, not just data. Instead of “AI replaces work,” a more realistic model is:
- AI shifts the nature of work
- It increases the leverage of skilled operators
- It amplifies both good and bad decisions
Just like the internet did. The organizations that win are not the ones that adopt AI blindly. They are the ones that:
- Integrate AI into real workflows
- Maintain control of critical systems
- Understand the underlying infrastructure
- Keep humans in the loop where judgment matters
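The “humans in the loop where judgment matters” principle can be sketched in code. The following is a minimal, hypothetical triage loop, not any real product’s API: all names, fields, and the threshold are illustrative. The model scores and ranks alerts; it is allowed to close only the obvious noise, while the escalate-or-close decision for anything non-trivial stays with an operator who knows the environment.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Alert:
    """A model-flagged anomaly (all fields hypothetical)."""
    source: str
    score: float   # model confidence that this is malicious, 0.0-1.0
    context: str

def triage(alerts: list[Alert],
           human_review: Callable[[Alert], bool],
           auto_close_below: float = 0.2) -> dict[str, list[Alert]]:
    """Route alerts: low-scoring ones are auto-closed, the rest go to a human.

    The model only ranks; the final judgment call on anything
    above the noise floor belongs to the operator.
    """
    closed: list[Alert] = []
    escalated: list[Alert] = []
    for alert in alerts:
        if alert.score < auto_close_below:
            closed.append(alert)        # machine handles the obvious noise
        elif human_review(alert):       # human judgment call
            escalated.append(alert)
        else:
            closed.append(alert)
    return {"closed": closed, "escalated": escalated}

# Usage: a reviewer who escalates anything touching a domain controller,
# regardless of the model's score ordering.
alerts = [
    Alert("workstation-17", 0.1, "odd login time"),
    Alert("dc-01", 0.6, "unusual replication traffic"),
    Alert("printer-03", 0.5, "new outbound connection"),
]
result = triage(alerts, human_review=lambda a: a.source.startswith("dc-"))
print(len(result["escalated"]))  # → 1 (only the domain-controller alert)
```

The design choice is the point: the `human_review` callback encodes environmental context the model does not have, which is exactly the judgment the section argues cannot be automated away.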
The real question is not “Will AI take jobs?” It is: “Who controls the systems that AI depends on?” Because that is where long-term value and power sit.
I was a student when I first saw a piece of critical infrastructure being moved out of the country. At the time, it was framed as progress, but in hindsight it was also a transfer of capability. AI is another inflection point. It will create enormous value, but if we focus only on the “information layer” again, we risk missing the same lesson: “Technology does not replace systems. It reshapes who controls them.” So, we’ve been here before. How is your organization ensuring AI stays under human oversight?