The world just woke up to scaffolding.
Claude Code exploded. Then came Claude CodeWork. And suddenly everyone's asking the same question: why does wrapping an AI in the right structure make it feel like a different product entirely?
Daniel Miessler has been answering that question for years. His open-source PAI (Personal AI Infrastructure) project represents one of the most mature attempts to build what he calls a "digital assistant" - not a chatbot you query, but a persistent AI system that knows your goals, tracks your progress, and upgrades itself based on how well it's helping you.
In a recent conversation with Nathan Labenz on The Cognitive Revolution, Miessler laid out his vision for the future of work, why most knowledge workers are already replaceable, and how individuals can build AI systems that serve them rather than their employers.
AGI Is a Product Release, Not a Model Release
Here's Miessler's core insight: the difference between Claude and GPT-4 or Gemini isn't dramatic. The models are converging. Open-source alternatives are getting close.
What made Claude Code explode wasn't the model. It was the scaffolding.
"It's not Anthropic that's blowing up, it's not Opus 4.5 that's blowing up - it's Claude Code," Miessler explained. "Because it's scaffolding."
This reframes the entire AGI debate. Most definitions focus on benchmarks, reasoning capabilities, or some threshold of general intelligence. Miessler's definition is brutally practical: AGI is the point when a product can replace an average knowledge worker.
That means AGI isn't about passing tests. It's about showing up Monday morning, joining the all-hands meeting, taking work from a manager, pivoting when priorities change, and actually delivering.
"They onboard, they show up, they're in the cohort with human employees. They go through the onboarding, they watch all the videos, they do the training. And then Monday morning, they show up and they're on the all-hands with the team manager."
The image is deliberately mundane. No singularity. No superintelligence. Just an AI product that can do what a typical knowledge worker does, including the weird, general stuff: mandatory compliance training, shifting project priorities, political dynamics with other teams.
Miessler's timeline? 2027. Possibly sooner.
The Bar for Worker Replacement Is Lower Than You Think
The uncomfortable truth in Miessler's analysis is that most knowledge workers aren't operating at full capacity. They're not trying to compete with AI - they're trying to get through the day.
"The work that they're doing, most people, I would say most workers, is very sort of rote. You've got to get the email, you got to summarize the email, you got to write the report. You've got to look at a number of different reports and create another one."
This isn't a criticism of workers. It's a structural observation. Most people dread Monday. Corporate jobs are hostile environments filled with politics and churning priorities. Nobody's showing up determined to unlock their maximum cognitive potential.
"It's not like people are coming to work and saying, wow, let me just unlock my creativity and let me be maximally intelligent in a way that's going to compete in some way with AI."
The model most companies operate under - worker as fungible resource - was always a bad deal for humans. AI just makes that explicit. If a job consists of summarizing inputs and producing outputs in predictable formats, the scaffolding challenge becomes solvable.
Miessler frames the ideal number of employees for most companies as zero. That sounds provocative, but consider the ice cream truck owner making $500 a week. Nobody stands outside demanding he hire them. He does his own work. That's the natural state.
"The reason we have a labor economy is because the people who came up with the company or the idea or the product, they can't do the work themselves."
AI is about to change that equation.
The TELOS Framework: Making AI Personal
If corporations are going to replace workers with AI scaffolding, individuals need to build their own.
Miessler's PAI system starts with what he calls TELOS - a framework for capturing who you are and what you're trying to accomplish. It's essentially the "alien with a clipboard" interview:
- What problems do you see in the world?
- What do you want to do about them?
- What are your obstacles?
- What are your capabilities?
- What does a typical day look like for you?
This isn't productivity theater. It's the context that makes AI useful. When Miessler launches Claude Code with PAI, it reads his entire TELOS structure on startup. Every subsequent interaction happens with that context loaded.
The difference is dramatic. Instead of generic responses optimized for "world knowledge," the AI operates within the frame of Miessler's specific goals and constraints.
"The magic is when it's actually encompassing everything about you and incorporating that into the pursuit of the best answer."
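PAI's actual file format isn't described in the conversation, but the core idea - a structured self-description rendered into context before every interaction - can be sketched. Everything below (the field names, the `telos_context` helper) is an illustrative guess, not PAI's real schema:

```python
# Hedged sketch of a TELOS-style structure. Field names and contents
# are illustrative, not PAI's actual schema.
TELOS = {
    "problems": ["Most people never develop or share their own ideas"],
    "mission": ["Increase human activation"],
    "goals": ["Publish one refined idea per week"],
    "obstacles": ["Limited deep-work hours"],
    "capabilities": ["25 years in security", "writing", "automation"],
}

def telos_context(telos: dict) -> str:
    """Render the TELOS structure as a system-prompt preamble,
    so every AI interaction starts with personal context loaded."""
    lines = ["You are my personal AI. Operate within this context:"]
    for section, items in telos.items():
        lines.append(f"\n## {section.capitalize()}")
        lines.extend(f"- {item}" for item in items)
    return "\n".join(lines)

print(telos_context(TELOS))
```

The mechanism is mundane: the "magic" is just that every query runs against this preamble instead of against generic world knowledge.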
His workflows reflect this. An idea captured via the Limitless pendant while walking by the bay gets pulled from the API, red-teamed by a council of AI critics, debated, refined, and published to social media - all through the PAI scaffold.
The system even upgrades itself. When Anthropic releases a new Claude Code version, Miessler runs an "upgrade skill" that reads the changelog, checks engineering blogs, looks for YouTube explainers, and produces a prioritized list of system improvements.
"It reads my entire TELOS, what I'm trying to accomplish. It looks at my full PAI system and gives me recommendations on how to upgrade itself."
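How the upgrade skill actually works internally isn't specified, but the shape of the task - rank new release features against your stated goals - is easy to sketch. The keyword-overlap scoring below is a stand-in for what would really be a model call; the function and data are hypothetical:

```python
# Illustrative sketch of an "upgrade skill": given release notes and the
# user's goals, rank which new features are worth adopting. Keyword
# overlap stands in for asking the model to do the scoring.
def prioritize_upgrades(changelog_entries: list[str], goals: list[str]) -> list[str]:
    goal_words = {w.lower() for g in goals for w in g.split()}
    def score(entry: str) -> int:
        # Count how many words in the entry also appear in the goals.
        return sum(1 for w in entry.lower().split() if w in goal_words)
    return sorted(changelog_entries, key=score, reverse=True)

entries = [
    "Added hooks for pre/post tool execution",
    "New skill packaging format",
    "Improved Windows terminal rendering",
]
goals = ["Automate publishing workflow with hooks and skills"]
print(prioritize_upgrades(entries, goals))
```

The interesting design choice is the feedback loop: the system reads its own configuration plus the user's goals, then proposes changes to itself.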
The Human Activation Problem
Here's where Miessler's vision gets philosophical.
Imagine an alien with a clipboard visiting Earth to interview a billion random people. The alien asks: "Who are you? What do you believe is wrong with the world? How do you plan on changing it?"
Most people would answer with their job title. "I'm an accounting specialist at Company X. I check spreadsheets."
That's not a human answer. That's a job description. But we've been trained for generations to equate identity with employment.
"There are special people who have podcasts and have ideas and write them down and think that they are worth sharing with others. And then there are the regular people, which are the 99%."
Miessler's mission is what he calls "human activation" - helping people recognize that they have ideas worth developing and sharing. The creative output of most humans is effectively zero, not because they lack capability, but because nobody ever told them their ideas mattered.
"Imagine that planets have stats hovering over them - a creativity activation percentage. When you scroll over Earth, it says 0.0013."
The opportunity is massive. Someone watching Netflix who thinks, "I'd love a story about X" can now write that story and publish it. The barrier between consumer and creator just collapsed - if people can overcome the psychological barrier that tells them creating is for other people.
AI tutors, built on frameworks like PAI, could help with this. Not just teaching skills, but modeling the belief that every person has something valuable to contribute.
Security: Attacker AI vs. Defender AI
Miessler's background is cybersecurity, and his analysis of how AI changes the threat landscape is sobering.
The game now is attacker AI stack vs. defender AI stack. Period.
"I could say, based on all the history of social engineering attacks being successful and the fact that you have all these psychological profiles of this company, why don't you come up with 128 really cool campaigns that would work against these employees?"
That's not science fiction. It's a prompt that executes in minutes: 128 distinct social engineering campaigns using different psychological tactics, each with separate infrastructure, all running simultaneously.
The social engineering attack Nathan Labenz described - a fake email appearing to be from SendGrid supporting ICE, designed to trigger outrage and harvest credentials - is just the low-effort version. Personalized attacks using psychological profiles at scale are now trivial.
But defenders have one structural advantage: access.
"The defender has actual access to AWS, direct access to the network logs, direct access to all this stuff. Attackers are inferring this from external signals."
If your defender AI stack is as sophisticated as the attacker's, you win on information. You see the configuration change the moment it happens. You catch the anomaly in real-time instead of doing forensics after the breach.
This is why Miessler sees AI as a "container" for security rather than just another attack vector. Yes, AI creates new vulnerabilities. But AI is also the only way to defend against AI-powered attacks. Human-scale security teams cannot monitor the volume and velocity of changes in modern infrastructure.
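The defender's access advantage can be made concrete. A defender watching the change stream directly can flag an anomalous event the moment it lands, instead of reconstructing it in post-breach forensics. The event shapes and baseline below are invented for illustration:

```python
# Sketch of the defender's information advantage: direct access to the
# config-change stream means anomalies are flagged as they happen.
# Principal/action pairs seen in normal operation form the baseline.
BASELINE = {
    ("alice", "s3:GetObject"),
    ("alice", "s3:PutObject"),
    ("bob", "logs:Read"),
}

def flag_anomalies(events: list[dict]) -> list[dict]:
    """Return change events that fall outside the known baseline."""
    return [e for e in events if (e["principal"], e["action"]) not in BASELINE]

events = [
    {"principal": "alice", "action": "s3:GetObject"},
    {"principal": "mallory", "action": "iam:CreateAccessKey"},  # never seen before
]
print(flag_anomalies(events))
```

An attacker has to infer that baseline from outside; the defender simply owns it - which is the structural edge Miessler describes.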
Building Your Personal AI Infrastructure
Miessler's challenge to listeners was direct: do the TELOS assessment.
Dump your problems, goals, capabilities, and daily workflows into a document. Have the conversation with an AI about who you are and what you're trying to accomplish. Build that into a persistent scaffold.
For technical practitioners, this means exploring PAI, learning Claude Code's hook and skill systems, and building workflows around your actual work rather than generic productivity templates.
For everyone else, it means taking seriously the idea that you have something worth sharing. The tools to amplify your ideas are now available. The question is whether you believe you have ideas worth amplifying.
The corporations will figure out AI-powered knowledge work. They're highly motivated to reduce headcount. The question for individuals is whether you're building your own AI infrastructure, or waiting to be replaced by someone else's.
As Miessler put it: "AI is about to return to a more natural state of everyone does their own work."
The transition will be messy. UBI probably becomes necessary by 2028-2029, when the displacement becomes impossible to ignore. But the opportunity is real: use AI to overcome your own weaknesses, build systems that serve your goals, and stop treating yourself as a worker waiting for assignments.
The alien with the clipboard is coming. What will you tell them?
Full Transcript
Below is a timestamped summary of the conversation.
[0:00] Nathan Labenz introduces Daniel Miessler, describing him as a cybersecurity veteran, founder of Unsupervised Learning newsletter, and creator of PAI - the Personal AI Infrastructure Framework.
[0:45] The timing is perfect with the explosion of interest in Claude Code and this week's release of Claude CodeWork. The world is collectively waking up to the importance of scaffolding.
[1:30] Miessler's goal is increasing "human activation" - helping people recognize they can be more than cogs in a machine, and that their ideas are worth developing and sharing.
[2:15] Miessler expects corporations will automate routine work and reduce headcount, converging to companies consisting of a single human owner supported by AI agents.
[3:00] Discussion of Miessler's background in cybersecurity since 1999, joining Apple's machine learning team around 2016-2018, then going independent six months before ChatGPT launched.
[4:00] "My main focus now is basically trying to help humans and companies, mostly humans, to just be able to adapt to what's coming."
[5:30] The "dread Monday" metric - most people weren't happy with corporate jobs even before AI. The entire education system taught people their goal is to get a job from the 1% special people.
[7:00] Miessler sees AI as a container for security - the ability to encapsulate goals and align work with those goals continuously. It removes the opacity between different parts of organizations.
[9:00] Discussion of labor economics - what happens when ownership matters more than labor? The system built on wages buying things fundamentally breaks.
[11:00] The key insight: scaffolding is more important than models. Claude Code exploded not because of Opus 4.5, but because of the scaffolding system around it.
[13:30] Why hasn't this happened already? Average knowledge worker jobs are extremely general - checking emails, watching mandatory training, handling HR meetings, pivoting to new projects. No scaffolding system existed that could handle all of that.
[16:00] Miessler's AGI definition: "The ability to replace an average human knowledge worker" - estimated timeline of 2027.
[18:00] "The ideal number of employees for most companies is zero. That's always been the ideal number." The ice cream truck owner analogy - if you could do all the work yourself, you wouldn't hire anyone.
[21:00] The TELOS framework - capturing problems, goals, obstacles, capabilities through an AI interview. "Who are you? What do you think is wrong with the world? How do you plan on changing it?"
[24:00] How PAI works: reads the entire TELOS structure on startup, loads customized skills, incorporates everything into every interaction.
[27:00] The upgrade skill - takes any YouTube video, reads his entire TELOS, looks at his PAI system, and gives recommendations on how to upgrade itself based on new features.
[30:00] Security analysis: the game is now attacker AI stack vs. defender AI stack. Miessler can generate 128 personalized social engineering campaigns against a company's employees in minutes.
[34:00] "The defender has one structural advantage - access. Direct access to AWS, network logs, all configuration changes. Attackers have to infer from external signals."
[37:00] Discussion of P(doom) - Miessler sees the most likely outcome not as sudden ASI, but gradual elite/authoritarian control using AI to manage populations, with people diverted by immersive experiences.
[40:00] Human activation mission: imagine planets have creativity activation percentages. Earth would show something like 0.0013. The opportunity is helping people realize they have ideas worth sharing.
[43:00] Final message: do the TELOS assessment. Dump your problems, goals, and workflows into a persistent scaffold. Build AI infrastructure around your goals, not generic productivity templates.