The Thinking

The bottleneck moved. Here's what's on the other side of it.

On process, code, AI, bidirectional alignment, and what it actually means to do good work in a world where execution is cheap, the tools are thinking, and judgment is the only thing that was never going to be automated.

01

On Process

There is a version of this argument that is lazy. "Process slows us down." People say it to justify being undisciplined. That's not what I mean.

Process is good. Process is necessary. Process makes the world work. The organizations that abandon it entirely don't move faster; they just fail in more chaotic ways.

The failure mode I've actually seen, over and over, across industries and team sizes and budget ranges, is treating process as an end rather than a means. Infrastructure that was built to serve a mission starts to take precedence over the mission itself. People defend the procedure instead of asking whether it's still doing its job.

"Process is good. Process is necessary. Process makes the world work. But when process gets in the way of progress, invariably it is the process that must yield. It doesn't mean we throw it out. It means we adapt it. Process should never hold up progress."

The discipline required to hold both of those things at once — real respect for process, and real willingness to challenge it — is rarer than it sounds. Most organizations can do one or the other. The ones that can do both tend to be the ones that survive long enough to matter.

02

On Code and What Comes Next

Code is no longer the bottleneck.

For most real-world problems, the ability to produce working software is commoditizing faster than most of the industry wants to admit. The technical execution barrier is collapsing. This disorients people who have been treating that barrier as their primary differentiator, and I understand why. It's a legitimate disruption.

But the Jevons paradox tells us what actually happens when the cost of production drops dramatically. You don't just do the same things cheaper. You unlock entire categories of problems that couldn't be addressed before. Demand doesn't flatten. It explodes. The bottleneck doesn't disappear. It moves.

The Pareto principle had a good run as the pop business frame of choice: 80% of results from 20% of effort. Jevons is the frame that matters now. When efficiency increases, consumption increases with it. The more accessible software development becomes, the more software the world will want. The more software the world wants, the more the scarce resource becomes the thing that can't be automated: knowing what to build, why to build it, and what good looks like when you're done.

That has always been the real constraint. AI just made it visible.

03

On AI — Specifically

I was doing applied AI work back before calling it that did you any professional favors. For a stretch, I stopped using the term entirely. It had become so synonymous with systems that don't work that attaching it to serious work was a liability.

That has changed. We are in a genuinely singular moment. For the first time, and possibly the last time at this scale, humanity has a real opportunity to remake how knowledge work happens. That is a sober read, not a hype cycle. The tools are real. The leverage is real. The window is real.

The mistake most organizations make is treating this as a procurement decision. They buy a tool, point it at their workflows, and wonder why the results are mediocre. The model is not the moat. The moat is the human who knows how to work with it: the person who can decompose a problem clearly, describe a desired outcome with enough precision that anyone working toward it knows when they've arrived, and hold the editorial line until the work is actually right.

That last part matters more than it sounds. The cost of producing something mediocre has never been lower. Taste and judgment are not soft skills. They are the scarcest resource in the room.

04

On What Is Worth Executing

AI, when used well, can solve for a lot of the execution. But you still have to know what is worth executing.

This is the thing that gets lost in almost every conversation about AI productivity. The conversation tends to focus on speed: how much faster can we produce code, copy, designs, analyses, reports? The answer is much faster. Sometimes instantaneously. That is real and it matters. And it is also the wrong question to be obsessing over.

Think about why you got into your business. If you built something of your own, you did it because you have a passion for solving a specific kind of problem. You did it for the thing, not the administrative surface area around the thing. You probably did not get into it because you love formatting decks, or managing accounts receivable, or making sure the codebase adheres to the latest framework conventions, or ensuring the image-to-text ratio on slide seven is optimal. Those things need to happen. They are not why you showed up.

AI can handle an enormous amount of that execution surface. And when it does, you get something back that most people have not had enough of since they started: time and attention to spend on the part that is actually yours. The problem-finding. The solution-shaping. The judgment about which direction is worth going in before anyone starts moving.

That last piece is the one AI cannot touch. A process is scalable. A person is not. What AI can do is amplify the execution of a process to a degree that no other technology in history has matched, with the possible exception of the printing press. But the printing press made books more ubiquitous. It did not solve what was worth writing. That has always been a human problem, and it still is.

Scale what is scalable. Protect the judgment that isn't.

05

On What Becomes Possible

There is a version of the Jevons argument that stays abstract: when execution gets cheaper, demand expands, the bottleneck moves, the medium changes. That is all true. It is also incomplete, because it does not capture what this actually feels like from the inside of it.

Here is what it feels like.

For most of my career, the gap between "this would be useful" and "this is worth building" was measured in time. Not all useful things are useful enough to justify six months of development. Most genuinely useful things are too specific, too personal, too niche. They would serve one person or one workflow or one very particular situation really well, and the math never worked. So they never got built.

A karaoke history and reporting tool I built is a perfect example. It is genuinely useful. I needed it. In the old world it was a six-month project and I would never have been able to justify it. In the new world it was a weekend. It exists now because the break-even calculation flipped.

The same logic applies to what I think of as the incremental time trap: things that eat up time slowly, consistently, across months or years, but never eat up enough time at once to justify a dedicated solution. The hyper-specific dashboard that would streamline something costing you ten minutes a day but never costing you an afternoon. In the old world the math usually said don't build it. Now it is four hours of work and thirty hours of analysis saved, and the math says build it every time.
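The break-even math above can be sketched in a few lines. This is an illustrative model, not a real costing tool; the specific numbers (ten minutes a day, a six-month horizon, four hours of build time) come from the example in the text, and the function names are my own.

```python
# A sketch of the build-vs-skip break-even calculation described above.
# All numbers are illustrative, taken from the example in the text.

def hours_saved(minutes_per_day: float, days: int) -> float:
    """Total hours a small tool saves over a given horizon."""
    return minutes_per_day * days / 60


def worth_building(build_hours: float, minutes_per_day: float, days: int) -> bool:
    """True when the time saved exceeds the time spent building."""
    return hours_saved(minutes_per_day, days) > build_hours


# Ten minutes a day over six months is roughly thirty hours of analysis.
print(hours_saved(minutes_per_day=10, days=180))  # → 30.0

# Old world: the hyper-specific dashboard is weeks of work. Don't build it.
print(worth_building(build_hours=400, minutes_per_day=10, days=180))  # → False

# New world: the same dashboard is four hours of work. Build it every time.
print(worth_building(build_hours=4, minutes_per_day=10, days=180))  # → True
```

The flip in the last two lines is the whole argument: nothing about the problem changed, only the cost of execution.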

This is the Jevons effect in lived experience. When the cost of internet bandwidth became close to trivial, people did not just send more text faster. The medium changed entirely. Video took over. Thirty frames per second: if a picture is worth a thousand words, suddenly you are communicating at thirty thousand words per second. A new category of thing became possible because the old constraint was removed.

When I was young, I built things because I was curious about them. A numerology app. A Yahtzee game. A stock trading platform at fourteen because my family needed money and I needed to solve that problem. Curiosity plus need plus whatever time I had. The projects were not justified by ROI. They were justified by the fact that I wanted to understand something and building was how I understood it.

That feeling went away for a long time, because professional work introduced the constraint that every project had to be worth the time it cost. And most things that are genuinely interesting are not worth the time they used to cost.

That constraint is lifting. The things I can build now, the pace at which I can build them, the specificity I can target — it is starting to feel again like it felt when I was twelve and had a passing interest in something and just built a thing about it over a weekend. Except now the things I build are more capable, they are deployed, they serve real people, and some of them could not have existed at all until very recently.

A real-time translation and transcription overlay running on top of a live stream feed is not a trivial technical achievement. It required real thought and real craft. But the amplification of execution made it possible to complete on a timeline that matched the moment it was needed for — something with enormous personal significance that would not have existed if the cost of execution had stayed where it was five years ago.

That is what Jevons actually feels like from the inside. The medium changes. The projects that get built change. And the people who understand this early enough to position themselves at the intersection of judgment and cheap execution are going to do things that were not possible before. Not because they are smarter. Because the constraint that used to stop them is gone.

06

On Bidirectional Alignment

Almost every serious conversation about AI alignment points in one direction. How do we shape AI to reflect human values? How do we constrain it, guide it, make it safe? These are real and important questions. I am not dismissing them.

But they share a hidden assumption: that the human on the other end of the relationship is fixed. Static. The stable reference point that AI must orient toward. That assumption is already wrong, and it is going to become more wrong faster than most people are prepared for.

The progression of AI capability is not a gradual slope. We have already cleared one gate. The transition from dumb tool to thinking partner has happened. Right now, today, anyone paying attention is working with something that can reason, that can push back, that can surface things you hadn't considered. That is not the future. That is Tuesday.

The next gate is the one that matters for how you live your life and run your organization: the transition from thinking partner to potentially superior intelligence. The timeline on that is genuinely uncertain. The direction is not.

Bidirectional alignment is my term for the posture that makes sense given this reality. Yes, AI must align to human values. That conversation needs to continue. And humans need to be actively aligning to AI: developing real literacy, building real working relationships with these tools, and updating their mental models before the next transition happens rather than after.

This does not mean subjugation. It does not mean deferring to the machine. It means treating this as the most consequential skill development opportunity of your career and acting accordingly. The people who have been building genuine AI fluency, who understand how to frame problems for these systems, where to trust the output and where to verify it, how to work iteratively rather than transactionally, those people are going to be in a fundamentally different position when the next gate clears.

The ones who waited for the tools to become perfect before engaging with them seriously will find themselves behind, not because AI replaced them, but because other humans using AI better did.

That is what bidirectional alignment means in practice. The relationship runs both ways. Start building your side of it now.

07

On Human Judgment as the Irreducible Moat

Every automation wave in history has threatened to eliminate the need for human skill. Every one of them has ultimately elevated the kind of human skill that matters. What remains after automation is always the thing that couldn't be automated: taste, judgment, and the ability to say "this isn't good enough yet" when everything around you is pushing to ship.

This is not a comfortable position to stake out when everyone around you is chasing speed and volume. Speed and volume are real pressures. But the work that survives, the work that actually changes things, tends to come from people who slowed down long enough to ask whether the thing they were producing was worth producing.

In AI-augmented work, that question is more important than it has ever been. The cost of producing something mediocre has never been lower. The abundance of output is not a problem that more output solves.

The scarcity that matters now is the willingness to hold the line on quality. That is what I've been building toward, across four decades and seven industries, whether or not I had a clean name for it at the time.

08

On Leading Across Domains

Something people notice early when working with me is that I don't stay in my lane in the way they expect. A new collaborator will assume that because I'm operating as the technologist, I won't have informed opinions about the sales motion, or the legal exposure in the contract, or the UX decision that's about to create three months of support tickets. Then they find out I do.

The initial read is usually that I'm overstepping. Over time it becomes clear that I'm doing the job.

"I am just arrogant enough to know that I can do anything, but wise enough to know that I can't do everything."

I trust the people around me to do their roles. When I steer, I mean it, and I do it because I see something in someone that just needs a little coaxing to reach its full potential. The steering is in service of that. The goal is always to make the people around me more capable than I found them, not more dependent on me.

That distinction matters. The perception of someone who holds a lot of knowledge across a lot of domains is often that they're hoarding it. I have fought that perception my whole career. The reality is the opposite. I document prolifically. I mentor intentionally. I will take significantly longer on a problem if it means someone else understands it well enough to solve it reliably the next ten times without me in the room. That trade is always worth it.

The metric I care about is not how indispensable I am at the end of an engagement. It is how capable the people around me are. Those are very different things, and optimizing for the wrong one is one of the more common failure modes I see in senior technical leaders.

The people who understand this become the best collaborators I've ever had. The pattern, once you see it, is consistent.

09

On Describing Desired Outcomes

Problem decomposition gets talked about. Describing a desired outcome is its own distinct skill that gets talked about much less, and the gap between the two is where a lot of otherwise good work falls apart.

Decomposing a problem means breaking a complex thing into its real constituent parts. That is necessary and valuable. But you can decompose a problem with perfect clarity and still hand off a vague definition of success. "Make it better." "Improve the user experience." "Make the AI smarter." These are not outcomes. They are directions. And directions without destinations produce a lot of motion and not enough arrival.

Describing a desired outcome means knowing, with enough precision to communicate it, what done actually looks like. What would be true when the work is complete that isn't true now? What would a person see, or feel, or be able to do that they couldn't before? What is the specific thing that has changed?

This is harder than it sounds in AI-augmented work, because the tools are extraordinarily capable of producing output that looks like an answer while not being the answer you actually needed. The precision required to describe the right outcome before you start is irreplaceable human work.

At New Era Systems, we treat outcome description as a first-class discipline. Before a prompt gets written, before a line of code gets generated, before a workflow gets designed, the question on the table is: what does good look like, specifically, and how will we know when we've gotten there? That question, answered with genuine rigor, is where the real leverage lives.

Taste and judgment tell you when you've arrived. Outcome description tells you where you're going. You need both.

10

On Problem Decomposition

Most failed projects don't fail in execution. They fail in framing. The wrong problem gets defined with great precision, solved with great competence, and delivered to someone who needed something else entirely.

The most valuable skill in any complex engagement is the ability to decompose a problem before touching the tools. To ask the questions that surface the real constraint, the real goal, the real scope. Not the problem as it was first described, but the problem underneath it.

This is the skill that AI amplifies most dramatically, and the skill that AI cannot replace. A model will answer the question you asked. It takes a person to figure out whether you asked the right question.
