Can AI Become More Than Just a Tool - A New Paradigm?
Today, AI isn't a high-level language abstraction in any meaningful sense. But could it become one over time? That's the core question, and I think it could.
When you think about traditional compiler design or programming languages, if we could leverage LLMs as tools, we'd think completely differently about how to build compilers. While this shift hasn't fully materialized yet, if we could define certain things efficiently and rigorously enough in human language, and use that as direct input to compilers, many things could change.
Where AI Coding Stands in Today's Market
I'm confident that AI coding is currently the second-largest AI market. Pure consumer chatbots are first, and coding is second. Of course, the consumer market is an aggregate of many things, but if we look at it as a defined market, you could argue that coding is actually number one.
Is coding bigger than companion AI? I think so. Something like ChatGPT does serve as a useful companion, and a significant portion of ChatGPT usage probably falls in that companion role. Ultimately, whether people's motivation is to build something or to find love... the numbers are probably similar.
The Unique Characteristics of AI Coding
There's something about AI coding that's sometimes underestimated: the behavior it replaces already existed, in several forms, before AI arrived.
First, people were already going somewhere to seek help. As I mentioned earlier, it was mostly Stack Overflow. So people already had the muscle memory of going to the internet to find information when they hit problems they couldn't solve. AI is just a much better form of that.
There were jokes that Stack Overflow had actually written most of the code over the past few years, and much of that might now transfer to AI models. I'm not even sure if that was a joke.
Then there's GitHub Copilot. They did the really foundational work of transitioning people from Stack Overflow use cases to using AI models. And I think companies like Cursor are now doing that much better.
Developers Solve Their Own Problems First
Another aspect is that if developers have access to the latest AI technology, the first thing they solve is their own problems. Developers tackle the problems they understand best, the ones they face every day. So they build infrastructure for themselves to use.
Developers are always early adopters of new technology. Naturally, they love to tinker, they love setting up new tools, and they're lazy. So they'll adopt anything that increases productivity.
The coding market works well partly because it's a somewhat verifiable problem. Coding functions have very clear inputs and outputs, compared to user preferences or other issues.
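To make "verifiable" concrete, here's a toy sketch (the function and its checks are invented for illustration): a coding task has explicit inputs and outputs, so a machine can confirm the result without human judgment.

```python
# Toy illustration of verifiability: explicit input, explicit output,
# so an automated check can confirm correctness without human judgment.

def slugify(title: str) -> str:
    """Turn a post title into a URL slug."""
    return "-".join(title.lower().split())

# Either the output matches or it doesn't - no taste required.
assert slugify("Hello World") == "hello-world"
assert slugify("AI Coding  Today") == "ai-coding-today"
print("all checks passed")
```

Compare that with "is this UI pleasant?", where there's no assert statement to write.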
The Enormous Market Size
Another aspect is that this is a massive market. Think about it - there are 30 million developers worldwide. If we say the average value a developer creates is $100,000 annually, that's $3 trillion.
From data I've seen at some large financial institutions, they estimate that just vanilla Copilot deployment increases developer productivity by about 15%. My gut feeling is that we can push that significantly higher.
Let's assume we could double the productivity of every developer worldwide. That's another $3 trillion in value - roughly Apple's valuation, or something in that range. That's the enormous amount of value there to capture.
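The back-of-the-envelope math, written out with the figures quoted above:

```python
# Back-of-the-envelope math using the figures quoted above.
developers = 30_000_000       # developers worldwide
value_per_dev = 100_000       # average annual value created, in USD

baseline = developers * value_per_dev
print(f"baseline: ${baseline / 1e12:.1f}T per year")         # $3.0T

# A 15% lift (the vanilla Copilot estimate) vs. doubling productivity.
print(f"15% lift: ${0.15 * baseline / 1e12:.2f}T per year")  # $0.45T
print(f"2x lift:  ${1.00 * baseline / 1e12:.1f}T per year")  # $3.0T
```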
There was a good blog debate last year about overinvestment in AI. The number then was $200 billion annually in investment, with questions about whether we could recoup that. Here we have a way to recoup $3 trillion. That makes $200 billion look like peanuts.
How Developer Work is Changing
Once this AI evolution has played out, do you have a sense of how a software developer's work will differ from today?
When I write code today, I write specifications and discuss with the model how to implement something. For easy features, I actually just ask it to implement the feature and then review it.
How will the process change? Will we still have the same steps? Will we all basically become product managers writing specifications, with AI writing code and us occasionally intervening for debugging? Or will we all become QA engineers testing whether things match specifications?
Becoming QA engineers is ironic. We all got into this work to avoid becoming QA engineers.
Changes in Current AI Coding Workflows
The way I use these models has changed a lot over the past six months. Before, you'd take something like your favorite ChatGPT, give it a prompt, get an answer out, copy it into your editor, and see if it worked. That was the Stack Overflow replacement approach.
But the next step was starting to have things integrated into IDEs. GitHub Copilot and Cursor basically let you use autocomplete, which is a big advancement. It's no longer monolithic questions but happens within the flow.
Then it split further: line-level autocomplete, questions about a selected block of code, and separate chat interfaces for longer discussions. Then IDEs became able to use command-line tools, so suddenly you can say "Can you set up a new Python project with uv?" and the agent can execute the commands that do all that work.
Today's Development Process
When I want to write new software today (not production code, just when I want to try something), the first thing I do is start writing a specification. I start at a very high level: "Here's what I want to do." It's still quite abstract and not fully thought out.
Then I basically ask the model (probably Claude 3.5 or 3.7, or Gemini), "What I want to do is this. Does this make sense? If anything's unclear, ask me questions and write me a more detailed specification."
Then the model gets to work. It asks me lots of questions. Very simple things like "You'll need API keys for that," or more complex ones like "How do you want to manage state? Keep it in a database or in a file on disk?"
So it's basically a back-and-forth discussion that helps clarify my thinking, and the model is almost like a sparring partner for thinking through the process. In some ways it's really strange. But it works.
Over time, I end up with an increasingly detailed specification. When I have that, I ask the model to start implementing. And all of that comes with a substantial amount of standing context: my standard Python coding guidelines, how I like comments, whether I lean object-oriented or procedural, how I like to structure classes, and so on.
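As a minimal sketch of that back-and-forth loop - assuming the OpenAI Python client purely for illustration; the model name, system prompt, and file names are placeholders, not what the speaker actually uses:

```python
# Minimal sketch of the spec-refinement loop described above.
# Assumes the OpenAI Python client ("pip install openai"); the model
# name, system prompt, and file names are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "You are helping refine a software specification. "
    "Ask questions about anything unclear; otherwise return "
    "a more detailed version of the spec."
)

guidelines = open("python_guidelines.md").read()  # standing context: style, structure
spec = open("spec_draft.md").read()               # the rough, high-level idea

messages = [
    {"role": "system", "content": SYSTEM + "\n\n" + guidelines},
    {"role": "user", "content": spec},
]

while True:
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    text = reply.choices[0].message.content
    print(text)
    answer = input("your answers (or 'done'): ")
    if answer.strip() == "done":
        break
    messages += [
        {"role": "assistant", "content": text},
        {"role": "user", "content": answer},
    ]
```

The loop is the point: the model's questions sharpen the spec before any implementation starts.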
Limitations and Challenges of AI Coding
Has anything gone really wrong with AI-assisted coding? Nothing catastrophic, but a lot depends on agent behavior, which in turn depends on how the client implemented the agent.
For example, there's a really cool tool that generates very pretty pages and sends back React components or HTML pages that coding agents can reference. Once I asked the Cursor agent to "connect to this tool and implement based on what it says."
The Cursor agent's reaction was very interesting. It looked at the returned code and said, "Oh, this looks good. Let me give you a new version." So it didn't adopt what came back; it seemed to slip into an "I disagree with this direction" mode.
MCP and the Importance of Context
The essence of MCP is simply a way to provide the most relevant context to LLMs. Most MCP servers these days exist to feed context to whatever client is using them; that's what enables the experience I just described. You can use the Linear MCP server or the GitHub MCP server to bring in relevant context.
Tool calling is a technical detail that implements how to get context. But the core of MCP is really the context part. What's most relevant? What can I provide so that as a model, I can help you better?
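For a sense of how thin that surface is, here's a minimal sketch of an MCP server whose whole job is serving context. It assumes the official `mcp` Python SDK, and the issue data is invented for illustration:

```python
# Minimal sketch of an MCP server whose whole job is serving context.
# Assumes the official `mcp` Python SDK ("pip install mcp"); the issue
# data here is invented for illustration.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("issue-context")

FAKE_ISSUES = {
    "42": "Login page returns 500 when the session cookie has expired.",
}

@mcp.resource("issue://{issue_id}")
def issue_context(issue_id: str) -> str:
    """Expose an issue description so the client can pull it in as context."""
    return FAKE_ISSUES.get(issue_id, "unknown issue")

@mcp.tool()
def search_issues(keyword: str) -> str:
    """Tool calling is the mechanism; the relevant context is the payload."""
    hits = [f"#{i}: {t}" for i, t in FAKE_ISSUES.items() if keyword.lower() in t.lower()]
    return "\n".join(hits) or "no matches"

if __name__ == "__main__":
    mcp.run()  # defaults to stdio, so an agent can launch it as a subprocess
```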
Senior Developer Engagement
Do you think being able to use these kinds of tools in IDEs makes AI coding more productive or more suitable for senior developers? For a long time, the criticism was that vibe coders were creating great demos and junior developers were getting up to speed faster.
But the people affectionately called "neckbeards" - those who own clusters and prevent you from breaking things or handle overall architecture - were skeptical. Do you think this is one way to get the neckbeards involved?
I think it depends on what very senior engineers are optimizing for. There are very senior application engineers who are very good at fleshing out ideas; for them the skill set transfers fairly evenly - you're just recombining familiar building blocks.
But there are very senior engineers optimizing distributed systems. I don't think it's gotten there yet. Coding agents can't bring in all the state of distributed systems, and solving specific problems requires a lot of human intervention.
But given enough context window and enough tool-calling capability to bring appropriate knowledge to the model, I feel like we're heading in that direction.
Problem Complexity and Context
There seems to be a pattern where the more esoteric the problem, the newer the problem you're trying to solve, the more context you need to provide. If it's a simple version like "write me a blog" or "write me an online store," that's like a standard undergraduate software development class problem.
The number of samples on the internet is almost infinite. Models have seen this code billions of times and are incredibly good at regurgitating it.
If it's something with almost no code in the training data, everything generally falls apart and you have to specify exactly what you want. You have to provide context, provide API specifications, and it's much, much harder. And the model will give you very confidently wrong answers.
I've experienced this so many times: "Oh my god, this function exists. It's exactly what I need." But wait... no, it doesn't exist.
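One cheap defense - a sketch, not anyone's official tooling - is to check that a suggested call actually resolves before trusting it:

```python
# Cheap sanity check for hallucinated APIs: before trusting a suggested
# call like "os.path.canonicalize", verify that it actually resolves.
import importlib

def api_exists(dotted_name: str) -> bool:
    """Return True if 'package.module.attr' names a real attribute."""
    module_name, _, attr = dotted_name.rpartition(".")
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, attr)

print(api_exists("os.path.realpath"))      # True: this one is real
print(api_exists("os.path.canonicalize"))  # False: confidently imagined
```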
Possibilities and Limitations of Vibe Coding
We haven't talked much about vibe coding, but there's this idea that non-developers can now write code. That seems pretty cool, and like something that should happen. We're not a priesthood of the computer; nobody needs to stand between regular people and the processor. People should be able to control computers directly, not only through pre-made programs.
I think this is very interesting and very exciting. There's a question there. Does this scale at all, or is it like everyone can build a cabin but no one can build a skyscraper?
The demos everyone builds when they first try a website generator or something like Cursor probably won't be very useful to the rest of humanity - they're first-weekend projects.
But assume some of the people trying this start climbing the ladder and doing increasingly sophisticated things - in a completely different way from how the three of us learned programming the hard way. I have tremendous optimism about a new pool of people being able to write software in new ways and see the world in completely new ways.
Future Programming Education
It's very fair that if you work at one abstraction level, you should learn the abstraction one level below where you work. I keep thinking about what that one level lower abstraction is for vibe coders. Is it code? Is it IDEs? Is it something else?
This is a very good question, and to rephrase it slightly: what do future people who want to do software development need to learn? Is it something one level deeper, or is it actually something adjacent?
Some people say you don't need to learn CS anymore. That it's all about social-emotional learning and those kinds of things. I disagree with that. But that seems to come up every 20 years.
Honestly, I have no idea what the equivalent of computer science education will look like in five years. Looking at what happened historically when we did similar things in computation, when we went from manually adding numbers to Excel, entire job categories didn't disappear. Bookkeepers became accountants or something like that.
Entering data, writing numbers, and adding them manually became less important, and working with higher-level, more abstract concepts became more important. If you pattern-match that one-to-one, my guess is that explaining problem statements, algorithmic foundations, architecture, and data flow becomes more important, while actual coding - the cleverest way to write a for loop - becomes a very specialized, more niche skill.
Managing AI and Uncertainty
If you think of AI not as a tool for writing code but as a primitive element of applications, it seems like you're pushing the boundaries of the degree of uncertainty and non-deterministic behavior you can have in software.
Way back in the day, we wrote software for local machines, and you could have pretty good expectations about how it would execute. Then came this new thing called networks, whose behavior was much harder to predict, but you could still express it in the same terms. It felt like a problem you could wrap your arms around.
AI seems like an extension of that in some ways. When you add AI to software or use it, you really don't know what's going to happen.
When we went to networked systems, there were new failure modes like timeouts, and new remedies like retries. When you got to distributed databases, you had to worry about atomicity and rollbacks, and things got very complex very quickly from those foundations.
For some of these failure modes we still don't have good software architecture patterns today. There might still be genuinely unsolved problems.
Models are interesting. At temperature 0, a model is technically deterministic. It's not that the same input brings different outputs - that's something we do by choice. The bigger problem is that infinitesimally small changes in input can have arbitrarily large effects. So it's a chaotic system.
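What temperature does is easy to write down: as T approaches 0, sampling collapses onto the single most likely token, which is why temperature-0 decoding is deterministic in principle. A toy sketch with made-up logits:

```python
# Temperature scaling over a toy set of next-token logits.
# As T -> 0 the distribution collapses onto the argmax token,
# which is why temperature-0 decoding is in principle deterministic.
import math

def softmax_with_temperature(logits, T):
    scaled = [x / T for x in logits]
    m = max(scaled)                        # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.5, 0.3]
for T in (1.0, 0.5, 0.01):
    probs = softmax_with_temperature(logits, T)
    print(T, [round(p, 3) for p in probs])
# At T=0.01 the first token gets essentially all the probability mass.
```

Determinism isn't the issue, though; the chaotic sensitivity to tiny input changes is.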
Users can put anything into a text box. Before, you just had to escape apostrophes so input couldn't be executed as SQL; there were only a few ways a text box could break things. Now the system is chaotic enough that almost anything can happen when someone enters text.
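The old version of the problem really did have a mechanical fix: a parameterized query neutralizes the apostrophe trick entirely, and there's no equivalent one-liner for prompts. A sketch of the old world:

```python
# The old, solvable version of the problem: SQL injection has a
# mechanical fix. Parameterized queries treat hostile input as data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

hostile = "x' OR '1'='1"

# Vulnerable: string interpolation lets the input rewrite the query.
rows = conn.execute(f"SELECT * FROM users WHERE name = '{hostile}'").fetchall()
print(rows)  # [('alice',)] - the injected OR clause matched everything

# Safe: the driver treats the input strictly as data, never as SQL.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (hostile,)).fetchall()
print(rows)  # [] - no such user
```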
AI's Narrow Waist and Prompts
You've lived through much of internet history and pioneered a lot of networking research, and the internet came about around a narrow waist. Do you think similar dynamics are at play in AI?
The narrow waist is probably prompts. Generally, these big technology cycles are built on abstractions that let you encapsulate complexity behind a very narrow API. For databases, it's SQL. How a transactional database executes SQL queries under the hood - B-trees and the like - I learned in grad school, but it's no longer important. You just need to be able to specify queries.
I think the same thing led to the rise of modern ML. You no longer need an expensive PhD to train models for you; instead you can express what you want and manipulate models with prompts. So even a moderately skilled programmer can suddenly leverage very powerful LLMs with just prompts.
Looking more closely at prompts: is it just expressing what you want in natural language? There's no standard, so a prompt can be anything. For something called a narrow waist, it's oddly neither a formal language nor plain English.
We're all learning a new language to prompt these things, and it's actually slightly different for each model. There are dialects. There are translation problems.
Conclusion: The Changing Future of Development
AI coding is fundamentally changing how developers work, going beyond simple tool evolution. We've moved from an era of finding answers on Stack Overflow to an era of conversing with AI while writing code. This isn't just a technical change - it's a shift in mindset.
What's most fascinating is that AI coding is creating not just better existing developers, but an entirely new class of developers. The phenomenon of vibe coders creating software without traditional programming education raises fundamental questions about the future of computer science education.
But we're also facing new challenges. The non-deterministic nature of AI systems, hallucination problems, and debugging difficulties in complex systems are among them. These problems might not be solved with purely technical solutions. We might need to change our expectations and development methodologies themselves.
Ultimately, the future of AI coding depends on how the collaboration between technology and humans evolves. Whether prompts become the new programming language or more sophisticated formal languages emerge remains to be seen. But what's clear is that how we create and think about software is fundamentally changing.
In a $3 trillion market, even a 15% productivity improvement can create enormous value. But the true value might lie in what can't be measured in numbers. More people being able to implement their ideas as software, and developers being freed from repetitive tasks to focus on more creative problem-solving.
No one knows what computer science education will look like in five years. But one thing is certain: change has already begun. We're standing in the middle of that change, and we're creating its direction together.
The transformation isn't just about tools getting better - it's about democratizing the power to create software. When anyone can build a cabin but building skyscrapers still requires expertise, we're not just changing who can code, but expanding the very definition of what coding means.
As we navigate this transition, the most successful developers will likely be those who can adapt to working alongside AI, understanding both its capabilities and limitations. The future belongs not to those who can write the most elegant for loops, but to those who can architect solutions, communicate complex ideas clearly, and bridge the gap between human intention and machine execution.
The revolution is already here. The question isn't whether AI will change how we develop software, but how quickly we can adapt to make the most of this unprecedented opportunity.