I guess we're about to find out. By the end of 2025, most code will be generated, not written. In my personal experience, models like Gemini 2.5 and Claude 3.7 exceed the abilities of most senior software architects and developers: they produce better solution architecture, better frameworks, and more testing. Most human-written software isn't that great, TBH.
Lots of money and focus are going into "Vertical Agents", which in combination with Anthropic's Model Context Protocol (MCP), could start to look like an Internet of Agents.
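To make the "Internet of Agents" idea concrete: MCP messages are JSON-RPC 2.0, and one agent invoking a tool exposed by another service follows the `tools/call` shape from the MCP spec. A minimal sketch, where the tool name and its arguments are hypothetical examples of what a vertical agent might expose:

```python
# One agent calling another service's tool over MCP (JSON-RPC 2.0).
# "lookup_invoice" and its arguments are hypothetical; the envelope and
# the tools/call method follow the Model Context Protocol spec.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "lookup_invoice",                  # hypothetical tool name
        "arguments": {"invoice_id": "INV-1042"},   # hypothetical tool input
    },
}

# Serialize for transport (MCP also supports stdio and HTTP-based transports).
print(json.dumps(request, indent=2))
```

The point is that the wire format is deliberately boring: any agent that speaks JSON-RPC and can read a tool schema can participate, which is what would let vertical agents interconnect.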
"If anything, AI will likely cement the position of these players, as it will create further efficiencies that widen their financial moats, allowing them to operate at razor-thin margins at gargantuan volumes, serving mostly AI-based agents instead of humans."
Yes. It’s easy to assume that because AI can code, software development is dead. You could make a similar argument about writing. AI can certainly help in both cases, but it can’t handle 100% of the work. And I don’t believe that will change anytime soon. OpenAI recently released a new model that is 30 times more expensive, took a year to train, and is worse than their existing models, except perhaps at writing—TBD on that.
Consider Salesforce as an example. On the surface, it appears to be just a database of customers and prospects. Simple. But it is much more than that. If people only needed a list of names, they would use Google Sheets. Instead, Salesforce has built an ecosystem that seamlessly integrates external tools, such as sales enablement and marketing automation systems, allowing data to flow back and forth effortlessly.
I don’t think anyone will be able to prompt their way into creating their own version of Salesforce anytime soon. Have you ever tried using Marketo’s API? It’s a nightmare. AI enables developers (and writers) to do more with less. It is an amazing tool and copilot, and the big players are best positioned to capitalize on it. Their margins will improve, and they will be better equipped to respond to new challengers and competitors.
Coding is the canary in the coal mine, the capability AI labs focus on most. Claude Code is yet another step up in agentic coding, i.e. the ability to take broad tasks, break them down, and just do them. It doesn't replace developers 100%, but it does automate most of their work.
There's a reason Salesforce is going all-in with Agentforce. In fact, I see the "SaaS is dead" message more and more on various platforms now. It's becoming accepted and internalized. Salesforce is very expensive. So are LLMs. So there is a cost pressure to justify your existence. Adopting AI agents will simply become easier than the alternatives, and ROI will come faster than with SaaS.
What about the non-deterministic answers provided by LLMs? Many tasks and automations will require deterministic data. After all, you expect the same input to generate the same output from a receipt scanner, which is not at all how LLMs behave. The problem, in my view, is that for LLMs to work they must be non-deterministic; it's pure probability and statistics. Even the reasoning models can make big mistakes when following specific rules. Coincidentally, you gave examples about invoice processing, and I read an article about how many mistakes can end up in the output data without constant double-checking (which may cost a lot if you have a lot of data to process daily). See: https://www.runpulse.com/blog/why-llms-suck-at-ocr What is your opinion?
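The "constant double-checking" doesn't have to be manual, though. For extracted invoices, a cheap deterministic sanity check is to verify that line items sum to the stated total before accepting the model's output. A minimal sketch, with hypothetical field names for the extracted data:

```python
# Deterministic double-check on LLM-extracted invoice data (field names
# are hypothetical): accept the extraction only if the line-item amounts
# add up to the stated total. Decimal avoids float rounding surprises.
from decimal import Decimal

def check_invoice(extracted: dict) -> bool:
    """Return True only if line items sum exactly to the invoice total."""
    items_sum = sum(Decimal(item["amount"]) for item in extracted["line_items"])
    return items_sum == Decimal(extracted["total"])

good = {"total": "30.00", "line_items": [{"amount": "10.00"}, {"amount": "20.00"}]}
bad = {"total": "30.00", "line_items": [{"amount": "10.00"}, {"amount": "19.00"}]}
```

Extractions that fail the check can be routed to a human or retried, so only the ambiguous fraction costs review time.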
LLMs alone will not replace all existing software because, as you say, non-determinism (hallucination) is a core feature. The paradigm I see coming is deterministic workflows that use LLMs selectively: perhaps to decide what to do, or to process individual pieces of content moving in and out of enterprise systems. This can happen in collaboration with deterministic code and even simpler, more predictable ML models.
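That pattern can be sketched in a few lines: the control flow and the guardrails are ordinary deterministic code, and only the fuzzy classification step is delegated to a model. `call_llm` here is a stand-in for any real LLM client, and the labels and routes are hypothetical:

```python
# "Deterministic workflow, selective LLM" pattern: code owns the control
# flow; the model only handles the fuzzy classification step, and its
# answer is validated against a closed set before anything acts on it.

def call_llm(prompt: str) -> str:
    # Placeholder: a real system would call an LLM API here.
    return "invoice"

ROUTES = {"invoice": "accounts_payable", "receipt": "expenses"}

def route_document(text: str) -> str:
    label = call_llm(
        f"Classify this document as 'invoice' or 'receipt':\n{text}"
    ).strip().lower()
    if label not in ROUTES:  # deterministic guardrail: reject anything off-menu
        raise ValueError(f"unexpected label: {label!r}")
    return ROUTES[label]
```

Because the model's output is forced through a closed vocabulary, the downstream workflow stays as predictable as any conventional automation.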
Having worked on some AI contracts it's going to be an interesting question where the liabilities get stacked. For the most part, it's not going to be the AI labs unless regulators make it so.
Love this @Aki, I think it is spot on. In the last year or so, whenever I was looking to sign up for a SaaS tool, I tried to build what I needed directly with an AI tool. I didn't end up signing up with any SaaS, saved a ton of money, and learned a few things along the way. We're at a turning point.
Was talking about this trend with a lot of people last week, and I realized the first-order conclusion is actually incorrect. In a real sense, we aren't replacing SaaS tools 1:1 with AI. SaaS tools require things like users and UIs with lots of forms. When we replace SaaS with AI, the resulting process can be radically simplified. It's not just a replacement, it can be a 10x improvement.
Nice article. How do you think email-as-a-service (SaaS) will be impacted? You used to be able to spin up servers and send email way back when (and you can still do that yourself, in theory), but email services don't really sell the product itself; they sell the reliability and deliverability of emails. You pay for deliverability. I think I've answered my own question, ha.
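The DIY path alluded to here really is just a few lines of standard-library code, which is exactly why it isn't what you're paying for. A minimal sketch with hypothetical host and addresses; in practice, deliverability (SPF/DKIM, IP reputation) is the hard part:

```python
# Sending mail "the hard way" with Python's standard library. Composing
# and submitting is trivial; getting the mail accepted by inbox providers
# is the part email services actually sell. Host/addresses are placeholders.
import smtplib
from email.message import EmailMessage

def build_message(sender: str, recipient: str, subject: str, body: str) -> EmailMessage:
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send(msg: EmailMessage, host: str = "localhost") -> None:
    # Assumes an SMTP server is reachable at `host`; not invoked below.
    with smtplib.SMTP(host) as server:
        server.send_message(msg)

msg = build_message("me@example.com", "you@example.com", "Hello", "Sent the hard way.")
```

Nothing in that sketch gets your mail past spam filters, which supports the point that such services survive because you're buying an outcome, not software.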
I think X-as-a-service, with X being something other than just software, still has a place. The reason in many cases is that you are paying for access to, or outsourcing, something you don't want to manage internally. Could be emails, or something like payments (Stripe), financial risk, insurance, hiring, logistics, etc.
Good post. I wrote a similar essay a few weeks ago, referencing similar ideas as the YC intern + invoicing product: a lot of SaaS tools are just data transformation services, which become commodities (or easily-DIYable) in a more LLM-enabled world. As a general matter, margins tend to come down over time -- the fact that SaaS has for so long had 80%+ gross margins should stand out as an unstable equilibrium to any outside observer. https://loeber.substack.com/p/18-the-end-of-schematic-businesses
Thanks, I like your data transformation analogy. Intuitively I would claim 90%+ of SaaS platforms do things that any human could do, but it would be annoying and time-consuming. When you can have a free humanlike AI doing those things, why pay for software at all?
Running payroll off of something AI spit out in a 2 hour session. What could go wrong?
SaaS will transform into micro, personalized services that are well connected to industry ecosystems.
The competition will be even more brutal.
I just elaborated on AI agents here: https://open.substack.com/pub/akiranin/p/what-exactly-are-ai-agents
Great insights @Aki -- thank you for this! You inspired me to think about this through the lens of contract negotiations. It's going to be fascinating to see how legal terms and teams keep up with this new frontier. https://privaicyinsights.substack.com/p/ai-contracts-need-new-thinking-beyond