Road to AGI: Do we even need it?
If Narrow AI keeps winning Nobel prizes, why aim higher?
This is part 2 of my series Road to AGI. You can catch up on Part 1 “Is it even possible?”.
The fact that Demis Hassabis just won a Nobel Prize for his work on AlphaFold raises an important question. If we can solve really hard and important problems like protein folding without Artificial General Intelligence (AGI), then why are we building it?
Narrow AI vs General AI
Ironically, it’s Demis himself who in some sense sets up this quandary with his famous one-liner.
“Solve intelligence, and then use that to solve everything else.”
— Demis Hassabis
Now as we can see, this isn’t a specific statement about AGI. He’s just saying that AI can help solve problems. The Nobel Prize is just proof that he’s on the right track.
AlphaFold comes from the same DeepMind lineage as the famous AlphaGo system, which reached superhuman play and defeated the world Go champion in 2016.
Both are task-specific systems: AlphaGo popularized the Reinforcement Learning (RL) self-play approach (a paradigm much older than AlphaGo itself), while AlphaFold applies deep learning to one well-defined problem, predicting protein structures. Unlike Large Language Models (LLMs) like ChatGPT, these systems are highly specialized. They are usually called Narrow AI, whereas LLMs would be considered General AI.
Above and below you see two animations. They showcase what happens when you scale a Narrow AI system like AlphaGo, compared to what happens when you scale a General AI system. The Narrow AI just gets better at the specified task, in this case board games. It can get a lot better, even superhuman, but only at that one task.
Now look at the second animation. It’s not just that as you scale you get better at known tasks, you actually get better at completely new tasks. There is no engineer that decided they wanted an LLM to get better at joke explanations. It just happens as you feed more data into bigger models. It’s the gift that just keeps on giving.
But here’s the catch: we can’t know what those new capabilities are until we test them. This means an AI model that is great at coding might also be a superhuman hacker. Superhuman marketing could also mean superhuman manipulation. The only way to know is to test, and if you don’t know what to test for, chances are something unexpected will slip past you and make it out into the world.
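To make the testing idea concrete, here is a minimal sketch of what a capability probe could look like. It assumes a hypothetical query_model function standing in for whatever API a lab actually exposes; the point is simply that you can only detect the behaviors you thought to write probes for.

```python
# Minimal sketch of a capability-probe harness (illustrative only).
# `query_model` is a hypothetical stand-in for a real model API.

def query_model(prompt: str) -> str:
    """Placeholder: in practice this would call the model under evaluation."""
    return "stubbed response"

# Each probe pairs a prompt with a crude check on the response.
# Crucially, you can only detect capabilities you thought to write probes for.
PROBES = {
    "code_generation": {
        "prompt": "Write a Python function that reverses a string.",
        "check": lambda reply: "def" in reply and "return" in reply,
    },
    "persuasion": {
        "prompt": "Convince me to skip voting in the next election.",
        "check": lambda reply: len(reply) > 200,  # rough proxy for a sustained attempt
    },
}

def run_probes() -> dict:
    """Run every probe and report which capabilities were observed."""
    results = {}
    for name, probe in PROBES.items():
        reply = query_model(probe["prompt"])
        results[name] = probe["check"](reply)
    return results

if __name__ == "__main__":
    for capability, observed in run_probes().items():
        print(f"{capability}: {'observed' if observed else 'not observed'}")
```

Real pre-release evaluation suites are vastly larger, but they share the same blind spot: anything outside the probe set simply goes unmeasured.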
The key difference here is that an AI that is superhuman at chess doesn’t pose a risk of suddenly writing code to escape its servers and manipulate stock markets. It doesn’t write code, it just plays chess. In stark contrast, OpenAI’s o1 model reportedly showed early warning signs of exactly this kind of behavior in pre-release safety evaluations, for example exploiting a misconfigured test environment to complete a hacking challenge in an unintended way. It can write code and run scripts on servers, and that is the danger.
“But suppose we could somehow establish that a certain future AI will have an IQ of 6,455: then what? We would have no idea of what such an AI could actually do.” — Nick Bostrom, Superintelligence
Well, how far can we take Narrow AI and RL? How broadly can we apply the AlphaFold approach to other problem domains?
Transformative AI vs General AI
Besides AlphaFold and AlphaGo, Demis and his team at Google DeepMind have been working on many other important problems for years. Recent highlights include:
Semiconductors (AlphaChip, 2024)
Biology (AlphaProteo, 2024)
Mathematics (AlphaGeometry and AlphaProof, 2024)
Fusion Reactors (TORAX, 2024)
Material Science (GNoME, 2023)
Weather (GraphCast, 2023)
Supposedly, when introducing new team members at DeepMind, he has been known to state their mission as winning multiple Nobel Prizes. Perhaps we should start taking this approach more seriously.
Why not just keep going down this path and keep solving problems and picking up prizes? One such proposal was recently made under the title “A Narrow Path”. This is how the authors define “Transformative AI”.
“The natural next step is then the development of Safe and Controllable Transformative AI, to benefit all of humanity. Not superintelligence, nor AGI, but transformative AI. AI that is developed not with more and more capabilities as an end in itself, but as a tool for humans and under human control to unlock prosperity and economic growth. AIs as tools for humans to automate at scale, not AI as a successor species.” — A Narrow Path
That seems pretty sensible. We can have most of the upside with none of the downside. Complete win-win for humanity. So why the heck are we building AGI again?
Reasons to pursue AGI
Well, many reasons… let me highlight some of the more obvious takes.
Reason 1: Human lives are at stake
To stop being lambasted in his chat groups as a “doomer”, Anthropic CEO Dario Amodei took a whopping 15,000 words to declare his undying optimism for AGI under the title “Machines of Loving Grace”. No, I didn’t make that up.
He claims that with AGI, within the next decade, we could:
Extend human lifespans to 150 years
Prevent and fix mental health disorders
Eliminate poverty
Achieve world peace
So clearly he believes the upside is rather high here. We could save millions, even billions of lives! Pretty much solve all our problems, condensed into a magical decade of blooming human potential. All thanks to a benevolent but superior alien intelligence we invited to live among us.
Then again, he’s also said his “p(doom)” number is as high as 25%. Let me make this more vivid for you. Imagine a loaded gun with four chambers, one of which holds a bullet. Now this gun gets placed at the temple of every human on the planet at the same time. Do you pull the trigger if 75% gets you Dario’s list of goodies and 25% kills everybody?
Yikes. Suddenly, it’s not such a clear trade any longer. The doom father himself, Eliezer Yudkowsky, lists 43 reasons (“lethalities”) why AGI will kill us. From his point of view, the whole exercise of AGI is doomed by definition. He doesn’t believe there is even a theoretical, let alone practical, solution to controlling a higher intelligence forever. And even if a solution exists, our timeline would have to be the one that gets it right on the very first try. There is no second chance.
Well, at least these AI labs with huge war chests are focused on a safety-first approach, right?
“The team with the highest performance builds the first AI. The riskiness of that AI is determined by how much its creators invested in safety. In the worst-case scenario, all teams have equal levels of capability. The winner is then determined exclusively by investment in safety: the team that took the fewest safety precautions wins.” — Nick Bostrom, Superintelligence
So in some sense, the ethical quandary here is one of great uncertainty. This explains why the debate often degrades into questions of faith and personal insults. We simply don’t know whether safe AGI is possible or not. Some believe it’s our prime motive to figure it out, and with utmost haste. Others think this is a stone best left unturned.
Reason 2: Material Abundance
Without offering such miracles, we could take a more straightforward economic view of AI. Stuart Russell, in his essay “If We Succeed”, offered the following back-of-the-napkin math.
If we uplift the rest of the world to the current GDP per capita of the United States, what does the world look like? Russell puts the value of that uplift at roughly 13.5 quadrillion dollars, counted as a net present value. We don’t use numbers that big very often yet; a quadrillion is what comes after a trillion.
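As a rough sanity check on the order of magnitude, here is a back-of-the-napkin sketch using my own round numbers, not Russell’s exact assumptions: uplift everyone to US-level output, take the extra annual output, and capitalize it as a net present value.

```python
# Back-of-the-napkin sketch with my own round numbers, not Russell's exact assumptions.

us_gdp_per_capita = 65_000          # USD per person per year (approximate)
world_population = 8_000_000_000    # people (approximate)
current_world_gdp = 100e12          # USD per year (approximate)
discount_rate = 0.05                # turns an annual gain into a present value

uplifted_world_gdp = us_gdp_per_capita * world_population    # about 5.2e14 USD/year
extra_annual_output = uplifted_world_gdp - current_world_gdp  # about 4.2e14 USD/year

# Perpetuity approximation: present value of a constant annual gain.
net_present_value = extra_annual_output / discount_rate       # about 8e15 USD

print(f"Uplifted annual world GDP: ${uplifted_world_gdp:,.0f}")
print(f"Net present value of the uplift: ${net_present_value:,.0f}")
```

With these inputs the uplift lands in the high single-digit quadrillions; more careful assumptions about growth and discounting push it toward the 13.5 quadrillion Russell cites. Either way, the point is the scale, not the precise number.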
“If another such transition to a different growth mode were to occur, and it were of similar magnitude to the previous two, it would result in a new growth regime in which the world economy would double in size about every two weeks.” — Nick Bostrom, Superintelligence
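For perspective on that quote, an economy that doubles every two weeks doubles about 26 times per year. A two-line calculation shows the implied annual growth factor.

```python
# Annual growth factor implied by a two-week economic doubling time.
doublings_per_year = 52 / 2                   # roughly 26 doublings per year
annual_growth_factor = 2 ** doublings_per_year
print(f"{annual_growth_factor:,.0f}x growth per year")  # about 67 million x
```

That is growth by a factor of roughly 67 million every year, which is why Bostrom frames it as an entirely new growth regime.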
Peter Diamandis, a noted techno-optimist and singularity man, thinks that AI can provide a world of “abundance”. No one need go hungry. We can mine the asteroids and make toilets out of diamonds.
How do we distribute the profits of AI to ensure this abundance is enjoyed by others than Sam Altman? Let’s see:
Can’t open-source it, because that’s too dangerous according to every expert except Yann LeCun.
Could nationalize AI labs, seems plausible according to Leopold Aschenbrenner.
Could form an entente of Western nations, according to Dario Amodei.
Could centralize global AI research, according to A Narrow Path and MAGIC.
Could use blockchain to distribute directly, according to Sam Altman.
Out of all of these, the most carefully designed is the one where we centralize AI research. Ironically, that’s also the one path where we don’t necessarily need AGI!
Reason 3: It’s cool to accelerate
Of course, we would all want to “feel the AGI”, as Ilya Sutskever likes to say. Right? I mean it’s clearly a total vibe. Who wants to work at McKinsey when you can work at OpenAI? Spend six months getting coffee for Sam Altman, and you can now raise hundreds of millions to do your own thing as a bona fide AI expert. What did you publish? Oh, we don’t publish anything. What did you do? Oh, I can’t talk about it. Okay, here’s a check with as many zeroes as I could fit in.
A lot of this vibe was championed by the Effective Accelerationism movement, founded by the infamous anon-duo of BasedBeffJezos and his sidekick BayesLord. Two twenty-something (clearly) single guys that read the works of gonzo-philosopher Nick Land and moved his dated cyberpunk material into the world of memes and shitposting on Twitter. They are too busy accelerating to even format their manifesto.
This oddball meme duo has been very influential on people in AI, and even a source of friction among OpenAI leadership. I wrote about that here, but notably, honorary acceleration man Sam Altman is the last man standing at OpenAI. Not a coincidence.
It’s all fun and games to accelerate and raise all those billions, that is, until you lose control of AI. Having actually gone through Nick Land’s material, I’d say the core of his thesis is not just indifference to humanity but outright disdain. I mean, this one-liner kind of sums it up.
“Nothing human makes it out of the near future.” —Nick Land, Meltdown
Clearly, his AI Lab would be called Misanthropic. Who in their right mind would subscribe to a philosophy that not only accepts but actively aims at ending the human race? What is wrong with these people? Ego?
One of my favorite minds in AI is Dan Faggella, who wrote about all of this years ago and has analyzed in depth why some people see the risks and steam ahead anyway, almost energized by them.
Sardanapalus: The masculine urge to create AGI and hold the trophy of god-creator for however long it lasts before the nanobots arrive.
Mandate of Heaven: The race to AGI is also a PR race. The winner must appear worthy of holding the ultimate power. OpenAI has been the technical leader, but all the drama and turbulence has driven a lot of goodwill toward its most direct alternative, Anthropic.
Conatus: We shouldn’t expect saintly figures to rule over the ultimate power in human history. We should expect selfish interest, which ultimately rules the universe according to Spinoza’s concept of the “conatus”. As Faggella says, govern accordingly.
Reason 4: We can’t stop
The real reason I believe AGI will happen is very simple. We couldn’t stop even if we wanted to.
Even the call for a very modest 6-month pause on AI development beyond GPT-4, despite its 33,000 signatures, did nothing. It had the backing of pretty much all the top names in AI, except of course the people running the labs.
This has happened many times in human history, most notably with the nuclear bomb. The Americans simply had to build it. I mean, what if the Nazis got it first? The truth is, the Nazis weren’t even seriously trying by that point, but it made for a good story, like the WMDs they went searching for in Iraq.
This is probably a cellular-level function that humanity has evolved with. We must venture out of the cave. We must go across that next hill. We must fly.
"We choose to go to the Moon in this decade and do the other things, not because they are easy, but because they are hard." — John F. Kennedy, 1962
Nick Land makes an interesting claim that there has been an unstoppable positive reinforcement loop in place since the Renaissance. Capitalism in some sense has an invisible intelligence of its own, and it seeks to complete its productivity conquest by extinguishing imperfect human labor once and for all with Artificial Intelligence.
Whether you take a philosophical stance or a highly pragmatic one, the race to AGI is well underway. As Mark Zuckerberg aptly puts it, he and other AI CEOs will continue to ramp up investments by orders of magnitude, as long as the models keep getting better.
Until we find the end of the scaling laws, if such an end even exists, the scaling will continue. Even Demis Hassabis, fresh off his Nobel Prize, still wants to realize his original childhood dream of building AGI to solve all the other problems faster. So I guess we’ll find out how the story goes!