Sam Altman, CEO of OpenAI, recently shared his thoughts on the potential of AI to shape the future of technology and global wealth in a blog post entitled “The Intelligence Age.” In this piece, Altman envisions a world where AI accelerates human progress, predicting that superintelligent AI could become a reality within the next ten years.
Altman muses, “It’s conceivable that we might achieve superintelligence in a few thousand days (!); it might take longer, but I am confident that we will reach that point.”
OpenAI’s present focus is on developing AGI (artificial general intelligence), a still-theoretical technology that could perform a wide range of tasks at human levels of intelligence without task-specific training. Superintelligence goes a step beyond AGI, representing a hypothetical scenario in which machines significantly surpass human capabilities in every intellectual endeavor, potentially to an unimaginable degree.
Superintelligence, often abbreviated “ASI” for “artificial superintelligence,” has been a topic of interest in machine-learning circles for years, particularly since philosopher Nick Bostrom published Superintelligence: Paths, Dangers, Strategies in 2014. Ilya Sutskever, a co-founder and former chief scientist of OpenAI, left the company in June to found a firm named Safe Superintelligence. Altman himself has been discussing the development of superintelligence since at least last year.
But what does “a few thousand days” really mean? Altman’s choice of a vague timeframe likely reflects his own uncertainty about when ASI will materialize, though he seems to believe it could happen within roughly a decade. For perspective: 2,000 days is about 5.5 years, 3,000 days about 8.2 years, and 4,000 days nearly 11 years.
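Strictly as an illustration (this arithmetic is ours, not Altman’s), the conversion behind those figures is easy to check in a few lines of Python, using the average Gregorian year length of 365.2425 days:

```python
# Illustrative sketch: convert day counts into approximate calendar years.
# 365.2425 is the average length of a Gregorian calendar year; the three
# day counts are our assumed span for Altman's "few thousand days."
DAYS_PER_YEAR = 365.2425

for days in (2_000, 3_000, 4_000):
    print(f"{days:,} days ≈ {days / DAYS_PER_YEAR:.1f} years")

# Prints:
# 2,000 days ≈ 5.5 years
# 3,000 days ≈ 8.2 years
# 4,000 days ≈ 11.0 years
```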
Altman’s ambiguity is easy to critique; predicting the future is no small feat. But as CEO of OpenAI, he likely has insight into forthcoming AI research that isn’t widely available to the public. So even with a loosely defined timeline, his assertion carries weight in the AI community, though he also has a vested interest in ensuring that AI advancement continues.
Not everyone shares Altman’s positive outlook. Grady Booch, a computer scientist and frequent critic of AI hype, reacted to Altman’s timeline by writing on X, “I am so freaking tired of all the AI hype: it has no basis in reality and only serves to boost valuations, stir public interest, generate headlines, and detract from genuine computing advancements.”
Despite such criticisms, it remains significant when the head of a leading AI company forecasts future capabilities, even if it’s part of an ongoing effort to secure funding. For many tech CEOs, building the infrastructure to support AI services is a top priority.
“To make AI accessible to as many people as possible,” Altman notes, “we need to reduce the cost of computing and increase its availability (which demands significant energy and chips). If we fail to build sufficient infrastructure, AI will remain a scarce resource, leading to conflicts and becoming a tool predominantly for the wealthy.”
Altman’s Perspective on “The Intelligence Age”
Altman describes our current moment as the dawn of “The Intelligence Age,” the next transformative epoch in human history after the Stone Age, the Agricultural Age, and the Industrial Age. He credits this new era to the success of deep-learning algorithms, summarizing the transition as: “How did we reach the threshold of the next prosperity leap? In three words: deep learning succeeded.”
He envisions AI assistants evolving into “personal AI teams” capable of helping individuals achieve nearly any conceivable goal. Altman anticipates AI will facilitate breakthroughs in education, healthcare, software development, and other areas.
While acknowledging potential drawbacks and disruptions to the labor market, Altman remains optimistic about the overall impact of AI on society, asserting, “Prosperity alone doesn’t necessarily lead to happiness—there are plenty of unhappy wealthy individuals—but it would significantly enhance global living standards.”
Although discussion of AI regulation is currently widespread, Altman didn’t specifically address sci-fi-style dangers from AI. On X, Bloomberg columnist Matthew Yglesias noted, “It’s noteworthy that @sama no longer seems to even acknowledge existential risk concerns, focusing only on labor market adjustment challenges.”
While enthusiastic about the potential of AI, Altman also advises caution. He writes, “We must act wisely yet decisively. The onset of the Intelligence Age poses complex, high-stakes challenges. It won’t be completely positive, but the potential benefits are so vast that we owe it to ourselves and future generations to navigate these risks.”
Aside from potential labor market issues, Altman doesn’t specify other negatives of the Intelligence Age. He concludes with a historical analogy about an obsolete job, writing, “Many of today’s jobs would seem trivial to people centuries ago, yet no one yearns for the past roles like lamplighting. If a lamplighter could see today’s world, he would find the widespread prosperity unimaginable. If we could see a hundred years into the future, today’s prosperity would seem equally unbelievable.”
The article was updated on September 24, 2024, at 1:15 pm to correct an editorial error that misrepresented Grady Booch’s views on AI.