I believe that artificial general intelligence (AGI), specifically “a highly autonomous system that outperforms humans at most economically valuable work,” is technologically achievable and >90% likely to exist in the next 1-20 years (and honestly, 10 years feels way too long). You should too. Once AI systems that are better, cheaper, faster, and more reliable than humans at most economic activity are widely available, the intelligence curse should begin to take effect. We should expect to be locked into that outcome 1-5 years after this moment.

Let's examine this timeline. Although the definition of AGI is rather nebulous depending on who you talk to, we can look at the most recent predictions of some of the leaders in the field. Sam Altman of OpenAI now says it could be this year. Dario Amodei, the CEO of Anthropic, thinks AGI could happen in 2026 or 2027. People like Demis Hassabis of DeepMind and Yann LeCun of Meta have longer timelines, with LeCun generally disliking the term AGI in favor of human-level intelligence. But it's fair to say that, whatever definition you use, the predicted timelines have been shrinking. I generally think the shorter timelines of 2025-2026 are correct, but let's say the point at which most reasonable people can agree that AGI has arrived is five years from now. Then, according to Luke Drago, this dystopic scenario would be "locked in" within 1-5 years after that. Take the median of his estimate, three years: AGI widely exists in 2030, and by 2033 we have the dystopia of the intelligence curse. But if it comes much sooner - if Sam Altman's and Dario Amodei's predictions are correct - then this dystopia arrives before 2030. A small sketch below lays out the arithmetic.
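To make the scenario arithmetic explicit, here is a minimal sketch in Python - purely illustrative, using only the two arrival dates and the 1-5 year lock-in window discussed above (the 2026 and 2030 dates are simply the scenarios I describe, not forecasts):

```python
# Purely illustrative: when the "intelligence curse" would lock in under the
# AGI-arrival scenarios discussed above, given Drago's 1-5 year lock-in window.
agi_arrival_scenarios = {
    "short timelines (Altman/Amodei)": 2026,
    "broad consensus in ~5 years": 2030,
}
lock_in_window = (1, 3, 5)  # years after widely available AGI; 3 is the median

for label, agi_year in agi_arrival_scenarios.items():
    earliest, median, latest = (agi_year + d for d in lock_in_window)
    print(f"{label}: AGI ~{agi_year} -> lock-in between {earliest} and {latest} (median {median})")
```

On the short timelines the median lock-in lands before 2030; on the five-year consensus timeline it lands around 2033, which is the case I work through above.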
Let's look at the World Economic Forum's Future of Jobs Report, released in January 2025. I'm going to quote directly from the report on how it was compiled and the timeframe of its forecast:

The Future of Jobs Report 2025 brings together the perspective of over 1,000 leading global employers—collectively representing more than 14 million workers across 22 industry clusters and 55 economies from around the world—to examine how these macrotrends impact jobs and skills, and the workforce transformation strategies employers plan to embark on in response, across the 2025 to 2030 timeframe.

It is looking at exactly the timeframe we are interested in:
Extrapolating from the predictions shared by Future of Jobs Survey respondents, on current trends over the 2025 to 2030 period job creation and destruction due to structural labour-market transformation will amount to 22% of today’s total jobs. This is expected to entail the creation of new jobs equivalent to 14% of today’s total employment, amounting to 170 million jobs. However, this growth is expected to be offset by the displacement of the equivalent of 8% (or 92 million) of current jobs, resulting in net growth of 7% of total employment, or 78 million jobs.

So over this period - when AGI could very well arrive and the intelligence curse should be wreaking havoc with economies - the World Economic Forum is predicting net growth in jobs. To be sure, the report lists several major transformative changes beyond AI - demographic change (population aging), climate change, "geoeconomic fragmentation and geopolitical tensions" - but even with these changes it still forecasts a net increase in employment. And I don't think this increase fully accounts for the opportunities AGI will create in technologies that cannot possibly be imagined at this point. This hardly sounds like the dire employment scenario described in the paper.
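For reference, here is the report's headline arithmetic in one place - a trivial illustrative check using only the figures quoted above (the report's percentages are rounded):

```python
# Illustrative check of the Future of Jobs Report 2025 headline figures quoted above
# (values in millions of jobs; the report's percentages are rounded).
jobs_created = 170    # new jobs created, ~14% of today's total employment
jobs_displaced = 92   # jobs displaced, ~8% of today's total employment

net_change = jobs_created - jobs_displaced
print(f"Projected net change, 2025-2030: +{net_change} million jobs")  # +78M, the report's ~7% net growth
```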
There is also a problem I have with this paper, and with the "My Last Five Years of Work" essay as well: both strongly imply that once AGI is achieved there will be a very quick transformation - a complete makeover of economic and social structures. But there are all kinds of inertial forces at work that prevent radical transformation from happening quickly across industries: highly interdependent systems, bureaucracies, complicated logistics, regulatory compliance, and the natural resistance to change from people across organizations. Often the large costs and long timelines of changing complicated systems will drag out AGI implementation in parts of the economy, so that AGI will be very unevenly distributed. There will be a transition period after AGI is achieved in which some automations are easy and quick, while other parts of the economy could take over 20 years. And it is because of this transition period - if governments and organizations use it well - that there is time to plan, which could prevent the dystopia of the intelligence curse.

2) Paradox of AI Alignment

My idea of the paradox of AI alignment is this: we are striving for an AI that is aligned with human values, and we spend a lot of time and effort trying to create that alignment, but in the end we may, as a species, be much better served if AI alignment proves to be impossible. Over the last couple of years there have been highly publicized departures of very notable alignment people from OpenAI, and recently from Anthropic, who left to do alignment research elsewhere. The story around these departures is ostensibly that the companies they were at weren't serious enough about AI dangers, were moving too fast, and weren't giving alignment teams enough resources. But I think the reality is that AI alignment is just really, really hard, and the frustration is that it has been very difficult to make and sustain progress in alignment - especially when AI is constantly changing. AI alignment, in the end, might just be impossible. However, alignment being impossible could be a good thing.

Here's why: In a world with an "aligned" AGI/ASI, we would trust it more and give it more general objectives - objectives that would be fuzzier, where the AI could choose from more and more paths to achieve them. We would let it be more autonomous in pursuing those general objectives, because it is "aligned" with human values. In a world where we trust AI, there is less need for human oversight of AI, human management of AI, human security of AI, human deployment of AI. But without trusted alignment, AIs will need human partners; AI will need to be directed to specific tasks, and that need for management ensures that a whole sector of jobs will continue to grow - assuming we never have fully aligned AI. I know alignment won't be an all-or-nothing proposition, and we can have alignment that handles some goals well, but as long as we don't have fully trusted AI, I believe we will be better off. And since alignment is difficult, that's a good thing.

3) Myth of AI Centralization

By the myth of AI centralization, I am referring to a belief widely held by almost everyone, until perhaps very recently, that in order to create and control AI you need enormous resources to train and deploy these models. Estimates for training GPT-3 were around $4-5 million, while the estimated cost to train a model the size of GPT-4 is around $63 million. The data centers have gotten larger, the number of GPUs needed has increased, and the energy requirements have grown to the point that major labs talk about having their own nuclear power plants. And just recently, it was announced that Stargate, a data-center project expected to cost $500 billion, would begin construction in the middle of 2025. Luke Drago makes this his main point when comparing AGI to resources like oil in rentier states. He states:
But AGI looks a lot more like coal or oil than the plow, steam engine, or computer. Like those resources:
- It will require immensely wealthy actors to discover and harness.
- Control will be concentrated in the hands of a few players, mainly the labs that produce it and the states where they reside.

But almost at the same time as Stargate was being announced with its enormous price tag, DeepSeek R1 was released and open sourced, rivaling the best models at the time at a fraction of the size and cost. Even if the cost was not exactly what was first reported, it is still far smaller than what the large labs have been saying they need. The combination of ideas like distillation, improved data sets, improved reinforcement learning techniques, open source, and even architectures other than transformers is making much smaller models possible. Plus, chips should continue to drop in price to the point that, I believe, very small groups - and individuals - will be able to create their own versions of AGI. This, I believe, is the biggest wild card not being accounted for in "The Intelligence Curse" paper. It will not be a few mega-trillion-dollar companies controlling AGI. Instead, AGI will be commoditized and democratized. Individuals will be empowered to control intelligence in the ways they want, not as dictated by large AI labs.

Conclusion

While The Intelligence Curse presents a sobering and dystopian vision of a post-AGI future, I am unconvinced that this outcome is inevitable or even likely. The future is rarely as linear as such scenarios suggest, and history shows us that technological transitions are complex, uneven, and full of surprising turns. The timeline proposed by the paper feels overly compressed, underestimating the inertia of existing systems and the time it takes for society to adapt. Moreover, the assumption that AGI will remain in the hands of a few centralized actors ignores recent developments in open source AI and more efficient models that could democratize access to powerful technologies. The paradox of AI alignment may also work in humanity's favor, ensuring that humans remain an integral part of managing and guiding AI systems for the foreseeable future. Far from eliminating the need for human oversight, imperfect alignment could create new categories of work that preserve human relevance in a world increasingly shaped by AI. Additionally, the myth of AI centralization overlooks the potential for decentralized, individual-driven innovation that could disrupt the monopolistic control envisioned in the paper. Ultimately, while it's crucial to think critically about the risks posed by AGI and to advocate for responsible AI development, it's equally important to remain grounded in what we know about technology adoption and economic change. The future will still need us - not just as passive recipients of technological progress, but as active participants in shaping its direction. I'm convinced there's reason to believe that AI will enhance human potential rather than diminish it.