Sunday, February 9, 2025

From Innovation to Irrelevance: Reflections on the Paper "The Intelligence Curse"

Last month an article titled "The Intelligence Curse" by Luke Drago was published on LessWrong. Drago, who works on AI governance projects, has written a piece that everyone should read about a possible future that is quite depressing in its implications - that AI, perhaps humanity's greatest innovation, could make most of humanity irrelevant. But although the ideas in the paper have a non-zero chance of coming true, I don't think that outcome is likely, and in this post I'll explain why.

I almost titled this post "Why the Future Will Still Need Us" as a nod to the seminal 2000 Wired article "Why the Future Doesn't Need Us." In that article, Bill Joy, co-founder of Sun Microsystems, expressed deep concern over the potential dangers posed by emerging technologies, specifically genetics, nanotechnology, and robotics (GNR). He argued that these advancements could surpass human control, leading to self-replicating entities capable of causing massive destruction. Interestingly, in that 2000 article AI wasn't at the top of his list of existential threats, as it is now for the P(doom) crowd.

Other, more recent and AI-specific articles that I think are very relevant to this discussion, and that I commented on here and here, are "My Last Five Years of Work" by Avital Balwit and "Situational Awareness" by Leopold Aschenbrenner. Both authors address the societal transformations that would come with the advent of artificial general intelligence (AGI). Balwit writes about the potential obsolescence of human labor, suggesting a future where AGI could render traditional employment obsolete. Similarly, Aschenbrenner discusses the rapid progression towards AGI, emphasizing the need for heightened awareness and preparedness for the imminent changes. Both emphasize the urgency of addressing the societal and ethical implications of AGI's integration into various facets of life. Likewise, this new paper by Luke Drago continues that thread: the need to think about and prepare for a possible future that could be radically different from today.

But before I get into the "Intelligence Curse" paper itself, I want to state that I am an AI optimist: I believe in accelerating AI progress, and that AI acceleration can be done in a way that is beneficial and responsible. Accelerating change in and of itself can be disorienting. But I believe AI may give us the only real hope to counter and control forces in the 21st century that have truly existential implications, such as global warming, conflicts over energy and resource scarcity, food scarcity and malnutrition, disease, pandemics, etc.

So what is the "Intelligence Curse"?

The article introduces a scenario where AGI fundamentally alters economic incentives, leading to widespread human irrelevance and reduced investment in human welfare. Unlike past technological revolutions that expanded human potential, the paper argues, AGI is more like a concentrated natural resource such as oil: its control will rest with a few powerful actors, corporations and governments, who prioritize maximizing returns from AI systems over investing in people.

In this post-AGI economy, companies would replace human labor with AGI because these AI agents are faster, cheaper, and more reliable. This shift removes the economic motivation to invest in traditional areas like education, infrastructure, and social welfare, mirroring the “resource curse” seen in "rentier" states. For example, the Democratic Republic of Congo (DRC) is rich in natural resources, with an estimated $24 trillion worth of untapped minerals, yet most of its population lives in poverty. Similarly, the "Intelligence Curse" could cause states and corporations to focus solely on extracting value from AGI, leaving human development and welfare behind.

The social and economic consequences of this transformation could be dire. As AGI replaces human labor across all fields, most people would lose economic relevance. States may increasingly rely on taxing corporations instead of individuals, while AI labs become powerful rent seekers, controlling significant economic resources. Consequently, social safety nets and public funding for human-centered initiatives would likely diminish, leaving many economically vulnerable.

The author emphasizes that incentives drive societal decisions. Today, states and corporations invest in human capital because they derive economic returns from people’s productivity. Education, infrastructure, and social programs create skilled workers, productive citizens, and consumers who fuel the economy. This feedback loop benefits those in power. However, once AGI becomes the primary driver of economic value, this incentive disappears. Powerful actors will no longer need human labor or consumer demand, relying instead on scalable, cost-effective AI systems.

He argues that this shift will lead to widespread disinvestment in people, collapsing the economic structures that support education, social programs, and infrastructure. Without an economic reason to invest in human capacity, powerful actors will focus on optimizing and extracting value from AI, creating a society where the majority of people become irrelevant to the system’s success.

So before I get to why I think this scenario is unlikely, I'll extend his dystopic ideas even further. Even looking at this future, some optimists would say that we would be living in a post-scarcity society, that everything would be freely available, and that whatever is not freely available could be provided by a government implementing Universal Basic Income (UBI). It will be utopia! Even if all humanity is beholden to authoritarian governments and corporations. But in Luke Drago's argument, which centers solely on incentives, what would be the incentive to provide UBI? Altruism?

Governments in this scenario would only provide enough income to ensure that people don't riot and that those in power can keep that power. AI-enabled feudalism. The corporations and governments in control would wield AI more persuasive than the best advertising campaign or the greatest politician that has ever existed. So in this post-scarcity society, with endless AI-provided entertainment, AI persuasiveness, and advancements in pharmaceuticals that enable whatever experiences human beings want, the population could be kept placated and in line.

But what about future employment in this dystopia? Human beings derive meaning from doing "meaningful" things. We can turn to Avital Balwit's "My Last Five Years of Work," mentioned above and in my blog post about that article, to see that humans would still find meaningful things to do. But would these corporations and governments still need to hire people for specific jobs? There would still be those who work directly for these large corporations and governments, although those positions would grow fewer and fewer as AI is given more control over their tasks. What about professions that involve human-to-human experiences: nurses, caretakers, physical therapists, etc.? Yes, there will continue to be a need for those. Human-created entertainment that can be experienced live, like plays, athletics, and musical performances, would remain popular, as would human-created art and literature that could only be based on real, actual human experiences. Books drawn from human experience that are autobiographical or semi-autobiographical would also be popular (think Dave Eggers's A Heartbreaking Work of Staggering Genius). So sure, these types of human activities will persist, but they would account for a small percentage of current employment numbers.

And to put a really cynical point on this dystopic vision, the greatest employers would be military services and security forces. Governments would still need large militaries in their competition with other AI-equipped governments. And both governments and corporations would need large security forces to protect themselves from, and pacify, the broader population.

Okay, okay, this has all been very depressing, because the author makes the case that this future is an inevitable outcome of AI advancement unless specific policies and actions are taken - and to be fair, he does promise a follow-up article outlining those specifics. But I want to explain why I think this future is not inevitable and is actually unlikely.

1) Timeline and Transition Period

First let's look at his timeline and the nature of that timeline. I'm going to quote two paragraphs in full here, so we have his full context:
I believe that artificial general intelligence (AGI), specifically “a highly autonomous system that outperforms humans at most economically valuable work” is technologically achievable and >90% likely to exist in the next 1-20 years (and honestly, 10 years feels way too long). You should too.

Once AI systems that are better, cheaper, faster, and more reliable than humans at most economic activity are widely available, the intelligence curse should begin to take effect. We should expect to be locked into the outcome 1-5 years after this moment.

Let's examine this timeline. Although the definition of AGI is rather nebulous depending on who you talk to, we can look at the most recent predictions of some of the leaders in the field. Sam Altman of OpenAI now says it could be this year. Dario Amodei, the CEO of Anthropic, thinks AGI could happen in 2026/2027. People like Demis Hassabis of DeepMind and Yann LeCun of Meta have longer timelines, with LeCun generally disliking the term AGI in favor of human-level intelligence. But it's fair to say that, regardless of the definition, timeline predictions have been shrinking.

I generally think the shorter timelines of 2025-2026 are correct, but let's say the point when most reasonable people can agree that AGI has arrived is 5 years from now. Then, according to Luke Drago, this dystopic scenario would be "locked in" within 1-5 years after that. Take the median of his estimate, 3 years: in 2030 AGI widely exists, and by 2033 we have the dystopia of the intelligence curse. And if AGI arrives much sooner - if Sam Altman's and Dario Amodei's predictions are correct - then this dystopia arrives before 2030.

Let's look at the World Economic Forum's Future of Jobs Report 2025, released in January 2025. I'm going to quote directly from the report on how it was compiled and the timeframe of its forecast:
The Future of Jobs Report 2025 brings together the perspective of over 1,000 leading global employers—collectively representing more than 14 million workers across 22 industry clusters and 55 economies from around the world—to examine how these macrotrends impact jobs and skills, and the workforce transformation strategies employers plan to embark on in response, across the 2025 to 2030 timeframe.

It is looking at the timeframe we are interested in:

Extrapolating from the predictions shared by Future of Jobs Survey respondents, on current trends over the 2025 to 2030 period job creation and destruction due to structural labour-market transformation will amount to 22% of today’s total jobs. This is expected to entail the creation of new jobs equivalent to 14% of today’s total employment, amounting to 170 million jobs. However, this growth is expected to be offset by the displacement of the equivalent of 8% (or 92 million) of current jobs, resulting in net growth of 7% of total employment, or 78 million jobs.

So over this period, when AGI could very well be arriving and the intelligence curse should be wreaking havoc on economies, the World Economic Forum is predicting net growth in jobs. To be sure, it lists several major transformative changes that will be taking place even beyond AI - demographic changes (population aging), climate change, "geoeconomic fragmentation and geopolitical tensions" - but even with these changes there will still be positive increases in employment. And I don't think this increase in employment fully takes into account the opportunities that will be created by AGI in technologies that cannot possibly be imagined at this point.
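As a quick sanity check on the report's figures, here is a minimal back-of-the-envelope sketch in Python. The job numbers are taken from the quote above; the implied global employment base is my own derivation from the 14% figure, not a number stated in the report:

```python
# Figures quoted from the WEF Future of Jobs Report 2025, in millions of jobs.
jobs_created = 170    # new jobs, ~14% of today's total employment
jobs_displaced = 92   # displaced jobs, ~8% of today's total employment

net_change = jobs_created - jobs_displaced   # 78 million net new jobs
# Back-of-the-envelope employment base implied by the 14% figure
# (my own derivation, not a number stated in the report):
implied_base = jobs_created / 0.14           # roughly 1.2 billion jobs today

print(f"Net change: +{net_change}M jobs against a base of ~{implied_base:,.0f}M")
```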

This hardly sounds like the dire employment scenario described in the paper.

There is also a problem I have with this paper, and with the "My Last Five Years of Work" essay as well: both strongly imply that once AGI is achieved there will be a very quick transformation - a complete makeover of economic and social structures. But there are all kinds of inertial forces at work that prevent radical transformation from happening quickly across industries. These difficulties include changing highly dependent systems, bureaucracies, complicated logistics, regulatory compliance, and the natural resistance to change from people across organizations. Often the large costs and long timelines of changing complicated systems will drag out AGI implementation in parts of the economy, such that AGI will be very unevenly distributed.

There will be a transition period after AGI is achieved in which some automations will be easy and quick, while other parts of the economy could take over 20 years to transform. And it is because of this transition period - if governments and organizations use it well - that there is time to plan in a way that could prevent the dystopia of the intelligence curse.

2) Paradox of AI Alignment

My idea of the paradox of AI alignment is this: we are striving for an AI that is aligned with human values, and we spend a lot of time and effort trying to create that alignment, but in the end we may, as a species, be much better served if AI alignment proves to be impossible.

Over the last couple of years there have been highly publicized departures of very notable alignment researchers from OpenAI, and recently from Anthropic, to do alignment research elsewhere. The story around these departures is ostensibly that the companies they were at weren't serious enough about AI dangers, were moving too fast, and weren't giving alignment teams enough resources. But I think the reality is that AI alignment is just really, really hard, and the frustration is that it has been very difficult to make and sustain progress in alignment, especially when AI is constantly changing. AI alignment, in the end, might just be impossible. However, alignment being impossible could be a good thing.

Here's why:

In a world with an "aligned" AGI/ASI, we would trust it more, giving it more general and fuzzier objectives, where the AI could choose from more and more paths to achieve them. We would let it be more autonomous in pursuing those general objectives, because it's "aligned" with human values. In a world where we trust AI, there's less need for human oversight, management, security, and deployment of AI. But without trusted alignment, AIs will need human partners: AI will need to be directed to specific tasks, and that need for management ensures that a whole sector of jobs will continue to grow, assuming we never have fully aligned AI.

Now I know that alignment won't be an all-or-nothing proposition, and that we can have alignment that handles some goals well, but as long as we don't have fully trusted AI, I believe we will be better off. And since alignment is difficult, that's a good thing.

3) Myth of AI Centralization

By the myth of AI centralization, I am referring to a belief held by almost everyone, until maybe very recently, that in order to create and control AI you need enormous resources to train and deploy these models. Estimates for training GPT-3 were around $4-5 million, while the estimated cost to train a model the size of GPT-4 is around $63 million. The data centers have gotten larger, the number of GPUs needed has increased, and the energy requirements have grown to the point that major labs talk about having their own nuclear power plants. And just recently it was announced that Stargate, a data center project with a reported cost of $500 billion, would begin construction in the middle of 2025.

And Luke Drago makes this his main point for comparing AGI to resources like oil in rentier states. He states:

But AGI looks a lot more like coal or oil than the plow, steam engine, or computer. Like those resources:

  • It will require immensely wealthy actors to discover and harness.
  • Control will be concentrated in the hands of a few players, mainly the labs that produce it and the states where they reside.

But almost at the same time as Stargate was being announced with its enormous price tag, DeepSeek R1 was released and open-sourced, rivaling the best models at the time at a fraction of the size and cost. Even if the cost was not exactly what was first reported, it is still much smaller than what the large labs have been saying they need. The combination of ideas like distillation, improved data sets, improved reinforcement learning techniques, open source, and even architectures other than transformers is making much smaller models possible. Plus, chips should continue to drop in price to the point that, I believe, very small groups and even individuals will be able to create their own versions of AGI.
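To make one of those ideas concrete, here is a minimal sketch of a classic knowledge-distillation loss (Hinton-style soft targets) in Python/PyTorch, where a small "student" model learns to mimic a larger "teacher." The function name, temperature, and weighting below are illustrative assumptions on my part, not details of DeepSeek's actual training recipe:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: the student matches the teacher's softened output distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradient magnitudes stay comparable across temperatures
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```

The economic point is that the expensive part, the teacher, only has to be trained once; many cheaper students can then be derived from it.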

And this, I believe, is the biggest wild card not accounted for in "The Intelligence Curse": it will not be a few mega-trillion-dollar companies controlling AGI. Instead, AGI will be commoditized and democratized. Individuals will be empowered to control intelligence in ways they want, not as dictated by large AI labs.

Conclusion

While The Intelligence Curse presents a sobering and dystopian vision of a post-AGI future, I am unconvinced that this outcome is inevitable or even likely. The future is rarely as linear as such scenarios suggest, and history shows us that technological transitions are complex, uneven, and full of surprising turns. The timeline proposed by the paper feels overly compressed, underestimating the inertia of existing systems and the time it takes for society to adapt. Moreover, the assumption that AGI will remain in the hands of a few centralized actors ignores recent developments in open source AI and more efficient models that could democratize access to powerful technologies.

The paradox of AI alignment may also work in humanity's favor, ensuring that humans remain an integral part of managing and guiding AI systems for the foreseeable future. Far from eliminating the need for human oversight, imperfect alignment could create new categories of work that preserve human relevance in a world increasingly shaped by AI. Additionally, the myth of AI centralization overlooks the potential for decentralized, individual-driven innovation that could disrupt the monopolistic control envisioned in the paper.

Ultimately, while it's crucial to think critically about the risks posed by AGI and advocate for responsible AI development, it’s equally important to remain grounded in what we know about technology adoption and economic change. The future will still need us - not just as passive recipients of technological progress but as active participants in shaping its direction.

I'm convinced there's reason to believe that AI will enhance human potential rather than diminish it.
