- Backers of so-called "e/acc" want AI development to move at full speed, while "decels" urge caution because of the technology's many risks.
- The AI alignment problem, the concern that AI could eventually grow beyond human control, is crucial to the technology's future and was a central tension in the recent OpenAI board drama and fight over its CEO.
- Companies are beginning to work on the issues of AI alignment and AI safety as pressure increases from government officials and policymakers to make the technology "responsible" and "safe."
More than a year after ChatGPT's introduction, the biggest AI story of 2023 may have turned out to be the drama in the OpenAI boardroom over the rapid advancement of the technology itself. During the ousting and subsequent reinstatement of Sam Altman as CEO, the underlying tension for generative artificial intelligence going into 2024 became clear: AI is at the center of a huge divide between those who are fully embracing its rapid pace of innovation and those who want it to slow down due to the many risks involved.
The debate — known within tech circles as e/acc vs. decels — has been making the rounds in Silicon Valley since 2021. But as AI grows in power and influence, it's increasingly important to understand both sides of the divide.
Here's a primer on the key terms and some of the prominent players shaping AI's future.
e/acc and techno-optimism
The term "e/acc" stands for effective accelerationism.
In short, those who are pro-e/acc want technology and innovation to move as fast as possible.
"Technocapital can usher in the next evolution of consciousness, creating unthinkable next-generation lifeforms and silicon-based awareness," the backers of the concept explained in the first-ever post about e/acc.
In terms of AI, it is "artificial general intelligence," or AGI, that underlies the debate. AGI is the hypothetical concept of a super-intelligent AI so advanced it could do things as well as, or even better than, humans. AGIs would also be able to improve themselves, creating an endless feedback loop with limitless possibilities.
Some think that AGIs will have the capabilities to cause the end of the world, becoming so intelligent that they figure out how to eradicate humanity. But e/acc enthusiasts choose to focus on the benefits that an AGI can offer. "There is nothing stopping us from creating abundance for every human alive other than the will to do it," the founding e/acc substack explained.
The founders of the e/acc movement were shrouded in mystery until recently, when @basedbeffjezos, arguably e/acc's biggest proponent, revealed himself to be Guillaume Verdon after the media exposed his identity.
Verdon, who formerly worked at Alphabet's X and Google, is now working on what he calls the "AI Manhattan project" and said on X that "this is not the end, but a new beginning for e/acc. One where I can step up and make our voice heard in the traditional world beyond X, and use my credentials to provide backing for our community's interests."
Verdon is also the founder of Extropic, a tech startup which he described as "building the ultimate substrate for Generative AI in the physical world by harnessing thermodynamic physics."
An AI manifesto from a top VC
One of the most prominent e/acc supporters is venture capitalist Marc Andreessen of Andreessen Horowitz, who previously called Verdon the "patron saint of techno-optimism."
Techno-optimism is exactly what it sounds like: believers think more technology will ultimately make the world a better place. Andreessen wrote the Techno-Optimist Manifesto, a more than 5,000-word statement arguing that technology will empower humanity and solve all of its material problems. Andreessen even went so far as to say that "any deceleration of AI will cost lives," and that it would be a "form of murder" not to develop AI enough to prevent deaths.
Another techno-optimist piece Andreessen wrote, called Why AI Will Save the World, was reposted by Yann LeCun, Chief AI Scientist at Meta, who became known as one of the "godfathers of AI" after winning the prestigious Turing Award for his breakthroughs in AI.
LeCun labeled himself on X as a "humanist who subscribes to both Positive and Normative forms of Active Techno-Optimism."
He also recently said that he doesn't expect AI "super-intelligence" to arrive for quite some time, and has served as a vocal counterpoint to those who he says "doubt that current economic and political institutions, and humanity as a whole, will be capable of using [AI] for good."
Meta's embrace of open-source AI, which makes its generative AI models widely accessible to outside developers, reflects LeCun's belief that the technology will offer more benefit than harm, while others have pointed to the dangers of such a business model.
AI alignment and deceleration
In March, an open letter by Encode Justice and the Future of Life Institute called for "all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4."
The letter was endorsed by prominent figures in tech, such as Elon Musk and Apple co-founder Steve Wozniak.
OpenAI CEO Sam Altman addressed the letter back in April at an MIT event, saying, "I think moving with caution and an increasing rigor for safety issues is really important. The letter I don't think was the optimal way to address it."
Altman was caught up in the battle again during the OpenAI boardroom drama, when the original directors of the nonprofit arm of OpenAI grew concerned about OpenAI's rapid rate of progress and its stated mission "to ensure that artificial general intelligence — AI systems that are generally smarter than humans — benefits all of humanity."
Their sentiments, which echo some of the ideas in the open letter, are central to decels, supporters of AI deceleration. Decels want progress to slow down because the future of AI is risky and unpredictable, and one of their biggest concerns is AI alignment.
The AI alignment problem centers on the concern that AI will eventually become so intelligent that humans won't be able to control it.
"Our dominance as a species, driven by our relatively superior intelligence, has led to harmful consequences for other species, including extinction, because our goals are not aligned with theirs. We control the future — chimps are in zoos. Advanced AI systems could similarly impact humanity," said Malo Bourgon, CEO of the Machine Intelligence Research Institute.
AI alignment research, such as MIRI's, aims to train AI systems so they "align" with the goals, morals, and ethics of humans, which would help prevent existential risks to humanity. "The core risk is in creating entities much smarter than us with misaligned objectives whose actions are unpredictable and uncontrollable," Bourgon said.
Government and AI's end-of-the-world issue
Christine Parthemore, CEO of the Council on Strategic Risks and a former Pentagon official, has devoted her career to de-risking dangerous situations. She recently told CNBC that the "mass scale death" AI could cause if used to oversee nuclear weapons is an issue that requires immediate attention.
But "staring at the problem" won't do any good, she stressed. "The whole point is addressing the risks and finding solution sets that are most effective," she said. "It's dual-use tech at its purest," she added. "There is no case where AI is more of a weapon than a solution." For example, while large language models can become virtual lab assistants and accelerate medicine, they can also help nefarious actors identify the best and most transmissible pathogens to use for attack. This is among the reasons AI can't be stopped, she said. "Slowing down is not part of the solution set," Parthemore continued.
Earlier this year, her former employer, the U.S. Department of Defense, said there will always be a human in the loop in its use of AI systems. That's a protocol Parthemore believes should be adopted everywhere. "The AI itself cannot be the authority," she said. "It can't just be, 'the AI says X.' ... We need to trust the tools, or we should not be using them, but we need to contextualize. … There is enough general lack of understanding about this toolset that there is a higher risk of overconfidence and overreliance."
Government officials and policymakers have started taking note of these risks. In July, the Biden-Harris administration announced that it secured voluntary commitments from AI giants Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI to "move towards safe, secure, and transparent development of AI technology."
Just a few weeks ago, President Biden issued an executive order establishing new standards for AI safety and security, though stakeholders across society are concerned about its limitations. Similarly, the U.K. government launched the AI Safety Institute in early November, the first state-backed organization focused on navigating the risks of advanced AI.
Amid the global race for AI supremacy, one closely tied to geopolitical rivalry, China is also implementing its own set of AI guardrails.
Responsible AI promises and skepticism
OpenAI is currently working on Superalignment, which aims to "solve the core technical challenges of superintelligent alignment in four years."
At its recent AWS re:Invent 2023 conference, Amazon announced new capabilities for AI innovation alongside the implementation of responsible AI safeguards across the organization.
"I often say it's a business imperative, that responsible AI shouldn't be seen as a separate workstream but ultimately integrated into the way in which we work," said Diya Wynn, the responsible AI lead for AWS.
According to a study commissioned by AWS and conducted by Morning Consult, responsible AI is a growing business priority for 59% of business leaders, with about half (47%) planning on investing more in responsible AI in 2024 than they did in 2023.
Although factoring in responsible AI may slow down AI's pace of innovation, teams like Wynn's see themselves as paving the way toward a safer future. "Companies are seeing value and beginning to prioritize responsible AI," Wynn said, and as a result, "systems are going to be safer, secure, [and more] inclusive."
Bourgon isn't convinced and says actions like those recently announced by governments are "far from what will ultimately be required."
He predicts that AI systems could advance to catastrophic levels as early as 2030, and says governments must be prepared to halt AI systems indefinitely until leading AI developers can "robustly demonstrate the safety of their systems."