I helped build Sophia the Robot. We should not be scared of AI for these 5 reasons
A pause of AI development would be a badly wrong move
The Future of Life Institute has issued a petition to pause the development of GPT-5 and similar Large Language Models (LLMs).
Their anxieties are understandable, but I believe they are greatly overblown. I’ve heard similar fears about the advent of Artificial General Intelligence expressed off and on since I introduced the term AGI in 2005, but I think a pause would be a badly wrong move in the current situation, for several reasons.
LLMs are limited, and the threats they pose are limited
Let me first emphasize something that’s been mostly forgotten in the panic: Large Language Models can’t become Artificial General Intelligences.
LLMs utterly lack the sort of cognitive architecture needed to support human-level AGI. The vast majority of AI researchers know this. LLMs don’t even beat more traditional machine learning models at most linguistic tasks, and suffer from numerous major limitations, including:
- Inability to distinguish truth from ‘hallucination.’
- Constrained creativity. LLMs have a broader set of capabilities than previous narrow AIs, but this breadth is limited. They cannot intelligently reason beyond their experience-base. They only appear broadly capable because their training base is really enormous and covers almost every aspect of human endeavor.
- Inability to effectively construct lengthy reasoning chains.
My own AGI development effort, OpenCog Hyperon, uses LLMs for pattern recognition and synthesis, and combines them with other AI methods such as symbolic logical inference and evolutionary learning. Many other teams around the world are pursuing similar projects – using LLMs as components of AGIs. But LLMs alone are not, and cannot be, AGI. Therefore, pausing LLM research is not pausing AGI research.
AI can heal as well as harm
We should pause the development of a technology only if its downsides are clear, certain, and imminent, and its upsides are few or indistinct. LLMs meet neither of these criteria.
The open letter fails to specify any concrete risk, instead meandering in nebulous rhetoric like, "Should we risk loss of control of our civilization?" Such vague feelings do not justify shutting down potentially beneficial research. Compare that to the extremely direct risks of, say, briefcase nukes or genetically engineered pathogens.
These vague risks are balanced by concrete benefits. Consider risks like cancer, like climate change, like aging and death, like global child hunger. These are real. And a superhumanly capable AGI could – likely will – cure cancer and mental illness, prolong human healthspan, solve climate change, develop space travel, and end the suffering caused by material poverty.
Doomsayers love to focus on hypothetical scenarios in which AGI causes human extinction – but there is no reason to believe these are realistic or likely. One can spin up equally dramatic hypothetical scenarios in which AGI saves us from extinction. And we can become extinct with no help from AGI (e.g. from a nuclear war, a bioengineered pandemic or a meteor strike). Why obsess over movie-blockbuster AI disaster scenarios instead of the real, concrete AI benefits that are near at hand?
Pausing change does not solve the problems change brings
Technology has always moved forward entangled with the legal, political, and economic aspects of the world. Only in rare cases is it possible to pause one leg of progress while the other legs march forward, and a technology like AI, with a diffuse definition and a broad variety of massively impactful, immediate practical applications, is clearly not one of those unusual cases.
Take the problem of AI leading to unemployment. If all work is automated, some sort of post-work economy will emerge. But this post-work world can’t be designed in a vacuum while the automation technologies are paused. We need the technology to be deployed in reality, so that society can adapt to it.
A moratorium is impractical
It will be impossible to ensure 100 percent compliance with an AI moratorium around the globe. Some players will pause – and they will fall behind. Other, perhaps less ethical, players will race ahead.
Enforcing even simpler things like WTO agreements has proved barely viable on a global scale. Bans on nuclear tests or bioweapons development work as well as they do because those technologies offer governments little immediate benefit toward economic and other non-military goals. AI is the opposite in this regard.
It’s ironic to note that Elon Musk, one of the leading forces behind the proposed moratorium, has recently directed his company Twitter to purchase a large number of GPUs, apparently for an LLM project. The tension between anxiety and economic opportunity is evident.
We should be focusing on how we build beneficially, not stopping building altogether
Open and democratic governance of emerging AGIs will help them develop democratic, humane values, in the tradition of Linux or Wikipedia. This is the raison d’être of SingularityNET, a project I founded and have led since 2017.
Human values need to be inculcated into the AGI through its training data, by raising it on value-driven activities like education, medical care, science, and the creative arts.
I believe that, with the proper governance and training, we can engineer machines that are supercompassionate as well as superintelligent, making the benevolent-AGI scenarios more likely. Figuring out proper governance and training will take some work, but there is no reason to believe that pausing AI development for 6 months at the current stage is going to make this work go better or bring it to a better outcome.
Humans and AGIs should move onward together into the next era. They should do this while helping each other with their knowledge, their practical capability, and their values. The most sensible route forward involves developing open AGIs that are of service to humanity, and creating appropriate governance and training methodologies as we go.
In his 2005 book The Singularity Is Near, Ray Kurzweil predicted human-level AGI by 2029, a prediction that now seems very likely, perhaps even pessimistic. Once this AGI can do engineering as well as a good team of human AI engineers, it will likely rewrite its own codebase and design itself new hardware within a year or less, creating a next-generation AI that will upgrade itself even more, even faster, and so on in an upward spiral.
If we’re really at the dawn of superhuman intelligence, caution and care are obviously called for. But we should not be scared off just because the future feels weird and has complex pluses and minuses. That is the nature of revolution.