Artificial intelligence won't likely reach human-like levels without this one key component, study finds
AI systems connected to robots and programmed to evolve through experience considered crucial to achieving human-like cognition
Artificial intelligence will likely not reach human-like cognition unless the programs are connected to robots and designed with evolutionary principles, researchers in the U.K. found.
Revolutionary AI platforms that mimic human conversation, such as the wildly popular ChatGPT, will never reach human-like cognition, despite their large neural networks and the massive datasets they are trained on, if they remain disembodied and exist only on computer screens, researchers at the University of Sheffield reported in a new study.
ChatGPT, a chatbot that simulates conversation with human users who provide it with prompts, learns in ways loosely analogous to how children learn, through a combination of unsupervised and supervised learning, refined by human feedback. In unsupervised learning, the system discovers patterns in unlabeled data on its own, as ChatGPT's underlying model does when it learns the structure of language from raw text. Supervised learning is more like children attending school and learning required material: the model is trained on inputs paired with pre-established outputs that it learns to reproduce. Trial-and-error feedback, in which a human tells the chatbot an answer to a prompt was wrong and the system builds on that correction, is a form of reinforcement learning layered on top.
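To make the distinction concrete, here is a toy Python sketch, purely illustrative and not drawn from the Sheffield study or from ChatGPT's actual training code, that contrasts the three modes on a made-up one-dimensional classification task. Every dataset, number, and function name in it is invented for the example.

```python
# Toy illustration (not the researchers' code) of the three learning
# modes described above, on a made-up 1-D "sentiment score" task.

# --- Supervised learning: inputs paired with pre-established outputs ---
labeled = [(0.1, "neg"), (0.2, "neg"), (0.8, "pos"), (0.9, "pos")]

def fit_threshold(examples):
    """Learn a decision boundary from labeled (score, label) pairs."""
    neg = [x for x, y in examples if y == "neg"]
    pos = [x for x, y in examples if y == "pos"]
    return (max(neg) + min(pos)) / 2  # midpoint between the two classes

threshold = fit_threshold(labeled)  # -> 0.5

# --- Unsupervised learning: find structure in data with no labels ---
unlabeled = [0.11, 0.15, 0.22, 0.78, 0.85, 0.90]

def two_means(xs, iters=10):
    """Tiny 1-D k-means: discovers two clusters without any labels."""
    lo, hi = min(xs), max(xs)
    for _ in range(iters):
        a = [x for x in xs if abs(x - lo) <= abs(x - hi)]
        b = [x for x in xs if abs(x - lo) > abs(x - hi)]
        lo, hi = sum(a) / len(a), sum(b) / len(b)
    return lo, hi

clusters = two_means(unlabeled)

# --- Feedback (reinforcement-style) learning: trial and error ---
preference = {"pos": 0.5, "neg": 0.5}

def feedback(answer, was_wrong, lr=0.2):
    """Nudge the model away from answers a human flags as wrong."""
    preference[answer] -= lr if was_wrong else -lr

feedback("pos", was_wrong=True)  # a human says this answer was wrong

print(threshold, clusters, preference)
```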
University of Sheffield professors of computer science Tony Prescott and Stuart Wilson found that although AI can emulate aspects of how humans learn, such programs are unlikely to fully think like humans unless they are given the means to artificially feel and sense the real world.
"ChatGPT, and other large neural network models, are exciting developments in AI which show that really hard challenges like learning the structure of human language can be solved. However, these types of AI systems are unlikely to advance to the point where they can fully think like a human brain if they continue to be designed using the same methods," Prescott said, according to a University of Sheffield press release on the research.
The study, published in the research journal Science Robotics, argued that human intelligence develops from the complicated, interacting subsystems of the brain, an architecture all vertebrates share. That architecture, coupled with learning and improvement through real-world experience, shaped over the course of evolution, is rarely incorporated when AI systems are built, the researchers argued.
"It is much more likely that AI systems will develop human-like cognition if they are built with architectures that learn and improve in similar ways to how the human brain does, using its connections to the real world. Robotics can provide AI systems with these connections – for example, via sensors such as cameras and microphones and actuators such as wheels and grippers. AI systems would then be able to sense the world around them and learn like the human brain," Prescott continued.
"[S]uch AIs might be good at certain kinds of sensing, thinking and planning but poor at understanding and reasoning about the ethical consequences of their actions. It is therefore important that we think carefully about AI safely as we build these more general-purpose systems, and put safety at the heart of the AIs operating system."
Prescott added in comments to Fox News Digital that "a significant risk" arises when a system's learning is not transparent, and said he would like to "see greater transparency from companies who are developing AI, alongside better governance, which needs to be international to be effective."
"AIs that are not transparent could behave in ways we don’t expect. By applying an understanding of how real brains control real bodies, we think these systems could be made more transparent, and we could advance towards having AIs that are better able to explain how they have made decisions," Prescott said.
The professor also noted that there could be risks around a "kind of general-purpose intelligence" that could "match or exceed human ability in some domains, but will likely be very under-developed in others."
"For example, such AIs might be good at certain kinds of sensing, thinking and planning but poor at understanding and reasoning about the ethical consequences of their actions. It is therefore important that we think carefully about AI safely as we build these more general-purpose systems, and put safety at the heart of the AIs operating system. This should be possible. Just as we are able to make planes, cars and power stations safe we should be able to do the same for AIs and robots. I think this also means we will need negulation, as we have in these other industries, to ensure that safety requirements are properly addressed," he explained.
The researchers said there has been some progress on building AI platforms for robots that would give the tech a direct line to the real world, but that the platforms are still a long way off from mimicking the architecture of the human brain.
"Efforts to understand how real brains control bodies, by building artificial brains for robots, have led to exciting developments in robotics and neuroscience in recent decades. After reviewing some of these efforts, which have mainly focussed on how artificial brains can learn, we think the next breakthroughs in AI will come from mimicking more closely how real brains develop and evolve," Wilson said.