Tech expert says 'existential' fears from AI are overblown, but sees 'very disturbing' workplace threats
Michael Wooldridge will host the annual Royal Institution Christmas lectures to demystify AI
A U.K.-based tech expert said he is not losing sleep over the recent growth of artificial intelligence, but he does have concerns about AI becoming a hellish boss that oversees an employee’s every move.
Michael Wooldridge is a professor of computer science at the University of Oxford who has been a leading expert on AI for at least 30 years. He spoke with The Guardian this month about the lectures he will lead this winter to demystify artificial intelligence, and about the concerns he does have with the technology.
He told the outlet that he does not share the worries of some AI experts who warn the powerful systems could one day lead to the downfall of humanity. Instead, one of his concerns is AI morphing into a hellish boss that monitors employees’ emails, offers constant feedback and perhaps even decides which human employees to fire.
"There are some prototypical examples of those tools that are available today. And I find that very, very disturbing," he told The Guardian.
AI has already staked its claim in a handful of industries, helping medical professionals diagnose cancer, detecting fraud at financial companies and even drafting legal briefs that cite relevant case law.
"I do lose sleep about the Ukraine war, I lose sleep about climate change, I lose sleep about the rise of populist politics and so on," he said. "I don’t lose sleep about artificial intelligence."
Wooldridge explained to Fox News Digital in an email that "existential concerns about AI are speculative" and that "there are very much more immediate and concrete existential concerns right now."
"Top of these is escalation in Ukraine - that’s a very real possibility that means nuclear war is surely closer now than at any time in 40 years. So, if one wants to lose sleep over SOMETHING, I think that is a much more important issue," he said.
Wooldridge did say that the proliferation of AI and its growing intelligence bring other risks, such as bias and misinformation.
"It can read your social media feed, pick up on your political leanings, and then feed you disinformation stories in order to try to get you for example, to change your vote," he said.
Wooldridge, however, said users should arm themselves against such risks by viewing AI through a skeptical lens, and he argued the companies behind the tech need to be transparent with the public.
"I don’t discount existential concerns about AI, but to take them really seriously would need to see a genuinely plausible scenario for how AI might represent a threat (not just "it might be cleverer than us")," he added in comment to Fox News Digital.
The Oxford professor will lead the Royal Institution Christmas lectures this December, a prestigious U.K. public science series that has explored scientific topics since it launched in 1825. This year he will tackle explaining artificial intelligence to the public, highlighting that 2023 marks "the first time we had mass market, general purpose AI tools, by which I mean ChatGPT."
"It’s the first time that we had AI that feels like the AI that we were promised, the AI that we’ve seen in movies, computer games and books," he said.
ChatGPT, the popular chatbot from OpenAI that can mimic human conversation, exploded in use this year, reaching an estimated 100 million monthly active users by January and setting a record at the time as the fastest-growing consumer application.
"In the [Christmas] lectures, when people see how this technology actually works, they’re going to be surprised at what’s actually going on there," Wooldridge said. "That’s going to equip them much better to go into a world where this is another tool that they use, and so they won’t regard it any differently than a pocket calculator or a computer."
The lectures will include a Turing test, which probes whether a machine demonstrates human-like intelligence. Human judges hold a written conversation, and if they cannot tell whether they are corresponding with a person or a chatbot, the machine could be said to have matched human-like intelligence, The Guardian reported.
Wooldridge, however, pushed back, saying the test is not well suited to making such a determination.
"Some of my colleagues think that, basically, we’ve passed the Turing test," Wooldridge told The Guardian. "At some point, very quietly, in the last couple of years, the technology has got to the point where it can produce text which is indistinguishable from text that a human would produce."
"I think what it tells us is that the Turing test, simple and beautiful and historically important as it is, is not really a great test for artificial intelligence," he added.
The Christmas series will begin filming on Dec. 12 before being broadcast on BBC Four between Christmas and New Year.
"I want to try to demystify AI, so that, for example, when people use ChatGPT they don’t imagine that they are talking to a conscious mind. They aren’t!" Wooldridge told Fox of the upcoming lectures. "When you understand how the technology works, it gives you a much more grounded understanding of what it can do. We should view these tools – impressive as they are – as nothing more than tools. ChatGPT is immensely more sophisticated than a pocket calculator, but it has a lot more in common with a pocket calculator than it does a human mind."