Britain’s competition watchdog said Thursday that it's opening a review of the artificial intelligence market, focusing on the technology underpinning chatbots like ChatGPT.

The Competition and Markets Authority said it will look into the opportunities and risks of AI as well as the competition rules and consumer protections that may be needed.

AI's ability to mimic human behavior has dazzled users but also drawn attention from regulators and experts around the world concerned about its dangers as its use mushrooms — affecting jobs, copyright, education, privacy and many other parts of life.

The CEOs of Google, Microsoft and ChatGPT-maker OpenAI will meet Thursday with U.S. Vice President Kamala Harris for talks on how to ease the risks of their technology. And European Union negotiators are putting the finishing touches on sweeping new AI rules.

The Competition and Markets Authority in Britain is opening a review of the artificial intelligence market. The investigation will look into the opportunities and risks of AI. (AP Photo/Michael Dwyer, File)

The U.K. watchdog said the goal of the review is to help guide the development of AI to ensure open and competitive markets that don't end up being unfairly dominated by a few big players.

Artificial intelligence "has the potential to transform the way businesses compete as well as drive substantial economic growth," CMA Chief Executive Sarah Cardell said. "It’s crucial that the potential benefits of this transformative technology are readily accessible to U.K. businesses and consumers while people remain protected from issues like false or misleading information."

The authority will examine competition and barriers to entry in the development of foundation models. Also known as large language models, they're a sub-category of general purpose AI that includes systems like ChatGPT.

These models are trained on vast pools of online information, such as blog posts and digital books, to generate text and images that resemble human work. They still have limitations, however, including a tendency to fabricate information.