The UK’s competition watchdog is launching a review of foundation AI models, the technology behind advanced tools like ChatGPT, to gain a better understanding of the market’s opportunities and risks.
The Competition and Markets Authority’s (CMA) preliminary investigation will examine potential applications of foundation AI models and consumer protections, and aims to establish “guiding principles”.
Sarah Cardell, chief executive of the CMA, said: “AI has burst into the public consciousness over the past few months but has been on our radar for some time.”
She added: “It’s a technology developing at speed and has the potential to transform the way businesses compete as well as drive substantial economic growth.”
Key areas of focus for the CMA are AI’s implications for safety, security, copyright, privacy and human rights.
“It’s crucial that the potential benefits of this transformative technology are readily accessible to UK businesses and consumers while people remain protected from issues like false or misleading information,” added Cardell.
Global regulators focus on AI
The review comes as regulators around the world are taking different approaches to advanced AI models. Last month Italy banned ChatGPT over privacy concerns but has since reversed its decision.
The US Federal Trade Commission said it was “focusing intensely on how companies may choose to use AI technology, including new generative AI tools, in ways that can have actual and substantial impact on consumers”.
The EU has also been updating its AI rules in the wake of generative AI models like Google’s Bard and Microsoft-backed OpenAI’s ChatGPT.
Prime Minister Rishi Sunak has previously indicated that the UK will take a lighter touch to AI regulation and last week announced a £100m taskforce to “accelerate” the UK’s generative AI sector and keep pace with rapid advances in technologies like ChatGPT.
“The UK has already set out its stall, promising a balanced, business-friendly framework to encourage innovation and investment, and the CMA’s review supports this approach,” said Tim Wright, tech and AI partner at the London law firm Fladgate. “But as the adage goes, with great power comes great responsibility, hence the CMA’s investigation will consider what, if any, additional guardrails are needed to protect consumers above and beyond the existing regulatory corpus.”
The CMA is inviting expert opinions and evidence until 2 June, and will publish a report on its findings in September.
“With a short turnaround for comments from stakeholders being set to early June, it is clear that the CMA does not intend to let the moss grow under its feet before issuing its views and setting out the rules for this transformative technology,” said Gareth Mills, partner at the law firm Charles Russell Speechlys.
Andrew Bennett, policy principal at Form Ventures, said: “As AI accelerates, there’s a real risk that our regulatory state will not keep up. In that context, it’s entirely reasonable for the CMA to quickly improve its understanding of this nascent, possibly transformative, technology. But this isn’t just about intervention: upgraded capability now could also improve regulatory clarity and prevent overreach later.”
While the review is solely an information-gathering exercise, the CMA has recently been demonstrating its regulatory powers.
Last month it blocked Microsoft’s $68.7bn acquisition of Activision Blizzard due to concerns about the cloud gaming market. The CMA is also conducting a deeper investigation into Broadcom’s $61bn acquisition of VMware.