• May 4, 2023
  • Thomas Waner

The UK’s competition watchdog, the Competition and Markets Authority (CMA), has launched an initial review of AI foundation models, including the large language models (LLMs) that underpin OpenAI’s ChatGPT and Microsoft’s new Bing, as well as generative AI models such as OpenAI’s DALL-E and Midjourney. The CMA said the review would examine competition and consumer protection considerations in the development and use of AI foundation models, with the aim of understanding how these models are developing. The review will also assess the conditions and principles that should guide their development and use in the future.

The CMA is proposing to publish the review in early September and has given stakeholders until June 2 to submit responses to inform its work. The regulator noted that foundation models have the potential to transform much of what people and businesses do. To ensure that innovation in AI continues in a way that benefits consumers, businesses, and the UK economy, regulators have been asked to consider how the development and deployment of AI can be supported against five overarching principles: safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

The CMA’s initial review will examine how the competitive markets for foundation models and their use could evolve, explore what opportunities and risks these scenarios could bring for competition and consumer protection, and produce guiding principles to support competition and protect consumers as AI foundation models develop.

The CMA’s CEO, Sarah Cardell, said that AI has burst into the public consciousness over the past few months, and it is a technology developing at speed with the potential to transform the way businesses compete, as well as drive substantial economic growth. Cardell stressed that it is crucial for the potential benefits of this transformative technology to be readily accessible to UK businesses and consumers while people remain protected from issues like false or misleading information. The CMA’s goal is to help this new, rapidly scaling technology develop in ways that ensure open, competitive markets, and effective consumer protection.

The review comes after the UK government signaled its preference to avoid setting any bespoke rules or oversight bodies to govern the use of artificial intelligence at this stage. However, ministers said existing UK regulators, including the CMA, would be expected to issue guidance to encourage safe, fair, and accountable uses of AI.

The CMA says its initial review of AI foundation models is in line with instructions in the government’s AI white paper, in which existing regulators were asked to conduct detailed risk analysis so they are in a position to carry out potential enforcement actions, using their existing powers, against dangerous, unfair, and unaccountable applications of AI.

The regulator also points to its core mission of supporting open, competitive markets as another reason for taking a look at generative AI now. The competition watchdog is set to gain additional powers to regulate Big Tech in the coming years, under plans taken off the back-burner by Prime Minister Rishi Sunak’s government last month, when ministers confirmed they would move forward with a long-trailed ex ante reform aimed at the market power of digital giants.

The CMA’s Digital Markets Unit, up and running since 2021 in shadow form, is expected to gain legislative powers in the coming years to apply proactive pro-competition rules tailored to platforms deemed to have strategic market status (SMS). Providers of powerful foundation AI models may, down the line, be judged to have SMS, meaning they could face bespoke rules on how they must operate vis-a-vis rivals and consumers in the UK market.

The UK’s data protection watchdog, the ICO, also has its eye on generative AI. It is another existing oversight body that the government has tasked with paying special attention to AI under its plan for context-specific guidance to steer the development of the tech through the application of existing laws.

Developers working on generative AI need to pay attention to data protection obligations from the outset of their projects, warns Stephen Almond, executive director of regulatory risk at the Information Commissioner’s Office (ICO). Almond advises companies to take a data-protection-by-design-and-default approach, stressing that this is not optional: if personal data is being processed, it is required by law.

In the EU, data protection law already applies to AI, and the bloc’s incoming AI rulebook, which aims to regulate foundation models, is still under negotiation. Parliamentarians are proposing a layered approach to tackle safety risks, the complexity of responsibilities across AI supply chains, and specific content concerns associated with generative AI, such as copyright. Meanwhile, the European Data Protection Board has set up a task force to coordinate investigations by different data protection authorities into AI chatbots like ChatGPT, which is also under investigation by Spain’s privacy watchdog. Italy’s authority intervened earlier over the chatbot; as a result, OpenAI issued a series of privacy disclosures and controls last month.

Thomas Waner

A writer from India interested in artificial intelligence, with good programming experience. He currently works for us as a writer, manager, and reviewer. You can contact him via e-mail: [email protected]
