This is the second briefing on Artificial Intelligence (AI) and its relevance to your business, covering AI regulations in different jurisdictions.

What are the emerging approaches taken by different jurisdictions towards the regulation of AI?

AI regulation in the United Kingdom (UK)

In the UK, there are currently no proposals for new legislation to regulate AI. Instead, the UK Government has published a white paper titled “A pro-innovation approach to AI regulation,” which proposes that existing regulators develop rules and regulations tailored to their specific sectors and remits.

The white paper proposes that every regulator, when setting rules and regulations, adhere to five general principles:

  • Safety, security and robustness
  • Transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

What you need to do:

The current legal position outlined in the UK Government’s white paper means that businesses wishing to use or develop AI technology must first determine what AI-related rules and regulations apply to them.

You should investigate this with any regulator specific to your business and other regulators with a wide remit, such as the UK Information Commissioner’s Office (ICO), with respect to data protection.

AI regulation in the European Union (EU)

The EU has proposed the AI Act, a legal framework governing the development, marketing, and use of AI. The Act classifies AI systems into four risk levels according to their intended use and sets corresponding obligations for providers.

What are the areas of risk you need to know about to avoid fines and other sanctions if you do business in the EU?

Unacceptable risk

AI systems in this category are considered a threat to people and are prohibited outright. Examples include cognitive behavioural manipulation of people or specific vulnerable groups, social scoring, and real-time remote biometric identification systems.

High risk

High-risk AI systems are those used in products covered by the EU’s product safety regulations, such as toys, cars, and medical devices, as well as those that fall into the eight specific areas listed in the AI Act. High-risk AI systems will be required to undergo a conformity assessment.

Limited risk

This includes deepfake systems and chatbots, which must meet certain transparency obligations, such as labelling AI-generated content or disclosing that content has been manipulated.

Minimal risk

Minimal-risk AI systems like spam filters or AI-enabled video games will have no restrictions, but providers should follow voluntary codes of conduct.

AI regulation in the United States (US)

The United States regulates AI through existing federal and state laws, along with various frameworks issued by federal agencies such as the Federal Trade Commission (FTC) and the National Institute of Standards and Technology (NIST).

The recent case of P.M. et al. v. OpenAI LP et al. serves as a prime example of this approach. In this case, a group of unnamed individuals filed a class action lawsuit against OpenAI, the developer of the popular ChatGPT AI product. They claim that OpenAI exploits personal data from millions of internet users of all ages, without their consent or knowledge, to train and develop its products, and that in doing so it violates several existing federal and state laws, such as the Computer Fraud and Abuse Act. As of October 2023, the case is still ongoing, but it illustrates the type of issue we can expect to be tested in the courts.

For further information on the legal implications of Artificial Intelligence and how it may impact your business, please do not hesitate to get in contact on 0330 0539 759.


Edited by

Dean Drew
