This is the second briefing on Artificial Intelligence (AI) and its relevance to your business, covering AI regulations in different jurisdictions.

What are the emerging approaches taken by different jurisdictions towards the regulation of AI?

AI regulation in the United Kingdom (UK)

In the UK, there are no proposals to issue new legislation to regulate AI. However, the UK Government has published a white paper, “A pro-innovation approach to AI regulation”, which proposes that existing regulators develop rules and regulations tailored to their specific sectors and roles.

It is proposed that every regulator, in prescribing rules and regulations, must adhere to five general principles:

  • Safety, security and robustness
  • Transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

What you need to do:

The current legal position outlined in the UK Government’s white paper means that businesses wishing to use or develop AI technology must first determine what AI-related rules and regulations apply to them.

You should investigate this with any regulator specific to your business and other regulators with a wide remit, such as the UK Information Commissioner’s Office (ICO), with respect to data protection.

AI regulation in the European Union (EU)

The EU has proposed the AI Act, a legal framework that sets out rules for the development, marketing, and use of AI in the EU. The proposed Act classifies AI systems into four levels of risk based on the intended use of the system and establishes corresponding obligations for providers.

What are the areas of risk you need to know about to avoid fines and other sanctions if you do business in the EU?

Unacceptable risk

Unacceptable-risk AI systems are those considered a threat to people, and they are prohibited. This category includes cognitive behavioural manipulation of people or specific vulnerable groups, social scoring, and real-time and remote biometric identification systems.

High risk

High-risk AI systems are those used in products covered by the EU’s product safety regulations, such as toys, cars, and medical devices, as well as those that fall into the eight specific areas listed in the AI Act. High-risk AI systems will be required to undergo a conformity assessment.

Limited risk

This includes deepfake systems and chatbots, which will be subject to certain transparency obligations, such as labelling AI-generated content or disclosing that content has been manipulated.

Minimal risk

Minimal-risk AI systems, such as spam filters or AI-enabled video games, will be permitted without restriction, but providers will be encouraged to adhere to voluntary codes of conduct.

AI regulation in the United States (US)

The United States regulates AI through existing federal and state laws, along with various frameworks issued by federal agencies such as the Federal Trade Commission (FTC) and the National Institute of Standards and Technology (NIST).

The recent case of P.M. et al v. OpenAI LP et al serves as a prime example of this approach. A group of unnamed individuals filed a class action lawsuit against OpenAI, the developer of the popular ChatGPT product. They claimed that OpenAI exploited the personal data of millions of internet users of all ages, without their consent or knowledge, to train and develop its products, in violation of several existing federal and state laws, including the Computer Fraud and Abuse Act. As of October 2023, the case is still ongoing, but it illustrates the type of issue we can expect to be tested in the courts.

For further information on the legal implications of Artificial Intelligence and how it may impact your business, please do not hesitate to get in touch at dean.drew@LA-law.com or on 0330 0539 759.

Edited by

Dean Drew
