This is the third briefing in our series on AI and its impact on businesses. This article looks at AI and data protection.
What does AI have to do with data protection law?
AI systems rely heavily on data, including personal and sensitive information, to learn patterns, make predictions, and inform decisions. This dependence on data necessitates a robust framework for data protection to ensure the privacy and rights of individuals are upheld.
What guidance is there on AI and data protection?
The Information Commissioner’s Office (ICO), the regulator responsible for overseeing data protection in the UK, has published several AI-specific guidance documents, including the “Guidance on AI and data protection” and the “AI and data protection risk toolkit v.1.0.” Although it is not a statutory code of practice, the guidance provides an overview of how data protection law applies to AI systems that process personal data.
The guidance, aimed at compliance-focused individuals and technology specialists, aligns to the seven data protection principles set out in the UK General Data Protection Regulation (UK GDPR), and is structured as follows:
This section looks at accountability, governance, and how organisations must demonstrate that AI systems in use process personal data in accordance with data protection legislation. It considers data protection impact assessments, setting a meaningful risk appetite and controller/processor responsibilities.
This section covers the application of the lawfulness, fairness and transparency principles to AI systems, focusing on how organisations can identify the lawful basis for any processing undertaken. The ICO’s recommendations in this section include ensuring organisations document and break down each processing operation in order to correctly identify the lawful basis for processing data, and ensuring any AI system processes data fairly.
This section examines the new risks and challenges that AI raises for security and data minimisation, and outlines a number of techniques to help reduce risk in AI development and deployment.
The final section of the guidance covers compliance with individual rights, including rights relating to solely automated decision-making with legal or similarly significant effects. It also covers the role of meaningful human oversight.
What should UK businesses do?
Businesses wishing to use or develop AI technology must ensure that they are transparent about their use of AI and provide clear and concise information to individuals about how their personal data is being collected, used, and processed.
Where consent is the lawful basis relied on, they must also ensure that consent has been obtained from individuals before their personal data is processed for the specific purposes for which it is collected, and that individuals are given the option to consent explicitly to the processing of their data.
Ultimately, all businesses wishing to use or develop AI technology will need to incorporate the ICO’s AI-specific guidance into their compliance processes to ensure that their use of AI is consistent with the UK GDPR and respects individuals’ rights over their personal data.
Other articles in the series: