The rapid advance of generative AI technologies like OpenAI’s ChatGPT and other chatbots has captured the attention of regulators in the US, EU, and UK. Whether regulators can place guardrails on these technologies or whether the AI genie is already out of the bottle remains to be seen. Let’s take a look at the efforts to regulate AI technologies and the related privacy and data security concerns.
A Brave New AI World
Most countries agree that AI poses data privacy and security risks to users. However, they are taking different approaches to addressing those concerns. For example, the Italian data protection authority, Il Garante, recently imposed a temporary ban on ChatGPT, citing issues such as:
- Transparency
- The lawful basis for processing
- Inaccurate information about data subjects and their rights
- Age filtering
Il Garante lifted the ban on April 28 after OpenAI put compliance measures in place, such as:
- Updating its privacy policy
- Providing more information about user data controls (including how to export and delete ChatGPT data)
- Creating an opt-out for users who do not want their personal data to be used
At the same time, EU regulators are investigating whether OpenAI is in compliance with the General Data Protection Regulation (GDPR). Other EU member states are also looking into some of OpenAI’s practices in the context of the GDPR:
- Spain recently opened a formal investigation into OpenAI
- Germany is questioning potential data privacy and security risks
- The French National Commission on Informatics and Liberty (CNIL) is investigating complaints
Moreover, the EU has adopted a “pan-EU” cooperation approach. In particular, the European Data Protection Board has launched a task force focused on the concerns around generative AI.
Ultimately, companies leveraging generative AI must consider how the technology fits within the GDPR. Questions to ask include: Are you a controller, a processor, or a joint controller? And how does that role affect how the GDPR applies to your business?
Notably, the GDPR is not the only regulation targeting AI. Lawmakers in Brussels are also considering the AI Act, which would have an additional impact on generative AI applications and chat systems. The U.K. has also issued guidance on AI and data protection, though with a more “pro-innovation” attitude than EU nations.
Generative AI Regulatory Developments in the U.S.
U.S. regulation of generative AI is not as robust as the EU’s. At this juncture, the U.S. Senate is considering rules on AI, and the Biden Administration recently released a Blueprint for an AI Bill of Rights, outlining principles on data privacy and safety and protections against discrimination in AI tools.
Most U.S. data privacy regulation has been enacted at the state level, as several states now have data privacy laws, some of which include facets of AI regulation. On top of this patchwork approach, federal agencies such as the Federal Trade Commission may have enforcement oversight of AI and automated systems under their existing legal authorities.
The Takeaway
As AI technologies advance, companies must examine their data privacy and security practices at the board, executive, and management levels to ensure compliance with existing and pending regulations. The best way for businesses worldwide to implement suitable AI guardrails is to consult an experienced domestic and international cyber law attorney.