AI and Data Privacy: Global Responses and Concerns

🚨 Update Alert:

This article was written prior to the recent developments regarding the new EU AI Act. For the latest information, insights, and implications of this significant legislation, please visit our updated coverage here.

As artificial intelligence (AI) technologies continue to advance, concerns around privacy and ethical implications have intensified. Among these concerns is the question: is ChatGPT safe? ChatGPT, a chatbot built on the Generative Pre-trained Transformer (GPT) architecture, is an AI system capable of engaging in human-like conversations. In response, Data Protection Authorities (DPAs) have taken action to address the potential risks associated with this technology.

From guidelines and investigations to enforcement measures, DPAs worldwide have assumed a critical role in regulating ChatGPT and other AI systems. Join us on a journey into the world of AI regulation as we explore the diverse array of responses from Data Protection Authorities across the globe. 

Is ChatGPT safe? 

People are questioning: is ChatGPT safe to use, and why are Data Protection Authorities concerned?

To answer simply: privacy implications and data-handling concerns are at the forefront for DPAs across the globe. Below are several detailed reasons why Data Protection Authorities are raising the alarm over ChatGPT, to help you determine whether ChatGPT is safe to use.

  • Privacy Risks: Data Protection Authorities are expressing concerns regarding the potential privacy risks associated with ChatGPT. They are scrutinizing the collection and processing of personal data during conversations to ensure compliance with data protection regulations, considering the interactive nature of the technology.
  • Data Handling and Consent: DPAs emphasize the need for organizations deploying ChatGPT to handle personal data responsibly. They focus on ensuring explicit consent from individuals, clearly communicating the purpose of data processing, and enabling individuals to exercise control over their data.
  • Transparency and User Awareness: DPAs are keen on transparency in AI interactions. They want individuals to be aware that they are interacting with an AI system rather than a human, ensuring that users have a clear understanding of how their data is used and the implications of engaging with ChatGPT (a minimal sketch of such a disclosure-and-consent gate appears after this list).
  • Bias and Discrimination: The potential for biases in AI-generated conversations raises concerns for DPAs. They emphasize the importance of fairness and non-discrimination, urging organizations to address and mitigate biases that may emerge in ChatGPT’s responses.
  • Misinformation and Manipulation: DPAs are worried about the spread of misinformation or malicious manipulation through AI-generated conversations. They aim to mitigate the risks of social engineering, manipulation, and phishing attempts facilitated by ChatGPT’s persuasive capabilities.
  • Data Security and Unauthorized Access: DPAs highlight the need for robust security measures to protect against data breaches, unauthorized access, and misuse of personal information. They want organizations to implement safeguards to prevent unauthorized parties from exploiting ChatGPT’s access to sensitive data.
  • Regulatory Compliance: DPAs have a responsibility to enforce data protection regulations, such as the GDPR. Given the transformative nature of ChatGPT and its impact on privacy, DPAs are compelled to ensure compliance and hold organizations accountable for their use of the technology.
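
To make the transparency and consent concerns above concrete, here is a minimal, purely illustrative Python sketch of a chatbot session that discloses its AI nature up front and refuses to process messages without explicit, purpose-specific consent. Every name in it (ChatSession, ConsentRecord, call_model) is hypothetical rather than part of any real chatbot API, and a production system would also need persistence, audit logging, and a documented lawful basis.

```python
# Hypothetical sketch only: none of these names come from a real API.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

AI_DISCLOSURE = (
    "You are chatting with an AI system, not a human. "
    "Your messages are processed to generate responses."
)


@dataclass
class ConsentRecord:
    user_id: str
    purpose: str          # the specific purpose the user agreed to
    granted_at: datetime  # timestamped so consent can be audited


class ChatSession:
    def __init__(self, user_id: str) -> None:
        self.user_id = user_id
        self.consent: Optional[ConsentRecord] = None
        print(AI_DISCLOSURE)  # disclose the AI nature before any conversation

    def grant_consent(self, purpose: str) -> None:
        # Record explicit, purpose-specific consent.
        self.consent = ConsentRecord(self.user_id, purpose, datetime.now(timezone.utc))

    def withdraw_consent(self) -> None:
        # Withdrawing must be as easy as granting (GDPR Art. 7(3)).
        self.consent = None

    def send(self, message: str) -> str:
        # Refuse to process messages until consent has been given.
        if self.consent is None:
            raise PermissionError("Explicit consent is required before processing.")
        return call_model(message)  # placeholder for the actual model call


def call_model(message: str) -> str:
    # Stand-in for a real chatbot backend.
    return f"(model response to: {message!r})"


session = ChatSession("user-42")
session.grant_consent(purpose="chatbot conversation")
print(session.send("Hello!"))
```

The point of the sketch is the ordering: disclosure happens before the conversation starts, and consent is checked on every message, so withdrawing it immediately stops processing.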

ChatGPT Faces Global Scrutiny: Summary

ChatGPT, the highly popular chatbot powered by artificial intelligence, is encountering challenges with the European Union’s influential privacy watchdogs. In April 2023, it faced a temporary ban in Italy due to concerns that it could violate the General Data Protection Regulation (GDPR).

EU privacy watchdogs are now contemplating their next steps in examining potential abuses associated with ChatGPT, following the lead of their Italian counterparts. The Irish Data Protection Commission has expressed its intention to coordinate with other EU Data Protection Authorities on this matter, and the Belgian Data Protection Authority believes that ChatGPT’s potential infringements should be discussed at the European level.

Complaints against ChatGPT have already been filed with France’s Data Protection Authority, the CNIL, alleging privacy violations, including breaches of the GDPR.

Advocacy groups, such as the Center for AI and Digital Policy in the U.S. and consumer watchdog BEUC in Brussels, have also called for investigations into OpenAI and ChatGPT, warning that potential harm may occur before the EU’s forthcoming AI rule book is in place.

EU lawmakers are currently negotiating a legal framework for AI technology as part of the draft EU Artificial Intelligence Act. However, the absence of specific legislation on artificial intelligence has empowered data protection regulators to intervene.

Their role as DPAs includes enforcing the GDPR, which governs data collection, protections against automated decision-making, transparency in data usage, the accuracy of personal data, and the right to correction.
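
By way of illustration, here is a toy, in-memory Python sketch of how the data-subject rights just listed (access, correction, objection) might be routed to handlers in application code. USER_DATA and the handler names are invented for this example; a real controller would additionally need identity verification, statutory response deadlines, and audit trails.

```python
# Toy, in-memory illustration of routing GDPR data-subject requests.
# Nothing here is a real API; identity verification, response deadlines
# (one month under Art. 12(3)), and audit logging are omitted for brevity.

USER_DATA: dict[str, dict[str, str]] = {
    "user-42": {"name": "Alice", "email": "alice@example.com"},
}


def handle_access(user_id: str) -> dict[str, str]:
    # Art. 15 (right of access): return a copy of the data held on the user.
    return dict(USER_DATA.get(user_id, {}))


def handle_rectification(user_id: str, field: str, value: str) -> None:
    # Art. 16 (right to rectification): correct inaccurate personal data.
    if user_id in USER_DATA:
        USER_DATA[user_id][field] = value


def handle_objection(user_id: str) -> None:
    # Art. 21 (right to object): stop the processing; modeled here,
    # crudely, as removing the record from the processing store.
    USER_DATA.pop(user_id, None)


print(handle_access("user-42"))
handle_rectification("user-42", "email", "alice@new.example")
handle_objection("user-42")
print(handle_access("user-42"))  # -> {}
```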

Keep reading below to see how regulators are reacting to ChatGPT 👇

🌐 Global By Country Breakdown: DPAs’ Reactions to ChatGPT

🇪🇺 European Union

The EDPB has decided to establish a dedicated task force to facilitate cooperation and information exchange among data protection authorities regarding potential enforcement actions. Read the official press release here →

🇮🇹 Italy

After ChatGPT was temporarily banned in Italy, OpenAI, the company behind ChatGPT, complied with the requirements of the Italian Data Protection Authority (Garante Privacy) and introduced new measures. OpenAI has published a notice explaining how personal data is processed and granting European users the right to object to data processing.

The company has implemented age verification measures and tools that allow users to object to the indexing of their data or to request its modification. OpenAI is allowed to use legitimate interest as the legal basis for training the algorithm, but this remains subject to evaluation by the Garante Privacy.

OpenAI will continue to engage in dialogue with the Garante Privacy to ensure compliance with the GDPR.
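
For illustration, here is a minimal Python sketch of the kind of age gate the Garante asked for: users under 13 are blocked outright, and minors between 13 and 18 are admitted only with verified parental consent. The function names are invented, and the genuinely hard part, reliably verifying the birth date in the first place, is out of scope here.

```python
from datetime import date


def age_in_years(birth_date: date, today: date) -> int:
    # Whole years elapsed, accounting for whether the birthday has occurred yet.
    had_birthday = (today.month, today.day) >= (birth_date.month, birth_date.day)
    return today.year - birth_date.year - (0 if had_birthday else 1)


def may_use_service(birth_date: date, parental_consent: bool = False) -> bool:
    age = age_in_years(birth_date, date.today())
    if age < 13:
        return False              # under 13: blocked outright
    if age < 18:
        return parental_consent   # 13-17: only with verified parental consent
    return True                   # adults may proceed


print(may_use_service(date(2015, 6, 1)))                         # False
print(may_use_service(date(2008, 6, 1), parental_consent=True))  # True
```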

🇫🇷 France

In response to the rapid advancements in artificial intelligence, particularly generative AIs like ChatGPT, the French Data Protection Authority, CNIL (Commission Nationale de l’Informatique et des Libertés), has released an action plan aimed at ensuring the deployment of AI systems that respect individual privacy.

With a long-standing focus on addressing the challenges posed by AI, CNIL’s action plan extends its efforts to encompass generative AIs, large language models, and their derivative applications, including chatbots. The plan revolves around four key objectives:

  1. Understanding the Functioning and Impact: CNIL aims to comprehensively grasp the workings of AI systems and their implications for individuals. This includes evaluating aspects like fairness, transparency, protection against biases and discrimination, and addressing the security challenges posed by these tools.
  2. Enabling Privacy-Friendly AI: CNIL seeks to guide and facilitate the development of AI systems that uphold personal data protection principles. It plans to offer guidance and recommendations to professionals, addressing issues such as data sharing, re-use, and selection, rights of individuals, and data accuracy.
  3. Supporting the AI Ecosystem: CNIL aims to foster innovation within the AI ecosystem in France and Europe by supporting and collaborating with innovative players. This includes offering tailored advice, launching support programs, and engaging in a sustained dialogue with research teams, R&D centers, and companies involved in AI development.
  4. Auditing and Control: CNIL will establish a framework for auditing and controlling AI systems, both prior to and following their deployment. The focus areas for control in 2023 include compliance with regulations on “enhanced” video surveillance, the use of AI in fraud detection, and the investigation of complaints related to AI systems. Notably, CNIL has opened a control procedure and a dedicated working group to analyze the data processing implemented by the OpenAI tool, including the ChatGPT service.

The CNIL’s action plan also includes a dedicated dossier on generative AI, shedding light on its technical functioning, legal questions, ethical challenges, and real-world applications. This additional resource complements existing materials available to professionals and the general public on the CNIL’s website.

As the AI landscape continues to evolve, CNIL’s proactive approach underscores its commitment to ensuring the responsible and ethical deployment of AI systems while protecting individual rights and freedoms.

🇮🇪 Ireland

Helen Dixon, the Data Protection Commissioner of Ireland, has emphasized the importance of careful, considered analysis when it comes to regulating AI technologies. She cautioned against being overly reactionary or hasty in implementing regulations, as doing so may lead to ineffective laws or unnecessary bans that lack durability and validity. Dixon highlighted the need for a measured approach, so that regulations adequately address the complexities of AI while standing the test of time.

Reported here →

🇪🇸 Spain

Spain’s Data Protection Authority has requested that the EDPB assess privacy concerns surrounding OpenAI’s ChatGPT. This request comes amidst increased global scrutiny of AI systems.

Spain’s DPA emphasizes the need for coordinated EU decisions on global processing operations and has asked for ChatGPT to be included on the agenda of the next plenary of the European Data Protection Board.

More details here →

🇩🇪 Germany

In a coordinated effort, the German data protection authorities, led by Prof. Dr. Dieter Kugelmann, the state commissioner for data protection and freedom of information in Rhineland-Palatinate, have taken action against OpenAI, the operator of the popular AI chatbot ChatGPT. The authorities have sent a comprehensive catalog of questions to OpenAI, seeking clarification on various aspects of data protection and compliance. This move is part of the newly established TaskForce ChatGPT at the European level, reflecting the concerns of all EU data protection supervisory authorities.

Prof. Kugelmann, who also leads the TaskForce AI of German data protection supervisory authorities, highlighted the significance of this initiative, stating, “We need information from OpenAI in order to be able to check compatibility with European data protection law. Innovation is good and important, but on the other hand, applicable rules must be observed. The task forces in Germany and the European Union will take care of that.”

The model letter developed by the German data protection supervisory authorities covers a range of crucial topics. It focuses on determining the legal basis for data processing by ChatGPT, ensuring the protection of children’s data, and ascertaining the transparency and adequacy of information provided to users regarding data processing. Transparency is of utmost importance when deploying AI systems, as it enables individuals to exercise their rights effectively.

Access here → (In German)

🇺🇸 United States

🆕 Latest Update

President Joe Biden has issued a significant executive order aimed at enhancing the safety and privacy of artificial intelligence (AI) technology in the United States. The White House unveiled this executive order on Monday, which outlines a series of measures designed to ensure the responsible development and utilization of AI.

One key aspect of the executive order is the requirement for AI companies and developers to adhere to new rules and practices to ensure the safety of AI technology. This includes sharing information about safety tests with the government and developing tools to guarantee the safety, security, and trustworthiness of AI systems.

White House Deputy Chief of Staff Bruce Reed emphasized the global significance of this move, stating, “President Biden is rolling out the strongest set of actions any government in the world has ever taken on AI safety, security, and trust.” The order reflects a comprehensive strategy to harness the benefits of AI while mitigating associated risks.

The executive order also has several implications for federal agencies. It calls for the development of a National Security Memorandum to guide the military and intelligence communities in their use of AI. Additionally, it focuses on protecting user privacy during AI training and addressing concerns related to cyberattacks and fraud attempts through the development of practices and standards.

Equity and civil rights are another central focus of the order. It builds upon previous executive orders to combat algorithmic discrimination, ensuring that AI is not used to discriminate in federal benefit programs, contracting, or within the judicial and law enforcement processes.

Furthermore, the order mandates the White House to establish principles and best practices for addressing AI’s impact on the workforce, examining job displacement and identifying potential uses to supplement specific needs. This information will be compiled into a report on AI’s labor-market implications.

On the international front, the State Department will work to create a “robust international framework” for AI governance, aligning with Vice President Kamala Harris’s involvement in the United Kingdom’s AI Summit.

President Biden’s executive order on AI comes shortly after Senate Majority Leader Chuck Schumer’s “AI Insight Forum,” which aimed to explore regulatory approaches for AI technology while fostering transformative innovation.

The Biden administration has announced that it is inviting public comments on accountability measures for artificial intelligence (AI) systems. Concerns about the impact of AI on national security and education have prompted this move.

During a groundbreaking congressional hearing, OpenAI CEO Sam Altman, along with other prominent figures in the AI industry, expressed their support for increased regulation, setting themselves apart from influential tech companies that have opposed regulatory intervention.

Altman emphasized the potential dangers associated with AI and advocated for additional government regulation. He highlighted how AI advancements could impact various sectors such as labor, healthcare, and the economy, underscoring the need for regulatory measures to prevent and mitigate any negative consequences. Altman emphasized that government intervention through regulations would play a “critical” role in addressing these concerns.

Accompanying Altman as witnesses were IBM Chief Privacy & Trust Officer Christina Montgomery and New York University Professor Emeritus Gary Marcus. Marcus delivered some of the most striking warnings during the hearing, particularly focusing on issues like political manipulation, health misinformation, and hyper-targeted advertising. He suggested the establishment of a Cabinet-level organization dedicated to keeping pace with AI developments and proposed safety reviews akin to those conducted by the Food and Drug Administration as a means of oversight.

Montgomery highlighted the importance of tailoring oversight of AI to different risks, suggesting the implementation of distinct rules for specific use cases based on their potential impact on society. She stressed that the most stringent regulations should be applied to those use cases posing the greatest risks to society.

Reported here →

🇧🇷 Brazil

Following the temporary limitation imposed by the Italian data protection authority on OpenAI’s ChatGPT due to data breach incidents, Brazil’s perspective on the use of similar AI technologies raises concerns about data protection. While Brazil’s National Data Protection Authority has not yet issued specific decisions regarding ChatGPT or similar AI systems, there is a justified expectation that security, transparency, and privacy standards will be consistently observed, so that safe and reliable technologies are offered to the public.

In Brazil, the General Personal Data Protection Law (LGPD) imposes transparency obligations, regulates the processing of children’s data, and prohibits processing without a proper legal basis. Additionally, the Senate is considering Bill No. 21/2020, which establishes principles and guidelines for the development and application of AI in Brazil. The bill proposes procedural obligations to mitigate risks associated with AI technology, including privacy controls, trustworthy testing, prevention of discriminatory practices, and transparency measures.

While regulators in Brazil have not yet introduced specific regulations for AI, the existing norms emphasize the need to address risks and protect children and adolescents in data processing.

Read here → (In Portuguese)

As always, we’re following this evolving case and will keep this post updated with the latest developments. Bookmark this post to make sure that you don’t miss an update!