Italy’s new rules for ChatGPT could become a template for the rest of the EU


Last month, Italy became the first Western country to temporarily ban ChatGPT within its borders.

Prompted by a data breach that occurred on March 20, the Italian data protection agency, known as Garante, accused OpenAI of “unlawful” collection of personal data — against the EU’s General Data Protection Regulation (GDPR) — and the absence of an age verification system for minors.

It consequently ordered the US-based company to stop offering access to ChatGPT in the country.

Now, Garante has announced nine measures OpenAI must comply with for the ban to be lifted. These can be summarised in five main demands:
Information notice

OpenAI will have to publish an information notice detailing ChatGPT’s data processing (whether necessary for its operation or for the training of its algorithms), as well as the rights afforded to data subjects, including users and non-users alike.

The information notice must be easily accessible and positioned so as to be immediately visible to users when they access the service and before they sign up.

Exercising data rights

The Italian watchdog is also asking for a new set of tools that will enable both users and non-users to have control over how their data is handled.

They will be able not only to object to OpenAI’s processing of their personal data for training purposes, but also to request the correction of inaccurate personal information. Where correction isn’t technically feasible, they can ask for the data to be deleted instead.

Legal basis

Regarding the legal basis for ChatGPT’s data processing for algorithm training, Garante has narrowed down the available options to two: obtaining consent or demonstrating legitimate interests.

This means the agency has ruled out performance of a contract as a legal basis, a ground that in practice would allow the processing of personal data in exchange for access to OpenAI’s service.

Minor protection

According to Garante’s orders, all new and existing users must pass through an age gate when accessing ChatGPT, so that underage users can be filtered out of the service.

OpenAI is also required to develop age verification tools that will prevent access for users aged under 13 as well as users aged between 13 and 18 who can’t provide parental consent.

Awareness campaign

OpenAI is to promote a “non-marketing” campaign on all the country’s main mass media, informing Italians that their personal data may have been used for ChatGPT’s training, while raising awareness of the new information policy and attached data rights.

The road ahead

Garante has given OpenAI until April 30 to fulfil most of its demands. However, the US-based company has been granted a more generous timeline for the awareness campaign, with a deadline of May 15. Furthermore, it has until May 31 to submit a plan for the age verification system, which must be in place by September 30.

If these measures are sufficiently implemented, the Italian agency will lift ChatGPT’s ban, but it may decide “to take additional or different measures if this proves necessary.”

With other data protection agencies, including those of France, Ireland, and Spain, paying close attention to these developments, Italy could set a European precedent for the regulation not just of ChatGPT, but of the use of increasingly widespread large language models more broadly.