Italy Informs Chatbot Creator OpenAI That ChatGPT Breached European Privacy Regulations


Italy’s data protection agency, Garante, has told OpenAI that its AI chatbot, ChatGPT, may be breaking the EU’s strict data privacy rules. Garante began investigating ChatGPT last year and even temporarily banned it in Italy. That investigation has now turned up evidence suggesting the chatbot may be violating privacy regulations.

OpenAI now has 30 days to respond to these claims. The company has said that it’s ready to work with the Italian authorities and believes that it follows all privacy laws. It also stressed that it tries to use as little personal data as possible when training systems like ChatGPT, and it doesn’t allow requests for private or sensitive information about people.

Last year, OpenAI met several conditions set by Garante to have the ban on ChatGPT lifted. The ban was originally imposed because some users’ messages and payment details were exposed, and because there was no way to verify users’ ages, which risked exposing children to inappropriate responses.

Garante also questioned why OpenAI needed to collect so much data to train ChatGPT’s algorithms and was worried about the system possibly creating false information about people.

As AI systems like ChatGPT become more popular, they are drawing closer scrutiny from regulators in the US and Europe. The US Federal Trade Commission is examining the relationships between AI startups OpenAI and Anthropic and their investors, which include Amazon, Google, and Microsoft. Competition authorities in the EU and Britain are also reviewing Microsoft’s investments in OpenAI.

The EU is also developing the AI Act, the world’s first comprehensive set of AI regulations. All 27 EU member states are expected to approve a version of the bill on Friday.