Italy’s Data Protection Authority Raises Data Privacy Concerns Over ChatGPT


The Garante, Italy’s data protection regulator, has raised concerns that OpenAI’s ChatGPT may infringe the EU’s General Data Protection Regulation (GDPR).

Italy imposed a temporary ban on ChatGPT last year, prompting an investigation that identified concerns about personal data handling and age verification.

The investigation by the Italian Data Protection Authority (Garante per la protezione dei dati personali) was prompted by incidents in which users’ messages and payment details were accidentally exposed, as well as by the lack of effective mechanisms to prevent minors from accessing inappropriate content.

The authority also questioned the data-gathering procedures OpenAI used to train ChatGPT and highlighted concerns about the model’s tendency to generate inaccurate personal information about individuals.

AI “hallucinations” have defamed many people, in some cases falsely accusing them of crimes such as embezzlement or sexual harassment. AI developers now face libel lawsuits whose outcomes remain unresolved.

OpenAI responded to these charges by pledging to comply with GDPR and other privacy regulations and to keep user data secure.

OpenAI stated, “We believe our practices align with GDPR and other privacy laws, and we take additional steps to protect people’s data and privacy.” The company says it minimizes personal data in its training processes and has designed its systems to reject requests for private or sensitive information.

Global AI Investment Scrutiny
The US Federal Trade Commission (FTC), led by Chair Lina Khan, is scrutinizing the ties between major AI startups such as OpenAI and tech giants such as Microsoft, Amazon, and Google.

The Mozilla Foundation and other civil rights groups questioned the Microsoft–OpenAI partnership this week, urging the European Commission to investigate it for potential antitrust violations.

The FTC is also investigating whether these partnerships give larger corporations undue influence or privileged access, undermining fair competition.

As these technologies become more widespread, a legal framework that balances innovation, ethics, privacy, and market fairness is needed, but such a framework has yet to emerge.

These investigations and regulatory measures may set major precedents for AI governance in the future, impacting AI research and its incorporation into the global digital economy.
