
Currently, the companies that collect our data are in control of it.

- Suzan K. DelBene, Democratic Congresswoman from Washington

Welcome to Snippets—OpenAI is, once again, facing scrutiny in the EU. A complaint sent to the Polish DPA by private citizen and privacy researcher Lukasz Olejnik details the many ways in which the ChatGPT creator is allegedly violating EU privacy laws.

Plus, with the DSA now in force, tech companies are bracing for an uptick in enforcement, a Democratic Congresswoman makes the case that effective AI regulation depends on passing a federal privacy law, regulators from 12 countries sent a letter telling social media companies to clean up data-scraping on their sites, and more.


AI in the EU

OpenAI accused of GDPR violations


Leon Neal / Getty Images

OpenAI, the creator of ChatGPT, has been accused of systematically violating the General Data Protection Regulation (GDPR) in a formal complaint written by privacy researcher Lukasz Olejnik and filed with the Polish data protection authority.
  • The complaint alleges violations across several areas of the GDPR, including lawful basis, transparency, fairness, data access rights, and privacy by design.
  • The complaint also claims OpenAI failed to consult with regulators prior to launching ChatGPT in Europe.
  • Earlier this year, Italy's privacy watchdog temporarily ordered OpenAI to stop processing data locally due to concerns about lawful basis, information disclosures, and child safety.
  • Just this week, OpenAI released an enterprise version of ChatGPT, which is purported to provide greater privacy and security protections.
TRANSCEND NEWS

Join hundreds of your peers in the Privacy Pulse community 👋

Privacy Pulse is an invite-only community where privacy professionals can crowdsource solutions to their biggest challenges, share or find a new role, and expand their professional network.

To make sure our community is valuable, thriving, and safe, we ask that everyone submit a brief application to join. All applications will be reviewed within 24 hours.

Digital Services Act

How the DSA is changing big tech regulation


Christian Hartmann (Reuters)

The European Union’s (EU) Digital Services Act (DSA) went into effect last Friday, August 25. This comprehensive new law focuses on user protection and the prevention of harmful content, imposing stringent regulations on tech platforms operating within Europe.
  • The DSA requires that tech giants operate more transparently, including providing more data on content moderation processes, user identification for pornography sites, and clarification on content recommendation mechanisms.
  • It also targets manipulative "dark patterns" and restricts surveillance advertising for minors.
  • The DSA primarily applies to 19 platforms, including Google and Meta, each of which has more than 45 million users in the EU.
  • DSA enforcement measures include fines of up to 6% of a company's global revenue and possible bans from the EU market.

OPINION

Why AI regulation needs a privacy law


Lionel Bonaventure / AFP via Getty Images

The rapid integration of artificial intelligence (AI) into business and everyday life highlights the urgent need for a national data privacy standard in the US, according to Democratic Congresswoman Suzan K. DelBene.
  • Policymakers face critical decisions on AI's application in sensitive areas like finance, health care, national security, and the intellectual property rights of AI-generated content.
  • While certain sectors, like health care, have basic data protection laws, most other industries lack similar protections—giving the companies that collect data carte blanche to control it.
  • A national privacy standard would provide consistent data protections across the US, limiting companies' ability to store and sell personal data without consent.

IN OTHER NEWS
  • WhatsApp's new feature will hide your IP address during calls.
  • Google DeepMind releases a tool that watermarks AI-generated images.
  • A detailed review of ExpressVPN—stylish and minimal, with sound privacy practices.
  • Apple speaks out against the Investigatory Powers Act.
  • How one developer built an AI disinformation machine for $400.

A Letter

Social media data-scraping under scrutiny


Basak Gurbuz Derman / Getty Images

International privacy watchdogs have issued a joint statement urging social media platforms to protect public posts from data scraping—citing the platforms' legal responsibilities and the potential for misuse of scraped data.
  • Signed by regulators from twelve countries, the statement emphasizes that personal information online is still subject to global privacy laws—arguing that mass data scraping could constitute a data breach.
  • The regulators highlight several privacy risks associated with data scraping, including targeted cyberattacks, identity fraud, and unauthorized surveillance.
  • They also point to potential misuse of the data in AI models.
  • YouTube, TikTok, Instagram, Facebook, and LinkedIn were all sent copies of the statement directly, signaling regulators' particular focus on these platforms.

AI OVERLOAD

The dangers AI poses to itself


Sarah Grillo/Axios

The exponential increase of AI-generated content online could pose risks to AI itself, inducing disorders such as model collapse, model autophagy disorder, and “Habsburg AI.”
  • Experts predict that AI-generated content could make up 90% of online information in a few years, but the lack of reliable methods to differentiate AI output from human-created content may lead to information overload and model degradation.
  • A small but growing body of research has highlighted potential AI disorders—one example being "model collapse," where outputs from AI models trained on data produced by other AIs rapidly decline in quality.
  • Another disorder, known as "Habsburg AI," results from AI consuming its own products, leading to an "inbred mutant" system with “exaggerated, grotesque features.”
  • AI creators will need to better understand the lineage of their training data in order to minimize these issues.
TRANSCEND NEWS

AI Governance 101—your complete guide

AI governance is the process of building technical guardrails around how an organization deploys and engages with artificial intelligence (AI) tools. Applied at the code level, effective AI governance helps organizations observe, audit, manage, and limit the data going into and out of AI systems 🔄

Learn more about why AI governance is so important, the pillars of effective AI governance, current and upcoming AI regulation, and more with our latest guide.


Snippets is delivered to your inbox every Thursday morning by Transcend. We're the platform that helps companies put privacy on autopilot by making it easy to encode privacy across an entire tech stack. Learn more.

You received this email because you subscribed to Snippets. Did someone forward this email to you? Head over to Transcend to get your very own free subscription! Curated in San Francisco by Transcend.