
Ever since caller ID and GPS became part of our lives, we've known that digital technologies can be used by abusers...

- Lana Ramjit, Director of Operations, Clinic to End Tech Abuse

Welcome to Snippets—This week, 23andMe confirmed a significant leak after hackers posted a sample of the stolen data on BreachForums and claimed the full data set contains personal details from members of minority groups.

Plus, Snapchat's AI chatbot may be scrapped by the UK's data protection agency, experts argue that effective AI governance goes beyond privacy, Google agreed to reform its data collection policy in Germany, and more.


DATA BREACH

Over one million 23andMe accounts compromised

[Image: David Paul Morris/Getty Images]

Consumer genetics firm 23andMe confirmed a significant data leak—with compromised data reportedly including users’ profile information (name, sex, and year of birth) as well as genetic and geographic ancestry information.
  • The incident came to light after hackers posted a sample on the cybercrime and hacking forum BreachForums—a sample they claim contains one million data points about Ashkenazi Jews.
  • Despite this claim, it seems the leak also affected hundreds of thousands of users of Chinese descent.
  • On Wednesday, the hackers started selling breached profiles for $1-10.
  • 23andMe claimed the leak wasn't due to a data system breach, but rather that hackers guessed login credentials for several users and then scraped data from the DNA Relatives feature.

TRANSCEND NEWS

Join hundreds of your peers in the Privacy Pulse community

Privacy Pulse is an invite-only community where privacy professionals can crowdsource solutions to their biggest challenges, share or find a new role, and expand their professional network.

To make sure our community is valuable, thriving, and safe, we ask that everyone submit a brief application to join. All applications will be reviewed within 24 hours.

MY AI

Snapchat’s AI chatbot faces UK exile

[Image: Getty Images]

The UK’s Information Commissioner’s Office (ICO) has threatened to shut down Snapchat’s My AI feature following a preliminary investigation—citing a “worrying failure” to protect the privacy of 13- to 17-year-old users.
  • Released in April, My AI is intended to be a “personal sidekick” for any user who interacts with the bot—and it now handles over two million chats per day.
  • Though the ICO hasn’t officially found Snapchat to be in breach of UK privacy laws, the regulator stated My AI could be taken offline until Snapchat completes “an adequate risk assessment.”
  • Snapchat says My AI went through an extensive privacy and legal review before release, and that it will "continue to work constructively with the ICO to ensure they're comfortable with our risk assessment procedures."

AI OWNERSHIP

Why AI governance isn’t just a privacy issue

[Image: Ian Waldie/Getty Images]

As companies integrate AI further into their operations, experts are weighing in on what effective AI governance looks like—arguing that, though privacy professionals can lead the charge, a cross-functional approach is critical.
  • While conventional wisdom suggests privacy officers bring key skills in governance and data protection, AI risks are far too broad for a one-dimensional approach.
  • According to experts, siloing AI threat mitigation within privacy teams overlooks cybersecurity, intellectual property protection, and AI bias.
  • Kimberly Zink, global data strategist at Applied Materials, said: “If generative AI becomes a check-the-box compliance function, like privacy is for some organizations, then I can almost guarantee that that company is going to run into unforeseen risks.”

IN OTHER NEWS
  • Google defaults to passkey sign-in to thwart phishing.
  • AI-powered Google Search may use the same amount of electricity as the entire country of Ireland.
  • Costco allegedly shared patient health data with Meta.
  • FTX staffers pulled an all-nighter during a $1-billion crypto theft.
  • Dr. Geoffrey Hinton, one of the pioneers of Google’s AI technology, fears an AI coup.

PRIVACY

When privacy features become tools for tech abuse

Simson Garfinkel, the Chief Scientist of cybersecurity and AI accelerator BasisTech LLC, notes that security features designed to protect children and user privacy may unintentionally enable abusers.
  • Everyday services like Apple’s iCloud, Google Maps, and even family phone plans can be misused when shared between partners in a toxic relationship.
  • Though there are laws giving victims the right to remove themselves from a family phone plan, the process of doing so may inadvertently reveal the victim’s intention to leave.
  • Recounting one such experience, a victim said: "A perpetrator-partner can literally commit felony wiretapping against his wife and not get (an order of protection from domestic violence or stalking)..."

GERMANY

Google bends to FCO consent mandate

[Image: Carsten Koall/Getty Images]

Google has agreed to reform its data collection policy in accordance with the terms laid out by Germany’s competition regulator, the Federal Cartel Office (FCO).
  • The mandate covers services like Gmail, Google TV, and Assistant—stating Google cannot use data collected across its services without first giving users an option to provide unambiguous consent.
  • The policy excludes Fitbit (which is under a ten-year ban on using health data for ads), as well as Google services addressed under the European Digital Markets Act (DMA).
  • The FCO mandate is likely to be extended across the European Union, as most EU countries have similar antitrust rules.

TRANSCEND NEWS

Understanding the dangers of ungoverned AI

Artificial intelligence (AI) is one of the greatest technological advancements of the last decade. But its rapid development and expansive global adoption have left headlines, governments, and everyday people asking—is AI dangerous?

The truth is that artificial intelligence can be a powerful tool, but without the appropriate AI governance structures and human oversight, it can present significant risks, including cyber breaches, job displacement, and biased decision-making.


Snippets is delivered to your inbox every Thursday morning by Transcend. We're the platform that helps companies put privacy on autopilot by making it easy to encode privacy across an entire tech stack. Learn more.

You received this email because you subscribed to Snippets. Did someone forward this email to you? Head over to Transcend to get your very own free subscription! Curated in San Francisco by Transcend.