"[T]he things that they want to do to achieve their infinite growth model is not good for the public."
— Brandon Forester, organizer at MediaJustice, on Google
Welcome back to Snippets 👋 Here's what's been happening at the intersection of privacy, AI, and tech:
- Hollywood and Silicon Valley clash over age verification bill
- Google works behind the scenes to rally small businesses against California's AB 566
- New research shows interactive in-app features make users more likely to disregard privacy concerns

P.S. Snippets will be on break the first week of October, but will be back on the 16th!
Iconic industries at odds over California’s age verification bill

Justin Sullivan/Getty Images
A California bill (AB 1043) that would require device-makers and app stores to verify users' ages is headed to Governor Gavin Newsom’s desk for a final decision.
- The bill passed the state Assembly 58-0 with broad support from both parties and most major tech firms, which see it as a balanced, privacy-friendly alternative to stricter laws in Utah and Texas.
- Rather than requiring a photo ID upload, the bill asks parents to enter their kid's age during device setup; the device then groups users into age brackets and shares only that bracket with apps.
- Despite broad support, the bill does have detractors. The Motion Picture Association, which represents both Amazon’s film studio and Netflix, has written a letter urging lawmakers to scrap the bill.
- Newsom must sign or veto AB 1043 by October 13.
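The bracket-based flow the bill describes can be sketched in a few lines. This is a hypothetical illustration, not the bill's text or any vendor's API; the bracket boundaries and names below are assumptions.

```python
from datetime import date

# Illustrative age brackets; AB 1043's actual brackets may differ.
BRACKETS = [
    (0, 12, "under_13"),
    (13, 15, "13_15"),
    (16, 17, "16_17"),
    (18, 200, "18_plus"),
]

def age_bracket(birth_year, today=None):
    """Map a birth year to a coarse bracket; apps see only the bracket."""
    today = today or date.today()
    age = today.year - birth_year
    for low, high, label in BRACKETS:
        if low <= age <= high:
            return label
    raise ValueError("age out of range")

# The device shares only the bracket label, never the exact age:
print(age_bracket(2015, today=date(2025, 10, 1)))  # under_13
```

The privacy argument for this design is visible in the code: the exact birth year never leaves the device, so apps can tailor experiences by bracket without collecting a precise age or an ID document.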
See exactly how much revenue you could unlock 🔓
As the market races towards AI-powered personalization and hyper-connected customer experiences, the brands winning market share are those treating consent and preference management (CPM) as a growth engine, not a legal formality.
That’s why we’re excited to introduce the Transcend Preference Management ROI Calculator—an interactive tool that helps marketing, security, and privacy leaders measure the potential revenue lift, efficiency gains, and compliance savings of upgrading to a best-in-class CPM platform.
Google rallying small businesses against a California privacy law

Jeff Chiu, AP Photo
In a behind-the-scenes campaign, Google mobilized small business owners to oppose California’s AB 566, which would require web browsers to allow users to block sites from sharing their personal data.
- Sponsored by the California Privacy Protection Agency, AB 566 would require that web browsers offer users an automatic do-not-share signal—a feature already offered by Chrome competitors like DuckDuckGo and Firefox.
- Working through the Connected Commerce Council, a nonprofit, Google urged small-business owners to sign a petition against the bill, warning it would hurt their ability to reach potential customers through online ads.
- Google’s use of front groups resembles tactics once employed by the fossil fuel and tobacco industries. While legal, critics warn these methods can give money outsized influence over laws meant to serve the public good.
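For context, the kind of browser signal AB 566 contemplates already exists as Global Privacy Control (GPC): a participating browser attaches a `Sec-GPC: 1` header to its requests, and the receiving site is expected to treat it as a do-not-share request. A minimal server-side sketch (the function name and dict-based headers are illustrative, not any framework's API):

```python
def should_share_personal_data(request_headers):
    """Return False when the browser sends the GPC do-not-share signal."""
    # HTTP header names are case-insensitive, so normalize before lookup.
    headers = {k.lower(): v.strip() for k, v in request_headers.items()}
    return headers.get("sec-gpc") != "1"

print(should_share_personal_data({"Sec-GPC": "1"}))       # False
print(should_share_personal_data({"User-Agent": "Moz"}))  # True
```

Under AB 566 the change would land on the browser side: Chrome would have to offer users a setting that turns this signal on, as DuckDuckGo and Firefox already do.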
Interactive features increase risky user behavior, study finds

Thai Liang Lim/Getty Images
Researchers at Penn State found that when apps or AI chatbots feel more interactive and playful, users tend to become more absorbed in the experience and more likely to disregard their privacy concerns.
- In experiments with a simulated fitness app, higher levels of interactivity boosted users’ sense of playfulness, engagement, and willingness to keep using the app.
- Instead of making users more cautious, message interactivity (like chatbot conversations that build on past responses) distracted them from considering the sensitive data they were sharing.
- Researchers found that adding modality interactivity (like pop-up prompts or ratings buttons) could interrupt this “flow” and remind users to reflect on their privacy choices.
Also in the news:
- Anthropic’s copyright settlement isn’t the end of affordable AI.
- Cloudflare CEO Matthew Prince discusses AI’s future.
- OpenAI issues a statement on teen safety, freedom, and privacy.
- Privacy measures are hindering research into avian flu.
- China accuses Nvidia of antitrust violations.
Wearables regulation gaps exposing critical user risks
As connected health wearables grow in popularity, their data flows through a fragmented U.S. regulatory system where HIPAA covers only a fraction of use cases.
- HIPAA applies narrowly to healthcare providers, insurers, and their vendors, meaning most device makers, wellness apps, and consumer health platforms fall outside its scope—instead falling under the purview of the FDA or FTC.
- The FDA, traditionally focused on safety, is now enforcing new cybersecurity requirements for “cyber devices."
- The FTC has emerged as the primary regulator for consumer-facing healthtech companies, expanding breach notification obligations and pursuing enforcement against apps and wearables.
- Unique risks like device “jailbreaking,” legacy system vulnerabilities, and insecure integrations with health records amplify threats, making collaboration between regulators, developers, and users essential to safeguarding patient privacy.
Google unveils new privacy-preserving LLM

Credit: Google
Google Research unveiled VaultGemma, a 1-billion-parameter open-weight model trained entirely with differential privacy, alongside new scaling laws that define the compute-privacy-utility trade-offs in private language model training.
- Differential privacy introduces calibrated noise to prevent memorization of sensitive data, but it destabilizes training and demands much larger batch sizes, which in turn raises computational costs.
- Google’s team found that the drop in output quality caused by the added noise could be offset by raising the compute budget (FLOPs) or the data budget (tokens).
- Guided by these findings, formalized in a paper titled Scaling Laws for Differentially Private Language Models, the team trained VaultGemma from scratch under strict privacy guarantees.
- Though unlikely to overhaul the development of supersized models, which are optimized for high performance, the findings could help private model developers allocate resources more efficiently.
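The clipping-plus-noise mechanism behind differentially private training (DP-SGD) can be sketched briefly. This is the generic textbook mechanism, not VaultGemma's actual training code; the function and parameter names are illustrative.

```python
import numpy as np

def dp_average_gradient(per_example_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD aggregation step: clip each example's gradient so no
    single example can shift the update by more than clip_norm, then add
    Gaussian noise calibrated to that sensitivity."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

rng = np.random.default_rng(0)
grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]  # norms 5.0 and 0.5
# With noise_multiplier=0, the first gradient is clipped to norm 1,
# giving ([0.6, 0.8] + [0.3, 0.4]) / 2 = [0.45, 0.6].
avg = dp_average_gradient(grads, clip_norm=1.0, noise_multiplier=0.0, rng=rng)
```

The last line of the function also shows why DP training pushes batch sizes up: the noise is fixed per step, so averaging over more examples dilutes it, which is exactly the compute-privacy-utility trade-off the scaling-laws paper quantifies.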
Why consulting partners are guiding enterprises from OneTrust to Transcend
Enterprises are rethinking their privacy stack. For consulting firms and service providers guiding clients through complex transformations, the shift from legacy platforms like OneTrust to Transcend is more than a tooling change: it’s a strategic opportunity to build a more profitable, future-ready practice.
This is especially true for category leaders and Fortune 500 companies navigating high consumer expectations and increasingly complex data ecosystems. These organizations aren’t looking for incremental upgrades—they need scalable privacy infrastructure that can support digital acceleration, AI adoption, and global compliance simultaneously.
Snippets is delivered to your inbox every Thursday morning by Transcend. We're the platform that helps companies put privacy on autopilot by making it easy to encode privacy across an entire tech stack. Learn more.
You received this email because you subscribed to Snippets. Did someone forward this email to you? Head over to Transcend to get your very own free subscription! Curated in San Francisco by Transcend.