February 2, 2024

Issue 33 — Classic EU! A big Friday news drop on AI

Oh Hey! Welcome to The Privacy Beat Newsletter!

Here’s the gist: Written by longtime privacy journalist Angelique Carson, director of content strategy at TerraTrue, this newsletter brings you insights on the hottest topics in privacy. Told through some of our peers’ social posts, this quick read aims to arm you with the knowledge you need to walk into any happy hour or team meeting and sound like the absolute baller you are. No current topic gets by you!

Welp. It’s Friday. And I had a lot to do. And then the EU dropped some major news in our laps. And the thing is, they always drop the big stuff on Fridays or the eve of a U.S. holiday. I’m not saying they do it on purpose, but I will tell you there was one Thanksgiving Eve when I was on the phone with Julie Brill because Safe Harbor had just sunk. The turkey was mid-brine, and I was scrambling.

Today, I would expect no less. But at least it’s exciting news. Let’s talk about it and the other big privacy developments this week.

The AI Act is almost a real thing!

On Friday, Feb. 2, Member States voted to approve the final text of the EU AI Act, pushing the landmark legislation closer to the finish line, Reuters reports. The regulation would prohibit certain uses of AI deemed an unacceptable risk, including using AI for social scoring. It would also create rules for high-risk uses, including those where AI apps could harm health, safety, or democracy.

As many journalists have noted, the news follows fabricated, AI-generated images – deepfakes – of Taylor Swift circulating online. That spurred a group of U.S. senators to announce a new bill that would allow people to sue over AI-generated, nonconsensual, sexualized images.

Meanwhile, celebrating the EU AI Act’s passage, EU digital chief Margrethe Vestager said: “What happened to [Taylor Swift] tells it all: the harm that AI can trigger if badly used, the responsibility of platforms, and why it’s so important to enforce tech regulation.”

The EU AI Act moves to a vote in Parliament on Feb. 13.

Now, you can audit your site before a DPA does

The European Data Protection Board released a nifty little tool if you aim to comply with the GDPR. The agency built the tool to help data protection authorities conduct and evaluate audits during investigations. But it’s offering it as an open-source and accessible resource for controllers and processors who want to test their websites. While other such tools exist, the EDPB said they usually require technical expertise, and this one is easy to use.

You can find more information about the tool here.

California still droppin’ those kids’ bills like they’re 🔥

California lawmakers have introduced two new bills on children’s privacy. It shouldn’t surprise anyone. The states are currently putting various drafts on a virtual conveyor belt, and many are copycats of California’s Age-Appropriate Design Code – despite uncertainty about that law’s future. As I’ve written extensively in past editions of this newsletter, the AADC is on pause while California’s attorney general defends it against NetChoice’s attempts to kill it dead. If NetChoice ultimately prevails, the ruling would set a precedent for challenges to similar laws, because its core argument is that the AADC violates First Amendment rights. Obviously, that creates a complication for copycats.

Back to the news, though: The California Children’s Data Privacy Act, AB 1949, would amend the CCPA by banning businesses from collecting, using, sharing, or selling the personal data of anyone under the age of 18 unless the company has acquired informed consent or doing so is strictly necessary for the service. We’re used to COPPA’s standard: under 13. But then the GDPR changed the consent game when it said you need parental consent from anyone under 16. California now says, forget that, Europe! They’ve got to be 18.

Well, that’s what the bill says.

The Protecting Youth from Social Media Addiction Act, SB 976, aims to respond to youth mental health crises. The bill would give parents the choice of whether kids under 18 see a chronological feed from users they already follow OR the current default feed, which is algorithmic. It would also allow parents and guardians to pause social media notifications and block access to platforms for minors during nighttime hours and the school day.

Lastly, on the children’s privacy thing, you likely saw that a U.S. Senate committee held a hearing to grill chief executives from Meta, X, Discord, Snap, and TikTok about child safety. It was pretty wild, as far as Senate hearings go. At one point, for example, Sen. Lindsey Graham, a Republican from South Carolina, said to Meta’s CEO:

“Mr. Zuckerberg, you and the companies before us, I know you don’t mean it to be so, but you have blood on your hands,” reports The New York Times. At this point, spectators at the hearing erupted in cheers and applause. Many of those spectators were holding pictures of their dead children, a visual reminder of the risks we face as a nation if we don’t act.

“You have a product that’s killing people,” Graham added. This space is only going to heat up, so I’ll continue to tell you what I learn. And, if it helps: Bailey Sanchez is senior counsel on Youth and Education privacy, and I spotted this news on her feed. If you’re looking to stay up on the latest in the children’s space, she’s a good follow.

As always, thank you for reading! Please feel welcome to comment and share! I like hearing from you. It’s nice.

xo,
Angelique

Latest podcasts!

Podcast

Phil’s back!
In this episode, Phil Lee returns! Phil is a self-proclaimed tech nerd, but that comes in handy, given the uptick in questions on deploying AI without breaching privacy rules or consumer expectations. He says to understand the potential harms of any deployment, you’ve got to get to know the tool and, more importantly, get reps from the potentially impacted group of users in the room with you.

Listen here

Podcast

Uber’s CPO on her gig’s unique challenges & raising all the boats

In this episode of the podcast, host Angelique Carson chats with longtime friend and Uber CPO Ruby Zefo. The two discuss Ruby’s working relationship with product & eng, the unique challenges a company like Uber faces, and why she’s so focused on diversity, equity, and inclusion in the name of raising everyone’s boats.
Listen here

Latest resource!

Resource

How Ancestry does privacy at scale
As privacy experts, we know that privacy teams can be strategic business enablers. To be truly effective, though, your privacy program should work well with product and engineering, spotting problems before they become big issues, and be built to scale at both the pace of business and developments in privacy regulation. In this chat, Steve Stalder, who manages privacy at Ancestry, discusses what made his strategy successful and how his team uses TerraTrue to:

  • Leverage integrations to gain early visibility on product plans so he can flag risks early.
  • Provide greater transparency for cross-functional teams on product launches.
  • Gather data to demonstrate that his program is enabling the business.

Watch it here!