
April 14, 2023

Issue 18: We are so screwed

Oh hey! Welcome to the Privacy Beat Newsletter!

Here’s the gist: Come here for insights on the hottest topics in privacy according to our peers’ tweets so you can walk into any happy hour or team meeting and sound like the absolute baller you are. No current topic gets by you.

First of all, last week’s IAPP Global Privacy Summit felt like a warm hug, no? An exhaustingly long marathon of a warm hug, but a warm hug. Maybe I’m still feeling some of the fallout from COVID, but it felt incredible to once again swim in a sea of people speaking the same language, facing the same problems, and stressing over how to figure it all out. It’s a very collegial field, ours. And I love that about us. My team flew in from California for the big event, and I got to have them over to my place for cocktails and sushi. I've never been so nervous. I hired a CLEANING LADY for the first time. I dressed my dog in a snowsuit so he wouldn't shed a single hair afterward! Okay, no. I didn't go that far. But the cleaning lady part is true. Let's talk about all the things.

Anyone staring at a programming schedule at the conference would have found it impossible to ignore a major theme emerging this year: AI. What are we to do about it? WE ARE SO SCREWED! Or are we?

In her keynote at the conference, author Nina Schick reported it took ChatGPT just five days to reach one million users, and two months to reach 100 million. She predicted 90 percent of online content may be AI-generated by 2025.

If you’re online in any way, you’ve seen the posts on LinkedIn and Twitter about ChatGPT. Everyone and their sister has posted examples of everything from poems to privacy policies they prompted the robot to write. And there’s been widespread fear that automation like ChatGPT and its future competitors will come for our very jobs! If a robot can write a privacy policy, are we doomed? Whenever people raise this point, I like to remind them of the concerns about self-driving cars. When Tesla announced it would pilot them, panic spread about the future of everyone from Uber drivers to truckers. But we’re nowhere close to a driverless world. As U.S. News & World Report noted last year, “there are no fully self-driving cars available to buy as of late 2022, and there won’t be for the foreseeable future.” Too many glitches, yo.

From a legal standpoint, there’s much uncertainty over which laws actually govern AI itself. But at the conference, FTC Commissioner Alvaro Bedoya, who called AI “among the darkest of black boxes,” noted that there are already laws on the books that govern AI: Section 5 of the FTC Act, for example, and civil rights law.

Further, he said something I found comforting: “there’s a back-and-forth that’s playing out in the popular press. There will be a wave of breathless coverage – and then there will be a very dry response from technical experts, stressing that no, these machines are not sentient, they’re just mimicking stories and patterns they’ve been trained on. No, they are not emoting, they are just echoing the vast quantities of human speech that they have analyzed. I worry that this debate may obscure the point. Because the law doesn’t turn on how a trained expert reacts to a technology – it turns on how regular people understand it.”

And it’s clear that, at least for now, the average consumer doesn’t understand products like ChatGPT. In that sense, the FTC may well be equipped to take on AI in the wild through its oversight of deception and unfairness in the consumer marketplace.

That doesn’t mean there isn’t a problem here, or that there isn’t work to be done.

As U.K. Information Commissioner John Edwards said in his session, “We’ve reached an inflection point with generative AI. We can’t miss the boat like we did with social media and search.” And if you’re worried about job security, you may find comfort in what IAPP President and CEO Trevor Hughes told MLex reporter Mike Swift: that we’re “always going to need professionals who can manage the risks of these new technologies ... so I don't think we're going to be replaced [by AI] anytime soon."

Italy, for its part, has already started its enforcement work on AI: In late March, it straight-up banned ChatGPT in the country under an emergency order while the regulator investigates whether the service complies with the GDPR. The Italian DPA said there “appears to be no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies.” As WIRED reported this week, OpenAI says it relies on “legitimate interest” as its legal basis when it “develops” its service, but it doesn’t publicly identify the basis it relies on for its algorithmic training data.

Canada’s privacy commissioner announced last week that it has launched an investigation into ChatGPT as well. And, as you may well know, the EU is aiming to tackle AI regulation with its European AI Act, which is currently in an “open discussion” phase. As CNBC reports, the European Commission specifically seeks to examine AI’s impact on human rights, travel, and facial recognition.

A couple of quick-hit sound bites from the week, for funsies

Recent podcast episodes from me to you

One year in: Here are the top 5 Privacy Beat Episodes

It's been a year since I launched The Privacy Beat. While I'm going to keep mum about metrics for competitive marketplace purposes, we just hit a major milestone. And I'm excited! Y'all are listening, thank you! To celebrate, I tallied up our top 5 most popular episodes by number of downloads. Because what kind of celebration doesn't include a list?

Iowa just passed a privacy law. Huzzah?

Iowa is the first state in 2023 to pass a comprehensive privacy law. What does it contain? Is it a game-changer? In this episode, Keir Lamont, director of U.S. legislation at the Future of Privacy Forum, and David Stauss, partner at Husch Blackwell, talk us through why privacy peeps are calling this law a tech company's dream.
Listen here

Illinois’ BIPA is having a CCPA-like moment, non?

Plaintiffs’ attorney Jay Edelson has been using Illinois’ Biometric Information Privacy Act to take companies like Facebook and Clearview AI to task for alleged misuse of biometric data. And he’s had great success. In the meantime, without a federal law on biometrics in the U.S., states have started introducing their own versions of BIPA in rapid succession. In fact, 17 U.S. states have introduced a biometric privacy law this year already. In this episode, Edelson discusses his recent wins and his forecast for the BIPA-like landscape.
Listen here

Stuff that I made for you

On March 28, 2023, Iowa became the sixth U.S. state to pass a comprehensive privacy law. All existing U.S. state comprehensive privacy laws require companies to have a privacy policy that provides detailed notice of privacy practices and rights, as well as to implement reasonable security measures. But how else do they align and differ? Here's a chart that aims to be your at-a-glance resource for stacking each state law up against the others. Is every detail captured here? No. But it's the important stuff.
Get it here

Privacy by design is such a tired phrase

Part of the gig that I like here at TerraTrue is my work on privacy-by-design content. Ann Cavoukian has been talking about PbD for a long time, but it’s always felt abstract and academic. Part of my job is to dispel some of the mystique around it and give folks operational insights. At our Summit session, we talked with Lyft Privacy Analyst Brittany Rhyne and Greenlight Financial CPO Cristin Morneau about how they operationalize PbD at their organizations, and the metrics they gather to prove their programs are adding value to the business. If you missed that chat at Summit, fear not! You can check out an abbreviated, cleaned-up transcript at the link below.
The chat’s highlights here.

Hot take(s) of the week

Burn(s)!

See you two weeks from now. Thanks for reading, loves!