
Privacy

December 20, 2023

4 action items you can take TODAY on AI's privacy risks at your organization 


While many lament that the U.S. doesn't yet have a federal law on AI, that doesn't mean there aren't existing laws and frameworks to consider if your organization is currently using or plans to use AI.

If you’re looking for proof: This week, the FTC announced charges that Rite Aid “failed to implement reasonable procedures and prevent harm to consumers in its use of facial recognition technology in hundreds of stores.” The FTC said that, in doing so, Rite Aid violated a 2010 data security order by failing to adequately vet its service providers. As a result, Rite Aid is banned from using the AI-based system unless it can control potential risks to consumers.

Breaking: Rite Aid Banned from Using AI Facial Recognition After @FTC Says Retailer Deployed Technology without Reasonable Safeguards https://t.co/gxbzT8ADef

— Jevan Hutson (@jevanhutson) December 19, 2023

The good news is that there are steps you can take today to move the needle toward ethical and responsible AI deployment. Below are four of them.

1. Get the visibility you need

To do your job, you must know what data exists and where it lives within the organization.

To get the answers, meet with your CIO or CTO. Ask them for access to the tools your organization uses so you can develop an understanding of how widely your organization has already adopted AI tools. To cover your bases, check in with security, procurement, legal, and product and engineering. Here’s how to frame your questions to get the answers you need to do your job (a minimal sketch for tracking the answers follows the list):

  • Do we have confidential or proprietary data leaving our organization’s environment?
  • Are we talking about AI used internally for business efficiency purposes? Or are we building AI into products that we then sell to our customers?
  • If the latter, is the AI built in-house or provided by a vendor? That determines your next steps: if it’s a vendor, you’ve got work to do on that agreement.
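
As you collect answers, it helps to capture them in a consistent shape per AI tool. Here’s a minimal sketch in Python; the field names are illustrative assumptions for this post, not a standard schema:

    from dataclasses import dataclass
    from typing import Optional

    # Illustrative inventory record for one AI tool or integration.
    # Field names are assumptions for this sketch, not a standard schema.
    @dataclass
    class AIToolRecord:
        name: str                      # e.g., "support-ticket summarizer"
        owner_team: str                # who deployed it
        internal_only: bool            # business-efficiency use vs. customer-facing
        built_in_house: bool           # False means a vendor agreement to review
        vendor: Optional[str] = None   # set when built_in_house is False
        confidential_data_leaves_env: bool = False  # proprietary data sent outside?

    record = AIToolRecord(
        name="support-ticket summarizer",
        owner_team="Customer Success",
        internal_only=True,
        built_in_house=False,
        vendor="ExampleVendor",          # hypothetical vendor name
        confidential_data_leaves_env=True,  # flags a vendor-agreement review
    )

Even a spreadsheet with these columns beats nothing; the point is that every AI tool gets the same questions answered.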

2. Catalog your data

If “data catalog” is a new term to you: It’s a live inventory of the types of data an organization stores in its products or apps.

Creating and maintaining a data catalog serves multiple purposes, including enabling data discovery, providing a basis for data governance, and tracking data lineage.

In this context, a data catalog will help you understand or create a fuller picture of where you’re currently using AI within the business. Whether you’re developing AI or procuring it, the catalog helps you see the broader landscape, and you need that view to determine next steps.
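
To make that concrete, here’s a minimal sketch of a single catalog entry covering the discovery, governance, and lineage purposes described above. The fields are illustrative assumptions for this post, not TerraTrue’s schema or any standard:

    from dataclasses import dataclass, field

    # A minimal, illustrative data catalog entry; fields are assumptions
    # for this sketch, not a product schema.
    @dataclass
    class CatalogEntry:
        data_type: str            # e.g., "email address"
        location: str             # system or store where it lives
        source: str               # lineage: where the data came from
        downstream_uses: list = field(default_factory=list)  # lineage: where it flows
        used_by_ai: bool = False  # flags entries feeding an AI system

    catalog = [
        CatalogEntry(
            data_type="email address",
            location="CRM",
            source="signup form",
            downstream_uses=["marketing automation", "churn-prediction model"],
            used_by_ai=True,
        ),
    ]

    # Discovery in this sketch is just a filter: which data feeds AI today?
    ai_backed = [e for e in catalog if e.used_by_ai]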

Not all AI deployments need to be treated equally; the right treatment depends on where and how each one is used.

If you’re looking for a tool to help you get a comprehensive data catalog on the books, TerraTrue can help.

Data sources

3. Review laws that may already apply

Don’t let new technology fool you into believing that old standards no longer exist. The same due diligence applies to AI deployments as it would to your traditional deployments. As the author of this IAPP blog post opines, privacy laws directly regulate the collection, use, disclosure, and cross-border transfer (etc.) of data. AI systems are basically large sets of data inputs used to produce data outputs, so those same rules apply.

A number of states, including California, Connecticut, Colorado, and Illinois, have passed privacy laws providing rules for automated decision-making algorithms. Many more have proposed them. Recently, California’s privacy regulator issued a draft proposal on its AI rulemaking.

Check out the existing NIST AI Risk Management Framework for insights into the gaps between how you’d conduct a typical risk assessment and the additional concerns AI raises.
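
One lightweight way to use the framework is to walk its four core functions (Govern, Map, Measure, Manage, from NIST AI RMF 1.0) against your existing assessment and note what’s missing. Here’s a minimal sketch; the prompts are illustrative paraphrases for a gap check, not framework text:

    # The four core functions come from NIST AI RMF 1.0; the prompts below
    # are illustrative paraphrases for a gap check, not framework text.
    rmf_gap_check = {
        "Govern":  "Do our existing policies assign accountability for AI risk?",
        "Map":     "Have we documented each AI system's context and intended use?",
        "Measure": "Can we test today for AI-specific harms like bias and drift?",
        "Manage":  "Do we have a process to prioritize and respond to AI risks?",
    }

    for function, prompt in rmf_gap_check.items():
        print(f"{function}: {prompt}")

Any function where your current assessment has no answer is a gap to close before deployment.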

4. Develop AI policies

An internal policy on AI will guide your organization on appropriate uses. Those will differ depending on the context. For example, using AI to increase efficiency within the business is less risky than using AI in external products and services.

The first step in developing AI policies is defining the scope. You’ll want to identify: What is the purpose of the AI deployment? What is the business aiming to achieve? From there, you’ll want to conduct a risk assessment on the desired deployment. Talk to your stakeholders to determine how the technology the team aims to deploy fits within the existing laws and standards you’ve identified above.

You’ll also want to determine the following (a short checklist sketch follows the list):

  • To whom will the policy apply, and under what circumstances?
  • Whether the deployment is ethical and transparent.
  • The modifications you’ll need to apply to your existing policies in light of this new deployment.
  • Who owns the policy, and who will train employees on it?
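
Here’s a minimal sketch that turns the scope and checklist items above into a single record you can fill out per deployment. The fields mirror the questions in this section and are illustrative assumptions, not a formal policy template:

    from dataclasses import dataclass, field

    # Illustrative policy-scoping record; fields mirror the questions above
    # and are assumptions for this sketch, not a formal template.
    @dataclass
    class AIPolicyScope:
        purpose: str                     # what the AI deployment is for
        business_goal: str               # what the business aims to achieve
        applies_to: list = field(default_factory=list)  # who, and in what circumstances
        ethics_transparency_review_done: bool = False
        existing_policies_to_update: list = field(default_factory=list)
        policy_owner: str = ""           # who owns the policy and trains employees on it

    scope = AIPolicyScope(
        purpose="Summarize inbound support tickets",      # hypothetical example
        business_goal="Cut first-response time",
        applies_to=["Customer Success agents"],
        existing_policies_to_update=["Acceptable Use Policy", "Vendor Policy"],
        policy_owner="Privacy team",
    )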

For more on developing AI policies, ISACA has a good blog here.