Sensitive information is everywhere in your business, from customer lists to product roadmaps to the passwords your stakeholders use every day. You’re compelled to protect this information by a web of obligations, including:
- State, national, and international compliance regimes
- Industry-specific compliance regulations and best practices
- Legal agreements with vendors and customers
- Competitive business imperatives
Here’s how to understand the types of sensitive and high-risk data your business holds, along with best practices for protecting that data.
Contents:
Types of Sensitive Data
- Personal Information
- Personally Identifiable Information
- Sensitive Personal Information
- Private Information
- Protected Health Information
- Nonpublic Personal Information
- Other Data Risks
Personal Information: everything you know about someone
Personal information (PI) is a broad category that covers all data associated with a person, from general, non-specific information like what region they live in to detailed biometric data.
Not all PI can be tied to an identity, though. For example, race and gender are personal information, but on their own they’re not specific enough to identify someone. Even with non-specific PI, it’s important to be careful: the more information you add, the more likely it becomes that the person will be identifiable.
Here are some examples of personal information:
Identifying information:
- Name
- Photographs
- Birthday
- Address
- Social Security number
- Employment history
- Criminal record
- Geo-location information
Physical information:
- Health data
- Genetic info
- Biometric info
Demographic information:
- Age
- Gender
- Sexual orientation
- Race and ethnicity
- Religious beliefs
- Trade union membership
Personally Identifiable Information: data that can identify someone
Personally identifiable information (PII) is data that can be used to infer someone’s identity. It includes:
- Information that can identify someone alone (e.g. name, Social Security number, driver’s license number)
- Information that can be used in combination to identify someone (e.g. mother’s maiden name and birthday)
The sensitivity of PII can vary based on the regulation and context. In general, publicly available PII like a phone number is less sensitive than confidential information like a Social Security number. However, all PII is regulated by compliance rules like the GDPR and the CPRA.
Sensitive Personal Information: the real sensitive data types of Beverly Hills (and the rest of California, too)
In California, everything is a little different. A highway is called a freeway. Gridlock is called a normal commute. And yes, Sensitive Personal Information (SPI) gets its own unique definition as well.
Under the California Privacy Rights Act, Sensitive Personal Information (SPI) is information that merits heightened privacy protections, because it could cause harm if released. That includes PII like driver’s license numbers and biometric data.
However, not all SPI can be used to identify someone. For example, a consumer’s password counts as SPI under California law, because it could be used to compromise their account, even though the password itself doesn’t tell you who the person is. Under the CPRA, a data subject can request to limit the use of any of their sensitive personal information.
The CPRA gives Californians extensive rights over their PI, but affords additional protections for SPI. Check out our two-part CPRA vs. CCPA blog to learn more about specific obligations under California’s privacy laws.
SPI includes:
- Government identity information, such as:
  - Social Security number
  - Passport number
  - State ID
  - Driver’s license number
- Other codes linked to an individual*, such as:
  - Credit card numbers
  - Security credentials
  - Financial account numbers
- Passwords and credentials, such as answers to security questions*
- Precise geo-location data
- Demographic information, such as:
  - Race
  - Ethnicity
  - Religion
  - Philosophical beliefs
  - Union membership
- Health and genetic data, including:
  - Biometric data
  - Health records
  - Information about sex life or sexual orientation
- Personal communications sent to a third party, such as:
  - Emails
  - In-app messages
*Note: user names, financial account numbers, credit card numbers, and similar data are only SPI when combined with any codes, passwords, or credentials you would need to access or use the account. However, for security purposes, passwords and similar data types should always be treated as high-risk data, even when separate from other information needed to access an account.
New York state of mind: private information and the NY SHIELD Act
This category of private information was enshrined in 2019 by New York’s Stop Hacks and Improve Electronic Data Security Act, also known as the NY SHIELD Act. The law has received much less public scrutiny than the CPRA, in part due to its more limited scope: while the CPRA provides a broad range of data rights and obligations, the NY SHIELD Act focuses narrowly on data security safeguards and breach notification requirements.
The law covers anyone who owns or licenses digital data that includes private information about a New Yorker. Covered entities must implement physical, technical, and administrative safeguards to keep that data secure and private. If you do undergo a breach, you’re required to report it to the New York State Attorney General’s Office.
Under the NY SHIELD Act, private information is either of the following (a rough classification sketch follows this list):
- Credentials: an email address or user name, along with any password or security question answers that could unlock the account, or
- Unencrypted (or otherwise accessible) personal data combining an identifier like a person’s name with:
  - Government ID numbers
  - Financial account numbers (bank account, credit card), or
  - Biometric information
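To show how that two-branch definition translates into a simple classification rule, here is a minimal sketch in Python. The field names and the rule itself are simplifying assumptions for illustration, not legal guidance; the statute also cares about details like encryption status that this sketch skips.

```python
# Rough sketch of the NY SHIELD "private information" structure: either
# account credentials, or an identifier combined with a sensitive data
# element. Field names are hypothetical and the statute is simplified.

DATA_ELEMENTS = {"ssn", "drivers_license", "bank_account", "credit_card", "biometric"}

def is_private_information(record: dict) -> bool:
    """Return True if the record's fields match the simplified definition."""
    fields = set(record)
    has_credentials = (
        ("email" in fields or "username" in fields)
        and ("password" in fields or "security_answers" in fields)
    )
    has_identifier_combo = "name" in fields and bool(fields & DATA_ELEMENTS)
    return has_credentials or has_identifier_combo

print(is_private_information({"email": "a@example.com", "password": "hunter2"}))           # True
print(is_private_information({"name": "Jane Doe", "credit_card": "4111 1111 1111 1111"}))  # True
print(is_private_information({"name": "Jane Doe", "zip": "10001"}))                        # False
```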
Keeping your medical records private: HIPAA’s protected health information
The Health Insurance Portability and Accountability Act (HIPAA) imposes strict privacy controls on healthcare providers, along with the service providers and businesses they rely on. In practice, HIPAA covers essentially everyone involved in healthcare, health insurance, and health insurance payment processing, plus the companies providing digital services to those providers, such as healthcare portals and secure data storage.
Data protected under HIPAA is called Protected Health Information (PHI, or ePHI when it’s stored or transmitted electronically), and it includes virtually any information provided to obtain healthcare, including:
- Identity information, such as:
  - Name
  - Birthday
  - Location
  - Identifying images, such as photos that show the patient’s face
- Identity numbers and codes, such as:
  - Email address
  - Physical address
  - Social Security number
  - Health insurance account number
  - License plate number
- Appointment information, such as:
  - Date of appointment
  - Scheduled procedures
- Electronic identifiers, like:
  - IP address
  - Device identifiers
- Medical records, including:
  - Prescriptions
  - Health histories
  - Lab results
- Biometric data, such as:
  - Heartbeat monitoring
  - Blood sugar monitoring
  - Voice recognition
Essentially, if personal data is obtained or used to provide healthcare, it’s covered. The law gives patients a range of rights to access their data and control how it is used. It also requires providers to put in place a range of administrative, technical, and physical safeguards, and it imposes harsh penalties for breaches or careless data handling.
As with a number of compliance regimes, you can take data out of HIPAA coverage by anonymizing it. For example, if a medical provider removes names, addresses, appointment dates, locations, and other identifying information, it can publish a case study on a particular patient, or a study on the outcomes of a particular treatment across multiple patients. However, you need to take great care not to leave behind data that could potentially be used to reconstruct a patient’s identity.
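As a rough illustration, here is a minimal Python sketch of stripping direct identifiers from a patient record before using it in an aggregate study. The field names are hypothetical and the identifier list is deliberately short; a real de-identification pass has to cover every identifier category HIPAA names and should be reviewed by a privacy expert.

```python
# Minimal sketch: drop direct identifiers from a patient record before
# aggregate analysis. Field names are hypothetical, and the identifier set
# below is illustrative rather than complete.

DIRECT_IDENTIFIERS = {
    "name", "address", "email", "phone", "ssn",
    "insurance_account_number", "license_plate",
    "ip_address", "device_id", "photo_url",
    "appointment_date", "birthday",
}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

patient = {
    "name": "Jane Doe",
    "birthday": "1980-04-12",
    "ssn": "000-00-0000",
    "diagnosis": "type 2 diabetes",
    "lab_results": {"a1c": 7.1},
    "prescriptions": ["metformin"],
}

print(deidentify(patient))
# {'diagnosis': 'type 2 diabetes', 'lab_results': {'a1c': 7.1}, 'prescriptions': ['metformin']}
```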
The Ringo of privacy laws: what to do with the GLBA’s nonpublic personal information
The Gramm-Leach-Bliley Act is a strange law. It was crafted as something of an afterthought in the 1990s. Insurance companies and financial services companies were merging more than before, and federal legislators worried that these cross-sectoral conglomerates would use their toehold in one sector to gain leverage in another.
Anticipating that insurers that own banks and banks that own insurers might have heightened data responsibilities, the GLBA regulates how these companies handle nonpublic personal information (NPI), which covers information from or about insurance and financial services customers.
Under the GLBA, information is NPI if it is:
- Personally identifiable
- Not publicly available
- Provided by a consumer
- Used to provide a financial service
One key thing to note about the GLBA is that how you collect information matters. If the information comes from public records or is otherwise widely available, and it isn’t tied to a financial product or service, it is not considered NPI.
Reducing data risk and concerns
Deciding how to organize, tag, and protect data can pose a number of major challenges for companies. These include uncertainties over compliance, such as:
- Overlapping compliance regimes (e.g. organizations governed by both the GLBA and the CPRA)
- Uncertainty over compliance category (e.g. data from a user whose place of residence is unknown)
There’s also sensitive information that may pose legal or business risks outside of a particular regulatory framework. Examples include:
- Intellectual property that would jeopardize a major competitive advantage if exposed
- Trade secrets from a client or business partner that pose a legal liability
- Confidential materials that you are forbidden from disclosing
Your company also needs to account for data that poses an indirect compliance risk. This is data that may not be covered directly by compliance laws, but could lead to breaches or compliance violations if disclosed. It includes things like administrative credentials and information about your internal security processes.
Data protection: finding the right fit for your organization
Unfortunately, there’s no silver bullet. You can’t do business without sensitive data, and that data will always pose a degree of risk. Additionally, the number and range of compliance regulations make it very difficult to eliminate legal and regulatory risk entirely.
However, you can (and should) mitigate your data risks. Here’s how:
Embed privacy-by-design into your product or service
Privacy by design is an approach that emphasizes building privacy and security into tools and systems as a default, rather than patching holes later. When you build products that protect data at every stage of its use, you reduce the need for future remediation, and make it easier to meet your compliance obligations.
Use data minimization and anonymization techniques
Data minimization decreases both your footprint and your compliance burden by purging personal information as soon as you’ve used it for its declared purpose. The less sensitive data you manage, the lower your risk, and the less work you’ll have to do to meet your compliance obligations.
Data anonymization also decreases your compliance burden by removing identifiable personal information. This allows you to look for trends in data without the risks involved in retaining personal information. It can also enable you to use existing data for a previously undeclared purpose without violating laws like the GDPR and the CPRA.
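As a rough sketch of the minimization side, the snippet below (Python, with hypothetical table names, fields, and retention windows) purges records once their declared retention period has passed. A real implementation would enforce this inside your data stores and document the retention schedule per data category.

```python
# Minimal sketch of purpose-based retention: purge records whose declared
# retention window has elapsed. Table names, fields, and windows are
# hypothetical assumptions for illustration.
from datetime import datetime, timedelta, timezone

RETENTION = {
    "marketing_leads": timedelta(days=90),    # purge after campaign follow-up
    "support_tickets": timedelta(days=365),   # purge a year after resolution
}

def purge_expired(records: list[dict], table: str, now: datetime | None = None) -> list[dict]:
    """Keep only records still inside the retention window for their table."""
    now = now or datetime.now(timezone.utc)
    window = RETENTION[table]
    return [r for r in records if now - r["collected_at"] <= window]

leads = [
    {"email": "a@example.com", "collected_at": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"email": "b@example.com", "collected_at": datetime.now(timezone.utc)},
]
print(purge_expired(leads, "marketing_leads"))  # only the recent record survives
```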
4 best practices to help you get better control over your data
The better visibility you have into your data types, the more effective your compliance efforts will be.
- Tag the data you’d like to protect by category. Include both compliance categories and other risk categories, such as intellectual property, trade secrets, and confidential business communications.
- Consider further subdividing data by risk level. This can help you prioritize the most sensitive data, and balance the need for access with the need for protection.
- Implement control standards that will satisfy overlapping privacy regimes. For example, say you have a subsidiary subject to both the GLBA and the CPRA. Even though the GLBA may apply to an insurance customer’s data and preempt the CPRA in that context, it is the CPRA (and not the GLBA) that governs the handling of prospect data, so a nuanced approach is required.
- Use access controls to prevent users from accessing data they shouldn’t. Use both risk level and data category to evaluate which classes of users should have access to each data type. The goal is to give users the minimum amount of data they need for the purpose at hand (a rough sketch of tag-based access checks follows this list).
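To make the tagging and access-control ideas concrete, here is a minimal sketch in Python. The categories, risk levels, and role-to-category mappings are illustrative assumptions, not a prescribed taxonomy; in production these rules would live in your data platform or IAM layer rather than in application code.

```python
# Minimal sketch: tag data by category and risk level, then gate access by
# role. The taxonomy and role policy below are assumptions for illustration.
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass(frozen=True)
class DataTag:
    category: str   # e.g. "PII", "SPI", "PHI", "NPI", "trade_secret"
    risk: Risk

# Which categories each role may touch, and up to what risk level.
ROLE_POLICY = {
    "support_agent":   ({"PII"}, Risk.MEDIUM),
    "billing_analyst": ({"PII", "NPI"}, Risk.HIGH),
    "marketing":       ({"PII"}, Risk.LOW),
}

def can_access(role: str, tag: DataTag) -> bool:
    """Allow access only if the role covers the category at that risk level."""
    allowed_categories, max_risk = ROLE_POLICY.get(role, (set(), Risk.LOW))
    return tag.category in allowed_categories and tag.risk.value <= max_risk.value

print(can_access("marketing", DataTag("PII", Risk.HIGH)))        # False
print(can_access("billing_analyst", DataTag("NPI", Risk.HIGH)))  # True
```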
Automate everything with TerraTrue
TerraTrue simplifies and accelerates the entire process of protecting your data, from identifying where you’re using sensitive data to implementing and maintaining your controls. For each new project, it empowers you to quickly identify what data is being collected, account for possible compliance conflicts and burdens, and ensure you remain compliant. It also doubles as an audit log and data map, so you can keep track of where your data is, and make sure you’re providing the necessary controls.
Protect your data without derailing innovation.