The Dark Side of AI Chatbots
How Convenience Compromises Your Privacy and Security
AI chatbots have become valuable tools for productivity and research, as well as creative endeavors like image generation and brainstorming. However, that convenience can draw attention away from the risks of sharing sensitive information with these tools.
The Data-Sharing Reality
According to the Cyberhaven Q2 2024 AI Adoption and Risk Report, most AI chatbot usage at work happens through personal accounts (73.8% in the case of ChatGPT), and 38% of employees admit to sharing sensitive work information with chatbots without their employer's knowledge.
The report reveals that the types of sensitive data most often shared with AI chatbots include not only financial details and legal documents but also HR records, proprietary source code, and confidential research materials, all of which carry serious legal and financial risks if exposed.
Your Secrets May Not Be Safe
The problem is that AI chatbots typically store the information you provide. While it might seem harmless when you’re asking a chatbot to compose an email or help with an analysis, the risk increases when you’re inputting confidential work documents.
To address this problem, enterprise versions of popular AI chatbots, such as OpenAI's ChatGPT, Anthropic's Claude, and Microsoft's Copilot, pledge that customer data will not be used to train public models, and they are generally considered more secure than their consumer counterparts. Even so, there is often little transparency about what exactly happens to the data you enter into these chatbots and how it is stored.
For instance, the privacy policies of OpenAI, Anthropic, and Microsoft each acknowledge that they collect user input data such as account information and content, as well as IP addresses and device types. They also state that this data may still be processed and stored for other purposes, such as improving services, compliance, or security, and that collected data may be shared with their affiliates, vendors, and service providers.
Cracks in the Armor
Considering that data shared with AI chatbots is stored and that most interactions take place through personal accounts, the outlook for privacy becomes increasingly concerning. Making matters worse, vulnerabilities in these AI chatbots are discovered regularly. Recent research and incidents show that AI models, even those designed to prioritize security, are not immune to attacks. For instance, a vulnerability discovered in OpenAI's ChatGPT allowed some users to see snippets of other users' conversations as well as portions of other users' payment information.
Your Privacy Playbook
A common recommendation is to redact or anonymize sensitive data before sharing it with an AI chatbot. That precaution helps, but research has found that large language models are remarkably good at inferring personal information from seemingly innocuous text. Even when identifiers like names or addresses are removed, these models can piece together details such as your occupation, location, or personal habits from the context of a conversation and potentially identify you. Redacting or anonymizing sensitive data whenever possible is still worthwhile; it just isn't foolproof.
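As a rough illustration of what basic redaction can look like, the sketch below strips a few obvious identifiers from a prompt before it is pasted into a chatbot. It is a minimal example using Python's standard re module; the patterns and placeholder labels are illustrative assumptions, not a complete anonymization solution, and they will miss names, addresses, and other context-specific details.

```python
import re

# Minimal, illustrative redaction patterns; real anonymization needs far more
# coverage (names, addresses, account numbers, context-specific identifiers).
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def redact(text: str) -> str:
    """Replace matched identifiers with bracketed placeholder labels."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text


if __name__ == "__main__":
    prompt = "Draft a follow-up to jane.doe@example.com; her number is 555-123-4567."
    print(redact(prompt))
    # -> Draft a follow-up to [EMAIL]; her number is [PHONE].
```

Even with a pre-processing step like this, the surrounding context you share can still reveal who you are, which is why redaction should be treated as one layer of defense rather than a guarantee.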
Additionally, familiarize yourself with the privacy policies of the chatbots you use and take advantage of any "opt-out" or "disable chat history" settings they offer, but recognize that opting out does not necessarily mean your data isn't stored, and stored data still represents a security and privacy risk.
For organizations, it's important to establish AI usage policies and employee training that clearly define what information can and cannot be shared with chatbots, along with guidelines for the permissible use of AI tools.
Ultimately, the most important recommendation is also the most obvious: think carefully before sharing any information, sensitive or not. If it's something you wouldn't post publicly, it's probably not something you should share with an AI chatbot.