
The top 5 sensitive data types employees are feeding AI—and why it’s a risk
The adoption of generative AI (GenAI) tools in the workplace is raising serious security and compliance concerns, as employees increasingly input sensitive company data into publicly available AI platforms.
A recent survey by TELUS Digital Experience revealed that 57% of employees at large enterprises admitted to entering confidential information into AI tools such as ChatGPT, Google Gemini, and Microsoft Copilot.
Conducted in January 2025 via Pollfish, the survey collected responses from 1,000 U.S.-based employees working at companies with at least 5,000 staff members. A key finding was the extensive use of personal AI accounts for work-related tasks, often bypassing corporate IT and security oversight. Notably, 68% of employees accessed GenAI assistants through personal accounts rather than company-approved platforms, contributing to the rise of ‘shadow AI’—the unregulated adoption of AI tools that increases the risk of data leaks and regulatory violations.
Similar trends are evident in Australia, where two out of three office workers use AI tools without formal company approval. This lack of control, combined with limited regulatory oversight, creates an environment where sensitive data is shared with little consideration for security implications.
Sensitive Data at Risk
A study by Harmonic Security found that 8.5% of prompts entered into generative AI tools contain sensitive data. Among these:
- Customer data accounts for 46%
- Employee data comprises 27%
- Legal and financial records make up 15%
Lack of risk awareness, pressure to boost productivity, and the absence of clear security policies are some of the reasons why employees frequently share sensitive data with AI tools. Many users are unaware that the information they input can be stored or used to refine AI systems, leaving that information susceptible to leaks. Additionally, inadequate training on AI security exacerbates the issue, leaving employees without proper guidance on safe AI usage.
Convenience is another factor. AI tools streamline tasks such as drafting documents, summarizing reports, and managing workflows, making them highly attractive for workplace use.
However, reliance on personal accounts rather than company-approved platforms increases exposure risks. A false sense of privacy further compounds the problem, as many employees mistakenly believe their data is either deleted immediately or inaccessible to third parties. A lack of enforcement and accountability within organizations reinforces this behavior, weakening overall data security.
Despite corporate policies restricting AI use for sensitive data, many employees continue to bypass these guidelines. The survey found that:
- 31% admitted to inputting personal details such as names, addresses, emails, and phone numbers.
- 29% disclosed project-specific information, including unreleased product details and prototypes.
- 21% entered customer-related data, such as order histories, chat logs, and recorded communications.
- 11% shared financial data, including revenue figures, profit margins, and budget forecasts.
While 29% of respondents indicated that their companies had clear AI guidelines, enforcement remains inconsistent. Only 24% of employees reported receiving mandatory AI training, while 44% were unsure whether AI policies even existed within their organization. Furthermore, 50% admitted they did not know if they were adhering to AI-related policies, and 42% noted there were no consequences for failing to comply.
AI Adoption Accelerates Despite Security Concerns
Despite these risks, workplace AI adoption continues to surge, with employees citing significant productivity benefits:
- 60% reported that AI assistants help them complete tasks faster.
- 57% noted improvements in efficiency.
- 49% reported enhanced work performance.
- 84% expressed interest in continuing to use AI at work.
Among those who support AI integration, 51% emphasized its role in creative tasks, while 50% pointed to its ability to automate repetitive processes. However, security experts warn that unregulated AI usage poses substantial risks to data sovereignty, intellectual property protection, and regulatory compliance.
The Five Most Common Types of Sensitive Data Shared with AI
A closer examination of data shared with AI platforms highlights key vulnerabilities:
- Customer Data: Nearly half of all confidential information shared involves customer details. Employees frequently use AI to process claims, summarize documents, and manage customer interactions, potentially violating privacy regulations such as GDPR.
- Employee Records: HR departments and managers input performance evaluations, payroll data, and personally identifiable information (PII), risking compliance breaches and legal repercussions.
- Financial and Legal Information: AI tools are often used for spell-checking, contract translation, and document summarization, exposing financial projections, legal agreements, and merger details.
- Security and Access Credentials: Some employees still enter passwords, encryption keys, and network configurations into AI models, increasing the risk of cyberattacks; a simple pre-submission check of the kind sketched after this list can catch such slips.
- Proprietary Code and Intellectual Property: Developers use AI for debugging and optimization, sometimes inadvertently exposing proprietary algorithms and software architecture, which could lead to competitive disadvantages and security threats.
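To make these categories concrete, the sketch below shows one hedged, hypothetical approach (not drawn from the surveys cited here): a small Python scanner that flags text matching naive patterns for PII, credentials, and financial figures before it is pasted into a public AI tool. The pattern names and thresholds are illustrative assumptions, not a vetted detection rule set.

```python
import re

# Hypothetical illustration only: naive patterns for a few of the sensitive-data
# categories described above. A real deployment would rely on a vetted DLP or
# data-classification service rather than hand-rolled regexes.
PATTERNS = {
    "customer/employee PII": [
        re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),            # email addresses
        re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),    # US-style phone numbers
    ],
    "credentials": [
        re.compile(r"(?i)\b(password|passwd|api[_-]?key|secret)\s*[:=]\s*\S+"),
        re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    ],
    "financial figures": [
        re.compile(r"[$€£]\s?\d[\d,]*(\.\d+)?\s?(million|billion|bn)?", re.I),
    ],
}

def scan_prompt(text: str) -> dict:
    """Return regex matches per category so they can be reviewed before sending."""
    hits = {}
    for category, patterns in PATTERNS.items():
        found = [m.group(0) for p in patterns for m in p.finditer(text)]
        if found:
            hits[category] = found
    return hits

if __name__ == "__main__":
    draft = ("Summarize this thread: contact jane.doe@example.com, "
             "api_key=sk-12345, projected Q3 revenue $4.2 million")
    for category, matches in scan_prompt(draft).items():
        print(f"{category}: {matches}")
```

A check like this only reduces accidental disclosure; it does not replace policy, training, or platform-level controls.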
Amid growing security concerns, major corporations such as Apple, Amazon, and Deloitte have imposed restrictions on AI tools like ChatGPT. Their cautious approach reflects an industry-wide recognition that unrestricted AI use could lead to data breaches and regulatory violations.
Six Strategies to Mitigate Risks
- Establishing and enforcing clear AI usage policies – Develop comprehensive guidelines that define acceptable AI use, ensuring employees understand security risks and compliance requirements.
- Providing employees with mandatory AI security training – Educate staff on the risks of AI misuse, best practices for data protection, and company policies to minimize security vulnerabilities.
- Restricting the entry of customer, employee, and financial data into public AI models – Prevent unauthorized exposure of sensitive data by enforcing strict controls on what information can be shared with AI tools; one possible enforcement point is sketched after this list.
- Avoiding AI for processing legal or compliance-related documents – Minimize regulatory and legal risks by prohibiting the use of AI for drafting or analyzing sensitive contracts, agreements, or compliance materials.
- Prohibiting the sharing of proprietary code, passwords, and encryption keys – Safeguard intellectual property and cybersecurity by banning the entry of confidential business assets into AI platforms.
- Regularly updating AI policies to align with evolving security best practices – Continuously review and adapt AI governance strategies to keep pace with technological advancements and emerging security threats.
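As a hedged illustration of the third strategy, the sketch below shows one possible enforcement point, with hypothetical names throughout: a small gateway function that redacts obvious PII and blocks prompts containing credential-like strings before anything is forwarded to an external AI service. It builds on the same naive patterns as the earlier scanner and is a sketch under those assumptions, not a production control.

```python
import re

# Hypothetical enforcement sketch: redact obvious PII, block credential-like
# content outright. A production gateway would sit between staff and the AI
# provider and use a vetted DLP engine rather than these illustrative patterns.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")
CREDENTIAL = re.compile(r"(?i)\b(password|passwd|api[_-]?key|secret)\s*[:=]\s*\S+")

class PromptBlocked(Exception):
    """Raised when a prompt violates policy and must not leave the network."""

def enforce_policy(prompt: str) -> str:
    """Return a redacted prompt that is safe to forward, or raise PromptBlocked."""
    if CREDENTIAL.search(prompt):
        raise PromptBlocked("credential-like content detected; prompt not forwarded")
    redacted = EMAIL.sub("[REDACTED_EMAIL]", prompt)
    redacted = PHONE.sub("[REDACTED_PHONE]", redacted)
    return redacted

def send_to_ai(prompt: str) -> None:
    # Placeholder for a call to a company-approved AI endpoint.
    print("Forwarding:", enforce_policy(prompt))

if __name__ == "__main__":
    send_to_ai("Draft a reply to customer john@example.com about order 4411")
    try:
        send_to_ai("Debug this snippet: api_key=sk-test-123")
    except PromptBlocked as err:
        print("Blocked:", err)
```

Redacting rather than rejecting routine prompts keeps the productivity benefits employees cite while still keeping customer, employee, and credential data out of public models.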