
The top 5 sensitive data types employees are feeding AI—and why it’s a risk

• By Gabriela Paz
The adoption of generative AI (GenAI) tools in the workplace is raising serious security and compliance concerns, as employees increasingly input sensitive company data into publicly available AI platforms. 

A recent survey by TELUS Digital Experience revealed that 57% of employees at large enterprises admitted to entering confidential information into AI tools such as ChatGPT, Google Gemini, and Microsoft Copilot.

Conducted in January 2025 via Pollfish, the survey collected responses from 1,000 U.S.-based employees working at companies with at least 5,000 staff members. A key finding was the extensive use of personal AI accounts for work-related tasks, often bypassing corporate IT and security oversight. Notably, 68% of employees accessed GenAI assistants through personal accounts rather than company-approved platforms, contributing to the rise of ‘shadow AI’—the unregulated adoption of AI tools that increases the risk of data leaks and regulatory violations.

Similar trends are evident in Australia, where two out of three office workers use AI tools without formal company approval. This lack of control, combined with limited regulatory oversight, creates an environment where sensitive data is shared with little consideration for security implications.

Sensitive Data at Risk

A study by Harmonic Security found that 8.5% of employee prompts entered into generative AI tools contain sensitive data.
Lack of risk awareness, pressure to boost productivity, and the absence of clear security policies are some of the reasons why employees frequently share sensitive data with AI tools. Many users are unaware that the information they input can be stored or used to refine AI systems, leaving that data susceptible to leaks. Inadequate training on AI security exacerbates the issue, depriving employees of proper guidance on safe AI usage.

Convenience is another factor. AI tools streamline tasks such as drafting documents, summarizing reports, and managing workflows, making them highly attractive for workplace use. 

However, reliance on personal accounts rather than company-approved platforms increases exposure risks. A false sense of privacy further compounds the problem, as many employees mistakenly believe their data is either deleted immediately or inaccessible to third parties. A lack of enforcement and accountability within organizations reinforces this behavior, weakening overall data security.

Despite corporate policies restricting AI use for sensitive data, the survey found that many employees continue to bypass these guidelines.

While 29% of respondents indicated that their companies had clear AI guidelines, enforcement remains inconsistent. Only 24% of employees reported receiving mandatory AI training, while 44% were unsure whether AI policies even existed within their organization. Furthermore, 50% admitted they did not know if they were adhering to AI-related policies, and 42% noted there were no consequences for failing to comply.

AI Adoption Accelerates Despite Security Concerns

Despite these risks, workplace AI adoption continues to surge, with employees citing significant productivity benefits.

Among those who support AI integration, 51% emphasized its role in creative tasks, while 50% pointed to its ability to automate repetitive processes. However, security experts warn that unregulated AI usage poses substantial risks to data sovereignty, intellectual property protection, and regulatory compliance.

The Five Most Common Types of Sensitive Data Shared with AI

A closer examination of the data shared with AI platforms highlights key vulnerabilities.

Amid growing security concerns, major corporations such as Apple, Amazon, and Deloitte have imposed restrictions on AI tools like ChatGPT. Their cautious approach reflects an industry-wide recognition that unrestricted AI use could lead to data breaches and regulatory violations.

Six Strategies to Mitigate Risks

• Regularly updating AI policies to align with evolving security best practices – Continuously review and adapt AI governance strategies to keep pace with technological advancements and emerging security threats.
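Policy updates can be paired with technical controls. As a minimal illustrative sketch (not a production data-loss-prevention system, and not described in the survey itself), an organization could scrub obviously sensitive patterns from prompts before they leave the corporate network; the patterns and placeholder labels below are assumptions for illustration only:

```python
import re

# Illustrative patterns only; a real DLP deployment would use a vetted,
# far more comprehensive detection library.
PATTERNS = {
    "EMAIL": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # rough payment-card shape
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),     # U.S. Social Security number
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with typed placeholders
    before the prompt is forwarded to an external AI tool."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Email jane.doe@example.com about SSN 123-45-6789"))
```

Filters like this catch only well-structured identifiers; free-form confidential text (strategy documents, source code) still requires policy, training, and enforcement.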