People Matters ANZ
The top 5 sensitive data types employees are feeding AI—and why it’s a risk

Story • 5th Mar 2025 • 5 Min Read

Strategic HR • Skilling • HR Technology • #AdaptableHR • #Cybersecurity • #Artificial Intelligence

Author: Gabriela Paz Y Miño
997 Reads
As employees increasingly integrate AI tools like ChatGPT into their daily work, they unknowingly expose sensitive company and personal data. A new report highlights the five most common types of confidential information being shared—and the risks that come with it.

The adoption of generative AI (GenAI) tools in the workplace is raising serious security and compliance concerns, as employees increasingly input sensitive company data into publicly available AI platforms. 

A recent survey by TELUS Digital Experience revealed that 57% of employees at large enterprises admitted to entering confidential information into AI tools such as ChatGPT, Google Gemini, and Microsoft Copilot.

Conducted in January 2025 via Pollfish, the survey collected responses from 1,000 U.S.-based employees working at companies with at least 5,000 staff members. A key finding was the extensive use of personal AI accounts for work-related tasks, often bypassing corporate IT and security oversight. Notably, 68% of employees accessed GenAI assistants through personal accounts rather than company-approved platforms, contributing to the rise of ‘shadow AI’—the unregulated adoption of AI tools that increases the risk of data leaks and regulatory violations.

Similar trends are evident in Australia, where two out of three office workers use AI tools without formal company approval. This lack of control, combined with limited regulatory oversight, creates an environment where sensitive data is shared with little consideration for security implications.

Sensitive Data at Risk

A study by Harmonic Security found that 8.5% of all prompts entered into GenAI tools contain sensitive data. Among these:

  • Customer data accounts for 46%

  • Employee data comprises 27%

  • Legal and financial records make up 15%

Lack of risk awareness, pressure to boost productivity, and the absence of clear security policies are some of the reasons why employees frequently share sensitive data with AI tools. Many users are unaware that the information they input can be stored or used to refine AI systems, making them susceptible to leaks. Additionally, inadequate training on AI security exacerbates the issue, leaving employees without proper guidance on safe AI usage.

Convenience is another factor. AI tools streamline tasks such as drafting documents, summarizing reports, and managing workflows, making them highly attractive for workplace use. 

However, reliance on personal accounts rather than company-approved platforms increases exposure risks. A false sense of privacy further compounds the problem, as many employees mistakenly believe their data is either deleted immediately or inaccessible to third parties. A lack of enforcement and accountability within organizations reinforces this behavior, weakening overall data security.

Despite corporate policies restricting AI use for sensitive data, many employees continue to bypass these guidelines. The survey found that:

  • 31% admitted to inputting personal details such as names, addresses, emails, and phone numbers.

  • 29% disclosed project-specific information, including unreleased product details and prototypes.

  • 21% entered customer-related data, such as order histories, chat logs, and recorded communications.

  • 11% shared financial data, including revenue figures, profit margins, and budget forecasts.

While 29% of respondents indicated that their companies had clear AI guidelines, enforcement remains inconsistent. Only 24% of employees reported receiving mandatory AI training, while 44% were unsure whether AI policies even existed within their organization. Furthermore, 50% admitted they did not know if they were adhering to AI-related policies, and 42% noted there were no consequences for failing to comply.

AI Adoption Accelerates Despite Security Concerns

Despite these risks, workplace AI adoption continues to surge, with employees citing significant productivity benefits:

  • 60% reported that AI assistants help them complete tasks faster.

  • 57% noted improvements in efficiency.

  • 49% reported enhanced work performance.

  • 84% expressed interest in continuing to use AI at work.

Among those who support AI integration, 51% emphasized its role in creative tasks, while 50% pointed to its ability to automate repetitive processes. However, security experts warn that unregulated AI usage poses substantial risks to data sovereignty, intellectual property protection, and regulatory compliance.

The Five Most Common Types of Sensitive Data Shared with AI

A closer examination of data shared with AI platforms highlights key vulnerabilities:

  • Customer Data: Nearly half of all confidential information shared involves customer details. Employees frequently use AI to process claims, summarize documents, and manage customer interactions, potentially violating privacy regulations such as GDPR.

  • Employee Records: HR departments and managers input performance evaluations, payroll data, and personally identifiable information (PII), risking compliance breaches and legal repercussions.

  • Financial and Legal Information: AI tools are often used for spell-checking, contract translation, and document summarization, exposing financial projections, legal agreements, and merger details.

  • Security and Access Credentials: Some employees still enter passwords, encryption keys, and network configurations into AI models, increasing the risk of cyberattacks.

  • Proprietary Code and Intellectual Property: Developers use AI for debugging and optimization, sometimes inadvertently exposing proprietary algorithms and software architecture, which could lead to competitive disadvantages and security threats.

Amid growing security concerns, major corporations such as Apple, Amazon, and Deloitte have imposed restrictions on AI tools like ChatGPT. Their cautious approach reflects an industry-wide recognition that unrestricted AI use could lead to data breaches and regulatory violations.

Six Strategies to Mitigate Risks

  • Establishing and enforcing clear AI usage policies – Develop comprehensive guidelines that define acceptable AI use, ensuring employees understand security risks and compliance requirements.

  • Providing employees with mandatory AI security training – Educate staff on the risks of AI misuse, best practices for data protection, and company policies to minimize security vulnerabilities.

  • Restricting the entry of customer, employee, and financial data into public AI models – Prevent unauthorized exposure of sensitive data by enforcing strict controls on what information can be shared with AI tools.

  • Avoiding AI for processing legal or compliance-related documents – Minimize regulatory and legal risks by prohibiting the use of AI for drafting or analyzing sensitive contracts, agreements, or compliance materials.

  • Prohibiting the sharing of proprietary code, passwords, and encryption keys – Safeguard intellectual property and cybersecurity by banning the entry of confidential business assets into AI platforms.

  • Regularly updating AI policies to align with evolving security best practices – Continuously review and adapt AI governance strategies to keep pace with technological advancements and emerging security threats.
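One way to put the third and fifth strategies into practice is to screen prompts before they ever reach a public AI model. The sketch below is a minimal, hypothetical pre-submission filter, not a product recommendation: the pattern names and regexes are illustrative assumptions, and a real data-loss-prevention policy would use far broader, organization-specific rules.

```python
import re

# Illustrative detection patterns only -- a real DLP policy would be
# broader and tuned to the organization's data (names, IDs, key formats).
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace likely sensitive substrings with placeholders.

    Returns the redacted prompt plus the list of pattern names that
    fired, so a gateway could log the event or block the request.
    """
    hits = []
    for name, pattern in PATTERNS.items():
        if pattern.search(prompt):
            hits.append(name)
            prompt = pattern.sub(f"[REDACTED-{name.upper()}]", prompt)
    return prompt, hits
```

A gateway sitting between employees and an AI tool could call `redact_prompt` on every request, forwarding the redacted text and alerting security teams when patterns fire, which turns a written policy into an enforced one.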
