In the realm of text generation, ChatGPT has become a name on everyone's lips. Its remarkable achievements include breezing through a business school exam, leaving educators bewildered as they sought to catch cheaters, and serving as a trusty companion in composing eloquent emails for both professional and personal connections. It is no wonder that ChatGPT's popularity has skyrocketed to unprecedented heights.
According to an analysis by Swiss bank UBS, ChatGPT is the fastest-growing app in history. Just two months after its debut, it reached an estimated 100 million active users in January. To put that feat into perspective, TikTok took nine months to hit the same milestone. Yet even as ChatGPT gains traction among businesses, some companies are advising their employees to steer clear of it.
Here are some of the companies that have restricted or banned its use:
Apple
Apple banned its employees from using the AI chatbot ChatGPT, citing concerns about data leaks. The company is developing its own AI technology and is worried that employees using ChatGPT could expose confidential information.
Amazon
Amazon has warned its employees not to share code with the AI chatbot ChatGPT. The company is concerned that the bot could be used to access and steal confidential information. The warning came after ChatGPT was reportedly able to generate responses that mimicked internal Amazon data.
Walmart
Walmart Global Tech restricted the use of ChatGPT after it was found to be sharing confidential corporate and customer information. A leaked memo stated that ChatGPT had been blocked following "activity that presented risk to our company."
JPMorgan Chase
JPMorgan Chase also prohibited its employees from using ChatGPT. The organisation's decision is consistent with its rules on the use of third-party software, though the company declined to comment on its policy on employees using ChatGPT for work.
Samsung
Samsung banned the use of ChatGPT after employees accidentally revealed sensitive information to the chatbot. According to Bloomberg, a memo to employees announced the restriction of generative AI systems on company-owned devices and internal networks. Samsung employees had shared source code with ChatGPT to check for errors and had used it to summarise meeting notes.
Citigroup
Citigroup has prohibited access to the chatbot as part of its automatic restrictions on third-party software, according to a Bloomberg report citing a source familiar with the matter. The restrictions are in place to protect the company's data and systems from potential security risks.
Goldman Sachs
Goldman Sachs has also limited its employees' use of ChatGPT. According to a Bloomberg report, the company disabled access to the chatbot as part of its automatic limitations on third-party software.
Central Bank of Ireland
Ireland’s central bank has become the latest financial institution to bar its employees from using ChatGPT. According to a report by The Business Post, the Central Bank of Ireland imposed the ban in line with its cybersecurity rules. A spokesperson for the bank told the newspaper that the central bank had “implemented appropriate and relevant technical and organisational measures to ensure the ongoing protection of the organisation.”
Deutsche Bank
Deutsche Bank has also disabled access to ChatGPT for its employees, according to a spokesperson for the lender, who cited concerns about the security and privacy of the sensitive information the chatbot can collect and store.
Wells Fargo
Wells Fargo has also blocked access to ChatGPT for its employees, according to a spokesperson for the bank. The spokesperson said the decision was made as part of the bank's standard control procedures for implementing third-party software. In addition to the security risks, there are also concerns about the accuracy and reliability of ChatGPT's output.
Verizon
According to IBL News, Verizon has blocked ChatGPT from its corporate systems. The company says that ChatGPT could put sensitive customer data, such as phone numbers and addresses, at risk.
Northrop Grumman
Northrop Grumman, an aerospace and defence technology company, has restricted its employees from using tools like ChatGPT in their work, according to The Wall Street Journal. The company told the outlet that it was limiting employee use until the technology is fully vetted.
These moves highlight the need for companies to carefully weigh the security risks of AI-powered tools. Organisations need to put appropriate safeguards in place to protect sensitive information, and they need to train employees on how to use these tools safely.