A recent study found that ChatGPT answered 35% of finance questions incorrectly, raising concerns over the use of the tool as a financial advisor.1
This coincides with findings that over a third of U.S. adults who use the tool find themselves “dependent” on it for answers, revealing an over-reliance on AI for work-related matters.2
With this in mind, Indusface, a leading application security firm, sought to investigate what personal and professional data Americans might be oversharing with LLMs, and where the boundaries should be drawn.
ChatGPT No-Gos: Never Share This Data
- Work files, such as reports and presentations
One of the most common categories of information shared with AI is work-related files and documents. Over 80% of professionals at Fortune 500 enterprises use AI tools such as ChatGPT to help refine emails, reports, and presentations.3
However, 11% of the data that employees paste into ChatGPT is strictly confidential, such as internal business strategies. It is therefore recommended to strip sensitive details from business reports and presentations before uploading them to ChatGPT, as submitted data may be retained, used to train future models, and surfaced in responses to other users.
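As a rough illustration of what that scrubbing can look like, the Python sketch below strips a few common identifier patterns (email addresses, phone numbers, Social Security numbers) from text before it is pasted into a chatbot. The patterns and placeholder labels are illustrative assumptions, not a complete redaction tool:

```python
import re

# Illustrative identifier patterns; a real pass would also need to cover
# names, client identifiers, project code names, and similar details.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable identifiers with placeholders before the
    text is pasted into an external chatbot."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Contact Jane Doe at jane.doe@acme.com or 555-123-4567."))
# Contact Jane Doe at [EMAIL REDACTED] or [PHONE REDACTED].
```

Note that the plain name "Jane Doe" passes through untouched, which is exactly why pattern-based scrubbing should be treated as a first pass rather than a guarantee.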
- Passwords and access credentials
From a young age, we are taught not to share our passwords with others, which is why we rely on notepads, phones, or even our memories to keep track of them. Some 24% of Americans store their passwords in a note on their device, while 18% save them in a web browser.4
As chatbots increasingly double as both notepad and memory aid, it's important to remember that they are not designed with confidentiality in mind; they are built to learn from what users type, the questions they ask, and the information they provide.
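One lightweight safeguard is to scan a prompt for credential-like strings before it ever leaves your machine. The sketch below uses a few assumed heuristic patterns and is far from exhaustive:

```python
import re

# Heuristic patterns assumed for illustration; not an exhaustive secret scanner.
CREDENTIAL_PATTERNS = [
    re.compile(r"(?i)(password|passwd|pwd|secret|token|api[_-]?key)\s*[:=]\s*\S+"),
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),                      # AWS access key ID format
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private key header
]

def contains_credentials(prompt: str) -> bool:
    """Return True if the prompt appears to include a password or key."""
    return any(p.search(prompt) for p in CREDENTIAL_PATTERNS)

prompt = "Why does login fail? db_password = 'hunter2'"
if contains_credentials(prompt):
    print("Remove credentials before sending this prompt to a chatbot.")
```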
- Personal details, such as your name and address
Although these details might seem trivial in day-to-day life, sharing personal information such as your name, address, and recognizable photos makes you vulnerable to fraud. It is critical to avoid feeding LLMs information that could allow fraudsters to either 1) impersonate you, or 2) create deepfakes, which depict people saying or doing things they never said or did.5
Either scenario could damage both personal and professional reputations. If such information were shared about a colleague without their knowledge and led to fraud or a deepfake, it would create severe distrust and could expose the company to legal action.
This is why AI literacy and education are critical to modern business operations.
- Financial information
LLMs like ChatGPT can be useful for explaining financial topics or even performing some basic financial analysis, but they should never drive a business's financial decisions. Because LLMs predict text rather than compute, their numerical reasoning is unreliable, and feeding financial figures into ChatGPT can produce arithmetic mistakes and potentially harmful business strategies.
It is best practice to use LLMs as an aid to understanding finance, rather than as a tool to calculate figures or make important financial decisions.
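For example, rather than asking a chatbot to compute a loan repayment, the arithmetic can be done directly with the standard amortization formula; the figures below are hypothetical:

```python
# Hypothetical figures: verify financial arithmetic locally instead of
# asking a chatbot to compute it.
principal = 250_000        # loan amount in dollars
annual_rate = 0.06         # 6% nominal annual interest
months = 240               # 20-year term

r = annual_rate / 12
# Standard amortization formula: M = P * r / (1 - (1 + r) ** -n)
payment = principal * r / (1 - (1 + r) ** -months)
print(f"Monthly payment: ${payment:,.2f}")
```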
- Company codebases and intellectual property (IP)
Developers and employees increasingly turn to AI for coding assistance; however, sharing a company codebase poses a major security risk, as it is often a business's core intellectual property. If proprietary source code is pasted into AI platforms, it may be stored, processed, or even used to train future AI models, potentially exposing trade secrets to external parties.
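A minimal sketch of that kind of guardrail is shown below: a check against a denylist of internal names before a snippet is shared externally. The marker names are hypothetical, and a real control would typically sit in a data loss prevention (DLP) gateway or proxy rather than an ad-hoc script:

```python
# Illustrative policy check using a hypothetical denylist of internal names.
INTERNAL_MARKERS = [
    "acme_internal",        # hypothetical internal package namespace
    "payments-core",        # hypothetical proprietary repository name
    "CONFIDENTIAL",
]

def safe_to_share(snippet: str) -> bool:
    """Flag code that references internal packages or repositories
    before it is pasted into an external AI assistant."""
    hits = [marker for marker in INTERNAL_MARKERS if marker in snippet]
    if hits:
        print("Blocked: snippet references", ", ".join(hits))
        return False
    return True

safe_to_share("from acme_internal.billing import invoice_engine")
```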
References
- The Intermediary | ChatGPT gave incorrect answers to 35% of finance questions, research finds
- Infosecurity Magazine | Over a Third of Employees Secretly Sharing Work Info with AI
- Master of Code Global | MOCG Picks: 10 ChatGPT Statistics Every Business Leader Should Know
- Pew Research Center | Password management and mobile security
- Government Accountability Office | Deconstructing Deepfakes—How do they work and what are the risks?