Samsung has issued a temporary ban on the use of generative AI tools such as OpenAI’s ChatGPT on its internal networks and company-owned devices due to concerns about privacy and security, according to Bloomberg News. The ban comes after the South Korean firm discovered that some of its employees had leaked internal source code by uploading it to ChatGPT. The company is worried that uploading sensitive information to these platforms risks exposing it publicly and limits Samsung’s ability to delete it afterward. As a result, the firm has asked staff not to use these tools on company-owned devices and to avoid entering company-related information on personal devices.
The ban applies to Samsung’s employees worldwide, and the company has warned that failure to adhere to the security guidelines may result in disciplinary action, including termination of employment. The memo also reveals that Samsung is working on developing in-house solutions to help with translation, summarizing documents, and software development, with plans to allow employees to use AI tools eventually.
This move by Samsung follows similar restrictions imposed by other companies and institutions. JPMorgan has restricted the use of generative AI tools over compliance concerns, while other banks, including Bank of America, Citigroup, Deutsche Bank, Goldman Sachs, and Wells Fargo, have either banned or limited their use. New York City schools have banned ChatGPT over cheating and misinformation fears, and Italy temporarily banned ChatGPT over data protection and child safety concerns, though that ban was lifted last Friday.
Samsung’s concern is that uploading sensitive company information to external servers operated by AI providers increases the risk of public exposure and limits its ability to remove the information after the fact. The policy also follows news of a bug in ChatGPT about a month ago that exposed some users’ chat histories and potentially their payment information to other users of the service. OpenAI reviews conversations users have with ChatGPT to improve its systems and ensure compliance with its policies and safety requirements, and it advises users not to share sensitive information in their conversations. The company has also rolled out a feature similar to a browser’s “incognito mode,” which does not save chat histories and prevents them from being used for training.