Apple Prohibits Employees from Utilizing ChatGPT and Bard AI Tools Internally

Apple has imposed a strict internal ban on the use of chatbots, including ChatGPT and Google Bard, as well as other AI tools built on large language models (LLMs).

In an internal memo obtained by the Wall Street Journal, Apple cautioned employees against using these artificial intelligence tools in the course of their work at the company.

The primary concern driving Apple’s decision is the potential risk of data leakage. Similar incidents involving other companies have raised alarms, prompting Apple to take preventive measures.

AI tools rely on collecting and analyzing vast amounts of data, and the models behind them are refined using user inputs and interactions. Any work-related data an employee enters into a prompt is transmitted to the provider's servers, where it may be stored and could potentially surface to other users.
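
To illustrate the concern, here is a minimal sketch, assuming the official OpenAI Python client and a hypothetical internal document, showing how any text an employee pastes into a prompt leaves the company network as part of the API request:

```python
# Minimal sketch: assumes the OpenAI Python client (openai>=1.0) and a
# hypothetical internal file. Any text placed in the prompt is sent to the
# provider's servers as part of the request body.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical confidential material an employee might paste in for "help".
internal_notes = open("unreleased_product_spec.txt").read()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user",
         "content": f"Summarize this document:\n\n{internal_notes}"},
    ],
)

# The full document left the company network the moment the request was sent;
# depending on the provider's retention policy, it may also be stored.
print(response.choices[0].message.content)
```

This is the scenario the bans are aimed at: the leak happens at the point of the request itself, regardless of what the chatbot replies.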

Before Apple’s ban, companies such as Amazon, Verizon, and Samsung had already prohibited the use of AI tools during work hours.

Reports indicate that Samsung’s decision, in particular, stemmed from a data breach caused by an employee’s inadvertent mistake. Samsung explicitly warned its employees that non-compliance with the new regulations could result in disciplinary actions, including termination.

Sources within Apple revealed that the company is currently developing its own lineup of AI applications for public use, akin to the chatbots that have gained prominence in recent months.

Microsoft, in collaboration with OpenAI, introduced the public to the “Bing” chatbot, built on the GPT-4 artificial intelligence model. Google, too, has entered the domain with its experimental Bard chatbot. Apple, meanwhile, has yet to release a comparable product, leaving industry watchers eager to learn more at the upcoming Worldwide Developers Conference (WWDC) in 2023.