Alphabet, the parent company of Google, has issued a cautionary notice to its employees regarding the use of AI chatbots, urging them to refrain from sharing sensitive or confidential information with these conversational tools. The warning covers not only Google’s own Bard chatbot but also rival services such as ChatGPT, developed by Microsoft-backed OpenAI. The move comes as Alphabet seeks to address concerns about potential data leaks and the inadvertent exposure of trade secrets through these AI-powered applications.
As part of ongoing efforts to improve AI technology, human reviewers may access conversations between users and chatbots, which raises concerns about personal privacy and the inadvertent disclosure of confidential information. Furthermore, chatbots are partly trained on user text exchanges, creating the possibility that sensitive data could later be reproduced in responses to other users if certain prompts are used. Alphabet, with a particular emphasis on protecting trade secrets, considers these risks significant enough to warrant employee caution.
Bard, like ChatGPT, is now available for public use. The Bard webpage specifically cautions users against including personally identifiable information, or data that could identify others, in conversations. Google’s data collection practices involve aggregating Bard conversations, related product usage information, location data, and user feedback to improve its products and services, including Bard itself. Bard activity is stored for up to 18 months by default, a retention period users can change to three or 36 months through their Google account settings. To preserve privacy, Bard conversations are disconnected from the user’s Google account before being reviewed by human moderators.
According to Reuters, Alphabet recently expanded its warning to employees, advising them against the direct use of computer code generated by chatbots. While Bard has been known to make occasional “undesired code suggestions,” the tool is still considered a valuable programming aid. By discouraging the verbatim use of chatbot-generated code, Alphabet aims to reduce the risks of unintended behavior or security vulnerabilities making their way into its software.
Alphabet is not alone in taking measures to address privacy and security risks related to AI chatbots. Samsung recently instructed its employees to exercise caution after several instances of sensitive semiconductor-related data being shared with ChatGPT. Similarly, companies like Apple and Amazon reportedly have internal policies in place to mitigate potential risks associated with chatbot usage.