Google, itself one of AI’s biggest backers, is warning its own employees about how they use chatbots, including Bard, its own program.
Amid wide rollout
A Google privacy notice updated on June 1 states: “Don’t include confidential or sensitive information in your Bard conversations.”
The warning comes even as the company markets the program around the world.
Google is rolling out Bard to more than 180 countries and in 40 languages.
Discovery of risks
Chatbots, among them Bard and ChatGPT, are human-sounding programs that use generative artificial intelligence to hold conversations with users and answer prompts.
Researchers have found that similar AI could reproduce the data it absorbed during training, creating a leak risk.
Warnings
Google parent Alphabet has advised employees not to enter its confidential materials into AI chatbots, citing long-standing policy on safeguarding information.
It has also cautioned its engineers to avoid direct use of computer code that chatbots can generate.
Avoiding business harm
Google said it aimed to be transparent about the limitations of its technology.
In disclosing such risks, it also seeks to avoid business harm from the software it launched in competition with ChatGPT.
At stake in the race are billions of dollars of investment along with yet untold advertising and cloud revenue from new AI programs.
Putting up resistance
Google’s caution also reflects what is becoming a security standard for corporations: warning personnel about using publicly available chat programs.
A growing number of businesses around the world have set up guardrails on AI chatbots, among them Samsung, Amazon, and Deutsche Bank, and reportedly Apple as well.
Some companies have developed software to address such concerns.
Software to address issues
For example, Cloudflare is developing a capability for businesses to tag and restrict some data from flowing externally.
Google and Microsoft themselves offer costlier conversational tools to business customers that do not absorb data into public AI models.