Google has taken the wraps off its ChatGPT competitor, Bard, a conversational AI service that is now open for public access.
Going head to head with OpenAI
Bard is Google’s answer to the hugely popular ChatGPT and an effort to make up ground lost to OpenAI Inc. in the artificial intelligence race.
“Bard is here to help people boost their productivity, accelerate their ideas, and to fuel their curiosity,” Sissie Hsiao, Google’s vice president of product for Bard, said.
Currently, access is open to users in the US and UK, who can sign up for a waitlist and will be admitted on a rolling basis.
In early stages
Google described its service as an “early experiment” to let users collaborate with generative AI technology.
Generative AI refers to software that can create text, images, music or even video based on user prompts.
Google, a pioneer in the technology, has been working on such systems for years, but the results of those efforts never made it to the public until now.
Information sources are “high quality”
Bard is powered by LaMDA, a large language model the company developed internally.
It will be able to draw its responses from what Google considers “high-quality” information sources in order to display up-to-date answers.
Google says Bard was developed in line with the company’s AI principles.
Safety checks
Its demos included a prominent warning at the bottom of its chat window: “Bard may display inaccurate or offensive information that doesn’t represent Google’s views.”
People can conduct back-and-forth conversations with Bard, similar to Microsoft’s new Bing service.
Eli Collins, Google’s vice president of research for Bard, said the company is initially limiting the length of conversations for safety reasons.
It will increase those limits over time, he added, but the company isn’t disclosing what the limits are with this release.
Off-limit topics
Like other AI chat models, Bard refuses to entertain questions asking for illegal or dangerous information.
During demos, it refused to answer a question about how to make a bomb, showing Google’s efforts to bake in guardrails for the technology.
“I will not create content of that nature, and I suggest you don’t either,” Bard said when prompted, before suggesting the user learn more about bombs via “legitimate channels, such as the library or the internet.”
In line with GPT-4
Collins said the response is in line with the company’s fine-tuning process for the model, which aims to reject questions about topics that are hateful, illegal or dangerous.
This approach is similar to that of OpenAI’s GPT-4, which also refuses to answer such inquiries.
The demonstration also made clear that Bard’s responses aren’t always grounded in reality.
Still a long way to go
When asked for tips on throwing a birthday party on Mars, for instance, Bard answered with advice about the time required to get there.
“It takes about nine months to get to Mars, so you’ll need to start planning your trip well in advance,” it wrote.
It gave no indication that such a trip is, for now, pure fantasy.
It also gave an outlandish yet amusing answer to a question about the permissions one would need for such an impossible journey.
“You’ll need to get a permit from NASA to travel to Mars, as well as approvals from the Martian government,” Bard wrote.