
Google Bard Chatbot Trial Launches in USA and UK

T0@st

News Editor
Joined
Mar 7, 2023
Messages
2,077 (3.16/day)
Location
South East, UK
Today we're starting to open access to Bard, an early experiment that lets you collaborate with generative AI. We're beginning with the U.S. and the U.K., and will expand to more countries and languages over time. This follows our announcements from last week as we continue to bring helpful AI experiences to people, businesses and communities.

You can use Bard to boost your productivity, accelerate your ideas and fuel your curiosity. You might ask Bard to give you tips to reach your goal of reading more books this year, explain quantum physics in simple terms or spark your creativity by outlining a blog post. We've learned a lot so far by testing Bard, and the next critical step in improving it is to get feedback from more people.



About Bard

Bard is powered by a research large language model (LLM), specifically a lightweight and optimized version of LaMDA, and will be updated with newer, more capable models over time. It's grounded in Google's understanding of quality information. You can think of an LLM as a prediction engine. When given a prompt, it generates a response by selecting, one word at a time, from words that are likely to come next. Picking the most probable choice every time wouldn't lead to very creative responses, so there's some flexibility factored in. We continue to see that the more people use them, the better LLMs get at predicting what responses might be helpful.
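The "prediction engine" description above, picking one word at a time but not always the most probable one, corresponds to temperature sampling over next-token scores. Here is a minimal sketch with toy scores (this is illustrative, not Bard's actual decoding code): a near-zero temperature behaves greedily, while a higher temperature adds the "flexibility" the post mentions.

```python
import math
import random

def sample_next_token(logits, temperature=0.8):
    """Sample the index of the next token from raw model scores.

    Lower temperatures sharpen the distribution toward the most
    probable token; higher temperatures flatten it, giving more
    varied (more "creative") choices.
    """
    # Softmax with temperature, shifted by the max for stability.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to those probabilities.
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# Toy scores for four candidate next words.
scores = {"plant": 2.0, "flower": 1.2, "rock": 0.1, "cloud": -1.0}
idx = sample_next_token(list(scores.values()))
print(list(scores)[idx])
```

With the default temperature, "plant" is picked most often but not always, which is exactly the behavior described above.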

While LLMs are an exciting technology, they're not without their faults. For instance, because they learn from a wide range of information that reflects real-world biases and stereotypes, those sometimes show up in their outputs. And they can provide inaccurate, misleading or false information while presenting it confidently. For example, when asked to share a couple suggestions for easy indoor plants, Bard convincingly presented ideas…but it got some things wrong, like the scientific name for the ZZ plant.



Although it's important to be aware of challenges like these, there are still incredible benefits to LLMs, like jumpstarting human productivity, creativity and curiosity. And so, when using Bard, you'll often get the choice of a few different drafts of its response so you can pick the best starting point for you. You can continue to collaborate with Bard from there, asking follow-up questions. And if you want to see an alternative, you can always have Bard try again.



Bard is a direct interface to an LLM, and we think of it as a complementary experience to Google Search. Bard is designed so that you can easily visit Search to check its responses or explore sources across the web. Click "Google it" to see suggestions for queries, and Search will open in a new tab so you can find relevant results and dig deeper. We'll also be thoughtfully integrating LLMs into Search in a deeper way — more to come.

Building Bard responsibly

Our work on Bard is guided by our AI Principles, and we continue to focus on quality and safety. We're using human feedback and evaluation to improve our systems, and we've also built in guardrails, like capping the number of exchanges in a dialogue, to try to keep interactions helpful and on topic.
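One guardrail named above, capping the number of exchanges in a dialogue, amounts to a simple turn counter. The sketch below is an illustrative assumption; the class and parameter names (`CappedDialogue`, `max_turns`) are invented here and say nothing about Bard's real implementation.

```python
class CappedDialogue:
    """Wraps a chat session and refuses input past a turn limit."""

    def __init__(self, max_turns=20):
        self.max_turns = max_turns  # illustrative cap, not Bard's
        self.turns = 0

    def ask(self, prompt):
        # Refuse further exchanges once the cap is reached,
        # keeping long sessions from drifting off topic.
        if self.turns >= self.max_turns:
            return "Turn limit reached; please start a new chat."
        self.turns += 1
        return f"(model response to: {prompt!r})"
```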



Sign up to try Bard

In case you were wondering: Bard did help us write this blog post — providing an outline and suggesting edits. Like all LLM-based interfaces, it didn't always get things right. But even then, it made us laugh.



We'll continue to improve Bard and add capabilities, including coding, more languages and multimodal experiences. And one thing is certain: We'll learn alongside you as we go. With your feedback, Bard will keep getting better and better.

You can sign up to try Bard at bard.google.com. We'll begin rolling out access in the U.S. and U.K. today and expanding over time to more countries and languages.

Until next time, Bard out!

 
Joined
Jun 18, 2021
Messages
2,570 (2.00/day)
You can practically smell the stench of Google sweat.

Not really, they've been at it for longer than anyone else, just didn't release anything to the public. OpenAI forced their hand, but they've been at it and producing results long before them

The long road to LaMDA

LaMDA’s conversational skills have been years in the making. Like many recent language models, including BERT and GPT-3, it’s built on Transformer, a neural network architecture that Google Research invented and open-sourced in 2017. That architecture produces a model that can be trained to read many words (a sentence or paragraph, for example), pay attention to how those words relate to one another and then predict what words it thinks will come next.
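The mechanism described above, each word paying attention to how it relates to the others, is scaled dot-product attention, the core operation of the Transformer. A minimal pure-Python sketch over toy vectors (no learned weights, just the score/softmax/mix steps):

```python
import math

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of vectors.

    For each query, score every key, softmax the scores, and
    return the weighted mix of the value vectors.
    """
    d = len(keys[0])
    out = []
    for q in queries:
        # Dot-product scores, scaled by sqrt(dimension).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        # Softmax (shifted by the max for numerical stability).
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # Weighted mix of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

A query that closely matches one key ends up reading almost entirely from that key's value, which is how the model learns which earlier words matter for predicting the next one.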

Case in point: openai/gpt-3 uses the technology that Google invented before anyone else.
 
Joined
Oct 15, 2004
Messages
189 (0.03/day)
Location
Peterborough, UK
System Name IONE
Processor AMD Ryzen 9 5900X
Motherboard ASUS STRIX B550-A Gaming
Cooling Noctua NH-U12S SE-AM4
Memory 128GB (4x32GB) Corsair DDR4 Vengeance LPX Black, PC4-25600 (3200), CMK128GX4M4E3200C16
Video Card(s) PNY GeForce RTX 3080 12GB
Storage Samsung 980 1TB NVMe (system), Lexar NM790 4TB NVMe (temp), 16x Seagate IronWolf 10TB RAID6
Display(s) Dell UP3017
Case Lian-Li PC-777B
Audio Device(s) Focal Alpha 65 Evo
Power Supply Corsair AX1200
Mouse Logitech M510
Keyboard Keychron Q10, brass plate, Kailh Box Summer switches and PBT Cherry keycaps
Software Xubuntu 22.04
Benchmark Scores N/A
After using both ChatGPT and Bard, Google have quite a lot of catching up to do, but eventually I'm sure they will get ahead.

It's a bit like the Google Home vs Amazon Echo launch.
 
Joined
Apr 19, 2018
Messages
1,227 (0.50/day)
Processor AMD Ryzen 9 5950X
Motherboard Asus ROG Crosshair VIII Hero WiFi
Cooling Arctic Liquid Freezer II 420
Memory 32Gb G-Skill Trident Z Neo @3806MHz C14
Video Card(s) MSI GeForce RTX2070
Storage Seagate FireCuda 530 1TB
Display(s) Samsung G9 49" Curved Ultrawide
Case Cooler Master Cosmos
Audio Device(s) O2 USB Headphone AMP
Power Supply Corsair HX850i
Mouse Logitech G502
Keyboard Cherry MX
Software Windows 11
Hahaha. I love it when Google gets taken down a peg or two.
 
Joined
Aug 20, 2007
Messages
21,546 (3.40/day)
System Name Pioneer
Processor Ryzen R9 9950X
Motherboard GIGABYTE Aorus Elite X670 AX
Cooling Noctua NH-D15 + A whole lotta Sunon and Corsair Maglev blower fans...
Memory 64GB (4x 16GB) G.Skill Flare X5 @ DDR5-6000 CL30
Video Card(s) XFX RX 7900 XTX Speedster Merc 310
Storage Intel 5800X Optane 800GB boot, +2x Crucial P5 Plus 2TB PCIe 4.0 NVMe SSDs
Display(s) 55" LG 55" B9 OLED 4K Display
Case Thermaltake Core X31
Audio Device(s) TOSLINK->Schiit Modi MB->Asgard 2 DAC Amp->AKG Pro K712 Headphones or HDMI->B9 OLED
Power Supply FSP Hydro Ti Pro 850W
Mouse Logitech G305 Lightspeed Wireless
Keyboard WASD Code v3 with Cherry Green keyswitches + PBT DS keycaps
Software Gentoo Linux x64 / Windows 11 Enterprise IoT 2024
Not really, they've been at it for longer than anyone else, just didn't release anything to the public. OpenAI forced their hand, but they've been at it and producing results long before them



Case in point: openai/gpt-3 uses the technology that Google invented before anyone else.
This. Google invented nearly all the stuff all your "better" models use, people. They aren't some kind of noob here.

I believe this is based on the same model that Google fired an engineer over: the one who was firmly convinced the model had become sentient and deserved rights. As silly as that was, this is NOT a primitive model.

 

Wye

Joined
Feb 15, 2023
Messages
204 (0.30/day)
After decades of AI research, what we have in 2023 are glorified search engines with a "thinking loop" to simulate thinking and a type-delay to simulate human-like typing.
It is simply gross.
 
Joined
Jun 18, 2021
Messages
2,570 (2.00/day)
"thinking loop" to simulate thinking and type-delay to simulate human like typing

The marketing might say that, but it's far from the only reason (assuming it's even a real reason to begin with, which I doubt). These things have to have delays because the queries take time to resolve and are freaking expensive.

One of the big challenges ahead for something like ChatGPT and Bard and all these tools will be profitability; those servers don't run on hopes and dreams, and green hard cash is necessary to keep the lights on.

A couple of months ago, when Alphabet took a dive, I saw an estimate from Morgan Stanley that put an AI-infused Google search at around a penny per query. Doesn't sound like much, right? How many Google searches are done again? Google's first result says 6.3 million a minute.

This article from 2 days ago puts it much, much worse: an estimated 36 cents per ChatGPT query.
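Taking the quoted estimates at face value, the arithmetic is easy to run. Both inputs are third-party estimates from the posts above, not confirmed figures:

```python
# Back-of-envelope cost of LLM-powered search at Google scale.
cost_per_query = 0.01        # Morgan Stanley estimate: ~1 cent per AI query
queries_per_minute = 6.3e6   # quoted Google search volume

per_minute = cost_per_query * queries_per_minute
per_day = per_minute * 60 * 24
print(f"${per_minute:,.0f} per minute, ${per_day / 1e6:,.1f}M per day")
```

That works out to roughly $63,000 a minute, or about $90M a day, before even considering the 36-cents-per-query estimate for ChatGPT.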


 