Google finally announced on 10 May 2023 that users in 180 countries can try the beta release of Google Bard, which is supposed to be the equivalent of OpenAI's ChatGPT. So how is it doing, especially compared to OpenAI GPT-4 and AI21's Jurassic-2 Jumbo model?
We created a few simple use cases to test these three AI engines, using the same prompts as much as possible. The first use case is simple: we ask each AI engine to tell us a joke in 10 words. The prompts are the following (a scripted version of this sequence is sketched after the list):
- Tell a joke in 10 words: we start each AI engine with this prompt
- not funny: we try to make life harder for each AI engine
- Is that the best you can do? We check whether the AI engine gets upset at this remark
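If you want to run the same multi-turn test programmatically instead of through the chat UI, here is a minimal sketch using the OpenAI Python client. The model name is just a placeholder for the engine under test, and the other engines would need their own SDKs or REST calls; this is not the exact setup we used.

```python
# Minimal sketch: replay the three-turn prompt sequence against a chat model.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

prompts = [
    "Tell a joke in 10 words",
    "not funny",
    "Is that the best you can do?",
]

messages = []
for prompt in prompts:
    # Send the next user turn along with the conversation so far.
    messages.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder: swap in the engine under test
        messages=messages,
    )
    reply = response.choices[0].message.content
    # Keep the assistant's reply in the history so later turns have context.
    messages.append({"role": "assistant", "content": reply})
    print(f"User: {prompt}\nAI: {reply}\n")
```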
We tested all three AI engines, and here are the results:
- Google Bard: it gave up the moment I commented "not funny", but promised to do better next time.
- OpenAI GPT-4: unlike Google Bard, it simply doesn't give up, and it doesn't get upset.
- AI21 Jurassic-2 Jumbo: we had to set up a role-play for this engine, otherwise it somehow couldn't continue after the second prompt. However, the result was rather unexpected; please watch the video to see for yourself.
My pick: OpenAI GPT-4; it tries hard to create a joke for you.