I Tested Google Gemini and ChatGPT-4 to Find Which One is Best

Shariq Ahmed
6 min read · Jan 10, 2024


Steak recipes. Restaurant recommendations. Homework solutions. There’s hardly anything ChatGPT can’t do.

When ChatGPT first launched on Nov 30, 2022, people went bananas over it. Everyone was using it for everything. Then, on March 14, 2023, OpenAI launched ChatGPT-4. And it was just astonishingly good! But there was a problem: money. People had to pay $20 a month to use it, which kept many away from GPT-4.

And then there was the usage cap: you could ask ChatGPT-4 only 50 questions every 3 hours. Meanwhile, in February 2023, Google announced Bard, and people started using it as well. But Bard wasn’t as good as ChatGPT-4 when it came to coding problems. So what you’d expect happened: people went back to ChatGPT-4.

You can imagine Google’s reaction. But they kept their chin up, because behind closed doors they were working on something big.

Finally, on December 6, 2023, Google released Gemini, an AI model like ChatGPT. Soon after, its demo video started sweeping across social media, showing a person asking Gemini all kinds of questions. But to be very honest, I found that video too good to be true. Google also claimed Gemini was better than ChatGPT-4. But how true was that claim? To find out, I decided to test both Gemini and ChatGPT-4 myself.

Brief Difference Between Gemini and ChatGPT-4


Gemini and ChatGPT-4 Comparison

Let’s buckle up and see which is better: ChatGPT-4 or Gemini.

1. Time

I asked both Gemini and ChatGPT-4, ‘What time is it?’

ChatGPT-4 answered the question correctly. Gemini, on the other hand, flatly refused to even attempt it!

2. Shapes

Both ChatGPT-4 and Gemini did a wonderful job of identifying the right shape. So I think both can be of great help for learning shapes.

3. Reasoning

I gave a simple riddle to ChatGPT-4 and Gemini. Both performed well here.

But then I decided to push further with a complicated riddle. ChatGPT-4 aced the test! Gemini gave every answer except the correct one.

4. Visual Question Answering

I provided Gemini with an image of coins. There were 15 coins in the image, but Gemini flatly refused to count them.

ChatGPT-4, the one that paved the way for Gemini, counted the coins successfully.

5. Optical Character Recognition (OCR)

I asked Gemini to read out a serial number. As expected, it gave me the wrong answer. But I still appreciate it; at least it tried!

As per my expectations, ChatGPT-4 answered correctly.

Fun fact: Gemini can’t remember the conversation’s context. ChatGPT-4, and even ChatGPT-3.5, can.

6. Document OCR

I provided Gemini with a picture describing the difference between Supabase and Firebase. I can’t tell you how happy I was when Gemini answered this one correctly! Yay!

Again, ChatGPT-4 gave me an accurate answer.

7. Object Identification

Given the massive amount of data Gemini is trained on, I expected it to ace my next test. But it was depressing to see it give the wrong answer. Instead of simply saying the image contained a bearded dragon, it said the image contained a central bearded dragon. Argh!

ChatGPT-4 gave the right answer. Again!

8. Movie Identification

I thought that maybe Gemini is a movie fan, so I gave it an image from a Willy Wonka movie. Contrary to my expectations, it refused to even read the image. Sad reacts :(

ChatGPT-4 didn’t get it right either, but at least it tried.

9. Lyrics

By now, I knew who would perform better, but I still decided to give Gemini one more chance. The answer wasn’t up to the mark. I mean, Gemini should at least have given some lyrics. What do you think?

10. Informal Words

Gemini was 90% correct. ChatGPT-4 was 100% correct.

An open letter to Google Team: Hello isn’t an informal word 😁

So, all in all, ChatGPT-4 passed my tests with flying colors. And now I’m convinced that Google’s claims were just hot air. As of early 2024, ChatGPT-4 is leading the whole race of AI tools. Google’s Gemini is great, sure, but it lags behind. I’m not sure for how long, though; it seems like Google will soon catch up. Fingers crossed!
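For the curious, here’s a quick tally of the ten tests above, sketched in Python. The win/tie calls are my own readings of each result, not any official benchmark:

```python
from collections import Counter

# My ten head-to-head tests.
# "gpt4" = ChatGPT-4 won, "tie" = both did well.
results = {
    "Time": "gpt4",
    "Shapes": "tie",
    "Reasoning": "gpt4",                # tied on the simple riddle; GPT-4 won the hard one
    "Visual Question Answering": "gpt4",
    "OCR": "gpt4",
    "Document OCR": "tie",
    "Object Identification": "gpt4",
    "Movie Identification": "gpt4",     # neither got it right, but GPT-4 at least tried
    "Lyrics": "gpt4",
    "Informal Words": "gpt4",           # 100% vs 90%
}

score = Counter(results.values())
print(score)  # Counter({'gpt4': 8, 'tie': 2})
```

Eight wins and two ties for ChatGPT-4, with Gemini never taking a round outright; that’s the whole story of my experiment in one dictionary.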

Which Should You Use Between Gemini and ChatGPT-4 for Scientific Publications?

When it comes to research, like actual research, no one should rely on an AI tool. Scientific observations should be made in labs, not generated by AI. But if you want to fact-check something, by all means use ChatGPT-4 or Gemini. The good thing is that both back their answers with popular articles.

ChatGPT-4 uses Bing. Gemini, on the other hand, uses Google.

All in all, between ChatGPT-4 and Gemini, ChatGPT-4 is the better choice. Gemini is only good at basic tasks like shape identification. ChatGPT-4 is good at everything: visual questions, movie identification, OCR, document OCR, lyrics, time, riddles, and shape identification.


Shariq Ahmed

A Full Stack Developer working with React | React Native | Next.js | Node.js | Nest.js | GraphQL | Firebase | TypeScript