ChatGPT vs Google Bard for Programming: Which one to choose?

Yaroslav Dobroskok
6 min read · Nov 8, 2023

Link to original post on my website

ChatGPT vs Google Bard for programming tasks

If you are a programmer, I bet you have already tried to delegate some of your tasks to ChatGPT or Google Bard. And sometimes these AI assistants turn out to be genuinely helpful and time-saving. But which one should you choose for regular use?

ChatGPT (left) vs Google Bard (right) interfaces

I spent months comparing ChatGPT’s and Google Bard’s answers to my prompts, and in this article I will share the results of this AI competition.

The key differences

Let’s start from the beginning. Even though ChatGPT and Bard are developed by competing companies (OpenAI and Google, respectively), they have a lot in common.

Both of them are large language models designed to process human-like text. And for both of them, programming skills are just a side effect of natural-language processing.

But what are the key differences between ChatGPT and Google Bard? Here’s the list that I came up with:

  1. First, ChatGPT has both free and paid versions. Google Bard is completely free at the time of writing.
  2. Both are trained on extremely large datasets. However, Bard has access to Google Search, which to me looks like a killer feature, while the free version of ChatGPT is not aware of anything that happened after January 2022.
  3. Both can accept an image as input, but again, with Bard it’s free.
  4. The GPT model is an industry standard for generating and analyzing text, which makes it the better writer and analyst. Bard, with its ability to query the Internet, looks like the better researcher.

Let’s look at their differences using real-life examples.

Examples

Writing an SQL query

Prompt: Write an SQL query that will: Query all the users and show their lifetime value by summing up the “sum” column from table “sales” and grouping them by user_id. Then join the user’s first_name and last_name from the table “users”. take into account only the sales that are greater than $10.

The results: ChatGPT (left) vs Bard (right)

Summary: Both AIs were pretty good at generating the SQL query. Bard made a mistake by grouping on the name fields, but this is not critical. On the other hand, Bard correctly guessed that I’d like to order the users’ LTV in descending order.
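The screenshots are not reproduced here, but for reference, a query that satisfies the prompt could look roughly like the sketch below. This is my own reconstruction based on the prompt rather than a verbatim copy of either chatbot’s answer; the descending ORDER BY reflects the extra guess Bard made.

SELECT u.user_id, u.first_name, u.last_name, ltv.lifetime_value
FROM (
    -- aggregate the lifetime value per user, counting only sales above $10;
    -- the column is literally named "sum" in the prompt, some dialects may require quoting it
    SELECT user_id, SUM(sum) AS lifetime_value
    FROM sales
    WHERE sum > 10
    GROUP BY user_id
) AS ltv
JOIN users u ON u.user_id = ltv.user_id
ORDER BY ltv.lifetime_value DESC;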

Result: a tie!

Generate unit tests

Prompt: Generate unit tests for the following function using console.assert:

const memoize = f => {
  const cache = {};

  return function (...args) {
    const key = args.join(', ');

    const cachedResult = cache[key];

    if (cachedResult !== undefined) return cachedResult;

    const result = f(...args);
    cache[key] = result;
    return result;
  };
};
The results: ChatGPT (left) vs Bard (right)

Summary: Both chatbots generated good test cases with pretty much the same testing functions. However, neither of them tested the memoization itself: all of these tests would have passed even if the memoize function had simply returned the argument function unchanged, as the sketch below illustrates. So I asked both of them to also test the caching mechanism of the memoize function.
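To make that point concrete, here is a hypothetical “memoize” of mine that does no caching at all and would still pass every test that only compares return values:

// A fake memoize that performs no caching: it just forwards the call.
// Tests that only check return values cannot distinguish it from the real one.
const fakeMemoize = f => (...args) => f(...args);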

Prompt: The tests you provided test that the memoize function does not change the behavior of an input function but do not test the caching mechanism. Rewrite them to check that the results are successfully cached.

The results: ChatGPT (left) vs Bard (right)

Summary: This time ChatGPT only changed the test failure messages but did not change the tests themselves. Bard, in contrast, used jest.spyOn to check how many times the input function was called, and this makes it the winner in this section.
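For comparison, the caching behaviour can also be verified while sticking to console.assert from the original prompt. The sketch below is mine, not either chatbot’s output: it wraps the input function with a call counter and asserts that a repeated call does not invoke it again.

// Count how many times the underlying function is actually invoked.
let calls = 0;
const slowSquare = x => {
  calls += 1;
  return x * x;
};

const memoizedSquare = memoize(slowSquare);

console.assert(memoizedSquare(4) === 16, 'first call returns the correct result');
console.assert(memoizedSquare(4) === 16, 'repeated call returns the same result');
console.assert(calls === 1, 'repeated call with the same arguments is served from the cache');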

Result: Bard wins!

Explain and improve the code

Now the task will be to explain what the given regular expression means and to suggest possible improvements to it.

Prompt: Explain what the following regexp does

/^(?:a-z0-9?\.)+[a-z0-9][a-z0-9-]{0,61}[a-z]$/
The results: ChatGPT (left) vs Bard (right)

Summary: Both assistants split the RegExp into smaller pieces and explained each of them nicely. Both guessed that the input RegExp is meant to validate domain names, which is correct. I liked Bard’s answer because it provided more examples of passing and non-passing strings for this case. But ChatGPT did an even better job because it pointed out some errors that are actually present in the initial RegExp.

Now let’s ask both chatbots to improve the RegExp.

Prompt: suggest how the given regexp can be improved

The results: ChatGPT (left) vs Bard (right)

Summary: Bard’s suggestions seem pretty useless; they would be helpful only in a narrow range of situations. ChatGPT, on the other hand, did a good job by fixing an error in the RegExp itself and also changing how the top-level domain should be matched.
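The exact suggestions from the screenshots are not reproduced here, but the kind of fix ChatGPT made can be illustrated with my own reconstruction of a domain-validation pattern (not the chatbot’s verbatim answer): the label part becomes a real character class instead of the literal text a-z0-9, and the top-level domain must be at least two letters.

// A reconstructed domain-name pattern: one or more dot-terminated labels,
// each starting and ending with a letter or digit, followed by a TLD of 2+ letters.
const domainRegExp = /^(?:[a-z0-9](?:[a-z0-9-]{0,61}[a-z0-9])?\.)+[a-z]{2,}$/i;

console.log(domainRegExp.test('example.com'));      // true
console.log(domainRegExp.test('sub.domain.co.uk')); // true
console.log(domainRegExp.test('-bad-.com'));        // false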

Result: ChatGPT wins!

Up-to-date information

When it comes to getting up-to-date information or doing research that involves new technologies, libraries, or frameworks, Bard is an order of magnitude better than the free version of ChatGPT, simply because it can pull in more information through Google search.

I tried the following prompts:

Which Java conferences should I attend in 2023?

Tell me about the Bun JS runtime.

How do I use Angular Signals?

None of the above topics are present in ChatGPT’s training data, so it cannot answer any of them. Bard, however, did a great job answering all three, so the outcome here is clear.

Result: Bard wins!

Conclusion

Both AI assistants do a great job at generating, debugging, and explaining code; in my opinion, it’s pretty much a tie between ChatGPT 3.5 and Bard here.

However, the ability to query the Internet makes Bard a much better researcher, so in this field Bard is definitely the winner.

Please note that the quality of answers from both neural networks may vary depending on the question and even on the time when the question was asked. But with these examples I wanted to showcase how tight the battle between the biggest competitors is. And who knows what changes the near future will bring.

Also keep in mind that there is a paid version of ChatGPT that uses GPT-4, and its answers are considerably better than those generated by GPT-3.5.

So for programming-related tasks I would suggest the following: use Bard as the default, especially when up-to-date information is critical, and use ChatGPT to verify that a given solution is the best one or whenever you are in doubt about Bard’s answer.

If you want to become a 10x better developer — discover my course “Mastering AI for Programming: ChatGPT, Github Copilot” on Udemy

👋🏻 Thank you for reading.

💡 I hope this article inspired you to try using AI assistants for programming.

📣 Please share this story if you think it is useful and follow me to be the first to receive the next stories.

🌎 Visit my website to see more posts related to tech.


Yaroslav Dobroskok

👨🏼‍💻 Lead Software Engineer ✍🏼 Writing about tech, productivity and curious things