How to Effectively Use ChatGPT at Work

Learn how to use smart strategies and what pitfalls to watch out for when using ChatGPT at work

SAP Design
Experience Matters
7 min read · Jul 17, 2024

Written by Dr. Marieke Storm, Communications and Content Senior Specialist at SAP Design

[Image: A professional woman working thoughtfully at her laptop in a bright, modern office, with a coffee cup and notebook on the desk.]

It’s safe to say that ChatGPT caused an avalanche. Since the launch of the AI chatbot, built on the language model GPT-3.5, on November 30, 2022, companies have been racing to integrate AI into their products. This rapid development of AI tools forces us to become (generative) AI literate as quickly as possible, which leads us to the question: how can we use generative AI tools intelligently (pun intended) and effectively?

LLMs and ChatGPT

First, let’s recap what we’re actually talking about. ChatGPT is an application built on a large language model. Large language models are neural networks trained on vast corpora of text. Their emergence depended on the increasing availability of data (the internet) and the widespread adoption of cloud computing, which provides greater computing power without expensive in-house AI infrastructure (Flood). ChatGPT is currently the leading tool due to the sheer volume of its training corpus and the efficacy of its neural networks (Norton Rose Fulbright).

Programs such as ChatGPT compile their texts based on probability parameters. Thus, inspected closely, there is nothing particularly ‘intelligent’ about them. This is why Emily M. Bender et al. call them “stochastic parrots”. And this is something we should be aware of at all times: generative AI tools don’t think.

They aren’t search engines either. Their information isn’t up to date, and the way they compile information is different: they don’t rely on one specific source but rather generate text by predicting the most frequent word collocations for the subject in their corpus (Kuzub).
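To make the “word collocation” idea concrete, here is a deliberately simplified toy sketch: it builds a table of which word follows which in a tiny made-up corpus, then generates text by repeatedly sampling a likely next word. This is an illustration of the statistical principle only; real LLMs use neural networks over subword tokens, not a lookup table like this.

```python
import random
from collections import defaultdict

# A tiny made-up corpus for illustration purposes.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Record which words follow which word (a simple collocation table).
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start, length=5, seed=0):
    """Generate text by repeatedly sampling a word that has
    followed the previous word somewhere in the corpus."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:  # no known continuation
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

Every continuation the sketch produces is a word pairing it has actually seen, which is also why such a generator can sound fluent while “knowing” nothing: it only parrots probable sequences.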

Why is it worth emphasizing all of this again? Because the most common mistake we all tend to make is to think of LLM output as meaningful text (Bender et al. 616). And it isn’t. At least not per se.

Problems with ChatGPT

In short, there are certain problems with the text output of large language models such as ChatGPT, which you as their user should be (and I’m sure you already are) aware of:

  • It’s biased and can be offensive, because it reproduces the biases and prejudices of the internet sources it uses, and because it draws only on internet sources (with their white predominance) (Bender et al. 613 ff.).
  • It creates false information (by putting things together that don’t belong together).
  • It’s not up to date. (For example, the data of the March edition of GPT-4 end in 2021, though later versions are improving) (ZDNET).
  • It makes mistakes (it lacks mathematical and logical skills, as it simply doesn’t work that way) (for example Beguš et al.).
  • It doesn’t create unique outputs (Norton Rose Fulbright).
  • It sometimes produces text that is recognizably created by generative AI (Norton Rose Fulbright).

This is not to talk down the enormous achievement of developing such a tool. And the huge success speaks for itself: Within the first two months of its launch it had 100 million users — an unprecedented achievement (BR24). It is and remains a game-changer.

Companies and AI

So, what is the status quo of AI in the business world? 14% of German companies are currently using AI, and a further 23% intend to use it within the next three years (tagesschau). One third of all German companies don’t have an AI policy yet. But that doesn’t mean AI isn’t being used: according to a survey conducted by Salesforce, more than one in four employees uses generative AI, half of them without their company’s knowledge (BR24). Similarly, 56% of US employees use generative AI at work, even though only 26% of US businesses have an established policy for it. What’s really disconcerting is that around 75% of these users believe that the work produced by generative AI matches the quality of an experienced or expert co-worker’s. This refers mainly to text production tools such as ChatGPT: 68% use it to draft texts, 60% to brainstorm, and 50% for background research (Investopedia).

There is also an increasing concern within companies as regards the leaking of sensitive information. That’s why businesses such as Samsung, JPMorgan, or Deutsche Bank have imposed bans on employees using generative AI at work. Samsung communicated that this decision was made after an internal source code was inadvertently uploaded to ChatGPT (The Verge).

However, as many employees have grown used to working with these tools, many choose to continue using them clandestinely, as a study from February 2023 found. Methods of circumventing workplace bans are discussed on popular platforms, so even though 75% of IT companies are now considering a ban on generative AI, bans don’t seem very viable (BBC Worklife).

Thus, next to urgently needed AI policies and secure access to generative AI tools, education and training are the way forward.

What not to do when using ChatGPT at work

And to start off with a little upskilling, here’s a list of no-gos that’s easy to implement:

  • Don’t access ChatGPT directly.
    Use the safe channels that your company provides or at least a VPN-enabled device for data protection purposes.
  • Don’t put internal/sensitive data into ChatGPT (unless your company’s data protection policy states that you can safely do so).
    Even though the ChatGPT input field looks a bit like a search engine, you typically enter much more information, and ChatGPT retains the right to use that information. This means that information sensitive to your company may turn up in answers to other people’s queries (Norton Rose Fulbright).
  • Don’t directly copy texts from ChatGPT.
    See the reasons above, and also because of possible copyright infringement, as the tool was trained on copyrighted works (Norton Rose Fulbright).
  • Don’t rely on ChatGPT instead of upskilling (Haynes).
    How would you know that what ChatGPT produces is good if you don’t know anything about the subject matter or text type? Also, ChatGPT tends to produce very similar texts of a given type, so it becomes repetitive, which a trained writer would know how to avoid. So when in doubt, ask a writing expert.
  • Don’t use ChatGPT for factual answers.
    If you’re looking for facts, use proper sources, not this unfiltered mish-mash of millions of websites (Ovide).
  • Don’t use it for coding.
    A recent study has shown that code quality has dropped since the increased use of generative AI tools such as GitHub Copilot and ChatGPT. There has been a rise in repetitive code, which makes it harder to read. The researchers warn that cleaning up code created by generative AI may take as long as, or even longer than, writing it without these tools.

How to use ChatGPT instead

So, shouldn’t we use ChatGPT and similar tools at all? Of course we should! But in an informed way. As Alena Kuzub puts it:

If used properly, ChatGPT can make a paper shorter, rewrite some passages, check grammar, and improve or change the style of text. However, users will need to give the chatbot a detailed prompt on what they want it to do.

The point therefore is, yes, you can use this tool in many ways, but you need to be able to use it properly. Let’s first focus on things you can use ChatGPT for without bothering too much about the art of prompt writing. These will already improve your work quality and efficiency with immediate effect:

  • To correct your spelling.
    Especially if you’re a non-native speaker of English, you can use ChatGPT to help you with your language. Just make sure to tell it which kind of English you’re writing.
  • To find stylistic inconsistencies in your text.
    Use ChatGPT’s ability to recognize text types and their stylistic features to improve texts that you’ve written yourself, especially if you’ve written them in a foreign language.
  • As a source for individual words and phrases.
    Each text type and subject uses a set of specific words and phrases, and ChatGPT is a good way to get a feel for the language used. It’s important that you copy only individual words, not entire texts.
  • As a basis for ideas/arguments.
    If you don’t know anything about a subject matter, it might well be your first go-to point to get a rough understanding of what is going on.
  • For brainstorming: To add further ideas to your list.
  • To get quick overviews of subject matters that are only of marginal importance to you.

If you’re using it for anything factual, cross-check. As with Wikipedia, don’t use ChatGPT as your source, but rather as inspiration that leads you to a first outline and to other sources.

If you want to use it for more complex writing, you’ll need to learn how to write prompts effectively. Learn all about that in the next article.

Homo ChatGPT Sapiens Est

It has hopefully become clear that, although programs such as ChatGPT are great tools, their work needs to be heavily edited by us. Joern Keller, Executive Vice President and Chief Product Officer of SAP Business Network, remarks the following about predictive AI:

It is important to note that autonomous functionality does not obviate the need for humans. Quite the contrary! There can be no replacement for the carefully considered judgment of a seasoned business leader.

Replace “business leader” with ‘user’, and this quote also applies to texts produced by generative AI. Treat them as suggestions, but decide wisely what to use and what to discard. Always remember: you are the only sapient entity present.

Experience matters. Follow our journey as we transform the way we build products for enterprise on www.sap.com/design.
