Writing About Technical Topics with AI

Brainstorming Examples

Margaret Eldridge
4 min read · Jul 1, 2024

The curse of knowledge (or curse of expertise) occurs when familiarity with a topic makes it difficult to see that topic from the perspective of a novice.

Consider this definition:

The Quantum Approximate Optimization Algorithm (QAOA) is a hybrid iterative method for solving combinatorial optimization problems.

Okay, what’s wrong with that? Nothing, if you are an expert in quantum computing algorithms. But when you are learning, the definition is frustratingly abstract and jargon-packed.

People who are new to a topic need much more than definitions. They need examples. More importantly, the examples should feel familiar and relate to things they already understand. For many tech writers, simple, concrete examples are a challenge to create precisely because of their expertise. AI can help.

Given strong prompts, large language models (LLMs) like ChatGPT-4o are pretty good at generating examples. Consider this conversation about QAOA and real-world scenarios we could use to explain the algorithm and the types of problems it is well suited to solve.

The value in your tech content is your expertise — your advanced and scarce skills and knowledge. LLMs can spit out answers about programming in Python and generate Python code that works fairly reliably because they were trained on a large body of knowledge in that area. In areas where not much content exists, LLMs are less likely to provide reliable answers. We need human experts to write on advanced technical topics, but a little AI assistance can help you brainstorm ways to explain complex ideas.

Explaining QAOA with AI Help

Would you understand and feel comfortable with the topic if I explained QAOA in the following way?

Let’s say you are building self-driving cars and safety is your number one concern. Here are some pretty basic things the car must do:

  • Select routes. You want the car to select the safest and most efficient route possible while taking into account ever-changing factors like traffic conditions, roadwork, and accidents.
  • Avoid collisions. The car must be able to monitor its surroundings to predict and avoid collisions with other vehicles, pedestrians, and obstacles.
  • Interpret sensor data. Images, sounds, and other data constantly come in from multiple sensors. The car must process and interpret the data to operate most safely and efficiently.

The car’s software has to find the best solution quickly and efficiently while considering vast amounts of data and numerous combinations of possible actions. Challenges like these are known as combinatorial optimization problems. Although it’s easy to assess whether a given solution is good (the car takes the best path and arrives safely), sifting through all the possible combinations to find the best solution quickly is extremely difficult.
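To make the "easy to check, hard to search" point concrete, here is a toy route-selection sketch. The waypoints and distances are invented for illustration; a real routing problem would have far more stops, which is exactly the trouble, since the number of orderings grows factorially.

```python
from itertools import permutations

# Toy route-selection problem: leave the depot, visit every waypoint
# once, and return, minimizing total distance. The distances below
# are made up purely for illustration.
distances = {
    ("depot", "A"): 4, ("depot", "B"): 7, ("depot", "C"): 3,
    ("A", "B"): 2, ("A", "C"): 5, ("B", "C"): 6,
}

def dist(a, b):
    # Distances are symmetric, so look up the pair in either order.
    return distances.get((a, b)) or distances[(b, a)]

def route_length(route):
    stops = ("depot",) + route + ("depot",)
    return sum(dist(a, b) for a, b in zip(stops, stops[1:]))

waypoints = ("A", "B", "C")

# Checking ONE candidate route is easy and fast...
print(route_length(("A", "B", "C")))  # 4 + 2 + 6 + 3 = 15

# ...but guaranteeing the best route means trying every ordering:
# n waypoints -> n! candidate routes. Fine for 3, hopeless for 30.
best = min(permutations(waypoints), key=route_length)
print(best, route_length(best))
```

With three waypoints, brute force checks only six orderings; with thirty, it would be about 2.6 × 10^32, which is why approximate methods matter.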

These hard problems are an area where quantum computing shines. The Quantum Approximate Optimization Algorithm (QAOA) tackles problems with enormous numbers of possible combinations and quickly homes in on good (approximately optimal) solutions. QAOA evaluates many options rapidly, improving with each iteration. The rapid evaluation comes from quantum computing, while the iterative improvement comes from classical optimization algorithms. Because QAOA uses both quantum and classical techniques, we say it is a hybrid iterative method for solving combinatorial optimization problems.
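The shape of that hybrid loop can be sketched in plain Python. This is a caricature, not real quantum code: the `quantum_expectation` function below is a made-up stand-in for running a parameterized circuit on quantum hardware and measuring the average cost of the sampled solutions, and the cost landscape is invented. What the sketch does show accurately is the division of labor, where a quantum step evaluates the current parameters and a classical step adjusts them.

```python
import math
import random

def quantum_expectation(gamma, beta):
    # Stand-in for the quantum step: in real QAOA this would run the
    # parameterized circuit and return the expected cost of the
    # measured solutions. This landscape is invented for illustration.
    return (math.sin(gamma) - 0.8) ** 2 + (math.cos(beta) - 0.5) ** 2

def qaoa_like_loop(iterations=500, step=0.1, seed=0):
    rng = random.Random(seed)
    # Start from random circuit parameters.
    gamma, beta = rng.uniform(0, math.pi), rng.uniform(0, math.pi)
    cost = quantum_expectation(gamma, beta)
    for _ in range(iterations):
        # Classical step: propose nearby parameters and keep them
        # only if the "quantum" evaluation reports a lower cost.
        g = gamma + rng.gauss(0, step)
        b = beta + rng.gauss(0, step)
        c = quantum_expectation(g, b)
        if c < cost:
            gamma, beta, cost = g, b, c
    return gamma, beta, cost

gamma, beta, cost = qaoa_like_loop()
print(round(cost, 4))
```

Real QAOA uses a more principled classical optimizer than this random hill climb, but the alternation is the same: quantum evaluation in the inner call, classical improvement in the loop.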

Does the initial definition of QAOA make more sense now? If I were an expert writing about QAOA and quantum computing, this and other examples generated by an LLM might help me introduce advanced topics to nonexperts in a relatable way. An expert would also know if the suggested examples were valid and appropriate. Since my intent in this article is to write about writing and not about quantum algorithms, I hope you’ll forgive me if I went with an example that wasn’t perfectly suited to QAOA.

Acceptable Uses of AI in Technical Writing

Many writers (like me) already use AI to help with writing tasks, but what boundaries should we set to maintain the accuracy and authenticity of our work? First, limit AI assistance to topics on which you are already an expert. You might ask the LLM to:

  • Suggest scenarios or examples, as in this article
  • Simplify wording
  • Create a summary
  • Create a social post based on your writing
  • Analyze the audience
  • Suggest prerequisite knowledge readers should have
  • List the concepts and skills readers will gain
  • Illustrate a concept
  • Suggest sub-topics for a main topic
  • Analyze writing for flow or missing topics
  • Check writing against a style guide (upload both the guide and the piece)

I wouldn’t feel comfortable using LLM-generated wording as-is for any substantive content. Since it’s common for LLMs to generate inaccurate information (hallucinations), I’d want to check a few other reliable sources if I were writing about anything outside my area of expertise.

For the quantum algorithm example, I read some content from the IBM Quantum Learning lab and asked Dr. Nihal Mehta, the author of Quantum Computing: Program Next-Gen Computers for Hard, Real-World Applications, if the self-driving car example was acceptable. He said it was okay but too ambitious given the current capabilities — a programmer would be more likely to use QAOA in their work for boring things like curve fitting. So while the LLM was helpful for brainstorming examples, an expert would have recognized the suggestions that were most appropriate for the technology and the audience.
