Questions every IT Operations team should ask about their Generative AI Tools

Warren Zhou
Published in IBM Cloud · Sep 11, 2023

Generative AI is a powerful technology that can create new content, such as text, images, audio, or code, from existing data or user input. It has many applications in IT operations, such as automating documentation creation, generating test cases, and synthesizing data. However, not all generative AI tools are created equal. Some may be more reliable, accurate, and ethical than others. How can you tell the difference?

In this blog post, we will provide you with a checklist of questions that you should ask before using any generative AI tool in your IT operations. These questions will help you ensure that the tool is transparent and trustworthy, and that it meets your quality and security standards. By asking these questions, you will avoid the pitfalls of using a black-box AI system that may produce unexpected, inaccurate, or harmful results.

Here are the questions you should ask:

How does the generative AI tool work?

  • What is the underlying algorithm or model that powers it?
  • How was it trained and tested?
  • What are its limitations and assumptions?

How can you control the generative AI tool?

  • What are the parameters or options that you can adjust to influence the output?
  • How can you provide feedback or corrections to the tool?
  • How can you stop or undo the generation process if needed?

How can you verify the generative AI tool?

  • How can you evaluate the quality and accuracy of the output?
  • How can you check for errors, inconsistencies, or biases in the output?
  • How can you compare the output with other sources or tools?
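One practical way to act on the verification questions is to validate generated output automatically before it reaches production. The sketch below, assuming the tool emits a JSON configuration file, checks that the output parses and contains the keys your deployment expects; the required keys here are examples only.

```python
import json

# Example keys a deployment config might require; substitute your own.
REQUIRED_KEYS = {"service", "replicas"}

def verify_generated_config(text: str) -> dict:
    """Reject AI-generated config that is not valid JSON or omits required keys."""
    try:
        config = json.loads(text)
    except json.JSONDecodeError as exc:
        raise ValueError(f"output is not valid JSON: {exc}") from exc
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        raise ValueError(f"generated config is missing keys: {sorted(missing)}")
    return config
```

The same pattern extends to other output types: lint generated code, schema-check generated YAML, or diff generated documentation against a known-good baseline before accepting it.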

How can you trust the generative AI tool?

  • How can you ensure that the output is ethical and compliant with your policies and regulations?
  • How can you protect the privacy and security of your data and users?
  • How can you audit and trace the generation process and output?
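Auditing and tracing the generation process can be as simple as recording every prompt/output pair with a hash so later tampering is detectable. The following is a minimal sketch of such an audit entry; the field names and record shape are assumptions for illustration, not a standard.

```python
import hashlib
import json
import time

def audit_record(prompt: str, output: str, model: str) -> dict:
    """Build a tamper-evident audit entry for one generation call."""
    entry = {
        "timestamp": time.time(),  # when the generation happened
        "model": model,            # which model/tool version produced the output
        "prompt": prompt,
        "output": output,
    }
    # Hashing the serialized entry lets you detect later modification of the record.
    payload = json.dumps(entry, sort_keys=True).encode("utf-8")
    entry["sha256"] = hashlib.sha256(payload).hexdigest()
    return entry
```

Appending these records to write-once storage gives you a trail you can replay when you need to explain why the tool generated what it generated.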

What Happens If You Don’t Ask These Questions?

Imagine this: You are using a generative AI tool to automate some of your IT ops tasks, such as generating code snippets, configuration files, documentation, or reports. The tool seems to work well most of the time, producing output that matches your input and meets your specifications. You trust the tool and rely on it for your daily work.

But one day, something goes wrong. The tool generates output that is incorrect, incomplete, inconsistent, or incomprehensible. It causes errors, bugs, crashes, or security breaches in your IT systems. It damages your reputation, productivity, or profitability. It exposes you to legal, regulatory, or ethical issues.

You try to figure out what happened. You try to understand why the tool generated what it generated. You try to fix the problem and prevent it from happening again.

But you can’t. The tool is a black box. You don’t know how it works. You don’t know what data it used. You don’t know what logic it followed. You don’t know what assumptions it made. You don’t know how to test it. You don’t know how to explain it.

You are stuck with a faulty tool that you don’t understand and can’t trust.

By using a trusted and transparent Generative AI tool, an enterprise IT Operations team can ensure that their workflows are efficient, effective, and reliable. With the ability to understand how the tool works and the logic it follows, the team can better anticipate and prevent potential issues, as well as quickly diagnose and resolve any problems that may arise. This not only saves time and resources but also helps to maintain the team’s reputation for delivering high-quality work. Ultimately, trust and transparency in generative AI tools enable enterprises to achieve their desired outcomes and succeed in today’s fast-paced business environments.
