Coding with A.I. — Part I

Improving our productivity with advanced language models.

Edison Rodas
Globant
8 min read · Nov 21, 2023

“Developers over on giants shoulders,” image created by DALL-E with the author's prompt

Artificial Intelligence (AI) holds significant potential to substantially enhance the effectiveness of software development. It's crucial to dispel the prevalent misbelief that AI is poised to supplant the workforce in the software sector; at present, it's premature to reach such a conclusion. Rather, AI currently serves as an invaluable tool for software engineers, enabling them to tackle intricate challenges and craft more sophisticated solutions. By utilizing AI, software engineers are empowered to innovate continually and produce superior software products while saving time and gaining efficiency.

In the following post, we will delve into the capabilities of Large Language Models (LLMs) in terms of summarizing, inferring, transforming, and expanding content within the realm of AI-enhanced software development.

What is a Large Language Model?

AI is not a new concept. The development of AI began in the 1940s, and language models have existed since the 1960s. Far from being a recent trend, modern large language models such as GPT (Generative Pre-trained Transformer), LLaMA (Large Language Model Meta AI), and Bard (built on LaMDA, the Language Model for Dialogue Applications) are the recent fruits of work that began many years ago. This technology is the result of many years of research and development and millions in investment.

Large Language Models (LLMs) are AI algorithms rooted in deep learning and massive datasets. These models are designed to comprehend, generate, predict, and summarize text-based content. LLMs fall under the broader category of generative AI, as they are specialized in generating text-based content.

LLMs are powerful tools that can help solve a variety of text-related problems, making those problems more approachable. As software engineers, we can leverage LLMs to develop advanced solutions that reduce the workload for certain types of problems, allowing us to focus on solving more complex ones.

Operations to solve problems with LLMs

An LLM is a tool whose solutions can be structured as patterns and applied to various problems. In this post, I will describe some of these patterns with examples using the OpenAI API for Node.js. However, these patterns can also be applied to other LLMs, such as LLaMA or Bard, using their respective APIs and usage guidelines.

Summarizing

This type of solution with an LLM can help us quickly comprehend large amounts of data.

Illustration by author: Summarizing process

For instance, suppose we have to create a feature to evaluate product feedback. In this example, we have 25 GoPro Hero 9 reviews written by real customers online. Together, these reviews total 1,638 words, and we need to summarize them briefly to understand the product's pros and cons.

The following script implements a summarizing operation with the OpenAI API:
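
(A minimal sketch of such a script, assuming the openai v4 Node.js SDK and the gpt-3.5-turbo model; the reviews variable stands in for the 25 collected reviews in plain text.)

// Summarizing product reviews with the OpenAI Chat Completions API.
// Run as an ES module (e.g. node summarize.mjs) with OPENAI_API_KEY set.
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Placeholder: in the real feature these are the 25 GoPro Hero 9 reviews.
const reviews = "<the 25 customer reviews as plain text>";

const prompt = `Summarize the product reviews delimited by triple quotes
as a short list of pros and cons.
Reviews: """${reviews}"""`;

const completion = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: prompt }],
  temperature: 0,
});

console.log(completion.choices[0].message.content);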

The result is:

GoPro Hero 9 Black:
Pros:
- Excellent video and picture capabilities
- Comes with multiple attachments and accessories
- Good for outdoor activities and underwater filming
- Easy to use with the QUIK app

Cons:
- Short battery life, requires additional batteries
- No USB-C charging cable included in package
- No instruction manual provided
- Some missing advertised items, such as memory card and card reader

Inferring

LLMs can facilitate sentiment analysis and extract key data from unstructured text for use as structured data. This means that sources such as PDFs and other free-text formats can be analyzed to extract key data with software, instead of relying on traditional manual extraction methods that are time-consuming and error-prone.

Illustration by author: Inferring process

To explain one case of application, let's suppose that we need to process a large number of customer service complaints for a telecommunications company. Reading every email, letter, or customer message from the reception channels would be a titanic job. In addition, companies providing public services must respond to customer requests within a short time to comply with the law in most countries.

One example of a customer request looks like the letter below:

Illustration: Customer complaint example

The customer’s letter serves as an example, but not all letters may follow the same format. In such cases, we face the challenge of dealing with unstructured data. However, we can solve this challenge by processing the data with an LLM. This can help us extract contract and service details, identify the customer, detect anger, and determine the sentiment of the complaint. By doing so, we can handle the complaint effectively. The following code uses the LLM to automate the processing of the client’s request.

The following script implements an inferring operation with the OpenAI API for this case. We assume that the content of the letter is in plain text.
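
(A minimal sketch, under the same assumptions as before: the openai v4 Node.js SDK and the gpt-3.5-turbo model; the letter variable stands in for the complaint text.)

// Inferring: extract sentiment and structured fields from an unstructured letter.
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Placeholder: the customer's complaint letter as plain text.
const letter = "<the customer complaint letter>";

const prompt = `From the complaint delimited by triple quotes, return a JSON object
with the keys: sentiment ("positive" or "negative"), anger (true or false),
subject, contract, name, address, email, and phone.
Complaint: """${letter}"""`;

const completion = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: prompt }],
  temperature: 0,
});

// Expected to be a JSON string like the result shown below.
console.log(completion.choices[0].message.content);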

The result is:

{
"sentiment": "negative",
"anger": false,
"subject": "incorrect amount charged for internet usage",
"contract": "A6321899",
"name": "Pete Hoolin",
"address": "8 Park Avenue, Wonderland, NoWhere Street",
"email": "pete123@yourcompanyemail.com",
"phone": "+57300900000"
}

Transforming

Transformation operations with LLMs enable us to reliably transform one text into another. These operations encompass a range of tasks, such as translating from one language to another, converting natural-language text into code, and explaining what specific code segments do, among other applications.

Illustration by author: Transforming process

In the next example, we must translate customer feedback from English to Spanish, using the OpenAI API to produce the result we need. The feedback to translate is shown below, followed by a sketch of the API call.

I loved it!
If you want to be close to the action in Jardin this is a great place.
Right outside your hotel is the main square.
I loved being close to all the action and bars and restaurants.
The hotel is very clean and the staff are very kind and helpful
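
A minimal sketch of the translation call, under the same assumptions; the feedback variable stands in for the text above:

// Transforming: translate customer feedback from English to Spanish.
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const feedback = "<the English customer feedback shown above>";

const completion = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [
    {
      role: "user",
      content: `Translate the following text from English to Spanish:\n${feedback}`,
    },
  ],
  temperature: 0,
});

console.log(completion.choices[0].message.content);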

The result is:

Me encantó! 
Si quieres estar cerca de la acción en Jardín, este es un gran lugar.
Justo afuera de tu hotel está la plaza principal.
Me encantó estar cerca de toda la acción, bares y restaurantes.
El hotel es muy limpio y el personal es muy amable y servicial.

Expanding

The expanding operation with an LLM is an additional feature that complements the three operations mentioned earlier. This operation enables the generation of new, expanded content based on a brief data input.

Illustration by author: Expanding process

To illustrate this concept, consider an example. Imagine a fictional telecommunication company, as discussed previously. Suppose this company is required to email each customer who has complained. These emails should detail the outcome of their case and outline the steps to either restore the service or conclude the case.

In this scenario, we will assume the existence of a JSON file containing records from a database, which reflects the company’s final decisions regarding each case. Our application is designed to generate a tailored instruction message for each customer based on these records.
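
A minimal sketch of such an application, under the same assumptions; cases.json is a hypothetical file with one record per complaint (for example name, email, contract, decision, and next steps):

// Expanding: generate one tailored email per resolved complaint record.
import OpenAI from "openai";
import { readFile } from "node:fs/promises";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Hypothetical export of the company's final decisions, one record per case.
const cases = JSON.parse(await readFile("cases.json", "utf8"));

for (const record of cases) {
  const prompt = `You are a customer service assistant for a telecommunications
company. Write a short, polite email to the customer explaining the outcome of
their complaint and the steps to restore the service or close the case, based
on this record: ${JSON.stringify(record)}`;

  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: prompt }],
    temperature: 0.7,
  });

  console.log(completion.choices[0].message.content);
}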

Illustration: Representation of expanding feature for customer service complaint answering.

The result is that a set of customized emails has been created to explain the status of each specific complaint case, along with all relevant details and clear instructions.

Illustration: Samples of email content generated by AI by expanding data

For reference, you can check all the emails resulting from this demonstration here:

Guidelines for better results

As we have seen, these operations through the LLM API are requests built around prompts. Getting a good result is a mix of high-quality prompts and iterative fine-tuning tests until the solution fits well.

OpenAI offers comprehensive guidelines on strategies and tactics to improve the results of its LLM and other products; however, to start, I think the most important recommendations are:

Write clear instructions

LLMs are software machines whose output depends on the quality of their inputs. If our inputs are vague or insufficient, the model will produce results with gaps or hallucinations. Writing clear instructions boosts the results. Here are some recommendations, with a short prompt sketch after the list:

  • Use delimiters to indicate parts of the request.
  • Avoid ambiguous prompts, include details, and use punctuation marks.
  • Write prompts that describe detailed instructions step by step.
  • Provide examples.
  • Specify the desired length of the output.
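
For instance, a hypothetical prompt that applies these recommendations (delimiters, step-by-step instructions, an explicit output length) might look like this:

// Hypothetical prompt illustrating clear instructions.
const review = "<a single product review>";

const prompt = `You evaluate customer product reviews.
Step 1: Read the review delimited by triple quotes.
Step 2: List the pros and cons it mentions, at most three of each.
Step 3: Add a one-sentence overall verdict.
Keep the whole answer under 100 words.
Review: """${review}"""`;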

Split complex tasks into simpler subtasks

LLMs have a token limit, and complex prompts consume many tokens, causing slow processing. In addition, complex prompts tend to have higher error rates than simpler tasks. Decomposing a complex request into many small, specific requests is more efficient and faster.
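
As a sketch of this idea, the earlier review-summarization example could be decomposed into one small request per review plus a final merge step (same assumptions as before):

// Split a large summarization task into per-review subtasks, then merge.
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const ask = async (content) => {
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content }],
    temperature: 0,
  });
  return completion.choices[0].message.content;
};

const reviews = ["<review 1>", "<review 2>" /* ... */];

// Subtask 1: a short, cheap summary of each review in isolation.
const partials = await Promise.all(
  reviews.map((r) => ask(`Summarize this review in one sentence: """${r}"""`))
);

// Subtask 2: merge the partial summaries into a final pros/cons list.
console.log(
  await ask(`Combine these summaries into a list of pros and cons:\n${partials.join("\n")}`)
);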

Give the model time to “think”

This strategy is about giving the LLM instructions that encourage it to reason before concluding: asking it to lay out its reasoning process and to check whether anything is missing from the previous steps. In this way, the model processes prompts with better results.
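
For example, a hypothetical prompt for the complaint-handling case could ask the model to reason before it concludes:

// Hypothetical prompt that asks the model to reason before answering.
const prompt = `First, restate the customer's complaint in your own words.
Then, list the facts you can extract from it and note anything that is missing.
Only after those steps, decide whether the complaint should be escalated,
and justify the decision in one sentence.`;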

Final thoughts on trade-offs, risks, and ethical considerations

Generative AI has become a prevailing trend. Each of us has either heard about or seen its benefits, and it's possible that many view AI as the new silver bullet, but we should keep in mind that it is only a tool.

LLMs are a remarkable achievement, and leveraging them promises to propel us into a new phase of technological development. Nevertheless, before embarking on creating an AI-based solution, consider the following:

  • LLM systems require substantial energy consumption. Utilizing them unnecessarily contradicts the principles of green technology.
  • Using LLMs in the cloud operates on a pay-as-you-go billing model. A high-demand application could incur significant costs, so adopting an LLM is justifiable only if the business case supports it.
  • Biases are prevalent in AI models. Any solution that will impact the community should undergo exhaustive testing before delivery.
