Technical Troubleshooting for Tesla Using Large Language Models and Retrieval Augmented Generation — A Use Case

M. Baddar
BetaFlow
Published Aug 29, 2023 · 6 min read
Image source: https://fiixsoftware.com/blog/6-types-of-maintenance-troubleshooting-techniques/

Summary

This article shows how to answer domain-specific questions more precisely using our AnswerMe API. The API is powered by LLMs (OpenAI ChatGPT 3.5 and Falcon LLM), built with LangChain, and deployed on the RapidAPI platform.
We first show how OpenAI ChatGPT lacks precision when asked very domain-specific questions, then show how to use AnswerMe, with step-by-step code snippets and API calls, to solve this issue.

For support and development of LLM, AI, and NLP solutions, contact us at (info at betaflow dot ai).

For more informative articles about LLMs, AI, and NLP:
Follow us on Twitter: https://bit.ly/3rZPdg4
Subscribe to our newsletter: https://bit.ly/3OzvlZX

Now, let the fun begin!

Table of Contents

  1. What is the problem with domain-specific questions to ChatGPT?
  2. How can we solve it?
  3. How to use AnswerMe (an LLM- and LangChain-based API) to solve this problem (with a sample Python script)

The Problem

If you have used ChatGPT lately (and I am sure you have), you have noticed that it performs quite well on general-domain questions. For example, let’s ask it about the features of the Tesla Model S vs. the Model 3:

Figure 1: General questions about Tesla models to ChatGPT 3.5

However, let’s assume that you are a Tesla owner, technician, or mechanic, and you have encountered an error code:

“Alert: APP_w207”

Then, we can ask ChatGPT about it :

Figure 2: Domain-specific question to ChatGPT about the Tesla Model S

To answer this question, we may go old school: let’s first download the Tesla Model S Owner’s Manual.

Then we can search the manual for this specific code. Hurray, we find it on page 213.

Figure 3: Page containing the answer to the domain-specific question

How Can We Solve the Problem?

So, can we apply an LLM to parse the document, understand it, and answer questions over it, just like ChatGPT does? And can we do that efficiently?

The answer is YES. Meet AnswerMe, an LLM-powered API that answers questions over PDF documents.
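Before diving into the API, it helps to see the general pattern it implements, commonly called retrieval-augmented generation (RAG): split the document into chunks, retrieve the chunk most relevant to the question, and feed it to the LLM as grounding context. The toy sketch below illustrates the idea only; it is not AnswerMe's actual implementation, and the snippet of "manual text" is made up for the example.

```python
# Toy sketch of retrieval-augmented generation (RAG): split a document
# into chunks, retrieve the chunk most relevant to the question, and
# build a prompt that grounds the LLM's answer in that chunk.
# This is NOT AnswerMe's actual implementation, just the general idea.

def chunk_text(text: str, chunk_size: int = 40) -> list[str]:
    """Split text into word-based chunks of roughly chunk_size words."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def retrieve(chunks: list[str], question: str) -> str:
    """Return the chunk with the most word overlap with the question."""
    q_words = set(question.lower().replace("?", "").split())
    return max(chunks, key=lambda c: len(q_words & set(c.lower().split())))

def build_prompt(context: str, question: str) -> str:
    """Ground the question in the retrieved context for the LLM."""
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Made-up stand-in for the parsed manual text
manual = ("APP_w207 Autosteer temporarily unavailable. "
          "Continue to your destination and check for obstructions. "
          "VCFRONT_a180 Electrical system unable to support all features.")
chunks = chunk_text(manual, chunk_size=10)
prompt = build_prompt(retrieve(chunks, "What is the alert APP_w207?"),
                      "What is the alert APP_w207?")
```

In a production system like AnswerMe, the keyword overlap would be replaced by embedding-based similarity search over an index, which is why the indexing step in the walkthrough below takes time.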

Creating a RapidAPI Account

As AnswerMe is deployed on the RapidAPI platform for distributing APIs, you first need to do the following to get the API key:

  1. Create a RapidAPI account by following the steps on this page, or simply go to the sign-up page directly.
  2. After creating the account, visit the AnswerMe page on the RapidAPI platform:
    https://rapidapi.com/betaflowcompany/api/answer-me
    (Make sure you are opening the Endpoints tab.)
  3. Copy the code snippets (in Python or another language) to get the API key and the other headers needed to call the API.
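The copied snippets boil down to a small headers dictionary that must accompany every request. Here is a minimal sketch; the key is a placeholder and the host value is an assumption — copy the real values from the code-snippets panel on the AnswerMe endpoint page.

```python
# Headers required by every AnswerMe call made through RapidAPI.
# Both values below are placeholders/assumptions -- copy your real
# key and host from the AnswerMe endpoint page on RapidAPI.
x_rapidapi_key = "YOUR-RAPIDAPI-KEY"          # personal key from your RapidAPI account
x_rapidapi_host = "answer-me.p.rapidapi.com"  # assumed host; verify on the docs page

headers = {
    "X-RapidAPI-Key": x_rapidapi_key,
    "X-RapidAPI-Host": x_rapidapi_host,
}
```

The same `headers` dictionary is reused in every snippet below.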

Using the AnswerMe API for Technical Troubleshooting over the User Manual

After getting the API key, there are three main steps to write code that answers the technical question over the user-manual PDF:

  1. Upload

We first upload the file (assuming it has been downloaded to a local directory). The upload process works by first getting a presigned upload URL for AWS S3 from the /psurl endpoint, which is documented here:
https://rapidapi.com/betaflowcompany/api/answer-me
(check the PresignedURL section of the documentation).
Here’s a sample Python snippet to get the presigned URL for the upload:

import requests
import os
...
headers = ...  # the RapidAPI headers (API key and host), copied from the documentation page
x_rapidapi_host = ...  # you get it from the API documentation page as shown above
input_file_path = ...  # the local PDF file path
pre_signed_url_endpoint = f"https://{x_rapidapi_host}/psurl"
file_basename = os.path.basename(input_file_path)
file_bin = open(input_file_path, 'rb').read()  # read the PDF bytes to upload later
pre_signed_url_response = requests.get(url=pre_signed_url_endpoint,
                                       headers=headers,
                                       params={'filename': file_basename})

Then we can use the following snippet to upload the file:

import requests
...
upload_url = pre_signed_url_response.json()["url"]  # the presigned S3 upload URL
print(f'Uploading file with url = {upload_url}')
upload_response = requests.put(url=upload_url, data=file_bin)
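A presigned S3 PUT returns a 2xx status on success, so it is worth checking the upload result before moving on. The small helper below is our own addition, not part of the AnswerMe docs:

```python
def upload_ok(status_code: int) -> bool:
    """Presigned-URL PUT uploads to S3 return a 2xx status on success."""
    return 200 <= status_code < 300

# e.g. after upload_response = requests.put(...):
# if not upload_ok(upload_response.status_code):
#     raise RuntimeError(f"Upload failed with status {upload_response.status_code}")
```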

2. Index

We should wait (typically about 60 seconds) until the indexing process finishes in the background.

import time
...
time.sleep(60)
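A fixed 60-second sleep is simple but fragile: large documents may need longer, and small ones finish sooner. If you have any way to probe readiness (for example, sending a cheap trial question and checking the response), a polling loop is more robust. The helper below is our own sketch; the AnswerMe documentation does not describe a status endpoint, so the probe function is left to the caller.

```python
import time

def wait_until_ready(is_ready, timeout: float = 120.0, interval: float = 5.0) -> bool:
    """Poll is_ready() every `interval` seconds until it returns True
    or `timeout` seconds elapse. Returns True if it became ready."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if is_ready():
            return True
        time.sleep(interval)
    return False

# In practice, is_ready() could send a cheap trial question to the API
# and check whether the response indicates the document is indexed.
```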

3. Questions

After indexing has finished, we are ready to send questions to the AnswerMe API using the /answerme endpoint, documented here:
https://rapidapi.com/betaflowcompany/api/answer-me (see the QA endpoint section of the page).

import requests
...
question = ...  # the question to ask over the document

params = {'question': question, 'filename': file_basename}
# file_basename is derived from the input file path, as shown above
qa_endpoint = f"https://{x_rapidapi_host}/answerme"
qa_response = requests.get(url=qa_endpoint, headers=headers, params=params)
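Judging by the response reproduced at the end of this article, the /answerme endpoint returns a JSON body with 'question' and 'answer' fields, so extracting the answer text is a one-liner. The helper below assumes that shape; the sample payload is abridged from the article's own output.

```python
def extract_answer(payload: dict) -> str:
    """Pull the answer text out of an AnswerMe /answerme response body,
    assuming the {'question': ..., 'answer': ...} shape."""
    return payload.get("answer", "").strip()

# Abridged sample of the response shape; in real code you would pass
# qa_response.json() instead.
sample = {"question": "What is the alert APP_w207?",
          "answer": "The alert APP_w207 indicates that Autosteer is temporarily unavailable."}
print(extract_answer(sample))
```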

This Python script contains all the snippets shown above and lets you test AnswerMe easily.

Now we have everything we need to start using the API. Let’s rerun the question we sent to ChatGPT above:

Figure 5: AnswerMe sample run against a domain-specific question over the Tesla user manual

For clarity, we provide the question and answer as text:

{
'question': 'What is the alert APP_w207?',
'answer': "The alert APP_w207 indicates that the Autosteer feature on
your vehicle is temporarily unavailable.
There are several possible reasons for this.
It could be a temporary condition caused by external
factors such as missing or faded lane markers,
narrow or winding roads, poor visibility due to weather
conditions like rain, snow, fog, extremely hot or cold temperatures,
or bright light from other vehicle headlights or direct sunlight.
\n\nThis alert may also appear if you exceeded the maximum speed
limit for Autosteer while it was active. In this case, Autosteer
will not be available for the remainder of your current drive.
\n\nTo address this alert, you should continue driving to your
destination. If Autosteer is still unavailable when you reach your
destination and remains unavailable during your next planned drive,
you should check for any potential obstructions or damage.
This could include mud, ice, snow, or other environmental factors
obstructing the sensors, an object mounted on the vehicle like a
bike rack causing obstruction, or any damage or misalignment to the
bumper.\n\nIf there are no obvious obstructions or damage,
you can continue driving your vehicle. However, if you do find any
obstructions or damage, it is recommended to schedule a service
appointment at your convenience.\n\nFor more information and
troubleshooting tips, you can refer to the Autosteer section on
page 100 of your vehicle's manual."}

As we can see, by applying an LLM along with document parsing and indexing, we can ask precise, domain-specific questions and get more specific answers than ChatGPT gives.

For more information about what happens under the hood when using an LLM to answer questions over documents, check our article here.

If you have any problems running the code snippets or the script, let us know in the comments below, or email us at (info at betaflow dot ai).

If you are interested in the AnswerMe API and want help or support applying it, or similar LLM/AI approaches, to your business, send us an email at (info at betaflow dot ai).

For more similar articles, subscribe to our newsletter, follow us on Twitter, and visit our webpage.


M. Baddar
BetaFlow

AI/ML engineer focused on generative modeling. Our mission is to enable individuals and SMEs to apply this technology to solve real-life problems.