LM Studio’s Python Play: Crafting Code Creatively

Lakshmi narayana .U
3 min read · Nov 28, 2023
Image created by the author with DALL·E 3

Several weeks back, I authored and shared an article focused on locally deploying Large Language Models (LLMs), highlighting LM Studio as an effective and straightforward method for accomplishing this task. You can find the article below.

Moving forward, the next step in my project involved developing a Python-based client for mistral-7b-instruct-v0.1.Q4_K_M.gguf. However, the Python script example that was available did not work for me. Complicating matters, the GitHub repository containing these examples is being prepared for archiving, and the scripts there are outdated. As a result, my only option was to start from the provided cURL script and build on that.

```bash
# Turn on the server and run this example in your terminal
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      { "role": "system", "content": "Always answer in rhymes." },
      { "role": "user", "content": "Introduce yourself." }
    ],
    "temperature": 0.7,
    "max_tokens": -1,
    "stream": false
  }'
```

I then used ChatGPT to help build my test code. First, I asked it to convert the cURL script into Python, which it did, and the resulting script worked well.

```python
import requests
import json

# The URL where the local LM Studio server is running
url = "http://localhost:1234/v1/chat/completions"

# Headers indicating that we are sending JSON data
headers = {
    "Content-Type": "application/json"
}

# The JSON data payload
data = {
    "messages": [
        {"role": "system", "content": "Always answer in rhymes."},
        {"role": "user", "content": "Introduce yourself."}
    ],
    "temperature": 0.7,
    "max_tokens": -1,
    "stream": False
}

# Make the POST request to the local server
response = requests.post(url, headers=headers, data=json.dumps(data))

# Check whether the request was successful
if response.status_code == 200:
    # Print the response content
    print(response.json())
else:
    print("Failed to get response:", response.status_code, response.text)
```

Initially, I attempted to repurpose the existing OpenAI setup, using their example code as a starting point, but it was unsuccessful. After several iterations with ChatGPT, I managed to build a basic chatbot using the `requests` library instead. Below is the code that succeeded.

```python
import requests
import json

def send_request(message):
    url = "http://localhost:1234/v1/chat/completions"

    # The request payload
    payload = {
        "messages": [
            {"role": "system", "content": "Always answer in rhymes."},
            {"role": "user", "content": message}
        ],
        "temperature": 0.7,
        "max_tokens": -1,
        "stream": False
    }

    headers = {
        "Content-Type": "application/json"
    }

    # Send the POST request
    response = requests.post(url, headers=headers, data=json.dumps(payload))

    # Check for a successful response
    if response.status_code == 200:
        return response.json()
    else:
        return {"error": "Request failed with status code " + str(response.status_code)}

def main():
    print("Welcome to the Chatbot! Type 'quit' to exit.")

    while True:
        user_input = input("You: ")
        if user_input.lower() == 'quit':
            break

        response = send_request(user_input)
        if response.get("choices"):
            bot_response = response["choices"][0]["message"]["content"]
        else:
            bot_response = "Sorry, I couldn't get a response."
        print("Bot:", bot_response)

if __name__ == "__main__":
    main()
```
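Extracting the reply from the response dict deserves its own defensive helper: a malformed or error response should produce the fallback string rather than an exception. This sketch is my own addition, not part of the LM Studio examples, and it simply walks the expected `choices[0].message.content` path:

```python
def extract_reply(response, fallback="Sorry, I couldn't get a response."):
    """Return choices[0].message.content, or the fallback if anything is missing."""
    choices = response.get("choices") or []
    if not choices:
        return fallback
    message = choices[0].get("message") or {}
    return message.get("content", fallback)

# Quick checks with hand-made response shapes
ok = {"choices": [{"message": {"role": "assistant", "content": "Hi there!"}}]}
bad = {"error": "Request failed with status code 500"}
print(extract_reply(ok))   # Hi there!
print(extract_reply(bad))  # Sorry, I couldn't get a response.
```

Because the helper takes a plain dict, it can be unit-tested without the server running at all.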

I experimented with various debugging methods and indirect approaches, yet the OpenAI setup remained unresponsive. I’m hopeful that LM Studio will soon provide examples that address either their own issues or those related to the latest OpenAI API guidelines.
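One thing the working script sidesteps is streaming: with `"stream": false` the server returns a single JSON object, but flipping it to `true` makes the response arrive as server-sent-event lines, and the only extra work is parsing them. Below is a minimal sketch assuming the OpenAI-compatible `data: {...}` / `data: [DONE]` line format; I have not verified every edge case against LM Studio itself:

```python
import json

def iter_stream_content(lines):
    """Yield content deltas from OpenAI-style SSE lines (assumed format)."""
    for raw in lines:
        line = raw.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break  # sentinel marking the end of the stream
        chunk = json.loads(data)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            yield delta["content"]

# Simulated stream, shaped like OpenAI-compatible SSE output
sample = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
print("".join(iter_stream_content(sample)))  # Hello
```

With `requests`, the same generator could be fed from `response.iter_lines(decode_unicode=True)` after posting with `stream=True`.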

An interesting result of this endeavor is my shift towards leveraging successful code samples as a foundation for generating new code. Below, I have detailed a sequence of steps from a particular case, along with some broadly applicable guidelines that I have extrapolated. This comes after conducting additional tests, including experimenting with a piece of LangChain code.

Specific steps for LM Studio, the OpenAI API, and Python
General guide (needs improvement)

Over the next few weeks, I plan to refine and improve this approach.
