Integration with Langchain Chatbot in Golang

Zekeriya Onur Yakışkan
Picus Security Engineering
Jul 26, 2024
(Image: Gemini-generated illustration for the chatbot)

Language models became the hot topic of the 2020s. After ChatGPT's enormous success, many products, including Picus Security, started to integrate with natural language models.

Natural language models provide users with a smooth interface for information retrieval and a great chat-like experience. We wanted to offer our users a chatbot that gives them extended knowledge about their data in our application. To do so, our data team implemented an NLP model which they host with LangChain. LangChain is a framework for creating LLM applications. We implemented the integration between the LangChain server and the Picus application.

Architecture Concerns

Different Code Bases

Python has highly specialized NLP frameworks, so the data team writes their code in Python. However, the rest of the codebase is mostly written in Go. If other teams had to work in Python as well, the context switch between languages would come with a learning-curve penalty.

Single Responsibility Principle

The chatbot is a highly isolated feature of the application, and we did not want it to be responsible for general business logic like license control or authentication.

Security

Valuable customer information is used while the chatbot generates a response. Therefore, the architecture should emphasize security and not let unauthorized messages generate responses.

Limiting

We need to limit user requests since each chatbot response comes with a cost.

Architecture

We have three elements in our architecture. First, we have the chatbot, whose sole responsibility is to generate answers to incoming messages; it is implemented in Python by our data team. Second, we have the application backend, written in Go, which handles business logic such as license checks, authentication, and limiting before redirecting requests to the chatbot. End users do not communicate directly with the chatbot, which enhances security. Lastly, the frontend takes messages from users and sends them to the application backend.

Application Backend — Chatbot Communication In Golang

The LangChain chatbot expects a POST request. The parameters of the POST request body are specified by the chatbot implementer. Here is an example body:

{
"session_id": <uuid>, // for chat context
"question": <user input>
}

After receiving this request, the chatbot starts generating an answer to the user's question. The answer is streamed to the caller within the lifetime of the HTTP request. Streaming is possible with the chunked transfer encoding specified in the HTTP/1.1 RFC. We used the following headers in the HTTP request to keep a healthy connection with the chatbot:

req.Header.Add("Content-Type", "application/json")
req.Header.Add("Accept-Encoding", "chunked")

Here is the code of the chatbot post request:

import (
	"bytes"
	"encoding/json"
	"io"
	"net/http"
	"net/url"

	"github.com/pkg/errors"
)

func (c client) Stream(inputs StreamParams) (io.ReadCloser, error) {
	if c.url == "" {
		return nil, newEmptyUrlError()
	}
	jsonStr, err := json.Marshal(inputs)
	if err != nil {
		return nil, err
	}
	streamUrl, err := url.JoinPath(c.url, streamUri)
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost, streamUrl, bytes.NewBuffer(jsonStr))
	if err != nil {
		return nil, err
	}
	req.Header.Add("Content-Type", "application/json")
	req.Header.Add("Accept-Encoding", "chunked")
	resp, err := httpClientHelper.Do(req)
	if err != nil {
		return nil, err
	}
	if resp.StatusCode != http.StatusOK {
		statusCodeError, err := newStatusCodeErrorFromResponse(resp)
		if err != nil {
			return nil, err
		}
		return nil, statusCodeError
	}
	return resp.Body, nil
}

Notice that we return resp.Body in this code. Later on, we use the body's Read method to receive the chatbot response in a streaming manner. As we read from the body, we stream it to the frontend for display.

func (r router) Stream(c *gin.Context) {
	params := StreamParams{}
	if err := c.ShouldBindJSON(&params); err != nil {
		c.JSON(apiresponse.Generate(nil, err))
		return
	}
	setStreamingHeaders(c)
	body, err := chatbotClient.Stream(params)
	if err != nil {
		c.JSON(apiresponse.Generate(nil, err))
		return
	}
	// close the upstream stream once we are done redirecting it
	defer body.Close()
	c.Stream(func(w io.Writer) bool {
		readBytes := make([]byte, 2048)
		n, readerErr := body.Read(readBytes)
		// use only meaningful bytes to stream
		readBytes = readBytes[:n]
		_, err = w.Write(readBytes)
		if err != nil {
			log.WithError(err).Error("error while writing")
			return false
		}
		// reader error is processed after writing, as recommended by the io.Reader interface
		if readerErr != nil {
			if !errors.Is(readerErr, io.EOF) {
				log.WithError(readerErr).Error("error while reading stream connection")
			}
			return false
		}
		return true
	})
}

func setStreamingHeaders(ginContext *gin.Context) {
	ginContext.Header("Content-Type", "text/event-stream")
	ginContext.Header("Transfer-Encoding", "chunked")
}

Notice that we set the headers the frontend needs. This way, the frontend can make use of the chunked connection in its streaming logic.

The chatbot returns the answer in event source (server-sent events) format. We pass the chatbot response directly through to the frontend to handle.
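In the server-sent events wire format, each payload arrives on a `data:` line and events are separated by blank lines. The frontend handles this with @microsoft/fetch-event-source; the Go sketch below is only illustrative of the parsing, not code from the Picus backend:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseSSEData extracts the payloads of `data:` fields from a raw SSE stream.
// Other fields (event:, id:, retry:) and blank separator lines are ignored.
func parseSSEData(stream string) []string {
	var out []string
	sc := bufio.NewScanner(strings.NewReader(stream))
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "data:") {
			out = append(out, strings.TrimSpace(strings.TrimPrefix(line, "data:")))
		}
	}
	return out
}

func main() {
	raw := "data: Hello\n\ndata: world\n\n"
	fmt.Println(parseSSEData(raw)) // prints "[Hello world]"
}
```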

Frontend — Application Backend Communication

The FE sends a POST request to the application and then waits for the streaming response. The application redirects this request to the chatbot and returns its response to the FE. The response is in event source format. We were able to parse it using the "@microsoft/fetch-event-source" library in JavaScript.

Example Communication

1. FE sends the message to the BE in the request format shown above.

2. BE receives the message and runs license and other business checks. If they pass, it sends the message to the chatbot.

3. The chatbot streams the response to the BE.

4. BE receives the message stream and redirects it to the FE.

Conclusion

This article covered the integration of a chatbot into the Picus application. We discussed the architectural concerns of a chatbot feature and a possible implementation. In the implementation, we utilized HTTP streams and provided examples of HTTP streaming.

Thanks for reading the article. You can use this link to contact me for feedback and questions. Cheers…
