Contributing to Guardrails Hub: Build safer LLM community together

Thanapong Boontaeng
SCB 10X OFFICIAL PAGE
3 min read · Apr 10, 2024

AI is becoming increasingly integrated into our daily lives. It is crucial to build a future in which AI is not only powerful, but also trustworthy and safe. Creating a culture of responsible innovation and fostering community involvement is at the heart of what we do at SCB 10X. These principles guided our investment in Guardrails AI, a startup focused on AI safety assurance platforms. Guardrails AI provides a framework that offers developers a large collection of validator modules and allows them to tailor their reliability measures to specific risks and requirements. On 15 February 2024, Guardrails AI launched an open-source platform called Guardrails Hub, which allows anyone to contribute their validators for LLMs to the community. These validators can be deployed, repurposed, and reused by any developer (Figure 1).

Figure 1: A screenshot of Guardrails Hub website (https://hub.guardrailsai.com/) — a Correct Language validator from SCB 10X is among the total of more than 50 validators.

SCB 10X is one of the initial contributors of Guardrails Hub.

The SCB 10X Lab team supports portfolio companies with technical enhancements. Our team members (Kunat Pipatanakul and Sittipong Sripaisarnmongkol) worked together with the Guardrails AI team and built a validator that detects and mitigates language issues in LLM output text. Our validator is called “Correct Language” (Figure 2).

Figure 2: Correct Language validator developed by SCB 10X (screenshot from https://hub.guardrailsai.com/validator/scb-10x/correct_language).

Source code and instructions for this validator are available on SCB 10X’s GitHub repository (https://github.com/scb-10x/correct_language_validator). In short, the Correct Language validator detects the language of the LLM-generated text response and checks whether it matches the expected language. When the text is in a different language than the expected one, the validator can attempt to translate it.
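Conceptually, the validator’s pass/fail/fix logic can be sketched in plain Python. This is an illustrative sketch, not the actual implementation: `detect_language` here is a crude stub standing in for a real language-identification model, and `translate` is a hypothetical callback standing in for a machine-translation step.

```python
def detect_language(text: str) -> str:
    """Stub language detector. The real validator uses a proper
    language-identification model; here we merely check whether the
    text contains any characters from the Thai Unicode block."""
    if any("\u0e00" <= ch <= "\u0e7f" for ch in text):
        return "th"
    return "en"

def validate(text: str, expected: str, translate) -> dict:
    """Pass the text through if its detected language matches
    `expected` (an ISO 639-1 code); otherwise report a failure
    and propose a translated fix."""
    detected = detect_language(text)
    if detected == expected:
        return {"outcome": "pass", "output": text}
    return {"outcome": "fail", "output": translate(text)}

# Hypothetical translation callback; a real one would call an MT model.
fake_translate = lambda text: "[translated to English] " + text

print(validate("Bangkok's Grand Palace is a must-see.", "en", fake_translate))
# pass case: the output is returned unchanged
print(validate("วัดพระแก้วสวยมาก", "en", fake_translate))
# fail case: the output is the translated text
```

The key design point is that validation happens on the model’s *output*, after generation, so the guard can catch a wrong-language response no matter how the model was prompted.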

Let’s see some examples of the Correct Language validator in action. We will use Typhoon, the in-house Thai large language model we have been developing (https://doi.org/10.48550/arXiv.2312.13951). Typhoon was pre-trained further from Mistral-7B with careful mitigation of catastrophic forgetting of the base model’s English knowledge, so it understands both Thai and English well.

Let’s say we use Typhoon to create a Q&A application for recommending tourist attractions in Thailand to foreigners. Our expectation is that Typhoon must respond in English, so we set up the Correct Language validator as a guard.

Case 1: incorrect output language.

In the event that Typhoon answers in Thai, the validator will detect this and translate it to English (Figure 3).

Figure 3: Correct Language validator in the “fail” case

Case 2: correct output language.

If Typhoon provides an answer in English, the validator will detect that it matches the expected language, so the final output will be the same as Typhoon’s output (Figure 4).

Figure 4: Correct Language validator in the “pass” case

Note that we can also set the language for LLM responses through the system prompt (a text instruction sent to the model before any user message). For example, if we want Typhoon to answer in English, we can add a system prompt like this.

<system>Always answer in English</system>
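In an OpenAI-style chat API, this system prompt would simply be the first message in the conversation. The message layout below is illustrative (the exact payload depends on the serving framework):

```python
# Illustrative chat payload: the system message constrains the response
# language before any user message is sent.
messages = [
    {"role": "system", "content": "Always answer in English"},
    # Thai for "Please recommend places to visit in Chiang Mai"
    {"role": "user", "content": "แนะนำที่เที่ยวในเชียงใหม่หน่อย"},
]
```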

Although both approaches can achieve similar outcomes, the difference is that the system prompt asks the LLM to behave as requested, whereas a Guardrails validator verifies whether the output actually is what was intended.

In Guardrails Hub, we can find crowdsourced guardrails for many use cases and risk categories. The beauty of this open forum is that everyone can create their own guardrails and share knowledge.

Just as we did, we encourage all of you to contribute your work to Guardrails Hub. Please check out the How-to Guides at https://www.guardrailsai.com/docs/hub/how_to_guides/custom_validator.

Together, let’s make LLMs “safer” for all!

Acknowledgements

My thanks go to Kunat Pipatanakul, Lead Research Scientist of SCB 10X’s AI Open Innovation team, for providing me with examples of how to use the Correct Language validator on Typhoon.
