As part of the 2023 Elaia Sustainability Report, we’re featuring startups from the portfolio that stand out for their commitment to ESG. Next up, Giskard.
To read our full 2023 Sustainability Report, click here.
Mission
Giskard is the first collaborative & open-source software platform to ensure the quality of AI models.
AI models raise many concerns about ethical & security risks. Giskard believes that the only way to mitigate these risks is to test AI models before putting them into production. This is common practice in mature industries, and Giskard believes it should be the same for AI.
Case study
Giskard released three testing tools to check for common issues in LLM-based models: stereotypes, harmful content, and ethical biases.
The stereotype detector checks that the model does not generate responses containing stereotypes, discriminatory content, or biased opinions.
The harmful content detector checks whether the model has a tendency to generate responses that could be used for malicious purposes or to promote harmful actions.
Giskard’s ethical bias detector applies targeted tests to a given input text: it transforms gender, nationality, or religious terms in the text and checks whether the model’s prediction changes as a result.
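The idea behind this kind of bias detection can be sketched with a minimal metamorphic test: swap gender terms in the input and check that the model’s prediction does not flip. This is an illustrative sketch only, not Giskard’s actual implementation; `toy_sentiment_model` and the helper names are hypothetical stand-ins.

```python
# Minimal sketch of a metamorphic bias check: a fair model's prediction
# should be invariant when gender terms in the input are swapped.
# All names here are illustrative, not Giskard's real API.

GENDER_SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his",
                "man": "woman", "woman": "man"}

def swap_gender_terms(text: str) -> str:
    """Replace each gender term with its counterpart, word by word."""
    return " ".join(GENDER_SWAPS.get(w, w) for w in text.lower().split())

def toy_sentiment_model(text: str) -> str:
    """Hypothetical classifier; a fair model ignores gender terms."""
    return "positive" if "great" in text.lower() else "negative"

def check_gender_invariance(model, texts):
    """Return the inputs whose prediction flips after the gender swap."""
    return [t for t in texts if model(t) != model(swap_gender_terms(t))]

failures = check_gender_invariance(
    toy_sentiment_model,
    ["He is a great engineer", "She writes solid code"],
)
print(failures)  # → [] : predictions are invariant under the swap
```

A real detector would use richer transformations (nationality and religious terms, grammar-aware substitutions) and statistical thresholds rather than exact-match comparisons, but the metamorphic structure is the same.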