WhyLabs Weekly: One Million Downloads!
Our open source ML monitoring library reached one million downloads!
A lot happens every week in the WhyLabs Robust & Responsible AI (R2AI) community! This weekly update serves as a recap so you don’t miss a thing!
Start learning about MLOps and ML Monitoring:
- 📅 Join the next event: Monitoring LLMs in Production with Hugging Face & WhyLabs
- 💻 Check out our open source projects whylogs & LangKit!
- 💬 Join 1,257+ Robust & Responsible AI Slack members
- 🤝 Request a demo to learn how ML monitoring can benefit you
💡 MLOps tip of the week:
whylogs, our open source library for data and ML monitoring, has reached over one million downloads! To learn more about monitoring your data pipelines, LLMs, and ML models, check out our GitHub and don’t forget to give us a star ⭐: https://github.com/whylabs/whylogs
We’ve covered using it for monitoring data drift and ML performance in previous posts!
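The core idea behind data logging is to capture a lightweight statistical profile of each batch instead of retaining raw data. As a rough conceptual illustration in pure Python (this is *not* the whylogs API — see the GitHub link above for the real library):

```python
# Conceptual sketch of statistical data profiling (NOT the whylogs API):
# summarize a batch of numeric values into a tiny profile rather than
# storing the raw rows themselves.
import math

def profile_batch(values):
    """Return summary statistics for one batch of a numeric column."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return {
        "count": n,
        "min": min(values),
        "max": max(values),
        "mean": mean,
        "stddev": math.sqrt(var),
    }

batch = [10.5, 12.0, 9.9, 11.2, 10.8]
print(profile_batch(batch))
```

Profiles like this can be collected per batch and compared over time, which is what makes monitoring cheap enough to run continuously in production.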
Check out these getting started guides to add monitoring to your data, LLMs, and ML pipelines!
- Data Drift Monitoring and Its Importance in MLOps
- Hugging Face and LangKit: Your Solution for LLM Observability
- Safeguarding and Monitoring Large Language Model (LLM) Applications
- Monitoring LLM Performance with LangChain and LangKit
📝 Latest blog posts:
Data Drift Monitoring and Its Importance in MLOps
Machine Learning (ML) is now an essential tool in most modern businesses, driving everything from predictive analytics to AI-enhanced applications. However, to ensure the effectiveness of your models, it’s important to continuously monitor and manage ML performance; this process is known as Machine Learning Operations (MLOps). One crucial aspect of MLOps is managing “data drift.” But what is data drift, and why is it so important to monitor it in your MLOps pipeline? Read more on WhyLabs.AI
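One common way to quantify data drift is the Population Stability Index (PSI), which compares a feature's binned distribution in production against a training-time baseline. A minimal sketch (the histograms and the 0.25 threshold below are illustrative conventions, not WhyLabs defaults):

```python
import math

def psi(baseline_counts, current_counts, eps=1e-6):
    """Population Stability Index between two binned distributions.
    Common rule of thumb (illustrative): PSI < 0.1 stable,
    0.1-0.25 moderate drift, > 0.25 significant drift."""
    total_b = sum(baseline_counts)
    total_c = sum(current_counts)
    score = 0.0
    for b, c in zip(baseline_counts, current_counts):
        p = max(b / total_b, eps)  # baseline bin proportion
        q = max(c / total_c, eps)  # current bin proportion
        score += (q - p) * math.log(q / p)
    return score

baseline = [100, 200, 400, 200, 100]   # histogram at training time
drifted  = [300, 300, 200, 100, 100]   # histogram in production

print(psi(baseline, baseline))          # identical distributions -> 0.0
print(psi(baseline, drifted) > 0.25)    # True: significant drift
```

In practice a monitoring tool computes scores like this per feature per batch and alerts when they cross a threshold, rather than requiring manual comparison.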
Glassdoor Decreases Latency Overhead and Improves Data Monitoring with WhyLabs
Consider the scenario where we want to integrate a new tool into an existing service that potentially operates in real-time and involves some user interface. We need to make sure that the latency of the service in production is acceptable after the integration, while still keeping the overall maintenance costs low. Read more on WhyLabs.AI
🎥 Event recordings
Build and Monitor Computer Vision Models with TensorFlow/Keras + WhyLabs
If you want to build reliable computer vision pipelines, trustworthy data, and responsible ML models, you’ll need to monitor your models and data.
In this workshop, we cover how to use ML monitoring techniques to implement your own AI observability solution for computer vision classification applications.
- 9/6 Monitoring LLMs in Production with Hugging Face & WhyLabs
- 9/13 Intro to AI Observability: Monitoring ML Models & Data in Production
- 9/20 Monitoring LLMs in Production using OpenAI, LangChain & WhyLabs
💻 WhyLabs open source updates:
whylogs v1.3.2 has been released!
whylogs is the open standard for data logging & AI telemetry. This week’s update includes:
- Factor out the whylabs client cache so sessions can use it
- Adds condition validator udf example
- Update whylabs-client dependency, and poetry update
- Added UDF documentation & examples
See full whylogs release notes on GitHub.
LangKit 0.0.17 has been released!
LangKit is an open-source text metrics toolkit for monitoring language models.
- Safeguard example refactor
- Quieter logs in LangKit test suite
- Safeguard example fix
See full LangKit release notes on GitHub.
🤝 Stay connected with the WhyLabs Community:
Join the thousands of machine learning engineers and data scientists already using WhyLabs to solve some of the most challenging ML monitoring cases!
- 1,257+ Robust & Responsible AI Slack members
- 2,347+ whylogs GitHub Stars
- 1,234+ Robust & Responsible AI Meetup Members
- 9,723+ WhyLabs LinkedIn followers
- 911+ WhyLabs Twitter followers
Request a demo to learn how ML monitoring can benefit your company.
See you next time! — Sage Elliott