How to Detect Under-Performing Segments in ML Models
Machine Learning (ML) models often behave differently across data segments. Without monitoring each segment, you won’t notice a problem until it’s too late.
Building Custom Metrics for Predictive Maintenance
Traditional metrics rarely capture the true financial impact of predictions, such as preventing a machine breakdown or optimizing a maintenance schedule. What you need are custom metrics.
3 Custom Metrics for Your Forecasting Models
You’ve worked hard to build a model that should add value, but the wrong metrics can make it look like your work is falling short. By employing custom metrics that align more closely with business needs, you can demonstrate the real value of your work.
Python Tutorial: Developing Custom Metrics for Insurance AI Models
Insurance companies are often concerned not just with the accuracy of their predictions but also with their financial implications. Custom metrics offer a way to measure model performance in a more nuanced and context-specific manner than traditional metrics. We’ll explore two machine learning applications in insurance and the custom metrics suited to each.
Monitoring Demand Forecasting Models without Targets
In this blog, I explore why demand forecasting models can fail after deployment and share some handy tricks to correct or contain these issues. With the right approach, you can keep your forecasts reliable and your business running smoothly.
Top 3 Custom Metrics Data Scientists Should Build for Finance: A Tutorial
In this blog, we’ll explore the differences between traditional and custom metrics and examine finance-specific classification and regression models. We also include a step-by-step tutorial on setting up and using these metrics in NannyML Cloud.
Monitoring Custom Metrics without Ground Truth
Setting up custom metrics for your machine learning models can bring deeper insights beyond standard metrics. In this tutorial, we’ll walk through the process step-by-step, showing you how to create custom metrics tailored to classification and regression models.
Reverse Concept Drift Algorithm: Insights from NannyML Research
This blog explores concept drift and how it impacts machine learning models. We'll discuss the algorithms and experiments we conducted to detect and measure its impact, and how we arrived at the Reverse Concept Drift algorithm.
Prevent Failure of Product Defect Detection Models: A Post-Deployment Guide
This blog dissects the core challenge of monitoring defect detection models: the censored confusion matrix. Additionally, I explore how business value metrics can help you articulate the financial impact of your ML models to stakeholders outside data science.
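The core idea behind a business value metric is simple: assign a monetary value to each cell of the confusion matrix and sum them up. A minimal sketch in Python (the dollar figures and the `business_value` helper below are illustrative assumptions, not values from the post):

```python
# Hypothetical cost/benefit per confusion-matrix outcome for a defect
# detection model. These numbers are made up for illustration.
VALUE_PER_OUTCOME = {
    "tp": 150.0,   # defect caught before shipping
    "tn": 0.0,     # good part correctly passed
    "fp": -20.0,   # good part needlessly re-inspected
    "fn": -500.0,  # defective part shipped to a customer
}

def business_value(tp, tn, fp, fn, values=VALUE_PER_OUTCOME):
    """Translate raw confusion-matrix counts into a monetary figure."""
    return (tp * values["tp"] + tn * values["tn"]
            + fp * values["fp"] + fn * values["fn"])

# Example: 40 true positives, 900 true negatives, 30 false positives,
# 5 false negatives over a monitoring window.
print(business_value(tp=40, tn=900, fp=30, fn=5))  # 2900.0
```

Because false negatives are weighted far more heavily than false positives here, a model can have high accuracy yet negative business value, which is exactly the kind of gap these metrics are meant to expose.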
Why Relying on Training Data for ML Monitoring Can Trick You
The most common mistake when choosing a reference dataset is using the training data. This blog highlights the drawbacks of that choice and guides you in selecting the correct reference data.
A Comprehensive Guide to Univariate Drift Detection Methods
Discover how to tackle univariate drift with our comprehensive guide. Learn about key techniques such as the Jensen-Shannon Distance, Hellinger Distance, the Kolmogorov-Smirnov Test, and more. Implement them in Python using the NannyML library.