The Post Deployment Data Science Blog
All things data science and machine learning, post-deployment. Run by NannyML.
How to Detect Under-Performing Segments in ML Models
Machine Learning (ML) models often behave differently across different data segments. Without monitoring each segment, under-performance in one of them can go unnoticed until it’s too late.
Building Custom Metrics for Predictive Maintenance
Traditional metrics often fail to capture the true financial impact of predictions, such as preventing a machine breakdown or optimizing maintenance schedules. What you need are custom metrics.
3 Custom Metrics for Your Forecasting Models
You’ve worked hard to build a model that should add value, but the wrong metrics can make it look like your work is falling short. By employing custom metrics that align more closely with business needs, you can demonstrate the real value of your work.
Python Tutorial: Developing Custom Metrics for Insurance AI Models
Insurance companies are often concerned not just with the accuracy of their predictions but also with their financial implications. Custom metrics offer a way to measure model performance in a more nuanced and context-specific manner than traditional metrics. We’ll explore two machine learning applications in insurance and the custom metrics suited to each.
Monitoring Demand Forecasting Models without Targets
In this blog, I explore why demand forecasting models can fail after deployment and share some handy tricks to correct or contain these issues. With the right approach, you can keep your forecasts reliable and your business running smoothly.
Top 3 Custom Metrics Data Scientists Should Build for Finance: A Tutorial
In this blog, we’ll explore the differences between traditional and custom metrics and examine finance-specific classification and regression models. We also include a step-by-step tutorial on setting up and using these metrics in NannyML Cloud.
Monitoring Custom Metrics without Ground Truth
Setting up custom metrics for your machine learning models can bring deeper insights beyond standard metrics. In this tutorial, we’ll walk through the process step-by-step, showing you how to create custom metrics tailored to classification and regression models.
Reverse Concept Drift Algorithm: Insights from NannyML Research
This blog explores concept drift and how it impacts machine learning models. We'll discuss the algorithms and experiments we conducted to detect and measure its impact and how we arrived at the Reverse Concept Drift Algorithm.
Which Multivariate Drift Detection Method Is Right for You: Comparing DRE and DC
In this blog, we compare two multivariate drift detection methods, Data Reconstruction Error (DRE) and Domain Classifier (DC), to help you determine which one is better suited for your needs.
Common Pitfalls in Monitoring Default Prediction Models and How to Fix Them
Learn common reasons why loan default prediction models degrade after deployment in production, and follow a hands-on tutorial to resolve these issues.
Prevent Failure of Product Defect Detection Models: A Post-Deployment Guide
This blog dissects the core challenge of monitoring defect detection models: the censored confusion matrix. Additionally, I explore how business value metrics can help you articulate the financial impact of your ML models in front of non-data science experts.
How to Monitor a Credit Card Fraud Detection ML Model
Learn common reasons why fraud detection models degrade after deployment in production, and follow a hands-on tutorial to resolve these issues.
Keep your Model Performance Breezy: Wind Turbine Energy Model Monitoring
Explore how NannyML’s tools can help maintain the reliability of wind turbine energy prediction models.
Why Relying on Training Data for ML Monitoring Can Trick You
The most common mistake when choosing a reference dataset is using the training data. This blog highlights the drawbacks of that choice and guides you toward selecting the right reference data.
Using Concept Drift as a Model Retraining Trigger
Discover how NannyML’s innovative Reverse Concept Drift (RCD) algorithm optimizes retraining schedules and ensures accurate, timely interventions when concept drift impacts model performance.
Retraining is Not All You Need
Your machine learning (ML) model’s performance will likely decrease over time. In this blog, we explore which steps you can take to remedy your model and get it back on track.
Getting Up To Speed With NannyML’s OSS Library Optimizations (2024)
Discover the latest optimizations to speed up your ML monitoring and maintain top performance with NannyML's improved open-source tools!
A Comprehensive Guide to Univariate Drift Detection Methods
Discover how to tackle univariate drift with our comprehensive guide. Learn about key techniques such as the Jensen-Shannon Distance, Hellinger Distance, the Kolmogorov-Smirnov Test, and more. Implement them in Python using the NannyML library.
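For a quick taste of these checks, here is a minimal sketch comparing a reference and a production sample of a single feature with the Kolmogorov–Smirnov test and the Jensen–Shannon distance; it uses SciPy and synthetic data for illustration, not the NannyML library the post itself works with.

```python
import numpy as np
from scipy.stats import ks_2samp
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5_000)   # feature values during the reference period
production = rng.normal(0.3, 1.2, 5_000)  # illustrative shifted production values

# Kolmogorov–Smirnov test: compares the two empirical CDFs directly
ks_result = ks_2samp(reference, production)

# Jensen–Shannon distance: compares binned (discretised) versions of the distributions
edges = np.histogram_bin_edges(np.concatenate([reference, production]), bins=20)
p, _ = np.histogram(reference, bins=edges, density=True)
q, _ = np.histogram(production, bins=edges, density=True)
js_distance = jensenshannon(p, q)

print(f"KS statistic={ks_result.statistic:.3f}, p-value={ks_result.pvalue:.3g}")
print(f"JS distance={js_distance:.3f}")
```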
Stress-free Monitoring of Predictive Maintenance Models
Prevent costly machine breakdowns with NannyML’s workflow: Learn to tackle silent model failures, estimate performance with CBPE, and resolve issues promptly.
Effective ML Monitoring: A Hands-on Example
NannyML’s ML monitoring workflow is an easy, repeatable and effective way to ensure your models keep performing well in production.
Population Stability Index (PSI): A Comprehensive Overview
What is the Population Stability Index (PSI)? How can you use it to detect data drift using Python? Is PSI the right method for you? This blog is the perfect read if you want answers to those questions.
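For a flavour of the calculation before reading the full post, here is a minimal sketch of PSI on a single numeric feature with synthetic data; the binning choice and the `reference`/`production` arrays are illustrative assumptions, not NannyML's implementation.

```python
import numpy as np

def psi(reference, production, bins=10, eps=1e-6):
    """Population Stability Index between a reference and a production sample."""
    # Bin edges come from the reference distribution; production values outside
    # that range are simply ignored in this sketch
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_counts, _ = np.histogram(reference, bins=edges)
    prod_counts, _ = np.histogram(production, bins=edges)
    # Convert counts to proportions; eps avoids division by zero and log(0)
    ref_pct = ref_counts / ref_counts.sum() + eps
    prod_pct = prod_counts / prod_counts.sum() + eps
    return float(np.sum((prod_pct - ref_pct) * np.log(prod_pct / ref_pct)))

rng = np.random.default_rng(0)
# A shifted production sample yields a clearly elevated PSI
print(psi(rng.normal(0, 1, 10_000), rng.normal(0.5, 1, 10_000)))
```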
Comparing Multivariate Drift Detection Algorithms on Real-World Data
This blog introduces covariate shift and various approaches to detecting it, then takes a deep dive into multivariate drift detection algorithms, applying NannyML to a real-world dataset.
Detect Data Drift Using Domain Classifier in Python
A comprehensive explanation and practical guide to using the Domain Classifier method for detecting multivariate drift.
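The core idea, roughly sketched below with scikit-learn on synthetic data (an illustrative stand-in, not NannyML's implementation): train a classifier to tell reference rows from production rows and read its cross-validated AUROC as a drift signal, with values near 0.5 meaning the two samples look alike.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def domain_classifier_auroc(reference: np.ndarray, production: np.ndarray) -> float:
    """AUROC of a classifier trained to separate reference rows from production rows.

    ~0.5 means the samples are indistinguishable (no detectable drift);
    values approaching 1.0 indicate the feature distributions have shifted.
    """
    X = np.vstack([reference, production])
    y = np.concatenate([np.zeros(len(reference), dtype=int), np.ones(len(production), dtype=int)])
    scores = cross_val_score(GradientBoostingClassifier(), X, y, cv=5, scoring="roc_auc")
    return float(scores.mean())

rng = np.random.default_rng(42)
ref = rng.normal(size=(2_000, 3))
prod = rng.normal(size=(2_000, 3))
prod[:, 1] += 0.8                          # illustrative drift in the second feature
print(domain_classifier_auroc(ref, prod))  # noticeably above 0.5
```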
Monitoring Strategies for Demand Forecasting Machine Learning Models
Demand forecasting models are among the most challenging to monitor post-deployment.
Harnessing the Power of AWS SageMaker & NannyML PART 1: Training and Deploying an XGBoost Model
A walkthrough on how to train, deploy and continuously monitor ML models using NannyML and AWS SageMaker.
How to monitor ML models with NannyML SageMaker Algorithms
A walkthrough on how to deploy NannyML Monitoring Algorithms via AWS Marketplace and SageMaker.
How to Deploy NannyML in Production: A Step-by-Step Tutorial
Let’s dive into the process of setting up a monitoring system using NannyML with Grafana, PostgreSQL, and Docker.
91% of ML Models Degrade Over Time
A closer look at a paper from MIT, Harvard, and other institutions showing how ML models’ performance tends to degrade over time.
Bad Machine Learning Models Can Still Be Well-Calibrated
You don’t need a perfect oracle to get your probabilities right.
Detecting Covariate Shift: A Guide to the Multivariate Approach
Good old PCA can alert you when the distribution of your production data changes.
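A minimal sketch of that idea on synthetic data, using scikit-learn rather than NannyML's Data Reconstruction Error implementation: fit PCA on reference data, reconstruct production rows from the reduced components, and track the average reconstruction error.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def reconstruction_error(reference: np.ndarray, production: np.ndarray, n_components: int = 2) -> float:
    """Mean reconstruction error of production rows under a PCA fitted on reference data."""
    scaler = StandardScaler().fit(reference)
    pca = PCA(n_components=n_components).fit(scaler.transform(reference))
    scaled = scaler.transform(production)
    reconstructed = pca.inverse_transform(pca.transform(scaled))
    return float(np.linalg.norm(scaled - reconstructed, axis=1).mean())

rng = np.random.default_rng(7)
# Reference: the first two features are strongly correlated
x = rng.normal(size=5_000)
ref = np.column_stack([x, x + rng.normal(0, 0.1, 5_000), rng.normal(size=5_000)])
# Production: that correlation breaks down, even though each marginal looks similar
prod = rng.normal(size=(5_000, 3))
print(reconstruction_error(ref, ref), reconstruction_error(ref, prod))  # error rises under drift
```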