Arthur announces $3.3M seed to monitor machine learning model performance
Machine learning is a complex process. You build a model, test it in laboratory conditions, then put it out in the world. After that, how do you monitor how well it’s tracking what you designed it to do? Arthur wants to help, and today it emerged from stealth with a new platform to help you monitor machine learning models in production.
The company also announced a $3.3 million seed round, which closed in August.
Arthur CEO and co-founder Adam Wenchel says that Arthur is analogous to a performance-monitoring platform like New Relic or DataDog, but instead of monitoring your systems, it’s tracking the performance of your machine learning models.
“We are an AI monitoring and explainability company, which means when you put your models in production, we let you monitor them to know that they’re not going off the rails, that you can explain what they’re doing, that they’re not performing badly and are not being totally biased — all of the ways models can go wrong,” Wenchel explained.
Data scientists build machine learning models and test them in the lab, but as Wenchel says, when that model leaves the controlled environment of the lab, lots can go wrong, and it’s hard to keep track of that. “Models always perform well in the lab, but then you put them out in the real world and there is often a drop-off in performance — in fact, almost always. So being able to measure and monitor that is a capability people really need,” he said.
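The kind of lab-to-production drop-off Wenchel describes can be made concrete with a minimal sketch. This is purely illustrative (not Arthur's actual product or API): it compares a model's accuracy on a lab holdout set against a window of labeled production predictions and flags the model when the drop exceeds a chosen threshold.

```python
# Illustrative sketch of performance-drop monitoring, NOT Arthur's API:
# compare lab (holdout) accuracy against recent labeled production traffic
# and flag the model when accuracy falls more than `max_drop` below baseline.

def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def check_for_drift(lab_acc, prod_preds, prod_labels, max_drop=0.05):
    """Return (prod_acc, drifted): drifted is True when production
    accuracy fell more than max_drop below the lab baseline."""
    prod_acc = accuracy(prod_preds, prod_labels)
    return prod_acc, (lab_acc - prod_acc) > max_drop

# Hypothetical example: the model scored 0.92 in the lab, but a recent
# window of production predictions shows a much higher error rate.
prod_acc, drifted = check_for_drift(
    lab_acc=0.92,
    prod_preds=[1, 0, 1, 1, 0, 1, 0, 0, 1, 1],
    prod_labels=[1, 0, 0, 1, 0, 1, 1, 0, 0, 1],
)
print(prod_acc, drifted)  # 0.7 True
```

A real monitoring platform would track many more signals than raw accuracy (input distribution shift, prediction skew, fairness metrics), but the core loop — baseline, observe, compare, alert — is the same.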