
Overcoming the Challenges of Machine Learning Models 

Organizations constantly struggle to operationalize their machine learning models and get them into production, where they can actually benefit the business. Data scientists build the models, but they are typically unaware of the production aspects of deploying or scoring them, and they usually refrain from touching production in case something goes wrong. It is also typically not their responsibility to handle DevOps tasks such as model deployment; traditionally, these DevOps functions and the work of the data scientists have been siloed.

With all this happening in the background, let us consider five challenges that machine learning models present in production.

  1. Periodic Redeployment of Machine Learning Models

Machine learning models need to be redeployed again and again because they decay over time. This runs contrary to the software engineering principles developers practice: code deployed once keeps working indefinitely and only needs to be redeployed when it is improved. Machine learning models, by contrast, can lose value over time as production data shifts away from the data they were trained on. This decay must be managed over the lifetime of the model and requires close monitoring.
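To make the idea concrete, here is a minimal, hypothetical sketch (in Python, not from the article) of a decay check: it compares a model's recent accuracy against the accuracy recorded at deployment and flags the model for retraining once the drop exceeds a threshold. The 5% threshold and the averaging window are illustrative assumptions.

```python
# Illustrative decay check: the threshold and accuracy window are assumptions.
def needs_redeployment(baseline_accuracy: float,
                       recent_accuracies: list[float],
                       max_drop: float = 0.05) -> bool:
    """Return True when recent average accuracy has fallen more than
    `max_drop` below the accuracy recorded at deployment time."""
    if not recent_accuracies:
        return False
    recent_avg = sum(recent_accuracies) / len(recent_accuracies)
    return (baseline_accuracy - recent_avg) > max_drop

# Deployed at 0.91 accuracy, recent window averages ~0.83 -> flag for retraining.
print(needs_redeployment(0.91, [0.84, 0.83, 0.82]))  # True
```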

  2. All About the Monitoring

In contrast to software engineering code, machine learning models may require extra monitoring effort. Because these models are trained on data and then deployed, that data also needs to be accurate and free of unexpected anomalies. In most cases, tracking must be built around incoming feature vectors to detect drift, bias or anomalies in the data. With this in mind, it is also important to have monitoring and alerts for the incoming data itself.
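As one illustration of such monitoring (an assumption, not the author's tooling), the sketch below runs a two-sample Kolmogorov–Smirnov test on a single feature, comparing its training-time distribution against a recent production window and flagging drift when the two differ significantly. A real pipeline would track many features and wire the result into alerting.

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(train_values: np.ndarray,
                    live_values: np.ndarray,
                    alpha: float = 0.05) -> bool:
    """Flag drift when the live feature distribution differs significantly
    from the distribution the model was trained on (illustrative alpha)."""
    _, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time feature values
live = rng.normal(loc=0.4, scale=1.0, size=1_000)   # shifted production values
print(feature_drifted(train, live))  # True -> raise a drift alert
```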

  3. Machine Learning Models Span Three Verticals

With conventional software, development teams mainly need to worry about the language in which the code is written and deployed; once the supporting infrastructure is built, those systems can serve that code for its entire lifetime. This does not hold true for machine learning model deployment. Along with the programming language, the libraries and frameworks a model depends on also need to be considered, and supporting these three verticals requires a proper infrastructure or platform-like tool.

The development of machine learning models takes place in a very heterogeneous environment. Data scientists use a wide variety of machine learning frameworks and languages, which may rely on libraries tied to underlying hardware, such as Nvidia's CUDA, and on other dependencies that create back-end challenges.

Data scientists would therefore find tremendous support in a common platform that works with all kinds of frameworks and programming languages, because it would let them focus on domain knowledge rather than confine themselves to a limited number of frameworks. A faster way to deploy developed models to production would also give data scientists more freedom: they could push models and new versions more frequently, which helps quickly capture the root causes of actual production issues with a model.
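As a rough sketch of what such a framework-agnostic layer might look like (this interface is hypothetical, not Datatron's actual platform), each model, whatever library produced it, could be wrapped behind a single predict() contract so that serving code never depends on a specific framework:

```python
from abc import ABC, abstractmethod
from typing import Any, Sequence

class ServableModel(ABC):
    """Uniform contract the serving layer scores against, regardless of framework."""

    @abstractmethod
    def predict(self, features: Sequence[Sequence[float]]) -> list[Any]:
        ...

class SklearnServable(ServableModel):
    """Hypothetical adapter for an already-fitted scikit-learn estimator."""

    def __init__(self, estimator):
        self._estimator = estimator

    def predict(self, features):
        return self._estimator.predict(features).tolist()

def score(model: ServableModel, batch: Sequence[Sequence[float]]) -> list[Any]:
    # Serving code only knows about ServableModel, so frameworks (and model
    # versions) can be swapped without changing the deployment path.
    return model.predict(batch)
```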

  4. Compliance

Before being deployed into production, machine learning models also have to pass regulatory compliance. Predictions can vary, and their history needs to be reviewed to establish that the model is behaving correctly. This is especially true in the banking and financial industries, where model predictions must be traceable quickly and easily to demonstrate compliance to regulators and to explain why a model gave a certain price prediction. To find a prediction made by a model in the past, tracking needs to be built in so that the model and the dataset it was trained on can be located easily.
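One way such tracking could be wired in (a sketch under assumed names, not a prescribed implementation) is to log every scored request together with the model version and a hash of the training dataset, so any past prediction can be traced back to exactly which model and data produced it:

```python
import hashlib
import json
from datetime import datetime, timezone

def dataset_fingerprint(path: str) -> str:
    """Hash the training dataset file so the exact data a model saw is traceable."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def audit_record(model_version: str, training_data_hash: str,
                 features: dict, prediction) -> str:
    """Serialize one prediction with its lineage; appending these lines to an
    immutable log lets past predictions be explained to regulators later."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "training_data_hash": training_data_hash,
        "features": features,
        "prediction": prediction,
    })
```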

  5. Managing Similar Kinds of Models

It is hard to find a tool on the market today that allows several models to be deployed at the same time and compared on how they behave against the same data; doing so is complex, laborious and hard to achieve. If deploying a single machine learning model is manual, the effort to deploy and compare several models multiplies and becomes nearly impossible. Being able to see monitoring metrics for multiple models on production data is a powerful way of choosing the right model: sound business decisions can be made about which model is behaving correctly, and bad models can be retired easily and early.
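As an illustration (a toy sketch, not a specific product), comparing candidates can be as simple as scoring every model on the same labeled production sample and tabulating a metric per model; real setups would add more metrics and dashboards:

```python
from typing import Callable, Dict, Sequence

def compare_models(models: Dict[str, Callable[[Sequence], Sequence]],
                   features: Sequence,
                   labels: Sequence) -> Dict[str, float]:
    """Score every model on the same production sample and report accuracy."""
    report = {}
    for name, predict in models.items():
        predictions = predict(features)
        correct = sum(p == y for p, y in zip(predictions, labels))
        report[name] = correct / len(labels)
    return report

# Toy usage with two stand-in "models" (threshold rules instead of real models).
features = [[0.2], [0.7], [0.4], [0.9]]
labels = [0, 1, 0, 1]
models = {
    "model_a_v1": lambda xs: [int(x[0] > 0.5) for x in xs],
    "model_b_v2": lambda xs: [int(x[0] > 0.8) for x in xs],
}
print(compare_models(models, features, labels))
# {'model_a_v1': 1.0, 'model_b_v2': 0.75} -> retire the weaker model early
```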

Victory is Within Reach

Operationalizing machine learning models has its challenges, but it is not impossible. In addition to the points noted above, adopting a new model development lifecycle will streamline the process of model development and model production. It does this by helping data scientists, engineering and other involved teams make effective decisions in a timely manner, while also helping teams mitigate production risks.

About the Author

Harish Doddi of Datatron

Harish Doddi is the CEO of Datatron. He earned a master's degree in computer science from Stanford University, where he specialized in systems and databases, and an undergraduate degree in computer science from the International Institute of Information Technology (IIIT-Hyderabad). He started his career at Oracle and then moved to Twitter to work on open source technologies. He also managed the Snapchat Stories product from its inception, as well as the pricing team at Lyft. He enjoys traveling and meeting new people.
