
HCI’s Journey to MLOps Efficiency: A Case Study

Iguazio
Analytics & Modeling - Data Mining
Analytics & Modeling - Machine Learning
Home Credit International (HCI), a global consumer financial provider, recognized the potential of Machine Learning (ML) models in financial institutions, particularly in risk-related use cases. However, deploying ML models efficiently was a challenge: time to delivery was long and access to data was limited. HCI’s internal research revealed that nearly 80% of the time spent on data science tasks went to collecting, cleaning, and organizing data, leaving only about 20% for core tasks such as building training sets, mining data, and refining algorithms. In 2021, the average delivery time of an AI initiative, from prototype to production, was more than seven months. The biggest blocker to more efficient use of AI/ML was access to data, followed by the need for a proper AI/ML environment.
Home Credit International (HCI) is a global consumer financial provider. As a leader in its space, HCI identified the potential of Machine Learning (ML) models in financial institutions, particularly in risk-related use cases. Top use cases include campaign management and offer generation, risk-based pricing, next-best-offer prediction for cross-selling, next-best-offer recommendations that reduce insurance-related risks and penalties when servicing products, payment-behavior prediction, payment calendars and promise-to-pay during collections, anti-fraud protection for clients, and client behavioral profiles built from data-mining predictions.
HCI identified four key areas for improving ML delivery and deployment efficiency: Access to Data, Automation, Performance, and Knowledge Sharing and Support. They built a data strategy covering structured, semi-structured, and unstructured data, as well as federated and virtual data. They created a proper AI/ML environment through automated building, training, and monitoring; operational efficiency; elasticity; serverless computing; and a hybrid solution. Time to delivery was improved through integrations, standardization and sharing, and automation. Operational efficiency was achieved by combining on-premises infrastructure with the public cloud and implementing solutions for hardware, file systems, maintenance support, enterprise security, and data management. Elasticity was delivered through zero downtime, auto-scaling, easy scaling of existing Python code, and support for complex event processing. HCI also adopted open source MLRun, an MLOps orchestration framework for accelerating ML pipelines, which comprises four main components: the feature store, the real-time serving pipeline, monitoring and retraining, and CI/CD for ML.
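To make the MLRun piece concrete, the following is a minimal, illustrative sketch of how existing Python training code can be wrapped as a serverless job and its model deployed behind a real-time serving microservice. It is not HCI’s actual code: the script names, handler, parameters, and dataset URI are hypothetical, and the exact SDK calls can vary across MLRun versions.

    # Illustrative MLRun sketch (not HCI's actual code). Assumes the open-source
    # "mlrun" package and a reachable MLRun service; "train.py", its "train"
    # handler, "serving.py", its "RiskModel" class, the parameters and the
    # dataset URI are all hypothetical placeholders.
    import mlrun

    # A project groups functions, artifacts and runs (the CI/CD-for-ML component).
    project = mlrun.get_or_create_project("risk-models", context="./")

    # Wrap existing Python training code as a serverless job without rewriting it.
    train_fn = mlrun.code_to_function(
        name="train-risk-model",
        filename="train.py",   # hypothetical script exposing a train(context, dataset) handler
        kind="job",
        image="mlrun/mlrun",
        handler="train",
    )

    # Run the job; MLRun tracks parameters, metrics and output artifacts automatically.
    train_run = train_fn.run(
        params={"label_column": "default_flag"},                   # hypothetical parameter
        inputs={"dataset": "store://datasets/risk-models/loans"},  # hypothetical dataset URI
    )

    # Deploy the trained model behind a real-time serving pipeline (a Nuclio microservice).
    serving_fn = mlrun.code_to_function(
        name="risk-serving",
        filename="serving.py",  # hypothetical file defining a model-server class "RiskModel"
        kind="serving",
        image="mlrun/mlrun",
    )
    serving_fn.add_model(
        "risk-model",
        model_path=train_run.outputs["model"],  # assumes the handler logged an artifact named "model"
        class_name="RiskModel",
    )
    serving_fn.deploy()

The same project object can also register feature sets in the feature store and wire the functions into automated pipelines, which is the general pattern behind the one-click deployment and automated pipeline deployment described below.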
As a result of implementing these efficiency measures, HCI was able to significantly improve their ML operations. They were able to reduce the time to delivery by 3 to 6.6 times, and even up to 10 times in some cases. They also managed to cut operating costs by 60% and reduce storage capacity by 20 times. With the use of MLRun, they were able to run automated, fast, and continuous ML processes and deliver production data. Code was deployed to the microservice in one click, pipeline deployment was automated, and monitoring was automated and codeless. Through collaborative and continuous development and MLOps, HCI achieved faster time to production, efficient use of resources, high quality and responsible AI, and continuous application improvement.
Time to delivery reduced by 3x to 6.6x, and up to 10x in some cases
Operating costs cut by 60%
Storage capacity reduced by 20X
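To illustrate the automated, codeless monitoring mentioned above, the sketch below shows how built-in model monitoring is typically enabled on a deployed MLRun serving function. The project and function names carry over from the earlier hypothetical sketch, and the exact calls can vary by MLRun version.

    # Illustrative only: enable MLRun's built-in model monitoring on the serving
    # function from the previous sketch; names are hypothetical.
    import mlrun

    project = mlrun.get_or_create_project("risk-models", context="./")
    serving_fn = project.get_function("risk-serving")

    serving_fn.set_tracking()  # stream served requests/predictions to the monitoring layer
    serving_fn.deploy()        # redeploy so tracking takes effect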