Case Studies
Deploying Large-Scale Real-Time Predictions with Apache Kafka: A Playtika Case Study
Playtika, a leading game-entertainment company, faced significant challenges in scaling real-time machine learning in production. With over 10 million daily active users, 10 billion daily events, and over 9TB of data processed daily, the company's existing batch and web-service deployment methods could neither scale to meet demand nor produce predictions in real time. The REST APIs in their ML pipelines suffered from service exhaustion and client starvation, and forced the team to handle failures and retries and to tune bulk sizes for batch partitioning. Playtika's event-driven ecosystem required a solution that could stream their production models in real time and scale without downtime. They also needed a solution that could integrate with processes such as Airflow and Spark, and handle bursts, peaks, and the fast creation of new ML pipelines.
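The shift described above, from blocking REST calls to consuming predictions off the event stream, can be illustrated with a minimal sketch. It assumes the kafka-python client, a model serialized with joblib, and illustrative topic names and event schema; none of these are Playtika's actual components.

```python
# Minimal sketch of an event-driven prediction service, assuming the
# kafka-python client and a pre-trained model loaded via joblib.
# Topic names and the feature schema are illustrative.
import json

import joblib
from kafka import KafkaConsumer, KafkaProducer

model = joblib.load("model.pkl")  # hypothetical serialized model

consumer = KafkaConsumer(
    "game-events",                      # hypothetical input topic
    bootstrap_servers="localhost:9092",
    group_id="prediction-service",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for message in consumer:
    event = message.value
    features = [[event["feature_a"], event["feature_b"]]]  # illustrative schema
    score = float(model.predict(features)[0])
    # Publish the prediction back onto the event stream instead of
    # answering a blocking REST call, so bursts are absorbed by Kafka.
    producer.send("predictions", {"user_id": event["user_id"], "score": score})
```

Because the broker buffers traffic, bursts and peaks queue up in the topic rather than exhausting a synchronous service, which is the failure mode the REST pipelines hit.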
Smart Manufacturing: Seagate's Global Deployment of Defect Detection System with MLOps Automation
Seagate Technology, a global leader in data storage and management solutions, faced significant challenges in deploying a defect detection system across its global manufacturing facilities. The system had the potential to improve ROI by 300% by significantly reducing the time spent processing defects, at a much lower cost. However, Seagate's legacy workflows made it difficult to deploy the model at scale. The team experienced low efficiency at many stages of the workflow because manual tasks prolonged it and caused bottlenecks in the pipeline. Seagate was also seeing low server utilization across its hybrid cloud infrastructure: each workload had to run separately, and there was no mechanism to place different workloads on optimal machines. The team required infrastructure that automated the pipeline components so that resources would be scheduled automatically, in real time, and with maximum efficiency. At the production level, Seagate required advanced deployments that could serve on TensorFlow and Kafka endpoints.
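As one concrete illustration of the TensorFlow endpoints mentioned above, the sketch below sends an image batch to a TensorFlow Serving REST endpoint. The host, port, model name, and input format are assumptions for illustration, not Seagate's actual deployment.

```python
# Hedged sketch of calling a TensorFlow Serving REST endpoint; the URL,
# model name, and payload shape are assumed for illustration.
import requests

SERVING_URL = "http://tf-serving:8501/v1/models/defect-detector:predict"


def classify(image_batch):
    """image_batch: nested lists matching the model's input signature."""
    response = requests.post(SERVING_URL, json={"instances": image_batch})
    response.raise_for_status()
    # TensorFlow Serving returns predictions under the "predictions" key.
    return response.json()["predictions"]
```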
Shotgun's Rapid Transition to AI-Driven Solutions with cnvrg.io
Shotgun, a global live-entertainment solution, was seeking to integrate AI into its platform to enhance its services, but faced several challenges. First, the team had limited AI experience and needed an efficient, flexible, and intuitive AI platform that would let its small group of engineers deliver AI quickly. Second, most platforms they evaluated were fragmented and constrained them to compatible computing vendors or tools. Their first AI project was a recommender system that would combine user history from various sources to offer advanced event recommendations based on users' event and music tastes. The system also needed a caching layer so recommendations could be served in real time and stay relevant. As Shotgun embarked on its AI journey, it needed a platform flexible and scalable enough for the team to quickly build and support new AI innovations as it grew.
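A caching layer of the kind described above is commonly built with Redis. The sketch below is a minimal version under that assumption; the key scheme, TTL, and the stand-in recommender function are illustrative, not Shotgun's implementation.

```python
# Illustrative sketch of caching precomputed recommendations in Redis so
# lookups are served in real time; key names and TTL are assumptions.
import json

import redis

cache = redis.Redis(host="localhost", port=6379)
CACHE_TTL_SECONDS = 3600  # assumed freshness window


def compute_recommendations(user_id: str) -> list[str]:
    """Stand-in for the actual recommender model."""
    return ["event-123", "event-456"]  # placeholder output


def get_recommendations(user_id: str) -> list[str]:
    key = f"recs:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: fast path for live traffic
    recs = compute_recommendations(user_id)  # cache miss: run the model
    cache.setex(key, CACHE_TTL_SECONDS, json.dumps(recs))
    return recs
```

The TTL trades freshness for latency: expired entries fall back to the model, so recommendations stay relevant without recomputing on every request.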
Achieving Massive Business Growth with Large-Scale Models in Production
Wargaming, an award-winning online game developer and publisher, faced a significant challenge in scaling AI across its business units. With over 110 million players worldwide, more than 15 game titles, and almost 2PB of data, the company had 1,500+ models running in production on a single-server solution. This severely limited the infrastructure and constrained data scientists, since per-core licensing made the overhead cost of adding servers extremely high. The existing platform also restricted data scientists to a language and packages pre-approved by the platform. Wargaming needed a solution that could support large-scale models in production, scale their servers, minimize overhead costs, and give their data scientists flexibility.
Enabling Self-Service MLOps and Faster ML Delivery at monday.com
monday.com, a work operating system (Work OS) that lets organizations manage every aspect of their work, faced significant challenges in implementing machine learning (ML) solutions. The company's data team, BigBrain, was responsible for the data and analytics platform and for ML initiatives. As demand for ML solutions grew, however, the data scientists found themselves heavily reliant on engineers to bring models to production. This resulted in a high time to value, with models often waiting for deployment until a developer was available to set up the infrastructure. Furthermore, the data scientists were siloed, with a disconnected workflow between where a model was trained, deployed, and monitored, creating unnecessary complexity. Key pain points included excessively high time to value due to production bottlenecks, dependency on developers and engineers for deployment, missing critical MLOps capabilities, the inability to consolidate distinct endpoints into a multi-model endpoint pattern, and a disjointed workflow, with each data scientist working in different machine learning tools.
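The multi-model endpoint pattern mentioned above can be sketched as a single HTTP service hosting several models and routing by name. FastAPI and joblib are illustrative choices here, not monday.com's actual stack, and the model names are hypothetical.

```python
# Rough sketch of a multi-model endpoint: one service, many models,
# routed by path parameter. Framework and model names are assumptions.
import joblib
from fastapi import FastAPI, HTTPException

app = FastAPI()

# Hypothetical registry of serialized models loaded once at startup.
MODELS = {name: joblib.load(f"{name}.pkl") for name in ("churn", "lead-scoring")}


@app.post("/predict/{model_name}")
def predict(model_name: str, features: list[float]):
    model = MODELS.get(model_name)
    if model is None:
        raise HTTPException(status_code=404, detail="unknown model")
    # One consolidated deployment replaces a separate endpoint per model.
    return {"model": model_name, "prediction": float(model.predict([features])[0])}
```

Consolidating endpoints this way removes the per-model infrastructure setup that made each deployment wait on an available developer.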
AI-Powered Recruitment Transformation: PandoLogic's Journey with Automated MLOps Pipelines
PandoLogic, an AI-based programmatic job advertising platform, faced challenges operationalizing machine learning (ML) on premises due to a lack of DevOps support and infrastructure. Despite having a lean team of data scientists, their productivity was constantly interrupted by DevOps tasks and infrastructure challenges, and they lacked the resources to turn their impressive models into real business results. They were limited to on-premises deployment, which caused technical challenges and DevOps overhead. They required dynamic Spark clusters to handle terabytes of data, which took weeks to set up and incurred major maintenance costs. The PandoLogic team wanted a way to train and deploy on premises and across multiple clouds without being locked into a single cloud, and an easy way to leverage open-source tools and the compute resources they already had.
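For the dynamic Spark clusters described above, the core mechanism is Spark's dynamic allocation, which scales executors with the workload instead of a hand-maintained static cluster. The sketch below shows the relevant configuration; the app name, executor bounds, and input path are illustrative, not PandoLogic's setup.

```python
# Minimal sketch of enabling Spark dynamic allocation; values are
# illustrative and the input path is a placeholder.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("ad-performance-etl")  # hypothetical job name
    .config("spark.dynamicAllocation.enabled", "true")
    .config("spark.dynamicAllocation.minExecutors", "2")
    .config("spark.dynamicAllocation.maxExecutors", "50")
    # Lets dynamic allocation work without an external shuffle service.
    .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
    .getOrCreate()
)

# Terabyte-scale input is processed without pre-sizing the cluster:
# executors are requested and released as the job's demand changes.
df = spark.read.parquet("s3a://bucket/job-ads/")  # placeholder path
df.groupBy("campaign_id").count().show()
```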