
Deploying Large-Scale Real-Time Predictions with Apache Kafka: A Playtika Case Study

cnvrg.io
Analytics & Modeling - Machine Learning
Analytics & Modeling - Real Time Analytics
Education
Equipment & Machinery
Product Research & Development
Quality Assurance
Predictive Maintenance
Real-Time Location System (RTLS)
Data Science Services
System Integration

Playtika, a leading game-entertainment company, faced significant challenges scaling real-time machine learning in production. With over 10 million daily active users, 10 billion daily events, and over 9 TB of data processed daily, the company's existing batch and web-service deployment methods could neither scale to meet demand nor produce predictions in real time. The REST APIs in its ML pipelines suffered from service exhaustion and client starvation, and forced the team to handle failures and retries and to tune bulk sizes for batch partitioning. Playtika's event-driven ecosystem required a solution that could stream its production models in real time and scale without downtime. It also needed to integrate with processes such as Airflow and Spark, handle bursts and peaks, and support fast creation of new ML pipelines.
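
The REST-based pattern described above can be sketched as follows. This is a minimal illustration, not Playtika's actual code: `PREDICT_URL`, the retry counts, and the backoff values are hypothetical.

```python
import itertools
import json
import time
import urllib.request

PREDICT_URL = "http://ml-service/predict"  # hypothetical endpoint, not Playtika's

def partition(events, bulk_size):
    """Split an event stream into fixed-size bulks -- the bulk size the
    case study describes having to tune for batch partitioning."""
    it = iter(events)
    while True:
        bulk = list(itertools.islice(it, bulk_size))
        if not bulk:
            return
        yield bulk

def post_with_retries(bulk, retries=3, backoff=0.5):
    """POST one bulk and retry on failure: the client-side failure and
    retry handling that the REST pattern pushes onto every pipeline."""
    for attempt in range(retries):
        try:
            req = urllib.request.Request(
                PREDICT_URL,
                data=json.dumps(bulk).encode(),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
        except OSError:
            time.sleep(backoff * 2 ** attempt)  # exponential backoff
    raise RuntimeError("prediction service exhausted after retries")
```

Every pipeline that consumes the model has to carry this retry and partitioning logic, which is exactly the overhead a streaming endpoint removes.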


Playtika is a leading game-entertainment company that has produced top-grossing titles for more than five straight years. The company provides audiences around the world with a wide variety of games built on quality, personalized content. Playtika uses massive amounts of data to reshape the gaming landscape, tailoring UX to in-game actions. With over 10 million daily active users, 10 billion daily events, and over 9 TB of data processed daily, Playtika gives its scientists the data they need to create a continuously changing, adaptive game environment that responds to players' in-game behavior.


Playtika turned to the cnvrg.io AI OS to handle its experiments, scaling, and deployments. cnvrg.io provided a one-click streaming-endpoint solution with built-in monitoring and MLOps, enabling Playtika to serve real-time predictions with advanced model-monitoring features. cnvrg.io organized every stage of Playtika's data-science projects, including research, data collection, model development, and model optimization at scale. It also bridged the work between Playtika's data scientists and ML data engineers, enabling them to continuously write, train, and deploy machine learning models to various stages in one click. cnvrg.io delivered a scalable streaming-endpoint solution built on Apache Kafka, yielding a large increase in successful throughput with little to no latency. It also gave Playtika event-at-a-time processing, exactly-once processing, distributed processing with fault tolerance and fast failover, reprocessing capabilities, and Kubernetes-backed autoscaling for Kafka.
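
A minimal sketch of the event-at-a-time pattern described above, assuming a Kafka client such as `confluent_kafka`; the topic names, the model stub, and the delivery-semantics comments are illustrative assumptions, not Playtika's actual configuration.

```python
import json

def predict(event):
    """Stand-in for the real model: score one event as it arrives."""
    return {"user_id": event["user_id"], "score": 0.5}

def process_one(event, produce):
    """Event-at-a-time processing: score a single event and publish the result
    to a (hypothetical) output topic."""
    produce("predictions", predict(event))

def run(consumer, producer):
    """Streaming loop. `consumer`/`producer` would be confluent_kafka
    Consumer/Producer instances subscribed to the input topic. Committing
    the offset only after the result is produced gives at-least-once
    delivery; Kafka transactions would be needed for exactly-once."""
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        event = json.loads(msg.value())
        process_one(
            event,
            lambda topic, out: producer.produce(topic, json.dumps(out).encode()),
        )
        consumer.commit(msg)
```

Because Kafka assigns topic partitions across the members of a consumer group, fault tolerance and fast failover fall out of the design: when one consumer dies, its partitions are rebalanced to the survivors and processing resumes from the last committed offset.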


cnvrg.io delivered a simple, quick solution to Playtika's ML production challenges. It reduced technical debt in Playtika's workflow and connected data scientists with engineering teams. cnvrg.io's MLOps tooling let Playtika's engineers easily deploy, update, and monitor ML models in production to ensure peak performance, and reduced the complex triggering and scheduling previously needed as data arrived. Data scientists can now visualize results and business impact in real time in a unified platform. Among multiple deployment options, cnvrg.io's one-click deployment made releasing every model to production frictionless. cnvrg.io enabled Playtika to deploy instantly with Kafka and integrated easily into Playtika's existing systems. Playtika's ML services now perform better than ever: native integration with Kubernetes and Apache Kafka lets them absorb spikes in incoming demand, anticipate workloads, and scale consistently and linearly by adding more pods.
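
Scaling by adding pods is typically expressed in Kubernetes with a HorizontalPodAutoscaler; the resource names, replica bounds, and CPU target below are illustrative assumptions, not Playtika's configuration.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: streaming-endpoint   # hypothetical deployment name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: streaming-endpoint
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Each new pod joins the Kafka consumer group and is assigned a share of the topic's partitions, which is why throughput grows roughly linearly with pod count, up to the number of partitions.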

Increased performance by 40%

Gained up to 50% increase in successful throughput

Reduced latency and error rates to zero
