
London Medical Imaging & AI Centre Speeds Up Research with Run:ai

Run:AI
Analytics & Modeling - Machine Learning
Application Infrastructure & Middleware - API Integration & Management
Healthcare & Hospitals
Product Research & Development
Computer Vision
Predictive Maintenance
Data Science Services
System Integration
The London Medical Imaging & AI Centre for Value Based Healthcare faced several challenges with its AI hardware. Total GPU utilization was below 30%, and some GPUs sat idle for long periods despite demand from researchers. On multiple occasions the system was overloaded, with jobs requiring more GPUs than were available. Poor visibility and scheduling led to delays and waste: larger experiments needing many GPUs were sometimes unable to begin because smaller jobs using only a few GPUs blocked them from acquiring the resources they required.
The London Medical Imaging & Artificial Intelligence Centre for Value Based Healthcare is a consortium of academic, healthcare and industry partners, led by King’s College London and based at St. Thomas’ Hospital. It uses medical images and electronic healthcare data held by the UK National Health Service to train sophisticated deep learning algorithms for computer vision and natural-language processing. These algorithms are used to create new tools for effective screening, faster diagnosis and personalized therapies, to improve patients’ health.
The AI Centre implemented Run:ai's platform to address these challenges. The platform increased GPU utilization by 110%, with corresponding increases in experiment speed. Researchers ran more than 300 experiments in a 40-day period, compared with just 162 in a simulation of the same environment without Run:ai. By dynamically allocating pooled GPUs to workloads, hardware resources were shared more efficiently. The platform also improved visibility with advanced monitoring and cluster-management tools, allowing data scientists to see which GPU resources were idle and dynamically resize their jobs to run on available capacity. Finally, it enabled fair scheduling and guaranteed resources, allowing large ongoing workloads to use the optimal number of GPUs during low-demand times while automatically letting shorter, higher-priority workloads run alongside them.
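The general pattern described above can be sketched in code. The following is an illustrative toy model only, not Run:ai's actual implementation or API: all class and field names are assumptions. It shows how a shared GPU pool with per-team guaranteed quotas can let jobs borrow idle GPUs opportunistically, then reclaim those borrowed GPUs when a within-quota (guaranteed) job arrives.

```python
# Illustrative sketch (NOT Run:ai's implementation): a pooled-GPU scheduler
# with guaranteed per-team quotas and preemptible over-quota borrowing.
from dataclasses import dataclass


@dataclass
class Job:
    name: str
    owner: str
    gpus: int
    preemptible: bool = True  # jobs running on borrowed GPUs can be reclaimed


class PooledScheduler:
    def __init__(self, total_gpus: int, quotas: dict[str, int]):
        # Guaranteed quotas must fit inside the physical pool.
        assert sum(quotas.values()) <= total_gpus
        self.total = total_gpus
        self.quotas = quotas
        self.running: list[Job] = []

    def used_by(self, owner: str) -> int:
        return sum(j.gpus for j in self.running if j.owner == owner)

    def free(self) -> int:
        return self.total - sum(j.gpus for j in self.running)

    def submit(self, job: Job) -> bool:
        """Try to start a job; returns False if it must wait in a queue."""
        within_quota = (
            self.used_by(job.owner) + job.gpus <= self.quotas.get(job.owner, 0)
        )
        if job.gpus <= self.free():
            # Enough idle GPUs: run now. Jobs exceeding their owner's quota
            # run on borrowed capacity and are marked preemptible.
            job.preemptible = not within_quota
            self.running.append(job)
            return True
        if within_quota:
            # Guaranteed demand: reclaim borrowed GPUs from over-quota jobs.
            for victim in [j for j in self.running if j.preemptible]:
                self.running.remove(victim)
                if job.gpus <= self.free():
                    break
            if job.gpus <= self.free():
                job.preemptible = False
                self.running.append(job)
                return True
        return False
```

For example, with an 8-GPU pool split 4/4 between two teams, team A can launch a 6-GPU job on idle capacity; when team B later submits a 4-GPU job inside its guarantee, the over-quota job is reclaimed so the guaranteed workload starts immediately. This is the trade-off the case study describes: idle GPUs stay busy without letting small opportunistic jobs starve larger guaranteed ones.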
2.1X Higher GPU Utilization
31X Faster Experiments
1.85X More Experiments