Non-Expert Labeling Teams Can Create High Quality Training Data for Medical Use Cases
Overview
Technology: Analytics & Modeling - Computer Vision Software; Analytics & Modeling - Machine Learning
Industries: Agriculture; Education
Functions: Product Research & Development; Quality Assurance
Use Cases: Computer Vision; Visual Quality Detection
Services: Testing & Certification; Training
Operational Impact
The operational results of the study were significant. The researchers challenged the widely held assumption that only medical experts can provide quality annotations for clinical deep learning models, demonstrating that novice labeling teams can play a vital role in developing high-performing models for specialized medical use cases. This has the potential to significantly reduce costs and speed up the development of machine learning algorithms in the medical field. Furthermore, the researchers' paper was accepted into the FAIR workshop at MICCAI 2021, demonstrating the academic recognition of their findings.
Quantitative Benefit
The study found that novice annotators could perform complex medical image segmentation tasks to a high standard, comparable to that of experts.
Model prediction performance was comparable on both image segmentation and downstream classification tasks, regardless of whether the data was labeled by experts or novices.
The Labelbox Workforce team worked quickly, completing all annotations within the expected time frame.