Foundation models require large amounts of training data to become adaptable and versatile. At Aignostics, we hypothesized that by structuring and curating our training data with pathologist expertise, we could develop a more efficient foundation model that is both robust and representative of the full diversity of real-world pathology.
Our training dataset at a glance:
~133k slides
~58 tissue types
~1.2B image patches
~6 scanner types
~129 staining types
15+ laboratories
~34k cases
FF & FFPE preparations
Pathologist expertise & AI training: in-house pathologists and computational scientists collaborated to stratify images based on clinically relevant features prior to model training. The process we used was as follows:
Slides were assigned to one of 31 groups based on metadata such as lab of origin, tissue type, and staining modality. Assignments were made to maximize homogeneity within groups and heterogeneity across groups.
Tissue patches were likewise assigned to one of 9 morphologically similar clusters based on pathologist review (e.g., solid growth pattern, mononuclear infiltrates). Together, these 9 clusters comprised the ~1.2 billion image patches extracted from the training dataset.
Slide groups and tissue clusters were sampled equally during model training, which downsampled overrepresented visual features and upsampled less common ones to better reflect the diversity of real-life pathology cases (a simplified sketch of this sampling scheme is shown below).
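As an illustration of the idea, the following sketch shows one way such two-level balanced sampling could be implemented. The group and cluster labels, record structure, and helper names are hypothetical stand-ins, not our actual training code.

```python
import random
from collections import defaultdict

# Hypothetical patch records: each patch carries the slide-group and tissue-cluster
# labels described above. The IDs below are stand-ins, not our real assignments.
patches = [
    {"patch_id": i, "slide_group": i % 31, "tissue_cluster": i % 9}
    for i in range(10_000)
]

# Index patches by (slide group, tissue cluster) so every combination can be drawn
# with equal probability, regardless of how many patches it contains.
by_stratum = defaultdict(list)
for p in patches:
    by_stratum[(p["slide_group"], p["tissue_cluster"])].append(p)

def sample_balanced(n):
    """Draw n patches: pick a stratum uniformly at random, then a patch within it."""
    strata = list(by_stratum)
    return [random.choice(by_stratum[random.choice(strata)]) for _ in range(n)]

batch = sample_balanced(256)  # overrepresented strata are downweighted, rare ones upweighted
```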
The image below shows the results of pulling a sample with our balanced approach vs. random sampling. The balanced approach pulled samples spanning 4 different staining types, with every patch containing relevant tissue (e.g., carcinoma, mucosa, lymphoid tissue, necrosis). The default approach pulled samples with only 1 discernible stain (H&E), included patches with artifacts or no tissue at all, and covered no tissue types beyond carcinoma and mucosa.
Image: balanced sampling based on tissue and slide clusters vs. the default option, random sampling
This example shows that the balanced approach yields a more diverse sample, while random sampling favors more common images with similar pathologies.
Our foundation model RudolfV was trained by adapting the DINOv2 framework to sample training data from a specific distribution derived from these slide groups and tissue clusters. This framework was chosen due to its proven performance and wide adoption, as well as to enable clear comparisons to other published models.
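For readers who want a concrete picture, one common way to realize such a sampling distribution in a PyTorch training loop is a weighted sampler with inverse-frequency weights per slide-group/tissue-cluster stratum. The snippet below is a simplified sketch with stand-in data, not our actual DINOv2 adaptation.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Stand-in data: 10k "patches" (here just indices) and a combined
# slide-group / tissue-cluster stratum label for each one.
n_patches, n_strata = 10_000, 31 * 9
patch_dataset = TensorDataset(torch.arange(n_patches))
stratum_ids = torch.randint(0, n_strata, (n_patches,))

# Inverse-frequency weights: patches from rare strata are drawn more often,
# so each stratum contributes roughly equally to the training stream.
counts = torch.bincount(stratum_ids, minlength=n_strata).clamp(min=1)
weights = 1.0 / counts[stratum_ids].float()

sampler = WeightedRandomSampler(weights, num_samples=n_patches, replacement=True)
loader = DataLoader(patch_dataset, batch_size=256, sampler=sampler)

for (batch,) in loader:
    pass  # each batch would feed the DINOv2-style self-supervised objective
```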
We have already integrated our foundation model into all of our histopathology work with clients. Check out our product page to learn more about the products and services we offer.
We are continually updating our model with more images and data modalities over time, and will publish regular updates on our progress and benchmarking results. Check back here for the latest version of our paper and results.
Below is an example of how our foundation model was applied to anomaly detection for rare diseases. Click below and fill out the form to see the full case study.
Interested in learning how our foundation model enhances the generalizability of a cell classification model? Click below and fill out the form to see the full case study.
Trained on very large datasets, foundation models serve as a starting point for developing high-performing machine learning models quickly and cost-effectively.
Foundation models can be quickly fine-tuned for a wide range of downstream tasks and represent a major leap forward from traditional supervised learning approaches.
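As a rough illustration of what fine-tuning can look like in practice, the sketch below trains a small task head on top of a frozen, pretrained backbone. The backbone, task, and data here are hypothetical stand-ins, not RudolfV or a client dataset.

```python
import torch
import torch.nn as nn

# Stand-in backbone: in practice this would be a pretrained foundation-model encoder
# (e.g., a vision transformer producing 768-dimensional embeddings).
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 768))
for p in backbone.parameters():
    p.requires_grad = False  # keep the pretrained features fixed

head = nn.Linear(768, 5)  # hypothetical 5-class downstream task (e.g., cell classification)
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)

images = torch.randn(16, 3, 64, 64)        # stand-in labelled image patches
labels = torch.randint(0, 5, (16,))

with torch.no_grad():
    features = backbone(images)            # embeddings from the frozen backbone
loss = nn.functional.cross_entropy(head(features), labels)
loss.backward()
optimizer.step()                           # only the small task head is updated
```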
Our foundation model was developed by training it to reconstruct masked image data. In doing so, the model learns to understand images and their context, e.g., that immune cells are less likely to appear within a dense tumor.
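To make the idea of masked-image training concrete, here is a deliberately simplified sketch of masked-patch reconstruction. It is a generic illustration, not RudolfV's actual architecture or the exact DINOv2 objective.

```python
import torch
import torch.nn as nn

# Simplified masked-patch reconstruction: split an image into patches, hide a random
# subset, and train an encoder to predict the pixel content of the hidden patches.
patch_dim, n_patches, dim = 3 * 16 * 16, 196, 256   # e.g., a 224x224 image as 14x14 patches
embed = nn.Linear(patch_dim, dim)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True), num_layers=2
)
to_pixels = nn.Linear(dim, patch_dim)
mask_token = nn.Parameter(torch.zeros(dim))          # learned placeholder for hidden patches

patches = torch.randn(4, n_patches, patch_dim)       # stand-in batch of patchified images
mask = torch.rand(4, n_patches) < 0.4                # hide roughly 40% of the patches

tokens = embed(patches)
tokens = torch.where(mask.unsqueeze(-1), mask_token, tokens)   # swap in the mask token
pred = to_pixels(encoder(tokens))
loss = ((pred - patches)[mask] ** 2).mean()          # reconstruct only the masked patches
loss.backward()
```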
In internal analyses, our foundation model:
Reduces the amount of training data and annotations needed to reach optimal model performance by up to 90%
Increases the balanced accuracy of cell classification tasks by ~10% on average across cell types
Is robust across a wide range of scanners and stains