Foundation models pretrained on large-scale datasets are revolutionizing the field of computational pathology (CPath).
Existing foundation models excel at certain clinical task types but struggle to generalize across the full breadth of tasks in the field.
To improve the generalization of pathology foundation models, we propose a unified knowledge distillation framework that combines expert knowledge distillation and self-knowledge distillation.
The resulting Generalizable Pathology Foundation Model (GPFM) achieved an average rank of 1.6 across six distinct clinical task types.
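The combined distillation objective mentioned above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual formulation: it assumes the expert term distills from a pretrained expert teacher's logits and the self term distills from an EMA (momentum) copy of the student, blended by a hypothetical weight `alpha`; the function names, temperature, and weighting are assumptions for illustration.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax, numerically stabilized.
    z = np.asarray(logits, dtype=float) / temperature
    e = np.exp(z - z.max())
    return e / e.sum()

def kl_divergence(p, q):
    # KL(p || q) for two discrete distributions.
    return float(np.sum(p * np.log(p / q)))

def unified_kd_loss(student_logits, expert_logits, ema_logits,
                    alpha=0.5, temperature=2.0):
    """Illustrative unified KD loss: a weighted sum of an expert-KD term
    (student vs. expert teacher) and a self-KD term (student vs. its own
    EMA copy). `alpha` and `temperature` are hypothetical hyperparameters."""
    s = softmax(student_logits, temperature)
    expert_term = kl_divergence(softmax(expert_logits, temperature), s)
    self_term = kl_divergence(softmax(ema_logits, temperature), s)
    return alpha * expert_term + (1.0 - alpha) * self_term
```

When all three logit vectors agree, both KL terms vanish and the loss is zero; as the student drifts from either teacher, the corresponding term grows, so gradients pull the student toward both sources of supervision at once.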