Google DeepMind and Google Research have introduced two new models, MedGemma 27B Multimodal and MedSigLIP, under the MedGemma umbrella to advance open-source development in medical AI.
MedGemma models address data heterogeneity and limited supervision in healthcare AI, processing medical images and clinical text for tasks such as diagnosis and report generation.
MedGemma 27B Multimodal pairs an enhanced vision-language architecture with medical reasoning, achieving high accuracy on medical benchmarks and performing strongly in agentic environments.
MedSigLIP, a lightweight image-text encoder, is well suited to edge deployment and outperforms dedicated image-only models across a range of medical imaging domains.