Daily Paper Cast
MedSAM3: Delving into Segment Anything with Medical Concepts
About this episode
🤗 Upvotes: 38 | cs.CV, cs.AI
<p><strong>Authors:</strong><br />
Anglin Liu, Rundong Xue, Xu R. Cao, Yifan Shen, Yi Lu, Xiang Li, Qianqian Chen, Jintai Chen</p>
<p><strong>Title:</strong><br />
MedSAM3: Delving into Segment Anything with Medical Concepts</p>
<p><strong>Arxiv:</strong><br />
<a href="http://arxiv.org/abs/2511.19046v1">http://arxiv.org/abs/2511.19046v1</a></p>
<p><strong>Abstract:</strong><br />
Medical image segmentation is fundamental for biomedical discovery. Existing methods lack generalizability and demand extensive, time-consuming manual annotation for new clinical applications. Here, we propose MedSAM-3, a text-promptable model for medical image and video segmentation. By fine-tuning the Segment Anything Model (SAM) 3 architecture on medical images paired with semantic concept labels, MedSAM-3 enables medical Promptable Concept Segmentation (PCS), allowing precise targeting of anatomical structures via open-vocabulary text descriptions rather than solely geometric prompts. We further introduce the MedSAM-3 Agent, a framework that integrates Multimodal Large Language Models (MLLMs) to perform complex reasoning and iterative refinement in an agent-in-the-loop workflow. Comprehensive experiments across diverse medical imaging modalities, including X-ray, MRI, Ultrasound, CT, and video, demonstrate that our approach significantly outperforms existing specialist and foundation models. We will release our code and model at https://github.com/Joey-S-Liu/MedSAM3</p>
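The abstract's central idea, prompting a segmentation model with a concept name instead of a point or box, can be caricatured in a few lines. This is a toy sketch only: the names `segment_by_concept` and `concept_bank` are invented for illustration and are not part of the MedSAM-3 release; the dictionary lookup stands in for the learned text-to-mask mapping the paper obtains by fine-tuning SAM 3 on concept-labeled images.

```python
# Toy illustration of Promptable Concept Segmentation (PCS): the query is
# an open-vocabulary text prompt (e.g. "liver"), not a geometric prompt.
# All names here are hypothetical, NOT the MedSAM-3 API.

def segment_by_concept(image, text_prompt, concept_bank):
    """Return a binary mask for the anatomical concept named in text_prompt.

    concept_bank is a stand-in for the learned text-to-mask mapping;
    an unknown concept yields an all-zero (empty) mask.
    """
    key = text_prompt.strip().lower()
    if key not in concept_bank:
        return [[0] * len(row) for row in image]  # concept absent
    return concept_bank[key]

# A 4x4 dummy "image" and a fake concept bank mapping text to masks.
image = [[0] * 4 for _ in range(4)]
concept_bank = {
    "liver": [[0, 0, 0, 0],
              [0, 1, 1, 0],
              [0, 1, 1, 0],
              [0, 0, 0, 0]],
}

mask = segment_by_concept(image, "Liver", concept_bank)
print(sum(sum(row) for row in mask))  # → 4 pixels assigned to the concept
```

The real model generalizes this lookup: the text encoder maps unseen phrasings of a concept into the same embedding space as the mask decoder, which is what makes the prompting open-vocabulary.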