Abstract
Medical image segmentation is a foundational component of numerous clinical measurement and quantitative analysis pipelines. However, deploying foundation models such as the Segment Anything Model (SAM) on clinical edge devices is hindered by stringent computational and memory constraints: their large model size renders full fine-tuning impractical, and existing efficient variants rarely incorporate anatomical priors or optimize for device-level resource budgets. We propose TiPE-SAM (Tiny and Parameter-Efficient SAM for edge devices), which augments a frozen SAM encoder with a shape-based mixture-of-experts prior, low-rank adaptation, and resource-aware execution. A learned dictionary of shape experts and a pixelwise gating network reconstruct a low-resolution shape map, which is re-encoded as a dense structural prompt and fused with encoder tokens for the SAM decoder. To satisfy edge constraints, we define multiple execution profiles with device-specific costs and train a cost-aware routing network that trades off segmentation accuracy against expected computation. On multicenter echocardiographic segmentation benchmarks, TiPE-SAM matches or surpasses SAM-based baselines with only ~0.5M trainable parameters (less than 1% of the ViT-B encoder) and provides explicit control over parameter and inference budgets, enabling resource-efficient deployment of medical foundation models.
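The two mechanisms named in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the tiny 2x2 shape dictionary, the gating logits, the per-profile costs, and the function names (`reconstruct_shape_map`, `routing_objective`) are all hypothetical stand-ins. It shows only the general pattern: a pixelwise softmax gate forming a convex combination of shape experts, and a routing objective that adds a lambda-weighted expected compute cost to the task loss.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

# Hypothetical dictionary of K=3 low-resolution shape experts on a 2x2 grid
# (the paper's dictionary is learned; these values are purely illustrative).
experts = [
    [[1.0, 0.0], [0.0, 0.0]],
    [[0.0, 1.0], [0.0, 0.0]],
    [[0.0, 0.0], [1.0, 1.0]],
]

def reconstruct_shape_map(gating_logits, experts):
    """Pixelwise convex combination of shape experts.

    gating_logits[h][w] is a length-K list of logits that a gating network
    would produce from encoder features at pixel (h, w).
    """
    H, W, K = len(gating_logits), len(gating_logits[0]), len(experts)
    out = [[0.0] * W for _ in range(H)]
    for h in range(H):
        for w in range(W):
            weights = softmax(gating_logits[h][w])
            out[h][w] = sum(weights[k] * experts[k][h][w] for k in range(K))
    return out

# Toy gating logits strongly favoring one expert per pixel.
logits = [[[5.0, 0.0, 0.0], [0.0, 5.0, 0.0]],
          [[0.0, 0.0, 5.0], [0.0, 0.0, 5.0]]]
shape_map = reconstruct_shape_map(logits, experts)

# Hypothetical per-profile costs (e.g., latency in ms on a target device).
profile_costs = [12.0, 30.0, 85.0]  # small / medium / large execution profiles

def routing_objective(task_loss, routing_logits, costs, lam=0.01):
    """Accuracy/compute trade-off: task loss plus lambda-weighted expected cost."""
    p = softmax(routing_logits)
    expected_cost = sum(pi * ci for pi, ci in zip(p, costs))
    return task_loss + lam * expected_cost, expected_cost
```

In this sketch the routing term is differentiable through the softmax, so a trainer could back-propagate through both the segmentation loss and the expected cost, which is the general shape of the trade-off the abstract describes.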
| Original language | English |
|---|---|
| Journal | IEEE Transactions on Artificial Intelligence |
| DOIs | |
| Publication status | Accepted/In press - 2026 |
| Externally published | Yes |
Keywords
- Edge computing
- Medical image segmentation
- Parameter-efficient fine-tuning
- Vision Foundation Model
Title
TiPE-SAM: Tiny and Parameter-Efficient SAM for Edge Medical Image Segmentation with Mixture-of-Shape-Experts Priors