PartSAM: A Scalable Promptable Part Segmentation Model Trained on Native 3D Data

1Nanjing University of Aeronautics and Astronautics, 2Hong Kong University of Science and Technology, 3The University of Hong Kong, 4National University of Singapore, 5Lingnan University, 6Macau University of Science and Technology
† Corresponding authors
[Teaser figure]

PartSAM is a promptable 3D part segmentation model trained on large-scale native 3D data. The combination of a scalable architecture and large-scale training data endows PartSAM with strong generalization, enabling it to automatically decompose diverse 3D models, including both artist-created meshes and AI-generated shapes, into semantically meaningful parts.

Abstract

Segmenting 3D objects into parts is a long-standing challenge in computer vision. To overcome taxonomy constraints and generalize to unseen 3D objects, recent works turn to open-world part segmentation. These approaches typically transfer supervision from 2D foundation models, such as SAM, by lifting multi-view masks into 3D. However, this indirect paradigm fails to capture intrinsic geometry, leading to surface-only understanding, uncontrolled decomposition, and limited generalization. We present PartSAM, the first promptable part segmentation model trained natively on large-scale 3D data. Following the design philosophy of SAM, PartSAM employs an encoder–decoder architecture in which a triplane-based dual-branch encoder produces spatially structured tokens for scalable part-aware representation learning. To enable large-scale supervision, we further introduce a model-in-the-loop annotation pipeline that curates over five million 3D shape–part pairs from online assets, providing diverse and fine-grained labels. This combination of scalable architecture and diverse 3D data yields emergent open-world capabilities: with a single prompt, PartSAM achieves highly accurate part identification, and in a “Segment-Every-Part” mode, it automatically decomposes shapes into both surface and internal structures. Extensive experiments show that PartSAM outperforms state-of-the-art methods by large margins across multiple benchmarks, marking a decisive step toward foundation models for 3D part understanding.
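As a rough illustration of the promptable design described in the abstract, the sketch below mocks up a triplane-based dual-branch encoder, a single-prompt mask decoder, and a naive "Segment-Every-Part" loop. Every module name, dimension, and decoding choice here is our own assumption for exposition, not the authors' implementation; consult the paper for the actual architecture.

# Minimal, illustrative sketch of a promptable 3D part segmentation pipeline.
# All sizes and designs are assumptions for exposition, NOT PartSAM's code.

import torch
import torch.nn as nn
import torch.nn.functional as F


class TriplaneDualBranchEncoder(nn.Module):
    """Toy dual-branch encoder: per-point MLP features plus features
    gathered from three axis-aligned feature planes (a 'triplane')."""

    def __init__(self, dim=64, res=32):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        # One learnable feature plane per axis pair (XY, XZ, YZ).
        self.planes = nn.Parameter(torch.randn(3, dim, res, res) * 0.02)

    def sample_plane(self, plane, uv):
        # uv in [-1, 1]; bilinear lookup into a (dim, res, res) plane.
        grid = uv.view(1, 1, -1, 2)
        feat = F.grid_sample(plane.unsqueeze(0), grid, align_corners=True)
        return feat.view(plane.shape[0], -1).t()  # (N, dim)

    def forward(self, pts):  # pts: (N, 3), normalized to [-1, 1]
        tri = (self.sample_plane(self.planes[0], pts[:, [0, 1]])
               + self.sample_plane(self.planes[1], pts[:, [0, 2]])
               + self.sample_plane(self.planes[2], pts[:, [1, 2]]))
        return self.point_mlp(pts) + tri  # spatially structured tokens


class PromptDecoder(nn.Module):
    """Toy prompt decoder: the token of a clicked point attends to all
    tokens and is compared against them to yield per-point mask logits."""

    def __init__(self, dim=64):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, tokens, prompt_idx):  # tokens: (N, dim)
        q = tokens[prompt_idx].view(1, 1, -1)
        kv = tokens.unsqueeze(0)
        part_query, _ = self.attn(q, kv, kv)           # (1, 1, dim)
        return tokens @ self.proj(part_query[0, 0])    # (N,) mask logits


def segment_every_part(encoder, decoder, pts, max_parts=8):
    """Naive 'Segment-Every-Part' loop: repeatedly prompt an unassigned
    point and claim the predicted mask, SAM-style."""
    tokens = encoder(pts)
    unassigned = torch.ones(len(pts), dtype=torch.bool)
    parts = []
    while unassigned.any() and len(parts) < max_parts:
        prompt_idx = int(unassigned.nonzero()[0])
        mask = (decoder(tokens, prompt_idx).sigmoid() > 0.5) & unassigned
        if not mask.any():
            mask[prompt_idx] = True  # fall back to a singleton part
        parts.append(mask)
        unassigned &= ~mask
    return parts


if __name__ == "__main__":
    pts = torch.rand(2048, 3) * 2 - 1  # stand-in for mesh surface samples
    enc, dec = TriplaneDualBranchEncoder(), PromptDecoder()
    logits = dec(enc(pts), prompt_idx=0)       # single-prompt part mask
    parts = segment_every_part(enc, dec, pts)  # automatic decomposition
    print(logits.shape, len(parts))

The triplane lookup is one simple way to keep tokens spatially structured, and the greedy loop mirrors SAM's automatic mask generation in spirit; the paper's actual "Segment-Every-Part" mode and model-in-the-loop annotation pipeline are more involved.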

Method Overview

[Method overview figure]

Automatic Segmentation Results on PartObjaverse-Tiny

[Gallery: input meshes and automatic segmentation results]

Automatic Segmentation Results on AI-generated Meshes

[Gallery: input meshes and automatic segmentation results]

Comparison: Interactive Part Segmentation

[Figure: interactive part segmentation comparison]


Comparison: Automatic Part Segmentation

[Figure: automatic part segmentation comparison]


BibTeX

@article{zhu2025partsam,
  title={PartSAM: A Scalable Promptable Part Segmentation Model Trained on Native 3D Data},
  author={Zhe Zhu and Le Wan and Rui Xu and Yiheng Zhang and Honghua Chen and Zhiyang Dou and Cheng Lin and Yuan Liu and Mingqiang Wei},
  journal={arXiv preprint arXiv:2509.21965},
  year={2025}
}