This is the official code release for the paper
SAMa: Material-aware 3D selection and segmentation [3DV 2026]
by Michael Fischer, Iliyan Georgiev, Thibault Groueix, Vladimir G. Kim, Tobias Ritschel and Valentin Deschaintre.
For more information, make sure to check out the paper and project page.
We assume a working conda/mamba installation at ~/conda. Download the repository, cd into the folder, and run the setup command (note the leading dot, which sources the script in the current shell):
git clone
cd ~/sama
. ./setup_env.sh
./auto_download.sh
This will install and activate the sama_gui_env environment (tested on Ubuntu 22.04, CUDA 12.4 and 13, and an NVIDIA A100).
The ./auto_download.sh script should have set up everything correctly for you. If it did not, here are the manual steps to download the checkpoints, pre-trained NeRFStudio models and some sample data:
- Download the SAM2 and SAMa checkpoints from here and place them under sam2/checkpts/. (TODO - add SAM2)
- Download the example data from here and extract it to data/
- Download the example pre-trained Splatfacto models from here and extract them to outputs/splatfacto
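If you go the manual route, a minimal shell sketch to prepare the target folders from the list above before extracting the downloads into them (directory names are taken from the steps; the downloads themselves still need to be fetched from the links above):

```shell
# Create the target folders expected by the manual steps above.
# (mkdir -p is a no-op for folders that already exist.)
mkdir -p sam2/checkpts data outputs/splatfacto
for d in sam2/checkpts data outputs/splatfacto; do
  [ -d "$d" ] && echo "ok: $d"
done
```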
To verify that the installation completed successfully, and after downloading the checkpoints, data and models above, run
ns-viewer --load-config outputs/splatfacto/lego/config.yml
(gsplat will take some init time on the first launch in a new environment). You should see the Lego bulldozer scene at localhost:7007.
Scroll down the GUI panel on the right and click "Click on Scene", then click anywhere on the object. SAMa will compute the 3D material selection (the first click takes slightly longer due to model loading and point-cloud creation) and display it on screen automatically.
To run SAMa on any other NeRFacto or Splatfacto asset, simply replace the config above with that of your trained (NeRFStudio) model. Note that if the asset's depth is bad or inconsistent, selection results will suffer significantly, as we rely on the depth to unproject the 2D similarities back to 3D. An additional note: we assume assets to be centered at the origin and scaled to [-1, 1]. Anything outside this bounding box will not be selected unless you manually adjust the box corners.
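The depth-based unprojection mentioned above can be sketched as follows. This is a generic pinhole-camera illustration, not SAMa's actual implementation: all function and variable names are ours, camera conventions in nerfstudio may differ (e.g. the camera may look down -z), and the [-1, 1] box check mirrors the bounding-box assumption described above.

```python
import numpy as np

def unproject(depth, K, cam2world):
    """Lift a depth map to world-space points (illustrative sketch).

    depth:     (h, w) per-pixel depth values
    K:         (3, 3) pinhole intrinsics
    cam2world: (4, 4) camera-to-world transform
    Returns world points and a mask for the assumed [-1, 1]^3 box.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))      # pixel coordinates
    x = (u - K[0, 2]) / K[0, 0] * depth                 # back-project with
    y = (v - K[1, 2]) / K[1, 1] * depth                 # the intrinsics
    pts_cam = np.stack([x, y, depth, np.ones_like(depth)], axis=-1)
    pts_world = pts_cam.reshape(-1, 4) @ cam2world.T    # to world space
    pts = pts_world[:, :3]
    inside = np.all(np.abs(pts) <= 1.0, axis=1)         # bounding-box mask
    return pts, inside
```

Bad depth directly corrupts `x`, `y` and `z` here, which is why inconsistent depth degrades the 3D selection.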
Follow the standard NeRFStudio syntax, explained in their Github. For instance, for a 3DGS asset, run:
ns-train splatfacto --pipeline.model.background-color white blender-data --data data/blender/lego/
We've found the white-background option to reduce floaters for PNG images with transparency, but it is not necessary. For a 3DGS asset, we recommend spherical view sampling: if your training views cover only the upper hemisphere, nothing stops the algorithm from placing (white) Gaussians below the object, which will give you incorrect depth values and worse selections, or selections "floating in free space". Once trained, the above steps for using SAMa with existing assets apply.
Activate the environment (mamba activate sama_gui_env) and ensure you have an actual nerfstudio installation in that environment.
In nerfstudio/models/materialistic_utils.py, near the top of the file, there is a global flag called save_images. It is normally deactivated for efficiency, but if you set it to true, the next click's selection input (the images) and output (the similarity values) will be saved to queried_imgs for inspection and debugging. This is usually a good first thing to do if the selection results are not good.
We use the official SAM2 repository, slightly adapted for fine-tuning. We use SAM2 in the Hiera-Large configuration and build on the initial release version - several updates have been published since then, and we cannot guarantee that the SAMa weights work with those updated versions.
From top to bottom, in the NeRFStudio GUI:
- Max Res: resolution at which the viewport is rendered
- Clamp Sim: adjusts the threshold under which we set the similarities to zero
- Sim. Alpha: adjusts the transparency of the similarity overlay
- k Near. Neighb.: adjusts the k in the kNN point cloud search
- Output Type: set to "Similarity" to visualize the non-thresholded similarity, or to "Distance" to visualize the distance to the nearest neighbour in the point cloud
- Composite Depth: standard NeRFStudio parameter, do not touch
- Overlay Similarity: gets set automatically when a click is finished; tells the render state machine that the point cloud query should be performed. Do not change it manually - to hide the selection and go back to RGB, simply select RGB in the Output Type dropdown menu
- Binary Vote: set to True to use binary voting instead of distance-weighted kNN aggregation. If this is enabled and k is an even number, k will internally be set to k+1 to avoid tie situations. Note that enabling this option automatically thresholds the similarity, so you might want to disable it for visualizing the non-thresholded similarities
- Invert / Normalize / Min / Max: NeRFStudio defaults; these options adjust the colormap
- Click On Scene: selection button - once clicked, click anywhere on the object to launch the selection computation
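The two kNN aggregation modes behind Binary Vote can be sketched as below. This is an illustrative reimplementation under our own naming, not the repository's code; the threshold value and the epsilon in the inverse-distance weights are our assumptions.

```python
import numpy as np

def knn_similarity(query, pts, sims, k=5, binary_vote=False, thresh=0.5):
    """Aggregate per-point similarities at a query point via kNN.

    - Default mode: inverse-distance-weighted average of the k nearest
      similarity values.
    - Binary-vote mode: threshold each neighbour's similarity and take
      a majority vote; an even k is bumped to k+1 to avoid ties.
    """
    if binary_vote and k % 2 == 0:
        k += 1  # avoid tie situations, as described above
    d = np.linalg.norm(pts - query, axis=1)
    idx = np.argsort(d)[:k]                 # k nearest neighbours
    if binary_vote:
        # majority of binarized votes -> 0.0 or 1.0
        return float(np.mean(sims[idx] > thresh) > 0.5)
    w = 1.0 / (d[idx] + 1e-8)               # inverse-distance weights
    return float(np.sum(w * sims[idx]) / np.sum(w))
```

Note how binary voting necessarily thresholds the similarities, matching the remark above that it should be disabled when visualizing non-thresholded values.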
If you find our work useful or plan to (re-)use parts of it in your own projects, please include the following citation:
@article{fischer2024sama,
title={SAMa: Material-aware 3D selection and segmentation},
author={Fischer, Michael and Georgiev, Iliyan and Groueix, Thibault and Kim, Vladimir G and Ritschel, Tobias and Deschaintre, Valentin},
journal={arXiv preprint arXiv:2411.19322},
year={2024}
}
