Blackbox Adaptation for Medical Image Segmentation
Jay N. Paranjape
Shameema Sikder
S. Swaroop Vedula
Vishal M. Patel
Johns Hopkins University
[Paper]
[GitHub]
(a) General finetuning of Foundation Models (FM) (b) Common Adaptation Methods for FM (c) BAPS

Abstract

In recent years, various large foundation models have been proposed for image segmentation. These models are often trained on large amounts of data corresponding to general computer vision tasks. Hence, they do not perform well on medical data. There have been some attempts in the literature to perform parameter-efficient finetuning of such foundation models for medical image segmentation. However, these approaches assume that all the parameters of the model are available for adaptation. In many cases, these models are released as APIs or blackboxes, with limited or no access to the model parameters and data. In addition, finetuning methods require a significant amount of compute, which may not be available for the downstream task. At the same time, medical data cannot be shared with third-party agents for finetuning due to privacy concerns. To tackle these challenges, we pioneer a blackbox adaptation technique for prompted medical image segmentation, called BAPS. BAPS has two components: (i) an Image-Prompt decoder (IP decoder) module that generates visual prompts given an image and a prompt, and (ii) a Zero Order Optimization (ZOO) method, called SPSA-GC, that is used to update the IP decoder without backpropagating through the foundation model. Thus, our method does not require any knowledge of the foundation model's weights or gradients. We test BAPS on four different modalities and show that our method improves the original model's performance by around 4%.


Method

Blackbox Adapter for Prompted Segmentation (BAPS) comprises a pretrained image encoder and prompt encoder, followed by a trainable Image-Prompt (IP) decoder. Since many foundation models are promptable, BAPS takes an image and a point prompt and uses them to generate a per-pixel visual prompt. The image encoder and prompt encoder generate embeddings from the image and the prompt, respectively. These embeddings are concatenated and passed to the IP decoder, which is the only trainable module. The output of the decoder is added to the image, and the result is passed to the blackbox FM along with the prompt. The IP decoder is trained using a zeroth-order optimization method called SPSA-GC, which estimates gradients using two forward passes with perturbed weights. A sketch of the forward pass is given below.
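The following is a minimal PyTorch-style sketch of this forward pass, not the released implementation: the module names (IPDecoder, blackbox_fm), the embedding layout, and the decoder architecture are illustrative assumptions.

```python
import torch
import torch.nn as nn

class IPDecoder(nn.Module):
    """Illustrative Image-Prompt decoder: fuses the image and prompt
    embeddings and decodes them into a per-pixel visual prompt
    (channel counts and layer choices are assumptions)."""
    def __init__(self, embed_dim=256, out_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(2 * embed_dim, 64, kernel_size=2, stride=2),
            nn.GELU(),
            nn.ConvTranspose2d(64, out_channels, kernel_size=2, stride=2),
        )

    def forward(self, image_emb, prompt_emb):
        # Broadcast the prompt embedding over the spatial grid of the
        # image embedding, then concatenate along channels (assumed layout).
        b, c, h, w = image_emb.shape
        prompt_map = prompt_emb.view(b, -1, 1, 1).expand(b, prompt_emb.shape[-1], h, w)
        fused = torch.cat([image_emb, prompt_map], dim=1)
        visual_prompt = self.net(fused)
        # Upsample to the input resolution (assumed 1024x1024 here).
        return nn.functional.interpolate(visual_prompt, size=(1024, 1024),
                                         mode="bilinear", align_corners=False)

def baps_forward(image, point_prompt, image_encoder, prompt_encoder,
                 ip_decoder, blackbox_fm):
    """One BAPS forward pass: only `ip_decoder` is trainable; the
    foundation model `blackbox_fm` is queried as an opaque API."""
    with torch.no_grad():
        image_emb = image_encoder(image)           # frozen, pretrained
        prompt_emb = prompt_encoder(point_prompt)  # frozen, pretrained
    visual_prompt = ip_decoder(image_emb, prompt_emb)
    prompted_image = image + visual_prompt         # add per-pixel prompt
    # The blackbox FM receives the modified image and the original prompt;
    # no gradients flow through this call.
    return blackbox_fm(prompted_image, point_prompt)
```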


Results

Qualitative results on all the datasets. GT - ground truth, VP - visual prompting. The green dot in each image denotes the point prompt given to the blackbox foundation model. Among gradient-free methods, BAPS generates better predictions.


Visualizing Modified Images

The visual prompt learnt after training is shown in the figure above. Before the prompt is added, certain parts of the object are missed; after the visual prompt is added to the image, the FM correctly captures them.


SPSA-GEASS

Overview of SPSA-GEASS, an additional modification we propose over SPSA-GC. Since the FM is already pretrained extensively, adaptation can get stuck in a local minimum. In such a scenario, the estimated gradient is close to zero and the updates effectively stop. Hence, SPSA-GEASS increases the learning rate if the magnitude of the gradient stays below a certain threshold for k_1 epochs, which can aid the learning process. It relies on two counters, strike and cooldown, and the system starts at strike = 0. If the estimated gradient magnitude is below the threshold, strike increases; otherwise the system reverts to its original state. If strike reaches k_1, the learning rate and the perturbation step parameter are increased significantly. Then, cooldown decreases every iteration until it reaches 0, after which the system returns to its initial state. A sketch of this schedule is shown below.
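Below is a minimal sketch of the zeroth-order update with the strike/cooldown heuristic described above. The threshold, k_1, boost factor, and cooldown length are hypothetical placeholders, and the plain two-point SPSA estimator stands in for SPSA-GC's gradient-correction variant.

```python
import torch

def spsa_gradient(loss_fn, params, c):
    """Two-point SPSA estimate: two forward passes with weights perturbed
    along a random +/-1 direction, no backpropagation through the FM."""
    delta = torch.sign(torch.randn_like(params))            # Rademacher perturbation
    loss_plus = loss_fn(params + c * delta)
    loss_minus = loss_fn(params - c * delta)
    return (loss_plus - loss_minus) / (2.0 * c) * delta     # gradient estimate

def spsa_geass_step(loss_fn, params, state, lr=1e-3, c=1e-2,
                    grad_threshold=1e-4, k1=5, boost=10.0, cooldown_len=20):
    """One update with the strike/cooldown escape heuristic (illustrative
    hyperparameters). If the gradient magnitude stays below `grad_threshold`
    for `k1` consecutive steps, the learning rate and perturbation step are
    boosted for `cooldown_len` iterations, then reset."""
    boosted = state["cooldown"] > 0
    grad = spsa_gradient(loss_fn, params, c * (boost if boosted else 1.0))

    if boosted:
        # Boosted phase: larger step size until the cooldown expires.
        params = params - boost * lr * grad
        state["cooldown"] -= 1
        if state["cooldown"] == 0:
            state["strike"] = 0                              # back to normal
    else:
        if grad.abs().mean() < grad_threshold:
            state["strike"] += 1                             # likely stuck
        else:
            state["strike"] = 0                              # making progress
        if state["strike"] >= k1:
            state["cooldown"] = cooldown_len                 # trigger boost
        params = params - lr * grad
    return params, state
```

Initializing state = {"strike": 0, "cooldown": 0} corresponds to the system's original state in the description above.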


Paper and Supplementary Material


Blackbox Adaptation for Medical Image Segmentation

(hosted on arXiv)


[Bibtex]


Acknowledgements

This template was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.