Sparsifiner: Learning Sparse Instance-Dependent Attention for Efficient Vision Transformers

1 University of Toronto 2 University of Guelph 3 ModiFace, Inc. 4 Vector Institute
Accelerating ViT by learning instance-dependent, sparse attention patterns.

Abstract

Vision Transformers (ViT) achieve competitive performance compared to convolutional neural networks (CNNs), but often at a high computational cost. To this end, previous methods accelerate the ViT's multi-head self-attention (MHSA) by restricting each token's attention to a fixed number of spatially nearby tokens. However, such structured attention patterns limit token-to-token connections to spatially relevant ones, disregarding the learned semantic connections of a full attention mask. In this work, we propose to learn instance-dependent attention patterns with a lightweight connectivity predictor module that estimates a connectivity score for each pair of tokens. Intuitively, two tokens have a high connectivity score if their features are relevant either spatially or semantically. Because each token attends to only a small number of other tokens, the binarized connectivity masks are very sparse by nature and therefore provide an opportunity to reduce network FLOPs via sparse computations. Equipped with the learned unstructured attention patterns, our sparse-attention ViT (Sparsifiner) produces a superior Pareto frontier between FLOPs and top-1 accuracy on ImageNet compared to token sparsity methods. Our method reduces MHSA FLOPs by 48%–69% with an accuracy drop within 0.4%. We also show that combining attention and token sparsity reduces ViT FLOPs by over 60%.

Video

Visualization

Sparsifiner generates different sparse attention patterns for different tokens in the same image. The sparse attention retains the most salient relations to the given query patch (marked as a yellow square).
Visualization of the connectivity mask with a budget size of 20 (10% of the full attention connectivity).
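To make the budget concrete: with N = 196 patch tokens (a 14×14 grid) and a per-token budget of 20, each row of the binary connectivity mask keeps roughly 10% of the full N×N connectivity. The sketch below, which is illustrative and not the paper's implementation, shows how such a mask restricts single-head attention by masking out non-connected pairs before the softmax.

```python
import torch


def masked_attention(q, k, v, mask):
    """Single-head attention restricted to a binary connectivity mask.

    q, k, v: (B, N, C) tensors; mask: (B, N, N) boolean, True where a
    query token is allowed to attend to a key token.
    """
    scale = q.shape[-1] ** -0.5
    attn = (q @ k.transpose(-2, -1)) * scale        # (B, N, N) dense scores
    attn = attn.masked_fill(~mask, float("-inf"))   # drop non-connected pairs
    attn = attn.softmax(dim=-1)                     # renormalize over the budget
    return attn @ v                                 # (B, N, C)


# Toy usage: 196 tokens (14x14 patches), per-token budget of 20 (~10%)
B, N, C, budget = 1, 196, 64, 20
q = k = v = torch.randn(B, N, C)
# Random budget mask for illustration; Sparsifiner learns this mask instead
idx = torch.randn(B, N, N).topk(budget, dim=-1).indices
mask = torch.zeros(B, N, N, dtype=torch.bool).scatter_(-1, idx, True)
out = masked_attention(q, k, v, mask)
```

Note that this sketch still materializes the dense score matrix; the FLOP savings in practice come from sparse kernels that compute only the unmasked entries.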

Methodology

Network Architecture: We replace the dense MHSA in ViT with a sparse MHSA. A connectivity pattern predictor module estimates the connectivity score between each pair of tokens.
How do we efficiently generate N² attention connectivity scores? By computing a sparse low-rank approximation of dense attention as the attention score.
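The low-rank idea above can be sketched as follows: project the tokens into a rank-r space with r ≪ C, compute cheap approximate attention scores there, and binarize by keeping each query's top-budget connections. This is a minimal sketch under assumed shapes; the weight names (`w_down`, `w_q`, `w_k`) are illustrative and not the paper's actual parameters.

```python
import torch


def predict_connectivity(x, w_down, w_q, w_k, budget):
    """Estimate a sparse (B, N, N) connectivity mask from token features.

    x: (B, N, C) token features; w_down: (C, r) low-rank projection with
    r << C; w_q, w_k: (r, r) query/key maps in the low-rank space.
    """
    z = x @ w_down                       # (B, N, r) low-rank token embeddings
    q = z @ w_q                          # cheap queries
    k = z @ w_k                          # cheap keys
    scores = q @ k.transpose(-2, -1)     # (B, N, N) approximate attention scores
    # Binarize: each token keeps only its top-`budget` connections
    idx = scores.topk(budget, dim=-1).indices
    mask = torch.zeros_like(scores, dtype=torch.bool)
    mask.scatter_(-1, idx, True)
    return mask


# Toy usage: rank-16 approximation of a 196x196 attention map, budget 20
x = torch.randn(1, 196, 64)
w_down = torch.randn(64, 16)
w_q, w_k = torch.randn(16, 16), torch.randn(16, 16)
mask = predict_connectivity(x, w_down, w_q, w_k, budget=20)
```

Computing scores in the rank-r space costs O(N²r) instead of O(N²C), which is why the predictor stays lightweight relative to the dense MHSA it replaces.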

Experiment Results

Sparsifiner achieves over a 73% reduction in FLOPs compared with the dense-attention baseline, and produces a superior trade-off between FLOPs and top-1 accuracy on ImageNet compared to token sparsity methods.

BibTeX

@InProceedings{Wei_2023_CVPR,
        author    = {Wei, Cong and Duke, Brendan and Jiang, Ruowei and Aarabi, Parham and Taylor, Graham W. and Shkurti, Florian},
        title     = {Sparsifiner: Learning Sparse Instance-Dependent Attention for Efficient Vision Transformers},
        booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
        month     = {June},
        year      = {2023},
        pages     = {22680-22689}}