The patent badge is an abbreviated version of the USPTO patent document. It covers the patent number, date the patent was issued, date the patent was filed, title of the patent, applicant, inventors, assignee, attorney firm, primary examiner, assistant examiner, CPC classifications, and abstract. The patent badge contains a link to the full patent document (PDF).

Date of Patent: Nov. 25, 2025

Filed: Apr. 27, 2023

Applicant: L'Oreal, Paris, FR

Inventors: Cong Wei, Toronto, CA; Brendan Duke, Toronto, CA; Ruowei Jiang, Toronto, CA; Parham Aarabi, Richmond Hill, CA

Assignee: L'OREAL, Paris, FR

Attorney:

Primary Examiner:

Int. Cl.: G09G 5/00 (2006.01); G06T 7/246 (2017.01); G06T 11/60 (2006.01); G06V 10/764 (2022.01); G06V 10/82 (2022.01); G06V 20/20 (2022.01); G06V 40/16 (2022.01)

U.S. Cl. (CPC): G06V 10/82 (2022.01); G06T 7/246 (2017.01); G06T 11/60 (2013.01); G06V 10/764 (2022.01); G06V 20/20 (2022.01); G06V 40/161 (2022.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30201 (2013.01)
Abstract

Vision Transformers (ViTs) have shown competitive performance compared to convolutional neural networks (CNNs), though they often come with high computational costs. The methods, systems, and techniques herein learn instance-dependent attention patterns, using a lightweight connectivity predictor module to estimate a connectivity score for each pair of tokens. Intuitively, two tokens have a high connectivity score if their features are relevant either spatially or semantically. Because each token attends to only a small number of other tokens, the binarized connectivity masks are often very sparse by nature, providing an opportunity to accelerate the network via sparse computations. Equipped with the learned unstructured attention pattern, the sparse-attention ViT produces a superior Pareto-optimal trade-off between FLOPs and top-1 accuracy on ImageNet compared to token sparsity (48%–69% FLOPs reduction of MHSA, with accuracy drop within 0.4%). Combining attention and token sparsity reduces ViT FLOPs by over 60%.
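The mechanism described in the abstract (score each token pair with a lightweight predictor, binarize into a sparse mask, then attend only through the mask) can be sketched as follows. This is a minimal NumPy illustration, not the patented implementation: the function names (`connectivity_scores`, `sparse_attention`), the top-k binarization rule, and the reuse of the attention projections as the predictor are all simplifying assumptions, and the masking is done with dense arithmetic for clarity where a real system would use sparse kernels to realize the FLOPs savings.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax; masked (-inf) entries become exactly 0.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def connectivity_scores(tokens, w_q, w_k):
    # Hypothetical lightweight predictor: pairwise dot products of
    # projected tokens stand in for the learned connectivity module.
    return (tokens @ w_q) @ (tokens @ w_k).T

def sparse_attention(tokens, w_q, w_k, w_v, keep_ratio=0.3):
    """Attention restricted to each token's top-k most connected tokens.

    tokens: (n, d) array; w_q/w_k/w_v: (d, d_head) projections.
    Returns the attended output and the binary connectivity mask.
    """
    n = tokens.shape[0]
    scores = connectivity_scores(tokens, w_q, w_k)

    # Binarize: each token attends only to its k_keep highest-scoring peers.
    k_keep = max(1, int(np.ceil(keep_ratio * n)))
    mask = np.zeros_like(scores, dtype=bool)
    topk = np.argsort(-scores, axis=1)[:, :k_keep]
    np.put_along_axis(mask, topk, True, axis=1)

    # Standard scaled dot-product attention, with masked pairs set to -inf
    # so they receive zero attention weight after the softmax.
    logits = (tokens @ w_q) @ (tokens @ w_k).T / np.sqrt(w_q.shape[1])
    logits = np.where(mask, logits, -np.inf)
    attn = softmax(logits, axis=1)
    return attn @ (tokens @ w_v), mask
```

Each row of the resulting mask has exactly `k_keep` nonzeros, so at a 30% keep ratio roughly 70% of the attention pairs are skipped, which is the source of the MHSA FLOPs reduction the abstract reports.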
