The patent badge is an abbreviated version of the USPTO patent document. It covers the following fields: patent number, date the patent was issued, date the patent was filed, title of the patent, applicant, inventor, assignee, attorney firm, primary examiner, assistant examiner, CPCs, and abstract. The badge also contains a link to the full patent document in PDF (Adobe Acrobat) format, which can be downloaded or printed.

Date of Patent: Sep. 06, 2022

Filed: Nov. 15, 2020

Applicant: Arizona Board of Regents on Behalf of Arizona State University, Scottsdale, AZ (US)

Inventors: Mohammad Reza Hosseinzadeh Taher, Tempe, AZ (US); Fatemeh Haghighi, Tempe, AZ (US); Jianming Liang, Scottsdale, AZ (US)

Attorney:

Primary Examiner:

Int. Cl.: G06T 7/00 (2017.01); G06T 7/73 (2017.01); A61B 6/00 (2006.01)

U.S. Cl. CPC: G06T 7/0012 (2013.01); A61B 6/468 (2013.01); A61B 6/50 (2013.01); G06T 7/74 (2017.01); G06T 2207/10116 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30061 (2013.01)
Abstract

Not only is annotating medical images tedious and time-consuming, but it also demands costly, specialty-oriented expertise, which is not easily accessible. To address this challenge, a new self-supervised framework is introduced: TransVW (transferable visual words), exploiting the prowess of transfer learning with convolutional neural networks and the unsupervised nature of visual word extraction with bags of visual words, resulting in an annotation-efficient solution to medical image analysis. TransVW was evaluated using NIH ChestX-ray14 to demonstrate its annotation efficiency. When compared with training from scratch and ImageNet-based transfer learning, TransVW reduces the annotation efforts by 75% and 12%, respectively, in addition to significantly accelerating the convergence speed. More importantly, TransVW sets new records: achieving the best average AUC on all 14 diseases, the best individual AUC scores on 10 diseases, and the second-best individual AUC scores on 3 diseases. This performance is unprecedented, because heretofore no self-supervised learning method has outperformed ImageNet-based transfer learning and no annotation reduction has been reported for self-supervised learning. These achievements are attributable to a simple yet powerful observation: the complex and recurring anatomical structures in medical images are natural visual words, which can be automatically extracted, serving as strong yet free supervision signals for CNNs to learn generalizable and transferable image representation via self-supervision.
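The abstract describes a complete pretraining recipe: extract recurring anatomical patches as visual words, use their identities as free supervision, and train a CNN on that pretext task before transferring it to a downstream task. Below is a minimal illustrative sketch of that idea in Python/PyTorch. It is not the patent's actual implementation; the class and function names (VisualWordPatches, pretrain), the ResNet-18 backbone, and all hyperparameters are assumptions for demonstration only.

    # Minimal sketch of the visual-word idea from the abstract: patches cropped
    # at fixed anatomical coordinates serve as free pseudo-labels for
    # self-supervised CNN pretraining. Illustrative only, not the patent's code.
    import torch
    import torch.nn as nn
    from torch.utils.data import Dataset, DataLoader
    from torchvision.models import resnet18


    class VisualWordPatches(Dataset):
        """Hypothetical dataset: each item is a patch cropped at one of several
        fixed coordinates; the coordinate index is the pseudo-label, so no
        human annotation is needed."""

        def __init__(self, images, coords, patch_size=64):
            self.images = images        # e.g. single-channel X-ray tensors (1, H, W)
            self.coords = coords        # list of (y, x) centers, one per visual word
            self.patch_size = patch_size

        def __len__(self):
            return len(self.images) * len(self.coords)

        def __getitem__(self, idx):
            img = self.images[idx // len(self.coords)]
            word_id = idx % len(self.coords)
            y, x = self.coords[word_id]
            half = self.patch_size // 2
            patch = img[:, y - half:y + half, x - half:x + half]
            return patch, word_id   # the visual-word index is the supervision signal


    def pretrain(images, coords, epochs=5, device="cpu"):
        """Self-supervised pretext task: classify which visual word a patch is.
        The trained backbone is then reused for downstream fine-tuning."""
        loader = DataLoader(VisualWordPatches(images, coords),
                            batch_size=32, shuffle=True)
        model = resnet18(num_classes=len(coords))
        # Chest X-rays are single-channel; adapt the first conv layer accordingly.
        model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        model.to(device)
        opt = torch.optim.Adam(model.parameters(), lr=1e-4)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for patches, word_ids in loader:
                patches, word_ids = patches.to(device), word_ids.to(device)
                opt.zero_grad()
                loss = loss_fn(model(patches), word_ids)
                loss.backward()
                opt.step()
        return model   # backbone weights transfer to a labeled downstream task

In this sketch the pseudo-label is simply the index of the fixed coordinate a patch was cropped from; the pretrained backbone would then be fine-tuned on a labeled downstream task such as ChestX-ray14 disease classification, which is the transfer-learning step the abstract evaluates.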

