The patent badge is an abbreviated version of the USPTO patent document. It covers the following: patent number, date the patent was issued, date the patent was filed, title of the patent, applicant, inventor, assignee, attorney firm, primary examiner, assistant examiner, CPCs, and abstract. The badge also contains a link to the full patent document (in Adobe Acrobat format, i.e., PDF), from which the patent can be downloaded or printed.

Date of Patent: Jun. 03, 2025

Filed: Sep. 07, 2022

Applicant: Arizona Board of Regents on Behalf of Arizona State University, Scottsdale, AZ (US)

Inventors: Shivam Bajpai, Tempe, AZ (US); Jianming Liang, Scottsdale, AZ (US)

Attorney:

Primary Examiner:

Int. Cl. (CPC): G06V 10/77 (2022.01); G06T 7/00 (2017.01); G06V 10/46 (2022.01); G06V 10/70 (2022.01)

U.S. Cl. (CPC): G06T 7/0012 (2013.01); G06V 10/467 (2022.01); G06V 10/768 (2022.01); G06V 10/7715 (2022.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30096 (2013.01)
Abstract

Described herein are means for generating pre-trained models for nnU-Net through the use of improved transfer learning techniques, in which the pre-trained models are then utilized for the processing of medical imaging. According to a particular embodiment, there is a system specially configured for segmenting medical images, in which such a system includes: a memory to store instructions; a processor to execute the instructions stored in the memory; wherein the system is specially configured to: execute instructions via the processor for executing a pre-trained model from Models Genesis within a nnU-Net framework; execute instructions via the processor for learning generic anatomical patterns within the executing Models Genesis through self-supervised learning; execute instructions via the processor for transforming an original image using distortion and cutout-based methods; execute instructions via the processor for learning the reconstruction of the original image from the transformed image using an encoder-decoder architecture of the nnU-Net framework to identify the generic anatomical representation from the transformed image by recovering the original image; and wherein architecture determined by the nnU-Net framework is utilized with Models Genesis and is trained to minimize the L2 distance between the prediction and ground truth. Other related embodiments are disclosed.
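The abstract describes a self-supervised pretraining scheme: an original medical image is distorted and partially cut out, an encoder-decoder (with its architecture determined by the nnU-Net framework) learns to reconstruct the original image, and training minimizes the L2 distance between the prediction and the ground truth. The Python sketch below is only an illustration of that reconstruction objective, not the patented implementation: it assumes PyTorch, substitutes a toy encoder-decoder for the nnU-Net-determined architecture, and uses simplified stand-ins for the distortion and cutout transforms and for the image data.

# Minimal sketch (assumed PyTorch; toy architecture and transforms) of the
# self-supervised reconstruction objective described in the abstract.
import torch
import torch.nn as nn

def transform(image: torch.Tensor) -> torch.Tensor:
    """Apply simplified distortion- and cutout-based transformations."""
    x = image.clone()
    # Non-linear intensity distortion (illustrative stand-in for the patent's transforms).
    x = torch.clamp(x ** 1.5 + 0.05 * torch.randn_like(x), 0.0, 1.0)
    # Cutout: zero a random square patch so the model must in-paint it.
    _, _, h, w = x.shape
    ph, pw = h // 4, w // 4
    top = torch.randint(0, h - ph, (1,)).item()
    left = torch.randint(0, w - pw, (1,)).item()
    x[:, :, top:top + ph, left:left + pw] = 0.0
    return x

class TinyEncoderDecoder(nn.Module):
    """Toy encoder-decoder; in the described system, nnU-Net determines the real architecture."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, channels, 2, stride=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = TinyEncoderDecoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()  # L2 distance between prediction and ground truth

original = torch.rand(4, 1, 64, 64)    # toy batch standing in for medical images
distorted = transform(original)        # distortion + cutout
optimizer.zero_grad()
reconstruction = model(distorted)      # learn to recover the original image
loss = criterion(reconstruction, original)
loss.backward()
optimizer.step()
print(f"reconstruction L2 loss: {loss.item():.4f}")

As the abstract indicates, the weights learned through this reconstruction task would then serve as the pre-trained model (Models Genesis) used within the nnU-Net framework for segmenting medical images.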

