
The patent badge is an abbreviated version of the USPTO patent document. It covers the following fields: patent number, date the patent was issued, date the patent was filed, title of the patent, applicant, inventor, assignee, attorney firm, primary examiner, assistant examiner, CPCs, and abstract. The patent badge also contains a link to the full patent document in PDF (Adobe Acrobat) format.

Date of Patent: Apr. 30, 2019
Filed: Aug. 22, 2017
Applicant: Northrop Grumman Systems Corporation, Falls Church, VA (US)
Inventors: Victor Y. Wang, San Diego, CA (US); Kevin A. Calcote, La Mesa, CA (US)
Assignee: Northrop Grumman Systems Corporation, Falls Church, VA (US)
Attorneys:
Primary Examiner:
Int. Cl.: G06K 9/62 (2006.01); G06T 7/246 (2017.01); G06T 7/73 (2017.01); G06K 9/46 (2006.01); G06N 3/04 (2006.01); G06T 11/60 (2006.01); G06K 9/00 (2006.01)
U.S. Cl. CPC: G06K 9/6277 (2013.01); G06K 9/00664 (2013.01); G06K 9/00718 (2013.01); G06K 9/4628 (2013.01); G06K 9/6274 (2013.01); G06N 3/0445 (2013.01); G06N 3/0454 (2013.01); G06T 7/246 (2017.01); G06T 7/73 (2017.01); G06T 11/60 (2013.01); G06T 2207/10016 (2013.01); G06T 2207/20076 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30236 (2013.01)
Abstract

An adaptive real-time detection and examination network that employs deep learning to detect and recognize objects in a stream of pixilated two-dimensional digital images. The network provides the images from an image source as pixilated image frames to a CNN having an input layer and output layer, where the CNN identifies and classifies the objects in the image. The network also provides metadata relating to the image source and its location, and provides the object classification data and the metadata to an RNN that identifies motion and relative velocity of the classified objects in the images. The network combines the object classification data from the CNN and the motion data from the RNN, and correlates the combined data to define boundary boxes around each of the classified objects and an indicator of relative velocity and direction of movement of the classified objects, which can be displayed on the display device.
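The two-stage structure the abstract describes (a CNN classifying objects per frame, an RNN estimating motion across frames, then fusion of the two outputs into annotated boundary boxes) can be sketched structurally. The sketch below is a hypothetical illustration of that data flow only, not the patented implementation: the `cnn_stage` and `rnn_stage` functions, the `Detection` and `Track` types, and the label-matching velocity estimate are all assumptions made for illustration, standing in for trained networks.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    """One classified object in a single frame (the CNN stage's output)."""
    label: str
    box: Tuple[int, int, int, int]  # x, y, width, height in pixels

@dataclass
class Track:
    """A classified object plus its estimated motion (the fused output)."""
    label: str
    box: Tuple[int, int, int, int]
    velocity: Tuple[float, float]  # pixels per frame along x and y

def cnn_stage(frame_detections: List[Detection]) -> List[Detection]:
    # Stand-in for the CNN: a real network would classify objects from
    # the raw pixel frame; here the detections are supplied directly.
    return frame_detections

def rnn_stage(prev: List[Detection], curr: List[Detection]) -> List[Track]:
    # Stand-in for the RNN: estimate per-object velocity by matching
    # labels across consecutive frames and differencing box origins.
    prev_by_label = {d.label: d for d in prev}
    tracks = []
    for d in curr:
        p = prev_by_label.get(d.label)
        vel = (0.0, 0.0) if p is None else (d.box[0] - p.box[0],
                                            d.box[1] - p.box[1])
        tracks.append(Track(d.label, d.box, vel))
    return tracks
```

Feeding two consecutive frames through both stages yields classified objects with boxes and a direction-of-movement estimate, mirroring the combination step the abstract describes; for example, a "car" detection whose box origin shifts from (10, 20) to (14, 20) comes out with velocity (4, 0).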

