The patent badge is an abbreviated version of the USPTO patent document. It covers the patent number, date the patent was issued, date the patent was filed, title, applicant, inventors, assignee, attorney firm, primary examiner, assistant examiner, CPC classifications, and abstract. The badge also contains a link to the full patent document in PDF format.

Date of Patent:
May 23, 2023

Filed:
Mar. 6, 2019
Applicant:
General Electric Company, Schenectady, NY (US)

Inventors:
Huan Tan, Clifton Park, NY (US);
Isabella Heukensfeldt Jansen, Niskayuna, NY (US);
Gyeong Woo Cheon, Clifton Park, NY (US);
Li Zhang, Clifton Park, NY (US)

Assignee:
General Electric Company, Schenectady, NY (US)

Attorney:
Primary Examiner:
Assistant Examiner:
Int. Cl.
CPC:
G06F 19/00 (2018.01); G06T 7/11 (2017.01); G05D 1/02 (2020.01); G06T 7/73 (2017.01); G05D 1/00 (2006.01); G06N 3/08 (2023.01); G06T 7/194 (2017.01); G06F 18/2413 (2023.01); G06N 3/045 (2023.01); G06V 10/764 (2022.01); G06V 10/82 (2022.01); G06V 10/44 (2022.01); G06V 20/10 (2022.01);
U.S. Cl.
CPC:
G06T 7/11 (2017.01); G05D 1/0088 (2013.01); G05D 1/0246 (2013.01); G06F 18/24133 (2023.01); G06N 3/045 (2023.01); G06N 3/08 (2013.01); G06T 7/194 (2017.01); G06T 7/74 (2017.01); G06T 7/75 (2017.01); G06V 10/454 (2022.01); G06V 10/764 (2022.01); G06V 10/82 (2022.01); G06V 20/10 (2022.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30244 (2013.01);
Abstract

A method of robot autonomous navigation includes capturing an image of the environment, segmenting the captured image to identify one or more foreground objects and one or more background objects, determining a match between one or more of the foreground objects to one or more predefined image files, estimating an object pose for the one or more foreground objects by implementing an iterative estimation loop, determining a robot pose estimate by applying a robot-centric environmental model to the object pose estimate by implementing an iterative refinement loop, associating semantic labels to the matched foreground object, compiling a semantic map containing the semantic labels and segmented object image pose, and providing localization information to the robot based on the semantic map and the robot pose estimate. A system and a non-transitory computer-readable medium are also disclosed.
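The abstract describes a multi-stage pipeline: segment the image, match foreground objects to predefined templates, iteratively estimate object poses, iteratively refine the robot pose, and compile a semantic map used for localization. The sketch below is a hypothetical, heavily simplified illustration of that flow; all names (`segment`, `navigate`, `SegmentedObject`, etc.) are illustrative and not from the patent, and the segmentation, matching, and pose-estimation steps are stubbed with trivial placeholder logic.

```python
# Hypothetical sketch of the navigation pipeline from the abstract.
# All names are illustrative; heavy steps are stubbed with toy logic.
from dataclasses import dataclass


@dataclass
class SegmentedObject:
    label: str                       # semantic label assigned after matching
    pixels: list                     # placeholder for the object's image region
    pose: tuple = (0.0, 0.0, 0.0)    # (x, y, heading) estimate


def segment(image):
    """Split the captured image into foreground objects and background."""
    foreground = [SegmentedObject("unknown", px) for px in image["foreground"]]
    return foreground, image["background"]


def match_to_templates(obj, templates):
    """Match a foreground object against predefined image files (stubbed)."""
    return templates.get(tuple(obj.pixels), "unmatched")


def estimate_object_pose(obj, iterations=3):
    """Iterative estimation loop: refine the object pose estimate."""
    x = sum(obj.pixels) / len(obj.pixels)
    for _ in range(iterations):
        # Toy fixed-point update standing in for the real refinement step.
        x = 0.5 * (x + sum(obj.pixels) / len(obj.pixels))
    return (x, 0.0, 0.0)


def estimate_robot_pose(object_poses, iterations=3):
    """Iterative refinement loop applying a robot-centric model (stubbed)."""
    estimate = (0.0, 0.0, 0.0)
    for _ in range(iterations):
        xs = [p[0] for p in object_poses] or [0.0]
        estimate = (sum(xs) / len(xs), 0.0, 0.0)
    return estimate


def navigate(image, templates):
    """Run the full pipeline and return the semantic map and robot pose."""
    foreground, _background = segment(image)
    semantic_map = {}
    for obj in foreground:
        obj.label = match_to_templates(obj, templates)
        obj.pose = estimate_object_pose(obj)
        semantic_map[obj.label] = obj.pose     # label -> object pose
    robot_pose = estimate_robot_pose(list(semantic_map.values()))
    return semantic_map, robot_pose


image = {"foreground": [[1, 2, 3], [10, 20]], "background": ["floor"]}
templates = {(1, 2, 3): "door", (10, 20): "table"}
semantic_map, robot_pose = navigate(image, templates)
```

In this toy run, each matched object's pose collapses to the mean of its placeholder pixel values, and the robot pose averages the object poses; in the actual claims these steps are learned segmentation, template matching, and iterative pose refinement.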

