The patent badge is an abbreviated version of the USPTO patent document. It covers the patent number, the dates the patent was filed and issued, the title, the applicant, the inventors, the assignee, the attorney firm, the primary and assistant examiners, the classification codes (Int. Cl. and CPC), and the abstract. The badge also contains a link to the full patent document in PDF (Adobe Acrobat) format, which can be used to download or print the patent.

Date of Patent:
Feb. 13, 2024

Filed:
Nov. 09, 2020

Applicant:
Toyota Research Institute, Inc., Los Altos, CA (US)

Inventors:
Jiexiong Tang, Stockholm (SE);
Rares A. Ambrus, San Francisco, CA (US);
Vitor Guizilini, Santa Clara, CA (US);
Sudeep Pillai, Santa Clara, CA (US);
Hanme Kim, San Jose, CA (US);
Adrien David Gaidon, Mountain View, CA (US)

Assignee:
TOYOTA RESEARCH INSTITUTE, INC., Los Altos, CA (US)

Attorney:
Primary Examiner:
Assistant Examiner:
Int. Cl.:
G06T 7/00 (2017.01); G06T 7/579 (2017.01); B60W 60/00 (2020.01); G06T 7/246 (2017.01); G06T 7/33 (2017.01); G06T 7/269 (2017.01); G06T 7/73 (2017.01); G06N 3/08 (2023.01); G06V 10/764 (2022.01); G06V 10/82 (2022.01); G06V 10/46 (2022.01); G06V 20/56 (2022.01); G06V 20/64 (2022.01)
U.S. Cl. CPC:
G06T 7/579 (2017.01); B60W 60/001 (2020.02); B60W 60/0027 (2020.02); G06N 3/08 (2013.01); G06T 7/248 (2017.01); G06T 7/269 (2017.01); G06T 7/337 (2017.01); G06T 7/75 (2017.01); G06V 10/462 (2022.01); G06V 10/764 (2022.01); G06V 10/82 (2022.01); G06V 20/56 (2022.01); G06V 20/64 (2022.01); B60W 2420/42 (2013.01); G06T 2207/10016 (2013.01); G06T 2207/10028 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/30241 (2013.01); G06T 2207/30248 (2013.01); G06T 2207/30252 (2013.01)
Abstract

A method for learning depth-aware keypoints and associated descriptors from monocular video for ego-motion estimation is described. The method includes training a keypoint network and a depth network to learn depth-aware keypoints and the associated descriptors. The training is based on a target image and a context image from successive images of the monocular video. The method also includes lifting 2D keypoints from the target image to learn 3D keypoints based on a learned depth map from the depth network. The method further includes estimating ego-motion from the target image to the context image based on the learned 3D keypoints.
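
The abstract's pipeline reduces to two geometric steps: back-projecting detected 2D keypoints through the learned depth map to obtain 3D keypoints, and solving for the rigid motion that aligns the matched 3D keypoints between the target and context frames. The sketch below illustrates only those two steps as a minimal NumPy example, not the patented method or its claims; the function names, the pinhole intrinsics K, and the closed-form Kabsch alignment are illustrative assumptions, and the keypoint and depth networks themselves are omitted.

import numpy as np

def lift_keypoints_to_3d(keypoints_2d, depth_map, K):
    """Back-project pixel keypoints (N, 2) into camera-frame 3D points (N, 3)."""
    u, v = keypoints_2d[:, 0], keypoints_2d[:, 1]
    z = depth_map[v.astype(int), u.astype(int)]   # per-keypoint depth from the learned depth map
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    return np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=1)

def estimate_ego_motion(points_target, points_context):
    """Rigid transform (R, t) aligning matched 3D keypoints (Kabsch / Procrustes)."""
    mu_t, mu_c = points_target.mean(axis=0), points_context.mean(axis=0)
    H = (points_target - mu_t).T @ (points_context - mu_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_c - R @ mu_t
    return R, t

# Synthetic check: recover a known camera motion from matched 3D keypoints.
rng = np.random.default_rng(0)
pts_target = rng.uniform([-2, -2, 4], [2, 2, 10], size=(64, 3))
theta = 0.1
R_true = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(theta), 0.0, np.cos(theta)]])
t_true = np.array([0.3, 0.0, 0.5])
pts_context = pts_target @ R_true.T + t_true
R_est, t_est = estimate_ego_motion(pts_target, pts_context)
assert np.allclose(R_est, R_true, atol=1e-6) and np.allclose(t_est, t_true, atol=1e-6)

In the patented setting the correspondences come from the learned descriptors and the depth values from the depth network; a closed-form alignment is used here only to make the ego-motion step concrete.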

