The patent badge is an abbreviated version of the USPTO patent document. It covers the following fields: patent number, date the patent was issued, date the patent was filed, title of the patent, applicant, inventors, assignee, attorney firm, primary examiner, assistant examiner, CPC classifications, and abstract. The badge also contains a link to the full patent document in PDF (Adobe Acrobat) format.

Date of Patent: Oct. 06, 2020
Filed: Feb. 27, 2018
Applicant: Imperial College of Science, Technology and Medicine, London, GB
Inventors: Robert Lukierski, London, GB; Stefan Leutenegger, London, GB; Andrew Davison, London, GB
Attorney:
Primary Examiner:
Int. Cl.: G06K 9/00 (2006.01); G06T 7/579 (2017.01); G06T 7/55 (2017.01); G06T 7/73 (2017.01); A47L 9/00 (2006.01); B25J 9/16 (2006.01); B25J 11/00 (2006.01); G05D 1/02 (2020.01); H04N 5/232 (2006.01); G06T 7/80 (2017.01)
U.S. Cl. CPC: G06K 9/00664 (2013.01); A47L 9/009 (2013.01); B25J 9/1666 (2013.01); B25J 9/1697 (2013.01); B25J 11/0085 (2013.01); G05D 1/0212 (2013.01); G06T 7/55 (2017.01); G06T 7/579 (2017.01); G06T 7/74 (2017.01); H04N 5/23238 (2013.01); A47L 2201/04 (2013.01); G06T 7/80 (2017.01); G06T 2207/10012 (2013.01); G06T 2207/10016 (2013.01); G06T 2207/20228 (2013.01); G06T 2207/30244 (2013.01); Y10S 901/01 (2013.01); Y10S 901/47 (2013.01)
Abstract

Examples described herein relate to mapping a space using a multi-directional camera. The mapping may be performed by a robotic device comprising a monocular multi-directional camera device and at least one movement actuator, and may generate an occupancy map to determine navigable portions of the space. Movement of the robotic device around a point in a plane of movement is instructed using the at least one movement actuator. Using the monocular multi-directional camera device, a sequence of images is obtained at different angular positions during the instructed movement. Pose data is determined from the sequence of images, using features detected within the images. Depth values are then estimated by evaluating a volumetric function of the sequence of images and the pose data, and the depth values are processed to populate the occupancy map for the space.
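The pipeline the abstract describes (capture images during a circular movement, estimate poses, estimate depths, populate an occupancy map) can be illustrated with a minimal sketch. Everything below is hypothetical and synthetic: the function names, the constant depth values, and the grid parameters are assumptions for illustration only, not the patent's actual method.

```python
import math

def capture_images(num_views):
    """Stand-in for the monocular multi-directional camera: one synthetic
    'image' per angular position during the instructed circular movement."""
    return [{"angle": 2 * math.pi * i / num_views} for i in range(num_views)]

def estimate_poses(images):
    """Pose from detected features; here the view angle stands in for pose."""
    return [img["angle"] for img in images]

def estimate_depths(images, poses):
    """Stand-in for evaluating a volumetric function of images and poses:
    returns a synthetic constant depth (in metres) for every view."""
    return [2.0 for _ in poses]

def populate_occupancy_map(poses, depths, grid_size=9, cell=0.5):
    """Mark the grid cell at the measured depth along each view direction
    as occupied; unmarked cells are treated as navigable."""
    occ = [[0] * grid_size for _ in range(grid_size)]
    c = grid_size // 2  # robot sits at the grid centre
    for theta, d in zip(poses, depths):
        x = c + int(round(d * math.cos(theta) / cell))
        y = c + int(round(d * math.sin(theta) / cell))
        if 0 <= x < grid_size and 0 <= y < grid_size:
            occ[y][x] = 1
    return occ

images = capture_images(8)
poses = estimate_poses(images)
depths = estimate_depths(images, poses)
occupancy = populate_occupancy_map(poses, depths)
```

With eight views at a constant 2 m depth, eight cells on a rough circle around the centre are marked occupied while the centre cell itself stays free, mirroring how the occupancy map distinguishes obstacles from navigable space.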

