The patent badge is an abbreviated version of the USPTO patent document. It covers the following fields: patent number, date the patent was issued, date the patent was filed, title, applicant, inventor, assignee, attorney firm, primary examiner, assistant examiner, CPCs, and abstract. The badge also contains a link to the full patent document in PDF (Adobe Acrobat) format, which can be downloaded or printed.
Patent No.:
Date of Patent: Oct. 12, 2021
Filed: Oct. 04, 2019
Applicant: Google LLC, Mountain View, CA (US)

Inventors:
Julien Valentin, Mountain View, CA (US);
Onur G. Guleryuz, San Francisco, CA (US);
Mira Leung, Seattle, WA (US);
Maksym Dzitsiuk, San Francisco, CA (US);
Jose Pascoal, Lisbon, PT;
Mirko Schmidt, San Francisco, CA (US);
Christoph Rhemann, Marina Del Rey, CA (US);
Neal Wadhwa, Mountain View, CA (US);
Eric Turner, Somerville, MA (US);
Sameh Khamis, Oakland, CA (US);
Adarsh Prakash Murthy Kowdle, San Francisco, CA (US);
Ambrus Csaszar, San Francisco, CA (US);
João Manuel Castro Afonso, Lisbon, PT;
Jonathan T. Barron, Alameda, CA (US);
Michael Schoenberg, San Francisco, CA (US);
Ivan Dryanovski, Mountain View, CA (US);
Vivek Verma, Oakland, CA (US);
Vladimir Tankovich, San Francisco, CA (US);
Shahram Izadi, Tiburon, CA (US);
Sean Ryan Francesco Fanello, San Francisco, CA (US);
Konstantine Nicholas John Tsotsos, San Francisco, CA (US)
Assignee: Google LLC, Mountain View, CA (US)
Abstract
A handheld user device includes a monocular camera to capture a feed of images of a local scene and a processor to select, from the feed, a keyframe and perform, for a first image from the feed, stereo matching using the first image, the keyframe, and a relative pose based on a pose associated with the first image and a pose associated with the keyframe to generate a sparse disparity map representing disparities between the first image and the keyframe. The processor further is to determine a dense depth map from the disparity map using a bilateral solver algorithm, and process a viewfinder image generated from a second image of the feed with occlusion rendering based on the depth map to incorporate one or more virtual objects into the viewfinder image to generate an AR viewfinder image. Further, the processor is to provide the AR viewfinder image for display.
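The abstract above describes a depth-from-motion pipeline: select a keyframe from the camera feed, stereo-match the current image against it using their relative pose to get a sparse disparity map, densify that map into a depth map with a bilateral solver, and use the depth map for occlusion-aware AR rendering. A minimal Python sketch of that data flow follows; every function name is hypothetical, and the stereo matcher and bilateral solver are deliberately simplified placeholders standing in for the far more involved algorithms the patent claims.

```python
import numpy as np

def select_keyframe(feed, poses, min_baseline=0.05):
    # Hypothetical keyframe selection: pick the earliest frame whose camera
    # position differs from the latest frame's by at least a minimum baseline,
    # so the stereo pair has enough parallax.
    latest = poses[-1]
    for image, pose in zip(feed[:-1], poses[:-1]):
        if np.linalg.norm(pose[:3, 3] - latest[:3, 3]) >= min_baseline:
            return image, pose
    return feed[0], poses[0]

def relative_pose(keyframe_pose, current_pose):
    # 4x4 transform taking keyframe camera coordinates to current-frame
    # camera coordinates.
    return np.linalg.inv(current_pose) @ keyframe_pose

def sparse_stereo_matching(image, keyframe, rel_pose, stride=8):
    # Placeholder for stereo matching: a real implementation would search
    # along epipolar lines defined by rel_pose. Here we just record a value
    # on a coarse grid and leave the rest unmatched (NaN), yielding a
    # *sparse* disparity map as in the abstract.
    disparity = np.full(image.shape[:2], np.nan)
    diff = np.abs(image.astype(float) - keyframe.astype(float))
    disparity[::stride, ::stride] = diff[::stride, ::stride]
    return disparity

def densify_depth(sparse_disparity, guide_image):
    # Stand-in for the fast bilateral solver: fill unmatched pixels with the
    # mean observed disparity. The real solver propagates values while
    # respecting edges in the guide image (unused in this toy version).
    filled = sparse_disparity.copy()
    filled[np.isnan(filled)] = np.nanmean(filled)
    # Disparity -> depth with a hypothetical focal-length*baseline constant.
    return 1.0 / (filled + 1e-6)

def render_ar_viewfinder(viewfinder_image, depth_map, virtual_depth=0.5):
    # Occlusion rendering: composite the virtual object only where the real
    # scene is farther away than the object, so nearby geometry occludes it.
    out = viewfinder_image.copy()
    mask = depth_map > virtual_depth
    out[mask] = 255  # hypothetical "virtual object" pixels
    return out
```

The pipeline would then run as `keyframe, key_pose = select_keyframe(feed, poses)`, followed by `sparse_stereo_matching`, `densify_depth`, and `render_ar_viewfinder` on each new frame; the sketch only illustrates the stage ordering and data dependencies, not the claimed algorithms themselves.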