The patent badge is an abbreviated version of the USPTO patent document. It covers the patent number, issue date, filing date, title, applicant, inventors, assignee, attorney firm, primary examiner, assistant examiner, CPC classifications, and abstract, and it includes a link to the full patent document (PDF).

Date of Patent: May 4, 2021

Filed: Oct. 16, 2019
Applicant: Google LLC, Mountain View, CA (US)

Inventors:
Christoph Rhemann, Marina Del Rey, CA (US)
Abhimitra Meka, Saarbrücken, DE
Matthew Whalen, San Clemente, CA (US)
Jessica Lynn Busch, Long Beach, CA (US)
Sofien Bouaziz, Los Gatos, CA (US)
Geoffrey Douglas Harvey, Culver City, CA (US)
Andrea Tagliasacchi, Toronto, CA
Jonathan Taylor, San Francisco, CA (US)
Paul Debevec, Culver City, CA (US)
Peter Joseph Denny, Venice, CA (US)
Sean Ryan Francesco Fanello, San Francisco, CA (US)
Graham Fyffe, Los Angeles, CA (US)
Jason Angelo Dourgarian, Los Angeles, CA (US)
Xueming Yu, Arcadia, CA (US)
Adarsh Prakash Murthy Kowdle, San Francisco, CA (US)
Julien Pascal Christophe Valentin, Oberageri, CH
Peter Christopher Lincoln, South San Francisco, CA (US)
Rohit Kumar Pandey, Mountain View, CA (US)
Christian Häne, Berkeley, CA (US)
Shahram Izadi, Tiburon, CA (US)

Assignee: Google LLC, Mountain View, CA (US)

Attorney:
Primary Examiner:
Assistant Examiner:
Int. Cl.: G06K 9/46 (2006.01); G06T 15/50 (2011.01); G06T 15/20 (2011.01); G06N 3/08 (2006.01); G06K 9/62 (2006.01)
U.S. Cl. CPC: G06K 9/4661 (2013.01); G06K 9/6256 (2013.01); G06N 3/08 (2013.01); G06T 15/20 (2013.01); G06T 15/506 (2013.01)
Abstract

Methods, systems, and media for relighting images using predicted deep reflectance fields are provided. In some embodiments, the method comprises: identifying a group of training samples, wherein each training sample includes (i) a group of one-light-at-a-time (OLAT) images that have each been captured when one light of a plurality of lights arranged on a lighting structure has been activated, (ii) a group of spherical color gradient images that have each been captured when the plurality of lights arranged on the lighting structure have been activated to each emit a particular color, and (iii) a lighting direction, wherein each image in the group of OLAT images and each of the spherical color gradient images are an image of a subject, and wherein the lighting direction indicates a relative orientation of a light to the subject; training a convolutional neural network using the group of training samples, wherein training the convolutional neural network comprises: for each training iteration in a series of training iterations and for each training sample in the group of training samples: generating an output predicted image, wherein the output predicted image is a representation of the subject associated with the training sample with lighting from the lighting direction associated with the training sample; identifying a ground-truth OLAT image included in the group of OLAT images for the training sample that corresponds to the lighting direction for the training sample; calculating a loss that indicates a perceptual difference between the output predicted image and the identified ground-truth OLAT image; and updating parameters of the convolutional neural network based on the calculated loss; identifying a test sample that includes a second group of spherical color gradient images and a second lighting direction; and generating a relit image of the subject included in each of the second group of spherical color gradient images with lighting from the second lighting direction using the trained convolutional neural network.
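The training loop the abstract describes can be sketched in code. The sketch below is a minimal illustration under stated assumptions, not the patented implementation: a toy linear model stands in for the convolutional neural network, a plain L1 loss stands in for the perceptual loss, and all names, keys, and shapes (`grad`, `dir`, `olat`, `light_index`, `n_pixels`) are hypothetical. What it preserves is the control flow: for each iteration and each sample, predict an image, select the ground-truth OLAT image matching the sample's lighting direction, compute a loss, and update the model's parameters.

```python
import numpy as np

def predict(weights, gradient_images, light_dir):
    """Predict an OLAT-style image from spherical color gradient images
    and a lighting direction (flattened into one feature vector)."""
    features = np.concatenate([np.ravel(gradient_images), light_dir])
    return weights @ features

def l1_loss(pred, target):
    """Stand-in for the perceptual loss between the predicted image
    and the ground-truth OLAT image."""
    return float(np.abs(pred - target).mean())

def train(samples, n_pixels, lr=0.01, iterations=50):
    """Each sample holds spherical gradient images ('grad'), a lighting
    direction ('dir'), the OLAT image set ('olat'), and the index of the
    OLAT image whose light matches that direction ('light_index')."""
    feature_dim = samples[0]["grad"].size + samples[0]["dir"].size
    weights = np.zeros((n_pixels, feature_dim))
    for _ in range(iterations):
        for s in samples:
            features = np.concatenate([np.ravel(s["grad"]), s["dir"]])
            # Output predicted image for this sample's lighting direction.
            pred = weights @ features
            # Ground-truth OLAT image corresponding to that direction.
            target = s["olat"][s["light_index"]]
            # Subgradient of the L1 loss with respect to the weights.
            grad_w = np.sign(pred - target)[:, None] * features[None, :] / target.size
            weights -= lr * grad_w
    return weights
```

At inference time, `predict` plays the role of relighting: given a new group of spherical color gradient images and a new lighting direction, it produces the relit image, mirroring the test-sample step in the abstract.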


