The patent badge is an abbreviated version of the USPTO patent document. It covers the following: patent number, date the patent was issued, date the patent was filed, title of the patent, applicant, inventors, assignee, attorney firm, primary examiner, assistant examiner, CPC classifications, and abstract. The badge also contains a link to the full patent document in Adobe Acrobat (PDF) format.

Date of Patent: Apr. 18, 2023
Filed: Jan. 29, 2018
Applicant: Tetavi Ltd., Petach Tiqva, IL
Inventors: Michael Tamir, Tel Aviv, IL; Michael Birnboim, Holon, IL; David Dreizner, Raanana, IL; Michael Priven, Ramat-Gan, IL; Vsevolod Kagarlitsky, Ramat Gan, IL
Assignee: TETAVI, LTD., Ramat Gan, IL
Attorney:
Primary Examiner:
Int. Cl.: H04N 5/222 (2006.01); G06T 15/04 (2011.01); G06T 17/20 (2006.01); H04N 13/271 (2018.01); H04N 13/279 (2018.01); H04N 13/282 (2018.01); G01B 11/00 (2006.01); G06T 17/00 (2006.01); G01B 11/25 (2006.01); H04N 13/257 (2018.01); H04N 13/25 (2018.01); G06T 19/00 (2011.01)
U.S. Cl. CPC: H04N 5/2226 (2013.01); G01B 11/00 (2013.01); G01B 11/25 (2013.01); G06T 15/04 (2013.01); G06T 17/00 (2013.01); G06T 17/20 (2013.01); G06T 19/003 (2013.01); H04N 13/25 (2018.05); H04N 13/257 (2018.05); H04N 13/271 (2018.05); H04N 13/279 (2018.05); H04N 13/282 (2018.05); G06T 2210/56 (2013.01)
Abstract

Systems and methods for foreground/background separation and for studio production of a FVV. A method includes projecting, onto objects in a filming area within a studio, a predefined pattern including a large set of features; generating, based on signals reflected off of the objects and captured by a plurality of depth cameras deployed in proximity to the filming area, a local point cloud for each depth camera; separating, based on the local point clouds, between a background and a foreground of the filming area; creating, based on the local point clouds, a unified point cloud; meshing points in the unified point cloud to generate a 3D model of the objects; texturing the 3D model based on the separation and images captured by the depth cameras; and rendering the textured 3D model as a FVV including a series of video frames with respect to at least one viewpoint.
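
For readers skimming the abstract, the following minimal Python sketch shows one way the described stages (per-camera local point clouds, foreground separation, unification, then meshing/texturing/rendering) could be strung together. It is a toy illustration only: the function names, the depth-threshold separation, and the pinhole back-projection model are assumptions made here for demonstration and are not taken from the patent; the heavier steps (meshing, texturing, rendering) are only indicated by comments.

import numpy as np

# Minimal sketch of the pipeline outlined in the abstract.
# All names, thresholds, and camera models below are illustrative
# assumptions, not taken from the patent.

def backproject_depth(depth_map, intrinsics):
    """Turn one depth camera's depth map into a local point cloud."""
    fx, fy, cx, cy = intrinsics
    h, w = depth_map.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_map
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def separate_foreground(points, max_depth=4.0):
    """Crude depth-threshold foreground/background split; the patent
    performs this separation using the projected pattern and the
    per-camera local point clouds."""
    mask = points[:, 2] < max_depth
    return points[mask], points[~mask]

def unify(local_clouds, extrinsics):
    """Transform each local cloud into a common studio frame and merge."""
    merged = [pts @ R.T + t for pts, (R, t) in zip(local_clouds, extrinsics)]
    return np.concatenate(merged, axis=0)

def run_pipeline(depth_maps, intrinsics, extrinsics):
    local_clouds = [backproject_depth(d, k) for d, k in zip(depth_maps, intrinsics)]
    foregrounds = [separate_foreground(p)[0] for p in local_clouds]
    unified = unify(foregrounds, extrinsics)
    # Meshing the unified cloud, texturing the mesh from the camera
    # images, and rendering per-viewpoint video frames would follow here.
    return unified

if __name__ == "__main__":
    depth = np.full((4, 4), 2.0)        # toy 4x4 depth map, 2 m everywhere
    K = (1.0, 1.0, 2.0, 2.0)            # fx, fy, cx, cy
    pose = (np.eye(3), np.zeros(3))     # identity camera pose
    print(run_pipeline([depth], [K], [pose]).shape)   # -> (16, 3)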

