The patent badge is an abbreviated version of the USPTO patent document. It covers the following fields: patent number, date the patent was issued, date the patent was filed, title of the patent, applicant, inventors, assignee, attorney firm, primary examiner, assistant examiner, CPCs, and abstract. The badge also contains a link to the full patent document in Adobe Acrobat (PDF) format, which can be downloaded or printed.

Date of Patent: Aug. 30, 2016
Filed: Sep. 03, 2013
Applicant: Alcatel Lucent, Boulogne-Billancourt, FR
Inventors: Gerard Delegue, Nozay, FR; Nicolas Bouche, Nozay, FR
Assignee: Alcatel Lucent, Boulogne-Billancourt, FR
Attorney:
Primary Examiner:
Int. Cl.: H04N 7/15 (2006.01); H04M 3/56 (2006.01); H04N 13/02 (2006.01)
U.S. Cl. (CPC): H04N 7/157 (2013.01); H04M 3/567 (2013.01); H04N 13/0239 (2013.01); H04N 2213/003 (2013.01)
Abstract

An immersive videoconference method wherein multiple participants in different locations remotely interact with each other through a telecommunication network architecture. At the location of a given participant, the method comprises: capturing video images of the participant with a pair of video cameras; detecting, tracking, and determining size- and position-related parameters of the participant in the video images; generating a single elementary video stream related to the participant; associating a room identifier with the elementary video stream, the room identifier being uniquely associated with the given participant; sending the elementary video stream, the size- and position-related parameters, and the room identifier to a centralized entity; and repeating the above steps for each participant at each different location. At the centralized entity, the method further comprises: creating a virtual room by combining the elementary video streams of all the participants; staging the elementary video streams of all the participants in said virtual room and computing a scene specification associated with the room identifier of each participant, based on the size- and position-related parameters of all the participants; and generating, for each participant, a single composite video stream of the virtual room that displays the 2D video of the other participants sized and positioned as if the participants were in the same virtual room, based on the scene specification and a combination of the elementary video streams of the other participants.
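The centralized staging and compositing steps in the abstract can be sketched in Python. This is a minimal illustration, not the patented implementation: the `ElementaryStream` fields, the side-by-side layout in `stage_virtual_room`, and the function names are all assumptions introduced here to show the flow of room identifiers, scene specification, and per-participant composite views.

```python
from dataclasses import dataclass

@dataclass
class ElementaryStream:
    room_id: str     # uniquely associated with one participant (assumed string)
    width: int       # size-related parameters from detection/tracking
    height: int
    x: float         # position-related parameters in the captured image
    y: float
    frames: list     # placeholder for the video payload

def stage_virtual_room(streams):
    """Compute a scene specification: one layout slot per room identifier,
    derived from the size/position parameters of ALL participants.
    Here a naive side-by-side placement stands in for real staging."""
    scene = {}
    offset = 0
    for s in sorted(streams, key=lambda s: s.room_id):
        scene[s.room_id] = {"x": offset, "y": 0, "w": s.width, "h": s.height}
        offset += s.width
    return scene

def composite_for(viewer_id, streams, scene):
    """Generate the single composite view for one participant: every OTHER
    participant's stream, sized and positioned per the scene specification."""
    return [(s.room_id, scene[s.room_id]) for s in streams
            if s.room_id != viewer_id]

# Usage: three participants; the composite for roomA excludes roomA itself.
streams = [ElementaryStream("roomA", 640, 480, 0.0, 0.0, []),
           ElementaryStream("roomB", 640, 480, 0.0, 0.0, []),
           ElementaryStream("roomC", 640, 480, 0.0, 0.0, [])]
scene = stage_virtual_room(streams)
view_for_a = composite_for("roomA", streams, scene)
```

Each participant receives a different composite (their own stream is omitted), while every composite is driven by the one shared scene specification, matching the "same virtual room" effect described above.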

