The patent badge is an abbreviated version of the USPTO patent document. It covers the following: patent number, date the patent was issued, date the patent was filed, title of the patent, applicant, inventor, assignee, attorney firm, primary examiner, assistant examiner, CPCs, and abstract. The patent badge also contains a link to the full patent document in Adobe Acrobat (PDF) format.

Date of Patent: Feb. 11, 2025

Filed: Oct. 28, 2020

Applicant: Nanjing University, Jiangsu, CN

Inventors: Xun Cao, Jiangsu, CN; Zhihao Huang, Jiangsu, CN; Yanru Wang, Jiangsu, CN

Assignee: NANJING UNIVERSITY, Jiangsu, CN

Attorneys:

Primary Examiner:

Assistant Examiner:

Int. Cl.: G06T 9/00 (2006.01); H04N 13/117 (2018.01); H04N 13/246 (2018.01); H04N 13/257 (2018.01); H04N 13/296 (2018.01)

U.S. Cl.: CPC H04N 13/117 (2018.05); G06T 9/002 (2013.01); H04N 13/246 (2018.05); H04N 13/257 (2018.05); H04N 13/296 (2018.05); G06T 2207/20081 (2013.01)
Abstract

A Free Viewpoint Video (FVV) generation and interaction method based on a deep Convolutional Neural Network (CNN) includes the steps of: acquiring multi-viewpoint data of a target scene by a synchronous shooting system with a multi-camera array arranged accordingly to obtain groups of synchronous video frame sequences from a plurality of viewpoints, and rectifying baselines of the sequences at pixel level in batches; extracting, by encoding and decoding network structures, features of each group of viewpoint images input into a designed and trained deep CNN model, to obtain deep feature information of the scene, and combining the information with the input images to generate a virtual viewpoint image between each group of adjacent physical viewpoints at every moment; and synthesizing all viewpoints into frames of the FVV based on time and spatial position of viewpoints by stitching matrices. The method does not require camera rectification and depth image calculation.
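
The abstract describes a three-stage pipeline: synchronized multi-camera capture with batch baseline rectification, encoder-decoder CNN synthesis of a virtual viewpoint between each pair of adjacent physical viewpoints, and stitching of all viewpoints into FVV frames. The sketch below (PyTorch) is only a minimal illustration of the middle stage and of assembling one time instant's viewpoint set. The class and function names (VirtualViewNet, synthesize_fvv_frame), the layer sizes, and the residual-blend combination of network output with the input views are assumptions made here for illustration; they are not the patent's actual network architecture or parameters.

# Hypothetical sketch of the encoder-decoder virtual-view stage described in
# the abstract. Names and hyperparameters are illustrative assumptions, not
# taken from the patent itself.
import torch
import torch.nn as nn

class VirtualViewNet(nn.Module):
    """Encoder-decoder CNN: takes two adjacent physical viewpoint images and
    predicts the virtual viewpoint image between them."""
    def __init__(self, base_channels: int = 32):
        super().__init__()
        # Encoder: extract deep feature information from the stacked view pair.
        self.encoder = nn.Sequential(
            nn.Conv2d(6, base_channels, 3, stride=2, padding=1),  # two RGB views stacked -> 6 channels
            nn.ReLU(inplace=True),
            nn.Conv2d(base_channels, base_channels * 2, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Decoder: upsample the features back to image resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base_channels * 2, base_channels, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base_channels, 3, 4, stride=2, padding=1),
            nn.Tanh(),  # residual correction in [-1, 1]
        )

    def forward(self, left: torch.Tensor, right: torch.Tensor) -> torch.Tensor:
        features = self.encoder(torch.cat([left, right], dim=1))
        residual = self.decoder(features)
        # Combine the deep features with the input views; a simple residual
        # blend is assumed here for illustration only.
        return torch.clamp(0.5 * (left + right) + residual, 0.0, 1.0)


def synthesize_fvv_frame(views: list[torch.Tensor], model: VirtualViewNet) -> list[torch.Tensor]:
    """Insert one virtual viewpoint between each pair of adjacent physical
    viewpoints for a single time instant and return the ordered viewpoint set."""
    ordered = []
    for left, right in zip(views[:-1], views[1:]):
        ordered.append(left)
        with torch.no_grad():
            ordered.append(model(left.unsqueeze(0), right.unsqueeze(0)).squeeze(0))
    ordered.append(views[-1])
    return ordered

In this sketch, each element of views is a rectified (3, H, W) image tensor from one camera at the same time instant; repeating the call over all time instants and stitching the ordered viewpoint sets would correspond to the frame-assembly step the abstract describes.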

