The patent badge is an abbreviated version of the USPTO patent document. It covers the following: patent number, date the patent was issued, date the patent was filed, title of the patent, applicant, inventor, assignee, attorney firm, primary examiner, assistant examiner, CPCs, and abstract. The patent badge also contains a link to the full patent document in PDF (Adobe Acrobat) format, which can be used to download or print the patent.

Date of Patent: Jan. 21, 2020
Filed: Dec. 29, 2016
Applicant: Zhejiang Gongshang University, Hangzhou, Zhejiang, CN
Inventors: Xun Wang, Zhejiang, CN; Xuran Zhao, Zhejiang, CN
Assignee: ZHEJIANG GONGSHANG UNIVERSITY, Hangzhou, Zhejiang, CN

Attorney:
Primary Examiner:
Int. Cl.: G06N 3/08 (2006.01); G06N 3/04 (2006.01); G06T 7/00 (2017.01); G06F 17/16 (2006.01); G06T 7/10 (2017.01); G06K 9/62 (2006.01); G06F 17/13 (2006.01)
U.S. Cl.: CPC G06N 3/084 (2013.01); G06F 17/13 (2013.01); G06F 17/16 (2013.01); G06K 9/6215 (2013.01); G06N 3/0472 (2013.01); G06T 7/10 (2017.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01)
Abstract

A method for generating spatial-temporal consistency depth map sequences based on convolutional neural networks for 2D-3D conversion of television works includes the steps of: 1) collecting a training set, wherein each training sample comprises a sequence of continuous RGB images and a corresponding depth map sequence; 2) processing each image sequence in the training set with spatial-temporal consistency superpixel segmentation, and establishing a spatial similarity matrix and a temporal similarity matrix; 3) establishing the convolutional neural network, which comprises a single-superpixel depth regression network and a spatial-temporal consistency conditional random field loss layer; 4) training the convolutional neural network; and 5) recovering the depth maps of an RGB image sequence of unknown depth through forward propagation with the trained convolutional neural network. The method avoids the heavy dependence of clue-based depth recovery methods on scenario assumptions, as well as the inter-frame discontinuity between depth maps generated by conventional neural networks.
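
For orientation, the following is a minimal structural sketch in PyTorch of the pipeline outlined in the abstract: a per-superpixel depth regression network trained with a CRF-style consistency loss that uses the spatial and temporal similarity matrices. The class and function names (SuperpixelDepthRegressor, consistency_crf_loss), the network architecture, and the exact loss form are illustrative assumptions, not the patented formulation.

# Minimal sketch, assuming per-superpixel feature vectors are already
# extracted from the spatial-temporal consistency superpixel segmentation.
import torch
import torch.nn as nn

class SuperpixelDepthRegressor(nn.Module):
    """Regresses one depth value per superpixel from its feature vector (hypothetical architecture)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, superpixel_feats):                 # (N, feat_dim)
        return self.net(superpixel_feats).squeeze(-1)    # (N,) predicted depths

def consistency_crf_loss(pred, target, spatial_sim, temporal_sim, lam=0.1):
    """Data term plus CRF-style pairwise terms: superpixels that are similar
    spatially (within a frame) or temporally (across frames) are pushed
    toward similar depths. Illustrative loss, not the patent's exact layer."""
    data = ((pred - target) ** 2).mean()
    diff = pred.unsqueeze(0) - pred.unsqueeze(1)          # (N, N) pairwise depth gaps
    pairwise = (spatial_sim * diff ** 2).mean() + (temporal_sim * diff ** 2).mean()
    return data + lam * pairwise

# Toy training step on random stand-in data (steps 2-4 of the abstract).
N, D = 200, 128
feats = torch.randn(N, D)            # per-superpixel features
gt_depth = torch.rand(N)             # ground-truth superpixel depths
spatial_sim = torch.rand(N, N)       # spatial similarity matrix
temporal_sim = torch.rand(N, N)      # temporal similarity matrix

model = SuperpixelDepthRegressor(D)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

pred = model(feats)
loss = consistency_crf_loss(pred, gt_depth, spatial_sim, temporal_sim)
opt.zero_grad()
loss.backward()
opt.step()

# Inference (step 5): forward propagation on superpixel features of an
# RGB sequence whose depth is unknown.
with torch.no_grad():
    recovered_depths = model(torch.randn(N, D))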

