
The patent badge is an abbreviated version of the USPTO patent document. It covers the following fields: patent number, date the patent was issued, date the patent was filed, title, applicant, inventors, assignee, attorney firm, primary examiner, assistant examiner, CPCs, and abstract. The badge also contains a link to the full patent document in PDF (Adobe Acrobat) format, which can be downloaded or printed.

Date of Patent: Apr. 14, 2025
Filed: Jun. 04, 2024
Applicant: Samsung Electronics Co., Ltd., Suwon-si, KR
Inventors: Kun Wang, Jiangsu, CN; Jichun Li, Jiangsu, CN; Mengze Wang, Jiangsu, CN; Youxin Chen, Jiangsu, CN
Assignee:
Attorney:
Primary Examiner:
Int. Cl.: G06F 3/01 (2005.12); G06T 7/11 (2016.12); G06T 15/20 (2010.12)
U.S. Cl. CPC: G06F 3/013 (2012.12); G06T 7/11 (2016.12); G06T 15/20 (2012.12); G06T 2207/10016 (2012.12); G06T 2207/10048 (2012.12); G06T 2207/20081 (2012.12); G06T 2207/30201 (2012.12); G06T 2207/30241 (2012.12); G06T 2207/30268 (2012.12)
Abstract

A method and a system for rendering video images in virtual reality (VR) scenes are provided. The method includes:

- providing a video image at a current time point;
- dividing the video image at the current time point into a plurality of sub-regions;
- inputting image feature information of the sub-regions and acquired user viewpoint feature information into a trained attention model to obtain attention coefficients of the sub-regions, indicating probability values at which user viewpoints at a next time point fall into the sub-regions;
- rendering the sub-regions based on their attention coefficients to obtain a rendered video image at the current time point;
- inputting the attention coefficients and the image feature information of the sub-regions into a trained user eye trajectory prediction model to obtain user eye trajectory information for the current time period;
- for video images at subsequent time points within the current time period, dividing each video image into a plurality of sub-regions, calculating attention coefficients of the sub-regions based on the user eye trajectory information for the current time period, and rendering the corresponding sub-regions based on those attention coefficients to obtain a rendered video image at each subsequent time point.
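The per-frame pipeline the abstract describes can be sketched roughly as follows. This is an illustrative assumption, not the patent's implementation: the grid split, the inverse-distance "attention" stand-in, and the quality mapping are all placeholders for the trained models and rendering logic the patent claims.

```python
# Hypothetical sketch of the attention-driven rendering pipeline.
# All functions are illustrative stand-ins, not the patented models.
from typing import List, Tuple

Region = Tuple[int, int, int, int]  # (x, y, width, height)


def divide_into_subregions(width: int, height: int,
                           rows: int, cols: int) -> List[Region]:
    """Split a frame into a rows x cols grid of sub-regions."""
    rw, rh = width // cols, height // rows
    return [(c * rw, r * rh, rw, rh)
            for r in range(rows) for c in range(cols)]


def attention_coefficients(regions: List[Region],
                           viewpoint: Tuple[float, float]) -> List[float]:
    """Stand-in for the trained attention model: a probability per
    sub-region that the user's gaze falls into it at the next time
    point. Here: inverse distance from the current viewpoint to each
    region center, normalized to sum to 1."""
    vx, vy = viewpoint
    scores = []
    for (x, y, w, h) in regions:
        cx, cy = x + w / 2, y + h / 2
        dist = ((cx - vx) ** 2 + (cy - vy) ** 2) ** 0.5
        scores.append(1.0 / (1.0 + dist))
    total = sum(scores)
    return [s / total for s in scores]


def render_quality(coefficients: List[float],
                   low: float = 0.25, high: float = 1.0) -> List[float]:
    """Map attention coefficients to a per-region render quality
    (e.g. a resolution or shading-rate scale), spending the most
    rendering effort where gaze is most likely to land."""
    m = max(coefficients)
    return [low + (high - low) * (c / m) for c in coefficients]
```

The same coefficients would then feed the gaze-trajectory prediction step, letting subsequent frames in the time period reuse predicted viewpoints instead of re-running the attention model per frame.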

