
The patent badge is an abbreviated version of the USPTO patent document. It covers the following fields: patent number, date the patent was issued, date the patent was filed, title of the patent, applicant, inventors, assignee, attorney firm, primary examiner, assistant examiner, CPC classifications, and abstract. The badge also contains a link to the full patent document in PDF (Adobe Acrobat) format, which can be downloaded or printed.

Date of Patent: Oct. 08, 2024

Filed: Feb. 13, 2023

Applicant: Google LLC, Mountain View, CA (US)

Inventors: Terrance Paul McCartney, Jr., Allison Park, PA (US); Brian Colonna, Pittsburgh, PA (US); Michael Nechyba, Pittsburgh, PA (US)

Assignee: Google LLC, Mountain View, CA (US)

Attorney:
Primary Examiner:
Assistant Examiner:
Int. Cl.
CPC ...
H04N 7/10 (2006.01); G06F 40/30 (2020.01); G06F 40/58 (2020.01); H04N 21/43 (2011.01); H04N 21/488 (2011.01);
U.S. Cl.
CPC ...
H04N 21/4884 (2013.01); G06F 40/30 (2020.01); G06F 40/58 (2020.01); H04N 21/43074 (2020.08);
Abstract

A method for aligning a translation of original caption data with an audio portion of a video is provided. The method involves identifying original caption data for the video that includes caption character strings, identifying translated language caption data for the video that includes translated character strings associated with the audio portion of the video, and mapping caption sentence fragments generated from the caption character strings to corresponding translated sentence fragments generated from the translated character strings based on timing associated with the original caption data and the translated language caption data. The method further involves estimating time intervals for individual caption sentence fragments using timing information corresponding to individual caption character strings, assigning time intervals to individual translated sentence fragments based on the estimated time intervals of the individual caption sentence fragments, generating a set of translated sentences from consecutive translated sentence fragments, and aligning the set of translated sentences with the audio portion of the video using the assigned time intervals of the individual translated sentence fragments from the corresponding translated sentences.
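The abstract's pipeline — estimate a time interval for each caption sentence fragment from per-string timing, assign those intervals to the corresponding translated fragments, then merge consecutive fragments into timed sentences — can be sketched in Python. This is a minimal illustration, not the patented implementation: all names (`CaptionString`, `estimate_fragment_intervals`, `align_translated_sentences`) are hypothetical, and it assumes a simple one-to-one mapping between caption fragments and translated fragments.

```python
from dataclasses import dataclass

@dataclass
class CaptionString:
    """One caption character string with its display interval (seconds)."""
    text: str
    start: float
    end: float

def estimate_fragment_intervals(caption_strings, fragments):
    """Estimate a time interval for each caption sentence fragment by
    spreading each caption string's interval evenly over its characters."""
    timeline = []  # approximate timestamp of every caption character
    for cs in caption_strings:
        n = max(len(cs.text), 1)
        step = (cs.end - cs.start) / n
        for i in range(n):
            timeline.append(cs.start + i * step)
    intervals, pos = [], 0
    for frag in fragments:
        start = timeline[pos]
        pos = min(pos + len(frag), len(timeline))
        intervals.append((start, timeline[pos - 1]))
    return intervals

def align_translated_sentences(translated_fragments, intervals):
    """Assign each translated fragment the interval estimated for its
    caption counterpart, then merge consecutive fragments into sentences
    whose interval spans all of their fragments."""
    sentences, current, start, end = [], [], None, None
    for frag, (s, e) in zip(translated_fragments, intervals):
        current.append(frag)
        start = s if start is None else start
        end = e
        if frag.rstrip().endswith((".", "!", "?")):  # sentence boundary
            sentences.append((" ".join(current), start, end))
            current, start, end = [], None, None
    if current:  # trailing fragment without terminal punctuation
        sentences.append((" ".join(current), start, end))
    return sentences
```

For example, a single caption string `"Hello world."` shown from 0.0 s to 2.0 s yields one fragment interval, and its translated counterpart `"Hola mundo."` becomes one translated sentence carrying that interval, ready to be aligned against the video's audio track.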

