The patent badge is an abbreviated version of the USPTO patent document. It covers the patent number, the dates the patent was filed and issued, the title, the applicant, inventors, assignee, attorney firm, primary and assistant examiners, the CPC classifications, and the abstract, and it links to the full patent document in PDF (Adobe Acrobat) format.

Date of Patent: Jul. 30, 2024
Filed: Jan. 30, 2020
Applicant: Nippon Telegraph and Telephone Corporation, Tokyo, JP
Inventors:
Hirokazu Kameoka, Musashino, JP;
Ko Tanaka, Musashino, JP;
Takuhiro Kaneko, Musashino, JP;
Nobukatsu Hojo, Musashino, JP

Attorney:
Primary Examiner:
Int. Cl.: G10L 21/007 (2013.01); G06F 17/16 (2006.01); G06N 20/20 (2019.01); G10L 25/30 (2013.01)
U.S. Cl.: CPC ... G10L 21/007 (2013.01); G06F 17/16 (2013.01); G06N 20/20 (2019.01); G10L 25/30 (2013.01)
Abstract

A conversion learning device includes:

- a source encoding unit that uses a first machine learning model to convert a feature amount sequence of a source domain (a characteristic of the conversion-source content data) into a first internal representation vector sequence: a matrix in which the internal representation vectors at the individual locations of the source-domain feature amount sequence are arranged;
- a target encoding unit that uses a second machine learning model to convert a feature amount sequence of a target domain (a characteristic of the conversion-target content data) into a second internal representation vector sequence: a matrix in which the internal representation vectors at the individual locations of the target-domain feature amount sequence are arranged;
- an attention matrix calculation unit that uses the first and second internal representation vector sequences to calculate an attention matrix, a matrix mapping the individual locations of the source-domain feature amount sequence to the individual locations of the target-domain feature amount sequence, and then calculates a third internal representation vector sequence: the product of an internal representation vector sequence, obtained by linear conversion of the first internal representation vector sequence, and the attention matrix;
- a target decoding unit that uses a third machine learning model to calculate, from the third internal representation vector sequence, a feature amount sequence of a conversion domain, which is used to convert the source domain into the conversion domain; and
- a learning execution unit that trains at least one of the target encoding unit and the target decoding unit so that the distance between a submatrix of the target-domain feature amount sequence and the corresponding submatrix of the conversion-domain feature amount sequence becomes shorter.
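The data flow in the abstract is easier to follow in code. Below is a minimal PyTorch sketch of that flow; the one-layer convolutional encoders and decoder, the scaled dot-product attention, the 80-dimensional features, and the L1 block loss are all illustrative assumptions, not details taken from the patent, which does not specify these internals here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConversionLearner(nn.Module):
    """Sketch of the abstract's data flow. A feature amount sequence is a
    (d_feat x T) matrix: one feature vector per location (time frame).
    All layer choices are illustrative assumptions, not the patent's."""

    def __init__(self, d_feat=80, d_model=256):
        super().__init__()
        # Source/target encoding units: feature sequence -> internal
        # representation vector (IRV) sequence, a (d_model x T) matrix.
        self.src_enc = nn.Conv1d(d_feat, d_model, kernel_size=5, padding=2)
        self.tgt_enc = nn.Conv1d(d_feat, d_model, kernel_size=5, padding=2)
        # Linear conversion applied to the first IRV sequence before the
        # product with the attention matrix.
        self.lin = nn.Linear(d_model, d_model)
        # Target decoding unit: third IRV sequence -> conversion-domain features.
        self.tgt_dec = nn.Conv1d(d_model, d_feat, kernel_size=5, padding=2)

    def forward(self, src_feats, tgt_feats):
        # src_feats: (B, d_feat, T_src); tgt_feats: (B, d_feat, T_tgt)
        h1 = self.src_enc(src_feats)   # first IRV sequence,  (B, d_model, T_src)
        h2 = self.tgt_enc(tgt_feats)   # second IRV sequence, (B, d_model, T_tgt)
        # Attention matrix mapping source locations to target locations;
        # scaled dot-product similarity is an assumed concrete choice.
        scores = torch.einsum('bds,bdt->bst', h1, h2) / h1.shape[1] ** 0.5
        attn = scores.softmax(dim=1)                       # (B, T_src, T_tgt)
        # Third IRV sequence: (linear conversion of first IRV) x attention matrix.
        v = self.lin(h1.transpose(1, 2)).transpose(1, 2)   # (B, d_model, T_src)
        h3 = torch.bmm(v, attn)                            # (B, d_model, T_tgt)
        return self.tgt_dec(h3)        # conversion-domain feature sequence

def submatrix_loss(tgt_feats, conv_feats, start, length):
    # Distance between matching submatrices (blocks of frames) of the
    # target and conversion feature sequences; L1 is an assumed metric.
    return F.l1_loss(conv_feats[:, :, start:start + length],
                     tgt_feats[:, :, start:start + length])

A training step would run the forward pass on paired source/target sequences and backpropagate submatrix_loss through the target encoder and decoder, which corresponds to the learning execution unit's role in the abstract.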

