The patent badge is an abbreviated version of the USPTO patent document. It covers the following fields: patent number, date the patent was issued, date the patent was filed, title of the patent, applicant, inventor, assignee, attorney firm, primary examiner, assistant examiner, CPCs, and abstract. The patent badge also contains a link to the full patent document in PDF format.
Patent No.:
Date of Patent: Nov. 17, 1992
Filed: Sep. 18, 1991
Inventors: Hynek Hermansky, Denver, CO (US); Louis A. Cox, Jr., Denver, CO (US)
Assignee: U S West Advanced Technologies, Inc., Boulder, CO (US)
Abstract
A method for synthesizing human speech using a linear mapping of a small set of coefficients that are speaker-independent. Preferably, the speaker-independent set of coefficients is cepstral coefficients developed during a training session using a perceptual linear predictive (PLP) analysis. A linear predictive all-pole model is used to develop corresponding formants and bandwidths to which the cepstral coefficients are mapped by using a separate multiple regression model for each of the five formant frequencies and five formant bandwidths. The dual analysis produces both the cepstral coefficients of the PLP model for the different vowel-like sounds and their true formant frequencies and bandwidths. The separate multiple regression models developed by mapping the cepstral coefficients into the formant frequencies and formant bandwidths can then be applied to cepstral coefficients determined for subsequent speech to produce corresponding formants and bandwidths used to synthesize that speech. Since less data are required for synthesizing each speech segment than in conventional techniques, a reduction in the required storage space and/or transmission rate for the data required in the speech synthesis is achieved. In addition, the cepstral coefficients for each speech segment can be used with the regressive model for a different speaker, to produce synthesized speech corresponding to the different speaker.
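The core mapping the abstract describes, a separate multiple regression model for each of the five formant frequencies and five formant bandwidths, can be sketched as ordinary least-squares fits. The sketch below is purely illustrative and is not the patent's implementation: the data are synthetic stand-ins, and the dimensions (5 cepstral coefficients, 200 training frames) are assumptions for the example, not figures from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in data (not from the patent): N training frames,
# each with C PLP cepstral coefficients and 10 measured targets
# (5 formant frequencies + 5 formant bandwidths).
N, C = 200, 5
cepstra = rng.normal(size=(N, C))
true_map = rng.normal(size=(C + 1, 10))          # hidden linear relation
design = np.hstack([cepstra, np.ones((N, 1))])   # append intercept column
targets = design @ true_map                      # synthetic formants/bandwidths

# One multiple-regression fit per target column: each of the 10 columns of
# `coeffs` is a separate linear model from cepstra to one formant or bandwidth.
coeffs, *_ = np.linalg.lstsq(design, targets, rcond=None)

# Apply the learned mapping to cepstra determined for subsequent speech.
new_cepstra = rng.normal(size=(3, C))
new_design = np.hstack([new_cepstra, np.ones((3, 1))])
predicted = new_design @ coeffs   # 3 frames -> 5 formants + 5 bandwidths each
```

Because the synthetic targets here are exactly linear in the cepstra, the least-squares solution recovers the hidden mapping; with real training speech the fit would instead minimize residual error between predicted and measured formant values.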