The patent badge is an abbreviated version of the USPTO patent document. It covers the following: patent number, date the patent was issued, date the patent was filed, title of the patent, applicant, inventor, assignee, attorney firm, primary examiner, assistant examiner, CPCs, and abstract. The patent badge also contains a link to the full patent document in Adobe Acrobat (PDF) format.
Patent No.:
Date of Patent:
Jul. 06, 1999
Filed:
Jun. 26, 1997
Inventors: Joel M Gould, Winchester, MA (US); Frank J McGrath, Wellesley, MA (US); Jed M Roberts, Newton, MA (US)
Assignee: Dragon Systems, Inc., Newton, MA (US)
Abstract
A computerized word recognition system, such as a speech recognition system, stores word models of a first and second set for each of a plurality of vocabulary words. The system has a user interface which enables a user to selectively prevent the use of a second set's word model for a selected word. Often the first set of word models are spelled word models, such as models represented by a sequence of phonetic component models, each of which is derived from similar speech sounds occurring in different words. In such systems the second set of word models are custom word models derived largely from word signals which are presumed to correspond only to the model's associated word. In many embodiments, the user interface allows a user to stop using a selected word's custom, or second set, model by selecting a menu or control window of the user interface. It is preferred that the system automatically create custom models when word signals presumed to correspond to a given word score poorly against the word's spelled model. It is also preferred that the system respond to a command to delete a word's custom model by increasing, in subsequent adaptive training of word components for the word's spelled model, the weight given to information from such subsequent word signals relative to model information previously associated with such word components. This is done, in such a case, to compensate for the fact that past training data for the word has probably been corrupted.
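The mechanism described in the abstract can be sketched in code. The following is a minimal illustrative sketch, not the patent's implementation: all class names, fields, and the specific weight-doubling factor are assumptions chosen to show the idea of a two-set model store, a per-word flag for disabling the custom model, and an increased adaptive-training weight after a custom model is deleted.

```python
# Illustrative sketch of the two-set word-model scheme described in the
# abstract. Names and the reweighting factor are hypothetical, not from
# the patent.
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class WordModels:
    spelled: str                 # spelled model built from phonetic components
    custom: Optional[str] = None  # custom model trained from word signals
    use_custom: bool = True       # user may disable the custom model


class Vocabulary:
    def __init__(self) -> None:
        self.words: Dict[str, WordModels] = {}
        # Per-word weight given to new word signals in adaptive training.
        self.train_weight: Dict[str, float] = {}

    def active_model(self, word: str) -> str:
        """Return the custom model if present and enabled, else the spelled model."""
        m = self.words[word]
        if m.custom is not None and m.use_custom:
            return m.custom
        return m.spelled

    def delete_custom(self, word: str) -> None:
        """Delete the custom model and boost the weight of subsequent
        word signals in adaptive training, since past training data for
        the word has probably been corrupted."""
        m = self.words[word]
        m.custom = None
        # Doubling is an arbitrary illustrative choice of "increased weight".
        self.train_weight[word] = self.train_weight.get(word, 1.0) * 2.0
```

A usage example: disabling the custom model via `use_custom` corresponds to the menu/control-window selection described in the abstract, while `delete_custom` models the deletion command with its training reweighting.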