The patent badge is an abbreviated version of the USPTO patent document. It covers the following: patent number, date the patent was issued, date the patent was filed, title of the patent, applicant, inventor, assignee, attorney firm, primary examiner, assistant examiner, CPCs, and abstract. The badge also links to the full patent document in PDF (Adobe Acrobat) format, which can be downloaded or printed.

Date of Patent:
Oct. 24, 2023

Filed:
May 26, 2021

Applicant:
Salesforce.com, Inc., San Francisco, CA (US)

Inventors:
Kazuma Hashimoto, Menlo Park, CA (US);
Caiming Xiong, Menlo Park, CA (US);
Richard Socher, Menlo Park, CA (US)

Assignee:
Salesforce, Inc., San Francisco, CA (US)

Attorney:
Primary Examiner:
Assistant Examiner:
Int. Cl.
G06N 3/04 (2023.01); G06N 3/08 (2023.01); G06F 40/30 (2020.01); G06F 40/205 (2020.01); G06F 40/216 (2020.01); G06F 40/253 (2020.01); G06F 40/284 (2020.01); G06N 3/063 (2023.01); G10L 15/18 (2013.01); G10L 25/30 (2013.01); G10L 15/16 (2006.01); G06F 40/00 (2020.01); G06N 3/084 (2023.01); G06N 3/044 (2023.01); G06N 3/045 (2023.01); G06N 3/047 (2023.01)

U.S. Cl.
CPC: G06N 3/04 (2013.01); G06F 40/205 (2020.01); G06F 40/216 (2020.01); G06F 40/253 (2020.01); G06F 40/284 (2020.01); G06F 40/30 (2020.01); G06N 3/044 (2023.01); G06N 3/045 (2023.01); G06N 3/047 (2023.01); G06N 3/063 (2013.01); G06N 3/08 (2013.01); G06N 3/084 (2013.01); G06F 40/00 (2020.01); G10L 15/16 (2013.01); G10L 15/18 (2013.01); G10L 25/30 (2013.01)
Abstract

The technology disclosed provides a so-called 'joint many-task neural network model' to solve a variety of increasingly complex natural language processing (NLP) tasks using growing depth of layers in a single end-to-end model. The model is successively trained by considering linguistic hierarchies, directly connecting word representations to all model layers, explicitly using predictions in lower tasks, and applying a so-called 'successive regularization' technique to prevent catastrophic forgetting. Three examples of lower level model layers are part-of-speech (POS) tagging layer, chunking layer, and dependency parsing layer. Two examples of higher level model layers are semantic relatedness layer and textual entailment layer. The model achieves the state-of-the-art results on chunking, dependency parsing, semantic relatedness and textual entailment.
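
As an illustration of the training scheme the abstract describes, below is a minimal PyTorch-style sketch, not the patented implementation: a shared encoder feeds word representations into stacked task heads, a lower task's predictions are passed to the next layer, and a "successive regularization" penalty keeps parameters close to a snapshot saved after training the previous task. All names (SimpleTagger, successive_reg, delta) and the toy data are illustrative assumptions.

import torch
import torch.nn as nn

class SimpleTagger(nn.Module):
    """Toy stand-in for a shared encoder with stacked per-task heads."""
    def __init__(self, vocab=100, dim=32, n_pos=10, n_chunk=5):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)               # word representations
        self.encoder = nn.LSTM(dim, dim, batch_first=True)  # shared layer
        self.pos_head = nn.Linear(dim, n_pos)                # lower-level task (POS)
        # The higher layer sees the encoder output *and* the lower task's predictions.
        self.chunk_head = nn.Linear(dim + n_pos, n_chunk)

    def forward(self, tokens):
        h, _ = self.encoder(self.embed(tokens))
        pos_logits = self.pos_head(h)
        chunk_in = torch.cat([h, pos_logits.softmax(-1)], dim=-1)
        return pos_logits, self.chunk_head(chunk_in)

def successive_reg(model, snapshot, delta=1e-2):
    """L2 penalty keeping parameters near the snapshot from the previous task."""
    return delta * sum((p - q).pow(2).sum()
                       for p, q in zip(model.parameters(), snapshot))

model = SimpleTagger()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
tokens = torch.randint(0, 100, (4, 7))       # toy batch: 4 sentences, 7 tokens each
pos_gold = torch.randint(0, 10, (4, 7))
chunk_gold = torch.randint(0, 5, (4, 7))

# Stage 1: train the lower-level (POS) layer.
pos_logits, _ = model(tokens)
loss = nn.functional.cross_entropy(pos_logits.flatten(0, 1), pos_gold.flatten())
opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: snapshot the parameters, then train the chunking layer with the
# successive-regularization penalty added to its loss.
snapshot = [p.detach().clone() for p in model.parameters()]
_, chunk_logits = model(tokens)
loss = nn.functional.cross_entropy(chunk_logits.flatten(0, 1), chunk_gold.flatten())
loss = loss + successive_reg(model, snapshot)
opt.zero_grad(); loss.backward(); opt.step()

In the model the abstract describes, training proceeds through the full linguistic hierarchy (POS tagging, chunking, dependency parsing, semantic relatedness, textual entailment); the two-stage loop above only illustrates the regularization mechanism that limits catastrophic forgetting between stages.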

