The patent badge is an abbreviated version of the USPTO patent document. It covers the following fields: patent number, date the patent was issued, date the patent was filed, title of the patent, applicant, inventors, assignee, attorney firm, primary examiner, assistant examiner, CPC classifications, and abstract. The badge also contains a link to the full patent document in Adobe Acrobat (PDF) format, which can be downloaded or printed.
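To make the field list concrete, the badge can be modeled as a simple record. The sketch below is a hypothetical Python dataclass; the field names (e.g. `pdf_link`) are illustrative and not an official USPTO schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PatentBadge:
    """Hypothetical model of the badge fields listed above."""
    patent_number: Optional[str] = None
    date_issued: Optional[str] = None
    date_filed: Optional[str] = None
    title: Optional[str] = None
    applicant: Optional[str] = None
    inventors: List[str] = field(default_factory=list)
    assignee: Optional[str] = None
    attorney: Optional[str] = None
    primary_examiner: Optional[str] = None
    assistant_examiner: Optional[str] = None
    cpcs: List[str] = field(default_factory=list)
    abstract: Optional[str] = None
    pdf_link: Optional[str] = None  # link to the full patent document (PDF)
```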

Date of Patent: Mar. 28, 2023
Filed: Aug. 18, 2020
Applicant: Salesforce.com, Inc., San Francisco, CA (US)
Inventors: Bryan McCann, Menlo Park, CA (US); Nitish Shirish Keskar, San Bruno, CA (US); Caiming Xiong, Mountain View, CA (US); Richard Socher, Menlo Park, CA (US)
Assignee: salesforce.com, inc., San Francisco, CA (US)
Attorney:
Primary Examiner:
Int. Cl. (CPC): G06F 40/30 (2020.01); G06N 3/08 (2006.01); G06N 5/04 (2006.01); G06N 3/04 (2006.01); G06F 40/56 (2020.01); G06F 16/242 (2019.01); G06F 16/33 (2019.01); G06F 16/332 (2019.01); G06N 20/20 (2019.01); G06N 20/10 (2019.01); G06N 20/00 (2019.01); G10L 15/16 (2006.01); G10L 15/18 (2013.01); G06N 3/044 (2023.01); G06N 3/045 (2023.01)
U.S. Cl. (CPC): G06F 40/30 (2020.01); G06F 16/243 (2019.01); G06F 16/3329 (2019.01); G06F 16/3334 (2019.01); G06F 16/3344 (2019.01); G06F 40/56 (2020.01); G06N 3/04 (2013.01); G06N 3/044 (2023.01); G06N 3/045 (2023.01); G06N 3/08 (2013.01); G06N 5/04 (2013.01); G06N 20/20 (2019.01); G06N 20/00 (2019.01); G06N 20/10 (2019.01); G10L 15/16 (2013.01); G10L 15/1822 (2013.01)
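The classification lists above follow a regular pattern, so they can be split apart mechanically. Below is a small sketch; the regex and function name are assumptions for illustration, not part of any USPTO tooling.

```python
import re

# Each classification entry looks like "G06F 40/30 (2020.01)":
# a section/class/subclass symbol, a main group/subgroup, and a version date.
CPC_ENTRY = re.compile(r"([A-H]\d{2}[A-Z])\s+(\d+/\d+)\s+\((\d{4}\.\d{2})\)")

def parse_cpc_list(text):
    """Split a semicolon-separated CPC list into (symbol, group, version) tuples."""
    return CPC_ENTRY.findall(text)

entries = parse_cpc_list("G06F 40/30 (2020.01); G06N 3/08 (2006.01)")
# -> [('G06F', '40/30', '2020.01'), ('G06N', '3/08', '2006.01')]
```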
Abstract

Approaches for multitask learning as question answering include an input layer for encoding a context and a question, a self-attention based transformer including an encoder and a decoder, a first bi-directional long-term short-term memory (biLSTM) for further encoding an output of the encoder, a long-term short-term memory (LSTM) for generating a context-adjusted hidden state from the output of the decoder and a hidden state, an attention network for generating first attention weights based on an output of the first biLSTM and an output of the LSTM, a vocabulary layer for generating a distribution over a vocabulary, a context layer for generating a distribution over the context, and a switch for generating a weighting between the distributions over the vocabulary and the context, generating a composite distribution based on the weighting, and selecting a word of an answer using the composite distribution.
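The abstract describes a pointer-generator style output step: a switch blends a distribution over the vocabulary with a distribution over the context, and the answer word is selected from the composite. The PyTorch sketch below illustrates only that blending step under assumed tensor shapes; names such as `composite_word_distribution` and `switch_logit` are illustrative, and this is a sketch of the general technique, not the patented implementation.

```python
import torch
import torch.nn.functional as F

def composite_word_distribution(vocab_logits, context_attention,
                                context_token_ids, switch_logit):
    """Blend a vocabulary distribution with a pointer distribution over the context.

    vocab_logits:       (batch, vocab_size) scores over the output vocabulary
    context_attention:  (batch, context_len) attention weights over context tokens
    context_token_ids:  (batch, context_len) long tensor of context token ids
    switch_logit:       (batch, 1) scalar controlling the vocabulary/context mix
    """
    p_vocab = F.softmax(vocab_logits, dim=-1)      # distribution over vocabulary
    gamma = torch.sigmoid(switch_logit)            # switch weight in (0, 1)

    # Scatter the attention weights onto vocabulary positions so both
    # distributions live in the same space (a pointer-style copy mechanism).
    p_context = torch.zeros_like(p_vocab)
    p_context.scatter_add_(1, context_token_ids, context_attention)

    p_final = gamma * p_vocab + (1.0 - gamma) * p_context  # composite distribution
    return p_final.argmax(dim=-1)                          # greedy word selection
```

Greedy argmax is used here for simplicity; a real decoder might instead sample from `p_final` or run beam search over it.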

