The patent badge is an abbreviated version of the USPTO patent document. It covers the patent number, issue date, filing date, title, applicant, inventors, assignee, attorney firm, primary examiner, assistant examiner, CPC classifications, and abstract, and includes a link to the full patent document in PDF format.

Date of Patent:
May 27, 2025

Filed:

Apr. 13, 2021
Applicant:

Microsoft Technology Licensing, LLC, Redmond, WA (US)

Inventors:

Benjamin David Van Durme, Baltimore, MD (US);

Adam D. Pauls, San Francisco, CA (US);

Daniel Louis Klein, Orinda, CA (US);

Eui Chul Shin, San Francisco, CA (US);

Christopher H. Lin, Bellevue, WA (US);

Pengyu Chen, Union City, CA (US);

Subhro Roy, Walnut Creek, CA (US);

Emmanouil Antonios Platanios, Pittsburgh, PA (US);

Jason Michael Eisner, Baltimore, MD (US);

Benjamin Lev Snyder, Bellevue, WA (US);

Samuel McIntire Thomson, Berkeley, CA (US);

Assignee:
Attorney:
Primary Examiner:
Assistant Examiner:
Int. Cl.
G06F 40/30 (2020.01); G06F 40/205 (2020.01); G06F 40/55 (2020.01); G06F 40/58 (2020.01);
U.S. Cl.
CPC ...
G06F 40/30 (2020.01); G06F 40/205 (2020.01); G06F 40/55 (2020.01); G06F 40/58 (2020.01);
Abstract

Systems and methods are provided for automatically generating a program based on a natural language utterance using semantic parsing. The semantic parsing includes translating a natural language utterance into instructions in a logical form for execution. The methods use a pre-trained natural language model and generate a canonical utterance as an intermediate form before generating the logical form. The natural language model may be an auto-regressive natural language model with a transformer to paraphrase a sequence of words or tokens in the natural language utterance. The methods generate a prompt including exemplar input/output pairs as a few-shot learning technique for the natural language model to predict words or tokens. The methods further use constrained decoding to determine a canonical utterance, iteratively checking sequences of words predicted by the model against rules for canonical utterances. The methods generate a program based on the canonical utterance for execution in an application.
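The two techniques named in the abstract — a few-shot prompt built from exemplar input/output pairs, and constrained decoding that only accepts model predictions permitted by a grammar of canonical utterances — can be sketched as follows. This is a minimal illustration, not the patented implementation: the grammar, the exemplar pairs, and the `score_next` stub (standing in for an auto-regressive transformer's next-token scores) are all hypothetical.

```python
# Hypothetical few-shot prompt: exemplar input/output pairs followed by
# the new utterance, as described in the abstract.
def build_prompt(exemplars, utterance):
    lines = [f"input: {x}\noutput: {y}" for x, y in exemplars]
    lines.append(f"input: {utterance}\noutput:")
    return "\n\n".join(lines)

# Toy grammar of canonical utterances (illustrative only).
ALLOWED = [
    ["create", "event", "tomorrow"],
    ["create", "reminder", "tomorrow"],
    ["delete", "event", "today"],
]

def allowed_next(prefix):
    """Tokens that keep the prefix inside the canonical-utterance grammar."""
    n = len(prefix)
    return {u[n] for u in ALLOWED if len(u) > n and u[:n] == prefix}

def score_next(prefix, token):
    """Stub for the language model's next-token score (higher = more likely).
    A real system would read these from a transformer's logits."""
    preferences = {"create": 2.0, "reminder": 1.5, "event": 1.0,
                   "tomorrow": 1.0, "delete": 0.5, "today": 1.0}
    return preferences.get(token, 0.0)

def constrained_decode():
    """Greedily take the model's best-scoring token among grammar-legal
    tokens, so the result is always a valid canonical utterance."""
    prefix = []
    while True:
        options = allowed_next(prefix)
        if not options:  # prefix is a complete canonical utterance
            return prefix
        prefix.append(max(options, key=lambda t: score_next(prefix, t)))

print(" ".join(constrained_decode()))  # → "create reminder tomorrow"
```

The key property shown here is that the grammar, not the model, has the final say: even a high-scoring token is discarded unless it extends the prefix toward a well-formed canonical utterance, which can then be translated into a logical form for execution.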

