The patent badge is an abbreviated version of the USPTO patent document. It covers the following: patent number, date the patent was issued, date the patent was filed, title of the patent, applicant, inventor, assignee, attorney firm, primary examiner, assistant examiner, CPCs, and abstract. The patent badge contains a link to the full patent document (in PDF format).
Patent No.:
Date of Patent: Jul. 7, 2020
Filed: Oct. 15, 2018
Applicant: Intel Corporation, Santa Clara, CA (US)
Inventors: Amrita Mathuriya, Portland, OR (US); Sasikanth Manipatruni, Portland, OR (US); Victor Lee, Santa Clara, CA (US); Huseyin Sumbul, Portland, OR (US); Gregory Chen, Portland, OR (US); Raghavan Kumar, Hillsboro, OR (US); Phil Knag, Hillsboro, OR (US); Ram Krishnamurthy, Portland, OR (US); Ian Young, Portland, OR (US); Abhishek Sharma, Hillsboro, OR (US)
Assignee: Intel Corporation, Santa Clara, CA (US)
Abstract
The present disclosure is directed to systems and methods of implementing a neural network using in-memory mathematical operations performed by pipelined SRAM architecture (PISA) circuitry disposed in on-chip processor memory circuitry. A high-level compiler may be provided to compile data representative of a multi-layer neural network model and one or more neural network data inputs from a first high-level programming language to an intermediate domain-specific language (DSL). A low-level compiler may be provided to compile the representative data from the intermediate DSL to multiple instruction sets in accordance with an instruction set architecture (ISA), such that each of the multiple instruction sets corresponds to a single respective layer of the multi-layer neural network model. Each of the multiple instruction sets may be assigned to a respective SRAM array of the PISA circuitry for in-memory execution. Thus, the systems and methods described herein beneficially leverage the on-chip processor memory circuitry to perform a relatively large number of in-memory vector/tensor calculations in furtherance of neural network processing without burdening the processor circuitry.
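The compilation flow described in the abstract (a high-level compile from a programming language to an intermediate DSL, a low-level compile from the DSL to per-layer ISA instruction sets, and assignment of each instruction set to its own SRAM array for in-memory execution) can be sketched roughly as follows. This is a minimal illustrative sketch under stated assumptions, not the patent's implementation; every name here (LayerIR, SramArray, high_level_compile, low_level_compile, deploy) is hypothetical.

```python
# Hypothetical sketch of the two-stage compilation flow described in the
# abstract. All names and data structures are illustrative assumptions,
# not part of the patent or of any real toolchain.

from dataclasses import dataclass, field
from typing import List


@dataclass
class LayerIR:
    """Intermediate-DSL representation of one neural-network layer."""
    name: str
    ops: List[str]


@dataclass
class SramArray:
    """Stand-in for one SRAM array of the PISA circuitry."""
    array_id: int
    instructions: List[str] = field(default_factory=list)

    def execute_in_memory(self, inputs):
        # Placeholder: real hardware would perform the vector/tensor
        # math inside the SRAM array itself.
        return inputs


def high_level_compile(model_layers: List[str]) -> List[LayerIR]:
    """Stage 1: high-level model description -> intermediate DSL."""
    return [LayerIR(name=layer, ops=[f"{layer}_op"]) for layer in model_layers]


def low_level_compile(layer_ir: LayerIR) -> List[str]:
    """Stage 2: intermediate DSL -> ISA instruction set for a single layer."""
    return [f"ISA:{op}" for op in layer_ir.ops]


def deploy(model_layers: List[str]) -> List[SramArray]:
    """Assign each per-layer instruction set to its own SRAM array."""
    arrays = []
    for i, layer_ir in enumerate(high_level_compile(model_layers)):
        arrays.append(SramArray(array_id=i,
                                instructions=low_level_compile(layer_ir)))
    return arrays


if __name__ == "__main__":
    # One SRAM array per layer, mirroring the one-instruction-set-per-layer
    # mapping described in the abstract.
    for array in deploy(["conv1", "relu1", "fc1"]):
        print(array.array_id, array.instructions)
```

The one-array-per-layer mapping in the sketch mirrors the abstract's statement that each instruction set corresponds to a single layer and is assigned to a respective SRAM array, so layer computation can proceed in the memory arrays without burdening the processor circuitry.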