
The patent badge is an abbreviated version of the USPTO patent document. It covers the following fields: patent number, date the patent was issued, date the patent was filed, title of the patent, applicant, inventors, assignee, attorney firm, primary examiner, assistant examiner, CPCs, and abstract. The badge also contains a link to the full patent document (PDF).

Date of Patent: Feb. 01, 2022

Filed: Sep. 28, 2018

Applicant: International Business Machines Corporation, Armonk, NY (US)

Inventors:
Brian Taba, Cupertino, CA (US);
Andrew S. Cassidy, San Jose, CA (US);
Myron D. Flickner, San Jose, CA (US);
Pallab Datta, San Jose, CA (US);
Hartmut Penner, San Jose, CA (US);
Rathinakumar Appuswamy, San Jose, CA (US);
Jun Sawada, Austin, TX (US);
John V. Arthur, Mountain View, CA (US);
Dharmendra S. Modha, San Jose, CA (US);
Steven K. Esser, San Jose, CA (US);
Jennifer Klamo, San Jose, CA (US)

Attorneys:
Primary Examiner:
Int. Cl.:
G06N 3/08 (2006.01); G06F 17/16 (2006.01); G06N 3/06 (2006.01); G06N 3/04 (2006.01)
U.S. Cl. CPC:
G06N 3/084 (2013.01); G06F 17/16 (2013.01); G06N 3/0454 (2013.01); G06N 3/06 (2013.01)
Abstract

Parallel processing among arrays of physical neural cores is provided. An array of neural cores is adapted to compute, in parallel, an output activation tensor of a neural network layer. A network is operatively connected to each of the neural cores. The output activation tensor is distributed across the neural cores. An input activation tensor is distributed across the neural cores. A weight tensor is distributed across the neural cores. Each neural core's computation comprises multiplying elements of a portion of the input activation tensor at that core with elements of a portion of the weight tensor at that core, and storing the summed products in a partial sum corresponding to an element of the output activation tensor. Each element of the output activation tensor is computed by accumulating all of the partial sums corresponding to that element via the network. The partial sums for each element of the output activation tensor are computed in a sequence of steps whose order is described by tracing a path through the weight tensor that visits every weight tensor element that contributes to any partial sum.
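The partial-sum scheme in the abstract can be sketched in plain Python. This is only an illustrative model, not the patented implementation: the core count, the slicing of the input dimension across cores, and all variable names below are assumptions made for the example. Each simulated "core" multiplies its slice of the input activations by its slice of the weight tensor, sums the products into one partial sum per output element, and the "network" accumulates the partial sums into the output activation tensor.

```python
import random

# Illustrative sketch only; n_cores and the row-interleaved slicing
# are assumptions for this example, not details from the patent.
n_cores = 4
n_in, n_out = 8, 3

random.seed(0)
x = [random.random() for _ in range(n_in)]                          # input activation tensor
W = [[random.random() for _ in range(n_out)] for _ in range(n_in)]  # weight tensor

def core_partial_sum(c):
    """Partial sums computed by core c over its portion of the tensors.

    Core c owns input rows c, c + n_cores, c + 2*n_cores, ...; it multiplies
    its input elements by its weight elements and sums the products into
    one partial sum per output element.
    """
    ps = [0.0] * n_out
    for i in range(c, n_in, n_cores):
        for j in range(n_out):
            ps[j] += x[i] * W[i][j]
    return ps

# Each core computes its partial sums in parallel (sequentially here);
# the network then accumulates all partial sums for each output element.
partials = [core_partial_sum(c) for c in range(n_cores)]
y = [sum(p[j] for p in partials) for j in range(n_out)]

# Reference: the same layer computed without distribution.
y_ref = [sum(x[i] * W[i][j] for i in range(n_in)) for j in range(n_out)]
```

Accumulating the per-core partial sums reproduces the undistributed matrix-vector product, which is the property the abstract's scheme relies on.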

