The patent badge is an abbreviated version of the USPTO patent document. It covers the following fields: patent number, date the patent was issued, date the patent was filed, title of the patent, applicant, inventor, assignee, attorney firm, primary examiner, assistant examiner, CPC classifications, and abstract. The badge also contains a link to the full patent document in PDF format.
Patent No.:
Date of Patent: Apr. 20, 2021
Filed: Jan. 24, 2019
Applicant: Southeast University, Jiangsu, CN
Inventors: Bo Liu, Jiangsu, CN; Yu Gong, Jiangsu, CN; Wei Ge, Jiangsu, CN; Jun Yang, Jiangsu, CN; Longxing Shi, Jiangsu, CN
Assignee: Southeast University, Jiangsu, CN
Abstract
The present invention relates to the field of analog integrated circuits and provides a multiply-accumulate calculation method and circuit suitable for neural networks, performing large-scale multiply-accumulate calculations at high speed and with low power consumption. The multiply-accumulate calculation circuit comprises a multiplication calculation circuit array and an accumulation calculation circuit. The multiplication calculation circuit array is composed of M groups of multiplication calculation circuits, each group consisting of one multiplication array unit and eight selection-shift units. The order of the multiplication array unit is quantized in real time through on-chip training to provide a shared input for the selection-shift units, increasing the operating rate and reducing power consumption. The accumulation calculation circuit is composed of a delay accumulation circuit, a TDC (time-to-digital converter) conversion circuit, and a shift-addition circuit in series. The delay accumulation circuit comprises eight controllable delay chains that dynamically control the number of iterations and accumulate data multiple times in the time domain, accommodating the differing calculation scales of different network layers, saving hardware storage space, reducing calculation complexity, and reducing data scheduling.
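To make the data flow concrete, the following is a minimal behavioral sketch in Python, not the patented analog circuit itself: each of the eight delay chains is modeled as a plain integer accumulator keyed to one weight bit, and the TDC and shift-addition stages collapse into a final weighted sum. The function name mac_behavioral and the assumption of unsigned 8-bit weights are illustrative only and do not appear in the patent.

```python
def mac_behavioral(xs, ws, n_bits=8):
    """Behavioral model of the select-shift MAC scheme: one accumulator
    per weight bit stands in for each of the eight delay chains."""
    chains = [0] * n_bits                 # stand-ins for the delay chains
    for x, w in zip(xs, ws):              # iterations = time-domain accumulation
        for b in range(n_bits):
            if (w >> b) & 1:              # selection-shift unit: pass x or 0
                chains[b] += x            # chain accumulates the shared input
    # A TDC would digitize each chain; the shift-addition circuit then
    # weights chain b by 2**b and sums, recovering the dot product.
    return sum(c << b for b, c in enumerate(chains))

# Sanity check: matches a direct dot product for unsigned 8-bit weights.
assert mac_behavioral([3, 5, 7], [2, 4, 6]) == 3*2 + 5*4 + 7*6
```

In this reading, decomposing each multiplication into per-bit select-and-shift steps is what lets one shared input drive eight simple units instead of a full multiplier, which is consistent with the abstract's claimed gains in operating rate and power consumption.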