The patent badge is an abbreviated version of the USPTO patent document. It covers the following: patent number, date the patent was issued, date the patent was filed, title of the patent, applicant, inventors, assignee, attorney firm, primary examiner, assistant examiner, CPC classifications, and abstract. The patent badge also contains a link to the full patent document in Adobe Acrobat (PDF) format.
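
Purely as an illustration, the badge fields listed above can be thought of as a small record type. The sketch below is a minimal Python model of that record; the field names, types, and defaults are assumptions made for this sketch, not an official USPTO schema.

from dataclasses import dataclass, field
from typing import List

# Hypothetical record modeling the fields shown in a patent badge.
# Names and types are illustrative assumptions, not a USPTO schema.
@dataclass
class PatentBadge:
    patent_number: str
    title: str
    date_issued: str                 # e.g. "Jan. 19, 2021"
    date_filed: str                  # e.g. "Jun. 21, 2018"
    applicant: str
    inventors: List[str] = field(default_factory=list)
    assignee: str = ""
    attorney_firm: str = ""
    primary_examiner: str = ""
    assistant_examiner: str = ""
    cpcs: List[str] = field(default_factory=list)
    abstract: str = ""
    pdf_link: str = ""               # link to the full patent document (PDF)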

Date of Patent: Jan. 19, 2021
Filed: Jun. 21, 2018
Applicant: Advanced Micro Devices, Inc., Santa Clara, CA (US)
Inventors: Marius Evers, Santa Clara, CA (US); Dhanaraj Bapurao Tavare, Santa Clara, CA (US); Ashok Tirupathy Venkatachar, Santa Clara, CA (US); Arunachalam Annamalai, Santa Clara, CA (US); Donald A. Priore, Boxborough, MA (US); Douglas R. Williams, Mountain View, CA (US)
Assignee: Advanced Micro Devices, Inc., Santa Clara, CA (US)
Attorney:
Primary Examiner:
Int. Cl.: G06F 9/38 (2018.01)
U.S. Cl. CPC: G06F 9/3824 (2013.01); G06F 9/3802 (2013.01); G06F 9/3808 (2013.01); G06F 9/3818 (2013.01); G06F 9/3844 (2013.01); G06F 9/3867 (2013.01)
Abstract

The techniques described herein provide an instruction fetch and decode unit having an operation cache with low latency in switching between fetching decoded operations from the operation cache and fetching and decoding instructions using a decode unit. This low latency is accomplished through a synchronization mechanism that allows work to flow through both the operation cache path and the instruction cache path until that work is stopped due to needing to wait on output from the opposite path. The existence of decoupling buffers in the operation cache path and the instruction cache path allows work to be held until that work is cleared to proceed. Other improvements, such as a specially configured operation cache tag array that allows for detection of multiple hits in a single cycle, also improve latency by, for example, improving the speed at which entries are consumed from a prediction queue that stores predicted address blocks.
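
To make the synchronization idea concrete, the following is a minimal, cycle-based sketch of the dual-path arrangement the abstract describes: predicted blocks are steered either to an op-cache path or to an instruction-cache/decode path, both paths run concurrently, and per-path decoupling buffers hold finished work until older work from the opposite path has drained, so dispatch stays in program order. All latencies, names, and structures here are illustrative assumptions, not details taken from the patent.

from collections import deque

OP_CACHE_LATENCY = 1   # assumed cycles to read decoded ops from the op cache
DECODE_LATENCY = 3     # assumed cycles to fetch and decode from the i-cache

def simulate(prediction_queue):
    """prediction_queue: list of (block_id, hits_op_cache) in program order."""
    pending = deque(enumerate(prediction_queue))   # (seq, (block, hit))
    in_flight = []                                 # (ready_cycle, seq, block, path)
    op_buffer, ix_buffer = deque(), deque()        # decoupling buffers per path
    next_to_dispatch, cycle, dispatched = 0, 0, []

    while pending or in_flight or op_buffer or ix_buffer:
        # Consume one prediction-queue entry per cycle and steer it to a path.
        if pending:
            seq, (block, hit) = pending.popleft()
            latency = OP_CACHE_LATENCY if hit else DECODE_LATENCY
            path = "op-cache" if hit else "decode"
            in_flight.append((cycle + latency, seq, block, path))

        # Work that finished its path lands in that path's decoupling buffer.
        still_flying = []
        for ready, seq, block, path in in_flight:
            if ready <= cycle:
                (op_buffer if path == "op-cache" else ix_buffer).append((seq, block))
            else:
                still_flying.append((ready, seq, block, path))
        in_flight = still_flying

        # Synchronization: a buffer releases work only when it holds the oldest
        # outstanding block, i.e. it may have to wait on the opposite path.
        progressed = True
        while progressed:
            progressed = False
            for buf in (op_buffer, ix_buffer):
                if buf and buf[0][0] == next_to_dispatch:
                    _, block = buf.popleft()
                    dispatched.append((cycle, block))
                    next_to_dispatch += 1
                    progressed = True
        cycle += 1
    return dispatched

# Example: block C hits the op cache but is held in its decoupling buffer
# until the older block B finishes the slower decode path.
print(simulate([("A", True), ("B", False), ("C", True), ("D", True)]))

In this toy run, block C completes the op-cache path early but sits in its decoupling buffer until block B drains the decode path, which is the waiting-on-the-opposite-path behavior the abstract attributes to the synchronization mechanism.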

