The patent badge is an abbreviated version of the USPTO patent document. It covers the following fields: patent number, date the patent was issued, date the patent was filed, title of the patent, applicant, inventor, assignee, attorney firm, primary examiner, assistant examiner, CPC classifications, and abstract. The badge also contains a link to the full patent document in PDF (Adobe Acrobat) format.
Patent No.:
Date of Patent: Jan. 22, 2008
Filed: Jun. 30, 2003
Applicant:
Scott C. Smith, Mansfield, MA (US);
Christopher J. Kappler, Waltham, MA (US);
Andrew T. Hebb, Harvard, MA (US);
Gregory S. Goss, Dunstable, MA (US);
Robert T. Olsen, Dublin, CA (US)
Inventors:
Scott C. Smith, Mansfield, MA (US);
Christopher J. Kappler, Waltham, MA (US);
Andrew T. Hebb, Harvard, MA (US);
Gregory S. Goss, Dunstable, MA (US);
Robert T. Olsen, Dublin, CA (US)
Assignee:
Cisco Technology, Inc., San Jose, CA (US)
Abstract
Conventional schedulers employ designs that allocate specific processor and memory resources, such as memory for configuration data and state data and scheduling-engine processor resources, to specific aspects of the scheduler, such as the layers of the scheduling hierarchy, each of which consumes dedicated processor and memory resources. A generic, iterative scheduling engine, applicable to an arbitrary scheduling hierarchy structure having a variable number of hierarchy layers, receives a scheduling hierarchy structure having a predetermined number of layers and allocates scheduling resources, such as instructions and memory, according to scheduling logic, in response to design constraints and processing considerations. The resulting scheduling logic processes the scheduling hierarchy in an iterative manner that allocates the available resources among the layers of the hierarchy, such that the scheduler achieves throughput requirements corresponding to enqueue and dequeue events with consideration to the number of layers in the scheduling hierarchy and the corresponding granularity of queuing.
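The idea in the abstract, an iterative engine that walks a scheduling hierarchy of arbitrary depth rather than dedicating logic to each fixed layer, can be illustrated with a minimal sketch. This is not the patented implementation; the class and function names (`Node`, `enqueue`, `dequeue`, `has_work`) and the choice of simple round-robin selection at each layer are assumptions made for illustration only.

```python
from collections import deque


class Node:
    """A node in the scheduling hierarchy: an interior node with
    children, or a leaf holding a packet queue."""

    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []  # interior nodes only
        self.queue = deque()            # leaf nodes only
        self.rr_index = 0               # round-robin position among children

    def is_leaf(self):
        return not self.children


def has_work(node):
    """True if any leaf beneath this node has a queued packet."""
    if node.is_leaf():
        return bool(node.queue)
    return any(has_work(c) for c in node.children)


def enqueue(leaf, packet):
    """Enqueue event: append a packet at a leaf queue."""
    leaf.queue.append(packet)


def dequeue(root):
    """Dequeue event: iterate down the hierarchy one layer per step,
    picking a non-empty child round-robin at each layer. The same loop
    handles any number of layers, so no logic is dedicated to a
    particular level of the hierarchy."""
    node = root
    while not node.is_leaf():
        chosen = None
        n = len(node.children)
        for i in range(n):
            cand = node.children[(node.rr_index + i) % n]
            if has_work(cand):
                chosen = cand
                node.rr_index = (node.rr_index + i + 1) % n
                break
        if chosen is None:
            return None  # nothing queued anywhere below this node
        node = chosen
    return node.queue.popleft() if node.queue else None


# Build a small three-layer hierarchy (root -> port -> leaf queues)
# and enqueue two packets; dequeue then alternates between the leaves.
leaf_a, leaf_b = Node("qA"), Node("qB")
port = Node("port", [leaf_a, leaf_b])
root = Node("root", [port])
enqueue(leaf_a, "p1")
enqueue(leaf_b, "p2")
```

Because the dequeue loop treats every layer identically, adding or removing a layer only changes the depth of the walk, which mirrors the abstract's point that resources are shared across a variable number of hierarchy layers instead of being dedicated per layer.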