The patent badge is an abbreviated version of the USPTO patent document. It covers the following fields: patent number, date the patent was issued, date the patent was filed, title of the patent, applicant, inventor, assignee, attorney firm, primary examiner, assistant examiner, CPCs, and abstract. The badge also contains a link to the full patent document in PDF (Adobe Acrobat) format, which can be downloaded or printed.
Patent No.:
Date of Patent: Oct. 07, 2003
Filed: Aug. 26, 1999
Inventors: Gregory Scott Althaus, Austin, TX (US);
Tai-Chien Daisy Chang, Austin, TX (US);
Herman Dietrich Dierks, Jr., Round Rock, TX (US);
Satya Prakesh Sharma, Round Rock, TX (US)
Assignee: International Business Machines Corporation, Armonk, NY (US)
Abstract
Network input processing is distributed to multiple CPUs on multiprocessor systems to improve network throughput and take advantage of MP scalability. Packets are received by the network adapter and are distributed to N receive buffer pools set up by the device driver, based on N CPUs being available for input processing of packets. Each receive buffer pool has an associated CPU. Packets are direct memory accessed to one of the N receive buffer pools by using a hashing function, which is based on the source MAC address, source IP address, or the packet's source and destination TCP port numbers, or all or a combination of the foregoing. The hashing mechanism ensures that the sequence of packets within a given communication session will be preserved. Distribution is effected by the network adapter, which sends an interrupt to the CPU corresponding to the receive buffer pool, subsequent to the packet being DMAed into the buffer pool. This optimizes the efficiency of the MP system by eliminating any reliance on the scheduler and increasing the bandwidth between the device driver and the network adapter, while maintaining proper packet sequences. Parallelism is thereby increased on network I/O processing, eliminating CPU bottleneck for high speed network I/Os and, thus, improving network performance.
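To make the flow-affinity idea in the abstract concrete, below is a minimal C sketch of the kind of hashing step it describes: hash the flow-identifying fields (source MAC, source IP, and TCP port numbers) and use the result to select one of N per-CPU receive buffer pools, so every packet of a given session lands in the same pool and keeps its order. All names, constants, and the choice of an FNV-1a hash are illustrative assumptions, not taken from the patent; in the patented design this selection is carried out by the network adapter, which DMAs the packet into the chosen pool and then interrupts the corresponding CPU.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define N_CPUS 4                 /* N receive buffer pools, one per CPU (hypothetical value) */

/* Flow-identifying fields the hash may draw on (illustrative layout). */
struct pkt_hdr {
    uint8_t  src_mac[6];
    uint32_t src_ip;
    uint16_t src_port;
    uint16_t dst_port;
};

/* FNV-1a byte mix; any hash that is stable per flow would do. */
static uint32_t fnv1a(uint32_t hash, const void *data, size_t len)
{
    const uint8_t *bytes = data;
    for (size_t i = 0; i < len; i++) {
        hash ^= bytes[i];
        hash *= 16777619u;
    }
    return hash;
}

/* Hash each field explicitly (avoids folding in struct padding bytes). */
static uint32_t flow_hash(const struct pkt_hdr *h)
{
    uint32_t hash = 2166136261u;
    hash = fnv1a(hash, h->src_mac, sizeof h->src_mac);
    hash = fnv1a(hash, &h->src_ip, sizeof h->src_ip);
    hash = fnv1a(hash, &h->src_port, sizeof h->src_port);
    hash = fnv1a(hash, &h->dst_port, sizeof h->dst_port);
    return hash;
}

/*
 * Pick the receive buffer pool (and therefore the CPU that will be
 * interrupted) for this packet. Because the hash depends only on
 * flow-identifying fields, all packets of a session map to the same
 * pool, preserving packet order within that session.
 */
static unsigned int select_pool(const struct pkt_hdr *h)
{
    return flow_hash(h) % N_CPUS;
}

int main(void)
{
    struct pkt_hdr pkt = {
        .src_mac  = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 },
        .src_ip   = 0xC0A80001u,   /* 192.168.0.1 */
        .src_port = 34567,
        .dst_port = 80,
    };
    printf("packet goes to receive buffer pool / CPU %u\n", select_pool(&pkt));
    return 0;
}

Because the pool index is a pure function of the flow fields, no scheduler involvement is needed to keep a session on one CPU, which is the property the abstract highlights.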