For cache replacement, we do not update the NIC cache in lockstep with the main
memory, since doing so would incur excessive overhead. Instead, we use a periodic
NIC cache replacement policy, exploiting the fact that the static content of a Web server
is updated very infrequently. The replacement period is a design parameter and is
examined in detail in the performance evaluation section.
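The periodic policy described above can be sketched as follows. This is a minimal illustration, not the actual NIC firmware logic: the class name, the hit-counting scheme, and the choice of "hottest files win" are assumptions made for the sketch; the source only specifies that the NIC cache is refreshed once per replacement period rather than on every main-memory update.

```python
class PeriodicNicCache:
    """Sketch of a periodic NIC cache replacement policy (hypothetical API).

    The NIC cache is not updated on every main-memory change; instead its
    contents are recomputed once per replacement period, which is acceptable
    because static Web content changes infrequently.
    """

    def __init__(self, capacity, period_s):
        self.capacity = capacity      # number of files the NIC memory can hold
        self.period_s = period_s      # replacement period (a design parameter)
        self.hits = {}                # per-file access counts since last refresh
        self.cached = set()           # files currently resident in NIC memory
        self.last_refresh = 0.0

    def record_access(self, name):
        """Count an access; counts drive the next periodic refresh."""
        self.hits[name] = self.hits.get(name, 0) + 1

    def maybe_replace(self, now):
        """If a full period has elapsed, reload the NIC cache with the
        most frequently accessed files and reset the counters."""
        if now - self.last_refresh < self.period_s:
            return False              # still within the current period
        hottest = sorted(self.hits, key=self.hits.get, reverse=True)
        self.cached = set(hottest[: self.capacity])
        self.hits.clear()
        self.last_refresh = now
        return True
```

Between refreshes the NIC cache may serve slightly stale content; the paper's argument is that this is tolerable for static files, and the period itself is tuned experimentally.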
Using the NIC data caching schemes, we expect performance improvement for
several reasons. First, retrieving data in the NIC memory saves the DMA transfer time.
If the requested data resides in the NIC memory of a remote server, only the header
needs to be transferred from the main memory to the NIC memory via DMA (see the
dotted line (8) in Figure 5.2). Second, the NIC cache can relieve the PCI bottleneck,
since less data is transferred between the main memory and the NIC. Third, the NIC
memory can serve as an expanded data cache under the exclusive cache placement policy.
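The first two benefits can be made concrete with a small sketch of the reply path. The helper names (`serve_request`, `dma_transfer`) and the dictionary-based caches are assumptions for illustration; the point carried over from the text is that on a NIC cache hit only the HTTP header crosses the PCI bus, while a main-memory hit must move the entire payload by DMA.

```python
def serve_request(path, main_cache, nic_cache, dma_transfer):
    """Sketch of serving a request with a NIC data cache (hypothetical helpers).

    Under exclusive placement a file resides in either the main-memory cache
    or the NIC cache, never both.  On a NIC hit only the header is DMAed
    from main memory; the payload is already in NIC memory.
    """
    header = b"HTTP/1.1 200 OK\r\n\r\n"   # built by the host CPU in main memory
    if path in nic_cache:
        dma_transfer(header)               # header only: payload DMA is avoided
        return header + nic_cache[path]
    body = main_cache[path]
    dma_transfer(header + body)            # full transfer over the PCI bus
    return header + body
```

The saved DMA transfer per NIC hit is exactly the payload size, which is why the scheme both shortens response latency and reduces pressure on the shared PCI bus.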
Since the NIC memory can be used for both local and remote reads, the proposed
scheme introduces two additional types of data communication: local NIC read and
remote NIC read. A local NIC read is required only by the exclusive caching scheme,
whereas a remote NIC read is required by both the inclusive and exclusive caching
schemes. Figure 5.4 depicts the processing steps of an HTTP request when it is served
by a local NIC read.
The requested data is read from the local NIC cache when it resides in the NIC cache of
the initial node but not in the main memory cache. As explained in the previous section,
processing of an HTTP request begins when the request arrives at the Ethernet NIC (Step
1). Steps 2 to 5 are the same as those of the local cache read in Figure 5.1. To fetch
the requested file from the NIC cache, the DMA engine in the NIC transfers the data