Our simulator consists of three components: the NIC processor, NIC memory, and the DMA engine. The DMA engine transfers files from main memory to the NIC over a PCI bus.
The client module reads requests from a trace, starting from a randomly selected
location in the trace file. A client sends the next request as soon as the previous request
completes. Client requests arrive at a content-oblivious layer-4 web switch and are
then dispatched to one of the back-end nodes in the cluster in a round-robin fashion.
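The trace-replay and round-robin dispatch behavior described above can be sketched as follows. This is a minimal illustration; the function name, trace format, and node identifiers are assumptions for the example, not our simulator's actual interface.

```python
import random
from itertools import cycle

def dispatch_requests(trace_lines, backend_nodes, num_requests):
    """Replay requests from a random start point in the trace and assign
    each one to the next back-end node in round-robin order."""
    start = random.randrange(len(trace_lines))      # random start location in the trace
    nodes = cycle(backend_nodes)                    # rotate over back-end nodes
    assignments = []
    for i in range(num_requests):
        # The layer-4 switch is content-oblivious: the request body is
        # never inspected, only forwarded to the next node in rotation.
        request = trace_lines[(start + i) % len(trace_lines)]
        assignments.append((request, next(nodes)))
    return assignments
```

In this model a client issues its next request only after the previous one completes, so `num_requests` here stands in for the length of one client's replayed session.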
Table 5.5 summarizes the system and network parameters (and their values)
used in our simulation. In the table, S denotes the file size in KBytes. The disk
access time is adopted from the specification of the Seagate ST340016A hard disk
drive. We calculated the disk access time as the sum of the disk seek time, disk
controller overhead, rotational latency, and disk transfer time (at 150 MBytes/sec).
The NIC-to-host, host-to-NIC, and NIC-to-NIC latency values are obtained from experiments
on our prototype implementation. The network parameters are obtained from measurements
on an 8-node Linux cluster. Opening a TCP connection between the cluster
and a client costs 0.145 ms, while the cache read time and network latency depend
on the message size. To explore the impact of the memory bus and the I/O bus on
performance, we used the memory-bus and PCI-bus parameters of our cluster.
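The disk access time computation described above (seek time, plus disk controller overhead, plus latency, plus transfer time at 150 MBytes/sec) can be sketched as a function of the file size S. The per-component timing values below are placeholders for illustration only, not the actual Table 5.5 entries or the ST340016A specification figures.

```python
def disk_access_time_ms(file_size_kb,
                        seek_ms=8.5,            # placeholder seek time
                        controller_ms=0.3,      # placeholder controller overhead
                        rotational_ms=4.16,     # placeholder rotational latency
                        transfer_mb_per_s=150.0):
    """Disk access time = seek + controller overhead + rotational latency
    + transfer time, with the transfer modeled at 150 MBytes/sec."""
    # Transfer time: (size in MBytes) / (MBytes per second), converted to ms.
    transfer_ms = (file_size_kb / 1024.0) / transfer_mb_per_s * 1000.0
    return seek_ms + controller_ms + rotational_ms + transfer_ms
```

Only the transfer term scales with S; the seek, controller, and rotational components are fixed per access, which is why small-file workloads are dominated by the fixed overheads rather than the transfer rate.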