local disk. Thus, NIC memory can be used to cache the more frequently accessed data from
the disk (2 in Figure 5.3). The rationale for this scheme is that NIC memory access
latency is 20-30 times lower than the disk access time. For this design, it is also easy
to see that the data placed in the NIC cache should be kept exclusive of the data cached in
the main memory. Thus, this design is called an exclusive caching scheme. The second
design makes frequently accessed data available (via backend forwarding) to other
remote nodes (1 in Figure 5.3). The rationale for this scheme is that the most frequently
accessed or least recently requested (LRU) data is more likely to incur cache misses
at other nodes, assuming that the most heavily accessed files are almost identical across
the nodes. Thus, remote nodes can use the backend forwarding scheme to access the
data from the NIC cache. Since the data cached in the NIC cache is part of the main
memory cache, we call this design an inclusive caching scheme. These two designs
may yield different performance gains depending on the data access patterns of the
server nodes.
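To make the exclusive scheme concrete, the sketch below models the NIC cache as a victim cache that sits between the main-memory cache and the disk: a block evicted from main memory is demoted into NIC memory, so the two caches never hold the same block. This is an illustrative simplification, not the book's implementation; the class name, sizes, and LRU policy are assumptions for the example.

```python
from collections import OrderedDict

class ExclusiveNicCache:
    """Sketch of the exclusive scheme: the NIC cache holds only blocks
    that are NOT in the main-memory cache, acting as a victim cache
    between main memory and the (much slower) disk."""

    def __init__(self, mem_size, nic_size):
        self.mem = OrderedDict()   # main-memory cache (LRU order)
        self.nic = OrderedDict()   # NIC cache (LRU order), disjoint from mem
        self.mem_size = mem_size
        self.nic_size = nic_size

    def read(self, block):
        if block in self.mem:            # main-memory hit
            self.mem.move_to_end(block)
            return "mem"
        if block in self.nic:            # NIC hit: promote back to main memory
            self.nic.pop(block)
            self._insert_mem(block)
            return "nic"
        self._insert_mem(block)          # miss everywhere: read from disk
        return "disk"

    def _insert_mem(self, block):
        self.mem[block] = True
        if len(self.mem) > self.mem_size:
            victim, _ = self.mem.popitem(last=False)
            self.nic[victim] = True      # demote LRU victim to NIC cache
            if len(self.nic) > self.nic_size:
                self.nic.popitem(last=False)
```

Under the inclusive scheme, by contrast, the NIC cache would hold copies of the hottest main-memory blocks so that remote nodes can fetch them via backend forwarding without involving the host; the two caches then overlap rather than stay disjoint.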
[Figure: the total data set partitioned among the caches. The main memory cache holds the most recently accessed files; the NIC cache holds the most recently accessed files except those in the main memory cache; the remainder resides on disk.]
Fig. 5.3. Cache Placement Policy