expected, the latency of all request types in the three models increases, but especially the
latencies of local cache read and remote cache read of the dcs model increase sharply.
Only the adaptive model maintains approximately the same latency at both points.
Figure 3.7 shows the breakdown of the accumulated latency and throughput of
the three models. The leftmost bar presents the results of press via, the middle bar those of
dcs, and the rightmost bar those of adaptive. The first observation from
Figure 3.7 (a) is that the accumulated latencies for the disk accesses of the three models
are relatively small and the differences among them are not significant. This is because most
requests are served as cache reads (remote or local), and only a small portion of them needs
to access the disk.
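The claim that few requests reach the disk can be illustrated with a toy simulation. This is a minimal sketch, not the authors' simulator: it assumes a 90/10 access pattern (the most popular 10% of files receive 90% of requests), that the hot files are replicated on every node, and that a further share of files resides in some remote node's cache. All names and parameters are hypothetical.

```python
import random

def simulate_requests(n_files=1000, n_requests=100_000, seed=0):
    """Classify requests as local cache read, remote cache read, or disk
    access under an assumed 90/10 popularity skew."""
    rng = random.Random(seed)
    hot_cutoff = n_files // 10       # top 10% of files, replicated on every node
    cached_cutoff = n_files // 2     # assume half the files fit in the cluster's caches
    local = remote = disk = 0
    for _ in range(n_requests):
        # 90% of requests target the hot set, 10% the remaining files
        if rng.random() < 0.9:
            f = rng.randrange(hot_cutoff)
        else:
            f = rng.randrange(hot_cutoff, n_files)
        if f < hot_cutoff:
            local += 1               # served from the node's own cache
        elif f < cached_cutoff:
            remote += 1              # fetched from another node's cache
        else:
            disk += 1                # must go to disk
    return local, remote, disk
```

Under these assumptions roughly 90% of requests complete as local cache reads and well under 10% touch the disk, which is consistent with the observation that accumulated disk-access latencies stay small.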
Second, the accumulated latencies of local cache read are relatively small and the
difference between the three Web server models is less significant compared to that of
remote cache read. Figure 3.7 (b) shows that the number of requests completed by
local cache read and remote cache read is similar. The reason that many requests can be
served as local cache reads lies in the characteristics of the Web content access pattern:
10% of the files accessed on the server typically account for 90% of the server requests
and 90% of the bytes transferred. Thus, replicating the most popular Web content
on the servers of the cluster helps serve many requests as local cache reads. However,
since the required service time for a local cache read request is much shorter than that of
a remote cache read, there is a significant difference in the accumulated latencies of these
two request types. The press via and adaptive models have similar accumulated local cache read
latencies, while the response time of dcs continues to increase, as the incoming requests