results of dcs block soars at the point k = 8. Since the sending and receiving processes
of dcs block always block themselves to yield the CPU to other processes, the response
time of remote cache read requests becomes very long.
Figure 3.6 (b) shows the throughput results for the 32 node configuration. The
first observation from this figure is that the dcs block model yields the worst performance,
although for light loads all four models have similar throughput. This is because
remote cache read becomes the performance bottleneck in the dcs block model due
to the frequent blocking of the communicating processes. The other three models, dcs,
press via and adaptive, show almost the same throughput over the entire workload
except at k = 3, where the dcs model experiences a throughput deterioration. At this
point, many remote cache read requests in dcs compete for the CPU by preemption, so
CPU time is wasted on frequent interrupts and context switches. In contrast, the
adaptive model sustains better throughput over the entire workload range.
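The trade-off between the dcs and dcs block extremes can be pictured as a two-phase wait: the requesting process spins briefly in the hope that the remote cache reply arrives without a context switch, and blocks only when the wait grows long. The sketch below is one plausible realization of such an adaptive wait, not the implementation of the evaluated systems; the names wait_for_reply, reply_ready and SPIN_LIMIT are illustrative assumptions.

/*
 * Minimal sketch of a spin-then-block (two-phase) wait, assuming the
 * adaptive policy falls back to blocking only after a bounded busy-wait.
 * All identifiers below are hypothetical.
 */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define SPIN_LIMIT 1000          /* assumed bound on the busy-wait phase */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int reply_ready = 0;      /* set when the remote cache reply arrives */

/* Requesting side: spin briefly, then block so the CPU is yielded. */
static void wait_for_reply(void)
{
    for (int i = 0; i < SPIN_LIMIT; i++) {
        if (__atomic_load_n(&reply_ready, __ATOMIC_ACQUIRE))
            return;              /* reply arrived while spinning: no context switch */
        sched_yield();
    }
    pthread_mutex_lock(&lock);   /* long wait: give up the CPU entirely */
    while (!reply_ready)
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
}

/* Replying side, e.g. the handler that receives the remote cache reply. */
static void *deliver_reply(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    __atomic_store_n(&reply_ready, 1, __ATOMIC_RELEASE);
    pthread_cond_signal(&cond);  /* wake the requester if it already blocked */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, deliver_reply, NULL);
    wait_for_reply();
    pthread_join(t, NULL);
    puts("remote cache reply received");
    return 0;
}

Under light load the spin phase usually succeeds, avoiding the context switches that hurt dcs block; under heavy load the blocking phase avoids the wasted CPU time that hurts dcs.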
Table 3.4. Average Response Time of Each Request

                          k = 10                        k = 5
Parameter       press via    dcs    adaptive   press via    dcs    adaptive
local cache        1.27      1.74     1.19        2.44      6.52     2.01
remote cache      29.35      6.91     7.35       34.11     23.48     7.12
disk              22.32     22.33    20.95       27.20     27.79    22.66
Next, we further analyze the impact of the coscheduling technique by breaking
down the latency and throughput results. Table 3.4 shows the average response time
of each request.