
Performance results for a realistic workload

In the last set of experiments we evaluate the performance of the two content-aware algorithms (CAP and LARD, described in Section 3) that are integrated into the ClubWeb-1w switching mechanism. Unlike the previous stress tests, we now consider a more realistic workload that takes into account user sessions, user think time, embedded objects per Web page, and reasonable file sizes and popularity. The workload model consists of a mix of static and dynamic documents. We used the httperf tool [20] (version 0.8) as the basic benchmarking tool, which we modified to include the features reported in Table 1. Another modification adds the possibility of measuring percentiles in addition to mean values. Since the workload model is characterized by heavy-tailed distributions, the 90th percentile of the response time is a more accurate measure of system and algorithm behavior than the mean.
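To illustrate why percentiles are reported, the following minimal sketch (ours, not part of the modified httperf; the sample sizes and the Pareto scale parameter k = 0.05 are hypothetical) compares the mean and the 90th percentile of heavy-tailed response-time samples. With Pareto-like tails the mean is dominated by a few very slow pages, while the 90th percentile tracks the experience of most users.

import random
import statistics

def pareto_sample(alpha, k):
    """Draw one Pareto(alpha, k) variate by inverse-transform sampling."""
    u = 1.0 - random.random()          # u in (0, 1], avoids division by zero
    return k / (u ** (1.0 / alpha))

def percentile(samples, p):
    """Return the p-th percentile (0 < p < 100) using the nearest-rank method."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100.0 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical page response times (seconds) with a heavy Pareto tail.
times = [pareto_sample(alpha=1.4, k=0.05) for _ in range(10_000)]

print(f"mean            = {statistics.mean(times):.3f} s")
print(f"90th percentile = {percentile(times, 90):.3f} s")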
Table 1: Workload model for static requests.

Category | Distribution | PDF | Range | Parameters
Requests per session | Inverse Gaussian | $\sqrt{\frac{\lambda}{2 \pi x^3}}\, e^{\frac{-\lambda(x - \mu)^2}{2 \mu^2 x}}$ | $x > 0$ | $\mu = 3.86$, $\lambda = 9.46$
User think time | Pareto | $\alpha k^{\alpha} x^{-\alpha-1}$ | $x \geq k$ | $\alpha = 1.4$, $k = 1$
Objects per page request | Pareto | $\alpha k^{\alpha} x^{-\alpha-1}$ | $x \geq k$ | $\alpha = 1.33$, $k = 2$
HTML object size (body) | Lognormal | $\frac{1}{x \sqrt{2 \pi \sigma^2}} e^{\frac{-(\ln x-\mu)^2}{2\sigma^2}}$ | $x > 0$ | $\mu = 7.630$, $\sigma = 1.001$
HTML object size (tail) | Pareto | $\alpha k^{\alpha} x^{-\alpha-1}$ | $x \geq k$ | $\alpha = 1$, $k = 10240$
Embedded object size | Lognormal | $\frac{1}{x \sqrt{2 \pi \sigma^2}} e^{\frac{-(\ln x-\mu)^2}{2\sigma^2}}$ | $x > 0$ | $\mu = 8.215$, $\sigma = 1.46$

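The distributions of Table 1 can be sampled with standard inverse-transform and library routines. The sketch below (a rough illustration, not part of the modified httperf; the splicing rule that joins the lognormal body and the Pareto tail of the HTML size is our assumption, since Table 1 does not specify it) draws the per-session quantities used by the static part of the workload.

import random

# Parameters from Table 1.
SESSION_MU, SESSION_LAMBDA = 3.86, 9.46      # requests per session (Inverse Gaussian)
THINK_ALPHA, THINK_K = 1.4, 1.0              # user think time (Pareto)
OBJ_ALPHA, OBJ_K = 1.33, 2.0                 # embedded objects per page (Pareto)
HTML_MU, HTML_SIGMA = 7.630, 1.001           # HTML object size, lognormal body
HTML_TAIL_ALPHA, HTML_TAIL_K = 1.0, 10240.0  # HTML object size, Pareto tail
EMB_MU, EMB_SIGMA = 8.215, 1.46              # embedded object size (lognormal)

def pareto(alpha, k):
    """Inverse-transform sampling of a Pareto(alpha, k) variate (x >= k)."""
    u = 1.0 - random.random()                # u in (0, 1]
    return k / (u ** (1.0 / alpha))

def inverse_gaussian(mu, lam):
    """Sample an Inverse Gaussian variate (Michael-Schucany-Haas method)."""
    nu = random.gauss(0.0, 1.0)
    y = nu * nu
    x = mu + (mu * mu * y) / (2 * lam) - (mu / (2 * lam)) * (
        (4 * mu * lam * y + mu * mu * y * y) ** 0.5)
    if random.random() <= mu / (mu + x):
        return x
    return mu * mu / x

def html_size():
    """Lognormal body with a Pareto tail above 10 KB (splicing rule assumed)."""
    size = random.lognormvariate(HTML_MU, HTML_SIGMA)
    if size > HTML_TAIL_K:
        size = pareto(HTML_TAIL_ALPHA, HTML_TAIL_K)
    return size

requests_in_session = max(1, round(inverse_gaussian(SESSION_MU, SESSION_LAMBDA)))
think_time = pareto(THINK_ALPHA, THINK_K)
objects_per_page = round(pareto(OBJ_ALPHA, OBJ_K))
print(requests_in_session, think_time, objects_per_page,
      html_size(), random.lognormvariate(EMB_MU, EMB_SIGMA))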
The dynamic portion of the workload is implemented by means of CGI executables. We consider the following two workload scenarios, which stress the CPU and the disk, respectively.
CPU-bound model: 40% static requests, 40% lightly dynamic requests, and 20% heavily dynamic requests that mainly stress the CPU. This scenario emulates CPU-bound services, such as secure browsing.
Disk-bound model: 40% static requests, 40% lightly dynamic requests, and 20% heavily dynamic requests that mainly stress the disk.
The request for a page includes the request for the base HTML file and for a number of embedded objects that follow a Pareto distribution, as in Table 1. There is a probability of 0.4 and 0.2 that an embedded object corresponds to a lightly dynamic or a heavily dynamic request, respectively. The 90th percentile of page response times and the throughput in Mbps for increasing offered load are reported in Figure 9 and Figure 10, respectively.
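A minimal sketch of how one page request could be generated under this model (our illustration; the URL patterns and function names are hypothetical): each page has one base HTML file plus a Pareto-distributed number of embedded objects, and each embedded object is classified as lightly dynamic with probability 0.4, heavily dynamic with probability 0.2, and static otherwise.

import random

def pareto(alpha, k):
    """Inverse-transform sampling of a Pareto(alpha, k) variate (x >= k)."""
    u = 1.0 - random.random()
    return k / (u ** (1.0 / alpha))

def generate_page_request():
    """Build one page request: a base HTML file plus its embedded objects."""
    n_embedded = round(pareto(alpha=1.33, k=2.0))        # objects per page, Table 1
    objects = ["/index.html"]                            # base HTML file
    for i in range(n_embedded):
        r = random.random()
        if r < 0.4:
            objects.append(f"/cgi-bin/light.cgi?obj={i}")   # lightly dynamic (p = 0.4)
        elif r < 0.6:
            objects.append(f"/cgi-bin/heavy.cgi?obj={i}")   # heavily dynamic (p = 0.2)
        else:
            objects.append(f"/static/obj{i}.gif")           # static (p = 0.4)
    return objects

print(generate_page_request())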
Figure 9: Mean response times. [figure: p740-realrt.eps]
Figure 10: Throughput in Mbps. [figure: p740-realthr.eps]
The first important observation is that in all tests the CPU utilization of the Web switch never exceeded 0.5; hence the capacity limit of the Web cluster under realistic load is due to the servers and/or to the network. This is especially true for the disk-bound workload model. If we compare the content-aware algorithms, CAP is clearly better than LARD for every workload model and offered load. This result was partially expected, because LARD aims to maximize cache hit rates and achieves its best results when the workload consists of requests for static files. In the workload models considered here, only 40% of the requests are cacheable; hence CAP confirms the simulation results obtained in [7], where it proved to be the best policy when the load is highly heterogeneous.
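To make the comparison concrete, the following simplified sketch (ours, not the ClubWeb-1w implementation; the server names, URL-based classification, and omission of LARD's load-balancing refinements are all assumptions) contrasts the two dispatching ideas: LARD routes a request to the server that already served the same content, to maximize cache hits, while CAP classifies requests by the resource they are expected to stress and assigns each class to servers in round-robin fashion, so that no server accumulates only heavy requests.

import itertools

SERVERS = ["ws1", "ws2", "ws3", "ws4"]

# --- LARD-like dispatching: route by requested content to favor cache locality ---
lard_assignment = {}                  # URL -> server chosen the first time it was seen
lard_rr = itertools.cycle(SERVERS)

def lard_dispatch(url):
    """Send repeated requests for the same URL to the same server."""
    if url not in lard_assignment:
        lard_assignment[url] = next(lard_rr)   # (real LARD also checks server load; omitted)
    return lard_assignment[url]

# --- CAP-like dispatching: round-robin within each request class ---
cap_rr = {cls: itertools.cycle(SERVERS) for cls in ("static", "light", "heavy")}

def classify(url):
    """Very rough classification by URL pattern (an assumption for this sketch)."""
    if "heavy" in url:
        return "heavy"
    if "cgi" in url:
        return "light"
    return "static"

def cap_dispatch(url):
    """Spread each class of requests evenly across the servers."""
    return next(cap_rr[classify(url)])

for url in ["/index.html", "/cgi-bin/light.cgi", "/cgi-bin/heavy.cgi", "/index.html"]:
    print(url, "-> LARD:", lard_dispatch(url), " CAP:", cap_dispatch(url))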