Fine Grained Content-based Adaptation Mechanism for Providing High End-User Quality of Experience with Adaptive Hypermedia Systems


Cristina Hava Muntean

Performance Engineering Laboratory
Dublin City University
Glasnevin, Dublin 9, Ireland

havac@eeng.dcu.ie
Jennifer McManis

Performance Engineering Laboratory
Dublin City University
Glasnevin, Dublin 9, Ireland

mcmanisj@eeng.dcu.ie



ABSTRACT

New communication technologies can enable Web users to access personalised information "anytime, anywhere". However, the network environments allowing this "anytime, anywhere" access may have widely varying performance characteristics such as bandwidth, level of congestion, mobility support, and cost of transmission. It is unrealistic to expect that the quality of delivery of the same content can be maintained in this variable environment, but rather an effort must be made to fit the content served to the current delivery conditions, thus ensuring high Quality of Experience (QoE) to the users. This paper introduces an end-user QoE-aware adaptive hypermedia framework that extends the adaptation functionality of adaptive hypermedia systems with a fine-grained content-based adaptation mechanism. The proposed mechanism attempts to take into account multiple factors affecting QoE in relation to the delivery of Web content. Various simulation tests investigate the performance improvements provided by this mechanism, in a home-like, low bit rate operational environment, in terms of access time per page, aggregate access time per browsing session and quantity of transmitted information.

Categories & Subject Descriptors

K.3.1 [Computer Uses in Education]: distance education; C.4 [Performance of Systems]: performance attributes, measurement and modeling techniques

General Terms

Algorithms, Measurement, Performance, Human Factors

Keywords

Adaptive Hypermedia, end-user Quality of Experience, content-based adaptation mechanism

1. INTRODUCTION

It has long been acknowledged that Web users may have different perceptions of the same factors, may seek different information or may have special needs (e.g. disabled people). Web users also differ in skills, aptitudes, goals and preferences for processing accessed information. The goal of Adaptive Hypermedia Systems (AHS) is to optimise the user experience with the online material by personalising the Web content and the navigational structure to suit the user's individual requirements. A key requirement for innovative AHS is the delivery of high quality Web content. This involves providing not only relevant information, but presenting it in an appealing format as well.


In the context of new communication technologies that have been launched on the market, personalised Web content providers are becoming more interested in the delivery of content over a variety of networks. Wider access to broadband, WiFi and 3G mobile networks is making Web content more attractive for Web users to access "anytime and anywhere". At the same time, the latest wireless technologies and the deployment of a large number of wireless-enabled devices (e.g. laptop, tabletPC, PDA, smart phone) enable an increasing number of Web users to access e-services from any device. Many of the latest devices also enable access to multiple networks at the same time. These networks differ in characteristics such as bandwidth, level of congestion, state of the network (which may dynamically change over a navigational session), mobility support, and cost of transmission. These characteristics may affect the delivery of the information and the user's perception of the service provided, thus decreasing the end-user QoE.

In this new and dynamic high-tech communication environment the providers of personalised Web content have to ensure that the users have a positive experience (also called end-user Quality of Experience - QoE [1]) when using their systems and that they are happy to re-use them. Therefore AHS are expected to provide not only efficient material that suits the users' needs, but also a better integration of this material with the user's operational environment and with the network characteristics that may affect the Quality of Experience of Web users.

Currently the adaptive hypermedia research places very little emphasis on end-user QoE. This QoE-unaware approach is perhaps unsuited to the general Web browsing environment made possible by advances in technology. For instance, one can imagine a person with a laptop moving from a low bandwidth home connection, to a higher bandwidth office connection, and potentially to public transport with a mobile connection of widely varying bandwidth. It is unrealistic to expect that the quality of delivery of the same content can be maintained in this variable environment; rather, an effort must be made to fit the content served to the current delivery conditions. It should be noted that some AHS have taken into consideration some performance features (e.g. device capabilities, the type of access, state of the network, etc.) in order to improve the end-user QoE. For example, the GUIDE [2] system considers hand-held units as tools for navigation and display of an adaptive tourist guide. INTRIGUE [3], a tourist information system that assists the user in the organization of a tour, provides personalised information that can be displayed on WAP phones. Merida et al. considered the HTTP protocol, the type of access and the server load in the design of SHAAD [4]. However, these account for only a limited range of factors affecting QoE.

This paper analyses a new QoE-based content adaptation strategy that enhances the functionality of a classic AHS and aims to improve the end-user QoE. The novel end-user QoE-aware adaptive hypermedia framework is described in Section 2. Next, the proposed content-based adaptation mechanism is illustrated. Section 4 presents simulation tests that assess the performance improvements brought by the proposed adaptation strategy. The last section details the conclusions of the tests and briefly outlines the next steps in our research.

2. END-USER QoE AWARE ADAPTIVE HYPERMEDIA FRAMEWORK


Most of the AHS proposed in the literature follow the principles presented in the AHAM model. AHAM is a general reference model that provides a framework for describing the adaptation functionality of adaptive hypermedia at an abstract level [5]. It divides the adaptation process into the following main components: Domain Model (DM), User Model (UM), Adaptation Model (AM), and AHS Engine (AHS-E). Starting from this simple, abstract representation, we propose to extend this framework and to include a QoE-based content adaptation mechanism. We decided to use AHAM because an open-source system (AHA!) built according to the AHAM model was available for testing the proposed QoE-based extension. The QoE-based adaptation mechanism can be easily applied to other AHS models such as the LAOS model [6]. AHAM was extended with a new component called the Perceived Performance Model (PPM). The PPM provides a representation of the end-user QoE and models different user-perceived performance-related information in order to learn about the user's operational environment characteristics, about changes in the network conditions, and about how these changes may affect the user's quality of experience. The PPM's information is used later on during the adaptation process implemented by the AHS-E. The main goal of the adaptation mechanism is to provide personalised material that suits both the user's individual characteristics (e.g. goals, interest, knowledge) and the delivery environment, in order to provide a high QoE.

Figure 1 presents, at the architecture level, the end-user QoE-aware Adaptive Hypermedia Framework proposed in this paper. The DM, UM, AM and AHS-E represent the common parts of any Adaptive Hypermedia System. The Perceived Performance Model (PPM), Performance Monitor (PM) and Adaptation Algorithm (AA) represent the new modules that extend the classic AHS. Together they form the QoE-based adaptation mechanism. More details about each component are provided in the following.

The Domain Model (DM) organises the information provided by the AHS, physically stored in the Domain Database, into a hierarchical structure of concepts amongst which logical relationships exist. At the lowest level, a concept corresponds to a fragment of information (section of text, image, video clip, etc.). These fragments are combined into composite concepts (also called pages) by defining relationships amongst them. Composite concepts may be further combined using relationships to eventually form more complex units of information. Content is selected from the DM and delivered to the users based not only on these relationships, but also on user characteristics.

The User Model (UM) maintains and stores in a User Database various demographic information related to the user, such as age, gender, qualification, the user's current goal, the user's interest in the material managed by the AHS through the DM, the user's navigational history, etc. Both explicit (via registration) and implicit (through normal navigation and content selection) information gathering is used to generate and update the UM. In order to construct the user model, to analyse the user profile and to derive new facts about the user, different user modelling methods have been proposed. The most common ones are the overlay method [7, 8] and the stereotype method [9, 10]. In order to achieve better results, AHS such as Anatom-Tutor [11], Arcade [12] and Interbook [13] have combined the two methods. Lately, Bayesian networks have become popular for modelling users' knowledge and goals and for identifying the best actions to be taken under uncertainty. A number of AHS rely on Bayesian networks for user modelling, such as the KBS Hyperbook System [14] (an adaptive hyperbook system for an introductory course on computer science), ANDES [15] (an intelligent tutoring system for Newtonian physics) and Lumière [16] (a system that provides assistance to computer software users).

The Adaptation Model (AM) provides the adaptive functionality of the AHS. The main goal of the AM is to define how content adaptation, navigation support adaptation and updates of the user model are performed. Condition-action rules are used to express the adaptation mechanism. These rules combine information from the UM and DM and determine how the information in the UM is changed and which information stored in the Domain DB will be delivered to the user.

The Perceived Performance Model represents a new component proposed by us that extends the general Adaptive Hypermedia Framework. It has the important function of providing a dynamic representation of the user-perceived QoE. The model is analogous to the User Model, storing information gathered about the user, but in this case from a perceived-performance point of view. It models performance-related information in order to learn about the user's operational environment characteristics, about changes in the network connection and about the consequences of these changes on the user's QoE. The PPM also considers the user's subjective opinion about his/her QoE, explicitly expressed by the user. This introduces a degree of subjective assessment, which is specific to each user. The user-related information is modelled using a stereotype-based technique that makes use of probability and distribution theory [17] and is saved in the Perceived Performance Database (PPDB). Finally, the PPM suggests the optimal Web content characteristics (e.g. the number of embedded objects in the Web page, the size of the base Web page without components and the total size of the embedded components) that would best meet the end-user expectation related to QoE. The PPM aims to ensure that the download time per delivered page, as perceived by the user, respects the user's tolerance for delay and does not exceed the satisfaction zone.

Based on a survey of the current research into user tolerance for delay, three zones of duration that represent how users feel were proposed in [18]: the zone of satisfaction, the zone of tolerance and the zone of frustration. A number of studies [19, 20, 21, 22] on the effects of download time on users' subjective evaluation of Web site performance indicate that users have thresholds (user tolerance) for what they consider adequate or reasonable delay. A user is satisfied if a page is loaded in less than 10-12 sec; higher values cause disruption and users become distracted. Any delay higher than 30 sec causes frustration. At the same time, it is worth mentioning that when the user is aware of the existence of a slow connection, he/she is willing to tolerate a delay that averages 15 sec but does not exceed 25 sec [23].
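As an illustration only, these published thresholds could be encoded as follows; the function and the exact cut-off values chosen are our own simplification of the figures quoted above, not part of the cited studies.

def perception_zone(download_time_sec, aware_of_slow_connection=False):
    # Map a page download time (seconds) onto the three zones proposed in [18],
    # using the indicative thresholds discussed above. Users who know they are
    # on a slow link tolerate roughly 15 sec on average [23].
    satisfaction_limit = 15.0 if aware_of_slow_connection else 12.0
    if download_time_sec <= satisfaction_limit:
        return "satisfaction"
    if download_time_sec <= 30.0:
        return "tolerance"
    return "frustration"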

The Adaptation Engine (AHS Engine) is the software environment that interprets the condition-action rules described in the AM, performs the content selection, creates the navigational support (links) and delivers a personalised Web page. In this paper we propose to enhance the AHS Engine with an Adaptation Algorithm (AA). The objective of the AA is to determine and apply the correct transformations to the personalised Web page (assembled according to the User Model) in order to match the PPM suggestions on the Web page characteristics and thus to provide a high QoE to the user.

Therefore the adaptation mechanism of the AHS will allow for both coarse-grained and fine-grained adaptation:

· Coarse-grained adaptation is used to select the fragments of information from the DM for inclusion in a user-tailored performance-orientated document, based on information from the UM and PM.

· Fine-grained adaptation is applied when the delivery of the personalised document over a given connectivity environment would not provide a satisfactory end-user QoE. In this case, the AHS Engine will dynamically adjust the characteristics of the personalised document.

3. ADAPTATION POLICY

For the coarse-grained adaptation, the AM proposes a Web document tailored to the user's personal characteristics based on the information from the UM and DM. The proposed document consists of a collection of information fragments chosen from the Domain Database. These fragments are part of the concept hierarchy represented by the DM. Then, for the fine-grained adaptation, the document is analysed by the AHS Engine in order to determine whether it respects the PPM suggestions. These suggestions indicate the optimal characteristics of a Web document for providing both a satisfactory performance-related experience for a Web user with the AHS and the delivery of the highest quantity of data. If the characteristics of the document do not match the PPM suggestions, alterations to the document are required. The adaptations performed on the personalised document consist of either the elimination of some information fragments (e.g. some fragments will not be displayed) or the modification of the properties of some information fragments from the page (e.g. changes in the resolution or size of the images). The decision is based on the adaptation algorithm. This algorithm aims to minimise the negative impact on user satisfaction related to the quality of the content.

3.1 Adaptation Algorithm

The proposed adaptation algorithm uses the PPM's content-related suggestions and information provided by the UM related to the strength of user interest in the different concepts (defined in the DM). Hence, the adaptation algorithm performs changes starting with the information fragments belonging to those concepts the user is least interested in. These changes are applied until the Web content characteristics suggested by the PPM are matched. The PPM suggestions are used by the AHS Engine only for content-level adaptation and not for navigational-level adaptation, since it is the characteristics of a Web page such as the size, number and type of the embedded components, rather than the number of links, that mainly affect the user-perceived performance.

Therefore, the objective of the algorithm is to determine and apply the correct transformations (modifications of the properties of the embedded components and/or elimination of some of the components) on a page in order to match the PPM suggestions. Since images contribute the largest quantity of information to the total size of a Web page, they were the only components taken into consideration by this algorithm in this work. In the case where some images have to be eliminated, each removed image is replaced with a link to the image. In this way, if a user really wants to see the image, the link offers this possibility.

A block-diagram of the algorithm is presented in Figure 2. The algorithm uses as parameters the PPM suggestions: maximum number of embedded objects (NOsuggest), maximum total size of objects (SOsuggest) and maximum size of a Web page (SPsuggest). In the algorithm, the check for the number of embedded objects is performed before the verification of the total size of the embedded components. Therefore image elimination may occur before image compression. This was preferred due to the fact that setting up multiple connections to carry a higher number of objects is time consuming and reduces the overall performance of the delivery.
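Although the precise steps are detailed in the remainder of this section, the overall flow can be summarised by the following Python-style sketch. The data model (pages as lists of sized, interest-rated items), the helper logic and the MAX_COMPRESSION_RATE bound are illustrative assumptions of ours rather than part of the implemented system, and the compression step uses a single uniform rate as a placeholder for the interest-weighted rates of Section 3.3.

MAX_COMPRESSION_RATE = 0.75   # assumed largest rate that still preserves acceptable image quality

def adapt_page(page, no_suggest, so_suggest, sp_suggest):
    # page = {'fragments': [...], 'images': [...]}; every item is a dict with
    # 'size' (KB) and 'interest' (user interest taken from the UM).

    def total(items):
        return sum(item['size'] for item in items)

    # 1. Object count (Section 3.2): drop the least interesting images first;
    #    in the real system each dropped image is replaced by a link to it.
    images = sorted(page['images'], key=lambda im: im['interest'], reverse=True)
    while len(images) > no_suggest:
        images.pop()

    # 2. Total object size (Section 3.3): compress if an acceptable rate exists,
    #    otherwise eliminate the least interesting image and try again.
    while images and total(images) > so_suggest:
        rate = 1.0 - so_suggest / total(images)
        if rate <= MAX_COMPRESSION_RATE:
            for im in images:
                im['size'] *= (1.0 - rate)
        else:
            images.pop()

    # 3. Overall page size (Section 3.4): drop the least interesting text
    #    fragments while the base page plus objects exceed SPsuggest + SOsuggest.
    fragments = sorted(page['fragments'], key=lambda f: f['interest'], reverse=True)
    while fragments and total(fragments) + total(images) > sp_suggest + so_suggest:
        fragments.pop()

    return {'fragments': fragments, 'images': images}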

3.2 Adjusting the Total Number of Embedded Objects

If the number of objects embedded in a Web page (NOcurrent) exceeds the maximum number of objects suggested by the PPM (NOsuggest) there is a need for the elimination of some images. The image elimination mechanism makes use of information stored by the UM related to user's level of interest in the concepts defined in DM. These concepts are abstract representations of the information described through the images.

The image elimination mechanism is based on the following principle: the image of lowest interest to the user is removed first, and the elimination process continues while NOcurrent > NOsuggest.


A pseudocode description of the mechanism is presented next.

AdaptAlg_TotalNoEmbeddedObjects(NOsuggest)
begin
    NOcurrent = Count_NoImages();
    while (NOcurrent > NOsuggest) do
    begin
        Eliminate_Lowest_Interest_Image();
        NOcurrent = NOcurrent - 1;
    end;
end;

3.3 Adjusting The Total Size of Embedded Objects

Matching the suggestion related to the total size of the embedded images is the next phase in the adaptation algorithm. If the total size of embedded objects (SOcurrent) is higher than the PPM suggestion (SOsuggest), image compression techniques and/or image elimination methods have to be applied. First, the image compression is applied and if further reduction is necessary, image elimination is applied.

Step 1: Image Compression

The first step involves trying to apply image compression. Different compression rates (expressed as percentages) are applied to each image depending on the total reduction suggested for the embedded images, the image size and the user's interest in the image. For example, if two images A and B need to be reduced by a total of 40%, with the user being more interested in image A than in image B, then a smaller reduction will be applied to image A than to image B. The actual compression rates are computed according to the relative interest in the images. The algorithm for determining the compression factor of each image (R%i) takes the user's interest into consideration and uses the following formulas (Eq. 1):

(Eq.1)

Notation:

Si = the size (KB) of image "i"
S = total size (KB) of the embedded images
Rs = reduction in size (KB) to be applied to the embedded images
Rsi = reduction in size (KB) to be applied to image "i"
R% = percentage of reduction to be applied to the embedded images
R%i = percentage of reduction to be applied to image "i"
N = total number of embedded images in the Web page
Ki = user's interest in concept (image) "i"
K̄i = user's non-interest in concept (image) "i", computed as (100 - Ki) and normalised

If one of the computed compression rates cannot be applied to an image (e.g. because the resulting quality would be lower than acceptable for the end-users; image compression tools may impose limits on the compression rate in order to ensure good image quality), the image elimination strategy (Step 2) will be applied. This assumes that the image compression algorithm has a maximum compression rate threshold in order to ensure good user-perceived quality for the compressed images.
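Since Eq. 1 is not reproduced here, the following sketch shows one plausible reading of the allocation described above: the required total reduction Rs is split across the images in proportion to image size weighted by normalised non-interest, so that larger and less interesting images absorb more of the reduction. It should be taken as an illustration rather than as a transcription of the original formula.

def compression_rates(sizes_kb, non_interest, reduction_pct):
    # Split the total required reduction Rs across the images in proportion to
    # (image size x normalised non-interest).
    total_kb = sum(sizes_kb)
    rs_total = reduction_pct / 100.0 * total_kb                      # Rs (KB)
    weights = [s * k for s, k in zip(sizes_kb, non_interest)]
    rs_per_image = [rs_total * w / sum(weights) for w in weights]    # Rsi (KB)
    return [100.0 * r / s for r, s in zip(rs_per_image, sizes_kb)]   # R%i (%)

For the Table 1 data (sizes of 100, 20 and 10 KB, non-interest values of 0.7, 0.2 and 0.1, and a 20% overall reduction) this allocation yields rates of roughly 24%, 7% and 3.5%, of the same order as the figures quoted in the worked example below.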

Step 2: Image Elimination

Step 2 is performed when image compression cannot match the PPM requirements. The objective is to eliminate one or more images and to replace them with links to the images. If a user really wants to see an image, the link offers this possibility.

This strategy is applied to the original uncompressed images and is based on the following principles:

· the image of lowest interest to the user is removed

· if, after the elimination is performed, the recomputed total size of the embedded objects from the Web page (SOcurrent) is still higher than the PPM suggestion (SOsuggest), Step 1 (image compression) is performed again. In this case, lower compression rates will be required.

Example of Compression Algorithm

As an example of the compression algorithm, the following case is considered. A Web page consists of three embedded images and, based on the PPM suggestion related to the total size of embedded images, a reduction of 20% has to be applied. The user's non-interest in these images and the size of each image are presented in Table 1. The compression rate for each image is computed as in Eq. 2, following the formulas from Eq. 1. Therefore the following compression rates should be applied in order to match the PPM suggestions: a reduction of 24.18% for image A, 7.5% for image B and 3.8% for image C.

Table 1 Size of Images and User Non-Interest in the Images

Image No.    Image Size (KB)    User Non-Interest
A            100                0.7
B            20                 0.2
C            10                 0.1

(Eq.2)

3.4 Adjusting the Size of the Web Page

The PPM suggests a maximum size for the base Web page (SPsuggest). The adaptation algorithm has to enforce this limit on the size of the final Web page. However, the image compression phase and the image elimination mechanism often result in objects whose total size is below the maximum size suggested by the PPM for all embedded objects (SOsuggest). In this situation the total size of the Web page, including the embedded images, may still be below the sum of the PPM suggested limits (STsuggest = SOsuggest + SPsuggest) and there is no need for further adaptation-related reduction.

In the case when the total size of the page including the embedded objects (STcurrent) is greater than STsuggest, those fragments from the base Web page that the user is least interested in, as indicated by the User Model, are eliminated. This process continues while STcurrent > STsuggest.

4. VALIDATION OF THE QoE-BASED ADAPTATION MECHANISM

The end-user QoE-based adaptation mechanism introduced in the previous section aims to improve the end-user QoE by increasing system performance as perceived by the end-user. This section presents the results of various simulations that were performed in order to assess the performance improvements brought by the proposed QoE-based extension for the AHS. The simulation scenarios involve the transmission of different Web pages that are part of a simulated Web browsing session over various network conditions. All simulations are performed for various home-like low bit rate operational environments that involve connection speeds of up to 128 kbps. Constant and step-wise changeable network conditions during a browsing session were considered for the simulation tests. Two cases were studied and the results are compared. The first case involves the usage of the QoE-based content adaptation mechanism before delivering the requested Web page, while in the second case the requested Web page is delivered with no modifications.

The objectives of the simulation tests are the following:

· to investigate the impact on performance when the content related suggestions generated by the QoE-based extension are applied during a simulated Web browsing session.

· to analyse the behaviour of the PPM when a client sends Web page requests over different operational environments with constant characteristics such as bandwidth and RTT.

· to analyse the behaviour of the PPM when a client sends Web page requests over different operational environments with dynamically changing characteristics.

Performance analysis is based on comparative measurements of two cases: with and without QoE-based content adaptation. For both cases, Access Time (Download Time) per page, Aggregate Access Time per session, Quantity of Data Transmitted, and Percentage of Data Reduction are measured. The Aggregate Access Time per session is measured as the sum of the access times per page.

The PPM behaviour analysis seeks to confirm the capability of the model to track any changes in the delivery conditions that may affect the user's QoE and the ability of the model to provide the content related suggestions that maintain the level of the user's perceived performance. The chain of the content related suggestions generated by the PPM during the navigational session is also analysed.

4.1 Set-up Conditions

The simulations were performed using Network Simulator version 2 (NS-2) [24] and the NSWEB [25] extension for simulating WWW traffic. The simulation set-up topology consists of a simple Web Server - Web Client system. Figure 3 represents a residential client with a 56 kbps modem connection and a round-trip time (RTT) of 310 msec. In the absence of other background traffic, the bottleneck link is the client network connection. Several different connection and network properties were considered that correspond to the low bit rate operational environment perceived by people through a home Internet connection. Therefore the bandwidth between the Web Client and node N2 was set to different values between 28 kbps and 128 kbps, while the RTT between the Web server and the Web client was varied between 530 msec and 150 msec respectively.


A browsing session that consists of ten Web pages was simulated. Two types of network conditions, involving constant and variable network parameters (e.g. bandwidth, RTT) during the simulated browsing session, were analysed. The Web pages are randomly selected, based on the SURGE technique, from a virtual Web site located on the Web Server. The Web site is populated with one hundred randomly generated pages. These pages have different properties such as Web page size, number of embedded objects per page and size of each embedded object. The Web pages were generated using the NS-2 Web Generator, based on the Pareto-II probability distribution. This distribution best simulates the characteristics of Web server resources and the distribution of Web object sizes on a Web site, and it is the most frequently used function for Web traffic simulations.

Table 2 Parameters of the Distribution Function of the Web Page Characteristics

Web Content Characteristic        Probability Distribution Parameters
Basic Page Size                   Pareto-II distribution, avg = 3000 B, shape = 1.2
No. of Embedded Objs per Page     Pareto-II distribution, avg = 4, shape = 1.5
Embedded Objects Size             Pareto-II distribution, avg = 4500, shape = 1.2
Pages per Web Server              100

Table 2 presents the Pareto-II distribution parameters used during the simulations. The shape parameter was set according to the Web traffic specifications presented in the NSWEB [25] documentation. The avg parameter was set based on the results of research [26] that analysed and characterised Web pages from the most popular Web sites in terms of the amount of content of a page, the number of bytes in the basic Web page, the number of embedded objects and the total number of bytes for the embedded objects. The results presented in [26] have shown that most Web pages have a basic page size of up to 12 KB, an average of 7 up to 20 embedded objects and a total size of the embedded objects of around 55 KB or higher.
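As an aside, a workload with these characteristics could also be reproduced outside NS-2/NSWEB with a few lines of Python. The sketch below draws page characteristics from Pareto-II (Lomax) distributions using the Table 2 parameters, deriving the scale from the Lomax mean formula mean = scale / (shape - 1); the variable names and the rounding of the object count are our own choices, not part of the NSWEB generator.

import numpy as np

rng = np.random.default_rng(1)

def pareto2(mean, shape, size=None):
    # Pareto-II (Lomax) samples with the requested mean and shape parameter.
    scale = mean * (shape - 1.0)
    return scale * rng.pareto(shape, size)

def generate_page():
    base_size = pareto2(3000, 1.2)                    # basic page size (bytes)
    n_objects = max(1, int(round(pareto2(4, 1.5))))   # embedded objects per page
    object_sizes = pareto2(4500, 1.2, n_objects)      # size of each embedded object
    return {'base_size': base_size,
            'n_objects': n_objects,
            'object_sizes': object_sizes}

web_site = [generate_page() for _ in range(100)]      # 100 pages per Web server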

4.2 Assessment of the QoE-based Adaptation Mechanism in Constant Low Bit Rate Operational Environment

The simulations involved various Web sessions with different sequences of ten Web pages randomly selected from a virtual Web site populated with one hundred pages. In this paper we present the results for one set of ten pages selected from the virtual Web site. The characteristics of the selected pages are presented in Table 3. Six types of low bit-rate environments characterised by different network conditions (Table 4) were simulated.

Table 3 Characteristics of ten pages randomly selected from a virtually generated Web site

Web Page ID   Basic Page Size (KB)   Number of Embd. Objects   Total Size of Embd. Objs (KB)   Total Size of Web Page (KB)
1             9.18                   8                         82.39                           91.57
2             3.10                   8                         57.96                           61.06
3             3.17                   6                         93.96                           97.13
4             10.80                  8                         190.22                          201.02
5             5.61                   6                         37.73                           43.34
6             3.42                   9                         169.01                          172.43
7             12.24                  5                         64.37                           76.61
8             9.38                   7                         57.68                           67.06
9             5.4                    10                        134.32                          139.72
10            3.38                   5                         36.30                           39.67

Table 4 Simulated low bit rate operational environments

Connection Type   Bandwidth (kbps)   RTT (msec)
1                 < 28               < 500
2                 28 - 42            300 - 500
3                 42 - 56            300 - 500
4                 56 - 64            200 - 300
5                 64 - 96            100 - 200
6                 96 - 128           < 200

Next, the results for a delivery environment characterised by a bandwidth in the range of 42 kbps to 56 kbps and an RTT in the range of 300-500 msec (Connection Type 3) are presented. This scenario is common for users with a modem connection. Figure 4 illustrates a comparison between the two cases, which involve the usage or not of the QoE-based adaptation mechanism for delivering the randomly selected ten Web pages. Taking into account the suggested content-based adaptations, different percentages of reduction in the Web page size were applied. For this case, only image compression was considered and no image elimination was applied. One can observe that the download time per page did not exceed 14 sec. This value is below the 15 sec limit for acceptable download time for a user aware of a low bit-rate connection. With the decrease in the access time per page, the aggregate access time per session was also significantly reduced.

Table 5 PPM's Web Content Related Suggestions After Each Requested Web Page over a 56 kbps Connection

ID of Requested   PPM Outputs
Web Page          Size Page (KB)   No. Objs   Size Objs (KB)
1                 11.09            8.0        31.19
2                 11.89            9.0        35.60
3                 12.16            9.0        37.08
4                 12.30            10.0       37.81
5                 12.38            10.0       38.25
6                 12.43            10.0       38.55
7                 12.47            10.0       38.76
8                 12.70            10.0       38.92
9                 12.51            10.0       39.04
10                12.67            10.0       40.89

Table 5 presents the Web page characteristics suggested by the PPM model for each requested Web page during the browsing session. As can be noticed from Figure 4, the QoE-based adaptation process affected almost all of the Web pages. This was due to the fact that the characteristics of the affected Web pages (see Table 3) were higher than the PPM-suggested ones (Table 5).


The results show that the model was able to "learn" rapidly about the current network conditions and to suggest correct characteristics for the Web pages in order to ensure that their download time remained acceptable from the user QoE point of view. After the first three pages were requested and delivered, the PPM state is stable, generating similar outputs as long as the network conditions do not change. As a result, the download time for the following seven Web pages does not exceed 14 sec.

Simulation tests performed for the other types of delivery environments confirmed the PPM model's fast "learning" behaviour in relation to the delivery conditions, providing a satisfactory download time per page.

Next, a summary of these simulations is provided. Figure 5 shows simulation results for the Aggregate Access Time for a browsing session that involved the set of ten pages presented in Table 3, for various connection types with bandwidth in the range of 28 kbps to 128 kbps and RTT in the 100 msec - 1000 msec interval.


Note that the reduction in the size of the transmitted Web pages necessary to maintain the download time at an acceptable level throughout the browsing session decreases with the increase of the available bandwidth. Most significantly, one can notice that for these low bit rate connections, the QoE layer improved the Aggregate Access Time by up to 56.2% (for the 28 kbps case). This improvement was achieved by reducing the quantity of data sent during the browsing session by up to 62.7%.


The PPM behaviour as the network connectivity improves is analysed next. Figure 6 and Figure 7 show that the model succeeded in determining the optimal characteristics for the Web pages, such as the number of embedded objects, the total size of the embedded objects and the size of the basic Web page. As the network connectivity improves, the model provides higher values for the Web page characteristics in order to allow more information to be transferred to the user while maintaining an acceptable download time.

For example, for a 28 kbps connection the basic Web page was restricted to an average of 7.6 KB and could contain only an average of 6.4 objects of maximum 23.3 KB in total size; for a 60 kbps connection, basic Web pages of up to an average of 14.6 KB with 12.1 embedded objects of a total size of up to an average of 51.2 KB were suggested for transmission. In the best of the delivery conditions considered (128 kbps connectivity), basic Web pages of up to 19.9 KB with an average of 18.2 objects and an average of 892.5 KB could be transmitted.

The Web content-related suggestions provided by the PPM ensure a download time per page not higher than 15 sec for very slow connections such as 28 kbps and not higher than 10-12 sec for better network connections (up to 128 kbps).

4.3 Assessment of the QoE-based Adaptation Mechanism in Variable Low Bit Rate Operational Environment

The objective of these simulation tests is to analyse the behaviour of the proposed adaptive mechanism and PPM outputs when a browsing session is performed over a network environment that changes in time. The browsing session involves Web Client accesses to the same ten Web pages presented in Table 3 over the same topology (Figure 3). The following two cases were considered:

· the network conditions improve from 56 kbps to 64 kbps and then up to 96 kbps. At the same time, the RTT decreases from 310 to 240 msec and then to 150 msec.

· the network conditions degrade from 96 kbps to 64 kbps and then down to 56 kbps. At the same time, the RTT increases from 150 to 240 msec and then to 310 msec.

In the same manner as in the non-changeable environment, the analysis of the test results includes the following performance parameters: Access Time per page, Aggregate Access Time per session and Quantity of Transmitted Data. A comparison of the values of these performance parameters with those measured in the case when the QoE-based adaptation mechanism was not used was also performed.

Next simulation results for the two studied cases are presented.

4.3.1 Step-Wise Up Changeable Environment from 56 kbps to 96 kbps


The user network connection properties include a bottleneck link whose bandwidth was changed in three stages during the simulated browsing session. In the first stage connectivity was 56 kbps with RTT = 310 msec and the first three pages out of ten were transferred. The second stage involved an improvement of the network delivery conditions to 64 kbps and 240 msec RTT while the following three pages were accessed by the end-user. During the last stage the network conditions further improved and the last four pages were transmitted over a link characterised by 96 kbps bandwidth and 150 msec RTT (Figure 8).


Figure 9 shows the measured performance parameters when the QoE-based adaptation mechanism was used in comparison with the case when no adaptation was performed. In the adaptive case the access time per page was below 15 sec for the lowest connection stage (56 kbps) and further decreased to below 10 sec for the best connection stage.


Since the PPM suggestions on the optimal characteristics of the Web page to be delivered are combined with the previously generated ones, these suggestions improve more slowly than the network conditions. This is due to the learning behaviour of the PPM model, which tries to overcome sharp fluctuations of the network environment and to protect against possible noise in the recorded delivery conditions. In consequence, higher size reductions were proposed for the largest Web pages, such as Page 4, Page 6 and Page 9, during stage 2 and stage 3 than those performed when all ten pages were delivered over a constant 64 kbps or 96 kbps connection.
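As an illustration only of such history-weighted behaviour, the suggestions could be blended with the newest estimate by an exponential smoothing step of the following kind; the weight alpha, the field names and the numeric values below are assumptions of ours and do not reproduce the PPM's actual update rule.

def smooth_suggestions(previous, new_estimate, alpha=0.3):
    # Blend the newly estimated optimal page characteristics with the history,
    # so that a single fast (or slow) download does not swing the suggestions
    # abruptly; a smaller alpha means a slower reaction to changed conditions.
    return {key: (1.0 - alpha) * previous[key] + alpha * new_estimate[key]
            for key in previous}

# Illustrative use: the suggestions move only gradually towards the values
# that an improved connection stage would allow.
suggestion = {'page_kb': 12.4, 'objects': 10.0, 'objects_kb': 38.5}
target = {'page_kb': 18.0, 'objects': 16.0, 'objects_kb': 80.0}
for _ in range(3):
    suggestion = smooth_suggestions(suggestion, target)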

A very significant achievement when using QoE-based adaptation in comparison with the no-adaptation case is that the Aggregate Access Time per session improved by 38.3%, confirming the benefit of the proposed solution.

4.3.2 Step-Wise Changeable Environment from 96 kbps to 56 kbps


In the same manner as in the previous scenario, the connectivity environment changed during the browsing session that involved the same ten Web pages. Figure 10 illustrates the network conditions, which deteriorated from 96 kbps bandwidth and 150 msec RTT to 56 kbps and 310 msec RTT via an intermediate step characterised by 64 kbps bandwidth and 240 msec RTT.


An analysis performed on the access time per page and on the quantity of transmitted information (Figure 11) reveals that the usage of the adaptation mechanism reduced the access time for each page to below 10 sec for the high bit rate connection (Stage 1 - 96 kbps) and to below 15 sec for the low bit rate connection (Stage 3 - 56 kbps). The Aggregate Access Time per session was also reduced, on average by 23.4%.

The influence of the suggestions generated for the Web pages delivered over the highest bit rate connection (Stage 1) was also noticed in the final suggestions proposed for the lowest connectivity case (Stage 3) that followed. This reflects the fact that history is taken into account when generating suggestions. As a result, a smaller reduction of the delivered quantity of information was performed for each page delivered during Stage 3 in comparison with the case when the whole learning session was performed over a constant 56 kbps connection (Figure 4). Although the access time for pages such as Page 7, Page 8, Page 9 and Page 10 was slightly higher (an average of 11.85 sec) than in the constant connection case (an average of 9.64 sec), it did not exceed the 12 sec threshold value that is considered acceptable for low bit rate home connections.

5. CONCLUSIONS

This paper has introduced a new adaptive hypermedia framework that extends the adaptation functionality of AHS with a fine-grained content-based adaptation mechanism. This mechanism ensures a high Quality of Experience when AHS users access personalised material using various connectivity environments that may have different properties. Various simulation tests investigated the performance improvements provided by the proposed adaptive mechanism, in a home-like low bit rate operational environment, in terms of access time per page, aggregate access time per browsing session and quantity of transmitted information. The results showed that the QoE-based adaptive mechanism succeeded in determining very quickly the optimal characteristics of the Web pages for given non-changeable and step-wise changeable network environments. These suggestions ensured an access time per page that did not exceed the 12-15 sec threshold considered satisfactory for end-users by the research community. More significantly, the aggregate access time per browsing session decreased by up to 56% in constant network conditions and by up to 39.8% in step-wise changeable network conditions.

In conclusion, simulations have shown important improvements of the access time per page and of the aggregate access time per session, which were due to the controlled reduction of the quantity of transmitted information performed by QoE-based adaptation mechanism.

Subjective tests that investigated the feasibility and usability of the proposed fine-grained content-based adaptation mechanism when applied in the area of adaptive e-Learning have also been performed and the results were presented in [25]. Different educational evaluation techniques, such as learner achievement analysis, learning performance assessment, usability surveys and correlation analysis between individual student performance and judgment on system usability, were applied in order to fully assess the effectiveness of the proposed mechanism. The results of the subjective tests showed that the use of the fine-grained content-based adaptation mechanism brought significant improvements in terms of user learning performance, system usability and user satisfaction with the personalised e-learning system, while not affecting the user's learning achievement.

Further work is necessary to explore the effectiveness of this framework in a wider range of situations. Two possible directions are the extension to multimedia content, where performance problems may arise even in higher bandwidth environments, and the application to Adaptive Hypermedia Systems in areas other than education, such as on-line information systems, in order to investigate the usability and benefits brought by the new system. These systems differ from educational ones by providing a larger navigational space and higher flexibility for the users to navigate in the hyperspace, and their users have different objectives, e.g. finding specific information.

6. ACKNOWLEDGMENTS

The support of Enterprise Ireland - Commercialisation Fund is gratefully acknowledged.

7. REFERENCES

[1] Muntean, C., McManis J., End-User Quality of Experience Oriented Adaptive E-learning System, accepted for publication in Journal of Digital Information, special issue Adaptive Hypermedia, 2005

[2] Cheverst, K., Mitchell, K., Davies, N., The Role of Adaptive Hypermedia in a Context-Aware Tourist Guide, Communications of the ACM, Vol. 45 (5), pp. 47-51, 2002

[3] Ardissono, L., Goy, A., Petrone, G., Segnan, M., Torasso, P., Ubiquitous User Assistance in a Tourist Information Server, 2nd Int. Conference on Adaptive Hypermedia and Adaptive Web Based Systems (AH2002), Malaga, Spain, pp. 14-23, 2002

[4] Merida, M., Fabregat, R., Matzo, J. L., SHAAD: Adaptable, Adaptive and Dynamic Hypermedia System for Content Delivery, AH2002 Conference, Malaga, Spain, 2002

[5] De Bra, P., Houben, G., Wu, H., AHAM: A Dexter-based Reference Model for Adaptive Hypermedia, in Proc. of ACM HYPERTEXT'99 Conference, Germany, pp. 147-156, 1999

[6] Cristea, A. I., de Mooij, A., LAOS: Layered WWW AHS Authoring Model and their corresponding Algebraic Operators, in Proc. of the 12th Int. World Wide Web Conference, Alternative Track on Education, Budapest, Hungary, 2003

[7] De Bra, P., Calvi, L. AHA: A Generic Adaptive Hypermedia System, in Proceedings of ACM HYPERTEXT'98 Conference, 2nd Workshop on Adaptive Hypertext and Hypermedia, Pittsburgh, USA, pp. 5-12, 1998.

[8] Pilar da Silva, D., Van Durm, R., Duval, E., Olivié, H., Concepts and Documents for Adaptive Educational Hypermedia: A Model and a Prototype, in Proceedings of ACM HYPERTEXT'98 Conference, 2nd Workshop on Adaptive Hypertext and Hypermedia, Pittsburgh, USA, pp. 35-43, 1998

[9] Boyle, C., Encarnacion, A. O., MetaDoc: An Adaptive Hypertext Reading System, User Models and User Adapted Interaction Journal, Vol. 4, No. 1, pp. 1-19, 1994

[10] Murphy, M., McTear, M., Learner Modeling for Intelligent CALL, in Proceedings of The 6th International Conference on User Modeling (UM97), Jameson A., Paris C. and Tasso C. (Eds.), Springer Verlag Wien, pp. 301-312, 1997

[11] Beaumont, I, User Modeling in the Interactive Anatomy Tutoring System ANATOM-TUTOR, User Models and User Adapted Interaction Journal, Vol. 4, No. 1, pp. 21-45, 1994

[12] Encarnacão, M., Stork, A., An Integrated Approach to User-centered Interface Adaptation, Technical Report WSI-96-10, University of Tübingen, 1996

[13] Weber, G., Specht, M., User Modelling and Adaptive Navigation Supporting WWW-based Tutoring Systems, in Proc. of the 6th International Conference on User Modeling, pp. 289-300, 1997

[14] Nejdl, W., Wolpers, M., KBS Hyperbook - A Data-Driven Information System on the Web, in Proceedings of the 8th World Wide Web (WWW) Conference, Canada, 1999

[15] Conati, C., Gertner, A., VanLehn, K., Druzdzel, M., On-Line Student Modeling for Coached Problem Solving Using Bayesian Networks, in Proc. of 6th Int. Conference on User Modeling (UM97), Chia Laguna, Sardinia, Italy, pp. 231-242, 1997

[16] Horvitz, E., Breese, J., Heckerman, D., Hovel, D., Rommelse, K., The Lumière Project: Bayesian User Modeling for Inferring the Goals and Needs of Software Users, in Proc. of the 14th Conference on Uncertainty in Artificial Intelligence, Madison, WI, Morgan Kaufmann: San Francisco, pp. 256-265, 1998

[17] Muntean, C. H., McManis, J., A QoS-aware Adaptive Web-based System, in Proceedings of the IEEE International Conference on Communications (ICC04), France, 2004

[18] Sevcik, P. J., Understanding How Users View Application Performance, Business Communications Review, Vol. 32, No. 7, pp. 8-9, 2002

[19] Bhatti, N., Bouch, A. and Kuchinsky, A., Integrating User-Perceived Quality Into Web Server Design, Computer Networks Journal Vol. 33, No. 1-6, pp.1-16, 2000

[20] Bouch, A., Kuchinsky, A. and Bhatti, N., Quality is in the Eye of the Beholder: Meeting Users' Requirements for Internet Quality of Service, in Proc. of the ACM CHI2000 Conference on Human Factors in Computing Systems, Hague, Netherlands, 2000

[21] Selvidge, P., How Long is Too Long to Wait for a Web Site to Load?, Usability News, 1999

[22] Ramsay, J., Barbesi, A., Preece, J., A Psychological Investigation of Long Retrieval Times on the World Wide Web, Interacting with Computers Journal, Elsevier Ed., March, 1998

[23] Chiu, W., Best Practices for High Volume Web Sites, IBM RedBooks, 2001

[24] Network Simulator - NS-2, http://www.isi.edu/nsnam/ns/

[25] NSWEB - http://www.net.informatik.tu-muenchen.de/~jw/nsweb/

[26] Krishnamurthy, B. Wills, C. Zhang, Y., Preliminary Measurements on the Effect of Server Adaptation for Web Content Delivery, In Proceedings of ACM SIGCOMM, The 2nd Internet Measurement Workshop, France, Nov., 2002