Hypermedia learning environments limit access to information.

Megan Quentin-Baxter

Faculty of Medicine Computing Centre, University of Newcastle upon Tyne,
Newcastle upon Tyne, NE2 4HH U.K.

Abstract
Audit trails and questionnaires were used to evaluate students' use of a highly interactive hypermedia learning environment. The learning material under investigation had similar functionality to a WWW interface and was composed of images and text combined with rollover and clickable maps, hypertext links and interactive questions. Every interaction a student made was logged in the audit trail, and a method of analysing the audit trails was applied to measure the amount of information accessed by each student. The most successful student accessed only 32% of the available information in 93 minutes, and each student studied different material to the others. Students overestimated how much information they had accessed from the total available, with those accessing the least overestimating comparatively more. The interactive strategy adopted by learners affected the amount of information accessed, and it was concluded that the increasing use of interactive hypermedia in teaching highlights the need for further research in order to ensure that some students are not systematically disadvantaged.

Keywords
Evaluation; Hypermedia; Browsing efficiency; Audit trails; Logging

1. Introduction

With the increasing use of the Internet for delivering "distant" or "open" learning, course designers and administrators need reliable ways to assure the quality of the education received by students. Evaluation of interactive hypermedia learning materials using logging, indexes and audit trails is extremely important for informing developers about the way in which people learn from electronic sources. Hypermedia rewards curiosity learning by providing relevant information behind highlighted links [6], at the expense of presenting a clear learning agenda and placing each new concept to be learned in the context of the last. Students can follow their own path through interconnected information, producing an intricate trail which is difficult to interpret or evaluate numerically. "Evaluation" studies are often confounded in their attempts to measure learning gain resulting from the use of computer-based teaching materials (for a fuller discussion see [1]). Many studies have assessed the relative value of teaching materials based on students' qualitative responses to questionnaires, an approach supported by Wittrock [9], who indicated that student reports of "attention to learning" were a better predictor of learning achievement than time-on-task.

This study used questionnaires and audit trails to investigate how high school biology students accessed subject material embedded in a highly interactive hypermedia learning environment. The results indicated some of the factors affecting ability to access information, and illustrated how students conceptualised their own learning experience in relation to measured achievements. It did not extend to evaluating learning gain or to associating learning strategy with learning style. This work was undertaken within a larger PhD study, completed in 1997, aimed at developing and evaluating a hypermedia computer-based learning package in biology. The package was written in an authoring system with features very similar in functionality to many of those currently available on the WWW.

2. Method

Thirty-five learners used an interactive hypermedia package in order to learn as much as possible about the subject for up to two hours, and 22 of these used it again for a further hour on a separate occasion. The package was styled as an interactive textbook with many hypermedia links and a spatial map (for a discussion of the package see http://www.ncl.ac.uk/~nmqb/rats/). After each occasion each student completed a qualitative questionnaire asking for general details about themselves, and for their comments about aspects of the package, including estimates of the proportion of subject information that they had accessed. Each interaction a student undertook with the package was recorded, creating a complete audit trail (log) of the subject material accessed and preferred interactive methods. Teachers were questioned to obtain their views of how their students had fared.

Audit trails were quantified by counting the number of times each "object" (a question, clickable image or piece of text, such as hypertext or an annotation presented when the cursor rolled over an image) was accessed [8]. There were over 1200 "objects" of subject material which could be presented either in response to browsing activities or to stimulate the student to respond to questions. The method also quantified "error" interactions, which occurred when further information or stimuli were not presented (such as clicking where no hypermedia links were available). Every object presented was either "new" (the student had never seen that information or question before) or "repeated" (the student was revisiting material that they had previously accessed). Students in this study were permitted to stop using the package when they wanted, in order to investigate whether compensatory learning strategies were employed, such as "quicker" students stopping earlier. Investigation of moderator variables (such as gender or experience of computers) using statistical analysis on the unmodified audit trail data was not possible because each student spent a different amount of time with the package, and it was expected that the amount of information accessed would be positively correlated in a diminishing way with the time spent using the package.
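The quantification described above can be sketched in a few lines (a minimal Python illustration, not the original implementation; the event format, the object identifiers and the figure of 1200 objects are assumptions drawn from the description above):

```python
from collections import Counter

# The paper reports "over 1200" objects; this exact figure is an assumption.
TOTAL_OBJECTS = 1200

def quantify_trail(events):
    """Summarise one student's audit trail.

    `events` is a hypothetical list of (timestamp, object_id) tuples;
    object_id is None for an "error" interaction (a click that presented
    no further information or stimulus).
    """
    seen = set()        # objects presented to this student at least once
    counts = Counter()
    for _, obj in events:
        if obj is None:
            counts["error"] += 1      # no hypermedia link at that point
        elif obj in seen:
            counts["repeated"] += 1   # revisiting previously accessed material
        else:
            seen.add(obj)
            counts["new"] += 1        # first presentation of this object
    percent = 100.0 * len(seen) / TOTAL_OBJECTS
    return dict(counts), seen, percent
```

Discarding timestamps and order in this way yields exactly the "volume" picture described later in the Discussion.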

The quantified audit trails were compared to identify the amount of material presented to each student, and what material was presented to students (a) in common with each other and (b) between their first and second occasion of using the package.
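The between-student and between-occasion comparisons amount to set operations over the object identifiers each trail contains; a hypothetical sketch:

```python
def compare_trails(seen_a, seen_b):
    """Overlap between two sets of accessed object identifiers, e.g. two
    students' trails, or one student's first and second occasions of use."""
    return {
        "common": len(seen_a & seen_b),   # material both accessed
        "only_a": len(seen_a - seen_b),   # accessed by a alone
        "only_b": len(seen_b - seen_a),   # accessed by b alone
        "union": len(seen_a | seen_b),    # coverage taken together
    }
```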

3. Results

The 35 students in this study produced substantial audit trails which, when quantified, illustrated what material they had accessed and their preferred interactive strategy. The maximum amount of information presented to any individual was only 32% in 93 minutes (mean and standard deviation = 19 ± 8% within 73 ± 18 minutes; n = 28), which was subjectively considered to be inefficient as students were expected to have completed the package in this time. This lack of success was surprising to the teachers, as it contrasted with their impression of the students working successfully as a whole. Further analysis showed that, between them, the students accessed over 81% of the available information, indicating that they covered substantially different subject information from one another.

Student estimates of their own achievement were not correlated with the amount of information accessed (Fig. 1). Estimates were differentially inflated: students who had accessed comparatively less information from the package varied considerably in their estimates of achievement. Responses to a related question indicated that the spatial map may have increased students' appreciation of the depth and content of the material available.

Fig. 1. Relationship between the measured and the estimated percentage of information accessed from the package after the first occasion.

Some students adopted a browsing strategy while others preferred to respond to questions. Students who browsed accessed much more information than those who concentrated on undertaking questions. Examining the composition of the audit logs over time suggested that students shifted their interactive strategy from browsing towards questions as a session progressed. Audit trails from students who used the package more than once were characterised by "revision" of material covered on the previous occasion. Over 15% of information in the program was "redundant" (never accessed by any student on either occasion), and this was mostly composed of textual "supporting" information (definitions of words and terms). The amount of information accessed was correlated with time spent using the package (n = 28, r = 0.62, p = 0.001), and there was no indication in this study that "quicker" students with higher interaction frequencies stopped earlier.
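The shift from browsing to questions can be estimated by computing the proportion of browsing interactions in successive windows of a trail; a sketch of such an analysis (the "browse"/"question" labelling of logged events is an assumption, not the study's coding scheme):

```python
def browse_fraction_by_window(kinds, n_windows=4):
    """Fraction of browsing interactions in consecutive slices of a trail.

    `kinds` is an ordered list labelling each interaction as "browse" or
    "question"; a falling sequence of fractions indicates a shift from
    browsing towards answering questions over time.
    """
    size = max(1, len(kinds) // n_windows)
    return [
        sum(k == "browse" for k in kinds[i:i + size]) / len(kinds[i:i + size])
        for i in range(0, len(kinds), size)
    ]
```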

4. Discussion

Simple collection of audit trails via logging, and attitudinal feedback via forms, increases the potential application of these data-gathering techniques to evaluate materials on the Internet. This small study offered a glimpse of how the two methods are complementary when investigating the functionality of hypermedia teaching materials in the classroom, and when identifying limitations to accessing information.

The method of quantifying the audit trails completely discarded the order of the information collected, and provided a fine-grained picture of student use of the package in "volume" rather than "sequencing" terms. This resulted in a useful method of observing what and how students had studied, and provided a basis for identifying "outliers" to the main group. It took no account of the effect of possible additive learning resulting from following a contextual path [5]. The method of quantifying data in this study could be adapted to partly address this by describing the relationship between, and then numerically evaluating, pairs of interactions [7].
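Evaluating pairs of interactions amounts to counting ordered pairs of consecutive events, which retains first-order sequencing information that a simple tally discards; a minimal sketch of this extension:

```python
from collections import Counter

def interaction_pairs(object_ids):
    """Count ordered pairs of consecutive interactions in a trail.

    A high count for a pair (a, b) indicates that object b was often
    accessed immediately after object a, i.e. a frequently followed path.
    """
    return Counter(zip(object_ids, object_ids[1:]))
```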

Limiting content is one of the few ways to introduce structure into hypermedia, because true hypermedia is essentially a browse/search resource, rather than a contextualised learning approach. It gives a student control of the environment, but it does not help them to discern what information is important. Hypermedia is ideal for case-based teaching materials where students are expected to identify the important as opposed to the redundant information. It was concluded that students using the package in this study (which was not case-based) should be guided by a schedule or teacher, in order to ensure "selective attention" to the aspects of the package of greatest relevance to the learning objectives. The role of redundant information in this package has to be interpreted alongside interaction "errors" in order to identify the best ways of rewarding curiosity without over-burdening developers.

This study supported the findings of Hammond and Allinson [2] that students in a hypermedia learning condition significantly overestimated the amount of material that they had accessed. It was additionally shown here that students who accessed the least information overestimated comparatively more than their more successful colleagues. This appeared to contradict the concept that "each individual knows what's best for him or her" [3, p. 6], which underpins the beliefs of those advocating the benefits of browsing and discovery learning [5]. Underestimating the amount of content in a hypermedia package might have a negative effect on the opportunity for learning, whereby those who need to return to the material might have already experienced a premature sense of "closure" or completion. A quick "glance around the room" revealed students in different parts of the package, which could give an observer an impression that each student was achieving what was achieved by the class as a whole. "Access-indicators" based on quantifying the audit trail in real time would assist both learners and teachers to better estimate achievement and the amount of relevant material remaining.
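A real-time "access-indicator" of the kind proposed could be implemented as a running set of accessed object identifiers; a hypothetical sketch (the total of 1200 objects is taken from the Method section, and the class and its interface are illustrative, not part of the study):

```python
class AccessIndicator:
    """Running estimate of the percentage of package material accessed,
    updated as each interaction is logged, for display to the learner
    or teacher during a session."""

    def __init__(self, total_objects=1200):
        self.total = total_objects
        self.seen = set()

    def log(self, object_id):
        """Record one interaction; return the updated percentage accessed."""
        if object_id is not None:   # ignore "error" interactions
            self.seen.add(object_id)
        return 100.0 * len(self.seen) / self.total
```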

It was concluded that comparative studies (such as investigating the effect of moderator variables) and investigations of compensatory learning strategies (students adjusting the time they spent using a teaching package) are incompatible within a single research study, unless a good model for standardising the time spent using the package can be developed. Future comparative evaluation research should either expose all students in a hypermedia evaluation study to the learning materials for a fixed period of time, if it is necessary to make between-student comparisons, or concentrate on outcomes which are not dependent on controlling the time spent using a program, such as the percentage of material accessed regardless of time (individual achievement).

The conclusions from this study taken with the observation of a learning differential favouring "high ability" students when learning from a hypermedia simulation [10] indicate that some students might be systematically disadvantaged if courses are delivered via interactive hypermedia learning materials. Research into factors which affect accessing information (such as the efficiency of the preferred interactive strategy, repetition of previous material and premature sense of completion) is urgently required to identify whether differential outcomes routinely occur between students, and what mechanisms can be employed to minimise potentially negative effects while promoting learning opportunities for all students.

Acknowledgements

I would like to thank my supervisors Prof D.G. Dewhurst (Faculty of Health and Social Care, Leeds Metropolitan University) and Mrs G.R. Goldsmith (School of Computing and Management Sciences, Sheffield Hallam University) for their continual encouragement, and the Lord Dowding Fund (NAVS) for their financial support of this work. Thanks also to Dr G.R. Hammond and Dr A.M. McDonald (University of Newcastle) for comments on drafts of this manuscript.

References

[1] Clark, R.E., Evidence for confounding in computer-based instructional studies: analyzing the meta-analyses, Educational Communication and Technology Journal, 33(4): 249–262, 1985.

[2] Hammond, N. and J. Allinson, Extending hypertext for learning: an investigation of access and guidance tools, in: Proceedings of the British Computer Society HCI '89, 1989, pp. 293–304.

[3] Kinzie, M.B. and H.J. Sullivan, Continuing motivation, learner control and CAI, Educational Technology Research and Development, 37(2): 5–14, 1989.

[4] Markle, S.M., Unchaining the slaves: discovery learning is not being told, British Journal of Educational Technology, 23(3): 222–227, 1992.

[5] Miller, G.A., The magical number seven, plus or minus two: some limits on our capacity for processing information, Psychological Review, 63: 81–97, 1956.

[6] Nelson, T.H., Managing immense storage, Byte, January, 225–238, 1988.

[7] Nielsen, J., The art of navigating through hypertext, Communications of the ACM, 33: 297–310, 1990.

[8] Quentin-Baxter, M. and D.G. Dewhurst, A method for evaluating the efficiency of presenting information in a hypermedia environment, Computers and Education, 18: 178–182, 1992.

[9] Wittrock, M.C., Students' thought processes, in: M.C. Wittrock (Ed.), Handbook of Research on Teaching. MacMillan, New York, NY, 1986, pp 297–314.

[10] Yildiz, R. and M.J. Atkins, Evaluating multimedia applications, Computers and Education, 21(1/2): 133–139, 1993.