http://www2002.org

THE ELEVENTH INTERNATIONAL
WORLD WIDE WEB CONFERENCE

   Sheraton Waikiki Hotel
Honolulu, Hawaii, USA
7-11 May 2002

WWW2002 Panels Track

WWW2002 Panels Track Description


N1 - Do Web Measurements Measure Up?

Moderator:

Panelists:

Synopsis:



N2 - Law, Ethics and Virtual Learning Communities - Conflict or Progress?

Moderator:

Panelists:

Synopsis:

The session aims to encourage debate on the legal and ethical issues that increasingly impinge on on-line education, although it could be argued that they always did, or always should have. Is there a conflict between educators' desire to make use of technology and legal or ethical constraints? How much privacy should 'learners' have on-line? In a world where 'anti-terrorism measures' may require increased monitoring, are there issues for on-line education? Is free speech non-negotiable? Is there a 'right' to education?

Students are encouraged to search out on-line data and information. Where is the boundary between copying for citation and plagiarism?

Should on-line learning content, and its copyright and IPR, be protected by legal and technical means? What about access by the 'poor', the 'third world', and those who cannot access learning in their own country? Should standards and 'methods' of on-line learning be patented? How should on-line education co-operate with 'for-profit' firms such as on-line publishers, educators, and ISPs?

Are institutions, or individuals, liable if something goes wrong in the on-line world? Are they morally liable if a 'learner' suffers an injustice, for example when their privacy is compromised because an education products company obtains their personal data?

The aim is not to answer all these questions, to inform delegates what the law is, or to put forward an ethical code. It is to contribute to the debate and to raise the issues. Such a discussion matters as more and more of education, and of life in general, moves on-line. Or should that be 'don't believe the hype'? Is there a conflict, or just 'progress' as usual?



N3 - Web Experiments and Test Collections: Are They Meaningful?

Moderator:

Panelists:

Synopsis:

For a number of years now, research groups in academia and industry have developed web search algorithms and evaluated them using test collections. The majority of these collections have been too small, unrepresentative of the full web in some way, explicitly circumscribed, or proprietary, and often several of these at once. The end result is that it is difficult to reproduce results, compare results across experiments, or even claim meaningfully that the conclusions of an experiment are general. Can a web test collection be built that is at once useful, relevant, and within the reach of researchers?
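To make concrete what such an evaluation involves, the sketch below scores a ranked run against a test collection's relevance judgments using precision at k. It is only an illustration: the query and document identifiers are hypothetical, and real web-scale evaluations rely on standardized tools and far larger pooled judgment sets.

```python
# Minimal sketch: scoring a ranked retrieval run against a test collection.
# The tiny collection below is hypothetical; real web test collections hold
# millions of documents and pooled relevance judgments.

# Relevance judgments (qrels): query id -> documents judged relevant.
qrels = {
    "q1": {"doc3", "doc7", "doc9"},
    "q2": {"doc1"},
}

# A system's ranked results (run): query id -> documents in ranked order.
run = {
    "q1": ["doc7", "doc2", "doc3", "doc5", "doc9"],
    "q2": ["doc4", "doc1", "doc8"],
}

def precision_at_k(ranked, relevant, k):
    """Fraction of the top-k retrieved documents that are judged relevant."""
    return sum(1 for d in ranked[:k] if d in relevant) / k

for qid in sorted(qrels):
    print(f"{qid}: P@5 = {precision_at_k(run.get(qid, []), qrels[qid], 5):.2f}")
```

Two systems scored against the same judgments can be compared directly; the panel's concern is whether collections good enough for such comparisons can be built and shared at all.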

The goal of this panel is to discuss whether web collections and experiments can be improved to the point that they are at least as well understood as those in text retrieval and machine learning. This discussion has been ongoing in the retrieval research community, and a panel on the topic at WWW2002 will serve to inject the discussion back into the larger community. The questions the panel will be confronted with include:



N4 - XML and Databases

Moderator:

Panelists:

Synopsis:

Some of the topics that the panel will cover are:



N5 - Securing Web Services - Will the Pieces be Ready?

Moderator:

Panelists:

Synopsis:

The panel will engage in a thought-provoking discussion of the current and future state of Web Services security, addressing the following questions in particular:

The panelists will draw on their experience developing numerous network security standards, products, and applications, including DCE, SSL, SHTTP, Lotus Domino, and DARPA systems deployments, as well as the Web Services security standards XKMS, SAML, and XACML.



N6 - RDF Applications in the Real World

Moderator:

Panelists:

Synopsis:

The Resource Description Framework (RDF) is the World Wide Web Consortium's specification for defining machine-understandable metadata. In the long term, RDF will serve as the foundation for the Semantic Web, which requires next-generation, scalable, commercial-strength inference engines. However, it is possible to take immediate advantage of the RDF specification by using RDF models to build advanced Web services.
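To make the idea of machine-understandable metadata concrete, here is a minimal sketch using the Python rdflib library. The namespace, resource, and properties are purely hypothetical examples, not drawn from any panelist's system.

```python
# Tiny illustration of an RDF model, built with the rdflib library.
# The namespace, resource, and properties below are hypothetical examples.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DC, RDF

EX = Namespace("http://example.org/schema#")

g = Graph()
g.bind("dc", DC)
g.bind("ex", EX)

page = URIRef("http://example.org/whitepaper")
g.add((page, RDF.type, EX.Document))
g.add((page, DC.title, Literal("RDF Applications in the Real World")))
g.add((page, DC.creator, Literal("Example Author")))
g.add((page, EX.audience, Literal("developers")))

# Serialize the model so that other tools and agents can consume the metadata.
print(g.serialize(format="turtle"))
```

Because the statements are explicit subject-predicate-object triples, another application can query or merge this metadata without knowing anything about the page's layout.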

Early commercial applications that make use of machine-understandable metadata range from information retrieval to the Web-enabling of old-tech IBM 3270 sessions. Current developments include metadata-based Enterprise Application Integration (EAI) systems, data modeling solutions, and wireless applications. Machine-understandable metadata is emerging as a new foundation for component-based approaches to application development. Within the context of reusable distributed components, Web services represent the latest architectural advancement. RDF Servers enable these two concepts to be synthesized, providing powerful new mechanisms for quickly modeling, creating, and deploying complex applications that readily adapt to real-world needs.

The objective of the panel is to bring together researchers and early adopters from industry. The intent is to discuss the first real-world applications based on the RDF specification, to emerge with a better understanding of which applications are the most likely beneficiaries of RDF technology, and to gain additional insight into what (if anything) prevents wider and more rapid adoption of this technology.



N7 - Web Engineering

Moderator:

Panelists:

Synopsis:

Web Engineering deals with a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of Web-based applications, that is, with the application of engineering to Web-based software. The contributions to this emerging discipline have come from all over the world. They have ranged from theoretical and methodological papers to practical case studies of large-scale applications. Among the participants, there has been a general consensus that Web Engineering is the way forward if we are to avoid widespread problems arising from undisciplined and ill-maintained Web-based systems that are not planned for scalability, maintainability, and security.

Web Engineering as a way to develop Web sites and Web-based applications was first introduced at a one-day workshop at the WWW7 Conference in Brisbane in 1998. Since then, there have been similar workshops at each of the subsequent World Wide Web conferences, as well as two two-day workshops at the International Conference on Software Engineering (1999 and 2000). The Hawaii International Conference on System Sciences has hosted two mini-tracks in Web Engineering (in 2000 and 2001) and will host a third in 2002. WWW2002 has an alternate track in Web Engineering, and more tracks and conferences are in the offing for 2002. There is a book of edited papers on Web Engineering, a two-part special issue of IEEE Multimedia on Web Engineering (January and April 2001), and graduate courses have started to cover aspects of Web Engineering.

However, the adoption and development of Web Engineering do not seem to have been widespread, nor at a pace in keeping with the breakneck development of Web technologies themselves. There are many very noticeable points of discontinuity between technologies, tools, techniques, and methods. While technologies march on, tools struggle to keep up, and techniques and methods are left far behind or simply borrowed from other paradigms that did not have to reckon with the Web at all. Furthermore, there is serious resistance to changing and adopting better ways of developing Web applications.

The questions thus arise: Is Web Engineering being practiced? If not, why not? What have we learned since Web Engineering was first introduced? How do we go forward and cope with managing the complexity of Web applications? What are the issues and problems? What agenda do we draw up for the medium term?

In order to address these questions, the Panel will define Web Engineering in greater detail, discuss why it is needed, review what has been achieved, and then go on to try to arrive at consensual answers. The Panel will also be open to discussion of these issues by participants.



N9 - On Culture in a Worldwide Information Society

Moderator:

Panelists:

Synopsis:

The DOT Force (Digital Opportunity Task Force, http://www.dotforce.org/) is an international task force launched by the G8 at its Okinawa Summit in July 2000. The DOT Force goals are to enhance global understanding and consensus on the challenges and opportunities posed by information and communications technologies, and to propose practical ways to overcome the digital divide and make digital opportunities available to all. The DOT Force report and action plan were endorsed by the G8 leaders at the Genoa Summit in July 2001, and the governments of Italy (president of the G8 in 2001) and Canada (president in 2002) have launched a series of working groups, which are expected to formulate and initiate follow-up projects and activities contributing to the implementation of the nine action points of the Genoa Plan of Action (http://www.dotforce.org/reports/matrix.html). A stocktaking exercise will take place at the next G8 Summit (to be held in Kananaskis in July 2002).

In parallel, the United Nations officially launched an ICT Task Force in November 2001, whose main role is to advise the Secretary-General of the United Nations on coordinating UN work on digital divide issues (http://www.unicttaskforce.org). Like the DOT Force, the UN ICT Task Force includes representatives from governments, international organizations, private enterprises, and non-governmental organizations. It plans to meet twice a year and to assess progress on the implementation of the plan of action it adopted at its inaugural meeting.

The Future of Online Culture workshop will examine how the recommendations of a report prepared following WWW10 in Hong Kong can be adapted to complement the action points of the Genoa Plan of Action.



N10 - Device Independent Web Content: What is the Right Approach?

Moderator:

Panelists:

Synopsis:

Web developers are faced with numerous decisions when designing content that is intended to work both on PCs and new Web access devices such as mobile systems - for example:

Join our host Stephane Boyera (W3C) and a set of distinguished panelists to discuss the best way to create device-independent Web sites.
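To ground the discussion, one family of approaches can be sketched: serving different representations of the same content depending on what the requesting device says it will accept. The media types, markup, and helper function below are hypothetical simplifications, and the panel may well favour very different techniques, such as single-source authoring with device-independent markup.

```python
# Illustrative sketch only: choosing a representation of the same content
# based on the request's Accept header. The markup and fallback logic are
# simplified examples, not a recommended or complete solution.

def negotiate_representation(accept_header, content):
    """Return (media_type, body) for a device, based on its Accept header."""
    accepted = [part.split(";")[0].strip() for part in accept_header.split(",")]

    if "text/vnd.wap.wml" in accepted:
        # Legacy WAP phone: serve a tiny WML deck with just the headline.
        body = f"<wml><card><p>{content['title']}</p></card></wml>"
        return "text/vnd.wap.wml", body

    # Default: a full desktop HTML page.
    body = (f"<html><body><h1>{content['title']}</h1>"
            f"<p>{content['body']}</p></body></html>")
    return "text/html", body

page = {"title": "WWW2002 Panels", "body": "Device independence panel."}
print(negotiate_representation("text/vnd.wap.wml", page)[0])  # text/vnd.wap.wml
print(negotiate_representation("text/html, */*", page)[0])    # text/html
```

The open question for the panel is whether such per-device branching scales, or whether content should instead be authored once in a device-independent form and adapted automatically.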



N11 - The Semantic Grid: The Grid meets the Semantic Web

Moderator:

Panelists:

Synopsis:

"Grid computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high-performance orientation." (Foster et al, "Anatomy of the Grid"). The "Grid problem" is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources - virtual organizations.

Compare this with Tim Berners-Lee's description of the Semantic Web as "...an extension of the current Web in which information is given a well-defined meaning, better enabling computers and people to work in cooperation. It is the idea of having data on the Web defined and linked in a way that it can be used for more effective discovery, automation, integration and reuse across various applications. The Web can reach its full potential if it becomes a place where data can be processed by automated tools as well as people."

This raises some questions:

What is the relationship between Grid computing and the Web? For example, can we assume the Grid will scale in the same way?

Is there a major gap between current grid endeavours and the vision of a grid future in which there is a high degree of easy-to-use and seamless automation and in which there are flexible collaborations and computations on a global scale?

Is the Semantic Web necessary to bridge this practice-aspiration divide? If so, which bits?

The point of the panel is that the grid computing vision currently being described by Foster et al. (e.g. in the "Anatomy of the Grid" paper) probably needs the Semantic Web. But the grid computing and Semantic Web communities are disjoint (in fact, the Semantic Web community is itself fragmented), and it is necessary to build bridges. The conference is a unique opportunity to get all these communities together.

The panelists have been chosen carefully. Grid computing (Foster) needs to deal with large-scale content and metadata (Miller), with process-to-process service-oriented models/agency (Hendler), and potentially with ontologies, inference, and knowledge (Goble).



Last Reviewed: 5/8/02
its-conf@hawaii.edu