The DCE Web Toolkit: Enhancing WWW Protocols With Lower-Layer Services

Steve Lewontin, OSF Research Institute, Cambridge, MA, USA
stevel@osf.org http://riwww.osf.org/~stevel/
Abstract:
New WWW services may be created either by extending Web protocols or by adding services in a lower layer. The DCE Web toolkit demonstrates that a broad array of new services can be provided in a layer below HTTP. Toolkit services include security, naming, and a transport-independent communications interface. Web applications can take advantage of these services by carrying their current protocols, such as HTTP, over the toolkit layer. The toolkit provides our prototype Web implementation with many new features, including security and location-independent hyperlinks, without modifying the HTTP protocol and with only minor changes to Web applications.
Keywords:
protocol extension, HTTP, DCE, security, encryption, authentication, access control, name service, toolkit.

Introduction

When thinking about how to extend World Wide Web protocol functionality, it may help to ask (with apologies to Shakespeare), "To encapsulate or to be encapsulated? That is the question." The point is that there are at least two broadly different approaches to the problem: one is to extend HTTP and other Web protocols so that new functionality is encapsulated within them; the other is to add functionality at a lower layer which itself encapsulates the Web protocols. This paper reports on work to provide a broad set of new services to the Web based on the second approach.

At this point, most proposals to enhance HTTP take the first approach. Thus we have proposals to extend HTTP to add security, such as SHTTP [Res94] and Shen [Hal94], and to improve performance, such as HTTP-NG [Spe94b]. Perhaps the most general embodiment of this approach is Kristol's recent proposal of a general mechanism for wrapping and negotiation in HTTP [Kri94].

Proposals that take the opposite approach are also beginning to appear. Netscape's Secure Socket Layer (SSL) [Hic94] adds authentication and encryption to Web protocols in a layer between the application and TCP. In our work on the DCE Web [Lew94] we provide a broad range of new functionality in a layer between Web protocols and a variety of transports.

The DCE Toolkit Layer

In the DCE Web project we set out to demonstrate the benefits of applying sophisticated distributed computing technology to the World Wide Web. To construct the DCE Web, we created a layer of extended functionality--called the DCE Toolkit Layer--below HTTP. The toolkit is based on the OSF Distributed Computing Environment (DCE) [OSF92], which provides a suite of technologies for constructing secure distributed applications.

The toolkit includes communications services, a set of security services, an object and service binding model supported by a name service, transport connection caching and reuse, and a number of other features, all of which are based on underlying DCE technologies. The HTTP protocol is communicated over this layer, gaining access to the extended functionality without requiring any change to the protocol or to the client and server machinery that processes it. Because the layer itself is transport independent, it also makes it possible to communicate HTTP over transports other than TCP.

Our experience with the DCE Web convinces us that the approach of adding services to the Web in a layer below HTTP may be generally applicable. We believe that the toolkit layer approach has much to recommend it. Certainly, one of the great virtues of HTTP is its simplicity: the protocol is stateless, consisting of a single request-response pair for each object requested. Such simplicity has much to do with the ease with which the Web has proliferated. Because the toolkit layer communicates the protocol unchanged, this simplicity is retained.

A related advantage is that existing client and server protocol-processing machinery is retained, so that it is easy to port existing applications to take advantage of new services. For example, only a few lines of code in libwww need to be changed to create a functioning browser that uses the toolkit layer. Similarly, a server fully compatible with the new browser can be created by porting the unchanged protocol processing machinery of an existing server.

A third advantage is that it should be possible to communicate protocols other than HTTP over the toolkit layer so that they can also take advantage of toolkit facilities. Several protocols in addition to HTTP are widely used to access the worldwide internet, including FTP, telnet, Gopher, and others. It makes sense to ask whether the best way to provide new facilities for these protocols is to add each facility to each protocol separately (in the worst case creating n×n incompatible protocols). By adopting the layered approach, it is possible to provide the same new services to many protocols without modification of the protocols themselves.

The same argument can be applied to future protocols incorporated into the Web. It seems unlikely that, even with significant extensions to the current model, HTTP will be able to provide all the services likely to be demanded of the Web in the future, such as real-time multicasting of multimedia. It may be that a reasonable path for evolution of the Web would be to add specialized protocols, tuned for specific purposes, above a new layer of generic services that all protocols can share.

Compatibility

As with protocol extension, the toolkit layer does create incompatibility, since both clients and servers must communicate over the toolkit layer in order to take advantage of the new services. Old applications that do not make use of the toolkit layer are incompatible with new applications that do, although the locus of incompatibility is now moved from the application protocol itself to a lower layer. Such incompatibility is probably inevitable when adding new services to a protocol that are substantially different from those for which it was originally designed.

As in the case of protocol extension, such incompatibility can be dealt with by negotiation or other methods that allow new applications to continue to communicate using older protocols. In the DCE Web implementation, we deal with this by allowing clients to communicate either over the toolkit layer or via the traditional socket interface to TCP. Server compatibility is maintained by using gateways.

The Toolkit Implementation

The toolkit layer is implemented as a library to which existing applications (currently NCSA's Mosaic browser and HTTPD Web server) have been ported. The toolkit provides a communications interface that is largely transparent to application-level protocols. We have so far ported only HTTP, but the toolkit design is intended to be general enough to be used by other protocols as well. Figure 1 shows how the toolkit is located between the Web protocols and the underlying transport.

Figure 1: The DCE Web Toolkit Layer

The toolkit layer uses the DCE Remote Procedure Call (RPC) mechanism to transport application protocol messages [OSF93]. DCE RPC has direct access to a number of sophisticated services, including security and DCE naming, which in turn become available to the application-level protocol. We use DCE RPC because it provides a ready base of distributed computing technology. However, it is worth emphasizing that the toolkit layer approach is a general one, and that non-DCE implementations are possible.

In order to maintain the generality of the toolkit model, the toolkit insulates Web applications from DCE-specific mechanisms as much as possible. Clients and servers interface with the toolkit via an API that is analogous to, but simpler than, the socket interface they currently use. That is, clients make calls to establish a connection and then read and write their messages on file descriptors as usual. The connection interface is a good deal simpler with the toolkit, since the details of network programming are handled transparently by the toolkit. For example, servers need to make only two calls to the toolkit to set up all the state required to begin listening for requests.
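
To give a feel for the interface, the following sketch shows what server setup might look like in C. The names used here (dweb_server_init, dweb_listen, dweb_handle_t, and the header dce/dceweb.h) are hypothetical stand-ins for the toolkit API, not its actual signatures.

    /* Minimal sketch of the server side of the toolkit connection
       interface; all dweb_* names are illustrative only. */
    #include <dce/dceweb.h>            /* assumed toolkit header */

    /* Work routine supplied by the application: reads an HTTP request
       from the connection descriptor and writes the response back,
       just as it would on a socket descriptor. */
    void serve_request(int conn_fd)
    {
        /* ... HTTP protocol processing ... */
    }

    int main(void)
    {
        dweb_handle_t server;
        unsigned32    status;

        /* Call 1: register the server under a DCE name and let the
           toolkit set up transports, security, and daemon state. */
        dweb_server_init("/.../web.osf.org/subsys/WWW/web3",
                         serve_request, &server, &status);

        /* Call 2: begin listening; the toolkit dispatches each
           incoming request to serve_request() on its own thread. */
        dweb_listen(server, &status);
        return 0;
    }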

Toolkit Services

It is important to emphasize that the toolkit is not simply a new protocol layer. Rather, it is a protocol layer and a set of well-integrated services required to construct client and server applications. For example, the toolkit handles all the work of creating a server daemon and spawning new processes (or threads) to handle requests. The server application simply supplies the work routines for processing an application-level protocol, such as HTTP. What we are really proposing with the toolkit is a new, much higher-level model for constructing internet applications.

Some of the services provided by the toolkit, such as authentication and encryption, are transparently integrated into the underlying RPC protocol. Other services, such as naming and access control, support or depend on the RPC protocol but are not part of it. Naming, for example, is used by the RPC binding model, but is not required by it. Access control makes use of an RPC protocol facility (authenticated certificates of individual and group identity), but again, it is not part of the RPC protocol.

This means that applications get services from the toolkit layer in two ways. Services such as authentication and encryption are available via the RPC protocol itself, and applications gain access to them transparently when they use the toolkit communications interface. Other services, such as access control, are explicitly requested by applications via the toolkit's extended services API (represented by the black box in Figure 1). Naming falls somewhere in between: the DCE name services are available implicitly to applications using the toolkit connection interface, but they can also be accessed explicitly.

Of course, services accessed through the extended services API are not transparent to applications. The toolkit provides these interfaces at a high level, so that applications need to make only a few calls to use services such as access control. Nevertheless, the toolkit design recognizes that not all toolkit-layer services can be provided with the same degree of transparency.

The fundamental services provided by the toolkit layer include security, binding and naming, and a variety of other services. These are briefly described in the following sections.

Security

The toolkit layer provides a set of security-related services, including encryption and data integrity protection, authentication, a group identity mechanism, and access control.

Encryption, Data Integrity, and Authentication

The toolkit layer provides encryption based on the Data Encryption Standard (DES) private key mechanism. Because the DCE Web layer encapsulates the HTTP protocol, the entire HTTP protocol message can be encrypted, so that an intruder cannot even discover that an HTTP exchange is underway. DCE Web integrity protection is based on MD5 message digests (digital checksums). The secure RPC used by the DCE Web also provides strong protection against replay of messages. Toolkit layer authentication is based on DCE's private-key authentication, which is derived from Kerberos [Ste88]. A trusted third party maintains the secret keys of all security principals.
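
These protection levels come directly from DCE RPC. For instance, a binding handle can be annotated for packet privacy (DES encryption plus integrity) and DCE secret-key (Kerberos-derived) authentication with the standard DCE call shown below; exactly how the toolkit exposes this choice to applications is an implementation detail, so the snippet shows only the underlying DCE mechanism.

    #include <dce/rpc.h>

    /* Request DES encryption of all application data, integrity
       protection, Kerberos-derived secret-key authentication, and
       DCE (PAC-based) authorization on an existing binding handle. */
    void secure_binding(rpc_binding_handle_t binding,
                        unsigned_char_p_t    server_princ_name)
    {
        unsigned32 status;

        rpc_binding_set_auth_info(
            binding,
            server_princ_name,
            rpc_c_protect_level_pkt_privacy,  /* encrypt every packet   */
            rpc_c_authn_dce_secret,           /* DCE shared-secret auth */
            NULL,                             /* default login context  */
            rpc_c_authz_dce,                  /* deliver PACs to server */
            &status);
    }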

Note that toolkit encryption and authentication services are based on private keys and do not currently include public key mechanisms like those proposed for most Web security systems. The toolkit uses private key mechanisms because they are what the underlying DCE RPC currently provides. These mechanisms are well suited to security domains that are subject to coherent, centralized administration. Thus, the toolkit is well adapted to creating a secure enterprise-level Web where central administration of security databases is possible. DCE provides scaling mechanisms that potentially allow these private key mechanisms to be used over large and widely distributed domains.

However, public key mechanisms are obviously better suited for many purposes in the less structured environment of the World Wide Web. For example, they can be used for authentication, signatures, and electronic commerce between parties that are not part of any centrally administered domain. Thus they are well matched to the requirements of commerce on the Web. An important priority for the toolkit is therefore to add public key mechanisms in the future.

Groups and Access Control

The toolkit layer provides a sophisticated access control mechanism based on DCE Access Control Lists (ACLs). ACLs are lists of individual and group identities along with the access rights granted to each. Servers attach ACLs to each object they manage. Servers use ACLs to authorize access to objects based on both the individual and group identities of a calling client. Although access decisions are made at the application level, the access control mechanism is directly supported by the DCE RPC mechanism, which supplies authenticated certificates of individual and group identity with each RPC request.

To support ACL-based access control, DCE provides a distributed grouping mechanism. Groups are lists of users who usually share some set of access rights to a set of services or objects. In the DCE Web, for example, a group would typically be used to identify some set of users, such as the members of a project, who work with a specific set of Web documents. Groups are administered using the same trusted third party mechanism that is used for individual identities. This allows for consistent administration of groups across the Web, rather than leaving group administration up to each server.

Application use of the access control facility requires either that application-specific data be made available to the toolkit layer or that toolkit-layer data be made available to applications. For example, the DCE Web prototype uses an ACL model based on HTTP methods. Under this model, an ACL might list both GET and POST permissions for the user X_project_leader but only GET permission for members of the group X_project. Using such combinations of individual identities, group identities, and access methods, it is possible to tailor access to any Web object to an extremely fine degree.
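
An ACL for such a project page might therefore look roughly like the following. The syntax is illustrative only; DCE ACLs are structured data manipulated through ACL manager interfaces rather than a text format.

    user:X_project_leader    GET, POST
    group:X_project          GET
    any_other                (no access)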

Using this access control model, application-specific knowledge (the HTTP method) and toolkit-layer data (the authenticated principal and group identities of the caller) are required to make each access decision. The toolkit deals with this by providing a set of interfaces that applications can use to call back to the toolkit with application-specific data. The access control interface to the toolkit provides a mechanism for registering server-controlled objects and associating them with ACLs. Because applications must be able to customize the kinds of permissions they use (such as the HTTP method-based permissions used by the DCE Web), the toolkit also provides an interface for registering permissions sets. Finally, the toolkit provides an ACL evaluation engine, which compares certificates of individual and group identity with ACLs. When a server receives a request and wants to make an access control decision, it can call back to the toolkit with the required permissions (such as the HTTP method requested by the caller), and get an access decision. Applications that want to make use of the ACL mechanism can do so by making a very small number of calls to the toolkit.
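
In outline, a server might use this facility along the lines of the sketch below. As before, the dweb_* names and constants are hypothetical placeholders indicating the shape of the interface rather than its actual calls.

    /* Hypothetical sketch of a server using the toolkit ACL interface. */
    #include <dce/dceweb.h>            /* assumed toolkit header */

    static dweb_permset_t http_perms;

    void setup_access_control(unsigned32 *status)
    {
        /* Register the permission set this application uses --
           here, permissions named after the HTTP methods. */
        dweb_register_permset("GET,POST,PUT,DELETE", &http_perms, status);

        /* Register a server-managed object and associate an ACL with it. */
        dweb_register_object("/docs/report.html", "report_acl", status);
    }

    void handle_get(int conn_fd, unsigned32 *status)
    {
        /* Ask the ACL evaluation engine whether the caller's
           authenticated principal and group identities (delivered
           with the RPC request) grant GET on this object. */
        if (!dweb_check_access(conn_fd, "/docs/report.html",
                               DWEB_PERM_GET, status)) {
            send_http_error(conn_fd, 403);   /* hypothetical helper */
            return;
        }
        /* ... otherwise serve the document ... */
    }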

Binding and Naming

The toolkit layer provides for client-server binding using DCE name services. DCE provides a distributed name service divided into domains called cells. Each cell namespace is rooted in a global namespace (either DNS or X.500), and each cell provides a name service that can resolve names both in the local cell namespace and in other cell namespaces. A typical DCE name is /.../web.osf.org/subsys/WWW/web3, which represents a cell name (web.osf.org), rooted in DNS, and a multi-component name within that cell. The DCE name service is used for DCE RPC binding.

The RPC binding model allows clients to find servers based on the services and objects they export rather than their locations. Servers advertise their services and objects in the DCE namespace, where clients can look for them at binding time. With the name of an object or service it wants in hand, a client can ask the toolkit to find a server that advertises under that name and return a binding to it. This is implemented at a high level in the toolkit connection interface: the client simply passes in a name and gets back a connection to the appropriate server.
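
From the client's side, the whole binding step might reduce to something like the following fragment, where dweb_connect_by_name is a hypothetical stand-in for the toolkit's name-based connection call.

    #include <dce/dceweb.h>            /* assumed toolkit header */

    int connect_to_service(const char *dce_name)
    {
        int        conn_fd;
        unsigned32 status;

        /* Pass in the DCE name under which the object or service is
           advertised; the toolkit locates a server exporting that
           name and returns a connected file descriptor. */
        dweb_connect_by_name(dce_name, &conn_fd, &status);
        return conn_fd;
    }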

Name-based binding demonstrates how the toolkit layer can provide a significant new service to the Web with only minimal change to current applications. The Web implementation makes use of this facility by allowing URIs to include DCE names rather than the server address and pathname used in standard HTTP URLs. The libwww HTTP connection interface is then implemented using the toolkit's name-based connection interface (a minimal change of a few lines of code). This allows clients to do name-based rather than location-based lookups of Web objects. This is a major advantage since Web objects and servers can then be moved without causing links to become stale, as long as objects are always advertised under the same names.

Name Resolution

As part of the naming interface, the toolkit supports junctions between server-maintained namespaces and the DCE namespace. Servers do not typically export the name of every object they manage to the DCE namespace. Instead, most servers maintain their own namespaces of objects and export the root of that namespace to the name service. In the Web implementation, this means that servers export the root of their document namespace to the DCE name service. The location in the DCE namespace where a server exports the root of its private namespace is called a namespace junction, since that is where the two namespaces join.

The names used by the toolkit to look for bindings therefore typically contain both DCE namespace and server-private namespace components. For example, if a server manages an object called /docs/report.html in its own namespace, and exports the root of its namespace to the DCE name /.../www.osf.org/WWW/Web3, then the object's fully qualified DCE name is /.../www.osf.org/WWW/Web3/docs/report.html.

It is obvious that a client cannot determine the location of the junction point in such a name by simple inspection: there is no requirement that the server's private naming scheme use a syntax that distinguishes it from the DCE portion of the name. However, clients typically do need to be able to extract the server's object name. For example, in the Web implementation, this is the object name that is passed in an HTTP request.

The toolkit therefore provides an interface that clients can use to pass names to the DCE name service for resolution up to the junction point. The portion of the name after the junction point (the residual) is returned to the client so that it can be passed to the server as a server-specific object name. Thus, when a Web client asks the toolkit to resolve a link to the name /.../www.osf.org/WWW/Web3/docs/report.html, the toolkit returns the string /docs/report.html to be passed to the server in the HTTP request. This service is integrated into the toolkit's high-level connection interface.
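
A client-side sketch of this resolution step follows. The call dweb_resolve_name is hypothetical; here it is assumed to return both a connection to the server found at the junction and the residual portion of the name.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <dce/dceweb.h>            /* assumed toolkit header */

    void fetch(const char *dce_name)   /* e.g. "/.../www.osf.org/WWW/Web3/docs/report.html" */
    {
        int         conn_fd;
        char       *residual;          /* e.g. "/docs/report.html" */
        char        req[1024];
        unsigned32  status;

        /* Resolve up to the junction, bind to the server that exports
           the junction entry, and get back the server-private residual. */
        dweb_resolve_name(dce_name, &conn_fd, &residual, &status);

        /* The residual becomes the object name in the HTTP request line. */
        snprintf(req, sizeof req, "GET %s HTTP/1.0\r\n\r\n", residual);
        write(conn_fd, req, strlen(req));
    }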

URNs have been proposed as a general solution for the problem of persistent, location-independent naming in the Web [Sol94]. We are currently considering an implementation of URNs based on the toolkit's DCE name service facilities as described here. What our toolkit experience shows is that it is relatively simple to provide such a service to Web protocols using the toolkit approach.

Other Services

The toolkit layer provides many other facilities to client and server applications, including a transport-independent interface, threading, and secure remote server management. For example, the toolkit can provide the same semantics over any transport supported by DCE RPC, including both TCP and UDP. We currently operate the DCE Web over both transports. The underlying DCE technology also provides an operating system-independent threads package, which we have used to construct a prototype multi-threaded DCE Web server. The toolkit also provides mechanisms for secure remote management of servers. In the DCE Web prototype, Web objects can be registered and ACLs can be modified and set remotely in a secure fashion. We are convinced that such capabilities are essential to the management of even moderately large webs.

Results and Further Work

Our general goal for this work is to demonstrate the feasibility of the layered approach as a way to provide a broad range of additional functionality to the Web and other internet applications. So far, the toolkit meets this expectation. It has been extremely simple to add important new functionality, such as authentication, ACL-based access control, and name-based lookup, simply by porting existing Web client and server applications to the toolkit.

Porting existing applications has been straightforward. For example, we ported NCSA Mosaic to use the DCE Web protocol simply by modifying the http module (HTTP.c) of libwww to use toolkit connection and communication calls. This required adding or changing less than 70 lines of code. (We have also made changes to the Mosaic user interface to set and query toolkit security attributes, but none of these changes were required by the port to the toolkit.) Similarly, we were able to port NCSA httpd to the toolkit by adding or modifying about 50 lines of code.
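
The flavor of the libwww change is roughly the following: where HTTP.c previously opened a TCP connection to the host named in the URL, it instead asks the toolkit for a name-based connection and then writes the request unchanged. The sketch is schematic only; dweb_connect_by_name and dce_name_from_uri are the same kind of hypothetical names used earlier, and the actual modifications differ in detail.

    /* Before (schematically): resolve the host and connect a socket.
       sock = HTDoConnect(url, ...);
       NETWRITE(sock, request, strlen(request));                      */

    /* After: let the toolkit bind by DCE name; security and transport
       selection are handled below the application protocol. */
    dweb_connect_by_name(dce_name_from_uri(url), &sock, &status);
    NETWRITE(sock, request, strlen(request));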

So far, our work has emphasized ease of porting existing Web applications to the toolkit over performance. However, one goal has been to add toolkit functionality with minimal impact on performance. Measurement of our most efficient implementation of the DCE Web protocol showed performance to be within 8% of standard HTTP when using DCE name services but no authentication or other security. However, as we have added functionality to the toolkit we have noted some decline in performance, although we have not yet made extensive comparative performance measurements.

Obviously, some performance degradation is inevitable when adding new services such as authentication. However, we believe that our current prototype implementation of the toolkit still leaves much room for performance improvement. In particular, we believe that our port of NCSA httpd to the toolkit is quite inefficient due to incompatibility between the multi-threaded toolkit model and the one-process-per-request model used by httpd and most other Web servers. (The toolkit creates a new thread for each connection request, but each such thread must then encapsulate all processing for its connection in a separate process in order to use the existing httpd protocol-processing code, which is not thread safe.) It is possible that creating a highly efficient server will require significantly more work than we expended on the current httpd port to the toolkit. We believe the best results will be achieved by a truly multi-threaded toolkit server [Spe94a].

Toolkit performance may also be limited by the current capabilities of the underlying DCE RPC mechanism. We found it difficult to use RPC to efficiently reproduce the communications semantics that existing socket-based applications expect. To be as general as possible, the current toolkit implementation requires a minimum of four RPCs per connection, whereas we were able to implement a version of the DCE Web protocol specifically tuned to HTTP that required only one RPC per connection. In the next stage of our work we plan to do extensive performance measurements and tuning of the toolkit with the goal of increasing performance without sacrificing generality or robustness.

While we aim to improve performance, we also recognize that adapting client applications to the toolkit can potentially be made even simpler. We are currently developing a prototype toolkit client application that can be used as a proxy to permit any existing Web browser to communicate via the toolkit layer without the need to port browser code. The proxy application will provide both a communications interface to the toolkit layer and a user interface to toolkit security and other services.

As noted above, one capability that we would especially like to add to the toolkit is public key encryption. The current DCE private key model is well suited to enterprise-level security domains, such as corporate Webs, but it is not so well adapted to such activities as commerce on open networks. By adding public key mechanisms to the toolkit, we believe that it will become significantly more useful for many Web purposes.

Our long term goal is to develop a new model for internet application development that replaces the low-level socket interface with a high-level internet toolkit. This would include all the facilities in our prototype, but with significant extensions to the underlying RPC for performance and such services as multicasting and public key encryption. It would also include facilities to permit easy replication of services, an essential requirement for reliable Web applications. We believe that our experience with the DCE Web toolkit shows the viability of this approach.

References

[Hal94]
Phillip M. Hallam-Baker, "Shen: A Security Scheme for the World Wide Web", http://info.cern.ch/hypertext/WWW/Shen/ref/shen.html.
[Hic94]
Kipp E.B. Hickman, "The SSL Protocol", Netscape Communications Corp., Nov. 17, 1994, http://home.mcom.com/info/SSL.html.
[Kri94]
David M. Kristol, "A Proposed Extension Mechanism for HTTP", AT&T Bell Laboratories, 1994, http://www.research.att.com/~dmk/extend.txt.
[Lew94]
Steve Lewontin and Mary Ellen Zurko, "The DCE Web: Providing Authorization and Other Distributed Services to the World Wide Web", WWW Conference '94, http://www.ncsa.uiuc.edu/SDG/IT94/Proceedings/Security.
[OSF92]
Introduction to OSF DCE, Open Software Foundation, 1992.
[OSF93]
AES, Distributed Computing, RPC Volume, Open Software Foundation, 1993.
[Res94]
E. Rescorla and A. Schiffman, "The Secure HyperText Transfer Protocol", Internet Draft, Enterprise Integration Technologies, December 1994, http://www.commerce.net/information/standards/drafts/shttp.txt.
[Sol94]
K. Sollins, L. Masinter, "Requirements for Uniform Resource Names", Internet Draft, October 18, 1994, draft-ietf-uri-urn-req-01.txt.
[Spe94a]
Simon E. Spero, "MDMA - Multithreaded Daemon for Multimedia Access", Presented at WWW Conference '94. (Not currently available on line.)
[Spe94b]
Simon Spero, "HTTP-NG: Status Report". Distributed to the list www-talk, Nov. 20, 1994.
[Ste88]
J. Steiner, C. Neuman, J. Schiller, "Kerberos: An Authentication Service for Open Network Systems", Proc. of the USENIX Winter Conference, Dallas, TX, Feb. 1988.