Acceleration of Web Service Workflow Execution
through Edge Computing
Junichi Tatemura, Wang-Pin Hsiung, Wen-Syan Li
NEC Laboratories America, Inc.
10080 North Wolfe Road, Suite SW3-350 Cupertino, California 95014
USA
Email:{tatemura,whsiung,wen}@sv.nec-labs.com
Tel:408-863-6021 Fax:408-863-6099
Abstract:
Integrating Web services over the Internet makes it difficult to
achieve reliability in terms of availability and performance.
In particular, a WSDL request-response operation, which expects an
immediate response from remote services, is vulnerable to network
failure and latency. This paper proposes Overlay Web Service
Network (OWSN), a framework of integrated Web service management
and execution based on a CDN (Content Delivery Network). Integrated
services are described in a standard workflow language, deployed on
edge servers, and access other Web services through proxies on the
edge servers. These proxies, which may be generic cache proxies or
application-specific proxies, encapsulate
fault-tolerant/performance-conscious message handling between the
edge server and the original server so that the service integrator
can concentrate on business logic. The WS-Cache specification is
introduced so that cache proxies and service providers can
incorporate various caching techniques into Web services. The
Service Level Requirement specification is also introduced to
allow a service integrator to indicate preferences for multiple
service levels provided by the proxies. A J2EE-based system
architecture and a prototype system for case studies are
described.
Web services are loosely coupled software components which
communicate with each other over the Internet using open-standard
XML protocols such as SOAP. A service provider describes the
interface of each service component using WSDL (Web Service
Description Language) and publishes the service with a registry
based on a standard called UDDI (Universal Description, Discovery
and Integration). Businesses can expose their specific business
functions as Web services, and users can integrate these service
components into their business applications so that they can
concentrate on their core businesses. Web service standards are
designed to achieve interoperability of the service components and
agility of business integration. One expected emerging business is
a service intermediary, which aggregates service components and
provides value-added services for service consumers.
E-commerce sites can integrate other commerce services with their
own services and provide ``federated EC'' services to customers
(e.g. a real estate service provider can integrate mortgage
services and moving services as well as user identity and payment
services). ISPs can provide not only Internet accessibility but
also ``service accessibility'' by aggregating and integrating
service components. System integrators can integrate services from
multiple ASPs (Application Service Providers) and provide
customized outsourcing services for enterprises. As web services
are widely adopted, we will see the emergence of an integration tier
between component-service providers and integrated-service
consumers. Note that there are a service presentation tier and a
service integration tier, which provide view integration and
service integration, respectively. Information portal technologies
mainly focus on the service presentation tier: an EIP (Enterprise
Information Portal) only provides view integration based on
``portlets.'' To correlate multiple service functions with each
other, the service integration tier is required between portals and
service components. Although consumer portals such as Yahoo!
provide both view and service integration functions, services are
integrated in proprietary ways. Integrated Web services will be
described in an XML-based service composition language. At the time
of writing, there is no standard for Web service compositions
recommended by standardization organizations such as W3C and
OASIS. One specification expected to become the Web service
standard is BPEL4WS (Business Process Execution Language for Web
Services) [14] proposed by
Microsoft, IBM, and BEA Systems among other XML-based languages
such as WSCI (Web Service Choreography Interface) and BPML
(Business Process Modeling Language). BPEL4WS can define workflow
of integrated Web services which consist of multiple WSDL
operations. One issue with integrated Web services is the
difficulty of achieving reliability, in terms of availability and
performance, since service components are distributed over the
Internet. In particular, synchronous messaging, which expects an
immediate response from remote services, is vulnerable to network
failure and latency. In WSDL, synchronous messaging to remote
services is represented as a ``request-response'' operation, which
is typically realized as SOAP-RPC or other RPC-like messages.
Although RPC-like messaging is often a natural and easy way to
integrate multiple Web services, response time and availability of
the integrated services are easily degraded by one of the
synchronous service calls. To improve availability and performance,
a service integrator could incorporate various asynchronous
execution techniques such as data caching, prefetching, and
replication. As a result, however, the workflow will become far
more complicated than the original business process and will cause
higher development and maintenance costs. Integration of
distributed Web services needs a simple but reliable way such that:
- a service integrator can concentrate on business logic to
create and deploy integrated Web service workflows.
- a service integrator does not have to be aware of
fault-tolerance or performance-improving protocols, only of service
levels such as availability and performance.
- integrated services have end-to-end fault tolerance.
In this paper we propose a framework of integrated Web service
management and execution, called Overlay Web Service Network
(OWSN), based on a CDN (Content Delivery Network). Integrated
services are described as standard workflows (BPEL4WS) and deployed
on edge servers. Component services are accessed through proxies on
the edge servers. These proxies may be either generic cache proxies
or application-specific proxies deployed by the component service
providers. The proxies encapsulate
fault-tolerant/performance-conscious message handling between the
edge server and the original server so that a service integrator
can concentrate on business logic. We introduce the WS-Cache
specification so that cache proxies and service providers can
incorporate various caching techniques into Web services. We also
introduce the Service Level Requirement specification, which allows
a service integrator to indicate preferences for multiple service
levels provided by the proxies. We describe a J2EE-based system
architecture and a prototype system for case studies.
In the Web content delivery context, research and development have
been done on integrating components into a piece of content
on edge servers. Edge Side Includes (ESI) [7] is a markup language used to define Web
page components for dynamic assembly and delivery of Web
applications at the edge of the Internet. The ESI technology lets a
content provider break a dynamic Web page into fragments with
different cacheability. The edge server then needs to retrieve only
non-cacheable or expired fragments from the original servers. Our
OWSN framework includes a similar content caching technology as a
part of workflow execution: We have designed WS-Cache specification
to bring extensibility to content caching intermediaries based on
the Web services architecture. There have been various studies on
general code execution on edge or proxy servers. Mobile code and
mobile agent paradigms have been proposed for effective and
reliable computation on distributed networks [11] [12]. A lot of research has been done on
Active Networks [10] that
incorporate general purpose computation into network nodes. Content
Services Network [13] has an
application proxy server as a building component that performs
value-added services between a content provider and an end-user.
Several specifications have been proposed to extend the
functionality of proxy and edge servers. ICAP (Internet Content
Adaptation Protocol) (www.i-cap.org) [8] allows the proxy server to modify requests
from clients and responses from Web servers in order to serve
various value-added services such as content filtering, virus
scanning, and format translation. IETF OPES (Open Pluggable Edge
Services) working group is chartered to define a framework and
protocols to both authorize and invoke distributed application
services deployed on proxy servers [9]. To improve performance and reliability of
web applications that provide dynamic web content, a multi-tier
architecture is adopted by large-scale web sites. Caching and
replication of data (or objects) are applied to each tier (i.e.,
web servers, application servers, and databases) [21][20]. For recent web sites which provide
personalized services, reusing and managing intermediary content is
important since the final HTML page is rarely reusable. The J2EE
(Java 2 Platform Enterprise Edition) architecture supports
management of intermediary content: JSPs and servlets provide
facilities for caching page fragments (HTML or XML). EJBs
(Enterprise JavaBeans) encapsulate data synchronization between
objects and databases. The OWSN framework can be regarded as an
extension of the multi-tier architecture from a web site to wide
area networks: As an application server manages content
integration, an edge server manages service integration. As EJBs
bridge between the application server and databases, proxies bridge
between the edge server and the sources of component services. For
better performance and reliability on Web content delivery over the
Internet, various technologies have been intensively studied (e.g.,
cache management, database replication, materialized views, server
replication, and overlay routing). [2] reports that combining several
technologies is important to improve end-to-end WAN service
availability. The OWSN framework is designed to incorporate such
technologies into integrated Web service execution.
Based on the flexibility of the Web service architecture, various
technologies are being developed to enhance the ability of Web service
intermediaries. WS-Routing [18] is a specification that defines a
SOAP message path. Intermediaries can route messages between
service user and provider in a flexible way in order to enhance
performance, availability, and security. WSIF (Web Service
Invocation Framework)[19] is a
Java API for invoking Web services without hardcoding protocol
binding and location information. The API provides a way of coding
WSDL operation invocations at the PortType level. At execution
time, an invocation can be dynamically bound to SOAP, IIOP, or any
other protocols based on WSDL binding information. If applications
are built based on WSIF, various value-added proxies can be plugged
in between applications dynamically. These enabling technologies
can be incorporated into the OWSN to enhance its flexibility
although the current architecture does not utilize them. Service
level agreement (SLA) of Web services is important especially when
component services are integrated with workflows. The proposal of
WSFL (Web Services Flow Language) [15], which is a predecessor of
BPEL4WS, describes the need for the definition of a business context
between a workflow (WSFL) and a component service (WSDL) and
anticipates an appropriate WSEL (Web Service Endpoint Language)
will be defined for this purpose. Although a standard Web service
SLA framework has yet to come at the time of writing, intensive
work on Web service SLA management has recently been done
[23] [25] [26]. In this paper we do not define or
assume any specific service level agreement framework. Instead, we
provide service integrators with a specification for service level
preference.
For reliable integrated Web services, transaction control over
multiple service providers is crucial. The OASIS Business
Transactions Technical Committee develops the Business Transaction
Protocol (BTP) specification [16] that
enables coordination between XML-based services offered by
autonomous organizations. WS-Transaction [17] has been also proposed for
transaction management for Web services. The BPEL4WS business
process language for Web services also has a compensation handling
facility [14], which is
essential for long-living transactions over federated services. The
focus of this paper does not include transaction management issues.
Our framework, however, should be kept consistent with ongoing
standardization of Web services transaction management. For the
service presentation tier, the OASIS WSIA (Web Services for
Interactive Applications) and WSRP (Web Services for Remote
Portals) Technical Committees collaboratively develop a framework
for the user-facing part of Web services. Other issues for
integrated Web services on which we do not focus in this paper
include security related features such as single sign-on,
authentication, signature, and encryption. The OASIS Web Services
Security Technical Committee develops a framework that incorporates
XML-based security technologies into Web services.
We propose Overlay Web Service Network (OWSN), an approach for
integrated Web services based on a CDN (Content Delivery Network).
Service integration workflows are deployed on edge servers located
close to service consumers. A component service provider can
utilize a message cache proxy on the edge servers or deploy its own
application-specific proxy. These proxies, which we call ``service
frontend modules,'' are responsible for managing communication through
the Internet between edge servers and the original service sources,
which we call ``service backend modules.'' The frontend and backend
modules can use private protocols suitable for the application and
provide a certain service level (e.g., response time) to the
integrator's workflow. The workflow does not have to be aware of
such application-specific protocols. Instead, it can invoke WSDL
request-response operations with a specification of service level
requirements, including conditions of failure and priorities among
multiple service level metrics (e.g., response time and data
freshness).
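The paper leaves the arbitration mechanism to the frontend modules; as a minimal sketch (in Python rather than the paper's J2EE setting, with all class and parameter names hypothetical), a frontend proxy could trade data freshness against response time as follows:

```python
import time

class ServiceLevelRequirement:
    """Hypothetical service level requirement: the integrator states how
    stale cached data may be and how long an invocation may take (seconds)."""
    def __init__(self, max_staleness, max_response_time):
        self.max_staleness = max_staleness
        self.max_response_time = max_response_time

class FrontendProxy:
    """Sketch of a service frontend module on an edge server."""
    def __init__(self, backend_fetch, backend_latency_estimate):
        self.backend_fetch = backend_fetch   # WAN call to the backend module
        self.latency = backend_latency_estimate
        self.cache = {}                      # input message -> (output, timestamp)

    def invoke(self, request, slr):
        entry = self.cache.get(request)
        age = time.time() - entry[1] if entry else float("inf")
        if age <= slr.max_staleness:
            return entry[0]                  # fresh enough: answer locally
        if self.latency <= slr.max_response_time:
            output = self.backend_fetch(request)
            self.cache[request] = (output, time.time())
            return output
        if entry is not None:
            return entry[0]                  # degrade: stale data beats a fault
        raise TimeoutError("service level requirement cannot be met")
```

Here a stale cache entry is preferred over a fault when the backend cannot meet the response-time bound; a real frontend would take such priorities from the <sla:serviceLevelRequirement> specification rather than hard-coding them.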
As illustrated in Figure 1, we
assume the following roles are played by business organizations
involved in integrated Web service delivery from component services
to service end users: A service provider exposes service
components. It deploys service frontend modules (such as a cache
proxy) on a service manager's servers. It manages execution of
service backend modules which communicate with the service frontend
modules. A service integrator creates integrated Web
services by combining service components. It deploys Web service
workflows on a service manager's servers. A service manager
hosts service integrators' workflows and service providers'
frontend modules. It manages execution of those modules, monitors
service levels, and provides accounting services to both service
providers and integrators. With UDDI service registries, it may
also work as a broker between service providers and service
integrators and between service integrators and service consumers.
A service consumer finds an integrated service provider
through the service manager's registry, requests integrated
services from the service integrator, and is bound to the service
manager's service endpoint (i.e., an edge server).
Figure 1: Business Roles
for Integrated Web Services
Note that some of these roles may be played by the same business
organization. An ISP may act as both a manager and an integrator to
serve consumers. An enterprise may integrate services for its own
consumption and outsource their management.
Figure 2 shows how integrated
services are delivered to service consumers in the OWSN framework.
Service integration workflows and service frontend modules are
deployed on edge servers and bound to each other according to WSDL
descriptions.
Figure 2: OWSN: CDN
Approach of Integrated Web Service Execution
On edge servers, a workflow invokes component services by sending
messages to frontend modules. For a service expected to give an
immediate response, the workflow will call the frontend
synchronously (i.e., a WSDL request-response operation).
Synchronous messaging with the frontend module is less vulnerable
than with the original server since the messaging is done inside
server clusters managed by a single service manager. If the
frontend does not require communication with the backend during
execution of the workflow request, the response time will be kept
short. In addition, when components are deployed on the same
platform (e.g., J2EE), they can use platform-dependent messaging
(e.g., Java RMI) for better performance. Between the edge server
and the original server of the service provider, a frontend module
and a backend module communicate with each other. The service
frontend can be either (1) a generic cache proxy, or (2) an
application-specific module. To minimize effects from network
latency and disconnection, the frontend and backend modules utilize
various communication techniques such as data prefetching, push
cache, or application-specific protocols. Note that the service
integrator does not have to be aware of the messaging protocols
between the frontend and backend modules. Instead, the service
integrator can specify its preferences for multiple service levels
provided by the frontend modules.
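As an illustrative sketch of this division of labor (Python, with hypothetical names), a push-style frontend might answer the workflow's synchronous calls locally while the backend refreshes its store over a private protocol:

```python
class PushCacheFrontend:
    """Frontend module sketch: answers the workflow's synchronous calls from
    a local store that the backend keeps up to date over a private channel."""
    def __init__(self, backend_fetch):
        self.backend_fetch = backend_fetch   # WAN fallback for cold keys
        self.store = {}

    def push(self, key, value):
        # Invoked by the backend module when the source data changes, so the
        # edge copy is refreshed before the next workflow request arrives.
        self.store[key] = value

    def invoke(self, key):
        # Called synchronously by the workflow (e.g., via Java RMI in the
        # J2EE architecture); a warm key involves no WAN round trip.
        if key not in self.store:
            self.store[key] = self.backend_fetch(key)
        return self.store[key]
```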
Figure 3: System
architecture
Between the service consumer and edge servers, common CDN
technologies [22] can be
applied for better response time and availability: A request from
the service consumer is redirected to the best available edge
server determined based on network latency and server load.
Figure 3 shows an architecture
for integrated Web service delivery based on J2EE (Java 2 Platform
Enterprise Edition). On the edge server, a J2EE application server
manages a workflow engine and EJB (Enterprise Java Bean)
components. A message cache proxy and application-specific modules
are implemented as EJB components. Web service integration
workflows are described in BPEL4WS and deployed on the workflow
engine. Note that the architecture does not assume any specific
platform for service consumer and original service sources (i.e.,
backend modules).
Workflow
We suppose that workflows are described in BPEL4WS, although our
basic approach is applicable to other business process description
languages. BPEL4WS is an XML-based specification that defines a
business process as a workflow which consists of ``activities''.
The <receive> and <reply> activities are provided for handling a
service request from a client. Invocations of other Web services
are done by the <invoke> activity. Control flows can be described
with various structured activities such as <sequence> (sequential
steps), <switch> (conditional branches), <while> (loops), and
<flow> (parallel threads of sequences). Data handling is done by
the <assign> activity. Exception handling is also supported with
<throw> and <catch>. The interfaces between communication partners
are defined in WSDL. The <receive> and <reply> activities
correspond to WSDL operations of the integrated Web service, which
are exposed to service consumers. The <invoke> activity corresponds
to a WSDL operation of a component service. The structure of
<invoke> is as follows. We introduce an optional
<sla:serviceLevelRequirement> entity, inserted as an extensibility
element, to specify the required service level for this Web service
invocation. Its requirement attribute refers to a service level
requirement specification whose details are described in Section 5.
<invoke partner="ncname" portType="qname"
operation="ncname" inputContainer="ncname"
outputContainer="ncname">
<sla:serviceLevelRequirement requirement="qname"/>
<catch faultName="qname" faultContainer="ncname">
[activity]
</catch>
</invoke>
With the combination of partner, portType, and operation, the
system can identify the specific WSDL operation to invoke. The
attributes inputContainer and outputContainer indicate variables
which contain the sending and receiving data, respectively (for a
one-way operation, outputContainer is omitted). The <catch>
construct handles faults that occur during execution of the WSDL
operation. As for the translation from external Web service calls
to internal Java RMI calls to proxies, we can consider two
approaches: (1) BPEL4WS workflow translation and (2) WSDL
translation. The applicability of these approaches depends on the
implementation of a specific BPEL4WS platform. In the workflow
translation approach, the above <invoke> activity can be translated
to the following structure, which calls internal program
components.
<scope>
<faultHandlers>
<catch faultName="qname" faultContainer="ncname">
[activity]
</catch>
</faultHandlers>
<sequence>
<ext:callWSProxy
partner="ncname" portType="qname"
operation="ncname" inputContainer="ncname"
outputContainer="ncname"
serviceLevel="qname"/>
</sequence>
</scope>
Here we introduce <ext:callWSProxy>, a platform-specific activity
that lets the workflow call service frontend modules deployed as
EJB components. The execution of this activity should throw the
same faults the corresponding <invoke> activity throws. The
<faultHandlers> construct handles these faults equivalently to the
original <invoke> activity. Note that <scope> is a BPEL4WS
construct that provides nested contexts of execution and controls
the scope of fault handling. The serviceLevel attribute is optional
and refers to the service level requirement specified in
<sla:serviceLevelRequirement>. The alternative to the workflow
translation approach is to provide translated WSDLs that contain
binding and service definitions specifying that operations are
implemented as Java RMI calls to internal proxies. This approach is
more consistent with the WSIF approach [19] and can be taken when
the platform supports the Java RMI binding of WSDL PortType and
handles the service level requirement.
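As a rough illustration of the workflow translation approach, the rewrite can be sketched over an XML tree (a Python sketch using ElementTree; namespace prefixes such as bpel:, ext:, and sla: are omitted, so this is not a conforming BPEL4WS processor):

```python
import xml.etree.ElementTree as ET

def translate_invoke(invoke):
    """Rewrite a BPEL4WS <invoke> activity into a <scope> whose <sequence>
    calls the internal proxy via the platform-specific callWSProxy activity.
    Namespace prefixes are omitted for brevity."""
    scope = ET.Element("scope")
    # Each <catch> of the <invoke> moves into the scope's <faultHandlers>,
    # so faults thrown by callWSProxy are handled equivalently.
    catches = invoke.findall("catch")
    if catches:
        ET.SubElement(scope, "faultHandlers").extend(catches)
    seq = ET.SubElement(scope, "sequence")
    call = ET.SubElement(seq, "callWSProxy")
    for attr in ("partner", "portType", "operation",
                 "inputContainer", "outputContainer"):
        if attr in invoke.attrib:
            call.set(attr, invoke.get(attr))
    # The serviceLevelRequirement extensibility element becomes the
    # optional serviceLevel attribute of callWSProxy.
    slr = invoke.find("serviceLevelRequirement")
    if slr is not None:
        call.set("serviceLevel", slr.get("requirement"))
    return scope
```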
Three alternative levels of Web service message caching can be
considered: (1) the WSDL operation level, (2) the SOAP message
level, and (3) the HTTP level. Our cache proxy is designed to
handle WSDL input-output operations. When an operation is cacheable
(i.e., read-only), the proxy can cache the output message with the
input message as the cache key. A SOAP message is less reusable
than the corresponding WSDL-level message since it often includes
extra information, such as routing (e.g., WS-Routing), in its
header. The existing HTTP caching facility is even less applicable
than SOAP message caching since SOAP messaging is often implemented
using the HTTP POST method. Another reason we take WSDL-level
caching is that SOAP-over-HTTP is not the only protocol that
realizes Web service operations. To have messages cached, the
service provider does not have to deploy any application modules on
the edge server. Instead, it should provide information on cache
control to the edge server. We have designed a cache control
specification, called WS-Cache, to let the service provider specify
which operations are cacheable and what kind of cache control is
supported by the service provider. The following factors should be
considered in designing a WSDL-level caching framework: (1) it
should retain the flexibility of WSDL and related Web services
features, such as flexibility in binding; (2) it should make use of
Web services features to enhance the extensibility of the cache
control architecture so that various advanced cache control
mechanisms can be applied on demand. Figure 4 illustrates the
framework of WSDL-level cache control operations. The provider is
the source of the main services to which cache control is applied.
The main services are defined as the port type A and implemented as
the port port1. The requester is a cache proxy which communicates
with the provider on behalf of actual service users. Two types of
cache control operations can be realized within the Web services
framework:
- Embedded operations. Cache control operations are
attached to messages of the WSDL operations to be cached. A SOAP
message which carries a WSDL message can contain such cache control
operations in its header. The provider's port (port1) should handle
cache control operations as well as the main WSDL operations
defined in the port type A.
- External operations. Cache control operations are
implemented as additional services provided separately from WSDL
operations to be cached. There are two types of such services:
inbound services and outbound services, which are provided by the
provider (i.e., the source) and the requester (i.e., the proxy),
respectively. Additional WSDL specifications are required to define
these services (the port types B and C).
By combining these two types of operations, various cache control
technologies can be applied to cache WSDL operations.
Figure 4: WSDL level Cache
Control
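A minimal sketch of such WSDL-operation-level caching (Python; the message representation is hypothetical, modeling each WSDL message as a dictionary of parts):

```python
class OperationCache:
    """Sketch of WSDL-operation-level caching: for an operation declared
    cacheable (read-only) in WS-Cache, the output message is stored under a
    canonical form of the input message."""
    def __init__(self, cacheable_ops):
        self.cacheable = set(cacheable_ops)   # {(port, operation), ...}
        self.entries = {}

    def invoke(self, port, operation, input_msg, call_backend):
        if (port, operation) not in self.cacheable:
            return call_backend(input_msg)    # non-cacheable: always forward
        # Canonicalize the message parts so logically equal inputs share a
        # key (a real proxy must canonicalize the XML message itself).
        key = (port, operation, tuple(sorted(input_msg.items())))
        if key not in self.entries:
            self.entries[key] = call_backend(input_msg)
        return self.entries[key]
```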
Since the cache control architecture is extensible, a service
provider and a proxy have to negotiate with each other and find a
specific cache control service that is supported by both of them.
Based on the Web services framework, a cache control service for a
specific service is established between the proxy and the service
provider as follows:
- Cache proxies publish cache control types. A cache control type
consists of (1) definition of embedded operations, (2) references
to inbound/outbound cache control services (which are defined in
WSDL), and (3) supported binding patterns. Cache control types can
be defined in WS-Cache specification and published to service
providers.
- A service provider defines and publishes cache controllers
applicable to its services. A controller is an instance of a cache
control type, which defines the provider-side implementation of the
cache control. The controller definition includes (1) binding of
embedded operations, (2) ports for inbound services, and (3) supported
binding patterns of outbound services. The service provider also
defines cacheability of operations of the service port. For each
operation, it specifies a list of controllers, from which a proxy
can choose one for cache control. The provider publishes these
definitions as a WS-Cache specification as well as the WSDL
specification of the corresponding services.
- A cache proxy selects a controller for each operation. Given
the WSDL and WS-Cache specifications of the provider's services, a
proxy can manage cache control of WSDL operations. To utilize
outbound operations, the proxy needs to notify the provider of the
corresponding port information through an embedded operation.
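On the proxy side, the selection step amounts to intersecting the provider's preference-ordered controller list with the control types the proxy implements; a sketch (Python, with an illustrative dictionary shape rather than the actual WS-Cache syntax):

```python
def select_controller(cachecontrol_list, supported_types):
    """Pick the first controller (the provider's preference order) whose
    control type the proxy supports; None means the operation is treated
    as uncacheable. The dictionary shape is illustrative, not normative."""
    for ctrl in cachecontrol_list:
        if ctrl["controlType"] in supported_types:
            return ctrl
    return None

# Example: the provider prefers an invalidation controller but also offers
# a simple TTL controller; this particular proxy only implements TTL control.
controllers = [
    {"name": "invalidator", "controlType": "tns:InvalidationControl"},
    {"name": "ttl",         "controlType": "tns:SimpleCacheControl"},
]
chosen = select_controller(controllers, {"tns:SimpleCacheControl"})
```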
The following description is the overall structure of WS-Cache,
which implements the WSDL level caching architecture described
above:
<wscache name="ncname"? targetNamespace="uri"?
xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
xmlns:xsd="http://www.w3.org/2000/10/XMLSchema">
<import namespace="uri" location="uri"/>*
<types>?
<xsd:schema ... />*
</types>
<controlType name="ncname">*
<operation name="ncname">*
<input type="qname"/>?
<output type="qname"/>?
</operation>
<inbound name="ncname" portType="qname"/>*
<outbound name="ncname" portType="qname"/>*
</controlType>
<controlBinding name="ncname" controlType="qname">*
<binding name="ncname">*
<!-- extensibility element -->*
</binding>
<inbound type="qname">*
<binding type="qname"/>*
</inbound>
<outbound type="qname">*
<binding type="qname"/>*
</outbound>
</controlBinding>
<controller name="ncname" controlType="qname"
binding="qname"?>*
<inbound type="qname">*
<port name="ncname" binding="qname">...</port>+
</inbound>
<outbound type="qname">*
<binding type="qname"/>+
</outbound>
</controller>
<cacheable port="qname" operation="qname">*
<cachecontrol controller="qname">*
<default>?
<operation type="qname">*
<input> ... </input>?
<output> ... </output>?
</operation>
</default>
</cachecontrol>
</cacheable>
</wscache>
where the following characters are appended to elements and
attributes: ``?'' (0 or 1 occurrence), ``*'' (0 or more
occurrences), and ``+'' (1 or more occurrences). Typically, the
above WS-Cache specification is separated into two files given by
two partners. One is from a cache proxy and includes the
<controlType> entity, which defines a type of cache controllers,
and the <controlBinding> entity, which defines a set of binding
patterns supported by the proxy. The other is from a service
provider and includes <controller>, which defines controller
instances, and <cacheable>, which correlates the controller
instances with the WSDL operations of the service provider's ports.
In the <controlType> specification, the <operation> entities define
embedded operations. The <input> and <output> entities specify data
attached to an input (request) message and an output (response)
message of a WSDL operation, respectively. Each input or output of
an embedded operation is a single part of XML data, for which the
type attribute refers to a type definition (XML Schema). The
<inbound> and <outbound> entities indicate the WSDL port types of
the cache control ports for inbound and outbound cache control
services. The port types are defined in external WSDL
specifications. The <controlBinding> entity defines a set of
binding patterns the proxy supports. The <binding> entities specify
binding patterns applied to the embedded operations. The
extensibility element contains binding-specific information. For
example, <soap:binding> specifies that the embedded operations are
bound to SOAP messages (i.e., realized as SOAP header entities).
Since embedded operations are attached to WSDL messages of the main
operation, the binding applied to the embedded operations should be
consistent with the binding of the main operation. Multiple binding
patterns can be specified by describing multiple <binding> entities
so that an implementer of a cache controller (i.e., a service
provider) can choose one that is consistent with the main
operation. The <inbound> and <outbound> entities include sets of
<binding> entities that refer to candidate binding patterns. The
binding patterns are defined in external WSDL specifications. The
<controller> entity specifies an actual instance of a cache control
type implemented by the service provider. Its binding attribute
specifies the binding pattern of embedded operations, chosen from
the <binding>s in <controlBinding>. In <inbound>, the <port> entity
specifies a specific binding and locations (URIs) and indicates the
actual ports for cache control provided by the service provider.
The <outbound> entity contains a set of binding patterns for the
corresponding outbound operations. It should be a subset of the
binding patterns defined in the outbound entity of controlBinding
so that both the provider and the proxy support these binding
patterns. The <cacheable> entity specifies a cacheable WSDL
operation with the combination of port and operation. It includes a
list of <cachecontrol>s which specify the cache controllers
supported for this operation. The cache proxy will choose one of
the controllers based on the proxy's facility and expected
performance. The service provider can express its preference among
controllers by the order of the list. The optional <default> entity
defines default data for embedded operations. By defining defaults,
the service provider can omit cache control information from the
SOAP message when the information is fixed for a particular
operation.
Example 1. A cache control mechanism similar to the HTTP cache
control can be specified as:
<wscache
targetNamespace="http://cache.com/cachecontrol"
xmlns:tns="http://cache.com/cachecontrol"
xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
xmlns:xsd="http://www.w3.org/2000/10/XMLSchema">
<types><xsd:schema>
<xsd:element name="TTLNotify"... />
<xsd:element name="ValidationResponse" ... />
<xsd:element name="ValidationRequest" ... />
...
</xsd:schema></types>
<controlType name="SimpleCacheControl">
<operation name="TTLControl">
<output type="tns:TTLNotify"/>
</operation>
<operation name="ValidationControl">
<input type="tns:ValidationRequest"/>
<output type="tns:ValidationResponse"/>
</operation>
</controlType>
<controlBinding name="SimpleCacheControlBinding"
controlType="tns:SimpleCacheControl">
<binding name="SimpleCacheSOAPBinding">
<soap:binding/>
</binding>
</controlBinding>
</wscache>
It specifies that a response (i.e., output) SOAP message can have a
cache-related header entity referred to as TTLNotify
.
The overall structure of TTLNotify
is as follows (the
exact schema is omitted in the above definition due to space
limitations).
<TTLNotify xmlns=...>
<LastModified>[time stamp]</LastModified>?
<Expires>[time stamp]</Expires>?
<MaxAge>[seconds]</MaxAge>?
<ETag>[identifier]</ETag>?
</TTLNotify>
The entities LastModified, Expires, and MaxAge work similarly to the Last-Modified and Expires headers of HTTP 1.0 and the max-age directive of the HTTP 1.1 Cache-Control header, respectively. The ETag entity is similar to HTTP 1.1 ETags: unique identifiers that are generated by the server and change every time the object does. The embedded operation named ValidationControl is designed for cache validation, similar to the HTTP If-Modified-Since header.
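As an illustration, the freshness decision a cache proxy might derive from these TTLNotify fields can be sketched as follows (a minimal model; the dictionary representation and function names are ours, not part of the WS-Cache specification):

```python
from datetime import datetime, timedelta

def is_fresh(entry, now):
    """Decide whether a cached response is still fresh from its
    TTLNotify data. Precedence mirrors HTTP: an explicit Expires
    wins; otherwise MaxAge is counted from when the entry was stored."""
    expires = entry.get("Expires")
    if expires is not None:
        return now < expires
    max_age = entry.get("MaxAge")
    if max_age is not None:
        return now - entry["stored_at"] < timedelta(seconds=max_age)
    return False  # no TTL information: treat as stale

def needs_validation(entry):
    """A stale entry carrying an ETag can be revalidated through the
    ValidationControl operation instead of being refetched."""
    return entry.get("ETag") is not None

# Example: an entry cached at 12:00 with a 10-minute MaxAge.
stored = datetime(2003, 5, 13, 12, 0, 0)
entry = {"stored_at": stored, "MaxAge": 600, "ETag": "v42"}
print(is_fresh(entry, stored + timedelta(minutes=5)))   # True
print(is_fresh(entry, stored + timedelta(minutes=15)))  # False
print(needs_validation(entry))                          # True
```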
Example 2 Advanced cache control mechanisms such as
invalidation and push cache can be implemented using inbound and
outbound operations. For example, a cache invalidation protocol can
be implemented as follows:
- An embedded operation for invalidation requests. The proxy attaches a request to an input message of the main operation. The request contains information on the port provided by the proxy for receiving invalidation notifications. The service provider returns a response attached to the corresponding output message of the main operation. The response contains acceptance (or denial) of the request and the key ID for invalidation.
- An outbound operation for invalidation. The service provider sends a message to notify the proxy that the data indicated by the key ID is stale.
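The proxy side of this invalidation protocol can be sketched as follows (an illustrative model; the class, method names, and callback URI are ours, not part of any specification):

```python
class InvalidationCache:
    """Sketch of the proxy side of the invalidation protocol above."""

    def __init__(self):
        self.entries = {}  # key ID -> cached output message
        # Hypothetical URI the proxy advertises for invalidation callbacks.
        self.callback_port = "http://edge.example/invalidate"

    def on_response(self, key_id, accepted, output):
        # Embedded-operation response: the provider returned the key ID
        # for invalidation along with the main output message.
        if accepted:
            self.entries[key_id] = output

    def on_invalidate(self, key_id):
        # Outbound operation: the provider notifies that the data
        # identified by key_id is stale; drop it from the cache.
        self.entries.pop(key_id, None)

    def lookup(self, key_id):
        return self.entries.get(key_id)

cache = InvalidationCache()
cache.on_response("movie-123", accepted=True, output="<ranking .../>")
print(cache.lookup("movie-123"))  # <ranking .../>
cache.on_invalidate("movie-123")
print(cache.lookup("movie-123"))  # None
```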
An application-specific proxy has two APIs: a public API, which is
published as a WSDL specification to service integrators, and a
private API, which is used to communicate with the backend modules.
Various proxy patterns can be considered, including the following three:
- Filtering-based proxies: One of the most obvious
applications which can be deployed effectively is a content
analysis/filtering application (e.g., virus filtering, intrusion
detection, business intelligence functions, language translation,
and map and drive routing service). Since filtering tasks are
usually independent of each other, the edge-side module rarely
needs communication with the backend during execution. A private
API will be used only when the backend occasionally sends update
information (e.g., new virus patterns).
- Replication-based proxies: For an operation which is cacheable (read-only) but has a low cache hit ratio, the service provider can deploy application modules with a replicated database. Its private API will be used for synchronizing the replicated database content.
- Resource-distribution-based proxies: For some
read-write (non-cacheable) operations, resource distribution or
task distribution approach can be applied. The backend server
proactively (and speculatively) assigns some portion of resources
to edge-side application modules so that they can work
asynchronously from each other. This approach is applicable to applications that have to manage supply and consumption of resources, typically when the total amount of a resource is limited (e.g., selling merchandise in stock) or a unique number has to be attached to each content copy (e.g., ticketing, distribution of copyrighted digital content).
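The resource-distribution pattern can be sketched as follows (a deliberately simplified model; the class names and the batched asynchronous flush are illustrative assumptions, not the paper's implementation):

```python
class InventoryProxy:
    """Sketch of a resource-distribution-based proxy: the backend
    pre-assigns a block of inventory so purchases can complete at the
    edge without a synchronous round trip to the source server."""

    def __init__(self, reserved):
        self.reserved = reserved   # units pre-assigned by the backend
        self.pending_orders = []   # completed orders awaiting forwarding

    def purchase(self, order_id):
        if self.reserved == 0:
            return False           # must fall back to the source server
        self.reserved -= 1
        self.pending_orders.append(order_id)
        return True

    def flush(self, backend):
        """Asynchronously report completed purchases to the backend."""
        for order in self.pending_orders:
            backend.record(order)
        self.pending_orders.clear()

class Backend:
    def __init__(self):
        self.sold = []
    def record(self, order):
        self.sold.append(order)

proxy = InventoryProxy(reserved=2)
print(proxy.purchase("o1"), proxy.purchase("o2"), proxy.purchase("o3"))
# True True False
backend = Backend()
proxy.flush(backend)
print(backend.sold)  # ['o1', 'o2']
```

The essential property is that a successful `purchase` never oversells: only pre-reserved units are sold locally, and exhaustion forces a synchronous fallback.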
Service Level Management
When the integration workflow invokes a WSDL request-response
operation, it can indicate a service level requirement to the proxy.
In this paper we do not define or assume any specific service level
agreement framework. Instead, we focus on specification for service
level preference including tradeoff between competing service level
metrics (such as speed and accuracy) so that a proxy can flexibly
manage execution for better performance and reliability. We expect
a general service level agreement framework for Web services will
be standardized and our specification can be incorporated as an
extensible element of the standard. Given priorities among
multiple service level metrics, the system can manage graceful
degradation of services: Even when overload or faults of subsystems
make the system unable to provide the perfect service, it continues
to operate by providing a reduced (but still acceptable) level of
service rather than failing completely. The graceful degradation
facility is critical for delivering high availability in Web
services. Cache management, for example, has two competing service
level metrics: response time and data freshness. When the requested data is in the cache but stale, the cache manager must decide whether to use the cached data or retrieve new data from the original source. When the response time of the original source is currently longer than the timeout due to network congestion or server overload, the cache manager may use the cached data even if it is stale. The priority between service level metrics can differ across situations even for the same application: when the
service user is browsing lots of items offered in an auction
service, response time is more important than freshness of the
price information. When he or she is bidding for a particular item,
up-to-date price information is required. In this paper we
introduce two specifications: the Service Level Description and the
Service Level Requirement. The former is used by a service provider
to specify service level metrics available for each WSDL operation.
The latter is used by a service integrator to specify requirements
of service level for each WSDL operation. The structure of
<serviceLevelDescription>
is as follows:
<serviceLevelDescription name="ncname"
targetNamespace="uri">
<serviceType name="ncname">*
<metrics name="ncname" unit="name"?
min="name"? max="name"?/>+
</serviceType>
<serviceLevel serviceType="qname"
portType="qname" operation="qname"/>*
</serviceLevelDescription>
The <serviceLevelRequirement>
specification
allows the service integrator to define a service level function
which maps a set of service metrics (provided in the service level description) into
an overall service level
. The service provider's
proxy manages tradeoff between service level metrics based on this
function. The following is an example of service level requirement
specifications:
<serviceLevelRequirement
name="ncname"
portType="qname" operation="qname"
serviceType="qname">
<serviceLevelMatrix>
<level value="1">
<cond metrics="sld:Freshness" max="0"/>
<cond metrics="sld:ResponseTime" max="100"/>
</level>
<level value="0.8">
<cond metrics="sld:Freshness"
min="0" max="1000"/>
<cond metrics="sld:ResponseTime" max="100"/>
</level>
<level value="0.6">
<cond metrics="sld:Freshness"
min="0" max="1000"/>
<cond metrics="sld:ResponseTime"
min="100" max="500"/>
</level>
....
</serviceLevelMatrix>
<fault level="0.2" faultName="qname" />
</serviceLevelRequirement>
where <serviceLevelMatrix> maps service level metrics (i.e., sld:Freshness and sld:ResponseTime) into the overall service level. The <fault> entity specifies the service level at which the execution is considered failed and the proxy returns a WSDL fault message indicated by the faultName attribute. As
described in Section 4.2, each
<invoke>
activity can include
<sla:serviceLevelRequirement requirement="qname"/>
to specify a service level requirement indicated with the
requirement
attribute. Thus, the integrator can
specify a different requirement for each <invoke> that invokes the same WSDL operation. Given a service level function l, θ-availability can be defined as the probability A_θ = Pr(l ≥ θ).
When a service level agreement is considered, the service provider and consumer can refer to various levels of availability such as 1.0-availability, 0.8-availability, and θf-availability (where θf is the service level threshold below which the operation is considered faulted). After a certain service level agreement is reached between the integrator (workflow) and the provider (proxy), a reward function r(l) (i.e., a mapping from service level l to reward) can be defined based on the service provider's reward. Note that penalties for poor service levels can be represented as negative values of the reward function. Since the service level l is a random variable due to nondeterministic behavior of the total system, the management strategy of the proxy can be formalized as maximization of the expected reward E[r(l)].
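As an illustration, the service level matrix from the example above and θ-availability can be modeled in a few lines (the list-of-pairs matrix representation and the function names are ours, not part of any specification):

```python
def service_level(matrix, metrics):
    """Evaluate a <serviceLevelMatrix>: return the value of the first
    <level> whose conditions all hold. The matrix is modeled as a list
    of (value, conditions) pairs in document order; each condition is
    a (min, max) bound on a metric."""
    for value, conds in matrix:
        if all(lo <= metrics[name] <= hi for name, (lo, hi) in conds.items()):
            return value
    return 0.0

# The matrix from the example above (bounds given as (min, max)).
matrix = [
    (1.0, {"Freshness": (0, 0),    "ResponseTime": (0, 100)}),
    (0.8, {"Freshness": (0, 1000), "ResponseTime": (0, 100)}),
    (0.6, {"Freshness": (0, 1000), "ResponseTime": (100, 500)}),
]

print(service_level(matrix, {"Freshness": 0,   "ResponseTime": 50}))   # 1.0
print(service_level(matrix, {"Freshness": 500, "ResponseTime": 50}))   # 0.8
print(service_level(matrix, {"Freshness": 500, "ResponseTime": 300}))  # 0.6

def availability(levels, theta):
    """theta-availability: the fraction of invocations whose service
    level is at least theta (an empirical estimate of the probability)."""
    return sum(1 for l in levels if l >= theta) / len(levels)

observed = [1.0, 0.8, 0.8, 0.6, 0.2]
print(availability(observed, 0.8))  # 0.6
```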
The latency-recency profile [6] introduced cache management based on a client's preference for response time and freshness. In that approach, a parameterized score function decides whether cached data is used or new data is downloaded from the remote server. In our case, such preference is given as a more generic function: for response time t and data freshness a, the matrix <serviceLevelMatrix> defines the overall service level l(t, a). If the probability density function p(t) of the source server's response time is known, the decision can be made as follows: if

  ∫₀^τ l(t, 0) p(t) dt + (1 − ∫₀^τ p(t) dt) l(τ, a + τ) > l(0, a),

the proxy should request new data from the source server with the timeout set to τ. Otherwise it should use the cached data, whose freshness is a. If the request to the source server times out, it uses the cached data, whose freshness has then grown to a + τ. Given a <serviceLevelMatrix> and the source server latency histogram (measured or estimated), the decision table can be pre-computed for any data freshness a. When a reward function r is defined for the service level, l in the above inequality is replaced with r(l) to maximize the service reward.
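The refetch-or-serve-stale decision described here can be sketched as follows (an illustrative model: the step-shaped service level function and the latency histogram are hypothetical, and the discrete histogram stands in for the response time distribution):

```python
def expected_fetch_level(level, histogram, timeout, age):
    """Expected service level of refetching, given a latency histogram
    as (latency, probability) pairs. level(t, a) maps response time t
    and freshness a to a service level; on timeout, the stale cached
    copy (its age grown by the timeout) is served instead."""
    e = 0.0
    timed_out = 0.0
    for latency, p in histogram:
        if latency <= timeout:
            e += p * level(latency, 0)   # fresh data arrives in time
        else:
            timed_out += p
    return e + timed_out * level(timeout, age + timeout)

def should_refetch(level, histogram, timeout, age):
    """Refetch only if it beats serving the cached copy immediately."""
    return expected_fetch_level(level, histogram, timeout, age) > level(0, age)

# Hypothetical service level: drops with response time and staleness.
def level(t, a):
    if a > 300:
        return 0.2
    return 1.0 if t <= 0.1 else (0.6 if t <= 0.5 else 0.2)

hist = [(0.05, 0.5), (0.3, 0.3), (2.0, 0.2)]  # measured latencies (s)
print(should_refetch(level, hist, timeout=0.5, age=60))   # False: cache wins
print(should_refetch(level, hist, timeout=0.5, age=400))  # True: too stale
```

Evaluating `should_refetch` over a grid of ages yields exactly the pre-computed decision table mentioned above.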
Traditional cache replacement strategies, such as LRU and LFU, are usually designed to maximize the overall cache hit ratio, since every hit of cached data is assumed to have equal value. In our framework, cache replacement should instead improve the overall service level: the management task is to maximize the expected service level gain from keeping cached data in a limited memory space. Since it is impractical to predict the future access pattern and estimate the expectation value for each cached item at each point in time, simplified algorithms should be developed. As a preliminary study, we have developed a cache replacement control algorithm that improves the service level in terms of response time. Section 7 describes the algorithm and preliminary experiment results.
In this section, we demonstrate the feasibility of our framework through a case study. We discuss (1) the applicability of service frontend modules, and (2) the validity of the WS-Cache design and its benefits for service integrators (i.e., the ease of describing workflows and service level requirements).
As a case study, we have implemented a prototype system for federated EC, which aggregates multiple EC services. Sellers of particular products provide catalog and purchasing facilities as Web services. A service integrator correlates products from multiple sellers and integrates them with other value-added service components such as content filtering. Our prototype of a federated e-commerce system integrates five Web services: it provides users with a single access point to retrieve information related to a movie category, including top movies, reviews, theater and ticket information, and DVDs of similar movies. Figure 5 shows a console window for experiments. On the upper left-hand side, the console provides a menu to select movie categories and location information. On the lower left-hand side, the monitor shows the workflow and the call status of each Web service, such as response time and whether there is a cache hit. On the right of the window, the output of the integrated services, including various information related to the movie of the user's interest, is visualized.
Figure 5: A Federated E-commerce System based on Integrated Web Services
In the rest of this subsection, we discuss the applicability of service frontend modules (proxies) to various component services in the federated EC case. Table 1 summarizes the mapping between applicable proxy types and component services. Among these services, we have implemented a catalog integration service in order to demonstrate a cache proxy.
Table 1: Applying proxies to component EC services.

proxy type                  | component services
----------------------------|--------------------------------------
cache proxy                 | catalog integration
filtering-based             | personalization, content filtering
resource-distribution-based | ticketing, purchasing, advertisement
no proxy applied            | user authentication, billing, payment
Cache proxies can be applied to catalog integration. The main part of federated EC is aggregating merchandise information from multiple EC services and providing an integrated catalog customized to the service consumers. Even when the integrated catalog is too personalized to reuse for others, each component retrieved from the source is often reusable. The components have various lifetimes: a large portion of the information on merchandise is static, while inventory information is dynamic. For some services, such as auctions, price information can be quite dynamic.
Filtering-based application proxies can be applied to service
components such as personalization and content filtering. Many
modern Web applications utilize personalization engines which
encapsulate decision making processes based on personalization
logic. Likewise, Web service integration workflows may utilize
``personalization services'' instead of coding personalization
logic into the workflows. A rule engine can be deployed as an
application-specific proxy, and rules can be managed through
private APIs between frontend and backend modules. Content
filtering, a service to filter out content according to clients'
policies, is a typical value-added service provided by
intermediaries [8]. A filtering
module can be deployed as an application-specific proxy. A
filtering policy can be given directly by the workflow or be
downloaded from a WSDL port of the client (which is cacheable). We
can consider applying resource-distribution-based proxies to ticketing and purchasing. When a service provider is selling a
limited number of items, proxies and the backend should manage
inventory information to avoid overselling the items. Each proxy reserves some portion of the inventory so that it can assure that a user can purchase the item without sending a synchronous message to the source server. When the user purchases the item, the proxy asynchronously sends a purchase order to the source server. One of
the difficult issues in implementing such a proxy is that its
action should be consistent with Web services' transaction
management such as BPEL4WS's compensation handler. Simple content
caching is not enough for advertisement delivery since it requires
measurement and management of the influence on viewers. There are
suppliers and consumers of advertisement: an ad supplier offers a
limited number of advertisement exposures, and an ad consumer
receives a limited number of advertisements which are relevant to
his or her interest. Therefore, the advertisement service should
manage the inventory of advertisement tasks. It can utilize proxies
similar to ones for ticketing and purchasing. It is hard to apply
proxies to component services such as user authentication, billing,
and payment. Since they handle information which corresponds to
individual users, it is hard to cache data for reuse or distribute
resources (e.g., available credits) among resource-distribution
proxies. Moreover, caching and reusing user information for
multiple sessions may not be allowed for security and privacy
reasons. One possible solution for performance and availability
issues of such services is that an ISP that hosts workflows also
provides authentication and billing services. In such cases,
messages between workflows and authentication/billing services are
locally managed by a single organization. In fact, most ISPs
already have platforms for such user management tasks although they
may not provide Web services interfaces.
Catalog Integration
In this section, we discuss the applicability of cache proxies based on WS-Cache to catalog integration. We have also conducted experiments on cache management in this setting, described in Section 7. In our example, the workflow integrates five different services (five WSDL operations) which provide movie-related information: movie ranking, movie reviews, movie information, theater/ticket information, and DVD information. All the WSDL operations the workflow invokes are information retrieval services, which are cacheable. These operations, however, differ in content lifetime and cache hit ratio. Moreover, the service sources may support different levels of cache control: some service sources may not support any cache control facility. The workflow has different data freshness requirements for different content services. Table 2 shows these differences among the operations in our case study.
Table 2: Cacheability of component services

service                    | change frequency                             | provider's support | requester's requirement
---------------------------|----------------------------------------------|--------------------|------------------------
Category/Ranking           | regularly changed                            | TTL (1 week)       | -
Movie Information          | may be changed                               | TTL (1 week)       | -
DVD Information            | may be changed                               | -                  | 3 days
Reviews                    | changed when a customer posts a new review   | TTL (1 hour)       | 2 months
Theater/Ticket Information | may be changed when a customer buys a ticket | TTL (0-5 min)      | 1 hour for browsing / 0 sec for buying
Suppose the movie ranking data is updated weekly. Since the
service provider knows when its output data becomes stale, it can
utilize the SimpleCacheControl
cache controller
effectively. A SOAP message which contains a WSDL output of this
operation will include a TTLNotify
entity in its
header:
<S:Envelope xmlns:S=...>
<S:Header>
<c:TTLNotify
xmlns:c="http://cache.com/cachecontrol">
<c:LastModified>
2003-05-13T00:00:00Z</c:LastModified>
<c:Expires>2003-05-20T00:00:00Z</c:Expires>
</c:TTLNotify>
...
</S:Header>
<S:Body> ... </S:Body>
</S:Envelope>
For the DVD information service, we use the existing Web services
from Amazon.com. It supports the DirectorSearchRequest
operation which receives a director's name and returns product
information related to the director. Note that the Amazon.com Web
services are not cache-aware. Without modifying the existing Web
services implementation, Amazon.com can support TTL-based message
caching by providing the following WS-Cache specification. Although
a SOAP header for the output of DirectorSearchRequest carries no cache-related information, the data in <default> is used as if it were in the SOAP header.
<wscache targetNamespace="urn:amazonWSCache"
xmlns:tns="urn:amazonWSCache"
xmlns:amz="urn:PI/DevCentral/SoapService"
xmlns:ctl="http://cache.com/cachecontrol"
xmlns="http://cacheportal.com/wscache/">
<controller name="amazonDataCache"
controlType="ctl:SimpleCacheControl"
binding="ctl:SimpleCacheSOAPBinding"/>
<cacheable port="amz:AmazonSearchPort"
operation="amz:DirectorSearchRequest">
<cachecontrol controller="tns:amazonDataCache">
<default>
<operation type="ctl:TTLControl">
<output>
<ctl:MaxAge>36000</ctl:MaxAge>
</output>
</operation>
</default>
</cachecontrol>
</cacheable>
</wscache>
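How a proxy might apply such <default> data can be illustrated with a small sketch (modeling header data as dictionaries is our simplification, not the prototype's implementation):

```python
def effective_cache_control(soap_header, wscache_default):
    """Sketch of the <default> mechanism: if the SOAP response carries
    no cache-control header entry, the proxy falls back to the default
    data from the WS-Cache description, as if it were in the header."""
    return dict(soap_header) if soap_header else dict(wscache_default)

# Amazon's responses carry no TTLNotify header, so the proxy uses the
# MaxAge declared in the <default> entity above.
default = {"MaxAge": 36000}
print(effective_cache_control(None, default))            # {'MaxAge': 36000}
print(effective_cache_control({"MaxAge": 60}, default))  # {'MaxAge': 60}
```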
Given the WS-Cache description of Amazon.com Web services, the
cache proxy can offer QoS enhancement of the services by publishing
the following service level description to service integrators.
<serviceLevelDescription>
<serviceType name="ContentResponse">
<metrics name="Freshness"
unit="second" min="0"/>
<metrics name="ResponseTime"
unit="second" min="0"/>
</serviceType>
<serviceLevel serviceType="tns:ContentResponse"
portType="amz:AmazonSearchPort"
operation="amz:DirectorSearchRequest"/>
</serviceLevelDescription>
In a BPEL4WS workflow, each <invoke>
can
indicate its service level requirement with a reference to a
specific requirement:
<sla:serviceLevelRequirement
requirement="req:PreferResponseMode"/>
For a ticket information retrieval operation, the workflow can specify two different requirements (prioritizing response time or freshness) for two different points of invocation (when browsing or when starting a purchase order workflow).
Cache Management: Preliminary Experiment Results
We have developed a response time gain (RTG) cache replacement
algorithm that takes into consideration (1) user access patterns,
(2) page invalidation pattern, (3) temporal locality of the
requests; and (4) performance gain of a cache hit. The caching
priority of each page is re-calculated periodically. In the current
implementation, the priority is re-calculated every minute. Note
that the frequency of re-calculation does have an impact on the
cache hit rate. Potentially, the more often the caching priorities
are re-calculated, the higher are the cache hit rates. The
frequency of re-calculation should be dynamically adjusted by
considering the trade-off between the benefit of higher hit rates
and the additional cost incurred due to frequent re-calculations.
The access rate and the invalidation rate are the access count and the invalidation count within a time period. The caching priority of a page p during a time period t, priority_p(t), is calculated from the response time gain for a cache hit (G_p), the access rate (AR_p), and the invalidation rate (IR_p) as

  priority_p(t) = G_p × ( Σ_{k≤t} α^{t−k} AR_p(k) ) / ( Σ_{k≤t} α^{t−k} IR_p(k) )

where α is the temporal decay factor whose value is between 0 and 1. A value of 1 for α makes the system treat all access patterns equally, while a value of 0 makes the system consider only the access patterns during the current time period. In the experiments, α is set to a fixed value. The intuition behind this formula is that it estimates the average number of accesses to the page between any two successive invalidations; the higher this number, the larger the benefit of keeping the page in the cache. When two Web service results have the same priority, we select the content which yields a higher response time gain for a cache hit. After we calculate the caching priority of each page, we calculate the caching priority of each operation by aggregating the access rate and invalidation rate over all results belonging to the same operation. Consequently, we are able to cache only a small number of Web service results while maintaining a high hit rate and a high performance gain.
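The priority computation can be sketched as follows (a reconstruction from the description above: the decayed-rate ratio and the names G_p, AR_p, IR_p are our reading of the formula, not necessarily the published one):

```python
def caching_priority(gain, access_counts, invalidation_counts, alpha):
    """Sketch of the RTG caching priority: the response-time gain for a
    hit times the temporally decayed access rate over the decayed
    invalidation rate, i.e. an estimate of the accesses between two
    successive invalidations. Counts are per time period, most recent
    last; alpha in (0, 1] is the temporal decay factor."""
    n = len(access_counts)
    acc = sum(alpha ** (n - 1 - k) * access_counts[k] for k in range(n))
    inv = sum(alpha ** (n - 1 - k) * invalidation_counts[k] for k in range(n))
    return gain * acc / max(inv, 1e-9)  # guard against zero invalidations

# A page whose hit saves 2 s, accessed often and invalidated rarely ...
hot = caching_priority(2.0, [10, 12, 9], [1, 0, 1], alpha=0.5)
# ... versus one invalidated in nearly every period.
churny = caching_priority(2.0, [10, 12, 9], [2, 2, 2], alpha=0.5)
print(hot > churny)  # True: the rarely invalidated page wins the cache slot
```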
Based on this prototype we have conducted a series of experiments
to evaluate the effectiveness and performance gains of the proposed
Web Service Overlay Network framework. The total number of WSDL operation results is 6000, distributed as follows: Category/Ranking: 500; Theater/Ticket information: 2000; Movie review: 1500; Movie information: 500; and DVD information: 1500. Figure 6 shows the user request distribution over movie categories, which follows Zipf's law. The cache size is set to 600, 1200, and 1800 result sets (i.e., 10%, 20%, and 30% of the total number of all possible Web service result sets). The network and process latency of Web service response times is distributed from 1 second to 3 seconds.
Figure 6: User request category distribution
The first experiment measures the response time gains achieved through Web service caching in the integration tier. We set the network and process latency for all Web services to 2 seconds and vary the cache size from 600 result sets to 1800 result sets (i.e., 10% to 30%). Figures 7 and 8 show the experiment results. Our observations are as follows:
- Our cache management algorithm (RTG) consistently outperforms all the other algorithms. The LFU algorithm does perform better than the LRU algorithm (as indicated by most studies). When the cache size is relatively small, RTG performs much better than the other algorithms.
- When the cache size is 10%, the hit rate of our algorithm is 25% higher than that of the LFU algorithm, while the response time of our algorithm is 50% lower than that of the LFU algorithm.
We also vary the network/process latency setting and observe the response time. The experiment results are shown in Figure 9. Again, our algorithm performs better than the two other algorithms, and as the network/process latency increases, the benefit of the Web Service Overlay Network becomes more significant. As the experiment results show, the Web Service Overlay Network framework can accelerate Web service response time by up to 5 times in our evaluation.
Figure 7: Cache hit rates for different cache sizes
Figure 8: Average response time for different cache sizes
Figure 9: Average response time for configurations with various network/process latencies (cache size = 20%)
We proposed Overlay Web Service Network, a framework of integrated
Web service management and execution based on a CDN (Content
Delivery Network). Integrated services are described in a standard
workflow language, deployed on edge servers, and access other Web
services through proxies on the edge servers. These proxies, which
may be generic cache proxies or application-specific proxies,
encapsulate fault-tolerant or performance-conscious message
handling between the edge server and the original server so that
the service integrator can concentrate on business logic. We
proposed the WS-Cache specification so that cache proxies and
service providers can incorporate various caching techniques into
Web services. We also introduced the Service Level Requirement
specification in order to allow service integrators to indicate preferences among multiple service levels provided by the proxies.
- 1
- Bharat Chandra, Mike Dahlin, Lei Gao, Amjad-Ali Khoja, Amol
Nayate, Asim Razzaq, Anil Sewani. Resource Management for Scalable
Disconnected Access to Web Services, WWW10, pp. 245-256, 2001.
- 2
- Bharat Chandra, Mike Dahlin, Lei Gao, and Amol Nayate.
End-to-end WAN Service Availability. In Proceedings of the Third
USENIX Symposium on Internet Technologies and Systems, 2001.
- 3
- Anindya Datta, Kaushik Dutta, Krithi Ramamritham, Helen Thomas,
and Debra VanderMeer. Dynamic Content Acceleration: A Caching
Solution to Enable Scalable Dynamic Web Page Generation, ACM SIGMOD
2001, p. 616, 2001.
- 4
- Bestavros, A., "Speculative Data Dissemination and Service to
Reduce Server Load, Network Traffic and Service Time for
Distributed Information Systems", in Proceedings of 1996
International Conference on Data Engineering (ICDE '96), New
Orleans, Louisiana, March 1996, pp. 180-189.
- 5
- Gwertzman, J. and Seltzer, M., The case for geographical push
caching. in HotOS ' 95: The Fifth IEEE Workshop on Hot Topics in
Operating Systems, (Washington, 1995).
- 6
- Laura Bright and Louiqa Raschid. Using Latency-Recency Profiles for Data Delivery on the Web. In Proc. of the 28th VLDB Conference (VLDB 2002), 2002.
- 7
- Mark Tsimelzon, Bill Weihl, and Larry Jacobs, ESI Language
Specification 1.0, http://www.esi.org/language_spec_1-0.html,
2001.
- 8
- The ICAP Forum. Internet Content Adaptation Protocol (ICAP).
version 1.01 http://www.i-cap.org/docs/icap_whitepaper_v1-01.pdf,
2001
- 9
- Abbie Barbir, et al. An Architecture for Open Pluggable Edge
Services, IETF Internet Drafts, draft-ietf-opes-architecture-03,
August 2, 2002
- 10
- David L. Tennenhouse, Jonathan M. Smith, W. David Sincoskie,
David J. Wetherall, Gary J. Minden. A Survey of Active Network
Research, IEEE Communications Magazine, 35(1):80-86, Jan.
1997.
- 11
- A. Carzaniga, G. Picco, and G. Vigna, Designing Distributed
Applications with Mobile Code Paradigms, Proc. of the 19th
International Conference on Software Engineering (ICSE'97), pp.
22-32, Boston, May 1997.
- 12
- Carlo Ghezzi and Giovanni Vigna. Mobile Code Paradigms and
Technologies: A Case Study. Proc. of the 1st International Workshop
on Mobile Agents (MA'97), LNCS 1219, pp. 39-49. Springer,
1997.
- 13
- Wei-Ying Ma, Bo Shen, and Jack Brassil, Content Services
Network: The Architecture and Protocols, In Proc. of the 6th Intl
Workshop on Web Caching and Content Distribution (WCW'01), Boston,
MA, June 2001.
- 14
- F. Curbera, Y. Goland, J. Klein, F. Leymann, D. Roller, and S.
Weerawarana. Business Process Execution Language for Web Services,
Version 1.0. July 2002.
- 15
- F. Leymann. Web Services Flow Language (WSFL 1.0). IBM Software
Group. May 2001.
- 16
- OASIS Business Transactions Technical Committee. Business
Transaction Protocol. An OASIS Committee Specification Version 1.0,
June 2002.
- 17
- F. Cabrera, et al. Web Services Transaction (WS-Transaction),
August 9, 2002.
- 18
- Henrik Frystyk Nielsen and Satish Thatte. Web Services Routing
Protocol (WS-Routing) October 23, 2001.
- 19
- Matthew J. Duftler, Nirmal K. Mukhi, Aleksander Slominski, and
Sanjiva Weerawarana. Web Services Invocation Framework (WSIF).
OOPSLA 2001 Workshop on Object-Oriented Web Services, October
2001.
- 20
- Wen-Syan Li, Wang-Pin Hsiung, Dmitri V. Kalashnikov, Radu Sion, Oliver Po, Divyakant Agrawal, and K. Selçuk Candan.
Issues and Evaluations of Caching Solutions for Web Application
Acceleration. Proc. of the 28th VLDB Conference, Hong Kong, China,
2002.
- 21
- C. Mohan. Caching Technologies for Web Applications. The 27th
VLDB Conference Tutorial,
http://www.almaden.ibm.com/u/mohan/Caching_VLDB2001.pdf Rome,
Italy, 2001.
- 22
- J. Dilley, B. Maggs, J. Parikh, H. Prokop, R. Sitaraman, and B.
Weihl. Globally Distributed Content Delivery. IEEE Internet
Computing, vol. 6, no. 5, pp. 50-58 September/October 2002.
- 23
- Heiko Ludwig, Alexander Keller, Asit Dan, Richard King. A
Service Level Agreement Language for Dynamic Electronic Services,
In Proceedings of the 4th IEEE Int'l Workshop on Advanced Issues of
E-Commerce and Web-Based Information Systems (WECWIS 2002),
2002.
- 24
- Sang H. Son and Kyoung-Don Kang. QoS Management in Web-based
Real-Time Data Services, In Proceedings of the 4th IEEE Int'l
Workshop on Advanced Issues of E-Commerce and Web-Based Information
Systems (WECWIS 2002), 2002.
- 25
- Sahai, A. et al. Automated SLA Monitoring for Web Services. The
13th IFIP/IEEE International Workshop on Distributed Systems:
Operations & Management (DSOM 2002) Montreal, Canada Oct.
2002.
- 26
- Daly, D. et al. Modeling of Service-Level Agreements for
Composed Services. The 13th IFIP/IEEE International Workshop on
Distributed Systems: Operations & Management (DSOM 2002)
Montreal, Canada Oct. 2002.