
An e-Market Framework for Informed Trading

John Debenham
Faculty of IT
University of Technology, Sydney
PO Box 123 Broadway, NSW 2007, Australia
Simeon Simoff
Faculty of IT
University of Technology, Sydney
PO Box 123 Broadway, NSW 2007, Australia
simeon@it.uts.edu.au

Date: 30 September 2005

Abstract:

Fully automated trading, such as e-procurement, over the Internet is virtually unheard of today. Three core technologies are needed to fully automate the trading process: data mining, intelligent trading agents, and virtual institutions in which informed trading agents can trade securely both with each other and with human agents in a natural way. This paper describes a demonstrable prototype e-trading system that integrates these three technologies and is available on the World Wide Web. It is part of a larger project that aims to make informed automated trading a reality.

ACM Classification: H.5.1 [Information Systems]: Multimedia Information Systems; H.4.m [Information Systems]: Miscellaneous; I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence; I.2.m [Artificial Intelligence]: Miscellaneous

Keywords: Market reliability

1 Introduction

The potential size of the electronic business market and the comparatively small amount of automated negotiation presently deployed provide a major incentive for research in automated trading. Fully automated trading, such as e-procurement, over the Internet is virtually unheard of today. Trading involves the maintenance of effective business relationships, and is the complete process of: need identification, product brokering, supplier brokering, offer-exchange, contract negotiation, and contract execution. Three core technologies are needed to fully automate the trading process: data mining, intelligent trading agents, and virtual institutions. This paper describes an e-trading system that integrates these three technologies. The e-Market Framework is available on the World Wide Web1. This project aims to make informed automated trading a reality, and develops further the ``Curious Negotiator'' framework [12]. This work does not address all of the issues in automated trading. For example, it relies on developments in: XML and the semantic web, secure data exchange, value chain management, and financial services.

The data mining systems that have been developed for mining information both from the virtual institution and from general sources on the World Wide Web are described in Sec. 2. Intelligent agents that are built on an architecture designed specifically to handle real-time information flows are described in Sec. 3. Sec. 4 describes the work on virtual institutions -- this work has been carried out in collaboration with the ``Institut d'Investigació en Intel·ligència Artificial2'', Spanish Scientific Research Council, UAB, Barcelona, Spain. Sec. 5 concludes.


2 Data Mining

We have designed information discovery and delivery agents that utilise text and network data mining to support real-time negotiation. This work has addressed the central issues of extracting relevant information from different on-line repositories with different formats, and with possibly duplicated and erroneous data. That is, we have addressed the central issues in extracting information from the World Wide Web. Our mining agents understand the influence that extracted information has on the subject of negotiation and take that into account.

Figure 1: The information that impacts trading negotiation

Real-time embedded data mining is an essential component of the proposed framework. In this framework the trading agents make their informed decisions by utilising two types of information (as illustrated in Figure 1): information generated by the negotiation process itself, and information extracted from external sources such as the World Wide Web.

The embedded data mining system provides the information extracted from the external sources. The system complements and services the information-based architecture developed in [5] and [11]. The information request and the information delivery format are defined by the interaction ontology. As agents proceed with a negotiation they have a topic of negotiation and a shared ontology that describes that topic. For example, if the topic of negotiation is buying a number of digital cameras for a University, the shared ontology will include the product model of the camera and characteristics like ``product reputation'' (which may itself be a list of parameters) that are usually derived from additional sources (for example, from opinions in a professional community of photographers or digital artists). As the information-based architecture assumes that negotiation parameters take values from discrete, finite sets, an information request can be formulated as a subset of the range of values for a negotiation parameter. For example, if the negotiator is interested in cameras with 8 megapixel resolution, and the brand is a negotiation parameter, the information request can be formulated as a set of camera models, e.g. {``Canon Power Shot Pro 1'', ``Sony f828'', ``Nikon Coolpix 8400'', ``Olympus C-8080''}, together with a preference estimate based on the information in the various articles available. The collection of parameter sets for the negotiation topic constitutes the input to the data mining system. Continuous numerical values are replaced by a finite number of ranges of interest.
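To make the request format concrete, the following sketch shows how such a discrete information request might be encoded; the dictionary layout and the `discretise` helper are illustrative assumptions, not part of the framework's actual interface.

```python
# A minimal sketch of an information request over discrete parameter values.
# The structure and field names are assumptions for illustration only.

information_request = {
    "topic": "digital camera purchase",
    "parameters": {
        # Brand is a negotiation parameter: the request is a subset
        # of its feasible values.
        "model": {"Canon Power Shot Pro 1", "Sony f828",
                  "Nikon Coolpix 8400", "Olympus C-8080"},
        # A continuous quantity is replaced by ranges of interest.
        "price_AUD": [(0, 800), (800, 1200), (1200, 2000)],
    },
}

def discretise(value, ranges):
    """Map a continuous value onto the finite range of interest it falls in."""
    for low, high in ranges:
        if low <= value < high:
            return (low, high)
    return None

assert discretise(950, information_request["parameters"]["price_AUD"]) == (800, 1200)
```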

The data mining system initially constructs data sets that are ``focused'' on the requested information, as illustrated in Figure 2. From the vast amount of information available in electronic form, we need to filter the information that is relevant to the information request. In our example, this will be the news, opinions, comments and white papers related to the four models of digital cameras listed above. Technically, the automatic retrieval of the information pieces utilises the universal news bot architecture presented in [13]. Developed originally for news sites only, the approach is currently being extended to discussion boards and company white papers.

Figure 2: The pipeline for constructing ``focused'' data sets

The ``focused'' data set is constructed dynamically in an iterative process. The data mining agent constructs the news data set according to the concepts in the query. Each concept is represented as a cluster of key terms (a term can include one or more words), defined by the proximity of the positions of the frequent key terms. On each iteration the most frequent terms from the retrieved data set are extracted and considered to be related to the same concept. The extracted keywords are resubmitted to the search engine. The process of query submission, data retrieval and keyword extraction is repeated until the search results start to drift from the given topic, as sketched below.
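A minimal sketch of that loop follows; `search`, `extract_frequent_terms` and `still_on_topic` are placeholders for components the paper does not specify.

```python
# A sketch of the iterative "focused" data-set construction loop.
# The three callables stand in for unspecified components and are
# assumptions made for illustration.

def build_focused_dataset(seed_terms, search, extract_frequent_terms,
                          still_on_topic, max_iterations=10):
    dataset, query = [], set(seed_terms)
    for _ in range(max_iterations):
        documents = search(query)              # retrieve documents for the query
        dataset.extend(documents)
        # The most frequent key terms are taken to belong to the same concept.
        new_terms = extract_frequent_terms(documents)
        if not still_on_topic(new_terms, seed_terms):
            break                              # results have drifted off topic
        query |= new_terms                     # resubmit the expanded query
    return dataset
```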

The set of topics in the original request is used as a set of class labels. In our example we are interested in the evidence in support of each particular camera model. A simple solution is to introduce two labels for each model -- positive opinion and negative opinion -- giving eight labels in total. In the constructed ``focused'' data set, each news article is labelled with one of the values from this set of labels. An automated labelling approach reported in [13] extends the tree-based approach proposed in [10].

The data sets require further automatic preprocessing to remove possible redundancies in the encoded information that could bias the analysis algorithms -- for example, identifying a set of opinions about a camera that most likely come from the same author, even though they have been retrieved from different ``opinion boards'' on the Internet.

Once the set is constructed, building the ``advising model'' is reduced to a classification data mining problem. As the model is communicated back to the information-based agent architecture, the classifier output should include all the possible class labels with an attached probability estimate for each class. Hence, we use probabilistic classifiers (e.g. Naïve Bayes, Bayesian Network classifiers) [9] without the usual selection of the single most probable class: in a classifier based on the Naïve Bayes algorithm, for example, we calculate the posterior probability $ \mathbb{P}_{p}(i)$ of each class $ c(i)$ with respect to combinations of key terms and then return the tuples $ < c(i), \mathbb{P}_{p}(i)>$ for all classes, not just the one with maximum $ \mathbb{P}_{p}(i)$. When we deal with range variables the data mining system returns the range within which the estimated value falls. For example, the response to a request for an estimate of the rate of change between two currencies over a specified period of time proceeds in three steps: (i) the relevant focused news data set is updated for the specified period; (ii) the model that takes this news into account is updated; and (iii) the output of the model is compared with the requested ranges and the matching one is returned. The details of this part of the data mining system are presented in [14]. The model currently used is a modified linear model with an additional term that incorporates a news index $ I_{news}$, which reflects the effect of news on the exchange rate. The current architecture of the data mining system in the e-market environment is shown in Figure 3. The $ \{\theta_{1},\dots,\theta_{t}\}$ denote the output of the system to the information-based agent architecture. In addition, the data mining system provides parameters that define the ``quality of the information''.
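The following sketch illustrates the ``return all classes with their probabilities'' step using an off-the-shelf Naïve Bayes text classifier; scikit-learn and the tiny corpus are stand-ins assumed for illustration, not the system's actual implementation.

```python
# A sketch of a probabilistic classifier that returns every <c(i), P(i)>
# tuple rather than the single argmax class. The corpus and labels are
# illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = ["great pictures, sharp lens", "poor battery, noisy images",
        "excellent colour reproduction", "lens flare ruined my shots"]
labels = ["Canon+", "Canon-", "Sony+", "Sony-"]   # "<model>+/-" opinion labels

vectoriser = CountVectorizer()
clf = MultinomialNB().fit(vectoriser.fit_transform(docs), labels)

X_new = vectoriser.transform(["sharp lens but noisy images"])
# All class labels with attached probability estimates, not just the argmax.
posterior = list(zip(clf.classes_, clf.predict_proba(X_new)[0]))
```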

Overall, the parameters estimated by the mining algorithms and provided to the negotiating agents are expected to allow information-based agents to devise more effective and better informed situated strategies. In addition to the data coming from external sources, the data mining component of the project will develop techniques for analysing agent behavioural data with respect to the electronic institution setup.

Figure 3: The architecture of the agent-based data mining system


3 Trading Agents

We have designed a new agent architecture founded on information theory. These ``information-based'' agents operate in real-time in response to market information flows. We have addressed the central issues of trust in the execution of contracts, and the reliability of information [11]. Our agents understand the value of building business relationships as a foundation for reliable trade. An inherent difficulty in automated trading -- including e-procurement -- is that it is generally multi-issue. Even a simple trade, such as a quantity of steel, may involve delivery date and settlement terms, as well as price and the quality of the steel. The ``information-based'' agent's reasoning is based on a first-order logic world model that manages multi-issue negotiation as easily as single-issue negotiation.

Most of the work on multi-issue negotiation has focussed on one-to-one bargaining -- for example [6]. There has been rather less interest in one-to-many, multi-issue auctions -- [4] analyzes some possibilities -- despite the size of the e-procurement market, which typically extends single-issue reverse auctions to the multi-issue case by post-auction haggling. There has been even less interest in many-to-many, multi-issue exchanges.

The generic architecture of our ``information-based'' agents is presented in Sec. 3.1. The agent's reasoning employs entropy-based inference and is described in Sec. 3.2. The integrity of the agent's information is in a permanent state of decay; Sec. 3.3 describes the agent's machinery for managing this decay, leading to a characterization of the ``value'' of information. Sec. 3.4 describes metrics that bring order and structure to the agent's information with the aim of supporting its management.


1 Information-Based Agent Architecture

This section describes the essence of ``information-based agency''. An agent observes events in its environment, including what other agents actually do. It chooses to represent some of those observations in its world model as beliefs. As time passes, an agent may not be prepared to accept such beliefs as being ``true'', and qualifies those representations with epistemic probabilities. Those qualified representations of prior observations are the agent's information. This information is primitive -- it is the agent's representation of its beliefs about prior events in the environment and about the other agents' prior actions. It is independent of what the agent is trying to achieve, or what the agent believes the other agents are trying to achieve. Given this information, an agent may then choose to adopt goals and strategies. Those strategies may be based on game theory, for example. To enable the agent's strategies to make good use of its information, tools from information theory are applied to summarize and process that information. Such an agent is called information-based.

Figure 4: Basic architecture of agent $ \Pi $

An agent called $ \Pi $ is the subject of this discussion. $ \Pi $ engages in multi-issue negotiation with a set of other agents: $ \{\Omega_{1},\cdots,\Omega_{o}\}$. The foundation for $ \Pi $'s operation is the information that is generated both by and because of its negotiation exchanges. Any message from one agent to another reveals information about the sender. $ \Pi $ also acquires information from the environment -- including general information sources -- to support its actions. $ \Pi $ uses ideas from information theory to process and summarize its information. $ \Pi $'s aim may not be ``utility optimization'' -- it may not even be aware of a utility function. If $ \Pi $ does know its utility function and aims to optimize its utility, then $ \Pi $ may apply the principles of game theory to achieve its aim. The information-based approach does not reject utility optimization -- in general, the selection of a goal and strategy is secondary to the processing and summarizing of the information.

In addition to the information derived from its opponents, $ \Pi $ has access to a set of information sources $ \{\Theta_{1},\cdots,\Theta_{t}\}$ that may include the marketplace in which trading takes place, and general information sources such as news-feeds accessed via the Internet. Together, $ \Pi $, $ \{\Omega_{1},\cdots,\Omega_{o}\}$ and $ \{\Theta_{1},\cdots,\Theta_{t}\}$ make up a multiagent system. The integrity of $ \Pi $'s information, including information extracted from the Internet, will decay in time. The way in which this decay occurs will depend on the type of information, and on the source from which it was drawn. Little appears to be known about how the integrity of real information, such as news-feeds, decays, although its validity can often be checked -- ``Is company X taking over company Y?'' -- by proactive action given a cooperative information source $ \Theta_{j}$. So $ \Pi $ has to consider how and when to refresh its decaying information.

$ \Pi $ has two languages: $ \mathcal{C}$ and $ \mathcal{L}$. $ \mathcal{C}$ is an illocutionary-based language for communication. $ \mathcal{L}$ is a first-order language for internal representation -- precisely, it is a first-order language with sentence probabilities optionally attached to each sentence, representing $ \Pi $'s epistemic belief in the truth of that sentence. Fig. 4 shows a high-level view of how $ \Pi $ operates. Messages expressed in $ \mathcal{C}$ from $ \{\Theta_{i}\}$ and $ \{\Omega_{i}\}$ are received, time-stamped, source-stamped and placed in an in-box $ \mathcal{X}$. The messages in $ \mathcal{X}$ are then translated using an import function $ I$ into sentences expressed in $ \mathcal{L}$ that have integrity decay functions (usually of time) attached to each sentence; these sentences are stored in a repository $ \mathcal{Y}^{t}$. And that is all that happens until $ \Pi $ triggers a goal.

$ \Pi $ triggers a goal, $ g \in \mathcal{G}$, in two ways: first in response to a message received from an opponent $ \Omega_{i}$, such as ``I offer you 1 in exchange for an apple'', and second in response to some need, $ \nu \in \mathcal{N}$, such as ``goodness, we've run out of coffee''. In either case, $ \Pi $ is motivated by a need -- either a need to strike a deal with a particular feature (such as acquiring coffee) or a general need to trade. $ \Pi $'s goals could be short-term, such as obtaining some information (``what is the time?''), medium-term, such as striking a deal with one of its opponents, or rather longer-term, such as building a (business) relationship with one of its opponents. So $ \Pi $ has a trigger mechanism $ T$ where: $ T: \{\mathcal{X} \cup \mathcal{N} \} \rightarrow \mathcal{G}$.

For each goal that $ \Pi $ commits to, it has a mechanism, $ G$, for selecting a strategy to achieve it, where $ G: \mathcal{G} \times \mathcal{M} \rightarrow \mathcal{S}$ and $ \mathcal{S}$ is the strategy library. A strategy $ s$ maps an information base into an action: $ s(\mathcal{Y}^{t})=z\in \mathcal{Z}$. Given a goal, $ g$, and the current state of the social model $ m^{t}$, a strategy: $ s = G(g, m^{t})$. Each strategy, $ s$, consists of a plan, $ b_{s}$, and a world model (construction and revision) function, $ J_{s}$, that constructs, and maintains the currency of, the strategy's world model $ W^{t}_{s}$, which consists of a set of probability distributions. A plan derives the agent's next action, $ z$, on the basis of the agent's world model for that strategy and the current state of the social model: $ z = b_{s}(W^{t}_{s}, m^{t})$, and $ z = s(\mathcal{Y}^{t})$. $ J_{s}$ employs two forms of entropy-based inference: maximum entropy inference to derive probability distributions from sparse data, and minimum relative entropy inference to update those distributions as new information is received (both are described in Sec. 3.2).
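The following schematic renders this control flow in code; every name in it is an illustrative assumption, as the text above only fixes the signatures of $ T$, $ G$, $ b_{s}$ and $ J_{s}$.

```python
# A schematic sketch of one decision cycle of agent Pi. All class and
# method names are assumptions; the paper specifies only the mappings
# T: X u N -> G, G: G x M -> S and z = b_s(W_s, m).
def agent_step(event, T, G, social_model, repository):
    """One cycle: trigger a goal, select a strategy, derive an action."""
    goal = T(event)                          # trigger mechanism T
    strategy = G(goal, social_model)         # s = G(g, m^t)
    world_model = strategy.J(repository)     # J_s builds/revises W^t_s
    return strategy.plan(world_model, social_model)   # z = b_s(W^t_s, m^t)
```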


2 $ \Pi $'s Reasoning

Once $ \Pi $ has selected a plan $ a \in \mathcal{A}$ it uses maximum entropy inference to derive the $ \{D_{i}^{s}\}_{i=1}^{n}$ [see Fig. 4] and minimum relative entropy inference to update those distributions as new data becomes available. Entropy, $ \mathbb{H}$, is a measure of uncertainty [8] in a probability distribution for a discrete random variable $ X$: $ \mathbb{H}(X) \triangleq - \sum_{i} p(x_{i}) \log p(x_{i})$ where $ p(x_{i}) = \mathbb{P}(X = x_{i})$. Maximum entropy inference is used to derive sentence probabilities for that which is not known by constructing the ``maximally noncommittal'' [7] probability distribution, and is chosen for its ability to generate complete distributions from sparse data.

Let $ \mathcal{G}$ be the set of all positive ground literals that can be constructed using $ \Pi $'s language $ \mathcal{L}$. A possible world, $ v$, is a valuation function: $ \mathcal{G} \rightarrow \{\top, \bot\}$. $ \mathcal{V}\vert\mathcal{K}^{s} = \{v_{i}\}$ is the set of all possible worlds that are consistent with $ \Pi $'s knowledge base $ \mathcal{K}^{s}$, which contains statements that $ \Pi $ believes are true. A random world for $ \mathcal{K}^{s}$, $ W\vert\mathcal{K}^{s} = \{p_{i}\}$, is a probability distribution over $ \mathcal{V}\vert\mathcal{K}^{s} = \{v_{i}\}$, where $ p_{i}$ expresses $ \Pi $'s degree of belief that each of the possible worlds, $ v_{i}$, is the actual world. The derived sentence probability of any $ \sigma \in \mathcal{L}$, with respect to a random world $ W\vert\mathcal{K}^{s}$, is:

$\displaystyle (\forall \sigma \in \mathcal{L})\quad \mathbb{P}_{\{W\vert\mathcal{K}^{s}\}}(\sigma) \triangleq \sum_{n} \{\, p_{n} \: : \: \sigma \textrm{ is } \top \textrm{ in } v_{n} \,\}$    (1)

The agent's belief set $ \mathcal{B}_{t}^{s} = \{\Omega_{j}\}_{j=1}^{M}$ contains statements to which $ \Pi $ attaches a given sentence probability $ \mathbb{B}(\cdot)$. A random world $ W\vert\mathcal{K}^{s}$ is consistent with $ \mathcal{B}_{t}^{s}$ if: $ (\forall \Omega \in \mathcal{B}_{t}^{s}) (\mathbb{B}(\Omega) = \mathbb{P}_{\{W\vert\mathcal{K}^{s}\}}(\Omega))$. Let $ \{p_{i}\}=\{\overline{W}\vert\mathcal{K}^{s},\mathcal{B}_{t}^{s}\}$ be the ``maximum entropy probability distribution over $ \mathcal{V}\vert\mathcal{K}^{s}$ that is consistent with $ \mathcal{B}_{t}^{s}$''. Given an agent with $ \mathcal{K}^{s}$ and $ \mathcal{B}_{t}^{s}$, maximum entropy inference states that the derived sentence probability for any sentence, $ \sigma \in \mathcal{L}$, is:

$\displaystyle (\forall \sigma \in \mathcal{L})\quad \mathbb{P}_{\{\overline{W}\vert\mathcal{K}^{s},\mathcal{B}_{t}^{s}\}}(\sigma) \triangleq \sum_{n} \{\, p_{n} \: : \: \sigma \textrm{ is } \top \textrm{ in } v_{n} \,\}$    (2)

From Eqn. 2, each belief imposes a linear constraint on the $ \{p_{i}\}$. The maximum entropy distribution: $ \arg\max_{\underline{p}} \mathbb{H}(\underline{p})$, $ \underline{p} = (p_{1},\dots,p_{N})$, subject to $ M+1$ linear constraints:

\begin{displaymath}\begin{split}&g_{j}(\underline{p}) = \sum_{i=1}^{N} c_{ji}p_{i} - \mathbb{B}(\Omega_{j}) = 0, \quad j=1,\dots,M,\\ &g_{0}(\underline{p}) = \sum_{i=1}^{N}p_{i} - 1 = 0 \end{split}\end{displaymath}

where $ c_{ji} = 1$ if $ \Omega_{j}$ is $ \top$ in $ v_{i}$ and 0 otherwise, and $ p_{i} \ge 0, i=1,\dots,N$. The distribution is found by introducing Lagrange multipliers, and then obtaining a numerical solution using the multivariate Newton-Raphson method. In the subsequent subsections we will see how an agent updates the sentence probabilities depending on the type of information used in the update.
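The following numerical sketch solves this constrained maximisation for a toy example; scipy's SLSQP solver is substituted here for the Lagrange-multiplier Newton-Raphson scheme described above.

```python
# A numerical sketch of the maximum entropy problem: maximise H(p) over
# N possible worlds subject to sum_i c_ji p_i = B(Omega_j). SLSQP stands
# in for the Newton-Raphson solution of the Lagrangian.
import numpy as np
from scipy.optimize import minimize

def max_entropy(C, b, N):
    """C[j, i] = 1 if belief Omega_j is true in world v_i; b[j] = B(Omega_j)."""
    neg_entropy = lambda p: np.sum(p * np.log(p + 1e-12))
    constraints = [{"type": "eq", "fun": lambda p, j=j: C[j] @ p - b[j]}
                   for j in range(len(b))]
    constraints.append({"type": "eq", "fun": lambda p: p.sum() - 1.0})
    result = minimize(neg_entropy, np.full(N, 1.0 / N), method="SLSQP",
                      bounds=[(0.0, 1.0)] * N, constraints=constraints)
    return result.x

# Three possible worlds; one belief, true in the first two worlds, with B = 0.8:
p = max_entropy(np.array([[1.0, 1.0, 0.0]]), np.array([0.8]), 3)
# p is approximately [0.4, 0.4, 0.2]
```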

Given a prior probability distribution $ \underline{q} = (q_{i})_{i=1}^{n}$ and a set of constraints $ C$, the principle of minimum relative entropy chooses the posterior probability distribution $ \underline{p} = (p_{i})_{i=1}^{n}$ that has the least relative entropy3 with respect to $ \underline{q}$:

$\displaystyle \{\underline{W}\vert \underline{q}, C\} \triangleq \arg\min_{\underline{p}} \sum_{i=1}^{n} p_{i} \log \frac{p_{i}}{q_{i}}$

and that satisfies the constraints. This may be found by introducing Lagrange multipliers as above. Given a prior distribution $ \underline{q}$ over $ \{v_{i}\}$ -- the set of all possible worlds -- and a set of constraints $ C$ (that could have been derived as above from a set of new beliefs), minimum relative entropy inference states that the derived sentence probability for any sentence, $ \sigma \in \mathcal{L}$, is:

$\displaystyle (\forall \sigma \in \mathcal{L})\quad \mathbb{P}_{\{\underline{W}\vert\underline{q},C\}}(\sigma) \triangleq \sum_{n} \{\, p_{n} \: : \: \sigma \textrm{ is } \top \textrm{ in } v_{n} \,\}$    (3)

where $ \{p_{i}\}= \{\underline{W}\vert \underline{q}, C\}$. The principle of minimum relative entropy is a generalization of the principle of maximum entropy. If the prior distribution $ \underline{q}$ is uniform, then the relative entropy of $ \underline{p}$ with respect to $ \underline{q}$, $ \underline{p}\Vert\underline{q}$, differs from $ - \mathbb{H}(\underline{p})$ only by a constant. So the principle of maximum entropy is equivalent to the principle of minimum relative entropy with a uniform prior distribution.
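A companion sketch for the minimum relative entropy posterior: given a prior $ \underline{q}$ and linear constraints, it returns the $ \underline{p}$ of Eqn. 3, again with a generic solver standing in for the Lagrange-multiplier method.

```python
# A numerical sketch of minimum relative entropy inference: the posterior
# p minimising KL(p || q) subject to C p = b and sum(p) = 1.
import numpy as np
from scipy.optimize import minimize

def min_relative_entropy(q, C, b):
    kl = lambda p: np.sum(p * np.log((p + 1e-12) / (q + 1e-12)))
    constraints = [{"type": "eq", "fun": lambda p, j=j: C[j] @ p - b[j]}
                   for j in range(len(b))]
    constraints.append({"type": "eq", "fun": lambda p: p.sum() - 1.0})
    return minimize(kl, q, method="SLSQP", bounds=[(0.0, 1.0)] * len(q),
                    constraints=constraints).x
```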


3 The agent manages information

The illocutions in the communication language $ \mathcal{C}$ include information, $ [\textit {info}]$. The information received from general information sources will be expressed in terms defined by $ \Pi $'s ontology. We assume that $ \Pi $ makes at least part of that ontology public so that the other agents $ \{\Omega_{1},\dots,\Omega_{o}\}$ may communicate $ [\textit {info}]$ that $ \Pi $ can understand. $ \Omega$'s reliability is an estimate of the extent to which this $ [\textit {info}]$ is correct. For example, $ \Omega$ may send $ \Pi $ the $ [\textit {info}]$ that ``the price of fish will go up by $ 10\%$ next week'', and it may actually go up by $ 9\%$.

The only restriction on incoming $ [\textit {info}]$ is that it is expressed in terms of the ontology -- this is very general. However, the way in which $ [\textit {info}]$ is used is completely specific -- it will be represented as a set of linear constraints on one or more probability distributions. A chunk of $ [\textit {info}]$ may not be directly related to one of $ \Pi $'s chosen distributions or may not be expressed naturally as constraints, and so some inference machinery is required to derive these constraints -- this inference is performed by model building functions, $ J_{s}$, that have been activated by a plan $ s$ chosen by $ \Pi $. $ J_{s}^{D}([\textit{info}])$ denotes the set of constraints on distribution $ D$ derived by $ J_{s}$ from $ [\textit {info}]$.


1 Updating the world model with $ [\textit {info}]$

The procedure for updating the world model as $ [\textit {info}]$ is received is as follows. If at time $ u$, $ \Pi $ receives a message containing $ [\textit {info}]$, it is time-stamped and source-stamped $ [\textit{info}]_{(\Omega,\Pi,u)}$, and placed in a repository $ \mathcal{Y}^{t}$. If $ \Pi $ has an active plan, $ s$, with model building function, $ J_{s}$, then $ J_{s}$ is applied to $ [\textit{info}]_{(\Omega,\Pi,u)}$ to derive constraints on some, or none, of $ \Pi $'s distributions. The extent to which those constraints are permitted to affect the distributions is determined by a value for the reliability of $ \Omega$, $ R^{t}(\Pi,\Omega,O([\textit{info}]))$, where $ O([\textit{info}])$ is the ontological context of $ [\textit {info}]$.

An agent may have models of integrity decay for some particular distributions, but general models of integrity decay for, say, a chunk of information taken at random from the World Wide Web are generally unknown. However, the values to which decaying integrity should tend in time are often known. For example, a prior value for the truth of the proposition that a ``22 year-old male will default on credit card repayment'' is well known to banks. If $ \Pi $ attaches such prior values to a distribution $ D$, they are called the decay limit distribution for $ D$, $ (d^{D}_{i})_{i=1}^{n}$. No matter how the integrity of $ [\textit {info}]$ decays, in the absence of any other relevant information it should decay to the decay limit distribution. If a distribution with $ n$ values has no decay limit distribution, then integrity decays to the maximum entropy value $ \frac{1}{n}$. In other words, the maximum entropy distribution is the default decay limit distribution.

In the absence of new $ [\textit {info}]$ the integrity of distributions decays. If $ D=(q_{i})_{i=1}^{n}$ then we use a geometric model of decay:

$\displaystyle q^{t+1}_{i}=(1-\rho^{D})\times d^{D}_{i} + \rho^{D}\times q^{t}_{i},$ for $\displaystyle i=1,\dots,n$(4)

where $ \rho^{D} \in (0,1)$ is the decay rate. This raises the question of how to determine $ \rho^{D}$. Just as an agent may know the decay limit distribution, it may also know something about $ \rho^{D}$. In the case of an information-overfed agent there is no harm in conservatively setting $ \rho^{D}$ ``a bit on the low side'', as the continually arriving $ [\textit {info}]$ will sustain the estimate for $ D$.
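In code, one decay step of Eqn. 4 is a single convex combination; the values below are illustrative.

```python
# A direct transcription of the geometric decay model in Eqn. 4.
import numpy as np

def decay(q, d, rho):
    """One decay step toward the decay limit distribution d, rho in (0,1)."""
    return (1.0 - rho) * np.asarray(d) + rho * np.asarray(q)

q = np.array([0.7, 0.2, 0.1])      # current distribution for D
d = np.full(3, 1.0 / 3.0)          # default decay limit: maximum entropy
for _ in range(5):                 # with no new info, q drifts toward d
    q = decay(q, d, rho=0.8)
```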

We now describe how new $ [\textit {info}]$ is imported to the distributions. A single chunk of $ [\textit {info}]$ may affect a number of distributions. Suppose that a chunk of $ [\textit {info}]$ is received from $ \Omega$ and that $ \Pi $ attaches the epistemic belief probability $ R^{t}(\Pi,\Omega,O([\textit{info}]))$ to it. Each distribution models a facet of the world. Given a distribution $ D^{t}= (q^{t}_{i})_{i=1}^{n}$, $ q^{t}_{i}$ is the probability that the possible world $ \omega_{i}$ for $ D$ is the true world for $ D$. The effect that a chunk $ [\textit {info}]$ has on distribution $ D$ is to enforce the set of linear constraints on $ D$, $ J^{D}_{s}([\textit{info}])$. If the constraints $ J^{D}_{s}([\textit{info}])$ are taken by $ \Pi $ as valid, then $ \Pi $ could update $ D$ to the posterior distribution $ (p_{i}^{[\textit{info}]})_{i=1}^{n}$ that is the distribution with least relative entropy with respect to $ (q_{i}^{t})_{i=1}^{n}$ satisfying the constraint:

\begin{displaymath}\begin{split}\sum_{i}&\{p^{[\textit{info}]}_{i}\,:\; J^{D}_{s}([\textit{info}]) \text{ are all } \top \text{ in } \omega_{i}\}=1. \end{split}\end{displaymath}    (5)

But $ R^{t}(\Pi,\Omega,O([\textit{info}]))=r\in[0,1]$, and $ \Pi $ should only treat the $ J^{D}_{s}([\textit{info}])$ as valid if $ r=1$. In general, $ r$ determines the extent to which the effect of $ [\textit {info}]$ on $ D$ is closer to $ (p_{i}^{[\textit{info}]})_{i=1}^{n}$ or to the prior distribution $ (q_{i}^{t})_{i=1}^{n}$:

$\displaystyle p^{t}_{i}=r\times p_{i}^{[\textit{info}]} + (1-r)\times q_{i}^{t}$(6)

But we should only permit a new chunk of $ [\textit {info}]$ to influence $ D$ if doing so gives us new information. For example, if 5 minutes ago a trusted agent advises $ \Pi $ that the interest rate will go up by 1%, and 1 minute ago a very unreliable agent advises $ \Pi $ that the interest rate may go up by 0.5%, then the second, unreliable chunk should not be permitted to `overwrite' the first. We capture this by only permitting a new chunk of $ [\textit {info}]$ to be imported if the resulting distribution has more information relative to the decay limit distribution than the existing distribution has. Precisely, this is measured using the Kullback-Leibler distance measure4, and $ [\textit {info}]$ is only used if:

$\displaystyle \sum_{i=1}^{n}p^{t}_{i}\log\frac{p^{t}_{i}}{d_{i}^{D}} > \sum_{i=1}^{n}q^{t}_{i}\log\frac{q^{t}_{i}}{d_{i}^{D}}$(7)

In addition, we have described in Eqn. 4 how the integrity of each distribution $ D$ will decay in time. Combining these two into one result, distribution $ D$ is revised to:

$\displaystyle q_{i}^{t+1}= \begin{cases}(1-\rho^{D})\times d^{D}_{i} + \rho^{D}\times p_{i}^{t}&\text{if Eqn. 7 is satisfied,}\\ (1-\rho^{D})\times d^{D}_{i} + \rho^{D}\times q_{i}^{t}&\textit{otherwise} \end{cases}$

for $ i=1,\cdots,n$, and decay rate $ \rho^{D}$ as before. It remains to estimate $ R^{t}(\Pi,\Omega,O([\textit{info}]))$; that is described in Sec. 3.3.2 below.
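Combining Eqns. 6, 7 and the revision rule above gives a compact update routine; the sketch below is a direct transcription under the stated definitions.

```python
# A sketch of the combined revision rule: blend posterior and prior by
# reliability r (Eqn. 6), accept the result only if it is more informative
# than the prior relative to the decay limit (Eqn. 7), then decay.
import numpy as np

def kl(p, d):
    """Kullback-Leibler distance of p from d."""
    return float(np.sum(p * np.log((p + 1e-12) / (d + 1e-12))))

def revise(q, p_info, d, r, rho):
    """One revision step for a distribution D with prior q."""
    p = r * p_info + (1.0 - r) * q            # Eqn. 6: reliability-weighted blend
    basis = p if kl(p, d) > kl(q, d) else q   # Eqn. 7: acceptability test
    return (1.0 - rho) * d + rho * basis      # decay step toward the limit d
```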


2 Information reliability

We estimate $ R^{t}(\Pi,\Omega,O([\textit{info}]))$ by measuring the error in information. $ \Pi $'s plans will have constructed a set of distributions. We measure the `error' in information as the error in the effect that information has on each of $ \Pi $'s distributions. Suppose that a chunk of $ [\textit {info}]$ is received from agent $ \Omega$ at time $ s$ and is verified at some later time $ t$. For example, a chunk of information could be ``the interest rate will rise by 0.5% next week'', and suppose that the interest rate actually rises by $ 0.25\%$ -- call that correct information $ [\textit{fact}]$. What does all this tell agent $ \Pi $ about agent $ \Omega$'s reliability? Consider one of $ \Pi $'s distributions $ D$ that is $ \{q_{i}^{s}\}$ at time $ s$. Let $ (p_{i}^{[\textit{info}]})_{i=1}^{n}$ be the minimum relative entropy distribution given that $ [\textit {info}]$ has been received, as calculated in Eqn. 5, and let $ (p_{i}^{[\textit{fact}]})_{i=1}^{n}$ be that distribution if $ [\textit{fact}]$ had been received instead. Suppose that the reliability estimate for distribution $ D$ was $ R^{s}_{D}$. This section is concerned with what $ R^{s}_{D}$ should have been in the light of knowing now, at time $ t$, that $ [\textit {info}]$ should have been $ [\textit{fact}]$, and how that knowledge affects our current reliability estimate for $ D$, $ R^{t}(\Pi,\Omega,O([\textit{info}]))$.

The idea of Eqn. 6 is that the current value of $ r$ should be such that, on average, $ (p^{s}_{i})_{i=1}^{n}$ will be seen to be ``close to'' $ (p^{[\textit{fact}]}_{i})_{i=1}^{n}$ when we eventually discover $ [\textit{fact}]$ -- no matter whether or not $ [\textit {info}]$ was used to update $ D$, as determined by the acceptability test in Eqn. 7 at time $ s$. That is, given $ [\textit {info}]$, $ [\textit{fact}]$ and the prior $ (q_{i}^{s})_{i=1}^{n}$, calculate $ (p^{[\textit{info}]}_{i})_{i=1}^{n}$ and $ (p^{[\textit{fact}]}_{i})_{i=1}^{n}$ using Eqn. 5. Then the observed reliability for distribution $ D$, $ R^{([\textit{info}]\mid[\textit{fact}])}_{D}$, on the basis of the verification of $ [\textit {info}]$ with $ [\textit{fact}]$, is the value of $ r$ that minimises the Kullback-Leibler distance between $ (p^{s}_{i})_{i=1}^{n}$ and $ (p^{[\textit{fact}]}_{i})_{i=1}^{n}$:

$\displaystyle \arg\min_{r}\sum_{i=1}^{n}\left(r\cdot p_{i}^{[\textit{info}]} + (1-r)\cdot q_{i}^{s}\right)\log\frac{r\cdot p_{i}^{[\textit{info}]} + (1-r)\cdot q_{i}^{s}}{p^{[\textit{fact}]}_{i}}$
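Since this is a one-dimensional minimisation over $ r\in[0,1]$, it can be solved numerically; the sketch below uses a generic bounded scalar minimiser as a stand-in for whatever procedure the system actually employs.

```python
# A sketch of the scalar minimisation above: the observed reliability is
# the r in [0,1] minimising the KL distance between the r-blended
# distribution and the one the verified fact would have produced.
import numpy as np
from scipy.optimize import minimize_scalar

def observed_reliability(p_info, p_fact, q):
    def kl_at(r):
        blend = r * p_info + (1.0 - r) * q
        return float(np.sum(blend * np.log((blend + 1e-12) / (p_fact + 1e-12))))
    return minimize_scalar(kl_at, bounds=(0.0, 1.0), method="bounded").x
```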

If $ E^{[\textit{info}]}$ is the set of distributions that $ [\textit {info}]$ affects, then the overall observed reliability on the basis of the verification of $ [\textit {info}]$ with $ [\textit{fact}]$ is: $ R^{([\textit{info}]\mid[\textit{fact}])}=1-(\max_{D\in E^{[\textit{info}]}} \vert1-R^{([\textit{info}]\mid[\textit{fact}])}_{D}\vert)$. Then for each ontological context $ o_{j}$, at time $ t$ when, perhaps, a chunk of $ [\textit {info}]$, with $ O([\textit{info}]) = o_{k}$, may have been verified with $ [\textit{fact}]$:

$\displaystyle R^{t+1}(\Pi,\Omega,o_{j}) = (1-\rho)\times R^{t}(\Pi,\Omega,o_{j}) + \rho\times R^{([\textit{info}]\mid[\textit{fact}])}\times\mathrm{Sem}(o_{j},o_{k})$

where $ \mathrm{Sem}(\cdot,\cdot):O\times O\rightarrow[0,1]$ measures the semantic distance between two sections of the ontology, and $ \rho$ is the learning rate. Over time, $ \Pi $ notes the ontological context of the various chunks of $ [\textit {info}]$ received from $ \Omega$, and over the various ontological contexts calculates the relative frequency, $ P^{t}(o_{j})$, of these contexts, $ o_{j} = O([\textit{info}])$. This leads to an overall expectation of the reliability that agent $ \Pi $ has for agent $ \Omega$:

$\displaystyle R^{t}(\Pi,\Omega)=\sum_{j}P^{t}(o_{j})\times R^{t}(\Pi,\Omega,o_{j})$


4 Valuing Information

A chunk of information is valued first by the way that it enables $ \Pi $ to do something. So information is valued in relation to the strategies that $ \Pi $ is executing. A strategy, $ s$, is chosen for a particular goal $ g$ in the context of a particular representation, or environment, $ e$. One way in which a chunk of information assists $ \Pi $ is by altering $ s$'s world model $ W^{t}_{s}$ -- see Fig. 4. A model $ W^{t}_{s}$ consists of a set of probability distributions: $ W^{t}_{s}=\{D_{s,i}^{t}\}_{i=1}^{n}$. As a chunk of information could be ``good'' for one distribution and ``bad'' for another, we first value information by its effect on each distribution. For a model $ W^{t}_{s}$, the value to $ W^{t}_{s}$ of a message received at time $ t$ is the resulting decrease in entropy in the distributions $ \{D_{s,i}^{t}\}$. In general, suppose that a set of stamped messages $ X = \{x_{i}\}$ is received in $ \mathcal{X}$. The information in $ X$ at time $ t$ with respect to a particular distribution $ D_{s,i}^{t} \in W^{t}_{s}$, strategy $ s$, goal $ g$ and environment $ e$ is:

$\displaystyle \mathbb{I}(X\mid D_{s,i}^{t},s,g,e)\triangleq\mathbb{H}(D_{s,i}^{t}(\mathcal{Y}^{t}))-\mathbb{H}(D_{s,i}^{t}(\mathcal{Y}^{t}\cup I(X)))$

for $ i=1,\cdots,n$, where the argument of $ D_{s,i}^{t}(\cdot)$ is the state of $ \Pi $'s repository from which $ D_{s,i}^{t}$ was derived. The environment $ e$ could be determined by a need $ \nu$ (if the evaluation is made in the context of a particular negotiation) or a relationship $ \rho$ (in a broader context). It is reasonable to aggregate the information in $ X$ over the distributions used by $ s$. That is, the information in $ X$ at time $ t$ with respect to strategy $ s$, goal $ g$ and environment $ e$ is:

$\displaystyle \mathbb{I}(X\mid s,g,e) \triangleq \sum_{i} \mathbb{I}(X\mid D_{s,i}^{t},s,g,e)$

and to aggregate again over all strategies to obtain the value of the information in a statement. That is, the value of the information in $ X$ with respect to goal $ g$ and environment $ e$ is:

$\displaystyle \mathbb{I}(X\mid g,e) \triangleq \sum_{s \in \mathcal{S}(g)} \mathbb{P}(s) \cdot \mathbb{I}(X\mid s,g,e)$   

where $ \mathbb{P}(s)$ is a distribution over the set of strategies for goal $ g$, $ \mathcal{S}(g)$, denoting the probability that strategy $ s$ will be chosen for goal $ g$ based on historic frequency data, and to aggregate again over all goals to obtain the (potential) information in a statement. That is, the potential information in $ X$ with respect to environment $ e$ is:

$\displaystyle \mathbb{I}(X\mid e) \triangleq \sum_{g \in \mathcal{G}} \mathbb{P}(g) \cdot \mathbb{I}(X\mid g,e)$(8)

where $ \mathbb{P}(g)$ is a distribution over $ \mathcal{G}$ denoting the probability that goal $ g$ will be triggered, based on historic frequency data.
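The aggregation of Eqn. 8 can be sketched as a triple-weighted sum of entropy decreases; the data structures below are illustrative assumptions.

```python
# A sketch of Eqn. 8: the value of new information is the entropy decrease
# it causes in each distribution, summed over each strategy's distributions
# and weighted by the historical probabilities of strategies and goals.
import numpy as np

def entropy(p):
    p = np.asarray(p)
    return float(-np.sum(p * np.log(p + 1e-12)))

def information_value(goals):
    """goals: list of (P(g), strategies); strategies: list of
    (P(s), [(D_before, D_after), ...]) pairs. Returns I(X | e)."""
    total = 0.0
    for p_g, strategies in goals:
        for p_s, distributions in strategies:
            gain = sum(entropy(before) - entropy(after)
                       for before, after in distributions)
            total += p_g * p_s * gain
    return total
```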


4 Virtual Institutions

This work has been done in collaboration with the Spanish Government's IIIA Laboratory2 in Barcelona. Electronic Institutions are software systems composed of autonomous agents that interact according to predefined conventions on language and protocol, and that guarantee that certain norms of behaviour are enforced. Virtual Institutions enable rich interaction, based on natural language and the embodiment of humans and software agents, in a ``liveable'' vibrant environment. This view permits agents to behave autonomously and take their decisions freely up to the limits imposed by the set of norms of the institution. An important consequence of embedding agents in a virtual institution is that the predefined conventions on language and protocol greatly simplify the design of the agents. A Virtual Institution is in a sense a natural extension of the social concept of institutions as regulatory systems that shape human interactions [2].

Virtual Institutions are electronic environments designed to meet two requirements towards their inhabitants: first, that the interactions of the inhabitants are regulated by enforceable conventions on language, protocol and norms of behaviour; and second, that the inhabitants, whether humans or software agents, are immersed together in a rich environment that supports natural interaction.

The first requirement has been addressed to some extent by the Electronic Institutions (EI) methodology and technology for multi-agent systems, developed in the Spanish Government's IIIA Laboratory in Barcelona [2]. The EI environment is oriented towards the engineering of multiagent systems. The Electronic Institution is an environment populated by autonomous software agents that interact according to predefined conventions on language and protocol. Following the metaphor of social institutions, Electronic Institutions guarantee that certain norms of behaviour are enforced. This view permits agents to behave autonomously and make their decisions freely up to the limits imposed by the set of norms of the institution. The interaction in such an environment is regulated only for software agents; the human is ``excluded'' from the electronic institution.

The second requirement is supported to some extent by distributed 3D Virtual Worlds technology. Emulating and extending the physical world in which we live, Virtual Worlds offer a rich environment for a variety of human activities and multi-mode interaction. Both humans and software agents are embedded and visualised in such 3D environments as avatars, through which they communicate. The inhabitants of virtual worlds are aware of where they are and who is there -- elements of presence that are excluded from the current paradigm of e-Commerce environments. Following the metaphor of the physical world, these environments impose no regulations (in terms of language) on the interactions and no restrictions (in terms of norms of behaviour). While this encourages the social aspect of interactions and the establishment of networks, these environments do not provide the means to enforce behavioural norms, for example fulfilling commitments or penalisation for misbehaviour.

Virtual Institutions address both requirements, retaining the features and advantages of the two approaches discussed above, as illustrated in Figure 5. They can be seen as the logical evolution and merger of the two streams of development of environments that can host electronic markets as mixed societies of humans and software agents.

Technologically, Virtual Institutions are implemented following a three-layered framework, which provides deep integration of Electronic Institution technology and Virtual Worlds technology [3]. The framework is illustrated in Figure 6. The Electronic Institution Layer hosts the environments that support the Electronic Institution technological components: the graphical EI specification designer ISLANDER and the runtime component AMELI [1]. At runtime, the Electronic Institution layer loads the institution specification and mediates agents' interactions while enforcing institutional rules and norms.

Figure 5: Relations between the three concepts

Figure 6: The three layer architecture and its implementation

The Communication Layer causally connects the Electronic Institution layer with the 3D representation of the institution, which resides in the Social layer. The causal connection is the integrator: it enables the Electronic Institution layer to respond to changes in the 3D representation (for example, to respond to human activities there), and it passes back the response of the Electronic Institution layer in order to modify the corresponding 3D environment and maintain the consistency of the Virtual Institution. The Virtual Institution representation is a graph, and its topology can structure the space of the virtual environment in different ways. This is the responsibility of the Social layer. In this implementation the layer is realised in terms of 3D Virtual World technology, structured around rooms, avatars, doors (for transitions) and other graphical elements. Technically, the Social layer currently utilises Adobe Atmosphere virtual world technology. The design of the 3D World of the Virtual Institution is developed with the Annotation Editor, which ideally should take as input a specification of the Electronic Institution layer and produce an initial layout of the 3D space. Currently, part of this work is done manually by a designer.

The core technology -- the Causal Connection Server -- enables the Communication Layer to act in two directions. In the direction from the Electronic Institution layer, messages uttered by an agent have immediate impact in the Social layer: a transition of the agent between scenes in the Electronic Institution layer, for example, must make the corresponding avatar move within the Virtual World space accordingly. In the other direction, events caused by the actions of the human's avatar in the Virtual World are transferred to the Electronic Institution layer and passed to an agent. This implies that actions forbidden to the agent by the norms of the institution (encoded in the Electronic Institution layer) cannot be performed by the human. For example, if a human needs to register before leaving for the auction space, the corresponding agent is not allowed to leave the registration scene, and consequently the avatar is not permitted to open the corresponding door to the auction (see [3] for technical details of the implementation of the Causal Connection Server).
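The following sketch caricatures the two-way mapping the Causal Connection Server maintains; class and method names are invented for illustration and do not reflect the actual implementation described in [3].

```python
# An illustrative sketch (not the actual implementation) of the two-way
# causal connection: institution events move avatars, and avatar actions
# are admitted only if the Electronic Institution layer permits them.
class CausalConnection:
    def __init__(self, institution, world):
        self.institution = institution   # Electronic Institution layer (AMELI)
        self.world = world               # 3D Social layer

    def on_agent_transition(self, agent, scene):
        # EI -> 3D: a scene transition moves the corresponding avatar.
        self.world.move_avatar(agent.avatar, scene)

    def on_avatar_action(self, avatar, action):
        # 3D -> EI: actions forbidden by the institution's norms (e.g.
        # entering the auction before registering) are rejected, and the
        # corresponding door stays shut.
        if self.institution.permits(avatar.agent, action):
            self.institution.execute(avatar.agent, action)
        else:
            self.world.deny(avatar, action)
```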

Virtual Institutions are immersive environments and as such go beyond the catalogue-style markets with form-based interaction approaches currently dominating the World Wide Web. Embedding traders (whether humans or software agents) as avatars in the electronic market space on the Web positions them literally ``in'' the World Wide Web rather than ``on'' it.


5 Conclusions

A demonstrable prototype e-Market system permits both human and software agents to trade with each other on the World Wide Web. The main contributions described are: the broadly-based and ``focussed'' data mining systems, the intelligent agent architecture founded on information theory, and the abstract synthesis of the virtual worlds and the electronic institutions paradigms to form ``virtual institutions''. These three technologies combine to present our vision of the World Wide Web marketplaces of tomorrow.

Bibliography

1
Electronic institution development environment: http://e-institutor.iiia.csic.es/.

2
J. L. Arcos, M. Esteva, P. Noriega, J. A. Rodríguez, and C. Sierra.
Environment engineering for multiagent systems.
Journal on Engineering Applications of Artificial Intelligence, 18, 2005.

3
A. Bogdanovych, H. Berger, S. Simoff, and C. Sierra.
Narrowing the gap between humans and agents in e-commerce: 3D electronic institutions.
In K. Bauknecht, B. Pröll, and H. Werthner, editors, E-Commerce and Web Technologies, Proceedings of the 6th International Conference, EC-Web 2005, pages 128-137, Copenhagen, Denmark, August 2005. Springer-Verlag: Heidelberg, Germany.

4
J. Debenham.
Auctions and bidding with information.
In P. Faratin and J. Rodriguez-Aguilar, editors, Proceedings Agent-Mediated Electronic Commerce VI: AMEC, pages 15-28, July 2004.

5
J. Debenham.
Bargaining with information.
In N. Jennings, C. Sierra, L. Sonenberg, and M. Tambe, editors, Proceedings Third International Conference on Autonomous Agents and Multi Agent Systems AAMAS-2004, pages 664-671. ACM, July 2004.

6
P. Faratin, C. Sierra, and N. Jennings.
Using similarity criteria to make issue trade-offs in automated negotiation.
Artificial Intelligence, 142(2):205-237, 2003.

7
E. Jaynes.
Probability Theory -- The Logic of Science.
Cambridge University Press, 2003.

8
D. MacKay.
Information Theory, Inference and Learning Algorithms.
Cambridge University Press, 2003.

9
M. Ramoni and P. Sebastiani.
Intelligent Data Analysis, chapter Bayesian methods, pages 132-168.
Springer-Verlag: Heidelberg, Germany, 2003.

10
D. Reis, P. B. Golgher, A. Silva, and A. Laender.
Automatic web news extraction using tree edit distance.
In Proceedings of the 13th International Conference on the World Wide Web, pages 502-511, New York, 2004.

11
C. Sierra and J. Debenham.
An information-based model for trust.
In F. Dignum, V. Dignum, S. Koenig, S. Kraus, M. Singh, and M. Wooldridge, editors, Proceedings Fourth International Conference on Autonomous Agents and Multi Agent Systems AAMAS-2005, pages 497-504, Utrecht, The Netherlands, July 2005. ACM Press, New York.

12
S. Simoff and J. Debenham.
Curious negotiator.
In M. Klusch, S. Ossowski, and O. Shehory, editors, Proceedings 6th International Workshop Cooperative Information Agents VI CIA2002, pages 104-111, Madrid, Spain, September 2002. Springer-Verlag: Heidelberg, Germany.

13
D. Zhang and S. Simoff.
Informing the Curious Negotiator: Automatic news extraction from the Internet.
In Proceedings 3rd Australasian Data Mining Conference, pages 55-72, Cairns, Australia, December 2004.

14
D. Zhang, S. Simoff, and J. Debenham.
Exchange rate modelling using news articles and economic data.
In Proceedings of The 18th Australian Joint Conference on Artificial Intelligence, Sydney, Australia, December 2005. Springer-Verlag: Heidelberg, Germany.



Footnotes

1. http://e-markets.org.au
2. http://www.iiia.csic.es/
3. Otherwise called cross entropy or the Kullback-Leibler distance between the two probability distributions.
4. This is just one criterion for determining whether the $ [\textit {info}]$ should be used.