Auto-FAQ: an experiment in cyberspace leveraging
Steven D. Whitehead
GTE Laboratories Incorporated
40 Sylvan Road
Waltham MA 02254
swhitehead@gte.com
Intelligent behavior requires, in one form or another, access to an
enormous reservoir of knowledge and information. Unfortunately,
intelligent systems are currently limited by a severe knowledge
acquisition bottleneck. In most cases, the cost of getting knowledge
is simply too high. Hand-coding is often too arduous or too time-consuming,
while learning algorithms are often too slow or handcuffed by insufficient
training data.
This paper explores the idea of harnessing computer networks to overcome the
knowledge acquisition bottleneck. We introduce the idea of a CYLINA
(CYberspace Leveraged Intelligent Network Agent) --- an intelligent system that
gains knowledge/information through interactions with a large population
of network users. Instead of depending on the big efforts of a few knowledge
engineers, CYLINAs rely on small, incremental contributions from a global
population of experts. Our thesis is that the sheer volume of interaction
will allow significant knowledge to be acquired in a short amount of time.
We consider potential applications for CYLINAs, then focus on an experimental
system currently under development at GTE Laboratories. This system, called
Auto-FAQ, is a question-answering system. Its intent is to make information
typically found in USENET News FAQs much more accessible (it has many other
uses as well). Users ask questions in natural language form.
These questions index directly into the system's infobase. Infobase entries
are question-answer pairs. Answers can be raw content (e.g., text) or
pointers (e.g., URLs). By using the system recursively, users can
explore entire subjects with a series of questions. Facilities exist
to tag gaps in the system's knowledge base. When a gap is found, it is posted
to a public list. Individuals in the cyberspace community can search the list,
volunteer expertise, and fill in gaps as appropriate.
A version of Auto-FAQ is currently operating on a private network at GTE
Laboratories. The system is currently able to answer basic questions about
itself, WWW, and Mosaic. Future plans are to make Auto-FAQ and its associated
software available on the global Internet.
Some things can only be achieved through the actions of many. Large construction
projects (The Pyramids and The Empire State Building), democracy, polling,
statistics and evolution all involve the efforts or engagement of
many. We use the term Population Leveraging to refer to the
idea of harnessing the power of many to do what cannot be accomplished by a few.
The recent growth in the Internet and its enabling technologies has provided
unprecedented connectivity between people and computers, and with it
new opportunities for population leveraging. Cyberspace leveraging is the
marriage of population leveraging and computer networks: the idea of using
computer networks to harness the power of a large population of networked users.
Examples of cyberspace leveraging that exist today include:
- Usenet News & other bulletin boards: where thousands if not millions
of people find each other and exchange information on specific
topics.
- Self-Publishing Centers & Malls: where many individuals and
organizations come together in one place to publish information and offer
services to the global community. Two examples are Gold-Site
(http://www.cityscape.co.uk)
and Web Self Publishing
(http://sparc57.cs.uiuc.edu:8000).
- Genetic or Evolutionary Systems: where massive user feedback
is used to drive an evolutionary process. Two examples are
Jeff Putnam's Genetically Programmed Music
(http://nmt.edu/~jefu/notes/notes.html) and The Interactive
Genetic Art System
(http://robocop.modmath.cs.cmu.edu:8001).
Another area where cyberspace leveraging could prove extremely useful
is for knowledge acquisition in intelligent systems. In particular, the
development of successful intelligent systems (e.g., expert systems) has been stymied
for years by a severe knowledge acquisition bottleneck. Many otherwise
valuable applications have been rendered impractical
by the cost of getting information. Traditional knowledge engineering,
because it typically involves a small number of highly skilled knowledge engineers, is
expensive and does not scale. Machine learning, for most domains, remains
a distant promise. On the other hand, cyberspace leveraging, by relying upon small,
incremental contributions from a large population of users, has the potential to overcome the
knowledge acquisition bottleneck (See Figure 1). We use the term CYLINA (for CYberspace Leveraged
Intelligent Network Agent) to refer to an intelligent system that uses cyberspace
leveraging to acquire knowledge.
The ingredients needed to make CYLINAs practical are as follows:
- an easy-to-use and widely accessible medium for exchanging information (e.g., WWW);
- access to a vast pool of knowledge and information (embodied in a large cyberspace
user community);
- an altruistic or otherwise motivated cyberspace community, where people
volunteer not only information and expertise, but also time and energy;
- an application where useful knowledge can be incrementally contributed
and maintained by non-programmers.
We have begun to explore the potential of cyberspace leveraging by developing
a question-answering CYLINA called Auto-FAQ. Simply put, Auto-FAQ
is an intelligent question-answering system that interacts with a population of
networked users to both answer questions and acquire information. Presently, the
Auto-FAQ prototype can answer questions on WWW, Mosaic, and itself. Other potential
applications include:
- Intelligent Q/A for Usenet newsgroups and other special interest groups;
- Intelligent Q/A for network navigation and directory services;
- Online help for network services and applications software;
- Intelligent Q/A for customer service, support, and product information;
- Intelligent Q/A for corporate/government information and public relations;
- Tutoring, training and operational support for legacy systems.
Distinguishing features
Auto-FAQ distinguishes
itself from most other question-answering systems in the way it implements each of
its three main functions: question processing, information acquisition, and information
management.
Shallow language understanding for question processing
Auto-FAQ uses shallow language understanding to process questions.
By shallow language understanding, we mean that Auto-FAQ does not attempt to
understand the content of the question at a deep, semantic level. Instead it uses raw language
input to generate an index into an information base of question-answer pairs. In Auto-FAQ, query
processing is more like text-retrieval than traditional natural language understanding.
We claim that shallow language understanding is not only practical and effective, but
that it can also create the illusion of intelligence, especially in cases where the infobase
is densely populated, the scope of the domain is narrow, and the system is augmented with
rudimentary language processing skills. Shallow language understanding is critical to Auto-FAQ
because it can be applied to databases consisting of small, loosely structured
chunks of information.
Population leveraging for info-acquisition
The ability to acquire and maintain information is critical to any intelligent system.
In Auto-FAQ, this requirement is magnified because of its need for a densely populated infobase.
Traditionally, knowledge has been gained through the big efforts of a small number of
"expert knowledge engineers." Auto-FAQ turns that idea on its head. Instead of depending upon
the big efforts of a dedicated few, Auto-FAQ relies upon small, incremental contributions from a
large population of experts. In Auto-FAQ this takes the form of a Gap List. When the system
cannot find an adequate answer for a question, the question is added to the Gap List. The Gap
List is then made available to a population of experts, who can view it and provide
the information necessary to fill gaps. Depending on the application, the "expert population"
can range from a single knowledgeable guru, to the staff of a customer service center, to the
global Internet community.
Population leveraging for adaptive information management
To be useful, Auto-FAQ must maintain the integrity, accuracy, and quality of its
infobase over time. A mechanism is needed to identify and promote the most useful
information and to filter useless, outdated, or incorrect information. This is especially
important in an open network environment where inappropriate information may be submitted
by malicious or ignorant "experts."
In Auto-FAQ users are given the opportunity to rate the usefulness of the answers
generated by the system. This feedback is used to adjust scores associated with
question-answer pairs. These scores are used, in turn, for filtering useless entries
and for biasing the answer retrieval process.
The Auto-FAQ architecture
The Auto-FAQ architecture is shown in Figure 2.
The system's two main data structures are
the info-base and the gap list. The info-base is organized as a set of question-answer
pairs. The gap list holds questions the system doesn't know how to answer.
Logically, Auto-FAQ consists of three independent components, one for
each major function. They are the query processor (for question processing),
the gap/information editor (for info-acquisition), and the information
manager (for info-management). The detailed operations of these components
are described in the following sections.
Auto-FAQ really does not understand the questions/inputs it receives in any deep way.
It does not do a deep semantic analysis of the input, as in traditional language
understanding. Instead it relies on a shallow, surface level analysis.
We call this approach shallow language processing (or shallow intelligence).
It's more akin to Weizenbaum's ELIZA program and information retrieval than it
is to traditional natural language processing.
To answer a question, Auto-FAQ uses a relatively simple (surface level)
matching algorithm to match the raw natural language input against Question-Answer
records in the infobase. Records with the closest match are returned as answers.
The process is summarized graphically in Figure 3
and Figure 4.
Here are the details:
- Read the question and context fields from the query dialogue;
- Strip "irrelevant words" (such as "the" "to" "a") from both fields;
- Filter info-base Question-Answer records, based on context.
(Note: A record makes it through this filter only if its context field
contains *all* the context items in the query context. Context
is used to focus the question and improve search efficiency.)
- Use the remaining words in the question field to compute a match score
for the filtered records. (Note: This best-match score is computed in two stages.
In the first stage simple keyword matching is used to score the word match
between the input question and each record's question. Next, each record's score is modified to
reflect the value of its content. The scores of high utility records
are magnified, while the scores of low utility records are diminished.)
- Records are sorted, thresholded, and displayed to the user.
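The steps above can be sketched in a few lines of Python. This is a minimal illustration, not the actual Auto-FAQ code (the real back-end is written in Tcl); the stopword set, record field names, and threshold are our assumptions.

```python
# Illustrative sketch of the Auto-FAQ retrieval steps described above.
# Record fields, the stopword list, and the threshold are assumptions.

STOPWORDS = {"the", "to", "a", "is", "be", "an", "of"}

def tokenize(text):
    """Lowercase, split, and strip irrelevant words."""
    return [w for w in text.lower().split() if w not in STOPWORDS]

def answer_query(question, context, infobase, threshold=1):
    """Return Q/A records ranked by keyword match, biased by utility."""
    query_words = set(tokenize(question))
    query_context = set(tokenize(context))
    results = []
    for record in infobase:
        # Context filter: the record must contain *all* query context items.
        if not query_context <= set(tokenize(record["context"])):
            continue
        # Stage 1: simple keyword match between the two questions.
        match = len(query_words & set(tokenize(record["question"])))
        # Stage 2: scale the score by the record's utility average.
        score = match * record.get("utility", 1.0)
        if score >= threshold:
            results.append((score, record))
    # Sort best matches first; the threshold was applied above.
    results.sort(key=lambda pair: pair[0], reverse=True)
    return [record for _, record in results]
```

Note how the context filter prunes the infobase before any word matching is done, which is what makes the context field both a focusing and an efficiency mechanism.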
At first blush, one might think shallow language understanding would be a terrible approach to
intelligent question-answering. How can such a system be considered intelligent? Without
necessarily denying the value of deep language understanding, we claim shallow
language understanding can be extremely effective. In particular, our view
is that a system is intelligent if it is robust and can provide useful information
in response to a wide range of inputs. We argue that shallow language understanding
in Auto-FAQ can be extremely effective (and thus intelligent) for the following reasons:
- Questions on the same topic tend to use a common set of words, making keyword-based
searches effective;
- The context field can be used to supply additional keywords that focus
the search onto a specific topic;
- Interrogatives like who, what, when, where, and how are extremely useful for
isolating specific types of questions;
- shallow, text-based matching strategies are more robust with respect to
syntactic variation than deep systems;
- shallow, text-based matching strategies can be made even more robust by
filtering common keywords such as "be," "a," "the," "is," etc.;
- a densely populated infobase can cover almost all conceivable
forms of a question;
- and finally, if a question cannot be answered, it can be posted to a population
of experts who can supply an answer and patch the infobase.
Shallow language understanding has two other features that are critical to Auto-FAQ.
First, because shallow language understanding involves no deduction and no inference, there
is little or no interaction between pieces of information. Small changes in the infobase have
small effects on system behavior. Second, there is no need for a canonical knowledge
representation. Thus, knowledge can be added to the system in small, incremental pieces
by non-programmers. Shallow language understanding is compatible with cyberspace leveraging.
Nevertheless, one cannot deny the value of deep language understanding.
While shallow methods can be effective, they have their limits. For example, Auto-FAQ currently
has no mechanism for resolving pronoun references (e.g., she, he, I, it, that).
It also does not maintain discourse structure over multiple questions. Context must be
reestablished with each query. Clearly, some knowledge of language and discourse structure
would be very useful. Therefore, we do not wish to rule out deep language
understanding altogether. Rather, our approach is one of progressive deepening --- everything simple evolves.
Auto-FAQ uses population leveraging to improve its performance in three ways:
1) users identify, for Auto-FAQ, weaknesses (or gaps) in its infobase;
2) networked "domain" experts add information to the infobase; and
3) users rate the value of the information retrieved.
This section focuses on information acquisition (items 1 & 2), the next
section discusses information management (item 3).
Unsolicited contributions
There are two ways to add new information to the infobase: unsolicited
contributions and gap filling. Unsolicited contributions are
more or less straightforward. When a user (or network domain expert) has
new information she wishes to add, she can do so by navigating to
Auto-FAQ's New Information Editor. Here, there is a form that can be edited
to add new information. The form has four main entry areas: the question
area, the context area, the answer area, and the answer type menu.
New information is entered in the form of question/context - answer pairs, by
filling out entries in the form.
Questions entered in the question field are nominally posed in natural
language forms (i.e., the forms most likely to be asked by users). Multiple
questions can be specified, one per line.
The context field takes input in the form of a series of comma delimited phrases, which
together define a scope for the question field. During retrieval, a Q/A entry is matched to a user's
query only if its context field contains every element in the query context.
This simple method is very effective for defining the logical scope of the
information, allowing for both broad and narrow searches.
Each new entry takes one answer (if a new entry contains multiple questions,
each question gets mapped to the same, single answer).
Answers come in three flavors: raw text, URLs, and links. A raw-text answer
is simply textual input answering the question. A URL type answer is a URL
hyperlink naming a document that answers the question. Links are internal
pointers that can be used to link new questions to answers already existing
in the infobase. When a URL type answer is returned in response to a user query,
a hyperlink is automatically built into the search-results list, so users can
retrieve the document immediately.
In addition to the main fields described above, the New Information Editor
also allows contributors to associate themselves (i.e., name, address, email, etc.)
with their submission.
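The form fields and answer flavors described above suggest a simple record structure for infobase entries. The sketch below is our own guess at a schema, not the actual Auto-FAQ representation; all field names are illustrative.

```python
# Illustrative record structure for infobase entries, based on the
# New Information Editor's form fields. The schema is an assumption.
from dataclasses import dataclass
from enum import Enum

class AnswerType(Enum):
    RAW_TEXT = "raw text"  # textual answer stored directly
    URL = "url"            # hyperlink to a document answering the question
    LINK = "link"          # internal pointer to an existing answer

@dataclass
class QARecord:
    questions: list          # one or more natural-language forms, one per line
    context: list            # comma-delimited phrases defining the scope
    answer: str              # text, a URL, or an internal record id
    answer_type: AnswerType
    contributor: str = ""    # optional name/address/email of the contributor
    utility: float = 1.0     # temporally weighted usefulness average
```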
Gap filling
Gap filling is a second, more interesting approach to information acquisition.
The process is simple. When a user asks a question that Auto-FAQ cannot adequately
answer, the user can post that question to the "Gap List." At that point, the question
has become an identified gap (or hole) in the infobase. The Gap List is then made
available to a population of domain experts (e.g., all or part of the regular
user population). These expert users can search, edit, answer, and delete gaps from
the Gap List. In this way, Auto-FAQ can acquire knowledge and fill gaps in the infobase.
From the user's standpoint, submitting a gap is identical to submitting a query.
The question and context fields are entered in the query dialogue as usual. The
only difference is that instead of submitting the form to the query processor,
the form is submitted to the gap list. It's that easy.
Gaps get filled in the Gap/New-Info Editor. Once there, an expert user can either
search for specific gaps (by keywords) or view the entire list. Selecting a
gap from the list brings up a Gap Editing Dialogue. From here, the user can edit the
question, context, and answer fields of the gap as in the New Info Editor.
Submitting the form amounts to filling the gap. After submitting the entry, Auto-FAQ
takes the user through a confirmation sequence, during which the contributor is given
an opportunity to 1) identify herself with the new entry, 2) automatically send an email
reply to the gap's originator, and 3) delete the gap from the Gap List.
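The gap posting and filling flow can be sketched as below. The in-memory lists and function names are illustrative assumptions; the real system works through web forms rather than function calls.

```python
# Minimal sketch of the Gap List flow described above. Data structures
# and function names are assumptions, not the actual implementation.

gap_list = []   # questions the system could not answer
infobase = []   # question-answer records

def post_gap(question, context, originator_email=""):
    """A user submits an unanswered question to the Gap List."""
    gap_list.append({"question": question, "context": context,
                     "originator": originator_email})

def fill_gap(gap_index, answer, contributor=""):
    """An expert answers a gap; the entry moves into the infobase."""
    gap = gap_list.pop(gap_index)  # delete the gap from the list
    infobase.append({"question": gap["question"],
                     "context": gap["context"],
                     "answer": answer,
                     "contributor": contributor,
                     "utility": 1.0})
```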
The beauty of gap filling is that it offloads and distributes all the hard work onto the
user population:
- Users identify the biggest, most important holes in the infobase (i.e.,
by definition the biggest gaps are the questions that need answers the most.)
- Users decide which answers are satisfactory and which are unacceptable.
(Auto-FAQ doesn't have to reason about the adequacy of its knowledge.)
- Expert users supply the information needed to fill gaps, while the Gap List
focuses their attention to the places that need it most.
- Users need only fill gaps in their area of expertise, leaving questions
outside their area to others.
Auto-FAQ's open approach to information acquisition makes adaptive
information management especially important. Auto-FAQ must constantly separate
the information wheat from information chaff.
There are basically four ways garbage can get into the infobase:
- Over time, information can become dated, obsolete, and incorrect;
- Experts can inadvertently enter useless information (e.g., typos or ambiguous or poorly
written responses);
- Not-so-expert experts can enter incorrect or useless information, even though
they genuinely believe it to be valuable;
- Malicious users can enter incorrect, useless, or misleading information in an
attempt to subvert the system for their own purposes.
Presently, Auto-FAQ uses a very simple technique for identifying information gems.
Whenever a user views a Q/A record, she is given an opportunity
to score it. Scores range from 0 (useless) to 5 (excellent) and indicate the usefulness of
the answer (with respect to its associated question). Scores are used to update temporally
weighted utility averages maintained for each Q/A record. In particular, if S(t) is the
score attributed to Q/A record q at time t, then the utility average, U_q, is
updated as follows:
U_q(t+1) <--- gamma * U_q(t) + (1-gamma)*S(t)
where gamma is a temporal discount factor (currently, gamma = 0.9).
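The update rule translates directly into code. A minimal sketch, assuming (as the paper implies elsewhere) that utility averages start at a neutral value:

```python
GAMMA = 0.9  # temporal discount factor, as given in the paper

def update_utility(current_utility, score):
    """Fold a user's 0-5 rating into a record's running utility average.
    Implements U_q(t+1) = gamma * U_q(t) + (1 - gamma) * S(t)."""
    return GAMMA * current_utility + (1 - GAMMA) * score
```

Because gamma discounts old ratings geometrically, a record's utility tracks recent opinion and gradually forgets stale feedback, which is what lets outdated entries sink over time.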
Adaptive information management is accomplished by using utility averages to bias
the retrieval process in favor of useful information. Currently, Auto-FAQ uses the
following simple heuristic for determining the relevance score of a Q/A record
with respect to a user's query:
FinalScore = (MatchScore)^3 * Utility
where MatchScore is roughly the number of relevant words that match between the
user's question and the Q/A record's question. The effect of the
rule is to put closely matched entries at the top of the retrieval list, but
to favor high utility answers when the degree of match is about equal.
Utility averages can also be used to prune the infobase, by deleting entries whose
values drop below a certain threshold.
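The ranking heuristic and the pruning step can be sketched together; the pruning threshold below is an illustrative parameter, not a value from the paper.

```python
# Sketch of the relevance heuristic and utility-based pruning described
# above. The pruning threshold is an assumed, illustrative value.

def final_score(match_score, utility):
    """Cubing the match score makes close matches dominate;
    utility breaks ties among roughly equal matches."""
    return match_score ** 3 * utility

def prune(infobase, threshold=0.2):
    """Drop entries whose utility average has fallen below a threshold."""
    return [rec for rec in infobase if rec["utility"] >= threshold]
```

The cube ensures that a record matching three relevant words outranks a higher-utility record matching only two, so utility only reorders near-ties rather than overriding relevance.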
Though simple, utility averages provide an effective mechanism for separating
the most useful information from the rest. As before, the system relies on cyberspace
leveraging for doing the hard work --- deciding what is valuable and what is not.
Status: Prototype operational
Auto-FAQ is currently implemented on an NCSA http server. The back-end scripts are
written in TCL. Users can access the system using NCSA's Mosaic for X
or other browsers that support HTML+ and forms. The system is currently running
on a private network at GTE Laboratories, serving a population of approximately
500 users. The infobase currently contains rudimentary knowledge about WWW, Mosaic,
and Auto-FAQ itself. Knowledge about WWW and Mosaic was acquired in about a
day by scanning in FAQs from NCSA.
Future: Global access
Following is our list of things to do in the near future:
- provide global WWW access by porting Auto-FAQ to GTE's public
web server (www.gte.com);
- clean up, package, and freely distribute the source code for the Auto-FAQ
back-end;
- encourage and support new Auto-FAQ applications in areas such as
newsgroup FAQs, online help, tutorials, institutional information
servers, etc.;
- continue to experiment with cyberspace leveraging on both global
and local scales;
- continue Auto-FAQ's technical development:
- explore deeper language support (especially contractions, synonyms,
stemming, pronouns, and rudimentary discourse structure);
- explore alternative ranking and information management schemes;
- explore ideas for large-scale distributed Auto-FAQ systems.
Auto-FAQ is an experiment
Auto-FAQ is an experiment, and through it we hope to shed light
on the validity of cyberspace leveraging as a new paradigm for building
intelligent systems. Following is a discussion of ingredients needed
to make Auto-FAQ a success.
The two main theses posited by Auto-FAQ are 1) cyberspace leveraging
can be an effective tool for knowledge acquisition and 2) shallow
language understanding can be effective, given a sufficiently dense
infobase. The validity of these two theses depends upon many more specific
assumptions.
Effective cyberspace leveraging depends upon the following web
of interdependent assumptions:
- the population of networked "experts" is sufficiently motivated
to monitor the Gap List and contribute information.
- users are willing to post gaps and score answers when needed,
instead of using the system only for their own purposes;
- users will generate a sufficient number of gaps and ratings to
allow information to be acquired and tracked at a reasonable rate.
- the population of "expert" users will be able to keep up with
the continual stream of gaps.
- the system can be bootstrapped from little or no knowledge to
a usable level fast enough to maintain user loyalty.
Similarly, shallow language understanding depends on several more specific
assumptions:
- reasonable Q/A performance can be obtained without an
unreasonably dense infobase;
- For a given topic, the bulk of questions fall into a relatively
small set of stereotypical forms;
- Traditional IR techniques can efficiently retrieve relevant Q/A
entries most of the time and without too many distractors;
- A high, but not perfect, coverage rate on questions is sufficient to satisfy
most user's needs;
- Auto-FAQ's inability to maintain linguistic context and
discourse structure does not interfere with convenient,
productive dialogue.
We believe these assumptions do hold in certain situations and that
Auto-FAQ can be used for intelligent question-answering
in both global and local applications. For example, Auto-FAQ should be
very useful for automatically generating and maintaining Usenet News FAQs.
Similarly, we expect Auto-FAQ to be well suited to customer service
and online help applications.
Related systems
Following is a list of other systems and services, along with a discussion
of how Auto-FAQ relates to them. The comparison helps to more clearly define
Auto-FAQ, its focus, and its distinguishing features.
- Weizenbaum's ELIZA program: ELIZA is the mother of all shallow
intelligent systems. Even though it was built as a prank, it forcefully
demonstrates the idea of shallow language processing. Many others
have followed ELIZA's lead, especially programs for MUDs (Multi-User Dungeons)
and for Turing Test competitions.
Auto-FAQ takes shallow intelligence seriously, trying not to dupe
the user but to provide a valuable service. Nevertheless, the
techniques are similar. Auto-FAQ does distinguish itself from
ELIZA and other shallow systems in its use of cyberspace leveraging
for knowledge acquisition.
- Usenet News FAQs: Frequently Asked Questions Lists (FAQs) are
incredibly useful, capturing a topic's most useful information in a core list of question-answer pairs.
Not only can FAQs be found
all over the Internet, they are finding their way into our
popular culture. The trouble with FAQs is they require a moderator;
that is, a person who compiles, publishes, and maintains
the list. This can be a daunting task, and the scope and quality of
a FAQ may suffer due to resource limitations. After all, there is
only so much a person can do. Another important feature of FAQs
is their length. FAQs tend to be short. This makes them easier to
scan, and easier to maintain. But, it also limits the depth of coverage
possible. Auto-FAQ, as its name implies, tries to automate FAQ
generation and maintenance. It offloads many of the moderator's tasks
onto the user population. Doing this not only distributes the work
load, but also allows topics to be covered in much more detail.
- Newsgroups and Bulletin Boards: People often use newsgroups
and bulletin boards to get answers to questions. Newsgroups and
bulletin boards are an excellent example of population leveraging.
However, there are problems. First, content is fleeting;
news items are transient and may last for only a few days. Second,
people tend to ask the same questions over and over, hence the
invention of FAQs. Third, there is too much noise; to get an answer
or contribute to a topic, one must often wade through a sea of
uninteresting messages. Smart news readers help, but noise remains
a problem. Auto-FAQ addresses these issues. It archives
information and provides search capabilities for both answers
and questions.
- Case Based Reasoning Tools: Several case based reasoning tools
are currently being applied to online customer service applications
with astounding success. Inference is leading the pack with CBRExpress.
Other close followers include Scopus, Software Artistry, and
Answer Systems. Auto-FAQ shares many features with these case-based
systems, among them natural language input, shallow (IR-based)
retrieval, and population leveraging for knowledge acquisition.
Auto-FAQ distinguishes itself from these systems 1) in its orientation
toward the global Internet community (via WWW), 2) in its use of
a Gap List for information acquisition, and 3) in its use of
answer scores for adaptive information management.
- Internet Retrieval Tools: Internet retrieval tools like ftp, gopher,
and WWW offer access to a terrific volume of information.
Unfortunately, accessing relevant information can be difficult
since it is often hidden in servers around the world. Indexes and
search engines are getting better every day, but the problem of
finding useful information remains. Auto-FAQ differs from these
services in that information is local, not distributed on computers
around the world. Obviously, this significantly simplifies the
information retrieval problem. (Note: we have recently begun to
look at an architecture for a large scale distributed Auto-FAQ system.)
Internet services also differ from Auto-FAQ in that they tend to
view information at the document level. Auto-FAQ tends to be
more specific, aiming to answer specific questions.
- Other online database services: The major features that distinguish
Auto-FAQ from the bulk of online information services available
today are:
- Internet orientation
- Population leveraging for information acquisition
and adaptive information management
- Natural language interface and shallow language
understanding;
- More oriented toward questions and answers than toward
keywords and database records.
We began this paper by introducing the paradigm of cyberspace leveraging ---
the idea of using computer networks to harness the skill and energy
of a large population to do useful work. Following a few examples, we
focused our attention on applying cyberspace leveraging towards knowledge
acquisition and the development of intelligent systems. In particular, we
described an intelligent question-answering system called Auto-FAQ. The system
is currently being used to answer questions about itself, the World-Wide-Web, and Mosaic
on an internal network at GTE Laboratories. However, the concept has a wide range of
potential applications.
The Auto-FAQ prototype has enabled us to explore the feasibility of
cyberspace leveraging for knowledge acquisition. The system uses population
leveraging in three ways: to identify gaps in its infobase, to add
knowledge and fill gaps in its infobase, and to gather feedback on the
utility of the information it has. Although our experiments are far from complete,
preliminary results are very positive.
If cyberspace leveraging is to be feasible for knowledge acquisition, a large
population of users must be able to contribute feedback and information
with ease and convenience.
Among other things, this implies that the system should accept small incremental
contributions and that users should not have to be fluent in the internal language
(or knowledge representation) of the system. These requirements, in turn, put a heavy
burden on the system, and render conventional AI techniques like deduction and
inference impractical. To overcome this problem for Auto-FAQ, we have explored the
possibility of "shallow language understanding". Shallow language understanding
is a cross between Weizenbaum's ELIZA program and traditional information retrieval.
Instead of analyzing the deep semantic content of a question to derive an answer,
Auto-FAQ uses surface level features to generate an index into its infobase.
Given a densely populated infobase, we believe that traditional IR techniques
augmented with rudimentary language skills can be extremely effective for
intelligent question-answering. Again, preliminary results are positive.
Our future plans are to put Auto-FAQ on the global
WWW and to release its back-end software to the Internet community. Longer term goals
are to explore new applications for Auto-FAQ, incorporate rudimentary language
processing skills, and explore the feasibility of a large-scale, distributed
system.
Steven D. Whitehead is currently a member of the Adaptive
Systems Department
at GTE Laboratories in Waltham MA, where he is working on intelligent network
agents. Prior to joining GTE in 1992, Mr. Whitehead was a graduate student at
the University of Rochester, in Rochester, NY. There, his research focused on
machine learning, especially the application of reinforcement learning to intelligent
robot control.
Mr. Whitehead received a Bachelors degree in Physics from Washington State University
in 1982, a Masters degree in Electrical Engineering from Clemson University
in 1984, and a PhD in Computer Science from the University of Rochester in 1992.
He also spent several years in the mid-80's as a member of the Artificial Intelligence Group
at GTE Government Systems, Mountain View CA.
In his spare time, Steve enjoys hiking, fishing, woodworking, and gardening.
Steve can be reached at: swhitehead@gte.com or at (617) 466-2193.