Simulation is being used extensively by the armed forces to train their troops as an alternative to carrying out expensive exercises in the field. The objective of using a computer for training is to give soldiers the experience of working as part of a team in a variety of battle situations, under conditions as close to those of the real world as possible. The simulation should provide a realistic training ground in which the soldiers' skills can be tested in a variety of situations.
This project investigates the issues involved in using the multi-agent system paradigm to model the battlefield scenario. Agents represent a variety of different entities in the organisational hierarchy of the army, which must work together as a team in order to achieve the battle objectives. It is important to look at the structures and protocols that need to be in place for the effective and efficient accomplishment of the overall task. This work is based on the joint intentions model of Jennings [Jennings 93] and the STEAM execution model of Tambe [Tambe 98]. We are particularly interested in how the agents deal with uncertainty and failure, and in what mechanisms are needed to ensure that the task is still accomplished despite the breakdown of one or several parts of the team plan.
[Jennings 93] N. R. Jennings, Commitments and Conventions: The Foundations of Coordination in Multi-Agent Systems, Knowledge Engineering Review, 8(3), pp. 223-250, 1993.
[Tambe 98] M. Tambe, Implementing Agent Teams in Dynamic Multi-agent Environments, Applied AI, 12, 1998.
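The joint-intentions convention at the heart of this work (an agent that privately discovers the joint goal to be achieved, or unachievable, must inform its teammates so that mutual belief is restored) can be sketched roughly as follows; all class and method names are invented for illustration:

```python
# Minimal sketch of the joint-intentions convention: an agent dropping a
# joint commitment must notify its teammates rather than silently abandon
# the team plan. (Hypothetical names; not the Jennings/Tambe code.)

from enum import Enum

class Status(Enum):
    ACTIVE = "active"
    ACHIEVED = "achieved"
    UNACHIEVABLE = "unachievable"

class Agent:
    def __init__(self, name):
        self.name = name
        self.team = []
        self.status = Status.ACTIVE

    def discover(self, status, reason=""):
        """Local discovery that the joint goal is achieved or impossible."""
        self.status = status
        # Convention: broadcast the change so the team shares the belief.
        for mate in self.team:
            mate.receive(self.name, status, reason)

    def receive(self, sender, status, reason):
        # A still-active teammate adopts the reported status.
        if self.status is Status.ACTIVE:
            self.status = status

def form_team(agents):
    for a in agents:
        a.team = [b for b in agents if b is not a]

alice, bob, carol = Agent("alice"), Agent("bob"), Agent("carol")
form_team([alice, bob, carol])
alice.discover(Status.UNACHIEVABLE, "bridge destroyed")
# every teammate now shares the belief that the joint goal is unachievable
```

The point of the convention is exactly what the sketch shows: team coherence survives a local failure because the failing agent is obliged to communicate, giving the others the chance to repair or abandon the joint plan.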
This EPSRC funded collaborative project aims to apply game theoretic techniques to the design of negotiation algorithms for use by autonomous software agents in electronic commerce. The other partner is the ESRC Centre for Economic Learning and Social Evolution (ELSE).
A JISC eLib project to provide a framework for publishing journals in a network environment, maximising access to the publications. This involved adding hypertext navigation overlays (using the Distributed Link Service and information-mining agents) to static archives of pre-published material. We are currently seeking outlets for the resulting hypertext technologies.
The OpCit project is developing a reference-linking service for Open Archives, using the Distributed Link Service and adapting tools for reference linking from PDF documents that were developed in the earlier Open Journal Project.
Initially the project will hyperlink each of more than 100,000 papers in the Los Alamos physics eprint archive to every other paper in the archive that it cites. The project will extend to link references in papers held in other freely accessible, distributed archives that conform to the proposal for Open Archives.
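The intra-archive linking step can be illustrated with a minimal sketch (the identifiers and data below are invented): a citation becomes a hyperlink only when the cited paper is itself held in the archive.

```python
# Sketch of intra-archive reference linking: map each paper to the
# papers it cites, then keep only citations that resolve to papers
# held in the same archive. (Illustrative data, not real eprint IDs.)

archive = {
    "hep-th/9901001": ["hep-th/9812002", "gr-qc/9701004", "ext/0000001"],
    "hep-th/9812002": ["hep-th/9901001"],   # mutual citation
    "gr-qc/9701004": [],
}

def build_links(archive):
    """Return, for each paper, the cited papers that are in the archive."""
    return {
        paper: [ref for ref in refs if ref in archive]
        for paper, refs in archive.items()
    }

links = build_links(archive)
# "ext/0000001" is cited but not held, so it yields no link
```

Extending the service across distributed Open Archives amounts to widening the membership test from one archive to a federation of them.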
Working with the Cornell Digital Library Research Group the project will also investigate the semantics of documents to allow linking, and model interoperability between linking services and other digital library services.
It is hoped that this new way of navigating the scientific journal literature will encourage authors in other fields to create inter-linked online archives like Los Alamos, or Cogprints, across disciplines and around the world. A related project, EPrints, provides free software to enable institutions or special interest groups to build these new online archives.
The OntoPortal system uses ontological hypermedia principles to enrich the linking between resources (or concepts) within a scholarly community (such as its literature, projects and conferences). This allows researchers not only to position a concept within the context of the entire community in which they work but, more importantly, to pose intricate research queries (such as "What other papers discuss the XML standard?").
The links in ontological hypermedia are defined according to the relationships between real-world objects. An ontology that models the significant objects in a scholar's world can be used to produce a consistently interlinked research portal. After the concepts and complex relations within a particular community have been formally defined, the OntoPortal system projects the relationships between the concepts over the information contained within the scholarly community. This greatly improves the navigational facilities offered by the system by adding rich and meaningful interlinking of the concepts.
While the underlying resources might contain only a few links, every concept within the OntoPortal system is linked to every other related concept (as defined by the ontology). The resulting ontological hypermedia not only allows users to understand fully how the concepts relate to the rest of the community, but also introduces the ability to answer queries by following links (query-by-linking) rather than by issuing a search query. For example, resolving the query "What other papers discuss the XML standard?" simply involves following the link between the literature and the standard, as this relationship has been made explicit through the ontological hypermedia.
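Query-by-linking can be sketched in a few lines, assuming the ontology's relations are stored as (subject, relation, object) triples; every name below is illustrative rather than drawn from the actual OntoPortal implementation:

```python
# Sketch of query-by-linking: a research query is answered by
# traversing the ontology's explicit links, not by text search.
# (Invented identifiers and relation names.)

triples = [
    ("paper:42", "discusses", "standard:XML"),
    ("paper:57", "discusses", "standard:XML"),
    ("paper:57", "presentedAt", "conf:WWW9"),
    ("paper:99", "discusses", "standard:RDF"),
]

def follow_links(triples, relation, obj):
    """Which subjects relate to obj via the given relation?"""
    return sorted(s for s, r, o in triples if r == relation and o == obj)

# "What other papers discuss the XML standard?"
xml_papers = follow_links(triples, "discusses", "standard:XML")
```

Because the "discusses" relationship is explicit in the ontology, the query reduces to a link traversal, which is what gives ontological hypermedia its navigational richness.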
Initially we applied the OntoPortal system to the metadata research community under a project funded by the Defence Evaluation and Research Agency (DERA). We used the OntoPortal system to provide a rich interlinking between the concepts in the metadata research community.
The research will investigate the role of intelligent agents in future-generation mobile communication environments. In particular, issues related to how agents can flexibly adapt their behaviour and their interactions to the characteristics of their current communication environment will be explored. Particular attention is given to automated negotiation in such systems.
The aim of the mohican project is to investigate the use of high-level multi-agent interaction mechanisms for providing network services. mohican addresses the overall aim of the "Programmable Networks Initiative" by designing patterns of interaction that facilitate the deployment of new network services where adaptive behaviour is required. In particular, mohican's objectives relate to four of the research topics identified in the call for proposals, namely:
The MAVIS project was a programme of research to develop Multimedia Architectures for Video, Image and Sound. Within the architecture separate modules are responsible for all the processing associated with a particular media-based feature type and, as new feature types are introduced and associated matching techniques developed, appropriate new modules may be added to the architecture. For example, to make use of the added richness which digital video presents, modules are being developed which understand the temporal nature of the video and which can extract combined spatial and temporal features.
The aim of the MAVIS 2 project, funded by EPSRC, is to introduce multimedia thesaurus (MMT) and intelligent agent support for content-based retrieval and navigation. The earlier MAVIS 1 project was concerned with enhanced handling of images and digital video sequences in multimedia information systems. The project will extend the Microcosm architecture to support the MMT, in which representations of objects in different media are maintained together with their inter-relationships. Intelligent agents will be developed to classify and cluster information from the MMT, as well as additional knowledge extracted during information authoring and indexing, in order to seek out discriminators between object classes, to find naturally occurring groupings of media-based features, and to accelerate media-based navigation. Multimedia content-based retrieval and navigation also demand new media viewers which incorporate facilities for processing and analysis. Such viewers are being investigated, in particular to allow rapid identification of image-based objects.
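The core of content-based retrieval as described above can be illustrated with a small sketch: each media item is reduced by its feature module to a feature vector, and retrieval ranks stored items by their distance from the query's features. The data and function names here are invented, not the MAVIS code:

```python
# Sketch of content-based matching over media features: items are
# indexed by feature vectors (e.g. a colour histogram), and a query
# is answered by nearest-feature ranking. (Illustrative data only.)

import math

index = {
    "sunset.jpg": [0.9, 0.1, 0.2],
    "forest.jpg": [0.1, 0.8, 0.3],
    "beach.jpg":  [0.8, 0.2, 0.4],
}

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(query_features, index, k=2):
    """Rank indexed media by feature distance to the query."""
    ranked = sorted(index, key=lambda name: distance(query_features, index[name]))
    return ranked[:k]

matches = retrieve([0.88, 0.12, 0.22], index)
```

The agent support described above would sit on top of such an index, clustering feature vectors into naturally occurring groupings and learning which features best discriminate between object classes.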
Malibu was an eLib project working on hybrid libraries, which contain physical books and other artefacts as well as digital collections. The project developed and implemented prototype hybrid libraries in each of the three major partner institutions and produced a new search engine called GIGA.
One of the challenges is to link up library databases (via Z39.50) with archives running different databases, plus external services such as BIDS, so that queries are passed to all data sources. GIGA passes searches to sites such as libraries, which have a structured interface, as well as to database-driven web sites and semi-structured sites. It collates the results and unifies their appearance.
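The fan-out-and-collate pattern behind GIGA can be sketched as follows, assuming each backend (a Z39.50 catalogue, a database-driven web site, and so on) exposes its own search and returns records in its own field names; everything here is a hypothetical stand-in, not the GIGA code:

```python
# Sketch of federated search: fan a query out to heterogeneous
# sources, then normalise their differing record formats into one
# unified appearance. (Invented sources and field names.)

def z3950_source(query):
    # stand-in for a real Z39.50 catalogue search
    return [{"ti": "Hybrid Libraries", "au": "Smith"}]

def web_db_source(query):
    # stand-in for a database-driven web site
    return [{"title": "Hybrid Libraries", "creator": "Smith, J."}]

def normalise(record):
    """Map source-specific field names onto one unified record format."""
    return {
        "title": record.get("ti") or record.get("title"),
        "author": record.get("au") or record.get("creator"),
    }

def federated_search(query, sources):
    """Pass the query to every source and collate normalised results."""
    results = []
    for source in sources:
        results.extend(normalise(r) for r in source(query))
    return results

hits = federated_search("hybrid libraries", [z3950_source, web_db_source])
```

A production broker would add the harder parts the sketch omits: per-source query translation, timeouts for slow backends, and de-duplication of records describing the same item.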