AGREEMENT TECHNOLOGIES CONSOLIDER PROJECT 2007

Main Website

http://www.agreement-technologies.org

Project objectives

This project, Agreement Technologies (AT for short), aims to develop models, frameworks, methods and algorithms for constructing large-scale open distributed computer systems. We plan to anticipate solutions for the needs of next-generation computing systems, where autonomy, interaction and mobility will be the key issues. The project will develop technologies to cope with the high dynamicity of the system topology and with semantic mismatches in interaction, both natural consequences of the distributed and autonomous nature of the components. The project will also focus on security, developing a new concept of operating system that incorporates low-level security mechanisms and trust measures complementing classical cryptographic methods. Trust measures are essential in open environments, where interactions have to take place under uncertainty about the state of the environment. Most importantly, the project will concentrate on techniques that enable software components to reach agreements on the mutual performance of services. Negotiation, argumentation, decision making, knowledge modelling, virtual organisations and learning will be the sandbox of techniques used by the project to build this next generation of software systems. We envisage a new programming paradigm based on two concepts: (1) a normative context, which determines the rules of the game, i.e. how interactions between agents may take place, and (2) a call-by-agreement interaction method based on a two-step process: first, the establishment of an agreement for action between the agents that respects the normative context, and second, the actual program call that enacts the action. We will also address the need for software engineering methodologies that deal with the issues raised in the project.
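The two-step call-by-agreement interaction can be sketched in a few lines of Python. This is a purely illustrative sketch: the classes NormativeContext, Agreement and ServiceAgent and the function call_by_agreement are hypothetical names chosen for this example, not part of any project tool.

```python
# Illustrative sketch of the call-by-agreement paradigm described above.
# All names here are hypothetical; they are not part of any project API.

class NormativeContext:
    """The 'rules of the game': which actions an agent may agree to."""
    def __init__(self, permitted_actions):
        self.permitted = set(permitted_actions)

    def allows(self, action):
        return action in self.permitted


class Agreement:
    """A record of the action agreed between two agents."""
    def __init__(self, provider, consumer, action):
        self.provider, self.consumer, self.action = provider, consumer, action


class ServiceAgent:
    def __init__(self, services):
        self.services = services  # action name -> callable

    def enact(self, agreement):
        return self.services[agreement.action]()


def call_by_agreement(context, provider, consumer, action):
    # Step 1: establish an agreement that respects the normative context.
    if not context.allows(action):
        raise PermissionError(f"action {action!r} violates the normative context")
    agreement = Agreement(provider, consumer, action)
    # Step 2: only then perform the actual program call that enacts the action.
    return provider.enact(agreement)


ctx = NormativeContext(["translate"])
provider = ServiceAgent({"translate": lambda: "hola"})
print(call_by_agreement(ctx, provider, "client", "translate"))  # prints: hola
```

The key point the sketch captures is that the enactment (step 2) is only reachable through an agreement that the normative context sanctions (step 1).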

These objectives build upon the combined expertise of the three participating research groups: multiagent systems (MAS), learning and uncertainty at IIIA; agent-based software engineering, multiagent system architectures, inductive learning, distributed systems and planning at UPV; and coordination, semantic web engineering and virtual organisations at URJC. The project opens a number of new research lines that aim to produce a breakthrough in current views on software development and web computing.

The approach

The project combines incremental research along the lines in which the groups are strong with disruptive research on aspects that require new and different solutions. The project is framed as a combination of formal approaches with practical solutions in the form of software tools and prototypes. The research and the tools will be tested on three case studies.

We consider that the research to be developed has to be grounded in solid formal models that allow for the development of computational theories, the proof of behavioural properties and the easy transmission of knowledge in the field. The project will thus build upon different types of logics, formalisms and semantic methods; it will use several well-established bodies of knowledge (e.g. information theory, game theory, or data mining), and it will propose new formalisms where the ideas explored in the more disruptive research require them.

The project has a strong practical aim as well. One of its goals is to achieve significant industrial impact through the demonstrators and the advisory boards. We want the impact of all the formal and theoretical research to be reflected in a new generation of software tools and systems. We want to generate open distributed systems where joining as a new member is as simple as downloading a plug-in; we want users to be just a click away from being part of a large society of regulated components, or a click away from using web services. The project will produce tools that combine enhanced versions of existing tools developed by the three research groups (EIDE, ENP, SIMBA, MAGENTIX, SPADE, MAST, TOAST, RICA) with newly developed software components.

Finally, the project aims at developing proof-of-concept applications in three domains that are massively distributed and that are rich enough to test the variety of techniques explored: eProcurement (electronic procurement), mHealth (mobile health), and mWater (water dispute resolution).

eProcurement: Open eProcurement is in urgent need of tools that permit the flexible negotiation of agreements, the verification of commitments, the building of long-lasting relationships, the modelling of trust, and online dispute resolution. These are precisely the problems this project aims to study.

mHealth: Efficiently providing high-quality healthcare services to mobile citizens is becoming increasingly important in the society of the 21st century. Through the use of Agreement Technologies we will address some highly relevant challenges in this field. In particular, we will provide value-added services for foreign business people and tourists on the move in case of medical emergencies. Through mobile devices such as PDAs or smart phones, a person who suddenly feels ill can initiate coordination processes that dynamically orchestrate the relevant services and forge agreements among the organisations involved, so as to ensure the best possible treatment for the patient. Such an application is especially relevant for countries like Spain, where a high percentage of GDP comes from travel and tourism.

mWater: Water scarcity in countries like Spain is a crucial issue for the future. Traditionally, institutions have played a major role in setting agreements and resolving disputes (like the ‘Tribunal de les Aigües’ in Valencia, which has resolved water-sharing disputes since the Middle Ages). We plan to apply agreement technologies in the construction of a prototype that could mediate in the negotiation and resolution of water-usage disputes. This application is of strategic importance for Spanish society and the Spanish economy.

Project context

Most current transactions and interactions at the business level, but also at the leisure level, are mediated by computers and computer networks. From email to virtual worlds, the way people work and enjoy their free time has changed dramatically in less than a generation. This change has led IT research and development to focus on aspects like new human-computer interfaces or enhanced routing and network management tools. However, the biggest impact has been on the way applications are conceived and developed. These applications require components to which more and more complex tasks can be delegated: components that show higher levels of intelligence and that are capable of sophisticated ways of interacting, as they are massively distributed and sometimes embedded in all sorts of appliances and sensors. These autonomous components are usually termed ‘agents’ to stress their capability of representing human interests and of being autonomous and socially aware.

These trends in software development have motivated a number of research initiatives in Europe and the USA in recent years. One of those most closely related to our goals was the Global Computing initiative (GCI), launched in 2001 as part of the FP6 IST FET Programme. The vision of the call, also present in the current Global Computing II (GCII) initiative, was to focus research on large-scale open distributed systems: a timely vision given the exponential growth of the Internet, the attention generated in the media and in scientific fora by international initiatives such as the Semantic Web and IBM's autonomic computing concepts, and the peak of Napster usage in 2001, with more than 25 million users. Most projects had a highly interdisciplinary nature, and a large number of groups from theoretical computer science, agents, networks and databases worked together in a fruitful way.

The focus of GCI was on three main topics: analysis of systems and security; languages and programming environments; and foundations of networks and large distributed systems. Along these lines, GCI projects dealt with formal techniques, mobility, distribution, security, trust, algorithms and dynamics. The focus was ambitious and foundational, with an abstract view of computation at the global level, having as particular examples the Grid of computers and the telephone network. Both functional and non-functional (e.g. quality of service) properties had to be studied. The focus of GCII shifted towards issues that would help in the actual deployment of such large applications, namely security, resource management, scalability and distribution transparency.

Other initiatives for large distributed systems (although not necessarily open in our sense) include P2P systems, where nodes in a graph act as both clients and servers and share a common ontology that permits easy bootstrapping and scalability, and Grid applications, where the nodes in a graph share and interchange resources for the completion of a complex task. The Semantic Web proposal, which has received large funding in the EU and the USA, is generating standards for ontology definition and tools for the automatic annotation of web resources with meta-data. The size of the Semantic Web is growing at a high pace (10 million documents with meta-data by the end of 2006). The availability of applications as web services has permitted an approach to building complex systems by combining already available web services; their annotation through standards like WSDL or BPEL permits the automatic orchestration of solutions for complex tasks. Combinations of Semantic Web and Web services standards are currently underway (SA-WSDL, SEE TC) within standardisation bodies such as the W3C and OASIS. Finally, a strong social approach to developing new web applications is at the heart of the Web 2.0 initiative (wikis, Flickr, blogs).

Project motivation

Although much effort has been devoted to the projects and initiatives mentioned above, a large number of unsolved questions remain that require a significant research effort and, in some cases, a completely new and disruptive vision. This project will propose a new programming paradigm for open distributed systems that addresses some of these unsolved questions. It will perform fundamental research on a large number of functional and non-functional aspects:

1. Semantics. The openness in the development of agents/components/services creates the need for semantic alignments between different ontologies. Although standards are in place for ontology representation (e.g. OWL), there is still no scalable solution for automating the alignment of semantics, and rule-based languages for properly defining such alignments are still on the W3C's agenda.

2. Resource management. Many task solutions require recruiting agents to form teams, as in the mHealth case study or in rescue scenarios. How many agents are involved and which tasks are associated with each of them are difficult questions. Traditional planning systems and balancing algorithms are not well adapted to the high dynamics and uncertainty of these environments.

3. Dynamicity. The way components/agents/services interact depends on two types of dynamicity. First, networks evolve, new nodes appear and nodes disappear. Second, the rules of the game that regulate the interaction between agents might change by decisions of the members of an organization. Therefore, the structure of the organization, the established agreements, the responsibilities and the workflow structure might need to be changed.

4. Adaptability. Agents have behaviour that may change over time. There may be different normative contexts within which to reach agreements, and the conventions and norms regulating a particular agreement context may themselves change over time. These and other considerations require that an agent's code be highly adaptive to the conditions of its environment. However, current approaches to code verification still rely on static verification techniques. Adaptive code is a prerequisite for the deployment of open distributed applications.

5. Workflow. The influence of business process analysis (e.g. BPEL or BPEL4WS) on system architectures has brought into the software engineering arena the need to define software according to a detailed workflow analysis that regulates the activities and the combination of roles in an organisation, as well as the associated data flow. However, there is no declarative semantics for the obligations and prohibitions over roles or over the combination of components. This is a handicap for the decision making that an entity in an open system needs to perform in order to decide how to navigate the workflow.

6. Composition. Most programming languages and methodologies base their semantics on a compositional view of code: knowing the behaviour of the components and how they are combined, we can derive the overall behaviour. This approach is to a large extent not applicable to open systems, where the behaviour of the components cannot be predetermined from the content of a call. First, the behaviour of an entity is determined by the normative context and the agreements signed, which are not completely determined at component definition time. Second, even when agreements are signed, the behaviour of the agents might not be completely determined, as their autonomy and selfishness might lead them not to honour their commitments if there is a potential gain in doing so. New and radically different approaches are required to deal with this.

7. Scalability. Most scalable solutions for very large distributed applications rely on a shared, fixed ontology (e.g. for music in Napster, or for phone calls in Skype); there is as yet no technological answer for cases where ontologies are diverse and alignment is required. How a network can be formed on the fly when its nodes are semantically heterogeneous is still an open question. Moreover, most current software platforms scale badly and none has real-time properties. These pitfalls have been a barrier to the transfer of agent technologies to industry.

8. Security. Several issues relate to security over open networks. The first is how to guarantee identity, which is to a large extent solved by cryptographic methods. The second is how to guarantee behaviour. For this second aspect, some promising recent research has been carried out on the concept of Proof-Carrying Code, in which mobile code carries within itself a property of its behaviour and a proof that the code satisfies that property. Code and properties are input to compilers that generate code and certificates permitting verification that the code has not been tampered with. However, when the code is not mobile, as in the area of web services (where the service is executed remotely), the possibility of fraud and malevolent behaviour creates a security threat to applications. No definitive solution has been found yet.

9. Usability. The monitoring and debugging of open distributed applications is a difficult task, and the tools available nowadays are extremely poor.
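The semantic-alignment problem of point 1 can be illustrated with a toy sketch: map the terms of one vocabulary onto the lexically closest terms of another. Real aligners also exploit ontology structure and instance data; the align function and the two vocabularies below are purely hypothetical examples under strong simplifying assumptions.

```python
# Toy illustration of ontology alignment via lexical similarity.
# The function and vocabularies are hypothetical; real aligners use
# much richer evidence than string similarity.
from difflib import SequenceMatcher

def align(ontology_a, ontology_b, threshold=0.6):
    """Return {term_a: term_b} for pairs whose similarity exceeds threshold."""
    mapping = {}
    for a in ontology_a:
        best, score = None, threshold
        for b in ontology_b:
            s = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if s > score:
                best, score = b, s
        if best is not None:
            mapping[a] = best
    return mapping

seller_terms = ["Price", "DeliveryDate", "Warranty"]
buyer_terms = ["price", "delivery_date", "guarantee"]
print(align(seller_terms, buyer_terms))
```

Note that "Warranty" and "guarantee" mean the same thing but are not lexically similar enough to be matched, which is exactly the kind of semantic mismatch that purely syntactic techniques miss.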
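The team-formation question of point 2 can likewise be sketched. The greedy assignment below is a deliberately naive baseline that ignores the dynamics and uncertainty the text highlights; the tasks, agents and the form_team function are hypothetical examples.

```python
# Naive greedy team formation: assign each task to the first free agent
# that has the required skill. All names are hypothetical examples.

def form_team(tasks, agents):
    """tasks: {task: required_skill}; agents: {agent: set of skills}.
    Returns {task: agent}, using each agent at most once."""
    assignment, free = {}, set(agents)
    for task, skill in tasks.items():
        for agent in sorted(free):   # deterministic order for the sketch
            if skill in agents[agent]:
                assignment[task] = agent
                free.discard(agent)
                break
    return assignment

tasks = {"triage": "medicine", "transport": "driving"}
agents = {"ambulance1": {"driving"}, "doctor1": {"medicine", "first_aid"}}
print(form_team(tasks, agents))  # {'triage': 'doctor1', 'transport': 'ambulance1'}
```

A greedy pass like this breaks down as soon as agents appear and disappear or task requirements are uncertain, which is why the project targets planning techniques adapted to dynamic environments.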
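Point 6 can be made concrete: the outcome of a call to an autonomous component depends on the agent's private utility at run time, not only on the code of the components, which defeats compositional reasoning. The agent class and payoff values below are hypothetical.

```python
# Sketch of why compositional reasoning breaks down with autonomous agents:
# two agents receive the same call but behave differently, because each
# honours a commitment only when that maximises its own utility.
# All names and payoffs are hypothetical.

class SelfInterestedAgent:
    def __init__(self, honesty_bonus, defection_gain):
        self.honesty_bonus = honesty_bonus    # value of keeping one's reputation
        self.defection_gain = defection_gain  # payoff for breaking the deal

    def fulfil(self, commitment):
        # The caller cannot deduce this decision from the call's content alone.
        if self.honesty_bonus >= self.defection_gain:
            return commitment()   # honour the agreement
        return None               # defect

deliver = lambda: "goods delivered"
honest = SelfInterestedAgent(honesty_bonus=10, defection_gain=3)
greedy = SelfInterestedAgent(honesty_bonus=2, defection_gain=7)
print(honest.fulfil(deliver))  # goods delivered
print(greedy.fulfil(deliver))  # None: same call, different behaviour
```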

Moreover, the project will develop applied research in the form of tools and systems:

* Algorithms. A number of algorithms will be developed as a consequence of the fundamental research: for instance, algorithms for negotiation, persuasion and argumentation, compilers between norm representation formalisms, semantic aligners, and an agreement planner.

* Methodology. A complete methodology for developing software following the new norm/agreement/call-by-agreement programming paradigm will be proposed and fully tested in the three highly complex case studies. Graphic tools supporting the methodology will be generated by the project.

* Toolbox. Building an open distributed system will require a number of different tools that engineers will use to specify, verify and test applications. The project will also provide the infrastructure to run those applications, that is, the overlay global computer.

Finally, the project will apply the tools and theories to build demonstrators for the three case studies, selected as exemplars of open distributed systems: eProcurement (electronic procurement), mWater (water-usage dispute resolution) and mHealth (health support for mobile citizens).

Expected impact

This project will have impact along a number of dimensions. First of all, it addresses a significant number of goals of the current Spanish National Plan of Computer Technologies (TIN), especially 3.1 (Agent architectures and models) and 3.2 (Multiagent architectures and organisational models). At the European level it also clearly dovetails with the Semantic Web and FET programmes of the 7th Framework Programme of the European Commission, and with the EUROCORES programme of the European Science Foundation, in particular the LogiCCC programme on modelling intelligent interaction.

The project will significantly strengthen Spain's research community in open multiagent systems and consolidate the position of the three participating groups at the forefront of international research. By creating 15 new research positions it will contribute to the education and training of younger scientists. This will be reinforced at the two universities (UPV and URJC) through teaching in related Master programmes and through the organisation of related workshops and summer schools. We will take advantage of these workshops and summer schools to strengthen the team's relationships with the Spanish multiagent systems community.

Links to business, industry and government will be cemented via the participation of external advisors in the different project boards. Existing contacts with end-users will be consolidated and greatly extended through the project demonstrators covering three different application domains.

Scholarships:

Application form for PhD scholarships at GTI-IA, Universidad Politecnica de Valencia