Adaptive reuse of learnt knowledge is critically important in most knowledge-intensive application areas, particularly when the context in which a learnt model operates can be expected to vary between training and deployment. In machine learning this has been studied, for example, in relation to variations in class and cost skew in (binary) classification, leading to the development of tools such as ROC analysis for adjusting decision thresholds to the operating conditions. More recently, considerable effort has been devoted to research on transfer learning, domain adaptation, and related approaches.
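As a concrete illustration of this kind of context adjustment, the sketch below shows the standard decision-theoretic recipe: the cost-optimal threshold on the positive-class posterior follows from the false-positive and false-negative costs, and a posterior estimated under a training-time class prior can be re-weighted when the deployment prior differs. The function names and the specific numbers are illustrative, not taken from the workshop material.

```python
def cost_threshold(c_fp, c_fn):
    """Threshold on P(y=1|x) that minimises expected misclassification
    cost: predict positive when the posterior exceeds this value."""
    return c_fp / (c_fp + c_fn)

def adjust_posterior(p, pi_train, pi_deploy):
    """Re-weight a posterior p = P(y=1|x) estimated under training
    prior pi_train to a new deployment prior pi_deploy, assuming the
    class-conditional densities themselves are unchanged."""
    odds = (p / (1 - p)) * (pi_deploy / pi_train) * ((1 - pi_train) / (1 - pi_deploy))
    return odds / (1 + odds)

# With symmetric costs the threshold is 0.5; if a false negative is
# nine times as costly as a false positive, it drops to 0.1.
print(cost_threshold(1, 1))   # 0.5
print(cost_threshold(1, 9))   # 0.1

# A posterior of 0.5 learnt with a 20% positive rate rises to 0.8
# when positives make up 50% of the deployment population.
print(adjust_posterior(0.5, 0.2, 0.5))  # 0.8
```

The point of the sketch is that the same fitted model can serve a whole family of operating conditions: only the threshold and prior correction change, not the model itself.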
Given that the main business of predictive machine learning is to generalise from training to deployment, there is clearly scope for developing a general notion of operating context. Without such a notion, a model predicting sales in Prague for this week may perform poorly in Nancy for next Wednesday: the operating context has changed in both location and resolution. While a given predictive model may be sufficient and highly specialised for one particular operating context, it may not perform well in others. If sufficient training data for the new context is available, it might be feasible to retrain a model from scratch; however, this is generally not a good use of resources, and one would expect it to be more cost-effective to learn a single general, versatile model that generalises effectively over multiple, possibly previously unseen, contexts.
The aim of this workshop is to bring together people working in areas related to versatile models and model reuse over multiple contexts. Given the advances made in recent years on specific approaches such as transfer learning, an attempt to start developing an overarching theory is now feasible and timely, and can be expected to generate considerable interest from the machine learning community. Papers are solicited in all areas relating to model reuse and model generalisation, including the following:
This will be a highly interactive one-day workshop with two to three invited talks, three or four thematic sessions organised around the submitted papers, a poster session and a concluding discussion. To facilitate interaction and the exchange of ideas, the workshop and each of the thematic sessions will be introduced by a researcher with considerable experience in the area. We are fortunate that Dr José Hernández-Orallo from the Technical University of Valencia has agreed to give the first invited talk, on Context Change and Versatile Models in Machine Learning, which will put the workshop topics in context.
For the rest of the workshop, we will schedule up to four thematic sessions (two in the morning, two in the afternoon) of around 80 minutes each, with breaks between them. Each session will be grouped around a thematically related subset of submitted papers (e.g., adapting to changes in data distribution, or versatile models). Each session will be chaired by a member of the Program Committee, who will give a short introduction presenting the main issues and the state of the art, and will close the session by formulating the main conclusions from the discussion among workshop participants. This format will encourage interaction between participants and also help introduce the topic to those with less experience in it.
A poster session will be scheduled either over lunch or during the afternoon. All submitting authors will be asked to prepare a poster to facilitate more directed one-on-one discussion during the poster session. We anticipate that we may have more submitted papers than plenary presentation slots, in which case we will allocate a subset of the authors a longer presentation slot and the remaining authors a five-minute "poster highlight" slot.
The workshop will end with an open discussion about promising open problems and research areas related to the workshop topics, continuation of the workshop, future related events, etc. The Editors of the Machine Learning journal have agreed in principle to host a special issue dedicated to work presented at the workshop and related work submitted after an open call for papers. We intend to gear the closing discussion towards drawing up a list of agreed conclusions that could form the basis of a position paper or editorial for the special issue.
We welcome submissions describing work in progress as well as more mature work related to learning over multiple contexts. Submissions should be between 6 and 16 pages in the same format as the main conference (LNAI). Authors of accepted papers will be asked to prepare a poster, and selected authors will be given the opportunity to give a plenary presentation during the workshop.
Submission website: https://www.easychair.org/conferences/?conf=lmce2014
After the workshop, contributing authors will be invited to submit a paper to a special issue of the Machine Learning journal dedicated to the topic of the workshop.