Context Change and Versatile Models in Machine Learning

Jose Hernandez Orallo
Reader, Departament de Sistemes Informatics i Computacio, Universitat Politecnica de Valencia

Context change in machine learning has drawn much attention, and many techniques and paradigms have recently been proposed. In this talk we present an overview of a general principle and methodology for model reuse known as "reframing", defined as the process of making a model perform well over a range of operating contexts. This is usually achieved by constructing a more versatile model, which is not discarded after each particular application or context-to-context transfer. We first introduce a formal characterisation of reframing in terms of how problems in different operating contexts are solved. Three main groups of reframing are identified: input reframing, output reframing and structural reframing. We also show examples of plots and performance metrics that are useful for assessing the quality of a versatile model over a range of contexts. From here, we review some of the areas and problems where the notion of reframing has already been developed and shown useful, even if the names used for it are quite diverse: re-optimising, adapting, tuning, thresholding, etc.

#####################################################################

Domain Adaptation of Virtual and Real Worlds for Pedestrian Detection

Antonio M. Lopez
Head of the CVC/ADAS Group, Computer Vision Center (CVC) and Univ. Autonoma de Barcelona

Pedestrian detection is of paramount interest for many applications. The most promising detectors rely on classifiers trained with annotated samples. However, annotation is a labour-intensive and subjective task that is worth minimising. By using virtual worlds we can automatically obtain precise and rich annotations.
Thus, we face the question: can a pedestrian appearance model learnt in realistic virtual worlds work successfully for pedestrian detection in real-world images? Our experiments show that virtual-world-based training can provide excellent testing accuracy on some real-world datasets, but the dataset shift problem appears in many others. Accordingly, over the last years we have explored different domain adaptation ideas for several state-of-the-art pedestrian detection approaches. In this talk we review that work. In particular, we report adaptation results for different features such as HOG, LBP, Haar and EOH; different pedestrian models such as holistic SVM/AdaBoost and Deformable Part-Based Models (DPM); and different strategies, both supervised/unsupervised and batch/incremental. Overall, we will see that we have been able to effectively adapt between the virtual and real worlds.
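The batch supervised flavour of the adaptation idea above can be illustrated with a toy sketch: retrain a detector on the plentiful virtual-world (source) samples mixed with a few up-weighted real-world (target) samples. This is only an assumed, minimal illustration, not the authors' actual method: the nearest-centroid "detector" and the 2-D feature vectors are illustrative stand-ins for the SVM/AdaBoost/DPM detectors and HOG/LBP-style descriptors discussed in the talk.

```python
# Toy sketch of batch supervised domain adaptation (illustrative only):
# combine source (virtual-world) samples, weight 1, with a few target
# (real-world) samples given a larger weight, then retrain.

def weighted_centroid(samples, weights):
    """Weighted mean of a list of equal-length feature vectors."""
    total = sum(weights)
    dim = len(samples[0])
    return [sum(w * s[i] for s, w in zip(samples, weights)) / total
            for i in range(dim)]

def train_adapted(source, target, target_weight=5.0):
    """Per-class centroids from source samples (weight 1.0) plus
    up-weighted target samples; (x, y) pairs of vector and label."""
    model = {}
    for label in {y for _, y in source} | {y for _, y in target}:
        xs = [x for x, y in source if y == label]
        ws = [1.0] * len(xs)
        xt = [x for x, y in target if y == label]
        xs += xt
        ws += [target_weight] * len(xt)
        model[label] = weighted_centroid(xs, ws)
    return model

def predict(model, x):
    """Assign the class whose centroid is nearest (squared distance)."""
    return min(model, key=lambda label:
               sum((a - b) ** 2 for a, b in zip(model[label], x)))

# Virtual-world samples: pedestrians (1) near (1, 1), background (0)
# near (0, 0); a few real-world samples shifted towards (2, 1).
source = [([1.0, 1.0], 1), ([0.9, 1.1], 1),
          ([0.0, 0.0], 0), ([0.1, -0.1], 0)]
target = [([2.0, 1.0], 1), ([0.2, 0.1], 0)]
model = train_adapted(source, target)
```

Because the few target samples are up-weighted, the pedestrian centroid is pulled towards the shifted real-world distribution; an incremental variant would instead update the model as target samples arrive.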