Incremental Learning in Constructive Languages

José Hernández-Orallo

Abstract

We present a framework for incremental reinforcement learning in constructive languages with the ability of 'representational redescription' [Karmiloff-Smith 1992]. Our approach apportions credit according to the 'course' the sample data follow through the learnt theory, instead of reckoning reinforcement from the separate use of each hypothesis in the description of the data.
The problem is especially troublesome for induction in high-level representation languages, such as ILP, but it also extends to other representations that allow redescription (e.g. neural networks). Nonetheless, in this work we present an operative measure of reinforcement for general theories, studying the growth of knowledge, theory revision and abduction in this framework.
Finally, we study a more common view of reinforcement, where the predictions (or actions) of an intelligent system can be rewarded or penalised, and how this affects the distribution of reinforcement.
The most important result of this paper is that the way we distribute reinforcement over knowledge is sufficient to construct an ontology. In this way, one of the most difficult dilemmas of inductive learning, the choice of a prior distribution, disappears.

Key words: Reinforcement Learning, Incremental Learning, Ontology, Apportionment of Credit, Abduction, Induction, MDL Principle, Knowledge Acquisition and Revision, ILP.

© 1996-1998 José Hernández Orallo.