Title:  Learning Stochastic Logic Programs 
Authors:  Stephen Muggleton 
Series:  Linköping Electronic Articles
in Computer and Information Science ISSN 1401-9841 
Issue:  Vol. 5 (2000), No. 041 
URL:  http://www.ep.liu.se/ea/cis/2000/041/ 
Abstract:  Stochastic Logic Programs (SLPs) have been shown to be a generalization
of Hidden Markov Models (HMMs), stochastic context-free grammars, and directed
Bayes' nets. A stochastic logic program consists of a set of labelled clauses
p:C, where p is a probability in the interval [0,1] and C is a first-order
range-restricted definite clause. This paper summarizes the syntax, distributional
semantics and proof techniques for SLPs and then discusses how a standard Inductive
Logic Programming (ILP) system, Progol, has been modified to support learning
of SLPs. The resulting system 1) finds an SLP with uniform probability labels
on each definition and near-maximal Bayes posterior probability and then
2) alters the probability labels to further increase the posterior probability.
Stage 1) is implemented with CProgol4.5, which differs from previous versions
of Progol by allowing user-defined evaluation functions written in Prolog.
It is shown that maximising the Bayesian posterior function involves finding
SLPs with short derivations of the examples. Search pruning with the Bayesian
evaluation function is carried out in the same way as in previous versions
of CProgol. The system is demonstrated with worked examples involving the
learning of probability distributions over sequences as well as the learning
of simple forms of uncertain knowledge.
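To make the labelled-clause syntax p:C concrete, the following is a minimal sketch of sampling from the distribution an SLP defines. The clause set (a stochastic grammar over coin-flip sequences) and all names here are illustrative assumptions, not taken from the paper; a real SLP implementation would perform stochastic SLD-resolution over arbitrary definite clauses rather than this special case.

```python
import random

# Hypothetical SLP for sequences of coin flips, written in the paper's
# p:C notation:
#
#   0.5 : s([h|T]) :- s(T).
#   0.3 : s([t|T]) :- s(T).
#   0.2 : s([]).
#
# The labels of clauses defining the same predicate sum to 1, so choosing
# a clause at each derivation step according to its label induces a
# probability distribution over derivations (and hence over sequences).
SLP = {
    "s": [
        (0.5, "h"),   # 0.5 : emit h, then recurse
        (0.3, "t"),   # 0.3 : emit t, then recurse
        (0.2, None),  # 0.2 : stop with the empty sequence
    ]
}

def sample_sequence(rng=random.random):
    """Sample one sequence from the distribution defined by the SLP above."""
    seq = []
    while True:
        r, acc = rng(), 0.0
        for p, symbol in SLP["s"]:  # pick a clause with probability p
            acc += p
            if r < acc:
                break
        if symbol is None:          # the base clause s([]) ends the derivation
            return seq
        seq.append(symbol)
```

For example, the sequence [h, t] is produced by one derivation with probability 0.5 * 0.3 * 0.2 = 0.03, which is exactly the product of the labels of the clauses used in its proof.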


Original publication 2000-12-21 
