Determining maximal entropy functions for objective Bayesian inductive logic
by Soroush Rafiee Rad, J. Landes, Jon Williamson | October 2022
According to the objective Bayesian approach to inductive logic, premisses inductively entail a conclusion just when every probability function with maximal entropy, from all those that satisfy the premisses, satisfies the conclusion. However, when premisses and conclusion are constraints on probabilities of sentences of a first-order predicate language, it is by no means obvious how to determine these maximal entropy functions. This paper makes progress on the problem in the following ways. Firstly, we introduce the concept of an entropy limit point and show that, if the set of probability functions satisfying the premisses contains an entropy limit point, then this limit point is unique and is the maximal entropy probability function. Next, we turn to the special case in which the premisses are simply sentences of the logical language. We show that if the uniform probability function gives the premisses positive probability, then the maximal entropy function can be found by simply conditionalising this uniform prior on the premisses. We generalise our results to demonstrate agreement between the maximal entropy approach and Jeffrey conditionalisation in the case in which there is a single premiss that specifies the probability of a sentence of the language. We show that, after learning such a premiss, certain inferences are preserved, namely inferences to inductive tautologies. Finally, we consider potential pathologies of the approach: we explore the extent to which the maximal entropy approach is invariant under permutations of the constants of the language, and we discuss some cases in which there is no maximal entropy probability function.
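The claimed agreement between maximal entropy and (Jeffrey) conditionalisation on a uniform prior can be checked numerically on a toy example. The sketch below is not from the paper: it uses a propositional language with two atoms (four state descriptions) rather than a first-order language, takes a single hypothetical premiss P(A) = 0.7, and finds the maximal entropy function by brute-force grid search. The result matches Jeffrey conditionalisation of the uniform prior on the partition {A, not-A} with weights 0.7 and 0.3.

```python
# Toy check (not the paper's construction): two atoms A, B give four
# state descriptions (A&B, A&~B, ~A&B, ~A&~B). Premiss: P(A) = 0.7,
# i.e. p1 + p2 = 0.7 and p3 + p4 = 0.3.
from math import log

def entropy(ps):
    """Shannon entropy, with the convention 0 * log(0) = 0."""
    return -sum(p * log(p) for p in ps if p > 0)

def maxent_by_grid(steps=140):
    """Brute-force search for the entropy maximiser satisfying P(A) = 0.7."""
    best, best_h = None, -1.0
    for i in range(steps + 1):
        p1 = 0.7 * i / steps          # mass on A&B
        p2 = 0.7 - p1                 # mass on A&~B
        for j in range(steps + 1):
            p3 = 0.3 * j / steps      # mass on ~A&B
            p4 = 0.3 - p3             # mass on ~A&~B
            h = entropy((p1, p2, p3, p4))
            if h > best_h:
                best_h, best = h, (p1, p2, p3, p4)
    return best

# Jeffrey conditionalisation of the uniform prior (1/4 on each state) on
# {A: 0.7, ~A: 0.3}: each A-state gets 0.7 * (1/4)/(1/2) = 0.35,
# each ~A-state gets 0.3 * (1/4)/(1/2) = 0.15.
jeffrey = (0.35, 0.35, 0.15, 0.15)

approx = maxent_by_grid()
print(approx)  # close to (0.35, 0.35, 0.15, 0.15), up to grid resolution
```

In the special case where the premiss has probability 1 (P(A) = 1), Jeffrey conditionalisation reduces to ordinary conditionalisation, matching the paper's result for premisses that are simply sentences of the language.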