Frederick Eaton's Ph.D. Research
I conducted my Ph.D. research in
the Computational and Biological Learning Lab in
the Department of
Engineering at the University of Cambridge in England.
I graduated on January 21, 2012. This web page is an archive of my
old departmental web page, and hosts my research papers and related
files.
Research Interests
I studied approximate inference algorithms and frameworks, applied to
discrete statistical models.
Publications
Summary
2013

FH Eaton, Z Ghahramani. Model reductions for inference: generality of
pairwise, binary, and planar factor graphs. Neural Computation, May 2013, Vol. 25, No. 5, pp. 1213-1260.
This paper proves results concerning the representability of
statistical models using factor graphs with constraints on topology,
factor size, or variable domain. It characterises the expressive power
of planar binary pairwise graphs, and introduces a new notion of model
reduction in machine learning.
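The general theme of the paper, re-expressing a model in a restricted class of factor graphs while preserving its distribution, can be illustrated with a toy example (my own, not a construction from the paper): a single 3-state variable is encoded as three binary indicator variables with a one-hot constraint factor, and brute-force enumeration confirms the two models have the same partition function.

```python
import itertools

# A factor over one 3-state variable X (unnormalised potentials).
f = [2.0, 3.0, 5.0]
Z_original = sum(f)  # partition function of the original model

# Binary encoding: replace X with indicator bits (b0, b1, b2),
# a one-hot constraint factor, and a unary factor f[k]**b[k] per bit.
def binary_model_weight(bits):
    if sum(bits) != 1:          # one-hot constraint: exactly one bit set
        return 0.0
    w = 1.0
    for k, b in enumerate(bits):
        w *= f[k] ** b          # unary factor on each indicator bit
    return w

Z_binary = sum(binary_model_weight(bits)
               for bits in itertools.product([0, 1], repeat=3))

print(Z_original, Z_binary)     # the two partition functions agree
```

The marginals also correspond: p(X = k) in the original model equals p(b_k = 1) in the binary one, so inference in the reduced model answers queries about the original.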
2012

FH Eaton. Combining Approximations for Inference (PhD thesis)
My PhD thesis explores the approximate inference problem from a
functional perspective, asking in what ways two approximations can be
"combined" and how such combinations could be used to build new
algorithms.
2011

FH Eaton. A conditional game for comparing approximations. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2011).
Notable Paper Award
Defines a game which can be used to compare the accuracy of two
approximations to the marginal probabilities of a statistical model.
To my knowledge, this is the first method proposed for
making such comparisons.
2009

FH Eaton, Z Ghahramani. Choosing a Variable to Clamp:
Approximate Inference Using Conditioned Belief Propagation. In
Proceedings of the Twelfth International Conference on Artificial
Intelligence and Statistics (AISTATS 2009)
Proposes an algorithm for applying Belief Propagation to a
model using divide-and-conquer, by recursively conditioning on
specific variables. This algorithm can be seen as an approximate
version of cutset conditioning. The paper also introduces "BBP" or
"Back Belief Propagation", an application of back-propagation to
belief propagation. BBP is used here to choose which variables to
condition on, but it has many other applications, for instance to
parameter learning tasks in computer vision (Domke, "Parameter
Learning with Truncated Message-Passing", CVPR 2011) and to
empirical risk minimisation in general machine learning (Stoyanov et
al., "Empirical Risk Minimization of Graphical Model Parameters Given
Approximate Inference, Decoding, and Model Structure", AISTATS 2011).
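The clamping identity that conditioning-based methods rely on can be checked exactly on a toy model by brute-force enumeration (this sketch uses my own illustrative chain model, not the paper's algorithm, and enumeration in place of BP): a marginal equals the average of its conditional marginals, weighted by the probability of each clamped value.

```python
import itertools
import math

# Toy pairwise binary chain over (x0, x1, x2) with unnormalised
# potentials psi(a, b) = exp(J) if a == b else exp(-J).
J = 0.8
def psi(a, b):
    return math.exp(J if a == b else -J)

def weight(x):
    return psi(x[0], x[1]) * psi(x[1], x[2])

states = list(itertools.product([0, 1], repeat=3))
Z = sum(weight(x) for x in states)

# Exact marginal p(x0 = 1), computed directly.
p_direct = sum(weight(x) for x in states if x[0] == 1) / Z

# The same marginal via clamping x1: condition on each value c of x1,
# then average the conditional marginals weighted by p(x1 = c).
p_clamped = 0.0
for c in [0, 1]:
    Zc = sum(weight(x) for x in states if x[1] == c)   # conditioned partition function
    p_x1_c = Zc / Z                                    # p(x1 = c)
    p_cond = sum(weight(x) for x in states
                 if x[1] == c and x[0] == 1) / Zc      # p(x0 = 1 | x1 = c)
    p_clamped += p_x1_c * p_cond

print(p_direct, p_clamped)  # identical: clamping is exact here
```

With exact enumeration the two answers agree to machine precision; the approximate setting of the paper arises when BP replaces enumeration in each conditioned sub-problem.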
