I've written some stuff and consider some of it good enough to put here.

## Research papers

2016: Logical Induction

How might a computer algorithm assign probabilities to propositions such as "the quadrillionth digit of pi is 5", far ahead of the time when their truth values can actually be computed? We present an algorithm that assigns such probabilities in an asymptotically reasonable manner.
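As a toy illustration of the problem (this is a naive baseline, not the paper's algorithm), one could assign such a proposition the empirical frequency of that digit among the digits of pi that are cheap to compute:

```python
# Naive baseline (NOT the logical induction algorithm): estimate
# P("the quadrillionth digit of pi is 5") as the empirical frequency
# of the digit 5 among a known prefix of pi's decimal expansion.
PI_DIGITS = "14159265358979323846264338327950288419716939937510"  # first 50 digits

def digit_frequency(digit: str, digits: str = PI_DIGITS) -> float:
    """Empirical frequency of `digit` in a known prefix of pi."""
    return digits.count(digit) / len(digits)

print(digit_frequency("5"))  # -> 0.1 (5 of the first 50 digits are 5)
```

The point of the paper is that one can do much better than this: a logical inductor's probabilities satisfy strong asymptotic guarantees, rather than just matching base rates.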

2016: Alignment for Advanced Machine Learning Systems

As learning systems become increasingly intelligent and autonomous, what design principles can best ensure that their behavior is aligned with the interests of the operators? This is the research agenda most of my effort is currently focused on.

2016: A Formal Solution to the Grain of Truth Problem

We show that reflective variants of AIXI solve a long-standing problem in game theory: how can two agents learn to model each other's policies in a Bayesian manner, with their beliefs having a "grain of truth" in the sense of assigning non-negligible probability to the other agent's actual policy?

2016: Quantilizers: A Safer Alternative to Maximizers for Limited Optimization

An alternative to expected utility maximization, derived using worst-case assumptions about how much different actions cost. Presented at an AAAI symposium.
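A minimal finite-sample sketch of the idea (function names are my own, not from the paper): instead of taking the utility-maximizing action, draw actions from a base distribution and sample uniformly from the top q fraction ranked by utility.

```python
import random

def quantilize(base_sample, utility, q, n=10000, rng=None):
    """Finite-sample quantilizer sketch: draw n actions from the base
    distribution, keep the top q fraction by utility, and return a
    uniformly random element of that top set (rather than the argmax)."""
    rng = rng or random.Random(0)
    draws = [base_sample(rng) for _ in range(n)]
    draws.sort(key=utility, reverse=True)
    top = draws[: max(1, int(q * n))]
    return rng.choice(top)

# Example: base distribution uniform on [0, 1], utility = identity, q = 0.1.
# The result lands somewhere in roughly the top decile, not at the extreme
# maximum -- limiting how hard the utility function gets optimized.
action = quantilize(lambda r: r.random(), lambda x: x, q=0.1)
```

The design point is that a quantilizer never strays far (in expectation) from the base distribution, which bounds the damage from a misspecified utility function.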

2015: Reflective Variants of Solomonoff Induction and AIXI

Using reflective oracles (see next paper) to implement variants of Solomonoff induction and AIXI that can reason about environments that contain them. Presented at AGI 2015.

2015: Reflective Oracles: A Foundation for Classical Game Theory

When trying to define what it means for different programs to correctly predict each other's outputs, one runs into self-reference paradoxes. Here, we use randomization to get around these paradoxes, and apply the result to define causal decision theory in multi-agent environments, which turns out to yield Nash equilibria. Note that this is an extended version of a paper presented at LORI-V.

2013: Learning Stochastic Inverses

A class of algorithms to "invert" a probabilistic program, speeding up inference.

## Selected class papers

2014: Kernel-Based Extensions of Exponential Family Distributions

Replacing the dot product in exponential families with a kernel product yields a universal class of distributions. Here, we apply them to estimate densities using Newton's method.
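Schematically (notation mine, under the usual representer-theorem assumption), the construction replaces the linear inner product in the exponent with a function from a reproducing kernel Hilbert space:

```latex
% Standard exponential family: log-density linear in sufficient statistics
p_\theta(x) \propto \exp\big( \langle \theta, \phi(x) \rangle \big)

% Kernel extension: the log-density lives in an RKHS with kernel k, so
% it can be written as a kernel expansion over the data points x_1..x_n
p_f(x) \propto \exp\big( f(x) \big), \qquad
f(x) = \sum_{i=1}^{n} \alpha_i \, k(x_i, x)
```

With a universal kernel, this family can approximate a broad class of densities, which is what makes the Newton's-method density estimation above possible.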

2014: Black-Box Reductions in Mechanism Design

Chawla et al. (2010) proved that it is impossible to design a general efficient allocation mechanism that runs in polynomial time and gives bidders a dominant strategy of revealing their true preferences, when the mechanism only has access to an approximation algorithm in a black-box manner. I summarize the findings and proofs, and I also comment on possible relaxations of the problem that might allow useful mechanisms.

2012: Dominant Assurance Contracts with Continuous Pledges

I expand on Alexander Tabarrok's work on dominant assurance contracts to analyze the case when pledges can take on any value, not only 2 different values. Code is here.

2011: Compressionism: A New Theory of the Mind Based on Data Compression

My research-based argument for PWR 1. An analytic philosophy/AI paper on a theory of the mind based on data compression. Essentially, compressionism is based on the idea that finding short but complete descriptions of one's experiences is equivalent to understanding them. I attempt to expand on the ideas of Ray Solomonoff and Phil and Rebecca Maguire to create a more comprehensive theory of the mind based on this principle, and to respond to Searle's criticisms of AI. I think a lot of these ideas are wrong or incomplete, but this is an interesting snapshot of the reasoning that has led me to my current research. Thanks to my PWR instructor, Michael Reid, for all his help on this paper.