Hi there, I'm Cam Allen!

I'm a second-year PhD student at Brown University, advised by George Konidaris. I study artificial intelligence, reinforcement learning, and value alignment.

I'm interested in anything that seems plausibly necessary for general-purpose machine intelligence. In my view, that starts with reinforcement learning (RL), a way of thinking about the interactions between intelligent agents and their environments. This idea of agency is key: any intelligent system needs at least enough agency to make decisions about how to act in its environment. Without agency, there's no way to measure intelligence, and without an environment, there's no way for the agent's actions to have any effect.
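If you like seeing ideas in code, here's a bare-bones sketch of that interaction loop in Python. Everything in it (the coin-flip environment, the random agent, the class and method names) is made up purely for illustration, not taken from any particular RL library.

```python
# A minimal sketch of the RL interaction loop: an agent acts,
# the environment responds with an observation and a reward.
# (All names here are illustrative, not tied to a specific library.)

import random


class CoinFlipEnvironment:
    """A toy environment: guess the outcome of a biased coin."""

    def __init__(self, bias=0.7):
        self.bias = bias  # probability the coin lands heads

    def step(self, action):
        outcome = "heads" if random.random() < self.bias else "tails"
        reward = 1.0 if action == outcome else 0.0
        return outcome, reward  # observation, reward


class RandomAgent:
    """An agent with just enough agency to choose an action."""

    def act(self, observation):
        return random.choice(["heads", "tails"])


env = CoinFlipEnvironment()
agent = RandomAgent()
observation = None
total_reward = 0.0

for _ in range(100):
    action = agent.act(observation)          # the agent decides how to act
    observation, reward = env.step(action)   # the environment responds
    total_reward += reward                   # reward is how we keep score

print(f"Total reward over 100 steps: {total_reward}")
```

A smarter agent would use the stream of observations and rewards to improve its choices over time; the random agent is just there to show the shape of the loop.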

Within the RL framework, I think planning is super important. Planning is the process of deciding on a sequence of actions that leads to a desired outcome. To plan effectively, an agent needs to accurately predict the consequences of its own actions, which means it needs an internal model of how its environment works. I spend a lot of time thinking about ways that agents can learn better models of their environments so that they can construct more intelligent plans.
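Here's an equally rough sketch of what I mean by planning with a model: the agent searches over imagined action sequences, using its own prediction of the dynamics, before ever acting in the real world. Again, the little grid world and all the names in it are just illustrative assumptions.

```python
# A minimal sketch of planning with an internal model: the agent
# searches over imagined action sequences using its model of the
# environment's dynamics, without acting in the real world.
# (The grid world and helper names are illustrative assumptions.)

from collections import deque

ACTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}
GRID_SIZE = 4


def model(state, action):
    """The agent's internal model: predicts the next state for an action."""
    dx, dy = ACTIONS[action]
    x, y = state
    nx, ny = x + dx, y + dy
    if 0 <= nx < GRID_SIZE and 0 <= ny < GRID_SIZE:
        return (nx, ny)
    return state  # bumping into a wall leaves the state unchanged


def plan(start, goal):
    """Breadth-first search over predicted states for a path to the goal."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action in ACTIONS:
            next_state = model(state, action)  # imagine the consequence
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, actions + [action]))
    return None  # no plan reaches the goal


print(plan(start=(0, 0), goal=(3, 3)))
# -> ['down', 'down', 'down', 'right', 'right', 'right'] (one shortest path)
```

The interesting part, to me, is that the quality of the plan is only as good as the model: if the agent's predictions are wrong, its carefully constructed plans fall apart in the real environment.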

I have also been paying close attention to the problem of value alignment: how to create machines that reliably do the things we want them to do. It doesn't matter how intelligent a system is; if its goals aren't specified properly, it won't be especially useful. (The problem is that specifying goals properly turns out to be really hard.)