Publications

Conference Papers

  1. Skill-Driven Neurosymbolic State Abstractions.
    A. Ahmetoglu, S. James, C. Allen, S. Lobel, D. Abel, G. Konidaris. In Advances in Neural Information Processing Systems, December 2025.
    [Bibtex]

  2. Benchmarking Partial Observability in Reinforcement Learning with a Suite of Memory-Improvable Domains.
    R.Y. Tao, K. Guo, C. Allen, G. Konidaris. Reinforcement Learning Journal, August 2025.
    [Code] [Bibtex]

  3. Focused Skill Discovery: Learning to Control Specific State Variables while Minimizing Side Effects.
    J.C. Carr, Q. Sun, C. Allen. Reinforcement Learning Journal, August 2025.
    [Blog] [Poster] [Code] [Bibtex]

    • Also an extended abstract at the 6th Multidisciplinary Conference on Reinforcement Learning and Decision Making, June 2025.

  4. Mitigating Partial Observability in Sequential Decision Processes via the Lambda Discrepancy.
    C. Allen*, A. Kirtland*, R.Y. Tao*, S. Lobel, D. Scott, N. Petrocelli, O. Gottesman, R. Parr, M.L. Littman, G. Konidaris. In Advances in Neural Information Processing Systems, December 2024.
    [Blog] [Video] [Poster] [Code] [Bibtex]

    • Selected for oral presentation (3 of 72) at the ICML Foundations of Reinforcement Learning and Control Workshop, July 2024.
    • Also a workshop paper at the RLC Finding the Frame Workshop, August 2024.

  5. Evidence of Learned Look-Ahead in a Chess-Playing Neural Network.
    E. Jenner, S. Kapur, V. Georgiev, C. Allen, S. Emmons, S. Russell. In Advances in Neural Information Processing Systems, December 2024.
    [Blog] [Poster] [Code] [Bibtex]

  6. Coarse-Grained Smoothness for Reinforcement Learning in Metric Spaces.
    O. Gottesman, K. Asadi, C. Allen, S. Lobel, G. Konidaris, M. Littman. In Proceedings of the 26th International Conference on Artificial Intelligence and Statistics, April 2023.
    [Bibtex]

  7. Optimistic Initialization for Exploration in Continuous Control.
    S. Lobel, O. Gottesman, C. Allen, A. Bagaria, G. Konidaris. In Proceedings of the 36th AAAI Conference on Artificial Intelligence, February 2022.
    [Code] [Bibtex]

  8. Learning Markov State Abstractions for Deep Reinforcement Learning.
    C. Allen, N. Parikh, O. Gottesman, G. Konidaris. In Advances in Neural Information Processing Systems, December 2021.
    [Blog] [Talk] [Poster] [Code] [Bibtex]

    • Also a workshop paper at the NeurIPS Deep Reinforcement Learning Workshop, December 2020. [Bibtex]

  9. Efficient Black-Box Planning Using Macro-Actions with Focused Effects.
    C. Allen, M. Katz, T. Klinger, G. Konidaris, M. Riemer, G. Tesauro. In Proceedings of the 30th International Joint Conference on Artificial Intelligence, August 2021.
    [Blog] [Talk] [Poster] [Code] [Bibtex]

    • Also a workshop paper at the ICAPS Workshop on Heuristics and Search for Domain-independent Planning, August 2021. [Bibtex]

Workshop Papers

  1. Neural Manifold Geometry Encodes Feature Fields.
    J. Yocum, C. Allen, B. Olshausen, S. Russell. At the NeurIPS Workshop on Symmetry and Geometry in Neural Representations (NeurReps), December 2025.
    [Bibtex]

  2. Transformers Represent Causal Abstractions.
    E. Altuzar, J. Yocum, C. Allen. At the NeurIPS Workshop on Symmetry and Geometry in Neural Representations (NeurReps), December 2025.
    [Bibtex]

  3. Echo of Bayes: Learned Memory Functions Can Recover Belief States.
    J. Liévano-Karim, P. Koepernik, G. Konidaris, C. Allen. At the NeurIPS Workshop on Unifying Representations in Neural Models, December 2025.
    [Bibtex]

  4. The Influence of Scaffolds on Coordination Scaling Laws in LLM Agents.
    M. Meireles*, R. Bhati*, N. Lauffer, C. Allen. At the NeurIPS Workshop on Scaling Environments for Agents, December 2025.
    [Bibtex]

    • Also at the NeurIPS Workshop on Multi-Turn Interactions in Large Language Models, December 2025.

  5. Disentangling Independently Controllable Factors in Reinforcement Learning.
    R. Rodriguez-Sanchez, C. Allen, G. Konidaris. At the New York Reinforcement Learning Workshop, September 2025.
    [Bibtex]

  6. General Value Discrepancies Mitigate Partial Observability in Reinforcement Learning.
    P. Koepernik*, R.Y. Tao*, R. Parr, G. Konidaris, C. Allen. At the RLC Finding the Frame Workshop, August 2025.
    [Bibtex]

  7. Improving Reward Learning by Estimating Annotator Expertise.
    P. Czempin, R. Freedman, E. Novoseller, V.J. Lawhern, C. Allen, E. Bıyık. At the RSS Workshop on Continual Robot Learning from Humans, June 2025.
    [Bibtex]

  8. Memory as State Abstraction over Trajectories.
    A. Kirtland*, A. Ivanov*, C. Allen, M. Littman, G. Konidaris. At the 6th Multidisciplinary Conference on Reinforcement Learning and Decision Making, June 2025.
    [Preprint] [Blog] [Poster] [Bibtex]

    • Selected for spotlight presentation (20 of 339).

  9. Learning Transferable Sub-Goals by Hypothesizing Generalizing Features.
    A. de Mello Koch, A. Bagaria, B. Huo, Z. Zhou, C. Allen, G. Konidaris. At the AAAI Workshop on Generalization in Planning, March 2025.
    [Code] [Bibtex]

  10. Task Scoping: Generating Task-Specific Simplifications of Open-Scope Planning Problems.
    M. Fishman, N. Kumar, C. Allen, N. Danas, M. Littman, S. Tellex, G. Konidaris. At the IJCAI Workshop on Bridging the Gap Between AI Planning and Reinforcement Learning, August 2023.
    [Bibtex]

  11. Characterizing the Action-Generalization Gap in Deep Q-Learning.
    Z. Zhou, C. Allen, K. Asadi, G. Konidaris. At the 5th Multidisciplinary Conference on Reinforcement Learning and Decision Making, June 2022.
    [Code] [Bibtex]

  12. Bad-Policy Density: A Measure of Reinforcement Learning Hardness.
    D. Abel, C. Allen, D. Arumugam, D.E. Hershkowitz, M. Littman, L.L.S. Wong. At the ICML Workshop on Reinforcement Learning Theory, July 2021.
    [Bibtex]

Preprints

  1. Mean Actor Critic.
    C. Allen*, K. Asadi*, M. Roderick, A. Mohamed, G. Konidaris, M. Littman. arXiv:1709.00503 [stat.ML], September 2017.
    [Code 1] [Code 2] [Bibtex]


*These authors contributed equally.