Publications

Thesis

Papers

  1. Benchmarking Partial Observability in Reinforcement Learning with a Suite of Memory-Improvable Domains.
    R.Y. Tao, K. Guo, C. Allen, G. Konidaris. Accepted, Reinforcement Learning Conference, August 2025.
    [Preprint] [Bibtex]

  2. Focused Skill Discovery: Using Per-Factor Empowerment to Control State Variables.
    J.C. Carr, Q. Sun, C. Allen. Accepted, Reinforcement Learning Conference, August 2025.
    [Preprint] [Bibtex]

    • Also accepted, Multidisciplinary Conference on Reinforcement Learning and Decision Making, June 2025.
  3. Memory as State Abstraction over Trajectories.
    A. Kirtland, A. Ivanov, C. Allen, M. Littman, G. Konidaris. Accepted, Multidisciplinary Conference on Reinforcement Learning and Decision Making, June 2025.

    • Selected for spotlight presentation (20 of 339).
  4. Learning Transferable Sub-Goals by Hypothesizing Generalizing Features.
    A. de Mello Koch, A. Bagaria, B. Huo, Z. Zhou, C. Allen, G. Konidaris. At the AAAI Workshop on Generalization in Planning, March 2025.
    [Code] [Bibtex]

  5. Mitigating Partial Observability in Sequential Decision Processes via the Lambda Discrepancy.
    C. Allen*, A. Kirtland*, R.Y. Tao*, S. Lobel, D. Scott, N. Petrocelli, O. Gottesman, R. Parr, M. Littman, G. Konidaris. In Advances in Neural Information Processing Systems, December 2024.
    [Blog] [Video] [Poster] [Code] [Bibtex]

    • Selected for oral presentation (3 of 72) at the ICML Foundations of Reinforcement Learning and Control Workshop, July 2024.
    • Also a workshop paper at the RLC Finding the Frame Workshop, August 2024.
  6. Evidence of Learned Look-Ahead in a Chess-Playing Neural Network.
    E. Jenner, S. Kapur, V. Georgiev, C. Allen, S. Emmons, S. Russell. In Advances in Neural Information Processing Systems, December 2024.
    [Blog] [Poster] [Code] [Bibtex]

  7. Task Scoping: Generating Task-Specific Simplifications of Open-Scope Planning Problems.
    M. Fishman, N. Kumar, C. Allen, N. Danas, M. Littman, S. Tellex, and G. Konidaris. Presented at the IJCAI Workshop on Bridging the Gap Between AI Planning and Reinforcement Learning, August 2023.
    [Bibtex]

  8. Coarse-Grained Smoothness for Reinforcement Learning in Metric Spaces.
    O. Gottesman, K. Asadi, C. Allen, S. Lobel, G. Konidaris, and M. Littman. In Proceedings of the 26th International Conference on Artificial Intelligence and Statistics, April 2023.
    [Bibtex]

  9. Characterizing the Action-Generalization Gap in Deep Q-Learning.
    Z. Zhou, C. Allen, K. Asadi, and G. Konidaris. In the 5th Multidisciplinary Conference on Reinforcement Learning and Decision Making, June 2022.
    [Code] [Bibtex]

  10. Optimistic Initialization for Exploration in Continuous Control.
    S. Lobel, O. Gottesman, C. Allen, A. Bagaria, and G. Konidaris. In Proceedings of the 36th AAAI Conference on Artificial Intelligence, February 2022.
    [Code] [Bibtex]

  11. Learning Markov State Abstractions for Deep Reinforcement Learning.
    C. Allen, N. Parikh, O. Gottesman, and G. Konidaris. In Advances in Neural Information Processing Systems, December 2021.
    [Blog] [Talk] [Poster] [Code] [Bibtex]

    • Also a workshop paper at the NeurIPS Deep Reinforcement Learning Workshop, December 2020. [Bibtex]
  12. Efficient Black-Box Planning Using Macro-Actions with Focused Effects.
    C. Allen, M. Katz, T. Klinger, G. Konidaris, M. Riemer, and G. Tesauro. In Proceedings of the 30th International Joint Conference on Artificial Intelligence, August 2021.
    [Blog] [Talk] [Poster] [Code] [Bibtex]

    • Also a workshop paper at the ICAPS Workshop on Heuristics and Search for Domain-independent Planning, August 2021. [Bibtex]
  13. Bad-Policy Density: A Measure of Reinforcement Learning Hardness.
    D. Abel, C. Allen, D. Arumugam, D. E. Hershkowitz, M. Littman, and L. L. S. Wong. In the ICML Workshop on Reinforcement Learning Theory, July 2021.
    [Bibtex]

  14. Mean Actor Critic.
    C. Allen*, K. Asadi*, M. Roderick, A. Mohamed, G. Konidaris, and M. Littman. arXiv:1709.00503 [stat.ML], September 2017.
    [Code 1] [Code 2] [Bibtex]

*These authors contributed equally.