research
My research spans inverse optimal control, partially observed Markov decision processes (POMDPs), quickest change detection, dynamic and differential game theory, and event-based sensing and perception for applications in robotics and other engineering domains.
Inverse Optimal Control
Inverse optimal control (or inverse reinforcement learning) is the problem of inferring the underlying objectives of agents or decision-makers by observing their actions or the consequences of their actions. Applications include learning reward functions from human demonstrations, modelling bird mid-air collision avoidance, and inferring intent for aircraft collision avoidance.
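As a minimal illustration of the idea (not drawn from the papers below), consider a scalar discrete-time LQR problem: an "expert" acts optimally for state weight q and control weight r, we observe only the resulting feedback gain, and we recover q (with r fixed) by matching gains. All system and cost parameters here are invented for the sketch.

```python
import numpy as np

def lqr_gain(q, r, a=1.0, b=1.0, iters=200):
    # iterate the scalar discrete-time Riccati recursion to convergence,
    # then return the optimal state-feedback gain k (u = -k x)
    p = q
    for _ in range(iters):
        p = q + a * p * a - (a * p * b) ** 2 / (r + b * p * b)
    return b * p * a / (r + b * p * b)

# the "expert" uses q = 2.0, r = 1.0; we observe only its gain
k_obs = lqr_gain(2.0, 1.0)

# inverse problem: recover q (r fixed to 1) by minimising the gain residual
grid = np.linspace(0.1, 5.0, 4901)
q_hat = grid[np.argmin([(lqr_gain(q, 1.0) - k_obs) ** 2 for q in grid])]
```

Since the gain is monotone in q here, the residual has a unique minimiser and `q_hat` recovers the expert's weight; richer formulations (as in the papers below) handle vector states, constraints, and online data.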
Selected Papers:
- T. L. Molloy, J. Inga, S. Hohmann, T. Perez. Inverse Optimal Control and Inverse Noncooperative Dynamic Game Theory. Springer Nature, 2022. [link]
- T. L. Molloy, J. J. Ford, T. Perez. Finite-Horizon Inverse Optimal Control for Discrete-Time Nonlinear Systems. Automatica, 2018. [link]
- T. L. Molloy, J. J. Ford, T. Perez. Online Inverse Optimal Control for Control-Constrained Discrete-Time Systems on Finite and Infinite Horizons. Automatica, 2020. [link]
- M. Xu, T. L. Molloy, S. Gould. Revisiting Implicit Differentiation for Learning Problems in Optimal Control. NeurIPS, 2023. [link]
- T. Zhao, T. L. Molloy. Extended Kalman Filtering for Recursive Online Discrete-Time Inverse Optimal Control. ACC, 2024.
- O. Dry, T. L. Molloy, W. Jin, I. Shames. ZORMS-LfD: Learning from Demonstrations with Zeroth-Order Random Matrix Search. IEEE Robotics and Automation Letters, 2025. [link]
Partially Observed Markov Decision Processes (POMDPs)
POMDPs are a stochastic control framework for sequential decision-making when the system state is not fully observable. Applications include autonomous robot exploration, covert navigation, privacy, and cybersecurity.
[Figures: Standard POMDP; Uncertainty-Aware POMDP]
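The core computational object in a POMDP is the belief, a posterior distribution over the hidden state that is updated by Bayes' rule after each observation. A minimal sketch with an invented two-state model (transition matrix T and observation likelihoods O are illustrative only):

```python
import numpy as np

T = np.array([[0.9, 0.1],   # T[i, j] = P(next state = j | current state = i)
              [0.2, 0.8]])
O = np.array([[0.8, 0.2],   # O[i, z] = P(observation = z | state = i)
              [0.3, 0.7]])

def belief_update(b, z):
    # predict through the dynamics, correct with the observation likelihood,
    # then renormalise so the belief remains a probability distribution
    b_pred = b @ T
    b_new = O[:, z] * b_pred
    return b_new / b_new.sum()

b = np.array([0.5, 0.5])     # uniform prior over the two states
for z in [0, 0, 1]:          # a short observation sequence
    b = belief_update(b, z)
```

A POMDP policy then maps beliefs (rather than states) to actions, which is what makes exact planning hard and what the papers below exploit and extend.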
Selected Papers:
- T. L. Molloy. ISC-POMDPs: Partially Observed Markov Decision Processes with Initial-State Dependent Costs. IEEE Control Systems Letters, 2025. [link]
- T. L. Molloy, G. N. Nair. Entropy-Regularized Partially Observed Markov Decision Processes. IEEE Transactions on Automatic Control, 2024. [link]
- T. L. Molloy, G. N. Nair. Smoother Entropy for Active State Trajectory Estimation and Obfuscation in POMDPs. IEEE Transactions on Automatic Control, 2023. [link]
Quickest Change Detection
Quickest change detection is the problem of detecting an abrupt change in a stochastic process as quickly as possible after it occurs while avoiding false alarms. Applications include vision-based aircraft detection and fault diagnosis.
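A classical solution in the known-distributions case is the CUSUM rule: accumulate log-likelihood ratios, reset at zero, and declare a change when the statistic crosses a threshold. A hedged sketch for a Gaussian mean shift (the distributions, change time, and threshold are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
# pre-change N(0,1) for 100 samples, then post-change N(1,1)
x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.0, 1.0, 100)])

# log-likelihood ratio of post- vs pre-change Gaussians (unit variance)
mu0, mu1 = 0.0, 1.0
llr = (mu1 - mu0) * (x - (mu0 + mu1) / 2)

s, alarm = 0.0, None
for t, l in enumerate(llr):
    s = max(0.0, s + l)        # CUSUM recursion with reflection at zero
    if s > 5.0 and alarm is None:
        alarm = t              # first threshold crossing = alarm time
```

The threshold trades detection delay against false-alarm rate; the robust variants in the papers below relax the assumption that both distributions are known exactly.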
Selected Papers:
- T. L. Molloy. Misspecified and Asymptotically Minimax Robust Quickest Change Diagnosis. IEEE Transactions on Automatic Control, 2021. [link]
- T. L. Molloy, J. J. Ford. Minimax Robust Quickest Change Detection in Systems and Signals with Unknown Transients. IEEE Transactions on Automatic Control, 2019. [link]
- T. L. Molloy, J. J. Ford. Misspecified and Asymptotically Minimax Robust Quickest Change Detection. IEEE Transactions on Signal Processing, 2017. [link]
Dynamic and Differential Game Theory
Dynamic and differential game theory studies multi-agent systems in which each agent optimises its own objective, knowing that others do likewise. Applications include autonomous collision avoidance and human–machine teaming.
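The equilibrium concept at the heart of this area is easiest to see in a static example (the quadratic costs here are invented for illustration, not taken from the papers below): at a Nash equilibrium, each player's action is a best response to the other's, so iterating best responses finds the fixed point.

```python
def br1(u2):
    # best response of player 1: argmin over u1 of u1**2 + (u1 - u2 - 1)**2
    return (u2 + 1) / 2

def br2(u1):
    # best response of player 2: argmin over u2 of u2**2 + (u2 - u1 + 1)**2
    return (u1 - 1) / 2

u1, u2 = 0.0, 0.0
for _ in range(50):
    # simultaneous best-response iteration; contracts to the Nash equilibrium
    u1, u2 = br1(u2), br2(u1)
```

In dynamic and differential games the same fixed-point logic applies to feedback strategies over time, which is where the pursuit-evasion and collision-avoidance problems below live.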
Selected Papers:
- P. Braun, T. L. Molloy, I. Shames. Prying Pedestrian Surveillance-Evasion: Minimum-Time Evasion from an Agile Pursuer. Journal of Guidance, Control, and Dynamics, 2025. [link]
- T. L. Molloy, T. Perez, B. P. Williams. Optimal Bearing-Only-Information Strategy for Unmanned Aircraft Collision Avoidance. Journal of Guidance, Control, and Dynamics, 2020. [link]
Event-Based Sensing and Perception
Event cameras are neuromorphic sensors that output asynchronous per-pixel brightness changes, enabling high-speed, low-latency perception. Applications include high-speed object detection and tracking for robot perception in low-light and dynamic environments.
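The idealised event generation model can be sketched in a few lines: a pixel emits an event each time its log-brightness moves by a contrast threshold from the level at its last event. The brightness ramp and threshold below are invented for illustration.

```python
import numpy as np

C = 0.2                                     # contrast threshold
log_I = np.log(np.linspace(1.0, 3.0, 50))   # log-brightness of one pixel over time

events = []                  # list of (time index, polarity) pairs
ref = log_I[0]               # log-brightness at the last emitted event
for t, L in enumerate(log_I):
    while L - ref >= C:      # brightness increased by a full threshold
        ref += C
        events.append((t, +1))
    while ref - L >= C:      # brightness decreased by a full threshold
        ref -= C
        events.append((t, -1))
```

Because only changes are reported, a static scene produces no data at all, which is what gives event cameras their low latency and low bandwidth in the high-speed settings studied here.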
Selected Papers: