Jonas Buro

  • BSc Honours (University of Victoria, 2021)

Notice of the Final Oral Examination for the Degree of Master of Science

Topic

Policy-Value Concordance for Deep Actor-Critic Reinforcement Learning Algorithms

Department of Computer Science

Date & location

  • Monday, December 9, 2024

  • 11:00 A.M.

  • Virtual Defence

Reviewers

Supervisory Committee

  • Dr. Brandon Haworth, Department of Computer Science, University of Victoria (Supervisor)

  • Dr. Teseo Schneider, Department of Computer Science, University of Victoria (Member)

External Examiner

  • Dr. Homayoun Najjaran, Department of Mechanical Engineering, University of Victoria 

Chair of Oral Examination

  • Dr. Yu-Ting Chen, Department of Mathematics and Statistics, University of Victoria

     

Abstract

Designing general agents that optimize sequential decision-making under uncertainty has long been central to artificial intelligence research. Recent advances in deep reinforcement learning (RL) have made progress in this pursuit, achieving superhuman performance in a collection of challenging and visually complex domains in a tabula rasa fashion, without human intervention. Although these methods make progress towards general problem-solving agents, they require far more data than humans do to learn effective decision-making policies, which prevents their application to most real-world problems for which no simulator exists. The question of how best to learn models intended for downstream purposes such as planning in this setting remains unresolved. Motivated by this gap in the literature, we propose a novel learning objective for RL algorithms with deep actor-critic architectures, with the goal of further investigating the efficacy of such methods as autonomous general problem solvers. These algorithms employ artificial neural networks as parameterized policy and value functions, which guide their decision-making processes. Our approach introduces a learning signal that explicitly captures desirable properties of the policy function in terms of the value function, from the perspective of a downstream reward-maximizing agent. Specifically, the signal encourages the policy to favour actions in a manner that is concordant with the relative ordering of value function estimates during training. We hypothesize that, when correctly balanced with other learning objectives, RL algorithms incorporating our method will converge to policies of comparable strength using less real-world data than their original instantiations. To empirically investigate this hypothesis, we incorporate our technique into state-of-the-art RL algorithms, ranging from simple policy-gradient actor-critic methods to more complex model-based architectures, deploy them on standard deep RL benchmark tasks, and perform statistical analysis on their performance data.
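
The concordance signal described above can be read, loosely, as a pairwise ranking-style penalty between the policy's action preferences and the critic's value estimates. The sketch below is a minimal illustration under that reading, assuming a discrete-action actor-critic implemented in PyTorch; the function name concordance_loss, the hinge form of the penalty, and the tensor shapes are assumptions made for illustration only, not the objective defined in the thesis.

import torch
import torch.nn.functional as F

def concordance_loss(logits: torch.Tensor, q_values: torch.Tensor) -> torch.Tensor:
    """Hypothetical pairwise penalty: discourage action pairs whose ordering
    under the policy disagrees with the critic's ordering.

    logits:   (batch, num_actions) unnormalized policy scores
    q_values: (batch, num_actions) critic estimates, treated as fixed targets
    """
    log_probs = F.log_softmax(logits, dim=-1)
    q = q_values.detach()  # do not backpropagate through the critic here

    # Pairwise differences over actions: entry (i, j) is positive when the
    # policy (respectively, the critic) prefers action i over action j.
    dp = log_probs.unsqueeze(-1) - log_probs.unsqueeze(-2)  # (B, A, A)
    dq = q.unsqueeze(-1) - q.unsqueeze(-2)                  # (B, A, A)

    # Hinge-style term that is nonzero only where the two orderings disagree.
    discordant = F.relu(-dp * torch.sign(dq))
    return discordant.mean()

In use, such a term would be added to the usual actor-critic objective with a weighting coefficient, which reflects the abstract's point that the signal must be correctly balanced against the other learning objectives.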