Identifiability and Generalizability from Multiple Experts in Inverse Reinforcement Learning

30 Pages
Posted: 18 Oct 2022

Paul Rolland

École Polytechnique Fédérale de Lausanne

Luca Viano

École Polytechnique Fédérale de Lausanne

Norman Schuerhoff

Swiss Finance Institute - HEC Lausanne

Boris Nikolov

University of Lausanne; Swiss Finance Institute; European Corporate Governance Institute (ECGI)

Volkan Cevher

École Polytechnique Fédérale de Lausanne

Date Written: October 13, 2022

Abstract

While Reinforcement Learning (RL) aims to train an agent to act optimally with respect to a given reward function in a given environment, Inverse Reinforcement Learning (IRL) seeks to recover the reward function from observations of an expert’s behavior. It is well known that, in general, many different reward functions induce the same optimal policy, so the IRL problem is ill-posed. However, [1] showed that if we observe two or more experts with different discount factors, or acting in different environments, the reward function can under certain conditions be identified up to a constant. This work starts by establishing an equivalent identifiability statement for multiple experts in tabular Markov decision processes (MDPs), based on a rank condition that is easy to verify and that we show is also necessary. We then extend this result to several further scenarios: we characterize reward identifiability when the reward function is represented as a linear combination of given features, which makes it more interpretable, and when only approximate transition matrices are available. Even when the reward is not identifiable, we provide conditions under which data on multiple experts in a given environment suffices to generalize, that is, to train an optimal agent in a new environment. Our theoretical results on reward identifiability and generalizability are validated in various numerical experiments.
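
To make the ill-posedness concrete, the following numpy sketch (illustrative only, not the paper's algorithm; the random MDP, reward, and potential function are all invented here) applies potential-based reward shaping (Ng et al., 1999), which alters the reward while provably preserving the optimal policy for a fixed discount factor. An expert with a different discount factor is, in general, no longer optimal under the shaped reward, which is precisely the degeneracy that observing multiple experts can break.

import numpy as np

# Illustrative sketch only (not the paper's method): a single expert cannot
# pin down the reward, because potential-based shaping changes the reward
# while preserving the optimal policy for a fixed discount factor.

rng = np.random.default_rng(0)
S, A = 5, 3                                   # number of states / actions
P = rng.random((A, S, S))
P /= P.sum(axis=2, keepdims=True)             # P[a, s, s'] = Pr(s' | s, a)
r = rng.random((S, A))                        # ground-truth reward r(s, a)

def optimal_policy(r, gamma, iters=2000):
    """Greedy optimal policy of the tabular MDP (P, r, gamma), via value iteration."""
    v = np.zeros(S)
    for _ in range(iters):
        q = r + gamma * np.einsum("asn,n->sa", P, v)   # Q(s, a)
        v = q.max(axis=1)
    return q.argmax(axis=1)

gamma1, gamma2 = 0.9, 0.5                     # the two experts' discount factors
phi = rng.random(S)                           # arbitrary potential function
# Shaped reward: r'(s, a) = r(s, a) + gamma1 * E[phi(s') | s, a] - phi(s).
r_shaped = r + gamma1 * np.einsum("asn,n->sa", P, phi) - phi[:, None]

# Expert 1 (discount gamma1) behaves identically under r and r_shaped ...
print(np.array_equal(optimal_policy(r, gamma1), optimal_policy(r_shaped, gamma1)))  # True
# ... but an expert with discount gamma2 generally does not, so observing
# both experts rules out the shaped reward as an explanation.
print(np.array_equal(optimal_policy(r, gamma2), optimal_policy(r_shaped, gamma2)))  # typically False

This only sanity-checks the premise; the paper's rank condition characterizes exactly when all such alternative rewards, beyond constant shifts, are ruled out.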

Suggested Citation

Rolland, Paul and Viano, Luca and Schuerhoff, Norman and Nikolov, Boris and Cevher, Volkan, Identifiability and Generalizability from Multiple Experts in Inverse Reinforcement Learning (October 13, 2022). Swiss Finance Institute Research Paper No. 22-79, Available at SSRN: https://ssrn.com/abstract=4251437 or http://dx.doi.org/10.2139/ssrn.4251437

Paul Rolland

École Polytechnique Fédérale de Lausanne ( email )

Quartier UNIL-Dorigny, Bâtiment Extranef, # 211
CH-1015 Lausanne
Switzerland

Luca Viano

École Polytechnique Fédérale de Lausanne ( email )

Quartier UNIL-Dorigny, Bâtiment Extranef, # 211
CH-1015 Lausanne
Switzerland

Norman Schuerhoff (Contact Author)

Swiss Finance Institute - HEC Lausanne ( email )

Chavannes-près-Renens
Switzerland

Boris Nikolov

University of Lausanne ( email )

Lausanne, CH-1015
Switzerland

Swiss Finance Institute ( email )

c/o University of Geneva
40, Bd du Pont-d'Arve
CH-1211 Geneva 4
Switzerland

European Corporate Governance Institute (ECGI) ( email )

c/o the Royal Academies of Belgium
Rue Ducale 1 / Hertogsstraat 1
1000 Brussels
Belgium

Volkan Cevher

École Polytechnique Fédérale de Lausanne ( email )

Quartier UNIL-Dorigny, Bâtiment Extranef, # 211
CH-1015 Lausanne
Switzerland
