Type-based Methods for Interaction in Multiagent Systems
Studying interactions between agents is an active line of research in artificial intelligence (AI), and various approaches now exist. One method for modeling interactions, previously studied in other disciplines and receiving much interest in AI over the past decade, is to reason about the interaction using a space of predefined behaviors. Specifically, in the absence of knowledge of the other agents' true behaviors, the approach assumes that those behaviors are drawn from a set of known or hypothesized behaviors, and the agent decides its own actions in expectation over these dynamic types, whose assessed likelihoods may change as the agents act and observe. We refer to a private and sufficient abstraction of a hypothesized behavior as a type.
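To make the core idea concrete, here is a minimal sketch (a toy example, not taken from the tutorial material): the agent maintains a Bayesian posterior over two hypothesized types of another agent and chooses the action that maximizes its expected payoff under that posterior. The type definitions, action names, and payoff matrix below are all illustrative assumptions.

```python
# Hypothesized types (assumed for illustration): each type is a simple
# stochastic policy assigning probabilities to the other agent's actions.
types = {
    "cooperator": {"C": 0.9, "D": 0.1},
    "defector":   {"C": 0.2, "D": 0.8},
}

belief = {"cooperator": 0.5, "defector": 0.5}  # uniform prior over types

def update_belief(belief, observed_action):
    """Bayesian posterior over types after observing the other agent's action."""
    posterior = {t: belief[t] * types[t][observed_action] for t in belief}
    z = sum(posterior.values())  # normalizing constant
    return {t: p / z for t, p in posterior.items()}

# Toy payoff matrix: payoff[(my_action, their_action)].
payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 4, ("D", "D"): 1}

def best_response(belief):
    """Pick the action with the highest expected payoff under the belief."""
    def expected(a):
        return sum(
            belief[t] * types[t][o] * payoff[(a, o)]
            for t in belief
            for o in ("C", "D")
        )
    return max(("C", "D"), key=expected)

# After repeatedly observing 'D', the posterior shifts toward "defector",
# and the best response adapts accordingly.
for _ in range(3):
    belief = update_belief(belief, "D")
```

The essential loop — observe, update the posterior over types, act in expectation — is the pattern that the methods covered in the tutorial refine in far richer settings.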
This idea has been studied extensively in economics by game theorists and investigated in preliminary ways by several groups in AI for game playing and planning. Early results strongly suggest that reasoning about types is an important tool for solving problems that involve high uncertainty about agent behaviors and in which extensive online trial-and-error learning is undesirable or infeasible. This half-day tutorial will provide a comprehensive and unified introduction to the theory and practice of type-based methods, spanning early research in game theory to the latest work in AI, and will outline open problems for future research. The tutorial requires no prior knowledge of multiagent theory but assumes familiarity with basic probability and statistics.
Because type-based methods are studied in several sub-areas, including game theory, opponent modeling, plan recognition, and multiagent planning and teamwork, this tutorial will be of interest to a wide range of conference attendees. Graduate students, research faculty, and industry researchers who have had minimal prior exposure to type-based methods but whose work involves agent interactions will gain a detailed understanding of the methodologies and will be able to judge whether type-based methods are a useful approach for their own work. Researchers with prior experience in type-based methods will appreciate the unified survey of progress across different communities and may become acquainted with previously unknown work. Early graduate students will additionally learn about open questions that they may wish to address in their own research. By covering both the theory and the practice of type-based methods, the tutorial is designed to appeal to theoretical as well as applied researchers.
This half-day tutorial (3.5h + 30min break) will begin with an introduction that situates the main discourse, presents the core approach of type-based methods and its potential strengths, and lays out a road map for the remainder of the tutorial. The main material is then organized into three parts: Part 1 discusses seminal work in game theory related to type-based interactions. Part 2 discusses research on type-based methods from the multiagent systems literature, focusing on models with full observability of states and actions. Part 3 discusses research on type-based methods from the multiagent systems literature, focusing on models with partial observability of states and actions. This structure reflects the chronological order of development and the increasing complexity of the decision models involved. Parts 2 and 3 each conclude with a discussion of open questions.
Download tutorial slides here.
Introduction
- What are multiagent systems?
- Many methods of interaction; here: type-based methods
- What are types? How should we reason with them?
- Example (informal) to clarify the core idea
- Potential advantages and pitfalls of type-based methods
- How are types studied in various areas?
- Road map for the remainder of the tutorial

Part 1: Type-based methods in game theory
- Bayesian games and Bayesian Nash equilibrium
- Evolution of beliefs and equilibrium attainment
- Universal type spaces
- Impact of prior beliefs on equilibrium attainment
- Impossibility results on prediction and optimization

Part 2: Type-based methods with full observability
- Stochastic Bayesian games and the HBA algorithm as a basis of discussion
- Different implementations and experimental studies
- Convergence and correctness of different posterior formulations
- Impact of prior beliefs on long-term payoff maximization
- Exploration of types in multiagent reinforcement learning
- Implications and detection of incorrect hypothesized types
- Open questions

Part 3: Type-based methods with partial observability
- Intentional and subintentional types
- Reasoning with types in decision-theoretic planning: the Interactive POMDP framework
- Integration of types in probabilistic graphical models
- Applications of type-based methods
- Open questions
Dr. Stefano V. Albrecht is a postdoctoral fellow in the Department of Computer Science at The University of Texas at Austin, where he is a member of Prof. Peter Stone's group and supported by a fellowship from the Alexander von Humboldt Foundation. His research interests are in the area of autonomous agents and multiagent systems, specifically sequential decision making under uncertainty. His Ph.D. research (completed in 2015 at The University of Edinburgh) made a number of contributions to type-based methods and led to publications in leading AI conferences, including AAAI, UAI, and AAMAS. Dr. Albrecht is co-chair of the AAAI workshop series on Multiagent Interaction without Prior Coordination (MIPC), now in its third edition at AAAI-16, and editor of a special issue on MIPC in the Journal of Autonomous Agents and Multi-Agent Systems.
Dr. Prashant Doshi is an Associate Professor of Computer Science and faculty member of the AI Institute at The University of Georgia, USA. His research interests lie in decision making under uncertainty in multiagent settings and in game theory. He has also had short stints at the IBM T. J. Watson Research Center, where he worked in the eBusiness Group. He has published extensively in journals, conferences, and other forums in the fields of agents and AI. He has served on the program and reviewing committees of several conferences, including AAMAS and AAAI, and of workshops in the field of agents. Prof. Doshi has taught introductory courses on AI to undergraduate and graduate students for more than 10 years, and an advanced course on decision making under uncertainty for more than 5 years, all well received by students. He has been a co-speaker on a longstanding tutorial on decision making in multiagent settings that has been held at the AAMAS conference for 8 consecutive years. He has given numerous presentations at conferences and invited talks at research institutions and universities.