Linköping University Electronic Press Conference Proceedings
Article: Multi-Agent Multi-Objective Deep Reinforcement Learning for Efficient and Effective Pilot Training
FT2019. Proceedings of the 10th Aerospace Technology Congress, October 8-9, 2019, Stockholm, Sweden

Title:
Multi-Agent Multi-Objective Deep Reinforcement Learning for Efficient and Effective Pilot Training
Author:
Johan Källström: Saab AB and Department of Computer Science, Linköping University, Linköping, Sweden
Fredrik Heintz: Department of Computer Science, Linköping University, Linköping, Sweden
DOI:
10.3384/ecp19162011
Download:
Full text (pdf)
Year:
2019
Conference:
FT2019. Proceedings of the 10th Aerospace Technology Congress, October 8-9, 2019, Stockholm, Sweden
Issue:
162
Article no.:
11
Pages:
101-111
No. of pages:
11
Publication type:
Abstract and Fulltext
Published:
2019-10-23
ISBN:
978-91-7519-006-8
Series:
Linköping Electronic Conference Proceedings
ISSN (print):
1650-3686
ISSN (online):
1650-3740
Publisher:
Linköping University Electronic Press, Linköpings universitet



The tactical systems and operational environment of modern fighter aircraft are becoming increasingly complex. Creating a realistic and relevant environment for pilot training using only live aircraft is difficult, impractical and highly expensive. The Live, Virtual and Constructive (LVC) simulation paradigm aims to address this challenge. LVC simulation means linking real aircraft, ground-based systems and soldiers (Live), manned simulators (Virtual) and computer-controlled synthetic entities (Constructive). Constructive simulation enables realization of complex scenarios with a large number of autonomous friendly, hostile and neutral entities, which interact with each other as well as with manned simulators and real systems. This reduces the need for personnel to act as role-players by operating, e.g., live or virtual aircraft, thus lowering the cost of training. Constructive simulation also improves the availability of training by embedding simulation capabilities in live aircraft, enabling training anywhere, anytime. In this paper we discuss how machine learning techniques can be used to automate the construction of advanced, adaptive behavior models for constructive simulations, to improve the autonomy of future training systems. We conduct a number of initial experiments, and show that reinforcement learning, in particular multi-agent and multi-objective deep reinforcement learning, allows synthetic pilots to learn to cooperate and to prioritize among conflicting objectives in air combat scenarios. Though the results are promising, we conclude that further algorithm development is necessary to fully master the complex domain of air combat simulation.

Keywords: pilot training, embedded training, LVC simulation, artificial intelligence, autonomy, sub-system and system technology
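To make the abstract's central idea concrete, the following is a minimal, hypothetical sketch of multi-objective reinforcement learning with linear scalarization, written as a tabular Python agent rather than the deep, multi-agent setup studied in the paper. The toy environment, the objective names (closing on an engagement position versus conserving fuel) and all parameters are illustrative assumptions, not the authors' implementation.

import numpy as np

# Hypothetical toy problem: a synthetic pilot advances toward an engagement
# position (objective 1) while limiting fuel/effort (objective 2). These
# objectives conflict, so rewards are vectors and a weight vector expresses
# the preferred trade-off (linear scalarization).
N_STATES, N_ACTIONS, N_OBJECTIVES = 10, 4, 2
ALPHA, GAMMA, EPSILON, EPISODES = 0.1, 0.95, 0.1, 2000
WEIGHTS = np.array([0.7, 0.3])  # assumed preference between the objectives

rng = np.random.default_rng(0)

def step(state, action):
    """Toy dynamics: the action is the number of cells to advance."""
    next_state = min(state + action, N_STATES - 1)
    done = next_state == N_STATES - 1
    reward = np.array([
        1.0 if done else 0.0,   # objective 1: reach the engagement position
        -0.1 * action,          # objective 2: penalize fuel/effort use
    ])
    return next_state, reward, done

# One Q-value vector per state-action pair (one entry per objective).
Q = np.zeros((N_STATES, N_ACTIONS, N_OBJECTIVES))

for _ in range(EPISODES):
    state = 0
    for _ in range(100):  # cap episode length
        if rng.random() < EPSILON:
            action = rng.integers(N_ACTIONS)
        else:
            action = int(np.argmax(Q[state] @ WEIGHTS))  # greedy on scalarized value
        next_state, reward, done = step(state, action)
        best_next = int(np.argmax(Q[next_state] @ WEIGHTS))
        target = reward + GAMMA * Q[next_state, best_next] * (not done)
        Q[state, action] += ALPHA * (target - Q[state, action])
        state = next_state
        if done:
            break

print("Scalarized state values:", np.round(np.max(Q @ WEIGHTS, axis=1), 3))

Changing the preference weights shifts the learned behavior between the two objectives, which is the kind of trade-off among conflicting objectives that the paper's synthetic pilots must handle, there combined with multi-agent cooperation and deep function approximation.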
