
Article Information

  • Title: Peeking inside the Black Box: Interpreting Deep-learning Models for Exoplanet Atmospheric Retrievals
  • Authors: Kai Hou Yip; Quentin Changeat; Nikolaos Nikolaou
  • Journal: The Astronomical Journal
  • Print ISSN: 0004-6256
  • Electronic ISSN: 1538-3881
  • Year: 2021
  • Volume: 162
  • Issue: 5
  • Pages: 1-29
  • DOI: 10.3847/1538-3881/ac1744
  • Language: English
  • Publisher: The American Astronomical Society
  • Abstract: Deep-learning algorithms are growing in popularity in the field of exoplanetary science due to their ability to model highly nonlinear relations and solve interesting problems in a data-driven manner. Several works have attempted to perform fast retrievals of atmospheric parameters with the use of machine-learning algorithms like deep neural networks (DNNs). Yet, despite their high predictive power, DNNs are also infamous for being "black boxes." It is their apparent lack of explainability that makes the astrophysics community reluctant to adopt them. What are their predictions based on? How confident should we be in them? When are they wrong, and how wrong can they be? In this work, we present a number of general evaluation methodologies that can be applied to any trained model and answer questions like these. In particular, we train three different popular DNN architectures to retrieve atmospheric parameters from exoplanet spectra and show that all three achieve good predictive performance. We then present an extensive analysis of the predictions of DNNs, which can inform us, among other things, of the credibility limits for atmospheric parameters for a given instrument and model. Finally, we perform a perturbation-based sensitivity analysis to identify the features of the spectrum to which the outcome of the retrieval is most sensitive. We conclude that, for different molecules, the wavelength ranges to which the DNNs' predictions are most sensitive do indeed coincide with their characteristic absorption regions. The methodologies presented in this work help to improve the evaluation of DNNs and to grant interpretability to their predictions.
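The perturbation-based sensitivity analysis described in the abstract can be illustrated with a minimal sketch: inject noise into one wavelength region of an input spectrum at a time and record how much the model's retrieved parameter changes. This is not the paper's actual code; the helper `perturbation_sensitivity`, the `toy_model` stand-in for a trained DNN, and all parameter values are hypothetical.

```python
import numpy as np

def perturbation_sensitivity(model, spectrum, n_bins=10, noise_scale=0.05,
                             n_trials=100, seed=0):
    """Estimate how sensitive a model's scalar output is to each wavelength
    bin: add Gaussian noise to one bin at a time and average the absolute
    change in the prediction over several noise draws."""
    rng = np.random.default_rng(seed)
    base = model(spectrum)
    bins = np.array_split(np.arange(spectrum.size), n_bins)
    sens = np.zeros(n_bins)
    for i, idx in enumerate(bins):
        for _ in range(n_trials):
            perturbed = spectrum.copy()
            perturbed[idx] += rng.normal(0.0, noise_scale, idx.size)
            sens[i] += abs(model(perturbed) - base)
        sens[i] /= n_trials
    return sens

# Toy stand-in for a trained retrieval model: it responds only to the first
# quarter of the spectrum, mimicking a parameter driven by a single
# absorption band at short wavelengths.
def toy_model(spec):
    return spec[: spec.size // 4].mean()

spectrum = np.ones(100)
sens = perturbation_sensitivity(toy_model, spectrum)
# Sensitivity is concentrated in the bins covering the "absorption band";
# bins the model never reads come out exactly zero.
```

A real application would replace `toy_model` with a trained network's forward pass and repeat the analysis per retrieved parameter, which is how the paper's conclusion (sensitive regions coincide with characteristic absorption bands) can be checked.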