Abstract: As the performance and complexity of machine learning models have grown significantly over the last years, there has been an increasing need to develop methodologies to describe their behaviour. Such a need has mainly arisen due to the widespread use of black-box models, i.e., high-performing models whose internal logic is challenging to describe and understand. Therefore, the machine learning and AI field is facing a new challenge: making models more explainable through appropriate techniques. The final goal of an explainability method is to faithfully describe the behaviour of a (black-box) model to users, who can then gain a better understanding of its logic, thus increasing their trust in and acceptance of the system. Unfortunately, state-of-the-art explainability approaches may not be enough to guarantee the full understandability of explanations from a human perspective. For this reason, human-in-the-loop methods have been widely employed to enhance and/or evaluate explanations of machine learning models. These approaches focus on collecting human knowledge that AI systems can then employ, or on involving humans in achieving their objectives (e.g., evaluating or improving the system). This article presents a literature overview on collecting and employing human knowledge to improve and evaluate the understandability of machine learning models through human-in-the-loop approaches. Furthermore, a discussion on the challenges, state of the art, and future trends in explainability is also provided.