Abstract: AI-based automated decisions are increasingly used as part of new services being deployed to the general public. This approach to building services presents significant potential benefits, such as increased speed of execution, increased accuracy, lower cost, and the ability to adapt to a wide variety of situations. However, equally significant concerns have been raised and are now well documented, such as concerns about privacy, fairness, bias, and ethics. On the consumer side, more often than not, the users of those services are provided with no or inadequate explanations for decisions that may impact their lives. In this article, we report our experience of developing a socio-technical approach to constructing explanations for such decisions, in an automated manner, from their audit trails, or provenance. The work has been carried out in collaboration with the UK Information Commissioner's Office. In particular, we have implemented an automated Loan Decision scenario, instrumented its decision pipeline to record provenance, categorized relevant explanations according to their audience and their regulatory purposes, built an explanation-generation prototype, and deployed the whole system in an online demonstrator.