Abstract: In 2015 Savulescu and Maslen argued, influentially, that a moral artificial intelligence (MAI) could be developed whose main function would be to morally advise human agents. The suggested MAI would monitor factors that affect moral decision making, make moral agents aware of their biases, and advise them on the “right course of action”. Effectively, since the MAI is an AI, it would “gather, compute and update data to assist human agents with their moral decision-making.” The data would encode information about agents and their environment, as well as moral principles and values. In this paper I critically analyze the suggestion of an MAI and argue that, in the contemporary domain of digital technology and AI-driven industry, a version of the MAI can and should be developed for practical purposes. My main argument is that even if, as Savulescu stresses, “a single account of right action cannot be agreed upon”, this should not prevent us from harnessing the computational and reasoning power of AI in order to regulate the exploding AI industry. For this purpose, I suggest a modification of the MAI: from a mere moral advisor to a full-blown artificial moral agent (AMA), capable not only of the MAI’s gathering, computing and analyzing of data for moral advice, but also of real-time moral analysis of complex real-life situations (such as those involving autonomous vehicles, drone swarms, AI-led local warfare and AI-led nuclear warfare), of choosing an optimal moral decision, and of instructing the AI system to execute it in real time. I also suggest an encoding framework for the AMA, based on multidimensional graphs, that would be able to quantify the acquired morally relevant data, and a moral reasoning engine operating rationally on models rendered from the data.