A summary of The Ten Commandments of Ethical Medical AI by Heimo Muller et al.
Nicholas M. Synovic
Heimo Muller et al.; IEEE Computing Edge, October 2022 DOI [0]
Summary
AI is recognized as a key component in meeting the UN’s Sustainable Development Goals, which state that:
- 1 billion more people should benefit from universal health coverage
- 1 billion more people should be better protected from health emergencies
- 1 billion more people should enjoy better health and well-being
AI is accelerating the growth of precision medicine and is generally well received by both engineers and medical practitioners.
However, training datasets and solutions are being developed by community members without a formal set of rules or ethics established specifically for medical AI datasets and models. It is also difficult to explain why a model made a given decision, which is crucial for the medical community. Additionally, current AI models are unable to grasp the context of a problem. Furthermore, there is concern about medical bias in both the training datasets and the predictions of the models.
Key Insights
The European General Data Protection Regulation (GDPR) is making black-box solutions difficult to use in business because of their lack of explainability. Thus, as these standards become mandatory, questions arise such as:
- Who is legally responsible?
- Can we agree on ethics of medical AI?
The Ten Commandments
- It must be recognizable whether, and which part of, a decision or action is taken and carried out by an AI.
- It must be recognizable which part of the communication is performed by an AI agent.
- The responsibility for an AI decision, action, or communicative process must be taken by a competent physical or legal person.
- AI decisions, actions, and communicative processes must be transparent and explainable.
- An AI decision must be comprehensible and repeatable.
- An explanation of an AI decision must be based on state-of-the-art (scientific) theories.
- An AI decision, action, or communication must not be manipulative by feigning accuracy.
- An AI decision, action, or communication must not violate any applicable law and must not lead to human harm.
- An AI decision, action, or communication shall not be discriminatory. This applies in particular to the training of algorithms.
- The target setting, control, and monitoring of AI decisions, actions, and communications shall not be performed by algorithms.