A Method for Ethical AI in Defence – new report and toolkit
“Defence’s challenge is that failure to adopt the emerging technologies in a timely manner may result in a military disadvantage, while premature adoption without sufficient research and analysis may result in inadvertent harms.”
1. Responsibility – Who is responsible for AI?
2. Governance – How is AI controlled?
3. Trust – How can AI be trusted?
4. Law – How can AI be used lawfully?
5. Traceability – How are the actions of AI recorded?
- Describe the military context in which the AI will be employed;
- Explain the types of decisions supported by the AI;
- Explain how the AI integrates with human operators to ensure effective and ethical decision-making in the anticipated context of use, and the countermeasures that protect against potential misuse;
- Explain the framework(s) to be used;
- Employ subject matter experts to guide AI development;
- Employ appropriate verification and validation techniques to reduce risk.
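The documentation steps above could be captured as a simple structured record, so that gaps are visible before a project review. The sketch below is purely illustrative: the class name, field names, and example values are assumptions for this post, not part of the report's toolkit.

```python
from dataclasses import dataclass


@dataclass
class EthicalAIChecklist:
    """Hypothetical record mirroring the report's documentation steps."""
    military_context: str              # where the AI will be employed
    supported_decisions: list[str]     # types of decisions the AI supports
    human_integration: str             # how the AI integrates with human operators
    misuse_countermeasures: str        # protections against potential misuse
    frameworks: list[str]              # framework(s) to be used
    subject_matter_experts: list[str]  # experts guiding AI development
    vv_techniques: list[str]           # verification and validation techniques

    def incomplete_items(self) -> list[str]:
        # Flag any field left empty so gaps surface before deployment review.
        return [name for name, value in vars(self).items() if not value]
```

A record with any step left blank can then be flagged automatically, e.g. `checklist.incomplete_items()` returns the names of the unfilled fields.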
- The right to go to war (jus ad bellum);
- The right conduct in war (jus in bello);
- The morality of post-war settlement and reconstruction (jus post bellum).