
Ethical Considerations for Civilian AI Developers Using Open-Source Military Data

  • For the bibliography file, see citations.bib. All sources were accessible without paywalls as of 2025-04-28.
  • The information below is accurate, but some references were misassigned by the AI that drafted it, which is itself a reminder of how unreliable and risky AI can be.

Civilian AI developers working with open-source military data need to be especially mindful of the ethical and legal challenges involved. Even though this data is publicly available, it often originates from complex military contexts where AI-driven decisions can have serious, sometimes life-or-death consequences. The NSCAI Final Report (n.d.) highlights the risks of using AI in military operations, particularly around accountability and compliance with international humanitarian law (IHL).

It’s important for developers to incorporate IHL principles, such as the proportionality standard, into their AI systems to help minimize harm to civilians and ensure lawful use (Woodcock, 2024). Careful auditing of training data is also essential to avoid hidden biases that could lead to wrongful targeting or misclassification, issues that contribute to what some call the “accountability gap” in AI decision-making (Crootof, 2022). Additionally, strong access controls and governance are needed to prevent military AI technologies from being misused, given their dual-use nature (Paoli & Afina, 2025).

Another key concern is the “mosaic effect,” where combining various pieces of open-source intelligence can unintentionally cause harm or violate privacy (Stewart & Hinds, 2023). The United Nations Secretary-General has stressed that decisions involving human life should never be left solely to algorithms or commercial interests, highlighting the need to maintain meaningful human oversight in AI systems (United Nations, 2024).

Ethical frameworks developed for military AI provide valuable lessons for civilian developers, especially when adapting these technologies for other fields like healthcare (Oniani et al., 2023; Morgan et al., 2020). By adopting these cross-domain principles, developers can promote responsible innovation while respecting human rights and international norms.

Therefore, civilian AI developers using military data should prioritize transparency, fairness, and strong ethical oversight to manage the risks inherent in this sensitive area (Roumate, 2020; Khan, 2023). Following these best practices helps ensure AI is developed safely and responsibly, in line with international law and ethical standards.

References

  • Crootof, R. (2022). AI and the Actual IHL Accountability Gap. SSRN.
  • Stewart, R., & Hinds, G. (2023). Algorithms of War: The Use of Artificial Intelligence in Decision Making in Armed Conflict. Humanitarian Law & Policy Blog.
  • Morgan, F. E., et al. (2020). Military Applications of Artificial Intelligence: Ethical Concerns in an Uncertain World. RAND Corporation.
  • Oniani, D., et al. (2023). Adopting and Expanding Ethical Principles for Generative Artificial Intelligence from Military to Healthcare. npj Digital Medicine, 6(1).
  • Paoli, G. P., & Afina, Y. (2025). AI in the Military Domain: A Briefing Note for States. UNIDIR.
  • Roumate, F. (2020). Artificial Intelligence, Ethics and International Human Rights Law. The International Review of Information Ethics, 29.
  • Khan, S. Y. (2023). Autonomous Weapon Systems and the Changing Face of International Humanitarian Law. International Law Blog.
  • United Nations. (2024). Secretary-General’s Remarks to the Security Council on Artificial Intelligence.
  • Woodcock, T. K. (2024). Human/Machine(-Learning) Interactions, Human Agency and the International Humanitarian Law Proportionality Standard. Global Society, 38(1).
  • National Security Commission on Artificial Intelligence. (n.d.). Chapter 4 – NSCAI Final Report.

Dataset Structure

A single BibTeX file (citations.bib) containing the references listed above.

Usage

This dataset is intended for:

  • Researchers studying military AI ethics
  • Policy analysts examining IHL compliance
  • Developers working on defence-related AI systems
  • International relations scholars
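Since the dataset is a single BibTeX file, a consumer might load it with a sketch like the following. The citation key and field layout shown here are illustrative assumptions, as the actual contents of citations.bib are not reproduced in this card; a real pipeline should prefer a dedicated library such as bibtexparser over this minimal regex approach.

```python
import re

def parse_bibtex_entries(text):
    """Minimal illustrative parser: extracts the entry type, citation key,
    and flat top-level fields from simple BibTeX entries. Not a full BibTeX
    parser (no nested braces, strings, or cross-references)."""
    entries = []
    # Match "@type{key, ...body...\n}" blocks, non-greedily.
    for match in re.finditer(r'@(\w+)\s*\{\s*([^,\s]+)\s*,(.*?)\n\}', text, re.DOTALL):
        entry_type, key, body = match.groups()
        # Collect "field = {value}" pairs with brace-free values.
        fields = dict(re.findall(r'(\w+)\s*=\s*\{([^{}]*)\}', body))
        entries.append({"type": entry_type.lower(), "key": key, **fields})
    return entries

# Hypothetical entry modelled on the reference list above; the real
# citations.bib may use different keys and fields.
sample = """@article{crootof2022,
  author = {Crootof, Rebecca},
  title  = {AI and the Actual IHL Accountability Gap},
  year   = {2022}
}
"""

entries = parse_bibtex_entries(sample)
```

In practice one would read the file with `open("citations.bib")` and pass its contents to the same function.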

Limitations

  • This is only a small sample of what is publicly available.
    • More reputable, authoritative, and comprehensive sources exist.
    • See also the International Committee of the Red Cross and other United Nations documents for more information on IHL.
    • Your local or school library is another good starting point.
  • The ethics of military AI depends on the capabilities and applications of the AI, as well as on its developers and users.

Licence

CC-BY-4.0 (assumed; verify the original source licences for specific entries).
