Battlefield Trust for Human-Machine Teaming: Evidence from the US Military

24 Pages. Posted: 20 Dec 2023

Paul Lushenko

Cornell University - Department of Government; US Army War College (DEP)

Date Written: December 5, 2023


Experts agree that future warfare will be characterized by countries' use of military technologies enhanced with Artificial Intelligence (AI). These AI-enhanced capabilities are thought to help countries maintain lethal overmatch against adversaries, especially when used in concert with humans. Yet it is unclear what shapes servicemembers' trust in human-machine teaming, wherein they partner with AI-enhanced military technologies to optimize battlefield performance. In October 2023, I administered a conjoint survey at the US Army and Naval War Colleges to assess how varying features of AI-enhanced military technologies shape servicemembers' trust in human-machine teaming. I find that trust in AI-enhanced military technologies is shaped by a tightly calibrated set of considerations, including technical specifications, namely their non-lethal purpose, heightened precision, and human oversight; perceived effectiveness in terms of civilian protection, force protection, and mission accomplishment; and international oversight. These results provide the first experimental evidence of military attitudes toward manned-unmanned teams, which has research, policy, and modernization implications.

Keywords: Artificial Intelligence, autonomy, civil-military relations, human-machine teaming, military, trust

Suggested Citation

Lushenko, Paul, Battlefield Trust for Human-Machine Teaming: Evidence from the US Military (December 5, 2023). Available at SSRN.

Paul Lushenko (Contact Author)

Cornell University - Department of Government ( email )

Ithaca, NY 14853
United States

US Army War College (DEP) ( email )

122 Forbes Avenue
Carlisle, PA 17013
United States
