Accountability, Secrecy, and Innovation in AI-Enabled Clinical Decision Software
7 Journal of Law and the Biosciences __ (2020)
Duke Law School Public Law & Legal Theory Series No. 2020-66
44 Pages
Posted: 15 Oct 2020
Date Written: September 2, 2020
Employing analytical and empirical tools, this article investigates the intricate interrelationship of trade secrecy, accountability, and innovation incentives in clinical decision software enabled by machine learning (ML-CD). While trade secrecy can provide incentives for innovation, it can also diminish the ability of third parties to adjudicate risk and benefit responsibly. Yet the type of information that FDA and adopters are asking for, and that developers are willing to provide, itself represents something of a black box.
Our article shines light into the black box. We find that developers regard secrecy over training data and details of the trained model as central to competitive advantage. Some believe uncertainty regarding the availability and enforceability of patents in ML-CD contributes to secrecy. Meanwhile, neither FDA nor adopters are currently asking for these types of details. Additionally, in some cases, it is not clear whether developers are being asked to provide rigorous evidence of performance. FDA, developers, and adopters could all do more to promote information flow, particularly as ML-CD models move into areas of higher risk. Specifically, FDA and developers could, without sacrificing innovation incentives, release summary information regarding training data and process. Moreover, particularly for higher risk cases, FDA and adopters could ask for evidence of performance on completely independent data sets. Consistent with protecting trade secrecy, FDA could also set up procedures by which to ask for full details regarding data and models.
Keywords: machine learning, clinical decision software, accountability, secrecy, innovation, intellectual property
JEL Classification: K23