Assessing the Security Risks of Generative AI in Software Development
Abstract
The rapid, industry-wide adoption of generative Artificial Intelligence (AI) tools in software organizations can introduce or exacerbate security risks. While software engineering research has identified various security risks of using generative AI, these risks remain insufficiently analyzed in the context of organizations’ actual security practices. Against this backdrop, we investigated the security risks of using generative AI in software development in relation to the 15 practices outlined in the Software Assurance Maturity Model (SAMM). In collaboration with three software organizations, we conducted a software assurance maturity assessment using SAMM, systematically identified generative AI security risks for each SAMM security practice, and finally assessed our results in a workshop with the organizations. Our study shows that, at the overall level of business functions, the identified security risks predominantly relate to governance, regardless of the organizations’ maturity scores and business contexts. At the level of security practices, the organizations are primarily concerned about threat assessment, strategy & metrics, security testing, and operational management. We discuss how these findings contribute to understanding the practical security consequences of adopting generative AI into organizations’ development processes.
Keywords: Generative AI, Software Development, Multi-case Study, Security Risks, Security Practices.