Generative AI adoption surges in software development
New research from software supply chain management company Sonatype reveals how generative AI is shaping the work of software engineers and the software development life cycle.
According to the 800 developer (DevOps) and application security (SecOps) leaders surveyed, virtually all (97%) are using the technology today, and nearly three-quarters (74%) report feeling pressure to use it despite identified security risks.
In fact, most respondents agree that security risk is their biggest concern with the technology, underscoring the critical need for responsible AI adoption that strengthens both software quality and security.
While DevOps and SecOps respondents hold similar outlooks on generative AI in most cases, there are notable differences with regard to adoption and productivity.
Key findings among the two groups include:
- SecOps are early adopters: Nearly half (45%) of SecOps leads have already integrated generative AI into the software development process, compared to less than one-third (31%) of DevOps leads.
- SecOps teams save more time: SecOps leads see greater time savings than their DevOps counterparts, with 57% saying generative AI saves them at least six hours a week compared to only 31% of DevOps respondents.
- There are differing opinions on benefits: When asked about the most positive impacts of this technology, DevOps leads report faster software development (16%) and more secure software (15%). SecOps leads cite increased productivity (21%) and faster issue identification/resolution (16%) as the top benefits.
- Open source code will be a bigger target: More than three-quarters of DevOps leads say the use of generative AI will result in more vulnerabilities in open source code; surprisingly, SecOps leads are less concerned, at 58%. Further, 42% of DevOps respondents and 40% of SecOps leads say a lack of regulation could deter developers from contributing to open source projects.
- DevOps and SecOps leads both want more regulation: Asked who they believe is responsible for regulating the use of generative AI, 59% of DevOps leads and 78% of SecOps leads say both the government and individual companies should be responsible for regulation.
“The AI era feels like the early days of open source, like we’re building the plane as we’re flying it in terms of security, policy, and regulation,” said Brian Fox, Co-founder and CTO at Sonatype. “Adoption has been widespread across the board, and the software development cycle is no exception. While productivity dividends are clear, our data also exposes a concerning, hand-in-hand reality: the security threats posed by this still-nascent technology. With every innovation cycle comes new risk, and it’s paramount that developers and application security leaders eye AI adoption with an eye for safety and security.”
The licensing and compensation debate was also top of mind for both groups; without clear rules, developers could be left in legal limbo, dealing with plagiarism claims against large language models (LLMs). Notably, rulings against copyright protection for AI-generated art have already prompted discussion about how much human input is necessary to meet what current law defines as true authorship. In the absence of settled copyright law, 40% of respondents agreed that creators should own the copyright for AI-generated output, and both groups overwhelmingly agreed that developers should be compensated for code they wrote if it is used in open source artifacts in LLMs (DevOps 93% vs. SecOps 88%).
Download the full report to learn more about in-depth patterns of generative AI usage, its benefits, and related concerns.
Methodology
Sonatype commissioned research panel provider Sago to conduct a survey of 400 DevOps leaders (responsibilities spanning software development, coding, and developer operations) and 400 SecOps leaders (application security, threat intelligence and analysis, and security operations) in the US. The web-based survey was fielded 12–21 July 2023. The margin of error is 3.46%.
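The quoted margin of error is consistent with the full sample of 800 respondents under the usual survey assumptions; the report does not state them, so the 95% confidence level (z ≈ 1.96) and worst-case proportion p = 0.5 in the sketch below are assumptions, not figures from Sonatype:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Simple-random-sample margin of error: z * sqrt(p * (1 - p) / n)."""
    return z * math.sqrt(p * (1 - p) / n)

# Full sample: 400 DevOps + 400 SecOps = 800 respondents
print(f"{margin_of_error(800):.2%}")  # 3.46%
```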