Black Duck’s latest edition of its annual “Global State of DevSecOps” report provides a comprehensive overview of the current state of application security. The report surveyed more than 1,000 software developers, application security professionals, chief information security officers, and DevOps engineers, and this year it uncovered three major findings.
More than 90% of respondents affirmed that they are using AI assistance in their code development. And as the report notes, “AI-generated code can create ambiguity about IP [intellectual property] ownership and licensing—especially when the AI model uses datasets that might include open source or other third-party code without attribution. AI-assisted coding tools also have the potential to introduce security vulnerabilities into codebases.”
But many organizations are not yet addressing these risks. According to the report, while 85% of respondents said they have some measures in place to address the potential issues with AI-generated code, only 24% were “very confident” in those measures. Another 41% were moderately confident, but 26% were either “slightly” or “not at all” confident.
When one out of four respondents reports lacking confidence in their own software security measures, there is a serious problem. Compounding that problem, 21% of those surveyed admit that “their development teams are bypassing corporate policies and using unsanctioned—and, one would assume, unsupervised—AI tools.”
The 2024 “Global State of DevSecOps” report indicates that most organizations are navigating how to permit and use AI-assisted development tools. The results imply that developers are employing AI either to develop code from inception or to complete code they have already begun.
The report details that 27% of organizations allow developers to use AI-based tools to write code and modify projects. It also shows that 43% of organizations currently restrict the use of AI solutions to specific developers or teams. Only 5% report that they have not yet embraced AI-assisted development and are certain that their developers are not using AI development tools. However, the report also notes that 21% of organizations are aware that at least some developers are using AI tools in development despite organizational prohibitions.
It is this last statistic that is the most concerning. Using AI-based development tools without authorization is a major organizational risk. If security teams have no visibility into AI tools, or into the code that comes from them, it becomes exceedingly difficult to adjust DevSecOps programs to maintain adequate security while still improving productivity. What this statistic really tells us is that AI development tools are already in use even when organizations believe they are not; your organization needs a plan to adopt them securely or it risks losing visibility into its software security risk landscape.
Twenty-four percent of organizations surveyed for the report expressed high confidence in the automated mechanisms they’ve put in place to assess AI-generated or AI-completed code, and 41% of respondents reported moderate confidence in their capacity to test this code automatically. That leaves 20% of organizations that are only slightly confident, 6% that are not at all confident in their organization’s preparedness, and 5% that lack sufficient visibility or for which this is not a current priority.
While some organizations may be able to manage AI-generated code issues with their current AppSec infrastructure, others may need to allocate additional security resources, consolidate testing tools, integrate automated testing mechanisms, and unify policies across projects and teams. These measures put safety nets and security gates in place, allowing organizations to adapt their pipelines as quickly as AI accelerates change.
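As an illustration only, here is a minimal sketch of what such a security gate might look like: a small Python script, run as a pipeline step, that reads a scanner’s JSON findings and fails the build when critical or high-severity counts exceed a policy threshold. The input format, severity labels, and limits shown are assumptions made for the example, not mechanisms described in the report.

```python
# Minimal sketch of a pipeline security gate: fail the build when a scan
# report contains more findings of a given severity than policy allows.
# The input format and the thresholds below are illustrative assumptions.
import json
import sys

# Hypothetical policy: maximum number of findings tolerated per severity.
POLICY = {"critical": 0, "high": 5}


def load_findings(path: str) -> list[dict]:
    """Read a scanner's JSON output; expects a list of {"id", "severity"} objects."""
    with open(path, encoding="utf-8") as fh:
        return json.load(fh)


def gate(findings: list[dict]) -> int:
    """Return 0 if the findings satisfy POLICY, 1 otherwise."""
    counts: dict[str, int] = {}
    for finding in findings:
        severity = finding.get("severity", "unknown").lower()
        counts[severity] = counts.get(severity, 0) + 1

    failed = False
    for severity, limit in POLICY.items():
        seen = counts.get(severity, 0)
        if seen > limit:
            print(f"GATE FAILED: {seen} {severity} findings (limit {limit})")
            failed = True
        else:
            print(f"gate ok: {seen} {severity} findings (limit {limit})")
    return 1 if failed else 0


if __name__ == "__main__":
    # Usage: python security_gate.py scan-results.json
    sys.exit(gate(load_findings(sys.argv[1])))
```

In practice the same idea applies to whatever SAST, SCA, or AI-code-review output a toolchain produces; the point is that the gate runs automatically on every build rather than relying on someone remembering to review the results.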
It is also important to note that, in addition to application security issues, AI-assisted development may introduce issues with software license compliance, potentially jeopardizing intellectual property by incorporating third-party code with associated reciprocal licenses.
One thing this year’s “Global State of DevSecOps” report makes clear is that organizations that embrace AI-enabled development are approaching this challenge with varying levels of caution. The key factor is the level of confidence each organization has in its own security protocols. The report shows a spectrum of responses to AI-generated code use; some organizations are proceeding with cautious confidence, while others appear to be taking serious risks with their development security.
It’s no surprise that, of the 27% of organizations that allow unrestricted AI use, 81% report high or moderate confidence in AI. These organizations are ready to go, and they’re confident that they have the controls in place to mitigate risk. It is more surprising that the 43% of respondents taking a phased approach to AI-enabled development also reported moderate confidence in their ability to secure AI-generated code, even while allowing only select development teams to use it in their work.
Meanwhile, 21% of surveyed organizations report lower overall confidence in their ability to secure AI-generated code while recognizing that development teams are establishing unauthorized secondary AI workflows that circumvent security. And 5% of respondents disallow the use of AI in development and are sure their developers are not using it. We can only speculate whether that group’s confidence about managing AI risk stems from the disallowance itself or from getting controls in place before they open the gates.
However, each of these cohorts also includes respondents that admit to being only slightly, or not at all, confident in their ability to secure AI tools and their output within their development pipelines. The least concerning subset of this group is the organizations that do not permit AI use at all, whether because they lack confidence in their preparations or because AI use is simply not a priority for them. The real risk in this group arises when risk mitigation controls for AI-generated code are not a priority even though the organization knows AI is already being used in development. While this may feel like controlled use, it is still critical to evaluate risk visibility and establish automated security gates.
The group most at risk, however, comprises those respondents that reported allowing AI use during development despite also reporting a clear lack of confidence in their preparations to mitigate the risks.
While the risks posed by AI development are similar to those posed by traditional application development (e.g., weak source code, vulnerable open source components), they manifest at an even faster velocity. Last year, 38% of the organizations that responded to this DevSecOps survey tested their business-critical apps less than weekly, and only 36% involved cross-functional teams in AppSec testing. In addition, only 5% of respondents reported being able to resolve critical issues within a week. This is not a security posture that will hold up as AI development tools become more widely used.
In light of the ongoing adoption of AI-assisted development tools, organizations need to craft a strategy that closes the window of opportunity for an attack. Most organizations have learned to accommodate open source dependencies in their development pipelines and have built systems to discover vulnerabilities as they are published, so they can patch and update in a timely manner. Most organizations test proprietary source code regularly to detect weaknesses and insecure configurations. To incorporate the needs of AI-assisted development into these processes, security and development teams must cooperate by using a DevSecOps toolkit that satisfies each group’s needs for efficiency and reliability.
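As one hedged example of what “discover vulnerabilities as they are published” can look like in practice, the sketch below uses Python and the public OSV.dev query API to check a few pinned open source dependencies for known advisories. The package names and versions are placeholders; a real implementation would read them from a lockfile or SBOM and run on a schedule or in CI.

```python
# Minimal sketch: check pinned dependencies against the public OSV.dev
# vulnerability feed so advisories published after release still surface.
# The dependency list below is an illustrative placeholder.
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

# Hypothetical pinned dependencies to check: (name, version, ecosystem).
DEPENDENCIES = [
    ("requests", "2.19.0", "PyPI"),
    ("lodash", "4.17.4", "npm"),
]


def known_advisories(name: str, version: str, ecosystem: str) -> list[str]:
    """Query OSV for advisories affecting a single package version."""
    payload = json.dumps(
        {"version": version, "package": {"name": name, "ecosystem": ecosystem}}
    ).encode("utf-8")
    request = urllib.request.Request(
        OSV_QUERY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request, timeout=30) as response:
        result = json.load(response)
    # OSV returns an empty object when no advisories match the query.
    return [vuln["id"] for vuln in result.get("vulns", [])]


if __name__ == "__main__":
    for name, version, ecosystem in DEPENDENCIES:
        ids = known_advisories(name, version, ecosystem)
        status = ", ".join(ids) if ids else "no known advisories"
        print(f"{ecosystem}/{name}@{version}: {status}")
```

Whether the feed is OSV, a commercial SCA tool, or something else, the design goal is the same: the check is automated and recurring, so the window between an advisory being published and the team learning about it stays small.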
Data gathered for the “Global State of DevSecOps” report points to four primary ways to evolve your DevSecOps program so that you’re developing securely at speed.
Moving forward, the most successful organizations will likely be those that can effectively streamline their AppSec tech stacks, leverage AI responsibly during development, reduce noise in security test results, and foster closer collaboration between security, development, and operations teams. The DevSecOps journey is far from over, and AI-assisted development is propelling organizations down the path faster than ever. This year’s “Global State of DevSecOps” report helps define that path so you can navigate it more securely without slowing down.
Discover the latest insights and trends in secure software development, including AI-generated code, in the latest “Global State of DevSecOps” report.