The world of software development is evolving at a breakneck pace, and that evolution brings ongoing and new challenges in securing software. Black Duck’s latest edition of its annual “Global State of DevSecOps” report offers a comprehensive look at the current state of play in application security, based on a survey of over 1,000 professionals across multiple countries and industries. Let's dive into the key findings that are shaping the DevSecOps landscape in 2024.
When asked to identify their top priorities for security testing, respondents pointed to three major factors.
The foremost consideration, cited by 37% of respondents, is protecting sensitive information whether accessed or transmitted, reflecting a mature understanding of the impact potential breaches can have across different parts of an application ecosystem.
To address these vulnerabilities, organizations need to implement strong encryption practices, use up-to-date security protocols, and ensure that sensitive data is properly protected both when it's transmitted and when it's stored.
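As a minimal illustration of the "up-to-date security protocols" half of that advice, the sketch below (not from the report) uses Python's standard `ssl` module to build a client TLS context that keeps certificate and hostname verification on and refuses anything older than TLS 1.2:

```python
import ssl

# Build a client-side TLS context for protecting data in transit.
# create_default_context() already enables certificate and hostname
# verification; we additionally pin the protocol floor to TLS 1.2 so
# legacy SSLv3/TLS 1.0/1.1 handshakes are rejected outright.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Sanity checks: verification stays on, outdated protocols stay off.
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True
print(context.minimum_version.name)
```

A context configured this way can be passed to `http.client`, `urllib.request`, or any socket-wrapping code, so the policy is set once rather than per connection.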
Our data shows that organizations in sectors such as Application/Software (43%), Banking/Finance (46%), Healthcare (32%), and Government (38%) are particularly attuned to this priority, given the highly sensitive nature of the data they handle.
Thirty-six percent of organizations rely on the best practices recommended by third-party organizations like OWASP. Adherence to established guidelines ensures a baseline of security across diverse development environments. However, it also raises questions about the adaptability of these standards in the face of rapidly evolving threats such as the unique security challenges posed by AI-generated code.
For example, a common developer practice is to use “snippets” (small extracts from larger pieces of code) of open source code in software. Regardless of how small the snippet is, users of the software must still comply with any license associated with it. This problem is now exacerbated by the use of AI assistants, which may produce code without reference to its provenance. AI tools trained on public open source codebases could introduce potential IP, copyright, and license issues into the code they produce, particularly if that code is used in proprietary software.
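A first, very rough line of defense is simply noticing when pasted or generated code carries an explicit license declaration. The hypothetical helper below (an illustration, not how commercial snippet analysis works — real tools such as Black Duck's match code fingerprints against a knowledgebase, not just headers) scans source text for SPDX license identifiers so they can be reviewed before merging:

```python
import re

# Hypothetical helper: surface SPDX license identifiers declared in a
# code snippet so their obligations can be reviewed before the snippet
# lands in a proprietary codebase. Header-based detection only catches
# snippets that kept their license comment; it is a simplification.
SPDX_RE = re.compile(r"SPDX-License-Identifier:\s*([\w.+-]+)")

def find_license_ids(source: str) -> list[str]:
    """Return any SPDX license identifiers declared in the source text."""
    return SPDX_RE.findall(source)

snippet = """\
// SPDX-License-Identifier: GPL-3.0-only
int helper(int x) { return x * 2; }
"""
print(find_license_ids(snippet))  # a copyleft identifier warrants legal review
```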
Even one noncompliant license in software can result in legal reviews, freezes in merger and acquisition transactions, loss of intellectual property rights, time-consuming remediation efforts, and delays in getting a product to market.
Black Duck’s 2024 “Open Source Security and Risk Analysis” (OSSRA) report found that over half—53%—of the applications examined contained open source with license conflicts, exposing those applications’ owners to potential IP ownership questions.
The emphasis on automation and ease of test configuration, prioritized by 35% of respondents, underscores the growing integration of security into DevOps processes. Overall, centralization and vendor consolidation in security testing can significantly enhance an organization's ability to protect its digital assets by simplifying management, improving coordination, and potentially reducing costs.
Centralizing security tools allows for a unified management interface, which simplifies the monitoring and configuration of security measures. This reduces the complexity associated with managing multiple disparate systems, facilitates integration at each stage of the pipeline, and ensures that security policies are consistently applied across the organization. With a centralized system, security efforts can be more easily coordinated, reducing the likelihood of gaps or overlaps in security coverage.
One of the most striking findings is the sheer number of security testing tools organizations have in use. A whopping 82% of organizations use between 6 and 20 tools. While this may provide comprehensive coverage, a proliferation of tools also introduces significant complexity in integration, results interpretation, and overall management. It correlates strongly with another key challenge—noise in security testing results.
Sixty percent of respondents report that between 21% and 60% of their security test results are noise, that is, false positives, duplicates, or conflicts. This high level of noise can lead to alert fatigue and inefficient resource allocation.
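One common way teams cut this noise is fingerprint-based deduplication: collapsing findings that describe the same issue before anyone triages them. The sketch below uses made-up field names (not any specific scanner's schema) to show the idea:

```python
# Hypothetical finding records; the field names are illustrative,
# not taken from any particular scanner's output schema.
findings = [
    {"rule": "sql-injection", "file": "app.py", "line": 42},
    {"rule": "sql-injection", "file": "app.py", "line": 42},   # duplicate
    {"rule": "xss", "file": "views.py", "line": 10},
    {"rule": "sql-injection", "file": "app.py", "line": 42},   # duplicate
]

def dedupe(findings: list[dict]) -> list[dict]:
    """Collapse findings that share a fingerprint (rule + file + line)."""
    seen, unique = set(), []
    for f in findings:
        fp = (f["rule"], f["file"], f["line"])
        if fp not in seen:
            seen.add(fp)
            unique.append(f)
    return unique

unique = dedupe(findings)
print(len(findings), "raw ->", len(unique), "unique")  # 4 raw -> 2 unique
```

In practice the fingerprint would also normalize paths and tolerate small line drift between scans, but even this naive version removes exact duplicates across tools.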
Over 90% of organizations report that they are using AI tools in some capacity for software development. This rapid adoption presents both opportunities for enhanced productivity and new challenges in securing AI-generated code. While 85% of respondents believe they have measures in place to address AI-related challenges, only 24% are "very confident" in their testing of AI-generated code.
Even though their development teams are rapidly adopting AI tools, our survey results show that many organizations are struggling to keep pace with that adoption and are still in the process of putting policies and tools into place to manage the unique challenges posed by AI-generated code.
Figure 1: Developers’ AI usage, permitted or not, correlated against moderate to high confidence in security controls (Black Duck “2024 Global State of DevSecOps” report)
In Figure 1, the graph farthest to the left shows that the fewer than 5% of respondents forbidding the use of AI tools altogether report slight or no confidence in their security preparedness, with nearly 42% of this group saying it is not a priority. Their choice to disallow AI-enabled development may stem from this lagging organizational approach to securing AI-generated code.
The graph farthest to the right highlights the 21% of respondents with a greater exposure to risk, where automated testing of AI-generated code is a notably lower priority despite an awareness of the use of AI-assisted development.
The graph second from the right illustrates a seemingly phased adoption of AI-enabled development and security controls in 43% of respondents, with limited permission being granted, perhaps based upon a slight confidence in preparedness.
Most concerning is the graph second from left, which shows that 27% of respondents have some development teams using AI with permission, despite a clear lack of confidence in their preparations to mitigate risks.
Despite advancements in tools and processes, tension remains between thorough security testing and the need for development speed. Eighty-six percent of respondents feel that security testing slows down development to some degree, ranging from slightly to severely. The plurality (43%) say that testing moderately slows down development, one-quarter say it slightly slows development and delivery, and another 18% say it severely slows the development life cycle.
More insight can be gained by looking at how respondents add software projects to the security testing queue, and whether that process itself impedes development and delivery pipelines. Of the respondents who report that security testing severely slows down their pipelines, 33% manage their test queues entirely manually, compared with 17% who manage them entirely through automation.
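The difference between those two groups is essentially manual curation versus event-driven scheduling. The sketch below (names and events are hypothetical) shows the automated pattern: a CI webhook enqueues a scan on every merge to a protected branch, so no one has to remember to add the project to the queue:

```python
import queue

# Hypothetical event-driven scan scheduling: merges to protected branches
# enqueue a security scan automatically, replacing a manually curated queue.
scan_queue: "queue.Queue[str]" = queue.Queue()

def on_merge(project: str, branch: str) -> None:
    """CI webhook handler: schedule a scan for merges to protected branches."""
    if branch in {"main", "release"}:
        scan_queue.put(project)

# Simulated merge events.
on_merge("payments-service", "main")
on_merge("payments-service", "feature/tmp")  # short-lived branch: not scanned
on_merge("auth-service", "release")

queued = []
while not scan_queue.empty():
    queued.append(scan_queue.get())
print(queued)  # ['payments-service', 'auth-service']
```

The branch filter is one simple policy knob; real pipelines typically also throttle by change size or time since the last scan to keep the queue from becoming the new bottleneck.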
These statistics underscore the ongoing challenge of integrating security seamlessly into fast-paced development cycles without becoming a bottleneck.
The 2024 DevSecOps landscape is characterized by rapid AI adoption, a proliferation of tools, and an ongoing struggle to balance thorough security practices with development speed. While there's a clear trend toward automation and integration of security into development processes, many organizations are still grappling with noise in security results and the persistence of manual processes that could be streamlined through automation.
Moving forward, the most successful organizations will likely be those that can effectively streamline their tool stacks, leverage AI responsibly, reduce noise in security testing, and foster closer collaboration between security, development, and operations teams. The DevSecOps journey is far from over, but the path ahead is becoming clearer.
- This blog post was reviewed by Steven Zimmerman.