Is AI-generated code secure? Maybe. Maybe not.

Patrick Carey

Jun 12, 2024 / 4 min read

Generative AI has emerged as the next big thing that will transform the way we build software. Its impact will be as significant as open source, mobile devices, cloud computing, and indeed the internet itself. We’re already seeing that impact, and according to the recent Gartner Hype Cycle™ for Artificial Intelligence, AI may ultimately be able to automate as much as 30% of the work done by developers.

AI coding assistants like GitHub Copilot can be a considerable force multiplier for programmers. Early analysis by GitHub showed that use of Copilot could increase overall productivity by 50%, deployments by 25%, code commits by 45%, and merge requests by 35%. GitHub also found that use of Copilot increased quality through faster unit testing, while reducing code errors and the number of merge conflicts. It also increased overall developer satisfaction, and its conversational interface improved accessibility.

That developers are eager to adopt AI coding assistants isn’t a huge surprise. They’ve been using IDEs with autocomplete for the last 20 years. Given that, who wouldn’t want to write a few lines of code and let AI finish the job?


Do AI coding assistants write better, more secure code?

While the potential productivity gains of AI coding assistants may be irresistible for developers, that doesn’t mean teams get a free lunch. AI tools are improving rapidly, but a number of risks remain. The large language models (LLMs) these tools are built on are trained on millions of lines of publicly available code. But what code? Good code? Bad code? The answer is both, and as a result, these tools are prone to

  • Generating code that contains bugs and/or security defects
  • Generating code it “thinks” is correct but isn’t

This doesn’t mean that AI can’t generate good code. Studies analyzing Copilot show that, in general, it does well at avoiding certain types of security weaknesses (CWEs), including

  • CWE 787: Out-of-Bounds Write
  • CWE 79: Cross-Site Scripting
  • CWE 416: Use After Free
  • CWE 125: Out-of-Bounds Read
  • CWE 190: Integer Overflow
  • CWE 119: Improper Restriction of Operations within the Bounds of a Memory Buffer

These defects are often easier to detect because they are localized, language-level coding mistakes. Other, more complicated security defects are another story: Copilot was less effective at avoiding vulnerabilities that arise from the way an application interacts with data and external inputs (see the sketch after this list). These include

  • CWE 20: Improper Input Validation
  • CWE 502: Deserialization of Untrusted Data
  • CWE 78: OS Command Injection
  • CWE 22: Path Traversal
  • CWE 434: Unrestricted Upload of File with Dangerous Type
  • CWE 522: Insufficiently Protected Credentials
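
To make that second group concrete, here is a minimal, hypothetical Python sketch of CWE 78 (OS command injection) next to a safer alternative. The ping scenario, function names, and validation regex are illustrative assumptions, not code taken from Copilot or from the studies cited in this post.

```python
import re
import subprocess

# Hypothetical illustration of CWE 78 (OS command injection), the kind of
# data-handling flaw an assistant might produce when asked to
# "ping a host supplied by the user".

def ping_host_unsafe(host: str) -> int:
    # Vulnerable: user input is interpolated into a shell command, so a value
    # like "example.com; rm -rf ~" executes an extra command.
    return subprocess.call(f"ping -c 1 {host}", shell=True)

HOSTNAME_RE = re.compile(r"^[A-Za-z0-9.-]{1,253}$")

def ping_host_safer(host: str) -> int:
    # Safer: validate the input (CWE 20) and pass it as a discrete argument
    # with no shell involved, so metacharacters are never interpreted.
    if not HOSTNAME_RE.match(host):
        raise ValueError(f"invalid hostname: {host!r}")
    return subprocess.call(["ping", "-c", "1", host])
```

Both functions behave identically for benign input, which is exactly why this class of defect is easy to wave through during a quick review of an AI suggestion.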

In addition, studies such as “Asleep at the Keyboard? Assessing the Security of GitHub Copilot’s Code Contributions” from August 2021 showed that while AI coding assistants do indeed speed up development, 40% of the programs they generated were found to have vulnerabilities.

Another report, “Is GitHub’s Copilot as Bad as Humans at Introducing Vulnerabilities in Code?” from August 2023, took a different approach. It compared code generated by GitHub Copilot to code written by humans when both were given the same prompt. Here, GitHub Copilot was found to have produced vulnerable code approximately one-third of the time, while avoiding vulnerabilities approximately 25% of the time. Interestingly, researchers observed that nearly half the time, Copilot generated code that differed significantly from that produced by a human developer.

Finally, a third report, “Security Weaknesses of Copilot Generated Code in GitHub” from October 2023, found that approximately 35% of the Copilot-generated code in GitHub contained vulnerabilities.

Getting the benefits of AI-generated code while avoiding the security risks

Does this mean AI coding assistants are bad and your team should avoid them? Not at all. The reality is that the AI code genie is out of the bottle and it’s not going back. And besides, AI-generated code is probably no more buggy or vulnerable than the code many developers (especially less-experienced ones) produce.

And therein lies the key takeaway. AI-generated code can significantly speed up your development, but you still need to review and verify it as much as, if not more than, code written by your developers.

So, what should your organization be doing to get the benefits of AI-generated code while avoiding the security and quality risks? Don’t simply let developers download and use whatever tool they just read about on Stack Overflow. Instead, make a plan that addresses these three key areas.

  • Establish clear rules and guidelines: Do your homework and define clear rules for how AI tools may be used in development, considering the impacts on productivity, security, and intellectual property protection.

  • Vet AI coding assistant tools before you use them: Carefully vet each AI coding assistant to ensure it complies with organizational policies and standards. How does the tool vendor protect your IP? How transparent are they about the data their LLM is trained on?

  • Implement rigorous verification processes: Validate the security and quality of AI-generated code with rigorous verification processes, including static analysis (see the sketch below).
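
As one way to make that last point actionable, the following is a minimal sketch that assumes a Python codebase and the open source Bandit static analyzer (`pip install bandit`). The script, paths, and gating policy are illustrative assumptions, not a prescribed workflow; any static analysis tool your organization has vetted can fill the same role.

```python
import subprocess
import sys

# Minimal sketch of a verification gate for changed or AI-generated code,
# assuming Python sources and the Bandit static analyzer on the PATH.

def scan(paths: list[str]) -> int:
    """Run Bandit recursively over the given paths and return its exit code
    (non-zero when findings are reported)."""
    result = subprocess.run(["bandit", "-q", "-r", *paths])
    return result.returncode

if __name__ == "__main__":
    # Gate the merge: fail the pipeline if the scan reports findings, so
    # AI-assisted changes get the same scrutiny as human-written ones.
    sys.exit(scan(sys.argv[1:] or ["src/"]))
```

Wired into CI as a required check, a gate like this treats AI-assisted commits no differently from any other change.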

Embracing the inevitable

As AI continues to reshape the landscape of software development, organizations must strike a delicate balance between innovation and risk mitigation. By adopting proactive governance measures and adhering to best practices, your organization can harness the power of AI-generated code while safeguarding your intellectual property and ensuring the integrity of your software projects. As we venture further into the realm of AI-driven development, vigilance and strategic planning will be key to navigating the evolving challenges and opportunities that lie ahead.

How Black Duck can help

Black Duck is helping enterprises produce more-secure software at the speed their business demands by combining the power of our market-leading AppSec engines with generative AI, so developers and security teams can ship secure software faster and deliver the innovation your business needs.

 

Learn more about managing the risks of AI
