Are you making software security a requirement?

Jamie Boote

Jul 21, 2020 / 7 min read

What are software security requirements?

Have you ever heard the old saying “You get what you get and you don’t get upset”? While that may apply to after-school snacks and birthday presents, it shouldn’t be the case for software security. Software owners don’t just accept any new software features that are deployed; features must go through a strategic process of critique, justification, and analysis before being deployed. Your teams should treat security with the same attention to detail. After all, secure software doesn’t just happen out of nowhere—it has to be a requirement of the strategic development process. To deploy secure software effectively, you need clear, consistent, testable, and measurable software security requirements.

Why do I need software security requirements?

Traditionally, requirements define what something can do or be. A hammer needs to drive nails. A door lock needs to keep a door closed until it’s unlocked with a specific key. A car needs to move travelers from point A to point B along the nation’s roads, and it needs to run on modern gasoline formulations. These types of requirements work fine for physical objects but fall short for software.

Additionally, people can use these objects for something other than their intended purpose or circumvent their purposes entirely. For instance, you can use a hammer to break a window, you can pick a door lock, and you can use a car to transport stolen goods. Similarly, software can be abused or made vulnerable. The key difference is that GM isn’t liable when its cars are used as getaway vehicles. However, if someone hijacks your software’s capabilities and permissions, you (as the software owner) are the one who suffers.

Security vulnerabilities allow software to be abused in ways that the developers never intended. Imagine being able to design a hammer that could only hammer nails and nothing else. By building robust software security requirements, you can lock down what your software does so that it can be used only as intended.

Fortunately, building software that is immune to the OWASP Top 10 is easier than building a hammer that turns to marshmallows when used to hit anything but nails.

How do I create security requirements?

A security requirement is a goal set out for an application at its inception. Every application fits a need or a requirement. For example, an application might need to allow customers to perform actions without calling customer service. Just as you lay out those actions and outcomes as goals for the final application, you must include the security goals.

A software security requirement is not a magic wand that you can wave at an application and say, “Thou shalt not be compromised by hackers,” any more than a New Year’s resolution is a magic wand that you can wave at yourself to lose weight. Just like a resolution to lose weight, being vague is a recipe for failure. How much weight? How will you lose it? Will you exercise, diet, or both? What milestones will you put out there?

In security, the same types of questions exist. What kinds of vulnerabilities are you looking to prevent? How will you measure whether your requirement is met? What preventative measures will you take to ensure that vulnerabilities aren’t built into the code itself?

When building a software security requirement, be specific about the kinds of vulnerabilities to prevent. Take this requirement example: “[Application X] shall not execute a command embedded in data provided by users that forces the application to manipulate the database tables in unintended ways.” This is a fancy way of saying that the application should not be vulnerable to SQL injection attacks. You can prevent these attacks with a combination of rejecting or scrubbing bad input from the user, using parameterized queries that flag user-supplied data as data rather than as commands to be executed, and sanitizing the output of database calls so bad data can’t attack functionality down the line. You can then test this requirement with specific kinds of software tests, both on the source code and on the compiled application.
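As an illustration of the parameterized-query piece, here is a minimal sketch using Python’s built-in sqlite3 module; the users table and find_user function are hypothetical:

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # The "?" placeholder makes the driver treat the value strictly as data,
    # so user-supplied input is never executed as SQL.
    cur = conn.execute(
        "SELECT id, username FROM users WHERE username = ?",
        (username,),
    )
    return cur.fetchone()

# Vulnerable anti-pattern, for contrast. Never build queries with string
# formatting: conn.execute(f"SELECT ... WHERE username = '{username}'")
```

Most database drivers offer an equivalent placeholder mechanism, so a requirement can mandate this approach regardless of the stack.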

Requirements for your requirements

To build good requirements, make sure that you’re answering questions about your requirements. A software security requirement should be much like a functionality requirement; it shouldn’t be vague or unattainable. Anticipate developers’ questions and answer them ahead of time. Here’s how:

  • Is this testable? Can we test this requirement in the final application? “Be secure” is not a testable requirement. “Encode all user-supplied output” is (see the sketch after this list).
  • Is this measurable? When we test for this, can we determine coverage and effectiveness?
  • Is this complete? Are we forgetting something? Are we mandating checks on user-supplied data sent to databases but not to logs?
  • Is this clear? Will the people responsible for designing, implementing, testing, and delivering on this requirement understand its intent?
  • Is this unambiguous? Could someone interpret this requirement in any other way?
  • Are these requirements consistent? Are we approaching each security requirement in the same way so that security measures are applied consistently across the board?
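To make the testability point concrete, here is a minimal sketch of an automated check for an “encode all user-supplied output” requirement; render_comment is a hypothetical stand-in for your application’s rendering code:

```python
import html
import unittest

def render_comment(user_input: str) -> str:
    # HTML-encode user-supplied data before it reaches the page.
    return f"<p>{html.escape(user_input)}</p>"

class TestOutputEncoding(unittest.TestCase):
    def test_script_tags_are_encoded(self):
        rendered = render_comment('<script>alert("xss")</script>')
        # The requirement is met only if raw markup never survives encoding.
        self.assertNotIn("<script>", rendered)
        self.assertIn("&lt;script&gt;", rendered)

if __name__ == "__main__":
    unittest.main()
```

A test like this can run on every build, which also makes the requirement measurable: you can count how many output paths have an equivalent check.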

When building a requirement, remember that it is a goal that someone must achieve. Designers and developers can’t meet the security goals for an application unless you create specific and achievable requirements.

Types of security requirements

If you’re entrenched in the requirements or contracting world, you’re already aware of the basic kinds of requirements: functional, nonfunctional, and derived. Software security requirements fall into the same categories. Just like performance requirements define what a system has to do and be to perform according to specifications, security requirements define what a system has to do and be to perform securely.

When defining nonsecurity functional requirements, you see statements such as “If the scan button is pressed, the lasers shall activate and scan for a barcode.” That is what a barcode scanner needs to do. Likewise, a functional security requirement describes something a system has to do to enforce security. For example: “The cashier must log in with a magnetic stripe card and PIN before the cash register is ready to process sales.”
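As a sketch of how that requirement might translate into code, assume a hypothetical CashRegister class (an illustration, not a real point-of-sale API):

```python
class AuthenticationError(Exception):
    pass

class CashRegister:
    """Hypothetical sketch: sales are blocked until a cashier logs in."""

    def __init__(self) -> None:
        self._authenticated = False

    def _credentials_valid(self, card_id: str, pin: str) -> bool:
        # Placeholder check; a real system would verify against a secure store.
        return card_id == "CASHIER-042" and pin == "1234"

    def log_in(self, card_id: str, pin: str) -> None:
        if not self._credentials_valid(card_id, pin):
            raise AuthenticationError("invalid card or PIN")
        self._authenticated = True

    def process_sale(self, amount_cents: int) -> None:
        # The requirement, enforced in code: no sales before login.
        if not self._authenticated:
            raise AuthenticationError("cashier must log in first")
        print(f"sale recorded: {amount_cents} cents")
```

Because the behavior is functional, it can be tested directly: call process_sale before log_in and assert that it fails.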

Functional requirements describe what a system has to do, so functional security requirements describe functional behavior that enforces security. They can be directly tested and observed. Requirements related to access control, data integrity, authentication, and incorrect-password lockouts fall into this category.

Nonfunctional requirements describe what a system has to be. They address qualities such as auditability and uptime. A nonfunctional security requirement is a statement such as “Audit logs shall be verbose enough to support forensics.” Verbose logging isn’t a piece of user-facing functionality, but it supports auditability requirements from any regulations that apply.
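The requirement itself should pin down what “verbose enough” means. As a sketch using Python’s standard logging module, an audit entry might capture who did what, when, from where, and with what outcome:

```python
import logging

logging.basicConfig(
    format="%(asctime)s %(levelname)s %(message)s",
    level=logging.INFO,
)
audit_log = logging.getLogger("audit")

def record_login(user_id: str, source_ip: str, success: bool) -> None:
    # Capture actor, action, origin, and outcome so an investigator can
    # reconstruct events after the fact.
    audit_log.info(
        "event=login user=%s ip=%s success=%s", user_id, source_ip, success
    )
```

The field list here is an assumption for illustration; your regulations and forensics team define the real minimum.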

Derived requirements are inspired by the functional and nonfunctional requirements. For example, if a system has a user ID and PIN functional requirement, a derived requirement might define the number of incorrect PIN guesses allowed before an account is locked out. For audit logs, a derived requirement might protect the integrity of the logs by, for example, preventing log injection.
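Here is a minimal sketch of the lockout half of that derived requirement, assuming an illustrative three-attempt limit:

```python
import hmac

MAX_ATTEMPTS = 3  # illustrative limit derived from the login requirement

class Account:
    def __init__(self, pin: str) -> None:
        self._pin = pin  # a real system stores a salted hash, not the raw PIN
        self._failed_attempts = 0
        self.locked = False

    def check_pin(self, guess: str) -> bool:
        if self.locked:
            return False
        if hmac.compare_digest(guess, self._pin):  # constant-time comparison
            self._failed_attempts = 0
            return True
        self._failed_attempts += 1
        if self._failed_attempts >= MAX_ATTEMPTS:
            self.locked = True  # derived requirement: lock after repeated failures
        return False
```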

Derived requirements are tricky because they stem from abuse cases. Requirements designers must think not only like a user and a customer but also like an attacker. Every bit of functionality given to users is something an attacker could abuse: log-in functionality invites password-guessing attempts, file uploads could open a system up to hosting malware, and accepting text could open the door to cross-site scripting or SQL injection.

Making requirements

Software security requirements can come from many sources in the requirements and early design phases. When you’re defining functionality, you must define it securely or provide supporting requirements to ensure that the business logic is secure. You should tailor generic guidance from industry best practices and regulatory requirements to meet specific application requirements.

Abuse cases are one way to think like an attacker. Designers flip a use case on its head and analyze how the functionality could be abused. If a user is allowed to generate reports with sensitive data, how might an unauthorized user gain access to those reports and their sensitive data? Abuse cases are often answered by industry best practices, which you can use to build requirements for how the application handles access to privileged data.

Software security requirements can also come from an analysis of the design via architecture risk analysis. If a web application uses a specific framework or language, you’ll need to apply industry knowledge of attack patterns and vulnerabilities. If a framework prevents cross-site scripting in some situations and not others, you’ll need to define a requirement that speaks to how the developers will prevent cross-site scripting in insecure situations.
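For example, suppose a hypothetical framework auto-escapes values placed in HTML element content but not values interpolated into inline scripts. The derived requirement might mandate JSON-encoding in the script context, along the lines of this sketch:

```python
import html
import json

def render_profile(username: str) -> str:
    # HTML element context: entity-encode the value.
    safe_name = html.escape(username)
    # Inline-script context: JSON-encode, then escape "<" so a value like
    # "</script>" can't close the script block early.
    js_value = json.dumps(username).replace("<", "\\u003c")
    return (
        f"<p>Hello, {safe_name}</p>\n"
        f"<script>const user = {js_value};</script>"
    )
```

The point isn’t this particular encoding scheme; it’s that the requirement names the exact situations the framework leaves uncovered.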

Every security requirement should address a specific security need, so it’s essential to know about the vulnerabilities that could exist in an application. Generic guidance and knowledge are not enough. Specific security requirements will arise from specific application requirements.

What can requirements do for me?

It doesn’t matter whether you build software in-house or outsource your software to third-party vendors; building sound security requirements can benefit you. By defining your security requirements early, you can spare yourself nasty surprises later. Sound security requirements help internally by providing a clear roadmap for developers. They also help with external regulatory requirements. Implementing measures to keep software from getting hacked is a good strategy, and security requirements are a fantastic start to being happy with what you get.

The best time to plant an oak tree was 20 years ago. The next best time is now.

—Ancient proverb

Build your software security requirements early and sit in the shade of securely built software later.
