Secure Coding Best Practices

In this two-part series, Matthew Butler covers some of his best practices for developing secure systems, from design to testing. In this first blog, he covers the basics of the threat landscape and development best practices. In the second blog [Secure Architecture and Testing Best Practices], Butler covers architecture design choices and testing strategies. This material comes from his new book, Exploiting Modern C++: Writing Secure Code for an Insecure World, due out later this year.

There are three lies we tell ourselves when it comes to enterprise security:

#1 We have perimeter security.

While this is true, every company that’s been hacked in the past 20 years has also had perimeter security. Equifax had perimeter security. Home Depot had perimeter security. That company-you’ve-never-heard-of had perimeter security. The problem with perimeter security is that the other side has the best perimeter security other people’s money can buy, and they use it to figure out how to penetrate yours. And then there’s forgetting to patch your systems…

The truth is we’ve lost the battle to keep the enemy out. Now the fight is about stopping exfiltration.

#2 It’s been code reviewed and tested.

This is also likely to be true. It’s also irrelevant. Most engineers are trained to develop working software, not secure working software. They understand algorithms, data structures, the finer points of Agile, and the language of their choice. But ask any developer if they know what to look for during a security code review and you’re likely to get a blank look. Ask any quality engineer if they know how to find and execute a SQL injection attack on a running system and you’ll likely hear, “a SQL what?”

These are all highly trained, seasoned, dedicated professionals. But we send them into battle unarmed against a heavily armed enemy that knows how to steal, kill, and destroy.

#3 We’re too big, too small, too something to be a target.

No, not really. Large companies have valuable technology, which makes them a target. Small companies have weak security, which makes them a target. Everyone has cash, which makes them a target, and attacks scale really well because software scales really well.

If your company is in any way connected to the outside world, you’re a target tonight.

Why Secure Coding Is Important

When we talk about penetrations into a system, we need to define three terms: attack vector, attack surface, and critical system. An attack vector is the way in which a system is attacked. A virus is an attack vector. Injecting malicious data into an interface is an attack vector. An attack surface is the part of a system being exploited. Interprocess communication (IPC) interfaces are often unprotected and are prime targets for data injection attacks. Websites that accept data and use SQL are often vulnerable to SQL injection attacks, which expose sensitive information.
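
To make the SQL injection example concrete, here is a minimal sketch using SQLite’s C API; the table, column, and function names are illustrative, not from the original post:

```cpp
#include <sqlite3.h>
#include <iostream>
#include <string>

// UNSAFE: the untrusted input is spliced directly into the SQL text.
// An input like "x' OR '1'='1" rewrites the query itself.
void find_user_unsafe(sqlite3* db, const std::string& name) {
    std::string sql = "SELECT id FROM users WHERE name = '" + name + "';";
    sqlite3_exec(db, sql.c_str(), nullptr, nullptr, nullptr);
}

// SAFER: a prepared statement keeps the SQL fixed and binds the
// untrusted input as pure data, so it cannot change the query's meaning.
void find_user_safe(sqlite3* db, const std::string& name) {
    sqlite3_stmt* stmt = nullptr;
    if (sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?;",
                           -1, &stmt, nullptr) != SQLITE_OK) {
        return;
    }
    sqlite3_bind_text(stmt, 1, name.c_str(), -1, SQLITE_TRANSIENT);
    while (sqlite3_step(stmt) == SQLITE_ROW) {
        std::cout << sqlite3_column_int(stmt, 0) << '\n';
    }
    sqlite3_finalize(stmt);
}
```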

A critical system is a system that has the job of protecting whatever we’re protecting. This can be intercontinental ballistic missiles, the national power grid, personally identifiable information (PII), cash, intellectual property, or the plans to the Death Star. But a critical system is also any other system capable of interacting with that system. This can be unrelated processes in the OS, hardware such as printers, or external unrelated systems capable of touching that system. When we look at the security of a system, we have to consider everything that touches it no matter how trivial or seemingly low risk.

And when we think about security, we usually stop at the perimeter. Everything inside the perimeter is considered safe; everything outside is considered dangerous. But we have to assume that the perimeter will be breached, because it always is, which means there need to be layers of security behind the perimeter to slow the attackers down and give us time to react. This is why we practice Defense in Depth. Each layer that’s breached leads to another layer of security. It’s turtles all the way down.

Security is built in layers and the last layer is the code itself.

So, what are some of the best practices for secure software development?

Maintain Situational Awareness

Maintaining situational awareness is about validation and verification. We validate the data we’re operating on before we operate on it, and we verify the identity of whoever is sending us that data. All Denial-of-Service (DoS) attacks are a failure to validate the data we’re operating on, and all penetrations are a failure to verify who we’re doing business with.
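
As a minimal sketch of validating before operating (the message format and the size limit here are hypothetical), consider an untrusted length-prefixed message:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <optional>
#include <vector>

// A hypothetical wire format: [4-byte length][payload].
constexpr std::size_t kMaxPayload = 64 * 1024;  // hypothetical policy limit

// Validate before use: the buffer must hold the header, the claimed
// length must be within policy, and the payload must actually be there.
std::optional<std::vector<std::uint8_t>>
parse_message(const std::uint8_t* data, std::size_t size) {
    if (data == nullptr || size < 4) return std::nullopt;
    std::uint32_t len = 0;
    std::memcpy(&len, data, 4);                 // avoid unaligned reads
    if (len > kMaxPayload) return std::nullopt; // reject absurd claims
    if (size - 4 < len) return std::nullopt;    // claimed length exceeds data
    return std::vector<std::uint8_t>(data + 4, data + 4 + len);
}
```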

A buffer overflow is a common exploit that takes advantage of a loss of situational awareness. The exploit works when we are given more data than can be stored in a fixed-size buffer. If the incoming data isn’t checked against the size of the buffer, it overflows the buffer and can overwrite the stack frame, including the return address, allowing the execution of arbitrary code. Most operating systems use Address Space Layout Randomization (ASLR) to relocate vital libraries in memory, stack canaries to protect stack frames from tampering, and tamper-resistant memory to guard against malicious changes. These safeguards are not perfect, though, and there are ways to work around them.
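
Here is a minimal sketch of the failure and two ways to avoid it (the function names are illustrative):

```cpp
#include <cstdio>
#include <cstring>
#include <string>

// UNSAFE: strcpy copies until it finds a NUL terminator and has no idea
// how big 'buf' is. Input longer than 63 bytes smashes the stack frame.
void copy_unsafe(const char* untrusted) {
    char buf[64];
    std::strcpy(buf, untrusted);  // no bounds check
    std::puts(buf);
}

// SAFER: bound the copy to the buffer's size.
void copy_safe(const char* untrusted) {
    char buf[64];
    std::snprintf(buf, sizeof buf, "%s", untrusted);  // truncates, never overflows
    std::puts(buf);
}

// SAFEST in C++: let std::string own the sizing problem entirely.
void copy_modern(const char* untrusted) {
    std::string s(untrusted);
    std::puts(s.c_str());
}
```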

This is why maintaining situational awareness in your software is your best defense.

Study the Standard

Every programming language from C++ to Rust to JavaScript to C# has a standard that defines the language. Writing secure code begins with understanding the language, and this becomes more important as the language increases in complexity over time. For example, the C++ standard is fifteen hundred pages long and has almost three hundred instances of what is known as undefined behavior. Undefined behavior is where the standard places no requirements on what a program does; the compiler is free to assume it never happens, which can produce surprising and exploitable results. Those instances are little land mines for the uninitiated.
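
One illustrative sketch: signed integer overflow is undefined, so a compiler is entitled to optimize away an after-the-fact overflow check:

```cpp
#include <limits>

// BROKEN: if x == INT_MAX, 'x + 1' is signed overflow, which is
// undefined behavior. The compiler may assume overflow never happens
// and fold this entire function to 'return false', silently removing
// the guard you thought you wrote.
bool will_overflow_broken(int x) {
    return x + 1 < x;
}

// CORRECT: test against the limit before doing the arithmetic.
bool will_overflow(int x) {
    return x == std::numeric_limits<int>::max();
}
```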

You may be working with a language that is straightforward today. In 1990, C++ was a very straightforward language. In the intervening years, its surface area complexity has grown dramatically. This makes C++ one of the most challenging languages to master, even for engineers with decades of experience, and code that looks correct can produce unexpected results when compiled, a significant source of security vulnerabilities.

It is in the nature of all programming languages to begin simply and then grow rapidly in surface area complexity as designers seek to accommodate everyone’s favorite feature. This gives developers opportunities to unintentionally create security vulnerabilities. Your language of choice is no different in this respect, and knowing its standard will help you avoid creating security vulnerabilities.

Warnings Are Errors

In the same way that pain in our bodies tells us something is broken, warnings in our code tell us something is inconsistent. Warnings are the compiler’s way of telling you that, while it can compile your code, it may not work the way you expect. Warnings are future vulnerabilities written today.

For most mature systems, simply failing the build by turning all warnings into errors would be impractical, but there are ways to deal with your warning backlog. Enforcing discipline in the development team so that commits cannot increase the warning count, and adding stories to your technical-debt backlog to eliminate a specific percentage of warnings with each release, are two ways of getting control of this hidden threat.
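
For new code, the end state looks something like this minimal sketch (flags shown for GCC/Clang and MSVC; the file names are placeholders):

```sh
# GCC/Clang: enable a broad set of warnings and promote them to errors.
g++ -Wall -Wextra -Werror -o app main.cpp

# MSVC equivalent: warning level 4, warnings treated as errors.
cl /W4 /WX main.cpp
```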

Complexity Is the Enemy

As engineers, we love complexity. Complexity makes us feel powerful; it makes us feel like we’ve accomplished something, conquered a hard problem. And yet complexity is one of the greatest sources of security vulnerabilities and architectural failures. In aviation, the skin of an aircraft is constantly expanding and contracting. As this happens, the metal begins to fatigue. If you were to look at the energy patterns along the skin, you would see that the energy concentrates at the areas of greatest stress, the places where the fatigue is greatest. This only makes the problem worse.

As with metal fatigue, security vulnerabilities also concentrate in areas of greatest complexity. It’s not that they move. It’s that the security vulnerabilities are easier to spot and eradicate in the areas of your design that are the simplest. What is left is in the areas of greatest complexity. 

Consider, for example, Dirty COW (CVE-2016-5195), a vulnerability in the Linux kernel introduced by Linus Torvalds while he was trying to fix another bug. Dirty COW was a copy-on-write vulnerability that allowed unprivileged attackers to write to protected files as root, a classic privilege escalation attack. It lay undiscovered for nine years and was actively exploited before it was found and fixed.

So how did Dirty COW go undetected? The copy-on-write function is hundreds of lines long and a highly complicated feature. Complex designs are hard to reason about, and a fix for one defect can lead to others. In this case, the engineer knew the code well but failed to understand the implications of the fix due to the code’s complexity. The reviewers missed the bug for the same reason and none of the testing caught the problem because of the nature of the vulnerability and the complexity of the feature.

Occam’s Razor says that, “All other things being equal, the simplest solution is usually the right one.” Practicing simplicity in your designs, architectures, and code goes a long way to helping you build secure systems and eliminate vulnerabilities.

Grow Bug Bounty Hunters

Few engineers know how to test their code for security. They’re just not trained that way. But they have an intimate knowledge of their systems and how they’re put together. Pen testers, on the other hand, know how to test systems for security but they lack the intimate knowledge of the systems and their construction. Training developers to be internal pen testers, or bug bounty hunters, is an invaluable tool in dealing with security vulnerabilities before they are released.

Once engineers are trained in what to look for, they have the combined knowledge of testing for security and an insider’s knowledge of the code. Adding a financial reward for finding security vulnerabilities gives them an incentive to increase their skills, which, in turn, teaches them how to make safe design and coding choices.

In the end, the money spent in rewarding engineers for finding exploitable vulnerabilities is far outweighed by the savings from having found and fixed vulnerabilities that never made it into the wild.

Building Secure Software

We live in a zero-trust world where the last line of defense against tomorrow’s vulnerabilities is the code we design, write, test, and deploy today. In this blog, we’ve covered some of the best practices for writing secure code. In part two of this series, we look at secure coding best practices from an architecture design and testing perspective. 

If you’re interested in learning more about secure design and testing, check out part two of this series, Secure Architecture and Testing Best Practices.

About the Author

Matthew Butler is an international speaker, trainer, and security researcher who has been writing software professionally since 1990. He has spent the past three decades as a systems architect and software engineer developing systems for network and applications security, real-time data analysis, and safety critical systems. He is a member of the ISO C++ Standards Committee and is focused on core language features, software vulnerabilities, and safety critical systems. His first book, "Exploiting Modern C++: Writing Secure Software For An Insecure World," is due out in 2020.
He can be reached at: mbutler@laurellye.com