SP 800-14: Generally Accepted Principles and Practices for Securing Information Technology Systems
This publication is a companion to SP 800-12 that provides a baseline of requirements for establishing an IT security program.
SP 800-18: Guide for Developing Security Plans for Federal Information Systems
This publication provides a framework for developing a security plan. It covers asset classification based on CIA, a list of responsibilities and a template to jump start the process.
SP 800-27: Engineering Principles for Information Technology Security
This publication provides various IT security principles that can be used to establish basic security. Many principles are people-oriented while others deal with processes.
SP 800-30: Risk Management Guide for Information Technology Systems
SP 800-30 starts with an overview of risk management and a list of the critical success factors necessary for an effective program. It then covers how to integrate the program into an SDLC along with all required roles and responsibilities and wraps up with a discussion of the steps to take at the end of the risk management process. A nine-step methodology is presented to help with conducting a risk assessment of IT systems. Figure 136 illustrates points at which this standard suggests action.
Figure 136: Risk Mitigation Action Points
SP 800-61: Computer Security Incident Handling Guide
Whereas threats used to be short lived and easy to notice, modern threats require a more sophisticated approach when handling an incident, and this publication provides a useful guide on how to achieve that capability. It is useful for both new and experienced incident response teams.
SP 800-64: Security Considerations in the Information Systems Development Life Cycle
SP 800-64 is geared specifically for building security into the SDLC from the very beginning steps, targeted for just about all possible roles. One of the benefits of this guideline is that it succinctly states four benefits of implementing security at the earliest stages of a project instead of trying to bolt it on near the end:
1) It is much more cost-effective to take care of vulnerabilities and configuration issues early.
2) It will highlight any design or engineering decision that might require a redesign later.
3) It identifies existing security services that can be shared, reducing the required resources.
4) It can bring executives into the decision-making process to make better go/no-go decisions and to handle risk decisions.
SP 800-64 also helps to apply a security mindset to projects that do not always follow classic SDLC methodologies. For example, supply chain integration, virtualization or SOA services development can often have a life of their own outside of an SDLC process. In these cases, key success factors will require communication and documentation of the various stakeholder relationships.
SP 800-100: Information Security Handbook: A Guide for Managers
This publication is a must-read for just about anyone regardless of their role, as it provides management guidance for developers, architects, HR, operational and acquisition personnel. If you can think of it, it is probably mentioned in this document.
FIPS
Also produced by NIST, the Federal Information Processing Standards, or FIPS, are a set of standards that cover document processing, encryption algorithms and other IT standards for use by non-military government agencies and their contractors. Just like the special publications, FIPS are very commonly adopted by the private sector as well.
FIPS 140: Security Requirements for Cryptographic Modules
This standard documents the requirements that any acceptable cryptographic module will need to meet. It provides four increasing levels – Level 1 through Level 4 – that represent various functional capabilities. Beyond providing details on the secure design and implementation of a module, it also specifies that developers and vendors must document how their module mitigates non-cryptographic attacks such as differential power analysis or TEMPEST.
FIPS 186: Digital Signature Standard (DSS)
DSS specifies a suite of algorithms that can be used to generate a digital signature. Besides detecting unauthorized modifications, digital signatures can also be used to authenticate the identity of the signatory. This document contains guidelines for digital signature generation, verification and validation.
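To make this concrete, here is a minimal sketch of signature generation and verification in Python using the third-party pyca/cryptography package; the choice of ECDSA over the P-256 curve with SHA-256 is simply one of the FIPS 186-approved combinations, picked here for illustration.
```python
# Minimal sketch: generating and verifying a digital signature with ECDSA,
# one of the algorithm suites specified by FIPS 186.
# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Generate a key pair on the NIST P-256 curve.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

message = b"Approve purchase order #1234"

# Signature generation: the signatory signs a hash of the message.
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# Signature verification: anyone holding the public key can detect an
# unauthorized modification and authenticate the signatory.
try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("Signature is valid")
except InvalidSignature:
    print("Signature is invalid or the message was modified")
```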
FIPS 197: Advanced Encryption Standard
This publication replaces the withdrawn FIPS 46-3, which described DES. Since DES was broken and AES was designated as its replacement, FIPS 197 became the official standard.
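As a quick illustration, the following sketch encrypts and decrypts a message with AES using the same third-party pyca/cryptography package; the 256-bit key size and GCM mode are my own choices for the example, not something mandated by FIPS 197 itself.
```python
# Minimal sketch: encrypting and decrypting data with AES (FIPS 197).
# GCM mode is used here so that tampering is detected on decryption.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit AES key
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # a unique nonce per message
plaintext = b"Customer SSN: 123-45-6789"

ciphertext = aesgcm.encrypt(nonce, plaintext, None)
recovered = aesgcm.decrypt(nonce, ciphertext, None)
assert recovered == plaintext
```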
FIPS 201: Personal Identity Verification (PIV) of Federal Employees and Contractors
This publication was created to ensure federal agencies properly verify the identity of federal employees and contractors against a standard set of technical requirements.
ISO Standards
The International Organization for Standardization, or ISO, is an international body dedicated to achieving global adoption of a variety of standards. In this section we will list a number of applicable ISO security standards.
ISO 15408: Common Criteria
This standard provides a common method for evaluating the security of a system.
ISO 21827: Systems Security Engineering Capability Maturity Model (SSE-CMM)
SSE-CMM is an internationally recognized standard that provides guidelines for the security engineering of systems, including all stages of the SDLC. By measuring processes and assigning a maturity level, one can quickly gauge the maturity of an organization and have confidence in its capabilities.
ISO 25000: Software Engineering Product Quality
This standard provides guidance on how to design, develop and deploy quality software products using the Software Product Quality Requirements and Evaluation criteria, or SQuaRE.
ISO 27000: Information Security Management System (ISMS) Overview and Vocabulary
This standard provides a common glossary and definitions to be used when implementing an ISMS. It sets the stage for ISO 27001 and ISO 27002.
ISO 27001: Information Security Management System Requirements
ISO 27001 specifies the requirements for implementing an ISMS as described in ISO 27000.
ISO 27002: Code of Practice for Information Security Management
Taking over where ISO 27001 left off, this standard provides guidelines and principles for implementing an ISMS by defining various controls that can be implemented.
ISO 27005: Information Security Risk Management
This standard is THE place to go when implementing information security risk management. It covers everything from defining scope to monitoring risk in the final product.
ISO 27006: Requirements for Bodies Providing Audit and Certification of Information Security Management Systems
This standard supports certification and accreditation bodies that audit and certify ISMSs. Any organization wishing to perform certifications on other organizations must demonstrate compliance with ISO 27006.
ISO 28000: Specification for Security Management Systems for the Supply Chain
This standard focuses on securing the supply chain when purchasing off-the-shelf components.
Security Testing Methods
Now let’s talk about the various approaches to security testing that we can take. There are two primary types – white box and black box.
Also known as glass box testing or clear box testing, white box testing allows the testing team to have intimate knowledge of how the target system is designed and implemented. This approach leverages a full knowledge assessment, because no information is hidden from the individuals involved. Although it is recommended to start with unit testing, white box testing can be performed at any time after development of a specific component, module or program has completed. Tests should consist of both use cases and misuse cases, and the tester will take the intended design and turn it upside down just like an attacker would. In other words, the tester should purposefully stray from the happy path and make the application very 'sad'.
Just to be clear, white box tests require access to the raw source code so that testing can validate the existence of Trojans, logic bombs, spyware, backdoors and other goodies that a developer may intentionally or accidentally leave behind. No artifacts or documentation is off-limits to a white box tester, and the final output is a report listing defects, flaws and any deviation from the design specs. It may include change requests to fix the discovered issues, as well as recommendations to address security problems. Figure 137 illustrates the overall white box testing process.
Figure 137: White Box Security Testing Flow
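To make the 'sad path' idea concrete, here is a minimal sketch of what misuse-case unit tests might look like; the transfer_funds function and its business rules are hypothetical, invented purely for illustration.
```python
import pytest

# Hypothetical function under test, stubbed here so the sketch is self-contained.
def transfer_funds(source: str, target: str, amount: float) -> None:
    if amount <= 0 or amount > 100_000:
        raise ValueError("invalid amount")
    if source == target:
        raise ValueError("source and target must differ")
    # a real implementation would debit and credit the two accounts here

def test_negative_amount_is_rejected():
    # An attacker could try to reverse the flow of money with a negative amount.
    with pytest.raises(ValueError):
        transfer_funds("A-100", "A-200", -50.00)

def test_transfer_to_same_account_is_rejected():
    # Self-transfers can be abused to probe rounding or race conditions.
    with pytest.raises(ValueError):
        transfer_funds("A-100", "A-100", 10.00)

def test_oversized_amount_is_rejected():
    # Amounts beyond any business limit should never be silently accepted.
    with pytest.raises(ValueError):
        transfer_funds("A-100", "A-200", 10**12)
```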
While white box testing best represents the approach an internal testing team will take, black box testing is the exact opposite: virtually nothing is known about the system beyond what the testing team can discover for themselves, without access to the artifacts and documents that a white box testing team is able to consume. This approach is also known as a zero knowledge assessment for obvious reasons. The term 'black box' comes from how the testing team sees the system – it is essentially a black box that must be exploited with no knowledge of how it works internally. Whereas a white box test examines the internal structure of the system based on documentation, a black box test examines the behavior of the system from the outside. Black box testing can be leveraged at two different times – before deployment and post-deployment. The reason for testing before deployment to a production environment is to identify vulnerabilities that can be fixed while it is still relatively 'cheap' to do so. The purpose of black box testing post-deployment is two-fold:
1) Find vulnerabilities that exist in the production environment.
2) Validate the presence and effectiveness of the security controls built into the environment.
Because a pre-deployment black box test will not ferret out any production environment issues, an environment as close to production as possible should be used. The three most common methodologies used for black box testing are fuzzing, scanning and penetration testing.
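To give a flavor of the first of these, the following sketch is a bare-bones fuzzer that throws unexpected input at a web form and watches for signs of an unhandled error; the target URL and the 'comment' field are placeholders, and the third-party requests package is assumed.
```python
# Minimal black box fuzzing sketch: send malformed input to an HTTP endpoint
# and flag responses that suggest the application lost control (e.g. HTTP 500).
import random
import string
import requests

TARGET = "https://test.example.com/feedback"   # placeholder, not a real endpoint

def random_payload(max_len: int = 200) -> str:
    # Random printable junk, with a few classic troublemakers mixed in.
    junk = "".join(random.choices(string.printable, k=random.randint(1, max_len)))
    return random.choice([junk, "'", "%00", "A" * 10_000, "../../etc/passwd"])

for _ in range(100):
    payload = random_payload()
    response = requests.post(TARGET, data={"comment": payload}, timeout=5)
    if response.status_code >= 500:
        print(f"Possible unhandled error with payload: {payload!r}")
```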
Now, how do you know when to use white box testing, and when to use black box testing? The answer will depend on your goal – let’s take a look at some common reasons for carrying out testing.
If we are faced with a known vulnerability and need to figure out how to fix it, then we will most likely need access to the source code in order to determine the root cause of the vulnerability. That means that white box testing will apply. Or, perhaps we want to ensure that we have tested all of the functionality. In this case we will also need access to the source code, meaning a white box test, so that we can verify the extent of code coverage achieved by our tests.
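As a rough sketch of how that coverage check might be automated, the snippet below drives a test run under the coverage.py package; the 'myapp' package and 'tests/' directory are placeholders.
```python
# Minimal sketch: measuring how much of the code the tests actually exercised,
# using the third-party coverage.py package. 'myapp' is a placeholder package.
import coverage
import pytest

cov = coverage.Coverage(source=["myapp"])
cov.start()

pytest.main(["tests/"])          # run the test suite under measurement

cov.stop()
cov.save()
cov.report(show_missing=True)    # lists the uncovered lines per file
```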
White box testing can often result in a number of false positives and false negatives. A false positive is when we claim to have discovered a vulnerability that really isn’t exploitable. As an example, a source code scanner claims that it found a PII field called ‘Name’ that has not been properly encrypted, when in reality that field contains the name of a company, not a person. Or, it might result in a false negative, where a field that does contain PII named ‘VerifiedIdentifier’ contains a person’s first and last name, but the code scanner didn’t recognize it as PII.
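The naming problem is easier to see with a small, invented example; both the record and the naive scanning rule below exist only for illustration.
```python
import re

# Invented example of why name-based PII detection misfires.
record = {
    "Name": "Acme Logistics Ltd.",       # a company name, not PII
    "VerifiedIdentifier": "Jane Smith",  # actually a person's name, i.e. PII
}

PII_FIELD_PATTERN = re.compile(r"name|ssn|dob|email", re.IGNORECASE)

for field, value in record.items():
    if PII_FIELD_PATTERN.search(field):
        # Fires on 'Name' (false positive) but never on 'VerifiedIdentifier'
        # (false negative), because the rule only looks at field names.
        print(f"Scanner flags '{field}' as unencrypted PII")
```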
Black box testing can also result in false positives and false negatives. An example of a black box false positive might be that we have found a SQL injection vulnerability because entering “’ 1=1” into a text input field causes an error. However, the error turns out to be caused by a validation routine that rejected the input by throwing an exception. While the manner in which the validation was carried out may not be the best, there is in fact no SQL injection risk. An example of a black box false negative is a server with an unprotected port that was missed when the testing team was enumerating all servers they could find.
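Here is a hypothetical sketch of how that black box false positive can arise; the validation routine is invented for illustration and never builds a SQL query at all.
```python
import re

# Hypothetical validation routine; no SQL query is ever built from the input.
SUSPICIOUS = re.compile(r"""['";]|--""")

def validate_comment(text: str) -> str:
    if SUSPICIOUS.search(text):
        # A friendlier design would return a validation message instead of
        # letting this exception bubble up and surface as an error page.
        raise ValueError("Input rejected by validation")
    return text

print(validate_comment("Great product!"))   # accepted

try:
    validate_comment("' 1=1")               # the tester's injection probe
except ValueError:
    print("Error returned - looks like SQL injection to the tester, but it is not")
```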
While many claim that black box testing generates a larger number of false positives and negatives than white box testing, it has been my experience that both approaches do so when using automated scanning tools. However, if humans are involved in white box testing by reading the source code, then it is true that fewer false positives and negatives are produced relative to black box testing.
At some point we will need to start tracking down logical flaws, and in this case white box testing will not be very useful if we only look at source code, as this does not really help us to determine if business rules have been applied across the application. In this case, we will need to also look at internal artifacts such as architectural and design documents when assessing the application, which still falls under white box testing.
Lastly, if we need to validate production resilience and discover configuration issues in that environment, the only real choice is black box testing. Since source code is never deployed, the only capability is to assess the system’s behavior.
Figure 138 provides a summary of everything we have discussed so far.
The real world is not quite as cut and dried as the previous discussion would have you believe. There is a third type of testing, called gray box, which as you might imagine is a combination of white and black box testing. The exact definition of this approach is not clearly spelled out, but in general it involves a high-level knowledge of the internal architecture and design of an application coupled with a user-facing only test plan. User acceptance testing is probably the best example of such an approach, as the user will know the purpose and design of an application but will only test it using the publicly accessible interface. Of course, UAT is meant more to validate functionality than assess security, but a similar approach could be used for security assurance as well.
Another definition of gray box testing is that white box testing is performed early in the life cycle, while black box testing is performed later. Regardless of how you define it, just about every project should employ white, black and gray box testing.
| | White Box | Black Box |
| --- | --- | --- |
| Also known as… | Full knowledge assessment | Zero knowledge assessment |
| Assesses the software’s… | Structure | Behavior |
| Root cause identification | Can identify the exact lines of code or design issue causing the vulnerability | Can analyze only the symptoms of the problem and not necessarily the cause |
| Extent of code coverage possible | Greater; the source code is available for analysis | Limited; not all code paths may be analyzed |
| Number of false negatives and positives | Less; contextual information is available | High; since normal behavior is unknown, expected behavior can also be falsely identified as anomalous |
| Logical flaws detection | High; design and architectural documents are available for review | Less; limited to no design and architectural documentation is available for review |
| Deployment issues identification | Limited; assessment is performed in pre-deployment environments | Greater; assessment can be performed in a pre-deployment as well as a post-deployment production or a production-like simulated environment |
Figure 138: White Box Testing Vs. Black Box Testing
Attack Surface Validation
We’re now moving into what some consider to be the juicy part of securing software – the seedy underside of the hacker world where we have to get into their mind and think like the bad guy in order to protect the good guy. While we call it by a fancy name – validating the attack surface – it really comes down to carrying out penetration testing to see how we can get past whatever security has been put in place.