Sharks in the Moat
Use Case & Misuse Case Modeling
A use case models the intended behavior of a system as documented by the system owner. It includes all applicable actions and events and is a great way to document requirements. Why? Because it removes ambiguity and incompleteness by illustrating exactly what is expected, and if an action or event directly related to the flow is not shown, it can reasonably be assumed not to be a requirement. Having said that, use case modeling is designed to show only the most significant system behavior and cannot replace actual requirement specifications.
As shown in Figure 130, the model identifies actors, intended and abused system behavior, and sequences and relationships between the actors and use cases.
It is best if use cases are documented first, followed by misuse case identification. Misuse cases represent threats to a system and are written from a hostile user's perspective. Both accidental and intentional misuse must be considered, as must both external and internal attackers. Considering misuse cases is an excellent method for eliciting requirements that might not otherwise surface.
There are some common templates that can help with use and misuse case modeling, such as those by Kulak and Guiney, as well as by Cockburn. The Security Quality Requirements Engineering, or SQUARE, methodology, developed by the CERT Division at Carnegie Mellon's Software Engineering Institute, consists of nine steps that produce a list of categorized and prioritized security requirements.
Requirements Traceability Matrix (RTM)
We have discussed multiple approaches to PNE, including use and misuse case modeling, a subject/object matrix, data classification, surveys and policy decomposition. The requirements elicited by these approaches can be collected into a single location called the requirements traceability matrix, or RTM. This is essentially a three-column table with business requirements on the left, functional requirements that address those business requirements in the center, and testing requirements on the right. It can be tailored to include security requirements as well. By using an RTM, we can achieve the following benefits:
Prevent scope creep by ensuring that all functional requirements can be mapped back to a business requirement.
Ensure that the design meets the stated security requirements.
Ensure that the implementation does not deviate from a secure design.
Provide a basis for defining test cases.
The chances of missing security functionality are greatly reduced when an RTM is used. An RTM is also a tremendous help when showing the business owner how security requirements map back to business requirements. Finally, an RTM helps when it comes time to allocate the required resources.
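The three-column structure described above lends itself to a simple data model. The sketch below is purely illustrative – the field names and the scope-creep check are assumptions, not part of any standard RTM format – but it shows how mapping every functional requirement back to a business requirement makes orphaned requirements easy to spot:

```python
# Minimal, illustrative sketch of an RTM row, extended with an optional
# security-requirements column as the text suggests. All names here are
# assumptions for demonstration, not a standard schema.
from dataclasses import dataclass, field

@dataclass
class RtmRow:
    business_req: str                                    # left column
    functional_reqs: list = field(default_factory=list)  # center column
    test_reqs: list = field(default_factory=list)        # right column
    security_reqs: list = field(default_factory=list)    # optional extension

def find_scope_creep(rtm, all_functional_reqs):
    """Return functional requirements that map to no business requirement."""
    mapped = {f for row in rtm for f in row.functional_reqs}
    return [f for f in all_functional_reqs if f not in mapped]

rtm = [RtmRow("BR-1: Users can pay invoices",
              functional_reqs=["FR-1: payment form"],
              test_reqs=["TC-1: submit valid payment"],
              security_reqs=["SR-1: encrypt card data in transit"])]

# FR-9 traces back to no business requirement – a scope-creep candidate.
print(find_scope_creep(rtm, ["FR-1: payment form", "FR-9: dark mode"]))
```

In practice an RTM usually lives in a spreadsheet or requirements tool rather than code; the point is simply that the mapping is mechanical enough to be checked automatically.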
Guidelines for Software Acceptance
After the development team has completed coding, the testing team has verified that the requirements have been met, and the infrastructure team (or DevOps, as the case may be) has prepared for the move to production, we're ready to go, right? Not so fast – we still need the business owner to sign off. Just because they wrote the requirements doesn't mean they will accept the software. Here, we need a process for official acceptance before we can call it done, and this process comprises six categories – functionality, performance, quality, safety, privacy and security.
During this time several things will happen:
The software is verified to meet the requirements.
The software is validated to be operationally complete and secure as expected.
Written approvals are received from the business owner.
Responsibility is transferred from the development team to the owner, support staff and operations team.
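As a rough sketch, the acceptance gate just described can be modeled as a checklist over the six categories, with the business owner's written approval as the final condition. The six category names come from the text; everything else here is an illustrative assumption:

```python
# Hedged sketch of a software-acceptance gate: every one of the six
# categories named in the text must pass, and the business owner must
# have provided written sign-off. No single category can be skipped.
CATEGORIES = ("functionality", "performance", "quality",
              "safety", "privacy", "security")

def ready_to_accept(results: dict, owner_signed_off: bool) -> bool:
    """results maps each category name to True (passed) or False."""
    all_categories_pass = all(results.get(c, False) for c in CATEGORIES)
    return all_categories_pass and owner_signed_off

results = {c: True for c in CATEGORIES}
print(ready_to_accept(results, owner_signed_off=True))   # everything passes

results["security"] = False                              # one failure...
print(ready_to_accept(results, owner_signed_off=True))   # ...blocks acceptance
```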
So far, we’ve been speaking in generalities about software development. But this book is about secure software, so let’s list some of the most important security objectives that must be met before we can pat ourselves on the back and wrap a project up.
The software must be secure by design, default and deployment. This is called SD3 and is crucial to success.
The software must complement existing defense-in-depth protections, not compete against them. As an example, if the pending release requires specific ports to be opened that are not open in the production environment, the attack surface of the release has increased. This must not be allowed unless compensating controls are put in place to address the increased risk.
The software must implement least privilege everywhere.
The software must be irreversible and tamper-proof through the proper level of obfuscation and debugger-detection capabilities. Contractual measures such as a EULA or login banner are useful as deterrents, but they are most certainly not preventative. Even the Digital Millennium Copyright Act, or DMCA, covers reverse engineering but has few teeth, especially in places such as Asia where piracy is rampant. Technical measures to prevent reverse engineering must be implemented.
The software must isolate and protect administrative and security interfaces. Such interfaces must be accessible only to a very small number of people based on roles and access rights, not security through obscurity. Any activity in this area should also be heavily audited.
The software must have non-technical protection mechanisms in-place, such as legal protections and escrow, if applicable, before being considered deployment ready.
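The defense-in-depth point about open ports can be made concrete: a release that requires ports beyond what production already exposes has increased the attack surface. A hypothetical pre-deployment check – the port numbers and function here are illustrative, not a real tool – might look like this:

```python
# Illustrative pre-deployment check: flag any ports a release requires
# that production does not already expose, i.e. an attack-surface increase
# that must be justified by compensating controls before deployment.
def attack_surface_increase(release_ports: set, production_ports: set) -> set:
    """Return ports the release needs that production does not expose."""
    return release_ports - production_ports

prod_open = {443}               # production currently exposes HTTPS only
release_needs = {443, 8080}     # the new build also wants port 8080 open

print(sorted(attack_surface_increase(release_needs, prod_open)))  # [8080]
```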
Now, why is the official acceptance of software so important? For several reasons. When I think about this process, I am reminded of my kids in their earlier years and the running battle to get them to keep their rooms clean. No matter how much direction and encouragement I gave them, their rooms remained a disaster area – you were taking your life into your own hands by walking into such a place, and I swear I could hear feral growling coming from underneath more than one stack of dirty clothes. They would go up to ‘clean’ their room and claim they were done as they skipped out the back door to go play. That is, until I implemented the ‘Dad-worthy acceptance’ process. As part of this official process, I made sure they understood what the requirements for ‘clean’ looked like, and they were not allowed to ‘deploy’ to the backyard until I officially ‘accepted’ the room as ‘done’. This did several things for me:
1) The ‘kid team’ was more diligent with quality since they knew their work would be inspected before acceptance.
2) It gave me a chance to point out what still needed to be done and to find flaws before they ‘deployed’ to the backyard.
3) I no longer had to be suspicious of their lack of progress since I was confident they would approach me when it was inspection time.
As silly as the example might be, it maps very well to the real world of software development. By ensuring that security is part of the requirements starting from the design phase, we can ensure that it is not bolted on at the end, which never ends well. The development team knows the work will require a formal acceptance accompanied by a deep inspection, so shortcuts and quality-decreasing behaviors are discouraged. As a result, compliance with regulations is achieved, and any shortcomings can be addressed before the software is deployed to production, with legal and escrow mechanisms in place. In short, an official acceptance process ensures that the resulting software is of high quality, reliable and secure from risks.
Now let’s dive deeper into what the acceptance process looks like. There are five steps to consider – completion criteria, change management, deploy approval, risk acceptance, and documentation, as shown in Figure 131.
Figure 131: Software Acceptance Considerations
Completion Criteria
The completion criteria area is concerned with ensuring that the original security requirements have been completed according to documentation. Security requirements should have been properly documented during earlier stages – if they were not properly defined, consider this a serious red flag for the entire project. Beyond requirements, explicit milestones should have been defined well in advance of the acceptance phase. Each milestone should include an actual deliverable that can be tracked and verified.
For example, a requirements traceability matrix should have been created that includes all security requirements, and the completion criteria step looks for these requirements and validates that each was properly implemented.
Likewise, the threat model should have been generated during the requirements phase and updated along the way and should contain a threat list along with the appropriate countermeasures.
The architecture should have been signed off before coding started and should include any components needed for the security profile and to implement the principle of secure design. Each component needs to be validated before acceptance can be provided.
Code reviews for security issues must be conducted and any issues identified must have been addressed during the testing phase.
Any outstanding documentation must be completed before acceptance is granted and the project continues to the deployment phase.
If any milestones were not completed, we need to seriously consider if the product is ready to be released to the wild. With an agile approach that produces incremental capabilities after each sprint, we need to ensure that all proper security levels have been completed before allowing such a build to reach deployment.
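The milestone idea above can be sketched as a simple completeness check: every milestone must have a tracked deliverable that has actually been verified before acceptance proceeds. The milestone names and field layout below are illustrative assumptions:

```python
# Sketch of the completion-criteria step: a milestone is only complete if
# it has a concrete deliverable and that deliverable has been verified.
# Names and fields are illustrative, not a prescribed format.
def incomplete_milestones(milestones):
    """milestones: list of (name, deliverable, verified) tuples.
    Returns the names of milestones that should block acceptance."""
    return [name for name, deliverable, verified in milestones
            if deliverable is None or not verified]

milestones = [
    ("Threat model updated",          "threat-model-v2.doc", True),
    ("Code review findings addressed", "review-report.doc",  False),
    ("Architecture signed off",        None,                 False),
]

# The last two milestones block release until completed and verified.
print(incomplete_milestones(milestones))
```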
Approval to Deploy or Release
The final approval to move software to a production state is not simply a box to check – it must be purposefully and carefully carried out with a full understanding of the associated risks. Therefore, a risk analysis for any changes must be executed to determine the residual risk. This residual risk must be communicated to the decision makers, along with any steps required to mitigate the risk, who then determine if the residual risk falls below acceptable levels, or if the mitigation steps must be carried out as well. Any approval or rejection must include the recommendations and support of the security team. Keep in mind that residual risk must be accepted by the business owner, not the IT department. IT people tend to make decisions based on their world alone and normally do not have insight into business concerns – some risks that IT finds unacceptable become acceptable to the business side once the business benefit that can be achieved is taken into account. Conversely, IT might be willing to chance something that Product knows would cause the product to lose 50% of its revenue overnight, and would never approve of such a risk.
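One common (though simplified) way to quantify residual risk is likelihood times impact, reduced by the estimated effectiveness of the mitigating controls. The formula, numbers and threshold below are illustrative assumptions, not from the text – real risk analyses are usually far more nuanced – but they show the shape of the decision the business owner is being asked to make:

```python
# Simplified, illustrative residual-risk calculation:
#   inherent risk = likelihood x impact
#   residual risk = inherent risk x (1 - control effectiveness)
# Per the text, the business owner (not IT) decides whether the residual
# risk is acceptable or whether further mitigation is required.
def residual_risk(likelihood: float, impact: float, effectiveness: float) -> float:
    return likelihood * impact * (1.0 - effectiveness)

def deploy_decision(residual: float, acceptable_level: float) -> str:
    return "approve" if residual <= acceptable_level else "mitigate further or reject"

# Hypothetical figures: 40% annual likelihood, $100k impact, controls
# estimated to reduce the risk by 75%.
r = residual_risk(likelihood=0.4, impact=100_000, effectiveness=0.75)
print(r)                                            # 10000.0
print(deploy_decision(r, acceptable_level=25_000))  # approve
print(deploy_decision(r, acceptable_level=5_000))   # mitigate further or reject
```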
Documentation of Software
Proper documentation for a project includes multiple areas, including:
Functional requirements
Architecture
Installation instructions
Configuration settings
User manual
The most important reason for ensuring a proper level of documentation is to make sure the deployment process is easy and repeatable, and to make sure the impact of any change is understood. The best approach to ensuring proper documentation is to check for its completion at the end of each phase, but the reality is that it is seldom checked until the end of a project. Regardless, software must not be accepted until all documentation has been completed and validated. Figure 132 lists the various types of documentation commonly found in most software projects.
Documentation should clearly spell out both functional and security requirements so that the support team has a good grasp of what is required to keep the software functioning and secure. Because of this, it is a great idea to have members of the support team participate as observers during the development and testing phases. It is likely that documentation for subsequent releases is even more lacking than that for the original release. If each subsequent release is not properly documented, however, we will not be able to trace changes back to customer requests or requirements.
Document Type – Assurance Aspect

RTM – Are functionality and security aspects traceable to customer requirements and specifications?

Threat Model – Is the threat model comprehensively representative of the security profile, and does it address all applicable threats?

Risk Acceptance Document – Is the risk appropriately mitigated, transferred or avoided? Is the residual risk below the acceptable level? Has the risk been accepted by the product owner with signatory authority?

Exception Policy Document – Is there an exception to policy, and if so, is it documented? Is there a contingency plan in place to address risks that do not comply with the security policy?

Change Requests – Is there a process to formally request changes to the software, and is it documented and tracked? Is there a control mechanism defined for the software so that only changes approved at the appropriate level can be deployed to production environments?

Approvals – Are approvals (risk, design and architecture review, change, exception to policy, etc.) documented and verifiable? Are appropriate approvals in place when existing documents such as the BCP or DRP need to be redrafted?

BCP or DRP – Is the software incorporated into the organizational BCP or DRP? Does the DRP include not only the software but also the hardware on which it runs? Is the BCP/DRP updated to include security procedures that need to be followed in the event of a disaster?

Incident Response Plan (IRP) – Is there a process and plan defined for responding to incidents (security violations) caused by the software?

Installation Guide – Are steps and configuration settings predefined to ensure that the software can be installed without compromising the secure state of the computing system?

User Training Guide/Manual – Is there a manual informing users how to use the software?
Figure 132: Typical Types of Documentation
For mission-critical software, less-obvious types of documentation that must be updated are the business continuity plan, or BCP, and the disaster recovery plan, or DRP. Likewise, the incident response plan, or IRP, should be created and updated as new versions are released. The IRP provides guidance on how to handle security breaches, but it will only be effective if people are purposefully trained on its contents.
Verification and Validation (V&V)
The terms verification and validation are usually used interchangeably, but within software acceptance the two have a very subtle difference. Validation is what we normally think of during the acceptance phase where we check the original requirements and ensure the software meets the stated details. We validate to make sure software meets requirements.
Verification is a little less defined but looks at how a software product performs and feels. User experience is examined, as well as how well the product increases the business’s efficiency and productivity. We verify that software is useful to the business. The principle of psychological acceptability comes into scope with verification.
Figure 133: Verification and Validation Activities
For example, we could have a requirement that states the software must be able to track employee network usage, and we could validate that it meets the requirement. However, we also notice that the app slows all network traffic by 50% and is therefore useless, and this is considered verification that the app is not useful to the business. Having said all of that, the reality is that the difference between verification and validation is mostly semantic, and for the remainder of our conversation we will lump them together as V&V.
V&V is a required step in the software acceptance process but is not an ad hoc process – it is very well defined. Whether it is carried out by an internal department or an external party, it is comprised of two steps – reviewing and testing. This applies to both developed software as well as to software purchased from an outside vendor. Figure 133 provides an overview.
In a nutshell, V&V should check for security protection mechanisms that ensure the following:
Confidentiality
Integrity of both data and systems
Availability
Authentication
Authorization
Auditing
Secure session management
Proper exception handling
Configuration management
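The protection mechanisms listed above lend themselves to a checklist-driven check. Crucially – and this anticipates the point made next about features disabled in production – a mechanism that merely exists is not enough; V&V must confirm it is both implemented and actually enabled. The following sketch is illustrative only:

```python
# Illustrative V&V checklist over the protection mechanisms listed above.
# A mechanism that is implemented but disabled (e.g. switched off in
# production for performance reasons) must still be flagged as a failure.
MECHANISMS = ["confidentiality", "integrity", "availability",
              "authentication", "authorization", "auditing",
              "session management", "exception handling",
              "configuration management"]

def vv_failures(findings):
    """findings maps mechanism -> {'implemented': bool, 'enabled': bool}.
    Returns the mechanisms that fail V&V."""
    return [m for m in MECHANISMS
            if not (findings.get(m, {}).get("implemented")
                    and findings.get(m, {}).get("enabled"))]

findings = {m: {"implemented": True, "enabled": True} for m in MECHANISMS}
findings["auditing"]["enabled"] = False   # the feature exists but is off

print(vv_failures(findings))              # auditing alone fails V&V
```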
In some cases, software might have to comply with external regulations or standards such as FIPS, PCI DSS or Common Criteria, and V&V must be cognizant of such requirements. For example, when purchasing software, any Common Criteria evaluation assurance level, or EAL, claimed by a vendor must be verified. Now, we’re not talking about simply checking for the existence of security features – the V&V process must verify that the mechanisms have been implemented properly. As another example, a security feature may exist in software but need to be disabled in production for performance reasons – this is hardly useful, and V&V should be able to ferret out such conditions.
Reviews
At the end of each SDLC phase, an informal or formal review needs to be held to determine if the product meets requirements and is performing as expected. Informal reviews are typically carried out with a developer reviewing his or her own code to ensure it has been written properly, or perhaps including a peer to perform the same function. Informal peer reviews are a normal part of any SDLC, and if they are not happening on each source code check-in, the process should be reviewed and corrected.
While an informal review can include the design, the code, or both, a formal review must include both design and code, and is typically carried out by the development team presenting both to a review board or panel composed of individuals selected by the business owner having the Go/No-Go authority. The most effective approach is for the presentation to be followed by a Q&A session with the panel. A useful tool is a formal review process such as the Fagan inspection, which focuses on identifying defects in specifications, design and code. In addition to a functional design review, a security design review must be held to examine artifacts such as threat models and misuse cases.