
Sharks in the Moat


by Phil Martin


  The proper security standards are being used, such as WS-Security for SOAP web services.

  Complete mediation has been implemented and is effective in preventing authorization bypass.

  Credential tokens cannot be stolen, spoofed, or replayed.

  Authorization checks after authentication are working correctly.

  Dependencies between environments must be verified such as proper key exchange, compatible payloads, and shared credentials.

  Obviously, we cannot bury our collective heads in the sand and pretend that our software will not experience an outage. Beyond that, we must plan for the most unpleasant of scenarios – a disaster. Under this circumstance, something absolutely terrible has happened, and our system has been completely compromised and undergone significant damage. Disaster recovery testing is designed to figure out how well our software can handle the initial outage and how easily it can be rebuilt in a secure manner. Questions to ask during this type of testing are:

  Does it fail securely and how does it report errors when a disaster happens?

  Is there proper logging in place that continues to work as effectively as possible?

  Does it come back up in a secure state?

  Does it record lost transactions such that they can be reconstituted after recovery has completed?

  Failover testing is part of disaster recovery testing and gauges the ability of a system to remain accessible during a disaster as control is handed over to a secondary capability. The accuracy of DR and failover testing is directly dependent on how accurately we can simulate a real disaster. The more realistic a simulation is, the more it will cost in terms of money, resources, and downtime. This must be budgeted for in advance.

  It is a well-known pattern that when an application deployed into production does not behave as expected due to environmental differences, the default answer is to simply open up security settings until the problem goes away. Obviously, this is a terrible approach, but it is nonetheless repeated time and again. The best solution to this problem is a staging environment that exactly mirrors production, allowing all problems to be solved prior to moving to production. Unfortunately, very few organizations can afford such an expensive solution. A more realistic expectation is to carry out simulation testing, where the configuration of the production environment is mirrored in a staging environment and issues are resolved there without changing the configuration.

  Other Testing

  In this section we discuss two types of testing that are crucial, but do not neatly fall under the functional and non-functional categories – privacy and user-acceptance testing.

  Privacy Testing

  Privacy testing looks at personal data and ensures that it is properly protected to prevent information disclosure. For any application that handles payments, PII, PHI or PFI, this type of testing must be part of the official test plan. Organizational policies should be reflected in both the requirements and test plan, and any requirements resulting from industry policies and federal regulations must be present as well. Data at rest and in transit must be examined, so this testing includes network traffic and any communication between end points. Following are a few items that should be specifically validated:

  Notices and disclaimers should notify the end-user when personal information will be collected. This is especially important when information will be collected from minors.

  Both opt-in and opt-out mechanisms should exist and function properly.

  There must be a privacy escalation response path when a privacy breach is encountered, and it should be tested to ensure documentation is accurate and the process carries out the intended actions.

  User Acceptance Testing (UAT)

  After all other testing has been successfully completed, there is one last step to take before we can release the software to a production state and go live. The business owner must perform user acceptance testing, or UAT, which primarily focuses on the functionality and usability of the finalized system. This type of testing is best carried out in an environment that closely resembles the production environment. In fact, for a brand-new system, UAT can be carried out in the actual production environment before it is made accessible to end-users. UAT for systems already in production is also possible as long as there is a defined testing window with the expectation of a rollback if testing does not go well. UAT is a perfect time to also test security aspects of software and is basically the last chance we have to raise any red flags before it is released to the wild.

  Prerequisites for entering UAT are the following:

  The software must have completely exited the development phase.

  All other types of testing must be completed, including unit testing, integration testing, regression testing, and security testing.

  All known functional and security bugs must either be addressed or accepted as-is by the business owner.

  All major real-world scenarios have been identified and test cases have been completed that will address each.

  UAT will result in a go/no-go decision, and if the system is accepted by the business owner, a formal signoff should be delivered in writing by whatever entity officially represents end-users.

  Software Security Testing

  When releasing a new version of an existing software package, regression testing must be carried out to ensure that the security state has not gone backward in quality. In other words, we want to make sure that the RASQ has not increased. Obviously, this means we must have calculated a RASQ for a previous version in order to determine the change.

  For every release, there is a standard set of security tests that should be carried out. For example, we can use the NSA IAM threat list or STRIDE list, but we should ensure that we use the same list that was used to create the threat model. Otherwise, we will not be able to validate the threat model. In this section we are going to cover the following security tests:

  Input validation

  Injection flaws

  Script attacks

  Non-repudiation assurance

  Spoofing

  Error and exception handling

  Privilege escalation

  Anti-reversing protection

  When discussing each type of test, we will also cover some proper mechanisms that should have been coded so that the test can be successful. While it is not the testing team’s responsibility to write code, it will greatly increase the efficiency of the overall development team if testers are able to speak in ‘developer-ese’. Additionally, this knowledge will also result in superior test cases. With that in mind, we will discuss proper mitigation coding steps in the Testing role.

  Testing for Input Validation

  The vast majority of security weaknesses can be addressed if we would only properly validate and sanitize user-supplied input. There are two places to perform this type of input validation – on the client and on the server. We have already covered this multiple times, but it is so important we are going to state it again – client-side validation is great for performance and user experience, but NEVER skip the server-side validation. If you have to choose between the two, always implement server-side validation first.

  Using Regular Expressions, or RegEx, is a great way to validate textual input, but a common mistake is to try and pack too much power into a single statement, rendering it incomprehensible and unmaintainable. The use of both white lists and black lists simultaneously is a powerful combination, but we must also protect these lists from alteration using anti-tampering mechanisms such as calculating the hash and verifying it at run-time. Both the normal and canonical forms of textual input should be compared to the validation rules. Fuzzing is an absolutely necessary test approach, and smart fuzzing should be used if the input format is known. Otherwise, use dumb fuzzing with pseudo-random values.
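  To make the white list and canonical-form points concrete, here is a minimal Python sketch. The field name and the 3-to-20-alphanumeric rule are purely illustrative, not taken from any specific product; the key idea is that both the raw input and its canonical (normalized) form must pass the same rule:

```python
import re
import unicodedata

# Hypothetical white-list rule: 3-20 letters, digits, or underscores.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def validate_username(raw: str) -> bool:
    # Canonicalize first (NFKC folds look-alike and full-width characters),
    # then require BOTH the raw and canonical forms to match the white list.
    canonical = unicodedata.normalize("NFKC", raw)
    return bool(USERNAME_RE.match(raw)) and bool(USERNAME_RE.match(canonical))

print(validate_username("alice_01"))     # True  -- passes the white list
print(validate_username("bob<script>"))  # False -- rejected outright
```

A smart fuzzer would then generate near-miss variants of valid values (extra length, alternate encodings, boundary characters) and assert that the validator rejects every one of them.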

  Testing for Injection Flaws

  If we have properly validated user-supplied input, then we should be protected against injection attacks. However, we must still perform tests to validate that this is true by first determining the various sources from which user-supplied input can come. Some typical browser sources include query strings, form fields, and hidden fields. There are still some defensive measures we can take to further mitigate the risk of an injection attack.

  Always use parameterized queries instead of concatenating user-supplied values into a string of SQL.

  Do not allow dynamic construction of SQL queries, regardless of whether user-supplied input is used.

  Properly handle error and exception messages so that even Boolean queries used in a blind SQL injection attack do not disclose information. This includes any information that might reveal table schemas.

  Remove unused procedures and statements from the database.

  Ensure parsers do not allow external entities to be employed. An external entity is a feature of XML that allows an attacker to define his own entity, leading to XML injection attacks.

  Use white lists allowing only alphanumeric characters when querying LDAP repositories.

  Always use existing escape routines for shell commands instead of writing custom ones.
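  The first two items above can be demonstrated in a few lines. This sketch uses Python's built-in sqlite3 module with an in-memory database and a classic injection payload; the table and data are invented for illustration:

```python
import sqlite3

# Throwaway in-memory database for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_supplied = "alice' OR '1'='1"  # a classic injection payload

# Parameterized query: the driver binds the value as data, so the
# payload is treated as a literal string, never as SQL syntax.
rows = conn.execute(
    "SELECT id FROM users WHERE name = ?", (user_supplied,)
).fetchall()
print(rows)  # [] -- the payload matched no row
```

Had the query been built by string concatenation, the same payload would have returned every row in the table. A good injection test case submits exactly this kind of payload and asserts that the result set is empty.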

  Testing for Scripting Attacks

  A scripting attack occurs when user-supplied input is reflected back to the client as JavaScript where it is executed. This is allowed to happen due to improper validation of input. To decrease the risk of this type of attack, the development team should perform the following items:

  Sanitize all output by escaping or encoding input before it is sent back to the client.

  Validate input using an up-to-date server-side white list containing the latest attack signatures, along with their alternate forms.

  Only allow files with the appropriate extensions to be uploaded and processed.

  Ensure that secure libraries and safe browsing extensions cannot be circumvented.

  Ensure cookies are not accessible from client script.

  Ensure that software can still function if the browser disallows scripts from running.

  While that last item is ‘recommended’, in today’s world I strongly feel that it is no longer possible due to the global expectation that scripts will be used to create a better user experience. However, if you can create an application that provides an enhanced user experience when scripts are allowed while still working in a more basic mode when they are not, then go for it. To be successful with this approach, the design from the very beginning must take it into account. Tacking this type of behavior on at the end is almost guaranteed to fail.
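  The first item in the list above – escaping or encoding output before it reaches the client – can be sketched with Python's standard library. The render_comment function is a hypothetical stand-in for whatever templating layer an application actually uses:

```python
import html

def render_comment(user_input: str) -> str:
    # Encode on output so any markup in the input is displayed as text
    # by the browser rather than executed as script.
    return "<p>" + html.escape(user_input, quote=True) + "</p>"

print(render_comment("<script>alert(1)</script>"))
# <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```

A scripting-attack test case submits payloads like this one through every input source and asserts that the response never contains the payload in executable form.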

  Testing for Non-Repudiation

  As a reminder, non-repudiation means that someone cannot deny having taken an action after-the-fact. While we often use this in conjunction with digital signatures, it is just as applicable to any application that has end-users taking actions. In this case, non-repudiation means that we record all activity for a given user in such a way that the audit trail is non-disputable and complete. To do this we must ensure the audit trail can uniquely identify each user and the recorded events are unique with sufficient metadata to reconstitute the user’s actions. Proper session management is required to do this.

  In addition to validating that an audit log is being properly generated, we must also ensure that the log itself is protected against unauthorized modification. NIST SP 800-92 provides guidance on how to carry this out. The retention time for audit logs must be identified in a security policy and enforced through processes. For example, audit logs tend to grow very large over time, and if we are not careful, the infrastructure team may purge older logs unless we have explicitly defined a retention period to prevent such a loss.
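  One possible way to make unauthorized modification detectable – a hash chain, where each entry includes the hash of the entry before it – can be sketched as follows. This is just one illustrative integrity technique, not a prescription from NIST SP 800-92, and a real log would also carry timestamps and session identifiers:

```python
import hashlib
import json

def append_entry(log, user, action):
    # Each entry records the hash of the previous entry, so modifying
    # any earlier entry breaks the chain and becomes detectable.
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"user": user, "action": action, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify(log):
    prev = "0" * 64
    for entry in log:
        body = {k: entry[k] for k in ("user", "action", "prev")}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, "alice", "login")
append_entry(log, "alice", "delete-record")
print(verify(log))           # True
log[0]["action"] = "logout"  # tamper with an earlier entry
print(verify(log))           # False -- the chain no longer verifies
```

A test for audit-log protection would attempt exactly this kind of after-the-fact modification and confirm that it is detected.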

  Testing for Spoofing

  Spoofing can happen with a number of value types, with user IDs, IP addresses, MAC addresses, DNS lookups, session tokens, and certificates being the most common. With this type of attack, the attacker claims that he is the owner of the actual value, while in fact he is spoofing, or pretending he is the owner. When an attacker spoofs a user’s identity, he is probably carrying out some type of phishing attack. If an attacker spoofs an IP address, then he is sending packets that list someone else’s IP address as being the source. If the attacker is spoofing a session, he has probably stolen a valid session token from someone and is trying to convince the server that he is the real owner. If he is spoofing DNS lookups, he is substituting his own URL to IP address mapping instead of the real one, tricking the client into sending traffic to his own malicious web site. If he is spoofing a certificate, he is more than likely carrying out a man-in-the-middle attack by substituting his own certificate to both parties.

  The most effective prevention for such attacks is to encrypt the communications channel, and testing should gauge how ‘spoofable’ a specific vector actually is. When addressing session spoofing, we need to ensure that cookies are encrypted and expire after a proper amount of time has elapsed. Phishing attacks are best dealt with by carrying out security awareness training for employees.
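  For session spoofing specifically, a test can inspect the attributes on the session cookie itself. Here is a minimal sketch using Python's standard library; the token value is a placeholder, and the 15-minute lifetime is an assumed policy, not a universal rule:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "opaque-random-token"   # placeholder for a real random token
cookie["session"]["secure"] = True          # only ever sent over TLS
cookie["session"]["httponly"] = True        # unreadable from client-side script
cookie["session"]["max-age"] = 900          # expires after 15 minutes (assumed policy)
cookie["session"]["samesite"] = "Strict"    # not sent on cross-site requests

header = cookie["session"].OutputString()
print(header)
```

A spoofing test case would capture the real Set-Cookie header from the application and assert that each of these attributes is present, and that the token itself is long, random, and never reused.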

  Testing for Error and Exception Handling

  Testing software for security failures must be intentionally carried out apart from functionality testing. This includes three primary areas – failing secure, exception handling, and overflow handling.

  To verify that software fails secure, we look at how well CIA is maintained when a failure is encountered. Special attention should be paid to the entire authentication process such as ensuring clipping controls work by locking out an account after multiple failed attempts.
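  A clipping control of this kind can be sketched in a few lines. The threshold of three attempts and the in-memory counter are illustrative only; a real implementation would persist state and typically reset counters on a timer:

```python
FAILED = {}
LOCK_THRESHOLD = 3  # hypothetical policy: lock after three failed attempts

def attempt_login(user, password_ok):
    # Check the lockout BEFORE evaluating the password, so a locked
    # account stays locked even when the correct password is supplied.
    if FAILED.get(user, 0) >= LOCK_THRESHOLD:
        return "locked"
    if password_ok:
        FAILED[user] = 0  # successful login resets the failure counter
        return "ok"
    FAILED[user] = FAILED.get(user, 0) + 1
    return "denied"

for _ in range(3):
    print(attempt_login("alice", False))  # denied, denied, denied
print(attempt_login("alice", True))       # locked -- even with the right password
```

The corresponding test case drives repeated failed logins and asserts that the account locks at the policy threshold and does not silently unlock.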

  To validate that errors and exception conditions are handled properly, we need to look at how well messages are suppressed to the end user. Ideally, users will only ever see generic messages or be redirected to an error page that provides a friendly, yet detail-free, experience. At times an application allows details to be provided if the user is using the same private network, whereas a public user would only receive generic messages. This facilitates debugging and allows trusted users to see additional details that will help when communicating issues with the support team. In these cases, it is imperative to be able to simulate both remote and local users. Applications can also be written to generate a unique identifier per exception and provide this to the end user. This will help facilitate digging into the root cause as the end user can provide the identifier to the support desk. Testing must ensure that these IDs map to the actual logged error on the backend.
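  The unique-identifier pattern just described might look like the following sketch. The handler and its failure are invented for illustration; the point is that the detail goes to the server-side log under a reference ID, while the user sees only a generic message carrying that same ID:

```python
import logging
import uuid

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("app")

def handle_request():
    # Hypothetical failure containing a sensitive internal detail.
    raise ValueError("column 'ssn' not found")

def safe_handle_request():
    try:
        return handle_request()
    except Exception:
        # Log the full stack trace server-side under a unique reference ID,
        # but return only a generic, detail-free message to the end user.
        error_id = uuid.uuid4().hex
        log.exception("unhandled error %s", error_id)
        return f"Something went wrong. Reference: {error_id}"

message = safe_handle_request()
print(message)
```

Testing then confirms two things: the message shown to the user leaks no internal detail, and the reference ID in that message maps to a full entry in the backend log.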

  While modern software tools automatically provide many protections against buffer overflows, they continue to be a significant threat. Overflow testing should include the following items:

  Ensure input is sanitized and validated.

  Each time memory is allocated bounds are checked.

  All data type conversions are explicitly performed by casting the results.

  Banned and unsafe APIs are not used.

  Compiler switches that protect the stack or randomize memory layout are used.

  Testing for Privilege Escalation

  Privilege escalation can be of two types – horizontal or vertical. With horizontal privilege escalation, an attacker is able to access information or features that only certain other users with the same level of access should be able to get to. For example, in a multi-tenant application, Tenant1 is able to see Tenant2’s data. Both Tenant1 and Tenant2 are equivalent in terms of privileges, but their data should be kept safe from the other. With vertical privilege escalation, the attacker is able to access features or information that should be beyond his access due to permission levels. As an example, a normal user is able to gain access to administrative features. Testing must account for both horizontal and vertical privilege escalation.

  The root cause for such an attack will usually be an insecure direct object reference or a complete mediation coding bug. If you recall, an insecure direct object reference allows an attacker to directly manipulate an object that should be off-limits to any user, such as the current role being used to decide access permissions. If complete mediation can be bypassed due to a coding bug, then the attacker doesn’t even have to do anything except take advantage of the bug. When dealing with web applications, both GET and POST values should be checked. When producing a web service, GET, PUT, POST, and DELETE must all be accounted for, along with the other less-used HTTP verbs.
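  A horizontal escalation check for the multi-tenant example can be sketched as follows. The record store and tenant names are invented; what matters is that the ownership check uses the server-side session identity on every access, never a client-supplied role or ID:

```python
# Hypothetical tenant-scoped data store.
RECORDS = {
    101: {"owner": "tenant1", "data": "t1 secret"},
    102: {"owner": "tenant2", "data": "t2 secret"},
}

def fetch_record(session_tenant: str, record_id: int) -> str:
    record = RECORDS.get(record_id)
    # Complete mediation: verify ownership on EVERY access using the
    # identity bound to the server-side session, not anything the
    # client sent along with the request.
    if record is None or record["owner"] != session_tenant:
        raise PermissionError("access denied")
    return record["data"]

print(fetch_record("tenant1", 101))  # t1 secret
# fetch_record("tenant1", 102) raises PermissionError -- no horizontal escalation
```

The test case for this is simply to authenticate as Tenant1 and request Tenant2’s record IDs across every HTTP verb, asserting that each attempt is denied.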

  Anti-Reversing Protection

  Anti-Reversing protection attempts to stop an attacker from reverse-engineering a software product. This is most commonly found in-use with commercial off the shelf (COTS) products, sometimes called shrink-wrapped products. However, anything that will make it more difficult for an attacker to access our internal logic is useful regardless of the type of software application. Following are some tests applicable to anti-reversing:

  Ensure code obfuscation is being employed and look at the processes used to carry out obfuscation. This test should attempt to de-obfuscate code to determine how usable the result is.

  Analyze the final binary to determine if symbols can be used to reverse engineer the logic. Symbols include such things as class names, class member names, and global objects.

  If anti-debugging code is supposed to be present, explicitly check for its effectiveness. Tests should attach a debugger and see if the process terminates itself.

  Tools for Security Testing

  As a developer implementing security, or as a tester validating security, it is not important that you know how to use the various tools that an attacker or an infrastructure team might use, but you should definitely know how the use of each type of tool impacts secure coding. Some of the most common types of tools are the following:

 
