
  While load and stress testing are deemed successful when our software stands up to the punishment, a successful security test results in a breach. Well, ‘success’ depends on who you are – the pretend bad guy or the good guy. Attackers think outside the box as a matter of course and are pretty crafty critters – they are continually inventing new ways to bypass security, and they learn from each experience even when they are not successful. Attackers are not successful when the resiliency of a system exceeds their persistence.

  An important concept to grasp when approaching security testing is that it is quite different from testing security functionality. When we design and implement security functions such as authentication mechanisms, access control lists, and data encryption, we will want to test to make sure that functionality is working as-designed. But just because all of our security functionality is working per the original requirements does not necessarily say anything about how secure the application is. For example, we could easily create a gaping security hole by not including a requirement to encrypt the communications channel. While we’re off celebrating because all of our security functionality passed testing with flying colors, there will probably be a gleeful hacker making off with all of our unencrypted data traveling over the network. The point of security testing is to establish how secure a system is from an attacker’s viewpoint, not how well we met the list of requirements.
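
  To make this distinction concrete, a security test would verify that the channel is actually encrypted rather than simply trusting that an encryption requirement was met. The following minimal sketch, written in Python purely for illustration, opens a TLS connection to a hypothetical host and reports the negotiated protocol and cipher; if the handshake fails, the channel is not protected.

    import socket
    import ssl

    HOST = "app.example.com"   # hypothetical target, purely illustrative
    PORT = 443

    # create_default_context() verifies the server certificate chain by default.
    context = ssl.create_default_context()
    with socket.create_connection((HOST, PORT), timeout=5) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
            # Reaching this point means the handshake succeeded and traffic is encrypted.
            print("Negotiated:", tls_sock.version(), tls_sock.cipher()[0])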

  In a mature and security-conscious organization, security testing is built directly into the SDLC process, with testers involved from the very beginning. Doing this gives us two wins:

  1) The development team can address weaknesses early on when it is still relatively easy to accommodate changes.

  2) The testing team gets an early start on writing test scripts.

  Motives, Opportunities, and Means

  In classic police work, a detective will be on the lookout for a person who has a MOM. By that we’re not talking about a maternal figure – I’m pretty sure that you are human and had one of those. In this case MOM is an acronym that stands for Motive, Opportunity and Means:

  Motive – why a criminal would act.

  Opportunity – if the criminal had the chance to act.

  Means – how the criminal was able to act.

  The relationship between each can be seen in Figure 139.

  Attackers act for any number of reasons, but the motive is usually tied to something the attacker stands to gain. For example, young attackers generally crave the fame and recognition they receive from peers for carrying out an exploit. A disgruntled employee may act out of a desire for revenge. A hacktivist acts out of a desire to further some type of social agenda. Or perhaps someone is just greedy and wants to steal money. When it comes to securing a system, motive is interesting but does not necessarily have to be known for us to get going. Opportunity and means, however, are very important to software security.

  Figure 139: Motives, Opportunities, and Means

  Opportunity will be closely tied to the level of connectivity between the software and the attacker, combined with the vulnerabilities the software has. The means is represented by the skill the attacker possesses coupled with the tools he has available.

  Cryptographic Validation

  Every application or system should use some level of encryption to protect data at-rest or in-transit. There are four different steps to validate that encryption has been implemented correctly – standards conformance, environment validation, data validation and implementation.

  When ensuring that encryption conforms to the appropriate standards such as FIPS 140-2, we first take note of the algorithms used, such as RSA, AES, DSA, etc. FIPS 140-2 testing is carried out against a specific cryptographic module and awards one of four security levels, with Level 1 the lowest and Level 4 the highest. The details of this standard are beyond the scope of this book, but it would be a great idea to look over this material on your own.

  The environment in which a system runs must also be validated. In this case, ISO/IEC 15408, known as Common Criteria, helps us with this task by awarding an Evaluation Assurance Level, or EAL, representing how thoroughly an environment’s security has been evaluated. Unfortunately, ISO/IEC 15408 levels do not map very well to FIPS 140-2, so you can’t simply choose one over the other.

  When considering data validation, keep in mind that FIPS 140-2 considers any data that is not encrypted to be unprotected. Within this step, data that is not encrypted is examined a little more closely to ensure the right decision was made.

  As the final step in validating cryptographic measures, the actual implementation is examined to ensure three things are handled securely – randomness, hardcoding of keys, and key management. Computers are notoriously unreliable at generating truly random values, and so the randomness of the output will greatly depend on the seed value fed into the algorithm. The seed is based on some type of external factor such as time or hardware identifiers, and tests should be carried out to ensure generated values are truly random and not guessable. The source code must be examined to ensure no keys are hardcoded or stored as clear text. This often happens when an initial prototype is quickly generated with no thought to longevity, and it is so successful that the business says, “Let’s just put that in production!”. Bad idea. Key management must be handled properly, including key generation, exchange, storage, retrieval, archival and disposal. Additionally, how well key cycling is carried out and the impact to system uptime should be examined. How often a key should be cycled, or changed, is directly proportional to the sensitivity of the data the key is protecting. The more sensitive the data, the more often a key should be swapped for a new one. Since the data will need to be decrypted with the old key and then encrypted with the new key, this process can be fraught with risk if not handled properly.
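
  As a rough sketch of the first two concerns, the Python snippet below contrasts a time-seeded generator with a cryptographically secure one, and loads the key from the environment instead of the source code. The environment variable name is purely illustrative.

    import os
    import random
    import secrets
    import time

    # Weak: the default generator is not cryptographically secure, and a time-based
    # seed makes its output guessable by anyone who knows roughly when it ran.
    random.seed(time.time())
    weak_token = random.getrandbits(128)

    # Better: the secrets module draws from the operating system's cryptographic random source.
    strong_token = secrets.token_hex(16)   # 128 bits of unpredictable output

    # Never hardcode keys; pull them from the environment or a key management service.
    api_key = os.environ.get("APP_ENCRYPTION_KEY")   # variable name is illustrative
    if api_key is None:
        raise RuntimeError("Encryption key not configured")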

  Scanning

  Scanning source code is a great way to detect risky patterns in-code, but in this section, we are going to discuss how to scan a system from the outside as if it were a black box. Usually the first step is to scan the network so that we can generate a detailed map of all servers, network appliances, subnets, and wireless networks that exist within our target system. The types of information that we can uncover through scanning can include:

  Manufacturer of devices.

  Operating system types and versions.

  Active services.

  Which ports are open or closed.

  Protocols and interfaces being used.

  Web server types and versions.

  As an example of scanning, we can determine the type and version of an operating system in-use by carrying out OS fingerprinting. In essence, we simply look at the data an OS sends back and use it to compare against known patterns for all operating systems. We can use the Nmap utility to help us with this. There are two methods we employ to carry out OS fingerprinting – passive and active. When using passive fingerprinting, we simply sniff network packets as they go by and try and determine what OS is in-use. While this can take a very long time to execute, it has the advantage of being virtually undetectable, since no one knows we are on the network. Passive fingerprinting can be carried out using tools such as Siphon. We can greatly reduce this time by using an active fingerprinting approach, where we reach out and send data to a specific server and analyze the results. With this approach we are essentially advertising our presence. When using active fingerprinting, we need to remember that IDS and IPS capabilities will be on the lookout for us and may very well raise an alarm so that humans can take action.
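
  As a minimal sketch of active fingerprinting, the snippet below simply shells out to Nmap’s OS detection option. It assumes Nmap is installed, that the script runs with sufficient privileges, and that you are authorized to scan the target; the address shown is purely illustrative.

    import subprocess

    TARGET = "192.168.1.10"   # illustrative lab host; only scan systems you are authorized to test

    # -O asks Nmap to fingerprint the operating system (usually requires administrator rights).
    result = subprocess.run(["nmap", "-O", TARGET], capture_output=True, text=True, timeout=300)
    print(result.stdout)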

  Think of a submarine in one of those World War II movies running in silent mode – the sub’s commander makes sure to eliminate all noise, so the sub does not send out any sound waves, but the sub is still listening to noises made by other ships – this is a passive mode. At some point the commander decides to start sending out sonar ‘pings’ that bounce off of other ships – this is an active mode. This gives the sub a much clearer picture of who is out there and where they are, but it also reveals the sub’s own position, as now everyone else is aware that someone else is out there actively targeting them. In the same way, a passive attack is difficult to detect, but an active attack is fairly visible.

  Similar to OS fingerprinting, we can carry out banner grabbing to find hosts that are running vulnerable services. In this activity, we actively poke open ports and examine the data that comes back. By comparing the data to known patterns we can figure out what service, protocol and version the host is running on that port. In fact, some services openly advertise their version such as web servers as shown in Figure 140. This is a common approach when carrying out black box testing and tools such as Netcat or Telnet can be easily used for this purpose.

  Figure 140: Banner Grabbing a Web Server Version
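
  A banner grab needs nothing more exotic than a raw socket. The sketch below connects to a web server port, sends a bare HEAD request, and prints whatever the service volunteers about itself; the host and port are illustrative.

    import socket

    HOST, PORT = "192.168.1.10", 80   # illustrative target

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        # Many services announce their name and version as soon as we speak their protocol.
        sock.sendall(b"HEAD / HTTP/1.0\r\nHost: " + HOST.encode() + b"\r\n\r\n")
        banner = sock.recv(1024).decode(errors="replace")
        print(banner)   # typically includes a line such as 'Server: Apache/2.4.x'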

  Beyond operating systems and services, scanning can also reveal the existence, type and version of databases, and even the patch levels running on servers and services. Scanning, fingerprinting and banner grabbing are all useful tools for both employees and hackers. It is very common to see them in-use to stay on top of versions and vulnerabilities in our own network, but the exact same tools in the hands of a malicious person can quickly be turned against us. The sad news is that while organizations often employ such tools, the elapsed time between uses is usually much too great. Organizations should consider running these tools on a weekly or even daily basis.

  Now that we have covered the basics of scanning, let’s discuss the three primary uses for such an activity. They are scanning for vulnerabilities, scanning content for threats, and scanning to assure privacy.

  Vulnerability scanning is the act of scanning software or a network to detect and identify security weaknesses. The resulting report is used by employees to prioritize issues and to address the most important. It can also be used to show the system is ready for a compliance audit. For example, PCI DSS requires a periodic scan of the card holder environment. Scan reports usually include a description of the vulnerabilities and a relative ranking in terms of common risk, as shown in Figure 141.

  Level | Severity | Description
  5 | Urgent | Trojan Horses; file read and write exploit; remote command execution
  4 | Critical | Potential Trojan Horses; file read exploit
  3 | High | Limited exploit of read; directory browsing; DoS
  2 | Medium | Sensitive configuration information can be obtained by hackers
  1 | Low | Information can be obtained by hackers on configuration

  Figure 141: Example of a Vulnerability Scan Report

  Network scanning works in much the same way as a signature-based IDS does – by detecting patterns and looking for a match against known threats. This is important to understand, because if we don’t keep the scan database up-to-date it will produce a lot of false negatives, meaning that it will miss some important vulnerabilities. And even with a current database, a scanner cannot detect the latest and emerging threats for which no signature yet exists.

  Software can be scanned in two manners – static or dynamic. Static scanning looks at source code and identifies risky patterns. Dynamic scanning looks at the compiled application as it runs. Static scanning is used during the development process, while dynamic scanners are used during the testing phase.
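
  To give a feel for what a static scanner does at its simplest, the sketch below walks a source tree and flags a few risky Python calls. Real tools build a full syntax tree and follow data flow, so treat this only as an illustration of the pattern-matching idea; the directory name and patterns are assumptions.

    import pathlib
    import re

    # A tiny, illustrative signature list; real static analyzers track data flow, not just text.
    RISKY_PATTERNS = {
        r"\beval\(": "use of eval()",
        r"\bpickle\.loads\(": "deserialization of untrusted data",
        r"shell=True": "possible shell command injection",
    }

    for path in pathlib.Path("src").rglob("*.py"):   # 'src' is an assumed source directory
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            for pattern, message in RISKY_PATTERNS.items():
                if re.search(pattern, line):
                    print(f"{path}:{lineno}: {message}")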

  In recent years we have experienced numerous attacks coming from active content. For example, the infamous Melissa virus was delivered in a Microsoft Word macro, while other attacks can arrive in the form of an HTML image tag leading to XSS. Malware may also be packed inside of seemingly useful executables. These attack vectors all use some type of content to deliver the payload, and we therefore must carry out content scanning. Of course, encrypting a payload will render content scanning completely useless, so some content scanners will sit right in the middle of traffic as a man-in-the-middle proxy, decrypting traffic, inspecting the content, and then re-encrypting the data before sending it on its way. This type of scanning should occur for both inbound and outbound traffic, but it can have a substantial negative impact on network performance.
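
  One very simple form of content scanning is comparing an incoming file against a list of known-malicious hashes, the same signature idea described above. The sketch below is purely illustrative; the digest and file name shown are placeholders, not real signatures.

    import hashlib

    # Placeholder digest; a real scanner would pull these from a threat-intelligence feed.
    KNOWN_BAD_SHA256 = {"0" * 64}

    def is_known_malware(path: str) -> bool:
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return digest in KNOWN_BAD_SHA256

    print(is_known_malware("attachment.docm"))   # file name is illustrative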

  While privacy scanning used to be rare, it is becoming more common due to the prevalence of legislation protecting private data. Privacy scanning comes in two forms – scanning passing network traffic to see if it contains unprotected private data, and scanning software to attest that it protects data properly.
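
  As a sketch of the first form, the snippet below scans a chunk of captured text for strings that look like payment card numbers and confirms candidates with the Luhn check that all valid card numbers satisfy. Everything here, including the sample text, is illustrative.

    import re

    def luhn_valid(number: str) -> bool:
        digits = [int(d) for d in number]
        # Double every second digit from the right, subtracting 9 when the result exceeds 9.
        total = sum(d if i % 2 == 0 else (d * 2 - 9 if d * 2 > 9 else d * 2)
                    for i, d in enumerate(reversed(digits)))
        return total % 10 == 0

    def find_card_numbers(text: str):
        for match in re.finditer(r"\b\d{13,16}\b", text):
            if luhn_valid(match.group()):
                yield match.group()

    sample = "order confirmed for card 4111111111111111"   # well-known Luhn-valid test number
    print(list(find_card_numbers(sample)))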

  Penetration Testing

  We have already discussed that scanning can be both passive and active, but when we compare scanning to penetration testing, we have to conclude that all scanning is relatively passive in that it never actively exploits vulnerabilities. Whereas we use scanning to detect vulnerabilities, we use penetration testing to prove that a vulnerability can be exploited. Put another way, scanning identifies issues that can be attacked, while penetration testing, or pen testing, measures the resiliency of a system by seeing if an issue can be exploited. Pen testing is most often carried out after software has been deployed, but it can also be useful to pen test software in the absence of production security controls as a way to measure the effectiveness of those controls. In other words, if we test software without external security controls, and then test it again with those controls enabled, the delta between the two tests should be 100% attributable to the external security controls.

  Pen testing can be a very destructive activity. After all, we are trying to emulate a real attack, and how do we know whether an attacker will be able to cause havoc unless we successfully cause havoc ourselves? The risk with this approach is that we accidentally go too far, and that is why establishing the rules of engagement is crucial. These rules establish the scope of the penetration test, including the following (a sketch of one way to capture this scope follows the list):

  IP Addresses that will be included.

  Software interfaces that are fair game.

  What is NOT in-scope, such as environments, data, networks and applications.
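
  One lightweight way to make the rules of engagement unambiguous is to capture the scope in a machine-readable form that the test harness checks before touching anything. The sketch below is only an illustration of the idea; every address, name and date in it is hypothetical.

    import ipaddress

    # Hypothetical rules-of-engagement data consumed by a pen test harness before any scan runs.
    RULES_OF_ENGAGEMENT = {
        "in_scope_ips": ["203.0.113.0/24"],          # documentation range, purely illustrative
        "in_scope_interfaces": ["https://staging.example.com/api"],
        "out_of_scope": ["production databases", "corporate wireless network"],
        "test_window": "2024-06-01 to 2024-06-14",
        "prohibited_techniques": ["denial of service", "social engineering"],
    }

    def target_allowed(ip: str) -> bool:
        # Refuse to touch anything that is not explicitly in scope.
        return any(ipaddress.ip_address(ip) in ipaddress.ip_network(net)
                   for net in RULES_OF_ENGAGEMENT["in_scope_ips"])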

  NIST SP 800-115 can be very helpful in establishing guidelines on how to carry out a pen test. The Open Source Security Testing Methodology Manual, or OSSTMM, can also be a great resource, as it describes required activities before, during and after a pen test, and provides instructions on how to evaluate the results.

  If you ask 100 security experts what steps are involved in pen testing, you will most likely get five different answers. For our purposes, let’s take a minimalist approach and define only four steps, as shown in Figure 142. They are reconnaissance, attack, cleanup, and reporting. Keep in mind that we are carrying out a black box test.

  Figure 142: Penetration Testing Steps

  The first step, reconnaissance, is where we discover and enumerate the various hosts and services that are part of the pen test scope. This will include scanning such as fingerprinting, banner grabbing, port and service scans, vulnerability scanning and mapping the network and software layout. This usually involves using web-based tools such as WHOIS, ARIN and DNS lookups.
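
  A tiny example of the kind of lookup performed during reconnaissance: resolving a host name and then reversing the address back to a name using nothing but the Python standard library. The domain is illustrative.

    import socket

    host = "www.example.com"            # illustrative target
    ip = socket.gethostbyname(host)     # forward DNS lookup
    print(host, "resolves to", ip)

    try:
        name, _aliases, _addresses = socket.gethostbyaddr(ip)   # reverse lookup
        print(ip, "reverses to", name)
    except socket.herror:
        print("No reverse DNS record for", ip)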

  Step two is where we carry out the attack, and it can be referred to as resiliency attestation. Whereas the first step identified potential vulnerabilities, this step actively tries to exploit those weaknesses. This can include attacks such as the following (a sketch of the first appears after the list):

  Brute force authentication bypass.

  Escalation of privileges to an administrator.

  Hiding or deleting log and audit entries.

  Stealing confidential information.

  Destroying data or applications.

  Causing a DoS.
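
  As a sketch of the first attack in the list, the snippet below tries a short password list against a login form using only the standard library. The URL, field names, and success check are all hypothetical, and it should only ever be pointed at targets covered by the rules of engagement.

    import urllib.error
    import urllib.parse
    import urllib.request

    LOGIN_URL = "https://staging.example.com/login"   # hypothetical in-scope target
    PASSWORDS = ["password", "Password1", "letmein", "admin123"]   # tiny illustrative word list

    for candidate in PASSWORDS:
        data = urllib.parse.urlencode({"username": "admin", "password": candidate}).encode()
        try:
            with urllib.request.urlopen(LOGIN_URL, data=data, timeout=10) as response:
                body = response.read().decode(errors="replace")
        except urllib.error.HTTPError:
            continue   # many applications answer a failed login with a 401 or 403
        if "Invalid credentials" not in body:          # success check is application-specific
            print("Possible valid password:", candidate)
            break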

  The third step is to clean up by removing evidence and restoring the system to a running state if desired. Now, a successful penetration does not always destroy or take down a target – often the objective is to plant back doors, load scripts, or install agents and tools on a host system. In this case, we do not want anyone to discover that we exploited a weakness and compromised the system – the longer we go unnoticed, the better. In fact, the most experienced and dangerous attackers never want to be discovered. However, when carrying out a pen test our goal is not to leave a compromised system, but rather to leave the system in the exact state it was in before the attack was carried out. If we do not do this, then the system is more vulnerable after the test than it was before. The pen testing exercise is not considered complete until the original network and system conditions have been restored.

  The final step is to generate and present a report to the business owners. The purpose of this report is not only to list all vulnerabilities found and which ones were successfully exploited, but also to submit a plan of action and milestones, or POA&M, so that weaknesses will be actively addressed and fixed. Recommended changes might be a policy update, process redesign, a software re-architecture, patching and hardening hosts, implementing defensive coding, user awareness training, or even deployment of additional security controls. In short, the report should serve to provide the following:

  A clear idea of the current state of security.

  A plan for addressing discovered weaknesses.

  A definition of security controls that will mitigate specific identified weaknesses.

  Proof that due diligence has been carried out for compliance reasons.

  A path to increase life cycle activities such as risk assessments and process improvements.

  Fuzzing

  While my passion is software development, I used to be known as a great tester for a single reason – I routinely did the most stupid things I could think of when testing software user interfaces. If there was a user name and a password field, I would intentionally swap the values. I would randomly mash down keys and try to fill each field with as many values as possible. I would go and find unprintable characters and paste them in the fields. I would even try to click buttons at the wrong time and in the wrong order, and repeatedly click them as fast as possible, over and over and over. In short, I would not stop abusing the application until I caused some type of error, which as a tester is a very satisfying result. In the most successful projects that I have worked with, both the development and testing teams have a mutual respect for each other coupled with a healthy competitive attitude of ‘I dare you to find something wrong because I am that good!’. This creates a fun but fast-paced back-and-forth competition that results in far superior software quality. Of course, without a healthy respect to begin with, this can often end in a very antagonistic relationship, so proper leadership is crucial.
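
  The automated version of that abuse is a fuzzer: a loop that throws large volumes of random, malformed input at an interface and watches for crashes. The sketch below fuzzes a hypothetical parse_input() function; real fuzzers are far more sophisticated, but the core idea is the same.

    import random
    import string
    import traceback

    def parse_input(value: str) -> None:
        """Hypothetical function under test; replace with the real entry point."""
        ...

    # Include control characters and unusual Unicode alongside printable text.
    ALPHABET = string.printable + "\x00\x7f\u202e"

    for trial in range(10_000):
        payload = "".join(random.choice(ALPHABET) for _ in range(random.randint(0, 5000)))
        try:
            parse_input(payload)
        except Exception:
            # Any unhandled exception is a finding worth recording and reproducing.
            print(f"Trial {trial} crashed on input of length {len(payload)}")
            traceback.print_exc()
            break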

 
