Sharks in the Moat


by Phil Martin


  To this point we have focused on the ‘happy’ paths, where the software is intended to function correctly according to the stated business requirements. But with the sixth step in identifying threats we step over to the dark side and take a look at the ‘sad’ path by introducing mis-actors. If you recall, the first phase of threat modeling required us to identify both human and non-human actors, but we may or may not have identified mis-actors – the bad guys who want to break through our security – at that time. If we did not list those threat sources, now is the time to do so. Examples of human mis-actors might be an external hacker, a hacktivist group, a rogue administrator or a sales admin up to no good. Examples of non-human mis-actors might be an internal process that has gone wild and is erroneously deleting data, or perhaps malware that has somehow snuck in.

  The last step in the identifying threats phase is to determine potential and applicable threats. Essentially, this step takes all artifacts produced by the last six steps and generates a list of threats. This activity can be carried out using two different approaches – by thinking like an attacker or using a categorized list of threats. As this last step in phase 2 is so crucial, we’re going to examine each approach in considerable detail.

  Think Like an Attacker

  The first approach to determining threats is to take on the mindset of a hostile attacker and run the design through as many bad scenarios as we can think of, often by throwing ideas up on a whiteboard. While the whiteboard is a quick and simple approach, it is not very scientific and may leave gaps. Instead, we might choose to use something called an attack tree.

  An attack tree is a hierarchical tree-like structure, with the root node representing either an attacker’s goal or a type of attack. For example, if we are trying to explore an attacker’s goal, the root node might represent gaining administrative-level privileges, determining an application’s makeup, or bypassing authentication mechanisms. If we are exploring a type of attack, the root node might represent a buffer overflow attack or a cross site scripting attack.
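
  To make the structure concrete, here is a minimal sketch in Python of an attack tree rooted at an attacker’s goal. The node labels are illustrative placeholders rather than the contents of Figure 99; walking the tree enumerates the attack paths a reviewer would need to consider.

    # A minimal attack-tree sketch; the goal and child attacks are
    # illustrative placeholders, not taken from Figure 99.
    class AttackNode:
        def __init__(self, label, children=None):
            self.label = label
            self.children = children or []

    def attack_paths(node, path=()):
        # Yield every root-to-leaf path; each path is one attack scenario.
        path = path + (node.label,)
        if not node.children:
            yield path
        for child in node.children:
            yield from attack_paths(child, path)

    root = AttackNode("Gain administrative privileges", [
        AttackNode("Steal admin credentials", [
            AttackNode("Phish an administrator"),
            AttackNode("Brute-force the admin password"),
        ]),
        AttackNode("Exploit a privilege-escalation flaw"),
    ])

    for path in attack_paths(root):
        print(" -> ".join(path))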

  Figure 99: Using an Attack Tree to Model the Attacker's Objective

  Figure 100: Using an Attack Tree to Model the Type of Attack

  Figure 99 represents an attack tree using the attacker’s goal as the root node. Child nodes represent the various methods an attacker may use to achieve the root goal. Figure 100 represents using an attack tree to model a specific type of attack. In this case, the child nodes represent the conditions that make the attack possible, and the grandchild nodes represent possible controls or conditions that will mitigate the attack.

  Using Categorized Threat Lists

  Instead of coming up with our own vulnerabilities or attacks, we can instead turn to a predefined list of threats such as the OCTAVE risk modeling methodology, the NSA IAM methodology, or Microsoft’s STRIDE, shown in Figure 101. This last resource is an acronym representing the various threat categories that should be considered.

  When using a category of threats such as STRIDE, you will often encounter a threat that crosses categories. For example, elevation of privilege may result from a spoofing attack, which itself resulted from information disclosure. In these cases, you just have to use your best judgement when choosing the appropriate category.
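
  If you track threats in code or a spreadsheet export, recording the single category you settled on keeps that judgment call explicit. Here is a minimal Python sketch; the threat entries themselves are invented for illustration:

    # Tagging each threat with one primary STRIDE category.
    # The example threats below are invented for illustration.
    from enum import Enum

    class Stride(Enum):
        SPOOFING = "S"
        TAMPERING = "T"
        REPUDIATION = "R"
        INFORMATION_DISCLOSURE = "I"
        DENIAL_OF_SERVICE = "D"
        ELEVATION_OF_PRIVILEGE = "E"

    threats = [
        ("Attacker replays a stolen session cookie", Stride.SPOOFING),
        ("Admin deletes audit logs to hide activity", Stride.REPUDIATION),
    ]

    for description, category in threats:
        print(f"[{category.value}] {description}")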

  Mnemonic for STRIDE

  You STRIDE into a room wearing a mask and encounter a group of people. You take off the mask with a flourish to show that you were spoofing an identity. Someone takes your mask and tampers with it by using a marker to color the face green. You grab the mask back and demand to know why they would damage the mask, but they repudiate your claim and say they had nothing to do with it. You then disclose information that they are hiding a marker in their pocket. The person denies this, so you elevate the matter to their boss.

  Phase 3: Identify, Prioritize and Implement Controls

  Once we have identified the threats facing us, it is time to figure out how to mitigate the most important ones by putting into place one or more controls. There are many existing controls that we can leverage, so it will always be preferable to use one of these instead of inventing our own. At times the cost of mitigation will be so high that we cannot justify the expense; however, this only applies if the level of associated risk is below what is considered to be acceptable risk. If both the risk level and the cost of mitigation are too high, then we will have no choice but to redesign the software to bring the threat down to a manageable level.
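
  That decision logic boils down to a simple triage rule. Here is a minimal sketch, where the threshold names and numbers are assumptions made for illustration only:

    # Accept / mitigate / redesign triage, as described above.
    # 'acceptable_risk' and 'budget' are hypothetical thresholds.
    def triage(risk_level, mitigation_cost, acceptable_risk, budget):
        if risk_level <= acceptable_risk:
            return "accept"      # risk is already tolerable
        if mitigation_cost <= budget:
            return "mitigate"    # apply an existing control
        return "redesign"        # both risk and cost are too high

    print(triage(risk_level=8, mitigation_cost=50_000,
                 acceptable_risk=3, budget=20_000))  # -> redesign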

  Goal                        Description
  S – Spoofing                Can an attacker impersonate another user or identity?
  T – Tampering               Can the data be tampered with while it is in transit, storage or archives?
  R – Repudiation             Can the attacker or process deny the attack?
  I – Information Disclosure  Can the information be disclosed to unauthorized users?
  D – Denial of Service       Is denial of service a possibility?
  E – Elevation of Privilege  Can the attacker bypass least privilege implementation and execute the software at elevated or administrative privileges?

  Figure 101: STRIDE Categories

  While each control should be specific to the associated threat, at times it may take more than one control to bring a threat down to an acceptable level – this is referred to as defense in depth. When applying this approach, be sure that each control complements the others by locating and removing any contradictions. For example, we could use two types of firewalls – layer 3 and layer 7 – in series, but if the rules for the first firewall contradict the rules that the second firewall has in place, then we are doing a lot of work for nothing. Always keep in mind that no amount of controls will eliminate a threat completely – they can only reduce the level of risk to an acceptable level.
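
  As a toy illustration of the contradiction problem (the rule format here is invented, not a real firewall configuration language), a quick check can flag traffic that the first firewall in the series allows but the second one blocks:

    # Toy contradiction check for two firewalls in series.
    # A 'rule set' is reduced to a set of allowed ports for simplicity.
    fw_layer3_allowed = {80, 443, 8443}
    fw_layer7_allowed = {80, 443}

    # Traffic allowed upstream but dropped downstream is wasted
    # configuration at best and contradictory intent at worst.
    contradictions = fw_layer3_allowed - fw_layer7_allowed
    if contradictions:
        print(f"Allowed by firewall 1 but blocked by firewall 2: {contradictions}")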

  It will be virtually impossible to address all identified threats, so we will need to focus on the most important ones by prioritizing our list. Unless the organization just happens to have an unlimited amount of money sitting around to address all threats, this will always be a crucial element and cannot be skipped. There are several approaches we can take to prioritize threats, but it helps to categorize them based on severity and to establish bug bars, or bug bands. For example, we can establish three bars – Severity 1, Severity 2 and Severity 3 – and use these to decide which threats will be addressed after the initial rollout has been completed.
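
  A bug bar is easy to express as a lookup once each threat carries a numeric ranking (such as the DREAD averages computed later in this section). The cut-off values below are invented for illustration:

    # Mapping a threat's ranking score to a severity bar.
    # The numeric cut-offs are hypothetical, not from the book.
    def bug_bar(score):
        if score >= 2.5:
            return "Severity 1"   # must fix before release
        if score >= 1.5:
            return "Severity 2"   # fix in the next release
        return "Severity 3"       # may be deferred past initial rollout

    print(bug_bar(2.6))   # -> Severity 1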

  Ranking methods are generally grouped into two categories – qualitative and quantitative. The three most common ranking methods are Delphi ranking, average ranking and Probability x Impact ranking. Let’s walk through each to see how they work.

  Delphi Ranking

  The qualitative Delphi method asks each participant to make his or her best guess on the level of risk for a particular threat, along with the reasoning behind the ranking. This estimate is given to the facilitator only, who then distributes a summary of the results to all participants. All participants read the anonymous responses provided by the others, and then resubmit their rankings to the facilitator. This process continues until the participants have reached a confident consensus. Because opinions are submitted privately, the tendency for dominant personalities to control the process is eliminated. The facilitator must provide a predefined ranking scale, such as Minimal, Severe, and Critical, to ensure that all participants use the same criteria. While this approach allows a group to arrive at a consensus quickly, one potential downside is that it may not create a complete picture of the risk. In fact, Delphi ranking should only be used in conjunction with at least one other method. A secondary concern is that a participant pool with differing backgrounds or viewpoints can lead to widely divergent results.
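
  The round-based mechanics can be sketched as a loop: anonymous estimates go to the facilitator, a summary goes back out, and rounds repeat until opinions converge. The convergence test and the way participants drift toward the group in this sketch are assumptions made purely to produce a runnable example:

    # A sketch of Delphi rounds; real consensus is judged by people,
    # not by the rough numeric spread test used here.
    def run_delphi(collect_estimate, participants, max_rounds=10):
        summary = None
        for _ in range(max_rounds):
            estimates = [collect_estimate(p, summary) for p in participants]
            if max(estimates) - min(estimates) <= 1:   # rough consensus
                return sum(estimates) / len(estimates)
            summary = sorted(estimates)                # anonymized feedback
        return sum(estimates) / len(estimates)

    initial = [1, 3, 2, 3]   # each participant's first estimate, 1-3 scale

    def estimator(first_guess, summary):
        if summary is None:
            return first_guess
        median = summary[len(summary) // 2]
        return round((first_guess + median) / 2)       # drift toward the group

    print(run_delphi(estimator, initial))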

  Category                 Description
  Da – Damage Potential    How much damage can be caused?
  R – Reproducibility      How easy is it to reproduce the threat on our own?
  E – Exploitability       How much effort is required to materialize the threat?
  A – Affected Users       How many users or installed instances of the software would be affected?
  Di – Discoverability     How easy is it for external researchers and attackers to discover the threat?

  Figure 102: DREAD Categories

  Mnemonic for DREAD

  You are using the DREADed hand ax to cut a board. On your first swing, you damage the board, and on your next swing you are able to reproduce the first strike by hitting the board in exactly the same place. You then toss the ax and exploit the damage by breaking the board in half with your hands. Turning around, you notice a group of users gasping in horror at the terrible carnage, so you quickly toss the evidence behind you so that it is not discoverable.

  Average Ranking

  A more quantitative approach – sometimes called semi-quantitative – is to calculate the average of numeric values assigned to risk ranking categories. Here, we are still using categories to rank risks, but we take it one step further by using multiple categories for each threat, with each category assigned a numerical value. A common risk categorization framework is DREAD, shown in Figure 102, which asks each participant to rank a threat using five different categories. Each category should have only a few possible values, usually ‘Low – 1’, ‘Medium – 2’ or ‘High – 3’, and each value must equate to a number. This simplification makes it easier to run through a large number of threats.

  For a given threat, once values have been assigned to each category the average of all values is calculated to give a final risk ranking number.

  Ranking = (Da + R + E + A + Di)/5
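
  This is a direct translation into code, checked against the SQL Injection row of Figure 103:

    # Average (DREAD) ranking: the mean of the five category scores.
    def average_rank(da, r, e, a, di):
        return (da + r + e + a + di) / 5

    print(average_rank(da=3, r=3, e=2, a=3, di=2))   # -> 2.6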

  Let’s look at an example to drive the point home. If we use the recommended category values, we might wind up with the values as shown in Figure 103.

  Threat               Da   R   E   A   Di   Avg Rank
  SQL Injection         3   3   2   3    2        2.6
  XSS                   3   3   3   3    3        3.0
  Cookie Replay         3   2   2   1    2        2.0
  Session Hijacking     2   2   2   1    3        2.0
  CSRF                  3   1   1   1    1        1.4
  Audit Log Deletion    1   0   0   1    3        1.0

  Figure 103: Average Ranking Example

  We can then categorize each calculated average into a High, Medium or Low bucket, and then state that we will only address High-ranked threats.

  Probability x Impact Ranking

  The last method is a true quantitative approach but is very similar to the Average Ranking method. In the simplest terms, we calculate the probability (P) of a threat materializing and multiply it by the impact (I) it will have. The formula becomes:

  ranking = Probability of Occurrence x Business Impact

  ranking = P x I

  This method is sometimes simply called ‘P x I’ ranking.

  To execute this method, we start out exactly as we did with the Average Ranking method by having all participants rank threats according to the DREAD acronym. Instead of calculating a simple average, though, we use the category values to calculate probability and impact:

  probability = R + E + Di

  impact = Da + A

  So, the final ranking for a given threat is calculated using the following formula:

  ranking = (R + E + Di) x (Da + A)
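
  Again, a direct translation into code, checked against the SQL Injection row of Figure 104:

    # Probability x Impact (P x I) ranking from DREAD category scores.
    def pxi_rank(da, r, e, a, di):
        probability = r + e + di
        impact = da + a
        return probability * impact

    # SQL Injection: (3 + 2 + 2) x (3 + 3) = 42
    print(pxi_rank(da=3, r=3, e=2, a=3, di=2))   # -> 42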

  Figure 104 shows the same example we used for the Average Ranking method but calculated using the PxI method.

  Threat               Da   R   E   A   Di   Probability   Impact   Risk Ranking
                                               (R+E+Di)     (Da+A)
  SQL Injection         3   3   2   3    2            7         6             42
  XSS                   3   3   3   3    3            9         6             54
  Cookie Replay         3   2   2   1    2            6         4             24
  Session Hijacking     2   2   2   1    3            7         3             21
  CSRF                  3   1   1   1    1            3         4             12
  Audit Log Deletion    1   0   0   1    3            3         2              6

  Figure 104: P x I Example

  We can see that XSS represents the greatest risk, followed by SQL Injection and then Cookie Replay.

  Which Method Should We Use?

  Each method has its own pros and cons. The Delphi method is quick but focuses on the business impact only, while the two other methods also take into account the probability that a threat will materialize. Although quicker than the P x I method, Average Ranking assumes that probability and impact should be weighted equally, which could result in us focusing too much on threats that will most likely never happen. P x I takes the most time but weights probability slightly more than impact. This gives the design team the ability to address a threat by simply reducing the likelihood of it occurring. For example, if the threat of an insider stealing data from a database is ranked high, we might be able to reduce this threat simply by implementing separation of duties, requiring a DBA to collude with at least one other employee to carry out the theft. This effectively lowers the ranking of the threat by reducing likelihood, without having to reduce the impact by implementing encryption or some other technical control. Additionally, P x I provides a more accurate picture of risk. As an example, look at how the Cookie Replay and Session Hijacking threats were calculated using the Average Ranking method as opposed to P x I. The Average Ranking method gave the two the exact same value, while the P x I method was able to distinguish which ranked higher.
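
  Running both formulas over those two rows makes the difference visible; the scores are taken straight from Figures 103 and 104:

    # Cookie Replay vs. Session Hijacking: same average, different P x I.
    rows = {
        "Cookie Replay":     dict(da=3, r=2, e=2, a=1, di=2),
        "Session Hijacking": dict(da=2, r=2, e=2, a=1, di=3),
    }

    for name, s in rows.items():
        avg = (s["da"] + s["r"] + s["e"] + s["a"] + s["di"]) / 5
        pxi = (s["r"] + s["e"] + s["di"]) * (s["da"] + s["a"])
        print(f"{name}: average={avg}, PxI={pxi}")

    # Cookie Replay:     average=2.0, PxI=24
    # Session Hijacking: average=2.0, PxI=21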

  Phase 4: Document and Validate

  The last threat modeling phase is concerned with two things - documenting the results of the first three phases and validating that gaps have not occurred.

  Documentation is key to threat modeling as it is an iterative process, and if on the next iteration we can’t point to the results of the previous iteration on which to build, we might as well pack it up and head home. Documentation can be recorded in two formats – diagrams or text. Text is great for details, but diagrams provide much-needed context; both are needed to understand the results of threat modeling. Create a diagram for each threat, and then use text to expand on the details. A template should be used to enforce consistency while capturing the following attributes (a minimal sketch of such a template in code follows the list):

  Type of threat.

  Unique identifier.

  Description.

  Threat target.

  Attack techniques.

  Security impact.

  Likelihood of materialization.

 
  Possible controls to implement (if available).
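
  A template like this is easy to enforce as a structured record. The sketch below mirrors the attribute list above, with field names and example values drawn from Figure 105; the class itself is an illustration, not the author’s:

    # A sketch of the threat documentation template as a dataclass.
    from dataclasses import dataclass, field

    @dataclass
    class ThreatRecord:
        identifier: str          # unique identifier, e.g. "TID0032"
        threat_type: str         # type of threat
        description: str
        targets: list            # threat target(s)
        attack_techniques: list
        security_impact: list
        likelihood: str          # likelihood of materialization
        controls: list = field(default_factory=list)   # if available

    record = ThreatRecord(
        identifier="TID0032",
        threat_type="Injection",
        description="Injection of SQL commands",
        targets=["Data access component", "Backend database"],
        attack_techniques=["Attacker appends SQL commands to the user name"],
        security_impact=["Information disclosure", "Alteration"],
        likelihood="High",
        controls=["Use parameterized queries"],
    )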

  As an example, Figure 105 describes an injection attack.

  Threat Identifier   TID0032
  Description         Injection of SQL commands
  Attack Techniques   Attacker appends SQL commands to the user name, which is used to form a SQL query
  Security Impact     Information disclosure; alteration; destruction (drop tables or procedures, delete data, etc.); bypassing authentication
  Risk                High
  Targets             Data access component; backend database
  Controls            Use a regular expression to validate the user name; disallow dynamic construction of queries using user-supplied input without validation; use parameterized queries

  Figure 105: Threat Documentation

  The second part of Phase 4 is to validate the threat model. This involves ensuring five things:

  The application architecture is accurate and up-to-date.

  Threats have been identified across each trust boundary and for each data element.

  Each threat has been explicitly considered, and a decision to accept, mitigate, avoid or transfer has been made.

 
