
Sharks in the Moat


by Phil Martin


  Chapter 39: The Architect Role

  The role of architect is arguably the most important when it comes to ensuring proper security is implemented. While the Development role contains the most ‘boots-on-the-ground’ activities to keep software secure, the architect must absorb not only everything that the Development role includes, but the DBA, Infrastructure, DevOps, Engineering Manager and Product roles as well. Additionally, there are a large number of concepts specific to the architect level that will need to be acquired and applied.

  The Need for Secure Design

  Let’s talk about why a secure design is needed, and what benefits can be gained by including such a thing early in the SDLC.

  First, security is concerned with CIA, of which the ‘A’ represents availability. Addressing security concerns early can increase the ability of software to withstand increasing load or other unanticipated stressors, which most certainly contributes to the stability of any system.

  Secondly, implementing security from the beginning forces requirements to be thought through and fleshed out before we get to the design stage, resulting in fewer re-implementation mistakes later on. Some aspects of security require certain patterns that increase resiliency. For example, security dictates that try/catch blocks be liberally used in order to create a usable audit trail. This same approach also naturally increases the ability of software to recover from unforeseen scenarios.
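As a sketch of that idea, the snippet below wraps a risky operation in a try/except block so that every outcome lands in an audit trail and a failure becomes recoverable rather than fatal. The function and account names are hypothetical, chosen only for illustration.

```python
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

def transfer_funds(account, amount):
    # Stand-in for a real business operation that can fail.
    if amount <= 0:
        raise ValueError("amount must be positive")
    return {"account": account, "amount": amount}

def safe_transfer(account, amount):
    try:
        result = transfer_funds(account, amount)
        audit_log.info("transfer ok: %s", result)
        return result
    except ValueError as exc:
        # The failure is recorded for auditors, and the caller gets a
        # recoverable outcome instead of an unhandled crash.
        audit_log.warning("transfer rejected for %s: %s", account, exc)
        return None

safe_transfer("acct-1", 100)   # logged as a success
safe_transfer("acct-1", -5)    # logged and refused; the program keeps running
```

The same pattern, applied liberally, yields both the audit trail security requires and the graceful recovery the text describes.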

  Investing in a secure design upfront supports the ‘build-in’ mindset of security, as opposed to the ‘bolt-on’ approach which seldom ends well. By building in security starting with the requirements stage, we can avoid costly and time-consuming bugs later, not to mention the inevitable lack of quality a ‘code-first’ mentality will create.

  Software Assurance Methodologies

  An assurance methodology validates that some type of attribute has been correctly implemented. In our case, we want to assure that both quality and security have been infused into our software. In this section, we are going to cover the most popular assurance approaches.

  Socratic Methodology

  The Socratic approach really has nothing to do with software directly, but it is very useful in the SDLC when individuals have opposing views on the need for security in the software being designed and built. Also known as the Method of Elenchus, this methodology attempts to spark ideas and rational thought through cross-examination. Here’s how it works – the person with the opposing viewpoint is asked a question in the negative form of their own question. For example, if the opposing viewpoint asks, “Why do I have to wear a space suit when visiting the space station?”, then the other person, instead of listing the various reasons for the need to continue breathing, asks them, “Why is it you think you should NOT wear a space suit?” This often kindles a completely different line of thought that might have otherwise not come up. Beyond settling differences, this approach also can be used to analyze complex concepts and determine security requirements.

  Six Sigma (6 σ)

  In the 1980s total quality management, or TQM, came on the scene, but it was eventually replaced by Six Sigma. Its primary objective is to measure process quality using statistical calculations, and it works to identify and remove defects. A sigma rating is applied to a process to indicate the percentage of defects it contains.

  Six Sigma contains two sub-methodologies to achieve high quality. The first is called DMAIC, which stands for define, measure, analyze, improve and control. This approach is used to incrementally improve existing processes.

  The second is DMADV, an acronym for define, measure, analyze, design and verify, and is used to develop new processes. It can also be used for new versions of existing products or services when more than just an incremental improvement is needed.

  Notice the differences in the last two attributes. When working with an existing process, we improve and control. When creating a new process, we design and verify.

  It should be noted that an application can be of Six Sigma quality and still remain insecure if the requirements do not include security needs.

  Capability Maturity Model Integration (CMMI)

  The capability maturity model integration, or CMMI, was created by Carnegie Mellon University for the US Department of Defense and determines the maturity of an organization’s processes. This tool is more heavily used within the security industry than either ITIL or Six Sigma, and CMMI is designed to make improvements in an incremental and standard manner.

  This framework helps organizations reach an elevated level of performance. This is done by benchmarking current capability performance, comparing those results with best practices, and then identifying gaps. CMMI recognizes that it is difficult to become “better”, because “better” is hard to quantify or measure. It therefore provides a way to categorize how mature each process is and provides a holistic view of all process maturity side-by-side. It has five maturity levels, and by assigning a maturity level to existing capabilities, a road map can be created to get the organization to higher levels and achieve more effective processes. Figure 107 shows all maturity levels along with a short description of the effectiveness an organization has achieved when that level is reached.

  Figure 107: Characteristics of CMMI Maturity Levels

  Five levels are defined:

  The initial Level 1, in which we have an unpredictable, poorly controlled and reactive process.

  The managed Level 2, where we still have a reactive process, representative of most projects.

  The defined Level 3, where we first have a proactive process, which is characteristic of most organizations.

  The quantitatively managed Level 4, which has a measured and controlled process.

  The optimizing Level 5, where we encounter a process focused on continuous improvement.
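The five levels above can be captured as a simple lookup table, which makes it easy to turn an assessed maturity level into a road map of the levels still to climb. This is a minimal sketch; the names and descriptions are just those from the figure.

```python
# The five CMMI maturity levels as a lookup table.
CMMI_LEVELS = {
    1: ("Initial", "unpredictable, poorly controlled, reactive"),
    2: ("Managed", "reactive, typical of individual projects"),
    3: ("Defined", "proactive, typical of whole organizations"),
    4: ("Quantitatively Managed", "measured and controlled"),
    5: ("Optimizing", "focused on continuous improvement"),
}

def maturity_gap(current_level, target_level=5):
    """Return the level names still to be reached to hit the target."""
    return [CMMI_LEVELS[n][0] for n in range(current_level + 1, target_level + 1)]

print(maturity_gap(2))  # the road map from Level 2 up to Level 5
```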

  Operationally Critical Threat, Asset and Vulnerability Evaluation (OCTAVE)

  The operationally critical threat asset and vulnerability evaluation, or OCTAVE, is another approach to risk assessment. OCTAVE is great when we need a well-established process to identify, prioritize and manage information security risk, and it contains three phases:

  Phase 1 locates all assets and builds a threat profile for each.

  Phase 2 locates all network paths and IT components required for each asset, and then figures out how vulnerable those components are.

  Phase 3 assigns risk to each asset and decides what to do about it.
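The three phases can be pictured as successive transformations over the asset list: build a threat profile per asset, map the components each asset depends on, then score and prioritize risk. Everything below (asset names, threats, scores) is hypothetical data used only to show the shape of the process.

```python
# Hypothetical assets under evaluation.
assets = ["customer-db", "payment-api"]

# Phase 1: build a threat profile for each asset.
profiles = {a: {"threats": []} for a in assets}
profiles["customer-db"]["threats"].append("sql-injection")
profiles["payment-api"]["threats"].append("credential-stuffing")

# Phase 2: record the network paths and IT components each asset requires.
components = {
    "customer-db": ["db-server", "internal-vlan"],
    "payment-api": ["web-server", "dmz", "load-balancer"],
}

# Phase 3: assign a (hypothetical) risk score and decide what to handle first.
risk = {"customer-db": 9, "payment-api": 7}
plan = sorted(assets, key=lambda a: risk[a], reverse=True)
print(plan)  # highest-risk asset first
```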

  STRIDE

  STRIDE was covered in the Product role, so reference that topic if you need a review.

  DREAD

  DREAD was also covered in the Product role, so reference that material if you need a review.

  Open Source Security Testing Methodology Manual (OSSTMM)

  The Institute for Security and Open Methodologies, or ISECOM, created the open source security testing methodology manual, or OSSTMM, as a testing methodology for conducting security tests and measuring the results using the correct metrics. Beyond providing a scientific methodology, OSSTMM provides guidelines for auditors to ensure the tests themselves are valid. The final output is the Security Test Audit Report, or STAR.

  Flaw Hypothesis Method (FHM)

  The flaw hypothesis method, or FHM, uses penetration testing to evaluate the security strength for a given system and is very useful when certifying software. Not only can weaknesses be discovered, but the process can be used to create security requirements for future versions. FHM has four phases.

  In Phase 1 we read the documentation and hypothesize on the flaws we will find. The documentation can be internal or externally sourced. Something called the deviational method is used during this phase, in which mis-use cases are used to generate potential flaws.

  In Phase 2 we confirm hypothesized flaws by carrying out simulated penetration tests and desk checking the results. Desk checking affirms program logic by executing logic using sample data. If a flaw is deemed exploitable, it is marked ‘confirmed’ and those that cannot be confirmed are marked as ‘refuted’.
  Phase 3 is where we use the confirmed flaws to uncover additional weaknesses.

  And finally, in Phase 4 we address confirmed flaws by adding countermeasures in the current version, or design in safeguards in future versions.

  Note that FHM can only uncover known weaknesses because it starts with known features or behaviors. However, this approach can be very useful when trying to play catch-up with applications that have already been deployed.
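The Phase 2 bookkeeping can be sketched as a table of hypothesized flaws, each marked ‘confirmed’ or ‘refuted’ after its simulated test. The flaw names are invented, and the lambdas stand in for real penetration tests.

```python
# Hypothesized flaws from Phase 1, paired with a stand-in "penetration test"
# that returns True when the flaw proves exploitable.
hypothesized = {
    "unauthenticated admin page": lambda: True,   # simulated test succeeds
    "weak session tokens": lambda: False,         # simulated test fails
}

# Phase 2: run each test and mark the flaw confirmed or refuted.
results = {
    flaw: ("confirmed" if test() else "refuted")
    for flaw, test in hypothesized.items()
}

# Phase 3 begins from the confirmed flaws to uncover related weaknesses.
confirmed = [f for f, status in results.items() if status == "confirmed"]
print(confirmed)
```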

  Operating Systems

  The Developer role discussed at-length the architecture of a computer, and how the heap and stack memory is used to allocate memory for buffers. We’re going to add a little bit more information on that topic and ways to mitigate such threats.

  Input/Output Device Management

  Remember that the OS must manage input/output devices, such as serial ports and network cards. I/O devices will be either block or character devices. A block device such as a hard drive exposes data in fixed-block sizes, and each block has a unique address. A character device, such as a printer, operates using a stream of characters only. When an application needs to use an I/O device, it will communicate with the OS, which then communicates with a device driver. The device driver is very low-level software that knows all the specifics about the device.

  An interrupt is an event that the OS detects. One source of interrupts is I/O devices – the device will send a signal across the bus to the CPU saying ‘Hey, I need attention’ – that is why we call them interrupts, because they ‘interrupt’ the CPU and force it to pay attention. However, if the CPU is busy and the device’s interrupt is not a higher priority than the job already being worked on, then the CPU simply ignores it.
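A toy model makes the priority rule concrete: the CPU services an interrupt only if it outranks the job currently running. The class, device names and priority numbers are illustrative, not any real architecture.

```python
# A toy CPU that honors only interrupts of higher priority than its current job.
class ToyCPU:
    def __init__(self):
        self.current_priority = 0  # idle

    def run_job(self, priority):
        self.current_priority = priority

    def interrupt(self, device, priority):
        if priority > self.current_priority:
            return f"servicing {device}"
        return "ignored"  # lower-priority interrupts are simply ignored

cpu = ToyCPU()
cpu.run_job(priority=5)
print(cpu.interrupt("network card", priority=3))  # ignored
print(cpu.interrupt("disk", priority=7))          # servicing disk
```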

  Operating systems can service I/O devices in several ways:

  Programmable I/O – the CPU will poll the device periodically to see if it is ready; very slow.

  Interrupt-Driven I/O – the CPU will send a command, and when the device is ready for another command it sends an interrupt back to the CPU; faster, but still not very fast.

  I/O Using DMA – the direct memory access (DMA) controller feeds data to memory that both the DMA and the device share without having to bother the CPU; may also be called unmapped I/O.

  Premapped I/O – the CPU gives the physical memory address of the requesting process to the device, and they then communicate directly; fast but insecure.

  Fully Mapped I/O – same as premapped I/O, but instead of sharing physical memory addresses, the CPU will only give out logical memory addresses to both the process and device – it does not trust either.

  CPU Architecture Integration

  An operating system is software, while the CPU is hardware. Therefore, for them to work together, the OS must be written exactly for a specific type of CPU. The glue that binds the two together is called an instruction set – a language that both the OS and CPU understand. One example is the x86 instruction set, which works with both Intel and AMD CPUs and with operating systems such as Windows, OS X and Linux. All the things that make up the CPU – registers, ALU, cache, logic gates, etc. – are referred to as the microarchitecture. The OS talks to the microarchitecture using an instruction set.

  Operating systems are made up of multiple layers, with varying degrees of trust. For example, both the memory mapper and registry editors are part of the Windows OS, but Windows must have a higher level of trust in the memory mapper than a registry editor. So how does an OS implement multiple layers of trust, even within its own components? The answer is that the OS has layers we call rings. Ring 0 contains the heart of the OS – its kernel – along with access to physical memory, devices, system drivers and some very sensitive configuration parameters. This is the most trusted and protected of all the rings. A process running in Ring 0 is said to be running in kernel mode.

  The next ring is called Ring 1, then Ring 2, Ring 3 and so forth. The maximum number of rings is dictated by the CPU architecture, but the OS may choose to ignore some rings. For example, Windows uses rings 0 and 3 only, and completely ignores rings 1 and 2. Different OSs will choose to use rings differently, but they all operate on the same basic principle – the higher the ring number, the further away from the core it is, the less trusted it is and the less power processes running there have. Additionally, processes in an outer ring cannot directly contact processes in an inner ring, but processes running in an inner ring can have direct contact with processes in an outer ring if they wish. Now, a process in Ring 3 can certainly communicate with Ring 0, but not directly – the message must go through a gatekeeper which will inspect the message for security violations first. The gatekeeper is usually called an application programming interface, or API.

  Remember kernel mode? It is used to describe processes running in ring 0. Well, processes running in ring 3 (for Windows, OS X and most versions of Linux) are referred to as running in user mode. When a process is registered in the process table, the PSW stores the mode the process is running in – kernel or user. The CPU will then disallow certain instructions based on the mode a process is running under. Obviously, the OS Holy Grail for attackers is to get their process to load under ring 0 and operate in kernel mode. One method to do this is to replace kernel DLLs or module files with their own code. Once the OS loads that code, the attacker pretty much has complete control of the system. When we refer to the resources that a process has access to, we are referring to the process’ domain. The further out a ring is, the larger the domain that processes running in that ring have access to.
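The gatekeeper idea can be sketched as a small function: a user-mode (ring 3) caller can only reach ring 0 through an API that inspects the request first, while kernel-mode code needs no mediation. The operation names and the allow-list are entirely hypothetical.

```python
# Hypothetical allow-list of operations a user-mode process may request.
ALLOWED_SYSCALLS = {"read_file", "write_file"}

def syscall_gatekeeper(caller_ring, operation):
    """Inspect a request from an outer ring before passing it to ring 0."""
    if caller_ring <= 0:
        return "direct access"        # kernel-mode code bypasses the gatekeeper
    if operation not in ALLOWED_SYSCALLS:
        return "denied"               # security violation caught at the API
    return f"kernel performed {operation}"

print(syscall_gatekeeper(3, "read_file"))      # mediated and allowed
print(syscall_gatekeeper(3, "patch_kernel"))   # denied at the gate
print(syscall_gatekeeper(0, "patch_kernel"))   # ring 0 is trusted
```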

  Operating System Architectures

  We previously examined the system architecture, which includes hardware, software and firmware. Now, let’s focus on just the operating system architecture, shown in Figure 108. We have already discussed kernel vs user modes, and what components run in each of those modes is really the biggest difference when discussing the various OS architectures. In a monolithic architecture, all processes work in kernel mode. Early operating systems such as MS-DOS were monolithic, and suffered from:

  Lack of modularity – difficult to update.

  Lack of portability – difficult to port to another hardware platform due to lack of abstraction.

  Lack of extensibility – hard to add functionality due again to lack of abstraction.

  Unstable and insecure – since everything ran in kernel mode, one process could bring down the entire OS.

  As a result, architects came up with the layered operating system, in which functionality was divided into 5 layers, similar to rings. This addressed the issues of modularity, portability and extensibility, but the entire OS still ran in kernel mode, so it was still somewhat unstable and insecure. However, at least applications resided outside of the OS, providing some type of data hiding, or abstraction. Unfortunately, the layered approach had some significant drawbacks – due to the multiple layers, performance suffered, it was very complex, and security still had not been addressed.

  The next OS evolution saw the OS kernel shrink so that only the most critical processes ran in kernel mode, and complexity was reduced as a side-effect. Unfortunately, due to the small size of the kernel, the number of user-to-kernel mode transitions was so great that performance became unacceptable.

  So, the hybrid microkernel architecture was invented. With this architecture, the microkernel remains small to reduce complexity, but it is not the only resident in ring 0 (kernel mode) – the other services in ring 0, called executive services, communicate with the microkernel in a type of client-server model. This avoids the excessive user-to-kernel mode transitions but keeps the microkernel small and nimble.

  Figure 108: Operating System Architecture

  To summarize, we have four different OS architectures:

  Monolithic – everything is in kernel mode.

  Layered – only the OS is in kernel mode and is in layers.

  Microkernel – a medium-sized kernel is in kernel mode.

  Hybrid microkernel – a very small kernel and executive services run in kernel mode.

  Address Space Layout Randomization (ASLR)

  If you recall, before a process can execute, it must be loaded into memory at a specific memory address. For most memory exploits and malware to be successful, the attacker will need to know that memory address. While it might seem unlikely that a remote attacker could gain such information, early on it became obvious that processes tended to be loaded into the same memory location each time due to the desire of an OS to optimize memory use. Address space layout randomization, or ASLR, is a memory management technique implemented at the OS level and is designed to change up the memory locations that a given process is loaded into. ASLR has been implemented in both the Windows and Linux operating systems.
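A simplified model of the technique: each “load” picks a randomized, page-aligned base address, so an attacker cannot rely on the process landing in the same place twice. The address range and page size here are illustrative, not any real OS layout.

```python
import random

PAGE = 0x1000  # illustrative 4 KiB page size

def randomized_base(low=0x10000000, high=0x70000000):
    """Pick a page-aligned load address somewhere in the allowed range."""
    pages = (high - low) // PAGE
    return low + random.randrange(pages) * PAGE

addr = randomized_base()
print(hex(addr))  # varies from run to run, always page-aligned
```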

  Data Execution Prevention (DEP), and Executable Space Protection (ESP)

  When someone carries out a successful buffer overflow attack, the memory area directly following a legitimate storage location, such as a buffer on the stack, is overwritten with the attacker’s code. This adjacent location might be mistaken for executable code that the OS unwittingly will execute. To protect against this exploit, Windows will use data execution prevention, or DEP, to mark the area outside of the buffer as being off-limits to execution. In this manner, even if an attacker manages to overflow the buffer, the extraneous code will not be executed. DEP is also implemented in the Unix and Linux operating systems, but is called executable space protection, or ESP. DEP and ESP can be implemented in either software or hardware.
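The mechanism can be modeled as pages that carry permissions, with the “CPU” refusing to execute from any page marked as writable data. The page names and permission flags below are purely illustrative.

```python
# A toy page table: code pages are executable, data pages are not.
pages = {
    "code":  {"executable": True,  "writable": False},
    "stack": {"executable": False, "writable": True},
}

def execute(page_name):
    if not pages[page_name]["executable"]:
        return "fault: execution blocked"   # an overflowed payload never runs
    return "executed"

print(execute("code"))   # legitimate code runs
print(execute("stack"))  # injected code in a data page is refused
```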

 
