

  If (!busy)
  {
      // The check above and the assignment below are not a single atomic
      // operation, so two threads can both see busy == false and enter this
      // block at the same time - a classic race condition.
      busy = true;

      // do something here

      busy = false;
  }

  Leverage multi-threading and thread-safe capabilities, and abstract shared variables. Many languages have native primitives or objects designed to keep critical sections of code from being executed by more than one thread at a time (a short sketch using a mutex follows these recommendations).

  Minimize the use of critical sections and shared resources.

  Avoid infinite loop constructs. No developer would intentionally write an infinite loop, but the more complex the logic, the easier it is to find yourself in one. Avoid looping based on more than one logic check.

  Implement the principle of economy of mechanisms. This keeps code as simple as possible, thereby reducing the chances of creating circular dependencies between two components or code blocks.

  Implement proper error and exception handling. This prevents information disclosure that might help an attacker identify and exploit a race condition.

  Carry out performance, load and stress testing. This ensures that software will perform reliably when stressed in a production environment. Stressing the software with load until the breaking point can help ferret out race conditions.
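
  To illustrate the first recommendation above, here is a minimal sketch in C++ (the function name and the busyLock mutex are made up for illustration) of how a native primitive can protect a critical section:

  #include <mutex>

  std::mutex busyLock;   // guards the critical section below

  void doWork()
  {
      // lock_guard acquires the mutex here and releases it automatically when
      // the function returns, so only one thread at a time can execute the
      // critical section - unlike the unsynchronized 'busy' flag shown earlier.
      std::lock_guard<std::mutex> guard(busyLock);

      // do something here
  }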

  A concept closely related to a race condition is the time of check/time of use attack, or TOC/TOU attack.

  The idea is that code normally implements a two-step process to access resources:

  Step 1: Check and see if I can access a resource

  Step 2: Access the resource

  The attack happens right between steps 1 and 2. For example:

  Step 1: A process checks to see if it can access a low-value file

  Step 1A: A hacker substitutes a high-value file in place of the low-value file

  Step 2: The process opens the file

  Step 2A: The attacker reads the contents
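
  A minimal sketch of this vulnerable check-then-use pattern follows, using the POSIX access() and open() calls purely for illustration (the function and parameter names are made up):

  #include <cstddef>
  #include <fcntl.h>     // open
  #include <unistd.h>    // access, read, close

  // Step 1 checks the file, Step 2 uses it. Nothing stops an attacker from
  // swapping the file (for example, with a symbolic link to a high-value
  // file) in the window between the two calls.
  int readFile(const char* path, char* buffer, size_t length)
  {
      if (access(path, R_OK) != 0)          // Step 1: check access
          return -1;

      // <-- race window: the attacker substitutes the file here

      int fd = open(path, O_RDONLY);        // Step 2: use the resource
      if (fd < 0)
          return -1;

      ssize_t bytesRead = read(fd, buffer, length);
      close(fd);
      return static_cast<int>(bytesRead);
  }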

  Some sources for these conditions include the following:

  An undesirable sequence of events, in which an event that should follow a previous event attempts to supersede it in the order of operations.

  Multiple unsynchronized threads executing simultaneously for a process that needs to be completed atomically.

  Infinite loops that prevent a program from returning control to the normal flow of logic.

  If the requirements do not explicitly call for protection of these types of mechanisms, they will almost certainly not be implemented. Solutions that address race windows, such as the use of mutexes, should be covered in the requirements.

  Buffer Overflows

  To understand the next few topics, you will need to have already grasped computer architecture, specifically how the stack and heap operate. If that is not familiar to you, go back and review that material now.

  Anytime a variable or function is placed onto the stack or heap, it is expected that the contents of this memory allocation will be overwritten with new data. The allocated memory is just big enough to hold the data as defined by the program and no larger. However, if the allocated memory area, called a buffer, is overwritten by data that is larger than the buffer can handle, we encounter a buffer overflow condition. Unfortunately, by default the computer will just let this happen and keep on chugging. A malicious attacker can craft the data in such a way that the extra data too big to fit into the buffer is actually executable code that does something nefarious. Unaware that anything bad is going on, the computer may simply execute the rogue instructions.

  Stack Overflow

  When the memory buffer has overflowed into the stack space, it is known as a stack overflow. Here is how that happens.

  When a process is loaded into memory and executed, the instructions are placed in the program text segment of RAM, global variables are placed in the read-write data segment, and local variables and function arguments are placed on the stack. If you recall, the ESP register points to the top of the stack for the currently executing function. Any large object or an object that is of a variable size will be placed onto the heap instead of the stack.

  As the process runs, it will sequentially call each function by placing the function's data onto the stack, from the higher address space toward the lower address space, thereby creating a chain of functions to be executed in the order the programmer intended. When a function has completed, it is popped off the stack and the next function in line is executed. But here's the big question, and where we get into trouble – how does the processor know which function should be executed next? We mentioned a chain of functions, but how is that 'chain' represented? Well, it turns out that another special register within the CPU, called the Extended Instruction Pointer, or EIP (sometimes called the Execution Instruction Counter), holds the answer. The EIP points to the location in memory where the CPU should go to fetch the next instruction to be executed once the current function pointed to by the ESP is done.

  The 'gotcha' here is that a saved copy of the EIP – the return address – is placed on the stack. If an attacker can load his exploit code into memory somewhere and then modify the saved EIP to point to his code, the very next instruction executed when the current function completes will be his nefarious code. How do we do this? By intentionally overflowing the stack buffer and overwriting the saved EIP.

  This vulnerability is one of the core reasons that C and C++ are more susceptible to buffer overflow attacks than managed languages. C and C++ both have string manipulation functions, such as strcpy() and strcat(), that rely on the programmer to ensure the allocated memory is not overrun. Managed languages such as Java and .Net handle this condition automatically, so they are much less vulnerable to programmer mistakes. If you recall, we have already discussed some mitigation measures against this attack, such as the use of a canary.
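
  The following minimal sketch (the function name and buffer size are made up for illustration) shows how a single unchecked strcpy() into a stack buffer creates this opening:

  #include <cstring>   // strcpy

  // 'name' lives on the stack close to the saved return address (the saved
  // EIP). strcpy() copies until it hits a NULL terminator, so input longer
  // than 15 characters overruns the buffer and can overwrite the return
  // address with an attacker-chosen value.
  void greet(const char* input)
  {
      char name[16];
      strcpy(name, input);   // no bounds check - this is the vulnerability
      // ... use name ...
  }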

  Heap Overflow

  Whereas stack overflows can result in execution of an exploit, heap overflows are less dangerous. The heap only stores objects too large to fit into the stack space, and so normally the most damage that an attacker can cause is to overwrite objects in memory, resulting in instabilities. This will usually be the result of code not allocating sufficient memory and allowing too much data to be written to a storage address, thereby overwriting adjacent objects in memory. Some common reasons for heap overflows are the following:

  Copying data into a buffer without first checking the size.

  Accessing a buffer with incorrect length values.

  Accessing an array using an index that exceeds the original allocation. For example, if we allocate enough space for a zero-based array of 100 elements and then try to access array[100], we have exceeded our allocated memory. Remember that array[100] is attempting to reference the 101st entry in the array since it is zero-based (a short sketch of this mistake follows this list).

  Integer overflows and wraparounds can occur when the programmer does not ensure that an integer value is between the proper minimum and maximum values.

  An incorrect calculation of the original buffer size may result in a later overflow of the allocated memory.
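
  The array-index case above might look like the following sketch, where an off-by-one loop condition writes one element past the end of a heap allocation:

  void zeroValues()
  {
      int* values = new int[100];          // valid indexes are 0 through 99

      for (int i = 0; i <= 100; i++)       // off-by-one: should be i < 100
          values[i] = 0;                   // values[100] corrupts adjacent heap memory

      delete[] values;
  }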

  Regardless of the previous reasons, the biggest factor in introducing an overflow condition is not checking the length of incoming data. This is the primary mitigation against buffer overflows and includes ensuring the target buffer is big enough to handle the data, checking buffer boundaries in loops, and performing integer type checks to ensure they are within the expected range. Some programs aggressively truncate all strings if they are too large to fit into a buffer, and while this is a safe approach it can impact data integrity if we’re not careful.
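
  A minimal sketch of that primary mitigation, checking the length of incoming data before it is copied (the function and parameter names are illustrative):

  #include <cstring>

  // Rejects input that will not fit, rather than silently overflowing
  // (or truncating) the destination buffer.
  bool copyChecked(char* dest, size_t destSize, const char* src)
  {
      size_t needed = strlen(src) + 1;     // +1 for the NULL terminator
      if (needed > destSize)
          return false;                    // too large - reject the data
      memcpy(dest, src, needed);
      return true;
  }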

  Beyond programming techniques, there are a number of mitigation steps we can carry out to protect ourselves from buffer overflows.

  First, we should choose a programming language that performs its own memory management using a garbage collector. This means that memory is allocated and deallocated for us, making incorrect buffer lengths and memory leaks much less likely. If we must use a language that does not offer memory management, then we should use a proven library or framework to handle safe string manipulation, such as the Safe C String library or the Safe Integer handling packages.

  Second, we should choose a programming language that is type safe, sometimes also called strongly-typed. While purists will argue there is a difference between the two, for our purposes they are one and the same. A type safe language ensures that casts or conversions are handled properly, and that appropriate data types are declared and used. Ada, Perl, Java and .Net are examples of such languages. Of course, most languages allow a programmer to sidestep these safeguards if they really want to, so proper testing and code reviews should be carried out.

  Replace deprecated, insecure and banned API functions that are susceptible to overflow issues. A specific use case to recognize is using a function to copy strings that accepts the size of the destination buffer as an argument. If the source is exactly as large as the destination, the result may be a string that is not terminated, as there is no room in the destination buffer to hold the NULL terminator (a sketch of this pitfall follows this list). If you are not familiar with C or C++, this explanation may not make much sense.

  Design the software to use unsigned integers wherever possible, and if signed integers must be used be sure to validate both minimum and maximum values.

  Use compiler security features to prevent buffer overflows, such as Visual Studio's /GS flag, Fedora/Red Hat's FORTIFY_SOURCE GCC flag, and StackGuard.

  Use operating system features such as ASLR and DEP, which we will discuss later. Keep in mind that exploit code can randomize itself to appear innocuous to these mitigation steps.

  Use memory checking tools to prevent overrun of dynamically allocated memory, such as MemCheck, Memwatch, Memtest86, Valgrind and ElectricFence.
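
  The string-copy pitfall mentioned above (no room for the terminator when the source exactly fills the destination) might look like this sketch; the function name and sizes are made up:

  #include <cstring>

  void copyName(const char* src)            // e.g. src = "12345678"
  {
      char dest[8];

      // strncpy() stops at the size limit but does NOT add a terminator when
      // the source is as large as (or larger than) the destination, leaving
      // 'dest' unterminated so later reads run past the end of the buffer.
      strncpy(dest, src, sizeof(dest));

      // One common fix: always terminate the final byte explicitly.
      dest[sizeof(dest) - 1] = '\0';
  }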

  Missing Function Level Checks

  Another example of security through obscurity – which is a really bad idea if you need to be reminded – is hiding administrative URLs by simply not exposing them as links, while leaving the URLs themselves unprotected. In other words, if an attacker is able to guess an administrative link, they now have an elevated level of access. This is a terrible idea. A mature developer will always assume that the client can be completely bypassed – because it can be.

  Therefore, access to all functionality exposed to the client must implement complete mediation and check each and every access attempt against the user's credentials and authorized access. This is called a function level check. Note that we are not talking about securing the interface alone, but internal functions as well. When implementing a proper SOA layer, the interface is purposefully designed so it can be put together in ways the designer never imagined in the beginning – that is part of the power of a SOA approach. However, to properly secure our code we need to rely not just on the interface, but to secure everything behind the interface as well. An approach using the least common mechanism can help with this, as many privilege escalation paths are the result of a single function path being called by more than one privilege level.

  If you have a legacy application that uses security through obscurity and you don't have the resources to implement proper security, then at least make sure the naming pattern of the URLs is not easy to guess. Instead of naming an admin interface 'http://www.myweaksite.com/admin', use 'http://www.myweaksite.com/j6d7skww'. At least make it somewhat hard for an attacker to guess the right URL! Don't assume that automated tools will detect such a weakness, as they are often not set up to look for vulnerabilities such as missing function level checks.

  The use of an RBAC approach, in which roles define privileges, is much preferred over most other approaches. In this case, permissions are given to roles, and roles are assigned to users. However, for the vast majority of applications, at some point we must hard-code something if we are to implement function-level checking. For example, let's suppose we have FunctionA that should only be executable by administrators. In a worst-case scenario we could hard-code the administrator's name in code:

  If (principle.UserName == ‘Fred’) then …

  This approach is just pure evil. It would be better to create an administrator role and reference the role in code:

  If (principle.Roles.Contains('Administrator')) then …

  But the problem with this approach is that only the single, hard-coded role can access a given function, and we cannot add new roles without changing code. It would be better to use a permission that represents an administrative role, and assign that permission to roles as needed:

  If (principle.Permissions.Contains('IsAdministrator')) then …

  This is better, but still not good enough. Knowing that someone is an administrator is not granular enough. Instead, we should break down the various functions that an administrator can perform and then check for the specific permission:

  If (principle.Permissions.Contains('CanAddUser')) then …

  Now we’re talking. But we can even take this one step further and leverage an external capability to see if a given function block can be executed based on the permission:

  If (principle.CanExecute('namespace.userManagement.addUser')) then …

  In this case, the list of all possible functions is referenced by a textual name along with a matrix of permissions. The advantage of this approach is that all access control is contained in a single location that can be analyzed for holes, and we have not had to hard-code anything related to permissions other than invoking the authorization access layer. However, there is one last weakness – if we don't wrap each function in the 'CanExecute' call, then our complete mediation has holes in it. In other words, we must be sure to type in the 'CanExecute()' syntax for every function block. If we forget to do this, we have just opened a security gap and are probably not even aware of it.
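
  A minimal sketch of what such a centralized authorization layer might look like follows; the class name, function names and permission matrix are hypothetical, and a real implementation would load the matrix from configuration rather than hard-coding it:

  #include <map>
  #include <set>
  #include <string>

  class AccessLayer
  {
      // Every protected function block is referenced by a textual name, and
      // the matrix below maps each name to the permission required to execute
      // it. All access decisions live in this single location.
      std::map<std::string, std::string> requiredPermission = {
          { "namespace.userManagement.addUser",    "CanAddUser"    },
          { "namespace.userManagement.deleteUser", "CanDeleteUser" }
      };

  public:
      // Whitelist approach: any function not listed in the matrix is denied.
      bool canExecute(const std::set<std::string>& userPermissions,
                      const std::string& functionName) const
      {
          auto entry = requiredPermission.find(functionName);
          if (entry == requiredPermission.end())
              return false;
          return userPermissions.count(entry->second) > 0;
      }
  };

  Note that the weakness described above remains in this sketch: each function block must still remember to call canExecute().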

  To address this, we can go one level further if the language we are using supports some type of dependency injection technology. For example, with .Net we can leverage nInject to force all top-level function blocks to generate code at compile-time that will force a call to the access layer. This is carried out by specifying that all classes implementing a specific interface, such as 'IMustAuthorize', are protected in such a manner. The end result is that it is virtually impossible to 'forget' to implement complete mediation, as long as our access layer employs a whitelist approach – any function that is not explicitly granted access is denied.

  This approach will most definitely incur some type of run-time performance hit, but the robust security that results makes it well worth it. Of course, the access layer must be properly configured with the correct permissions for this to work.

  The above discussion assumes that we have access to the current user. But prior to authentication this approach will not be very meaningful. Instead, we must often resort to looking at the referrer or incoming URL. The referrer URL is the URL that the browser rendered prior to the latest request. For example, consider the following flow:

  A browser requests ‘https://www.mysite.com/login’ and renders the page

  A user enters their credentials and clicks ‘Submit’

  The browser POSTs the form back to ‘https://www.mysite.com/loginsubmit’

  In this case, the server will see 'https://www.mysite.com/login' as the referrer URL and 'https://www.mysite.com/loginsubmit' as the incoming, or current, URL.

  When checking for access, we might need to look at the referrer URL as a way to enforce workflow security. In this case, 'https://www.mysite.com/loginsubmit' should only come from the 'https://www.mysite.com/login' page. But attackers are sneaky, and sometimes they will use obfuscation of the URL to try to bypass such security checks. For example, if our code simply looks for the text '/login' in the referrer URL, then an attacker could trick our code by encoding it using escaped characters that fool our access-checking logic. We therefore need to ensure that our server code fully decodes URLs into their canonical, or original, forms before any type of validation is carried out.
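
  As a sketch of that last point (the helper names are illustrative, and production code should use a full canonicalization routine that also handles double encoding), percent-encoded characters are decoded before any comparison is made:

  #include <cctype>
  #include <cstddef>
  #include <string>

  // Decodes %XX escape sequences so validation runs against the canonical form
  // of the URL. Without this, a referrer of '/%6C%6F%67%69%6E' would slip past
  // a naive search for the text '/login'.
  std::string percentDecode(const std::string& url)
  {
      std::string decoded;
      for (std::size_t i = 0; i < url.size(); ++i)
      {
          if (url[i] == '%' && i + 2 < url.size()
              && isxdigit(static_cast<unsigned char>(url[i + 1]))
              && isxdigit(static_cast<unsigned char>(url[i + 2])))
          {
              decoded += static_cast<char>(std::stoi(url.substr(i + 1, 2), nullptr, 16));
              i += 2;
          }
          else
          {
              decoded += url[i];
          }
      }
      return decoded;
  }

  // Validation is performed only after decoding.
  bool cameFromLoginPage(const std::string& referrerUrl)
  {
      return percentDecode(referrerUrl).find("/login") != std::string::npos;
  }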

  Safe Code

  When internal functionality can be accessed using public APIs, special attention must be paid to the various security levels of those APIs. In general, we can group APIs into three categories – anonymous, authenticated, and administrative. Anonymous APIs require no protection in terms of an existing authentication token, such as the login API or those that provide publicly available information.

  APIs belonging to the authenticated category obviously require some type of authentication, but do not contain any functionality that is considered to be administrative. Functionality in the last category, administrative, must be highly protected with additional mechanisms. APIs in this category might be those allowing us to set up a new customer and manage billing. These APIs must undergo enhanced auditing and continuous monitoring.

  Code reviews should include checks for unsafe code that references deprecated or banned APIs. Unused functions must be removed, although code for dark features can be accepted. A dark feature is one that is rolled out to production but is not yet available for use by customers. Reviewers should look for Easter eggs or bells-and-whistles code that is not needed. A requirements traceability matrix is the best way to detect such code.

  Code Access Security

  So far, we have focused on either operating system security or security implemented within software. But there exists a middle ground where the operating system can dynamically determine if certain blocks of code will be able to access sensitive functionality.

  As an example, suppose we write a software application called 'CatInTheHat', which has two internal functions: 'Thing1()' and 'Thing2()'. We decide to install 'CatInTheHat' on multiple servers. Server 1 might decide that the 'Thing1()' code block can access the system directory while 'Thing2()' cannot, while Server 2 decides the opposite – the 'Thing2()' code block can access the system directory while 'Thing1()' is blocked.

 
