
Sharks in the Moat


by Phil Martin


  Technological requirements include the encoding and display of text. The original character encoding scheme was ASCII, which supported up to 128 characters using seven bits per character. That works well for an alphabet consisting of 26 uppercase and lowercase letters, 10 numerical digits and a host of other symbols, which describes the English language perfectly. But when we move into Europe, we quickly find languages that require more support, and so the Latin-1 standard was developed, which allows up to 256 characters by using all 8 bits of each byte. Unfortunately, Asian languages such as Japanese or Chinese require thousands of unique characters, leading to the Unicode standard, which originally used 16 bits for each character, allowing up to 65,536 unique characters. Unicode was later extended well beyond that limit and today defines a code space of more than one million possible characters. To make all of the various standards play nice together, Unicode supports three different encoding forms:

  UTF-8, a variable-length form using one to four bytes per character; its first 128 characters are identical to ASCII, so plain English text is unchanged.

  UTF-16, which uses two bytes for most characters and four for the rest, and is common for Asian languages.

  UTF-32, a fixed four bytes per character, for extra-terrestrial languages (I might be exaggerating a bit, but who needs that many characters?).

  UTF-8 is the default encoding for the Hyper Text Markup Language, or HTML. Obviously, the larger character sets use more memory. These various formats have a direct impact on security, as we must make sure that software uses the correct encoding to prevent spoofing, overflows and canonicalization attacks.
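The practical difference between these forms is easy to demonstrate. The following sketch uses nothing but Python's standard `str.encode` method to compare how many bytes the same characters occupy in each form (note that UTF-8 and UTF-16 are variable-length in practice):

```python
# Compare the byte cost of the same character in each Unicode encoding form.
# 'A' is plain ASCII, 'é' is a Latin-1 era character, '日' is an Asian character.
for text in ["A", "é", "日"]:
    print(text,
          len(text.encode("utf-8")),     # 1 to 4 bytes per character
          len(text.encode("utf-16-le")), # 2 or 4 bytes per character
          len(text.encode("utf-32-le"))) # always 4 bytes per character
```

Running this shows 'A' costing one byte in UTF-8 but four in UTF-32, which is exactly why the larger forms use more memory.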

  Another challenge with displaying international text is the direction in which text flows. The majority of western languages flow from left to right, top to bottom. Traditional Chinese layout flows from top to bottom in columns ordered from right to left. Other languages, such as Hebrew, flow from right to left, and mixed bidirectional text changes direction depending on the content. Requirements such as these must be recognized and recorded during the design phase.

  Session Management

  At first glance, it seems that the principles of complete mediation and psychological acceptability are directly at odds with each other. Complete mediation demands that every single access request be authorized against the authoritative source of permissions. Strictly speaking, for a web application this means that a user would have to provide authentication credentials each time the browser makes a request to the server. Now, most of us have enough common sense to know that there is no way anyone would use such a beast, and the principle of psychological acceptability would agree with us.

  To get around this problem, we can use a design pattern in which the application, upon a successful authentication, generates a unique token that represents the user’s credentials. Instead of having to provide their username and password over and over, the user simply provides this token, which is usually represented by a string of characters. On the backend, the server remembers the mapping. For example, a user authenticates using the ‘gmaddison’ username and the correct password, and the server creates an 8-character string of random characters ‘3DK*j!23’, called a token, and sends it back to the browser. Now, when the browser wants to request access, it simply sends the token and the server knows which account is mapped to the token. Obviously, we must encrypt the communications channel between the server and the browser, so an attacker cannot simply steal the token and use it themselves. This entire process is called a session, and will normally automatically expire after a set amount of elapsed time, or after a set amount of inactivity. With browsers, we depend on the client sending back this token on each HTTP request to the server. We have three primary mechanisms for storing and retrieving the token. Remember that a token is represented by a string of printable characters.
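The token-minting pattern just described can be sketched in a few lines of Python. The `create_session` function and `sessions` dictionary here are illustrative names, not part of any real framework; the point is simply the server-side mapping from random token to account:

```python
import secrets

# Server-side session store: token -> username. In a real system this
# would live in a shared cache or database, never on the client.
sessions = {}

def create_session(username):
    # secrets draws from the OS's cryptographically secure generator,
    # so tokens cannot be predicted from earlier ones.
    token = secrets.token_urlsafe(32)
    sessions[token] = username
    return token

# After a successful login, the browser holds only the token.
token = create_session("gmaddison")
assert sessions[token] == "gmaddison"
```

On each subsequent request the browser presents the token, and the server looks up which account it maps to.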

  We can put the token in the URL of the web page. This is a really bad idea: the token is highly visible in the address bar, which encourages the user to play around with it; it is stored as part of any bookmark and in browser history; and because the URL is automatically accessible through the ‘referrer’ server-side variable, it can leak to other sites and into server logs.

  We can embed the token in the HTML form itself, as either a hidden form variable or as a JavaScript variable. This is better than the URL, but does not help much with security, as any half-way decent attacker knows how to view and manipulate the page contents.

  We can store the token in a cookie, which is simply a small piece of data the browser stores on the client’s computer and sends with each and every request to the originating server. This is a little harder to access than a form variable, but not by much. This is by far the most common mechanism for implementing sessions when the client is a browser.

  Figure 81: Client-side Session Token Storage Possibilities

  All three approaches, shown in Figure 81, are vulnerable to sniffing if the communications channel is not protected by TLS, and if the attacker can log onto the user’s computer, he or she will be able to steal the token in all three cases. This is why the token must have an automatic expiration period, usually measured in minutes. 20 minutes is often the default and is considered a good compromise between security and usability, or psychological acceptability. The token must be generated in a random manner to prevent an attacker from guessing the next token in a sequence. For example, if we simply increment a number for each new token, an attacker will deduce that the next token after ‘283921’ will be ‘283922’. Tokens are often represented by a GUID, or globally unique identifier, which is a 128-bit value usually rendered as a 32-character hexadecimal string. Session hijacking is the term used when an attacker steals an existing session token and pretends to be the user associated with it.
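When the cookie mechanism is used, the usual mitigations for sniffing and theft are expressed as cookie attributes. Here is a hedged sketch that builds the `Set-Cookie` header by hand for illustration (real frameworks do this for you; the 20-minute `Max-Age` mirrors the default mentioned above):

```python
import secrets

token = secrets.token_urlsafe(32)  # random, unguessable session token

# Attributes that limit how the browser will expose the cookie.
set_cookie = (
    f"session={token}; "
    "Secure; "          # only ever sent over TLS, defeating sniffing
    "HttpOnly; "        # invisible to JavaScript, blunting XSS theft
    "SameSite=Strict; " # not sent on cross-site requests
    "Max-Age=1200"      # 20-minute automatic expiration
)
print(set_cookie)
```

The `Secure` and `HttpOnly` flags directly address the sniffing and page-manipulation risks described for the three storage options.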

  Some good examples of session management requirements are the following:

  “All user activity must be uniquely tracked.”

  “The user should not be required to provide user credentials once authenticated until the session expires.”

  “Sessions must be explicitly abandoned when the user logs off.”

  “Session identifiers used to identify user sessions must not be passed in clear text or be easily guessable.”

  Weaknesses in authentication usually result not from improper implementation of the initial sign-in, but rather from secondary mechanisms such as sign-out, password management, session timeouts, ‘remember me’ functionality, secret questions and user account updates. A failure in any one of these areas could result in the discovery and control of existing sessions. By executing session hijacking, an attacker can insert himself into the middle of the conversation between the user and the backend system, impersonating a valid entity to either party. The man-in-the-middle attack, or MITM attack, is the classic result of broken authentication and session management, as shown in Figure 82. If the compromised user has a high level of privileges, the attack can lead to a total system compromise.

  Figure 82: Man-in-the-Middle Attack

  Let’s run through the most common sources of session weaknesses and the best mitigation options for each vulnerability.

  Allowing more than a single set of authentication or session management controls allows access to critical sources through multiple paths, and greatly increases the likelihood that a vulnerability will be found and exploited. In other words, all authentication and session creation logic should be contained within a single server-side area that is reused as-needed. Multi-factor authentication and role-based access control must be supported by the system. Never try to roll your own mechanisms – if there is a proven third-party tool, use it. This follows the principle of leveraging existing components, and if it is not followed, there is a much greater chance of vulnerabilities being introduced into the system.

  Ideally, all program logic should be separated by role. For example, do not use the same mechanism to retrieve an employee name and to retrieve the employee’s social security number. If access is logically separated by a role, then separate the code as well. In this case, you would have one function to return basic employee information, and a completely different function that returns sensitive employee information. While this could increase the lines of written code, it actually decreases the attack surface in the long run. This approach is an example of both the least common mechanism and separation of duty principles.

  Transmitting clear text authentication credentials and session IDs over the network is a clear sign that security was not baked in from the beginning. Hiding a session ID in a cookie or a hidden form field does no good – the data is still sent as clear text for every transaction between the client and server. You must encrypt either the data or the entire communications channel.

  Exposing session IDs in the URL by rewriting the URL is not as prevalent as it used to be but is still a concern. An example is putting the session ID as part of the querystring where it can be easily read and modified. When a developer realizes that the URL is itself encrypted when using TLS, he or she is tempted to go ahead and use it as a vehicle for sending the session back and forth. Placing the ID in a cookie or hidden form field does not provide any more security, but it at least discourages the casual user from experimenting with the easily-accessed value. Failure to encrypt the channel over which session IDs are sent back and forth will open the door to session hijacking and session replay attacks. Keep in mind that XSS mechanisms can also be used to steal authentication credentials and session IDs. If some of these terms are unfamiliar, don’t worry too much – we will discuss them in just a bit.

  Storing authentication credentials without using a hash or encryption is a significant weakness. A strong hash is preferred over encryption, as the use of encryption implies the ability for someone to decrypt and discover the original password. A strong, salted one-way hash from the SHA-2 family is the ideal way to go.
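As a sketch of this advice, here is a salted, SHA-256-based one-way scheme using Python's standard `hashlib.pbkdf2_hmac`. The function names and iteration count are illustrative choices, not a prescription; production code should follow current key-stretching guidance:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    # A fresh random salt per user defeats precomputed rainbow tables;
    # the high iteration count makes each guess deliberately expensive.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, stored, iterations=200_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse")   # store salt + digest only
assert verify_password("correct horse", salt, stored)
assert not verify_password("wrong guess", salt, stored)
```

Note that only the salt and digest are persisted; the original password is never recoverable, which is exactly the property that makes hashing preferable to encryption here.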

  Figure 83: Improper and correct encryption of connection strings

  Hard-coding credentials or cryptographic keys in clear text inside code or configuration files is a terrible idea, but is more common than you might believe. Often a prototype turns into a production deployment, and the original hacks are never removed. In other cases, a naïve developer believes security through obscurity is a good idea. A common pattern is to store database connection strings in a configuration file that embeds database credentials. These types of settings must always be encrypted in the configuration file and decrypted at run-time. The encryption key must be stored outside of the source code and retrieved at run-time as well, as shown in Figure 83.

  Generating passwords or session IDs with a non-random or pseudo-random mechanism is a common weakness. Many developers think that any random library function will provide proper protection, but standard pseudo-random generators are deterministic: anyone who learns or guesses the seed, such as the time of day in milliseconds, can reproduce the entire sequence. Passwords and session IDs should instead come from a cryptographically secure random source. When generating session IDs, use a unique, non-guessable pattern – any pattern that simply increments each new value from the last is essentially useless. Do not use easily-spoofed identifiers such as an IP address, a MAC address, referrer headers or DNS lookups. In the best cases, use tamper-proof hardware to create the tokens.
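In Python, for instance, the standard `secrets` module and `uuid.uuid4` both draw from the operating system's cryptographically secure generator, making them reasonable sources for the tokens described above, unlike the ordinary `random` module:

```python
import secrets
import uuid

# CSPRNG-backed: suitable for session IDs and tokens.
session_id = secrets.token_hex(16)  # 128 bits, rendered as 32 hex characters
guid = uuid.uuid4()                 # a random GUID, also CSPRNG-backed

# The plain `random` module is a seeded pseudo-random generator and
# must never be used to mint session IDs, however it is seeded.
print(session_id, guid)
```

Two calls to either function will essentially never collide, and neither value can be predicted from earlier outputs.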

  Using weak account management functions such as account creation, changing passwords and password recovery is also a common weakness. All of the hashing and encryption in the world does little good if the functions surrounding that security are weak. Users should be required to re-authenticate when attempting to change sensitive account information such as passwords. When crossing authentication boundaries, session IDs should always be forcibly retired and a new one generated if needed. For example, if a system generates a session ID for every anonymous user, and then a user logs in, they have just crossed an authentication boundary. In many implementations, pre-authentication traffic is not encrypted using TLS, and therefore a session ID can be easily stolen. If we continue to use the same ID after authentication, an attacker can simply grab a pre-authentication session ID, wait for the user to authenticate, and then impersonate the user with increased privileges without ever knowing the actual authentication credentials. This attack is known as session fixation. Likewise, when a user logs out, the session ID must be immediately retired, and a new one assigned if needed after the authentication boundary has been passed.
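The fixation defense just described, retiring the old ID and minting a fresh one at the authentication boundary, can be sketched as follows. The dictionary-backed session store and `login` function are illustrative only:

```python
import secrets

sessions = {}  # token -> username (None means anonymous, pre-authentication)

def login(old_token, username):
    # Retire the pre-authentication ID so a stolen copy becomes worthless.
    sessions.pop(old_token, None)
    # Mint a brand-new, unrelated ID for the authenticated session.
    new_token = secrets.token_urlsafe(32)
    sessions[new_token] = username
    return new_token

anon = secrets.token_urlsafe(32)
sessions[anon] = None              # anonymous session before login
authed = login(anon, "gmaddison")  # crossing the authentication boundary
assert anon not in sessions
assert sessions[authed] == "gmaddison"
```

An attacker who grabbed the pre-authentication token now holds a dead ID; the privileged session lives only under the new one. The same retirement step applies in reverse at logout.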

  Insufficient or improper session timeouts and account logout implementation will lead to security gaps. To mitigate the impact of session hijacking and MITM attacks, a session should automatically expire after a set period of time, forcing the creation of a new session. An explicit logout is always safer than allowing a session to expire. Unfortunately, since a user can always simply shut a browser application down, there is no fool-proof way to force a user to explicitly log out. We can increase the chances of the user performing an explicit logout by ensuring that every page has a logout link. If possible, detect a browser window close and prompt the user to logout first. Take care that psychological acceptability is not impacted though – we don’t want to be too insistent on explicit logouts.

  When deciding on the length of time before a session expires, take into account the sensitivity of the data being protected. For example, an HR application accessing payroll information might have a very short time period before a session is forced to expire, such as 5 minutes with no activity being detected. On the other hand, an intranet application showing the company calendar might allow sessions to remain for up to 24 hours at a time.
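A minimal sketch of this sensitivity-based timeout decision, using the two example applications above (the limits, names, and dictionary lookup are all illustrative):

```python
import time

# Idle limits chosen by data sensitivity, per the discussion above.
IDLE_LIMITS = {
    "payroll": 5 * 60,        # 5 minutes: highly sensitive HR data
    "calendar": 24 * 60 * 60, # 24 hours: low-risk intranet calendar
}

def session_expired(app, last_activity, now=None):
    # last_activity and now are seconds since the epoch.
    now = now or time.time()
    return now - last_activity > IDLE_LIMITS[app]

now = time.time()
assert session_expired("payroll", now - 600, now)       # 10 minutes idle
assert not session_expired("calendar", now - 600, now)  # still well within limit
```

The key design point is that the limit is a per-application policy decision, not a single global constant.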

  Not implementing transport protection or data encryption is a serious issue. We must protect data at-rest by properly encrypting the data and storing the encryption key in a safe manner. Data in-transit must be protected with the proper encryption techniques such as TLS. Data in-use must be protected against in-memory attacks such as buffer overflows.

  Not implementing mutual authentication is a common weakness. Both parties at either end of a communications channel should be both identified and verified to prevent a MITM attack. Users should be authenticated only over a secure and encrypted channel, such as one encrypted by TLS.

  Storing information on the client without properly securing it can result in information disclosure. There is often a legitimate use case for caching data on the client in order to increase perceived performance or usability. For example, when a browser executes a backend API call that takes 20 seconds to complete, it is often a much better user experience to cache the results in the browser so that the next time the user returns to the page, it simply uses the local cache of data instead of running the rather expensive database query. This data must always be protected in terms of confidentiality and integrity. A keyed hash will provide integrity checking, while encryption provides confidentiality, and authenticated encryption can provide both. Cache windowing must be properly implemented with client-cached data to ensure it expires and forces the client to reload from the back end when appropriate.
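Integrity checking of client-cached data is commonly done with a keyed hash, so that the client cannot simply recompute the tag after tampering. A sketch using Python's standard `hmac` module; the key and field names are illustrative:

```python
import hashlib
import hmac
import json

SERVER_KEY = b"server-side secret"  # illustrative; kept out of client reach

def seal(data):
    # Serialize the cached data and attach a keyed integrity tag.
    payload = json.dumps(data).encode()
    tag = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify(payload, tag):
    # Recompute the tag server-side; any tampering changes the result.
    expected = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

payload, tag = seal({"employee": "gmaddison"})
assert verify(payload, tag)
assert not verify(payload.replace(b"gmaddison", b"attacker"), tag)
```

Because the key never leaves the server, a client that alters the cached payload cannot produce a matching tag.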

  Not implementing proper clipping levels on authentication failures can allow automated brute-force attacks to succeed. When an attacker uses brute-force methods to try to guess credentials during a login process, a system should always take measures to either lock out the impacted account or make it increasingly difficult for the attacker to carry out additional guesses. For example, when you try to guess a user password on Windows 10, each subsequent failure will seemingly take longer to come back and tell you that your attempt failed. This algorithm is purposefully designed to discourage brute-force attacks by increasing the time an attacker must spend on each iteration. That approach is not really feasible for a web-based application, as server threads cannot be tied up with purposefully-delayed tasks, and so accounts should be ‘locked out’ when a specific number of failed attempts is detected within a given time period. In this case, an out-of-band process should be required to unlock the account. This might entail receiving an email with a unique token that will unlock the account, or possibly requiring a call to the help desk. A form of DoS attack can be executed by purposefully attempting failed logins until an account is locked, so the unlock mechanism should be as easy as possible to carry out without allowing the attacker to execute it.
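The clipping-level idea, a fixed number of failures inside a sliding window triggering a lock, can be sketched like this (the thresholds and names are illustrative):

```python
import time

MAX_FAILURES = 5       # clipping level before the account locks
WINDOW = 15 * 60       # sliding window of 15 minutes, in seconds

failures = {}  # username -> timestamps of recent failed attempts

def record_failure(user, now=None):
    now = now or time.time()
    # Keep only failures that fall inside the sliding window.
    recent = [t for t in failures.get(user, []) if now - t < WINDOW]
    recent.append(now)
    failures[user] = recent
    # True means the clipping level was reached: lock the account.
    return len(recent) >= MAX_FAILURES

for _ in range(4):
    assert not record_failure("gmaddison")  # below the clipping level
assert record_failure("gmaddison")          # fifth failure trips the lock
```

Old failures age out of the window on their own, so legitimate users who mistype occasionally are never locked, while a rapid automated run of guesses is.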

  And while we are on the subject, let’s talk about improper account unlock or forgotten password mechanisms. We have three primary ways to carry out these activities, listed in order of increasing security – a self-service answer/question challenge, an out-of-band token authentication, or a manual process involving a human.

  The self-service question/answer challenge process requires the user to answer a series of questions that presumably only they will know the answer to. When this method was first used in the early web days, the same set of questions was used for everyone and a predefined list of responses was provided for the user to select from. For example, during enrollment, the user was asked the following:

  What is your favorite color?

  Blue

  Red

  Green

  Yellow

  The problem with this approach is that an attacker can eventually guess the correct answer, since there are only four options to choose from. The next evolutionary step was to allow a user to type in their own answer, such as:

  What is your favorite color?

  Type your answer here

  The next iteration that increased security allowed the user to type in both the question and answer:

 
