by Phil Martin
update register set balance = 2000000 where id=40921
Unfortunately, someone designed the register table so that the balance column is limited to 999,999, and the update throws an exception such as:
Table 'Register': The value 2000000 given to the column 'Balance' exceeds the maximum allowed for this field
If we were to emit this error message back to an attacker, they would immediately start clapping their hands in glee, as it just knocked hours off the work required to break in and steal data. Instead, we should log the full error and send back a generic message such as:
We have encountered a problem, and the support team has already been notified. Please try again later!
It is important that the routine encountering this error not attempt to continue processing unless additional logic has been specifically written to handle this case.
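To make this concrete, here is a minimal Python sketch of the pattern just described. The logger name, the db handle, and the surrounding function are illustrative assumptions, not code from the original system:

import logging

logger = logging.getLogger("register")  # hypothetical logger name

GENERIC_MESSAGE = ("We have encountered a problem, and the support team "
                   "has already been notified. Please try again later!")

def update_balance(db, account_id, new_balance):
    try:
        db.execute("update register set balance = ? where id = ?",
                   (new_balance, account_id))
    except Exception as exc:
        # Log the full, detailed error internally for the support team.
        logger.exception("Balance update failed for account %s", account_id)
        # Surface only the generic message, and stop processing rather
        # than continuing in an unknown state.
        raise RuntimeError(GENERIC_MESSAGE) from exc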
Chapter 23: Economy of Mechanisms
The economy of mechanisms principle can also be stated as ‘keep it simple’. The more complex a program is, the more bugs it will have and the greater number of vulnerabilities it will contain. The proper way to express this is to say that the attack surface has increased. This principle is sometimes called the principle of unnecessary complexity.
Scope creep is part of any software project, but as unneeded code accumulates, the principle of economy of mechanisms is left behind. A great way to ferret out extraneous features is to create a requirements traceability matrix, or RTM, that maps each implemented feature back to the requirement that drove it. The unnecessary 'bells and whistles' will never show up in this matrix and should be removed.
Why is the defeat of scope creep so important from a security point of view? Because as complexity increases, so do potential vulnerabilities. A simpler design also increases maintainability, which in turn decreases the amount of time required to locate and fix bugs. Modular programming, which encourages high cohesion and low coupling, supports not only the principle of least privilege but the economy of mechanisms as well. Let’s take a look at a few design guidelines for keeping complexity to a minimum.
Unnecessary functionality or unneeded security mechanisms should be avoided.
Many teams will roll out new features in a 'dark release', meaning that the feature is only accessible if you know how to get to it or have the required access. While this is an acceptable way to pilot new features to a subset of users to gauge value or interest, pilots often fail yet the code remains behind. Unfortunately, in subsequent releases this code can become accidentally enabled and introduce stability issues or new vulnerabilities. Even if code is commented out, it can be accidentally uncommented and released into production without so much as a single test being executed to validate it. Never leave disabled features in a releasable code repository.
Strive for simplicity.
If you are ever faced with a decision between more features and a simpler code base, always choose the simpler approach unless a documented requirement explicitly calls for a feature. This applies to the data model as well. Complex regular expressions can greatly reduce the number of written lines but are very difficult to maintain and properly test. In such cases, a larger number of lines of code actually decreases the complexity. There is most certainly such a thing as overly optimized code.
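As a sketch of this tradeoff, consider validating a simple date string in Python. The regular expression version is shorter, but the longer step-by-step version is far easier to read, test, and maintain. Both functions and the date format are illustrative:

import re

# Dense one-liner: short, but hard to test or maintain.
DATE_RE = re.compile(r"^(19|20)\d\d-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

def is_valid_date_regex(s):
    return bool(DATE_RE.match(s))

# More lines, but each step is obvious and individually testable.
def is_valid_date_simple(s):
    parts = s.split("-")
    if len(parts) != 3:
        return False
    if not all(p.isdigit() for p in parts):
        return False
    year, month, day = (int(p) for p in parts)
    return 1900 <= year <= 2099 and 1 <= month <= 12 and 1 <= day <= 31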
Strive for operational ease of use.
This is closely aligned with usability, in that the focus is on making the end user more comfortable. For example, implementing single sign-on (SSO) lets users authenticate once instead of repeatedly, making the software operationally easier to use.
Chapter 24: Complete Mediation
Once authentication has been completed, authorization dictates what access a subject will have to a given object. In some designs, the initial request for the object is vetted against authorization rules, but later requests make assumptions and do not check the actual permissions granted between the subject and object. The principle of complete mediation requires that every request, without exception, always invokes the full permissions check. Otherwise, a number of security gaps can be exploited by an attacker.
Consider a classic example from the early days of web applications. A user visits a web site using the following URL:
https://app.mydomain.com
The user authenticates using a username and password, and the server adds the username to all subsequent URLs, such as:
https://app.mydomain.com?user=lmaddox
Now, the user is not stupid and sees the obvious connection between the URL and the username. So, she simply substitutes her boss's name in the URL and hits:
https://app.mydomain.com?user=bshelton
Now she suddenly has access to everything only her boss is supposed to see!
These obvious types of vulnerabilities seldom exist anymore, but violation of the complete mediation principle continues, just behind the scenes. For example, the username might be stored in a cookie instead of the URL querystring. While less visible, an attacker can still intercept unencrypted traffic and steal the cookie information, since cookies are sent to the server with each and every request the browser makes.
There are proper ways to manage session state between the browser and server, and we will cover those later. For now, let's assume that we are able to maintain a secure session, and we have implemented complete mediation for each and every access request by performing a lookup query against the database. Unfortunately, we discover that the site has now slowed to a crawl, as is often the case when implementing complete mediation. So, we decide to implement caching at the server to increase performance, which is what most sites do.
For example, when a user logs in, the server caches the permissions granted to that user in volatile memory, or RAM. Every subsequent access is checked against this cached list. While much, much better than the previous examples, this still does not follow the complete mediation principle, as permissions could change while the server keeps using outdated cached information. Complete mediation requires that cached results of an authority check be treated skeptically and systematically updated when any change occurs. A partial answer to this conundrum lies in implementing caching with a very short TTL, a concept we have already covered when discussing availability. A full answer would involve the authoritative source pushing changes out to all cached copies in real time.
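A minimal sketch of the short-TTL approach, assuming a hypothetical db.load_permissions() lookup against the authoritative source:

import time

PERMISSION_TTL_SECONDS = 30  # assumption: a short TTL chosen for illustration

_cache = {}  # user_id -> (permissions, expiry time)

def get_permissions(user_id, db):
    entry = _cache.get(user_id)
    if entry is not None:
        permissions, expires_at = entry
        if time.monotonic() < expires_at:
            return permissions  # still fresh enough to trust
    # Cache miss or expired: go back to the authoritative source.
    permissions = db.load_permissions(user_id)  # hypothetical lookup
    _cache[user_id] = (permissions, time.monotonic() + PERMISSION_TTL_SECONDS)
    return permissions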
Beyond protecting against authorization and confidentiality threats, complete mediation can help protect integrity as well. For example, when a browser POSTs or PUTs a form back to a web server, access rights should be checked to ensure the user has update or create rights. In fact, the server could track the state of a transaction and prevent any changes until the transaction has completed: if a user presses the 'Buy' button on an eCommerce site multiple times, the server prevents duplicate charges.
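As a sketch of that last idea, the server below tracks which orders have already completed and refuses to process them twice. The in-memory order store and the payment stub are illustrative assumptions:

_completed_orders = set()  # assumption: in-memory store, for illustration only

def charge_card(cart):
    pass  # stub standing in for a real payment call

def place_order(order_id, cart):
    # Mediate every request, even a repeated click of the same 'Buy' button:
    # check the transaction state before allowing any change.
    if order_id in _completed_orders:
        return "Order already processed; duplicate charge prevented."
    charge_card(cart)
    _completed_orders.add(order_id)
    return "Order placed."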
To allow the complete mediation principle to protect alternate paths, all possible code paths that access privileged and sensitive resources should be identified during the design phase. Once the various paths have been identified, they should all be required to use a single interface that checks access controls before performing the requested action. As an example, when implementing a series of web services, a single white list of APIs, along with the required permissions for each, could be checked by a single function. All requests would use this single point to check for access. Since a white list contains only the functions that are accessible, if a function were inadvertently left off, the site would default to a fail-secure stance and any attempt to invoke the missing function would fail.
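A minimal sketch of such a single check point, with made-up API names and permission strings; anything not on the white list is denied by default:

HANDLERS = {
    "get_profile":   lambda user_id: {"user": user_id},
    "update_salary": lambda user_id, amount: "salary updated",
}

# Hypothetical white list: only the APIs named here are reachable at all.
API_WHITE_LIST = {
    "get_profile":   {"profile:read"},
    "update_salary": {"hr:write"},
}

def dispatch(api_name, user_permissions, **kwargs):
    required = API_WHITE_LIST.get(api_name)
    if required is None:
        # Function missing from the list: fail secure and deny.
        raise PermissionError("Unknown API: access denied")
    if not required <= user_permissions:
        raise PermissionError("Insufficient permissions")
    return HANDLERS[api_name](**kwargs)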
Complete mediation is a great way to protect the weakest link, a subject we will be discussing shortly. But keep in mind that this principle, when applied to an API, is a technological control and does not address the weakest link in an organization – people. If a person is tricked into giving their credentials away, complete mediation does not do a bit of good. Humans are the strongest control we could possibly have if they are trained in security awareness, but they become the weakest link when they are not trained.
Chapter 25: Open Design
Back in the early, wild and woolly days of the Internet, a common approach to increasing security was something called security through obscurity. This belief held that if we hide our algorithms and logic, then surely people won't be able to break in and steal our precious treasure, which is almost always data. Unfortunately, we learned the hard way that there is no such thing as an un-hackable site – it is simply a matter of how hard someone is willing to work before cracking our virtual safe wide open.
The principle of open design holds a belief that is 100% opposite of security through obscurity. It states that the security of a design should not depend on keeping the design itself secret; the design can remain open. In other words, we don't care if an attacker figures out how our code works, because security relies on the strength of the algorithm, not on knowledge of the algorithm being hidden. Back in the 1800s, a man named Auguste Kerckhoffs came up with this idea, specifically surrounding encryption algorithms. He stated that an encryption algorithm should be completely known while the key used by the algorithm should be the only secret. Kerckhoffs's Principle and the open design principle are therefore very nearly the same thing, just applied to different areas of computing.
A classic example of security through obscurity – and a great example of what NOT to do – is the hardcoding of connection strings, passwords, encryption keys and other highly sensitive bits of data within the algorithm itself. A simple application of reverse engineering or observation can quickly reveal these secrets to an attacker.
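For instance, in Python, rather than embedding a key in the source, pull it from the environment (or a vault) at runtime. The variable name APP_API_KEY is an illustrative assumption:

import os

# WRONG: a hardcoded secret can be recovered by reverse engineering.
# API_KEY = "s3cr3t-key-embedded-in-code"

# Better: load the secret from the environment at runtime.
API_KEY = os.environ.get("APP_API_KEY")
if API_KEY is None:
    raise RuntimeError("APP_API_KEY is not configured")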
Let's consider an actual moment in history in which this concept played out in the news cycle. After the 2000 presidential election, in which 'hanging chads' became the most-used phrase of the year, there was a big push for computerized voting machines to eliminate any type of physical issue. A company named Diebold was at the forefront of this push, and had the misfortune of having 40,000 lines of its source code exposed on a website by an employee. The software engineering world quickly moved in to take a look, and just about everyone walked away in disbelief.
Based on viewing the source code alone, an attacker could:
Modify the smartcards to vote more than once
Change the vote of someone else
Use passwords embedded in code
Break through incorrectly implemented encryption algorithms
Easily escalate privileges
Whether this resulted from laziness or simply pure incompetence is unknown, but everyone should use this example as a lesson learned.
Amazingly, it is still fairly easy to find the most prevalent example of security through obscurity still in use – the hidden form field. For whatever reason, some programmers think that by using the hidden HTML input tag, they have discovered an easy way to implement security. A server should always treat data sent from a browser as suspect and perform validation. It is extremely easy to construct a client that can manipulate such browser-based mechanisms.
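As a sketch, never honor a price (or any other value) echoed back from a hidden field; look the authoritative value up on the server instead. The product table and form layout here are made up:

PRICES = {"sku-123": 19.99}  # authoritative server-side prices (illustrative)

def checkout(form):
    sku = form["sku"]
    # Ignore any price posted back from the browser, hidden field or not;
    # an attacker can trivially alter it. Use the server-side value.
    price = PRICES.get(sku)
    if price is None:
        raise ValueError("Unknown product")
    return price  # charge this amount, never form.get("price")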
Now, having completely lambasted security through obscurity, we should concede that it does increase the work factor for an attacker, and is therefore not a bad thing to implement AS LONG AS it is backed by real security. Many companies feel they can implement a more secure mechanism than a publicly available standard, but such thinking has led to more than one spectacular data breach. Always use an open standard algorithm such as AES and keep the secret key secret. Open algorithms have been tested again and again by smarter people than you or me, and they are the best bet.
Let’s end this discussion with three bullet points you should follow:
The security of your software should not depend on the secrecy of the design.
Security through obscurity should be avoided, unless it is simply icing on top of real security.
The design of a protection mechanism should be open for scrutiny by members of the community. It is better for an ally to find a vulnerability than for an attacker to do the same.
Chapter 26: Least Common Mechanisms
There is a design principle for service-oriented architectures, or SOA, called autonomy. Autonomy states that a single service should not share common mechanisms with other services, as a way to remain independent and to reduce coupling.
The least common mechanism principle is similar in that it addresses two blocks of code that share a common underlying mechanism. However, this principle is not concerned with reducing coupling as much as ensuring that the proper level of privilege is respected.
For example, let’s say that we have an application that can return details about employees – not only office information such as cubicle number or phone number, but salary information as well. Obviously, it requires more privileged access for an HR user to read salary information, while anyone should be able to access contact information within the office. However, the programmer who coded this particular function saw the two sets of data as the same thing – it’s just information about an employee. So, he implemented a single function to return both, and left it up to the client to suppress whatever information should not be shown. The danger here is that a non-privileged user could possibly be given details they should not have access to if a client-side bug were to reveal it. The retrieval of the employee information is the common mechanism.
Instead, the programmer should have implemented two functions – one for retrieving public information and another for retrieving sensitive data. While this does increase the complexity of the application, it ensures that sensitive information will not be accidentally leaked due to using a common mechanism that crosses privilege levels.
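A sketch of the corrected design in Python, with a hypothetical db handle and an illustrative permission name; each privilege level gets its own mechanism and its own check:

def get_office_info(db, employee_id):
    # Public details: any authenticated user may call this.
    return db.query("select cubicle, phone from employee where id = ?",
                    (employee_id,))

def get_salary_info(db, employee_id, caller):
    # Sensitive details live behind their own function and their own check.
    if "hr:read" not in caller.permissions:  # illustrative permission name
        raise PermissionError("HR access required")
    return db.query("select salary from employee where id = ?",
                    (employee_id,))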
An interesting conundrum when limiting code by permissions is how to do it in a manner that reduces the amount of hard-coding. For example, we can define user roles in a database, and we can even assign permissions in a database. But at some point, we will be forced to hardcode something – otherwise, how can we possibly enforce security in our code base?
As an example, we could have code that checks to see if the current user belongs to the HR user role and decide whether a given function should be executed or not. But by doing so we have taken the ability to create custom user roles away from the application. We could drop down one level and check for a permission instead – permissions are assigned to a user role, and therefore we have retained the ability to leave the definition of user roles up to an administrator. But we have simply moved the problem down one layer – now permissions are hard-coded. This is probably not a huge deal as permissions are seldom created once an application is rolled out.
But some development environments allow one more level of abstraction. For example, .Net allows the developer to inject a class that is invoked for every single API request, and this class can check for permissions by name, user roles, custom attributes, etc. In this way, we can easily add our own logic, completely independent of the code that is to be executed, without hard-coding it.
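In Python, a decorator can play much the same role as that injected .Net class. This is only a sketch; the permission name and the shape of the request object are assumptions:

import functools

def requires_permission(name):
    # Declares the permission an endpoint needs, without hard-coding any
    # role logic inside the endpoint itself.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(request, *args, **kwargs):
            if name not in request.user_permissions:
                raise PermissionError(f"Missing permission: {name}")
            return func(request, *args, **kwargs)
        return wrapper
    return decorator

@requires_permission("salary:read")  # illustrative permission name
def get_salary(request, employee_id):
    ...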
The goal of least common mechanisms is to implement a solution that is as flexible as possible while not introducing undue complexity or performance hits to the system.
Chapter 27: Psychological Acceptability
If we really wanted to make an application secure, we would require the use of one-time passwords typed in from a single room that requires three levels of biometric authentication to access, and then require the entire process to be repeated for each and every request. While no one in their right mind would use such a system, it would most definitely be secure!
Though this is a silly example, it does a great job of illustrating the need to balance usability with security. In order for an application to be useful, we must purposefully choose NOT to implement all of the security that we could. The psychological acceptability principle aims to maximize the adoption of a system by users by ensuring security:
Is easy to use.
Does not impact accessibility.
Is transparent.
A great example of security that is not psychologically acceptable is when we over-rotate on password strength.
If we require users to remember passwords that are 15 characters in length and use all four sets of characters – uppercase letters, lowercase letters, symbols and numbers – we might feel secure that brute-force attacks will no longer work. That good feeling will last until we realize that users are writing down their passwords on little yellow notes and sticking them to their monitors in plain sight. Perhaps brute-force attacks are no longer a concern, but that is only because an attacker now has an easier path – just read the stupid password from someone's monitor!
We humans are interesting creatures, and if we find something overly annoying, we will try to turn it off or go around it. Therefore, security measures should ideally not make accessing a resource any more difficult than if the mechanism did not exist. Unfortunately, complete transparency is seldom possible, and in these cases it is up to the designer to make it as easy as possible for the user to incorporate the security mechanisms into their everyday workflow. For example, if a security mechanism requires a user to enter a strong password, providing helpful popup tips in the user interface about what is required can ease the burden.
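A small sketch of that last suggestion: instead of rejecting a password with a bare 'invalid', return specific tips the interface can display. The policy mirrors the 15-character, four-character-set example above; the function and message text are illustrative:

def password_feedback(password):
    # Return helpful, specific tips instead of a bare rejection.
    tips = []
    if len(password) < 15:
        tips.append("Use at least 15 characters.")
    if not any(c.isupper() for c in password):
        tips.append("Add an uppercase letter.")
    if not any(c.islower() for c in password):
        tips.append("Add a lowercase letter.")
    if not any(c.isdigit() for c in password):
        tips.append("Add a number.")
    if not any(not c.isalnum() for c in password):
        tips.append("Add a symbol.")
    return tips  # an empty list means the password meets the policy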
Chapter 28: Weakest Link
Have you ever watched an episode of The Weakest Link game show? In this game, 9 contestants try to answer 9 questions in a row correctly and are awarded money for each correct answer. When someone answers incorrectly, all accumulated money is lost. The contestants are then given the chance to vote someone off the show before the next round, usually based on who is least able to answer questions correctly. The idea is that the strength of the entire chain of correctly answered questions is only as strong as the person who is most likely to miss a question. It doesn’t matter if a person with a 160 IQ is a contestant, as the next person in line with the IQ of a squashed fly will ultimately decide how successful the entire team is. The ‘winner’ of the vote is then told ‘You are the weakest link. Goodbye.’ after which the ex-contestant takes a walk of shame as they exit the stage.