Security Models
So, we have the concept of a reference monitor, and the actual implementation of the reference monitor called the security kernel. So how do we make the jump from conceptual to actual? Well, it turns out that most patterns for doing so have already been well documented as something called security models. Keep in mind that a security model gives us the goals for implementing a reference monitor, but the actual implementation details are still left open for the system vendor. We will go over seven common security models – the primary difference between them is how each addresses the CIA triad.
Each security model has its own set of rules, but there is an easy trick to keep some of them straight:
The word ‘simple’ means ‘read’
The word ‘star’ or the symbol ‘*’ means ‘write’
Note: A lot of text editors use the ‘*’ symbol in the title bar to denote that unsaved changes have been made, or that a ‘write’ needs to take place. Use that little trick to remember the difference between ‘simple’ and ‘star’. In other words, ‘*’ or ‘star’ means that a ‘write’ needs to take place.
Bell-LaPadula Model
The first model is the Bell-LaPadula model, and it provides confidentiality only. It was created in the 1970s to prevent secret information from being unintentionally leaked and was the first mathematical model of a multilevel security policy. This model is called a multilevel security system because it requires users to have a clearance level, and data to have a classification. The rules are:
Simple security rule – no read up.
* property rule – no write down.
Strong * property rule – read/write at same level only.
Let’s use an example to help understand how this model works: Adam is writing a report for a company’s shareholders, which must reflect very accurate, factual and reliable information. In fact, Adam uses a fact checker service to make sure his information is always accurate. Eve, who has a lower level of clearance, is also writing a report on the same subject for a different audience, but her report is supposed to reflect her own opinions, which may or may not reflect reality. The simple security rule (no read up) prevents Eve from reading Adam’s report – she does not have sufficient clearance to read a shareholder report. The * property rule (no write down) prevents Adam from contributing to Eve’s report, just in case he accidentally reveals some confidential information. The strong * property rule prevents Adam and Eve not only from reading each other’s reports, but also from contributing to them.
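To make these rules concrete, here is a minimal sketch in Python; the numeric levels and function names are illustrative and not part of the model itself:

    # Clearance/classification levels: a higher number = more sensitive.
    LEVELS = {"public": 1, "confidential": 2, "secret": 3}

    def can_read(subject_level, object_level):
        # Simple security rule: no read up.
        return subject_level >= object_level

    def can_write(subject_level, object_level):
        # * property rule: no write down.
        return subject_level <= object_level

    def can_read_write(subject_level, object_level):
        # Strong * property rule: read/write at the same level only.
        return subject_level == object_level

    # Eve (confidential) cannot read Adam's report (secret)...
    assert not can_read(LEVELS["confidential"], LEVELS["secret"])
    # ...and Adam (secret) cannot write down into Eve's report.
    assert not can_write(LEVELS["secret"], LEVELS["confidential"])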
Biba Model
The Biba model addresses data integrity only – it only wants to make sure data does not lose integrity (write) but doesn’t care who can read it. The rules are:
* integrity axiom – no write up
Simple integrity axiom – no read down.
Invocation property – cannot invoke a service higher up.
Using our example of Adam and Eve, the * integrity axiom says that Eve may not contribute to Adam’s report, but she is free to use Adam’s content in her own report. The simple integrity axiom says that Adam should not even read Eve’s report because it may cause him to introduce opinion-based information into his report. The invocation property would prevent Eve from using the same fact checker service that Adam uses.
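A minimal sketch of the Biba rules, mirroring the Bell-LaPadula sketch above but comparing integrity levels instead of clearances (again, the function names and numeric levels are illustrative):

    # Integrity levels: a higher number = more trustworthy.
    def can_read(subject_level, object_level):
        # Simple integrity axiom: no read down.
        return subject_level <= object_level

    def can_write(subject_level, object_level):
        # * integrity axiom: no write up.
        return subject_level >= object_level

    def can_invoke(subject_level, service_level):
        # Invocation property: cannot invoke a service higher up.
        return subject_level >= service_level

    # Adam (high integrity) must not read Eve's opinion piece (low integrity)...
    assert not can_read(3, 1)
    # ...and Eve must not invoke Adam's fact checker service (high integrity).
    assert not can_invoke(1, 3)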
Both models discussed so far (Bell-LaPadula and Biba) are only concerned with how data flows from one level to another, but Bell-LaPadula enforces confidentiality while Biba enforces integrity.
Clark-Wilson Model
After the Biba model was around for a few years, the Clark-Wilson model was developed that also enforces integrity but takes a completely different approach by focusing on transactions and separation of duties. It uses the following elements:
Users.
Transformation procedures (TPs) – read, write and modify.
Constrained data items (CDIs) – things that can be manipulated only by TPs.
Unconstrained data items (UDIs) – things that can be manipulated by users via primitive read and write operations.
Integrity verification procedures (IVPs) – processes that check the consistency of CDIs with the real world.
So, in short, a user can read and write UDIs only. A TP can read and write a CDI, which is then verified by an IVP.
Here is another way of looking at it:
The system contains both CDIs (constrained data items) and UDIs (unconstrained data items).
A User can modify UDIs directly but cannot modify CDIs directly.
Only a TP (transformation procedure) can modify a CDI on behalf of a user.
IVPs watch the work done by a TP and validate the integrity of the result.
When a User employs a TP to modify a CDI, we call this an access triple. A well-formed transaction is the result of an access triple that has been verified by an IVP.
Using our previous example, the Clark-Wilson model would ensure that Eve (User) could not directly insert content into Adam’s report (a CDI) – she would instead have to go through his copywriter (TP) first. A fact checker service (IVP) would ensure the new content was indeed factual. However, Eve could set up a meeting with Adam at any time on his calendar (UDI) to discuss content changes without going through any intermediary (TP).
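Here is a minimal sketch of the access triple in Python; the fact-checking logic is a stub and all of the names are illustrative:

    def is_factual(text):
        # Stub check; a real IVP would verify against trusted sources.
        return "opinion" not in text.lower()

    class Report:
        # A constrained data item (CDI): modified only through a TP.
        def __init__(self):
            self.content = []

    def fact_checker_ivp(report):
        # Integrity verification procedure: is the CDI consistent with reality?
        return all(is_factual(line) for line in report.content)

    def copywriter_tp(user_permissions, report, new_text):
        # Transformation procedure: the only path by which a user modifies the CDI.
        if "edit_report" not in user_permissions:
            raise PermissionError("this user/TP/CDI triple is not allowed")
        report.content.append(new_text)
        if not fact_checker_ivp(report):   # verify the well-formed transaction
            report.content.pop()           # roll the change back
            raise ValueError("IVP rejected the change")

    report = Report()
    copywriter_tp({"edit_report"}, report, "Q3 revenue grew 4 percent.")

Note that nothing in the sketch stops a user from touching a UDI (such as Adam’s calendar) directly; only CDIs are forced through the TP.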
Brewer and Nash Model
The Brewer and Nash model is sometimes called the Chinese Wall model and states that a subject can write to an object in data set A only if the subject cannot read an object in data set B. Going back to our Adam/Eve example, suppose we allow Eve to read Adam’s shareholder report containing earnings information; we then want to make sure she cannot initiate stock market trades based on that insider knowledge. Normally, she is free to trade on the stock market, but if she gains access to that insider information (read), we should block her ability to trade shares (write). Under this model access controls change dynamically – hence the idea of ‘throwing up a Chinese wall’ under certain conditions.
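A simplified sketch of this dynamic behavior, using a conflict-of-interest class to decide when the wall goes up (the data set names are illustrative):

    # Data sets that conflict with each other form a conflict-of-interest class.
    CONFLICT_CLASSES = [{"shareholder_reports", "stock_trades"}]

    def can_write(datasets_read, target):
        # A subject may write to a data set only if it has not read from a
        # conflicting one; rights change dynamically as reads accumulate.
        for conflict_class in CONFLICT_CLASSES:
            if target in conflict_class and datasets_read & (conflict_class - {target}):
                return False   # the wall goes up
        return True

    eve_reads = set()
    assert can_write(eve_reads, "stock_trades")       # fine before reading
    eve_reads.add("shareholder_reports")              # Eve reads insider info
    assert not can_write(eve_reads, "stock_trades")   # now blocked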
Other Models
The four models we have covered are the four most common, but there are three lesser-used models we will cover as well.
Noninterference Model
When we ensure that actions taking place at a higher security level do not interfere with actions at a lower security level, we have achieved noninterference. This model does not worry about how data flows, but rather what a subject knows about the state of the system. For example, if an operation at a higher security level somehow let an operation at a lower level know that something was going on in the higher level, we would have a type of information leakage.
Going back to our Adam/Eve example, let’s suppose that neither is allowed to discuss their respective reports with each other, but both have access to a shared network drive. If Adam leaves Eve a message in a text file about his report on the shared drive, this would be an example of communicating through covert channels and the noninterference model would prevent this. Alternatively, if Adam completes his report and sends it to a printer, Eve may be able to view the contents of the printer queue and realize that Adam was done – this too should be prevented by a noninterference model.
By the way, a covert channel is any way to send or receive information in an unauthorized manner. There are two types:
Covert storage channel – communicating through a shared storage system. This does not have to be files containing data – it could simply be the presence or absence of some system feature.
Covert timing channel – communicating through the presence or absence of a system resource in a timed fashion.
Graham-Denning Model
So far, all the models we have discussed remain very generic in terms of how to implement the rules each describes. The Graham-Denning model attempts to rectify this by defining a set of rights that can be executed:
How to securely create an object.
How to securely create a subject.
How to securely delete an object.
How to securely delete a subject.
How to securely provide the read access right.
How to securely provide the grant access right.
How to securely provide the delete access right.
How to securely provide transfer access rights.
Following this model ensures that all areas of a secure system have been covered. As an example, so far we have never discussed whether Adam can give other people the right to read his report. The Graham-Denning model exposes this security hole.
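A minimal sketch of three of the eight rights operating on a simple access matrix (the structure and method names are illustrative, not part of the model):

    class ProtectionState:
        def __init__(self):
            self.rights = {}   # (subject, object) -> set of rights

        def create_object(self, creator, obj):
            # Securely create an object: the creator becomes its owner.
            self.rights[(creator, obj)] = {"owner"}

        def grant_right(self, granter, subject, obj, right):
            # Securely provide an access right: only an owner may grant.
            if "owner" in self.rights.get((granter, obj), set()):
                self.rights.setdefault((subject, obj), set()).add(right)

        def delete_object(self, subject, obj):
            # Securely delete an object: only an owner may delete it.
            if "owner" in self.rights.get((subject, obj), set()):
                for key in [k for k in self.rights if k[1] == obj]:
                    del self.rights[key]

    state = ProtectionState()
    state.create_object("adam", "report")
    state.grant_right("adam", "eve", "report", "read")   # Adam shares read access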
Harrison-Ruzzo-Ullman Model
The Harrison-Ruzzo-Ullman model, or HRU model, deals with the access rights of subjects and enforces the integrity of those rights. For example, it is simple to restrict or allow Eve’s ability to read Adam’s shareholder report. But what if she wanted to get a copy, remove a certain section, save the update, and then print it? If any one of those operations is denied, then the whole sequence should not be allowed. The HRU model is used to ensure that unforeseen vulnerabilities are not introduced.
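A minimal sketch of that all-or-nothing behavior (the subject and operation names are illustrative):

    def run_sequence(subject, operations, allowed):
        # HRU-style command: every step must be permitted up front,
        # or the entire sequence is denied.
        if not all((subject, op) in allowed for op in operations):
            raise PermissionError("sequence denied: at least one step is not allowed")
        for op in operations:
            print(subject, op)   # perform each step

    allowed = {("eve", "copy"), ("eve", "remove_section"), ("eve", "save")}
    run_sequence("eve", ["copy", "remove_section", "save"], allowed)   # succeeds
    # Adding "print" to the sequence would cause the whole command to be denied.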
Recap
Let’s quickly review all the models:
Bell-LaPadula – ensures confidentiality by enforcing no read up, no write down and read/write at the same level only.
Biba – ensures integrity by enforcing no read down and no write up.
Clark-Wilson – ensures integrity by enforcing the access triple, separation of duties and auditing.
Noninterference – ensures that commands and activities at one level are not visible to other levels.
Brewer and Nash (Chinese Wall) – allows for dynamically changing access controls that prevent conflicts of interest.
Graham-Denning – shows how subjects and objects should be created and deleted, and how to assign access rights.
Harrison-Ruzzo-Ullman (HRU) – shows how a finite set of procedures can be used to edit the access rights of a subject.
Interface Design
The programming concept of encapsulation encourages the use of an interface, which exposes only the minimum required functionality to a consumer, whether the consumer is a person or a process. When approaching a system from a security point of view, it is important to recognize where these interface boundaries exist. Some examples are user interfaces, APIs, security management interfaces, out-of-band interfaces and log interfaces.
If you recall the Clark-Wilson security model, it states that a subject’s direct access to an object should never be allowed – instead some type of program should mediate that access. All the interfaces discussed in this section follow this model to some degree, although elements of the Clark-Wilson model will be left out. Nonetheless, having a mediating layer in between the subject and object is always a good idea and increases security.
User Interfaces
Beyond increasing the psychological acceptance of a secured application, an end-user interface can implement a number of mechanisms to increase security. For example, masking a password or credit card number by using asterisks helps to assure confidentiality. However, a user interface is not limited to something an end-user can touch. A database view hides the underlying complexity and raw data and can be seen as a user interface on top of one or more objects. Any type of abstraction away from the original source is definitely an interface, such as a layer that reads and writes files to a disk system on behalf of a user. Additionally, a user interface can implement additional business logic and security checks before passing the information on to a back-end layer.
Now we must be smart with this, though – it is fine to put business logic and security controls inside of a user interface, but the same controls MUST be implemented on the backend as well, as it is far too easy to bypass an end-user client and create our own malicious data packets. Never rely on validation performed by a process that is not under our direct control, such as a browser or Windows application sitting on someone else’s desk. In more direct terms, this validation really belongs in an API.
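As a minimal sketch of what that looks like, here is a server-side handler that re-validates everything regardless of what the client already checked (the field names and limits are illustrative):

    import re

    def validate_transfer(data):
        errors = []
        # Never trust that the browser or desktop client already checked this.
        if not re.fullmatch(r"\d{1,10}", str(data.get("account", ""))):
            errors.append("account must be 1-10 digits")
        amount = data.get("amount")
        if not isinstance(amount, (int, float)) or not 0 < amount <= 10_000:
            errors.append("amount must be between 0 and 10,000")
        return errors

    def handle_request(data):
        # The backend enforces the same rules the UI displays.
        errors = validate_transfer(data)
        if errors:
            return {"status": 400, "errors": errors}
        return {"status": 200}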
Application Programming Interfaces (API)
An API is the published contract that a programmatic consumer is allowed to interact with. In other words, an API provides the functionality used by external processes. An API can be exposed by a low-level device driver, an operating system, or our own set of web services. The great thing about a well-constructed API is that we don’t have to understand its inner workings, and it is often cross-platform compatible so that we do not have to align with whatever technology was used to build it. When an API implements a standard, non-platform-specific protocol, such as REST, its reusability increases dramatically, and when we combine such a protocol with a SOA approach, its ability to align with the leveraging existing components principle goes through the roof!
Unfortunately, just because an API is uber-usable does not guarantee any level of security. The Cloud Security Alliance, or CSA, lists the top threat to SaaS applications as the abuse of cloud computing resources, closely followed by unsecured APIs and interfaces. This means we as an industry have a long way to go in securing APIs. If our APIs are meant to be accessed by our own applications only, we still need to be secure since the APIs are publicly accessible. If our APIs are designed to be accessed by other applications not under our own control, such as Facebook’s or Twitter’s REST APIs, then we need to be triple-secure.
Security Management Interfaces (SMI)
An SMI is a special case of a user interface because it is explicitly designed to provide administrator-level access to the most sensitive areas of a system. If you have ever used a browser-based application to configure your home router, then you have used an SMI. Common capabilities for an SMI include managing users, granting rights to users or roles, changing security settings, and configuring logs or audit trails.
Securing an SMI is usually an afterthought during the requirements and design processes, when in reality it should be subject to some of the most stringent designs and tests. As a result, SMIs often end up as the weakest link in a system. The consequences of an attacker breaching an SMI are usually very severe, as the attacker winds up running under elevated privileges. The result can be loss of confidentiality, integrity, and availability, and could allow malware to be installed. Security requirements around an SMI must be explicitly captured, and the SMI must be part of the threat modeling exercises. Here are a few recommended controls to be used with an SMI (a brief sketch in code follows the list):
Avoid remote connectivity by allowing a local logon only.
Ensure TLS is enabled over the connection.
Follow the least privilege principle and use RBAC and entitlement management services to control access.
Log and audit all access to the SMI.
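A brief sketch showing the last three controls working together (the roles, permissions, and network check are all illustrative; a real system would verify the physical interface a request arrived on, not just an address prefix):

    ROLE_PERMISSIONS = {
        "admin":   {"manage_users", "change_security_settings", "configure_logs"},
        "auditor": {"view_audit_trail"},
    }

    def smi_action(user_roles, permission, source_ip, audit_log):
        # Stand-in for a local-only check (assumed management network).
        if not source_ip.startswith("10."):
            raise PermissionError("remote access to the SMI is not allowed")
        granted = any(permission in ROLE_PERMISSIONS.get(r, set())
                      for r in user_roles)
        # Log and audit every attempt, successful or not.
        audit_log.append((source_ip, permission, "granted" if granted else "denied"))
        if not granted:
            raise PermissionError("least privilege: role lacks this permission")

    audit = []
    smi_action({"admin"}, "manage_users", "10.0.0.5", audit)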
Out-of-Band Interfaces
You might think that when a computer is turned off it is completely inaccessible unless you happen to have physical access to the machine. This is not necessarily true if the computer has an out-of-band interface installed and active. If power is still running through the motherboard, this interface can be accessed and used to power up the computer and bypass BIOS and operating system protection mechanisms. Such interfaces are sometimes referred to as lights-out management, or LOM, interfaces, and the best control to mitigate this threat is to physically check the motherboard and any add-on cards for such an interface.
An in-band interface requires an agent to be running on the computer, and obviously requires the computer to be turned on and available. To mitigate threats against this interface, ensure that only authorized personnel and processes can access its functionality.
Log Interfaces
Logging is the act of capturing process activity information in real-time to an external repository, most often a file residing on a disk. Audit trails are based on a system’s capability to log, but logs can often produce so much information that it becomes impossible to extract anything useful. Logs can also quickly eat up storage space, and if not implemented properly can bring a system to its knees. To avoid these problems, a log interface must be created that provides a way to configure logging capabilities. Common capabilities that a log interface provides are the following (a short sketch follows the list):
The kinds of events, such as application events, OS events, errors and exceptions, etc.
The level of verbosity by specifying the type of logging to write, such as informational, status, warning, full stack traces, debug, etc.
Whether to enable or disable logging
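As a minimal sketch, Python’s standard logging module can express all three capabilities (the logger name, file path, and level choices are illustrative):

    import logging

    def configure_logging(enabled=True, level=logging.INFO, path="app.log"):
        logger = logging.getLogger("app")
        logger.setLevel(level)                          # verbosity: DEBUG, INFO, WARNING...
        handler = logging.FileHandler(path, mode="a")   # append only; never overwrite
        handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
        logger.addHandler(handler)
        logger.disabled = not enabled                   # enable or disable logging
        return logger

    log = configure_logging(level=logging.WARNING)
    log.warning("security setting changed")             # written to app.log
    log.info("routine status update")                   # filtered out by the level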
It is often handy to create a visual user interface to represent the frequency and intensity of various events, as a simple graph can often convey complex information very quickly.
Access to the log interface must be tightly controlled to prevent tampering, and the overall architecture should never allow existing logs to be overwritten – only appended to. However, if verbosity is turned up too high, storage space can quickly be consumed; this issue needs to be recorded as a requirement and addressed during the design phase. It is best not to include the ability to delete a log file as part of the log interface, to prevent an attacker from covering his footprints.
Services
Back in the 1990s, pretty much all applications were built in a monolithic manner – self-contained, non-extensible, proprietary and HUGE. In the last few years of the 20th century, though, a better pattern began to evolve of implementing functionality as a reusable service that other software could invoke. Today, services are most commonly designed to be accessed over some type of web – either the Internet or an intranet. That is why we call them web services.
Web Services