Sharks in the Moat
Any good developer will habitually provide in-line comments to help guide other developers when reviewing and maintaining the code down the road. It is well known that after six weeks have elapsed, a developer remembers no more about how their own code works than any other developer who must learn it from scratch. Good commenting skills are therefore crucial for everyone. Unfortunately, a careless developer can inject too much information into comments and give an attacker a great leg-up. For example, you should never discuss vulnerabilities or possible weaknesses in comments – while these should be documented, they are best written down on an external wiki. At other times a developer might simply comment out functioning lines of code that were used during prototyping or testing, such as database connection strings, production or test data, account information or business logic. Figure 89 shows just such an example.
Figure 89: Examples of leaving too much information in commented code
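Since the figure itself is not reproduced here, the following hypothetical snippet is in the spirit of Figure 89 – every name, string and credential in it is invented for illustration:

# TODO: the search endpoint is still vulnerable to SQL injection – fix before audit!
# Old production connection string, kept for quick testing:
#   conn = connect("server=prod-db01;uid=sa;pwd=Sup3rS3cret!")
# Test account: jsmith / Passw0rd123 (has admin rights in staging)
def search(term):
    ...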
Closely-related to insecure comments are hardcoded secrets that are needed for the software to function, such as passwords or encryption keys. This type of data should not only be stored outside of the application but should be properly protected using encryption.
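As a minimal sketch of keeping a secret out of source code – assuming the deployment environment provisions an environment variable named APP_DB_PASSWORD, a name invented here for illustration – the value can be pulled in at startup:

import os

# Read the secret from the environment rather than hardcoding it,
# and fail fast if the deployment did not provision it.
db_password = os.environ.get("APP_DB_PASSWORD")
if db_password is None:
    raise RuntimeError("APP_DB_PASSWORD is not configured")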
When dealing with sensitive data and encountering an exception or error condition, care must be taken not to reveal this information to the end user. As an example, if we attempt to decrypt sensitive information during authentication using a key of an incorrect length, an exception might very well be thrown. A junior developer, in her attempt to be thorough for debugging purposes, might generate an error message such as:
Unable to decrypt the data: A key length of 128 bits must be used for the selected algorithm.
From this one message, an attacker could glean the following information (a mitigation sketch follows the list):
Encryption is being used to protect data surrounding the authentication process.
The correct key length for this procedure is 128 bits.
The application is NOT storing a key of 128 bits.
The application is not very good at preventing information leakage – what else can we try?
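One common mitigation is to log the full details internally while showing the user only a generic message. The following is a minimal sketch – the do_decrypt routine is a stand-in for whatever decryption call the application actually makes, and all names are illustrative:

import logging

logger = logging.getLogger(__name__)

class AuthenticationError(Exception):
    pass

def decrypt_for_authentication(ciphertext, key):
    try:
        return do_decrypt(ciphertext, key)  # placeholder for the real decryption call
    except Exception:
        # Full details, including the stack trace, go to the internal log only.
        logger.exception("Decryption failure during authentication")
        # The user learns nothing about keys, lengths or algorithms.
        raise AuthenticationError("Authentication failed. Please try again.")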
Backend data stores must always use some type of encryption. Within a live database, this may only apply to sensitive data such as passwords, PHI (protected health information), PFI (personal financial information) or PII (personally identifiable information). Backups or archives of data should be encrypted in their entirety, as they are often stored on removable media and prone to theft.
File Attacks
Vulnerabilities related to file access from a server’s perspective can be grouped into two categories – allowing users to upload files to the server, and the server itself downloading files from other locations.
Anytime an application allows a user to upload a file, a fairly large security hole develops if extra care is not taken to validate user-provided files before each is accepted. First of all, a server can handle file uploads in two ways – either by storing the bytes as a file on disk as they arrive, or holding the file’s entire contents in memory until the upload has completed. Each approach has its own pros and cons.
If a file is held in memory as a stream while the bits arrive from the client, there is a danger of running out of memory, especially when multiple users attempt to upload files simultaneously. This approach is seldom recommended unless the files are expected to be very small and the system can scale dynamically if needed. On the other hand, when we stream the bytes to disk as they arrive, we have essentially created files on disk that have yet to be verified and checked for security issues. To mitigate this issue, files should not be stored using the same name as indicated by the client. For example, if the client specifies that it is uploading ‘myfile.docx’, then it should be stored as something along the lines of ‘28HS9jip.tmp’ until it can be thoroughly vetted by anti-malware scanners. NEVER accept a file without using some type of reliable scanner that is kept up-to-date, and executable files must be discarded immediately. The contents of files should be inspected instead of trusting whatever type the client ‘claims’ the file is based on its name. An integrity checker should be used if possible to ensure the file contents have not been modified from the original source. A server should never allow file content to be submitted directly inside a POSTed form field in place of a proper file upload.
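A minimal sketch of the random-name approach follows – the quarantine directory is an assumed example, and the vetting step itself (scanning, content inspection) is out of scope here:

import secrets
from pathlib import Path

QUARANTINE = Path("/var/quarantine")  # assumed staging area for unvetted files

def store_upload(stream):
    # Ignore the client-supplied name entirely and generate our own.
    temp_name = secrets.token_urlsafe(12) + ".tmp"
    dest = QUARANTINE / temp_name
    with open(dest, "wb") as f:
        for chunk in iter(lambda: stream.read(8192), b""):
            f.write(chunk)
    return dest  # to be scanned and vetted before it is ever used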
Care must be taken when accepting compressed files. For example, a zip bomb is a small file that when extracted, requires a huge amount of disk space, CPU power or memory usage due to the way compression algorithms work.
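One hedge against zip bombs is to inspect the declared uncompressed sizes before extracting anything, as in this sketch (the 100 MB ceiling is an arbitrary illustrative limit):

import zipfile

MAX_UNCOMPRESSED = 100 * 1024 * 1024  # illustrative 100 MB ceiling

def checked_total_size(path):
    with zipfile.ZipFile(path) as zf:
        total = sum(info.file_size for info in zf.infolist())
    if total > MAX_UNCOMPRESSED:
        raise ValueError("Archive expands beyond the allowed limit")
    return total

Keep in mind that the declared sizes in an archive header can themselves be forged, and nested archives can expand further still, so this check complements rather than replaces extraction-time quotas.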
Some applications accept input from the user that controls relative storage locations on the server. For example, suppose a web application stores user-uploaded files at ‘C:\uploads’ and allows the user to control which folders underneath the ‘uploads’ folder are used to store files. If the application is not careful, the user could enter something such as ‘..\..\windows’ and be able to read and write files in the Windows directory. This is called a path traversal attack, and following are a few recommendations to protect an application against such a thing (a short sketch follows the list):
Use a whitelist to validate acceptable file paths and locations.
Limit the characters and strings used in a file path. For example, rules disallowing the use of ‘..’ or ‘/’ can help.
Configure servers to not allow directory browsing or disclosing contents of folders.
Decode all paths before validations are carried out.
Use a mapping of generic values to represent known folders and file names instead of allowing the user to specify the actual values.
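Putting several of these recommendations together, here is a minimal sketch of path validation – the upload root is an assumed example:

from pathlib import Path

UPLOAD_ROOT = Path("C:/uploads").resolve()  # assumed base folder

def resolve_safe(user_subpath):
    # Canonicalize (decode) first, then verify the result stays under the root.
    candidate = (UPLOAD_ROOT / user_subpath).resolve()
    if candidate != UPLOAD_ROOT and UPLOAD_ROOT not in candidate.parents:
        raise ValueError("Path escapes the upload root")
    return candidate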
Other applications are written on a framework that allows a remote file include, or RFI, and can be tricked into loading and executing an attacker’s own script. For example, if a user enters content into a form text field, and the server code attempts to load a remote file whose name is generated at run-time using that content, such as:
include form.type + 'script.dat'
then the attacker could specify ‘http://malicioussite.com/evil’ as the input, resulting in http://malicioussite.com/evilscript.dat being loaded and executed.
To mitigate other ‘include’ weaknesses, implement the following recommendations (a brief sketch follows the list):
Store library, include and utility files outside of the root or system directories.
Restrict file access to a specific directory.
Do not allow remote files to be included from remote locations.
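In a dynamic language, a safer pattern than concatenating user input into an include path is to map generic tokens onto a fixed set of known modules, as in this sketch (all module names are invented):

import importlib

# Generic tokens the user may send, mapped to real module names.
ALLOWED_MODULES = {
    "report": "app.handlers.report",
    "export": "app.handlers.export",
}

def load_handler(requested):
    module_name = ALLOWED_MODULES.get(requested)
    if module_name is None:
        raise ValueError("Unknown handler requested")
    return importlib.import_module(module_name)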
Automated scanning can help identify some vulnerabilities in code that accepts file names or file paths, but it is not very good at identifying risky parameters. In this case the better solution is to employ a static code analysis tool to detect such weaknesses. However, nothing beats a manual code review process.
Here are some more mitigation steps that are useful in defeating file upload weaknesses (a sketch combining several of them appears after the list):
Use a whitelist of allowable extensions.
Ensure file validation checks take into account any case sensitivity of the file name. The best pattern is to convert everything to lower case before carrying out naming validations.
Allow only one extension for each file. For example, do not allow ‘myfile.exe.jpg’ to be uploaded.
Separate the name of the file from the file itself. For example, record the original file name in a database table, along with a column that identifies the actual file name as stored on disk. This way, we mitigate some vulnerabilities due to file naming conventions, but preserve the original file name if needed for later download to a client. The file name on disk should use salting and hashing to prevent a brute force discovery of the file name.
Carry out explicit taint checks. A taint check is a feature in some programming languages that examines all user-provided input to see if any content contains dangerous commands that might be executed.
Upload all files to a hardened staging repository where they can be examined before processing. As noted, file contents should be examined, not just file names.
Configure the application to demand the appropriate file permissions. For example, the Java Security Manager and ASP.Net partial trust implementations can provide permissions security. This restricts the level of file access any processes running in the web application space will have.
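Several of the steps above can be combined in a short sketch – the extension whitelist is an illustrative assumption, and the salted hash becomes the on-disk name while the original name would be recorded separately, such as in a database column:

import hashlib
import os

ALLOWED_EXTENSIONS = {".jpg", ".png", ".pdf"}  # illustrative whitelist

def stored_name_for(original_name):
    lowered = original_name.lower()  # validate in lower case
    base, ext = os.path.splitext(lowered)
    # Reject double extensions such as 'myfile.exe.jpg' and unknown types.
    if "." in base or ext not in ALLOWED_EXTENSIONS:
        raise ValueError("Disallowed file name")
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + lowered.encode()).hexdigest()
    return digest + ext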
So far, we have been discussing users uploading files to a server. The second category of file threats occurs when a server downloads a file from an external source. This could include software patches; even a download from a trusted site can lead to compromise if an attacker is able to modify the source file. The use of hashes and integrity checks is vital to ensure files have not been tampered with. As an example of this threat, an attacker could carry out a DNS poisoning attack and force your server to download an executable patch with the attacker’s own instructions.
Processes that attempt to access remote files and download them for use must be watched carefully. For example, compression or audio stream protocols such as ‘zlib://’ or ‘ogg://’ might attempt to access remote resources without respecting internal flags and settings. Malicious document type definitions, or DTDs, can force the XML parser to load a remote DTD and parse the results.
To ensure downloaded files can be trusted, you should implement the following controls (a sketch of the first two follows the list):
Always use integrity checking on files downloaded from remote locations. Code signing and Authenticode technologies can be used to verify the code publisher and the integrity of the code itself. Hashing can be used to verify the code has not been altered.
To detect DNS spoofing attacks, perform both a forward and reverse DNS lookup. This means that we convert a domain name into an IP address, and then convert the IP address back into a domain name to ensure an attacker has not messed with the hosts file or carried out DNS poisoning. Keep in mind that this does nothing to tell us if the resource has been altered, only that it is coming from an authoritative location.
When leveraging components from third-parties or open source, use monitoring tools to watch the interaction between the OS and the network to detect code integrity issues. For example, use process debuggers, system call tracing utilities, sniffing and protocol analyzers, and process activity monitors.
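Here is a minimal sketch of the first two controls – the expected hash would arrive from the publisher over a separate trusted channel, and all names are illustrative:

import hashlib
import socket

def verify_download(path, expected_sha256):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    if h.hexdigest() != expected_sha256:
        raise ValueError("Downloaded file failed the integrity check")

def forward_and_reverse_lookup(host):
    ip = socket.gethostbyname(host)            # forward lookup
    reported, _, _ = socket.gethostbyaddr(ip)  # reverse lookup
    return ip, reported  # compare reported against the expected host name

Note that many legitimate hosts resolve to shared infrastructure, so the reverse lookup is a heuristic rather than proof of authenticity.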
System Vulnerabilities
System vulnerabilities exist when two or more systems connect, and include insecure configuration parameters, the use of known vulnerable components, an insecure startup, escaping the sandbox, a lack of non-repudiation, and side-channel attacks.
Configuration Parameters
Any type of data that is, or should be, stored outside of the source code but is required for proper operation is considered to be a configuration parameter. These bits of data might be stored in files, external registries or even in a database. Some examples are the following:
Database connection strings.
Log level verbosity.
The file location for log files.
Encryption keys.
Default error message strings.
Modules to load at run-time.
These resources must be protected, as unauthorized modification can very well impact availability or even compromise confidentiality and integrity of other portions of the application. Some good examples for requirements are the following (a sketch of the first appears after the list):
“The web application configuration file must encrypt sensitive database connection settings.”
“Passwords must not be hard-coded in code.”
“Initialization and disposal of global variables must be explicitly monitored.”
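As a sketch of the first requirement – using the pyca/cryptography library’s Fernet scheme, with the decryption key itself supplied by the environment and every name invented for illustration:

import os
from cryptography.fernet import Fernet

def load_connection_string(path="settings.enc"):
    # The decryption key comes from the environment, never from source code.
    key = os.environ["CONFIG_KEY"].encode()
    with open(path, "rb") as f:
        return Fernet(key).decrypt(f.read()).decode()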
All of the best coding in the world will be rendered completely useless if the system, network and infrastructure that surround the application are not properly configured and hardened. For example, the development team could employ SHA-2 hashing for passwords, AES encryption for sensitive user information, and a robust RBAC scheme to control access, only for an attacker to discover that no one bothered to change the default administrator password for the database. So much for all of our hard work.
Following is a list of the most commonly found misconfigurations.
Missing or outdated software and operating system patches.
A lack of perimeter defense controls such as firewalls.
Installation of software with default settings and accounts.
Installation of administrative consoles with default configuration settings.
Leaving directory listings enabled.
Not explicitly setting up proper exception logging, leading to information leakage.
Not removing sample applications.
Not properly decoupling systems.
There are a number of recommended controls that should be put into place to mitigate misconfiguration. They are:
Change default configuration settings after installation.
Remove any services or processes that are not needed.
Establish a minimum security baseline, or MSB, by documenting the minimum level of security that is acceptable.
Create an automated process that locks down and hardens both the OS and all applications that run on top of the OS. This process should be used to create the MSB.
Create an established patching process.
Implement an automated scanning process to detect and report software and systems that are not compliant with the MSB.
Implement proper suppression, logging and reporting of exceptions to ensure information is not disclosed. This can be accomplished with web apps using redirects or generic error messages.
Remove all sample applications after installation.
Only design and deploy systems that are loosely coupled with a high degree of cohesiveness. This is designed to minimize the blast radius when a security flaw is exploited.
Using Known Vulnerable Components
The principle of leveraging existing components states that we should use existing software components where they exist instead of creating our own. This is a great idea from a resource and time viewpoint, as it can drastically reduce the effort required to implement a given feature. However, when we do this we have not only outsourced part of our work, but we have also outsourced part of our own security. Any weakness in how third-party or open source software is implemented now becomes our problem as well. As an example, consider the Glibc library used on many Linux servers and in PHP and Python implementations. The Ghost vulnerability discovered in 2015 allowed an attacker to take complete control of any system using Glibc without any knowledge of credentials.
To make matters worse, when a vulnerability is found in open source packages, the bug fixes are often rolled into the next release instead of a patch being made immediately available. This makes it impossible to simply address the vulnerability without taking on a complete new version of the component, often introducing new weaknesses that did not exist before. Now, you could argue that reusing an existing component does not introduce any new vulnerabilities, but that is just bad semantics at best – the weaknesses were there all along, waiting to be found. For example, the Glibc Ghost vulnerability had existed since 2008, seven years before being discovered!
I am not suggesting that you stop using open source and third-party components – the business benefits far outweigh the disadvantages. However, when these libraries are used, several things must be addressed:
1) Establish a policy clearly dictating when to leverage existing components and when to roll your own, how licenses are validated, how these components are to be supported, and how end-of-life is carried out.
2) When leveraging existing components, identify known vulnerabilities and either accept or compensate for each.
3) Keep updated on discovered vulnerabilities and new versions as they become available.
Secure Startup
We can design and implement the most secure software ever invented, only to be subsequently hacked after deployment because we failed to properly protect the startup state. I am referring to configuration parameters, of course, which are used to set the initial secure state. If an attacker is able to access those parameters, such as a database connection string, then we have utterly failed to secure the application. The startup phase of software, when environment variables and configuration parameters are loaded, is called the bootstrapping process. We must protect these settings from disclosure, alteration and destruction, which maps nicely to CIA.
Sandboxing
When we wish to isolate running code from some type of sensitive resource, we can execute the untrusted program in a virtual cage called a sandbox. Sandboxes place a wall between the executing instructions and the underlying operating system, playing traffic cop for all resource requests and commands issued by the suspect code. We can often find such an environment in use when observing how a virus or malware behaves by letting it think it is infecting a host, when in reality the sandbox it is playing in will not allow the malicious code to escape. Browsers are probably the best-known sandboxes, as they wrap page-level JavaScript inside of a box that restricts access to the operating system.
Non-Repudiation
Code should implement the proper non-repudiation mechanisms by creating audit logs and ensuring that user and system actions are not disputable. In other words, the code should accurately reflect in an audit log the actions taken by recording the who, when and what. If delayed signing is not being used, the reviewer should ensure that the code is properly signed.
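A minimal sketch of recording the who, when and what – the field names and logger configuration are illustrative:

import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("audit")

def record_action(user_id, action, target):
    # Who, when and what, captured for later review.
    audit_log.info(json.dumps({
        "who": user_id,
        "when": datetime.now(timezone.utc).isoformat(),
        "what": action,
        "target": target,
    }))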
Side Channel Attacks
A side channel attack examines a system and watches for side-effects, thereby deducing certain things about the system. As an analogy, suppose you lost your tickets to the big game and were forced to sit out in the stadium parking lot while your friends selfishly went in without you. You would not be able to see what was going on, but you could infer certain things. When the crowd got louder, you knew an important play was underway. When you could hear a marching band, you knew it was halftime. As people were leaving after the game, you could figure out who won and who lost based on the smiles or frowns on fans’ faces.