Sharks in the Moat


by Phil Martin


  Purging renders data unrecoverable. While technically this could be applied to physical media by heavily redacting data with a black ink marker, it is normally reserved for electronic media. Degaussing uses a powerful magnetic field to disrupt all storage on magnetic media such as hard drives. It obviously has no effect on non-magnetic storage devices such as USB flash drives.

  Destruction of storage media involves physically harming the medium such that the data can never be recovered, or recovery is sufficiently cost-prohibitive that no one would take the time to do so. Think about shredding paper documents: this is very effective for low-value content, as it is simply not worth the effort to piece every little strip back together. For highly sensitive documents, however, it can be well worth the effort, and in that case it is better to burn the paper until it is little more than ashes. A laboratory attack occurs when specially trained threat agents use various techniques to recover data outside of normal operating settings. Piecing together shredded documents is one example of such an attack.

  When destroying electronic storage devices, we have five options. Disintegration separates the media into component parts. Pulverization grinds the media into powder or dust. Melting uses extreme heat to change the media from a solid into a liquid; the end result, once cooled, is usually a lump of useless metal. Incineration, or burning, uses extreme heat until the media bursts into flames. Shredding cuts or tears the media into small particles. The size of the resulting particles must be considered to ensure the cost of recovery is sufficiently high.

  The proper approach to sanitization will be partially based on the type of media itself. For example, optical disks such as CDs and DVDs, and WORM devices should be pulverized, shredded or burned. Figure 52 illustrates the possible options based on media type as taken from NIST SP 800-88 ‘Guidelines for Media Sanitization’. Note that all actions end with the ‘validate’ step to ensure that information recovery is not possible, and then to document all steps that were taken.

  Electronic Social Engineering

  When an attacker uses technology to trick a human into revealing sensitive information, we call it electronic social engineering. There are four types of this threat – phishing, pharming, vishing and SMSishing, as shown in Figure 53.

  Phishing uses email or websites to trick people into providing secret information, usually by sending out mass emails to many people in the form of spam. However, in recent years spear phishing has become increasingly common in which the attacker targets a single individual. The term phishing refers to using a lure to ‘fish’ out a victim’s personal information.

  Pharming, sometimes called ‘phishing without a lure’, results from malicious code that redirects users to a fraudulent website without the user knowing it is happening. This attack can be more rewarding for the attacker, since compromising a single system can redirect many users instead of having to lure each one individually. The attack is usually carried out by altering the local hosts file, which contains mappings between host names and IP addresses. If the user types in a legitimate URL, say ‘www.amazon.com’, the browser must resolve this name to an IP address, and it checks the hosts file first. If an attacker tells the hosts file that ‘www.amazon.com’ maps to the attacker’s own web server, the browser will simply follow the instructions. Assuming the attacker has created a malicious copy of Amazon’s web site that looks the same, the user might never know the attack is underway, as the address bar will still show ‘www.amazon.com’. Another version of this attack is DNS poisoning, in which the attacker alters data on a DNS server instead of individual user machines. The end result is the same, but only a single server need be compromised.
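
  The hosts-file version of this attack can be sketched as a simple detection check. This is my own illustrative tripwire, not a tool from the text; the watched domain list and sample file contents are made up:

```python
# A minimal pharming tripwire: flag hosts-file entries that remap
# well-known domains. Domain list and sample contents are illustrative.
WATCHED_DOMAINS = {"www.amazon.com", "www.paypal.com"}

def suspicious_entries(hosts_text):
    """Return (ip, hostname) pairs that remap a watched domain."""
    hits = []
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments
        if not line:
            continue
        parts = line.split()
        ip, names = parts[0], parts[1:]
        for name in names:
            if name.lower() in WATCHED_DOMAINS:
                hits.append((ip, name))
    return hits

sample = """
127.0.0.1     localhost
198.51.100.7  www.amazon.com   # injected by malware
"""
print(suspicious_entries(sample))  # [('198.51.100.7', 'www.amazon.com')]
```

  A real monitor would read the platform’s actual hosts file and compare against a maintained allow-list, but the detection logic is the same.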

  Figure 53: Types of Electronic Social Engineering

  As Voice over IP, or VoIP, becomes more popular, a new type of phishing called vishing has appeared. The only difference from normal phishing is that the attack happens over a VoIP network instead of via email or websites. With this vector, an attacker will spoof the caller ID and pretend to be someone else, such as a help desk employee asking the user for their password.

  SMSishing, usually called smishing, is carried out using a short message service, or SMS, usually just called texting. In this attack, the victim receives a text message that appears to be coming from an authoritative source, such as a financial institution. The text message usually instructs the user to call a specified number to take care of some type of emergency situation with their account. When the victim dials the number, they are greeted with an automated voice response system instructing them to enter their username and password. The system will usually thank the user and then disconnect.

  While the primary weakness that social engineering preys upon is the nature of people to want to trust others, there are secondary weaknesses that it can exploit as well. For example, the lack of proper access control lists or spyware protection can allow an attacker to gather sufficient information to use in a spear phishing attack. The more personal information an attacker appears to have, the more trusting the victim will be.

  Of course, we can be just as sneaky in our attempts to defeat attackers using electronic social engineering. We can utilize dilution, sometimes called spoofback, to send bogus and faulty information back to the phisher with the intent to dilute the real information that is being collected by unaware users. Or we can use a takedown approach to repel the attack by taking down the offending phishing site – this must only be carried out with the proper legal guidance though. Just because we think a site is phishing does not always mean it is, and we could find ourselves being sued as an attacker if care is not taken.

  There are a number of steps we can execute to mitigate electronic social engineering attacks. Many of the following recommendations are not specific to this attack vector but are important to put into place.

  Use the HttpOnly flag to prevent access to cookies by local JavaScript. Note that this flag does not apply to HTML5 local storage, so be careful not to store anything overly sensitive or private there.
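
  As a sketch of how this looks server-side, the standard library can emit the flag when building the Set-Cookie header (the cookie name and value here are made up):

```python
from http.cookies import SimpleCookie

# Emit a session cookie with the HttpOnly (and Secure) flags set, so
# document.cookie in page JavaScript cannot read it.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"            # illustrative value
cookie["session_id"]["httponly"] = True
cookie["session_id"]["secure"] = True      # only send over HTTPS

header = cookie.output(header="Set-Cookie:")
print(header)  # Set-Cookie: session_id=abc123; HttpOnly; Secure
```

  Most web frameworks expose the same flags through their own cookie-setting APIs; what matters is that the attributes end up in the response header.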

  Use a private browsing mode, such as ‘incognito’ in Chrome or ‘InPrivate’ in Edge to prevent caching of web pages. Some extensions or plugins can also be used to achieve this behavior. Configure browsers to not save history and clear all page visits when closing the browser.

  Disable autocomplete features in browser forms that collect sensitive data.

  Do not cache sensitive data on backend servers. However, if you must do this for performance reasons, be sure to encrypt the cache and explicitly set a timeout.
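
  A minimal sketch of a server-side cache with an explicit timeout follows. The base64 step is a placeholder standing in for real encryption, and the class and key names are my own:

```python
import base64, time

# Cache entries expire after a fixed TTL. The "sealing" here is base64
# only, as a placeholder -- real code would substitute an authenticated
# cipher such as AES-GCM from a vetted library.
class TimedCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}

    def put(self, key, value):
        sealed = base64.b64encode(value.encode())    # placeholder for encrypt
        self.store[key] = (time.monotonic(), sealed)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        stored_at, sealed = entry
        if time.monotonic() - stored_at > self.ttl:  # expired: purge it
            del self.store[key]
            return None
        return base64.b64decode(sealed).decode()     # placeholder for decrypt

cache = TimedCache(ttl_seconds=0.05)
cache.put("user:42", "ssn=123-45-6789")
print(cache.get("user:42"))   # ssn=123-45-6789
time.sleep(0.1)
print(cache.get("user:42"))   # None -- the entry has timed out
```

  Deleting expired entries on read keeps sensitive material from lingering in memory longer than the policy allows.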

  Do not deploy backup or unreferenced files to production. An oft-seen pattern is to deploy files with a ‘.bak’ or ‘.old’ extension. Attackers can easily guess and read such files unless proper access control is implemented. Installation scripts and change logs likewise need to be removed after a deployment.
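
  A deployment pipeline can enforce this rule with a simple scan of the release directory. This sketch is mine, and the forbidden extensions and file names are illustrative:

```python
import os

# Scan a release directory for files that should never ship to production.
FORBIDDEN_SUFFIXES = (".bak", ".old", ".orig", ".log")
FORBIDDEN_NAMES = {"install.sh", "CHANGELOG.txt"}

def leftover_files(root):
    """Return paths under root that match a forbidden suffix or name."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(FORBIDDEN_SUFFIXES) or name in FORBIDDEN_NAMES:
                hits.append(os.path.join(dirpath, name))
    return sorted(hits)
```

  Run against the build output as a release gate, a non-empty result fails the deployment.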

  Harden servers so that log files are protected.

  In-line code comments must explain what the code does without revealing any sensitive or specific information. While most compiled languages will remove comments as part of the compilation process, there are two reasons to follow this rule: 1) Source code is often mistakenly deployed along with binaries and 2) uncompiled code such as JavaScript will always be deployed with comments intact unless extreme obfuscation is used. Code reviews should look at comments as well as code.

  Use static code analysis to look for APIs that leak information.

  Don’t store sensitive data if you don’t need it. For example, while a social security number may be required in order to call a third-party API, collect that data from the end-user but toss it as soon as the backend server is done with it. And make sure this data does not show up in a log somewhere.
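
  Keeping such data out of logs can be automated with a logging filter. This is a sketch under my own naming, with a U.S.-specific SSN pattern chosen purely for illustration:

```python
import logging, re

# A logging filter that redacts SSN-shaped strings before they reach
# any handler or log file.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

class RedactSSN(logging.Filter):
    def filter(self, record):
        record.msg = SSN_RE.sub("[REDACTED]", str(record.msg))
        return True

logger = logging.getLogger("payments")
logger.addFilter(RedactSSN())
logger.warning("lookup failed for 123-45-6789")
# emitted as: lookup failed for [REDACTED]
```

  Note that filters only see what is passed to the logger; the real fix is still to discard the sensitive value as soon as the backend call completes.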

  If you must store sensitive data in a persisted state always encrypt or hash it depending on the need.

  If storing encrypted data, maintain the key separately from the repository. Keep in mind that encryption does not keep the system safe from injection attacks, as the injected code will simply be encrypted and then decrypted along with everything else.

  If you absolutely must store sensitive data on the client, encrypt it. Of course, now we have the problem of how to manage a key in a browser, which is solved by storing the key on the server and providing it to the client as needed. Naturally, this communication must itself be encrypted using TLS.

  If TLS is not a valid option for communication channel encryption, and you need to encrypt data on the client before transmitting to the server, use an asymmetric algorithm. The public key can be used to encrypt the data on the client, and only the server possessing the private key will be able to decrypt the data. When the highest security is required, encrypt all data for transmission in addition to using TLS. A simple misconfiguration can sometimes disable TLS without anyone noticing, and a secondary encryption mechanism can mitigate that risk.
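
  The direction of the keys is the whole point here, and a deliberately tiny textbook RSA makes it concrete. These primes are toy-sized purely for illustration; real systems use vetted libraries and 2048-bit keys or larger:

```python
# Toy RSA showing the asymmetric direction: the client encrypts with the
# public key; only the server holding the private key can decrypt.
p, q = 61, 53
n = p * q                      # 3233, the public modulus
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent (modular inverse, Python 3.8+)

def encrypt(m, pub=(e, n)):    # runs on the client
    return pow(m, pub[0], pub[1])

def decrypt(c, priv=(d, n)):   # runs on the server
    return pow(c, priv[0], priv[1])

secret = 42
c = encrypt(secret)
print(c, decrypt(c))           # ciphertext differs from 42; decrypts to 42
```

  Compromising the client (or the channel) exposes only the public key, which cannot decrypt anything, which is exactly the property needed when TLS alone cannot be trusted.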

  If hashing is used, always employ a salt to minimize the effectiveness of rainbow table attacks.

  Stored passwords should always be hashed, ideally using an algorithm specifically designed for passwords such as PBKDF2 or scrypt.
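
  Both recommendations, a unique salt and a password-specific algorithm, are available in the Python standard library. The iteration count below is an illustrative figure, not a prescription:

```python
import hashlib, hmac, os

# Salted password hashing with PBKDF2. Tune iterations to your hardware.
def hash_password(password, salt=None, iterations=200_000):
    salt = salt or os.urandom(16)  # a unique salt defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, expected, iterations=200_000):
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(digest, expected)   # constant-time compare

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

  The constant-time comparison matters too; a naive `==` on digests can leak timing information.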

  Avoid using mixed TLS in which some pages are protected using TLS and some are not. This is often seen in sites that have both anonymous pages that should be accessible to everyone and pages that require some type of authentication. The reason for partial use of TLS is usually one of performance – TLS encryption and decryption requires CPU and memory resources and can slow down a site by up to 25%, depending on its use. This is one reason that building scalability into a site from the beginning has a direct impact on security.

  Ensure the cookie’s secure flag is set, meaning that the browser will not allow the cookie to be sent over HTTP – HTTPS must be used instead before the browser will relinquish control of a cookie. Keep in mind that not all browsers properly support this flag. When it is supported, though, SurfJacking attacks can be prevented using this approach.

  Never roll your own encryption or hashing algorithms for both at-rest and in-transit data. Ideally select algorithms that are FIPS 140-2 compliant.

  Ensure digital certificates are kept current, neither expired nor revoked.

  Educate users not to bypass browser warnings, such as those flagging suspicious certificates or suspected phishing sites.

  Train users on how to recognize electronic social engineering attacks.

  Prevent users from being exposed to attacks by implementing spam control, disabling links in emails and IM clients, and requiring emails to be viewed in a non-HTML format.

  Instruct employees to never trust caller ID when dealing with sensitive information.

  Some folks recommend disabling text messaging to prevent smishing attacks, but I hardly think that tactic is psychologically acceptable in this day and age. Training is a better approach: instruct users to never return a phone call based solely on a text message, and to notify authorities when an attack is suspected.

  Generously implement separation of duties to reduce the risk of insider fraud. Remember that internal threats should always be included as part of your threat profile.

  Chapter 36: The DBA Role

  A database administrator, or DBA, is in charge of the structure, performance and security of a database. Any useful application will need to store data at some point in a persistent manner, and a database is what we call this capability. In enterprise applications, the DBA role is crucial as the database is without a doubt the Holy Grail of attack targets, and the security required to protect this treasure is unique. While the presentation and middleware layers are the ones most often directly attacked, gaining access to data is usually the real goal – these intermediate layers are simply a means to the end. This is why injection attacks are so serious as they can either leak information or eventually open the door to access data directly.

  Inference and Aggregation

  Two specific database attacks are not very well-known unless you intentionally explore database security. The first is an inference attack, where the attacker is able to glean sensitive information without direct access to the database. It can be difficult to protect against an inference attack, as the individual non-sensitive pieces of data can be legitimately obtained. For example, a developer suspects that a new hire is being paid more than himself, which is obviously sensitive information. In trying to be transparent without revealing too much, the company lists total compensation for the entire company by month on the intranet. Since the new hire is the only person hired in the last month, the snooping employee simply subtracts last month’s total from the current month’s and figures out the new hire’s salary.
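
  The arithmetic behind that example is trivially simple, which is precisely the danger. The payroll figures here are made up:

```python
# The inference attack from the example, in two lines of arithmetic.
# Monthly company-wide payroll totals published on the intranet:
payroll = {"June": 1_240_000, "July": 1_352_500}   # illustrative figures

# Exactly one person was hired in July, so the difference between the
# two published totals is that person's monthly compensation:
new_hire_salary = payroll["July"] - payroll["June"]
print(new_hire_salary)  # 112500
```

  Neither published total is sensitive on its own; the attack lives entirely in the subtraction.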

  The second attack is aggregation, in which multiple pieces of information that are non-sensitive by themselves represent sensitive information when put together. As an example, let’s assume an enemy army learns that the opposing army resupplies its secret underground base camps every two weeks using large caravans that seem to drive around randomly. The enemy also notices spikes in energy usage at specific locations. By correlating the routes the resupply caravans take with the locations of the energy spikes, the enemy is able to deduce where the secret underground bases are. By combining pieces of less-sensitive information, the enemy determines highly sensitive information.

  Now let’s walk through some database security precautions we can take to keep our data secure.

  Polyinstantiation

  Both inference and aggregation attacks can be mitigated using polyinstantiation, or employing multiple instances of the database information. This can often be accomplished using a database view, which abstracts the underlying information into a restricted ‘view’ of the data. Using this approach, we can restrict the information available to a consumer based on the consumer’s security clearance or classification level. Polyinstantiation mitigates inference attacks by hiding data according to classification labels, and aggregation attacks by labeling different aggregations of data separately. Keep in mind this approach does not require us to store multiple copies of data, but rather present it in multiple ways at run-time.
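
  A restricted view is easy to demonstrate with an in-memory database. The schema, rows, and classification labels below are my own illustration of the idea, not an example from the text:

```python
import sqlite3

# Polyinstantiation through a view: one base table, a restricted
# presentation for uncleared consumers.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE flights (
    flight TEXT, cargo TEXT, classification TEXT)""")
db.executemany("INSERT INTO flights VALUES (?, ?, ?)", [
    ("F-100", "food supplies", "UNCLASSIFIED"),
    ("F-200", "weapons",       "SECRET"),
])

# Uncleared consumers query the view, never the base table.
db.execute("""CREATE VIEW flights_public AS
    SELECT flight, cargo FROM flights
    WHERE classification = 'UNCLASSIFIED'""")

print(db.execute("SELECT flight, cargo FROM flights_public").fetchall())
# [('F-100', 'food supplies')]
```

  Grants in a real database would permit access only to the view, so the classified rows are never visible to filter, count, or infer against.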

  Database Encryption

  Attacks against a database will come from both outside and inside sources. Internal threats are actually the greater worry, as unauthorized employee access can go undetected for a long time if we’re not careful. Employees who have been involuntarily terminated in a down economy are the greatest insider threat. Proper data encryption is our best protection against both classes of threats. Furthermore, in many instances encryption is required if we are to remain in compliance with external regulations and industry standards. However, there are a few related concerns you need to be aware of, such as key management, database performance, and storage size.

  Obviously, to encrypt information, we must store the key somewhere safe. Otherwise it’s like locking your car and then placing the key on the hood. Beyond proper key management, we also need to ensure proper access control, auditing and capacity planning.

  We all know that one of the quickest ways to increase relational database performance is to apply proper indexing. Indexes are essentially pre-computed lookup tables, so that when a query that can leverage an index is executed, the search is extremely quick and saves on CPU resources. Unfortunately, indexing requires that the database be able to see the raw content, which is not possible if we encrypt that content. The end result is that if encryption prevents us from creating the proper indexes, database performance can slow down to molasses in January (that’s really slow and thick in case you didn’t know).

  Not only can encryption slow a database down for specific queries, it can also increase storage requirements. Encryption algorithms normally pad the input so that the output always consists of fixed-size blocks regardless of the length of the input. For example, a string of 3 characters might result in an output block the same size as a string of 15 characters. You can expect a general increase in storage requirements of roughly 30%.
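
  The padding overhead is easy to estimate. This back-of-envelope sketch assumes a block cipher with 16-byte blocks and PKCS#7-style padding, which always adds at least one pad byte:

```python
# Estimate ciphertext size for a block cipher with PKCS#7-style padding:
# output is always rounded up to the next full block.
def padded_size(plaintext_bytes, block=16):
    return block * (plaintext_bytes // block + 1)  # at least 1 pad byte

print(padded_size(3))     # 16
print(padded_size(16))    # 32 (a full extra block of padding)
print(padded_size(1000))  # 1008
```

  The overhead is proportionally huge for tiny values like the 3-character string above, which is why heavily encrypted tables of short fields grow the most.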

  Following is a list of questions that should be asked to elicit hidden requirements due to database encryption:

  Where should the data be encrypted - at its point of origin in the application or in the database where it resides?

  What is the minimum data classification level before data must be encrypted?

  Is the database designed to handle the increased requirements for data encryption?

  Are we aware of the performance impact of data encryption, and is the tradeoff between performance and security acceptable?

  Where will the encryption keys be stored?

  What authentication and access control measures will be implemented to protect the key that will be used for encryption and decryption?

  Are the individuals who have access to the database controlled and monitored?

  Are there security policies in effect to implement security auditing and event logging at the database layer in order to detect insider threats and fraudulent activities?

  If there is a breach of the database, do we have an incident management plan to contain the damage and respond to the incident?

  Once we have decided that encryption needs to happen, and when and where it applies, we can choose between two different approaches – have the database encrypt the data for us or encrypt the data before we hand it to the database for storage.

  When a database handles encryption, key management is handled within the database as well; this is referred to as transparent database encryption, or TDE. While this hides the complexity from the application using the database, it can cause a significant performance hit on the database server. Additionally, placing the key in the same repository as the encrypted data can be problematic, as a user account with access to the encrypted data will more than likely have access to the key storage mechanism too.
