There are several steps we can take to avoid violating software licenses, such as using a good software asset management process to keep track of all software in use. By centralizing the control, distribution and installation of software, we can keep rogue packages from showing up on the network. Locking down removable media drives on workstations can help, as will performing regular network scans to detect the software actually being used. From a legal viewpoint, it is a good idea to have employees sign an agreement not to install unapproved software. Be aware that some disaster recovery options may require that additional software licenses be purchased.
From a software development point of view, there are some licensing tactics we should discuss. If a program needs to expire at a future date, and you attempt to disable code based on that date, an end user can try two different methods to defeat the logic. First, he or she can simply change the system date and time to something in the past, fooling the software into thinking the expiration point has not been reached. Second, byte patching can be used to alter the instructions at the byte level so that the software continues to function past the expiration date. Since absolute dates are seldom compiled into code, a numerical window value – usually expressed as a number of days – is often hard-coded, and byte patching can be used to change this value as well. Careful thought needs to go into the scheme used to disable features after a specific date, as a hacker could exploit it and cause a denial of service.
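To make the weakness concrete, here is a minimal sketch of such a check, with invented values (INSTALL_DATE and TRIAL_DAYS are placeholders, not taken from any real product). Because it trusts the local clock and hard-codes the window, rolling the clock back or byte-patching the constant defeats it.

    from datetime import date

    # Hypothetical values; a real product records the install date at install
    # time and typically obfuscates both values inside the binary.
    INSTALL_DATE = date(2024, 1, 1)
    TRIAL_DAYS = 30   # the numerical window an attacker could byte-patch

    def trial_expired(today=None):
        # Relies entirely on the local system clock, so setting the clock back
        # into the trial window re-enables the software.
        today = today or date.today()
        return (today - INSTALL_DATE).days > TRIAL_DAYS

    if __name__ == "__main__":
        print("Trial expired" if trial_expired() else "Trial active")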
An alternative approach is to release the software with only limited functionality enabled. Byte patching can still be used to flip the ‘enable’ flag, although that flag is a little more difficult to locate. The only surefire way to deliver a set amount of functionality without any chance of the rest being enabled is to produce a version of the software that does not include those features at all.
Technologies
Intrusion Detection Systems
In addition to ensuring secure software development, the Security role must also ensure that the infrastructure is properly protected. Let’s take a look at some technologies that can be used to carry this out.
An intrusion detection system, or IDS, looks at passing traffic to see if it can detect any attack patterns or unauthorized usage. This is useful for traffic coming from both external and internal sources. When a threat is perceived, the IDS will notify an administrator.
An IDS provides many benefits beyond simple notification that an attack may be underway. It can gather evidence on intrusion activity to be used for prosecution. It can carry out automated responses such as terminating a connection or sending alert messages through multiple paths. It can also connect with existing system tools and enforce policies. But an IDS cannot make up for weak policies, weak identification and authentication (I&A) implementations or application-level vulnerabilities. Nor will it prevent logical weaknesses such as back doors into applications.
Note that we just mentioned the ability of an IDS to automatically take action when a threat has been identified. Before enabling such a capability, an IDS policy must be created to establish what actions are acceptable. For an IDS, only two actions are of much value – terminating the access, or simply tracing the access. In the latter case, the IDS can trace the traffic back to the source so that we can subsequently plug the hole or use the data for later prosecution.
There are two categories of IDSs – network-based and host-based.
A network-based IDS, or NIDS, monitors all network traffic. If placed between the Internet and a firewall, it will monitor and report all attacks it finds, whether or not the firewall stops them. If placed inside the firewall, it will recognize intruders who get past the external firewall. An IDS does not replace a firewall; it complements one.
A host-based IDS, or HIDS, is installed on a computer and monitors traffic coming into and out of the computer, as well as file, memory and CPU usage.
In general, any IDS will have four components as shown in Figure 145 - sensors that collect data, analyzers that decide if an intrusion is underway, an administration console and a user interface. For instance, multiple sensors will be placed around the network, sniffing traffic as it passes by and handing the packets to an analyzer. The analyzer will unpack the contents and apply some intelligence to decide if the packet is suspicious. If it looks shady, the analyzer will generate an alert that is surfaced using one or more user interfaces. The administration console is used to configure sensors, analyzer logic, and how the user interfaces behave.
Figure 145: IDS Components
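Purely as an illustration of how the four components in Figure 145 fit together (the class names and traffic format below are invented, not a real IDS API), a sensor hands raw packets to an analyzer, the analyzer applies rules configured through the administration console, and alerts surface through the user interface.

    class Sensor:
        # Collects raw packets (here, just strings) from a traffic source.
        def __init__(self, source):
            self.source = source
        def collect(self):
            for packet in self.source:
                yield packet

    class Analyzer:
        # Applies detection logic configured by the administration console.
        def __init__(self, rules):
            self.rules = rules          # e.g. substrings considered suspicious
        def analyze(self, packet):
            return [rule for rule in self.rules if rule in packet]

    class ConsoleUI:
        # Stands in for both the administration console and the user interface.
        def alert(self, packet, matches):
            print("ALERT:", matches, "seen in", repr(packet))

    traffic = ["GET /index.html", "GET /etc/passwd", "POST /login"]
    sensor, analyzer, ui = Sensor(traffic), Analyzer(["/etc/passwd"]), ConsoleUI()
    for pkt in sensor.collect():
        hits = analyzer.analyze(pkt)
        if hits:
            ui.alert(pkt, hits)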
There are three types of IDS algorithms – signature, statistical and neural networks.
A signature-based IDS depends on pre-defined signatures to recognize an intrusion attempt. As traffic flows past, it compares real-time patterns to its database of signatures, and if one matches closely enough, it raises the alarm. For example, if a large number of packets with the SYN flag set are detected, it may assume that a SYN flood attack is underway. This type of IDS is limited to well-known patterns, as it can only reference an existing database of known attacks.
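A minimal sketch of that SYN-flood signature (the threshold and packet format are invented for illustration) simply counts SYN-only packets per source and raises an alert once a hard-coded threshold is crossed.

    from collections import defaultdict

    SYN_THRESHOLD = 100    # hypothetical limit on half-open SYNs from one source

    def detect_syn_flood(packets):
        # packets: iterable of dicts such as {"src": "203.0.113.9", "flags": "S"}
        syn_counts = defaultdict(int)
        alerts = []
        for pkt in packets:
            if pkt.get("flags") == "S":            # SYN set with no ACK
                syn_counts[pkt["src"]] += 1
                if syn_counts[pkt["src"]] == SYN_THRESHOLD:
                    alerts.append("Possible SYN flood from " + pkt["src"])
        return alerts

    sample = [{"src": "203.0.113.9", "flags": "S"}] * 150
    print(detect_syn_flood(sample))    # ['Possible SYN flood from 203.0.113.9']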
A statistical-based IDS must be trained to recognize normal and aberrant behavior on a given network. The good news is that it does not need a library of attack signatures to be kept up-to-date. The bad news is that if it is not trained properly, a lot of false positives will occur. Additionally, if an attack is underway while it is being trained, it may assume that traffic pattern is normal.
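In the same spirit, the statistical model can be sketched as a learned baseline (the training numbers below are invented): anything far from the trained mean is flagged, which also shows why training during an attack would poison the baseline and why a poorly trained model produces false positives.

    from statistics import mean, stdev

    def train_baseline(samples):
        # samples: e.g. requests per minute observed during the training period
        return mean(samples), stdev(samples)

    def is_anomalous(value, baseline, deviations=3.0):
        mu, sigma = baseline
        return abs(value - mu) > deviations * sigma

    training = [95, 102, 98, 110, 105, 99, 101]    # hypothetical "normal" traffic
    baseline = train_baseline(training)
    print(is_anomalous(104, baseline))   # False - within the learned range
    print(is_anomalous(950, baseline))   # True  - an attack, or a false positive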
A neural network IDS is similar to the statistical model but has the capability to learn what is acceptable over time, resulting in fewer false positives and negatives.
The configuration providing the best protection is to combine a signature and statistical model.
Intrusion Prevention Systems
An intrusion prevention system, or IPS, is essentially a weaponized IDS capable of actively defeating attacks. An IDS does have some capability to take action, such as terminating connections, so the line between the two is a little blurry. An IPS carries that capability to the next level by being able to do things such as reconfigure a firewall to block an offending source IP address. However, there is a danger that an attacker could turn this capability against us by tricking an IPS into shutting off traffic from valid IP addresses.
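As a rough illustration of the ‘reconfigure a firewall’ idea (assuming a Linux host with iptables and root privileges; a real IPS would use its vendor’s management API), an automated block can be as simple as appending a DROP rule for the offending source – which is exactly why a spoofed source address can turn the feature into a denial of service.

    import subprocess

    def block_source(ip):
        # Appends a DROP rule for the offending address. If an attacker spoofs a
        # legitimate partner's address, this same call cuts off valid traffic.
        subprocess.run(["iptables", "-A", "INPUT", "-s", ip, "-j", "DROP"], check=True)

    # block_source("203.0.113.9")   # example only - requires root and a test system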
Honeypots and Honeynets
While we have already mentioned honeypots and honeynets on several occasions, let’s add a little bit more color.
Figure 146: Distracting the Bad Guy
Looking at Figure 146, recall that a honeypot is a software application that pretends to be a server vulnerable to attack. Its purpose is to act as a decoy for two reasons – to get an attacker to leave the real systems alone, and to possibly identify a real threat. There are two types – high-interaction and low-interaction.
A high-interaction honeypot is a real environment that can be attacked, while a low-interaction honeypot just looks like an environment. A high-interaction honeypot will provide more information on the attacker.
A honeynet comprises multiple honeypots to simulate a networked environment, giving investigators a better chance to observe the attacker in action. During this time, an IDS triggers a virtual alarm while a stealthy key logger records everything the attacker types. To ensure the attacker cannot abuse the honeynet and launch attacks from it, a firewall stops all outgoing traffic. All traffic on honeypots or honeynets is assumed to be suspicious, and the information gleaned is used to harden the company’s live network.
One danger in implementing a honeypot or honeynet is that an external service designed to report unsafe or vulnerable sites may pick up the site and not realize it is fake. This could result in damage to the public image of the company, so care must be taken.
Data Leakage Prevention
Data leakage prevention, or DLP, is a suite of technologies and processes that locates, monitors and protects sensitive information from being disclosed. Simply put, DLP has three goals:
To locate and catalog sensitive information throughout a company.
To monitor and control the movement of that information across the internal network.
To monitor and control the movement of that information on end-user systems.
You might have noticed that those three goals just happen to align with the three states of data - data-at-rest, data-in-motion and data-in-use. This is illustrated in Figure 147.
Figure 147: Data Leakage Protection
Data-at-rest represents any information persisted to storage such as hard drives, USB drives, tape backups or in live databases. DLP uses crawlers, which are applications deployed to log onto each system and ‘crawl’ through the various data stores. The crawler will search for and log specific information sets based on a series of rules configured into the DLP.
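A heavily simplified sketch of the crawler idea (the rules and the path are placeholders, not part of any real DLP product) walks a directory tree and logs files whose contents match patterns for sensitive data, such as something shaped like a U.S. Social Security number.

    import os
    import re

    # Hypothetical rule set: rule name -> compiled pattern
    RULES = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def crawl(root):
        findings = []
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "r", errors="ignore") as fh:
                        text = fh.read()
                except OSError:
                    continue
                for rule, pattern in RULES.items():
                    if pattern.search(text):
                        findings.append((path, rule))
        return findings

    # print(crawl("/data/shares"))   # placeholder path for a file share to scan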
Data-in-motion represents any information moving around a network or being transferred between two processes. For example, network packets contain data-in-motion, as does inter-process communication between two applications running on the same server. DLP uses network appliances or embedded technologies to capture and analyze network traffic. Now, when files are sent across a network, they will almost always be broken down into small packets. This means that a DLP will need to recognize packet patterns and be able to reassemble file packets into the original file in order to properly analyze the contents. This requires something called deep packet inspection, or DPI, which looks beyond the packet header to examine the packet’s payload. If sensitive information is detected being sent to an unauthorized destination, the DLP can alert and/or block the data flow in real-time. The behavior depends on how the DLP rule sets have been configured.
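Deep packet inspection can be sketched the same way (the packet structure here is invented): reassemble a flow’s payload fragments in sequence order before applying the rules, because a sensitive string split across two packets would otherwise never match.

    import re

    SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

    def reassemble(packets):
        # packets: list of dicts such as {"seq": 0, "payload": b"..."} for one flow
        ordered = sorted(packets, key=lambda p: p["seq"])
        return b"".join(p["payload"] for p in ordered)

    def contains_sensitive(packets):
        stream = reassemble(packets).decode("utf-8", errors="ignore")
        return bool(SSN.search(stream))

    flow = [
        {"seq": 1, "payload": b"-6789 was sent to the partner"},
        {"seq": 0, "payload": b"employee SSN 123-45"},
    ]
    print(contains_sensitive(flow))   # True - the match only appears after reassembly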
Data-in-use represents information that is being manipulated by end users at their workstations, such as copying data to a flash drive, printing, and even copy/paste actions between applications. The DLP will usually install an agent on each computer that watches this activity and is managed by the central DLP solution.
So, we have the three states of data – at-rest, in-motion and in-use. To qualify as a real DLP, a solution must address all three states and support a centralized management capability.
As we mentioned before, a DLP solution must allow its behavior to be controlled by a series of rule sets. Most DLPs come with a preconfigured set of rules, but it is important for the customer to be able to customize those rules. Data classification should play a large part in how the rules are set up. Additionally, there are a few key capabilities that a full-fledged DLP must provide. For example, it must integrate with a directory service, allowing the DLP to map a network address to a specific user. It should provide some type of workflow management capability so that we can configure how incidents are handled. In other words, the DLP should allow us to dictate how incidents are routed to various parties based on the type of violation, severity, and the identified user, among other possibilities. The solution should allow us to back up and restore features and configurations to preserve policies and settings. And the DLP should support some type of reporting function, which could be internal or satisfied by integrating with external tools.
Naturally, there are some risks when relying on a DLP solution. DLP network modules may be improperly tuned, resulting in blocking valid content or missing unauthorized content. Upon initial installation, the system should first be enabled in a monitor-only mode – this allows us to fine-tune the rules and gives the various parties time to bring their activities into compliance before the solution is completely enabled. The impacted stakeholders should be involved in this process to minimize disruptions when it comes time to turn it on. After the system has been fully enabled, there should be some type of quick after-hours process to alter the rules in case legitimate content is being blocked while the management team is not available.
A DLP might result in a large number of false positives, which can overwhelm staff and hide valid hits. This is why it is so important that rule sets be customizable by staff. To minimize these types of disruptions, consider rolling the solution out in stages, focusing on the highest risk areas first. Trying to do too much at one time can quickly overwhelm available resources.
A significant issue with a DLP solution is that it can inspect encrypted data only if it knows how to decrypt it. This means that the DLP agents, appliances and crawlers must have access to the proper encryption keys. If users have access to personal encryption packages and do not provide the keys, information will not be accessible to the DLP. To minimize this risk, the company should roll out policies that forbid any encryption capability that is not centrally managed and block any content that cannot be decrypted.
One last weakness of DLPs is that they cannot detect sensitive information in graphics files. This could include steganography, in which data is hidden inside of images, or images that represent intellectual property. A DLP can identify that content is an image and alert based on traffic analysis of unexpected patterns, but the only real solution to this problem is strongly enforced policies that govern the use of this type of information.
Anti-Malware
To defeat malware, we must do three things:
Implement an anti-malware engine.
Create an educated anti-malware research team.
Create an update mechanism.
The anti-malware engine lives on the client and executes five separate steps shown in Figure 148 – scan, detect, quarantine, remove and restore.
Figure 148: Anti-Malware Engine Components
The first step is to scan the system for malware that has somehow infected the disk or memory. Scanning can easily impact system performance, primarily the CPU, so care must be taken to keep the impact to an acceptable level.
The second step is to detect malware that exists, which is accomplished using two different techniques. Pattern matching is carried out using a definition list, which contains ‘fingerprints’ that match known malware. The second technique is to use a heuristic analyzer that looks at the behavior of suspect processes or files and correlates that to known malware behaviors.
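Both techniques can be sketched roughly as follows (the fingerprint and behavior names are made-up placeholders): pattern matching compares a file’s hash against the definition list, while a crude heuristic scores how many known-bad behaviors a process exhibits.

    import hashlib

    # Placeholder definition list: SHA-256 fingerprints of known malware samples
    DEFINITIONS = {
        "0" * 64,     # stand-in fingerprint; real lists hold many thousands
    }

    SUSPICIOUS = {"modifies_boot_sector", "disables_av", "injects_into_process"}

    def matches_definition(path):
        with open(path, "rb") as fh:
            fingerprint = hashlib.sha256(fh.read()).hexdigest()
        return fingerprint in DEFINITIONS

    def heuristic_score(observed_behaviors):
        # One point per known-bad behavior; a real engine weights these carefully.
        return len(SUSPICIOUS & set(observed_behaviors))

    def looks_malicious(path, observed_behaviors):
        return matches_definition(path) or heuristic_score(observed_behaviors) >= 2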
The last three steps can be collectively seen as handling malware by containing, or quarantining, infected files, removing or eradicating the malware, and restoring the system to its pre-infection state.
Note that when quarantining or removing a rootkit, the anti-malware engine is unable to trust the operating system’s own APIs. Tunneling signatures can be used to detect tampering with an OS and can help with rootkits.
The second item on our anti-malware to-do list is to create an educated anti-malware team whose primary purpose is to research malware detected by the engine. To avoid detection, malware will often come in a compressed package, so the team must be well-versed in unpacking, un-obfuscating, decrypting and reverse engineering the malware. A technique called static analysis inspects the instructions contained within malware with the goal of counteracting the techniques used in its own obfuscation. Some malware, such as polymorphic viruses, change their appearance and internal structure each time they are run, so it might be necessary to perform a dynamic analysis in which the malware is allowed to execute in a sandboxed virtual environment.
Once malware has been identified and analyzed, the team updates the definition list with the appropriate fingerprint.
The last item we need to carry out to effectively combat malware is to ensure the definition list is updated using a formalized process, not an ad-hoc one. Updates with new fingerprints should be embedded in the existing infrastructure process to avoid impacting user productivity.
Chapter 44: The Change Management Role
The Change Management role may be represented by a pre-selected group of people, or it may fall to a single individual in smaller companies. In a nutshell, this role has the last say in any change to production environments. It may be referred to as a Change Advisory Board, or CAB, and is most often populated by high-level employees who have full-time duties other than serving on the CAB. While the Change Management role will not necessarily execute all activities presented in this section, it should ensure that the processes covered are being followed before a request to update the production environment is submitted.
Change Management
Before declaring a product ready for release, we need to consider the activities that will inevitably follow – namely, how to handle defects and change requests. This includes setting up some type of support desk and establishing the processes to queue, approve and execute on these issues, as well as deciding how new versions will be released and with what frequency.
Any change request – whether it is a defect or an enhancement – should not be allowed unless the appropriate business owners have formally approved it. Before approval is given, the person or entity approving must ensure they understand the risk and fallout resulting from the change. Care must be taken that schedule pressures do not dictate approval, which often happens; instead, the decision must be based on risk/benefit factors. For example, the product team may have promised that a certain feature will be released by the next quarter, and so they push to get it completed, ignoring the risk to the entire product despite warnings from the development team. Such a scenario can often be helped by using Scrum, which gives the development team the ultimate say on how long it will take to deliver a certain capability.
In very mature organizations, all changes are passed through the program management office, or PMO, and are then approved or rejected by members of the configuration/change control board, or CCB. These functions may go by different names, but they almost always exist in some capacity in a healthy organization.
If a newer version of the software is being accepted, the relative attack surface quotient (RASQ) should be calculated to ensure that the level of security is not decreased by the new release. The asset management database should also be updated with the new release before acceptance is provided.
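As a rough illustration of that comparison (the attack vectors and weights below are invented; a real RASQ calculation uses a published set of attack vectors and severities), scoring the current and candidate releases side by side flags a growing attack surface before acceptance.

    # Hypothetical attack vectors and weights; a real RASQ model defines its own.
    WEIGHTS = {"open_port": 1.0, "running_service": 0.8,
               "enabled_account": 0.7, "weak_acl": 0.9}

    def rasq(counts):
        # counts: attack-vector name -> number of instances the release exposes
        return sum(WEIGHTS[vector] * n for vector, n in counts.items())

    current  = {"open_port": 3, "running_service": 5, "enabled_account": 4, "weak_acl": 1}
    proposed = {"open_port": 5, "running_service": 6, "enabled_account": 4, "weak_acl": 1}

    if rasq(proposed) > rasq(current):
        print("Attack surface increased - investigate before accepting the release")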