Wireless networking and communications.
Radio frequency identification, or RFID.
Location based services, or LBS.
Near field communication, or NFC.
Sensor networks.
A sensor network is a collection of many small micro-computer detection stations that collect and transmit information. While they were once found only in weather monitoring, they can now be found in smart homes, traffic monitoring, medical devices and military surveillance operations. The devices have limited power and data storage capabilities, and their communication capabilities are less than reliable. Naturally, since they are so small, they can easily be stolen.
When designing a sensor, special care must be taken to ensure data cannot be disclosed or altered, and internal clocks must be synchronized to prevent integrity problems. Availability is the Achilles heel of sensors, as it is usually fairly simple to disrupt their operation – often crunching one under your heel is sufficient. Threats they are vulnerable to include node takeovers, addressing protocol attacks, eavesdropping, traffic analysis and spoofing. A well-known spoofing threat is the Sybil attack, in which a rogue device assumes the identity of a legitimate sensor node.
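To make the Sybil threat concrete, here is a minimal detection sketch in Python. It assumes each incoming report carries a claimed node ID and a link-layer radio address – both field names are hypothetical – and flags any single radio that claims multiple identities.

```python
# Minimal Sybil-detection sketch: flag link-layer addresses that claim
# more than one node identity. Field names are hypothetical.
from collections import defaultdict

def find_sybil_candidates(reports):
    """reports: iterable of dicts like {'node_id': 'n01', 'link_addr': 'aa:aa'}"""
    ids_by_addr = defaultdict(set)
    for r in reports:
        ids_by_addr[r['link_addr']].add(r['node_id'])
    # A single radio claiming several identities is suspicious.
    return {addr: ids for addr, ids in ids_by_addr.items() if len(ids) > 1}

reports = [
    {'node_id': 'n01', 'link_addr': 'aa:aa'},
    {'node_id': 'n02', 'link_addr': 'bb:bb'},
    {'node_id': 'n99', 'link_addr': 'bb:bb'},  # same radio, second identity
]
print(find_sybil_candidates(reports))  # {'bb:bb': {'n02', 'n99'}}
```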
A layered approach to pervasive computing security is required. Some best practices are the following:
Ensure that physical security protections are in place, such as locked doors and badged access.
Change wireless access point devices’ default configurations and don’t broadcast SSID information.
Encrypt the data while in transit using TLS.
Encrypt data residing on the device.
Use a shared-secret authentication mechanism to keep rogue devices from hopping onto your network (a sketch of this appears after this list).
Use device-based authentication for internal applications on the network.
Use biometric authentication for user access to the device.
Disable or remove primitive services such as Telnet and FTP.
Have an auto-erase capability to prevent data disclosure should the device be stolen or lost.
Regularly audit and monitor access logs to determine anomalies.
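As promised in the shared-secret item above, here is a minimal sketch of HMAC-based challenge-response authentication. It assumes the secret was provisioned to the device ahead of time; key management and the transport channel are out of scope.

```python
# Minimal shared-secret (HMAC) challenge-response sketch. The server issues
# a random challenge; only a device holding the shared secret can compute
# the matching response, so the secret never crosses the network.
import hmac, hashlib, os

SHARED_SECRET = b'provisioned-at-manufacture'  # hypothetical provisioning step

def issue_challenge():
    return os.urandom(32)

def device_response(secret, challenge):
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def server_verify(secret, challenge, response):
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)  # constant-time compare

challenge = issue_challenge()
response = device_response(SHARED_SECRET, challenge)
assert server_verify(SHARED_SECRET, challenge, response)
```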
Embedded Systems
Strictly speaking, an embedded system is a component of a larger machine or system. Embedded systems are usually designed for a single purpose and are almost always associated with some type of dedicated hardware. Pervasive computing devices are often examples of embedded systems. Because they have a single purpose, embedded systems are generally more reliable than general-purpose systems. They are usually tightly coupled with real-time events and are often stand-alone.
From a real-world point of view, though, the line between an embedded system and a multi-purpose system is quickly blurring. A PC is a multi-purpose system, as it can load and run multiple, flexible programs. A garage door opener is an embedded system, as it performs a single function and is hardware-based. But what about a smart watch? It is closely tied to hardware and real-time events, but it can load custom apps just like a PC. One approach to defining an embedded device is the physical form factor. For this we can look to the ‘couch rule’ – if it can be lost in the cushions of a couch, then it is an embedded device. That means some embedded devices differ from a full system only because they are so compact. A smartphone is a full-on system but is so small that it lives in a completely different threat world than a PC does. However, this is a poor approach as devices become smaller and smaller – eventually today’s full systems will easily fit in your pocket.
Instead, we’re going to use the following definition, called the ‘app rule’, to redraw the line between full systems and embedded systems – if a device can load an app created by someone other than the manufacturer, then it is NOT an embedded device – it is a complete system. That means that an iPhone, Android phone, Apple watch, Alexa Echo, and Raspberry Pi are all full systems. By this definition, examples of embedded systems are garage door sensors, garage door openers, door video cameras, smart door locks, movement sensors, and industrial sensors. This approach works fairly well, as we can easily see that a PDA from yesteryear that did not allow third-party apps would be deemed an embedded device, while the latest iterations of clamshell phones are considered full systems as they allow third-party apps to be installed. Perhaps this approach is not perfect, but it is the best that I have seen to date.
Another attribute of embedded systems is that both the data and instruction sets live in the same memory space (a smartphone definitely does not fit in that category!). The read-only memory or flash memory chips in an embedded system that hold the instructions are called firmware. If firmware is stored on a read-only device, then it is not programmable by the end user. A significant drawback to programmable embedded systems is that they are inherently unsecure, as manufacturers seldom pay attention to locking the environment down. In fact, embedded systems that are connected to the Internet, known as the Internet of Things, or IoT, are now some of the most popular targets for attackers creating zombies that launch distributed attacks. The ISO 15408 Common Criteria standard and the Multiple Independent Levels of Security, or MILS, standard are great resources to use when addressing security vulnerabilities in embedded systems. The MILS architecture specifically is useful for creating a verified, always invoked and tamperproof security layer.
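To give a flavor of what a ‘verified’ layer might actually check, here is a minimal firmware integrity sketch: compare the image’s SHA-256 digest against a known-good value before allowing it to run. The file path and digest are hypothetical placeholders, and this is only the kind of check MILS-style designs enable, not the architecture itself.

```python
# Sketch of a firmware integrity check: hash the image and compare against
# a known-good digest (ideally stored where an attacker cannot alter it,
# such as ROM). Path and digest below are hypothetical placeholders.
import hashlib

EXPECTED_SHA256 = 'd2c1...'  # placeholder for the known-good digest

def firmware_is_trusted(image_path, expected_hex):
    h = hashlib.sha256()
    with open(image_path, 'rb') as f:
        for chunk in iter(lambda: f.read(8192), b''):
            h.update(chunk)
    return h.hexdigest() == expected_hex

# if not firmware_is_trusted('/boot/firmware.bin', EXPECTED_SHA256):
#     refuse_to_boot()
```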
The most common attack on embedded systems is a disclosure attack. In fact, the first step in turning an embedded device into a zombie is to uncover information such as passwords that have been left exposed on the device in clear text. The quickest way to apply security to a device is to ensure it always uses some type of encryption, whether network layer security such as TLS or onboard encryption to protect sensitive data. The biggest reason that embedded devices often do not include any encryption at all is the increased CPU, memory and power that encryption requires, especially for devices that run on battery power alone.
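As an example of onboard encryption, here is a minimal sketch using AES-GCM via the third-party pyca/cryptography package. On a real device the key would live in a TPM or secure element rather than sitting in memory.

```python
# Sketch of at-rest encryption with AES-GCM (pip install cryptography).
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

key = AESGCM.generate_key(bit_length=128)  # in practice, keep in a secure element
aesgcm = AESGCM(key)

nonce = os.urandom(12)                     # never reuse a nonce with the same key
plaintext = b'admin password: do not leave me in clear text'
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext
```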
A reactive measure that is fairly effective is to use some sort of auto-erase feature that wipes the device clean of sensitive data if a compromise is detected. For example, after a set number of unsuccessful authentication attempts, the device self-destructs by erasing all data, usually resulting in a bricked device.
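A minimal sketch of such a policy follows. The wipe routine is a stand-in; in practice, simply discarding the at-rest encryption key is often enough to render the remaining data unrecoverable.

```python
# Sketch of an auto-erase policy: wipe sensitive data after too many
# consecutive failed authentication attempts.
class AutoEraseGuard:
    def __init__(self, wipe_fn, max_attempts=10):
        self.failures = 0
        self.wipe_fn = wipe_fn
        self.max_attempts = max_attempts

    def record_auth(self, success):
        if success:
            self.failures = 0          # any success resets the counter
            return
        self.failures += 1
        if self.failures >= self.max_attempts:
            self.wipe_fn()             # e.g., destroy the encryption key

guard = AutoEraseGuard(wipe_fn=lambda: print('wiping device...'))
for _ in range(10):
    guard.record_auth(success=False)   # triggers the wipe on the 10th failure
```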
Due to their small size, embedded devices are highly susceptible to side channel attacks, where the attacker watches and analyzes radiation and power usage as the device runs. A fault injection attack is also used, where the attacker causes some type of scenario to occur and watches to see how the device behaves. For example, he could disable a Wi-Fi chip and see if the device accidentally reveals some other weakness. To combat this, the internal circuitry should be protected by some type of physical deterrent, such as seals using epoxies or tapes that must be broken before the internal mechanisms can be accessed. At times, the layers in between boards can be used for signal paths so that if the layers are separated, the device simply stops functioning correctly.
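Power and radiation analysis are physical attacks, but timing is a side channel that shows up in software as well. As one illustrative software-level mitigation, the sketch below contrasts a naive comparison that leaks timing with Python’s constant-time hmac.compare_digest.

```python
# A naive byte-by-byte comparison returns early at the first mismatch,
# leaking how much of a secret an attacker has guessed correctly.
import hmac

def naive_equals(a, b):
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:                     # early exit leaks timing information
            return False
    return True

def safe_equals(a, b):
    return hmac.compare_digest(a, b)   # runs in constant time
```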
Many of the same security requirements that full systems use can be applied to embedded systems, such as multi-factor authentication and TPM chips. A particularly dangerous set of embedded devices susceptible to attack are Supervisory Control and Data Acquisition, or SCADA, systems, which watch and control industrial processes such as robotic assembly lines or remote water-filtration stations. Due to the physical damage that could result from a compromise, attackers in this area are becoming increasingly sophisticated. Many SCADA systems were created back in the 1980s and were never designed to be connected to the Internet, but many are becoming IoT devices in order to increase functionality and remote capabilities. The end result is a group of inherently unsecure devices being exposed to very smart and motivated attackers, particularly from nation states trying to find a way to weaken their enemies. Another reason for SCADA vulnerabilities is that they are often based on proprietary systems that relied on the security through obscurity idea, which we all now know is really no security at all. Because they were originally physically secured and isolated, security was not even an afterthought – it simply was not considered. In fact, many do not have any concept of authentication and authorization. The packet control protocol used in the network between SCADA devices is incredibly rudimentary, and pretty much anyone and their pet pterodactyl can break into the system. Naturally, there is no protection from overflow or injection attacks.
Operations and Maintenance
As I’ve mentioned before, there is no such thing as an un-hackable system – there are only degrees of difficulty. Instead, our goal is to make it so difficult for an attacker to compromise a system that he will move along to a more enticing, and probably weaker, target. Residual risk will always be present, but if we do our job properly, it will remain below the level of acceptable risk. Unfortunately, once we have established a secure baseline, we can hardly sit back and sip tea (or perhaps something stronger) while admiring our handiwork. Hackers continuously strive to break encryption algorithms, find zero-day exploits, and just in general ruin our holidays. That is why we must continuously monitor the security health of a system in production. The second law of thermodynamics comes into play here, which can be stated as ‘everything tends toward chaos’. In our case, our perfectly secure system will deteriorate over time and become unsecure. At some point the risk will rise above the level of acceptability, and we must act.
Figure 40: Software Resiliency Levels Over Time
As shown in Figure 40, the time at which an application is most vulnerable is right before a new release. This trend can be seen from two different perspectives. The first says that when risk becomes high enough, we need to release a new version to address uncovered vulnerabilities. The second says that the longer we go without a release, the less secure our application is, because we have not updated our security in a while. In either case, we must continually work on the next release in order to keep risk at an acceptable level. We’re not talking about releasing new functionality – we’re referring to releasing new security controls and fixing old ones. It is not uncommon for a scheduled release to be pushed back due to either incomplete features or sub-par code quality. When this happens, our security risk will increase because it extends the time between releases. Keep that in mind next time a release slips and you need some fodder for pushback.
When we discuss ‘operations’, we are really talking about managing a set of resources, which can be grouped into four categories – hardware, software, media and people.
Hardware resources include such things as networking devices like switches, firewalls and routers, communication devices such as phones, fax machines, and VoIP devices, and computing devices such as laptops, desktops, and servers.
Software resources include applications developed in-house, purchased from an external party, operating systems and even data, believe it or not.
While data falls under software, the physical storage mechanisms used to persist data fall under media. All non-public data must be protected in terms of confidentiality and integrity, whether it is contained in backups, archives, or log files. You can probably guess the various types of media: USB drives, tapes, hard drives, and optical CD or DVD discs.
People resources can simply be described as employees and non-employees.
Monitoring
What cannot be measured cannot be managed, and we cannot measure something if we do not implement continuous monitoring. Therefore, once a system has been deployed to production, we have little hope of being able to properly manage it if we do not have an appropriate level of monitoring in place. Monitoring helps in four ways – due diligence, assurance, detection, and forensics.
Monitoring provides proof of due diligence by ensuring we are compliant with regulations and other requirements, and it helps us prove to stakeholders that we are doing our job. Monitoring will generate tons of data, and if that data is retained properly, we should be able to go back to any point in time and see what was going on.
Monitoring assures us that controls are working properly by determining whether security settings are kept above the MSB, or minimum security baseline. By staying on the job continuously through monitoring, we can be comfortable that the CIA of our software remains intact and that the appropriate controls are in place and working.
New threats such as rogue devices and access points can be detected by monitoring, as can insider and external threats.
Finally, monitoring provides assistance after an attack in the form of forensics by providing audit trails and other evidence.
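As an illustration of the assurance point above, here is a minimal sketch of an MSB compliance check; the setting names and required values are hypothetical.

```python
# Sketch of a minimum security baseline (MSB) check: compare a host's
# reported settings against the baseline and flag anything below it.
MSB = {'tls_min_version': 1.2, 'password_min_length': 12, 'telnet_enabled': False}

def msb_violations(observed):
    violations = []
    for setting, required in MSB.items():
        actual = observed.get(setting)
        if isinstance(required, bool):
            ok = actual == required                        # must match exactly
        else:
            ok = actual is not None and actual >= required
        if not ok:
            violations.append((setting, required, actual))
    return violations

host = {'tls_min_version': 1.0, 'password_min_length': 12, 'telnet_enabled': True}
print(msb_violations(host))
# [('tls_min_version', 1.2, 1.0), ('telnet_enabled', False, True)]
```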
Requirements
While monitoring can be implemented for any system, software or process, we need to be sure that requirements are clearly stated up-front before implementing a solution. These requirements can obviously come from stakeholders, but internal and external regulatory policies can be used as well. As part of the requirements definition step, the specific metrics we want to collect should be identified. Well-defined metrics are a crucial part of the requirements process, as software must often be explicitly written to produce the data a metric requires. Without clear requirements, this will never happen.
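The following minimal sketch illustrates the point: the software itself has to count and expose a value before monitoring can ever collect it. The metric names and the failed-login requirement are hypothetical.

```python
# Sketch showing why a metric must be explicitly implemented in code.
from collections import Counter

metrics = Counter()

def login(username, password_ok):
    if not password_ok:
        metrics['auth.failures'] += 1    # requirement: track failed logins
        return False
    metrics['auth.successes'] += 1
    return True

login('alice', password_ok=False)
login('alice', password_ok=True)
print(dict(metrics))  # {'auth.failures': 1, 'auth.successes': 1}
```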
In any moderately complex environment, teams often find themselves frozen in ‘analysis paralysis’ due to the large number of metrics and potential targets that could be monitored. Two simple rules will help with this conundrum:
1) If an operation failing to function correctly can have a negative impact on the brand and reputation of the organization, then that operation must be monitored. Likewise, the metrics to monitor should directly reflect the correct behavior of that operation.
2) Systems and software that operate in the DMZ should be monitored regardless of their function as they will be the first targets for an external attack and will be the jumping off point for attacks on mission-critical operations. Metrics here should reflect a possible compromise or attack.
Even though we are discussing software assurance, the protection of physical devices falls under our umbrella, as software and data are transported using devices such as backup tapes, USB drives, and removable mass storage devices. In fact, PCI DSS mandates that any physical access to a system containing cardholder data must be restricted, and these restrictions must be verified periodically. Video surveillance is a great way to accommodate this requirement, and when collated with the entry and exit of personnel using audit trails, it can create a very powerful physical access control. PCI DSS requires this data be retained for a minimum of three months, and that access must be reviewed on a regular basis.
How to Implement Monitoring
Now that we have discussed what to monitor, let’s talk about the various ways monitoring can be implemented. If you are a software developer, when the term ‘monitoring’ comes up you might tend to think of logging information to a file, or perhaps exposing endpoints for some system to hit periodically and capture metrics, or maybe even pushing activity logs to another system for subsequent analysis. While this certainly is a part of monitoring, if this is your mindset you will need to widen your perspective a little. In fact, logging is only one aspect of monitoring – scanning and intrusion detection are the other two approaches that must be covered.
Since logging will probably be the most familiar to you, let’s cover that first. Logging in terms of security creates an audit trail, or a record of who did what and when. The National Computer Security Center, or NCSC, has produced a publication called “A Guide to Understanding Audit in Trusted Systems” which lists five core security objectives of any audit mechanism. They are the following, simplified to make each easier to understand (a logging sketch follows the list):
Support the historical review of access patterns in order to prove the effectiveness of security controls.
Support the discovery of internal and external threat agents by recording their attempts to get around security controls.
Highlight violations of the least privilege principle by tracking changes in a user’s privilege level.
Act as a deterrent by making the attacker aware that audit mechanisms are in place.
Contain and mitigate damage to provide additional assurance.
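As promised above, here is a minimal sketch of an audit-trail record supporting these objectives: structured, append-only entries capturing who did what, to what, and when.

```python
# Sketch of audit-trail logging: one structured line per security event.
import json, logging, datetime

audit_log = logging.getLogger('audit')
audit_log.addHandler(logging.FileHandler('audit.log'))
audit_log.setLevel(logging.INFO)

def audit(actor, action, target, outcome):
    audit_log.info(json.dumps({
        'when': datetime.datetime.utcnow().isoformat() + 'Z',
        'who': actor, 'did': action, 'to': target, 'outcome': outcome,
    }))

audit('alice', 'privilege_change', 'role:admin', 'granted')  # least-privilege tracking
audit('mallory', 'login', 'server01', 'denied')              # recording threat agents
```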
While logging is crucial to detecting, preventing and mitigating a compromise, scanning will help us to discover new threats as well as to confirm the makeup of our own network. For example, a scanning tool can probe the various ports and services on each enumerated host, and thereby give us a good idea of the operating systems, versions, open ports, services and protocols that are in active use.
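A minimal sketch of the kind of probing such a tool performs appears below; it simply attempts TCP connections to a few well-known ports on one host. Needless to say, only scan hosts you are authorized to test.

```python
# Sketch of a basic TCP port probe across a handful of well-known ports.
import socket

def scan(host, ports, timeout=0.5):
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

print(scan('127.0.0.1', [22, 23, 80, 443, 3389]))
```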
An intrusion detection system, or IDS, sniffs traffic as it passes by and attempts to recognize patterns indicative of malicious behavior. While an IDS is typically a dedicated network appliance, a bastion host can also act as an IDS. A bastion host is a hardened server sitting in the public-facing DMZ where it is most vulnerable. While these beasts usually serve some type of function other than just detection, they can often be used to detect and report suspicious activity and are considered to be both a deterrent and detective control. As a bastion host will log large amounts of activity, it is important that these logs be protected from tampering.
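One common way to make such logs tamper-evident is hash chaining, where each entry’s hash covers the previous entry’s hash, so altering any record breaks everything after it. The sketch below is illustrative, not any specific product’s mechanism.

```python
# Sketch of a tamper-evident, hash-chained log.
import hashlib

def append_entry(chain, text):
    prev_hash = chain[-1][1] if chain else '0' * 64
    entry_hash = hashlib.sha256((prev_hash + text).encode()).hexdigest()
    chain.append((text, entry_hash))

def chain_is_intact(chain):
    prev_hash = '0' * 64
    for text, entry_hash in chain:
        if hashlib.sha256((prev_hash + text).encode()).hexdigest() != entry_hash:
            return False
        prev_hash = entry_hash
    return True

log = []
append_entry(log, 'ssh login from 203.0.113.7')
append_entry(log, 'sudo invoked by user ops')
assert chain_is_intact(log)
log[0] = ('ssh login from 10.0.0.1', log[0][1])  # tamper with the first entry
assert not chain_is_intact(log)
```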
Another use for such a capability is to act as a honeypot, which is a computer system functioning as a decoy so that attackers will leave the real systems alone. When deployed in this manner, it can also act as a warning system that an attacker is on the prowl, allowing us to research the latest threats and attack techniques that the hacker may be employing. There is a danger in using a honeypot, however, and it centers on the difference between enticement and entrapment. Enticement is the act of purposefully providing an opportunity for a crime to be committed without explicitly encouraging the crime. For example, if a thief is walking down the street looking for a store to rob, I might purposefully leave my front door unlocked and sit with a squad of police inside, just waiting for him to commit a crime. On the other hand, entrapment is the act of encouraging someone to commit a crime when they originally had no such intent. If I were to hang a sign outside of my store reading ‘No one is here, and you can steal all of my stuff without getting caught!’, then I might convince a passing person to take an action they were not even considering until they read the sign. Enticement is legal, while entrapment is highly illegal. In terms of a honeypot, we must be careful not to invite someone to attack our network, collect the evidence, and then turn around and use that evidence against them in court. This might look like claiming a honeypot exposes some juicy live web services such as banking interfaces when in fact it does nothing of the sort.