Surviving the Zombie Apocalypse | @CloudExpo #Cloud #Security
Jun. 30, 2016 02:00 PM
It must be, I thought, one of the race's most persistent and comforting hallucinations to trust that "it can't happen here" - that one's own time and place is beyond cataclysm. (John Wyndham, The Day of the Triffids)
Security is one of the most controversial topics in the software industry. How do you measure security? Is your favorite software fundamentally insecure? Are Docker containers secure?
Dan Walsh, SELinux architect, wrote: "Some people make the mistake of thinking of containers as a better and faster way of running virtual machines. From a security point of view, containers are much weaker." Meanwhile, James Bottomley, Linux maintainer and former Parallels CTO, wrote: "There's contentions all over the place that containers are not actually as secure as hypervisors. This is not really true. Parallels and Virtuozzo, we've been running secure containers for at least 10 years." To add to the mix, Theo de Raadt, OpenBSD project lead, wrote back in 2007: "You are absolutely deluded, if not stupid, if you think that a worldwide collection of software engineers who can't write operating systems or applications without security holes, can then turn around and suddenly write virtualization layers without security holes."
Who is right? If the experts disagree, how can customers evaluate their claims? If all software has bugs, isn't it all equally unsafe? To help you navigate the debate, this article introduces basic software security concepts so that you can form your own opinion on the subject.
Defining Software Vulnerability
When you lock the doors and windows of your house, you don't want anybody to get inside unless they have a key. Typically, strangers need to ring the doorbell to request entry; then you decide whether to let them in. This - in effect - is your interface. But what if a couple of screws in one of the hinges of your back door were loose? With enough strength and persistence, somebody might be able to break the hinge, push the door ajar, and let themselves in without a key. The loose screws are the vulnerability.
Let's assume that you noticed the loose screws a while back but couldn't be bothered to fix them. In a physical house, it's likely that no one will ever exploit that vulnerability, but this is not the case in the hostile environment of the internet. To stretch the analogy, imagine that your house is in a rough neighborhood: one of the last standing structures in a world overrun by zombies. Zombies are crawling around everywhere looking for food. Fortunately, you are safe inside your house, living off your provisions. You can hear them right now, just outside the walls, scratching and scraping, looking for a way in. All of a sudden, those two loose screws look a bit scarier, don't they?
Software systems have many interfaces. The most important ones separate different privilege domains. Userland applications, such as Firefox, are less privileged than the operating system kernel, Linux. On hypervisor deployments, anything inside a virtual machine is less privileged than the hypervisor; for example, Linux is less privileged than Xen. These interfaces are called the attack surface, because they separate potentially malicious code from the software in charge of running the system, such as Linux and Xen. These are the interfaces that malevolent attackers try to exploit. In the house analogy, inside is a higher privilege domain than outside; the doors and windows are the attack surface.
When running containers, the most important attack surface is the interface that separates the Linux kernel from Docker containers. It is usually called the syscall interface, because syscalls are what programs use to talk to the kernel. Linux offers a set of namespaces to create the illusion that each container is the only one running on the system. Namespaces reduce the scope of the syscalls available to a container, and as such they are key to security. Syscalls are exposed to applications as nice little C functions by a core system library named libc. libc makes syscalls to implement its functionality and provide POSIX compatibility (POSIX stands for "Portable Operating System Interface for Computer Environments", a standard that goes back to the golden era of the large UNIX systems of the '80s). This compatibility allows the same application to run largely unmodified on Linux, FreeBSD, macOS, and other POSIX systems.
When running virtual machines, the attack surface is the interface exposed by the hypervisor, which varies depending on the implementation. In the case of KVM, it consists of the set of virtual hardware devices available within the VM. In the case of Xen PV guests, it consists of a set of hypercalls.
Back at the house, you managed to leave unnoticed just in time before zombies stormed the place. You really should have fixed those screws the first time you saw them. Darting from shadow to shadow, you desperately look for another secure location while remaining undetected. Up ahead you spot a couple of buildings: a large condo on the left and a small house on the right. What luck! But which one do you choose to take shelter in? The condo has more than a dozen doors and windows, while the house has just one door and five windows. As a zombie survivor, you know that the smaller house is easier to defend. It has a smaller... attack surface.
A large attack surface is harder to secure than a small one. More doors mean more hinges. In software, it is hard to measure the precise size of an interface, but it is undeniable that the syscall interface is large, even with namespaces: it is an order of magnitude larger than any hypervisor interface. This translates into more security vulnerabilities. All software has security vulnerabilities, but some programs have more than others. The POSIX interface accumulates more vulnerabilities per year than hypervisor interfaces do. The last time somebody made the comparison, Xen PV guests had no privilege escalation vulnerabilities in the previous 12 months, while Linux containers had 9. Unsurprisingly, the difference tends to be proportional to the difference in size of the attack surfaces.
You walk up to the porch of the smaller house. You are reaching for the handle when you notice a noise coming from inside. Playing it safe, you peek through the window: the living room is crowded with zombies. You step away immediately and head toward the larger building. This time you have no choice. After a careful sweep, it proves to be empty. It even has a gym. But what now? How can you secure a place with so many openings? You could try to bar and bolt them all; you certainly don't need so many ways in and out. But what if you make a mistake barring a door or a window? One would be enough. What if a board is not as sturdy as you thought?
Blocking entrances is a difficult job, one that needs to be performed flawlessly. It is the equivalent of using SELinux and seccomp to reduce or close off access to syscalls. Fewer syscalls mean a smaller attack surface exposed to potentially malicious applications and containers. Some hypervisors support similar technologies; for example, Xen comes with XSM, which can be used to control access to hypercalls. The issue with these techniques is that they are hard to use, and a small configuration mistake can carry a very high price. In the case of Docker and seccomp, it is easy to block the syscalls that are so uncommon they are effectively unused, but going further requires specific knowledge of what is running inside the container. What about an application update that starts using a new syscall? Can you afford a security configuration mistake that breaks a running production application?
After a few hard days of work, you managed to block all entrances to the building but one, which has a functioning lockable door. You make regular rounds to inspect the security perimeter. Learning from your past mistake, when you see a loose screw you fix it immediately.
A key aspect of software security is updating vulnerable systems as quickly as possible, to shorten the period during which software defenses are at their weakest. Different projects have different disclosure policies. Some use "full disclosure": they release all the information they have about a vulnerability to the public as quickly as possible. The idea is that this way attackers and system administrators stand on equal footing in the fight for security. Other projects use "responsible disclosure": they evaluate and fix vulnerabilities privately, pre-disclose information to a limited number of trusted users, and prepare software updates so that when a vulnerability is publicly announced, the fix is immediately ready. Even with the best security patching processes, there is still a window of time during which systems are vulnerable.
You look around your new building knowing that you have done everything you could, but you still feel unsafe. The place is just too big; you don't need so many rooms all to yourself, and all that empty space unsettles you anyway. You decide to retreat to a smaller, more defensible area within the building. The gym you saw earlier has everything you need and only one door. It is perfect; you could feel safe there. You constantly check the entrance to the gym, but you also keep an eye on the other doors and windows facing outside. It's starting to feel like home.
Setting up two security perimeters is an example of defense in depth. Many of the techniques mentioned before, such as SELinux and seccomp, can be stacked on top of each other. Another example of defense in depth is running containers inside virtual machines. In that case, the syscall interface and Linux namespaces are only the first attack surface exposed to malicious workloads. An attacker who manages to penetrate it could damage all the other containers running inside the same virtual machine, but would not be able to access anything outside of it. To take over the whole system, the attacker would also have to break through the Xen hypercall interface or the KVM virtual hardware interface, depending on the hypervisor.
Going back to the questions at the beginning of this article, it should now be clearer that vulnerabilities are inevitable, but that not all software is equally insecure. Security is not an on/off switch; it is a spectrum, and it is more productive to talk about security risk. Some software solutions carry a greater risk of being broken into than others because, on average, they present more vulnerabilities over the same period of time. In addition, the process a project uses to deal with vulnerabilities, including its public announcement policy, has a direct impact on how long those vulnerabilities remain exploitable. The differences can be dramatic.
The risk of vulnerabilities can be reduced but never eliminated. Decreasing security risk is a difficult business, especially when the attack surface is large to begin with. Many of the software interfaces in use today were designed at a time when performance and convenience were the foremost priorities; security was only retrofitted into them later.
Users should not have to deal with multiple layers of complex techniques to reduce the attack surface of ill-suited interfaces to acceptable levels. Because there is always some probability that your system will suffer a cataclysmic security event, it is rational to adopt simplicity as a vital software design principle and minimize the risk. After all, with zombies as with software, doors and windows are a threat to your life.