In information security, "extrusion" is the situation where a threat agent manages to exfiltrate data without authorization from, for example, a computer belonging to a competing organization. Traditionally, intrusion is prevented by ACLs in firewalls, Intrusion Detection Systems (IDS) and host-based mechanisms such as antivirus solutions. An IDS may be anything from an active honeypot to a passive or active solution such as the open source project Snort.
Note that, in practice, extrusion may be detected by many of the same methods as intrusion, as Bejtlich explains in the introduction to Extrusion Detection:
Intrusion detection is defined as the process of identifying unauthorized activity by inspecting inbound network traffic. Extrusion detection is the process of identifying unauthorized traffic by inspecting outbound network traffic.
— [Bejtlich - Extrusion Detection]
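Bejtlich's distinction boils down to which direction of traffic a sensor inspects. As a minimal illustration (the internal address range and the labels are assumptions for this sketch, not taken from Bejtlich), a flow classifier might look like:

```python
import ipaddress

# Hypothetical internal address space; adjust to the organization's ranges.
INTERNAL = ipaddress.ip_network("10.0.0.0/8")

def classify_flow(src_ip: str, dst_ip: str) -> str:
    """Label a flow for intrusion vs. extrusion inspection."""
    src_internal = ipaddress.ip_address(src_ip) in INTERNAL
    dst_internal = ipaddress.ip_address(dst_ip) in INTERNAL
    if src_internal and not dst_internal:
        return "outbound"   # candidate for extrusion detection
    if dst_internal and not src_internal:
        return "inbound"    # candidate for intrusion detection
    return "internal" if src_internal else "external"

print(classify_flow("10.1.2.3", "198.51.100.7"))  # outbound
```

In practice the same sensor often feeds both intrusion and extrusion analysis; only the direction of the filter differs.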
In addition to Bejtlich's definition, this article will include the important aspect of host monitoring, since network analysis alone will hardly detect sophisticated attacks. Host monitoring correlated with network data and analysis has become increasingly important in recent years as new extrusion vectors have been introduced.
This article will use two categories of intrusion: human intrusion and automated machine intrusion. The extrusion discussed above may correspondingly be characterized as automated machine extrusion. There is also a third, more complex category which will not be discussed further in this article: the human insider. While human intrusion very much includes sociological and psychological aspects, automated machine intrusion revolves around malicious code performing some kind of action on behalf of the threat agent.
The article will assume that the organization has realized that human operators complementing the automated intrusion detection, a so-called Computer Security Incident Response Team (CSIRT), are a necessary measure.
Part one of the article, the background, covers basic definitions and operationalizes certain expressions and statements that will be used in part 2: Extrusion.
Part 1. Background
In information security communities, threat agents are often categorized into three main groups. The grouping reflects the agent's resources, motivation and capacity [Down with APT?]. To fully understand what extrusion is and what it can do to an organization's network, it is necessary to look at threats in relation to vulnerabilities and risk.
The crime category is often reflected by its motivation: economic and/or personal gain. The crime threat agent has limited resources and one to a few involved individuals. An example of the crime category is the Fannie Mae incident [SC - Fannie Mae], where a disgruntled computer engineer planted a logic bomb with the intention of destroying 4,000 company servers.
Organized crime is much like the next category, the state threat agent, but differs in one important way: it is most often driven by the motivation of economic profit. One of the more recent examples of organized crime is the malicious crimeware kit Zeus, for which the Federal Bureau of Investigation recently made several arrests [FBI - Zeus arrests]. Zeus, which in its fusion with SpyEye recently hit Norwegian banks [NRK SpyEye], is characterized by a group of threat agents working together in a systematic manner, exploiting weaknesses in online banking systems. The persistence of the threat agent is low (in regard to targeted attacks, e.g. against a single organization) since the target group is usually very large.
The state category distinguishes itself from the other categories in all three aspects. Threat agents related to nation states are often directly or indirectly connected to military intelligence services, which gives the agent near-unlimited resources in both funding and personnel, a great recipe for a high level of sophistication. Further, state-sponsored threat agents have a different intention than ordinary criminals: they are motivated by political gain and are thereby, in most situations, both motivated and persistent. State threat agents were labeled the Advanced Persistent Threat in the recent worm attack dubbed Stuxnet [Symantec Stuxnet], which is believed to have targeted the Iranian Bushehr nuclear power plant.
Exposure and vulnerabilities
As mentioned in the introduction, this article focuses on automated machine extrusion. As complexity became an accepted fact, especially during the last decade, most security researchers came to accept that a computer system will never be completely bug free, and thus never completely secure. A theoretical root of this insight is Turing's halting problem, presented already in 1936.
Given a description of a program, decide whether the program finishes running or continues to run forever. This is equivalent to the problem of deciding, given a program and an input, whether the program will eventually halt when run with that input, or will run forever.
— SA - Turing: Halting problem and Audestad: Computerized society
As technology evolves and new services are offered in an organizational computer network, more complexity is introduced. Thus, the more services a system offers, the larger its attack surface. Exposure is a very important aspect in regard to vulnerabilities: the more an organization exposes itself, e.g. on the internet, the more vulnerabilities become remotely exploitable. Intrusions and extrusions that were very difficult to exploit in the 1990s may pose a much higher risk today. This also brings into play the fact that many systems still in use were developed in the 1980s and 1990s, with no security mechanisms in mind at all.
Every organization is vulnerable
Throughout this article, a vulnerability is defined as "where an organization is susceptible to attack".
Which vulnerabilities the threat agent will utilize against an organization depends on much the same criteria as mentioned in the threat categories: resources, motivation and capacity. The crime agent would probably use his or her knowledge of someone or something for personal gain, e.g. a logic bomb in systems where the individual has access. While the crime threat agent probably does not have sufficient capacity and resources, he or she is highly motivated by a personal cause. Such individuals usually do not do significant damage in the long run.
In regard to organized crime, a trend is to utilize what third-party agents sell, in other words exploit kits, private exploits and the like. Organized criminals seldom target small groups alone, making organizational computer networks dependent on the robustness [Audestad - Computerized society] of the computer network. Practically speaking, this means that communication and information sharing in information security communities is a key factor.
Part 2. Extrusion
Ten years ago (or even earlier), when many current organizational networks and much of their software were developed, there were only a couple of vectors into the network and maybe even fewer out of it. While many experts focus on implementing procedures in accordance with standards such as ISO 27001 and BS, the solution is often more complex than implementing a standard when an APT actor is involved. This part will focus on the problem of extrusion using the following extrusion vectors:
- Mobile networks
- Personal mobile networks
- Intranet/internet through e.g. IM, client-side attacks and email
- Human transfer through e.g. USB, iPod or smart phone
- Egress and ingress redefined
In this section we will take a closer look at possible vectors into and out of computer systems. The focus will be on extrusion, but intrusion vectors are also important for understanding why many current security concepts are, to a certain degree, faulty.
Just following the rules
The art of writing malware is to just follow the rules
Public Key Infrastructure (PKI) based authentication and encryption is a method of encrypting data using a private key and a public certificate [TLS RFC]. Most popular websites, such as Facebook and Google, make use of SSL/TLS. In large-scale networks, controlling such encrypted traffic at the network level is a problem yet to be practically solved.
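One partial, network-level heuristic is to measure how random outbound payload bytes look: encrypted or compressed data approaches 8 bits of Shannon entropy per byte. This is only a rough triage signal, sketched here under assumptions (any alerting threshold would have to be tuned, and legitimate TLS traffic of course scores high as well):

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte of the payload; encrypted or compressed data
    approaches the maximum of 8.0."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(shannon_entropy(b"GET /index.html HTTP/1.1\r\n" * 20))  # well below 8
print(shannon_entropy(os.urandom(4096)))                      # close to 8
```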
As seen in figure 1, there are basically two infiltration vectors into a computer network.
The first, and possibly most complex, infiltration vector to protect against is the human aspect. Humans act as carriers of an endless range of media such as USB sticks, smart phones and MP3 players; in this article this is referred to as human contagion. There is a lot of research on the topic of social and biological contagion, such as [Yahoo - Contagion], though it is not the focus of this article.
The human contagion problem will probably never be solved because of the complex nature of the human being; in other words, there will always be a way into a computer network. This compares well to Turing's halting problem in regard to bugs in computer systems. Computer policies, logging and an incident handling team, a CSIRT, are key aspects of handling the threat of human contagion.
External connections are technological and thereby less complex than the human contagion problem. The human mind may be seen as a structure divided into four layers: the biological, cognitive, rational and social bands [The Multitasking Mind]. Combined, these bands decide which actions we take. The human mind has not yet been, and will probably never be, completely mapped due to its many variables and complexity. Thus, it is a very hard task to safely predict how users will use computers and media.
Today, external connections are the standard way of infiltrating a computer network. There are several reasons for this. From the late 1940s to the 1980s, during the Cold War, a tactic used by foreign intelligence agencies was to have spies infiltrate and sabotage foreign states. Performing physical, illegal actions represented a huge risk to the powers involved, such as the USA and the Soviet Union, and led to diplomatic crises in some situations. A computer network makes intelligence operations possible without the same risks as during the Cold War.
Networked versus Host monitoring
Basically, there are two ways to monitor computer usage in an organization. The first is looking at what happens inside the host, namely host monitoring. The second, simplest and most generic form is deploying strategic and tactical sensors that monitor the organization's network for signs of abnormal network behaviour. Both network and host monitoring should be applied to the organization's network, due to the nature of extrusion vectors.
A Host-based Intrusion Detection System (HIDS) is in many ways similar to an antivirus application, but serves the purpose of giving essential in-host information to the CSIRT. The host monitoring application would preferably provide session data, data about suspicious system calls such as CreateRemoteThread calls to the Windows API, illegal user actions in regard to organization policies, and so on. This provides information that would be impossible to acquire through network sensors alone; consider the encryption problem, for instance. Further, it enables more advanced analysis. As we will see in the last part of the article, in such scenarios it is an advantage to deploy host monitoring to important, strategic individuals or high-value information servers. If HIDS were deployed to all workstations of thousands of users, processing all the data would become a problem.
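As a sketch of the kind of in-host signal meant here, consider filtering a hypothetical host-agent event stream for suspicious API calls. The JSON schema and the watch list below are illustrative assumptions; real agents (e.g. Sysmon, whose Event ID 8 covers CreateRemoteThread) use their own formats:

```python
import json

# Hypothetical watch list of Windows API calls often abused for code injection.
SUSPICIOUS_CALLS = {"CreateRemoteThread", "WriteProcessMemory", "SetWindowsHookEx"}

def flag_events(lines):
    """Yield host-agent events whose API call is on the watch list."""
    for line in lines:
        event = json.loads(line)
        if event.get("api_call") in SUSPICIOUS_CALLS:
            yield event

sample = [
    '{"host": "ws-042", "api_call": "CreateRemoteThread", "pid": 1337}',
    '{"host": "ws-042", "api_call": "ReadFile", "pid": 1337}',
]
print(list(flag_events(sample)))  # only the CreateRemoteThread event
```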
A Network Intrusion Detection System (NIDS) has an advantage over host monitoring: it is typically a stand-alone server connected at a strategic position in the network, recording and analyzing all the network data passing by. Deploying a network sensor is quite simple compared to a HIDS, which requires installation on each workstation and server. A NIDS will typically record session data, selected parts of full content streams, temporary storage of files for analysis, and so on. Even though a network architecture alone may seem satisfactory in regard to information security, it is not. When a threat agent attacks a workstation in the organizational network using e.g. a client-side attack through a legitimate website, this will be nearly impossible to detect if there are no indicators in advance of the attack. Usually there is a very limited number of such indicators. It gets even worse if the employee uses a portable computer and gets attacked on a third-party network, such as the user's home network.
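The session data a network sensor records can be thought of as per-flow counters keyed on the connection tuple. A minimal sketch (the field names and tuple layout are assumptions for illustration):

```python
from collections import defaultdict

def build_sessions(packets):
    """Aggregate (src, dst, dst_port, proto, size) tuples into per-session
    packet and byte counters, i.e. the kind of session data a sensor keeps."""
    sessions = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for src, dst, dport, proto, size in packets:
        key = (src, dst, dport, proto)
        sessions[key]["packets"] += 1
        sessions[key]["bytes"] += size
    return dict(sessions)

packets = [
    ("10.0.0.5", "203.0.113.9", 443, "tcp", 1500),
    ("10.0.0.5", "203.0.113.9", 443, "tcp", 400),
]
print(build_sessions(packets))
```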
How to prevent extrusion
What should be obvious at this point is that an organization should never rely solely on intrusion detection (meaning detection of malicious traffic in network ingress vectors), since there is a practically unlimited number of infiltration and extrusion vectors, such as human-transferable ones, and, as we have noted earlier, much malware simply follows the rules of the network, just like legitimate services.
Preventing intrusions (or extrusions) is still an important part of handling computer incidents, though no longer the only factor. Due to the impossible task of foreseeing all future events, it is also important to balance preventive measures with detecting and responding to security-threatening events in the system. In regard to detecting and responding to such events, Bejtlich proposes four principles for a so-called defensible network architecture.
For starters, the network should be monitored. In practical terms this means tactical and strategic deployment of network and host sensors, with logs collected centrally. There are basically four forms of network evidence: full content, session, statistical and alert data. Note that in large organizations, legal aspects are often a challenge when it comes to collecting full content data such as email and legitimate network traffic. In Norway such activity is regulated by the Personal Data Act [Personal Data Act]. In addition to Bejtlich's rather network-centric approach, this article strongly encourages the use of host-based detection mechanisms; a Host-based Intrusion Detection System should therefore be deployed on as many nodes as possible.

A defensible network infrastructure involves building a computer network that can be defended. This includes not only the CSIRT and its stand-alone sensors, but also the operations teams making it possible to insert a sensor at any strategic point in the infrastructure. The latter means, for example, that routers, switches and similar devices need a span port available in case of a sensor deployment. The operations perspective may sound simple, but in reality large organizations have up to thousands of switches, routers and firewalls. In addition to the large amount of configuration that needs to be done, the CSIRT often operates in a separate location and part of the organization from the operations teams. This introduces challenges in communication and synchronization. The operations teams usually have quite different priorities for the computer networks (like keeping them operational) than the CSIRT, which focuses on security.
A bottleneck, especially in a legal sense, and the next principle, is to enforce control in the organizational network. In short, this is introduced through access control. A controversial aspect of controlling network traffic is maintaining access to encrypted traffic; there are basically two effective measures to achieve such control: proxies and/or host-based monitoring. The proxy technique is quite controversial since it works like a man-in-the-middle attack and places extensive trust in the system administrator. An example of the latter is the encryption used by web-based banks. A proxy will only work with PKI or asymmetric cryptography, not symmetric cryptography where keys are exchanged beforehand. On the other hand, this gives configuration control, making it possible to separate such sessions from legitimate encrypted sessions.
Configuration control is also important when it comes to a minimized system. When hardening operating systems, disabling network services and so on, the attack surface gets smaller. The attack surface is defined in terms of intrusion and extrusion vectors.
To avoid drive-by or generic attacks not really directed at the organization, and to avoid creating a huge workload for the company analysts, it is essential to keep the system current and minimized. A caveat to this principle is that it will not be possible to avoid every vulnerability, such as zero-days. In large-scale systems, patches may also be applied late, to make sure they do not interrupt service in critical production systems. Sophisticated attackers such as organized criminals and state threat agents are typically well aware of this, creating challenges for the CSIRT.
Since the new approach is not only to prevent but also to respond to incidents as they occur, reacting to incidents implies two important focus areas that we should never forget: who the threat agent is, and which vectors he will hit.
In conventional warfare, as taught to cadets at war academies, a method named the Observe, Orient, Decide and Act (OODA) loop is utilized.
OODA was developed by Colonel John Boyd as a result of his many successful fighter aircraft campaigns during the Korean War [FFOD].
This is probably how an advanced attacker, such as a state, thinks and acts during an attack on an organization. In conventional warfare it is all about situational awareness and controlling the battle.
A common mistake when an organization is hit is to give away the element of surprise by taking active measures against the attacker. Attacking an organization involves a certain risk of not knowing when you are exposed: the CSIRT may be following your every move, waiting for a mistake that may reveal the attacking organization's identity and characteristics. An active measure may be a stop operation, e.g. shutting down a compromised host. At this point it is important to know the organization's valuable systems and information. Does a compromised host really have an impact on operations, compared to the possibility of stopping the attackers for good?
If we move OODA into cyberspace, we would start by observing the current state of information security threats against the organization. The four actions listed next are generally extremely effective and are probably what you would come up with in the phase of reacting to an incident:
- Look at the anomalies, not what you already know
- Detect and analyze PDF documents
- Create trust with the users and use them as sensors
- Focus deeper analysis on valued users [Cisco CSIRT on APT]
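The first point, looking at anomalies rather than known signatures, can be illustrated with a crude outbound-volume baseline. A plain z-score over per-host byte counts is only a sketch under assumed numbers; real deployments need more robust statistics and richer features:

```python
import statistics

def outbound_anomalies(byte_counts, threshold=3.0):
    """Return hosts whose outbound byte count deviates strongly from the
    population baseline (plain z-score; purely illustrative)."""
    values = list(byte_counts.values())
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [host for host, b in byte_counts.items()
            if (b - mean) / stdev > threshold]

counts = {f"ws-{i:03d}": 1_000 for i in range(10)}  # ordinary workstations
counts["ws-099"] = 950_000                          # one host sending far more
print(outbound_anomalies(counts))  # ['ws-099']
```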
Detecting and analyzing PDF documents. Especially since 2009-2010, the focus on detecting and analyzing PDF documents has increased in the security communities. The reasons the PDF format is so popular amongst attackers are that the standard is very complex [Adobe Reference] and that PDF readers are widely deployed and run on most platforms. This makes PDF documents easy to exploit and spread.
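In the spirit of lightweight PDF triage tools, a first pass can simply count risky name objects from the PDF standard in the raw bytes. This is a deliberately naive sketch; obfuscated documents hide these names and require a real parser:

```python
# Name objects from the PDF standard that often appear in malicious files.
RISKY_NAMES = [b"/JavaScript", b"/OpenAction", b"/AA", b"/Launch", b"/EmbeddedFile"]

def triage_pdf(data: bytes):
    """Count occurrences of risky PDF name objects in raw file bytes."""
    return {name.decode(): data.count(name)
            for name in RISKY_NAMES if data.count(name)}

sample = b"%PDF-1.4 1 0 obj << /OpenAction 2 0 R /JavaScript (app.alert(1)) >>"
print(triage_pdf(sample))  # {'/JavaScript': 1, '/OpenAction': 1}
```

A non-empty result does not prove the file is malicious; it only marks the document for deeper analysis by the CSIRT.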
Creating trust with the users and using them as sensors. Creating trust with users is largely about the CSIRT having social and professional networks throughout the organization. This makes it easier for users to know where to turn when, for example, they notice odd behavior on a computer. In addition, some thought should go into making it easy for users to contact the CSIRT in case of an incident, such as when a user accesses a malicious web page.
Focusing deeper analysis on valued users. Earlier this year, Cisco's CSIRT published some of what they do to detect APTs. One of their bullet points was to focus on the users most likely to get attacked. This seems like a very reasonable starting point. A threat agent performing a targeted attack on the organization is probably going to use a lot of social engineering techniques in advance to map the organization. Once again, the vectors for social engineering are a mixed set of new and old ones. The reconnaissance phase of an attack may start with traditional (but still very efficient) spear phishing [FBI - Spear Phishing], resulting in the collection of personal information about company users (think address books). The next phase of a cutting-edge reconnaissance attack is more difficult to detect, though. Social networks such as Facebook, Orkut, MySpace and (especially) LinkedIn are a very dangerous social engineering vector: they are nearly impossible for the organization's CSIRT to control, so you are completely dependent on the users as sensors.
Two examples come to mind when looking at Facebook as a social engineering vector. The first is an incident where Israeli soldiers fell for a fake Facebook profile, compromising sensitive information [Der Spiegel - The beautiful Facebook friend of the elite soldiers]. The other is a more recent story from Symantec, showing a fatal flaw in the ads API [Symantec - compromised ads API], giving everyone from crime to state threat agents access to almost all Facebook accounts over a period of four years. It would surprise the author if foreign intelligence services have not found and exploited that vulnerability.
Part 3. Conclusions
In this article we have taken a closer look at how the threat situation in cyberspace has changed from conventional intrusion to client-side and more sophisticated attacks. This largely seems to be a result of an increased attack surface, with numerous new vectors both into and out of computer networks. Today, the new extrusion vectors make it difficult to control network and host activity.
WWW, email, social networks, personal mobile networks and other similar vectors, much used by the users themselves, are especially difficult to control by automated means.
What you should have realized through this article is that there will always be a vector into a complex system. Since a system is never completely impenetrable, it is important to have measures in place to handle incidents. The action to take should be selected according to a risk evaluation. Important in that regard is to focus on the values the organization is trying to protect, and on which threat agents threaten which values.
At the end of part 2, we took a closer look at four measures the organization should implement to keep the CSIRT able to enforce control in the network. Keep these measures and Bejtlich's four principles in mind when designing a defensible infrastructure. Even though the technical aspect is important, it is only a small part of information security. Social networks throughout the organization (especially toward operations) and policies are also required for the CSIRT to be able to do its job.
A major problem in information security is the legal aspect. As we have seen, in Norway this is an especially sensitive matter due to the Personal Data Act [Personal Data Act].
There are solutions to the above problems, though they involve commitment by management as well as establishing a committed CSIRT.