Monday, August 26, 2013

Conclusion

Identity-based access controls, and related schemes, are too coarse-grained for
the requirements of modern collaborative systems. Capabilities offer the hope
of fixing what otherwise appears to be a hopeless problem.
Furthermore, capabilities make some security options quite simple that would
otherwise be hopelessly unwieldy.
For those interested in further exploration and experimentation, I would skip
the operating system approach.
The language-based approach offers three viable alternatives: Caja, which
should be in wide use by the time this paper is published; E, which is mature
and functional, but not much used; and Joe-E. Caja and Joe-E share
the advantage that they are based on existing languages (Javascript and Java
respectively) and so do not present a steep learning curve. E, on the other
hand, has a number of interesting features, such as built-in support for writing
distributed systems and an interesting and useful distributed message ordering
paradigm.
For distributed systems, Waterken implements a web-based approach, and,
as mentioned above, E has support at the language level.
In all cases, it makes sense to exploit the natural link between capabilities
and objects by using a language designed to handle capabilities inherently.

Limited Spend

Suppose I have a capability allowing unfettered access to my bank account. I
could hand a capability to Alice, whom I partially trust, that could be used to
spend up to some limit directly from my bank account. Alice could then go
and buy something on my behalf using that capability ... or steal the money
from me. But Alice could not steal or spend any more than the limit I had set.
Furthermore, until Alice does actually spend the money, it remains available to
me in my bank account, so this is not the same as actually giving the money to
her.
Also, if I thought I might change my mind later, I could first wrap the bank
account capability in a revocable capability, and then wrap the revocable capability
in the limited-spend capability. Note that the limited-spend capability
would not need to know whether the account-access capability was revocable or
not.
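To make this concrete, here is a minimal sketch in Python (the names `Account`, `LimitedSpend` and `Revocable` are invented for illustration). It shows the two wrappers composing freely: `LimitedSpend` simply calls `withdraw()` on whatever it wraps, and never needs to know whether that inner capability is revocable.

```python
class Account:
    """The underlying resource: full access to a bank account."""
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return amount


class LimitedSpend:
    """Wraps any capability exposing withdraw(); enforces a spending cap."""
    def __init__(self, inner, limit):
        self._inner = inner
        self._remaining = limit

    def withdraw(self, amount):
        if amount > self._remaining:
            raise PermissionError("spend limit exceeded")
        self._remaining -= amount
        return self._inner.withdraw(amount)


class Revocable:
    """Wraps any capability; forwards calls until revoke() is called."""
    def __init__(self, inner):
        self._inner = inner

    def revoke(self):
        self._inner = None

    def withdraw(self, amount):
        if self._inner is None:
            raise PermissionError("capability revoked")
        return self._inner.withdraw(amount)
```

Because each wrapper only forwards `withdraw()`, the composition `LimitedSpend(Revocable(account), 30)` can do no more than the capability at the bottom of the stack, and the money stays in the account until actually spent.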

Proving User Choice

If we don’t even trust Carol to correctly report the user’s choices in the example
above, we could elaborate this further by having the trusted renderer hand Carol
a capability which proves that the user did indeed click on the capability she is
now handing to Bob, and Bob could decline to act unless he sees that capability.

Oblivious Transport

Suppose Alice wants to tell Bob a secret, but has to do so via an intermediary,
Carol. Alice can hand Carol a capability containing the secret, which has a
method allowing access to the secret, but only if that method is also handed
a second capability. Only Bob and Alice have this second capability. Alternatively,
the second capability can have a method which can unseal the first
one.
Carol can then hand that capability on to Bob, who can then combine it
with the “unsealing” capability to access the data inside.
This may seem like an artificial construction, but consider the case where
Alice and Bob are components of a trusted system, and Carol is untrusted
code running in that system. Combining this idea with the example above, the
capability handed to Carol could also contain data which, when handed to the
trusted HTML renderer, would be made visible to the user, but which Carol
could not herself see. Carol may determine from user actions that this capability
should be used to perform some action on behalf of the user and hand it on to
Bob to do so.
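The sealer/unsealer pattern described above can be sketched as follows (a Python illustration with an invented name, `make_sealer`; note that Python cannot truly enforce the encapsulation a capability-secure language would guarantee, so this is only a model of the idea):

```python
def make_sealer():
    """Create a sealer/unsealer pair. Anyone holding only a sealed box
    cannot read its contents; only the holder of `unseal` can."""
    contents = {}              # private to this closure

    def seal(secret):
        box = object()         # an opaque, unforgeable token
        contents[box] = secret
        return box

    def unseal(box):
        return contents[box]

    return seal, unseal
```

In the scenario above, Alice holds `seal`, Bob holds `unseal`, and Carol is only ever handed the opaque box.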

Allowing Access to Dangerous Operations

Again, thinking about the browser case: in order to render untrusted code safe,
its output must be restricted so that it cannot place “dangerous” HTML
on the web page (for example, <script> tags with arbitrary Javascript). But
it may be that the container wants to allow it to use such HTML once it has been
“blessed” by the container.
In this case, a capability can be used to wrap the blessed HTML. The capability
prevents the untrusted code from modifying its contents (by not providing
a method to do so), but when handed to the safe-HTML-writing capability it
bypasses the HTML safety checks and allows the blessed HTML to be written.
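A sketch of this arrangement (all names invented; the substring check stands in for a real HTML sanitiser, which would be far more thorough):

```python
class BlessedHTML:
    """Opaque wrapper the container creates around vetted HTML.
    Untrusted code can pass it along but has no method to alter it."""
    def __init__(self, html):
        self._html = html


def make_safe_writer(output):
    """Returns a write capability that rejects <script> in plain strings
    but passes container-blessed HTML through unchecked."""
    def write(fragment):
        if isinstance(fragment, BlessedHTML):
            output.append(fragment._html)   # container vouched for it
        elif "<script" in fragment.lower():
            raise ValueError("unsafe HTML rejected")
        else:
            output.append(fragment)
    return write
```

The untrusted code receives only `write` and, perhaps, some `BlessedHTML` objects; it can use the blessed fragments but cannot mint its own.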

Advanced Uses

It has long been held that anything capabilities can do ACLs can also do. This
is not the case, and the most obvious counter example is this: if Alice wants
to give Bob access to some file, say, then in an ACL system all Alice needs to
do is add Bob to the ACL for that file. ACLs cannot prevent Alice from giving
access to Bob.
In a capability system, Alice also needs a capability giving access to Bob
in order to pass him the capability to the file[12]. Furthermore, it must be a
capability whose API allows the passing of other capabilities.
The ramifications of this difference could form the subject matter of a whole
book, but I give some examples here.

Capabilities and the Browser

The browser is an environment that is hard to imagine controlling with
ACLs, but capabilities seem to fit right in. Let’s consider a gadget, for example.
A gadget is, when you get down to it, a piece of Javascript supplied by one
site running in a page supplied by another. From a security point of view this
presents an interesting dilemma: it is very likely that the user has different
levels of trust for the two pieces of code (say, for example, that the enclosing
page is Google Mail and the gadget is provided by god-knows-who) – but from
a traditional security point of view they are indistinguishable – they both run
as the same user and they are both effectively on the same page. Furthermore,
the objects that one might want to protect (contact lists, contents of emails
and so forth) are effectively invisible to the operating system’s access control
mechanisms, and to the browser’s (if only it had any).
The view in a capability world could not be more different. In this case
the gadget is entirely at the mercy of the enclosing page, which can decide in
infinite detail what the gadget has access to and how. What’s more, providing
these detailed capabilities to the gadget is as easy and natural as providing
Javascript objects to it. Indeed, in the case of Caja, at least, that is precisely
how capabilities are implemented: as Javascript objects.
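The way an enclosing page might endow a gadget can be sketched with plain closures (Python here for brevity, though in Caja the same thing is done with Javascript objects; all names are invented):

```python
def make_contacts():
    """The page's contact list, with capabilities of differing power."""
    contacts = {}                    # private state

    def add(name, email):            # full capability, kept by the page
        contacts[name] = email

    def names():                     # narrow capability, safe for a gadget
        return sorted(contacts)

    return add, names
```

The page keeps `add` for itself and hands only `names` to the gadget: the gadget can display who is in the list but can neither read email addresses nor modify anything.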

Machines

The third line we can draw is between machines. Capabilities are then manifested
as network objects. One simple way to do this is to have each capability
correspond to a URL whose path is a large random number – this gives the
required unforgeability property, so long as the URL is kept secret. Waterken[8]
is an example of this, as is the E programming language, in which local objects
can actually be references to objects on remote systems.
If keeping the URL secret is not possible (or desirable), then combining
presentation of the URL with authentication would restore unforgeability.
HP’s eSpeak[2] system, which combined capabilities with public keys – in order
to exercise a capability you had to prove both possession of the corresponding
private key and of the capability itself – is an example of this kind of system.
This mechanism is also often used in an ad-hoc way – for example,
the confirmation mails that Mailman[3] (and other list managers) send are
effectively capabilities. Likewise, a common way to prevent cross-site request
forgery is to include a field in the form with a random number in it that the
server can check. When an attacker attempts to forge the form submission this
number will be missing or incorrect and so the attempt will fail. This number
is a capability.
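The URL-as-capability idea can be sketched as follows (hypothetical names; Waterken's actual "web-key" scheme differs in detail):

```python
import secrets

def make_capability_url(base, registry, resource):
    """Mint an unguessable URL that stands for a specific right."""
    token = secrets.token_urlsafe(32)   # 32 random bytes: unguessable in practice
    registry[token] = resource
    return f"{base}/{token}"

def dereference(url, registry):
    """Possession of the URL is the authority; no further check is made."""
    token = url.rsplit("/", 1)[1]
    return registry.get(token)
```

Anyone who learns the URL holds the capability, which is exactly why such URLs must be kept secret (or combined with authentication, as discussed above).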

Processes

The second approach is to draw the line between processes. In this case the
capabilities are typically managed by the operating system, though it is possible
to imagine them being managed by a privileged process.
Examples of the operating system approach are Keykos[9], Amoeba[17],
EROS[14] and Coyotos[13]. Rather than access to the “rest of the world” being
mediated by capability objects, such access is directly through a capability.
That is, in such systems, capabilities are natively supported by the system.
Plash[5] and userv[7] could be considered to be examples of the second type
of system, where capabilities are managed by a privileged process.
Once more, authentication should not be needed in these systems when capabilities
are controlled by the operating system, but may be needed where
capabilities are instead managed by a privileged process.

Modules

In the first, we choose to draw the boundary between individual code units
(for example, objects or functions). We create a programming language that
enforces capability discipline on programs written in that language.
What would such a language look like? At first, it seemed it would have
to be quite esoteric, but as our understanding of capability languages has improved,
we have realised that capability languages are really only slightly different
from standard object-oriented languages. Typically all they change are some details:
no globals; opacity and unforgeability of objects (which then become the
capabilities themselves); and access to anything outside the program itself mediated
through capabilities.
This means we have an interesting design choice available: rather than inventing
a new capability language, we can modify an existing one. The former
choice is seen in E[15] and the latter in Squeak-E (Smalltalk), Joe-E[11] (Java)
and Caja[1] (Javascript).
Because of the natural correspondence, mentioned above, between objects
and capabilities, the concern often expressed over managing the hundreds or
thousands of capabilities a fully-featured program might need is dealt with quite
simply: programmers deal with thousands of objects quite easily and naturally
– the fact that these objects are now capabilities does not change the ease with
which they can be handled.
In this case, because capabilities are directly controlled by the language,
authentication should not be required to preserve unforgeability, but may be
required in order to restrict delegation, depending on the implementation.

Implementing Capabilities

So now that we know what a capability is, how do we go about creating one?
There appear to be three fundamental approaches – and one could argue that
these approaches derive from a choice about where to draw the boundaries
between different systems.

Authentication and Restricting Delegation

In the purest form of capabilities, all that is required to exercise a capability
is its possession. This means that, in general, it is entirely possible for the
possessor of a capability to delegate it by handing a copy to someone else.
This gives rise to the capability community’s fondness for the Granovetter
diagram, which shows a capability for some action being handed from one party
to another down an existing capability (remember that in a capability system
two entities can only communicate if they have a capability allowing them to
do so).
However, it is sometimes desirable to authenticate the wielder of the capability,
rather than relying on mere possession. There are two reasons you might
want to do this:
• To restrict delegation of capabilities.
• To avoid having to keep capabilities secret, where secrecy is a necessary
prerequisite to unforgeability.
Note, however, that it is almost universally impossible to prevent delegation
if the possessor of a capability is determined to delegate. So long as they have
any kind of communication channel with the delegee, they can, at the least,
exercise the capability by proxy on behalf of the delegee. And, of course, it
turns out to be very hard indeed to eliminate communications channels between
entities, because of covert channels.

Access Control Lists

Many people like to think that capabilities and ACLs are in opposition – but
this is not entirely so. ACLs can be used to decide whether to grant capabilities
or not. In practice, this would mean that either the system would give all
capabilities a user has to each program running on their behalf, or it would be
prepared to hand them out on demand as the program requested them (the former
technique would look more like a traditional capability-based programming
model and the latter more like an ACBR one).
Note that this technique gives no advantage, of itself, over standard ACBR,
unless the “unit of mistrust”, so to speak, is smaller than the program itself.
For example, if the program uses a capability-secure language, then it could
segregate the use of capabilities internally – so even though the program as a whole has access to all of the capabilities, each module only has those it needs to
do its job. This may help when an attacker figures out how to make the program
behave in unintended ways – in a pure ACBR system, the subverted module
would have access to all of the permissions the whole program had access to.
In a well-designed capability version of the program, the module would only
have access to a small subset of those permissions (in the form of capabilities,
of course).
It is also important to remember that ACLs are applied at the time the
right is exercised, whereas once a capability is handed out it does not generally
change, so the simplest implementation of an ACL-controlled capability system
is not equivalent to the direct use of ACLs – if the ACLs change, the capability
system may not reflect those changes.
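The difference can be sketched as two ways of minting a capability from an ACL (illustrative names only): the first checks the ACL once at grant time, the second re-checks on every use, so revocation takes effect.

```python
def grant_from_acl(acl, user, resource):
    """Mint a capability only if the ACL allows it *now* (grant-time check)."""
    if user not in acl:
        raise PermissionError("ACL denies access")
    return lambda: resource        # from here on, possession is everything

def grant_live(acl, user, resource):
    """Alternative: re-check the ACL on every use."""
    def read():
        if user not in acl:
            raise PermissionError("ACL now denies access")
        return resource
    return read
```

After the user is removed from the ACL, a capability made by `grant_from_acl` keeps working while one made by `grant_live` stops; which behaviour is wanted is a design decision, not something capability security dictates.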

Extreme Limitation

This could be thought of as a subset of “some combinations are inherently safe”,
where the number of safe combinations is one.
In some environments it makes sense to limit what the untrusted code can
do to a very small set of capabilities. For example, web gadgets (a.k.a. widgets)
could be limited to
• Writing a safe subset of HTML to a designated part of the web page
displayed in a browser.
• Receiving user input through forms on the web page.
• Receiving mouse clicks from the user when they are within the bounds of
the part of the page allocated to the gadget.
• Making a network connection to their originating site.
This would provide useful functionality whilst effectively insulating the user
from any bad things the gadget could do (such as, for example, navigating to a
phishing site, or attempting to steal the user’s login cookies for the embedding
page).
Google’s Caja project[1] can be used to do exactly this, amongst other things.

Monitors

In this scheme, rather than granting safe combinations of capabilities in
advance, capabilities are granted (or revoked) according to what the program
has done so far. So, for example, a program might start up being allowed to
access both the filesystem and the network, but once it has done one, it is
then forbidden from doing the other.
This can also be used to enforce Chinese Walls – once the user has accessed
files relating to client A, they can no longer access files relating to client B, and
vice versa.
The best known example of this kind of system is SE-Linux[6].
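A toy monitor enforcing the filesystem-or-network rule above might look like this (all names invented; SE-Linux's actual policy machinery is, of course, far richer):

```python
def make_monitor():
    """Grants both capabilities up front, but once the program exercises
    one of them, the monitor forbids the other (filesystem XOR network)."""
    state = {"used": None}

    def guard(name, action):
        def use(*args):
            if state["used"] not in (None, name):
                raise PermissionError(f"{name} forbidden after using {state['used']}")
            state["used"] = name
            return action(*args)
        return use

    read_file = guard("filesystem", lambda path: f"contents of {path}")
    send = guard("network", lambda host, data: f"sent to {host}")
    return read_file, send
```

The same `guard` shape, keyed on a client identifier instead of "filesystem"/"network", would enforce the Chinese Wall example.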

Some Combinations are Inherently Safe

A completely different approach is also possible, and has been pioneered by
Bitfrost[10], the security system for the XO. In this approach the idea is that some
operations are safe so long as they are not combined with others.
For example, it is safe to read files if you cannot communicate on the network.
Conversely it is safe to communicate on the network if you cannot read local
files. The threat model here is that the user’s privacy might be compromised
if both operations are allowed, but either one in isolation cannot expose their
secrets.
In the Bitfrost model a program can ask for a set of capabilities and if they
are a “safe” combination the system will grant them without user intervention.
Alternatively, the user can be presented with a set of mutually exclusive safe
options (e.g. “allow file access” vs. “allow network access”) and can choose the
appropriate one for the task at hand.
Although Bitfrost is the more widely known implementation, this approach
was actually pioneered by the experimental CapDesk[16] system.

Designation is Authorisation

Consider, for example, a word processor program – it needs to read the file
the user wants to edit. How does it get the capability to that file?
One answer is to have the system in possession of all-powerful capabilities:
for example, the capability to manufacture a read or write capability for any of
the user’s files. The system has a component which can interact with the user
to choose a file (in other words, a file open dialog). The system endows the
word processor with a capability to invoke this trusted component. When the
user chooses “File/Open” the word processor invokes the file open dialog, the
user chooses the file they wish to edit, the dialog returns a read capability to
the word processor which can then display the contents to the user.
Contrast this with the situation when using ACBR – from the user’s point
of view everything looks just the same: they choose File/Open, a dialog comes
up, they choose a file, the word processor displays the file. But under the hood
something quite different has happened. The dialog returned the name of the
file, the word processor asked the OS to open it, the OS checked whether the
user was allowed to, and if so, opened it.
But suppose the word processor had ignored what the user chose and just
used a completely different file? Of course, this would work just fine under
ACBR. Indeed, the word processor wouldn’t have to ask the user to choose a
file at all.
But in a capability system the situation is entirely different – without the
user’s intervention, the word processor cannot read any files at all. Even with
the user’s help it can read just the file the user chose and no other.
A similar example comes from the Unix shell. Consider the standard Unix
cat command – it takes an argument which is the name of a file to read. In a
capability system, this would be translated into a capability to read just that
one file. The shell would do this on behalf of the user because the user had
designated that file by typing its name. Of course, the shell would have to have
access to the corresponding capability (or a factory that could make it) – which
would correspond to the ACBR notion of the user having permission to read
the file. No further checking of permissions would be required.
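The trusted file-open dialog (often called a "powerbox" in the capability literature) can be sketched as follows; `choose` stands in for the user's interaction with the dialog, and all names are invented:

```python
def make_powerbox(user_files):
    """Trusted component holding the all-powerful capability (the user's
    files). It interacts with the user and returns a read capability
    for just the file the user designated."""
    def open_dialog(choose):
        name = choose(sorted(user_files))       # the user picks a file
        return lambda: user_files[name]          # read capability: this file only
    return open_dialog
```

The word processor is endowed only with `open_dialog`; whatever it does, it can never read a file the user did not designate.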

Managing Capabilities

Let us answer the second question first. The first thing to understand is that capability
security has nothing to say, of itself, about how you create capabilities or to whom
you give them – other than that both these operations should be deliberate.
So capability security does not inherently dictate any particular way of managing
capabilities – it is up to the designer to choose an appropriate mechanism.
In the following sections, I describe a few ways to manage capabilities. In
practice, capability systems tend to use some combination of these and other
ideas.

Access Controls Based on Roles

Of course, in reality, we are never actually controlling the user’s access – instead
we are controlling programs that act on his behalf. Traditionally we have taken
the view that the program itself is trustworthy: that is, that it will faithfully
carry out the wishes of the user who is controlling it. For that reason, traditional
access control has focused on controls based on the identity of the user running
the program.
In practice we use roles rather than users because of messiness in the real
world:
• Sometimes there isn’t really a particular person associated with an activity
we’d like to control: for example, if we are running a nuclear power plant,
the entities we wish to be in control are identified by their role, such as
“engineer in charge” or “janitor”. It is the role we wish to grant permission
to, not the person.
• People come and go. When somebody starts a job they acquire a new role
(or roles), and when they leave they lose the role.
• People are fallible: it is useful for their own safety to limit their powers to
their current role so that mistakes are less costly.
But increasingly we are moving to a world where the user cannot trust
the program he is running. Even if we look at traditional environments the
devolution of control away from the data centre and professional IT staff and
towards the end user himself means there are programs running on people’s
machines that they have no idea how much to trust, nor any way to evaluate
those programs.
If we look at the Web, particularly the modern trend towards mashups, gadgets
and single-domain applications (e.g. Twitter, Flickr, Dopplr), we see this
problem in spades. Applications are webpages, users switch between applications,
including to completely new ones, at the click of a mouse. With mashups
and gadgets the user is not even running a single program but multiple programs
from multiple sources all sharing the same page.
Yet we are still trying to control everything with access controls based on
roles (ACBR).
This makes no sense at all. The user has no way to sensibly create roles and
the permissions associated with them. Even if they had, the task of assigning
those roles to the various components of, say, a web page would be impossible.
Of course, I would not be saying all this if I did not think there was a better
way. There is: capabilities.
A capability can be thought of as a “handle” representing some operation on
some object. Possession of the capability is all that is required to perform that
operation. The security of a capability system then boils down to your ability
to control who has each capability.
Some examples of capabilities are:
• Read this particular file.
• Write the same file (note that this is a completely different capability from
the read capability).
• Pay money into my bank account.
• Pay money into your bank account.
• Take money out of my bank account.
Each of these capabilities is completely independent of all the others. I
cannot derive a read capability from a write capability, nor vice versa. I cannot
take money out of your bank account just because I can pay it in.
Note, however, that it is possible to derive new capabilities from old ones,
for example:
• Write only well-formed HTML to some file, derived from the capability to
write to the file.
• Write a safe subset of HTML to some file (for example, banning <script>
tags and other unsafe constructs), derived from the well-formed-HTML capability above.
• Transfer money from my bank account to your bank account, derived from
the capability to take money out of my bank account and the one to put
money into yours.
Note that none of these can do any more than the capabilities they are
derived from (of course: how could they?) – indeed, in each case they do less.
In fact, this is generally likely to be the case for derived capabilities, otherwise
why bother to derive them?
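The bank-transfer derivation can be sketched directly (hypothetical names): the derived capability simply closes over its two parents and can therefore do no more than they can.

```python
def make_transfer(withdraw_mine, deposit_yours):
    """Derive a transfer capability from two existing capabilities.
    It can only move money from my account into yours."""
    def transfer(amount):
        deposit_yours(withdraw_mine(amount))
    return transfer
```

Handing someone `transfer` grants strictly less than handing them `withdraw_mine`: they cannot, for instance, withdraw cash for themselves.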
In order to turn capabilities into a security system, we need some extra
properties:
• Capabilities are unforgeable – that is, the only way to possess a capability
is to be given it, or to derive it from capabilities you already have.
• All access to resources (that is, anything outside the program itself) is
through capabilities.
In practice, it is usually simplest to build capabilities on traditional
object-oriented programming. In this approach, an object corresponds to a group of
capabilities which operate on some underlying state (in the case of read access to
a file, the object might let me do seek operations as well as read operations on the
underlying file object), often itself represented as another object which might be
shared between multiple capabilities (my bank account would be shared between
the capability to deposit money in it and the capability to withdraw money from
it). Deriving new capabilities from old then becomes business as usual – create
a new object, include references to the relevant existing capabilities and away
you go.
When capabilities are implemented in this way you need to impose a further
property on the objects: encapsulation. That is, it should not be possible for
the holder of a capability to look inside it and see its inner state.
The requirement for unforgeability of course needs to cover naughtiness on
the part of programs: they should not be able to steal capabilities from other
programs, for example, nor should they be able to access an existing capability
by creating a new reference to it.
I hope, by this point, you can see that capabilities could, indeed, form the
basis for a very fine-grained security system. But what do they have to do with
mashups and gadgets? And how on Earth do you manage all these capabilities?

Introduction to access control

Access control is central to computer security. Traditionally, we wish to restrict
the user to exactly what he should be able to do, no more and no less.
You might think that this only applies to legitimate users: where do attackers
fit into this worldview? Of course, an attacker is a user whose access should be
limited just like any other. Increasingly, of course, computers expose services
that are available to anyone – in other words, anyone can be a legitimate user.
As well as users there are also programs we would like to control. For
example, the program that keeps the clock correctly set on my machine should
be allowed to set the clock and talk to other time-keeping programs on the
Internet, and probably nothing else.
Increasingly we are moving towards an environment where users choose what
is installed on their machines, where their trust in what is installed is highly
variable and where “installation” of software is an increasingly fluid concept,
particularly in the context of the Web, where merely viewing a page can cause
code to run.
In this paper I explore an alternative to the traditional mechanisms of roles
and access control lists. Although I focus on the use case of web pages, mashups
and gadgets, the technology is applicable to all access control.

Unauthorized Access Controls and Attacks

The unauthorized disclosure of information can happen through object reuse, social engineering,
keystroke monitoring, and the radiation of signals. Emanation security
involves protecting against attackers intercepting electrical signals that radiate
from computers. There are three countermeasures for emanation attacks:
white noise, control zones, and TEMPEST. There are many attacks on access
controls; however, dictionary and brute-force attacks are the most common.

Intrusion Detection System

Intrusion detection systems
(IDS) can be used to monitor network segments (network-based) or individual
systems (host-based). There are two types of IDS: signature-based
and behavior-based. A signature-based IDS cannot recognize new types of attacks;
instead, it looks for patterns of previously identified attacks. A behavior-based
IDS builds a profile of a network’s usual activities and compares all
future activities to this profile to detect unusual behavior.

Access Control Administration

There are three types of access control administration: centralized, decentralized, and hybrid. Centralized
administration means that one group is responsible for ensuring
that the subjects are identified, authenticated, and authorized. Some examples
of centralized administration technologies are RADIUS, TACACS+,
and Diameter authentication protocols. Decentralized administration does
not have one group responsible for enforcing access rules, and hybrid administration
is a combination of centralized and decentralized.

Access Control Models and Techniques

Access control can be implemented by operating systems that are developed with one of the
three types of access control models: discretionary, mandatory, and role-based.
The discretionary access model allows data owners to decide what subjects can
access resources; mandatory access models do not allow data owners or users
such flexibility. Discretionary models use ACLs and mandatory models use security labels
to enforce access control policies. The role-based model allows roles and
groups to be developed, and users are assigned to these logical containers.

Single Sign-On Technologies

Kerberos, SESAME, directory services, and thin clients are examples of single sign-on
technologies. Users enter only one set of credentials, which allows them to access
all the network resources they require. Kerberos is a ticket-based authentication
protocol that uses symmetric cryptography. SESAME is similar to Kerberos, was
developed to overcome some of Kerberos’s shortcomings, and uses public key
cryptography. Kerberos and SESAME are both vulnerable to dictionary attacks.

Identification and Authentication

Access is the flow of information between a subject and an object. A subject is an active entity; an
object is a passive entity. Access controls require identification, authentication,
and authorization of subjects requesting to access objects. Authentication is verifying the identity of a subject. Possible authentication mechanisms are biometrics (which verify identity by a unique personal attribute), passwords (the weakest form of authentication), token devices (which create one-time passwords),
passphrases, cognitive passwords, memory cards, and smart cards. Each authentication mechanism has one of the following characteristics: something you know, something you have, or something you are.

Penetration Testing

Penetration testing is the process of simulating attacks on a network and the systems
that make it up, at the request of the owner, usually senior management.
The goal of penetration testing is to identify vulnerabilities, estimate the true protection the security mechanisms within the environment are providing, and
determine how suspicious activity is reported.
The penetration team must have signed consent from the owner of the environment
outlining what is to be accomplished in the test and to what degree the
vulnerabilities should be tested. The team then goes through a process made up
of five steps, outlined here:
• Discovery – footprinting and gathering information about the target.
• Enumeration – performing port scans and resource identification methods.
• Vulnerability mapping – identifying vulnerabilities in identified systems and resources.
• Exploitation – attempting to gain unauthorized access by exploiting vulnerabilities.
• Report to management – documenting the findings of the test for management, along with suggested countermeasures.
The team can have varying degrees of understanding of the target before the
tests are actually carried out:
• Zero-knowledge – the team does not have much knowledge of the target and must start from ground zero.
• Partial-knowledge – the team has some information about the target.
• Full-knowledge – the team has intimate knowledge of the target.
It is important that the team start off with only basic user-level access to
properly simulate different attacks. They need to utilize a variety of different
tools and attack methods and look at all possible vulnerabilities because this is
how actual attackers will function.

Attack Types



Attackers have been very busy and creative over the years and have come up with
many different types of attacks that can be used to threaten an organization’s
access controls that are put into place to protect assets. The following list describes
many of these types of attacks:
• Man-in-the-middle attack – an intruder injects himself into an ongoing
dialog between two computers so that he can intercept and read messages
being passed back and forth. These attacks can be countered
with digital signatures and sequence numbers.
• Sniffing – a passive attack where an intruder monitors the network in
order to gain information about the victim, usually for a later attack.
A countermeasure is encryption of data as it is being transmitted.
• War dialing – a brute-force attack in which an attacker uses a program
that systematically dials a large bank of phone numbers with the goal
of finding ones that belong to modems instead of telephones. These
modems can provide easy access into an environment; the countermeasures
are not publicizing these telephone numbers and implementing
tight access control for modems and modem pools.
• Ping of Death A type of DoS attack in which oversized ICMP packets
are sent to the victim. Systems that are vulnerable to this type of attack
do not know how to handle ICMP packets over a specific size and may
freeze or reboot. Countermeasures are patching the systems and implementing
ingress filtering to detect these types of packets.
• WinNuke A type of DoS attack that sends out-of-band packets to port
139. Windows 9x and NT can be vulnerable to this kind of attack. The
countermeasures to this attack are to patch the system or upgrade to
a later operating system version.
• Fake login screens A fake login screen is created and installed onto
the victim’s system. The user then attempts to log in to the system by
entering his or her credentials into it. The screen captures the credentials
and exits showing the user the actual login screen for his or her
system. Usually, the user just thinks he mistyped the password and attempts
to authenticate again without knowing anything malicious just
took place. A host-based IDS can be used to detect this type of activity.
• Teardrop An attack that sends malformed fragmented packets to a
victim. The victim’s system usually cannot reassemble the packets correctly
and freezes as a result. Countermeasures to this attack are patching
the system and ingress filtering to detect these packet types.
• Traffic analysis A method of uncovering information by watching
traffic patterns on a network. For example, heavy traffic between HR
and headquarters could indicate an upcoming layoff. Traffic padding,
in which decoy traffic is sent out over the network to disguise the real
patterns and make them more difficult to uncover, can be used to
counter this kind of attack.
Script kiddies are individuals who do not necessarily have a high level of computer
skills, but who carry out attacks using ready-made programs that perform the attacks
for them. In most cases, they do not fully understand what these programs do,
and do not understand the full extent of the damage that they can cause or the
extended ramifications of these types of attacks.

Emanation Security

Another method that attackers use to steal information is to intercept electrical
signals that radiate out of computers and devices. Every electrical device emits
some amount of electrical signals or radiation, which attackers can intercept using
specialized equipment. These signals can carry sensitive information. With the right
hardware and software, attackers can reconstruct this data without the individuals
using the devices ever knowing about it.
We see these types of activities in spy movies, where a white van sits in
the parking lot of a building, trying to pass itself off as a utility vehicle. But the
van is actually filled with specialized equipment and highly skilled individuals
whose goal is to gain access to information they are not supposed to be
able to access. This may seem as though it exists only in spy novels and movies,
but these attacks are real, and there are countermeasures that are often used to
protect against them. The three main countermeasures to emanation attacks
are TEMPEST, white noise, and control zones.
TEMPEST was a study of monitoring these signals and preventing others
from being able to monitor specific devices. TEMPEST then graduated into being
a standard that vendors must meet if they want their products to be considered
TEMPEST protection mechanisms. The devices have a special shielding,
referred to as a Faraday cage, that allows only an acceptable amount of radiation
to escape. Typically, only highly secure organizations implement TEMPEST
products because they are expensive, and the buying and selling of TEMPEST devices
is highly controlled by the government.
White noise is another countermeasure that can be used to fight emanation
attacks. In this case, a uniform spectrum of random electrical signals is purposely
emitted to jam the signals that are released from an environment. White
noise makes it difficult for an attacker to decipher the real electrical signals that
could be deemed beneficial.
Finally, control zones can be used within buildings to block the electrical signals
as they leave a facility or room, as shown in Figure 2-9. Walls, ceilings, and
floors can be constructed with materials that greatly reduce the amount of signals
that are released from that area.

Unauthorized Disclosure of Information

Sensitive information can be disclosed intentionally or accidentally, but the result
is the same: individuals have access to information that was not intended
for their eyes. A big part of access control is preventing this type of activity from
taking place. Many times, different types of media are used by different people
within an organization. Floppy disks are shared, hard drives are checked out
from a media library for different employees to use, and shares on servers are
used by many to store information. These different media types can hold sensitive
information that can be accessed by more individuals than should be allowed.
Also, new employees often inherit old computers, which could contain sensitive
information, from former employees. These are some examples of object reuse,
which means that some type of media (object) that could contain sensitive data
is being used by a different subject. The following issues should be considered
when dealing with unauthorized disclosure of information:
• Media containing sensitive information may need to be degaussed to
properly erase all data.
• Deleting files on a disk removes only the pointers to those files, not the
files themselves, thus they are still available for unauthorized disclosure.
• Formatting a drive rewrites only the allocation table, but does not actually
remove the information held within the drive’s sectors.
• If media containing sensitive information cannot be properly erased, it
should be physically destroyed.
• Processes within an operating system should erase their memory segments
before other processes are allowed to use the same memory portions.
• Social engineering can be used to trick someone into providing confidential
information to unauthorized individuals, and thus is a possible threat
to sensitive data.
• Zeroization is the process of writing null values over media several
times to ensure that data is not available to others.
Keystroke monitoring tools are hardware- or software-based utilities that are
used to capture each and every keystroke an individual inputs into a computer.
They can be used to monitor employees for suspicious activities, or they can be used
by attackers to gain access to confidential information. A common trick is to
load a Trojan horse onto a user's system, which then installs a keystroke-monitoring
program to capture usernames and passwords and send them back to the
attacker. The attacker then uses these credentials to fraudulently authenticate as

Unauthorized Access Control and Attacks

This section covers the many tricks and ploys that attackers use to gain unauthorized
access into a company’s assets and information. These items need
to be understood, along with the necessary countermeasures, to ensure that all
threats are properly identified and eliminated or mitigated.

Downfalls of IDS

Intrusion detection systems have characteristics and downfalls that security
professionals should be aware of:
• Cannot always process all packets in busy networks
• Cannot analyze encrypted data
• Not as effective in switched networks
• Many false alarms
• Not an answer to all prayers—still need firewalls, anti-virus software,
security policies, and other types of controls
It is important to realize that an IDS is just one piece of a security program and
should be used as one of the layers that protect the company and its assets.

Signature-Based and Behavior-Based

When different attacks are identified, IDS vendors write signatures that fit the
patterns of these attacks. These signatures are installed into the IDS software by
the customer so that the product can detect and identify all attacks that are currently
known. This is similar to how anti-virus software products are continually
updated so that they can identify the latest viruses and malware.
A signature-based IDS, also referred to as rule-based IDS, keeps these signatures
in a database and compares network traffic or host-based activities to the contents
of the database. If a signature is matched to an ongoing activity, the IDS takes
whatever action it is configured to carry out (e-mail or page an individual, reset
connection, or reconfigure perimeter device). So if a network-based IDS sensor
picks up a packet that is fragmented and malformed in a way that matches a signature
in its database, it will conclude that this is an identified attack and will
take the steps it is configured to follow.
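As a toy illustration of the signature-matching idea described above, the following Python sketch compares a packet (modeled as a dictionary) against a small signature database. The signature entries and packet fields are invented for illustration, not taken from any real IDS product:

```python
# Each signature is a set of field values a packet must match exactly.
SIGNATURES = {
    "winnuke":  {"proto": "tcp", "dst_port": 139, "oob": True},
    "teardrop": {"proto": "udp", "fragmented": True, "overlapping": True},
}

def match_signature(packet: dict):
    """Return the name of the first signature the packet matches, else None."""
    for name, required in SIGNATURES.items():
        if all(packet.get(field) == value for field, value in required.items()):
            return name
    return None

# An out-of-band TCP packet to port 139 trips the "winnuke" signature.
print(match_signature({"proto": "tcp", "dst_port": 139, "oob": True}))
```

A real signature engine also handles byte patterns, thresholds, and protocol state, but the core idea is the same lookup-and-compare loop.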
There are two types of rule engines that can be used in rule-based IDS systems:
one implements a state-based model, and the other implements a
model-based approach. The model-based approach works on the assumption
that attackers use specific known procedures to breach an environment's security,
such as performing scans and exploiting certain vulnerabilities. The IDS
looks for these specific activities to identify an intrusion.
A state-based IDS looks at the full exchange of data between source and destination
systems to fully understand the dialog that is going on between the two
systems. This provides a more in-depth look at the possible attack underway instead
of comparing individual packets to a database of individual signatures.
This type of IDS combines the packets and reviews the full conversation to look
for malicious activity.
A behavior-based IDS, on the other hand, compares current traffic to a reference
model of normal behavior. When it sees something out of the ordinary
that does not match its definition of "normal," it signals an alarm. When a
behavior-based IDS is first installed into an environment, it goes through a process
of learning about the environment: its traffic patterns, user activities, traffic
types, bandwidth use, and much more. This data is collected and a profile for the
current environment is built. After this learning period, all future traffic and activities are compared to this reference profile. Anything that does not match is
seen as an attack. This approach usually produces a lot of false positives.
The behavior-based IDS can detect new attacks, unlike signature-based
systems, because it does not depend upon matching specific attack signatures
to traffic patterns.
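The learn-a-baseline-then-flag-deviations process above can be sketched with a simple statistical threshold. The baseline numbers and the 3-sigma cutoff are hypothetical choices for illustration; real products build far richer profiles:

```python
from statistics import mean, stdev

# Learned during the training period: packets/sec samples (hypothetical).
baseline = [100, 110, 95, 105, 102, 98, 107]
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(rate: float, threshold: float = 3.0) -> bool:
    """Flag traffic rates that fall outside the learned profile."""
    return abs(rate - mu) > threshold * sigma

print(is_anomalous(500))   # far outside the learned profile -> alarm
print(is_anomalous(104))   # within normal behavior -> no alarm
```

Note how a legitimate but unusual spike would also be flagged, which is exactly why this approach tends to produce false positives.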

Network-Based and Host-Based

Network-based IDS can be hardware appliances dedicated to their tasks or software
applications installed on a computer system. In either case, the network interface card works in promiscuous mode and collects and monitors network traffic
looking for malicious activity. There are sensors placed in the network segments
that are to be monitored, and they are all connected to a central management
console. The IDS software analyzes protocols and relevant packet information
to uncover misdeeds.
Host-based IDS, on the other hand, are applications installed on individual
computers with the goal of monitoring activities taking place on specific systems
instead of monitoring network traffic. Host-based IDS have a more myopic
view and can be used to ensure that critical system files are not modified in
an unauthorized manner, scrutinize event logs, monitor use of system resources,
and possibly detect ping sweeps and port scans that are taking place on
those individual systems.
Host-based IDS can take a lot of maintenance if they are installed on each
and every system within a network. In most environments, only the critical servers
have host-based IDS installed, because the whole network could be negatively affected
if one or more of them were compromised.
The goal of both the network and host-based IDS is to detect ongoing attacks
or potentially dangerous activities and alert the network staff so that they can
properly react and mitigate damages. Depending on the product and its configuration,
the IDS can page or e-mail the network administrator or engineer to
alert her of a specific type of activity. The IDS may also attempt to reset the connection
of an ongoing attack and even reconfigure a router or firewall to cut off
traffic from the identified source of the attack.
Network and host-based systems will be either signature- or behavior-based
products, as described earlier.

Intrusion Detection System

Intrusion detection systems (IDS) are dedicated appliances or software-based
components that monitor network traffic or individual computer activity
with the goal of identifying malicious actions, resource misuse, attempts to gain
unauthorized access, and attacks. There are different types of IDS—
host- and network-based, and signature- and behavior-based—which are
covered in the preceding sections.

Decentralized Access Control Administration

The alternative administration approach is decentralized access control. In this
structure, users do not authenticate to one specific system, and no single group
enforces specific access rules. The environment may work in more of a
peer-to-peer relationship. In this type of environment, users access network resources
without being centrally authorized. The different resources may be
owned and maintained by different departments or users, who will make the
decisions on who can access and use those resources.
In many environments, centralized and decentralized methods of access
control are combined, which results in a hybrid administration. The network
administrator may control who can access network resources (file servers, printers,
remote sites), while individual users control who accesses their resources
(shares, local printers, directories). This provides more flexibility for the users
but ensures that the critical assets are closely monitored and at less risk of being
misused.

Diameter

Diameter is an authentication protocol that has the capability of authenticating
many different types of devices over different types of connections. RADIUS is
restricted to working only over PPP and SLIP connections and can authenticate
only computers and laptops that use modems and regular authentication protocols,
such as PAP, CHAP, and EAP. Today, companies have many different types of
wireless devices that do not use these protocols and mechanisms, so these
companies cannot authenticate them using the RADIUS protocol.
Companies that need this extra flexibility would look at implementing
Diameter instead of RADIUS or TACACS+.

TACACS

Terminal Access Controller Access Control System Plus (TACACS+) provides basically the
same functionality as RADIUS and uses the same types of components: a modem
pool, an access server, and a TACACS+ server. TACACS+ is a Cisco proprietary protocol
and has a few extras that are not included in RADIUS. As stated earlier, RADIUS encrypts
only the user's password when an access request is passed between the
RADIUS client and server. TACACS+ encrypts all the negotiation data being passed
back and forth, which provides a higher level of security. TACACS+ also splits up
the authentication, authorization, and auditing functionality, which RADIUS does
not. This gives the administrator more flexibility in deciding which
functionality she actually wants to use. It also allows another mechanism to provide
the authentication, such as a Kerberos KDC, with TACACS+ still providing the
authorization piece. Because the authentication and authorization pieces are split
up, the administrator can also configure individual user profiles. So when Kandi dials
into the company's network, she would have a different profile, or environment,
with different access rights than Keith would when he dials in.

RADIUS



Remote Authentication Dial-in User Service (RADIUS) is an authentication protocol
that allows users to dial into an environment and authenticate over a PPP
or SLIP connection. The components that are typically involved are a modem
pool, an access server, and a RADIUS server. Users dial into the modem pool,
which is connected to an access server. The access server prompts the user for
credentials and then passes this information on to the RADIUS server. The
RADIUS server compares the credentials to its user database and then authenticates
authorized users. This process is shown in Figure 2-8.
The user is a client to the access server, and the access server is a client to the
RADIUS server. The access server just works as a middleman between the user
and the RADIUS server. When a user is authenticated, the RADIUS server may send
connection requirements to the access server, such as a requirement to set up a virtual
private network (VPN) or an allotted amount of bandwidth the user can use
during connections.

This technology allows users to use a local ISP to connect to a corporate network
instead of enduring long-distance phone calls. It also allows for centralized
control, in that the administrator configures the RADIUS server to dictate who
is granted access and to what extent.

Centralized Access Control Administration

In a centralized access control administration, one group is controlling how subjects
access objects. This obligation usually falls upon the IT group and security
officers. They are responsible for configuring the systems that will enforce this
type of access control. The systems need to have at least a database of known users,
their credential sets, and allowed operations dictated by the access criteria.
The systems are used to identify, authenticate, and authorize users before access
to network resources is granted. The three types of centralized access control
technologies discussed are RADIUS, TACACS+, and Diameter. Each of these is
usually used when remote users need to access a corporate network from a remote site or a home office, or as road warriors who access the network
through Internet Service Providers (ISPs).
These three centralized access control technologies are referred to as AAA
protocols, which stands for authentication, authorization, and auditing. They
provide the same overall functionality, but each has differences that make it
best suited to different environments.

Access Control Administration

So, you’ve picked your model and identified the techniques you want to use,
now it’s time to actually manage the thing. How do you do that? You have
three choices of administration methods: centralized, decentralized, or hybrid.
This section explains each type.

Other Access Techniques

The following access control types are more granular in nature and can be used
in combination with any of the previously discussed controls and models. The
operating system or application would need to provide these types of controls
for them to be available to be configured by the administrator or security officer.
• Physical location A user may be allowed access to a resource only if
she has interactively logged in to a computer, meaning that she is physically
at a computer and not logged in remotely.
• Logical location An administrator can restrict access to resources by
IP addresses, which is a logical location on a network.
• Time of day A control mechanism can be configured to only allow
access to resources between certain hours of the day and certain days
of the week.
• Transaction type Restrictions based on operations that are requested
to be carried out. A bank teller may be able to deposit checks but not
cash checks. This would be a type of transaction access restriction.
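Two of the controls above, time of day and logical location, are easy to sketch in code. The allowed hours and the 10.0.0.0/8 "on-site" network below are hypothetical policy values chosen purely for illustration:

```python
from datetime import time
import ipaddress

# Hypothetical policy: business hours only, and only from the internal network.
ALLOWED_HOURS = (time(8, 0), time(18, 0))
ALLOWED_NET = ipaddress.ip_network("10.0.0.0/8")

def access_permitted(now: time, src_ip: str) -> bool:
    """Grant access only during allowed hours AND from an allowed address."""
    in_hours = ALLOWED_HOURS[0] <= now <= ALLOWED_HOURS[1]
    on_site = ipaddress.ip_address(src_ip) in ALLOWED_NET
    return in_hours and on_site

print(access_permitted(time(9, 30), "10.1.2.3"))   # in hours, on site
print(access_permitted(time(22, 0), "10.1.2.3"))   # after hours -> denied
```

In practice these checks would be enforced by the operating system or application rather than written by hand, but the layered "all conditions must hold" logic is the same.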

Content-Dependent Access Control

Content-dependent control is another technique, which bases access on the content
of the object the subject is trying to access. It is usually implemented in databases.
For example, a database may contain sensitive medical information; the
more sensitive this data is, the fewer the people who will be able to access
it. Let's say the database holds the results of different tests that have been
performed on a particular patient. If a lab technician requests to see these results,
she may be shown only that the tests were completed successfully, on what date,
and by whom. If a nurse requests to see this information, she may be given access
to a little bit more of the information than the lab technician, but not sensitive
information, such as the patient showing positive signs of malaria. And when a
physician requests to see the same set of information, she may have access to all
test data and results.
Content-dependent access controls require a lot of resources and overhead
because the database needs to scan the object to determine whether a particular
subject can actually access it. The more granularity configured and expected
from these types of controls, the more resources are needed to
evaluate requests.
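The medical-records example above can be sketched as a filter over record fields. The record, field names, and role-to-field mapping are invented for illustration; a real database would enforce this internally:

```python
# A hypothetical patient test record.
record = {"test": "blood panel", "date": "2013-08-01",
          "performed_by": "Lab A", "result": "positive"}

# Which fields each role may see (hypothetical policy).
VISIBLE_FIELDS = {
    "lab_tech":  {"test", "date", "performed_by"},
    "nurse":     {"test", "date", "performed_by"},
    "physician": {"test", "date", "performed_by", "result"},
}

def view(role: str, rec: dict) -> dict:
    """Return only the fields of the record this role is allowed to see."""
    return {k: v for k, v in rec.items() if k in VISIBLE_FIELDS[role]}

print(view("lab_tech", record))   # no "result" field
print(view("physician", record))  # full record
```

Notice that the filter must examine the record itself on every request, which is the overhead the paragraph above warns about.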

Capability Table and ACLs


An access control matrix is a mechanism used to associate the access permissions of a
subject to an object. The actual permissions assigned to a subject, which form a row
within the access control matrix, are referred to as a capability table. And the different
subjects that can access an object are referred to as an access control list
(ACL), which is a column of the matrix, as shown in Figure 2-7.
When a subject requests access to a specific object, the operating system will refer
to this access control matrix to see whether the access and requested operation are allowed.
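A minimal sketch of the matrix, with rows as capability tables and columns as ACLs (the subjects, objects, and permissions here are hypothetical):

```python
# Rows = subjects, columns = objects, cells = permitted operations.
matrix = {
    "dave":  {"payroll.xls": {"read"},          "printer": {"print"}},
    "kandi": {"payroll.xls": {"read", "write"}, "printer": set()},
}

def capability_table(subject: str) -> dict:
    """One row of the matrix: everything this subject may do."""
    return matrix[subject]

def acl(obj: str) -> dict:
    """One column of the matrix: every subject's permissions on this object."""
    return {s: perms[obj] for s, perms in matrix.items() if obj in perms}

def is_allowed(subject: str, obj: str, op: str) -> bool:
    """The check the operating system performs on each access request."""
    return op in matrix.get(subject, {}).get(obj, set())

print(is_allowed("kandi", "payroll.xls", "write"))  # row lookup, then cell
print(acl("payroll.xls"))                           # the object's ACL
```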

Restricted Interfaces

Users' environments can be constructed and controlled, which is another approach
to controlling the objects they have access to. An administrator can create
a user profile that provides a user's working environment. This profile
allows the administrator to dictate what icons, menus, applications, commands,
and functionality are available within that environment. It is considered a restricted
interface because the user utilizes the provided environment, desktop,
and profile to interface with the operating system, installed applications, and resources.
Another example of a restricted interface is a database view. A database administrator
can implement database views, which show only the information
within the database that the user has the necessary access rights to view. So
the payroll department may be able to view employee salary and commission
information, but not individuals' employment records and human resource information.
Different department managers may be able to view their employees'
employment records, but not their salary and commission information. Database
views are controls that allow the database administrator to logically partition
the database and present these specific pieces to different authenticated users.
The last restricted interface access control technique we look at is a physically
constrained interface. The best example of this type of access control is an automated
teller machine (ATM). These systems have operating systems that have
many more capabilities than transferring, depositing, and withdrawing funds,
but users of these machines are presented only with buttons for these specific
functions and no more. Thus, the physically constrained interface provides limited
options and only a few buttons so any other functions that the operating
system can actually perform are not accessible by regular users.

Access Control Techniques

Once an organization decides upon the access control model it will implement
(DAC, MAC, or RBAC), it needs to look at the different access control
techniques that are available to work within these models and supplement their mechanisms.
The following sections explain some of the available access control techniques.

RBAC

Role-based access control (RBAC) models, also called nondiscretionary models,
make access decisions based on the rights and permissions assigned to a role or
group, not an individual user. Administrators create roles, or groups, which act
as containers for users. The administrators assign access rights and permissions
to the role instead of directly to the user. A user placed into a role or
group inherits the permissions and access rights from the role, and thus is implicitly
assigned access rights. This kind of model is effective in large companies that
have high turnover rates because it allows the administrator to simply place new
employees into roles instead of creating new permissions for each and every
person who joins the company.

Roles usually map to specific positions outlined in the company's organization
chart. For example, if a company has an accounting department, the administrator
can create an accounting group with access rights to the resources anyone
within the department would need. Users can be assigned to one or more roles
and each role can have limited or many access rights and permissions assigned
to it. The upper and lower bounds of access are referred to as a lattice of access
rights.
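The role-inheritance idea above can be sketched in a few lines. The role names, users, and permissions are hypothetical:

```python
# Permissions are assigned to roles, never directly to users.
roles = {
    "accounting": {"ledger": {"read", "write"}, "reports": {"read"}},
    "hr":         {"employee_records": {"read", "write"}},
}

# Users are simply placed into roles and inherit the role's rights.
user_roles = {"new_hire": ["accounting"]}

def rbac_allows(user: str, obj: str, op: str) -> bool:
    """A user may perform op on obj if ANY of their roles permits it."""
    return any(op in roles[r].get(obj, set()) for r in user_roles.get(user, []))

print(rbac_allows("new_hire", "ledger", "write"))  # inherited from role
```

When the new hire leaves, the administrator removes one entry from `user_roles` instead of hunting down per-user permissions, which is the turnover benefit described above.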

MAC


Mandatory access control (MAC) models do not leave access decisions up to the
data owner; instead, systems compare the subject's clearance and need-to-know
to the object's classification to either grant or disallow access. Every object has a
security label assigned to it, which includes classification information (top secret,
secret, etc.). In order to access an object, the subject's clearance level must be
equal to or greater than the object's classification. For example, if Dave has a
"top secret" clearance and an object has a "secret" classification, Dave's clearance
dominates the object's classification. But Dave cannot access all top-secret information
within his military branch; his access is also based on his need-to-know.
The second piece of a security label is referred to as categories, as shown in
Figure 2-5. Categories outline the groups that a subject must have a need-to-know
of before access to the object can be granted. If Dave has a need-to-know for
one of these categories, and his clearance is equal to or dominates the object's
classification, he can access it.
Security labels are the core decision-making component in MAC environments;
they are assigned by system administrators or security officers and
should be changed only in a well-defined manner so the security policy is supported
and enforced. Systems that implement MAC models are used in highly
secured environments, such as military or government organizations.
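The two-part MAC check described above (clearance dominates classification, AND the subject holds the required categories) can be sketched directly. The level names and categories are hypothetical examples:

```python
# Ordered sensitivity levels: higher number dominates lower.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def mac_allows(clearance: str, need_to_know: set,
               classification: str, categories: set) -> bool:
    """Grant access only if clearance dominates the classification
    and the subject has need-to-know for every listed category."""
    dominates = LEVELS[clearance] >= LEVELS[classification]
    has_need = categories <= need_to_know  # subset check
    return dominates and has_need

# Dave: top secret clearance, need-to-know for "ops" only.
print(mac_allows("top secret", {"ops"}, "secret", {"ops"}))    # granted
print(mac_allows("top secret", {"ops"}, "secret", {"intel"}))  # denied
```

Note that Dave's top-secret clearance alone is not enough when the object's categories fall outside his need-to-know, which is exactly the point made above.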

DAC

A discretionary access control (DAC) model allows the owners of objects (resources)
to control who accesses them and what operations can be performed
on the objects. For example, if Dan creates a share on his system containing documents and WAV files, he can control and dictate who can access this share and
the items within it. This is typically done through access control lists (ACLs),
where permission is granted on a need-to-know basis.
DAC systems are used in environments that do not require the structure and
higher level of protection that mandatory access control (MAC) models provide
and enforce. Operating systems must be built differently depending upon
whether they are going to provide DAC or MAC functionality. For example,
Windows-based platforms provide a DAC access structure instead of MAC. Specially
developed operating systems, usually created for government agencies and
the military, provide a MAC access structure and the controls and mechanisms
necessary to enforce this level of control.
Some characteristics of DAC systems are the following:
• Access is based entirely on the identity of the user or the role that
user plays within the company.
• Data owners determine who can access their resources.
• No security labels are used.
• Usually implemented through access control lists (ACLs).
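The discretionary part, the owner (not the system) deciding who gets in, can be sketched as an owner-managed ACL. The class, names, and permissions are hypothetical:

```python
class Share:
    """A resource whose owner manages its ACL at his own discretion."""
    def __init__(self, owner: str):
        self.owner = owner
        self.acl = {owner: {"read", "write", "grant"}}

    def grant(self, granter: str, user: str, perms: set) -> None:
        # Only someone holding the "grant" right (here, the owner) may share.
        if "grant" not in self.acl.get(granter, set()):
            raise PermissionError(f"{granter} may not grant access")
        self.acl.setdefault(user, set()).update(perms)

    def allows(self, user: str, op: str) -> bool:
        return op in self.acl.get(user, set())

# Dan creates the share and decides, at his discretion, to let Alice read it.
share = Share("dan")
share.grant("dan", "alice", {"read"})
print(share.allows("alice", "read"))   # True
print(share.allows("alice", "write"))  # False
```

Contrast this with the MAC sketch earlier: here there are no system-enforced labels, only the owner's choices recorded in an ACL.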

Sunday, August 25, 2013

Access Control Models and Techniques

Access control models are created to enforce the rules and objectives of an
established security policy and to dictate how subjects can access objects.
There are three models that will be covered in this section: discretionary access
control (DAC), mandatory access control (MAC), and role-based access control
(RBAC).

Thin Clients


Computers without operating systems, or with limited instruction sets, are referred to
as thin clients. They provide an alternative method of distributed computing
compared to traditional desktop personal computers. They are networked systems
that depend entirely upon a centralized server for all functionality. The centralized
server has all of the applications the clients would need and centrally controls access to all network resources. The thin client must properly authenticate to the
server before it can participate on the network. Once the thin client and user are properly authenticated, there is no reason for the user to enter a second set
of credentials to perform tasks; thus, it is a single sign-on technology.

SESAME

The Secure European System for Applications in a Multi-vendor Environment
(SESAME) project is a single sign-on technology that was developed to extend
Kerberos functionality and improve upon its weaknesses. SESAME uses symmetric and asymmetric cryptographic techniques to protect the exchange of
data and to authenticate subjects to network resources (objects). (Kerberos
is a strictly symmetric key-based technology.)
Kerberos uses tickets for authenticating subjects to objects; SESAME uses
privileged attribute certificates (PACs), which contain the subject's identity, access
capabilities for the object, access time period, and lifetime of the PAC. The PAC
is digitally signed so that the object can validate that it came from the trusted authentication server, which is referred to as the privilege attribute server (PAS).
The PAS holds a role similar to that of the KDC within Kerberos. After a user successfully authenticates to the authentication service (AS), she is presented with a token
to give to the PAS. The PAS then creates a PAC for the user to present to the resource
she is trying to access.

Kerberos

Kerberos is a ticket-based authentication protocol based on symmetric cryptography. The following components are used in Kerberos:
• Key distribution center (KDC) Holds user and service cryptographic
keys (secret keys), provides authentication services, and creates and
distributes session keys.
• Authentication service (AS) Functional component of the KDC that
actually performs the authentication.
• Principals All entities that use the Kerberos protocol for authentication
are referred to as principals, which could be users, applications, resources,
or services.
• Realm A set of principals, which are grouped together logically by an
administrator. A KDC is responsible for one or more realms of principals.
• Ticket granting service (TGS) The part of the KDC that creates and
distributes tickets to the principals containing session keys.
• Ticket An authentication token.
• Secret and session keys Symmetric keys used for authentication
purposes and data encryption.
When a user logs in to the network first thing in the morning, his or her authentication information is sent to the AS, which is part of the KDC. The AS returns an encrypted ticket to the user’s computer, which is decrypted with the
user’s password (secret key). If the user enters the correct password, he is then
properly authenticated to the network and his computer. When the user needs
to access a resource on the network, let’s say a file server, the user’s computer
sends the initial ticket to the TGS. The TGS creates another ticket, containing
the user’s authentication information and two instances of the same session key,
and sends it to the user’s system, as shown in Figure 2-3. One instance of the session key is encrypted with the user’s secret key and the other instance of the
session key is encrypted with the file server’s secret key. The user’s Kerberos software decrypts and extracts one instance of the session key, inserts his or her authentication information into the ticket, and sends it on to the file server. The file
server decrypts the second instance of the session key with its secret key and reviews the user’s authentication information.
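The two-copy session-key arrangement described above can be sketched as a toy model. XOR stands in for real encryption, and the principal names and key sizes are invented for illustration; this is not how Kerberos is actually implemented.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Toy "encryption": XOR with a same-length key (illustration only).
    return bytes(a ^ b for a, b in zip(data, key))

# The KDC is the only party holding every principal's secret key.
kdc_keys = {
    "alice": secrets.token_bytes(16),
    "fileserver": secrets.token_bytes(16),
}

def tgs_issue_ticket(user: str, service: str):
    # Create one session key and encrypt two instances of it:
    # one under the user's secret key, one under the service's.
    session_key = secrets.token_bytes(16)
    return (xor(session_key, kdc_keys[user]),
            xor(session_key, kdc_keys[service]))

for_user, for_server = tgs_issue_ticket("alice", "fileserver")

# Alice decrypts her copy with her own secret key ...
alice_session = xor(for_user, kdc_keys["alice"])
# ... and the file server decrypts its copy with its secret key.
server_session = xor(for_server, kdc_keys["fileserver"])

assert alice_session == server_session  # both now share a session key
```

Because only the KDC knew the file server's secret key, a copy that decrypts correctly convinces the file server the ticket came from the KDC, which is the trust relationship the next paragraph describes.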
These steps are taken because the principals do not directly trust each other,
but they do trust the KDC. Only the KDC is supposed to have a copy of each
principal’s secret key. So, when the file server received a ticket that contained a
session key encrypted with its secret key, it was convinced that the ticket came
from the KDC. The file server also compared the user authentication information
the TGS put into the ticket with what the user inserted into the ticket to ensure the
user’s true identity.
The following are characteristics and weaknesses pertaining to Kerberos you
need to be aware of:
•Provides authentication, confidentiality, and integrity, but not availability
or nonrepudiation.
•The KDC can be a single point of failure.
•Secret keys are stored on users’ workstations.
•Session keys are stored on users’ workstations in a cache or key tables.
•Kerberos is vulnerable to dictionary attacks.
•Network traffic is not protected if encryption is not enabled.
•KDC must be readily available and support the number of requests
it receives from principals.
•All principals must have Kerberos software installed.
•Requires trusted, synchronized clocks within the network.
•The KDC should not allow any non-Kerberos network activity
to take place.
Kerberos is an authentication technology where users must prove their identities to each application and service before those applications and services can actually be used.

Directory Services

A network service is a mechanism that identifies resources (printers, file servers,
domain controllers, peripheral devices) on a network and provides a way to
make them available to users and programs. A network directory service contains information about these different resources, providing a naming scheme
and a hierarchical database that outlines characteristics such as name, logical
and physical location, subjects that can access them, and the operations that can
be carried out on them.
Network directory services provide users access to network resources transparently, without their needing to know the resources’ exact locations or the steps
required to access them. These issues are taken care of for the user in the background.
Some examples of directory services are Novell’s NetWare Directory Services
(NDS) and Microsoft’s Active Directory.

Single Sign-On Technologies

Because users usually access multiple systems and networks many times
within a given workday, different single sign-on technologies were created. These
enable a user to enter his or her credentials only once and remain authorized
throughout the day within the entire network. He can access different resources
within the environment without having to supply another set of credentials. This
makes life easier for users, because they will not need to remember (or write
down) several different passwords; it can also make administration easier, because one system is responsible for all access requests.
There are four types of single sign-on technologies that are covered in this
section:
•Directory services
•SESAME
•Kerberos
•Thin clients
A security concern relating to single sign-on technologies is that if an attacker figures out a valid credential set, he can now access all resources within
that environment. Once he is in, he is really in.

Authorization

Authorization is the process of assigning authenticated subjects access and the
right to carry out specific operations, depending upon their preconfigured access rights and permissions as outlined in access criteria. Access criteria are
developed by the administrator, or security officer, to support and carry out the
organization’s security policy. The criteria can be based on one or both of the following items:
•Clearance The security level the subject holds, which directly dictates
the objects that are accessible to it.
•Need-to-know The formal access level approved that correlates to
what information should and should not be available to a subject.
Mandatory access control (MAC) environments use clearances, classifications, and need-to-know to determine if a subject can access an object and what
operations can be carried out. Discretionary access control (DAC) environments use access control lists (ACLs), which are developed based strictly on the
subject’s need-to-know. MAC and DAC environments are described in the “Access
Control Models and Techniques” section of this chapter.
It is important to only give the subject access to the objects (resources, devices,
and information) that are required for it to complete its tasks. This concept is referred to as least privilege. This reduces the possibility of fraud and damaging
accidents by limiting access to objects based purely on business needs.
It is best for mechanisms that are making access decisions to default to “no
access.” This means that if a subject is not explicitly allowed, it is implicitly denied.
In other words, if I don’t tell you specifically you can do something, you can do
nothing.
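The default-to-no-access rule can be sketched in a few lines. The ACL contents, subject names, and object names here are invented for illustration:

```python
# A tiny discretionary ACL: object -> subject -> permitted operations.
acl = {
    "payroll.xlsx": {"hr_admin": {"read", "write"}, "auditor": {"read"}},
}

def is_allowed(subject: str, obj: str, operation: str) -> bool:
    # Default to "no access": anything not explicitly granted is denied,
    # including unknown subjects and unknown objects.
    return operation in acl.get(obj, {}).get(subject, set())

assert is_allowed("auditor", "payroll.xlsx", "read")
assert not is_allowed("auditor", "payroll.xlsx", "write")   # not granted
assert not is_allowed("intern", "payroll.xlsx", "read")     # unknown subject
```

The design point is that the deny case requires no entry at all; silence in the ACL is itself the denial.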
If a task must be completed by two or more individuals coming together to
carry out their piece of the task, it is referred to as dual control. For example, in a
cryptographic key recovery technology, the master key needed to decrypt a lost
key is usually split up into two or more pieces and each piece is given to an individual. If this master key needs to be reconstructed to recover a lost key, each individual with a piece of the key must submit their portion, which the system
combines to create the original master key. This is dual control, because it requires more than one person to complete an important task.
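The key-splitting idea behind dual control can be sketched with a simple XOR split, in which every share is required to rebuild the master key. This is an all-shares-required construction chosen for illustration, not a threshold scheme, and not any particular product's recovery mechanism:

```python
import secrets

def split_key(master: bytes, parts: int = 2):
    # Split a master key into shares; ALL shares are required to rebuild it.
    shares = [secrets.token_bytes(len(master)) for _ in range(parts - 1)]
    last = master
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def combine(shares):
    # XOR all shares back together to recover the master key.
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

master = secrets.token_bytes(16)
pieces = split_key(master, 2)
assert combine(pieces) == master
assert pieces[0] != master  # a single piece reveals nothing by itself
```

Each individual holds one piece; only by submitting both can the system reconstruct the master key, which is exactly the dual-control property described above.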

Smart Cards

A smart card is a step above a memory card, in that it can actually process information because it has a microprocessor and integrated circuits. The user inserts
the smart card into a reader, which has electrical contacts that interface and
power the smart card processor. The user then enters a PIN value, which “unlocks” the information and processing power contained on the smart card. The
card can hold a user’s private key, generate a one-time password, or respond to
a challenge-response request.
Smart cards are much more tamperproof than memory cards. After a certain
number of incorrect PIN values have been entered, the card
can actually “lock” itself, which would require the user to contact the vendor to receive an overriding PIN value to “unlock” the card again. Some cards zeroize
themselves after too many invalid login attempts, which means they render themselves totally useless and must be reprogrammed.
Both memory and smart cards have the extra expenses of creating new cards
and purchasing the required readers, which must be calculated in their implementation and lifetime costs. If the cards require a second credential set to be
provided (password or PIN), it is referred to as two-factor authentication
(something that you have and something that you know).
Smart cards can be contact cards, meaning they need to be inserted into a
reader, or contactless, meaning they do not need to be inserted into a card reader
but need to be within a certain proximity of the reader. Contactless cards need
only to be passed within range of a radio frequency acceptor to read information
from the chip. Some smart cards have both contact and contactless functionality. A hybrid card has two chips on it, one for contact readers and one for
contactless readers. And a combi card has a single chip with both the contact
and contactless interfaces.

Memory Cards

A memory card is an authentication mechanism that holds user information
within a magnetic strip and relies on a reader of some sort to process the information. The user inserts the card into the reader and then enters a set of credentials
to be properly authenticated. An example of a memory card is an automated
teller machine (ATM) card. The user inserts the ATM card into the machine
and then enters his or her PIN. The card supplies the account number
(user information) and then the user provides the secret code (PIN), together
providing a credential set.
Within companies, employees will often carry ID badges with magnetic strips.
In many of these implementations, a hash of the PIN is stored on the magnetic
strip. In order to enter a building, the employee must enter the PIN and
swipe the badge through a reader. The reader hashes the inputted PIN
and compares it to the value on the card itself. If they match, access is granted.
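The hash-and-compare check described above might look like the following sketch. SHA-256 and the sample PIN are assumptions for illustration; a real deployment would also salt the stored value:

```python
import hashlib

def enroll(pin: str) -> bytes:
    # The hash of the PIN is what gets written to the magnetic strip.
    return hashlib.sha256(pin.encode()).digest()

def verify(entered_pin: str, stored_hash: bytes) -> bool:
    # The reader hashes the entered PIN and compares it to the card's value.
    return hashlib.sha256(entered_pin.encode()).digest() == stored_hash

card_value = enroll("4921")
assert verify("4921", card_value)
assert not verify("0000", card_value)
```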

Passphrase

A passphrase is a long string of characters that is easier for the user to remember in most cases than a password, because it is an actual phrase of some type.
Once the passphrase is entered into the computer, a software program converts
it into a virtual password, which is the actual information used for authentication purposes. For example, Chip, a sales executive, could use the passphrase
“AlwaysBeClosing.” The application will put this phrase through a process, either
encryption or hashing, and work with the resulting value. Passphrases usually
provide a higher level of protection than passwords because they are longer and
harder for attackers to guess or break.
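The passphrase-to-virtual-password conversion can be sketched as a hashing step. SHA-256 is an assumption here; the text only says the application uses "either encryption or hashing":

```python
import hashlib

def virtual_password(passphrase: str) -> str:
    # The passphrase is hashed; the digest is the "virtual password"
    # actually used for authentication.
    return hashlib.sha256(passphrase.encode()).hexdigest()

vp = virtual_password("AlwaysBeClosing")
assert vp == virtual_password("AlwaysBeClosing")   # deterministic
assert vp != virtual_password("alwaysbeclosing")   # case matters
```

Note that the user never sees or types the virtual password; only the memorable phrase crosses the keyboard.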

Cryptographic Keys

Another authentication method is the use of cryptographic keys and digital signatures to prove one’s identity. This can be an alternative to using passwords and
biometric systems or can be used in combination with these technologies.
A private key is a cryptographic key that should be available to only one person, which is how it provides authentication. It should be highly protected and not shared in any way. The private key can be used to create a digital
signature, which also provides authentication of a user. These items are covered
further in Chapter 6, but for now understand that they are mechanisms that can
be used for authentication purposes.

One-Time Password


A one-time password is a set of characters that can be used to prove a subject’s
identity one time and one time only. After the password is used, it is destroyed
and no longer acceptable for authentication. If the password were obtained by
an attacker as it was being transmitted, she would have only a small window of time
in which to use it, and most likely it would already have been used once, making it useless to the attacker.
attacker. This greatly reduces the vulnerability of someone sniffing network
traffic, obtaining a password, and being able to successfully authenticate as an
actual legitimate user.
One-time passwords are usually generated and supplied to the user via a
handheld device with an LCD display, referred to as a token device. The user reads
the password provided by the token device and enters it, along with a username,
into a system for authentication purposes. The password is good for only that
session and when the user needs to authenticate again, another password is dynamically created. Token devices, also referred to as one-time password generators, are either synchronous or asynchronous.
Synchronous token devices are synchronized with an authentication service
via a clocking mechanism or by events. When a clocking mechanism is used, the
token device and authenticating service agree upon a timing scheme. The token
device presents encrypted time values to users, and they enter these values along
with their usernames into their workstations, as shown in Figure 2-2. This credential set is sent to the authentication service. Because the token device and authentication service are synchronized, the authentication service is expecting a
specific value to be submitted as the password. If the correct value is submitted,
and it correlates with the given username, the user is successfully authenticated.
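A clock-based token can be sketched as follows. The HMAC construction, the 30-second window, and the 6-character code are illustrative assumptions, not a description of any particular product:

```python
import hashlib
import hmac
import time

def totp(shared_secret: bytes, step: int = 30, now=None) -> str:
    # Token and service derive the same value from a shared secret and
    # the current time window, so the expected password never crosses
    # the network ahead of time.
    counter = int((time.time() if now is None else now) // step)
    digest = hmac.new(shared_secret, str(counter).encode(),
                      hashlib.sha256).hexdigest()
    return digest[:6]  # the short code shown on the token's LCD

secret = b"shared-between-token-and-service"
t = 1_000_000.0
assert totp(secret, now=t) == totp(secret, now=t)         # same window: match
assert totp(secret, now=t) != totp(secret, now=t + 60.0)  # later window: new code
```

Because both sides compute the value independently, the "requires trusted, synchronized clocks" weakness listed for Kerberos applies here as well: if the clocks drift apart, valid codes are rejected.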
When events are used to establish authentication, the user is usually required to initiate the logon process, which tells the token device and authentication system to increment the one-time values. The token device and
authentication system share the same list of values to be used for one-time
passwords; the token device encrypts and presents the next value in the list to
the user, which she enters as her password.
Asynchronous token devices use a challenge-response method to create
one-time passwords. The authentication service sends the user a value, which he enters into his token device. The token device modifies this value, through encryption or a hashing process, and presents the new value to the user. The user
then enters this new value as his password to authenticate to the authentication
service. If it matches the value the authentication service computes from the challenge it originally sent, the user is properly authenticated.
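The challenge-response exchange can be sketched like this; HMAC-SHA-256 stands in for whatever "encryption or hashing process" a real token device uses, and the embedded key is invented for illustration:

```python
import hashlib
import hmac
import secrets

device_key = b"embedded-in-the-token-device"  # the service holds a copy

def token_response(challenge: bytes) -> str:
    # The token device transforms the challenge with its embedded key;
    # the result becomes the user's one-time password.
    return hmac.new(device_key, challenge, hashlib.sha256).hexdigest()

# The authentication service issues a fresh random challenge ...
challenge = secrets.token_bytes(8)
# ... the user keys it into the token and reads off the response ...
password = token_response(challenge)
# ... and the service recomputes the same value to verify the user.
assert hmac.compare_digest(password, token_response(challenge))
```

Because every challenge is fresh, a captured response is worthless for replay, which is the point of the asynchronous approach.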
Token devices protect against password guessing, replay attacks, and electronic eavesdropping, but can be exposed to masquerading attacks, where an attacker gains control of the token device and uses it to impersonate the valid user.
This is why many token devices require the user to enter a proper PIN value before it can be used. If a user has to provide a PIN and uses the token device for
authentication, this is a two-factor authentication process.

Cognitive Password

A cognitive password is based on fact or opinion used as the secret code, which
is usually easier for a user to remember and more difficult for an attacker to
is usually easier for a user to remember and is more difficult for an attacker to
uncover. The user goes through an enrollment process by answering questions
that typically deal with personal experiences and the answers to these questions are
documented and used as cognitive passwords when the user needs to authenticate
herself at a later time. For example, when Chrissy calls a help desk for the first time,
she is enrolled for proper authentication by being asked the following questions:
•What is your mother’s maiden name?
•What is your dog’s name?
•What city were you born in?
•What is your favorite color?
When Chrissy calls back to get assistance from the help desk at a later time,
she is presented with one or more of these questions to prove her identity. Once
the help desk person is convinced of her identity, he can move on to assisting Chrissy.

Attacks on Passwords

There are two types of attacks that are commonly
used against passwords: dictionary and brute force attacks. Dictionary attacks
are performed by software tools that contain hundreds or thousands of words
that are commonly chosen as passwords. The attacker usually captures a
hashed value of a password, or password file, and the tool then compares each
of the words preloaded into the tool to the captured password until a match is
uncovered.
Another type of attack on passwords is a brute force attack. In this attack type,
a tool is used that tries every possible character and sequence of characters until
the correct password is uncovered. So whereas a dictionary attack will attempt
to match the password using a long list of words, a brute force attack will try to
crack a password one character at a time.
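A dictionary attack of the kind described can be sketched as follows. MD5 and the sample wordlist are illustrative choices only; real password files use a variety of hash formats:

```python
import hashlib

# Suppose the attacker captured this hashed password from a password file.
captured = hashlib.md5(b"sunshine").hexdigest()

wordlist = ["password", "letmein", "sunshine", "dragon"]

def dictionary_attack(target_hash, words):
    # Hash each candidate word and compare it against the captured value.
    for word in words:
        if hashlib.md5(word.encode()).hexdigest() == target_hash:
            return word
    return None

assert dictionary_attack(captured, wordlist) == "sunshine"
```

A brute force tool works the same way, except the candidate stream is every possible character combination rather than a preloaded word list.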
Dictionary and brute force programs are not just used by evildoers. Oftentimes,
systems administrators will use them to test the strength of users’ passwords to
enforce a set password policy. Because many useful tools reside on the Internet,
or are accessible to the general public, attackers and security professionals are
typically equipped with the same firepower. Security professionals simply need
to be smarter and take more precautions to protect against these never-ending
threats. The following are some countermeasures for password attacks:
•Do not allow passwords to be sent in clear text.
•Encrypt the passwords with encryption algorithms or hashing functions.
•Employ one-time password tokens.
•Rotate passwords on a frequent basis.
•Employ intrusion detection systems (IDS) to detect dictionary or brute
force attacks.
•Use dictionary tools to find weak passwords chosen by users.
•Protect password files properly.

Passwords

A password is a string of characters that should be different for each user and
highly protected. It is something that a subject knows and is the most widely
used authentication method in place today. The problem is that it is the most
insecure mechanism when compared to other authentication technologies, because users and administrators do not usually practice the necessary disciplines
required to provide a higher level of protection. Also, specialized utilities have
been developed to uncover passwords and compromise this type of authentication method.
The following is a list of best practices that should be implemented and enforced as part of a company-wide password policy:
•Passwords should have at least eight characters (alphanumeric and
symbols) and a combination of upper- and lowercase.
•Users should not be able to reuse the same passwords (password history).
•Systems should have a threshold (clipping level) configured that limits
the number of unsuccessful logon attempts.
•An accurate audit log should be maintained that includes information
about each logon attempt, which includes date, time, user ID, and
workstation.
•The password lifetime should be short but practical.
•Passwords should not be shared.
•Passwords should not be easily guessable nor should they be
dictionary words.
Passwords should never be stored in clear text; some type of encryption
scheme, as in a one-way hashing method, should be used to ensure that passwords are not easily read. Servers that store passwords should have limited
physical and logical access and should be highly protected.
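A salted one-way scheme of the kind recommended above can be sketched with PBKDF2; the iteration count and salt size are illustrative choices, not requirements from the text:

```python
import hashlib
import hmac
import os

def store_password(password: str):
    # Store only a salted, slow one-way hash -- never the clear text.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    # Recompute the hash with the stored salt and compare in constant time.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = store_password("correct horse battery staple")
assert check_password("correct horse battery staple", salt, digest)
assert not check_password("Tr0ub4dor&3", salt, digest)
```

The per-user salt also blunts the dictionary attacks described earlier, because a precomputed wordlist of hashes no longer matches any stored value directly.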
Some companies choose to use password generators, which are software applications that create complex passwords for users instead of allowing them to
come up with their own. Although this sounds like a great approach, many
times the passwords that are created are too complex for the users to remember
and they are quickly written down on yellow sticky notes that are then stuck to the
monitor or secretly hidden underneath the keyboard. Writing down passwords
and making them publicly available defeats the whole purpose of passwords and
access control.

Biometrics


Biometrics is a type of access control mechanism that can be used to verify an individual’s identity with a unique personal attribute. Examples are fingerprints,
palm scans, retina scans, and voice prints. These kinds of authentication systems
gather a lot of information that can be hard to imitate, thus they provide a
higher level of protection when compared to other authentication technologies.
Biometric systems are usually more expensive, relative to other approaches,
and do not usually have a high acceptance rate by society because they are perceived to be intrusive and they obtain personal information. Biometrics are
sometimes seen as too “Big Brother-ish” and an encroachment into our personal privacy through automated means. For these reasons, they have usually
been seen only in environments that require a very high level of security. But after what happened on September 11, 2001, they are being implemented into
more locations because of society’s increased awareness of security.
Each subject must go through an enrollment period where personal attributes are captured and then stored in a reference file. The reference file can be
held in a local or remote database or even within a biometric template of a smart
card. So, when Truman presses his finger on the reader in order to enter his
highly secure work facility, the reader references the image Truman presents
against his stored reference file. If the two match, and he supplies the correct
PIN or password, he is successfully authenticated and allowed access.
The accuracy of different biometric systems can be evaluated by two separate
measurements: Type I error and Type II error. A Type I error, or false reject rate, is
rejecting a subject that should be authenticated and allowed access. If Truman
is rejected when he presents his fingerprint, and he really should have been authenticated, this is referred to as a Type I error. If an imposter is successfully authenticated by the system, this is called a Type II error, or false accept rate. Each of these
error rates should remain low for the system to provide the necessary level of
protection and efficiency. A metric has been developed to measure different biometric systems’ overall protection and accuracy level, which is a combination of
these two error types, called the crossover error rate (CER). CER is the point at
which Type I errors (false rejection) equal Type II errors (false acceptance).
The CER is reached by calibrating the system to ensure that it does
not provide more Type I errors than Type II errors, or vice versa. For example,
if a system were calibrated so that it resulted in only 2 Type I errors out of 100 authentication attempts and 2 Type II errors out of 100 attempts, it would have a
CER value of 2. This system would provide a higher accuracy level than a system
with a CER value of 3. This metric can be used when evaluating different biometric systems for purchase.
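The calibration search described above can be sketched as a small lookup over measured settings; all of the example error rates here are invented:

```python
def crossover_error_rate(calibration_points):
    # Each point: (sensitivity setting, Type I rate, Type II rate).
    # The CER is the error rate at the setting where false rejects
    # equal false accepts.
    for setting, type1, type2 in calibration_points:
        if type1 == type2:
            return type1
    return None  # no crossover measured at these settings

points = [
    ("low", 1, 8),     # lax setting: few false rejects, many false accepts
    ("medium", 2, 2),  # crossover: Type I equals Type II
    ("high", 7, 1),    # strict setting: many false rejects, few false accepts
]
assert crossover_error_rate(points) == 2
```

A lower returned value means a more accurate system, so the CER gives a single number for comparing products.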
The following are different types of biometric systems:
•Fingerprint Ridge endings and bifurcations on the finger.
•Finger scan Selective points on the fingerprint are collected and
stored. This uses a smaller amount of data compared to fingerprint
systems.
•Palm scan Creases and ridges on the palm and all fingerprints are
used to identify individuals.
•Hand geometry The shape of a person’s hand (length and width
of hand and fingers).
•Retina scan The patterns of the blood vessels on the backside of
the eyeball.
•Iris scan The unique patterns, rings, and corona in the iris, which is
the colored portion of the eye.
•Signature dynamics Electrical signals, pressure used, slant of the pen,
the amount of time and patterns used in creating a signature are captured.
There are many more data points and variables to look at compared
to a static signature, thus it is harder to counterfeit.
•Keyboard dynamics Mechanism that analyzes electrical signals
when a person types a certain phrase on a keyboard, such as speed
and movement.
•Voice print This mechanism recognizes subtle differences in people’s
speech sounds and patterns.
•Facial scan Attributes of a person’s face, bone structure, nose ridges,
and eye widths.
•Hand topology Looks at a side view of a person’s hand and reviews
the height and length from that perspective.

Authentication

Identification is usually providing a public piece of information (username, account number) and authentication is providing a private piece of information
(PIN, passphrase, digital signature). Three important characteristics of
the mechanisms that can be used for authentication are as follows:
•Subject must prove something he knows Example = password
•Subject must prove something he has Example = smart card
•Subject must prove something he is Example = fingerprint
If one mechanism providing one of these characteristics is used, it is referred
to as one-factor; if two mechanisms are being used, it is two-factor; and you
guessed it, an authentication process that requires all three is referred to as
three-factor. For the authentication process to be considered strong authentication,
it must be at least two-factor.

User identification values should be unique to ensure accountability of individual activity. They should be nondescriptive of job functions to make them
not as easily guessed and so that attackers will not know what type of account
the credentials are tied to. There should also be secure and documented processes for issuing identification and authentication values and mechanisms to
ensure standardization.
There are several mechanisms that can be used for authentication, each one
with its own strengths and weaknesses. We take a look at the following items:
•Biometrics
•Passwords
•Token devices
•Memory cards
•Smart cards
•Cryptographic keys

Three Steps to Access Control

There are three important components of access control: identification, authentication, and authorization. Identification is the activity of the subject supplying
information to identify itself to an authentication service. Some examples of
identification mechanisms are username, account number, and memory card.
Authentication is the second part of a credential set to verify the identity of the
subject. These mechanisms could be passphrases, passwords, cryptographic
keys, PIN numbers, or tokens. You may tell me your name, but I have no proof
that you are who you say you are until you demonstrate the secret handshake.
Only then will I be convinced of your identity. Authorization is the process of
determining what this identified subject can actually access and what operations it can carry out. Authorization is based on some type of predefined criteria, which is enforced through access control lists, security labels, capabilities
tables, or user profiles.
These three components of access control usually work together in a synergetic relationship and can be found in applications, operating systems, firewalls,
routers, databases, domain controllers, and more.