A question in a past CISA examination was along the lines of ‘what is the biggest threat presented by single sign-on?’ The answer, of course, was ‘maximum exposure if the sign-on credentials are compromised’ and herein lies the question of identity.
How do we know that the person at the other end of the connection is who they claim to be? With logical access control we rely on pre-programmed logic to authenticate the person’s identity. Once authenticated, the user receives pre-assigned privileges for access to, and perhaps manipulation of, applications, data and commands.
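By way of illustration, here is a minimal sketch of that pre-programmed logic in Python; the users, privileges and credential store layout are hypothetical and purely illustrative, not a description of any particular product.

```python
import hashlib
import hmac

# Hypothetical pre-assigned privileges: user -> actions they may perform.
PRIVILEGES = {
    "alice": {"read_data", "run_reports"},
    "bob": {"read_data", "modify_data"},
}

def authenticate(user: str, password: str, credential_store: dict) -> bool:
    """Authenticate: does the presented secret match the stored credential?"""
    record = credential_store.get(user)
    if record is None:
        return False
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), record["salt"], 100_000)
    return hmac.compare_digest(candidate, record["hash"])

def authorise(user: str, action: str) -> bool:
    """Authorise: has the (already authenticated) user been pre-assigned this privilege?"""
    return action in PRIVILEGES.get(user, set())
```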
If we consider the confidentiality, integrity and availability aspects of security (CIA), then correct identification, coupled with associated privilege allocation, is important for all three, but absolutely essential with regard to availability and confidentiality, as these aspects concern making the service available to those, and only those, who should have it at the time of need.
Integrity may also be compromised if the wrong person receives privileges which would allow an unauthorised change to either data or software. So authentication of identity, assuming correctly assigned privileges, is the key to the identity conundrum. In itself this does not totally eradicate the risk of unauthorised manipulation, as an authorised person may abuse their privileges, but it does partially mitigate the risk.
How many factors?
We have many ways of authenticating the identity of a person, ranging from the psychological to the entirely logical, but they all come down to something known, something possessed, something you are, or a combination of all three. Control in-depth is required if we are to reduce the likelihood of maximum exposure should a single sign-on be compromised. So two- or even three-factor authentication has become de rigueur when authenticating remotely.
The UK Border Agency is currently using facial recognition technology for entry into the UK. The entrant requires a passport (something possessed), the photographic details of which (encoded in a chip) are compared against the face (something you are) of the entrant as scanned by a camera. Effective, but slow and expensive due to the equipment required; it also requires the physical presence of the entrant, so it is not suitable for remote authentication.
Variations of this are now being tested to remotely authenticate the user by using a local webcam for facial recognition, but this usually requires some pre-registration process. It also falls down if a camera is not available. The standard one-time password generator does not require a camera, thus providing more access options for the user, but it too requires a pre-registration process. The DVLA, for example, requires you to enter your NI number to receive a code in order to access your driving record. Now I wonder: how did the DVLA get hold of my NI number to facilitate their authentication process?
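For context, a standard one-time password generator of this sort typically combines a secret shared at pre-registration with the current time, along the lines of an RFC 6238 TOTP. A minimal sketch follows, assuming a pre-registered shared secret (the value shown is illustrative only).

```python
import hashlib
import hmac
import struct
import time

def totp(shared_secret: bytes, timestep: int = 30, digits: int = 6) -> str:
    """Time-based one-time password: HMAC over the current 30-second time step."""
    counter = int(time.time()) // timestep
    digest = hmac.new(shared_secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: pick four bytes, mask the sign bit.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# The verifier runs the same calculation and compares; only a party holding
# the pre-registered secret can produce a matching code for this time window.
print(totp(b"pre-registered-shared-secret"))
```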
Electronic signatures operating within a public key infrastructure temptingly offer authentication, integrity, non-repudiation and perhaps secrecy (although Edward Snowden disagrees on this last item). The authentication of a person’s identity relies on their public key being confirmed as belonging to them, which is why such systems are defined as being ‘arbitrated’ schemes. A certificate authority (CA) confirms that a particular public key belongs to a specific entity (person or position).
Most email systems are designed to automatically request both the public key and its associated certificate when a signed message is received. However, there are two ways of fooling the process. The first is to gain access to someone’s secret key. As the key is usually stored on a device with simple password protection, this may not be as difficult as it sounds. The second would be to replace the original public key held by the CA with one’s own key. A lot more difficult, unless you own the CA.
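Here is a rough sketch of the verification step, using the third-party pyca/cryptography library; the function name and message are my own illustrative choices. Note what it actually proves: only that the holder of the matching secret key produced the signature. The binding of that public key to a person rests entirely on the CA’s certificate.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

def signature_is_valid(public_key, message: bytes, signature: bytes) -> bool:
    """Check the signature against the public key. This proves only that the
    holder of the matching secret key signed the message; the link from that
    public key to a person depends on the CA's certificate."""
    try:
        public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

# Illustrative round trip: whoever holds the secret key 'is' the sender.
secret_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"Please transfer the funds."
signature = secret_key.sign(message, padding.PKCS1v15(), hashes.SHA256())
print(signature_is_valid(secret_key.public_key(), message, signature))  # True
```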
As CAs are commercial bodies, gaining ownership of one may not be too difficult. Although somewhat expensive, it’s certainly not beyond the capability of nation states, terrorist organisations or organised crime. If you own the CA then you would have access to the secret keys too, so the entire authentication process falls apart. Perhaps this is how the NSA are decoding the messages referred to by Snowden. They simply own a couple of CAs.
My car identifies its key by a handshake protocol which authenticates the key, but it does not identify me as being the key’s owner. Whoever has the key is seen as being the owner, so there is a flaw in the authentication process in that it is authenticating the wrong thing.
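A rough sketch of that kind of handshake, assuming an HMAC-based challenge-response (the real details vary by manufacturer; this is illustrative only). The car verifies possession of the shared secret, nothing more.

```python
import hashlib
import hmac
import os

SHARED_KEY = os.urandom(32)  # programmed into both car and key fob at manufacture

def car_challenge() -> bytes:
    """Car sends a fresh random challenge to the key fob."""
    return os.urandom(16)

def fob_response(challenge: bytes) -> bytes:
    """Fob answers with an HMAC over the challenge using the shared secret."""
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

def car_verifies(challenge: bytes, response: bytes) -> bool:
    """The car confirms possession of the key, never the identity of the holder."""
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = car_challenge()
print(car_verifies(challenge, fob_response(challenge)))  # True for whoever holds the fob
```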
In much the same way, a swipe card can track the use of the card around a building, but does nothing to validate who possesses the card. So we have very effective control mechanisms which unfortunately do not achieve their real objective, which should be to establish who possesses the key or card.
So our starting position on identification has to be a control objective which can be objectively verified. This is level 4 (Managed & Measured/Predictable) on both the CMM and ISO 15504 scales. If the control objective is simply to authenticate the key or card, then we have achieved the objective.
If, however, it is to authenticate the owner, then we have failed miserably. So the only true identification mechanism is likely to be something characteristic (retina, fingerprint, etc.), but this almost certainly requires some form of pre-registration and a sophisticated scanning mechanism. Such a package may be both cumbersome and expensive, but offers a high degree of effectiveness in meeting the control objective.
Other processes may be cheaper, but less effective, so ultimately it comes down to the materiality of the asset we are trying to protect. Which brings me neatly to the ISO 27000 family of information security standards, which requires the identification and classification of assets as its starting position.
People are assets and thus should be classified. Some people will be considered more ‘important’ than others and will therefore require a higher level of authentication prior to privilege allocation. In some instances we may wish to authenticate multiple users before we allow an action to take place, such as the launching of a nuclear missile.
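A minimal sketch of that kind of multi-party (‘two-person rule’) authorisation, with hypothetical names: the action proceeds only once a quorum of separately authenticated, pre-authorised users has approved it.

```python
def quorum_authorised(approvals: set, authorised_officers: set, quorum: int) -> bool:
    """Allow the action only when enough distinct, pre-authorised people have approved."""
    return len(approvals & authorised_officers) >= quorum

# Two distinct, pre-authorised officers must both approve before the action proceeds.
officers = {"officer_a", "officer_b", "officer_c"}
print(quorum_authorised({"officer_a"}, officers, quorum=2))               # False
print(quorum_authorised({"officer_a", "officer_b"}, officers, quorum=2))  # True
```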
The greater the privilege(s), the higher the identification and authentication requirement. Control in-depth ultimately means slowing down the process, hence the need for a materiality assessment. If the exposure is considered to be trivial, then we can get away with less control. Whatever the potential exposure, it must be accepted and signed off by senior management. It becomes their risk appetite. They may choke over their breakfast at this suggestion, but that is what they are paid for.
Remotely authenticating a device, whether it be a phone, computer, car key, or card is easy. Authenticating a person is much more challenging.