7. Information Handling
This section of the toolkit adds further detail to the corresponding Information Handling sub-policy within the Information Security Policy.
Assets of the University may include, but are not restricted to, information, software, physical assets, services, people and intangibles (such as reputation). Classification of information is the responsibility of the information owner, though this can be delegated to other persons. The classification of information should be directly related to the value of that information, legal requirements, sensitivity and criticality to the University. It should be used to determine the level of security control necessary to protect its confidentiality, integrity and availability. It should also take into account certain business needs, such as the requirement for sharing of information.
The Information Security Policy defines “confidential information” as information which is of limited public availability; is confidential in its very nature; has been provided on the understanding that it is confidential; and/or its loss or unauthorised disclosure could have one or more of the following consequences:
- financial loss e.g. the withdrawal of a research grant or donation, a fine by the ICO, a legal claim for breach of confidence;
- reputational damage e.g. adverse publicity, demonstrations, complaints about breaches of privacy; and/or
- an adverse effect on the safety or well-being of members of the University or those associated with it e.g. increased threats to staff or students engaged in sensitive research, embarrassment or damage to benefactors, suppliers, staff and students.
All University information assets should therefore be classed as either confidential or not and the policy for the protection of confidential information is provided in section 6 of the Information Security Policy. However it is recognised that some units may benefit from a classification scheme with a greater level of granularity. The following classification scheme: Classification Scheme [84 KB PDF] is therefore provided as an example and was approved by the PRAC ICT sub-committee. It should be noted that the labels used in this scheme (i.e. CONFIDENTIAL, RESTRICTED, SENSITIVE, UNRESTRICTED) are arbitrary and could be replaced by any appropriate labelling scheme. Each category contains a description and levels of controls that may be applied to assets in that category.
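The idea of attaching handling controls to each classification label can be sketched in code. This is purely illustrative: the four labels come from the example scheme above, but the specific controls shown (and the function and dictionary names) are invented for this sketch and are not part of the approved scheme.

```python
# Illustrative mapping from the example classification labels above to
# hypothetical handling controls. The labels are from the example scheme;
# the controls themselves are invented here for illustration only.
HANDLING_CONTROLS = {
    "UNRESTRICTED": {"encrypt_at_rest": False, "min_overwrite_passes": 0},
    "RESTRICTED":   {"encrypt_at_rest": False, "min_overwrite_passes": 1},
    "SENSITIVE":    {"encrypt_at_rest": True,  "min_overwrite_passes": 3},
    "CONFIDENTIAL": {"encrypt_at_rest": True,  "min_overwrite_passes": 3},
}

def controls_for(label: str) -> dict:
    """Look up the handling controls for a classification label."""
    try:
        return HANDLING_CONTROLS[label.upper()]
    except KeyError:
        raise ValueError(f"Unknown classification label: {label}")

print(controls_for("sensitive")["min_overwrite_passes"])  # 3
```

Because the labels are arbitrary, a unit adopting its own labelling scheme would only need to change the dictionary keys, not the structure.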
All major assets that the unit/University has should be maintained in an asset inventory and each asset should have its ownership and security classification agreed upon and documented. The nominated owner of each asset is responsible for defining appropriate use of the asset, and for agreeing and maintaining appropriate security controls. Whilst the responsibility for implementing security controls may be delegated as needed, the accountability rests with the nominated owner.
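A minimal sketch of what an asset-inventory entry might look like follows, assuming only the fields the paragraph above requires: a documented owner and an agreed security classification. The field names and example entries are illustrative, not a prescribed University schema.

```python
from dataclasses import dataclass

# Sketch of an asset inventory entry. Field names and sample data are
# illustrative assumptions, not a mandated format.
@dataclass
class Asset:
    name: str
    owner: str           # nominated owner, accountable for security controls
    classification: str  # agreed and documented security classification
    description: str = ""

inventory: list[Asset] = [
    Asset("Student records database", "Registrar", "CONFIDENTIAL"),
    Asset("Public web pages", "Web team", "NOT CONFIDENTIAL"),
]

# Every asset in the inventory must have an owner and a classification.
assert all(a.owner and a.classification for a in inventory)
```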
Information assets should be classified in terms of their sensitivity and criticality to the unit/University. However the organisational needs for the use and sharing of information should also be taken into consideration. Information assets can be classified in terms of their security requirements which may be confidentiality, integrity or availability (or a mixture). Assets can be classified and labelled in terms of their confidentiality by the University's Information Classification Guidelines.
Information security and the management thereof essentially comes down to managing risk. Throughout the University's information security policies and this accompanying toolkit "appropriate" or "suitable" controls are referred to continuously. It is risk assessment that allows us to determine what level of control is appropriate and, perhaps more importantly, allows us to demonstrably justify the decisions we have taken. If there is no risk assessment, then culpability for security incidents is likely to lie firmly at the door of the asset owner. However the process need not be complicated, costly or even formal. The level of risk assessment required will depend on the security requirements of the unit, legal and regulatory requirements and the nature and criticality of the asset needing protection. What is essential, however, is that it is possible to show that the decisions made in implementing controls can be justified and considered reasonable.
Risk management is the reduction of identified risk to an acceptable level. Risk management therefore involves an assessment of the relevant risks followed by the appropriate treatment of the identified risks. The scope and depth of a risk assessment will depend on factors such as:
- The context of the risk assessment (e.g. to meet legal requirements, overall risk assessment to identify key areas, assessment of a specific system or service etc.)
- The operational objectives of the unit
- Available resources
- The type of asset
- Any local information security policies
The purpose of risk analysis is to identify, quantify and prioritise the risks according to the scope and criteria for the risk assessment. In order to assess the risks to information assets, threats and vulnerabilities must first be identified, quantified and prioritised.
The overall aim is to protect important information assets. It is vital therefore to identify and, where possible, quantify the value of those assets. All information assets should have a clearly defined owner and the process of identifying and valuing assets should include both owners of the asset and operational managers. When identifying assets at a unit level, for example, managers from all sub-departments within a unit should be consulted. In doing so the person(s) responsible for the risk assessment have a better chance of identifying all of the important assets and associating the relevant value. Assets should be identified and prioritised in terms of their criticality to the unit. They can be physical assets such as hardware or intangible assets such as reputation and can include computers, software, data, network connectivity equipment, personnel and many more.
Information assets should be valued assuming that no controls are currently in place, considering, for example, loss of information, loss of availability, disclosure of information, destruction of information and interference with communications. Where possible a financial value should be placed on the asset. This could be measured in terms of the cost to replace the asset and/or the cost to support it. However, for intangible assets such as reputation, it may be more difficult to place a numerical value on them. Therefore a subjective approach can also be taken, considering, for example, the importance of the asset to the unit and/or University.
More information on Asset Management and asset registers can be found at Asset Management.
Once the threats have been identified, the vulnerabilities of assets should also be evaluated. Assessing vulnerabilities should include the likelihood of the vulnerability being exploited as well as the impact of it being exploited. In order to do this, past experience and historical data can be used. Where such information is not available, a best guess can be made in the first instance based on expert advice and opinion. The process should then be repeated annually once some real data is available.
Vulnerabilities can exist in many forms, such as vulnerabilities in software, hardware, processes and policies, system configuration, dependencies on third parties, the physical environment and so on. As with assessing threats, the likelihood of a vulnerability being exploited will depend on a number of factors, including the motivation and resources of the attacker. In addition to the issues listed above, existing controls and the difficulty of exploiting the weakness can be considered to specifically determine the chances of the vulnerability being exploited.
Finally the impact of a vulnerability being exploited should be evaluated in order to assist with prioritisation. The impact could be the result of a loss of one of confidentiality, integrity and/or availability and so each of these should be considered. Other factors that may be relevant when assessing the impact might include:
- Costs in terms of time spent dealing with the incident (investigation, recovery etc.)
- Direct financial costs owing to theft or monetary fines
- Replacement value of hardware etc.
- Technical costs such as ongoing availability issues or knock-on effects on other services
- Human costs such as loss of goodwill or reputation
- Possible legal action
- Possible health and safety issues
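The informal scoring this implies can be sketched in a few lines: rate each risk's likelihood and impact on a simple scale, multiply, and sort for prioritisation. The scales, scenarios and scores below are illustrative assumptions, not a mandated method.

```python
# Minimal sketch of informal risk scoring: likelihood x impact on 1-5
# scales, sorted so the highest-scoring risk is addressed first.
# All figures are invented for illustration.
risks = [
    # (description, likelihood 1-5, impact 1-5)
    ("Laptop with personal data stolen", 3, 4),
    ("Web server defacement", 2, 3),
    ("Spam sent from compromised account", 4, 2),
]

scored = sorted(
    ((desc, likelihood * impact) for desc, likelihood, impact in risks),
    key=lambda r: r[1],
    reverse=True,
)

for desc, score in scored:
    print(f"{score:>2}  {desc}")
```

Even a table this simple, recorded in committee minutes, provides the documented justification the toolkit asks for.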
Once the above information has been obtained risk analysis can be carried out in order to ascertain the main risks to the unit/University. As mentioned the level of risk analysis required will depend on the value of the asset, the information available and the resources available to the unit. What is important is to be able to justify the decisions made when choosing and implementing controls. A risk analysis may be a simple conversation in a managerial or IT committee meeting or may be a much more formal process. Either way it is important to have a record of the risk analysis in order that decisions can be justified, reviewed and, where appropriate, amended in the event of an incident. It is also important to be able to demonstrate why a particular approach to risk assessment has been taken. Generally speaking there are two approaches to risk analysis which are described below.
Qualitative risk analysis looks at the magnitude and consequences of incidents, and their likelihood of occurring. It does not use numerical input but rather a best-guess approach based on the informed opinions of subject experts. It is therefore often based on the creation of incident scenarios and hypothetical events. The more input that can be gained from a range of experts, the greater the degree of assurance possible. Qualitative analysis has a number of advantages. For example, it is easy for all personnel involved to understand, and is often very useful as an initial, high-level risk assessment to identify areas which need closer attention. It is also suitable where historical data is not available and where loss can't easily be measured numerically. It therefore tends to be particularly well suited to handling incidents with so-called soft impacts such as reputational damage or loss of goodwill.
Quantitative analysis is much more formulaic and therefore requires much more information as input. Where historical data is available, the frequency of attack is known and losses can be measured in numerical terms, quantitative analysis may be the most suitable approach. The advantage of this approach is that the risk can be measured precisely and the process used iteratively. The downside is that it requires comprehensive records to be kept and is therefore not so good at dealing with new risks when they arise. Of course, the quality of the analysis will depend greatly on the accuracy and completeness of the input data.
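One standard quantitative calculation is the Annualised Loss Expectancy: ALE = Single Loss Expectancy x Annualised Rate of Occurrence. The figures below are invented purely to illustrate the arithmetic.

```python
# Sketch of a standard quantitative risk metric:
#   ALE = Single Loss Expectancy (cost per incident)
#       x Annualised Rate of Occurrence (incidents per year).
# All figures are illustrative assumptions.
def annualised_loss_expectancy(single_loss: float,
                               occurrences_per_year: float) -> float:
    return single_loss * occurrences_per_year

# e.g. a lost unencrypted laptop costing 5,000 per incident, expected
# twice a year, gives an ALE of 10,000 - so a control costing less than
# that per year (such as disk encryption) is easy to justify.
ale = annualised_loss_expectancy(5_000, 2)
print(ale)  # 10000
```

The same figure can then be compared directly against the annual cost of a proposed control, which is exactly the cost-of-treatment-versus-cost-of-impact test described later in this section.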
Specific risk analysis tools are often used for quantitative analysis. These have the advantages of being efficient (once the input data is collected), allowing data to be re-used and allowing the focus of resources to be placed on the analysis of the results.
When implementing controls to reduce the risk it is usually the case that the cost of the treatment should be lower than the cost of the impact. This is one factor which will determine how appropriate the control is, along with the overall security requirements of the unit/University. It is, however, important to bear in mind intangible costs such as loss of reputation. At this point, existing countermeasures can be taken into consideration and a gap analysis carried out. This can also help with the prioritisation of risks, along with (for example) the number of threats faced by an asset. When deciding on countermeasures to implement it is important to consider that the risk can be reduced in a number of ways. For example, controls can be put in place to reduce the threat, vulnerability or impact (or all of the above) depending on the nature of the risk and the costs and effectiveness of each. As an example, for a web server where availability is of paramount importance (and confidentiality isn't), one control may be to reduce the impact of compromise by having a plan in place to quickly rebuild the server.
Of course risk assessment and management must be an iterative process. This should happen in the first instance so that any residual risk is reduced to an acceptable level. However risk assessment should be carried out regularly as other priorities are identified, operational requirements change, vulnerabilities/threats are introduced or incidents happen etc. Reviewing risks may equally mean that some controls can be relaxed as well as tightened. Importantly, the risk process and review should be well documented so that it can be easily monitored and audited.
See Backup Section
Before any internal or external media are disposed of or transferred off-site, any information stored on the device must be deleted as appropriate for the classification of that information.
Merely deleting the contents or doing a quick format of the media does not remove all the information stored on it; typically, all or parts of files may be restored using commonly available software utilities. This applies to any electronic device that may store information (e.g. hard disks, CD/DVDs, media cards, USB memory, etc.).
For all but unrestricted information, the data must be irretrievably deleted from the media. Where media cannot be reliably written to, due to damage or no longer being compatible with current technology, it should be physically destroyed. Whole disk encryption may be used to reduce the risk of unauthorised disclosure of information. Appropriate encryption and consequent destruction of the decryption keys may be viewed as the equivalent of appropriate deletion.
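The overwrite-before-delete idea can be sketched for a single file as below. This is a toy for illustration only: on journalling filesystems, SSDs with wear-levelling, or media with remapped sectors, overwriting a file in place gives no guarantee of irretrievable deletion, so dedicated sanitisation tools, whole-disk encryption or physical destruction remain the appropriate measures for anything above unrestricted information.

```python
import os
import secrets

# Toy sketch: overwrite a file's contents with random data several times
# before unlinking it. NOT a substitute for proper media sanitisation -
# filesystem journalling and SSD wear-levelling can leave copies behind.
def overwrite_and_delete(path: str, passes: int = 3) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))  # random data each pass
            f.flush()
            os.fsync(f.fileno())                # push the pass to disk
    os.remove(path)
```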
UNRESTRICTED information should pose no risk to the University/unit and will usually be in the public domain already. Deleting such information is not mandatory and simply erasing it in the normal way can be viewed as appropriate.
Disclosure of RESTRICTED information is unlikely to have a substantial impact on the University in terms of its business, finance or reputation. It may, however, include personal information and so, as a minimum, it should be deleted as described above before disposal. Where personal information is involved the media should be overwritten at least once using common techniques.
Disclosure of SENSITIVE information is likely to have an adverse effect on the business, reputation or finances of the University. It must therefore be securely erased before any media is disposed of or re-used. This means that the media should be overwritten several times (at least three) using common techniques. Physical destruction of the media may be considered in addition. Suitable encryption of the entire media may also be considered appropriate if secure erasure or physical destruction is not possible for some reason.
Disclosure of CONFIDENTIAL information is very likely to have a significant and adverse effect on the University. It must therefore be irretrievably erased before any media is disposed of or re-used. Media must be overwritten several times (at least three) using common techniques. Physical destruction of the media should be considered. Suitable encryption of the entire media may also be considered appropriate if secure erasure or physical destruction is not possible for some reason.
Email is a very handy form of communication and its use has exploded to the extent that most of us rely on it to carry out our work. However, email was not designed with today's uses, or security requirements, in mind. As such, email is an inherently insecure form of communication. That isn't to say we should all stop using it, but we should be aware of the risks. Here are some of the main issues:
Sending an email can be seen as the postal equivalent of sending a postcard. There is no built-in confidentiality (or "envelope") to prevent anyone who "handles" the message from reading it. Connection to the University mail servers *is* protected by encryption in the form of SSL/TLS, and this is the case for many other service providers (though not all by any means). However, once an email leaves the University's network it should be assumed there is no protection. Since an email you send may be routed via anywhere in the world, and a copy of the email will be stored at each step along the way, it is best to start from the basic principle that you shouldn't put anything in an email that you wouldn't want to be made public!
One of the things that frequently "trips up" email users is the question of who has sent them an email. As with many other forms of electronic communication, how do you know who has sent you a given message? Just because a message comes from a particular email address, or has a particular name associated with it, doesn't mean it comes from the person you think it does. It is very easy to set up an email address that is similar to somebody else's, or to set the name on an account to be that of somebody else. It is also trivial to fake where an email comes from and send it so that it genuinely does appear to come from an address you recognise. That is not to mention the fact that somebody else (e.g. an attacker who has compromised an account) may have access to somebody's account and be using it for nefarious purposes. The message here is that you shouldn't trust an email based on the sender address.
Perhaps a good rule of thumb is not to worry too much about who specifically has sent you an email. Do be aware that it might not be who you think it is, and do question whether the information in the email is relevant to you, or may pose a risk. For example, if someone emails you telling you that you have won a lottery you haven't entered, it is probably not true, regardless of who appears to have sent it! Likewise, if someone is asking for your password(s) then stop and think twice (more on this below).
Similarly, how do you know who is at the other end of an email you send? Perhaps the sender is not who you think they are (as described above), perhaps they have set a different email address for your replies to go to, or perhaps they are going to forward your message to other people. In addition, it is all too easy to send a message to the wrong recipient accidentally. Nearly everyone I know (including myself) has done this at some point or another. Take care when sending emails, but remember our starting point - don't put anything in an email you wouldn't want to be made public!
Your email account is "secure" though, right? It is protected by a username and password so nobody else could get access? Unfortunately people are often pretty careless with their passwords, sometimes without even realising it. In the University of Oxford, a single password is used for access to email and other Single Sign-On (SSO) resources, so you need to be careful with your password. If you use public machines all over the world to access your email (or other SSO resources), there is a pretty good chance that one of them will have some malicious software on it, designed to steal your password and get access to your account. This isn't necessarily something to panic about but, again, you should be aware of the risks. What does happen on a regular basis is that accounts are compromised and used to send out spam messages. This will lead to your account being temporarily disabled by OxCERT, and it is uncanny how often this happens at a time when the user needs access to their account urgently!
Secondly, attackers do delete emails in people's folders. This may be out of pure malice (because they can) but is often to hide their presence, as they don't want you to see the hundreds of bounced messages resulting from all the spam they are sending. Thirdly, guess what: if an attacker has access to your account they can read all of your emails! What was that first rule again? Email should not be relied on as secure storage for your important work. Have another way of accessing and storing important information, and never store confidential or sensitive information in your mail folders.
If you follow this advice, you may well be prepared to take the chance that someone will compromise your account because you think the risk is low. However, do think about what else that account might give an attacker access to! The extent to which you protect access to your SSO account may depend on what other resources you have access to. If you do need to check your email on untrusted machines, be aware of the risks and take some mitigating action. This could be changing your password from a trusted machine at the next opportunity and/or checking your account for unusual activity or rules that have been set up. Remember though that the security of your account is your responsibility. If you need to recover deleted emails this may be possible and you should contact the OUCS help centre, but there are no guarantees. Similarly, if your account is disabled because it is compromised, that is your responsibility.
Last, but by no means least, there are lots of people who want to send you spam and scam emails. Don't be a victim! Remember that nobody in the University should ask you for your password - especially via email. If you are being sent links to websites that ask you for your password, think twice and be aware of how to spot legitimate sites. Remember not to trust an email just because it appears to come from a .ox.ac.uk address. If you are in any doubt ASK your local IT support staff or the OUCS help centre for advice. Phishing attacks cost the University a significant amount of money so please don't add to the problem!
Of course there are ways of mitigating some of the risks mentioned above. Encryption and digital signatures can provide confidentiality, authentication of the sender and some assurance over the identity of the recipient. The two standard ways in which this can be done are PGP (Pretty Good Privacy) and S/MIME. Where encryption is to be used, the owner of any information assets should be consulted and should agree on appropriate levels of encryption. For more details on how to secure email see OUCS's advice on secure emails.
Encryption is the process of transforming readable information into something unreadable using an algorithm (or cipher) and a cryptographic key. The input into the process is often referred to as the plaintext and the output is known as the ciphertext. The reverse process, used to recover the plaintext is known as decryption. Broadly speaking, there are two types of encryption: symmetric (or private-key) encryption and asymmetric (or public-key) encryption.
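The terms above can be illustrated with a deliberately tiny cipher. The sketch below uses a one-time-pad-style XOR, where the same key both encrypts (plaintext to ciphertext) and decrypts. It is for illustration only; never hand-roll ciphers for real data.

```python
import secrets

# Toy illustration of plaintext/ciphertext/key: XOR each byte of the
# message with a random key of the same length. XOR is its own inverse,
# so the same function performs both encryption and decryption.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, key))

plaintext = b"attack at dawn"
key = secrets.token_bytes(len(plaintext))   # key as long as the message
ciphertext = xor_cipher(plaintext, key)     # encrypt
recovered = xor_cipher(ciphertext, key)     # decrypt with the same key
assert recovered == plaintext
```

Note that this sketch also uses the same key for encryption and decryption, which is exactly the defining property of the symmetric ciphers described next.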
- Symmetric Encryption
- Symmetric - or private-key - encryption uses the same key for encryption and decryption. The security of symmetric cipher systems therefore depends on the key remaining secret throughout its lifetime and on the strength of the cipher itself.
- Asymmetric Encryption
- Asymmetric - or public-key - encryption uses different keys for encryption and decryption. Therefore, anyone wishing to be a receiver must publish or share their encryption key. In order to decrypt, a separate decryption key must be used and kept secret. It should not be possible to deduce the plaintext from knowledge of the ciphertext and the public key, and there should be some means of checking the authenticity of a public key.
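The public/private split can be shown with "textbook RSA" using tiny numbers. This is purely to illustrate that the published key encrypts and the secret key decrypts; real deployments use keys of 2048 bits or more plus padding schemes, and never raw textbook RSA.

```python
# Toy "textbook RSA" with tiny primes, to show the asymmetric-key idea:
# the public key (e, n) encrypts, the private key (d, n) decrypts.
p, q = 61, 53                 # two (tiny) secret primes
n = p * q                     # 3233, the public modulus
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # private exponent: e*d == 1 (mod phi)

m = 65                        # a message, encoded as a number < n
c = pow(m, e, n)              # anyone can encrypt with the public key
assert pow(c, d, n) == m      # only the private-key holder can decrypt
```

The last property in the paragraph above, checking the authenticity of a public key, is what the Web of Trust and Public Key Infrastructure models discussed later in this section address.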
Using encryption for secure file storage and transfer presents a number of challenges. While the use of strong, well recognised encryption algorithms may 'solve' the problem of appropriately securing files in storage and in transit, the use of encryption itself does not imply complete security or confidence. Rather, one problem is solved but a number of others created, and it is dealing with these challenges that provides the overall level of security for the cryptographic system. Furthermore, there is no such thing as a 100% secure system and 'security' should be thought of as being appropriate (or not) for the task in hand. There will always be some trade-off between security and usability and this will usually be determined by user requirements and risks.
Encryption does not mean security. In fact, if implemented badly it can reduce security (for example in terms of availability), and/or end up being an expensive but meaningless security control. It may be that it is necessary to encrypt based on the University's information security policies in which case read on! However, if it isn't, make sure you ask yourself if it is really the right solution. Often when we are asked about encryption, for example, it turns out that what is really required is good access control along with some sort of file management server. Encryption would normally be considered for copies of original data which, for example, might temporarily need to be in a less secure environment than normal, (and thereby exposed to greater risks from accidental or malicious threats): this might occur where information is accessed remotely across an insecure network; transferred to a third party (possibly as an email attachment), or loaded on a laptop being used away from the office. Encrypted storage might also be considered for sensitive information where there is a risk that the device or media holding it could be accessed or stolen by others. Where the risks are sufficiently great, encryption might be used to complement and strengthen other security measures.
First, you need to consider whether you want symmetric or asymmetric encryption. Symmetric encryption is generally more efficient as it works on binary data and uses simple operations. It is therefore good for data in storage and for encrypting large files and/or volumes. However, since the same key is used for encryption and decryption, it requires careful key management. How are you able to securely distribute keys to persons in remote locations, for example? There is also a lack of non-repudiation, in that it is difficult to prove who may have encrypted or decrypted a given plaintext when identical keys are shared amongst users. Asymmetric encryption attempts to solve some of these problems by using one-way mathematical functions and different keys for encryption and decryption. However, it is less efficient as it tends to convert the plaintext into a mathematical representation and uses complex operations. Public keys are easily distributed but the issue becomes one of trust and authenticating those keys.
In actual fact, you will likely be selecting a product for a particular purpose, rather than a type of encryption. Most products also probably use a combination of asymmetric and symmetric encryption anyway, with symmetric encryption being used to encrypt the majority of the data, and asymmetric encryption being used to manage the keys. It is useful to consider what type of encryption is being used in the products you are considering however, as it may help you identify certain requirements (e.g. for key management).
Unfortunately, there is no hard and fast rule and there is no one set of standards that covers everything. It therefore comes down to what levels of assurance you require and where the requirement is coming from. For example, if the requirement to encrypt is coming from a third party then you should check with them what standards are required. Otherwise the data owner should be satisfied that the risks have been appropriately mitigated. The following should be taken into consideration when deciding on the level of trust to place in a particular cryptographic control and/or product:
- Is the algorithm and/or product well known and reputable?
- Probably the first test is whether you have heard of the algorithm/product, whether it is widely used and whether it has a decent reputation.
- There are some standards to look for and meeting these may be a requirement of third parties.
- The National Institute of Standards and Technology (NIST) produces many of the Federal (US) standards and recommendations that have been adopted at both a national and international level. The Federal Information Processing Standards (FIPS) are one of the many outputs of NIST and relevant standards include:
- FIPS 140-2 : Security Requirements for Cryptographic Modules
- FIPS 186-2 : Digital Signature Standard
- FIPS 197 : Advanced Encryption Standard [AES]
One other NIST standard that is worth drawing attention to is FIPS 46-3: The Data Encryption Standard (DES). This was withdrawn as a standard in May 2005. Therefore this should not be used for encryption unless there are exceptional circumstances. If the use of DES is required then advice should be sought from OxCERT.
For more information on NIST standards please see http://www.itl.nist.gov/fipspubs/
- RSA PKCS
- The Public-Key Cryptography Standards are specifications produced by RSA Laboratories for the purpose of accelerating the deployment of public-key cryptography. The PKCS documents have become widely referenced and implemented, and contributions from the PKCS series have become part of many formal and de facto standards, including ANSI X9 documents, PKIX, SET, S/MIME, and SSL. Some RSA standards of note include:
- PKCS#1: RSA Cryptography Standard
- PKCS#3: Diffie-Hellman Key Agreement
- PKCS#5: Password-based Encryption Standard
- PKCS#7: Cryptographic Message Syntax Standard
- PKCS#10: Certification Request Standard
- PKCS#11: Cryptographic Token Interface
For more information please see the RSA website.
- Product Evaluations
- There are several bodies that will evaluate products against certain criteria to provide a level of assurance in a product. Again, the level of assurance that is required will depend on your specific requirements. The most common of these is probably the Common Criteria for Information Technology Security Evaluation. This is an international standard (ISO/IEC 15408) for computer security certification and offers evaluation assurance levels (EALs) from 1 to 7. See http://www.commoncriteriaportal.org/ for more information.
Other evaluations to look out for might include the CESG Assisted Products Service (CAPS): http://www.cesg.gov.uk/products_services/iacs/caps/index.shtml.
Key management is one of the most important issues when considering encryption. Even where the encryption algorithms are strong and implemented correctly, the security of the cryptographic system depends on the secure management of the encryption keys. If decryption keys are compromised then the security of the cryptographic system falls apart. For public-key algorithms, the main requirements for the management of keys are that the private key remains secret, that the integrity of the public key is guaranteed and that their use is controlled. The main requirement for key management with symmetric algorithms is that the keys remain secret throughout their lifetime and that their use is controlled via key separation (i.e. using separate keys for particular tasks).
Key Generation and Storage
- Where are the keys generated, and by whom?
- Where are the keys stored?
- How are the keys stored?
Thought should also be given to whether the keys are stored encrypted or not. Where keys are stored in plaintext, then there should be appropriate controls in place to prevent unauthorised access/disclosure. Where keys are stored encrypted, access is usually controlled by way of a passphrase. This means that any decryption key (no matter what algorithm or key length is used) is, at best, only as strong as the passphrase used to protect it. If a passphrase can be easily guessed or 'brute-forced' then the 'strength' of the key is irrelevant. Having good password policies in place is therefore essential to the overall security of the system.
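In practice the passphrase is usually not the key itself: a key-encryption key is derived from it with a deliberately slow key-derivation function, which is why passphrase strength bounds the whole system. The sketch below shows PBKDF2 from Python's standard library; the passphrase, salt handling and iteration count are illustrative assumptions, not a recommended configuration.

```python
import hashlib
import secrets

# Sketch: derive a key-encryption key from a passphrase with PBKDF2.
# The high iteration count deliberately slows down brute-force guessing
# of the passphrase. All parameters here are illustrative.
passphrase = b"correct horse battery staple"   # example only - never reuse
salt = secrets.token_bytes(16)                 # random salt, stored with the key

key_encryption_key = hashlib.pbkdf2_hmac(
    "sha256", passphrase, salt, 600_000, dklen=32
)
assert len(key_encryption_key) == 32  # 256-bit derived key
```

The same passphrase and salt always derive the same key, which is what allows the encrypted key store to be reopened later.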
Size does matter when it comes to cryptographic keys. For symmetric algorithms the key needs to be sufficiently long to prevent a successful brute force attack within the period that the data actually needs to be protected (i.e. if something is going to be made public in 5 years it doesn't need to be protected for the entire lifetime of the universe!). 64-bit keys are now on the boundary of what is technically feasible to brute-force and so are generally considered too short; keys of 128 bits or upwards are usually recommended.
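The difference between 64-bit and 128-bit keys is easy to see with some back-of-the-envelope arithmetic. The attacker's guessing rate below is an assumption chosen for illustration:

```python
def years_to_brute_force(key_bits: int, guesses_per_second: float) -> float:
    """Expected years to search half the keyspace of a `key_bits`-bit key."""
    seconds = (2 ** (key_bits - 1)) / guesses_per_second
    return seconds / (60 * 60 * 24 * 365)

# Assume a generously equipped attacker testing 10^12 keys per second:
rate = 1e12
print(f"64-bit:  {years_to_brute_force(64, rate):.2f} years")   # well under a year
print(f"128-bit: {years_to_brute_force(128, rate):.1e} years")  # astronomically long
```

Even allowing the attacker several orders of magnitude more hardware, each extra bit doubles the work, which is why the jump from 64 to 128 bits moves the attack from feasible to impractical.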
For asymmetric systems the concern is instead the feasibility of deriving the private key, using mathematical techniques, from the public key. However, the greater the key length, the greater the impact on performance. The current standard default for RSA is a 2048-bit key, based on a trade-off between security and performance.
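For RSA specifically, deriving the private key from the public key amounts to factoring the modulus. A toy, textbook-RSA example makes this concrete; the numbers are deliberately tiny and completely insecure, chosen purely for illustration:

```python
# Toy textbook RSA (assumption: illustrative tiny primes only -- never use in practice).
p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)                    # private exponent

# Round trip: encrypt with the public key (n, e), decrypt with d.
m = 42
assert pow(pow(m, e, n), d, n) == m

# An attacker who can factor n recovers the private key outright:
def factor(n: int) -> tuple[int, int]:
    f = 2
    while n % f:
        f += 1
    return f, n // f

fp, fq = factor(n)
recovered_d = pow(e, -1, (fp - 1) * (fq - 1))
assert recovered_d == d                # trivial here; infeasible for a 2048-bit modulus
```

Trial division succeeds instantly on a 12-bit modulus, whereas no known classical algorithm factors a properly generated 2048-bit modulus in practical time; that gap is what the key-length recommendation rests on.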
Since one of the main requirements for symmetric cryptography is that the keys remain secret throughout their lifetime, yet the same key is used for both encryption and decryption, a central problem is how to ensure that the recipient of any ciphertext can securely obtain the decryption key. Thought should therefore be given to how keys are protected during distribution in symmetric systems. When deciding on particular solutions it may also be prudent to ask who has copies of the keys and what implications this might have.
Asymmetric cryptography looks to overcome this problem by using both a public key (for encryption) and a private key (for decryption). Public keys can, as the name would suggest, be distributed freely, but the issue then becomes one of determining the authenticity of the public key (i.e. how can you be sure that the public key actually belongs to the intended recipient?). The two main models for this are the Web of Trust and Public Key Infrastructure.
Owing to the performance issues associated with asymmetric encryption and the key distribution issues associated with symmetric encryption, hybrid systems often use symmetric encryption to encrypt the bulk of the data, and asymmetric encryption to encrypt the keys necessary for decryption.
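The hybrid structure can be sketched end-to-end in Python. Everything below is a structural illustration only: the toy textbook RSA and XOR keystream stand in for real algorithms (such as AES and RSA-OAEP) and provide no actual security.

```python
import hashlib
import secrets

# Toy asymmetric key pair (assumption: tiny, insecure numbers for illustration).
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

def stream_xor(key: bytes, data: bytes) -> bytes:
    """Stand-in 'symmetric cipher': XOR with a SHAKE-256 keystream. Toy only."""
    keystream = hashlib.shake_256(key).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, keystream))

# Sender: a fresh random session key encrypts the bulk data (fast, symmetric)...
session_key = secrets.randbelow(n - 2) + 2
ciphertext = stream_xor(session_key.to_bytes(2, "big"), b"the bulk of the data")
# ...and only the short session key is encrypted with the recipient's public key.
wrapped_key = pow(session_key, e, n)

# Recipient: unwrap the session key with the private key, then decrypt the bulk.
unwrapped = pow(wrapped_key, d, n)
plaintext = stream_xor(unwrapped.to_bytes(2, "big"), ciphertext)
assert plaintext == b"the bulk of the data"
```

The design choice is the point: the slow asymmetric operation touches only a few bytes of key material, while the fast symmetric cipher handles the payload, and the key-distribution problem is reduced to trusting one public key.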
The necessary lifetime of cryptographic keys will depend upon a number of factors, including the confidentiality requirements of the data being protected, the size of the key, the algorithm being used, and the use of the key. The lifetime of a key may also come to an end accidentally (e.g. through the compromise of a secret key), in which case there needs to be a means available to revoke and renew the keys.
Cryptographic keys should also be used only for their intended purpose. Some users, for example, will have different keys for encryption/decryption and for signing. Symmetric encryption keys in particular should only be used for a specific designated purpose and should be securely deleted when no longer needed.
As mentioned above, it is imperative to have a means in place for dealing with keys that should no longer be used, for example, because they are no longer considered secure or have been compromised. Similarly it is important to ensure that all copies of keys are destroyed when the key is no longer in use.
One final issue, particularly with regard to asymmetric encryption, is what happens when a user loses their private decryption key. Once a private key is lost it is no longer possible to decrypt any ciphertext that was encrypted using the corresponding public key, so there is a risk of the documents themselves being 'lost'. There are a number of possible mitigations against such a risk, including:
- Encrypting data to multiple keys
- Keeping backups of keys (either locally or centrally)
- Key Reconstruction
- Key Escrow
- This is where all encrypted files/communications are also encrypted to a master key. Clearly there are privacy issues with such a solution, and key escrow is a controversial and strongly debated topic. In order to maintain the privacy of users, master keys can be split into components, each of which can be weighted and distributed amongst a number of trusted persons. This reduces the risk of one person being able to abuse their position and infringe on users' privacy.
- Clearly, if any of the above practices are adopted, there will also need to be agreed policies and processes in place to define and communicate their use. For further advice please contact OxCERT. There are also possible legal/regulatory issues that will need to be taken into consideration; these are discussed briefly in the Legal Issues section.
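The idea of splitting a master key into components can be sketched with the simplest possible scheme, an all-shares-required XOR split. This is an assumption-laden illustration: real deployments wanting weighted or threshold ("k of n") recovery would use something like Shamir's secret sharing instead.

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(master: bytes, n: int) -> list[bytes]:
    """Split `master` into n shares, ALL of which are needed to recombine it.

    The first n-1 shares are uniformly random; the last is the XOR of the
    master with all of them, so any subset short of n reveals nothing.
    """
    shares = [secrets.token_bytes(len(master)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, master))
    return shares

def combine(shares: list[bytes]) -> bytes:
    return reduce(xor_bytes, shares)

master = secrets.token_bytes(32)
parts = split_key(master, 3)
assert combine(parts) == master
```

Distributing the shares among several trusted officers means no single person can reconstruct the master key alone, which directly addresses the abuse-of-position risk mentioned above.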
There should, however, be some awareness that certain legal issues may affect policy decisions, and may themselves need to be dealt with via policy. For example, different countries have legal restrictions on the import, export and use of cryptographic technologies. Care should therefore be taken if travelling with cryptographic technology (e.g. laptops using whole disk encryption). One good source of information on this matter can be found at http://rechten.uvt.nl/koops/cryptolaw/. For further advice please contact OxCERT.
Also relevant are the powers of law enforcement in the UK that require decryption keys (or the relevant plaintext) to be presented under certain sections of the Regulation of Investigatory Powers Act (RIPA). Thought needs to be given to who would be responsible for providing keys and/or plaintext if such a request was made.
Of course encryption on its own is not the answer to all security problems associated with electronically communicating and storing data. As well as strong key management, general good security practice should be followed and all users made aware of their responsibilities. If an attacker has control over a user's machine, or is able to access information in some other way (e.g. social engineering, shoulder surfing), encryption offers no protection at all. Therefore the usual good practice advice should be promoted to end users (e.g. keeping AV up to date, patching, reporting of incidents, safe browsing habits) and all users of the system - from administrators, down to end users - should be made aware of their responsibilities towards security.
Introducing new ICT systems clearly introduces new risks. The primary concerns are likely to be risks to the existing infrastructure as a result of the change (e.g. a compromised system being used to access further restricted systems) and the risk to the information processed by the new system. For more information see the risk assessment section within the toolkit.
- The person and/or group responsible for the system administration of any new ICT system should be clearly defined
- Any third party responsibilities should be clearly identified and well defined
The following should also be identified and documented:
- The operational requirements of the new system
- The security requirements of the system in terms of confidentiality, integrity and/or availability
- A list of existing systems with which the new system will interact
- A list of necessary services (and hence unnecessary services) required
- A list of required users and their necessary privileges
- How the system will be accessed (e.g. will remote access be required and, if so, where from)
- How authentication will be handled
- How/when software updates will be applied to any new ICT system
- Where the system will be placed within the network (e.g. how exposed will the system be to the outside world, and what other devices would this system provide an attacker access to)
- What changes will be needed to any existing access control mechanisms (e.g. firewalls, switches, routers)
Common mistakes to avoid when setting up new systems include:
- Leaving services open to the world unnecessarily
- Failing to apply security updates before full access to the outside world is granted
- Setting up weak passwords for test purposes or for particular services
- Setting up test devices with default weak configurations and giving them unnecessary access to the entire internet
- Switching off firewalls for "testing" purposes
- Failing to test for unexpected behaviour following config changes (e.g. testing firewall rules etc.)
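Several of the mistakes above (open services, disabled firewalls, untested config changes) can be caught with a simple reachability check before a system goes live. The hostname and port policy below are hypothetical examples, not University values:

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical policy for a new web server: only SSH, HTTP and HTTPS should answer.
expected_open = {22, 80, 443}
for port in (22, 80, 443, 3306, 5432):
    is_open = port_is_open("localhost", port)
    flag = "  <-- UNEXPECTED" if is_open != (port in expected_open) else ""
    print(f"port {port}: {'open' if is_open else 'closed'}{flag}")
```

A sweep like this is no substitute for a proper firewall review, but running it from outside the local network quickly shows what an attacker can actually reach.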
When introducing new systems the risks to the information to be processed should also be taken into consideration. When testing new ICT systems "live" data should preferably not be used. This is particularly the case when that data includes information that could be classified as personal, sensitive or confidential. The following should also be identified:
- The person and/or group responsible for any information processed with the system (i.e. the information owner)
- The classification of that information and the security requirements in terms of confidentiality, integrity and/or availability
- The impact on the unit/University of a breach of those security requirements
Any risk assessment should include all relevant user groups so that the security requirements and risks can be successfully identified. The information owner should be responsible for signing off any residual risk.
It is important to successfully identify the security requirements of the system and the information to be processed, as this will help to identify appropriate controls. It is also important to remember that risk can be mitigated by reducing the threat, the vulnerability and/or the impact; security controls can therefore target prevention, detection and/or reaction as appropriate. For example, if a system doesn't handle any sensitive information but needs a high degree of availability, it may be sensible to focus on detecting and responding to security incidents so as to minimise the impact on availability. This might include making sure log files are secured so that it can be determined how an attacker got access, and having a plan in place to re-install or replace any compromised system. Alternatively, for systems that require a high degree of confidentiality, much more emphasis should be placed on the initial risk assessment and preventative controls. In such cases, reactive controls may include the need to identify the University Data Protection Officer and/or Press Office.
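One detective control mentioned above, securing log files so that tampering can be spotted, can be sketched as a hash chain. The function and field layout are hypothetical; the point is only the technique of linking each entry to everything before it:

```python
import hashlib

def chain_logs(entries: list[str]) -> list[tuple[str, str]]:
    """Pair each log entry with a hash that links it to all earlier entries.

    Modifying any entry changes every subsequent hash, so tampering is
    detectable, provided the final hash is stored somewhere off-host.
    """
    prev = "0" * 64
    chained = []
    for entry in entries:
        prev = hashlib.sha256((prev + entry).encode()).hexdigest()
        chained.append((entry, prev))
    return chained

log = chain_logs(["login alice", "sudo alice", "logout alice"])
tampered = chain_logs(["login alice", "sudo mallory", "logout alice"])
assert log[0][1] == tampered[0][1]   # entries before the change still agree
assert log[1][1] != tampered[1][1]   # the altered entry breaks the chain...
assert log[2][1] != tampered[2][1]   # ...and every hash after it
```

In practice this role is usually filled by shipping logs in real time to a separate, write-restricted log server, but the chaining idea is the same: an intruder on the compromised host cannot quietly rewrite history.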
Remember when identifying security requirements that OxCERT will require evidence of how a machine was compromised, and that the vulnerability has been closed, before any router blocks will be lifted. OxCERT can also be contacted for specialist advice when setting up new systems.
Testing should be carried out in order to confirm that the security measures put in place meet the security requirements of the system/information. The results of any testing should be documented. Only when testing has been successfully completed should "live" data be introduced.
Any risk assessment and testing procedures should be carried out periodically for existing systems. The frequency of testing will vary depending on the requirements of the system, but the review period should be documented and justifiable.