A PDF copy of the entire toolkit is available: Information Security Toolkit [PDF 328 KB] as at 20.10.11
Units should have their own information security policy in place. The security policy could be part of an existing policy framework or a policy in its own right. Either way, it should describe the information security objectives for the unit and demonstrate commitment, at a senior level within the unit, towards achieving those objectives. Of course, the unit may simply adopt the University's Information Security Policy and sub-policies as its own. However, it may be appropriate for the unit to amend them, add specifics to them, or create its own policy from scratch.
The unit's information security policy should be brought to the attention of, and made available to, all staff, students, third parties and other persons who may interact with the unit's and/or University's information systems. There should be an auditable process to demonstrate that all users are made aware of the security policies and are reminded of them on a regular basis. This could, for example, involve referencing the policies in user registration processes. Reminding users of the policies once a year, or when significant changes occur, would likely be considered reasonable.
The information security policy should also describe the structure for managing information security within the unit. Normally the management of security will be included with the management of ICT and such frameworks usually exist already, consisting of a departmental/college IT committee or equivalent, chaired by a senior member of the college/department. For smaller departments this may be subsumed within a structure organised at a divisional or faculty level, though final responsibility rests with the Head of Department or unit.
All users within the unit have responsibilities for information security, and the Head of Department, College, Hall or other administrative unit is ultimately responsible for ensuring that users are aware of those responsibilities. Responsibilities will vary, ranging from configuring and maintaining ICT systems (usually assigned to registered IT support staff) to the duties of all end users (e.g. not disclosing passwords or allowing unauthorised use of systems or accounts). When handling information classified as confidential or sensitive, users should be explicitly made aware of their responsibilities for that particular dataset (e.g. not copying it to personal computers, not storing it unencrypted, etc.). All users have a responsibility to report any breaches of information security policies.
The ICT/information security policies should be reviewed at least once a year. There should be means of demonstrating that the policy has been reviewed (e.g. minutes of meeting at which it was discussed). Policies should also be reviewed in the event of significant changes or in the event of security breaches.
Whenever third parties are dealt with, the security requirements of the University/unit must be brought to the attention of the relevant party and included in any contract. Security requirements of third parties should also be included in any such agreements and/or contracts from the outset. In particular, such agreements should clearly state the roles and responsibilities for specific aspects of security including, for example, applying security updates, auditing systems and monitoring logs. Responsibilities for incident response should also be included from the beginning, and all third parties should be made aware of the requirements of the unit and OxCERT for dealing with incidents (e.g. maintaining appropriate logs to be able to trace any misuse). This should explicitly include notification of any security breaches that the third party becomes aware of. Situations in which third parties are involved include:
- outsourcing the design, development or operation of information systems (e.g. websites)
- when access to the University's information systems is granted from remote locations where the facilities are not under the control of the University
- when users who are not staff or students of the University are given access to the information systems
- when any University/unit owned information is shared with non-University members (e.g. research data)
When entering into such arrangements, the following should be considered:
- the reputation of the third party
- the security policies and management framework of the third party
- their approach towards incident response
- the level of sensitivity of any data being shared
- any possible access to information other than that which is intended
- where (physically) the information will be stored/handled
- the laws in any locations that the information will be stored/handled
- what controls the third party has in place to prevent unauthorised access to data
- whether the third party is regularly audited and/or accredited
- any previous security breaches
This section of the toolkit adds further detail to the corresponding Personnel, Recruitment and Training sub-policy within the University's Information Security Policy.
Roles and responsibilities should include the requirement to implement and act in accordance with the information security policies, and should be defined and clearly communicated during the pre-employment process. Security roles and responsibilities can be included in job definitions and/or contracts. A code of conduct or acceptable use policy may also be used to cover the responsibilities of staff, students, contractors, conference guests or others. Where appropriate, verification checks on applicants for new positions should be carried out before interview, including, for example, character references. For positions that will allow access to sensitive or confidential information, further background checks and vetting may be appropriate.
All parties should sign up to the University's policies before being granted access to services. Sign-up can take the form of a link to terms and conditions when registering for an account, or can be done via physical means such as inclusion in contracts of employment. It is important, however, to be able to demonstrate that users actively agree to abide by the policies. Similarly, means should be in place which can clearly demonstrate that all users are reminded of the policies on a regular basis. Once a year, or whenever significant changes occur, should generally suffice. For groups who have short-term access to information systems (e.g. contractors) more frequent reminders may be appropriate. Regular reminders could simply be via communications such as email, or may be an interactive process as part, for example, of renewing user accounts and/or changing passwords.
Awareness, education and training should be suitable and relevant to the person's role, responsibilities and skills. They should include information on known threats, who to contact for further security advice and the proper channels for reporting information security incidents. Awareness training should commence with a formal induction process designed to introduce the unit's and University's policies and expectations before access to information services is granted. For registered IT support staff, ITS3 at OUCS provide inductions on a regular basis which include briefings on incident handling from OxCERT. However, all staff should be made aware of their own responsibilities for the secure handling of information.
Training might include areas such as security requirements for particular datasets and/or units, legal responsibilities, and training on technical controls that can be put in place. However, it is vital (particularly when handling information classed as sensitive or confidential) that the limitations of technical controls are widely communicated and that social and personnel controls are not overlooked. For example, when data is to be protected by encryption, all staff having authorised access to the data must be made explicitly aware of the requirements for handling the data (e.g. copies of the data must not be made onto personal devices).
Disciplinary processes can be used as a deterrent but should also ensure fair and correct treatment for users who are suspected of committing a breach of security, taking into account first offences, level of skill, training etc. In serious cases the process should allow for the immediate removal of duties, access rights and privileges.
This section of the toolkit adds further detail to the corresponding Operations sub-policy within the University's Information Security Policy.
It takes only a single lapse to place the University's or unit's data and resources at risk. Therefore all staff who are managing ICT systems should be trained and qualified as appropriate to their role. This should include making staff aware of their roles and responsibilities when it comes to information security. Information security training should be included as part of staff inductions and be ongoing as relevant to their role. In particular, security training and awareness should include:
- Awareness of the risks to information
- How to prevent the compromise and/or accidental disclosure of sensitive information
- The need to be aware of the latest threats and vulnerabilities
- Maintenance, monitoring and auditing of log files
- Security testing
- How to report security incidents
- Security incident handling and business continuity
- Legal compliance
OUCS offer inductions for new IT support staff, which include security incident handling within the University, and ITS3 continue to offer a range of training courses. For more details, or to request courses in a particular field, please contact ITS3.
JANET also offer a number of courses in the field of information security. For more details please see: JANET training Courses.
Not all system owners/administrators will necessarily be registered IT support staff; where this is the case, however, they should still be made aware of their responsibilities towards information security and made explicitly aware of ICT regulations and policies.
Having documented procedures in place can help ensure sensitive data is processed in a secure and efficient manner. It also provides a means to ensure that best practice is being followed by all staff (not just IT staff), and that the availability and integrity of ICT systems is maintained.
Effective documentation of operating procedures can save duplication of effort and facilitate the most efficient use of ICT systems and staff time. Failure to maintain appropriate documentation can lead to operational shortcuts, increased system downtime, processing errors, problems with auditing and difficulties in training new staff.
All documentation should be reviewed at regular intervals and updated when any changes in operating procedures are made. It is important to ensure that documentation is made available to all users who need it. Thought should therefore be given to how access to documentation will be controlled and to the circumstances in which it will be required (e.g. in disaster recovery and/or business continuity situations). For example, you may consider keeping multiple copies (including hard copies) of important documentation.
- Means used to connect to networks
- IP address, netmask and routing information
- Any authorised/unauthorised network access
- Means and methods for remote access
- Network services that should/shouldn't be enabled
- Switchports and patching information
- Cabling requirements and organisation
- Local firewall implementations
- Startup and shutdown procedures
- Operating system details
- Inter-system dependencies
- Scheduled tasks
- User and administrator accounts
- Running processes and services
Audit trails and system logs
- Required levels of logging
- Required system events, access logs and errors
- Required logging of system usage
- Logging policies including log rotation and deletion
Software and services
Backups of data are primarily to maintain availability of information. Threats to availability include accidental/deliberate damage, loss and theft. It is therefore the responsibility of information asset owners to ensure that all information is backed up appropriately and that processes for restoring data are thoroughly tested to ensure successful system recovery. Backup policies should reflect the level of information that it is necessary to back up and how frequently backups should be made. This will depend largely on the importance of the information to the unit/University and the frequency with which the information is changed/updated. Such factors will also determine whether it is appropriate to take full backups or differential/incremental backups.
Backups should be stored in a suitable location so as to reflect the security requirements of the information. Usually the backup should be kept in a remote location, or on a device kept separately from the original data, so that it is not affected by any damage to or loss of the primary copy. What is appropriate will depend on the risk you are trying to mitigate. For example, backing up a laptop to a removable USB drive and storing the drive in the same location as the laptop may mitigate the risk of a hardware failure, but not of theft of the device. It is also important to apply the same level of security controls to any backup copies of data as to the original and, where confidentiality is a primary concern, backups should be protected by means of encryption. When backing up to removable devices, consideration should also be given to the expected longevity of the media so as to protect the integrity of the backup for an appropriate period of time.
Restoration processes should be clearly defined, documented and regularly tested to ensure they are effective. Thought should be given to how long it will take to restore data in any given circumstance. For critical systems in particular, backup procedures should cover all systems information, applications and data necessary to recover the entire system in the event of a disaster.
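Where the unit maintains its own scripted backups, a lightweight integrity check can form part of testing the restoration process. The Python sketch below is a minimal illustration only, assuming hypothetical source and backup paths; it compares SHA-256 checksums of files in the source tree against the backup copy and reports anything missing or differing.

```python
import hashlib
from pathlib import Path

# Hypothetical locations; substitute the unit's own source and backup paths.
SOURCE = Path("/srv/research-data")
BACKUP = Path("/mnt/offsite-backup/research-data")

def sha256(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source: Path, backup: Path) -> list[str]:
    """List files that are missing from, or differ in, the backup copy."""
    problems = []
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        copy = backup / src_file.relative_to(source)
        if not copy.exists():
            problems.append(f"missing: {copy}")
        elif sha256(src_file) != sha256(copy):
            problems.append(f"differs: {copy}")
    return problems

if __name__ == "__main__":
    for problem in verify_backup(SOURCE, BACKUP):
        print(problem)
```

A check of this kind only confirms that copies exist and are intact; it is not a substitute for periodically performing a full test restore.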
Archiving of information and documents must take place with due consideration for legal, regulatory and business issues. The policy for archiving information should be set by the unit that is responsible for that information.
For staff and postgraduate students, OUCS provide a central backup service for desktops, laptops and servers. For more detail please see http://www.oucs.ox.ac.uk/hfs/
Information on the extent to which the University may monitor the use of its ICT systems can be found within the Regulations Relating to the use of Information Technology Facilities. All personnel, and IT staff in particular, must ensure they comply with these regulations.
Beyond that, the level of monitoring for individual systems should be appropriate to the risk, as determined by a risk assessment. It is, however, imperative that the source of any abusive or malicious network traffic can easily be traced and isolated. This may require being able to trace the end user and/or computer responsible. All units should therefore follow OxCERT's guidance on logging, especially where NAT, routed networks and/or multi-user systems are used. All logs must be associated with an accurate time source. If units do not want to run their own NTP service then OUCS offer a central NTP service.
Clearly, log file management is a balance between the amount of data collected and how useful it is. Sufficient logs should be collected to be able to identify the source of a problem without flooding log files with excessive data. Key things to record in log files might include:
- Authorised access
- Privileged operations
- Unauthorised access attempts
- System alerts and failures
- Network traffic
Failure to provide sufficient logs can severely restrict investigations into any kind of problem and/or security incident; inability to trace the source of problems is one of the main reasons for delayed incident resolution. It is therefore vital to ensure not only that the correct information is being collected from the outset, but that the logs are available to those that need them and can be relied upon. Logs should therefore be periodically tested in order to ensure that the process is working as expected. It is important to bear in mind log retention and roll-over periods. Thought should also be given as to how the integrity of the logs can be maintained. If a system is compromised, for example, it is likely that the logs will be tampered with by the attacker. Using a separate system, such as a central syslog server, could therefore be considered.
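As an illustration of that suggestion, the short Python sketch below forwards log events to a central syslog server using the standard library's SysLogHandler; the hostname and example messages are placeholders. Keeping a copy of logs on a separate host makes it considerably harder for an attacker who compromises a machine to cover their tracks.

```python
import logging
import logging.handlers

# Hypothetical central log host; replace with the unit's own syslog server.
SYSLOG_HOST = ("loghost.example.ox.ac.uk", 514)

logger = logging.getLogger("unit-auth")
logger.setLevel(logging.INFO)

# Forward a copy of each event to the remote syslog server over UDP,
# in addition to whatever local logging the application already does.
handler = logging.handlers.SysLogHandler(
    address=SYSLOG_HOST,
    facility=logging.handlers.SysLogHandler.LOG_AUTH,
)
handler.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))
logger.addHandler(handler)

# Example events of the kinds listed above.
logger.info("authorised access: user=abcd1234 service=ssh src=192.0.2.10")
logger.warning("unauthorised access attempt: user=root service=ssh src=198.51.100.7")
```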
This should be appropriate to the use of the system and, for ICT systems managed by the unit, should include authorised personnel only (usually ICT staff). Best practice is to allow access only with the minimum privileges necessary for the task in hand. It is advisable that all unit/University owned computers (desktops and laptops) have a designated administrator, which could be a group or an individual, who is responsible for that machine. They will, for example, have the ability to create user accounts, assign privileges, configure the machine, install and remove software and apply security related updates. Where other users need access to that machine, the administrator would be responsible for assigning the appropriate privileges to them.
For personally owned machines, it is likely that the owner will also be the system administrator. In these circumstances, users should be made fully aware of the ICT Regulations, the Information Security Policies and any local acceptable use policy. All system administrators should be made aware of their security responsibilities, particularly in relation to security incidents. For example, where personally owned machines are concerned, failing to allow local IT support staff to inspect machines, or unwillingness to re-install the operating system, can severely hamper, or even prevent entirely, incident resolution.
Malicious code includes malicious software such as viruses, worms, trojans etc. Commonly the term malware is used to describe all malicious software. Malicious code, however, can also be taken to include remote and/or mobile code that could be executed to exploit some vulnerability in a local system. This includes the likes of cross-site-scripting (XSS) attacks, SQL (or other code) injection attacks and buffer overflow attacks. In fact, any attack which includes the unauthorised running of code on a computer can be counted as malicious code for the purposes of this policy and toolkit.
Protection against malicious code should be appropriate and based on an assessment of the risks to individual systems and to the University as a whole. Thought should be given to the emphasis which should be placed on prevention, detection and recovery. What is appropriate for one system may be different for others but the decision should be justifiable and based on the risk assessment. For example, where information systems process personal, sensitive or confidential information, priority may be given to prevention. On the other hand, it may be deemed an acceptable risk to have a machine compromised if detection and recovery processes are robust and the risk of disclosure of information is considered low. Similarly, desktop machines will be treated differently to servers as the impact of certain actions will be different. It is, however, important to fully understand the risks.
Nowadays, malicious code is typically used to gain unauthorised access to systems and/or resources. For example, this could be installing a key-logger to record passwords which will be used in further attacks, or the remote execution of code in order to gain full control over systems. Most commonly, those behind the propagation of malware are motivated by money; it is therefore in their interest for the malware to be stealthy and go undetected. Recently, much has been made in the news about advanced persistent threats and malware being used for espionage at a national level. Given the sensitivity of certain research, for example, it is not beyond the realms of possibility that the University of Oxford and/or certain high profile/senior members could be the victim of targeted attacks.
The risks associated with malicious code are therefore numerous. Whilst the obvious risks may be those surrounding the unauthorised disclosure of sensitive data, the following lists just some of the risks associated with malicious code:
- Risk to the safety and well being of individuals
- Reputational damage
- Loss of important research bids/contracts
- Direct financial loss (for example fines from the Information Commissioner)
- Legal action undertaken against the University or a college
- Further abuse of resources (e.g. compromised accounts being used for spam runs)
- Denial of service to legitimate users
- Deletion, modification or corruption of data
- Disciplinary action
- Reputational damage for the University and/or local unit
- Dissatisfied users
- Loss of productivity and potential income
- Unhappy management
The chosen controls should be based on the identified risks. The controls should be proportionate, and the cost of implementation should not outweigh the cost of 'doing nothing'. Appropriate controls will also depend on the nature of the system e.g. whether it is a personal laptop/desktop, university/unit owned or server.
Malware that targets user machines can propagate in a number of ways, including automatic network propagation (in the case of worms), email attachments, drive-by downloads, infected portable devices such as USB sticks, trojaned software and social engineering (e.g. fake AV malware). The most common current attack vectors are probably code hidden in websites (drive-by downloads), email attachments and USB devices, all of which are used to exploit vulnerabilities in a user's operating system and/or other software.
Prevention can be difficult as attacks are often sophisticated and well resourced. In addition, the perception is often that increased security means decreased usability. However, at the very least, the following actions should be taken:
- Running legitimate, up-to-date antivirus software and scanning your machine regularly
- Using automatic updates and installing critical security updates for your operating system as soon as possible
- Keeping third party software (such as Adobe products or web browsers) up to date with the latest version
- Not clicking on links or opening attachments in unsolicited email
- Not installing pirated software
- Making sure your operating system's firewall is on
- Only visiting reputable websites
- Not using untrusted USB devices or using your own USB devices in untrusted machines
Realistically most users cannot be expected to detect malware other than by running up-to-date AV and relying on alerts. However if other suspicious symptoms are encountered (e.g. a dramatic increase in advertising pop-ups, sudden performance issues, unexpected email replies or deletion) then users should get their machine checked out by local ITSS or the OUCS Help Centre.
This may not always be as simple as it first appears - even if your AV detects and removes some malware from your machine, there may be malware still present. It is also difficult to be certain how long ago the machine was compromised. Having your machine compromised decreases the trust you can place upon it and recovery steps will depend on how much risk you are willing to accept. Recovery steps will also depend on whether the machine is owned by the university/unit or an individual. For university owned machines, for example, it may be simpler to re-image the machine. Again, the very minimum steps that need to be taken are to:
- Provide evidence that some malware has been found and removed (e.g. AV logs, files which have been found and deleted etc.)
- Ensure that the machine's operating system and all software is fully patched and up-to-date
- Make sure that the machine is successfully scanned with up-to-date AV software
Of course the only way to be 100% sure that the machine is clean is to re-install the operating system back to its original state. Restore points can be used but there should be some degree of assurance that the machine is not being restored to a compromised state. Similarly, important work and software may need to be restored from backup whilst exercising caution so as to not restore compromised material. Where re-installation is not possible, additional steps could include:
While sophisticated attacks can, and do, happen, the vast majority of incidents that OxCERT deal with occur because simple controls have not been put in place. Probably the most common cause is out-of-date, vulnerable software (often third party software). It is therefore imperative that servers are kept as up-to-date as possible in terms of the operating system and all third party software. System administrators should pay particular attention to the software being run and maintain a watch for vulnerabilities in any such software and/or the underlying operating system. This is just as important as applying patches when they become available as there may be exploitable vulnerabilities for which a fix is yet to be released.
OxCERT do attempt to provide information on technical vulnerabilities of relevance and details can be found at Security Bulletins. It should be stressed, however, that OxCERT is not responsible for maintaining any University systems and it is up to system administrators to be aware of vulnerabilities on their own systems. OxCERT are not able to notify ITSS of all existing vulnerabilities, and system administrators should consider joining the appropriate vendor mailing lists for vulnerability announcements. Where vulnerabilities do exist, system administrators (or other persons responsible for ICT systems) should evaluate the unit's exposure and take appropriate action. Usually this will involve a plan to apply security updates; however, where this is not immediately possible, other measures MUST be taken in accordance with the identified risk. This could be anything from disabling services, restricting access or following vendors' mitigation guides to (in extreme cases) temporarily removing access to the server.
Often code is written purely for functionality, with security not even taken into consideration. Units should therefore ensure that anyone employed (either internally or externally) to write code for servers is appropriately trained and fully aware of security issues. In particular, any code that accepts user input of any kind should properly sanitise and validate that input. More information, and details on how to prevent SQL injection (and other similar) attacks, can be found within Suggested Technical Solutions.
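By way of illustration, the sketch below uses Python's built-in sqlite3 module and a hypothetical users table to contrast an unsafe query built by string concatenation with a parameterised query; the parameterised form treats user input purely as data, which defeats the classic injection attempt shown.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.ox.ac.uk')")

user_input = "alice' OR '1'='1"   # a typical injection attempt

# UNSAFE: the attacker-controlled string becomes part of the SQL statement.
unsafe_query = f"SELECT email FROM users WHERE username = '{user_input}'"
print(conn.execute(unsafe_query).fetchall())   # returns every row

# SAFE: a parameterised query treats the input purely as data.
safe_query = "SELECT email FROM users WHERE username = ?"
print(conn.execute(safe_query, (user_input,)).fetchall())   # returns nothing
```

The same principle applies whatever language or database is in use: never build queries by pasting user input into the statement text.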
- Good authentication and password policies
- The chosen authentication methods should be appropriate to the system in question. Systems with poor authentication methods face a risk of unauthorised access. Passwords, for example, may be guessed, shared, key-logged or brute-forced. If passwords are the only method of authentication these threats should be addressed. One of the main threats to passwords today is that of key-logging malware. Therefore system administrator passwords should not be used on untrusted machines. OxCERT have witnessed examples of Windows domain administrator passwords, for example, being used to authenticate IT support staff on machines compromised by key-logging malware leading to the compromise of the entire domain.
- Access to services
- Access to services should be restricted as far as possible without adversely affecting availability. For certain services (e.g. HTTP on a web server) this may not be possible. Remote access services (SSH, Remote Desktop, VNC etc.) should, in particular, be restricted where possible. This might include restricting access to specific static IP addresses or certain IP address ranges. Where static addresses are not possible, using a VPN range could be considered. Even using the University's general VPN range for all users is preferable to access from the entire world. If access really is required from anywhere, then other controls MUST be put in place to reduce the considerable risks that this poses.
For services which are intentionally open to the world (e.g. HTTP, SMTP etc.) the use of a DMZ should be considered in order to prevent any compromise being used to gain further access to internal resources.
- Unnecessary privileges
- All software should be run with the minimal privilege necessary to do the job at hand and should not, for example, be run as root. Similarly user accounts should have the minimum privileges necessary by default.
- Antivirus software
- Where possible anti-virus software should be run. However for some servers this is not always possible or advisable. Where this is the case other measures MUST be taken (see other controls in this list).
- Configuration errors and testing
- Often security breaches occur as a result of simple configuration errors combined with a lack of testing. Therefore, when configuration changes are being made, they should be tested to ensure that no unexpected consequences occur. Specific test systems may also be used, though thought should be given to the data sets used in test systems. For example, where personal or sensitive personal information is processed, it is advisable for test systems to use simulated, rather than live, data.
Configuration changes that could increase the likelihood of security incidents might include the following (and many more):
Detection of security breaches is crucial. Often compromised servers are discovered by OxCERT, only for it to emerge that initial access occurred several months earlier and that the system has likely been accessed by several unauthorised groups or individuals. OxCERT do, of course, monitor the network for signs of unauthorised access to systems and will alert ITSS to any breaches they are aware of. However, this should be seen as the last line of defence; system administrators remain responsible for monitoring their own systems/services. The following list gives just some examples of techniques that can be used to detect security incidents:
In order to recover from system compromises, it is important to understand how the machine was compromised so that any identified vulnerabilities can be fixed. Usually log files from the local machine would be used to ascertain the method of entry. These could be supplemented by network flow data which can be used to identify IP addresses and times to look for in the log files.
As well as understanding how the attacker gained entry in the first place, it is also important to understand what the attacker did, in order to remove any outstanding malware or backdoors and/or to minimise the impact of any other actions, such as access to confidential information (e.g. password files). For example, looking for files and folders created around the time of the compromise, or checking the bash history of compromised accounts, may provide clues as to what the attacker did. Similarly, running services should be checked to make sure they are all authorised. More information on recovery can be found in the suggested technical solutions section.
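As a rough sketch of one of the checks described above, the Python below walks a directory tree and lists files modified within a window around a suspected compromise time; the path and timestamps are purely illustrative, and in practice such checks would ideally be run against a copy of the disk rather than the live, possibly untrustworthy, system.

```python
import os
from datetime import datetime, timedelta
from pathlib import Path

# Assumed values for illustration: the tree to examine and the
# suspected time of compromise, with a window either side.
ROOT = Path("/var/www")
COMPROMISE_TIME = datetime(2011, 10, 20, 3, 15)
WINDOW = timedelta(hours=6)

start = (COMPROMISE_TIME - WINDOW).timestamp()
end = (COMPROMISE_TIME + WINDOW).timestamp()

for dirpath, _dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        path = Path(dirpath) / name
        try:
            mtime = path.stat().st_mtime
        except OSError:
            continue  # broken symlink, permission error etc.
        if start <= mtime <= end:
            print(datetime.fromtimestamp(mtime), path)
```

Bear in mind that an attacker may alter file timestamps, so this is only one source of clues alongside logs and network flow data.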
This section of the toolkit adds further detail to the corresponding Network Management sub-policy within the University's Information Security Policy.
The unit is responsible for all connections on the unit's side of the FRODO box. The University does not operate an organisational level firewall at the JANET link. Whilst a small number of ports are blocked inbound by router access control lists, this should NOT be seen as sufficient protection for individual units. Instead, the backbone network should be treated like the Internet and each unit should have its own means of controlling traffic to and from its network. Usually this would mean operating a unit-level firewall and/or a series of firewalls at strategic locations in order to prevent unauthorised network traffic. Where units do not operate a firewall there should be documented justification for this decision and, where appropriate, alternative controls should be put in place. This could include, for example, switch or router access control lists. Firewalls or other devices for restricting information flows should be based on source and destination address checking mechanisms.
Any firewall rulesets (or other means of access control) should be in line with an agreed access control policy. A good starting point would be to opt for a default deny inbound policy and a default allow outbound, though there may be good reasons for choosing other policies. Whatever the setup, only authorised traffic should be allowed to traverse the network, so it is advised that access is restricted to necessary locations and necessary services only. For example, where remote access is required (e.g. SSH or Remote Desktop), this could be restricted to a known set of IP addresses such as a static or VPN range. Only access to the necessary ports should be allowed, and it may be worth considering the use of non-standard ports, if appropriate.
Firewall rules and other access control lists should be checked at regular intervals to ensure they are behaving as expected. This could be done on a monthly basis and should always be done when significant changes are made to the configuration. Units should have procedures in place to ensure that changes to firewall configurations are controlled, and rulesets should always be checked after configuration changes or troubleshooting. A full record of any changes should be maintained.
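The address-range restrictions described above ultimately come down to testing whether a source address falls within an agreed set of networks. The Python sketch below illustrates that check using the standard ipaddress module; the ranges shown are documentation examples, not the University's actual allocations.

```python
import ipaddress

# Placeholder ranges for illustration only; a real ruleset would use the
# unit's own static ranges and the VPN range agreed in the access policy.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("192.0.2.0/24"),     # e.g. unit office static range
    ipaddress.ip_network("198.51.100.0/24"),  # e.g. VPN address pool
]

def is_permitted(source: str) -> bool:
    """Return True if the source address falls inside an allowed network."""
    addr = ipaddress.ip_address(source)
    return any(addr in net for net in ALLOWED_NETWORKS)

for client in ("192.0.2.45", "203.0.113.9"):
    print(client, "allowed" if is_permitted(client) else "denied")
```

The same logic can be used when auditing an exported ruleset, to confirm that the addresses actually permitted match those listed in the access control policy.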
Network management and control should ensure the security of information in networks and the protection of connected services from unauthorised access. Responsibilities for network management should therefore be clearly assigned and all networks should be managed by suitably qualified and experienced staff. This is in order to avoid costly errors, inappropriate access to network devices (e.g. from non IT support staff) and/or slow responses to fixing problems which may have an impact on the unit and/or University as a whole. ITS3 at OUCS can help to organise appropriate training courses.
A list of services available to Network Administrators can be found within the OxCERT webpages. This includes any chargeable services. NSMS also offer a chargeable network management service, details of which can be found at: NSMS network management service.
Any network services and their security requirements should be clearly identified. Such services might include the provision of connections, private network services and managed security solutions such as firewalls and intrusion detection systems. Security features of services could be technology such as authentication, encryption etc. or procedures to restrict access to services or applications.
Where possible network traffic should be appropriately segregated with routing and access controls between the domains. This could include, for example, separating traffic on untrusted, "public" networks, from that of staff and students. Thought may also be given to segregating critical assets where the loss or compromise of that asset would have a big impact on the operation of the unit. DMZs could be used, for example, to protect access to major services such as mail servers, web servers or domain controllers. Consideration should also be given to the segregation of wireless networks from internal and private networks.
Appropriate physical security for network cabling will depend on the physical environment and the sensitivity of any data being transferred. Particular attention should be paid to cabling which is exposed in public areas or exposed to the physical elements. For example, cabling and other network equipment may be at risk of flooding in parts of Oxford and appropriate controls should be implemented to mitigate the risk. Where particular network links carry sensitive data, or where availability is a key priority, protection should be implemented to protect from interception or accidental damage.
This section of the toolkit adds further detail to the corresponding Access Control sub-policy within the University's Information Security Policy.
Local access control policies will define who is allowed access to which physical locations and logical resources. This could refer to individuals but more likely will refer to specific roles and/or user groups. There needn't be one single, over-arching access control policy and it is, perhaps, more likely that access control policies will refer to specific locations/resources. For example, you may have a data centre/machine room policy which only authorises access to system administrators, or a policy that only allows members of a particular committee access to the minutes of their meetings.
However you should ensure that there is some form of policy for all resources (even if it just means that access is allowed to anyone). For computing systems, information systems and peripherals the default policy should be that access is not allowed and any policies should explicitly allow access to particular users/groups/roles.
How often access controls should be reviewed will depend on the specific resource. Access to confidential information, for example, should be reviewed on a more regular basis (perhaps once a month) than access to public areas (which may be reviewed annually). Any access control policies should be reviewed whenever there are significant changes such as changing roles or staff/students leaving/arriving.
Robust identification and authentication means that the methods of identification and authentication should be sufficient to be able to trace any misuse/abuse of systems to an individual user. Identification of users requires sufficient steps to be taken to ensure a claimed identity is genuine. Many identification and authentication systems will therefore be based on the user's University card, including a user's Single Sign On (SSO) username. Users must not share access to their SSO account and, where group access is required, project accounts can be considered. Similarly, where access is required to another user's mailbox, access should be delegated via Nexus. SSO/Nexus passwords must not be shared.
A number of authentication methods could be used, the most common being a username and password. Secure logon controls should include not sending passwords in clear text and may include controls such as limiting the number of logon attempts (bearing in mind the potential for denial of service) and/or sending warnings to system administrators when thresholds are reached. When implementing authentication solutions, provision should be made for the risk of keyloggers and, for critical systems, further authentication methods should be considered, e.g. two-factor authentication. The University can provide two-factor authentication using one-time passwords via SMS for particular services where necessary.
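As a sketch of the logon-attempt limiting mentioned above, the Python below keeps a per-account count of recent failures and raises an alert once an arbitrary, illustrative threshold is reached; a real implementation would sit inside the authentication service itself and weigh the denial-of-service trade-off noted above.

```python
import time
from collections import defaultdict

MAX_FAILURES = 5          # illustrative threshold, not a recommendation
LOCKOUT_SECONDS = 900     # how long failures are counted / account stays flagged

failures = defaultdict(list)   # username -> timestamps of recent failures

def record_failure(username: str) -> None:
    """Record a failed logon and warn when the threshold is reached."""
    now = time.time()
    recent = [t for t in failures[username] if now - t < LOCKOUT_SECONDS]
    recent.append(now)
    failures[username] = recent
    if len(recent) >= MAX_FAILURES:
        # In a real system this would alert the administrator and/or
        # temporarily refuse further attempts for this account.
        print(f"ALERT: {len(recent)} failed logons for {username!r}")

def is_locked(username: str) -> bool:
    """True if the account has reached the failure threshold recently."""
    now = time.time()
    recent = [t for t in failures[username] if now - t < LOCKOUT_SECONDS]
    return len(recent) >= MAX_FAILURES

for _ in range(6):
    record_failure("abcd1234")
print("locked:", is_locked("abcd1234"))
```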
For remote access the level of authentication should be appropriate to the identified risk. In many cases, Webauth or the University's central VPN service will be acceptable. However, for access to information of a sensitive or confidential nature, more specific access control systems may be appropriate. In such circumstances specific access control policies should be in place to define who is allowed access, from where, and how copies of the data will be controlled. The Classification of Information Guidelines should be consulted for further advice. University remote access accounts must not be shared. Users are responsible for any misuse of their remote access account, including accidental misuse (e.g. leaving yourself logged in). Common occurrences of such misuse include copyright infringement, malware and unauthorised access to libraries and resources. All of the above will lead to remote access accounts being disabled and users will be held accountable for the use of their own account.
The length of sessions needs to be a balance between usability and security. For devices that are publicly available, or when allowing access to sensitive or confidential resources, sessions should be timed out sooner than devices in private office areas. SSO sessions typically last 10 hours, so this should be taken into consideration when deciding if this is an appropriate means of authentication. Of course other means can be taken to mitigate the risk which can include human factors such as putting up reminder signs in public areas or having (for example) help centre staff routinely patrol for inactive but open sessions.
For some systems it may not be appropriate and/or practicable to individually authenticate every single user. Typical examples include public kiosk machines in a help centre environment. The risk here is that such systems could be misused in such a way as to damage the reputation of the unit/University, break the law, or be compromised by (for example) key-loggers which may harvest user information. Therefore, where it is not possible/appropriate to authenticate every user, a specific risk assessment must be carried out and other steps taken to mitigate the risk. Often this will include reducing the chance of abuse by locking down the system and only allowing access to specific resources. However, other means of monitoring or authentication could be implemented, such as CCTV or logging of user access to public areas.
Individual devices should be authenticated on the network so as to prevent unauthorised use and ensure all equipment is permitted to connect to the network. Examples would include 802.1X or MAC address registration.
Access to diagnostic, remote management ports and other similar services should be restricted to authorised users only, and the level of restriction should reflect the importance of the service. Scanning of remote access ports such as SSH is extremely common, so access should be restricted as far as possible without adverse effects on availability.
Password management systems should be designed so as to maintain accountability. Good practice might include allowing users to choose passwords, enforcing password quality, preventing password re-use and protecting passwords in transit and in storage. Users should be advised to use passwords on personal machines that will be connecting to the University network, as the responsibility for misuse of their machine lies with them.
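One part of protecting passwords in storage is never keeping them in plain text. The sketch below shows one common approach, using Python's standard hashlib with a per-user salt and PBKDF2 so that only a salted, derived key is ever stored; the iteration count is an illustrative figure rather than a recommendation.

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative figure; tune to current guidance

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, derived key) to store instead of the password itself."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key

def verify_password(password: str, salt: bytes, key: bytes) -> bool:
    """Check a supplied password against the stored salt and derived key."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, key)

salt, key = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, key))  # True
print(verify_password("guess", salt, key))                         # False
```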
This section of the toolkit adds further detail to the corresponding User Management sub-policy within the Information Security Policy.
There should be authentication and authorisation procedures in place to ensure users are only allowed to access services which are intended for them. Specifically, for systems using techniques such as Single Sign On, controls should be in place to ensure that creating an account does not allow users access to services which they are not authorised to use. It is also particularly important to ensure that the same password (or a similar one) is not used to log in to systems that are supposed to be kept secure. This applies especially to those systems that may contain sensitive personal information.
The following ruling from the Information Commissioner demonstrates that this could be considered a breach of the Data Protection Act 1998: Hampshire school breached data protection rules.
Registration and de-registration procedures should ensure that users are assigned unique IDs so accountability can be maintained, and records should be kept of all registered users. All users should sign up to conditions of use/access when they are assigned an account. There should also be means to remind users of such terms on a regular basis and to ensure they are aware of any changes. Systems should be audited for unwanted/redundant accounts on a regular basis. When users leave, access should be revoked and the user de-registered.
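An audit for redundant accounts can be as simple as comparing the accounts registered on a system against the unit's current membership. The Python sketch below assumes two plain-text files, one username per line; the filenames are placeholders, and in practice the lists might come from the account database and the unit's personnel records.

```python
from pathlib import Path

# Placeholder filenames; substitute exports from the system's account
# database and the unit's current personnel/student records respectively.
registered = set(Path("registered_accounts.txt").read_text().split())
current_members = set(Path("current_members.txt").read_text().split())

redundant = sorted(registered - current_members)
print("Accounts with no matching current member:")
for username in redundant:
    print(" ", username)
```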
Inappropriate system privileges can be a major contributory factor in system failure and security breaches so, where possible, privileges should be kept to the minimum necessary and suitable permissions should be defined. In signing up to a conditions of use, users should be made aware of their own responsibilities.
Passwords should not be shared between users and it should be noted that sharing of University passwords such as SSO credentials is explicitly forbidden by the University ICT regulations. This includes giving out your password to IT support staff, OUCS or any other University department. OUCS will NEVER ask users for their passwords and so ANY correspondence (email, phone calls etc.) asking for such details should be treated with caution. Unfortunately, phishing scams are rife these days and users do fall for them. For more details on recognising fake emails please see How To Recognise Fake Emails.
In order to make life simpler when users leave or are temporarily away, role-based accounts and access could be considered. Where shared access to accounts is needed, other means should be implemented. Project accounts, for example, can be set up for clubs and societies (see Registration for more details on different types of accounts) and, within Nexus, access to email accounts can be delegated so there is no need to share passwords (see Delegating access to email, calendar and other Nexus features using Outlook 2007). If passwords need to be written down, this should be done in as secure a manner as possible. Password management tools can be useful in order to help keep multiple passwords stored in a secure manner; this can be particularly helpful for users who have many passwords to remember. Be aware that where password exposures become known, user accounts will be temporarily disabled. This can cause significant disruption to users who need access to critical services. It is therefore imperative that all users are made aware of their responsibilities towards password security, and of the consequences of breaching these policies. More information on University passwords and guides to password security can be found at Registration.
Users may also be advised to protect against unauthorised access to their own machines, accounts and private information. Obviously this will depend on the specific environment. However some examples could include simply locking office doors, password protected screen locks (e.g. where locking doors isn't possible or offices are shared), logging out of sessions on public machines, and clear desk policies.
Visitor accounts should be specifically issued, and the same University policies and practices (traceability, incident response etc.) apply to visitor accounts. The University has two services available for visitors: Eduroam and OWL-Visitor. For more information on these services please see OUCS and efficient IT. Where individual units wish to provide their own network access for visitors, this should be provided using their own specific IP address space which is segregated from other networks within the unit. It is also important to be able to trace any abuse or misuse coming from such areas; therefore authentication should be considered. In some circumstances this may be difficult or undesirable. In this case the risks should be assessed and, for example, controls put in place to prevent abuse/misuse in the first place. For example, it may be deemed an acceptable risk to have unauthenticated access to specifically assigned networks if physical access is controlled, physical monitoring is in place, network access is allowed only to specific known good destinations etc.
This section of the toolkit adds further detail to the corresponding Information Handling sub-policy within the Information Security Policy.
University assets may include, but are not restricted to, information, software, physical assets, services, people and intangibles (such as reputation). Classification of information is the responsibility of the information owner, though this can be delegated to other persons. The classification of information should be directly related to the value of that information, legal requirements, sensitivity and criticality to the University. It should be used to determine the level of security control necessary to protect its confidentiality, integrity and availability. It should also take into account certain business needs such as the requirement for sharing of information.
The Information Security Policy defines “confidential information” as information which is of limited public availability; is confidential in its very nature; has been provided on the understanding that it is confidential; and/or its loss or unauthorised disclosure could have one or more of the following consequences:
- financial loss e.g. the withdrawal of a research grant or donation, a fine by the ICO, a legal claim for breach of confidence;
- reputational damage e.g. adverse publicity, demonstrations, complaints about breaches of privacy; and/or
- an adverse effect on the safety or well-being of members of the University or those associated with it e.g. increased threats to staff or students engaged in sensitive research, embarrassment or damage to benefactors, suppliers, staff and students
All University information assets should therefore be classed as either confidential or not and the policy for the protection of confidential information is provided in section 6 of the Information Security Policy. However it is recognised that some units may benefit from a classification scheme with a greater level of granularity. The following classification scheme: Classification Scheme [84 KB PDF] is therefore provided as an example and was approved by the PRAC ICT sub-committee. It should be noted that the labels used in this scheme (i.e. CONFIDENTIAL, RESTRICTED, SENSITIVE, UNRESTRICTED) are arbitrary and could be replaced by any appropriate labelling scheme. Each category contains a description and levels of controls that may be applied to assets in that category.
All major assets that the unit/University has should be maintained in an asset inventory and each asset should have its ownership and security classification agreed upon and documented. The nominated owner of each asset is responsible for defining appropriate use of the asset, and for agreeing and maintaining appropriate security controls. Whilst the responsibility for implementing security controls may be delegated as needed, the accountability rests with the nominated owner.
Information assets should be classified in terms of their sensitivity and criticality to the unit/University. However, the organisational needs for the use and sharing of information should also be taken into consideration. Information assets can be classified in terms of their security requirements, which may be confidentiality, integrity or availability (or a mixture). Assets can be classified and labelled in terms of their confidentiality in accordance with the University's Information Classification Guidelines.
Information security, and the management thereof, essentially comes down to managing risk. Throughout the University's information security policies and this accompanying toolkit, "appropriate" or "suitable" controls are referred to continuously. It is risk assessment that allows us to determine what level of control is appropriate and, perhaps more importantly, allows us to demonstrably justify the decisions we have taken. If there is no risk assessment, then culpability for security incidents is likely to lie firmly at the door of the asset owner. However, the process need not be complicated, costly or even formal. The level of risk assessment required will depend on the security requirements of the unit, legal and regulatory requirements, and the nature and criticality of the asset needing protection. What is essential, however, is that it is possible to show that the decisions made in implementing controls can be justified and considered reasonable.
Risk management is the reduction of identified risk to an acceptable level. It therefore involves an assessment of the relevant risks followed by the appropriate treatment of the identified risks. The scope and criteria of the risk assessment will depend on factors such as:
- The context of the risk assessment (e.g. to meet legal requirements, overall risk assessment to identify key areas, assessment of a specific system or service etc.)
- The operational objectives of the unit
- Available resources
- The type of asset
- Any local information security policies
The purpose of risk analysis is to identify, quantify and prioritise the risks according to the scope and criteria for the risk assessment. In order to assess the risks to information assets, threats and vulnerabilities must first be identified, quantified and prioritised.
The overall aim is to protect important information assets. It is vital therefore to identify and, where possible, quantify the value of those assets. All information assets should have a clearly defined owner and the process of identifying and valuing assets should include both owners of the asset and operational managers. When identifying assets at a unit level, for example, managers from all sub-departments within a unit should be consulted. In doing so the person(s) responsible for the risk assessment have a better chance of identifying all of the important assets and associating the relevant value. Assets should be identified and prioritised in terms of their criticality to the unit. They can be physical assets such as hardware or intangible assets such as reputation and can include computers, software, data, network connectivity equipment, personnel and many more.
When placing a value on information assets it should be done assuming that no controls are currently in place and should consider, for example, loss of information, loss of availability, disclosure of information, destruction of information and interference with communications. Where possible, a financial value should be placed on the asset. This could be measured in terms of the cost to replace the asset and/or the cost to support it. However, for intangible assets such as reputation, it may be more difficult to place a numerical value on them. A more subjective approach can therefore also be taken, considering, for example, the importance of the asset to the unit and/or University.
More information on Asset Management and asset registers can be found at Asset Management.
Once the threats have been identified the vulnerabilities of assets should also be evaluated. Assessing vulnerabilities should include the likelihood of the vulnerability being exploited as well as the impact of the vulnerability being exploited. In order to do this, past experience and historical data can be used. Where such information is not available then a best guess can be made in the first instance based on expert advice and opinion. The process should then be repeated annually once some real data is available.
Vulnerabilities can exist in many ways such as vulnerabilities in software, hardware, processes and policies, system configuration, dependencies on third parties, the physical environment etc. Similarly to assessing threats, the likelihood of the vulnerability being exploited will depend on a number of factors including the motivation and resources of the attacker etc. In addition to those issues listed above, existing controls and difficulty to exploit the weakness can be considered to specifically determine the chances of the vulnerability being exploited.
Finally the impact of a vulnerability being exploited should be evaluated in order to assist with prioritisation. The impact could be the result of a loss of one of confidentiality, integrity and/or availability and so each of these should be considered. Other factors that may be relevant when assessing the impact might include:
- Costs in terms of time spent dealing with the incident (investigation, recovery etc.)
- Direct financial costs owing to theft or monetary fines
- Replacement value of hardware etc.
- Technical costs such as on going availability issues or knock-on effects on other services
- Human costs such as loss of goodwill or reputation
- Possible legal action
- Possible health and safety issues
Once the above information has been obtained, risk analysis can be carried out in order to ascertain the main risks to the unit/University. As mentioned, the level of risk analysis required will depend on the value of the asset, the information available and the resources available to the unit. What is important is to be able to justify the decisions made when choosing and implementing controls. A risk analysis may be a simple conversation in a managerial or IT committee meeting or may be a much more formal process. Either way, it is important to have a record of the risk analysis so that decisions can be justified, reviewed and, where appropriate, amended in the event of an incident. It is also important to be able to demonstrate why a particular approach to risk assessment has been taken. Generally speaking, there are two approaches to risk analysis, which are described below.
Qualitative risk analysis looks at the magnitude and consequences of incidents, and their likelihood of occurring. It does not use numerical input but rather a best-guess approach based on the informed opinions of subject experts. It is therefore often based on the creation of incident scenarios and hypothetical events. The more input that can be gained from a range of experts, the greater the degree of assurance that is possible. Qualitative analysis has a number of advantages. For example, it is easy for all personnel involved to understand, and is often very useful as an initial, high-level risk assessment to identify areas which need closer attention. It is also suitable where historical data is not available and where loss can't easily be measured numerically. It therefore tends to be particularly well suited to handling incidents with so-called soft impacts such as reputational damage or loss of goodwill.
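A minimal sketch of such a qualitative approach is shown below. The three-level scales and the mapping from likelihood and impact to an overall rating are illustrative assumptions, not a prescribed matrix:

    LEVELS = ["low", "medium", "high"]

    def qualitative_risk(likelihood: str, impact: str) -> str:
        """Combine expert judgements of likelihood and impact into a single rating."""
        score = LEVELS.index(likelihood) + LEVELS.index(impact)   # 0..4
        return ["low", "medium", "medium", "high", "very high"][score]

    # e.g. an incident scenario judged "medium" likelihood with "high" impact
    print(qualitative_risk("medium", "high"))   # -> "high"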
Quantitative analysis is much more formulaic and therefore requires much more information as input. Where historical data is available, the frequency of attack is known and losses can be measured in numerical terms, quantitative analysis may be the most suitable approach. The advantage of this approach is that the risk can be measured numerically and the process used iteratively. The downside is that it requires comprehensive records to be kept and is therefore less good at dealing with new risks as they arise. Of course, the quality of the analysis will depend greatly on the accuracy and completeness of the input data.
Specific risk analysis tools are often used for quantitative analysis. These have the advantages of being efficient (once the input data is collected), allowing data to be re-used and allowing the focus of resources to be placed on the analysis of the results.
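By way of illustration, one common quantitative formulation is the annualised loss expectancy (ALE): the expected loss from a single incident multiplied by the expected number of incidents per year. The figures in the sketch below are hypothetical:

    # Single loss expectancy (SLE) = asset value x exposure factor
    asset_value = 20000.0          # hypothetical replacement/support cost (GBP)
    exposure_factor = 0.4          # fraction of the asset's value lost in one incident
    sle = asset_value * exposure_factor                    # 8,000

    # Annualised loss expectancy (ALE) = SLE x annualised rate of occurrence (ARO)
    aro = 0.5                      # e.g. historical data suggests one incident every two years
    ale = sle * aro                                        # 4,000 per year

    # On this measure, a control is worth implementing if it costs less per year
    # than the reduction in ALE it achieves.
    print(f"Expected annual loss: {ale:.0f} GBP")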
When implementing controls to reduce the risk it is usually the case that the cost of the treatment should be lower than the cost of the impact. This is one factor which will determine how appropriate the control is, along with the overall security requirements of the unit/University. It is, however, important to bear in mind intangible costs such as loss of reputation. At this point, existing countermeasures can be taken into consideration and a gap analysis carried out. This can also help with prioritisation of risks, along with (for example) the number of threats faced by an asset. When deciding on countermeasures to implement it is important to remember that the risk can be reduced in a number of ways. For example, controls can be put in place to reduce the threat, the vulnerability or the impact (or all of the above) depending on the nature of the risk and the costs and effectiveness of each. As an example, for a web server where availability is of paramount importance (and confidentiality isn't), one control may be to reduce the impact of compromise by having a plan in place to quickly rebuild the server.
Of course risk assessment and management must be an iterative process. This should happen in the first instance so that any residual risk is reduced to an acceptable level. However risk assessment should be carried out regularly as other priorities are identified, operational requirements change, vulnerabilities/threats are introduced or incidents happen etc. Reviewing risks may equally mean that some controls can be relaxed as well as tightened. Importantly, the risk process and review should be well documented so that it can be easily monitored and audited.
See Backup Section
Before any internal or external media are disposed of or transferred off-site, any information stored on the device must be deleted in a manner appropriate to the classification of that information.
Merely deleting the contents or doing a quick format of the media does not remove all the information stored on it; typically, all or parts of files may be recovered using commonly available software utilities. This applies to any electronic device that may store information (e.g. hard disks, CD/DVDs, media cards, USB memory, etc.).
For all but unrestricted information, the data must be irretrievably deleted from the media. Where media cannot be reliably written to, due to damage or no longer being compatible with current technology, it should be physically destroyed. Whole disk encryption may be used to reduce the risk of unauthorised disclosure of information. Appropriate encryption and consequent destruction of the decryption keys may be viewed as the equivalent of appropriate deletion.
UNRESTRICTED information should pose no risk to the University/unit and will usually be in the public domain already. Deleting such information is not mandatory and simply erasing it in the normal way can be viewed as appropriate. This may include, for example:
Disclosure of RESTRICTED information is unlikely to have a substantial impact on the University in terms of its business, finance or reputation. It may, however, include personal information and so, as a minimum, it should be deleted as described above before disposal. Where personal information is involved, the media should be overwritten at least once using common techniques.
Disclosure of SENSITIVE information is likely to have an adverse effect on the business, reputation or finances of the University. It must therefore be securely erased before any media is disposed of or re-used. This means that the media should be overwritten several (at least 3) times using common techniques. Physical destruction of the media may be considered in addition. Encryption of the entire media may also be used, and may be considered an appropriate alternative where secure erasure or physical destruction is not possible for some reason.
Disclosure of CONFIDENTIAL information is very likely to have a significant and adverse effect on the University. It must therefore be irretrievably erased before any media is disposed of or re-used. Media must be overwritten several (at least 3) times using common techniques. Physical destruction of the media should be considered. Encryption of the entire media may also be used, and may be considered an appropriate alternative where secure erasure or physical destruction is not possible for some reason.
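As a minimal sketch of overwriting a single file before deletion (assuming a conventional, locally attached magnetic disk), the following Python function writes random data over the file contents several times before removing it. Note that on SSDs, journaling or copy-on-write filesystems, and network storage this approach does not guarantee the data is unrecoverable; purpose-built tools, physical destruction or whole-disk encryption may still be required for sensitive or confidential media:

    import os

    def overwrite_and_delete(path: str, passes: int = 3) -> None:
        """Overwrite a file's contents with random data, then delete it."""
        size = os.path.getsize(path)
        with open(path, "r+b") as fh:
            for _ in range(passes):
                fh.seek(0)
                fh.write(os.urandom(size))
                fh.flush()
                os.fsync(fh.fileno())   # push the overwrite out to the device
        os.remove(path)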
Email is a very handy form of communication and its use has exploded to the extent that most of us rely on it to carry out our work. However, email was not designed with today's usage, or security requirements, in mind. As such, email is an inherently insecure form of communication. That isn't to say we should all stop using it, but we should be aware of the risks. Here are some of the main issues:
Sending an email is the postal equivalent of sending a postcard. There is no built-in confidentiality (or "envelope") to prevent anyone who "handles" the message from reading it. Connection to the University mail servers *is* protected by encryption in the form of SSL/TLS, and this is the case for many other service providers (though by no means all). However, once an email leaves the University's network it should be assumed there is no protection. Since an email you send may be routed via anywhere in the world, and a copy of the email will be stored at each step along the way, it is best to start from the basic principle that you shouldn't put anything in an email that you wouldn't want to be made public!
One of the things that frequently "trips up" email users is the question of who has sent a message. As with many other forms of electronic communication, how do you know who has sent you a given email? Just because a message comes from a particular email address, or has a particular name associated with it, doesn't mean it comes from the person you think it does. It is very easy to set up an email address that is similar to somebody else's, or to set the name on an account to be that of somebody else. It is also trivial to fake where an email comes from and send it so that it genuinely does appear to come from an address you recognise. That is not to mention the fact that somebody else (e.g. an attacker who has compromised an account) may have access to somebody's account and be using it for nefarious purposes. The message here is that you shouldn't trust an email based on the sender address.
Perhaps a good rule of thumb is not to worry too much about who specifically has sent you an email. Do be aware that it might not be who you think it is, though, and do question whether the information in the email is relevant to you, or may pose a risk. For example, if someone emails you telling you that you have won a lottery you haven't entered, it is probably not true, regardless of who appears to have sent it! Likewise, if someone is asking for your password(s) then stop and think twice (more on this below).
Similarly, how do you know who is at the other end of an email you send? Perhaps the sender is not who you think they are (as described above), perhaps they have set a different email address for your replies to go to, or perhaps they are going to forward your message to other people. In addition, it is all too easy to send a message to the wrong recipient accidentally. Nearly everyone I know (including myself) has done this at some point or another. Take care when sending emails, but remember our starting point: don't put anything in an email you wouldn't want to be made public!
Your email account is "secure" though, right? It is protected by a username and password so nobody else could get access? Unfortunately people are often pretty careless with their passwords, sometimes without even realising it. In the University of Oxford, a single password is used for access to email and other Single Sign-On (SSO) resources, so you need to be careful with your password. If you use public machines all over the world to access your email (or other SSO resources), there is a pretty good chance that one of them will have some malicious software on it, designed to steal your password and gain access to your account. This isn't necessarily something to panic about but, again, you should be aware of the risks. What does happen on a regular basis is that accounts are compromised and used to send out spam messages. This will lead to your account being temporarily disabled by OxCERT, and it is uncanny how often this happens at a time when the user needs access to their account urgently!
Secondly, attackers do delete emails in people's folders. This may be out of pure malice (because they can) but is often to hide their presence, as they don't want you to see the hundreds of bounced messages resulting from all the spam they are sending. Thirdly, guess what: if an attacker has access to your account they can read all of your emails! What was that first rule again? Email should not be relied on as secure storage for your important work. Have another way of accessing and storing important information, and never store confidential or sensitive information in your mail folders.
If you follow this advice, you may well be prepared to take the chance that someone will compromise your account because you may think the risk is low. However, do think about what else that account might give an attacker access to! The extent to which you protect access to your SSO account may depend on what other resources you have access to. If you do need to check your email on untrusted machines, be aware of the risks and take some mitigating action. This could be changing your password from a trusted machine at the next opportunity and/or checking your account for unusual activity or rules that have been set up. Remember though that the security of your account is your responsibility. If you need to recover deleted emails this may be possible and you should contact the OUCS help centre, but there are no guarantees. Similarly, if your account is disabled because it is compromised, that is your responsibility.
Last, but by no means least, there are lots of people who want to send you spam and scam emails. Don't be a victim! Remember that nobody in the University should ask you for your password - especially via email. If you are being sent links to websites that ask you for your password, think twice and be aware of how to spot legitimate sites. Remember not to trust an email just because it appears to come from a .ox.ac.uk address. If you are in any doubt ASK your local IT support staff or the OUCS help centre for advice. Phishing attacks cost the University a significant amount of money so please don't add to the problem!
Of course there are ways of mitigating some of the risks mentioned above. Encryption and digital signatures can be used to provide confidentiality, authentication of the sender and some assurance over the identity of the recipient. The two standard ways in which this can be done are to use PGP (Pretty Good Privacy) or S/MIME. Where encryption is to be used, the owner of any information assets should be consulted and should agree on appropriate levels of encryption. For more details on how to secure email see OUCS's advice on secure emails.
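As a rough sketch of what this looks like in practice, the example below uses the third-party python-gnupg package, which drives a locally installed GnuPG binary and keyring. The recipient address is hypothetical, and the recipient's public key is assumed to have been imported and verified already:

    import gnupg   # pip install python-gnupg; requires GnuPG itself to be installed

    gpg = gnupg.GPG()   # uses the default GnuPG home directory and keyring

    # Encrypt a message to a recipient whose (verified) public key is in the keyring.
    result = gpg.encrypt("Draft minutes attached.", recipients=["alice@example.ox.ac.uk"])
    if result.ok:
        armoured_text = str(result)   # ASCII-armoured ciphertext, safe to paste into an email body
    else:
        print("Encryption failed:", result.status)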
Encryption is the process of transforming readable information into something unreadable using an algorithm (or cipher) and a cryptographic key. The input into the process is often referred to as the plaintext and the output is known as the ciphertext. The reverse process, used to recover the plaintext is known as decryption. Broadly speaking, there are two types of encryption: symmetric (or private-key) encryption and asymmetric (or public-key) encryption.
- Symmetric Encryption
- Symmetric - or private-key - encryption uses the same key for encryption and decryption. The security of symmetric cipher systems depends on:
- Asymmetric Encryption
- Asymmetric - or public-key - encryption uses different keys for encryption and decryption. Therefore, anyone wishing to be a receiver must publish or share their encryption key. In order to decrypt, a separate decryption key must be used and kept secret. It should not be possible to deduce the plaintext from knowledge of the ciphertext and the public key, and there should be some means of checking the authenticity of a public key.
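To make the distinction concrete, the sketch below uses the widely available Python 'cryptography' library: Fernet for symmetric encryption (one shared secret key) and RSA with OAEP padding for asymmetric encryption (a published public key and a secret private key). It is illustrative only; real deployments must also address key management, as discussed later:

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    # Symmetric (private-key): the same key encrypts and decrypts
    secret_key = Fernet.generate_key()
    box = Fernet(secret_key)
    token = box.encrypt(b"meeting room code: 4821")
    assert box.decrypt(token) == b"meeting room code: 4821"

    # Asymmetric (public-key): encrypt with the public key, decrypt with the private key
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()       # this half can be published
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    ciphertext = public_key.encrypt(b"meeting room code: 4821", oaep)
    assert private_key.decrypt(ciphertext, oaep) == b"meeting room code: 4821"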
Using encryption for secure file storage and transfer presents a number of challenges. While the use of strong, well recognised encryption algorithms may 'solve' the problem of appropriately securing files in storage and in transit, the use of encryption itself does not imply complete security or confidence. Rather one problem is solved, but a number of others created, and it is dealing with these challenges that provides the overall level of security for the cryptographic system. Furthermore, there is no such thing as a 100% secure system and 'security' should be thought of as being appropriate (or not) for the task in hand. There will always be some trade-off between security and usability and this will usually be determined by user requirements and risks.
Encryption does not mean security. In fact, if implemented badly it can reduce security (for example in terms of availability), and/or end up being an expensive but meaningless security control. It may be that it is necessary to encrypt based on the University's information security policies in which case read on! However, if it isn't, make sure you ask yourself if it is really the right solution. Often when we are asked about encryption, for example, it turns out that what is really required is good access control along with some sort of file management server. Encryption would normally be considered for copies of original data which, for example, might temporarily need to be in a less secure environment than normal, (and thereby exposed to greater risks from accidental or malicious threats): this might occur where information is accessed remotely across an insecure network; transferred to a third party (possibly as an email attachment), or loaded on a laptop being used away from the office. Encrypted storage might also be considered for sensitive information where there is a risk that the device or media holding it could be accessed or stolen by others. Where the risks are sufficiently great, encryption might be used to complement and strengthen other security measures.
Firstly you may need to consider whether you want symmetric or asymmetric encryption. Symmetric encryption is generally more efficient as it works directly on binary data and uses simple operations. It is therefore well suited to data in storage and to encrypting large files and/or volumes. However, since the same key is used for encryption and decryption, it requires careful key management: how, for example, are you able to securely distribute keys to people in remote locations? There is also a lack of non-repudiation, in that it is difficult to prove who may have encrypted or decrypted a given plaintext when identical keys are shared amongst users. Asymmetric encryption attempts to solve some of these problems by using one-way mathematical functions and different keys for encryption and decryption. However, it is less efficient as it tends to convert the plaintext into a mathematical representation and uses complex operations. Public keys are easily distributed, but the issue becomes one of trusting and authenticating those keys.
In actual fact, you will likely be selecting a product for a particular purpose, rather than a type of encryption. Most products also probably use a combination of asymmetric and symmetric encryption anyway, with symmetric encryption being used to encrypt the majority of the data, and asymmetric encryption being used to manage the keys. It is useful to consider what type of encryption is being used in the products you are considering however, as it may help you identify certain requirements (e.g. for key management).
Unfortunately, there is no hard and fast rule and there is no one set of standards that covers everything. It therefore comes down to what levels of assurance you require and where the requirement is coming from. For example, if the requirement to encrypt is coming from a third party then you should check with them what standards are required. Otherwise the data owner should be satisfied that the risks have been appropriately mitigated. The following should be taken into consideration when deciding on the level of trust to place in a particular cryptographic control and/or product:
- Is the algorithm and/or product well known and reputable?
- Probably the first test is whether you have heard of the algorithm/product, whether it is widely used and whether it has a decent reputation.
- There are some standards to look for and meeting these may be a requirement of third parties.
- The National Institute of Standards and Technology (NIST) produce many of the Federal (US) standards and recommendations that have been adopted at both a national and international level. The Federal Information Processing Standards (FIPS) are one of the many outputs of NIST and relevant standards include:
- FIPS 140-2 : Security Requirements for Cryptographic Modules
- FIPS 186-2 : Digital Signature Standard
- FIPS 197 : Advanced Encryption Standard [AES]
One other NIST standard that is worth drawing attention to is FIPS 46-3: The Data Encryption Standard (DES). This was withdrawn as a standard in May 2005. Therefore this should not be used for encryption unless there are exceptional circumstances. If the use of DES is required then advice should be sought from OxCERT.
For more information on NIST standards please see http://www.itl.nist.gov/fipspubs/
- RSA PKCS
- The Public-Key Cryptography Standards are specifications produced by RSA Laboratories for the purpose of accelerating the deployment of public-key cryptography. The PKCS documents have become widely referenced and implemented and contributions from the PKCS series have become part of many formal and de facto standards, including ANSI X9 documents, PKIX, SET, S/MIME, and SSL. Some RSA standards of note include:
- PKCS#1: RSA Cryptography Standard
- PKCS#3: Diffie-Hellman Key Agreement
- PKCS#5: Password-based Encryption Standard
- PKCS#7: Cryptographic Message Syntax Standard
- PKCS#10: Certification Request Standard
- PKCS#11: Cryptographic Token Interface
For more information please see the RSA website
- Product Evaluations
- There are several bodies that will evaluate products against certain criteria to provide a level of assurance in a product. Again, the level of assurance that is required will depend on your specific requirements. The most common of these is probably the Common Criteria for Information Technology Security Evaluation. This is an international standard (ISO/IEC 15408) for computer security certification and offers evaluation assurance levels (EALs) from 1 to 7. See http://www.commoncriteriaportal.org/ for more information.
Other evaluations to look out for might include the CESG Assisted Products Service (CAPS): http://www.cesg.gov.uk/products_services/iacs/caps/index.shtml.
Key management is one of the most important issues when considering encryption. If the encryption algorithms are strong and implemented correctly, the security of the cryptographic system is often dependent on the secure management of the encryption keys. If decryption keys are compromised then the security of the cryptographic system falls apart. For public key algorithms, the main requirement for the management of keys is that the private key remains secret, the integrity of the public key is guaranteed and that their use is controlled. The main requirement for key management with symmetric algorithms is that the keys remain secret throughout their lifetime and that their use is controlled via key separation (i.e. using separate keys for particular tasks).
Key Generation and Storage
- Where, and by whom, are the keys generated?
- Where are the keys stored?
- How are the keys stored?
Thought should also be given to whether the keys are stored encrypted or not. Where keys are stored in plaintext, there should be appropriate controls in place to prevent unauthorised access or disclosure. Where keys are stored encrypted, access is usually controlled by way of a passphrase. This means that any decryption key (no matter what algorithm or key length is used) is, at best, only as strong as the passphrase used to protect it. If a passphrase can be easily guessed or brute-forced then the 'strength' of the key is irrelevant. Having good password policies in place is therefore essential to the overall security of the system.
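As a minimal illustration of why the passphrase matters, encryption keys are commonly derived from (or protected by) a passphrase using a slow key-derivation function such as PBKDF2. The sketch below, using the Python 'cryptography' library, derives a symmetric key from a passphrase; the passphrase and iteration count are illustrative:

    import base64, os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
    from cryptography.fernet import Fernet

    passphrase = b"correct horse battery staple"   # the real security bottleneck
    salt = os.urandom(16)                          # random salt, stored alongside the ciphertext

    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    key = base64.urlsafe_b64encode(kdf.derive(passphrase))

    # However strong the underlying cipher, an attacker who can guess the
    # passphrase can re-derive this key.
    token = Fernet(key).encrypt(b"contents of the protected key store")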
Size does matter when it comes to cryptographic keys. For symmetric algorithms the key needs to be sufficiently long to prevent a successful brute-force attack within the time period for which the data actually needs to be protected (i.e. if something is going to be made public in 5 years it doesn't need to be protected for the entire lifetime of the universe!). 64-bit keys are now on the boundary of what is technically feasible to brute-force and so are generally considered too short; keys of 128 bits or more are usually recommended.
For asymmetric systems the concern is instead whether the private key can be derived, using mathematical techniques, from the public key. However, the greater the key length the greater the impact on performance. Currently the standard default for RSA is a 2048-bit key, based on a trade-off between security and performance.
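The difference key length makes to brute-force attacks can be seen with some simple (and deliberately rough) arithmetic; the attacker's guessing rate below is an arbitrary assumption for illustration:

    guesses_per_second = 10 ** 12          # assumed capability of a well-resourced attacker
    seconds_per_year = 60 * 60 * 24 * 365

    def years_to_exhaust(key_bits: int) -> float:
        """Worst-case years to try every possible key at the assumed rate."""
        return (2 ** key_bits) / (guesses_per_second * seconds_per_year)

    print(years_to_exhaust(64))    # roughly 0.6 years - far too short
    print(years_to_exhaust(128))   # roughly 1e19 years - comfortably long enough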
Since one of the main requirements for symmetric cryptography is that the keys remain secret throughout their lifetime, yet the same key is used for encryption and decryption, one of the main problems becomes how to ensure that the recipient of any ciphertext is able to get access to the decryption key. Thought should therefore be given to how keys are protected during distribution, in symmetric systems. When deciding on particular solutions it may also be prudent to ask who has copies of the keys and what implications this might have.
Asymmetric cryptography looks to overcome this problem by using both a public key (for encryption) and a private key (for decryption). Public keys can, as the name would suggest, be distributed freely, but the issue then becomes one of determining the authenticity of the public key (i.e. how can you trust that the key actually belongs to the intended recipient). The two main models for this are the Web of Trust and Public Key Infrastructure.
Owing to the performance issues associated with asymmetric encryption and the key distribution issues associated with symmetric encryption, hybrid systems often use symmetric encryption to encrypt the bulk of the data, and asymmetric encryption to encrypt the keys necessary for decryption.
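A minimal sketch of this hybrid pattern, again using the Python 'cryptography' library, is shown below: a fresh symmetric key encrypts the bulk data, and only that small key is encrypted with the recipient's public key. The key names and message are illustrative:

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # In practice the recipient's key pair already exists; generated here for a runnable example.
    recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    recipient_public = recipient_private.public_key()

    # Sender: encrypt the bulk data symmetrically, then wrap the symmetric key asymmetrically.
    data_key = Fernet.generate_key()
    ciphertext = Fernet(data_key).encrypt(b"a large document ..." * 1000)
    wrapped_key = recipient_public.encrypt(data_key, oaep)

    # Recipient: unwrap the symmetric key, then decrypt the bulk data.
    recovered_key = recipient_private.decrypt(wrapped_key, oaep)
    plaintext = Fernet(recovered_key).decrypt(ciphertext)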
The necessary lifetime of cryptographic keys will depend upon a number of factors including, the confidentiality requirements of the data being protected, the size of the key, the algorithm being used, and the use of the key. The lifetime of a key may also come to an end in an accidental fashion (e.g. the compromise of a secret key) in which case there needs to be a means available in order to revoke and renew the keys.
Cryptographic keys should also be used only for the intended purpose. Some users, for example will have different keys for encryption/decryption and signing. Symmetric encryption keys in particular should only be used for a specific designated purpose and should be securely deleted when no longer needed.
As mentioned above, it is imperative to have a means in place for dealing with keys that should no longer be used, for example, because they are no longer considered secure or have been compromised. Similarly it is important to ensure that all copies of keys are destroyed when the key is no longer in use.
One final issue, particularly with regards to asymmetric encryption, is that of what happens when a user loses their private decryption key. Clearly once a private key is lost it is no longer possible to decrypt any ciphertext that has been encrypted using the corresponding public key. Therefore there is a risk of the documents themselves being 'lost'. There are a number of possible mitigations against such a risk including:
- Encrypting data to multiple keys
- Keeping backups of keys (either locally or centrally)
- Key Reconstruction
- Key Escrow
- This is basically where all files/communications that are encrypted, are also encrypted to a master key. Clearly there are privacy issues with such a solution and key escrow is a controversial and strongly debated topic. In order to maintain the privacy of users, master keys can often be split into components, each of which can be weighted and distributed amongst a number of trusted persons. This reduces the risk of one person being able to abuse their position and infringe on users' privacy.
- Clearly, if any of the above practices are adopted, there will also need to be agreed policies and processes in place to define and communicate their use. For further advice please contact OxCERT. There are also possible legal/regulatory issues that will need to be taken into consideration; these are discussed briefly in the Legal Issues section.
There should, however, be some awareness that certain legal issues may affect policy decisions, and may need to be dealt with via policies themselves. For example, different countries have legal restrictions on the import, export and use of cryptographic technologies. Care should therefore be taken if travelling with cryptographic technology (e.g. laptops using whole disk encryption). One good source of information on this matter can be found at http://rechten.uvt.nl/koops/cryptolaw/. For further advice please contact OxCERT.
Also relevant are the powers of law enforcement in the UK that require decryption keys (or the relevant plaintext) to be presented under certain sections of the Regulation of Investigatory Powers Act (RIPA). Thought needs to be given to who would be responsible for providing keys and/or plaintext if such a request was made.
Of course encryption on its own is not the answer to all security problems associated with electronically communicating and storing data. As well as strong key management, general good security practice should be followed and all users made aware of their responsibilities. If an attacker has control over a user's machine, or is able to access information in some other way (e.g. social engineering, shoulder surfing), encryption offers no protection at all. Therefore the usual good practice advice should be promoted to end users (e.g. keeping AV up to date, patching, reporting of incidents, safe browsing habits) and all users of the system - from administrators, down to end users - should be made aware of their responsibilities towards security.
Introducing new ICT systems clearly introduces new risks. The primary concerns are likely to be risks to the existing infrastructure as a result of the change (e.g. a compromised system being used to access further restricted systems) and the risk to the information processed by the new system. For more information see the risk assessment section within the toolkit.
- The person and/or group responsible for the system administration of any new ICT system should be clearly defined
- Any third party responsibilities should be clearly identified and well defined
- The operational requirements of the new system
- The security requirements of the system in terms of confidentiality, integrity and/or availability
- A list of existing systems with which the new system will interact
- A list of necessary services (and hence unnecessary services) required
- A list of required users and their necessary privileges
- How the system will be accessed (e.g. will remote access be required and, if so, where from)
- How authentication will be handled
- How/when software updates will be applied to any new ICT system
- Where the system will be placed within the network (e.g. how exposed will the system be to the outside world, and what other devices would this system provide an attacker access to)
- What changes will be needed to any existing access control mechanisms (e.g. firewalls, switches, routers)
Common mistakes to avoid when setting up and testing new systems include:
- Leaving services open to the world unnecessarily
- Failing to apply security updates before full access to the outside world is granted
- Setting up weak passwords for test purposes or for particular services
- Setting up test devices with default weak configurations and giving them unnecessary access to the entire internet
- Switching off firewalls for "testing" purposes
- Failing to test for unexpected behaviour following configuration changes (e.g. verifying firewall rules; a simple check of this kind is sketched below)
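The following Python sketch illustrates one very simple such check: from a machine outside the relevant firewall, attempt connections to a handful of ports and flag anything that is reachable but not expected to be. The hostname and expected ports are hypothetical, and a proper scan with a dedicated tool is preferable for real testing:

    import socket

    HOST = "newserver.example.ox.ac.uk"   # hypothetical new system
    EXPECTED_OPEN = {22, 443}             # the only services it should expose externally
    PORTS_TO_CHECK = sorted(EXPECTED_OPEN | {21, 23, 25, 80, 3306, 3389})

    def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for port in PORTS_TO_CHECK:
        open_now = is_reachable(HOST, port)
        if open_now and port not in EXPECTED_OPEN:
            print(f"port {port}: OPEN but not expected - investigate")
        elif not open_now and port in EXPECTED_OPEN:
            print(f"port {port}: expected open but unreachable - check firewall rules")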
When introducing new systems the risks to the information to be processed should also be taken into consideration. When testing new ICT systems "live" data should preferably not be used. This is particularly the case when that data includes information that could be classified as personal, sensitive or confidential. The following should also be identified:
- The person and/or group responsible for any information processed with the system (i.e. the information owner)
- The classification of that information and the security requirements in terms of confidentiality, integrity and/or availability
- The impact on the unit/University of a breach of those security requirements
Any risk assessment should include all relevant user groups so that the security requirements and risks can be successfully identified. The information owner should be responsible for signing off any residual risk.
It is important to successfully identify the security requirements of the system and the information to be processed as this will help you to identify appropriate controls. It is also important to remember that risk can be mitigated by reducing the threat, vulnerability and/or impact. Security controls can therefore target prevention, detection and/or reaction as appropriate. For example, if a system doesn't handle any sensitive information but needs a high degree of availability, it may be sensible to focus on detecting and responding to security incidents so as to minimise the impact on availability. This might include making sure log files are secured so that it can be determined how an attacker got access, and having a plan in place to re-install/replace any compromised system. Alternatively, for systems that require a high degree of confidentiality, much more emphasis should be placed on the initial risk assessment and preventative controls. In such cases, reactive controls may include the need to identify the University Data Protection Officer and/or Press Office.
Remember when identifying security requirements that OxCERT will require evidence of how a machine was compromised, and that the vulnerability has been closed, before any router blocks will be lifted. OxCERT can also be contacted for specialist advice when setting up new systems: OxCERT
Testing should be carried out in order to confirm that the security measures put in place meet the security requirements of the system/information. The results of any testing should be documented. Only when testing has been successfully completed should "live" data be introduced.
Any risk assessment and testing procedures should be carried out periodically for existing systems. The frequency of testing will vary depending on the requirements of the system, but the review period should be documented and justifiable.
This section of the toolkit adds further detail to the corresponding Physical Security sub-policy within the Information Security Policy.
Secure areas could be anything from a building or collection of buildings down to individual rooms or even particular devices. Defining "security perimeters" therefore means ensuring all personnel are aware of who is authorised to access the area and who is not. This is usually done using signs such as "staff only" and through general communication of policies to all personnel. For example, there may be a local policy to ensure that only system administrators are allowed access to machine rooms, whereas other "staff-only" areas may allow visitors to enter if they are signed in and wear appropriate identification.
In both cases, however, the policy should be clearly communicated to all staff/students, and there should be signs and physical security controls (e.g. locked doors) in order to enforce the policy. All personnel should be made aware of who is authorised to access specific areas and should be encouraged to challenge and/or report any persons they come across who are suspected of being unauthorised. This may mean, particularly for large departments, ensuring that authorised personnel wear visible identification at all times. Of course, in certain circumstances this may not be possible or deemed appropriate, and this should be determined by an assessment of the risk.
Appropriate physical security controls will depend on the nature of the secure area being protected. For example, a machine room may be classed as highly secure and require door locks, systems to record entry of individuals, CCTV and intruder detection. Alternatively, areas accessible to University members only may simply require identification at the point of entry, e.g. via a manned reception or card-swipe entry system. Protecting certain individual machines from theft or unauthorised physical access may also be considered; this could involve physically secured enclosures or simply disabling external ports such as USB ports.
Appropriate physical security controls for offices will, again, depend on the specific environment and risk. However usually offices should be locked and only specifically authorised personnel given keys. All users should be made aware of who does have access to given office areas (e.g. cleaners, security staff) so that a judgement can be made as to how best to secure access to information within the office. For example, a clear desk policy may be desirable or a policy to keep sensitive information in locked drawers or safes. Using password protected screens or logging out of/switching off computers when leaving an office for a significant period should usually be encouraged as these are fairly cheap controls that will provide some mitigation against casual, opportunist or accidental breaches of data. In certain areas (e.g. open plan offices) screen locking etc. may be particularly relevant.
The risk of natural or man-made disasters such as fires or floods should also be assessed before deciding where to site equipment, particularly if it is critical to the operation of the unit/University. The frequency of flooding in the past, for example, can be used to help with such risk assessments. Where the risk is substantial, appropriate protection should be provided. This could mean introducing redundancy through failover systems, adding physical protection, or choosing alternative locations. Systems should also have proper ventilation if located in areas that reach high temperatures, such as switches in lofts or offices that get especially hot during the summer. Adequate fans on systems in these situations are essential, as are periodic reviews to ensure they remain effective.
An uninterruptible power supply (UPS) should be used for systems that require high availability, such as core infrastructure including file servers and network equipment, to protect them from failures in supporting utilities. Power line surge protection equipment should be used where a UPS is not, as this can help protect systems from damage caused by power irregularities. Redundant power supplies should be considered for critical systems to reduce the risk of downtime should a power supply fail; they also ease power management, for example allowing a system to be connected to a different UPS without causing downtime.
Locations of power and network cables should be documented, and cables installed, where possible, along routes less likely to be affected by future construction projects, normal traffic by people or vehicles, heavy runoff from roads, and grit/snow removal equipment. As with systems, cables providing infrastructure support should not be exposed and accessible to passers-by, especially in public areas. Installing cabling within walls, ceilings, or within covered trunking and out of easy reach can help reduce tampering and the possibility of intercepted network traffic or loss of service. Power and network port monitoring can be used to alert appropriate personnel if there is a change of state on any connected cables, providing a means of detecting unauthorised interception or disconnection attempts.
It is important that authorisation is received before taking equipment or information off-site. Such authorisation should be gained via the information asset owner and/or a line manager. The specific terms and conditions with which the information/equipment can be used off-site should be explicitly defined. For example, it may be strictly forbidden to make copies of certain data, use software on personal machines or allow others to use specific equipment. It may also be a condition to keep any portable devices such as USB sticks in locked rooms, drawers or cases to prevent accidental loss or theft. The use of encryption should be considered in addition when taking data off-site - see the Cryptographic Controls section of the toolkit.
Procedures should exist to ensure that any sensitive data and licensed software have been removed or securely overwritten when equipment is sold on, transferred or scrapped, as described in Media Handling - Disposal Procedures.
This section of the toolkit adds further detail to the corresponding Incident Response sub-policy within the Information Security Policy.
Please see OxCERT's guidance for incident handling. In response to compromised machines, OxCERT will usually impose a router level block. However the compromised machine will still have local network access and it is in the interest of the unit to isolate the machine as quickly as possible. Failure to do so may result in other compromised machines on the network and escalation of the incident. There have been several incidents where a large number of compromised machines have resulted in units having their network connection temporarily suspended in order to contain and deal with the incident.
Where OxCERT cannot impose a block on individual machines (e.g. behind NAT devices), units are expected to respond within 4 working hours. Failure to respond in such circumstances may lead to appropriate blocks on the NAT device in order to protect the integrity of the backbone network and users' information. Further information can be found on OxCERT's 'Logging of network access' page.
While it is recognised that smaller units may not be able to have two people available, contacts must be provided who can take responsibility for the unit's ICT systems. For all units, contingency plans must be in place to cover absence of the primary contact. Central computing services will maintain a register of security incidents relating to the network which will be available for audit and will be summarised in regular reports to the PRAC ICT Sub-committee. The PRAC ICT Sub-committee is responsible for receiving reports on breaches of security.
Units should ensure they have sufficient contingency plans in place for dealing with security incidents, including plans for rebuilding machines in the event of a system compromise. In many cases, where attackers have gained administrator privileges and/or there is insufficient logging there may be no alternative but to re-install systems from scratch. Units should take this into account when carrying out risk assessments on business critical systems and when implementing redundancy. Care should be taken to avoid backup systems being open to the same vulnerability as the primary system (for example both using the same administrator password).
This section of the toolkit adds further detail to the corresponding Business Continuity Planning sub-policy within the Information Security Policy.
Business continuity plans incorporate, but are not the same thing as, disaster recovery plans. Disaster recovery plans tend to include documentation for the restoration of systems, treating each system equally. Business continuity plans however include risk assessment and the option to mitigate those risks. They also include prioritisation of services and systems based on the criticality to the unit/University. Business continuity can therefore focus on prevention as much as on recovery and is ultimately about keeping critical services running.
Key to business continuity is therefore the identification of critical operational processes. These should be the processes which the unit/University relies on more than any others in order to achieve its primary goals. The requirements for continuity should also be identified including for example identification of specific processes, systems, staff, materials, transport etc. required for the successful operation of the service. Requirements for continuity should also include acceptable time periods for which services may be unavailable. Doing this will help to prioritise systems/services and be useful in ascertaining the level of resource that should be provided for mitigating risks.
Risks to core services should then be evaluated. These can be varied and should include any event that would have a significant impact on the operation of those services. These could be technical risks such as system failures, physical disasters such as floods, fires or explosions, or social/human issues such as pandemics. The business continuity management process should then include controls to reduce those risks, whether by preventative or reactive means, and to limit the consequences of damaging incidents. This may include, for example, suitable insurance or the implementation of additional preventive and mitigating controls (e.g. planning for a pandemic). One area that is often overlooked when considering business continuity is that of security breaches, as distinct from system failures, and the plans for each may be different. It is therefore imperative to have plans in place for the restoration of services following security breaches. These plans should take into consideration the requirements of the University, as laid out by OxCERT, for incident response (see Incident Handling).
Business continuity plans should typically include:
- Emergency procedures to describe the immediate action to be taken following an incident
- Fallback procedures to describe any actions that might be taken to maintain services on a temporary, or reduced capacity basis
- Resumption procedures to describe actions to return business to normal operational levels within a specified time period
- Testing procedures and schedules to put into practice plans before incidents happen.
Information, and all other assets, required for business processes should be readily available to those who need it in the event of such incidents. However, it is important to consider that business continuity plans will, by their nature, identify high-impact vulnerabilities in the unit and/or University. Such plans will likely be treated as sensitive or confidential and should therefore be handled accordingly. Business continuity plans should also be accessible in the event of any disaster scenario, so thought should be given to storing plans securely in remote locations.
The overall business continuity framework within a unit/the University should ensure that all roles and responsibilities are clearly defined. Business/service owners should be responsible for business continuity plans and for ensuring that plans are maintained and kept up to date. Business continuity plans should therefore be reviewed and audited on a regular basis and/or when significant changes occur.
Business continuity plans should also be tested on a regular basis as appropriate. This could include numerous techniques including discussion of certain scenarios, simulations, disaster recovery testing or even complete rehearsals of major events.
This section of the toolkit adds further detail to the corresponding Compliance sub-policy within the Information Security Policy.
As with all other aspects of this policy, a devolved model is used for compliance. Whilst the PRAC ICT sub-committee is responsible for monitoring compliance at a University level, the responsibility for the compliance within individual units is devolved to the head of department. Essentially this means that all users must be made aware of their responsibilities for implementing these policies.
Heads of department (usually via other delegated staff) are responsible for ensuring that there is a management framework for information security. Policies and procedures must be in place and clearly communicated to all users on a regular basis in order for compliance to be devolved down to individual users. Signing up to policies and procedures on a regular basis should therefore be done in a way that is both apparent to the user and auditable. For example, this could be done via an interactive online process and/or as part of annual reviews. Where users are unsure of their responsibilities, or are aware of shortcomings, they should report this to their line manager.
It is important that there is a consequence for failing to comply with these policies but that consequence will, naturally, depend on the specific circumstances. From an end-user point of view, disciplinary action is a possibility where policies are breached wilfully or as a result of negligence. Similarly failure to comply with policies, advice and guidance that leads to systems or accounts being compromised will result in those systems and/or accounts being blocked from the network and/or disabled. In such cases policies on Incident Response, and the requirements of OxCERT must be met before the account/system is enabled again.
Responsibility for this, regardless of the importance of the system/account in question will lie with the user/administrator of the account/system. Ultimately, all actions that occur in accordance with these policies will be supported at the highest level of the University.
The question is often asked as to how to enforce these policies and monitor compliance. Monitoring compliance is an exercise in information assurance and can be done in a number of ways (both internally and externally). It is difficult to prove compliance so it is therefore important to be able to detect breaches of policy. This may be via technical means such as monitoring logs and other intrusion detection techniques or by human means such as spot checks.
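As a very small example of the kind of technical monitoring that can help detect breaches of policy, the sketch below counts repeated failed SSH logins in a syslog-style authentication log. The log path, message format and threshold are assumptions that vary between systems, and dedicated intrusion detection tools are usually more appropriate in practice:

    import re
    from collections import Counter

    failures = Counter()
    with open("/var/log/auth.log") as log:        # typical location on Debian/Ubuntu systems
        for line in log:
            match = re.search(r"Failed password for (?:invalid user )?\S+ from (\S+)", line)
            if match:
                failures[match.group(1)] += 1

    for source, count in failures.most_common():
        if count > 20:                            # arbitrary threshold for this sketch
            print(f"possible brute-force attempt from {source}: {count} failed logins")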
One clear way to monitor compliance is audit and this can be done internally or externally depending on requirements. Audits can be in the form of technical audits (penetration testing, vulnerability scans etc.) or audits of policies and processes. External audit will usually not be necessary as the cost may outweigh the benefit. However it can lead to a greater degree of assurance and certification. This may be required for certain high-risk systems or where it is a contractual requirement. Further advice on audit can be sought from OxCERT. There must be procedures in place so that incidents are reported upwards and senior managers receive appropriate reports.
It is imperative, when handling personal or sensitive personal data, that the principles of the Data Protection Act 1998 are followed and, special care must be taken when data is to be transferred or stored outside of the European Economic Area. Full guidance and advice on compliance with data protection and privacy issues can be found on the University's Data Protection web pages.
It should be noted that the following is not legal advice. However there should be some awareness that there are certain legal issues that may affect certain policy decisions, and that may need to be dealt with via policies themselves. For example different countries have legal restrictions on the import, export and use of cryptographic technologies. Care should therefore be taken if travelling with cryptographic technology (e.g. laptops using whole disk encryption). One good source of information on this matter can be found on Professor Bert-Jaap Koops' web pages. For further advice please contact OxCERT.
Also relevant are the powers of law enforcement in the UK to require that decryption keys (or the relevant plaintext) should be presented under certain sections of the Regulation of Investigatory Powers Act (RIPA). Thought needs to be given to who would be responsible for providing keys and/or plaintext if such a request was made.