Title of Service: Co-location services - part of Shared Infrastructure Services

Status of Document: This document describes the co-location services offered in June 2012.

1. Purpose

  • This document sets out the Service Level Description (“SLD”) for the Shared Infrastructure Services (“SIS”, “the Services”) delivered to a customer (“the Customer”) and managed by Oxford University Computing Services (“OUCS”). Shared Infrastructure Services are grouped into the following:
    • Computing equipment co-location in the University Shared Data Centre (“USDC”) – “Co-location Services”.
    • Network provision within the USDC – “Data Centre Networking”.
    • Virtual machine (“VM”) and virtual data centre (“vDC”) provisioning within the SIS virtual infrastructure – “Virtual Infrastructure”.
  • Unless otherwise expressed in writing, this SLD will apply to all Services offered to the Customer. The schedules attached will form part of this SLD.
  • The OUCS Management Committee, or its successor body, will review this SLD regularly. Any addition or variation will be communicated with 90 days' notice.

2. Customer Responsibilities

  • The Customer will nominate up to three members of its staff who will be responsible for liaising with OUCS concerning the use of the Co-location Services. In normal circumstances only these nominated individuals would be expected to contact OUCS concerning the Services. The Customer will promptly notify OUCS should there be any change to these nominees.
  • The Customer must not interfere with aspects of the Services for which OUCS is responsible. OUCS reserves the right to charge for the time involved in resolving issues caused by members of the Customer's organisation taking action on any aspect of the Services that has been defined as the responsibility of OUCS.
  • All service usage is bound by the University of Oxford’s “IT Rules and Regulations” and the University’s “Disclaimer of Liability”.

3. Working Hours

  • Working Hours are defined as 09:00 to 17:00, Monday to Friday, excluding statutory public holidays and the period from 24th December to 1st January inclusive.

4. Fault Reporting

  • If a fault is notified within Working Hours, OUCS will commence investigation within one working hour (provided that no similar fault is also being handled by the same group).
  • For faults notified outside Working Hours, informal arrangements exist for staff to be called, but no funding is provided to make this a contractual obligation.
  • OUCS will use reasonable efforts to resolve any notified fault.
  • Faults should be reported to OUCS via:
    • E-mail to usdc@oucs.ox.ac.uk
  • OUCS will report significant faults/outages to the itss-announce mailing list and will also attempt to contact Customer contacts via email. The ITSS SMS system may also be used in the event of a major fault.

5. Request Fulfilment

  • OUCS will use reasonable efforts to respond to any request for the supply of Services within one working day.

6. Availability

  • Although no formal availability target is published, the Services are intended to be available at all times.
  • Outside Working Hours, OUCS will use reasonable efforts to ensure the Services are available.
  • OUCS reserves the right to deny Customer access to the Services on reasonable grounds. OUCS will endeavour to notify the Customer promptly should such a situation arise.

7. Scheduled Maintenance

  • OUCS will give a minimum of 5 days' notice of scheduled maintenance events.
  • Emergency maintenance may be carried out at any time; OUCS will attempt to inform Customers of such work at the first available opportunity.

8. Charges and Payments

  • Unless otherwise agreed, the Customer will be invoiced quarterly (at the end of January, April, July, and October) in arrears for all charges due. Intra-University payments must be made by journal transfer within the University’s financial system. Other invoices are due for payment within 30 days of the date of the invoice.
  • OUCS may vary the charges specified with effect from the end of the first 90 days, or any date thereafter, by giving the Customer 90 days' notice in writing of the new charges.

9. The Data Centre

  • The USDC is located in the basement of the Oxford Molecular Pathology Institute (“OMPI”).
  • The USDC is designed with N+1 resilience (each critical component has at least one independent backup component) to allow for close to 100% availability.
  • Racks within the USDC are pre-fitted, standard 19-inch racks, 600 mm wide and 1200 mm deep. Each rack allows for 38 rack units (“U”, 1.75 inches per unit) of usable space.
  • Each rack also features lockable doors and redundant power capabilities.
  • Equipment can be hosted in either:
    • A full private general rack
    • A split private general rack
    • A shared general rack
  • Physical proximity cannot be guaranteed for multiple pieces of Customer equipment hosted in shared racks.
  • Equipment supplied in pre-built racks, such as High Performance Computing (HPC) equipment, may be hosted by special request and negotiation, on a first-come, first-served basis.

10. Air Conditioning

  • The USDC is designed to operate with enclosed cold aisles.
  • OUCS will use reasonable efforts to maintain the temperature in the USDC cold aisles at 22 degrees Celsius +/- 5 degrees, and the relative humidity at 50% +/- 5%.
  • The Customer must ensure the cooling load per general rack (or pro-rata part thereof) does not exceed 6 kW without prior approval by the data centre staff. This is to ensure the appropriate distribution of any heat load within the data centre space.

11. Power Supply

  • Power Availability is defined as a period when power is available via either per-rack power feed to Customer equipment. Power is defined as unavailable if both per-rack power feeds are disrupted.
  • OUCS will use reasonable efforts to maintain Power Availability.
  • Failure of the utility grid to deliver power to both of the USDC incoming power feeds for a period longer than the UPS runtime (see below) is beyond the control of OUCS. If OUCS anticipates such a circumstance, it will use best efforts to contact Customers to ensure an orderly power-down of equipment.
  • Customer power consumption must not exceed 6kW per general rack or pro-rata part thereof without prior approval by the data centre staff.
  • Power is supplied via standard C13 connectors; C19 connectors are also available in full racks.
  • OUCS will provide up to 40 power connections (32x C13, 8x C19) for Customers renting a full rack, split evenly over the two USDC power feeds.
  • OUCS will provide up to 32 power connections for Customers renting a half rack, split evenly over the two USDC power feeds.
  • OUCS will provide up to 16 power connections for Customers renting a third of a rack, split evenly over the two USDC power feeds.
  • OUCS will provide 2 power connections for Customers co-locating an individual server, one on each USDC power feed.
  • Power consumption is either metered as kWh used or calculated from the 95th percentile of power usage; consumption will be monitored on a regular basis.
  • Power usage by equipment in general racks will be charged at the Utility Company supply rate multiplied by 1.5, to cover the additional cooling requirement. The supply rate naturally varies month by month and the Customer will be charged in accordance with this variation (an illustrative calculation follows this list).
  • The power supply and associated electrical systems have a five-yearly test cycle that requires the USDC and all hosted equipment to be powered down. OUCS will provide appropriate notice prior to this test and will endeavour to ensure that this test occurs outside of Working Hours, most likely over a weekend. If possible, OUCS will arrange to have the two power feeds tested independently to allow equipment to remain powered up, but at-risk, during testing.
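  • Illustrative example: the Python sketch below shows how a monthly power charge could be derived from periodic power readings under the charging approach described above. It is a minimal illustration only, not the OUCS metering implementation; the sample interval, the readings and the utility rate are hypothetical values chosen purely to demonstrate the arithmetic.

        # Illustrative sketch of the power-charging arithmetic described above.
        # Not the OUCS metering system: the sample interval, readings and
        # utility rate below are hypothetical values for demonstration only.

        def kwh_from_readings(readings_kw, interval_hours):
            """Energy used (kWh) from power readings (kW) taken at a fixed interval."""
            return sum(readings_kw) * interval_hours

        def percentile_95(readings_kw):
            """95th-percentile power draw (kW); simple nearest-rank approximation."""
            ordered = sorted(readings_kw)
            return ordered[int(0.95 * (len(ordered) - 1))]

        def monthly_charge(readings_kw, interval_hours, utility_rate_per_kwh):
            """Charge = energy used x utility supply rate x 1.5 (cooling overhead)."""
            energy_kwh = kwh_from_readings(readings_kw, interval_hours)
            return energy_kwh * utility_rate_per_kwh * 1.5

        # Example: hourly readings for a rack drawing roughly 4 kW all month.
        readings = [4.0] * 24 * 30        # 30 days of hourly samples (assumed)
        rate = 0.09                       # hypothetical utility rate, GBP per kWh
        print(f"95th-percentile draw: {percentile_95(readings):.2f} kW")
        print(f"Monthly power charge: GBP {monthly_charge(readings, 1.0, rate):.2f}")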

12. UPS

  • The two USDC power feeds are protected by two independent UPS systems. OUCS will use reasonable efforts to maintain these systems including ensuring appropriate periodic maintenance is carried out according to the manufacturer’s recommendation.
  • The target UPS autonomy (runtime on battery power) is 12 minutes.

13. Other Systems

  • OUCS will use reasonable efforts to maintain the following other systems fitted to the USDC:
    • Fire detection and suppression
    • Water leak detection
    • CCTV
    • Physical security and access control
    • Environmental monitoring

14. Network Connectivity

  • Standard Network Connections and Fibre Network Connections are defined in the Data Centre Networking Section.
  • OUCS will provide up to 22 Standard Network Connections for Customers renting a full rack.
  • OUCS will provide up to 11 Standard Network Connections for Customers renting a half rack.
  • OUCS will provide up to 7 Standard Network Connections for Customers renting a third of a rack.
  • OUCS will provide 1 Standard Network Connection for Customers co-locating an individual server.
  • Additional Standard Network Connections and Fibre Network Connections are available on a first-come first-served basis as defined in the Data Centre Networking Section.
  • For private racks where the network port density exceeds the specification above, additional active data centre networking equipment can be situated in the rack to increase port availability. This will reduce the amount of rack space usable by the Customer and will incur additional charges.
  • Customer-managed network switching equipment is only permitted in racks provided with a single logical connection to a single campus VLAN, presenting a single logical point of ingress for the network into the Customer's equipment. NB: this restriction is in place to prevent network loops affecting Services provided to other Customers.
  • Customer-managed network equipment must have Per-VLAN Spanning Tree (PVST) enabled at all times.

15. Exclusions

  • OUCS is not responsible for the maintenance of the co-located hardware or any software that runs upon it.

16. Physical Access

  • OUCS will provide systems to allow for physical access on:
    • A 24-hour basis for Customers renting a full rack or part-rack
    • An accompanied basis, during Working Hours, for a Customer co-locating one or more pieces of equipment in a shared rack.

17. Customer Responsibilities within the USDC

  • The Customer must ensure that all cabinets are cabled to allow appropriate levels of airflow.
  • The Customer must ensure that blanking plates, brush plates and other airflow management devices are used as directed by the USDC staff to ensure a stable hosting environment.
  • The Customer must not bring packaging associated with equipment into the USDC. All equipment must be unpacked and prepared outside of the data centre space.
  • The Customer must obey all additional guidance related to Health and Safety and all other reasonable requests required to maintain the USDC in good working order.
  • All Customer equipment should provide lights-out management capabilities, or be attached to an IP KVM device or other appropriate management system, to enable remote management.
  • Customers must ensure that all doors are physically secured and returned to their original position when entering and exiting the USDC and the enclosed cold aisles within the data centre space.
  • Failure to comply with these responsibilities may result in termination of the Services.

18. Co-location Charges

  • The charges are defined in the price schedule available on the OUCS website.
  • All charges are levied monthly (or part thereof) and will be applied pro-rata for the first month.
  • Power usage is charged at 1.5x the supply price; this equates to around 13.5p per kWh (correct as of June 2011).
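  • Illustrative example (the load figure is hypothetical): a server drawing a constant 300 W for a 30-day month consumes 0.3 kW x 720 h = 216 kWh, which at 13.5p per kWh corresponds to a power charge of roughly £29 for that month.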