A virtual datacentre (vDC) offers a pool of compute, storage and IP address resources that the customer can allocate to VMs as they see fit. In vSphere terms, it is essentially a resource pool with associated storage and a pool of IP addresses.
A vApp is a container holding one or more virtual machines, optionally with an internal network directly connecting the VMs within it. A vApp also shares some functionality with virtual machines: it can be powered on and off, and it can be cloned. In a virtual datacentre, all VMs you deploy must be contained within a vApp; however, a vApp containing a single VM behaves, to all intents and purposes, exactly like a solitary VM.
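These power operations map onto the vCloud Director REST API as simple POSTs against the vApp's power-action links. A minimal sketch, assuming an illustrative host, vApp identifier and API version (the real values come from your organisation's vCloud endpoint):

```python
# Sketch: powering a vApp on or off via the vCloud Director REST API.
# The host, vApp id and API version below are illustrative assumptions;
# the paths follow the vCloud API convention
# POST {vApp-href}/power/action/{powerOn|powerOff}.

def power_action_request(vapp_href, action):
    """Build the (method, url, headers) triple for a vApp power action."""
    if action not in ("powerOn", "powerOff"):
        raise ValueError("unsupported action: %s" % action)
    return (
        "POST",
        "%s/power/action/%s" % (vapp_href, action),
        # The Accept header selects the vCloud API version (assumed here).
        {"Accept": "application/*+xml;version=5.5"},
    )

method, url, headers = power_action_request(
    "https://vcloud.example.org/api/vApp/vapp-1234", "powerOn")
print(method, url)
```

The request itself would then be sent with your preferred HTTP client, authenticated with the session token issued by your vCloud organisation login.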
Catalogs contain either virtual machines (vApps), virtual machine templates (vApp Templates) or media used to create virtual machines (ISO images, etc.). Each customer organisation has a private catalog, and a public catalog is also available, offering ISOs of common Linux distributions and standard vApp templates. See the VMware knowledge base article Catalogs in vCloud Director for further information.
Resources can be provided to a virtual machine for a limited time rather than indefinitely: such leases must be renewed, or the resources will be reclaimed. By default, leases in a virtual datacentre are set to never expire, but this can be changed by the organisation's virtual datacentre administrator as necessary.
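Lease behaviour amounts to a small piece of arithmetic: a lease length of zero means the lease never expires (the default in our virtual datacentres), and a non-zero lease expires a fixed interval after it was last renewed. A minimal sketch, with purely illustrative dates and lease lengths:

```python
from datetime import datetime, timedelta

def lease_expiry(last_renewed, lease_seconds):
    """Return the expiry time of a lease, or None if it never expires.

    A lease length of 0 seconds is treated as "never expires", matching
    the default in our virtual datacentres.
    """
    if lease_seconds == 0:
        return None  # never expires
    return last_renewed + timedelta(seconds=lease_seconds)

renewed = datetime(2014, 6, 1, 9, 0, 0)
print(lease_expiry(renewed, 0))              # default: never expires
print(lease_expiry(renewed, 7 * 24 * 3600))  # a one-week lease
```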
A connection to the Campus network is provided to virtual datacentres in the Campus cluster, allowing your college or departmental VLAN to be extended into the private cloud. It is delivered as an additional annexe connection, with the cost rolled into the data centre charges; as such, the cost typically associated with enabling a new Frodo port is included.
The Data Centre Network is a fully managed network provided to virtual datacentres in the Datacentre cluster. It provides additional resilience over the Campus Network, as well as greater bandwidth (it is delivered over 10Gbit networking infrastructure).
Under normal circumstances, one virtual datacentre is available per customer, and it can be resized to accommodate changing requirements as necessary. However, we recognise that there may be a need to separate production and test/development workloads, so if required we can offer a customer a second virtual datacentre for this purpose. Note that roles and permissions may be set within a virtual datacentre to provide separation of privilege (for example, restricting console access to particular users), and this can be integrated with a unit's Active Directory (or other LDAP-compliant directory service).
Customers with a virtual datacentre (on either the Campus or Datacentre cluster) may also request one or more private internal networks. By default these have no external connectivity, although a VM with two virtual network cards may be deployed to connect an internal network to the external datacentre network. Internal networks may be useful for management traffic, cluster heartbeats, etc.
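The dual-NIC arrangement described above is ordinary Linux routing and NAT. A minimal sketch that generates (rather than runs) the usual commands, with assumed interface names and an assumed internal subnet:

```python
# Sketch: a dual-NIC Linux VM routing between a private internal network
# and the external datacentre network. Interface names and the internal
# subnet are illustrative assumptions; the function only builds the
# command strings, it does not execute anything.

def gateway_commands(internal_if="eth1", external_if="eth0",
                     internal_subnet="192.168.10.0/24"):
    return [
        # let the kernel forward packets between the two interfaces
        "sysctl -w net.ipv4.ip_forward=1",
        # masquerade internal addresses behind the external interface
        "iptables -t nat -A POSTROUTING -s %s -o %s -j MASQUERADE"
        % (internal_subnet, external_if),
        # permit outbound traffic from the internal network...
        "iptables -A FORWARD -i %s -o %s -j ACCEPT"
        % (internal_if, external_if),
        # ...and return traffic for established connections
        "iptables -A FORWARD -i %s -o %s "
        "-m state --state ESTABLISHED,RELATED -j ACCEPT"
        % (external_if, internal_if),
    ]

for cmd in gateway_commands():
    print(cmd)
```

The generated commands would be run as root on the dual-NIC VM, with the internal network's other VMs using that VM's internal address as their default gateway.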
At present we do not offer image-level backup of virtual machines hosted in a virtual datacentre, with the exception of VMs managed by NSMS under a Server Management agreement. We expect to offer self-service image-level backup ("backup as a service") from early next year. In the meantime, we recommend that VMs are backed up using a guest client, such as the one offered by the HFS service.
The private cloud is built across two sites (Banbury Road and South Parks Road), with part of the service delivered from each site and data at each site replicated to the other. All components in the cloud are redundant, up to and including the sites themselves. In the event of a failure, all data is secure (replication is asynchronous, to within the last minute or so), and the cloud team will evaluate the nature and duration of the failure. In the event of complete site failure, workloads running on the failed site will be brought up on the other site, and all services will be offered from one site for the duration of the outage.