OUCS Annual Report
2. Network Infrastructure
The University's connection to SuperJanet3 is nominally at 8Mbps (using a 34Mbps ATM connection which is shared with Oxford Brookes University). However, in the interests of removing traffic from Janet as quickly as possible, Oxford is able to utilise up to 24Mbps for incoming traffic (this equates to about 270Gbytes/day). This is just as well, since incoming traffic continues to grow (about doubling every year), and in peak months (eg May-99) exceeds 60 Gbyte/day on average, with some days exceeding 80 Gbytes [figure 1]. Of course, traffic is not uniform through the day, with a major trough from 3am to 9am, and peaks between 2pm and 6pm; these peaks exceed 50% of the incoming capacity limit [figure 2].
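The capacity figure above can be cross-checked with a little arithmetic. The following sketch (purely illustrative; the function name is ours) converts the 24Mbps incoming allocation into a daily byte budget, assuming decimal units (1Mbit = 10^6 bits, 1Gbyte = 10^9 bytes), and estimates how long the stated doubling growth rate would take to exhaust it starting from a 60Gbyte/day peak month:

```python
import math

SECONDS_PER_DAY = 24 * 60 * 60

def daily_capacity_gbytes(mbps: float) -> float:
    """Gbytes/day deliverable at a sustained rate of `mbps` megabits/s."""
    return mbps * 1e6 * SECONDS_PER_DAY / 8 / 1e9

capacity = daily_capacity_gbytes(24)  # roughly 259 Gbytes/day ("about 270")

# If incoming traffic roughly doubles every year, a 60 Gbyte/day peak
# month reaches the incoming limit after about two years.
years_to_limit = math.log2(capacity / 60)

print(f"{capacity:.0f} Gbytes/day capacity; ~{years_to_limit:.1f} years of headroom")
```

On these assumptions the limit would be reached around two years after the May-99 peak, which is consistent with the need for the budgetary provision described below.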
Discussions with UKERNA (which operates Janet) are ongoing, to ensure that suitable steps can be taken as the limit of the incoming bandwidth is neared. The General Board has made a budgetary provision in 1999/2000 to enable any expense involved to be funded.
Outgoing traffic remains fairly stable, at about 25Gbyte/day on average (reaching 40Gbytes on peak days) [figure 3].
The Funding Councils instructed their Joint Information Systems Committee (JISC) to pass on some charges for the use of Janet. This was done in order to impose a measure of dampening on the growth in traffic, especially over the (then) expensive US link. As from 1-Aug-98, therefore, charges are being levied on the University for all incoming traffic passing across the US link, except traffic arriving between 1am and 6am (seven days a week) and any directed through the national Web cache. The rate was set for this year at 2p/Mbyte, subsidised by 50% by HEFCE; charges were billed quarterly in arrears. It should be noted that it is impossible to judge in advance whether fetching from a particular site will involve traffic across this link (eg most Far East traffic comes via the US, and ".com" sites are not exclusively American); even relatively local traffic can sometimes be directed across the US link, depending on dynamic traffic routing.
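The charging model just described can be sketched as follows. The rate (2p/Mbyte), the overnight and cache exemptions, and the 50% HEFCE subsidy are taken from the text; the function name and the traffic figures in the example call are illustrative only:

```python
RATE_PENCE_PER_MBYTE = 2.0
HEFCE_SUBSIDY = 0.5  # half of each charge met by HEFCE

def quarterly_charge_pounds(total_us_mbytes: float,
                            night_mbytes: float,
                            cached_mbytes: float) -> float:
    """Charge (in pounds) for one quarter's incoming US-link traffic.

    Traffic arriving between 1am and 6am, and traffic fetched via the
    national Web cache, are exempt; the remainder is billed at
    2p/Mbyte, of which HEFCE meets half.
    """
    chargeable = total_us_mbytes - night_mbytes - cached_mbytes
    pence = chargeable * RATE_PENCE_PER_MBYTE * (1 - HEFCE_SUBSIDY)
    return pence / 100

# Illustrative quarter: 2,400 Gbytes across the US link, of which
# 400 Gbytes arrived overnight and 200 Gbytes came via the cache.
print(quarterly_charge_pounds(2_400_000, 400_000, 200_000))  # 18000.0
```

At that rate, four such quarters would cost around £72,000, broadly consistent with the "up to £100,000" annual estimate cited below.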
In response to these measures, OUCS took a number of steps during the year, described below.
Since Oxford's network is one of the largest and most complex in the country (indeed, perhaps in the whole of Europe), matching traffic data exactly to departments, units or colleges is very difficult. Furthermore, there seemed no sensible means of charging for public-access computers (in OUCS, in libraries and in colleges), nor for dial-up connections, nor for use via shared computers such as Ermine and Sable in OUCS. These factors, combined with the difficulties of handling charges for some units in Oxford, led the University's Resources Committee to agree to fund the charges centrally, at least for 1998/9. They were expected to amount to up to £100,000, payable quarterly in arrears, but in the event JISC waived the bill for the final quarter (due in August 1999) because a new charging regime (fixed payments based on past usage, payable in advance) was to be introduced in 1999/2000. In any case, although traffic had grown faster than anticipated (so that recoveries at the 2p/MB rate were greater than anticipated), the cost of extra US capacity fell more sharply still. This moderation in the cost of traffic, together with the need for institutions to be able to budget for it, resulted, following a campaign led by Oxford, in the change to fixed-cost charging in 1999/2000.
Notwithstanding the introduction of charging and the above measures, Oxford's US traffic continued to grow, reaching a peak of over 900Gbytes in May 99, prior to the usual summer decline (last year, the peak was in July, with nearly 600Gbytes of incoming traffic) [figure 4]. Nevertheless, Oxford's share of the total national incoming US traffic declined slowly, from about 4% at the start of the year, to about 3.5% by July 99 [figure 5]. A close watch is kept on this overall comparative situation (to ensure Oxford does not fall behind other universities in its endeavours to constrain traffic growth), as well as on a selection of other universities [figure 6]. Based on this evidence, Oxford's position seems to be satisfactory.
As indicated above, the existing Web cache was upgraded during the year in anticipation of significant additional usage following the national introduction of traffic charges (note that Web pages comprise 60-70% of all incoming traffic across the US link [figure 7]). Directing a Web request via the cache ensures that, if the same page has recently been fetched by someone else, it is supplied locally rather than fetched again. The cache has proved very reliable and performed its task efficiently. Although the number of computers using it grew steadily during the year [figure 8], it still peaked at just over 4,000 (out of over 20,000). In a peak month it handled 25 million requests representing 160 Gbytes of traffic [figure 9 and figure 10]. Nevertheless, at best the cache only achieved a 30% "hit rate", ie in only 30% of cases was the sought page already available [figure 11].
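The financial value of the cache can be estimated roughly from these figures. The 160 Gbytes/month and 30% hit rate are from the text; treating the request hit rate as a byte hit rate, and assuming every hit would otherwise have crossed the charged US link, are simplifying assumptions that make this an upper bound:

```python
EFFECTIVE_RATE_PENCE_PER_MBYTE = 1.0  # 2p/Mbyte less the 50% HEFCE subsidy

def gbytes_avoided(served_gbytes: float, hit_rate: float) -> float:
    """Gbytes served locally from cache hits rather than re-fetched."""
    return served_gbytes * hit_rate

def monthly_saving_pounds(served_gbytes: float, hit_rate: float) -> float:
    """Upper-bound charging avoided in a month, in pounds."""
    saved_mbytes = gbytes_avoided(served_gbytes, hit_rate) * 1000
    return saved_mbytes * EFFECTIVE_RATE_PENCE_PER_MBYTE / 100

print(gbytes_avoided(160, 0.30))         # roughly 48 Gbytes from cache
print(monthly_saving_pounds(160, 0.30))  # roughly £480/month, at most
```

Even this upper bound is modest against the anticipated bills, which helps explain the pressure, noted below, to extend use of the cache beyond the 4,000 computers then using it.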
Towards the end of the year, it became clear that steps would have to be taken to enforce use of the cache, in view of the financial savings to be made. Consultations with the community commenced, and plans were laid for upgrading the cache in preparation.
The Backbone network comprises an FDDI ring with several "spurs", one in particular connecting all University departments located in hospitals in the Headington area [figure 12]. The number of networks connected grew only modestly [figure 13], but the number of computers connected [figure 14] and the amount of traffic carried [figure 15] continued to show substantial growth. There is now only a short period during the day, 4am to 8am, when there is a relative lull in traffic [figure 16].
The Backbone predominantly utilises FDDI over University-owned optic fibres in University-owned ducts, so incurs minimal recurrent costs. This is a superb heritage of the University's decision to acquire a restricted telecommunications licence in 1986. In addition to the FDDI network, a few other technologies are now also employed where it is not cost-effective to extend the duct network. These include spread-spectrum microwave wireless LANs, used to connect a few buildings in North Oxford off the path of the duct network, a few in the vicinity of The Plain and along Iffley Road, and (this year) Templeton College. They also include a microwave link to the Begbroke site occupied by Materials and others; this also carries voice traffic using Voice-over-IP technology [figure 17].
By and large the Backbone has performed remarkably reliably this year, as throughout its life. However, faults have occurred in a couple of nodes during the year; these caused disruption to those connected via those nodes, one taking a couple of days to restore. A more damaging break occurred in December 1998, when a digger cut through the duct taking the network to Headington. Initially, this only severed the connection to Oxford Brookes (cutting them off from Janet), but it was necessary to sever the connections to the Headington-based University medical departments in order to restore the cable. The break for University departments only lasted 5 hours, but that for Brookes lasted 36 hours. Fortunately, it was at a very quiet time for that university. This experience prompted the development of plans for an alternative duct route to Headington to remove this single point of failure; a bid for funding in HEFCE's MAN initiative to do this proved unsuccessful.
The current Backbone technology (FDDI running at 100Mbps) was first installed in 1991/2, and has proved immensely reliable and well able to handle the increased loads. However, it is clear from the rate of growth and the daily utilisation pattern [figures 15 and 16] that it will shortly need substantial upgrading. Furthermore, the faults described above have underscored the fact that many backbone components are now 8 years old, and reliability must be expected to decline.
OUCS had been building up reserves from recurrent Gandalf charges for several years, but these would still not be adequate for the replacement required. Accordingly, a funding bid was made to the University's Resources Committee; this proved successful, and work commenced late in the year to prepare tenders.
The OUCS dial-up service enables authorised users with a modem to access the University network from any location (world-wide) with a phone line. It supports a wide range of modem speeds, up to 56Kbps. It also supports ISDN connections, used by a few colleges and individuals. Additional lines were installed during the year, bringing the total to 145 by July 99 [figure 18]. This is necessary to keep pace with the increased numbers of users registered for and using the service [figure 19], and the numbers of calls made [figure 20].
This service has also proved to be reliable and effective during the year, due in no small measure to the attention paid to monitoring it by the network team.
Consideration was given by the IT Committee during the year to the possibility of restricting use of this service, or imposing charges upon one or more segments of the user population. There was particular concern that some units of the University might be encouraging use of this service in place of providing adequate Ethernet connectivity (the IT Strategy's preferred option). In the end, it was agreed to achieve the appropriate strategic outcome not through charges but through persuasion (which has indeed proved effective).