2. Network Infrastructure
The connection of the University network to Janet (SuperJanet3) is provided by means of a 34 Mbps ATM link shared with Oxford Brookes. Outgoing traffic bandwidth is capped at 8 Mbps by UKERNA (which operates Janet) on the instruction of JISC. The incoming bandwidth is unconstrained, except by the effective 25 Mbps capacity of the line and by the capacity consumed by Brookes (which has at times risen to as much as 30% of Oxford's).
It is noteworthy that the patterns of usage of incoming and outgoing traffic through the day are remarkably similar, indicating activity matched to a UK work cycle [figures 3 & 4]. The peaks and troughs of incoming traffic are predictable (the troughs falling in the early hours of the morning, when Oxford users are at their quietest). But the fact that the outgoing traffic follows much the same pattern suggests it too is largely generated by Oxford users, rather than by, say, American surfers devouring Oxford's Web pages!
UKERNA's contract with Cable & Wireless for Janet (SuperJanet3) expires at the end of March 2001. Considerable effort, coordinated by JISC, has been expended nationally to develop a replacement. Comprehensive Spending Review funds were secured to provide a capital injection sufficient to control ongoing costs. The general principle was expounded (though not actually made compulsory) that Janet henceforth would be responsible for interconnecting regional networks (which usually carry the misnomer `MANs').
OUCS undertook investigations, in conjunction with Oxford Brookes and UKERNA, to determine the most effective way for Oxford to be connected to this proposed national backbone. At one time, it appeared possible that Oxford might consider joining the proposed MAN Lense (covering the area bounded by Guildford, Southampton and Brighton). However, this made no real sense (economically or otherwise), since Oxford's traffic is national and international, not regional.
Ultimately, UKERNA agreed to develop a `Thames Valley Network', with a connection to the backbone at Reading, and providing for the two Oxford universities, together with Reading and Rutherford Appleton Laboratory. UKERNA is providing the funds, from its national budget, and is managing the acquisition and installation (and possibly its ongoing operation). Tenders were called, and the contract awarded to Scottish and Southern Electric, which will provide 622Mbps lines interconnecting the 3 sites (thus providing valuable resilience, as well as a useful direct link between Oxford and RAL). These will be installed in January 2001. A valuable feature of the contract is the ability to upgrade at modest cost to 2.5Gbps circuits within 2 years if needed. The links to be provided will be based on DWDM technology, as for Janet itself.
The increase in external bandwidth does, however, raise a number of concerns:
- If Parkinson's Law continues to hold (capacity is consumed at whatever level it is provided), then there may be concerns about the ability of internal networks to cope; the new Backbone (see below) is timely, but may need a `mid-term upgrade' earlier than expected; the biggest bottleneck may prove to be departmental or college networks, some of which are still operating at 10Mbps;
- The current technology employed in the Oxford Firewall is unlikely to be able to cope if traffic does grow to fill the 622Mbps lines; much of Oxford's defences against hacker attack are dependent on this device; it may be very expensive to upgrade it to technology capable of keeping pace with this level of traffic volume growth;
- If the proportion of trans-Atlantic traffic remains much the same (about 40-50% of total incoming traffic), as seems likely, then there could be a dramatic increase in the charges levied on Oxford by JISC (see below). In any case, because of Oxford's relatively well-developed network, it is likely to be able to ramp up its usage faster than most other HEIs; with charges based on relative shares of traffic, this could increase Oxford's charge substantially. The only redeeming factor is that, if charges continue to be based on the previous year's usage, there will be a lag before increased usage translates into increased charges.
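The mechanism behind this last concern can be illustrated with a small sketch. The apportionment formula and all the traffic figures below are purely hypothetical; JISC's actual charging algorithm is not reproduced here. The point is simply that charges proportional to the previous year's share of traffic reward a fast-growing institution with a one-year reprieve, then a larger bill.

```python
# Hypothetical sketch of usage-based charging with a one-year lag.
# Each institution pays a share of the total cost proportional to its
# share of the PREVIOUS year's incoming traffic.

def apportion_charges(total_cost, usage_last_year):
    """Split total_cost among institutions in proportion to last year's usage."""
    total_usage = sum(usage_last_year.values())
    return {inst: total_cost * usage / total_usage
            for inst, usage in usage_last_year.items()}

# Illustrative traffic figures in terabytes (entirely invented).
usage_1999 = {"Oxford": 40, "OtherHEIs": 160}
usage_2000 = {"Oxford": 80, "OtherHEIs": 200}   # Oxford ramps up faster

charges_2000 = apportion_charges(1_000_000, usage_1999)  # based on 1999 usage
charges_2001 = apportion_charges(1_000_000, usage_2000)  # growth feeds through

print(charges_2000["Oxford"])   # 200000.0 — 20% of the total
print(charges_2001["Oxford"])   # rises to about 28.6% a year later
```

Doubling usage in 2000 costs nothing in that year's bill, but lifts Oxford's share from 20% to roughly 28.6% the year after.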
Traffic patterns will need to be carefully monitored once TVN is installed so as to be ready to deal with such problems before they become critical. OUCS has also been investigating the possibility of obtaining a secondary Internet connection from a commercial provider, to which it might endeavour to divert much recreational traffic; at this stage, this does not really seem to be cost-effective, but is being kept under review.
It is very difficult to know how these charges may vary in future. The present arrangement has a number of anomalies (e.g. see above), which are certain to be redressed before long. During the year, JISC undertook a consultation exercise to seek opinion from the HE community; Oxford responded to this, among other things pressing for continued (or greater) top-slicing, and it is thought that around 50% of the community also pressed for a top-slicing model. However, there was considerable divergence of opinion on other related matters, with a vocal minority seeking full cost recovery based on actual current usage. Furthermore, it is understood that the Funding Councils are very much against further top-slicing. JISC has as yet given little indication of how it will respond to the quest for an improved cost-recovery model.
For 2001/2002, indications are that charges will be formulated in the same way as for 2000/2001. It seems very likely that, subsequently, charges will take into account all incoming traffic, not just trans-Atlantic traffic (originally, this was the fastest-growing component of Janet's costs, but this is no longer the case). It is also not clear whether the present incentive to use the national cache will remain in the longer term. This is something that OUCS will continue to monitor and influence as best it can [figure 7].
The hardware used for the Web caching service, the ACEdirector, routes requests to one of the seven proxy servers connected to it; these in turn pass requests on to the national cache. Requests for domains within the UK are sent directly.
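The routing policy just described can be sketched as follows. This is a simplified model only: the ACEdirector is a hardware load balancer, and the selection method shown (a static spread of clients across the seven proxies) and the proxy hostnames are assumptions made purely for illustration.

```python
from urllib.parse import urlparse

# Hypothetical names for the seven local proxy servers.
PROXY_SERVERS = [f"wwwcache{i}.ox.ac.uk" for i in range(1, 8)]

def route_request(client_ip: str, url: str) -> str:
    """Decide where a Web request should be sent.

    UK domains are fetched directly; everything else is spread across
    the seven local proxies, which themselves use the national cache.
    """
    host = urlparse(url).hostname or ""
    if host.endswith(".uk"):
        return "DIRECT"                            # UK traffic bypasses the caches
    index = hash(client_ip) % len(PROXY_SERVERS)   # simple static spread of clients
    return PROXY_SERVERS[index]

print(route_request("163.1.2.3", "http://www.bbc.co.uk/news"))  # DIRECT
```

A real load balancer would also track proxy health and remove a failed server from the rotation; that detail is omitted here.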
During Hilary Term, about 55 million requests for Web objects were handled by the cache farm each week. During Trinity this increased to over 60 million per week. The amount of data served to Web browsers per week was 350 megabytes in Hilary, and 400 megabytes in Trinity [figures 8, 9 & 10]. Approximately 25% of the data requested by local Web clients is served from the local caches.
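To put the weekly totals above in perspective, a quick back-of-the-envelope calculation (using only the figures quoted in the preceding paragraph):

```python
# Trinity Term cache-farm load, from the report's own figures.
requests_per_week = 60_000_000
seconds_per_week = 7 * 24 * 3600

avg_requests_per_sec = requests_per_week / seconds_per_week
print(round(avg_requests_per_sec))  # roughly 99 requests every second, on average

# About 25% of requested data is served from the local caches,
# so the remainder must still be fetched from upstream.
local_hit_ratio = 0.25
upstream_fraction = 1 - local_hit_ratio
print(upstream_fraction)            # 0.75
```

Since traffic peaks well above the daily average [figures 8, 9 & 10], the peak request rate on the proxies will be considerably higher than this mean figure.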
Gigabit Ethernet does not have the same degree of intrinsic resilience as FDDI, so care was taken to design a configuration that offered similar security. The network was progressively deployed during the year, with minimal difficulty and almost no disruption to traffic. At year end, about 25% of networks remained to be transferred. The target for completion, December 2000, seems likely to be met.
As far as the geographic distribution of the backbone is concerned [figure 15], there has only been very modest development this year. In any event, resources have been devoted very substantially to deployment of the Gigabit Ethernet [figure 16].
For the longer term, OUCS has no plans to reduce or abandon this service, as a few universities appear to be doing. In such a case, users would have to dial up another ISP and gain connection that way. Economically, this makes some sense; however, the chief impediment is that access to many electronic resources is governed by the IP address of the user. Dialling up OUCS ensures the user appears to be within Oxford, whereas dialling up an external ISP would result in access being barred. Since Oxford has probably the largest collection of electronic resources in the country, this is a much greater factor for us than for most others.