Where does the Internet’s network technology reach its limits?
Hoogendoorn: The Internet today is a best-effort network. This means that while the Internet endeavors to deliver data packets as rapidly as possible, it does not guarantee their delivery. In particular, when congestion occurs in the IP network, the excess packets are simply discarded, but the network node can keep on operating. When a link in the network fails, packet loss also occurs until the routing protocols have found new forwarding paths to the affected destinations. End-to-end protocols (such as TCP) have learnt to cope with such packet loss by retransmitting lost packets, and by slowing down the offered traffic when repeated loss occurs. It is precisely this simple and robust behavior that has made the Internet such a successful technology for connectionless services. However, in connection-oriented real-time applications, for example voice or video, the quality of the service suffers almost immediately when packets are lost, up to complete loss of service. Retransmitting lost packets at a later time is of course no solution for such services.
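The loss-recovery behavior described above can be illustrated with a minimal sketch. The function name, the fixed initial timeout, and the random loss model are invented for illustration; real TCP derives its retransmission timeout from measured round-trip times and uses far more sophisticated congestion control.

```python
import random

def send_with_retransmit(packets, loss_rate, seed=42, max_tries=8):
    """Deliver packets over a lossy channel, retransmitting on loss and
    doubling the timeout after each repeated loss (TCP-like backoff)."""
    rng = random.Random(seed)
    delivered = []
    waited = 0.0
    timeout = 1.0                      # hypothetical initial timeout
    for pkt in packets:
        for _ in range(max_tries):
            if rng.random() >= loss_rate:   # packet got through
                delivered.append(pkt)
                timeout = 1.0               # reset after success
                break
            waited += timeout               # wait, then retransmit
            timeout *= 2                    # exponential backoff

    return delivered, waited

data, wait = send_with_retransmit(list(range(5)), loss_rate=0.3)
```

The exponential backoff is what "slows down the offered traffic when repeated loss occurs"; every packet eventually arrives, which is acceptable for file transfer but useless for a live voice stream.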
What are today’s visions for overcoming these technical barriers, and what future solutions are being worked on around the world?
Hoogendoorn: There are a number of techniques which can be used to alleviate the situation. Some are available already, while others will find their way onto network equipment soon. For example, in order to provide Quality of Service (QoS), priorities can be used to decide which packets should be discarded first in the case of congestion; this is the principle behind Differentiated Services (DiffServ). Another approach is to use Multiprotocol Label Switching (MPLS) to provide a reserved path from source to destination. Of course, this by itself does not yet guarantee QoS, but it reduces the problem to that of ensuring that the offered traffic to a specific destination does not exceed the corresponding reserved path capacity. However, the number of paths to each specific destination, and thus the effort of path configuration, increases proportionally with the number of interconnected sites.
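The DiffServ idea of discarding low-priority packets first under congestion can be sketched as a toy bounded queue. The class name, capacity, and traffic classes below are invented for illustration; a real router uses per-class queues and active drop policies such as WRED rather than a single heap.

```python
import heapq

class DiffServQueue:
    """Toy bounded queue: when congested, discard the packet with the
    lowest priority first (the DiffServ principle, greatly simplified)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []        # (priority, seq, payload); lowest priority on top
        self.seq = 0
        self.dropped = []

    def enqueue(self, priority, payload):
        heapq.heappush(self.heap, (priority, self.seq, payload))
        self.seq += 1
        if len(self.heap) > self.capacity:   # congestion: drop lowest priority
            _, _, victim = heapq.heappop(self.heap)
            self.dropped.append(victim)

q = DiffServQueue(capacity=2)
q.enqueue(0, "best-effort")   # priority 0 is discarded first
q.enqueue(2, "voice")
q.enqueue(1, "video")
# the queue now holds voice and video; the best-effort packet was discarded
```

The point is that congestion still causes loss, but the operator chooses which service class absorbs it.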
Apart from Quality of Service, we also need to address faster failure detection and recovery. The Protocol Liveness Protocol which is now being discussed in the Internet Engineering Task Force (IETF) will make it possible to detect any kind of link failure in some tens of milliseconds. A quick reaction to circumvent the failure is also required. There is a limit to how fast routing protocols (such as OSPF) can be made to react, so techniques for local failure reaction are necessary. Existing techniques such as Equal-Cost Multipath (ECMP) and the MPLS fast-reroute feature allow traffic to be diverted to the remaining available links. However, these techniques require complex engineering to ensure that all link failures are covered and that sufficient capacity is available on the alternative paths after a link failure.
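ECMP's local reaction to a link failure can be sketched as hash-based link selection. The flow tuple and link names below are invented, and real routers compute such hashes in forwarding hardware; this is only meant to show why surviving links automatically pick up the diverted flows.

```python
import hashlib

def pick_link(flow, links):
    """Hash a flow identifier onto one of the available equal-cost links
    (the ECMP principle).  When a link fails, remove it from `links` and
    the same hash redistributes the affected flows over the survivors."""
    digest = hashlib.sha256(repr(flow).encode()).digest()
    return links[int.from_bytes(digest[:4], "big") % len(links)]

flow = ("10.0.0.1", "10.0.0.2", 6, 1234, 80)   # hypothetical 5-tuple
all_links = ["link-A", "link-B", "link-C"]
primary = pick_link(flow, all_links)
# simulate failure of the primary link: reselect among the survivors
after_failure = pick_link(flow, [l for l in all_links if l != primary])
```

The engineering difficulty mentioned above is visible even here: nothing in the hash guarantees that the surviving links have enough spare capacity for the diverted traffic.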
What will be the characteristics of the next-generation Internet?
Hoogendoorn: My vision is a universal network able to carry all services ranging from demanding high-bandwidth real-time services to best-effort service without guarantees. There are several reasons for thinking that it should be one universal network and not several. One reason is the economy of service sharing. For example: any capacity not needed at a particular instant for high-quality services can be automatically utilized by best-effort services. Moreover, as we cannot predict which combinations of services will be required by future applications, maximum flexibility is assured by hosting all base services on one network. A further reason is the savings in operational costs resulting from a single network.
What is Siemens’ approach here in the KING project?
Hoogendoorn: The approach which we are pursuing adheres to the basic principles of connectionless packet switching. After all, the simplicity and robustness of these principles are largely responsible for the overwhelming success of IP networks today. This means that we are pursuing solutions which do not require a multitude of path overlays on the connectionless network. Elements of our solution approach include traffic distribution and generalized multipath routing, which also allows unequal-cost routes, as well as fast local failure detection and reaction. Moreover, we are studying the engineering rules for such networks, e.g. setting traffic distribution weights and budgets for admission control to the network. It is our vision that such tasks should largely be automated to relieve the network operator of burdensome and expensive operational procedures.
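Generalized multipath routing with distribution weights can be sketched as a proportional traffic split. The path names and weights below are hypothetical, and this is not the KING algorithm itself, only the basic idea of unequal-cost distribution:

```python
def distribute(demand, weights):
    """Split a traffic demand over next-hop paths in proportion to the
    configured distribution weights (unequal-cost multipath)."""
    total = sum(weights.values())
    return {path: demand * w / total for path, w in weights.items()}

# Hypothetical example: 60% on the short path, 20% on each longer one.
shares = distribute(100.0, {"short": 3, "long-a": 1, "long-b": 1})
# → {'short': 60.0, 'long-a': 20.0, 'long-b': 20.0}
```

Unlike plain ECMP, the weights need not be equal, so longer paths can carry a deliberately smaller share of the load.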
Who is involved in the project?
Hoogendoorn: Siemens is pursuing this research together with five German universities and two Fraunhofer research organizations. These are the University of Essen, the University of Karlsruhe, the Technical University of Munich, the University of Stuttgart, the University of Würzburg, the Fraunhofer Institute ESK in Munich, and the Fraunhofer Institute FOKUS in Berlin.
What will the solution look like?
Hoogendoorn: We do not want to develop a specific Siemens solution; we want to develop solutions for next-generation carrier-grade networks which will find the widest possible acceptance in the networking industry. To this end we will use the results and knowledge gained from our work to influence industry standards in this direction. The solution that we are pursuing comprises three key elements: a stateless core network with traffic distribution and generalized multipath routing, resource management for high-quality traffic that is located only at the edges of the network, and an autonomous network control.
How does the “autonomous network control” work?
Hoogendoorn: We refer to the autonomous network control entity as the ‘Network Control Server,’ or NCS. The NCS is aware of the network topology and multipath routes, and receives statistical information on network traffic, for example link loads. It uses this information at regular intervals, say every 5 to 15 minutes, to determine appropriate traffic distribution weights and admission control parameters to ensure efficient network operation, also taking potential failures into account. In other words, it operates as a kind of automated traffic engineering and network management entity.
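One adjustment round of such an entity can be sketched as follows. This is a deliberately simple illustration, not the actual KING computation: the function name, the step size, and the link names are all invented, and a real NCS would also account for admission control budgets and failure scenarios.

```python
def rebalance(weights, loads, step=0.2):
    """One NCS-style adjustment round (a sketch): shift distribution
    weight away from links loaded above the average and toward links
    loaded below it, then renormalize the weights."""
    avg = sum(loads.values()) / len(loads)
    new = {link: max(0.0, w * (1 - step * (loads[link] - avg) / avg))
           for link, w in weights.items()}
    total = sum(new.values())
    return {link: w / total for link, w in new.items()}

weights = {"link-1": 0.5, "link-2": 0.5}
loads = {"link-1": 0.9, "link-2": 0.3}   # link-1 is congested
weights = rebalance(weights, loads)       # weight shifts toward link-2
```

Repeating such a round every few minutes, driven by measured link loads, is what makes the traffic engineering automatic rather than a manual operational procedure.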
What stage is your research currently at?
Hoogendoorn: We have made good progress in the analysis of the basic concepts. For example, efficient algorithms for loop-free generalized multipath routing have been found, as have computational formulas for traffic admission and distribution. At the same time, we have given a first demonstration of some KING principles using present-day routers and other network equipment. Even though it did not yet include all aspects of our solution approach, it served as a convincing demonstration of the utility of the KING concept, and was recently shown at CeBIT 2003. We are now in the second year of our project, placing more emphasis on implementation aspects, for example the Network Control Server, while at the same time continuing to develop the theoretical foundations for our work.
What are the advantages offered by KING in terms of speed, economy, and availability?
Hoogendoorn: The advantage that we see accruing from KING is that it effectively supports carrier-grade networks. To summarize, this means quality of service with resilience, efficient network resource allocation, and minimized operational costs. This is the key to unlocking the potential of the next-generation network.
What is your personal motto?
Hoogendoorn: Always look forward. Learn from the past but don’t dwell on it.