International Center for Advanced Internet Research
New forms of services-oriented architecture enable advanced capabilities to be created by abstracting functionality from specific physical implementations, system configurations, and middleware. This ongoing macro trend toward services-oriented architectures is beginning to transform communications services at all levels. As convergence toward common protocols and digital communications infrastructure rapidly progresses, this new architecture offers the potential for rapid migration away from legacy models. This type of architecture paves the way for new communication services and infrastructures that fundamentally depart from traditional approaches, which consist of highly defined capabilities closely coupled with supporting infrastructure. In contrast, new communications infrastructure can be designed as a large-scale distributed facility that can be used as a foundation platform, on which it is possible to create many types of networks, services, and applications that are not closely integrated with underlying physical components and system configurations. The separation of service-layer functionality from the attributes of underlying infrastructure has far-reaching implications. For example, it allows for highly programmable communication services and networks. Core network elements can be used as separable components, dynamically mixed and integrated with a flexibility not possible using traditional infrastructure. Consequently, these components become flexible tool sets, middleware modules, and other resources that can be continually and dynamically assembled and reassembled to create and enhance services. This new architectural technique is beginning to emerge from research labs, and it is being demonstrated in prototypes on metropolitan, national, and international facilities.
In a very short time, information technology has undergone many revolutions, driven by multiple macro forces: innovative disruptive architectures and technologies, large-scale gains in component performance, increasing cost efficiencies, and powerful new software functionality. These changes continually create new opportunities for the creation of applications and services. Historically, one important macro trend has been the creation of enhanced capabilities through architecture that provides for various levels of functional abstraction by making those capabilities less dependent on specific physical-layer implementations and configurations, including through the virtualization of infrastructure environments. For example, during the infancy of the computer revolution, programming was accomplished by directly manipulating physical elements. Later, increasingly sophisticated compilers and programming languages were created, allowing continually more abstract software architectures, programmability, and general capability. Another example is the trend toward using new abstraction layers to separate service and application interfaces from authentication and authorization processes, as well as from specific implementations. This approach allows for significantly enhanced services and applications by avoiding closely integrating capabilities with specific systems, physical hardware, and configurations.
This general information technology trend continues today, and it can be observed in many of the changes influencing new communications architecture. In part, this trend is being propelled by convergence. For many decades, different communication modalities (e.g., voice, video, and data) were designed and implemented as distinctive services on separate, incompatible infrastructures, from endpoints to edge equipment through core facilities. Today, all communications modalities are being migrated to a common digital infrastructure, supported by a ubiquitous set of protocols, e.g., transmission control protocol/Internet protocol (TCP/IP). This trend toward digital communications enables an ever-increasing level of abstraction between physical infrastructure and communication services. At the edge of the network, it is no longer necessary to have devices designed for a specific communication service modality, one for video, another for data, and another for voice communications. Any device should be able to support any service modality at any location. Recognition of this potential is driving a revolution in the next generation of communications-enabled consumer electronics.
However, many of today's designs for new digital services are still based on long-held assumptions about creating and deploying communication services. Traditionally, a limited set of these services is precisely designed, with requirements carefully measured and specified. Similarly, the infrastructure resources being designed to support these services are created to precisely match those service requirements. An expectation is that this infrastructure can be deployed and supported for many years without major changes. Ultimately, this approach may result in an infrastructure that has many elements of the inherent inflexibility of the traditional infrastructure that it is replacing. Already, applications are being designed that cannot be supported by today's Internet, even by many of its more advanced implementations.
A different approach can leverage the many opportunities provided by digital communications convergence, opportunities to fundamentally change the traditional approach to the design and implementation of services. A new architectural framework can be created that enhances abstraction levels and reduces dependencies on specific communication infrastructure implementations such as hardware, protocols, and systems. By creating high degrees of separation between end-delivered services and support infrastructure, communication services can be provided with substantially more flexibility and capability. Because of this higher level of abstraction, core infrastructure becomes a programmable facility, or platform, on which it is possible to create many new types of functions. This architecture can be used to create truly programmable large-scale services, applications, and networks.
As with many prior innovative trends, development of this new architecture is being driven in large part by multiple large-scale, resource-intensive applications, especially global science applications, that cannot be supported by traditional infrastructure. Many require significant communication resources, such as asymmetric, bandwidth-intensive data transfers among sites around the world. Other applications are highly intolerant of latency and require high-performance deterministic end-to-end channels. Also, some applications require continually changing real-time infrastructure. In addition, many new types of general enterprise and consumer communication services require capabilities that are not available through traditional architecture. For example, current IP-based digital media services tend to be implemented as distinct applications. A more flexible infrastructure can allow multiple digital media capabilities to be easily integrated into virtually any other service or individual application and integrated into global communication services. For example, this approach would enable common portals to easily implement interactive digital media. This architecture will also enable digital media to stream with significantly higher density than the common modalities used today.
The advanced networking research community is exploring a fundamentally new approach to the design of communications infrastructure. Rather than design a network infrastructure specifically for a limited set of communication services, these new initiatives are conceptualizing basic infrastructure as a large-scale distributed facility that can be used as a platform for creating, programming, and reprogramming limitless new specialized networks and distinctive services, including those not yet invented. One general goal is enabling any service to be accessed by any device at any location. The key to this potential is a new architecture that abstracts service functionalities from the inflexibilities and restrictions of supporting physical infrastructure, in part by allowing for levels of virtualization not previously possible. A high level of abstraction can be provided by resource layers that are transparent to higher-layer services, for example, by using virtualization techniques. Consequently, communication services can be designed independently from specific core, access, and edge delivery facilities and from the specific characteristics of edge devices.
The separation of services from underlying infrastructure has multiple implications. This architecture is significantly more flexible and scalable than traditional designs and allows for an infrastructure that is unlimited in its service creation capabilities. Instead of creating services on a centralized carrier network, services are based on a large-scale, distributed facility that advertises, allocates, manages, and controls resources dynamically.
Using this new architecture, new services can be more easily created and reprogrammed by continually assembling and reassembling multiple communication resource elements as required. This architecture also provides a potential for many networks to co-exist in an infrastructure, and it provides opportunities for a high level of services specialization, differentiation, and customization. In addition, it provides a means to separate physical infrastructure provisioning, management, operations, and support from end-delivered services. Consequently, it can allow some organizations to focus completely on infrastructure provisioning while allowing their customer organizations to provide only end-delivered services without having to own or operate their own infrastructure.
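The advertise/allocate/assemble model described above can be illustrated with a brief sketch. The following Python fragment is purely illustrative; the registry class, resource names, and `assemble_service` helper are hypothetical, not part of any cited system. A registry advertises resource elements, and a broker gathers one element of each required kind to compose a service.

```python
class ResourceRegistry:
    """Hypothetical registry where a distributed facility advertises resources."""

    def __init__(self):
        self._advertised = {}   # kind -> list of free resource names

    def advertise(self, kind, name):
        self._advertised.setdefault(kind, []).append(name)

    def allocate(self, kind):
        # Hand out the first free resource of this kind, if any.
        pool = self._advertised.get(kind, [])
        return pool.pop(0) if pool else None


def assemble_service(registry, needed_kinds):
    """Gather one resource of each required kind into a composed service.

    For brevity this sketch does not roll back partial allocations on
    failure; a real broker would.
    """
    parts = {kind: registry.allocate(kind) for kind in needed_kinds}
    return parts if all(parts.values()) else None


reg = ResourceRegistry()
reg.advertise("lightpath", "lp-9")
reg.advertise("compute", "cluster-3")
reg.advertise("storage", "array-1")

# Assemble a service from one light path, one cluster, and one storage array.
svc = assemble_service(reg, ["lightpath", "compute", "storage"])
```

Disassembling and re-advertising the parts would model the "continual reassembly" of services the text describes.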
Until recently, the architecture and technology required to reduce the dependencies of communication services on supporting infrastructure did not exist. However, the current trend toward protocol and digital infrastructure convergence has provided major capabilities for mitigating or even eliminating these dependencies. In addition, other opportunities for dependency reduction are being created through new protocols, methods, and technologies being designed for every level of network interface, access paths, and core infrastructure.
Considerable standards and development activities are focused on services-oriented architecture, which provides a common method of defining and implementing capabilities within information technology (IT) environments. Any resource in such an environment can be an advertised "service." Web services architecture provides methods for defining "packages" of such services, that is, mechanisms that can be used to gather and use multiple individual services. These and related architectures and methods are powerful tools for creating capabilities that are abstracted from the specifics of IT environments and are especially useful for communications services.
Multiple standards bodies are addressing common architectures, methods, and information sharing for Web services, the Web services description language (WSDL), and related architecture. For example, the World Wide Web Consortium (W3C) is developing concepts of a "semantic Web," which provides a way of enabling Web information to be easily understood by system processes. This consortium defines the "building blocks" required for common interoperability methods such as extensible markup language (XML) and related standard message exchange mechanisms such as simple object access protocol (SOAP).
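As a concrete illustration of these building blocks, the short Python sketch below assembles a SOAP-style message using only standard-library XML tools. The service namespace, element names, and node identifiers are invented for illustration; a real Web service would define them in its WSDL.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
# Hypothetical namespace for an illustrative light-path provisioning service.
SVC_NS = "urn:example:lightpath"


def build_request(src: str, dst: str, bandwidth_gbps: int) -> bytes:
    """Assemble a SOAP envelope carrying a hypothetical path-setup request."""
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    req = ET.SubElement(body, f"{{{SVC_NS}}}SetupPath")
    ET.SubElement(req, f"{{{SVC_NS}}}Source").text = src
    ET.SubElement(req, f"{{{SVC_NS}}}Destination").text = dst
    ET.SubElement(req, f"{{{SVC_NS}}}Bandwidth").text = str(bandwidth_gbps)
    return ET.tostring(envelope, encoding="utf-8")


# Node names are placeholders, not real exchange points.
msg = build_request("chicago-node-1", "amsterdam-node-3", 10)
```

Because the request is self-describing XML, any intermediary that understands the schema can route or broker it without knowledge of the underlying transport.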
The Organization for the Advancement of Structured Information Standards (OASIS) is developing the Web services resource framework (WSRF) standard. WSRF provides Web service specifications that define a method of modeling and managing state within a Web service context. WSDL schema can provide definitions of packaged functionality, including stateful resource and service elements and edge processes. The Global Grid Forum (the standards body for grid technology, which advances capabilities for flexible distributed environments) has adopted the WSRF architecture as a convenient top-level abstraction method, which is now part of its open grid services architecture model.
These techniques can be used to enable edge processes to directly provision and manage network services and other resources. Using XML schemas for common service definitions provides a powerful abstraction method for communication services. It allows for common capabilities to be created and provisioned across multiple heterogeneous domains. The International Telecommunication Union Telecommunication Standardization Sector (ITU-T) ASN.1 committee is developing a new standard that uses the ASN.1 language (a traditional notation for defining protocol messages that is commonly used in the communications industry) as a new schema definition language for XML (www.itu.int).
ASN.1 provides for a clear distinction between "abstract syntax" (the message description, which contains no implied coding method) and "transfer syntax" (the encoding) (http://asn1.elibel.tm.fr/en). When using these techniques, communication service designers do not have to adhere to common definitions: they can base services on commonly defined elements, create new elements, or use mixtures of common and customized elements. These definitions can be integrated with signaling used for general or core provisioning methods, e.g., as defined by the ITU-T's automatically switched transport network (ASTN) (G.807) and automatically switched optical network (ASON) (Y.1304) architectural standards (www.itu.int).
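The abstract/transfer-syntax distinction can be shown with a minimal sketch. In the Python fragment below, a dataclass stands in for the abstract syntax (message content with no implied encoding), while two interchangeable encoders stand in for transfer syntaxes: JSON plays the role of a verbose text encoding (in the spirit of ASN.1's XER) and `struct` packing plays the role of a compact binary encoding (in the spirit of PER). The message fields are hypothetical.

```python
import json
import struct
from dataclasses import dataclass


# "Abstract syntax": the message content, with no implied encoding.
@dataclass
class PathRequest:
    request_id: int      # hypothetical field
    wavelength_nm: int   # hypothetical field


# Two interchangeable "transfer syntaxes" (encodings) for the same content.
def encode_text(msg: PathRequest) -> bytes:
    """Verbose, human-readable encoding (XER-like role, rendered as JSON)."""
    return json.dumps(
        {"request_id": msg.request_id, "wavelength_nm": msg.wavelength_nm}
    ).encode()


def encode_packed(msg: PathRequest) -> bytes:
    """Compact binary encoding (PER-like role): 4-byte id, 2-byte wavelength."""
    return struct.pack("!IH", msg.request_id, msg.wavelength_nm)


req = PathRequest(request_id=7, wavelength_nm=1550)
```

The same abstract message can travel on either encoding; receivers agree only on the message description, not on a single wire format.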
One major trend enhancing capabilities for abstracting communication services from specific infrastructure implementations is the reduction of hierarchical communication protocol layers, even to the point of directly placing data on light paths, for example, through IP-over-DWDM services. A related trend also enhancing abstraction opportunities is the migration of signaling architecture, for both in-band and out-of-band communications, to IP-based standards, which significantly enhances the potential for abstracting communications services from specific physical implementations. Signaling based on IP can be used for access control, topology discovery, traffic engineering, wavelength routing, reconfiguration, protection and restoration, and the configuration and reconfiguration of many individual infrastructure components. The functional abstraction enabled by these trends allows for substantially more flexibility in services creation and deployment because such services no longer have to depend on a centralized governance model.
Traditionally, the creation and deployment of new communication services has been dependent on centralized management and control capabilities. Because of this dependency, the creation and deployment of new services has been a slow, multi-year process. New architectural models that abstract services from infrastructure enable core resources to be used as a platform on which it is possible to create many new types of services and applications independent of centralized management processes. These communication infrastructure platforms provide for various advertised capabilities, such as application programming interfaces (APIs), which can be used to create new services. These types of capabilities are being created in early prototypes.
For example, the experimental optical dynamic intelligent network (ODIN) services architecture was designed, developed, and implemented on an optical test bed to demonstrate the potential for a service layer that could broker resource requests by applications and services for core network services, primarily dynamically allocated dedicated Layer-2 (L2) circuits and Layer-1 (L1) light paths. A series of experiments and demonstrations has shown that data-intensive applications can use these signaling and service-layer techniques to dynamically provision light paths as required on optical networks accessible through specialized APIs. In these demonstrations, ODIN served as an intermediary service layer between the applications and lower-level network components. Essentially, ODIN extends control-plane functionality through the service layer to the application, including functions such as light-path addressing, dynamic path computation, resource discovery, and light-path reachability. The actual optical-level provisioning tool used is the Internet Engineering Task Force (IETF) generalized multiprotocol label switching (GMPLS) standard. The signaling protocol used is based in part on the experimental lightweight path control (LPC) protocol, described in an IETF draft (www.ietf.org). The LPC protocol provides a standard mechanism that allows edge processes to communicate requirements for specific network paths, including light paths, through a server-based process that directly establishes the paths using user network interfaces (UNIs). Request signals are interpreted by a server that has direct access to network state information and that can establish topologies dynamically.
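The server-based path-setup model that LPC exemplifies can be sketched as follows. This is an illustrative approximation, not the LPC protocol itself: the `PathServer` class, its topology map, and the node names are all hypothetical. An edge process signals only its path requirements; the server, which holds the network state, computes and records the path (where a real implementation would drive UNI signaling).

```python
from dataclasses import dataclass, field


@dataclass
class PathServer:
    """Hypothetical server holding network state and computing paths on request."""

    topology: dict = field(default_factory=dict)   # node -> reachable neighbors
    established: list = field(default_factory=list)

    def request_path(self, src, dst):
        """Breadth-first path computation over current network state.

        Returns the hop-by-hop path, or None if dst is unreachable.
        """
        frontier, seen = [[src]], {src}
        while frontier:
            path = frontier.pop(0)
            if path[-1] == dst:
                self.established.append(path)   # real UNI setup would go here
                return path
            for nxt in self.topology.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None


# Illustrative topology: one edge site reaching another through two core nodes.
server = PathServer(topology={
    "edge-A": ["core-1"],
    "core-1": ["core-2"],
    "core-2": ["edge-B"],
})
```

An edge process would call `server.request_path("edge-A", "edge-B")` and receive the computed hop sequence, without ever touching network state itself.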
The new potential for abstracting communication services from physical infrastructure is being made possible by the next generation of optical components, which are more flexible than traditional devices. Dense wavelength division multiplexing (DWDM) has supported static optical channels for two decades. However, new components, protocols, and software provide capabilities for dynamic provisioning. Emerging systems are being introduced that include multifunctional optical cross-connects (OXCs), both optical-to-electrical-to-optical (OEO) designs and all-optical (OOO) designs based on 2D and 3D micro-electromechanical systems (MEMS); addressable DWDM interfaces; controllable optical add/drop multiplexers (OADMs); tunable lasers; tunable amplifiers; flexible gratings; and other devices.
Experiments on the potential for abstracting services from physical implementation using such new components are being conducted on advanced optical test beds. One such test bed is the Optical Metro Network Initiative (OMNI) test bed, a wide-area metro photonic test bed in the Chicago area that supports 24 10-GE optical channels among four core node sites interconnected with dedicated fiber (www.icair.org/omninet). These nodes are composed of a DWDM photonic switch (2D-MEMS-based), an optical fiber amplifier (OFA, to compensate for link-and-switch decibel loss), optical transponders/receivers (OTRs), and high-performance L2/L3 routers/switches. Each node supports 4 x 10 Gbps optical channels, based on addressable wavelengths. This test bed is being used to demonstrate the utility of creating communication services using a highly flexible core infrastructure.
These concepts are also being explored on larger-scale test beds, such as the national-scale OptIPuter, an experimental research initiative funded by the National Science Foundation. In this project, the reference requirements for communication services are those needed by large-scale science projects in fields such as the geophysical sciences, bioinformatics, and space exploration. In part, the OptIPuter has been designed as a next-generation supercomputer. Traditionally, supercomputers have been designed and developed as a specific environment on a defined physical infrastructure that can support large-scale, computation-intensive applications. For this project, the high-performance computing and communications "platform" consists of a national-scale distributed environment, based on a national optical network, designed with an innovative architecture that closely integrates light paths, IP signaling and data communication services, mass data-storage systems, high-performance cluster-based computational processing, and advanced visualization technologies. Although these resource components are integrated, the architecture within this environment provides for a high level of services abstraction.
The OptIPuter was designed as a large-scale, high-performance, distributed environment, a "lambda grid," within which it is possible to create many types of supercomputers. Traditionally, computation-intensive applications must conform to restrictions inherent in the physical attributes of supporting infrastructure. The design of the OptIPuter environment allows core services, through specialized middleware, to be abstracted so that applications can create ad hoc virtual supercomputers designed to meet their specific requirements. The central architectural resource and key enabling component for this environment consists of multiple optical networks, not computers, based on dynamic light-path provisioning. These light paths enable the dynamic instantiation of "supernetworks," which function essentially as distributed backplanes for virtual environments on nationwide or worldwide fabrics.
For many years, the Canadian Network for the Advancement of Research, Industry and Education (CANARIE) has been designing and developing innovative communication capabilities based on service-layer abstractions. CANARIE is recognized as a world leader in bringing these concepts from research labs into production services, e.g., on the CA*net 4 network (www.canarie.ca/canet4). For example, the CANARIE user-controlled light path (UCLP) architecture was created to allow a service provider to allocate a segmented network domain within a larger distributed facility to end customers. UCLP enables edge processes to discover, access, provision, and dynamically reconfigure optical (L1) light paths within a domain or across multiple domains, independent of any central authority.
The UCLP design, process, and software provide a mechanism not just for allocating facility resources, e.g., light paths, but also for allocating the control, management, and engineering systems for those resources. In addition, this architecture provides capabilities that can enable customers to further reallocate subsections of those resources along with their related control, management, and engineering systems. The UCLP design was created within the context of services-oriented architecture.
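A UCLP-style delegation chain can be sketched in a few lines. The Python fragment below is an illustrative model only (the class, party names, and light-path identifiers are hypothetical): a provider partitions light paths to a customer, and the customer re-delegates a subsection, with each partition retaining control over only the paths it still holds.

```python
class LightpathPartition:
    """Hypothetical model of a delegable partition of light-path resources."""

    def __init__(self, owner, lightpaths):
        self.owner = owner
        self.lightpaths = set(lightpaths)   # paths this party still controls
        self.children = []                  # partitions delegated onward

    def delegate(self, new_owner, subset):
        """Hand a subset of this partition's paths, with control, to another party."""
        subset = set(subset)
        if not subset <= self.lightpaths:
            raise ValueError("cannot delegate paths this partition does not control")
        self.lightpaths -= subset
        child = LightpathPartition(new_owner, subset)
        self.children.append(child)
        return child


# A carrier delegates two paths to a customer, which re-delegates one onward.
provider = LightpathPartition("carrier", {"lp-1", "lp-2", "lp-3", "lp-4"})
customer = provider.delegate("university", {"lp-1", "lp-2"})
lab = customer.delegate("physics-lab", {"lp-1"})
```

Each level of the chain can reconfigure only its own partition, mirroring UCLP's independence from any central authority.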
These architectural concepts are also being explored on an international scale. An international consortium is designing and developing the Global Lambda Integrated Facility (GLIF) as an international platform that can provide for a set of core basic resources within which it is possible to create multiple differentiated specialized networks and services. The GLIF provides for a closely integrated environment (networking infrastructure, network engineering, system integration, middleware, applications, etc.) and support capabilities based on dynamic configuration and reconfiguration of resources. GLIF is based on a fabric consisting, in part, of dynamically allocated light paths, some of which are individually addressable wavelengths. These light paths are globally interconnected through sites worldwide that provide advanced services, including high-performance transport, based on an open optical exchange point architecture.
For example, the StarLight facility in Chicago has been designed, developed, and implemented to provide next-generation communication services based on this new services abstraction architecture (www.startap.net/starlight). Similar facilities are also being developed in other cities, such as NetherLight in Amsterdam, Netherlands; UKLight in London, England; NorthernLight in Stockholm, Sweden; and CzechLight in Prague, Czech Republic. Related exchanges have also been established in Japan, Korea, China, and several countries on other continents. This architecture is being developed so that multiple edge processes can access and manage a wide range of specialized services, e.g., through middleware integrated with control and management planes, including multiple services based on core resources such as high-performance computational clusters, rendering and visualization facilities, mass storage systems, instrumentation, and L1 paths on a dynamic, reconfigurable optical infrastructure. Using this architecture, options can be provided either for accepting default pre-defined services or for creating customized new services within distributed environments that provide component resources that can be dynamically discovered, gathered, and integrated using specialized access techniques, such as Web-services-based signaling built on semantic Web architecture.
As noted, many of the architectural concepts described here are being driven by global high-performance applications. Recently, iGRID2005 (igrid2005.org) presented an international showcase of 49 next-generation applications from 20 countries requiring high-performance communications provisioned on a specialized, flexible global infrastructure. The majority of the applications demonstrated could not be supported by a traditional data communications infrastructure. For many of the applications demonstrated, key enabling factors consisted of the enhanced flexibility provided by communication services abstractions, including those that allowed access to deterministic paths provisioned on a global 200 Gbps infrastructure customized specifically for the event and provided primarily by GLIF resources. Although the provisioning for the event as a whole was not automated, a number of individual application demonstrations employed new types of provisioning based on advanced architecture for service abstractions interlinked with core network resource provisioning, including light-path provisioning.
This event demonstrated the innovative applications that are possible when dependencies between services or applications and particular communications infrastructure are mitigated or removed. For example, many applications showcased at the event demonstrated the value of service virtualization in communication environments. In the future, the design of new applications and services will benefit from capabilities that provide increasingly higher levels of programmability and virtualization.
The creation of enhanced architectural abstractions for information technology resources has been a major force shaping IT development for decades. Today, this approach is influencing the direction of new communications architecture. Traditionally, communication providers have defined and provided services that have been closely coupled with supporting physical infrastructure. A problem with this traditional model is that the creation and enhancement of new services is slow, costly, and complex. By creating an architecture that allows for a significant level of abstraction to separate services from infrastructure, the design and creation of services can be much more independent of existing physical implementations and configurations. The physical infrastructure can become a large-scale distributed facility that, in effect, becomes a programmable platform for many new types of communication services. This architecture can also lead to new types of business models; for example, some providers may offer only basic infrastructure and allow other organizations to offer higher-layer services. They may also provide a wide range of accessible APIs so that multiple organizations, and even individuals, can create customized services. This approach is already emerging within various commercial Web-based services, for example, among general-content providers that allow anyone on the Internet to access resource APIs. In the future, this type of architecture will enable higher-level abstractions to provide new types of innovative services by integrating these higher-level capabilities with those that can directly and dynamically reconfigure basic communications infrastructure resources.
 Special Issue, Journal of Future Generation Computer Systems, Elsevier Science Press, Vol. 19, Issue 6, Aug 2003.
 I. Foster, C. Kesselman, "The Grid: Blueprint for New Computing Infrastructure," Morgan Kaufmann, 2003; I. Foster, C. Kesselman, J. Nick, and S. Tuecke, "The Physiology of the Grid: An Open Grid Services Architecture for Distributed Systems Integration," Open Grid Service Infrastructure WG, Global Grid Forum, June 2002. (www.globus.org/alliance/publications/papers/ogsa.pdf).
 J. Mambretti, J. Weinberger, J. Chen, E. Bacon, F. Yeh, D. Lillethun, B. Grossman, Y. Gu, M. Mazzuco, "The Photonic TeraStream: Enabling Next Generation Applications Through Intelligent Optical Networking at iGrid 2002," Journal of Future Generation Computer Systems, Elsevier Press, August 2003, pp. 897-908.
 J. Mambretti, "Experimental Optical Grid Networks: Integrating High Performance Infrastructure and Advanced Photonic Technology with Distributed Control Planes," Proceedings, Optical Networking for Grids Workshop, European Conference on Optical Communications, Stockholm, Sept. 4, 2004.
 D. Lillethun, J. Lange, J. Weinberger, "Simple Path Control Protocol Specification," IETF Internet draft, www.ietf.org/internet-drafts/draft-lillethun-spc-protocol-01.txt.
 L. Smarr, A. Chien, T. DeFanti, J. Leigh, P. Papadopoulos, "The OptIPuter," Special Issue: Blueprint for the Future of High Performance Networking Communications of the ACM, Vol. 46, No. 11, Nov. 2003, pp. 58-67.
 T. DeFanti, M. Brown, J. Leigh, O. Yu, E. He, J. Mambretti, D. Lillethun, and J. Weinberger, "Optical Switching Middleware for the OptIPuter," Special Issue on Photonic IP Network Technologies for Next-Generation Broadband Access, IEICE Transactions on Communications, Vol. E86-B, No. 8, Aug. 2003, pp. 2263-2272.
 T. DeFanti, C. De Laat, J. Mambretti, Bill St. Arnaud, "TransLight: A Global Scale Lambda Grid for E-Science," Special Issue on "Blueprint for the Future of High Performance Networking," Communications of the ACM, Nov. 2003, Vol. 46, No. 11, pp. 34-41.
 J. Mambretti, "Progress on TransLight and OptIPuter and Future Trends Towards Lambda Grids," Proceedings, Eighth International Symposium on Contemporary Photonics Technology (CPT2005)," Tokyo, Jan. 12-14, 2005.
 J. Mambretti, "Ultra Performance Dynamic Optical Networks and Control Planes for Next Generation Applications," Proceedings, Mini-Symposium on Optical Data Networking, Grasmere, England, Aug. 22-24, 2005.