
Article Information

  • Title: Managing mega-networks: What's happening at the edge? - Technology Information
  • Author: Philip Kim
  • Journal: Communications News
  • Print ISSN: 0010-3632
  • Year: 2000
  • Issue: April 2000
  • Publisher: Nelson Publishing

Managing mega-networks: What's happening at the edge? - Technology Information

Philip Kim

Evolutionary service providers are increasingly moving their point of demarcation further onto the customer premises. The proliferation of carrier-deployed integrated access devices (IADs) opens the door to an entirely new business model, enabling delivery of toll-quality voice and high-speed data services over a single DSL (digital subscriber line) access facility. The question is how to mass deploy and manage the new multiservice broadband access network.

With billions of dollars at stake, every CLEC and ILEC (competitive and incumbent local exchange carrier) has already deployed--or is planning to deploy--xDSL to the masses, providing integrated voice and data along with significant savings. And, from large to small businesses and the less lucrative but more numerous residential sectors, subscribers are eager to embrace the benefit of xDSL--almost as eager as service providers are to deploy it. But every CLEC or ILEC knows that xDSL deployment cannot be so difficult or expensive as to prohibit its mass deployment or cost effectiveness.

When customers first deployed LANs (local area networks) and WANs (wide area networks), it became obvious that the enterprise environments were difficult to manage. And, since then, large enterprise networks have become notorious for excessive downtime, complex configuration changes, and growing operational staff requirements. Articles and studies have claimed that network maintenance and related downtime expenses (both direct and indirect) could in a matter of years exceed the actual hardware and software cost of the enterprise network itself.

In response, enterprise vendors improved their devices with embedded SNMP (simple network management protocol) management agents in hopes of better delivering FCAPS (fault, configuration, accounting, performance, and security) functionality. The central NMS (network management system) tools that were designed to communicate with the embedded SNMP agents were more complex but, for the most part, effective at resolving the immediate, tactical concerns of most users. But as xDSL loomed on the horizon, both vendors and new service providers started to realize some issues with the current technology:

* xDSL networks were going to be significantly larger than any enterprise network;

* Enterprise management solutions were not designed to handle large xDSL networks;

* The service provider/carrier business model can't support the overhead associated with deploying and managing an enterprise network;

* The carrier business model requires providing more value add than traditional enterprise--such as simultaneous data/voice, end-to-end QoS (quality of service), and guaranteed network availability; and

* The new service provider may not own the entire network--but instead lease capacity (requiring the provider to work very closely with a number of third parties, some of whom may, in fact, compete with the provider).

As the problem started to outgrow the solution, the concept of a better method caught everyone's imagination and the work toward creating and adopting a more efficient method began.

The Telecommunications Management Network (TMN) architecture was adopted by the International Telecommunication Union (ITU) just a few years ago. The TMN model specifies a standard approach for managing new, service-oriented networks using a layered approach (i.e., element layer, up to network/systems layer, up to services layer, up to business layer). The TMN model is particularly useful because of the vendor heterogeneity of the new access network. Furthermore, it provides a template for flow-through provisioning.
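The layered model described above can be sketched in a few lines of code. This is a minimal illustration only: the four layer names follow the article's description, while the `ManagedObject` class and the example objects (order, voice service, DSL path, IAD) are hypothetical names chosen for this sketch.

```python
# Illustrative sketch of the TMN layered model: a business-layer order
# fans out down the stack to service, network, and element operations.
from dataclasses import dataclass, field

# Bottom-up ordering of the layers, as described in the text
TMN_LAYERS = ["element", "network", "service", "business"]

@dataclass
class ManagedObject:
    name: str
    layer: str
    children: list = field(default_factory=list)

def layer_index(obj: ManagedObject) -> int:
    """Return the object's position in the TMN stack (0 = element layer)."""
    return TMN_LAYERS.index(obj.layer)

# Flow-through provisioning: each layer delegates to the layer below it.
order = ManagedObject("new-subscriber-order", "business")
voice = ManagedObject("toll-quality-voice", "service")
dsl = ManagedObject("dsl-access-path", "network")
iad = ManagedObject("customer-iad", "element")
order.children = [voice]
voice.children = [dsl]
dsl.children = [iad]

def fan_out(obj: ManagedObject, depth: int = 0) -> None:
    """Print the top-down delegation chain for a provisioning request."""
    print("  " * depth + f"{obj.layer}: {obj.name}")
    for child in obj.children:
        fan_out(child, depth + 1)

fan_out(order)
```

The point of the layering is visible in the traversal: a single business-layer request resolves, layer by layer, into concrete element-layer configuration, which is exactly the "flow-through provisioning" template the model provides.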

A typical workflow process (chronologically) that TMN can address is as follows:

1. Provider receives customer order;

2. Provider checks xDSL availability to customer;

3. Provider orders xDSL service (internally or from a 3rd party);

4. Provider configures core xDSL/ voice network;

5. Provider configures access network;

6. Provider provisions services at gateway:

* Data--IP addressing, bridging/routing, NAT (network address translation), etc.;

* Voice--configuring transport, QoS, mapping CRVs, Class 4/5 switches, etc.;

7. Provider technician installs CPE (customer premises equipment) device at customer site and configures CPE data/voice;

8. Technician ensures dial-tone, connectivity, and performance;

9. Customer is entered into billing system; and

10. Customer is given performance and billing views.
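The workflow above can be sketched as an ordered pipeline that an OSS would drive. This is a schematic only: the step names and the order-record fields are paraphrases of the list above, not an actual OSS API.

```python
# Illustrative flow-through provisioning pipeline, condensed from the
# workflow in the text. Each step would, in a real OSS, call out to
# inventory systems, element managers, or field dispatch; here each
# step simply records its completion on the order record.
ORDER_PIPELINE = [
    "receive_order",
    "check_xdsl_availability",
    "order_xdsl_service",
    "configure_core_network",
    "configure_access_network",
    "provision_gateway_services",   # data and voice parameters
    "install_and_configure_cpe",
    "verify_dial_tone_and_performance",
    "activate_billing",
]

def run_pipeline(order: dict) -> dict:
    """Apply each provisioning step in sequence, tracking completion."""
    order["completed"] = []
    for step in ORDER_PIPELINE:
        order["completed"].append(step)
    order["status"] = "active"
    return order

result = run_pipeline({"subscriber": "example-customer",
                       "service": "voice+data"})
print(result["status"])
```

The value of the pipeline shape matches the article's point: the exact ordering may vary between providers, but every order must traverse the full sequence before the service is live, which is why automating the hand-offs between steps matters more than any single step.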

And, while the actual sequence of the activities may vary, ultimately, the end goal is the same: new services end-to-end deployed as quickly and efficiently as possible. There are many OSS (operations support system) vendors (Metasolv, Architel, AI Metrix, CrossKeys, Syndesis, etc.) that provide excellent tools that integrate a variety of technologies to reduce complex, time-consuming tasks--especially the ones related to the core of the network. But, at the edge of the network, mass deployment of CPEs brings an element of difficulty that is not easily addressed by a turnkey OSS without some extensive programming. The problem is detailed below.

PROBLEM PART 1

Deploying services to the edge means installing and managing large numbers of devices at the customer premises--moving the provider past its traditional demarcation line [e.g., CO (central office), POP (point of presence)]. Unfortunately, moving the demarcation point to the customer site typically results in a squaring of the number of installed (ergo, managed) network elements.

The scalability problem must be addressed in at least two stages: device installation followed by service provisioning. While there is significant progress and much discussion about automated installation (i.e., no truck roll), most CPE today are manually installed and configured at the subscriber site, typically via a simple interface [e.g., command line or a GUI (graphical user interface)]. And this particular process is especially difficult to replace using automation or an OSS because of the factors listed below:

1. On-site installation of CPE often requires some on-site wiring by a technician (e.g., adding/moving a connection outlet to a more convenient location, installing a microfilter or splitter, etc.);

2. Subscribers may have inaccurately reported the voice/data parameters of their own network, thereby requiring a modification from the original work order; and

3. Existing, centralized internal inventory database systems are known to have inaccurate information [with industry reports citing as much as a 30% error rate (SoundView Technology Group 11/16/99)] with information about the edge of the network typically less reliable than the core network.

An assured, successful installation currently requires an on-site technician to view and work around any last-minute problems. But, while eliminating a truck roll to a subscriber site may not be quite possible in the near future, reducing the installation difficulty and, hence, the installation time (and cost) is still very desirable and, more importantly, possible. And the current solution to reducing installation difficulty is centralized provisioning via an OSS.

Once the installer has performed the necessary and basic task of physical connectivity, there are several ways a device can be configured:

1. The installer performs a manual configuration based on a work order;

2. The network administrator manually configures the device;

3. A provisioning server contacts and configures the device (via file, transactions, etc.); and

4. The device contacts a provisioning server and allows itself to be configured by the provisioning server.

The first and second approaches are inefficient and can also introduce human errors. The last two methods both eliminate human entry, differing only in which end initiates the configuration. Note that, in both situations, the provisioning server must already be programmed to support every device in the network, and it must have an accurate database of the existing connections and devices in its path. But, if that programming exists, the initial installation is easier and more cost effective.
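The fourth approach, where the device initiates its own configuration, can be sketched as below. The record layout and lookup-by-serial-number scheme are assumptions for illustration; real deployments of the era used TFTP/BOOTP-style mechanisms or vendor-specific protocols.

```python
# Illustrative device-initiated provisioning: after physical install,
# the CPE contacts the provisioning server, identifies itself, and
# pulls its pre-staged configuration. No human data entry is involved.
PROVISIONING_DB = {
    # serial number -> configuration staged when the order was taken
    "IAD-00421": {
        "voice_lines": 4,
        "data_vc": "0/35",            # ATM VPI/VCI for the data PVC
        "qos_profile": "voice-priority",
    },
}

def handle_config_request(serial: str) -> dict:
    """Server side: return the staged config, or fail loudly if the
    device is not in the inventory database (the 30%-error problem)."""
    config = PROVISIONING_DB.get(serial)
    if config is None:
        raise KeyError(f"no staged configuration for {serial}")
    return config

def device_boot(serial: str) -> dict:
    """Device side: request and apply configuration after install."""
    config = handle_config_request(serial)
    # Applying the config to the IAD hardware is omitted in this sketch.
    return {"serial": serial, "state": "provisioned", **config}

print(device_boot("IAD-00421")["state"])
```

Note how the failure path surfaces the article's earlier point about inventory accuracy: if the central database is wrong, device-initiated provisioning fails at first contact rather than silently shipping a misconfigured service.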

But what if the provider needed to add or delete users in a frequent and distributed fashion? Some phone companies have discovered that quite a number of users have moved away from provisioned circuits--but that those circuits remain provisioned (i.e., unavailable) despite their inactivity. Today, a well-constructed OSS can go a long way toward solving this problem. However, imagine the problem as more and more subscribers and services are added--how do you still easily provision new devices, add new subscribers, or track and manage all the new services if today's network already has problems?
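The stranded-circuit problem reduces to reconciling two views of the network: circuits the inventory database believes are provisioned versus circuits that actually carry traffic. A minimal sketch, with illustrative circuit IDs:

```python
# Reconciling inventory against reality: circuits marked provisioned
# in the database, versus circuits observed to be active on the network.
provisioned = {"ckt-101", "ckt-102", "ckt-103", "ckt-104"}
active = {"ckt-101", "ckt-103"}

# Provisioned but idle: capacity that could be reclaimed and resold.
stranded = provisioned - active
print(sorted(stranded))
```

The computation is trivial; the hard part, as the article notes, is that at carrier scale both input sets are large, distributed across systems, and of uncertain accuracy.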

Fact: while managing the emerging converged voice and data services is difficult today, it could become impossible as the number of subscribers and services climbs into the hundreds of thousands or even millions. And that doesn't even take into consideration the fact that there may be multiple providers for many, yet-undetermined, services. If management becomes impossible, it could stunt or even halt a company's expansion.

And that's where the promise of directory enabled networking (DEN) comes in ...

DEN is designed for the management of network element information in both enterprise and service-provider networks and is intended to provide users access to the services on those networks--based on a set of policies that exist across the entire network. Technically, the DEN schema provides vendors a common model that can be used to define the components of a network (e.g., CPE, protocol, switch, etc.), the links or connections between them [e.g., ATM (asynchronous transfer mode), xDSL], and how to access services or otherwise realize the network's capabilities.

Note that DEN is not intended to replace existing management, but, rather, to supplement and leverage its capabilities. For example, SNMP has its own schema which, when incorporated into a DEN schema, is now exposed to management frameworks and collections of similar data from other devices, networks, autonomous systems, and even network clouds. This is one example of the "intelligence" of policy-based networking.

Using LDAP (lightweight directory access protocol), disparate information can be collected, transmitted, and manipulated from native devices. In addition to SNMP, the DEN schema and information model work with other existing network services and associated protocols, such as DNS (domain name system), DHCP (dynamic host configuration protocol), and RADIUS (remote authentication dial-in user service). The DEN framework can store and retrieve network service information, such as that provided by DNS, DHCP, and RADIUS, in a common repository (the directory). Note that directories are good for storing certain types of information, but other types, like binary files, are better suited to other kinds of repositories [e.g., CDRs (call detail records) into a billing server, image files onto an FTP server].
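The common-repository idea can be sketched with an in-memory stand-in for the directory: entries keyed by a DN-like name, each merging attributes that would otherwise live in separate DHCP, DNS, and RADIUS stores. The attribute and entry names here are illustrative, not the actual DEN schema.

```python
# Illustrative directory-as-common-repository: one entry per managed
# device, merging data that separate network services would each hold.
directory = {
    "cn=iad-00421,ou=cpe,o=provider": {
        "ipHostNumber": "10.1.2.42",          # DHCP-style lease data
        "dnsName": "iad-00421.sub.example",   # DNS-style host record
        "radiusProfile": "voice-data-basic",  # RADIUS-style auth profile
        "qosPolicy": "voice-priority",        # policy applied network-wide
    },
}

def search(base: str, attr: str, value: str) -> list:
    """LDAP-style subtree search: DNs under `base` matching attr=value."""
    return [dn for dn, entry in directory.items()
            if dn.endswith(base) and entry.get(attr) == value]

# Policy-based management: find every device under this provider that
# carries a given QoS policy, regardless of which service owns the data.
matches = search("o=provider", "qosPolicy", "voice-priority")
print(matches)
```

The design point is the one the article makes: once per-service data is exposed through a single schema and access protocol, a management framework can query and apply policy across all of it, rather than integrating with each service's native store separately.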

Some proposed DEN applications include:

* Service management (includes service creation and network element provisioning);

* Security (single log on)--and central management of DNS, DHCP, DS (digital signal), RADIUS, QoS, authentication and authorization, PKI (public key infrastructure) for end-to-end security of applications over the network;

* Support for mobile users with policies that follow the user instead of profiles that are tied to a physical port or an IP address; and

* Inventory and asset tracking.

There are many more potential applications that would provide great benefit to network users and administrators alike. While there was once a question of whether directory enabled networking would be successful, the question now is only when it will be successful. One answer: when the networks get too big or complicated for traditional methods to work. And--for access networks--that is much sooner than most people anticipated even a year ago.

[GRAPH OMITTED]

Kim is senior product line manager at Accelerated Networks, Seattle, Wash.

www.accelerated.com

Circle 257 for more information from Accelerated Networks

COPYRIGHT 2000 Nelson Publishing
COPYRIGHT 2000 Gale Group
