Cisco 1999 White Paper: Controlling Your Network - A Must for Cable Operators

Executive Overview

This white paper describes how multiple systems operators (MSOs) can control the traffic on their multiservice network to ensure that their users receive consistently high levels of service. In addition, it discusses how to prevent outside content providers from disrupting the cable network by delivering broadband content without authorization granted by the MSO.

The Opportunities for MSOs

Today, the volume of networked data traffic has surpassed that of voice traffic, and the demand for data access is still climbing steadily for both business and residential subscribers. The demand for high-bandwidth video access is beginning to undergo a similar upswing. Further, there is strong subscriber interest in bundled data, voice, and video services offered by a single provider. MSOs are in an excellent position to take advantage of these unprecedented revenue opportunities.

Such a move requires being able to deliver data, voice, and video to all your subscribers through a single converged network of integrated components designed for this type of service deployment - an Internet Protocol (IP) network.

A converged network delivers substantial benefits, such as resource sharing not only of bandwidth, but also of capital expenditure, operational costs, and training. A converged network gives you the freedom to offer bundles of data, voice, and video services, and today you can offer these services with new confidence in your abilities to control service quality.

For example, a converged network enables you to create "virtual" networks to ensure that different types of traffic do not interfere with each other. You get the advantages of a converged network with the service-delivery confidence of separate networks - but without the gross inefficiencies and future-limiting isolation of separate networks.

Cisco Systems and its strategic partners have made a no-compromise commitment to delivering end-to-end, carrier-class, high-bandwidth IP networks to meet these New World opportunities.

Elements of a Multiservice Cable Network

New World IP networks are the most advanced, flexible, and cost-efficient solution available for delivering data, voice, and video over cable - the types of services your customers want. New World networks are also designed to leverage the common infrastructure you already have to optimize your time to market and profitability in developing, deploying, billing, maintaining, and expanding services over time.

Achieving these cost and quality efficiencies has been possible only by ensuring a high degree of intelligence in the network itself, close to the cable plant, where bandwidth control is extremely valuable and efficient. Thus Cisco implemented the cable modem termination system (CMTS) of the converged network as a full router (the first DOCSIS-qualified headend router in the industry), designed specifically to control quality-of-service (QoS) issues directly at the cable plant. The level of control and quality assurance available through a Cisco converged network is unparalleled.

The full Cisco cable solution is built on the industry's broadest, most widely accepted foundation of integrated components:

  • Cisco IOS software
  • Data-over-Cable Service Interface Specification (DOCSIS) industry protocols
  • Dynamic packet transport (DPT) products that redefine metropolitan-area network (MAN) architectures for high-bandwidth, resilient fiber rings optimized to carry large and rapidly growing volumes of packet traffic
  • Cisco gigabit switch routers (GSRs) that perform Internet switching and routing at 155 and 622 Mbps
  • Cisco universal broadband router (uBR) family for a cost-effective, scalable, and feature-rich interface between the backbone network and subscriber cable modems
  • Cable access routers (uBR 900 series) providing all the components needed to build a secure virtual private network (VPN) and serving as the interface between the MSO's network and the subscriber's personal computer
  • Voice over Internet Protocol (VoIP) for the quality, stability, and functionality necessary for carrier-class, real-time IP communications services
  • Asynchronous Transfer Mode (ATM) wide-area network (WAN) connections and management software

    Cable operators are successfully using these solutions as a common infrastructure to achieve economies of scale and attract millions of new subscribers from the dialup world today.

    The question is, will they be able to maintain control of their networks and content in the face of accelerating growth and competition?

    Minimizing the Risk of Adoption

    Some MSOs are hesitant to deploy IP-based networks because they fear that they will not be able to control them. They are concerned that other content providers will flood their network with bandwidth-hogging services, particularly video, making it difficult to maintain a balanced, high-quality service delivery for all subscribers.

Sustained service quality over the long term requires IP network control: the ability to intelligently segment and manage resources by user type, service, destination, or application so that delivery quality does not suffer with growth or the addition of new services. That is the job of Cisco IOS QoS.

Cisco QoS has made it easy and safe to deploy increasingly rich new services across a common infrastructure while preventing these services from negatively impacting each other.

    Today, video over IP is just another content-distribution opportunity that does not have to disrupt the quality of your other services. At the same time, video offers considerable potential for revenue growth for MSOs serving as content aggregators and distributors.

    Suddenly, Bandwidth Is Here

Until very recently, almost all users accessed the Internet via 28.8-kbps or, at best, 56-kbps modems, and the demand for streaming media, especially video, was low. The relative popularity of video over the Internet was off to a very slow start because of the choke point at the subscriber's computer - the dialup modem.

    However, with the contagious popularity of cable modems among today's users, demand for video over IP is growing fast, and so is the supply of content to fulfill the demand. Media players are accelerating this appeal. More Internet sites are offering video as a primary attraction. And Web advertisers are discovering the value of engaging viewers through the lively interactivity of rich media instead of static Web banners.

    As a result, a broad diversity of video content is becoming available to growing numbers of cable modem users, and the Internet backbone itself is becoming the new choke point. However, as a cable operator, your service offering does not have to be compromised, and you are in an excellent position to profit from the trend.

    Absolute QoS Control Is Here Too

    Early fears of losing network and content control to service providers within the network were founded in fact. Multiple service delivery over IP networks brings with it an inherent problem: How do these multiple services - packetized voice, streaming media, Web browsing, database access, and e-mail - coexist without competing with each other for bandwidth?

    Cisco QoS has solved the problem by putting absolute control, down to the packet, in your hands.

The role of QoS is to prioritize the relative bandwidth needs of each service in response to the overall flow of network traffic at any given moment.

The ability to prioritize and control traffic levels is the distinguishing factor and critical difference between New World networks employing Internet technologies and "the Internet."

But beyond that, new, advanced QoS techniques give you the means to maximize the revenue generated by your bandwidth capacity by providing the highest quality for your most valuable services.

    Four-Way Network and User Control

    The QoS available for New World networks can ensure that you have control of all service events on the network by four general means:

  • Network engineering
  • Traffic-type identification
  • Admission control and policing
  • Preferential queuing

Network engineering is a fine-tuned methodology of specifying and predicting the link bandwidths required in each segment of your network based on the services you want to offer. If, for example, as a cable operator, you want to generate additional revenue as a telephony service provider, network engineering is the first step in deciding how much bandwidth you will need to provide minimum levels of service to a given number of potential subscribers. Cisco has resources to help cable operators calculate capacity requirements and return on investment based on your particular regional mix of target subscribers through an economic analysis of the opportunity.

Next, traffic-type identification allows you to isolate different traffic types in your IP network. Through Cisco QoS, you can identify each traffic type - Web, e-mail, voice, video. Tools such as type-of-service (ToS) bit identification allow you to isolate network traffic by the type of application (even down to specific brands), by the interface used, by user type and individual user identification, or by site address.
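
As a minimal sketch of how this classification is commonly expressed in Cisco IOS, the extended access lists below separate Web, e-mail, voice, and site-specific traffic so that later QoS mechanisms can act on them. The list numbers, port ranges, and host address are illustrative assumptions, not values taken from this paper.

    ! Web browsing (HTTP)
    access-list 101 permit tcp any any eq www
    ! E-mail (SMTP)
    access-list 102 permit tcp any any eq smtp
    ! Voice bearer traffic in a typical RTP port range
    access-list 103 permit udp any any range 16384 32767
    ! All traffic destined for one specific (illustrative) site address
    access-list 104 permit ip any host 192.0.2.50

Mechanisms such as CAR, IP Precedence marking, and preferential queuing (described later in this paper) can then reference these access lists when enforcing policy.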

    Admission control and policing is the way you develop and enforce traffic policies. These controls allow you to limit the amount of traffic coming into the network with policy-based decisions on whether the network can support the requirements of an incoming application. Additionally, you are able to police or monitor each admitted application to ensure that it honors its allocated bandwidth reservation.

    Preferential queuing gives you the ability to specify packet types - Web, e-mail, voice, video - and create policies for the way they are prioritized and handled. For example, although voice and video traffic are intolerant of delays and drops, you still might want to ensure that lower-priority residential Web browsing is allocated enough bandwidth to deliver an acceptable level of service during peak usage.

    Ensuring Cooperation between the Network Edge and the Backbone

The ability to flexibly and effectively scale usage between millions of subscribers and thousands of network node elements requires cooperation between the network edge at the user end and the backbone at the core. The Cisco uBR7200 CMTS products, which work with any DOCSIS-based cable modem, are the key to obtaining the QoS that lets you specify, coordinate, and enforce policies that can operate across your entire network from the backbone to the edge.

Cisco QoS distributes functionality and control uniformly between the edge and the backbone of the network, thus allowing for wide-scale deployments of network services while concurrently providing backbone scalability to meet extremely high packet throughput requirements (see Figure 1).

    Edge function control allows you to deal with each subscriber as an individual entity, assigning access authorizations, bandwidth allocations, and security filters for each address. Edge function processing thus off-loads the backbone from the overhead of processing user assignments in real time. Instead, the backbone is allowed to switch and transport packets at full speed based on header information independent of the specific users that make up the traffic.

    (Figure 1 Omitted)

In addition, the backbone is empowered to enforce higher-level QoS policies, such as managing overall bandwidth contention between voice and video traffic, as well as QoS internetworking - coordinating gateway specifications between backbone providers, for instance, Sprint and UUNet.

    Cisco IOS software and QoS enable service providers to distribute network functionality and responsibility between edge functions and backbone functions. This distribution of functionality enables simultaneous performance and services scalability.

    At the edge of the network, Internet service providers (ISPs) gain the capability to flexibly:

  • Specify policies that establish traffic classes and service levels
  • Specify policies that define how network resources are allocated and controlled to handle these traffic classes
  • Efficiently map packets into the traffic classes
  • Apply policies and "high-touch" services to meet customer application and security requirements
  • Collect and export detailed measurements concerning network traffic and service resource utilization

    In the backbone of the network, Cisco IOS software, QoS, and supporting technologies provide the capabilities to:

  • Scale the network to provide extremely high capacity, performance, and reliability
  • Provide policy administration and enforcement
  • Provide streamlined queuing and congestion management

    In the backbone, Cisco IOS software and QoS provide the capability to effectively control, manage, and scale the high-bandwidth network necessary to handle the demands of Internet traffic growth while meeting the QoS requirements of business and consumer applications. Cisco IOS software, QoS, and supporting technologies deliver backbone functionality focused on extremely high throughput and capacity scalability as well as policy administration and enforcement. The backbone is relieved of the responsibility of implementing high-touch services on high-speed interfaces, thus contributing to the reliability and stability of the network.

    QoS Control Mechanisms

    Caching Is the Relief Valve

Caching is a cost-effective and widely popular method of storing frequently accessed Web content regionally, near the users, to off-load duplicated, same-page traffic from the backbone. Whether it's Web-page caching or the newer streaming-media caching, the idea is the same. Both are effective ways to optimize the bandwidth of the backbone by moving some of the content to caching servers at the edge of the network.

    As a leader in the caching market, Cisco created the Web Cache Communication Protocol (WCCP) to allow Cisco Cache Engines and other cache products to communicate with Cisco routers. WCCP, built into a wide variety of the Cisco IOS-based networking products, enables the transparent, scalable, and secure introduction of caching technology into networks.
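
As an illustration, the sketch below shows the WCCP Version 2 command form for transparently redirecting Web traffic from a router to a cache farm. The interface name is an illustrative assumption, and the exact commands vary with the WCCP version supported by your Cisco IOS release.

    ! Enable the WCCP Web-cache service on the router (WCCPv2-style commands)
    ip wccp web-cache
    !
    interface FastEthernet0/0
     ! Redirect outbound HTTP requests on this interface to the cache farm
     ip wccp web-cache redirect out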

    Selectively Limited Access Rates

Committed access rate (CAR) is an edge-focused QoS mechanism provided by selected Cisco IOS-based network devices. The rate-control capabilities of CAR allow you to specify the access speed of any given packet by allocating the bandwidth it receives, depending on its IP address, application, precedence, port, or even Media Access Control (MAC) address.

For example, if a "push" information service that delivers frequent broadcasts to its subscribers is seen as causing a high amount of undesirable network traffic, you can direct CAR to limit subscriber-access speed to this service. You could restrict the incoming push broadcasts as well as subscribers' outgoing access to the push information site to discourage its use. At the same time, you could promote and offer your own or a partner's services with full-speed features to encourage adoption of your services, while increasing network efficiency.
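
A minimal configuration sketch of this push-service policy follows. The server address, rates, burst sizes, and interface name are illustrative assumptions, and CEF switching is assumed to be enabled for CAR.

    ip cef
    ! Traffic from subscribers toward the (hypothetical) push site
    access-list 110 permit ip any host 192.0.2.10
    ! Broadcasts arriving from the push site toward subscribers
    access-list 111 permit ip host 192.0.2.10 any
    !
    interface Cable3/0
     ! Limit upstream requests to the push site to 128 kbps
     rate-limit input access-group 110 128000 16000 24000 conform-action transmit exceed-action drop
     ! Limit downstream push broadcasts to 128 kbps
     rate-limit output access-group 111 128000 16000 24000 conform-action transmit exceed-action drop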

    CAR also lets you discourage the subscriber practice of bypassing Web caches. It gives you the ability to increase the efficiency of your network by allocating high bandwidth to video and rich media coming from a Web-cached source and low bandwidth to the same content coming from an uncached source.

Further, you could specify that video coming from internal servers receives precedence and more bandwidth than video sourced from external servers.

    With CAR, the choice is yours, and it's easy to make constant revisions and adjustments as traffic patterns shift.

    NetFlow Switching for Added Revenue

    NetFlow Switching provides high performance for network-layer services and fine-grained data collection at the edge of the network. It also puts a host of Cisco IOS network services such as security and traffic accounting under your control.

    NetFlow exports extensive flow-by-flow measurements for collection, postprocessing, and usage by accounting and billing, network planning, and network monitoring. The data collected for each flow includes the following:

  • Source and destination IP address
  • Start-of-flow and end-of-flow timestamps
  • Packet and byte counts
  • Next-hop router address
  • Input and output physical port interfaces
  • Source and destination Transmission Control Protocol/User Datagram Protocol (TCP/UDP) port numbers
  • IP protocol type
  • Type-of-service field
  • TCP flags
  • Source and destination autonomous system numbers
  • Source and destination subnet masks

    This highly granular NetFlow data serves as a key metering mechanism for introducing and monitoring differential service charges based on parameters such as time of day, class of service, application usage, or traffic usage.
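
As a sketch, enabling this collection on an edge interface and exporting the flow records to a billing or planning collector might look like the following; the interface name, collector address, and UDP port are illustrative assumptions.

    interface Cable3/0
     ! Collect flow-by-flow statistics on this interface
     ip route-cache flow
    !
    ! Export Version 5 flow records (including origin AS numbers) to a collector
    ip flow-export version 5 origin-as
    ip flow-export destination 192.0.2.100 9996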

    IP Precedence for Traffic Classification

    IP Precedence provides the capability to partition traffic at the edge into multiple classes of service. In fact, it lets network operators define up to six separate classes of service and employ extended access control lists (ACLs) to define network policies in terms of congestion handling and bandwidth for each class.

    The IP Precedence feature uses the three precedence bits in the ToS field appearing in the IP header to specify a class of service for each packet. This scenario provides considerable flexibility for precedence assignments, including customer assignments (for example, by application or access router) and network assignments (for example, by IP or MAC address, physical port, or application).

It can act either in passive mode (accepting precedence assigned by the customer) or in active mode, utilizing defined policies to set or override the precedence assignment. IP Precedence can also be mapped into adjacent technologies - such as Tag Switching, Frame Relay, or ATM - to deliver end-to-end QoS policies across a heterogeneous network environment.
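
The sketch below shows one common way to express active-mode marking with CAR at the edge: voice-range packets are colored with IP Precedence 5, and everything else is colored with precedence 0. The access list, rates, precedence values, and interface name are illustrative assumptions.

    ! Voice bearer traffic in a typical RTP port range
    access-list 120 permit udp any any range 16384 32767
    !
    interface Cable3/0
     ! Mark voice packets with IP Precedence 5 (while policing them to 2 Mbps)
     rate-limit input access-group 120 2000000 24000 32000 conform-action set-prec-transmit 5 exceed-action set-prec-transmit 0
     ! Mark all remaining traffic with IP Precedence 0 (best effort)
     rate-limit input 100000000 24000 32000 conform-action set-prec-transmit 0 exceed-action set-prec-transmit 0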

    Reserving Bandwidth

An important protocol is Resource Reservation Protocol (RSVP), Internet Engineering Task Force (IETF) RFC 2205, which can provide bandwidth "reservations" in the regional and hybrid fiber-coaxial (HFC) cable networks. Devices that send video over the backbone can use RSVP to signal their bandwidth and QoS requirements to the network, which will then either allocate the bandwidth along the path or signal that inadequate capacity exists. For example, when delivering a high-bit-rate video stream over an IP network that must not be interrupted, RSVP can be used to set up a "circuit" over the IP network that guarantees bandwidth will be available for the entire video transmission. A primary feature of RSVP is its broad scalability, especially in multicast environments.
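
Before reservations can be honored, each interface along the path must be told how much bandwidth RSVP is allowed to allocate. A minimal sketch follows; the interface name and bandwidth figures are illustrative assumptions.

    interface Cable3/0
     ! Allow RSVP to reserve up to 6 Mbps on this interface,
     ! with no single flow reserving more than 2 Mbps
     ip rsvp bandwidth 6000 2000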

    Token Bucket Measurement

Bursty traffic can affect the QoS of all traffic on the network by introducing inconsistent latency, also known as "jitter." This jitter can cause problems for some applications, such as desktop videoconferencing and high-speed video servers.

    Cisco offers a popular method of curbing sudden bandwidth-hogging spikes of large traffic, known as bursts, at the backbone through a technique known as token bucket measurement. Conceptually, this technique of burst control is illustrated by a continuous, but regulated, stream of tokens spilling into a bucket. As network packets pass by the token bucket, each packet must pick up a single token to "conform," or move forward in the network. The flow of tokens into the bucket is controlled by your burst-control policies, such as "allow bursts of 500 kbps for no more than ten seconds."

When a policy-busting burst of traffic arrives at the bucket, individual packets are allowed to pass as long as tokens remain in the bucket. When the burst of packets depletes the momentary token supply, the packets left without tokens are considered "out of policy," exceeding the burst rate. Their fate depends entirely on your prespecified, configurable action policies. You can specify either that packets without tokens must wait until more tokens are fed into the bucket, indicating refreshed bandwidth availability, or that they may proceed at a low-bandwidth level. This way, you can color (set precedence) or recolor (modify existing packet precedence) each nonconforming packet. You can even drop the packets altogether. The choice and the control are entirely yours.
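
In Cisco IOS, this token bucket is exposed through CAR's rate and burst parameters. The sketch below enforces a 1-Mbps committed rate with a burst allowance of 625,000 bytes - roughly the "500 kbps for ten seconds" policy mentioned above (500,000 bits/s x 10 s / 8 = 625,000 bytes) - and recolors rather than drops out-of-policy packets. The rate, burst sizes, and interface name are illustrative assumptions.

    interface Serial3/0
     ! 1-Mbps committed rate with 625,000 bytes of burst credit.
     ! Conforming packets pass unchanged; out-of-policy packets are
     ! recolored to IP Precedence 0 instead of being dropped.
     rate-limit output 1000000 625000 625000 conform-action transmit exceed-action set-prec-transmit 0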

Preferential Queuing Backed by Weighted Fair Queuing

Another backbone-based control capability offered by Cisco QoS is the combination of preferential queuing (PQ) and weighted fair queuing (WFQ).

    PQ ensures that important traffic gets the fastest handling at each point where it is used. Because it was designed to give strict priority to important traffic, PQ can flexibly prioritize according to network protocol, incoming interface, packet size, source, or destination address.

In PQ, each packet is placed in one of four queues - economy, standard, medium, or premium. Packets that are not classified by this priority-list mechanism fall into the medium queue. During transmission, the PQ algorithm gives higher-priority queues absolute preferential treatment over lower-priority queues. This approach is simple and intuitive, but it can transfer queuing delays from higher-priority traffic to lower-priority traffic, increasing jitter on the lower-priority traffic. Higher-priority traffic can be rate limited to avoid this problem.
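
A minimal sketch of such a scheme using the Cisco IOS priority-list mechanism follows. That mechanism names its four queues high, medium, normal, and low; mapping the premium, medium, standard, and economy tiers above onto those names is an assumption of this sketch, as are the protocols, ports, and interface.

    ! Premium tier: voice-range UDP traffic goes to the high queue
    priority-list 1 protocol ip high udp 16384
    ! Standard tier: Web traffic goes to the normal queue
    priority-list 1 protocol ip normal tcp 80
    ! Economy tier: e-mail goes to the low queue
    priority-list 1 protocol ip low tcp 25
    ! Unclassified packets fall into the medium queue
    priority-list 1 default medium
    !
    interface Serial0/0
     priority-group 1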

WFQ, on the other hand, adds the capability to provide expeditious handling for high-priority traffic requiring low delay while fairly sharing the remaining bandwidth among lower-priority traffic sources. WFQ divides link traffic into high-priority and low-priority flows (based on metrics including IP Precedence and traffic volume). High-priority flows receive immediate handling, whereas low-priority flows are interleaved and receive proportionate shares of the remaining bandwidth.
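
Enabling this behavior is a single interface command in Cisco IOS, as sketched below; the interface name is an illustrative assumption. Flows marked with higher IP Precedence automatically receive a proportionally larger share of the link.

    interface Serial1/0
     ! Enable flow-based weighted fair queuing on this interface
     fair-queue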

    Random Early Detection for Congestion Management

    Random early detection (RED) gives you the ability to flexibly specify traffic-handling policies to maximize throughput under congestion conditions. RED helps you intelligently avoid network congestion by implementing algorithms that provide a host of protections, including the ability to:

  • Distinguish between acceptable temporary traffic bursts and excessive bursts likely to swamp network resources
  • Work cooperatively with traffic sources to avoid TCP slow-start oscillation, which can create periodic waves of network congestion
  • Provide fair bandwidth reduction by throttling traffic sources in proportion to the bandwidth they are using
  • Set minimum and maximum queue depth thresholds as well as packet drop probability

    Thus, RED works with TCP to anticipate and manage congestion during periods of heavy traffic to maximize throughput via managed packet loss.
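
A minimal weighted RED sketch follows, in which best-effort traffic (precedence 0) begins to be dropped earlier than premium traffic (precedence 5) as the queue builds. The interface name, thresholds, and drop probability are illustrative assumptions.

    interface Serial2/0
     ! Enable (weighted) random early detection on this interface
     random-detect
     ! Precedence 0: start dropping at 20 queued packets; drop 1 in 10 at 40
     random-detect precedence 0 20 40 10
     ! Precedence 5: tolerate a deeper queue before dropping begins
     random-detect precedence 5 35 40 10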

    Go Forward and Grow with Safety and Control

    Cisco QoS gives you complete control over all your content and services while at the same time protecting your available bandwidth and minimizing delays for time-sensitive voice and video applications.

QoS can also propel you forward by giving you the information you need to offer advanced differentiated services at a profit. For example, time- and usage-based billing via NetFlow measurements provides you with a means of encouraging (or shifting) demand toward periods of light network loading by offering off-peak discount pricing.

    Traffic classes and prioritization allow you to encourage business subscribers to classify their traffic and transport only the highest-value bits during peak usage periods and heavy congestion conditions.

    Bandwidth allocations via the CAR feature let you carefully engineer network capacity to meet bandwidth commitments during periods of congestion.

With QoS, you can optimize service profits by marketing "express" services to premium customers ready to pay for superior network performance. Enterprise customers are already leveraging virtual private networks (VPNs) and other advanced services provided by broadband MSOs to optimize communications with customers, suppliers, branch offices, and mobile/telecommuting employees.

    Cisco QoS services help you pursue a New World Internet business model for profitable revenue growth by:

  • Offering and charging for targeted, differentiated services
  • Maximizing network utilization
  • Maximizing revenue per carried bit
  • Generating incremental billing for new services

    Every competitive MSO has been challenged to plan and build an IP infrastructure that can deliver a full range of differentiated network services and provide absolute network control, from the edge to the backbone. Now, that's exactly what you can do.

    For more information on Cisco Cable Solutions visit our web site: www.cisco.com/cable.
