The increasing deployment of switched Ethernet to the desktop can be attributed to the
proliferation of bandwidth-intensive applications. Any-to-any communications of new
applications, such as video to the desktop, interactive messaging, and collaborative whiteboarding,
increase the need for scalable bandwidth. At the same time, mission-critical
applications call for resilient network designs. With the wide deployment of faster switched
Ethernet links in the campus, organizations either need to aggregate their existing resources
or upgrade the speed in their uplinks and core to scale performance across the network
backbone.
EtherChannel is a technology that Cisco originally developed as a LAN switch-to-switch technique for inverse multiplexing multiple FastEthernet or Gigabit Ethernet switch ports into one logical channel. Figure 2-15 shows some common EtherChannel deployment points.
The benefit of EtherChannel is that it is cheaper than upgrading to higher-speed media because it uses existing switch ports. EtherChannel offers the following advantages:
■ It enables the creation of a high-bandwidth logical link.
■ It load-shares among the physical links involved.
■ It provides automatic failover.
■ It simplifies subsequent logical configuration. (Configuration is per logical link instead
of per physical link.)
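The load-sharing behavior in the list above can be sketched in a few lines. The exact hash inputs vary by platform and configuration; this hypothetical model assumes a simple source/destination MAC XOR, which is one common scheme:

```python
def select_link(src_mac: str, dst_mac: str, num_links: int) -> int:
    """Pick a physical link for a frame by hashing its MAC pair.

    Hashing the source and destination MAC addresses keeps all
    frames of one conversation on one link (preserving frame
    order) while spreading different conversations across the
    bundle. This XOR scheme is a simplified illustration only.
    """
    src = int(src_mac.replace(".", ""), 16)
    dst = int(dst_mac.replace(".", ""), 16)
    return (src ^ dst) % num_links

# Two different conversations may land on different physical links:
link_a = select_link("0260.8c01.1111", "0260.8c01.2222", 4)
link_b = select_link("0260.8c01.1111", "0260.8c01.3333", 4)
```

Because the hash is deterministic, a single conversation never gains more than one link's worth of bandwidth; the aggregate speedup comes from many conversations in parallel.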
CAUTION These ratios are appropriate for estimating average traffic from access layer,
end-user devices. They are not accurate for planning oversubscription from the server
farm or edge distribution module. They are also not accurate for planning bandwidth
needed on access switches hosting typical user applications with high bandwidth consumption (for example, non-client-server databases or multimedia flows to unicast addresses). Using QoS end to end determines which traffic is dropped first in the event of congestion.
Chapter 2: Medium-Sized Switched Network Construction
Figure 2-15 EtherChannel
EtherChannel technology provides bandwidth scalability in the campus by offering the
following aggregate bandwidth:
■ FastEthernet: Up to 800 Mbps
■ Gigabit Ethernet: Up to 8 Gbps
■ 10-Gigabit Ethernet: Up to 80 Gbps
Aggregate bandwidth grows in increments equal to the speed of the member links (100 Mbps, 1 Gbps, or 10 Gbps). Even in the most bandwidth-demanding situations, EtherChannel technology helps aggregate traffic and keeps oversubscription to a minimum, while providing effective link-resiliency mechanisms.
NOTE Because EtherChannel links are full duplex, documentation sometimes doubles these numbers to indicate the full potential of the link. For example, an 8-port FastEthernet channel operating at full duplex can pass up to 1.6 Gbps of data.
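The totals above follow from simple multiplication; a small sketch (the 8-link maximum and the full-duplex doubling are taken from the figures quoted in this section):

```python
def etherchannel_bandwidth_mbps(link_speed_mbps: int, num_links: int,
                                full_duplex: bool = False) -> int:
    """Aggregate EtherChannel bandwidth in Mbps.

    Capacity grows in increments of one link's speed, up to the
    8-link maximum the chapter's totals assume. Full-duplex
    operation is sometimes quoted as double (send plus receive).
    """
    if not 1 <= num_links <= 8:
        raise ValueError("an EtherChannel bundles 1 to 8 links")
    total = link_speed_mbps * num_links
    return total * 2 if full_duplex else total

# 8 x FastEthernet = 800 Mbps, or 1.6 Gbps counted full duplex:
print(etherchannel_bandwidth_mbps(100, 8))                    # 800
print(etherchannel_bandwidth_mbps(100, 8, full_duplex=True))  # 1600
```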
Improving Performance with Spanning Tree
Redundant Topology
Redundant topology can be accomplished using multiple links, multiple devices, or both.
The key is to provide multiple pathways and eliminate a single point of failure. Figure 2-16
shows a simple redundant topology between segment 1 and segment 2.
Figure 2-16 Redundant Topology
Although redundant designs can eliminate the possibility of a single point of failure causing
a loss of function for the entire switched or bridged network, you must consider problems
that redundant designs can cause. Some of the problems that can occur with redundant links
and devices in switched or bridged networks are as follows:
■ Broadcast storms: Without some loop-avoidance process in operation, each switch or
bridge floods broadcasts endlessly. This situation is commonly called a broadcast
storm.
■ Multiple frame transmission: Multiple copies of unicast frames may be delivered to
destination stations. Many protocols expect to receive only a single copy of each
transmission. Multiple copies of the same frame can cause unrecoverable errors.
■ MAC database instability: Instability in the content of the MAC address table results from copies of the same frame being received on different ports of the switch. Data forwarding can be impaired when the switch consumes resources coping with this instability.
Layer 2 LAN protocols, such as Ethernet, lack a mechanism to recognize and eliminate
endlessly looping frames. Some Layer 3 protocols like IP implement a Time-To-Live (TTL)
mechanism that limits the number of times a Layer 3 networking device can retransmit a
packet. Lacking such a mechanism, Layer 2 devices continue to retransmit looping traffic
indefinitely.
A loop-avoidance mechanism is required to solve each of these problems.
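The contrast between TTL-protected packets and TTL-less frames can be illustrated with a toy forwarding loop (a hypothetical model, not a protocol implementation):

```python
def forward_through_loop(ttl, max_hops=1000):
    """Count how many hops a packet or frame survives in a forwarding loop.

    ttl=None models a Layer 2 frame, which has no TTL field: it only
    stops because the simulation caps it. An IP packet starts with a
    TTL (for example, 64) that each Layer 3 hop decrements; when it
    reaches 0, the packet is discarded.
    """
    hops = 0
    while hops < max_hops:
        if ttl is not None:
            if ttl == 0:
                break          # Layer 3 device discards the packet
            ttl -= 1
        hops += 1
    return hops

print(forward_through_loop(ttl=64))    # 64 -> the loop self-limits
print(forward_through_loop(ttl=None))  # 1000 -> only the cap stops it
```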
Recognizing Issues of a Redundant Switched Topology
Because of the simple algorithms that a Layer 2 device uses to forward frames, numerous
issues must be managed in a redundant topology. Although these issues are managed with
technology built into the devices, a failure in these technologies may create network
outages. It is important to understand these issues in more detail.
Switch Behavior with Broadcast Frames
Switches handle broadcast and multicast frames differently from the way they handle
unicast frames. Because broadcast and multicast frames may be of interest to all stations,
the switch or bridge normally floods broadcast and multicast frames to all ports except the
originating port. A switch or bridge never learns a broadcast or multicast address because
broadcast and multicast addresses never appear as the source address of a frame. This
flooding of broadcast and multicast frames can cause a problem in a redundant switched
topology. Figure 2-17 shows how a broadcast frame from PC D would be flooded out all
ports on the switch.
Figure 2-17 Broadcast Flooding
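The learn-and-flood behavior just described can be modeled with a toy switch (a simplified sketch; the port names and MAC addresses follow Figure 2-17):

```python
BROADCAST = "ffff.ffff.ffff"

class Switch:
    """Toy transparent switch: learn source MACs, flood the rest."""

    def __init__(self, ports):
        self.ports = list(ports)
        self.mac_table = {}            # learned MAC -> port

    def receive(self, in_port, src_mac, dst_mac):
        """Return the list of ports the frame is sent out of."""
        # Learn the source address. A broadcast address never
        # appears as a source, so it is never learned.
        self.mac_table[src_mac] = in_port
        # Broadcast (or unknown unicast): flood to every port
        # except the one the frame arrived on.
        if dst_mac == BROADCAST or dst_mac not in self.mac_table:
            return [p for p in self.ports if p != in_port]
        return [self.mac_table[dst_mac]]

sw = Switch(["E0", "E1", "E2", "E3"])
# PC D on port E3 sends a broadcast: flooded out E0, E1, and E2.
out = sw.receive("E3", "0260.8c01.4444", BROADCAST)
```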
Broadcast Storms
A broadcast storm occurs when each switch on a redundant network floods broadcast
frames endlessly. Switches flood broadcast frames to all ports except the port on which the
frame was received.
Example: Broadcast Storms
Figure 2-18 illustrates the problem of a broadcast storm.
The following describes the sequence of events that start a broadcast storm:
1. When host X sends a broadcast frame, such as an Address Resolution Protocol (ARP)
for its default gateway (Router Y), switch A receives the frame.
(Figure 2-17 detail: hosts A through D attach to switch ports E0 through E3; the switch's MAC address table lists E0: 0260.8c01.1111, E1: 0260.8c01.3333, E2: 0260.8c01.2222, and E3: 0260.8c01.4444.)
Figure 2-18 Broadcast Storm
2. Switch A examines the destination address field in the frame and determines that the
frame must be flooded onto the lower Ethernet link, segment 2.
3. When this copy of the frame arrives at switch B, the process repeats, and the frame is
forwarded to the upper Ethernet segment, which is segment 1, near switch B.
4. Because the original copy of the frame also arrives at switch B from the upper Ethernet
link, these frames travel around the loop in both directions, even after the destination
station has received a copy of the frame.
A broadcast storm can disrupt normal traffic flow. It can also disrupt all the devices on the
switched or bridged network because the CPU in each device on the segment must process
the broadcasts; thus, a broadcast storm can lock up the PCs and servers that try to process
all the broadcast frames.
A loop-avoidance mechanism eliminates this problem by preventing one of the four interfaces from transmitting frames during normal operation, thereby breaking the loop.
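The storm sequence above, and the effect of blocking one of the four interfaces, can be modeled with two toy switches joined by two segments (a deliberately simplified model; the real loop-avoidance mechanism is the Spanning Tree Protocol, discussed next):

```python
from collections import deque

def simulate(block_one_port, max_events=50):
    """Count flood events when a broadcast enters a two-switch loop.

    Switches A and B both connect segment 1 and segment 2. A broadcast
    received on one segment is flooded onto the other, so copies chase
    each other around the loop until one interface is blocked.
    """
    other = {"seg1": "seg2", "seg2": "seg1"}
    blocked = ("B", "seg2") if block_one_port else None
    # Frames in flight, as (switch, segment the frame arrives on);
    # host X's broadcast on segment 1 reaches both switches.
    frames = deque([("A", "seg1"), ("B", "seg1")])
    events = 0
    while frames and events < max_events:
        switch, seg = frames.popleft()
        if (switch, seg) == blocked:
            continue                    # blocked port: frame not received
        events += 1                     # the switch floods this frame
        out = other[seg]
        if (switch, out) == blocked:
            continue                    # blocked port: frame not sent
        peer = "B" if switch == "A" else "A"
        frames.append((peer, out))      # the peer hears it on that segment
    return events

print(simulate(block_one_port=False))  # 50 -> storm; only the cap stops it
print(simulate(block_one_port=True))   # 2  -> loop broken, flooding stops
```

With all four interfaces forwarding, the copies circulate indefinitely; blocking a single interface lets every segment still receive the broadcast exactly as intended while the storm never starts.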