Packops - DevOps Tutorials
1.57K subscribers
1.7K photos
164 videos
580 files
736 links
📕 DevOps / Cloud tutorials
🔧 DevOps troubleshooting
🎥 DevOps videos and podcasts
😎 Admin: @farshad_nickfetrat
Group: @packopsGP
Channel: @packops

Website 👾 :
📌 https://packops.dev
📌 https://packosp.ir
Instagram 👾 :
📌 https://instagram.com/packops.ir
🇮🇷🇮🇷🇮🇷@geekgap🇮🇷🇮🇷🇮🇷
[Interface Processors]
Interface processors are responsible for the following actions:
• Decoding the electrical or optical signals as they arrive from the physical media
• Translating these signals into 1s and 0s
• Transferring the 1s and 0s into a memory location
• Deciding where the end of a packet is and signaling other devices in the router that
there is a new packet to process
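The four actions above can be sketched in miniature. This is an illustrative toy, not real driver code: the "hi"/"lo" symbols and the EOF delimiter are stand-ins for whatever signaling and framing the physical medium actually uses.

```python
# Illustrative sketch of the four receive-side actions: decode line
# symbols into bits, copy them into memory, and signal packet boundaries.
# The "hi"/"lo" symbols and EOF delimiter are hypothetical stand-ins.

END_OF_FRAME = "EOF"     # hypothetical physical-layer frame delimiter

def receive_stream(symbols, memory, notify):
    bits = []
    for sym in symbols:
        if sym == END_OF_FRAME:
            memory.append(bits)        # transfer the 1s and 0s into memory
            notify(len(memory) - 1)    # signal: a new packet is ready
            bits = []
        else:
            bits.append(1 if sym == "hi" else 0)   # decode signal -> bit

memory, ready = [], []
receive_stream(["hi", "lo", "hi", "EOF", "lo", "lo", "EOF"],
               memory, ready.append)
# Two packets delimited: [1, 0, 1] and [0, 0]
```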

Ali Mobasheri
Figure: Interface Processor Ring Buffers
Interface processors transfer packets into memory using direct memory access; they
directly copy the packet into a memory location specified by the controlling device (in this
case, a Cisco IOS Software device driver). The set of addresses they copy the packets into
is stored as a ring buffer, which is illustrated in the figure below.

Each entry in the buffer points to a different memory location; the first packet copied in off
the wire will be placed in the memory location indicated by (pointed to by) A, while the
second will be placed in the memory location pointed to by B, the third will be placed in
the memory location pointed to by C, and so on.
When the interface processor copies a packet into location H, it will loop around the ring
and start copying the next packet into location A again. This looping effect is why the
transmit and receive buffers are generally called transmit and receive rings.
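The wraparound behavior above can be sketched as follows. This is a minimal sketch assuming eight descriptor slots (matching A through H in the figure); the names are illustrative, not actual IOS driver structures.

```python
# Minimal sketch of a receive ring with eight slots (A..H in the figure);
# all names here are illustrative, not actual IOS driver structures.

class ReceiveRing:
    def __init__(self, num_slots=8):
        # Each slot stands in for a pointer to a separate packet buffer.
        self.buffers = [None] * num_slots
        self.next_slot = 0          # where the next packet off the wire goes

    def receive(self, packet):
        """Copy a packet into the current slot and advance the ring."""
        slot = self.next_slot
        self.buffers[slot] = packet
        # After the last slot, wrap back around to the first -- the
        # looping behavior that gives the "ring" its name.
        self.next_slot = (self.next_slot + 1) % len(self.buffers)
        return slot

ring = ReceiveRing()
slots = [ring.receive(f"pkt{i}") for i in range(9)]
# The ninth packet wraps around and reuses the first slot.
```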


[Central Processing Unit]
The central processing unit (CPU) provides horsepower for any general task the software
needs to perform. On some platforms, the CPU does the work required to switch packets,
whereas on others, the CPU primarily focuses on control-plane management while
hardware specifically tailored to switching packets does the packet switching.

[Memory]
Cisco routers use memory to store the following:

• Packets while they are being switched
• Packets while they are being processed
• Routing and switching tables
• General data structures, executing code, and so on

Some Cisco platforms have only one type of memory, dynamic random-access memory
(DRAM) or synchronous dynamic random-access memory (SDRAM), whereas others have
a wide variety of memory available for different purposes.

[Backplanes and Switching Fabrics]
When a packet is switched by a router, it has to be copied from the inbound port to the
outbound port in some way; the ports in the router are generally interconnected using some
sort of a switching fabric to allow this inbound-to-outbound copying. Cisco routers use
three types of interconnections:
• Shared memory
• Crossbar switching fabric
• Bus backplanes

[Shared Memory]
In shared memory architectures, packets are copied into a memory location that is
accessible by both the input and output interface processors. The figure below illustrates that as a
packet is copied into memory on the inbound side, it is copied into a packet buffer that all
interfaces can access. To transmit a packet, the outbound interface copies the packet out of
the shared memory location onto the physical wire (encoding the 0s and 1s as needed).
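The key property described above is that only one data copy exists and interfaces exchange a reference to it. The following is a minimal sketch assuming a single buffer pool visible to every interface; the handle-passing scheme is illustrative.

```python
# Sketch of the shared-memory path, assuming one buffer pool visible to
# every interface; the handle-passing scheme here is illustrative.

shared_buffers = {}   # buffer id -> packet bytes, reachable by all interfaces
next_buffer_id = 0

def inbound_copy(packet):
    """Inbound interface processor copies the packet into shared memory."""
    global next_buffer_id
    buf_id = next_buffer_id
    shared_buffers[buf_id] = packet
    next_buffer_id += 1
    return buf_id            # only this handle is handed to the outbound side

def outbound_transmit(buf_id):
    """Outbound interface reads the same location; no second data copy."""
    return shared_buffers.pop(buf_id)

handle = inbound_copy(b"payload")
sent = outbound_transmit(handle)   # the very bytes the inbound side stored
```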

The primary speed bottleneck on shared memory architectures tends to be the speed of the
memory in which the packets are stored. If the memory cannot be accessed for some
amount of time after a packet is written to it, or if the access speed is slow, the interface
processors won’t be able to copy packets to and from the wire quickly. Most routers that
use this architecture use very-high-speed memory systems to accommodate high-speed
packet switching.
In some routers, the interface processor is physically separated from the shared memory
used to switch packets; for example, in the Cisco 7200 series of routers, the interface
processors are physically located on the Port Adapters (PAs), while the shared memory is
physically located on the Network Processing Engine (NPE). In these cases, a bus separates
the shared memory from the interface processor, as the figure below illustrates.
Figure: Copying Packets Through Shared Memory over a Bus
If there is a bus between the interface processor and the shared memory, packets are copied
into local memory on the line card and then transferred across the bus to the shared memory,
where other line cards can access the packet to copy it back for transmitting.
Because each packet must be copied across the bus between the interface processor and the
shared memory twice, the bus’s bandwidth has a major impact on the performance of the
router, as well as on the speed at which packets can be written into and read from the shared
memory.
In virtually all shared memory systems, whether there is a bus between the interface
processor and the shared memory or not, the work of switching the packet is done by the
CPU or some other centralized processor that has access to the shared memory.

[Crossbar Switching Fabric]
If a switching decision can be made on individual line cards—such as when the line cards
have specialized hardware or a separate processor—copying the packet into shared memory
of any type between interfaces isn’t necessary. Instead, the packet can be transferred
directly from the local memory on one line card to the local memory on another line card.
The figure below illustrates an example of a crossbar switching fabric.
Figure: Crossbar Switching Fabric
In a crossbar switching fabric, each line card has two connections to a fabric. At each cycle
(or point in time), any given line card’s output can be connected to any other line card’s
input. So, if Line Card 2 receives a packet that it determines needs to be transmitted out a
port on Line Card 3, it can ask the switch fabric controller to connect its output to the input
of Line Card 3 and transfer the packet directly.
For multicast, the inbound line card can request a connection to multiple line card inputs.
For example, in the Figure, Line Card 5’s output is connected to both Line Card 6’s and
Line Card 4’s input, so any packets Line Card 5 transmits will be received by both of these
line cards at the same time.
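One fabric cycle, including the multicast case above, can be sketched as a set of output-to-input grants. This is an illustrative sketch assuming the controller grants a request only when every destination input is free; the card numbers follow the text, but the grant logic is hypothetical.

```python
# Sketch of one crossbar cycle, assuming the fabric controller grants a
# request only when every destination input is free; card numbers follow
# the text, the grant logic is illustrative.

def schedule_cycle(requests):
    """requests maps source card -> list of destination cards (more than
    one destination models multicast). Each card has a single input, so a
    request is granted only if none of its inputs is already taken."""
    taken_inputs = set()
    grants = {}
    for src, dests in requests.items():
        if all(d not in taken_inputs for d in dests):
            grants[src] = dests
            taken_inputs.update(dests)
    return grants

# Card 2 -> Card 3 (unicast) and Card 5 -> Cards 4 and 6 (multicast)
# can run in parallel, since no input is contended.
parallel = schedule_cycle({2: [3], 5: [4, 6]})
# Cards 1 and 2 both targeting Card 3 cannot: only one is granted.
contended = schedule_cycle({1: [3], 2: [3]})
```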
The primary advantage of a crossbar switching fabric is that it can be scaled along with the
number of line cards installed in a system; each new line card installed represents a new set
of input and output connections that can be used in parallel with the other existing
connections. As long as each individual connection has enough bandwidth to carry line-rate
traffic between any pair of line cards, the entire router can carry line-rate traffic between
multiple pairs of line cards.
The bandwidth of the individual lines in the crossbar switching mesh will not help in one
instance: if two line cards want to transmit packets to a third line card at the same time. For
example, in the Figure, if both Line Cards 1 and 2 want to transmit a packet to Line Card
3, they can’t; Line Card 3 has only one input.
This problem has several solutions, all of which are used in Cisco routers. The first
is to schedule the connections so that no line card is expected to receive from two
other line cards at once. Cisco routers provide various scheduling algorithms and systems
to ensure that no line card is starved for transmission, so packets don't bottleneck on the
inbound line card.
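Serializing contending transfers over successive cycles can be sketched as below. This is a hedged illustration: the text only says IOS schedulers prevent starvation, so the simple first-come retry queue here is an assumption, not the actual algorithm.

```python
# Hedged sketch of resolving output contention over successive cycles;
# the first-come retry queue is an assumption, not the real IOS
# scheduling algorithm (the text only says starvation is prevented).

from collections import deque

def run_cycles(requests):
    """requests: list of (source_card, dest_card) transfers. Each input
    accepts one source per cycle; the losers retry on the next cycle."""
    pending = deque(requests)
    cycles = []
    while pending:
        taken, granted, retry = set(), [], deque()
        while pending:
            src, dst = pending.popleft()
            if dst in taken:
                retry.append((src, dst))    # input busy: wait one cycle
            else:
                taken.add(dst)
                granted.append((src, dst))
        cycles.append(granted)
        pending = retry
    return cycles

# Line Cards 1 and 2 both target Line Card 3, so the transfers serialize.
cycles = run_cycles([(1, 3), (2, 3)])
```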
Another possibility, used on some high-speed line cards on some platforms, is to provide
two connections into a given line card so that it can receive packets from two different line
cards at the same time. This doesn’t completely resolve the problem—scheduling is still
required—but it does relieve some of the pressure on line cards that are often used as
uplinks to higher-speed networks.

[Bus Backplanes]
The last type of connection used between line cards is a bus backplane. This is different in
a few ways from a shared memory architecture with a bus. Figure below illustrates such a
backplane.
Figure: Bus Backplane

Packets that are received by a line card are transmitted onto the backplane as they are
received. When the switching engine has received and processed enough of the packet to
decide which interface the packet needs to be transmitted out of, the line card with that
interface connected is instructed to continue copying the packet off the backplane and is
given information about which port the packet should be transmitted out of.

The remaining line cards are instructed to flush this packet from their buffers and
prepare to receive the next packet, or to do other work as needed.
Figure: Packet Receipt and Transmission Across a Bus Backplane

The following list explains the process illustrated in the figure:
1 Line Card 1 begins receiving the packet and copies it onto the backplane; all the other
line cards begin to store the packet in local buffers.
2 When Line Card 1 has put the packet headers, up through the IP header, onto the
backplane, the switching engine consults its local tables to determine the correct output port.
3 The switching engine now signals Line Card 4 that it should continue accepting and
storing this packet, and it informs the remaining line cards that there is no reason for
them to continue receiving or storing this packet.
4 After Line Card 1 has finished transmitting the packet onto the backplane, Line Card
4 transmits the packet onto the locally connected physical media.
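The four steps above can be sketched end to end. This is a minimal sketch assuming a forwarding table keyed on the destination address; the card numbers, addresses, and table are all illustrative.

```python
# Sketch of the four backplane steps above, assuming a forwarding table
# keyed on destination address; cards, addresses, and table contents are
# all illustrative.

def switch_over_backplane(header, body, listening_cards, forwarding_table):
    # Step 1: as the packet hits the bus, every listening card buffers it.
    buffers = {card: [header] for card in listening_cards}
    # Step 2: once the headers are on the bus, the switching engine looks
    # up the output line card.
    out_card = forwarding_table[header["dst"]]
    # Step 3: only the chosen card keeps receiving; the rest flush.
    for card in list(buffers):
        if card != out_card:
            del buffers[card]
    buffers[out_card].append(body)
    # Step 4: the chosen card transmits the complete packet onto its media.
    return out_card, buffers[out_card]

forwarding_table = {"10.0.0.1": 4}       # destination -> line card
out, packet = switch_over_backplane(
    {"dst": "10.0.0.1"}, b"payload", [2, 3, 4], forwarding_table)
```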
Switching multicast packets over a backplane bus is relatively simple; the switching engine
signals several line cards to accept and transmit the packet, rather than just one. Scheduling
algorithms and local buffers are used to prevent several line cards from sending packets
onto the backplane at the same time.

Dear GeekGap friends, good evening; may your night be a pleasant one, God willing.
Tomorrow night's topic:

[Cisco IOS Software: The Brains]

We will talk in full about the IOS of Cisco devices.

May every day be a GeekGap day!

🎄🎊
We congratulate our dear Christian compatriots on the start of Christmas and hope you have a year filled with health, happiness, prosperity, and success.
🌸🌹🌼🌻🌷🌺🌼🌹❤️⛄️❄️
@geekgap