Functionalities of Memory Buffering

Port-based Memory Buffering:

In port-based memory buffering, frames are stored in queues that are linked to specific incoming and outgoing ports. Each port has its own fixed buffer, and a frame is transmitted out a port only after all the frames ahead of it in that port's queue have been sent. Because of this, a single frame waiting on a busy destination port can delay every frame queued behind it, even if those frames are destined for ports that are free. Frames are dropped when a port's queue runs out of buffer space. A minimal sketch of this behavior follows.
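
The snippet below is a toy illustration, not switch firmware: it models port-based buffering as one fixed-size FIFO per egress port, with drops when that port's private queue is full and head-of-line waiting when the port is busy. All names (PortBasedSwitch, queue_depth, transmit_one) and the numbers are hypothetical choices for the example.

```python
from collections import deque

class PortBasedSwitch:
    """Toy model of port-based memory buffering: each egress port owns a
    fixed-size FIFO queue, and a frame must wait behind everything already
    queued on that port (head-of-line blocking within the port)."""

    def __init__(self, num_ports, queue_depth=4):
        self.queues = {p: deque() for p in range(num_ports)}
        self.queue_depth = queue_depth
        self.dropped = 0

    def enqueue(self, frame, egress_port):
        q = self.queues[egress_port]
        if len(q) >= self.queue_depth:     # this port's private buffer is full
            self.dropped += 1              # the frame is dropped outright
        else:
            q.append(frame)

    def transmit_one(self, egress_port, port_busy=False):
        """Forward the head-of-queue frame unless the port is busy; frames
        behind it keep waiting even if other ports are idle."""
        q = self.queues[egress_port]
        if port_busy or not q:
            return None
        return q.popleft()


switch = PortBasedSwitch(num_ports=4, queue_depth=2)
for i in range(3):                                   # third frame exceeds the depth
    switch.enqueue(f"frame-{i}", egress_port=1)
print(switch.transmit_one(1), "| dropped:", switch.dropped)   # frame-0 | dropped: 1
```

The key design point the sketch captures is that buffer space belongs to a single port: an overloaded port drops frames even while other ports' queues sit empty.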

Shared Memory Buffering:

Instead of port-based buffering, some early Cisco switches used a shared memory architecture. With shared buffering, all frames are placed in a common memory buffer that every port on the switch shares, and buffer space is allocated to each port dynamically as it is needed. Frames in the buffer are linked dynamically to their destination ports, so a frame can be received on one port and transmitted on another without being moved to a different queue. A small sketch of this idea follows.
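
As a contrast to the port-based sketch above, the following toy model keeps a single shared pool: every buffered frame lives in one structure, tagged with its destination port, and a busy port does not block frames headed elsewhere. Again, the class and parameter names (SharedMemorySwitch, total_buffer, transmit_ready) are illustrative assumptions, not Cisco internals.

```python
from collections import deque

class SharedMemorySwitch:
    """Toy model of shared memory buffering: one common pool holds frames
    from every port, each frame is dynamically linked to its destination
    port, and pool space is consumed only as frames actually arrive."""

    def __init__(self, total_buffer=8):
        self.pool = deque()              # single buffer shared by all ports
        self.total_buffer = total_buffer
        self.dropped = 0

    def enqueue(self, frame, egress_port):
        if len(self.pool) >= self.total_buffer:    # whole shared pool exhausted
            self.dropped += 1
        else:
            self.pool.append((egress_port, frame))  # frame linked to its port

    def transmit_ready(self, busy_ports=frozenset()):
        """Send every buffered frame whose destination port is free; frames
        for busy ports stay put without being moved to another queue."""
        sent, waiting = [], deque()
        for port, frame in self.pool:
            (waiting if port in busy_ports else sent).append((port, frame))
        self.pool = waiting
        return sent


switch = SharedMemorySwitch(total_buffer=4)
switch.enqueue("frame-A", egress_port=1)
switch.enqueue("frame-B", egress_port=2)
print(switch.transmit_ready(busy_ports={2}))   # frame-A is sent; frame-B waits
```

Because the pool is shared, the amount of buffering a single port can use grows and shrinks with demand, at the cost of one congested port being able to consume space other ports might need.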

Memory Buffering in Cisco Switches

A memory buffer is the portion of a switch's memory used to store data. Network switch interfaces buffer traffic that exceeds their capacity and drop it once the buffer is exhausted. The main causes of buffering are traffic bursts, many-to-one traffic patterns, and interface speed mismatches. Ethernet switches use memory buffering to hold frames before they are forwarded to their destination: when the destination port is busy or congested, frames are held by the switch until the port can transmit them. Without an effective memory buffering mechanism, frames are simply dropped whenever network congestion occurs.
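
To make the speed-mismatch cause concrete, here is a rough back-of-the-envelope calculation under assumed numbers (a 1 Gbps ingress feeding a 100 Mbps egress port with a 1 MB buffer, values chosen purely for illustration): the buffer fills at the rate difference, so it absorbs only a few milliseconds of sustained burst before drops begin.

```python
# Hypothetical figures for illustration only: traffic arrives on a 1 Gbps
# uplink but must leave through a 100 Mbps access port, so the egress
# buffer fills at the difference between the two rates.
ingress_rate_bps = 1_000_000_000      # 1 Gbps arriving
egress_rate_bps  =   100_000_000      # 100 Mbps leaving
buffer_bytes     = 1 * 1024 * 1024    # assume a 1 MB buffer on this port

fill_rate_bps    = ingress_rate_bps - egress_rate_bps     # backlog grows at 900 Mbps
burst_headroom_s = (buffer_bytes * 8) / fill_rate_bps     # time until drops start

print(f"Buffer absorbs roughly {burst_headroom_s * 1000:.1f} ms of this burst")
# -> about 9.3 ms; any sustained burst longer than that overflows the buffer
#    and frames are dropped
```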
