Tuesday, 21 March 2017

Memory Buffering on Switches

An Ethernet switch uses buffering to store frames before forwarding them to their destination. Buffering is also used when the destination port is busy due to congestion: during congestion, the switch stores each frame until it can be transmitted. The area of memory where the switch stores this data is called the memory buffer; a brief sketch of this buffering decision appears after the list below. There are two methods of memory buffering:



  • Port-based memory buffering

  • Shared memory buffering
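
As a rough illustration of the buffering decision described above, the following Python sketch models a single egress port that transmits a frame immediately when the link is free, stores it in the memory buffer when the link is busy, and drops it when the buffer is full. The class and method names (EgressPort, forward, transmit_done) are hypothetical and used only for illustration.

```python
from collections import deque

class EgressPort:
    """Hypothetical sketch of one switch egress port with a memory buffer."""

    def __init__(self, name, buffer_size=8):
        self.name = name
        self.buffer = deque()          # memory buffer for this port
        self.buffer_size = buffer_size
        self.busy = False              # True while a frame is on the wire

    def forward(self, frame):
        if not self.busy:
            self.busy = True           # link free: transmit immediately
            print(f"{self.name}: transmitting {frame}")
        elif len(self.buffer) < self.buffer_size:
            self.buffer.append(frame)  # link busy: store until it frees up
            print(f"{self.name}: buffered {frame}")
        else:
            print(f"{self.name}: buffer full, dropping {frame}")

    def transmit_done(self):
        # Called when the current transmission finishes.
        if self.buffer:
            frame = self.buffer.popleft()
            print(f"{self.name}: transmitting buffered {frame}")
        else:
            self.busy = False

port = EgressPort("Gi0/1")
port.forward("frame-1")   # transmitted immediately
port.forward("frame-2")   # buffered, because the port is busy
port.transmit_done()      # frame-2 now leaves the buffer
```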


Port-based Memory Buffering


In port-based memory buffering, frames are stored in queues that are linked to specific incoming ports. The switch provides each Ethernet port with a certain amount of high-speed memory to buffer frames until they can be transmitted. A disadvantage of port-based buffering is that frames are dropped when a port runs out of buffers. It is also possible for a single frame to delay the transmission of all the frames in its queue because its destination port is busy. This delay occurs even if the other frames could be transmitted to open destination ports.
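
The head-of-line blocking problem described above can be illustrated with a short Python sketch, assuming one FIFO queue per ingress port. The port names and frame labels are made up; the point is that frame-B and frame-C cannot leave the queue while frame-A at the head waits for a congested egress port.

```python
from collections import deque

# Port-based buffering sketch: each ingress port has its own FIFO queue,
# so a frame at the head waiting on a busy egress port blocks every frame
# behind it, even frames destined for idle ports.

ingress_queue = deque([
    ("frame-A", "Gi0/2"),   # destined for a congested port
    ("frame-B", "Gi0/3"),   # destined for an idle port
    ("frame-C", "Gi0/4"),   # destined for an idle port
])

busy_egress_ports = {"Gi0/2"}

def service_one_cycle(queue):
    """Try to forward the frame at the head of the per-port FIFO."""
    if not queue:
        return
    frame, egress = queue[0]
    if egress in busy_egress_ports:
        # Head-of-line blocking: frame-B and frame-C wait behind frame-A.
        print(f"{frame} waiting on busy port {egress}; "
              f"{len(queue) - 1} frame(s) blocked behind it")
    else:
        queue.popleft()
        print(f"{frame} forwarded out {egress}")

service_one_cycle(ingress_queue)   # nothing moves: frame-A blocks the queue
busy_egress_ports.clear()          # congestion clears
service_one_cycle(ingress_queue)   # frame-A goes, then the rest can follow
```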


Shared Memory Buffering


Some of the earliest Cisco switches use a shared memory design for port buffering. Shared memory buffering deposits all frames into a common memory buffer that all the ports on the switch share. The amount of buffer memory a port requires is allocated dynamically, and the frames in the buffer are dynamically linked to their destination ports. This allows a frame to be received on one port and transmitted out another port without being moved to a different queue.
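
A minimal Python sketch of the shared-memory idea, assuming a common buffer pool and per-port egress queues that hold only references into that pool: the frame is written once into shared memory and "linked" to its destination port rather than copied into a port-specific buffer. The pool capacity, port names, and functions here are illustrative, not a real switch implementation.

```python
from collections import deque

shared_pool = {}                     # buffer_id -> frame payload
egress_queues = {"Gi0/1": deque(), "Gi0/2": deque()}
next_id = 0
POOL_CAPACITY = 1024                 # total buffers shared by all ports

def receive(frame, egress_port):
    """Store the frame once in the shared pool and link it to its egress queue."""
    global next_id
    if len(shared_pool) >= POOL_CAPACITY:
        print(f"pool full, dropping {frame}")
        return
    buffer_id = next_id
    next_id += 1
    shared_pool[buffer_id] = frame               # one copy, in shared memory
    egress_queues[egress_port].append(buffer_id) # queue holds a pointer only

def transmit(egress_port):
    """Pull the next pointer off the egress queue and free the shared buffer."""
    if egress_queues[egress_port]:
        buffer_id = egress_queues[egress_port].popleft()
        frame = shared_pool.pop(buffer_id)       # buffer returned to the pool
        print(f"{egress_port}: transmitted {frame}")

receive("frame-X", "Gi0/2")
receive("frame-Y", "Gi0/1")
transmit("Gi0/2")
transmit("Gi0/1")
```

Because every port draws from the same pool, a congested port can temporarily use more buffer space than it would get with a fixed per-port allocation, which is the main advantage of the shared design.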
