Sunday, 30 October 2016

Data Bandwidth

In computer networks, bandwidth is the amount of data that can be carried from one point to another in a given time period (generally one second). Network bandwidth is usually expressed in bits per second (bps): kilobits per second (kb/s), megabits per second (Mb/s), or gigabits per second (Gb/s). It is sometimes thought of as the speed at which bits travel; however, this is not accurate.


For example, in both 100 Mb/s and 1000 Mb/s Ethernet, the bits are sent at the speed of electricity. The difference is the number of bits that are transmitted per second.
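As a rough illustration of this point, the sketch below computes the best-case transfer time for the same file at the two Ethernet rates mentioned above. The function name and file size are illustrative, and the calculation ignores all protocol overhead.

```python
# Sketch: ideal (theoretical) transfer time at a given bandwidth.
# File sizes are stored in bytes, but bandwidth is quoted in bits per
# second, so the byte count is multiplied by 8 before dividing.

def transfer_time_seconds(file_size_bytes: int, bandwidth_bps: int) -> float:
    """Best-case time to move a file over a link, ignoring all overhead."""
    return (file_size_bytes * 8) / bandwidth_bps

# A 100,000,000-byte file over 100 Mb/s vs 1000 Mb/s Ethernet:
print(transfer_time_seconds(100_000_000, 100_000_000))    # 8.0 seconds
print(transfer_time_seconds(100_000_000, 1_000_000_000))  # 0.8 seconds
```

Both links move bits at the same signal speed; the faster link simply finishes sooner because more bits are placed on the medium each second.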



A combination of factors determines the practical bandwidth of a network:

  • The properties of the physical media.

  • The technologies chosen for signaling and detecting network signals.

Physical media properties, current technologies, and the laws of physics all play a role in determining the available bandwidth.




The table shows the commonly used units of measure for data bandwidth.

  Unit       Abbreviation   Decimal Value   Binary Value   Decimal Size
  Bit        b              0 or 1          0 or 1         1/8 of a byte
  Byte       B              8 bits          8 bits         1 byte
  Kilobyte   KB             1,000^1         1024^1         1,000 bytes
  Megabyte   MB             1,000^2         1024^2         1,000,000 bytes
  Gigabyte   GB             1,000^3         1024^3         1,000,000,000 bytes
  Terabyte   TB             1,000^4         1024^4         1,000,000,000,000 bytes
  Petabyte   PB             1,000^5         1024^5         1,000,000,000,000,000 bytes
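The decimal and binary columns of the table can be sketched in a few lines of code. This is only an illustration of the two conventions (powers of 1,000 versus powers of 1024); the dictionary names are arbitrary.

```python
# Sketch of the two interpretations of data-size units: decimal (SI)
# units are powers of 1,000, while binary units are powers of 1024.

DECIMAL_UNITS = {"KB": 1000**1, "MB": 1000**2, "GB": 1000**3,
                 "TB": 1000**4, "PB": 1000**5}
BINARY_UNITS = {"KB": 1024**1, "MB": 1024**2, "GB": 1024**3,
                "TB": 1024**4, "PB": 1024**5}

for unit in DECIMAL_UNITS:
    print(f"1 {unit}: {DECIMAL_UNITS[unit]:,} bytes (decimal) "
          f"vs {BINARY_UNITS[unit]:,} bytes (binary)")
```

The gap between the two conventions grows with each prefix: a decimal kilobyte is about 2% smaller than a binary one, while a decimal petabyte is roughly 11% smaller.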




Throughput




The measurement of bit transfer across the media over a given period of time is called throughput. It is a measure of how many units of information a system can process in a given amount of time. Because of several factors, throughput generally does not match the specified bandwidth in physical layer implementations. Many factors influence it, including the following:




  • The type of traffic




  • The amount of traffic




  • The latency created by the number of network devices between source and destination




  • Error rate




Latency is the amount of time, including delays, for data to travel from one given point to another.


In networks with multiple segments, throughput cannot be faster than the slowest link in the path from source to destination. Even if all or most of the segments have high bandwidth, it only takes one segment in the path with low throughput to create a bottleneck for the throughput of the entire network.
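This bottleneck behavior can be sketched very simply: the end-to-end rate is the minimum of the per-segment rates. The link rates below are made-up illustrative values, not measurements.

```python
# Sketch: end-to-end throughput across multiple segments is limited by
# the slowest link in the path (the bottleneck segment).

def path_throughput_mbps(link_rates_mbps):
    """End-to-end throughput can be no faster than the slowest segment."""
    return min(link_rates_mbps)

# Three fast 1000 Mb/s segments and one slow 10 Mb/s segment:
print(path_throughput_mbps([1000, 1000, 10, 1000]))  # 10 Mb/s
```

Upgrading the three fast segments further would change nothing; only improving the 10 Mb/s link raises the end-to-end throughput.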


The average transfer speed over a medium is often described as throughput. This measurement includes all of the protocol overhead information, such as packet headers and other data included in the transfer process. It also includes packets that are retransmitted because of network conflicts or errors.


There is another measurement, known as goodput, that evaluates the transfer of usable data. Goodput is the measure of usable data transferred over a given period of time. Goodput is throughput minus the traffic overhead for establishing sessions, acknowledgments, and encapsulation. It measures only the original data.
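The distinction can be made concrete with a small sketch. The byte counts below are made-up illustrative figures: the "on the wire" total includes headers, acknowledgments, and retransmissions, while the payload count is the original data only.

```python
# Sketch: throughput counts every bit on the wire; goodput counts only
# the usable (original) payload. Goodput is therefore always <= throughput.

def throughput_bps(total_bytes_on_wire: int, seconds: float) -> float:
    """All bits transferred per second, overhead and retransmissions included."""
    return (total_bytes_on_wire * 8) / seconds

def goodput_bps(usable_bytes: int, seconds: float) -> float:
    """Only the original application data transferred per second."""
    return (usable_bytes * 8) / seconds

# 1,000,000 payload bytes carried inside 1,100,000 bytes on the wire, over 2 s:
print(throughput_bps(1_100_000, 2))  # 4400000.0 b/s
print(goodput_bps(1_000_000, 2))     # 4000000.0 b/s
```

Here roughly 9% of the transferred bits are overhead, which is why a link never delivers its full rated bandwidth as usable data.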

