What Causes Packet Loss on the Internet?

Why Packet Loss?: Faced with a typical global Internet ping time of 300 milliseconds and 1 packet in 25 lost (4 per cent), an Internet VPN user on a T1 connection who might expect 10 megabytes/minute of throughput on a clean local connection will find they are getting less than 10 per cent of that: under 1 megabyte/minute.
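The scale of that collapse can be estimated with the well-known Mathis approximation for TCP throughput on a lossy path. The sketch below (Python, with an assumed 1460-byte segment size and the figures quoted above) gives only an upper bound; real transfers that also hit timeouts and slow start do worse, which is consistent with the under-1-megabyte figure.

```python
# Rough illustration: TCP throughput bound on a lossy, long-delay path using
# the Mathis et al. approximation  throughput <= (MSS / RTT) * (C / sqrt(p)).
# The MSS and constant C below are illustrative assumptions.
import math

MSS_BYTES = 1460          # typical TCP segment payload on an Ethernet path
RTT_S = 0.300             # 300 ms global Internet ping time
LOSS = 0.04               # 1 packet in 25 lost
C = 1.22                  # constant from the simple Mathis model
T1_BPS = 1.536e6          # T1 payload rate

clean_bytes_per_min = T1_BPS / 8 * 60
lossy_bps = (MSS_BYTES * 8 / RTT_S) * (C / math.sqrt(LOSS))
lossy_bytes_per_min = min(lossy_bps, T1_BPS) / 8 * 60

print(f"clean T1:   {clean_bytes_per_min / 1e6:.1f} MB/minute")                 # ~11.5
print(f"lossy path: {lossy_bytes_per_min / 1e6:.1f} MB/minute (upper bound)")   # ~1.8
```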

Congestion: The Internet standards treat packet loss and congestion as synonyms: routers discard incoming packets that they cannot store or transmit. Imagine a 10 megabit/sec Ethernet pipe feeding a T1 (1.544 megabit/sec) router. Any time the average feed from the Ethernet exceeds 1.544 megabits/sec, packets will be lost. This is normal congestion: packets are lost because the average sum of the inputs to a router exceeds the capacity of its output.
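A toy simulation makes the arithmetic concrete. The sketch below is illustrative only (the 50-packet buffer is an assumption, and real routers schedule far more finely): it offers 1500-byte packets at Ethernet speed to a T1 output and counts how many must be tail-dropped.

```python
# Minimal sketch of tail drop at a bottleneck: a 10 Mb/s Ethernet feeds packets
# in faster than a T1 can drain them; once the finite buffer is full, every
# further arrival is discarded.
PACKET_BITS = 1500 * 8
BUFFER_PACKETS = 50                      # assumed buffer size
ARRIVAL_BPS = 10_000_000                 # Ethernet feed
DEPART_BPS = 1_544_000                   # T1 output

queue_len = arrived = sent = dropped = 0
arrival_credit = departure_credit = 0.0
for _ in range(10_000):                  # simulate 10 seconds in 1 ms ticks
    arrival_credit += ARRIVAL_BPS / 1000
    departure_credit += DEPART_BPS / 1000
    while arrival_credit >= PACKET_BITS:         # a packet arrives from the Ethernet
        arrival_credit -= PACKET_BITS
        arrived += 1
        if queue_len < BUFFER_PACKETS:
            queue_len += 1
        else:
            dropped += 1                         # tail drop: buffer is full
    while departure_credit >= PACKET_BITS and queue_len:
        departure_credit -= PACKET_BITS          # a packet leaves on the T1
        queue_len -= 1
        sent += 1

print(f"arrived {arrived}, sent {sent}, dropped {dropped} ({dropped / arrived:.0%})")
```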

Bit errors: As packets move from place to place, there is always a chance that some bits will be corrupted. Each packet has a checksum, a mathematical sum of the bits it contains, appended to it. When a receiving router gets a packet whose contents and appended checksum do not agree, that packet is discarded. This can occur anywhere along the journey from source to destination.
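The "mathematical sum" is a checksum. The sketch below models the idea on the RFC 1071 Internet checksum (real links also add stronger CRCs at Layer 2): flip a single bit in transit and the recomputed sum no longer matches, so the packet is thrown away.

```python
# Sketch of checksum-based discard, modeled on the RFC 1071 Internet checksum.
def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words, as in RFC 1071."""
    if len(data) % 2:
        data += b"\x00"                  # pad to a whole number of 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:                   # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

packet = bytearray(b"example payload bits")
original_sum = internet_checksum(bytes(packet))

packet[3] ^= 0x01                        # one bit modified somewhere en route
if internet_checksum(bytes(packet)) != original_sum:
    print("checksum mismatch: the receiving router discards the packet")
```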

Deliberate Discard: Layer 2 and Layer 3 networks can guarantee that voice or video connections will not lose bits. Internet packet traffic moves over these same networks, and if it looks as though there are too many packets to get all the voice and video through without missing a bit, data packets are discarded until there is room for the voice and video. Similarly, Cisco and Nortel backbone routers offer packet discard policies, so the operator of the router can decide which types of traffic will suffer lost packets as the router approaches congestion.
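A discard policy of that kind can be sketched as a simple queue admission rule. The code below is illustrative only (the queue limit and threshold are assumptions, not a Cisco or Nortel configuration): best-effort data is refused once the queue is 80 per cent full, keeping the remaining room free for voice and video.

```python
# Sketch of a priority-aware discard policy: data packets are sacrificed first
# as the queue approaches congestion, so voice and video still find room.
from collections import deque

QUEUE_LIMIT = 100                 # assumed total buffer, in packets
DATA_DISCARD_THRESHOLD = 80       # assumed: refuse best-effort data at 80% full

queue = deque()

def enqueue(packet_class: str) -> bool:
    """Queue a packet, or discard it according to the drop policy."""
    if packet_class in ("voice", "video"):
        if len(queue) < QUEUE_LIMIT:              # priority traffic may use the whole buffer
            queue.append(packet_class)
            return True
        return False                              # even priority traffic drops when full
    if len(queue) < DATA_DISCARD_THRESHOLD:       # best-effort data gives way first
        queue.append(packet_class)
        return True
    return False
```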

What is causing the delay? 80 milliseconds is a typical North American ping time between national backbones. Most of this delay cannot be avoided.

Speed of Light: The speed of light in a fiber optic cable works out to roughly 10 milliseconds per thousand miles, giving a ping time (due to the speed of light alone) of about 60 milliseconds on a coast-to-coast (US) fiber link. Remember that long-distance routes do not normally run as the crow flies, so the distance may be much greater than you think, especially on overseas connections.
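Using that rule of thumb (about 10 milliseconds of one-way delay per thousand fiber-route miles), the propagation part of a ping time is easy to estimate:

```python
# Back-of-the-envelope propagation delay, using the ~10 ms per 1000 fiber-route
# miles rule of thumb quoted above.
MS_PER_1000_MILES = 10

def fiber_ping_ms(route_miles: float) -> float:
    """Round-trip time contributed by the speed of light in fiber."""
    return 2 * route_miles / 1000 * MS_PER_1000_MILES

print(fiber_ping_ms(3000))   # coast-to-coast US (~3000 route miles): 60 ms
```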

Router in and out time: Routers receive an entire packet before forwarding it. When a router sends a 1500-byte packet at T1 speed (1.536 megabits/sec of payload), the time from the first bit to the last bit is 7.8 milliseconds. If the router is holding packets in a queue while waiting to send them, the delay increases further. We believe this queuing is the reason the measured global Internet ping time varies.
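That store-and-forward figure is just the packet size divided by the link rate; a quick check:

```python
# Serialization ("in and out") delay for one packet at a given link rate.
def serialization_ms(packet_bytes: int, link_bps: float) -> float:
    return packet_bytes * 8 / link_bps * 1000

print(serialization_ms(1500, 1.536e6))   # ~7.8 ms for a 1500-byte packet on a T1
```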

Congestion Avoidance:  TCP assumes that all packet loss is caused by congestion and responds by reducing the transmission rate. 

Slow Start: When a TCP connection starts (or restarts after more than one packet has been lost), it sends one packet, waits for the acknowledgment, then sends two, then four, and so on, ramping up its transmission pace. Each step in the ramp consumes a round-trip delay.
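The cost of the ramp is easy to see in a sketch. With the 300-millisecond ping time used above, even reaching a modest number of packets in flight burns several round trips before the connection gets up to speed:

```python
# Sketch of the slow-start ramp: the amount in flight doubles each round trip,
# so reaching N packets costs roughly log2(N) round trips.
RTT_MS = 300                     # the global Internet ping time used above

def rtts_to_reach(target_packets: int) -> int:
    in_flight, rtts = 1, 0
    while in_flight < target_packets:
        in_flight *= 2           # send 1, then 2, then 4, ... per round trip
        rtts += 1
    return rtts

# roughly 44 packets of 1500 bytes fill a 64 KB window
print(rtts_to_reach(44), "round trips =", rtts_to_reach(44) * RTT_MS, "ms of ramp-up")
```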

Data Acknowledgments: The TCP receiver sends an acknowledgment to the sender whenever a segment of data is received. The sender does not assume any data is lost until a multiple of the round-trip time has elapsed without an acknowledgment, or until it has received multiple duplicate acknowledgments.
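In sketch form, the sender's loss decision looks like this (simplified; real stacks compute the timeout from the smoothed round-trip time and its variance, per RFC 6298):

```python
# Simplified sketch of when a TCP sender declares data lost.
DUP_ACK_THRESHOLD = 3                  # the classic fast-retransmit trigger

def retransmission_timeout_ms(srtt_ms: float, rttvar_ms: float) -> float:
    # RFC 6298 form: RTO = SRTT + 4 * RTTVAR, with a one-second floor
    return max(srtt_ms + 4 * rttvar_ms, 1000.0)

def sender_declares_loss(ms_since_send: float, srtt_ms: float,
                         rttvar_ms: float, dup_acks: int) -> bool:
    return (ms_since_send >= retransmission_timeout_ms(srtt_ms, rttvar_ms)
            or dup_acks >= DUP_ACK_THRESHOLD)
```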

Window Size: TCP can only send a certain amount of data before it must stop transmitting and wait for an acknowledgment. That amount of data is called the window size. The standard TCP window size is limited to 64 kilobytes. RFC 1323 allows larger windows, but it is not yet usable by applications running on Microsoft platforms.
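The window imposes a hard ceiling of window size divided by round-trip time, no matter how fast the link is. With a 64-kilobyte window and the 300-millisecond ping time above, that ceiling already sits near T1 speed before a single packet is lost:

```python
# Throughput ceiling set by the TCP window: at most one window per round trip.
def max_throughput_bps(window_bytes: int, rtt_s: float) -> float:
    return window_bytes * 8 / rtt_s

print(max_throughput_bps(65535, 0.300) / 1e6, "Mb/s")   # ~1.75 Mb/s ceiling
```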

When you factor in all of the above, you get a new perspective on why VPN over the Internet is subject to such variability in performance. If you can eliminate packet loss, your performance rises. This is what makes MPLS networks so attractive for any application where performance is important.

Read more about controlling packet loss to improve application performance.
