7   Packets¶

In this chapter we address a few abstract questions about packets, and take a close look at transmission times. We also consider how big packets should be, and how to detect transmission errors. These issues are independent of any particular set of protocols.

7.1   Packet Delay¶

There are several contributing sources to the delay encountered in transmitting a packet. On a LAN, the most significant is usually what we will call bandwidth delay: the time needed for a sender to get the packet onto the wire. This is simply the packet size divided by the bandwidth, after everything has been converted to common units (either all bits or all bytes). For a 1500-byte packet on 100 Mbps Ethernet, the bandwidth delay is 12,000 bits / (100 bits/µsec) = 120 µsec.

There is also propagation delay, relating to the propagation of the bits at the speed of light (for the transmission medium in question). This delay is the distance divided by the speed of light; for 1,000 m of Ethernet cable, with a signal propagation speed of about 230 m/µsec, the propagation delay is about 4.3 µsec. That is, if we start transmitting the 1500-byte packet of the previous paragraph at time T=0, then the first bit arrives at a destination 1,000 m away at T = 4.3 µsec, the last bit is transmitted at T = 120 µsec, and the last bit arrives at T = 124.3 µsec.
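As a quick check of this arithmetic, here is a minimal sketch in Python (the variable names are ours, chosen for illustration):

```python
# Delay arithmetic for the example above; units are bits, bits/µsec, m, m/µsec.
packet_bits = 1500 * 8           # 1500-byte packet
bandwidth   = 100                # 100 Mbps = 100 bits/µsec
distance_m  = 1000               # 1,000 m of cable
prop_speed  = 230                # ~230 m/µsec signal speed in cable

bandwidth_delay   = packet_bits / bandwidth      # 120.0 µsec
propagation_delay = distance_m / prop_speed      # ~4.3 µsec

first_bit_arrives = propagation_delay                    # ~4.3 µsec
last_bit_sent     = bandwidth_delay                      # 120.0 µsec
last_bit_arrives  = bandwidth_delay + propagation_delay  # ~124.3 µsec
```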

Bandwidth delay, in other words, tends to dominate within a LAN.

But as networks get larger, propagation delay begins to dominate. This also happens as networks get faster: bandwidth delay goes down, but propagation delay remains unchanged.

An important difference between bandwidth delay and propagation delay is that bandwidth delay is proportional to the amount of data sent while propagation delay is not. If we send two packets back-to-back, then the bandwidth delay is doubled but the propagation delay counts only once.

The introduction of switches leads to store-and-forward delay, that is, the time spent reading in the entire packet before any of it can be retransmitted. Store-and-forward delay can also be viewed as an additional bandwidth delay for the second link.

Finally, a switch may or may not also introduce queuing delay; this will often depend on competing traffic. We will look at this in more detail in 20   Dynamics of TCP, but for now note that a steady queuing delay (eg due to a more-or-less constant average queue utilization) looks to each sender more like propagation delay than bandwidth delay, in that if two packets are sent back-to-back and arrive that way at the queue, then the pair will experience only a single queuing delay.

7.1.1   Delay examples¶

Example 1: A──────B

  • Propagation delay is 40 µsec
  • Bandwidth is 1 byte/µsec (1 MB/sec, 8 Mbit/sec)
  • Packet size is 200 bytes (200 µsec bandwidth delay)

So the total one-way transmit time for one packet is 240 µsec = 200 µsec + 40 µsec. To send two back-to-back packets, the time rises to 440 µsec: we add one more bandwidth delay, but not another propagation delay.

Example 2: A──────────────────B

Like the previous example, except that the propagation delay is increased to 4 ms.

The total transmit time for one packet is now 4200 µsec = 200 µsec + 4000 µsec. For two packets it is 4400 µsec.

Example 3: A──────R──────B

We now have two links, each with propagation delay 40 µsec; bandwidth and packet size are as in Example 1.

The total transmit time for one 200-byte packet is now 480 µsec = 240 + 240. There are two propagation delays of 40 µsec each; A introduces a bandwidth delay of 200 µsec and R introduces a store-and-forward delay (or second bandwidth delay) of 200 µsec.

Example 4: A──────R──────B

The same as Example 3, but with the data sent as two 100-byte packets.

The total transmit time is now 380 µsec = 3×100 + 2×40. There are still two propagation delays, but there is only 3/4 as much bandwidth delay because the transmission of the first 100 bytes on the second link overlaps with the transmission of the second 100 bytes on the first link.

_images/delay134.svg

These ladder diagrams represent the full transmission; a snapshot state of the transmission at any one instant can be obtained by drawing a horizontal line. In the middle (Example 3) diagram, for example, at no instant are both links active. Note that sending two smaller packets is faster than one large packet. We expand on this important point below.
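The timing rule behind Examples 1–4 can be sketched in a few lines of Python; `transfer_time` is a helper name of our own choosing, and the sketch assumes n identical store-and-forward links and k equal-sized back-to-back packets:

```python
# With n identical store-and-forward links and k equal back-to-back packets,
# the first packet takes n*(bw_delay + prop_delay); each later packet adds
# just one more bandwidth delay, since its transmission on one link overlaps
# its predecessor's transmission on the next.
def transfer_time(k_packets, n_links, bw_delay, prop_delay):
    return (k_packets + n_links - 1) * bw_delay + n_links * prop_delay

print(transfer_time(1, 1, 200, 40))   # Example 1: 240 µsec
print(transfer_time(2, 1, 200, 40))   # Example 1, two packets: 440 µsec
print(transfer_time(1, 2, 200, 40))   # Example 3: 480 µsec
print(transfer_time(2, 2, 100, 40))   # Example 4: 380 µsec
```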

Now let us consider the situation when the propagation delay is the most significant component. The cross-continental United States roundtrip delay is typically around 50–100 ms (propagation speed 200 km/ms in cable, 5,000–10,000 km cable route, or about 3,000–6,000 miles); we will use 100 ms in the examples here. At a bandwidth of 1.0 Mbps, 100 ms is about 12 kB, or eight full-sized Ethernet packets. At this bandwidth, we would have four packets and four returning ACKs strung out along the path. At 1.0 Gbps, in 100 ms we can send 12,000 kB, or 8,000 Ethernet packets, before the first ACK returns.

At most non-LAN scales, the delay is typically simplified to the round-trip time, or RTT: the time between sending a packet and receiving a response.

Different delay scenarios have implications for protocols: if a network is bandwidth-limited then protocols are easier to design. Extra RTTs do not cost much, so we can build in a considerable amount of back-and-forth exchange. However, if a network is delay-limited, the protocol designer must focus on minimizing extra RTTs. As an extreme example, consider wireless transmission to the moon (2.6 sec RTT), or to Jupiter (1 hour RTT).

At my home I formerly had satellite Internet service, which had a roundtrip propagation delay of ~600 ms. This is remarkably high when compared to purely terrestrial links.

When dealing with reasonably high-bandwidth "large-scale" networks (eg the Internet), to good approximation most of the non-queuing delay is propagation, and so bandwidth and total delay are effectively independent. Only when propagation delay is small are the two interrelated. Because propagation delay dominates at this scale, we can often make simplifications when diagramming. In the illustration below, A sends a data packet to B and receives a small ACK in return. In (a), we show the data packet traversing several switches; in (b) we show the data packet as if it were sent along one long unswitched link, and in (c) we introduce the idealization that bandwidth delay (and thus the width of the packet line) no longer matters. (Most later ladder diagrams in this book are of this type.)

_images/largescale.svg

7.1.2   Bandwidth × Delay¶

The bandwidth × delay product (usually involving round-trip delay, or RTT) represents how much we can send before we hear anything back, or how much is "pending" in the network at any one time if we send continuously. Note that, if we use RTT instead of one-way time, then half the "pending" packets will be returning ACKs. Here are a few approximate values, where 100 ms can be taken as a typical inter-continental-distance RTT:

RTT      bandwidth    bandwidth × delay
1 ms     10 Mbps      1.2 kB
100 ms   1.5 Mbps     20 kB
100 ms   600 Mbps     8 MB
100 ms   1.5 Gbps     20 MB

7.2   Packet Delay Variability¶

For many links, the bandwidth delay and the propagation delay are rigidly fixed quantities, the former by the bandwidth and the latter by the speed of light. This leaves queuing delay as the major source of variability.

This state of affairs lets us define RTTnoLoad to be the time it takes to transmit a packet from A to B, and receive an acknowledgment back, with no queuing delay.

While this is often a reasonable approximation, it is not necessarily true that RTTnoLoad is always a fixed quantity. There are several possible causes for RTT variability. On Ethernet and Wi-Fi networks there is an initial "contention period" before transmission actually begins. Although this delay is related to waiting for other senders, it is not exactly queuing delay, and a packet may encounter considerable delay here even if it ends up being the first to be sent. For Wi-Fi in particular, the uncertainty introduced by collisions into packet delivery times – even with no other senders competing – can complicate higher-level delay measurements.

It is also possible that different packets are routed via slightly different paths, leading to (hopefully) small variations in travel time, or are handled differently by different queues of a parallel-processing switch.

A link's bandwidth, too, can vary dynamically. Imagine, for example, a T1 link comprised of the usual 24 DS0 channels, in which all channels not currently in use by voice calls are consolidated into a single data channel. With eight callers, the data bandwidth would be cut by a third, from 24 × DS0 to 16 × DS0. Alternatively, perhaps routers are allowed to reserve a varying amount of bandwidth for high-priority traffic, depending on demand, in which case the bandwidth allocated to best-effort traffic can vary. Perceived link bandwidth can also vary over time if packets are compressed at the link layer, and some packets are able to be compressed more than others.

Finally, if mobile nodes are involved, then the distance and thus the propagation delay can change. This can be quite significant if one is communicating with a wireless device that is being taken on a cross-continental road trip.

Despite these sources of fluctuation, we will usually assume that RTTnoLoad is fixed and well-defined, particularly when we wish to focus on the queuing component of delay.

7.3   Packet Size¶

How big should packets be? Should they be large (eg 64 kB) or small (eg 48 bytes)?

The Ethernet answer to this question had to do with equitable sharing of the line: large packets would not allow other senders timely access to transmit. In any network, this issue remains a concern.

On the other hand, large packets waste a smaller percentage of bandwidth on headers. However, in most of the cases we will consider, this percentage does not exceed 10% (the VoIP/RTP example in 1.3   Packets is an exception).

It turns out that if store-and-forward switches are involved, smaller packets have much better throughput. The links on either side of the switch can be in use simultaneously, as in Example 4 of 7.1.1   Delay examples. This is a very real effect, and has put a damper on interest in support for IP "jumbograms". The ATM protocol (intended for both voice and data) pushes this to an extreme, with packets with only 48 bytes of data and 5 bytes of header.

As an example of this, consider a path from A to B with four switches and five links:

A───────R1───────R2───────R3───────R4───────B

Suppose we send either one large packet or five smaller packets. The relative times from A to B are illustrated in the following figure:

_images/fivepacket.svg

The point is that we can take advantage of parallelism: while the R4–B link above is handling packet 1, the R3–R4 link is handling packet 2 and the R2–R3 link is handling packet 3, and so on. The five smaller packets would take five times the header capacity, but as long as headers are small relative to the data, this is not a significant issue.

The sliding-windows algorithm, used by TCP, uses this idea as a continuous process: the sender sends a continual stream of packets which travel link-by-link so that, in the full-capacity case, all links may be in use at all times.

7.3.1   Error Rates and Packet Size¶

Packet size is also influenced, to a modest degree, by the transmission error rate. For relatively high error rates, it turns out to be better to send smaller packets, because when an error does occur, the entire packet containing it is lost.

For example, suppose that one bit in 10,000 is corrupted, at random, so the probability that a single bit is transmitted correctly is 0.9999 (this is a much higher error rate than is encountered on real networks). For a 1000-bit packet, the probability that every bit in the packet is transmitted correctly is (0.9999)^1000, or about 90.5%. For a 10,000-bit packet the probability is (0.9999)^10,000 ≃ 37%. For 20,000-bit packets, the success rate is below 14%.

Now suppose we have 1,000,000 bits to send, either as 1000-bit packets or as 20,000-bit packets. Nominally this would require 1,000 of the smaller packets, but because of the 90% packet-success rate we will need to retransmit 10% of these, or 100 packets. Some of the retransmissions may also be lost; the total number of packets we expect to need to send is about 1,000/90% ≃ 1,111, for a total of 1,111,000 bits sent. Next, let us try this with the 20,000-bit packets. Here the success rate is so poor that each packet needs to be sent on average seven times; lossless transmission would require 50 packets but we in fact need 7×50 = 350 packets, or 7,000,000 bits.
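The retransmission arithmetic can be sketched as below; note that the text uses round figures (90% and 7×), so the exact expected totals (about 1.1 million and 7.4 million bits) differ slightly from the estimates above:

```python
# Expected transmission cost vs packet size, for a random per-bit error
# rate of 1 in 10,000 and 1,000,000 total data bits.
p_bit_ok = 0.9999
total_bits = 1_000_000

for pkt_bits in (1000, 10_000, 20_000):
    p_pkt_ok = p_bit_ok ** pkt_bits     # chance a packet survives intact
    n_needed = total_bits / pkt_bits    # packets needed if transmission were lossless
    expected = n_needed / p_pkt_ok      # expected sends, counting retransmissions
    print(f"{pkt_bits}-bit packets: {p_pkt_ok:.1%} survive, "
          f"~{expected * pkt_bits:,.0f} bits sent in all")
```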

Moral: choose the packet size small enough that most packets do not encounter errors.

To be fair, very large packets can be sent reliably on most cabled links (eg TDM and SONET). Wireless, however, is more of a problem.

7.3.2   Packet Size and Real-Time Traffic¶

There is one other concern regarding excessive packet size. As we shall see in 25   Quality of Service, it is common to commingle bulk traffic on the same links with real-time traffic. It is straightforward to give priority to the real-time traffic in such a mix, meaning that a router does not begin forwarding a bulk-traffic packet if there are any real-time packets waiting (we do need to be sure in this case that real-time traffic will not amount to so much as to starve the bulk traffic). However, once a bulk-traffic packet has begun transmission, it is impractical to interrupt it.

Therefore, one component of any maximum-delay bound for real-time traffic is the transmission time for the largest bulk-traffic packet; we will call this the largest-packet delay. As a practical matter, most IPv4 packets are limited to the maximum Ethernet packet size of 1500 bytes, but IPv6 has an option for so-called "jumbograms" up to 2 MB in size. Transmitting one such packet on a 100 Mbps link takes about 1/6 of a second, which is likely too large for happy coexistence with real-time traffic.

7.4   Error Detection¶

The basic strategy for packet error detection is to add some extra bits – formally known as an error-detection code – that will allow the receiver to determine if the packet has been corrupted in transit. A corrupted packet will then be discarded by the receiver; higher layers do not distinguish between lost packets and those never received. While packets lost due to bit errors occur much less frequently than packets lost due to queue overflows, it is essential that data be received accurately.

Intermittent packet errors generally fall into two categories: low-frequency bit errors due to things like cosmic rays, and interference errors, typically generated by nearby electrical equipment. Errors of the latter type generally occur in bursts, with multiple bad bits per packet. Occasionally, a malfunctioning network device will introduce bursty errors as well.

The simplest error-detection mechanism is a single parity bit; this will catch all 1-bit errors. There is, however, no straightforward generalization to N bits! That is, there is no N-bit error code that catches all N-bit errors; see exercise 11.0.

The so-called Internet checksum, used by IP, TCP and UDP, is formed by taking the ones-complement sum of the 16-bit words of the message. Ones-complement is an alternative way of representing signed integers in binary; if one adds two positive integers and the sum does not overflow the hardware word size, then ones-complement and the now-universal twos-complement are identical. To form the ones-complement sum of 16-bit words A and B, first take the ordinary twos-complement sum A+B. Then, if there is an overflow bit, add it back in as a low-order bit. Thus, if the word size is 4 bits, the ones-complement sum of 0101 and 0011 is 1000 (no overflow). Now suppose we want the ones-complement sum of 0101 and 1100. First we take the "exact" sum and get 1|0001, where the leftmost 1 is an overflow bit past the 4-bit wordsize. Because of this overflow, we add this bit back in, and get 0010.

The 4-bit ones-complement numeric representation has two forms for zero: 0000 and 1111 (it is straightforward to verify that any 4-bit quantity plus 1111 yields the original quantity; in twos-complement notation 1111 represents −1, and an overflow is guaranteed, so adding back the overflow bit cancels the −1 and leaves us with the original number). It is a fact that the ones-complement sum is never 0000 unless all bits of all the summands are 0; if the summands add up to zero by coincidence, then the actual binary representation will be 1111. This means that we can use 0000 in the checksum to represent "checksum not calculated", which the UDP protocol still allows over IPv4 for efficiency reasons. Over IPv6, UDP packets must include a calculated checksum (RFC 2460, §8.1).

What is stored in the packet's checksum field is the complement of the sum above, or 0xFFFF − sum. This means that when the receiver calculates the checksum over the same range of bytes, the value obtained should be zero.

Ones-complement addition has a few properties that make numerical calculations simpler. First, when finding the ones-complement sum of a series of 16-bit values, we can defer adding in the overflow bits until the end. Specifically, we can find the ones-complement sum of the values by adding them using ordinary (twos-complement) 32-bit addition, and then forming the ones-complement sum of the upper and lower 16-bit half-words. The upper half-word here represents the accumulated overflow. See exercise 10.0.
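The deferred-carry method can be sketched in a few lines of Python (the function names are ours, not from any particular networking library):

```python
def ones_complement_sum(words):
    """16-bit ones-complement sum of a list of 16-bit values: add with
    ordinary integer arithmetic, then fold the accumulated overflow
    (the upper half-word) back into the low 16 bits until it is gone."""
    total = sum(words)                  # ordinary twos-complement sum
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return total

def internet_checksum(words):
    """The value stored in the checksum field: the complement of the sum."""
    return 0xFFFF ^ ones_complement_sum(words)

msg = [0x0102, 0xF203, 0x04F4, 0xF5F6]
cks = internet_checksum(msg)
# Receiver validation: the sum over the message plus its checksum is the
# ones-complement zero, 0xFFFF.
assert ones_complement_sum(msg + [cks]) == 0xFFFF
```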

We can also find the ones-complement sum of a series of 16-bit values by concatenating them pairwise into 32-bit values, taking the 32-bit ones-complement sum of these, and then, as in the previous paragraph, forming the ones-complement sum of the upper and lower 16-bit half-words.

Somewhat surprisingly, when computing the 16-bit ones-complement sum of a series of bytes taken two at a time, it does not matter whether we convert the pairs of consecutive bytes to integers using big-endian or little-endian byte order (16.1.5   Binary Data). The overflow from the low-order bytes is added to the high-order bytes by virtue of ordinary carries in addition, and the overflow from the high-order bytes is added to the low-order bytes by the ones-complement rule. See exercise 10.5.

Finally, there is another way to look at the (16-bit) ones-complement sum: it is in fact the remainder upon dividing the message (seen as a very long binary number) by 2^16 − 1, provided we replace a remainder of 0 with the equivalent ones-complement zero value consisting of sixteen 1-bits. This is similar to the decimal "casting out nines" rule: if we add up the digits of a base-10 number, and repeat the process until we get a single digit, then that digit is the remainder upon dividing the original number by 10−1 = 9. The analogy here is that the message is looked at as a very large number written in base 2^16, where the "digits" are the 16-bit words. The process of repeatedly adding up the "digits" until we get a single "digit" amounts to taking the ones-complement sum of the words. This remainder approach to ones-complement addition isn't very practical, but it does provide a useful way to analyze ones-complement checksums mathematically.

A weakness of any error-detecting code based on sums is that transposing words leads to the same sum, and the error is not detected. In particular, if a message is fragmented and the fragments are reassembled in the wrong order, the ones-complement sum will likely not detect it.

While some error-detecting codes are better than others at detecting certain kinds of systematic errors (for example, CRC, below, is usually better than the Internet checksum at detecting transposition errors), ultimately the effectiveness of an error-detecting code depends on its length. Suppose a packet P1 is corrupted randomly into P2, but still has its original N-bit error code EC(P1). This N-bit code will fail to detect the error that has occurred if EC(P2) is, by chance, equal to EC(P1). The probability that two random N-bit codes will match is 1/2^N (though a small random change in P1 might not lead to a uniformly distributed random change in EC(P1); see the tail end of the CRC section below).

This does not mean, however, that one packet in 2^N will be received incorrectly, as most packets are error-free. If we use a 16-bit error code, and only one packet in 100,000 is actually corrupted, then the rate at which corrupted packets will sneak by is only 1 in 100,000 × 65536, or about 1 in 6 × 10^9. If packets are 1500 bytes, you have a good chance (90+%) of accurately transferring a terabyte, and a 37% chance (1/e) at 10 terabytes.
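The arithmetic behind these last figures can be sketched as follows (a rough estimate, assuming independent corruption events and a Poisson approximation for the chance of zero undetected errors):

```python
import math

corruption_rate = 1 / 100_000           # fraction of packets actually corrupted
undetected = corruption_rate / 2**16    # ... whose 16-bit code still matches by chance
packets_per_TB = 1e12 / 1500            # 1500-byte packets in a terabyte

p_clean_1TB  = math.exp(-undetected * packets_per_TB)        # ~0.90
p_clean_10TB = math.exp(-undetected * packets_per_TB * 10)   # ~0.36, about 1/e
print(1 / undetected, p_clean_1TB, p_clean_10TB)
```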

7.4.1   Cyclical Redundancy Check: CRC¶

The CRC error code is based on long division of polynomials, where the coefficients are integers modulo 2. The use of polynomials tends to sound complicated, but in fact it eliminates the need for carries or borrowing in addition and subtraction. Together with the use of modulo-2 coefficients, this means that addition and subtraction become equivalent to XOR. We treat the message, in binary, as a giant polynomial m(X), using the bits of the message as successive coefficients (eg 10011011 = X^7 + X^4 + X^3 + X + 1). We standardize a divisor polynomial p(X) of degree N (N=32 for CRC-32 codes); the full specification of a given CRC code requires giving this polynomial. (A full specification also requires spelling out the bit order within bytes.) We append N 0-bits to m(X) (this is the polynomial X^N·m(X)), and divide the result by p(X). The "checksum" is the remainder r(X), of maximum degree N−1 (that is, N bits).

This is a reasonably secure hash against real-world network corruption, in that it is very hard for systematic errors to result in the same hash code. However, CRC is not secure against intentional corruption; given an arbitrary message msg, there are straightforward algebraic means for tweaking the final bytes of msg so that the CRC code of the result is equal to any predetermined value in the appropriate range.

As an example of CRC, suppose that the CRC divisor is 1011 (making this a CRC-3 code) and the message is 1001 1011 1100. Here is the division; we repeatedly subtract (using XOR) a copy of the divisor 1011, shifted so the leading 1 of the divisor lines up with the leading 1 of the previous difference. A 1 then goes on the quotient line, lined up with the last digit of the shifted divisor; otherwise a 0. There are several online calculators for this sort of thing. Note that an extra 000 has been appended to the dividend.

          1 0100 1101 011
     ┌───────────────────
1011 │ 1001 1011 1100 000
       1011
       ────
        010 1011 1100 000
         10 11
         ── ──
         00 0111 1100 000
             101 1
             ─── ─
             010 0100 000
              10 11
              ── ──
              00 1000 000
                 1011
                 ────
                 0011 000
                   10 11
                   ── ──
                   01 110
                    1 011
                    ─ ───
                    0 101

The remainder, at the bottom, is 101; this is the N-bit CRC code. We then append the code to the original message, that is, without the added zeroes: 1001 1011 1100 101; algebraically this is X^N·m(X) + r(X). This is what is actually transmitted; if converted to a polynomial, it yields a remainder of zero upon division by p(X). This slightly simplifies the receiver's job of validating the CRC code: it just has to check that the remainder is zero.
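The division above can be reproduced with a short bit-list sketch (a teaching illustration, not an optimized CRC implementation):

```python
def crc_remainder(msg_bits, divisor_bits):
    """Polynomial long division over GF(2): append len(divisor)-1 zero bits,
    then repeatedly XOR the divisor in under each leading 1."""
    n = len(divisor_bits) - 1
    bits = list(msg_bits) + [0] * n          # the dividend X^N * m(X)
    for i in range(len(msg_bits)):           # each position a shift can start at
        if bits[i]:
            for j, d in enumerate(divisor_bits):
                bits[i + j] ^= d
    return bits[-n:]                         # the N-bit remainder r(X)

msg     = [1,0,0,1, 1,0,1,1, 1,1,0,0]        # 1001 1011 1100
divisor = [1,0,1,1]                          # the CRC-3 divisor above
rem = crc_remainder(msg, divisor)
print(rem)                                   # [1, 0, 1], matching the division

# The transmitted message X^N*m(X) + r(X) leaves a remainder of zero:
assert crc_remainder(msg + rem, divisor) == [0, 0, 0]
```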

CRC is easily implemented in hardware, using bit-shifting registers. Fast software implementations are also possible, usually involving handling the bits one byte at a time, with a precomputed lookup table with 256 entries.

If we randomly change enough bits in packet P1 to create P2, then CRC(P1) and CRC(P2) are effectively independent random variables, with probability of a match 1 in 2^N where N is the CRC length. However, if we change just a few bits then the change is not so random. In particular, for many CRC codes (that is, for many choices of the underlying polynomial p(X)), changing up to three bits in P1 to create a new message P2 guarantees that CRC(P1) ≠ CRC(P2). For the Internet checksum, this is not guaranteed even if we know only two bits were changed.

Finally, there are also secure hashes, such as MD-5 and SHA-1 and their successors (28.6   Secure Hashes). Nobody knows (or admits to knowing) how to produce two messages with the same hash here. However, these secure-hash codes are generally not used in network error-correction as they are much slower to calculate than CRC; they are generally used only for secure authentication and other higher-level functions.

7.4.2   Error-Correcting Codes¶

If a link is noisy, we can add an error-correction code (also called forward error correction) that allows the receiver in many cases to figure out which bits are corrupted, and fix them. This has the effect of improving the bit error rate at a cost of reducing throughput. Error-correcting codes tend to involve many more bits than are needed for error detection. Typically, if a communications technology proves to have an unacceptably high bit-error rate (such as wireless), the next step is to introduce an error-correcting code to the protocol. This generally reduces the "virtual" bit-error rate (that is, the error rate as corrected) to acceptable levels.

Possibly the easiest error-correcting code to visualize is 2-D parity, for which we need O(N^1/2) additional bits. We take N×N data bits and arrange them into a square; we then compute the parity for every column, for every row, and for the entire square; this is 2N+1 extra bits. Here is a diagram with N=4, and with even parity; the column-parity bits (in blue) are in the bottom (fifth) row and the row-parity bits (also in blue) are in the rightmost (fifth) column. The parity bit for the entire 4×4 data square is the light-blue bit in the bottom right corner.

_images/2d_parity.svg

Now suppose one bit is corrupted; for simplicity, assume it is one of the data bits. Then exactly one column-parity bit will be incorrect, and exactly one row-parity bit will be incorrect. These two incorrect bits mark the column and row of the incorrect data bit, which we can then flip to the correct state.

We can make North big, only an essential requirement hither is that there be simply a single corrupted bit per square. We are thus likely either to go on Due north small, or to choose a different code entirely that allows correction of multiple $.25. Either way, the add-on of fault-correcting codes tin can easily increase the size of a packet significantly; some codes double or fifty-fifty triple the total number of bits sent.

7.4.2.1   Hamming Codes¶

The Hamming code is another popular error-correction code; it adds O(log N) additional bits, though if N is large enough for this to be a material improvement over the O(N^1/2) performance of 2-D parity then errors must be very infrequent. If we have eight data bits, let us number the bit positions 0 through 7. We then write each bit's position as a binary value between 000 and 111; we will call these the position bits of the given data bit. We now add four code bits as follows:

  1. a parity bit over all 8 data bits
  2. a parity bit over those data bits for which the first digit of the position bits is 1 (these are positions 4, 5, 6 and 7)
  3. a parity bit over those data bits for which the second digit of the position bits is 1 (these are positions 010, 011, 110 and 111, or 2, 3, 6 and 7)
  4. a parity bit over those data bits for which the third digit of the position bits is 1 (these are positions 001, 011, 101, 111, or 1, 3, 5 and 7)

We can tell whether or not an error has occurred by the first code bit; the remaining three code bits then tell us the respective three position bits of the incorrect bit. For example, if the #2 code bit above is correct, then the first digit of the position bits is 0; otherwise it is 1. With all three position bits, we have identified the incorrect data bit.

As a concrete example, suppose the data word is 10110010. The four code bits are thus

  1. 0, the (even) parity bit over all eight bits
  2. 1, the parity bit over the second half, positions 4–7: 0010
  3. 1, the parity bit over positions 2, 3, 6 and 7: 1110
  4. 1, the parity bit over positions 1, 3, 5 and 7: 0100

If the received data+code is now 10111010 0111, with the bit at position 4 flipped, then the fact that the first code bit is incorrect tells the receiver there was an error. The second code bit is also incorrect, so the first bit of the position bits must be 1. The third code bit is correct, so the second bit of the position bits must be 0. The fourth code bit is also correct, so the third bit of the position bits is 0. The position bits are thus binary 100, or 4, so the receiver knows that the incorrect bit is in position 4 (counting from 0) and can be flipped to the correct state.
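This decoding procedure can be sketched as follows (following the chapter's numbering of code bits, not the classic interleaved Hamming layout; function names are ours):

```python
def hamming_code_bits(data):
    """The four code bits described above, for an 8-bit data word
    (data[i] is the bit at position i)."""
    overall = sum(data) % 2
    groups = [[4,5,6,7], [2,3,6,7], [1,3,5,7]]   # positions whose digit is 1
    return [overall] + [sum(data[i] for i in g) % 2 for g in groups]

def locate_error(received, code):
    """Return the position of a single flipped data bit, or None."""
    fresh = hamming_code_bits(received)
    if fresh[0] == code[0]:
        return None                  # overall parity checks: assume no error
    # each mismatched group parity contributes a 1 to that position digit
    digits = [int(fresh[k] != code[k]) for k in (1, 2, 3)]
    return digits[0] * 4 + digits[1] * 2 + digits[2]

data = [1,0,1,1,0,0,1,0]             # 10110010, as in the example
code = hamming_code_bits(data)       # [0, 1, 1, 1]
received = data.copy()
received[4] ^= 1                     # flip the bit at position 4
assert locate_error(received, code) == 4
```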

7.5   Epilog¶

The issues presented here are perhaps not very glamorous, and often play a supporting, behind-the-scenes role in protocol design. Nonetheless, their influence is pervasive; we may even think of them as part of the underlying "physics" of the Internet.

As the early Internet became faster, for instance, and propagation delay became the dominant limiting factor, protocols were often revised to limit the number of back-and-forth exchanges. A classic example is the Simple Mail Transport Protocol (SMTP), amended by RFC 1854 to allow multiple commands to be sent together – called pipelining – instead of individually.

While there have been periodic calls for large-packet support in IPv4, and IPv6 protocols exist for "jumbograms" in excess of a megabyte, these are very seldom used, due to the store-and-forward costs of large packets as described in 7.3   Packet Size.

Almost every LAN-level protocol, from Ethernet to Wi-Fi to point-to-point links, incorporates an error-detecting code chosen to reflect the underlying transportation reliability. Ethernet includes a 32-bit CRC code, for example, while Wi-Fi includes extensive error-correcting codes due to the noisier wireless environment. The Wi-Fi fragmentation option (4.2.1.5   Wi-Fi Fragmentation) is directly tied to 7.3.1   Error Rates and Packet Size.

7.6   Exercises¶

Exercises are given fractional (floating point) numbers, to allow for interpolation of new exercises. Exercises marked with a ♢ have solutions or hints at 34.7   Solutions for Packets.

1.0. Suppose a link has a propagation delay of 20 µsec and a bandwidth of 2 bytes/µsec.

(a). How long would it take to transmit a 600-byte packet over such a link?

(b). How long would it take to transmit the 600-byte packet over two such links, with a store-and-forward switch in between?

2.0. Suppose the path from A to B has a single switch S in between: A───S───B. Each link has a propagation delay of 60 µsec and a bandwidth of 2 bytes/µsec.

(a). How long would it take to send a single 600-byte packet from A to B?

(b). How long would it take to send two back-to-back 300-byte packets from A to B?

(c). How long would it take to send three back-to-back 200-byte packets from A to B?

3.0.♢ Repeat parts (a) and (b) of the previous exercise, except change the per-link propagation delay from 60 µsec to 600 µsec.

4.0. Suppose the path from A to B has a single switch S in between: A───S───B. The propagation delays on the A–S and S–B links are 24 µsec and 35 µsec respectively. The per-packet bandwidth delays on the A–S and S–B links are 103 µsec and 157 µsec respectively. The ladder diagram below describes the sending of two consecutive packets from A to B. Label the time intervals (a) through (e) at the right edge, and give the total time for the packets to be sent.

_images/ladder_exercise3.5.svg

5.0. Now assume that the delays are the same as in exercise 4.0, except that the bandwidth delays are reversed: the A–S bandwidth delay is 157 µsec and the S–B bandwidth delay is 103 µsec. The propagation delays are unchanged. Draw the corresponding ladder diagram, and also give the total time for two consecutive packets to be sent. (Hint: the total time should not change.)

6.0. Once again suppose the path from A to B has a single switch S in between: A───S───B. The per-link bandwidth and propagation delays are as follows:

link    bandwidth       propagation delay
A──S    5 bytes/µsec    24 µsec
S──B    3 bytes/µsec    13 µsec

(a). How long would it take to send a single 600-byte packet from A to B?

(b). How long would it take to send two back-to-back 300-byte packets from A to B? Note that, because the S──B link is slower, packet 2 arrives at S from A before S has finished transmitting packet 1 to B.

7.0. Suppose in the previous exercise, the A–S link has the smaller bandwidth of 3 bytes/µsec and the S–B link has the larger bandwidth of 5 bytes/µsec. The propagation delays are unchanged. Now how long does it take to send two back-to-back 300-byte packets from A to B?

8.0. Suppose we have five links, A───R1───R2───R3───R4───B. Each link has a bandwidth of 100 bytes/ms. Assume we model the per-link propagation delay as 0.

(a). How long would it take a single 1500-byte packet to go from A to B?

(b). How long would it take five consecutive 300-byte packets to go from A to B?

The diagram in 7.3   Packet Size may help.

9.0. Suppose there are N equal-bandwidth links on the path between A and B, as in the diagram below, and we wish to send M consecutive packets.

A ─── S1 ─── … ─── SN-1 ─── B

Let BD be the bandwidth delay of a single packet on a single link, and assume the propagation delay on each link is zero. Show that the total (bandwidth) delay is (M+N-1)×BD. Hint: the total time is the sum of the time A takes to begin transmitting the last packet, and the time that last packet (or any other packet) takes to travel from A to B. Show that the former is (M-1)×BD and the latter is N×BD. Note that no packets ever have to wait at any Si because the ith packet takes exactly as long to arrive as the (i-1)th packet takes to depart.
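The (M+N-1)×BD result can also be checked with a short simulation. This is an illustrative sketch (the function and variable names are ours, not the book's): it records when each link finishes transmitting, enforces store-and-forward at each switch, and lets a packet start on a link only once the link is free.

```python
def total_delay(M, N, BD):
    # M packets across N store-and-forward links, zero propagation delay
    depart = [0] * N      # depart[i]: time the previous packet finished crossing link i
    finish = 0
    for _ in range(M):
        t = 0             # this packet is ready at A at time 0 or later
        for i in range(N):
            # wait until link i is free, then spend BD transmitting
            t = max(t, depart[i]) + BD
            depart[i] = t
        finish = t
    return finish

print(total_delay(M=5, N=3, BD=10))   # → 70, i.e. (5+3-1)×10
```

The simulated total matches (M+N-1)×BD, consistent with the observation that no packet ever waits at an intermediate switch.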

10.0. Repeat the analysis in 7.3.1   Error Rates and Packet Size to compare the likely total number of bytes that need to be sent to transmit 10⁷ bytes using

(a). 1,000-byte packets

(b). 10,000-byte packets

Assume the bit error rate is 1 in 16 × 10⁵, making the error rate per byte about 1 in 2 × 10⁵.

11.0. In the text it is claimed "there is no N-bit error code that catches all N-bit errors" for N≥2 (for N=1, a parity bit works). Prove this claim for N=2. Hint: pick a length M, and consider all M-bit messages with a single 1-bit. Any such message can be converted to any other with a 2-bit error. Show, using the Pigeonhole Principle, that for large enough M two messages m1 and m2 must have the same error code, that is, e(m1) = e(m2). If this occurs, then the error code fails to detect the error that converted m1 into m2.

12.0. Consider the following 4-bit numbers, with decimal values in parentheses:

1000 (8)

1011 (11)

1101 (13)

1110 (14)

The ones-complement sum of these can be found using the division method by treating these as a four-digit hex number 0x8bde and taking the remainder mod 15; the result is 1.

(a). Find this ones-complement sum via three 4-bit ones-complement additions. To get started, note that the (exact) sum of 1000 and 1011 is 1|0011, and adding the carry bit to the low-order 4 bits gives a ones-complement sum of the first pair of 0100.

(b). The exact (and 8-bit twos-complement) sum of the values above is 46, or 10|1110 in binary. Find the ones-complement sum of the values by taking this exact sum and then forming the ones-complement sum of the 4-bit high and low halves. Note that this is not the same as the twos-complement sum of the halves.
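A short sketch of 4-bit ones-complement addition may help here (the function name is ours). Folding any carry back into the low bits after each addition is exactly the step shown in part (a), and the running total ends up agreeing with the mod-15 division method above.

```python
def ones_complement_add(x, y, bits=4):
    mask = (1 << bits) - 1
    s = x + y
    while s > mask:                       # fold the carry back into the low bits
        s = (s & mask) + (s >> bits)
    return s

vals = [0b1000, 0b1011, 0b1101, 0b1110]   # 8, 11, 13, 14
total = vals[0]
for v in vals[1:]:                        # three 4-bit ones-complement additions
    total = ones_complement_add(total, v)

print(total)                              # → 1
assert total == 0x8bde % 15               # agrees with the division method
```

The first addition, 8 + 11, yields 1|0011 and folds to 0100, matching the worked step in part (a).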

13.0. Let [a,b] denote a pair of bytes a and b. The 16-bit integer corresponding to [a,b] using big-endian conversion is a×256 + b; using little-endian conversion it is a + 256×b.

(a). Find the ones-complement sum of [200,150] and [90,230] by using big-endian conversion to the respective 16-bit integers 51,350 and 23,270. Convert back to two bytes, again using big-endian conversion, at the end.

(b). Do the same using little-endian conversion, in which case the 16-bit integers are 38,600 and 58,970.
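The two conversions are one-liners; as a quick sketch (helper names ours), they reproduce the integer values quoted in this exercise:

```python
def big_endian(a, b):
    return a * 256 + b        # [a,b] read as a big-endian 16-bit integer

def little_endian(a, b):
    return a + 256 * b        # [a,b] read as a little-endian 16-bit integer

print(big_endian(200, 150), big_endian(90, 230))       # → 51350 23270
print(little_endian(200, 150), little_endian(90, 230)) # → 38600 58970
```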

13.5. In the description in the text of the Internet checksum, the overflow bit was added back in after each ones-complement addition. Show that the same final result will be obtained if we add up the 16-bit words using 32-bit twos-complement arithmetic (twos-complement is the normal arithmetic on all modern hardware), and then add the upper 16 bits of the sum to the lower 16 bits. (If there is an overflow at this last step, we have to add that back in as well.)

14.0. Suppose a message is 110010101. Calculate the CRC-3 checksum using the polynomial X³ + 1, that is, find the 3-bit remainder using divisor 1001.

15.0. The CRC algorithm presented above requires that we process one bit at a time. It is possible to do the algorithm N bits at a time (e.g. N=8), with a precomputed lookup table of size 2ᴺ. Complete the steps in the following description of this strategy for N=3 and polynomial X³ + X + 1, or 1011.

16.0. Consider the following set of bits sent with 2-D even parity; the data bits are in the 4×4 upper-left block and the parity bits are in the rightmost column and bottom row. Which bit is corrupted?

    ┌───┬───┬───┬───┬───┐
    │ 1 │ 1 │ 0 │ 1 │ 1 │
    ├───┼───┼───┼───┼───┤
    │ 0 │ 1 │ 0 │ 0 │ 1 │
    ├───┼───┼───┼───┼───┤
    │ 1 │ 1 │ 1 │ 1 │ 1 │
    ├───┼───┼───┼───┼───┤
    │ 1 │ 0 │ 0 │ 1 │ 0 │
    ├───┼───┼───┼───┼───┤
    │ 1 │ 1 │ 1 │ 0 │ 1 │
    └───┴───┴───┴───┴───┘
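A quick numerical check of this claim (an illustrative sketch; function names ours, with Python's unbounded integers standing in for 32-bit registers):

```python
def csum_fold_each(words):
    # ones-complement sum, folding the overflow bit back after each addition
    s = 0
    for w in words:
        s += w
        if s > 0xFFFF:
            s = (s & 0xFFFF) + 1
    return s

def csum_fold_once(words):
    # add everything in wider arithmetic, then fold the upper 16 bits into
    # the lower 16 bits (repeating if that step itself overflows)
    s = sum(words)
    while s > 0xFFFF:
        s = (s & 0xFFFF) + (s >> 16)
    return s

words = [0xF0F0, 0x0F0F, 0xAAAA, 0x5555, 0x1234]
print(hex(csum_fold_each(words)), hex(csum_fold_once(words)))  # → 0x1234 0x1234
```

Both methods give the same result, as the exercise asks you to prove in general.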
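The one-bit-at-a-time algorithm that this exercise speeds up can be sketched as follows (an illustrative sketch with our own naming; the example below uses the divisor 1011 from this exercise, on a different message than exercise 14.0's):

```python
def crc3(bits, poly=0b1011):
    # long division mod 2: shift in each message bit (plus 3 appended zero
    # bits), XOR-subtracting the divisor whenever the X^3 bit becomes set
    reg = 0
    for b in bits + [0, 0, 0]:
        reg = (reg << 1) | b
        if reg & 0b1000:
            reg ^= poly
    return reg                 # the 3-bit remainder

print(bin(crc3([1, 0, 1])))    # remainder of X⁵+X³ ÷ X³+X+1 → 0b100
```

The table-driven version the exercise describes precomputes, for each of the 2ᴺ possible N-bit inputs, the net effect of N iterations of this loop.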
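In general, a receiver locates a single flipped bit as the intersection of the one odd-parity row and the one odd-parity column. A sketch of this check, with its own small example so as not to give away the answer above (function name and data ours):

```python
def find_corrupt_bit(grid):
    # grid: square 0/1 matrix whose last row and column are even-parity bits
    bad_rows = [r for r, row in enumerate(grid) if sum(row) % 2 == 1]
    bad_cols = [c for c in range(len(grid[0]))
                if sum(row[c] for row in grid) % 2 == 1]
    if bad_rows and bad_cols:
        return bad_rows[0], bad_cols[0]   # single flipped bit is here
    return None                            # no single-bit error found

# 2×2 data block with its parity row and column, then one bit flipped:
grid = [[1, 0, 1],
        [0, 1, 1],
        [1, 1, 0]]
grid[0][1] ^= 1
print(find_corrupt_bit(grid))   # → (0, 1)
```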

17.0. (a). Show that 2-D parity can detect any three errors.

(b). Find four errors that cannot be detected by 2-D parity.

(c). Show that 2-D parity cannot correct all two-bit errors. Hint: put both bits in the same row or column.

18.0. Each of the following 8-bit messages with 4-bit Hamming code contains a single error. Correct the message.

(a)♢. 10100010 0111

(b). 10111110 1011

19.0. (a). What happens in 2-D parity if the corrupted bit is in the parity column or parity row?

(b). In the following 8-bit message with 4-bit Hamming code, there is an error in the code portion. How can this be determined?