Chapter 3

 

1. The message is split into 10 frames, each with an 80% chance of arriving undamaged. What is the expected number of times the message must be sent to get the entire message through?

Answer:

Let the probability of a frame arriving undamaged be 0.8. The entire message gets through only if all 10 frames arrive undamaged, so the probability of success on any one attempt is

P[\text{message OK}] = 0.8^{10} \approx 0.107

The number of attempts is geometrically distributed, so the expected number of times the message must be sent is the reciprocal of this probability:

E[\text{message transmissions}] = \frac{1}{0.8^{10}} \approx 9.3

Thus, on average, the message must be sent about 9.3 times to get the entire message through. (The value 10 × 1/0.8 = 12.5 would instead be the expected total number of frame transmissions if individual frames could be retransmitted on their own, which is not what is asked.)
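As a quick sanity check, here is a small Monte Carlo sketch in plain Python that resends the whole 10-frame message until one attempt delivers every frame undamaged; the average number of sends comes out close to 9.3.

import random

def sends_needed(frames=10, p_ok=0.8):
    # Count how many times the whole message must be sent
    # until one attempt delivers every frame undamaged.
    sends = 0
    while True:
        sends += 1
        if all(random.random() < p_ok for _ in range(frames)):
            return sends

trials = 100_000
avg = sum(sends_needed() for _ in range(trials)) / trials
print(avg)  # typically close to 1 / 0.8**10, i.e. about 9.31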


2. Character Encoding:

Given the character encoding:

  • A: 01000111

  • B: 11100011

  • ESC: 11100000

  • FLAG: 01111110

The bit sequence transmitted for the four-character frame "A B ESC FLAG" can be constructed by concatenating the binary representations of these characters:

A B ESC FLAG = 01000111 11100011 11100000 01111110

Thus, the bit sequence transmitted is:

01000111111000111110000001111110


3. Output after byte stuffing:

Given the data fragment "A B ESC C ESC FLAG FLAG D" and the byte-stuffing algorithm described in the text, the output after stuffing is obtained by inserting an ESC byte before every occurrence of a FLAG or ESC byte in the data.

  • A → No need for stuffing.

  • B → No need for stuffing.

  • ESC → ESC is byte-stuffed to ESC ESC.

  • C → No need for stuffing.

  • ESC → ESC is byte-stuffed to ESC ESC.

  • FLAG → FLAG is byte-stuffed to ESC FLAG.

  • FLAG → FLAG is byte-stuffed to ESC FLAG.

  • D → No need for stuffing.

So, the output after byte-stuffing is:

A B ESC ESC C ESC ESC ESC FLAG ESC FLAG D
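For illustration, here is a minimal Python sketch of this stuffing rule, operating on symbolic byte names so the output can be compared directly with the answer above:

FLAG = "FLAG"
ESC = "ESC"

def byte_stuff(data):
    # Insert an ESC before every FLAG or ESC byte in the payload.
    out = []
    for b in data:
        if b in (FLAG, ESC):
            out.append(ESC)
        out.append(b)
    return out

frame = ["A", "B", ESC, "C", ESC, FLAG, FLAG, "D"]
print(" ".join(byte_stuff(frame)))
# A B ESC ESC C ESC ESC ESC FLAG ESC FLAG D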


4. Maximum overhead in byte-stuffing algorithm:

The maximum overhead in a byte-stuffing algorithm occurs when every byte in the original data stream is either a FLAG or an ESC byte, since an extra ESC byte must then be inserted before every single byte.

  • For every FLAG or ESC byte, an additional ESC byte is inserted.

Thus, the maximum overhead is 100% (i.e., the size of the stuffed data stream is twice the size of the original data stream).


5. Is it wasteful to end each frame with a FLAG byte and begin the next one with a FLAG byte?

Answer: No, it is not wasteful. A FLAG byte at both the start and the end of every frame lets the receiver resynchronize reliably: if a frame is damaged or the receiver loses track of frame boundaries, it can simply scan for the next FLAG to find the start of the following frame, and there is no ambiguity when the channel is idle between frames. A single FLAG could in principle mark both the end of one frame and the start of the next when frames are sent back to back, but the explicit start and end flags make framing synchronization far more robust, which is why the small extra overhead is accepted.


6. Bit string: 0111101111101111110, after bit stuffing:

In bit stuffing, a 0 is inserted after every run of five consecutive 1s in the data, so that the flag pattern 01111110 (six consecutive 1s bounded by 0s) can never appear accidentally inside the frame.

Given the string:

0111101111101111110

We scan the string and insert a 0 immediately after each run of five consecutive 1s (the stuffed bits are shown in parentheses):

01111 0 11111 (0) 0 11111 (0) 1 0

Thus, the bit string after bit stuffing is:

011110111110011111010
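A short Python sketch of the same rule, which reproduces the stuffed string above:

def bit_stuff(bits):
    # Insert a '0' after every run of five consecutive '1's.
    out = []
    ones = 0
    for b in bits:
        out.append(b)
        if b == "1":
            ones += 1
            if ones == 5:
                out.append("0")  # stuffed bit
                ones = 0
        else:
            ones = 0
    return "".join(out)

print(bit_stuff("0111101111101111110"))  # 011110111110011111010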


7. When might an open-loop protocol (e.g., Hamming code) be preferable to feedback-based protocols?

An open-loop (forward error correction) protocol such as a Hamming code is preferable when retransmission is impractical: when the round-trip delay is very long (for example, deep-space or satellite links), when there is no reverse channel over which feedback could be sent, or in real-time applications where waiting for a retransmission would be too slow. In such cases the receiver must be able to correct errors on its own, so the extra redundancy of forward error correction is worth its cost.


8. Hamming distance of the error-detecting scheme with two parity bits:

The scheme uses one parity bit for checking the odd-numbered bits and another parity bit for checking the even-numbered bits. The Hamming distance of a code is the minimum number of bit changes needed to convert one valid codeword into another valid codeword.

Flipping a single bit always violates one of the two parity checks, so no single-bit change can turn one valid codeword into another. However, flipping two bits that are covered by the same parity bit (for example, two odd-numbered bits, or one odd-numbered bit together with the odd parity bit) leaves both checks satisfied and produces another valid codeword. The Hamming distance of this code is therefore 2: it can detect all single-bit errors, but it cannot correct any errors.


9. Sixteen-bit message using Hamming code:

To find the number of check bits needed for a 16-bit message using a Hamming code, we use the condition

2^r \geq m + r + 1

where m is the number of data bits (16 in this case) and r is the number of check bits. Substituting m = 16:

2^r \geq 16 + r + 1 = 17 + r

Trying increasing values of r:

  • For r = 4: 2^4 = 16, but 17 + 4 = 21, so the condition fails.

  • For r = 5: 2^5 = 32 and 17 + 5 = 22, so the condition is satisfied.

  • Therefore, 5 check bits are required.

To obtain the bit pattern transmitted for the message 1101001100110101 with even parity, the check bits are placed at positions 1, 2, 4, 8, and 16 of the 21-bit codeword, the 16 data bits fill the remaining positions in order, and each check bit is set so that the group of positions it covers has even parity. Carrying out this procedure gives the transmitted pattern 011110110011001110101.
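The following Python sketch carries out exactly this procedure (check bits at the power-of-two positions, even parity) so the pattern above can be verified mechanically:

def hamming_encode(data_bits):
    # Place data bits at non-power-of-two positions (1-indexed),
    # then set each check bit at position 2^i to give even parity
    # over all positions whose index has bit i set.
    m = len(data_bits)
    r = 0
    while 2**r < m + r + 1:
        r += 1
    n = m + r
    code = [0] * (n + 1)          # index 0 unused
    it = iter(data_bits)
    for pos in range(1, n + 1):
        if pos & (pos - 1) != 0:  # not a power of two -> data position
            code[pos] = next(it)
    for i in range(r):
        p = 2**i
        parity = sum(code[pos] for pos in range(1, n + 1) if pos & p) % 2
        code[p] = parity
    return code[1:]

msg = [int(b) for b in "1101001100110101"]
print("".join(map(str, hamming_encode(msg))))  # should print 011110110011001110101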


10. 12-bit Hamming code with value 0xE4F:

The 12-bit codeword 0xE4F is 1110 0100 1111. Numbering the bits 1 to 12 from the left, the check bits are at positions 1, 2, 4, and 8. Recomputing the even-parity checks on the received word shows that the checks for positions 1, 4, and 8 pass while the check for position 2 fails, so the error syndrome is 2 and bit 2 (a check bit) is in error.

Flipping bit 2 gives the corrected codeword 1010 0100 1111 = 0xA4F. The data bits, taken from positions 3, 5, 6, 7, 9, 10, 11, and 12, are 10101111, so the original data value is 0xAF.
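A companion Python sketch for the receiving side: it recomputes the parity checks, uses the syndrome to locate and flip a single erroneous bit, and extracts the data bits. Run on 0xE4F it reports an error at position 2 and recovers 0xAF.

def hamming_correct(code):
    # code: list of bits, bit 1 = leftmost.
    n = len(code)
    code = [0] + list(code)                 # pad so code[1] is bit 1
    syndrome = 0
    p = 1
    while p <= n:
        parity = sum(code[pos] for pos in range(1, n + 1) if pos & p) % 2
        if parity:                          # even-parity check failed
            syndrome += p
        p *= 2
    if syndrome:
        code[syndrome] ^= 1                 # flip the erroneous bit
    data = [code[pos] for pos in range(1, n + 1) if pos & (pos - 1)]
    return syndrome, data

received = [int(b) for b in format(0xE4F, "012b")]
syndrome, data = hamming_correct(received)
print(syndrome, hex(int("".join(map(str, data)), 2)))  # 2 0xaf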



11. Parity bit block error detection:

Will a block with n rows and k columns of bits, using horizontal and vertical parity bits, detect all single errors, double errors, and triple errors? Will it detect some four-bit errors?

Answer: This error detection scheme adds a parity bit to each row and each column of a block of data. The parity bit in the lower-right corner checks both the row and column parity. This setup will detect:

  • Single-bit errors: Yes. A single flipped bit makes both its row parity and its column parity fail.

  • Double-bit errors: Yes, all of them. If the two flipped bits are in the same row, the row check still passes but both affected column checks fail; if they are in the same column, both row checks fail; if they are in different rows and columns, all four affected checks fail. In every case at least one check fails.

  • Triple-bit errors: Yes, all of them. For an error to go undetected, every row and every column would have to contain an even number of flipped bits, but three flips cannot be split into rows (or columns) that each hold an even number, so at least one check must fail.

  • Four-bit errors: Not all of them. If the four flipped bits form the corners of a rectangle (two bits in each of two rows, which are also two bits in each of two columns), every affected row and column has exactly two flips, all parity checks still pass, and the error goes undetected.


12. Maximum error rate for parity block better than Hamming code:

If data is transmitted in blocks of 1000 bits, what is the maximum error rate under which an error detection and retransmission mechanism with one parity bit per block is better than using a Hamming code?

Answer: A Hamming code that can correct single-bit errors in a 1000-bit block needs 10 check bits, since 2^{10} = 1024 \geq 1000 + 10 + 1, so its fixed overhead is 10 bits per block. The single-parity scheme adds only 1 bit per block, but whenever the check fails the whole 1001-bit block must be retransmitted.

Let P be the probability that a block is received in error (for a small, independent bit-error rate p, P ≈ 1000p). The expected overhead per block is then approximately:

  • Parity plus retransmission: 1 + 1001P bits

  • Hamming code: 10 bits

The parity-and-retransmission scheme is better as long as 1 + 1001P < 10, i.e. P < 9/1001 ≈ 0.9%, which for independent bit errors corresponds to a bit error rate of roughly p < 9 × 10^{-6} (on the order of one bit error per hundred thousand bits). At higher error rates, the cost of retransmitting whole blocks outweighs the fixed cost of the Hamming check bits.


13. Probability of undetected errors in a parity-check scheme:

In a block of bits with n rows and k columns using horizontal and vertical parity bits for error detection, what is the probability that exactly 4 bits are inverted due to transmission errors, and the error will remain undetected?

Answer: The error goes undetected only if every row parity check and every column parity check still passes, which requires every row and every column to contain an even number of flipped bits. With exactly 4 flipped bits, the only way to achieve this is for the flips to sit at the four corners of a rectangle: two bits in each of two rows, which are simultaneously two bits in each of two columns.

Counting only the n × k data positions, the number of such undetected patterns is the number of ways to pick the two rows and the two columns, while the total number of 4-bit error patterns is \binom{nk}{4}. Assuming all 4-bit patterns are equally likely, the probability that the error remains undetected is

P[\text{undetected}] = \frac{\binom{n}{2}\binom{k}{2}}{\binom{nk}{4}} = \frac{6(n-1)(k-1)}{(nk-1)(nk-2)(nk-3)}

(If the parity bits themselves may also be among the flipped bits, the same argument applies with n + 1 rows and k + 1 columns.) This probability is small for large blocks, but it is not zero; a brute-force check for a small block follows below.
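A small Python sketch that brute-forces every 4-bit error pattern over a 3 × 4 data block and counts those leaving every row and every column with an even number of flips (the condition under which all parity checks still pass); the count matches C(3,2) · C(4,2) = 18:

from itertools import combinations
from math import comb

n, k = 3, 4                     # data rows and columns
positions = [(r, c) for r in range(n) for c in range(k)]

undetected = 0
for flips in combinations(positions, 4):
    # Parity checks are linear, so an error pattern is undetected exactly
    # when every row and every column contains an even number of flips.
    row_ok = all(sum(1 for r, c in flips if r == i) % 2 == 0 for i in range(n))
    col_ok = all(sum(1 for r, c in flips if c == j) % 2 == 0 for j in range(k))
    if row_ok and col_ok:
        undetected += 1

print(undetected, comb(n, 2) * comb(k, 2))   # both print 18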


14. Output sequence of convolutional coder:

Given the convolutional coder of Fig. 3-7 (not provided here), what is the output sequence when the input sequence is 10101010 (left to right) and the internal state is initially all zero?

Answer: The exact output depends on the connections shown in Fig. 3-7 (which is not reproduced here), so the answer is obtained by simulating that coder. A convolutional coder shifts each input bit into a register and, for every input bit, emits output bits that are modulo-2 sums (XORs) of the current input and selected earlier inputs, as specified by the coder's generator (connection) polynomials.

Starting from the all-zero state, one feeds in 10101010 one bit at a time and records the output bits produced at each step according to the connections in the figure.
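As a sketch of how such a simulation looks, here is a generic rate-1/2 convolutional encoder in Python. The generator masks 0o171 and 0o133 with constraint length 7 (the NASA coder used in 802.11) are only an assumption about what Fig. 3-7 shows, and the tap-ordering convention may differ from the figure, so the connections should be replaced with the actual ones before trusting the printed output.

def conv_encode(bits, g1=0o171, g2=0o133, k=7):
    # Rate-1/2 convolutional encoder: for each input bit, emit two output
    # bits, each the XOR of the register taps selected by one generator.
    state = 0                      # k-bit shift register, initially all zero
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)
        out.append(bin(state & g1).count("1") % 2)
        out.append(bin(state & g2).count("1") % 2)
    return out

bits = [1, 0, 1, 0, 1, 0, 1, 0]    # input sequence from the question
print("".join(map(str, conv_encode(bits))))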


15. Internet checksum:

A message of "1001 1100 1010 0011" is transmitted using a 4-bit Internet checksum. What is the value of the checksum?

Answer: The Internet checksum is the one's complement of the one's-complement sum of the words, so any carry out of the 4-bit addition must be wrapped around and added back in:

  1. Break the message into 4-bit words: 1001, 1100, 1010, 0011.

  2. Add the words using one's-complement (end-around carry) arithmetic:

    • 1001 + 1100 = 1 0101 → 0101 + 1 = 0110

    • 0110 + 1010 = 1 0000 → 0000 + 1 = 0001

    • 0001 + 0011 = 0100

  3. Take the one's complement of the result: 0100 becomes 1011.

So, the checksum value is 1011.
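A small Python sketch of the same 4-bit one's-complement checksum, which reproduces the value 1011:

def internet_checksum(words, width=4):
    # One's-complement sum with end-around carry, then complement.
    mask = (1 << width) - 1
    total = 0
    for w in words:
        total += w
        total = (total & mask) + (total >> width)   # fold the carry back in
    return (~total) & mask

words = [0b1001, 0b1100, 0b1010, 0b0011]
print(format(internet_checksum(words), "04b"))      # 1011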


16. Remainder when dividing x^7 + x^5 + 1 by the generator polynomial x^3 + 1:

Answer: To find the remainder, divide x^7 + x^5 + 1 by x^3 + 1 using modulo-2 polynomial long division (subtraction is XOR):

  1. x^7 / x^3 = x^4; subtracting x^4 (x^3 + 1) leaves x^5 + x^4 + 1.

  2. x^5 / x^3 = x^2; subtracting x^2 (x^3 + 1) leaves x^4 + x^2 + 1.

  3. x^4 / x^3 = x; subtracting x (x^3 + 1) leaves x^2 + x + 1, whose degree is less than 3, so the division stops.

The remainder is x^2 + x + 1.


17. CRC transmission with bit stream 10011101 and generator polynomial x^3 + 1:

Answer: The CRC procedure involves:

  1. Appending 3 zeros to the message (since the generator polynomial x^3 + 1 has degree 3), giving 10011101000.

  2. Dividing the augmented bit stream 10011101000 by the generator 1001 using modulo-2 division.

  3. Taking the remainder of this division, here 100, as the CRC.

The transmitted bit string is therefore the original bit stream followed by the CRC: 10011101100. The receiver divides the received bit string by the same generator polynomial and declares an error whenever the remainder is nonzero.
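A Python sketch of modulo-2 division that reproduces both the remainder in question 16 (x^2 + x + 1, i.e. 111) and the CRC here (100):

def mod2_div(dividend, divisor):
    # Bit strings in, remainder bit string out (CRC-style XOR division).
    bits = list(dividend)
    d = len(divisor)
    for i in range(len(bits) - d + 1):
        if bits[i] == "1":
            for j in range(d):
                bits[i + j] = str(int(bits[i + j]) ^ int(divisor[j]))
    return "".join(bits[-(d - 1):])

print(mod2_div("10100001", "1001"))          # x^7 + x^5 + 1 mod x^3 + 1 -> 111
print(mod2_div("10011101" + "000", "1001"))  # CRC for question 17 -> 100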


18. CRC with IEEE 802 standard:

Given a 1024-bit message with 992 data bits and 32 CRC bits, computed using the IEEE 802 standardized CRC polynomial, will errors during transmission be detected?

Answer: The IEEE 802 standardized CRC polynomial can detect various errors:

  • Single-bit errors: Yes, it can detect single-bit errors.

  • Two isolated bit errors: Yes, it can detect two isolated bit errors.

  • 18 isolated bit errors: Not guaranteed. With this many scattered errors, detection is only probabilistic: the error pattern escapes detection only if it happens to be divisible by the generator polynomial, which for an effectively random pattern occurs with probability about 2^{-32}.

  • 47 isolated bit errors: Not guaranteed either, for the same reason; such a pattern is missed with probability about 2^{-32}.

  • Long burst errors: A 24-bit burst is always detected, because a 32-bit CRC detects every burst of length at most 32 bits. A 35-bit burst is longer than the CRC, so its detection is not guaranteed; it slips through with probability roughly 2^{-32}.


19. Acceptance of multiple copies of the same frame:

Is it possible for a receiver to accept multiple copies of the same frame when no frames (message or acknowledgment) are lost?

Answer: Yes. Even when no frame or acknowledgment is lost, an acknowledgment can be delayed beyond the sender's timeout (for example, by a slow or congested return path). The sender then times out and retransmits the frame, and the receiver gets a second copy even though nothing was lost. If the protocol carries no sequence numbers, the receiver cannot tell this copy from a new frame and will accept it again; ARQ (Automatic Repeat reQuest) protocols therefore use sequence numbers precisely so that such duplicates can be recognized and discarded.


20. Channel efficiency with stop-and-wait:

A channel has a bit rate of 4 kbps and a propagation delay of 20 ms. For what range of frame sizes does the stop-and-wait protocol give an efficiency of at least 50%?

Answer: The efficiency of stop-and-wait is given by the formula:

\text{Efficiency} = \frac{\text{Transmission time}}{\text{Transmission time} + \text{Round-trip delay}}

Where:

  • Transmission time = \frac{\text{Frame size}}{\text{Bit rate}}

  • Round-trip delay = 2 × propagation delay

To achieve an efficiency of at least 50%, the transmission time must be at least as large as the round-trip delay:

\frac{F / \text{Bit rate}}{F / \text{Bit rate} + 2 \times \text{Propagation delay}} \geq 0.5 \quad\Longleftrightarrow\quad \frac{F}{\text{Bit rate}} \geq 2 \times \text{Propagation delay}

With a bit rate of 4 kbps and a one-way propagation delay of 20 ms, the round-trip delay is 40 ms, so we need F / 4000 ≥ 0.040 s, i.e. F ≥ 160 bits. Stop-and-wait therefore gives at least 50% efficiency for frame sizes of 160 bits or more.
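A two-line Python check of this threshold, under the same simplifying assumptions (acknowledgment transmission time and processing delays ignored):

bit_rate = 4000          # bits per second
prop_delay = 0.020       # seconds, one way

def efficiency(frame_bits):
    # Stop-and-wait efficiency: transmit time over transmit time plus RTT.
    t = frame_bits / bit_rate
    return t / (t + 2 * prop_delay)

min_frame = bit_rate * 2 * prop_delay
print(min_frame, efficiency(160))   # 160.0 0.5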
