TCP UDP 1

Question 1

Under which condition does UDP dominance occur?

A. when TCP traffic is in the same class as UDP
B. when UDP flows are assigned a lower priority queue
C. when WRED is enabled
D. when ACLs are in place to block TCP traffic

Answer: A

It is a general best practice to not mix TCP-based traffic with UDP-based traffic (especially Streaming-Video) within a single service-provider class because of the behaviors of these protocols during periods of congestion. Specifically, TCP transmitters throttle back flows when drops are detected. Although some UDP applications have application-level windowing, flow control, and retransmission capabilities, most UDP transmitters are completely oblivious to drops and, thus, never lower transmission rates because of dropping.
When TCP flows are combined with UDP flows within a single service-provider class and the class experiences congestion, TCP flows continually lower their transmission rates, potentially giving up their bandwidth to UDP flows that are oblivious to drops. This effect is called TCP starvation/UDP dominance.
TCP starvation/UDP dominance likely occurs if TCP-based applications are assigned to the same service-provider class as UDP-based applications and the class experiences sustained congestion.
Granted, it is not always possible to separate TCP-based flows from UDP-based flows, but it is beneficial to be aware of this behavior when making such application-mixing decisions within a single service-provider class.
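To make the dynamic concrete, here is a purely illustrative Python sketch (the class rate, flow rates, and increase/decrease steps are all assumed numbers, not taken from any real device): an AIMD sender stands in for TCP and a constant-rate sender stands in for UDP, and under congestion the TCP rate collapses while the UDP rate never changes.

# Toy illustration of TCP starvation / UDP dominance in one congested class.
# The AIMD sender stands in for TCP; the constant-rate sender stands in for
# UDP. All rates are assumed values, not measurements from a real network.

LINK_CAPACITY = 10_000   # kbps available to the service-provider class
UDP_RATE = 8_000         # kbps; the UDP sender never reacts to drops

tcp_rate = 5_000         # kbps; the TCP sender's current transmission rate

for second in range(10):
    congested = tcp_rate + UDP_RATE > LINK_CAPACITY
    if congested:
        tcp_rate = max(tcp_rate // 2, 100)   # multiplicative decrease on drops
    else:
        tcp_rate = tcp_rate + 500            # additive increase when no drops occur
    print(f"t={second}s  TCP={tcp_rate} kbps  UDP={UDP_RATE} kbps  "
          f"{'congested' if congested else 'no congestion'}")

Running the sketch shows the TCP rate oscillating at a fraction of its starting value while the UDP sender keeps the full 8,000 kbps, which is exactly the starvation/dominance effect described above.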

Question 2

Which two actions must you perform to enable and use window scaling on a router? (Choose two)

A. Execute the command ip tcp window-size 65536.
B. Set window scaling to be used on the remote host.
C. Execute the command ip tcp queuemax.
D. Set TCP options to “enabled” on the remote host.
E. Execute the command ip tcp adjust-mss.

Answer: A B

Question 3

Which three TCP enhancements can be used with TCP selective acknowledgments? (Choose three)

A. header compression
B. explicit congestion notification
C. keepalive
D. time stamps
E. TCP path discovery
F. MTU window

Answer: B C D

TCP Selective Acknowledgement (SACK) prevents unnecessary retransmissions by letting the receiver report subsequent data that it has already received successfully. Let's see an example of the advantages of TCP SACK.

(Figures: TCP (Normal) Acknowledgement vs. TCP Selective Acknowledgement)

For TCP (normal) acknowledgement, when a client requests data, the server sends the first three segments (the name for packets at Layer 4): Segment#1, #2, and #3. Suppose Segment#2 is lost somewhere in the network while Segment#3 still reaches the client. The client checks Segment#3 and realizes Segment#2 is missing, so it can only acknowledge that it received Segment#1 successfully. Because it received Segment#1 and Segment#3, it sends two duplicate ACK#1 messages to alert the server that it has not received any data beyond Segment#1. After receiving these ACKs, the server must resend Segment#2 and Segment#3 and wait for the ACKs of those segments.

For TCP Selective Acknowledgement, the process is the same until the client realizes Segment#2 is missing. It still sends ACK#1, but adds a SACK block to indicate that it has already received Segment#3 successfully (so there is no need to retransmit that segment). Therefore the server only needs to resend Segment#2. Notice that after receiving Segment#2, the client sends ACK#3 (not ACK#2) to say that it now has the first three segments. The server then continues sending Segment#4, #5, and so on.

The SACK option is not mandatory and it is used only if both parties support it.
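The retransmission difference in the example can be expressed as a tiny Python sketch. It only models the bookkeeping logic described above (segment numbers follow the example); it is not a real TCP implementation.

# Illustrative comparison of cumulative ACK vs SACK when Segment#2 is lost.
# Purely a sketch of the logic in the explanation above, not real TCP.

sent = [1, 2, 3]          # segments the server sent
received = [1, 3]         # Segment#2 was lost in transit

# Cumulative (normal) ACK: the client can only acknowledge the highest
# in-order segment, so the server retransmits everything after it.
highest_in_order = 1
retransmit_cumulative = [s for s in sent if s > highest_in_order]   # [2, 3]

# SACK: the client additionally reports the blocks it already holds,
# so the server retransmits only the gap.
retransmit_sack = [s for s in sent if s not in received]            # [2]

print("Cumulative ACK retransmits:", retransmit_cumulative)
print("SACK retransmits:          ", retransmit_sack)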

The TCP Explicit Congestion Notification (ECN) feature allows an intermediate router to notify end hosts of impending network congestion. It also provides enhanced support for TCP sessions associated with applications, such as Telnet, web browsing, and transfer of audio and video data that are sensitive to delay or packet loss. The benefit of this feature is the reduction of delay and packet loss in data transmissions. Use the “ip tcp ecn” command in global configuration mode to enable TCP ECN.
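For reference, the ECN signal itself lives in the two low-order bits of the IP ToS/DiffServ byte (RFC 3168 codepoints). The small Python helper below only decodes those standard codepoints; the sample byte values are assumptions chosen for illustration.

# The ECN field is the two low-order bits of the IP ToS/DiffServ byte.
ECN_CODEPOINTS = {
    0b00: "Not-ECT (ECN not supported)",
    0b10: "ECT(0)  (ECN-capable transport)",
    0b01: "ECT(1)  (ECN-capable transport)",
    0b11: "CE      (Congestion Experienced, set by a congested router)",
}

def ecn_of(tos_byte: int) -> str:
    """Return the ECN meaning of an IP ToS/DiffServ byte."""
    return ECN_CODEPOINTS[tos_byte & 0b11]

print(ecn_of(0x02))  # ECT(0): the sender advertises ECN support
print(ecn_of(0x03))  # CE: a router marked the packet instead of dropping it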

The TCP time-stamp option provides improved TCP round-trip time measurements. Because the time stamps are always sent and echoed in both directions and the time-stamp value in the header is always changing, TCP header compression will not compress the outgoing packet. Use the “ip tcp timestamp” command to enable the TCP time-stamp option.

The TCP Keepalive Timer feature provides a mechanism to identify dead connections. When a TCP connection on a routing device is idle for too long, the device sends a TCP keepalive packet to the peer with only the Acknowledgment (ACK) flag turned on. If a response packet (a TCP ACK packet) is not received after the device sends a specific number of probes, the connection is considered dead and the device initiating the probes frees resources used by the TCP connection.
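End hosts expose the same mechanism through socket options. A minimal Python sketch is shown below; the TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT names are Linux-specific, and the timer values are arbitrary examples, not the defaults of any router.

import socket

# Enable TCP keepalive probes on a socket (host-side view of the mechanism).
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Start probing after 60 s of idle time, probe every 10 s, and declare the
# connection dead after 5 unanswered probes (ACK-only keepalive packets).
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)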

Question 4

A network engineer notices that transmission rates of senders of TCP traffic sharply increase and decrease simultaneously during periods of congestion. Which condition causes this?

A. global synchronization
B. tail drop
C. random early detection
D. queue management algorithm

Answer: A

Global synchronization occurs when multiple TCP hosts reduce their transmission rates in response to congestion. When the congestion clears, the TCP hosts all try to increase their transmission rates again at the same time (via the slow-start algorithm), which causes congestion again. Plotted over time, global synchronization appears as a synchronized sawtooth pattern in link utilization.

Global synchronization reduces the optimal throughput of network applications, and tail drop contributes to this phenomenon. When an interface on a router cannot transmit a packet immediately, the packet is queued. Packets are then taken out of the queue and eventually transmitted on the interface. But if the arrival rate of packets at the output interface exceeds the ability of the router to buffer and forward traffic, the queues grow to their maximum length and the interface becomes congested. Tail drop is the default queuing response to congestion: simply drop all traffic that exceeds the queue limit. Tail drop treats all traffic equally and does not differentiate among classes of service.
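A toy Python sketch, with assumed link and flow rates, shows why the flows move in lockstep: tail drop hits every sender in the same interval, so they all halve and then all ramp up together, producing the sawtooth described above.

# Toy illustration of global synchronization under tail drop. All rates are
# assumed values; this is not a packet-level simulator.

LINK_CAPACITY = 10_000           # kbps (hypothetical)
flows = [2_000, 2_500, 3_000]    # current rates of three TCP senders, kbps

for t in range(12):
    total = sum(flows)
    if total > LINK_CAPACITY:            # queue overflows -> tail drop hits every flow
        flows = [r // 2 for r in flows]  # all senders back off at the same time
    else:
        flows = [r + 600 for r in flows] # all senders ramp up again in lockstep
    print(f"t={t:2d}  rates={flows}  total={sum(flows)} kbps")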

Question 5

Which three problems result from application mixing of UDP and TCP streams within a network with no QoS? (Choose three)

A. starvation
B. jitter
C. latency
D. windowing
E. lower throughput

Answer: A C E

When TCP traffic is mixed with UDP traffic under congestion, TCP flows try to lower their transmission rates while UDP flows continue transmitting as usual. As a result, UDP flows dominate the bandwidth of the link; this effect is called TCP starvation/UDP dominance. It also increases latency and lowers the overall throughput.

Question 6

A network administrator uses IP SLA to measure UDP performance and notices that packets on one router have a higher one-way delay compared to the opposite direction. Which UDP characteristic does this scenario describe?

A. latency
B. starvation
C. connectionless communication
D. nonsequencing unordered packets
E. jitter

Answer: A

Question 7

A network engineer is configuring a routed interface to forward broadcasts of UDP 69, 53, and 49 to 172.20.14.225. Which command should be applied to the configuration to allow this?

A. router(config-if)#ip helper-address 172.20.14.225
B. router(config-if)#udp helper-address 172.20.14.225
C. router(config-if)#ip udp helper-address 172.20.14.225
D. router(config-if)#ip helper-address 172.20.14.225 69 53 49

Answer: A

Question 8

Which traffic characteristic is the reason that UDP traffic that carries voice and video is assigned to the queue only on a link that is at least 768 kbps?

A. typically is not fragmented
B. typically is fragmented
C. causes windowing
D. causes excessive delays for video traffic

Answer: A

If the speed of an interface is equal to or less than 768 kbps (half of a T1 link), it is considered a low-speed interface. A half T1 offers just enough bandwidth to allow voice packets to enter and leave the interface without delay issues. Therefore, if the speed of the link is smaller than 768 kbps, it should not be configured with such a queue.
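One way to see why 768 kbps is used as the boundary is serialization delay, the time needed to clock a single packet onto the wire. The quick Python calculation below uses an assumed 1500-byte packet and a few sample link speeds purely for illustration.

# Serialization delay: time to transmit one packet onto the link.
# The 1500-byte packet size and the link speeds are assumed sample values.

def serialization_delay_ms(packet_bytes: int, link_kbps: int) -> float:
    return packet_bytes * 8 / (link_kbps * 1000) * 1000

for kbps in (128, 512, 768, 1536):
    print(f"{kbps:>5} kbps: {serialization_delay_ms(1500, kbps):6.2f} ms "
          f"for a 1500-byte packet")

At 768 kbps a full-size packet already takes roughly 15-16 ms to serialize, and on slower links the delay grows quickly, which is why such links are treated as low-speed interfaces.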

Question 9

Which two attributes describe UDP within a TCP/IP network? (Choose two)

A. Acknowledgments
B. Unreliable delivery
C. Connectionless communication
D. Connection-oriented communication
E. Increased headers

Answer: B C

Question 10

A network engineer wants to ensure an optimal end-to-end delay bandwidth product. The delay is less than 64 KB. Which TCP feature ensures steady state throughput?

A. Window scaling
B. Network buffers
C. Round-trip timers
D. TCP acknowledgments

Answer: A

First, we need to understand the bandwidth-delay product.

Bandwidth-delay product (BDP) is the maximum amount of data "in transit" at any point in time between two endpoints. In other words, it is the amount of data "in flight" needed to saturate the link. You can think of the link between two devices as a pipe: the cross section of the pipe represents the bandwidth, and the length of the pipe represents the delay (the propagation delay due to the length of the pipe).

Therefore the Volume of the pipe = Bandwidth x Delay (or Round-Trip-Time). The volume of the pipe is also the BDP.

For example if the total bandwidth is 64 kbps and the RTT is 3 seconds, the formula to calculate BDP is:

BDP (bits) = total available bandwidth (bits/sec) * round trip time (sec) = 64,000 * 3 = 192,000 bits

-> BDP (bytes) = 192,000 / 8 = 24,000 bytes

Therefore we need 24 KB of data in flight to fill this link.

For your information, BDP is very important in TCP communication because it optimizes the use of bandwidth on a link. As you know, a disadvantage of TCP is that it has to wait for an acknowledgment from the receiver before sending more data. The waiting time may be long, and we may not utilize the full bandwidth of the link for the transmission.

Based on the BDP, the sending host can increase the amount of data sent on a link (usually by increasing the window size). In other words, the sending host can fill the whole pipe with data so that no bandwidth is wasted.

In conclusion, if we want an optimal end-to-end delay bandwidth product, TCP must use the window scaling feature so that we can fill the entire "pipe" with data.
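The following Python sketch reproduces the arithmetic of the 64 kbps example above and then shows, for an assumed faster path (10 Mbps with a 100 ms RTT), why the classic 16-bit window of 65,535 bytes is not enough and a window scale factor is required.

# BDP arithmetic from the worked example, plus the window scale factor needed
# when the BDP exceeds the classic 16-bit TCP window (65,535 bytes).

bandwidth_bps = 64_000      # 64 kbps link from the example
rtt_seconds = 3             # round-trip time from the example

bdp_bits = bandwidth_bps * rtt_seconds       # 192,000 bits
bdp_bytes = bdp_bits // 8                    # 24,000 bytes (24 KB)
print(f"BDP = {bdp_bits} bits = {bdp_bytes} bytes")

# An assumed faster path: 10 Mbps with a 100 ms RTT exceeds the 64 KB window.
bdp_bytes = 10_000_000 * 0.100 / 8           # 125,000 bytes
scale = 0
while 65_535 << scale < bdp_bytes:           # shift the window scale option must advertise
    scale += 1
print(f"BDP = {bdp_bytes:.0f} bytes -> window scale factor of {scale} (window << {scale})")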