TCP vs UDP: Key Differences and When to Use Each

TCP provides reliable, ordered data delivery with error-checking, while UDP is faster but does not guarantee delivery. Each suits different use cases in networking.

TCP and UDP

TCP and UDP are the two main transport-layer protocols that carry data across the internet. Nearly every networked application uses one of them, directly or indirectly. Understanding the differences, trade-offs, and appropriate use cases for each is a foundational skill in web development, networking, and systems design.

What Is TCP

TCP (Transmission Control Protocol) is a connection-oriented protocol that provides reliable, ordered, and error-checked delivery of data between two devices. Before any data is exchanged, TCP establishes a connection through a process called the three-way handshake, in which the sender and receiver exchange synchronisation and acknowledgement packets to confirm that both sides are ready to communicate.

Once the connection is established, TCP guarantees that every byte sent by the sender arrives at the destination intact and in the correct order. If a packet is lost or corrupted in transit, TCP detects the gap through acknowledgement numbers, requests retransmission of the missing data, and holds subsequent packets in a buffer until the gap is filled before delivering them to the application in sequence.

This reliability comes at a cost. Every packet must be acknowledged, the three-way handshake adds latency before data can flow, and retransmission introduces further delays when packets are lost. For applications where accuracy is more important than raw speed, this overhead is an acceptable trade-off. For applications where real-time responsiveness matters more than completeness, it is not.

The TCP Three-Way Handshake

Before TCP can transmit any data, the client and server must establish a connection through a three-step exchange of control packets. This process synchronises sequence numbers and confirms that both sides are ready to communicate reliably.

TCP connection establishment:
Client                          Server
  |                               |
  |-------- SYN (seq=100) ------->|  Client says: "I want to connect,
  |                               |  my starting sequence number is 100"
  |                               |
  |<--- SYN-ACK (seq=200,ack=101)-|  Server says: "Acknowledged, my
  |                               |  sequence number is 200"
  |                               |
  |-------- ACK (ack=201) ------->|  Client says: "Acknowledged.
  |                               |  Connection established."
  |                               |
  |===== Data can now flow ======>|

This handshake adds one full round-trip time of latency before any application data can be sent. For a client connecting to a distant server, this can add tens to hundreds of milliseconds before a single byte of content is transferred, which is one reason HTTPS connections that also require a TLS handshake can feel slow on the first load.
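The handshake described above is invisible to most application code: in Python's socket API it happens entirely inside the `connect()` call. The sketch below, which uses a throwaway listener on the loopback interface as a stand-in for a remote server, times that call. On loopback the round trip is microseconds; over a real network the same call would take the full round-trip time discussed above.

```python
import socket
import threading
import time

# A throwaway listener on the loopback interface stands in for a remote
# server; port 0 asks the OS to pick any free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
host, port = server.getsockname()

# Accept the incoming connection in the background so it is not left pending.
threading.Thread(target=server.accept, daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
start = time.perf_counter()
client.connect((host, port))  # the SYN / SYN-ACK / ACK exchange happens here
elapsed = time.perf_counter() - start

print(f"handshake completed in {elapsed * 1000:.3f} ms")
client.close()
server.close()
```

Replacing the loopback address with a distant host would make `elapsed` roughly equal to one network round-trip time, before a single byte of application data has moved.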

What Is UDP

UDP (User Datagram Protocol) is a connectionless protocol. It sends packets directly to the destination without any prior setup, without confirming that the receiver is ready, and without tracking whether packets arrived. Each packet, called a datagram, is independent. The sender fires it into the network and moves on immediately to the next one.

If a UDP packet is lost, corrupted, or arrives out of order, the protocol does nothing about it. There are no acknowledgements, no retransmissions, and no sequencing. The application layer is entirely responsible for deciding how to handle missing or reordered data, if it needs to handle it at all.

This simplicity makes UDP significantly faster than TCP. The absence of handshaking, acknowledgement overhead, and retransmission delays means data starts flowing immediately and continues flowing at the maximum rate the network allows. For applications designed to tolerate occasional data loss, the speed advantage far outweighs the lack of guarantees. A dropped frame in a video call is barely noticeable. A frozen call caused by TCP waiting for a retransmission is intolerable.

UDP datagram transmission:
Sender                          Receiver
  |                               |
  |---- Datagram 1 (seq=1) ------>|  Arrives successfully
  |---- Datagram 2 (seq=2) ------>|  Lost in transit — no retransmission
  |---- Datagram 3 (seq=3) ------>|  Arrives successfully
  |---- Datagram 4 (seq=4) ------>|  Arrives out of order
  |                               |
  No acknowledgements. No waiting. Sender never knows what arrived.
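The fire-and-forget behaviour in the diagram is equally visible in code. In this sketch, `sendto()` hands a datagram to the network and returns at once; the destination port below is the discard port, chosen precisely because nothing needs to be listening there for the send to "succeed".

```python
import socket

# UDP needs no connection: sendto() queues the datagram and returns
# immediately, whether or not anyone is listening on the other end.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

payload = b"telemetry reading 42"
# Port 9 is the traditional discard service; no listener is required.
sent = sock.sendto(payload, ("127.0.0.1", 9))

print(f"queued {sent} bytes with no handshake and no acknowledgement")
sock.close()
```

The return value only confirms the datagram was handed to the local network stack. Whether it ever arrived is unknowable at this layer, which is exactly the trade the diagram illustrates.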

TCP vs UDP: Side-by-Side Comparison

Feature | TCP | UDP
Connection Model | Connection-oriented; requires a three-way handshake before data can flow | Connectionless; sends data immediately with no setup
Delivery Guarantee | Guaranteed; every packet is acknowledged and retransmitted if lost | None; packets may be lost silently with no notification
Ordering | Guaranteed; packets are reordered and buffered to deliver data in sequence | None; packets may arrive out of order, and the application must handle sequencing if needed
Error Checking | Checksum plus retransmission for corrupted or missing packets | Checksum only; corrupted packets are discarded with no retransmission
Speed | Slower due to acknowledgement overhead, handshaking, and retransmission delays | Faster; minimal overhead and no waiting for acknowledgements
Flow Control | Yes; TCP adjusts the transmission rate to avoid overwhelming the receiver | No; the sender transmits at whatever rate it chooses
Congestion Control | Yes; TCP reduces its transmission rate when it detects network congestion | No; UDP can continue sending even during heavy network congestion
Header Size | 20 to 60 bytes, depending on options | Fixed 8 bytes
State Maintained | Yes; both sides maintain connection state throughout the session | No; each datagram is independent, with no connection state
Primary Use Case | Web pages, email, file transfers, APIs, databases, SSH | Video calls, live streaming, gaming, DNS, IoT telemetry

When to Use TCP

TCP is the right choice whenever data accuracy and completeness matter more than raw speed. Any application where a missing or reordered byte would produce incorrect behaviour belongs on TCP. The overhead is justified because the alternative, corrupted or incomplete data, is worse than the latency cost.

  • Web browsing (HTTP and HTTPS): Every byte of HTML, CSS, JavaScript, and image data must arrive correctly and in order for the browser to render the page properly. A missing packet in a JavaScript file would produce a syntax error.
  • Email (SMTP, IMAP, POP3): Messages must be delivered completely. A partially received email with missing characters or attachments is worse than a delayed one.
  • File transfers (FTP, SFTP, HTTPS downloads): A single missing chunk corrupts the entire file. Reliable delivery is non-negotiable.
  • APIs and database connections: Request and response data must arrive exactly as sent. A missing character in a JSON payload or an SQL query would cause incorrect application behaviour.
  • SSH connections: Every keystroke and command sent over a remote terminal must arrive reliably. Dropped characters in a shell session are unacceptable.
  • Financial transactions: Payment data, order records, and account balances require absolute accuracy. Any data loss in a financial system could have serious consequences.
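What all of the cases above rely on is TCP's byte-stream guarantee: whatever one side writes, the other side reads intact and in order. The loopback sketch below sends an illustrative HTTP-style request (the hostname `example.test` is a placeholder) and verifies that the bytes arrive exactly as sent.

```python
import socket
import threading

# A loopback TCP transfer: every byte written on one side arrives intact
# and in order on the other, which is what lets HTTP, SMTP, and SSH treat
# the connection as a reliable pipe.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
host, port = server.getsockname()

message = b"GET /index.html HTTP/1.1\r\nHost: example.test\r\n\r\n"

def send_request():
    client = socket.create_connection((host, port))
    client.sendall(message)   # TCP segments, acknowledges, retransmits as needed
    client.close()

threading.Thread(target=send_request, daemon=True).start()

conn, _ = server.accept()
received = b""
while chunk := conn.recv(1024):   # read until the sender closes the connection
    received += chunk

print(received == message)  # True: intact and in order
conn.close()
server.close()
```

The receiving loop never checks for gaps or reordering because TCP has already handled both before `recv()` returns.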

When to Use UDP

UDP is the right choice when low latency matters more than guaranteed delivery, and when the application is designed to handle or tolerate occasional packet loss gracefully. The absence of acknowledgement overhead means data flows with minimal delay, which is critical for real-time applications.

  • Video and voice calls (WebRTC, Zoom, FaceTime): A dropped video frame is barely noticeable and the call continues smoothly. Waiting for TCP to retransmit a lost frame would freeze the video for the duration of the retransmission, which is far more disruptive.
  • Live streaming: Viewers watching a live broadcast do not need old frames retransmitted. By the time a retransmission arrives, the stream has moved on and the packet is useless. Skipping ahead is preferable to buffering.
  • Online gaming: A multiplayer game needs to know where every player is right now, not where they were 200 milliseconds ago after a TCP retransmission. Games send frequent small position updates and accept that some will be lost.
  • DNS lookups: A DNS query is a short, simple request that expects a short, simple response. The cost of a TCP handshake for every domain name lookup would add unacceptable latency to every web request. DNS uses UDP and simply retries if no response arrives quickly.
  • IoT sensors and telemetry: A temperature sensor reporting every second can afford to lose an occasional reading. The overhead of TCP reliability for millions of sensor packets would consume far more resources than the occasional lost reading costs.
  • QUIC protocol: The QUIC transport protocol, used by HTTP/3, implements its own reliability mechanisms on top of UDP. This allows it to benefit from UDP's low latency while providing the reliability guarantees that HTTP requires.
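The DNS pattern from the list above, send one datagram, wait briefly, retry on silence, can be sketched in a few lines. This is a simplified stand-in for a resolver's retry logic: a loopback echo thread plays the role of the DNS server, and the query bytes are illustrative rather than real DNS wire format.

```python
import socket
import threading

# A loopback responder stands in for a DNS server; a real resolver would
# send an actual DNS-format query to port 53 instead.
responder = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
responder.bind(("127.0.0.1", 0))
addr = responder.getsockname()

def answer_once():
    data, client = responder.recvfrom(512)
    responder.sendto(b"answer:" + data, client)

threading.Thread(target=answer_once, daemon=True).start()

# DNS-style client: one datagram out, short wait, retry on silence.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(0.5)              # UDP gives no signal on loss, so we time out

reply = None
for attempt in range(3):          # resolvers typically retry a few times
    sock.sendto(b"query:example.com", addr)
    try:
        reply, _ = sock.recvfrom(512)
        break
    except socket.timeout:
        continue                  # lost query or lost reply: just ask again

print(reply)
sock.close()
responder.close()
```

Note that reliability here lives entirely in the application: the timeout-and-retry loop is doing, crudely, what TCP would do automatically, but only when it is actually needed.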

How HTTP/3 and QUIC Changed the Story

For most of the web's history, HTTP ran over TCP and inherited all of TCP's reliability guarantees and overhead. HTTP/3, standardised in 2022, changed this by building on QUIC, a transport protocol that runs over UDP, originally developed at Google and later standardised by the IETF.

QUIC reimplements the reliability, ordering, and congestion control features of TCP at the application layer, but solves a significant problem that TCP cannot: head-of-line blocking. In TCP, if a single packet is lost, all subsequent packets must wait in a buffer until the missing packet is retransmitted, even if they belong to completely independent data streams. QUIC multiplexes multiple independent streams over a single connection, so a lost packet in one stream does not block others.

QUIC also combines the connection and TLS handshakes into a single round trip, reducing the latency of establishing a new HTTPS connection. The result is that HTTP/3 over QUIC delivers the reliability that web applications need with lower latency than TCP, particularly over unreliable networks like mobile connections where packet loss is common.

Feature | HTTP/1.1 over TCP | HTTP/2 over TCP | HTTP/3 over QUIC (UDP)
Transport | TCP | TCP | UDP (via QUIC)
Multiplexing | No | Yes, but with head-of-line blocking | Yes, without head-of-line blocking
Connection Setup | TCP + TLS (2 round trips) | TCP + TLS (2 round trips) | QUIC + TLS (1 round trip, or 0 for repeat connections)
Packet Loss Impact | Blocks the entire connection | Blocks all streams | Affects only the stream with the lost packet
Mobile Performance | Poor on unreliable networks | Moderate | Significantly better

Frequently Asked Questions

  1. Can UDP packets arrive out of order?
    Yes. UDP applies no sequencing to packets. Each datagram is routed independently through the network and may take a different path, arriving in a different order from how it was sent. Applications that require ordered data and choose to use UDP must implement their own sequencing logic. Video streaming applications handle this by buffering a small amount of data and reordering packets within that buffer, discarding any that arrive too late to be useful.
  2. Does using UDP mean the data is insecure?
    No. Security and transport protocol are independent concerns. DTLS (Datagram Transport Layer Security) provides the same encryption and authentication guarantees for UDP that TLS provides for TCP. WebRTC, which uses UDP for media transmission, mandates DTLS encryption by default. QUIC, which powers HTTP/3, also builds TLS 1.3 directly into its protocol. Choosing UDP does not mean forgoing security; it means choosing a different transport mechanism that security can be layered on top of.
  3. Why does DNS use UDP instead of TCP?
    DNS queries and responses are small enough to fit in a single packet in the vast majority of cases. The overhead of a TCP three-way handshake before every domain name lookup would add a full round-trip of latency to every DNS resolution, and since DNS lookups happen before every web request, this overhead would compound across every page load. UDP allows a DNS query to be sent and answered in a single round trip with no setup cost. DNS does use TCP for specific cases where responses are too large for a single UDP packet, such as DNSSEC-signed responses and zone transfers between authoritative servers.
  4. What is head-of-line blocking and why does it matter?
    Head-of-line blocking is a problem that occurs in TCP when a single lost packet prevents all subsequent packets from being delivered to the application, even if those packets arrived successfully and belong to completely independent data. TCP delivers data in strict order, so a gap caused by a lost packet creates a queue where everything behind it must wait until the missing packet is retransmitted and received. HTTP/2 attempted to solve this at the HTTP layer with multiplexing, but because it still runs over a single TCP connection, a lost packet at the TCP layer still blocks all HTTP/2 streams simultaneously. HTTP/3 over QUIC solves this definitively because QUIC multiplexes streams independently, meaning a lost packet affects only the stream it belongs to.
  5. Is TCP always slower than UDP in practice?
    Not necessarily in all circumstances. On reliable, low-latency networks with minimal packet loss, the performance difference between TCP and UDP is small because retransmissions rarely occur and acknowledgement overhead is the only meaningful cost. The gap widens significantly on unreliable networks such as mobile connections, satellite links, and congested Wi-Fi, where packet loss is more frequent and TCP's retransmission delays become more pronounced. For applications sending large volumes of data over reliable networks, the difference may be negligible. For real-time applications over variable-quality networks, the difference can be the distinction between a usable and an unusable experience.

Conclusion

TCP and UDP represent two fundamental approaches to data transport, each optimised for a different set of priorities. TCP prioritises reliability, ordering, and correctness at the cost of latency and overhead, making it the right choice for web pages, email, file transfers, APIs, and any application where every byte must arrive exactly as sent. UDP prioritises speed and low latency at the cost of delivery guarantees, making it the right choice for real-time audio and video, gaming, DNS, and any application designed to tolerate occasional packet loss gracefully.

HTTP/3 and QUIC blur this distinction by delivering TCP-level reliability over a UDP foundation, combining the speed advantage of UDP with the correctness guarantees that web applications require.

Understanding both protocols and the trade-offs between them gives you the foundation to reason about network performance, diagnose connectivity issues, and design systems that use the right transport for each part of their communication. Continue with the TCP three-way handshake, how routing works, and HTTP vs HTTPS to build a complete picture of how data moves across the internet.