tailscale/derp
Brad Fitzpatrick 15fc6cd966 derp/derphttp: fix netcheck HTTPS probes
The netcheck client, when no UDP is available, probes distance using
HTTPS.

Several problems:

* It probes using /derp/latency-check.
* But cmd/derper serves the handler at /derp/probe
* Despite the difference, it worked by accident until c8f4dfc8c0
  which made netcheck's probe require a 2xx status code.
* In tests we only use derphttp.Handler, so the cmd/derper-installed
  mux routes aren't present and there's no probe handler at all. That
  breaks tests in airplane mode: netcheck.Client then reports
  "unexpected HTTP status 426" (Upgrade Required).

This makes derp handle both /derp/probe and /derp/latency-check
equivalently, in both cmd/derper and derphttp.Handler standalone modes.
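
A minimal sketch of the idea (plain net/http, not the actual derp
handler code): register the same 2xx-returning probe handler at both
paths.

```go
package main

import (
	"log"
	"net/http"
)

// probeHandler is a hypothetical stand-in for the DERP latency probe:
// it must answer GET/HEAD with a 2xx status, since netcheck now
// requires that (c8f4dfc8c0).
func probeHandler(w http.ResponseWriter, r *http.Request) {
	switch r.Method {
	case "HEAD", "GET":
		w.WriteHeader(http.StatusOK)
	default:
		http.Error(w, "bogus probe method", http.StatusMethodNotAllowed)
	}
}

func main() {
	mux := http.NewServeMux()
	// Serve the same handler at both the old and new probe paths.
	mux.HandleFunc("/derp/probe", probeHandler)         // path cmd/derper historically served
	mux.HandleFunc("/derp/latency-check", probeHandler) // path the netcheck client requests
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```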

I noticed this when wgengine/magicsock's TestActiveDiscovery was
failing in airplane mode (no wifi). It still doesn't pass, but it gets
further.

Fixes #11989

Change-Id: I45213d4bd137e0f29aac8bd4a9ac92091065113f
2024-05-03 08:24:24 -07:00
derphttp derp/derphttp: fix netcheck HTTPS probes 2024-05-03 08:24:24 -07:00
testdata derp: add debug traffic handler 2021-06-18 15:47:55 -07:00
README.md derp: add a README.md with some docs 2023-05-02 13:42:25 -07:00
derp.go derp: include src IPs in mesh watch messages 2023-08-16 08:46:52 -07:00
derp_client.go derp: optimize another per client field alignment 2024-01-12 13:05:39 -08:00
derp_server.go derp,ipn/ipnlocal: stop calling rand.Seed 2024-05-02 09:09:09 -07:00
derp_server_default.go all: update copyright and license headers 2023-01-27 15:36:29 -08:00
derp_server_linux.go derp: use tstime (#8634) 2023-07-27 15:56:33 -04:00
derp_test.go all: use Go 1.22 range-over-int 2024-04-16 15:32:38 -07:00
dropreason_string.go derp, derphttp, magicsock: send new unknown peer frame when destination is unknown (#7552) 2023-03-24 19:11:48 -07:00

README.md

DERP

This directory (and subdirectories) contains the DERP code. The server itself is in ../cmd/derper.

DERP is a packet relay system (client and servers) where peers are addressed using WireGuard public keys instead of IP addresses.

It relays two types of packets:

  • "Disco" discovery messages (see ../disco) as the a side channel during NAT traversal.

  • Encrypted WireGuard packets as the fallback of last resort when UDP is blocked or NAT traversal fails.
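
Both payload types are relayed opaquely; the server only looks at the destination key. A rough illustration of that key-based addressing (a hypothetical sketch with made-up types, not the derp wire protocol or server code):

```go
package main

import (
	"fmt"
	"sync"
)

// NodeKey stands in for a node's WireGuard public key (the real code
// uses key.NodePublic).
type NodeKey [32]byte

// Relay models a DERP-like server: a table from public keys to client
// connections. Payloads (disco frames or encrypted WireGuard packets)
// are never inspected or decrypted, only forwarded.
type Relay struct {
	mu      sync.Mutex
	clients map[NodeKey]chan []byte
}

// Register adds a client identified by its public key and returns the
// channel its packets will be delivered on.
func (r *Relay) Register(k NodeKey) chan []byte {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.clients == nil {
		r.clients = make(map[NodeKey]chan []byte)
	}
	ch := make(chan []byte, 32)
	r.clients[k] = ch
	return ch
}

// Send relays pkt to the client registered under dst, if any.
func (r *Relay) Send(dst NodeKey, pkt []byte) error {
	r.mu.Lock()
	ch, ok := r.clients[dst]
	r.mu.Unlock()
	if !ok {
		return fmt.Errorf("unknown destination %x", dst[:4])
	}
	ch <- pkt
	return nil
}

func main() {
	var relay Relay
	var alice, bob NodeKey
	alice[0], bob[0] = 1, 2
	relay.Register(alice)
	inbox := relay.Register(bob)
	_ = relay.Send(bob, []byte("opaque payload for bob"))
	fmt.Printf("bob received: %q\n", <-inbox)
}
```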

DERP Map

Each client receives a "DERP Map" from the coordination server describing the DERP servers the client should try to use.

The client picks its "DERP home" based on latency. This is done to keep costs low by avoiding cloud load balancers (pricey) or anycast, which would necessarily require server-side routing between DERP regions.

Clients pick their DERP home and report it to the coordination server, which shares it with all the peers in the tailnet. When a peer wants to send a packet and it doesn't already have a WireGuard session open, it sends disco messages (some direct, and some over DERP) to attempt NAT traversal. The client will make connections to multiple DERP regions as needed. Only the connection to the DERP home region needs to be kept alive long-term.
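
A minimal sketch of that selection (hypothetical; the real logic lives in the netcheck client and handles many more details): pick the region with the lowest measured latency as the DERP home.

```go
package main

import (
	"fmt"
	"time"
)

// pickDERPHome returns the region ID with the lowest measured latency,
// or 0 if nothing was measured.
func pickDERPHome(latencies map[int]time.Duration) int {
	bestID, best := 0, time.Duration(0)
	for id, d := range latencies {
		if bestID == 0 || d < best {
			bestID, best = id, d
		}
	}
	return bestID
}

func main() {
	// Hypothetical per-region latencies from UDP (or HTTPS fallback) probes.
	measured := map[int]time.Duration{
		1: 23 * time.Millisecond,
		2: 81 * time.Millisecond,
		4: 9 * time.Millisecond,
	}
	fmt.Println("DERP home: region", pickDERPHome(measured))
}
```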

DERP Regions

Tailscale runs 1 or more DERP nodes (instances of cmd/derper) in various geographic regions to make sure users have low latency to their DERP home.

Regions generally have multiple nodes per region "meshed" (routing to each other) together for redundancy: it allows for cloud failures or upgrades without kicking users out to a higher latency region. Instead, clients will reconnect to the next node in the region. Each node in the region is required to be meshed with every other node in the region and forward packets to the other nodes in the region. Packets are forwarded only one hop within the region. There is no routing between regions. The assumption is that the mesh TCP connections are over a VPC that's very fast, low latency, and not charged per byte.

The coordination server assigns the list of nodes in a region as a function of the tailnet, so all clients within a tailnet should generally be connected to the same node and not require forwarding. Only after a failure do clients of a particular tailnet get split between nodes in a region and require inter-node forwarding. But over time it balances back out. There's also an admin-only DERP frame type to force close a particular client's TCP connection, making it reconnect to its primary, if the operator wants things to balance out sooner. (This uses the (*derphttp.Client).ClosePeer method, as used by Tailscale's internal, rarely-used cmd/derpprune maintenance tool.)
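
A sketch of that one-hop rule (hypothetical types, not the real server code): a node delivers locally when it can, otherwise hands the packet to the regional peer that has the destination client, and a packet that arrived over the mesh is never forwarded again.

```go
package main

// regionNode is a hypothetical stand-in for one derper in a region.
type regionNode struct {
	local     map[string]chan []byte // clients connected to this node, by public key
	meshPeers []*regionNode          // every other node in the same region
}

// deliver sends pkt toward the client with public key dst. overMesh is
// true when the packet already took its one allowed intra-region hop.
func (n *regionNode) deliver(dst string, pkt []byte, overMesh bool) {
	if ch, ok := n.local[dst]; ok {
		ch <- pkt // destination is connected here
		return
	}
	if overMesh {
		return // one hop max: never re-forward a forwarded packet
	}
	for _, peer := range n.meshPeers {
		if _, ok := peer.local[dst]; ok {
			peer.deliver(dst, pkt, true) // forward one hop within the region
			return
		}
	}
	// Unknown destination: dropped here (the real server can report an
	// "unknown peer" condition back to the sender).
}

func main() {}
```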

We generally run a minimum of three nodes in a region, not for quorum reasons (there's no voting) but because two is uncomfortably few for cascading-failure reasons: if you're running two nodes at 51% load (CPU, memory, etc.) and one fails, the survivor takes on the full load, is pushed past 100%, and fails too. With three or more nodes, you can run each node a bit hotter.
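
A back-of-the-envelope check of that sizing argument (simple arithmetic, not a real capacity model): with N equally loaded nodes, losing one pushes the survivors to roughly N/(N-1) times their previous load.

```go
package main

import "fmt"

// loadAfterFailure estimates per-node load after one of n equally
// loaded nodes fails, assuming its traffic spreads evenly over the
// survivors.
func loadAfterFailure(n int, load float64) float64 {
	return load * float64(n) / float64(n-1)
}

func main() {
	fmt.Printf("2 nodes at 51%% -> survivor at %.0f%%\n", 100*loadAfterFailure(2, 0.51))  // 102%: overloaded
	fmt.Printf("3 nodes at 60%% -> survivors at %.0f%%\n", 100*loadAfterFailure(3, 0.60)) // 90%: still fine
}
```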