This allows tests to verify that a DERP connection was actually
established.
Related to #4326
Updates tailscale/corp#2579
Signed-off-by: Maisem Ali <maisem@tailscale.com>
A new package can also later record/report which knobs are checked and
set. It also makes the code cleaner & easier to grep for env knobs.
Change-Id: Id8a123ab7539f1fadbd27e0cbeac79c2e4f09751
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
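A minimal sketch of the idea, with an illustrative `Bool` helper and registry (not the actual package API):

```go
// Sketch of a central env-knob package. Names and behavior are
// illustrative, not the actual API.
package envknob

import (
	"os"
	"sort"
	"strconv"
	"sync"
)

var (
	mu      sync.Mutex
	checked = map[string]string{} // knob name -> value observed
)

// Bool reports whether the environment variable name is set to a
// true-ish value, recording that the knob was consulted so it can
// be reported later.
func Bool(name string) bool {
	v := os.Getenv(name)
	mu.Lock()
	checked[name] = v
	mu.Unlock()
	b, _ := strconv.ParseBool(v)
	return b
}

// Checked returns the sorted names of all knobs consulted so far.
func Checked() []string {
	mu.Lock()
	defer mu.Unlock()
	names := make([]string, 0, len(checked))
	for n := range checked {
		names = append(names, n)
	}
	sort.Strings(names)
	return names
}
```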
So magicsock can later ask a DERP connection whether its source IP
would've changed if it reconnected.
Updates #3619
Change-Id: Ibc8810340c511d6786b60c78c1a61c09f5800e40
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
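A rough sketch of the accessor idea, using assumed names rather than the real derp.Client surface:

```go
package main

import (
	"fmt"
	"net"
)

// derpConn stands in for a DERP client connection; the real client
// wraps a TCP or TLS connection.
type derpConn struct {
	nc net.Conn
}

// LocalIP reports the source IP this connection was made from, so
// magicsock can compare it against the IP a fresh connection would
// use and decide whether reconnecting would change anything.
func (c *derpConn) LocalIP() (net.IP, error) {
	ta, ok := c.nc.LocalAddr().(*net.TCPAddr)
	if !ok {
		return nil, fmt.Errorf("unexpected local addr type %T", c.nc.LocalAddr())
	}
	return ta.IP, nil
}
```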
In prep for a future change to have client ping derp connections
when their state is questionable, rather than aggressively tearing
them down and doing a heavy reconnect when their state is unknown.
We already support ping/pong in the other direction (servers probing
clients) so we already had the two frame types, but I'd never finished
this direction.
Updates #3619
Change-Id: I024b815d9db1bc57c20f82f80f95fb55fc9e2fcc
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
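A sketch of the client-to-server ping direction under the DERP framing (1-byte frame type, 4-byte big-endian payload length); the frame-type byte values below are placeholders, not the real constants:

```go
package main

import (
	"crypto/rand"
	"encoding/binary"
	"io"
)

// Placeholder frame-type values; the real constants live in the
// derp package.
const (
	framePing = byte(0x0c) // assumed value
	framePong = byte(0x0d) // assumed value
)

// sendPing writes a client->server ping frame carrying 8 opaque
// bytes; the server echoes them back in a pong so the client can
// match the reply to its probe.
func sendPing(w io.Writer) (payload [8]byte, err error) {
	if _, err = rand.Read(payload[:]); err != nil {
		return
	}
	hdr := [5]byte{framePing}
	binary.BigEndian.PutUint32(hdr[1:], 8)
	if _, err = w.Write(hdr[:]); err != nil {
		return
	}
	_, err = w.Write(payload[:])
	return
}
```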
A public key should have at most one connection to a given
DERP node (or really: one connection to a node in a region).
But if people clone their machine keys (e.g. clone their VM, Raspberry
Pi SD card, etc), then we can get into a situation where a public key
is connected multiple times.
Originally, the DERP server handled this by just kicking out the prior
connection whenever a new one arrived. But this led to reconnect fights
where 2+ nodes were in hard loops trying to reconnect and kicking out
their peer.
Then a909d37a59 tried to add rate
limiting to how often that dup-kicking can happen, but empirically it
just doesn't work and ~leaks a bunch of goroutines and TCP
connections, tying them up for hour+ while more and more accumulate
and waste memory. Mostly because we were doing a time.Sleep forever
while not reading from their TCP connections.
Instead, just accept multiple connections per public key but track
which is the most recent. And if two are both writing back & forth,
then optionally disable them both. That last part is only enabled in
tests for now. The current default policy is just last-sender-wins
while we gather the next round of stats.
Updates #2751
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
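A simplified sketch of last-sender-wins tracking; the real server keys on node public keys and tracks more per-connection state:

```go
package main

import "sync"

type clientConn struct{ id int }

// dupSet sketches "accept multiple connections per public key, but
// track the most recent sender" (last-sender-wins).
type dupSet struct {
	mu     sync.Mutex
	active map[string][]*clientConn // key -> all live connections
	last   map[string]*clientConn   // key -> most recent sender
}

func newDupSet() *dupSet {
	return &dupSet{
		active: map[string][]*clientConn{},
		last:   map[string]*clientConn{},
	}
}

// noteSender records that conn just sent traffic for key, making it
// the connection future packets for key are delivered to.
func (s *dupSet) noteSender(key string, conn *clientConn) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.last[key] = conn
}

// pick returns the connection to deliver to: the most recent sender
// if known, otherwise any live connection for key.
func (s *dupSet) pick(key string) *clientConn {
	s.mu.Lock()
	defer s.mu.Unlock()
	if c := s.last[key]; c != nil {
		return c
	}
	if conns := s.active[key]; len(conns) > 0 {
		return conns[len(conns)-1]
	}
	return nil
}
```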
If a peer is connected to multiple nodes in a region (so
multiForwarder is in use) and then a node restarts and re-sends all
its additions, this bug about whether an element is in the
multiForwarder could cause a one-time flip in which peer node we
forward to. Not a huge deal, but not written as intended.
Thanks to @lewgun for the bug report in #2141.
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
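The intended behavior, sketched with a duplicate check on add (simplified types, not the real multiForwarder):

```go
package main

// multiForwarder sketches the fix: forward to a stable entry, and
// make add idempotent so a restarting node re-sending its additions
// can't flip which entry wins.
type multiForwarder struct {
	fwds []string // forwarder IDs, oldest registration first
}

// add registers fwd, ignoring duplicates so re-adds are no-ops.
func (m *multiForwarder) add(fwd string) {
	for _, f := range m.fwds {
		if f == fwd {
			return // already present; keep existing order
		}
	}
	m.fwds = append(m.fwds, fwd)
}
```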
It was once believed that it might be useful. It wasn't. We never used it.
Remove it so we don't slowly leak memory.
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This adds a flag to the DERP server that makes it verify clients through a local
tailscaled. It is opt-in, so it should not affect existing clients, and is mainly intended for
users who want to run their own DERP servers. It assumes there is a local tailscaled running and
will attempt to hit it for peer status information.
Updates #1264
Signed-off-by: julianknodt <julianknodt@gmail.com>
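A sketch of the opt-in check; the flag name and the status-lookup interface are assumptions for illustration:

```go
package main

import (
	"context"
	"errors"
	"flag"
)

var verifyClients = flag.Bool("verify-clients", false, "accept only clients known to a local tailscaled")

// statusClient abstracts "ask the local tailscaled about its peers";
// the method shape here is assumed.
type statusClient interface {
	IsKnownPeer(ctx context.Context, nodeKeyHex string) (bool, error)
}

// verifyClient rejects a connecting DERP client unless the local
// tailscaled reports its node key as a known peer.
func verifyClient(ctx context.Context, sc statusClient, nodeKeyHex string) error {
	if !*verifyClients {
		return nil
	}
	ok, err := sc.IsKnownPeer(ctx, nodeKeyHex)
	if err != nil {
		return err
	}
	if !ok {
		return errors.New("client not authorized by local tailscaled")
	}
	return nil
}
```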
Before it was using the local address and port, so fix that.
The fields in the response from `ss` are:
State, Recv-Q, Send-Q, Local Address:Port, Peer Address:Port, Process
Signed-off-by: julianknodt <julianknodt@gmail.com>
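For reference, picking the peer column out of an `ss` line might look like this (a sketch, assuming whitespace-separated fields):

```go
package main

import "strings"

// peerAddr extracts the "Peer Address:Port" column from one line of
// `ss` output, assuming whitespace-separated fields in the order
// State, Recv-Q, Send-Q, Local Address:Port, Peer Address:Port, Process.
func peerAddr(line string) (string, bool) {
	f := strings.Fields(line)
	if len(f) < 5 {
		return "", false
	}
	return f[4], true // f[3] is the local address, the earlier bug
}
```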
This adds a handler on the DERP server for logging bytes sent and received by clients of the
server, by holding open a connection and recording if there is a difference between the number
of bytes sent and received. It sends a JSON marshalled object if there is an increase in the
number of bytes.
Signed-off-by: julianknodt <julianknodt@gmail.com>
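A sketch of such a long-poll handler; the counter interface, JSON field names, and sampling interval are assumptions:

```go
package main

import (
	"encoding/json"
	"net/http"
	"time"
)

// clientStats is a stand-in for the server's per-client counters.
type clientStats interface {
	BytesSentRecv() (sent, recv uint64)
}

// serveTraffic holds the connection open and writes a JSON object
// whenever the client's byte counters have increased since the last
// sample.
func serveTraffic(stats clientStats) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		fl, ok := w.(http.Flusher)
		if !ok {
			http.Error(w, "streaming unsupported", http.StatusInternalServerError)
			return
		}
		enc := json.NewEncoder(w)
		var lastSent, lastRecv uint64
		tick := time.NewTicker(time.Second)
		defer tick.Stop()
		for {
			select {
			case <-r.Context().Done():
				return
			case <-tick.C:
				sent, recv := stats.BytesSentRecv()
				if sent > lastSent || recv > lastRecv {
					enc.Encode(map[string]uint64{"Sent": sent, "Recv": recv})
					fl.Flush()
					lastSent, lastRecv = sent, recv
				}
			}
		}
	}
}
```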
It would be useful to know the time that packets spend inside of a queue before they are sent
off, as that can be indicative of the load the server is handling (and there was also an
existing TODO). This adds a simple exponential moving average metric to track the average packet
queue duration.
Changes during review:
Add CAS loop for recording queue timing w/ expvar.Func, rm snake_case, annotate in milliseconds,
convert
Signed-off-by: julianknodt <julianknodt@gmail.com>
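A sketch of the CAS loop for the moving average; the 0.1 smoothing factor and the expvar name are illustrative:

```go
package main

import (
	"expvar"
	"math"
	"sync/atomic"
	"time"
)

// avgQueueMs holds the exponential moving average of packet queue
// time in milliseconds, stored as float64 bits so it can be updated
// atomically from many senders.
var avgQueueMs uint64

func recordQueueTime(d time.Duration) {
	ms := float64(d) / float64(time.Millisecond)
	for {
		old := atomic.LoadUint64(&avgQueueMs)
		next := math.Float64frombits(old)*0.9 + ms*0.1 // assumed smoothing factor
		// CAS loop: retry if another goroutine updated the average
		// between our load and our store.
		if atomic.CompareAndSwapUint64(&avgQueueMs, old, math.Float64bits(next)) {
			return
		}
	}
}

func init() {
	expvar.Publish("avg-queue-duration-ms", expvar.Func(func() any {
		return math.Float64frombits(atomic.LoadUint64(&avgQueueMs))
	}))
}
```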
When building with redo, also include the git commit hash
from the proprietary repo, so that we have a precise commit
that identifies all build info (including Go toolchain version).
Add a top-level build script demonstrating to downstream distros
how to burn the right information into builds.
Adjust `tailscale version` to print commit hashes when available.
Fixes #841.
Signed-off-by: David Anderson <danderson@tailscale.com>
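A sketch of how the burned-in values might be consumed on the Go side; the variable names are assumptions, with the actual stamping done via -ldflags (shown in the comment):

```go
// Sketch of stamped version info. A build script would inject the
// values with something like:
//   go build -ldflags "-X pkg/version.gitCommit=$(git rev-parse HEAD)"
package version

import "fmt"

var (
	gitCommit      = "" // OSS repo commit, set at build time
	extraGitCommit = "" // proprietary repo commit, when built with redo
)

func short(h string) string {
	if len(h) > 9 {
		return h[:9]
	}
	return h
}

// String renders what `tailscale version` might print when the
// commit hashes are available.
func String(base string) string {
	s := base
	if gitCommit != "" {
		s = fmt.Sprintf("%s-%s", s, short(gitCommit))
	}
	if extraGitCommit != "" {
		s = fmt.Sprintf("%s-%s", s, short(extraGitCommit))
	}
	return s
}
```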
Fixes regression from e415991256 that
only affected Windows users, because only on Windows does Go delegate x509
cert validation to the OS, and Windows was unhappy with our "metacert"
lacking NotBefore and NotAfter.
Fixes #705
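A sketch of generating such a metacert with the two fields set; the CommonName scheme and validity window are assumptions:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"time"
)

// makeMetaCert sketches generating the self-signed "metacert" with
// explicit NotBefore/NotAfter, which Windows' cert validator
// requires.
func makeMetaCert(derpKeyHex string) ([]byte, error) {
	priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "derpkey" + derpKeyHex},
		NotBefore:    time.Now().Add(-time.Hour),          // required on Windows
		NotAfter:     time.Now().Add(30 * 24 * time.Hour), // ditto
	}
	return x509.CreateCertificate(rand.Reader, tmpl, tmpl, &priv.PublicKey, priv)
}
```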
* advertise server's DERP public key following its ServerHello
* have client look for that DERP public key in the response
PeerCertificates
* let client advertise it's going into a "fast start" mode
if it finds it
* modify server to support that fast start mode by just not
sending the HTTP response header
Cuts down another round trip, bringing the latency of being able to
write our first DERP frame from SF to Bangalore from ~725ms
(3 RTT) to ~481ms (2 RTT: TCP and TLS).
Fixes#693
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
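On the client side, the PeerCertificates scan might look roughly like this (the "derpkey" CommonName prefix is an assumption):

```go
package main

import (
	"crypto/tls"
	"strings"
)

// derpKeyFromConn scans the server's TLS certificate chain for the
// DERP "metacert" and returns the advertised public key, if any.
// When found, the client can enter fast-start mode: send its request
// with the fast-start header and begin speaking DERP without waiting
// for the HTTP response.
func derpKeyFromConn(cs tls.ConnectionState) (pubHex string, ok bool) {
	for _, cert := range cs.PeerCertificates {
		if s := cert.Subject.CommonName; strings.HasPrefix(s, "derpkey") {
			return strings.TrimPrefix(s, "derpkey"), true
		}
	}
	return "", false
}
```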
Strictly speaking, we don't know that it's a wireguard packet, just that
it doesn't look like a disco packet.
Signed-off-by: David Anderson <danderson@tailscale.com>
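Classification then reduces to a prefix check, roughly like this (treat the magic value as an assumption):

```go
package main

// discoMagic is the prefix all disco packets start with ("TS" plus
// the UTF-8 bytes of 💬); treat the exact value as an assumption.
const discoMagic = "TS\xf0\x9f\x92\xac"

// looksLikeDisco reports whether p could be a disco packet. Anything
// else is merely "not disco" — most likely WireGuard, but we can't
// know for sure, hence naming it by what it isn't.
func looksLikeDisco(p []byte) bool {
	return len(p) >= len(discoMagic) && string(p[:len(discoMagic)]) == discoMagic
}
```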
This will make it easier for a human to tell what
version is deployed, for (say) correlating line numbers
in profiles or panics to corresponding source code.
It'll also let us observe version changes in prometheus.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
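A sketch of publishing the running version via expvar, so a scrape of the debug endpoint can observe it (names illustrative):

```go
package main

import "expvar"

// publishVersion exposes the running binary's version as an expvar,
// letting a human (or a Prometheus scrape of the debug endpoint)
// correlate profiles and panics with source, and observe upgrades.
func publishVersion(version string) {
	v := new(expvar.String)
	v.Set(version)
	expvar.Publish("version", v)
}
```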
Active discovery lets us introspect the state of the network stack precisely
enough that it's unnecessary, and dropping the initial DERP packets greatly
slows down tests. Additionally, it's unrealistic since our production network
will never deliver _only_ discovery packets, it'll be all or nothing.
Signed-off-by: David Anderson <danderson@tailscale.com>