Compare commits


63 Commits

Author SHA1 Message Date
License Updater 9fb620417d licenses: update license notices
Signed-off-by: License Updater <noreply+license-updater@tailscale.com>
2024-05-06 16:44:28 +00:00
Brad Fitzpatrick c3c18027c6 all: make more tests pass/skip in airplane mode
Updates tailscale/corp#19786

Change-Id: Iedc6730fe91c627b556bff5325bdbaf7bf79d8e6
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-05-06 09:19:53 -07:00
Claire Wang 41f2195899
util/syspolicy: add auto exit node related keys (#11996)
Updates tailscale/corp#19681

Signed-off-by: Claire Wang <claire@tailscale.com>
2024-05-06 12:14:10 -04:00
Brad Fitzpatrick 1a963342c7 util/set: add Of variant of SetOf that takes variadic parameter
set.Of(1, 2, 3) is prettier than set.SetOf([]int{1, 2, 3}).

I was going to change the signature of SetOf but then I noticed its
name has stutter anyway, so I kept it for compatibility. People can
prefer to use set.Of for new code or slowly migrate.

Also add a lazy Make method, which I often find myself wanting,
without having to resort to uglier mak.Set(&set, k, struct{}{}).
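
For a feel of the two additions, here is a quick usage sketch (set.Of,
Add, and Contains follow the package as described above; the exact shape
of the lazy Make method is an assumption):

```go
package main

import (
	"fmt"

	"tailscale.com/util/set"
)

func main() {
	// Variadic constructor added here; reads better than
	// set.SetOf([]int{1, 2, 3}).
	s := set.Of(1, 2, 3)

	// Lazy Make: assumed to allocate the underlying map only if the set
	// is still nil, replacing the mak.Set(&s2, "hello", struct{}{}) dance.
	var s2 set.Set[string]
	s2.Make()
	s2.Add("hello")

	fmt.Println(s.Contains(2), s2.Contains("hello")) // true true
}
```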

Updates #cleanup

Change-Id: Ic6f3870115334efcbd65e79c437de2ad3edb7625
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-05-05 21:14:28 -07:00
Will Norris 80decd83c1 tsweb: remove redundant bumpStartIfNeeded func
Updates #12001

Signed-off-by: Will Norris <will@tailscale.com>
2024-05-05 18:04:58 -07:00
Maisem Ali ed843e643f types/views: add AppendStrings util func
Updates tailscale/corp#19623

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2024-05-03 19:19:33 -07:00
Maisem Ali fd6ba43b97 types/views: remove duplicate SliceContainsFunc
We already have `(Slice[T]).ContainsFunc`.

Updates #cleanup

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2024-05-03 19:19:33 -07:00
Will Norris 46980c9664 tsweb: ensure in-flight requests are always marked as finished
The inflight request tracker only starts recording a new bucket after
the first non-error request. Unfortunately, it's written in such a way
that ONLY successful requests are ever marked as being finished. Once a
bucket has had at least one successful request and begun to be tracked,
all subsequent error cases are never marked finished and always appear
as in-flight.

This change ensures that if a request is recorded as having been
started, we also mark it as finished at the end.
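
A minimal sketch of the pattern the fix adopts (hypothetical names; the
real tsweb tracker is more involved): every recorded start is paired with
a deferred finish, so error paths can no longer leak in-flight entries.

```go
package main

import (
	"errors"
	"fmt"
	"sync/atomic"
)

// inflightBucket is a hypothetical stand-in for tsweb's per-bucket
// in-flight counter.
type inflightBucket struct {
	inflight atomic.Int64
}

func (b *inflightBucket) markStarted()  { b.inflight.Add(1) }
func (b *inflightBucket) markFinished() { b.inflight.Add(-1) }

// serve pairs markStarted with a deferred markFinished, so the request is
// marked finished on success and on error alike.
func (b *inflightBucket) serve(handle func() error) error {
	b.markStarted()
	defer b.markFinished()
	return handle()
}

func main() {
	var b inflightBucket
	_ = b.serve(func() error { return errors.New("boom") })
	fmt.Println("in-flight after error:", b.inflight.Load()) // 0
}
```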

Updates tailscale/corp#19767

Signed-off-by: Will Norris <will@tailscale.com>
2024-05-03 15:36:14 -07:00
Percy Wegmann 817badf9ca ipn/ipnlocal: reuse transport across Taildrive remotes
This prevents us from opening a new connection on each HTTP
request.

Updates #11967

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-05-03 16:07:52 -05:00
Percy Wegmann 2cf764e998 drive: actually cache results on statcache
Updates #11967

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-05-03 16:07:52 -05:00
Irbe Krumina 406293682c
cmd/k8s-operator: cleanup runReconciler signature (#11993)
Updates #cleanup

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-05-03 19:05:37 +01:00
Claire Wang 35872e86d2
ipnlocal, magicsock: store last suggested exit node id in local backend (#11959)
Updates tailscale/corp#19681

Signed-off-by: Claire Wang <claire@tailscale.com>
2024-05-03 13:24:26 -04:00
Brad Fitzpatrick b62cfc430a tstest/integration/testcontrol: fix data race
Noticed in earlier GitHub actions failure.

Fixes #11994

Change-Id: Iba8d753caaa3dacbe2da9171d96c5f99b12e62d7
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-05-03 10:03:48 -07:00
Andrew Dunham e9505e5432 ipn/ipnlocal: plumb health.Tracker into profileManager constructor
Setting the field after-the-fact wasn't working because we could migrate
prefs on creation, which would set health status for auto updates.

Updates #11986

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I41d79ebd61d64829a3a9e70586ce56f62d24ccfd
2024-05-03 08:25:38 -07:00
Brad Fitzpatrick e42c4396cf net/netcheck: don't spam on ICMP socket permission denied errors
While debugging a failing test in airplane mode on macOS, I noticed
netcheck logspam about ICMP socket creation permission denied errors.

Apparently macOS just can't do those, or at least not in airplane
mode. Not worth spamming about.

Updates #cleanup

Change-Id: I302620cfd3c8eabb25202d7eef040c01bd8a843c
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-05-03 08:24:24 -07:00
Brad Fitzpatrick 15fc6cd966 derp/derphttp: fix netcheck HTTPS probes
The netcheck client, when no UDP is available, probes distance using
HTTPS.

Several problems:

* It probes using /derp/latency-check.
* But cmd/derper serves the handler at /derp/probe
* Despite the difference, it worked by accident until c8f4dfc8c0,
  which made netcheck's probe require a 2xx status code.
* In tests, we only use derphttp.Handler, so the cmd/derper-installed
  mux routes aren't present, so there's no probe. That breaks
  tests in airplane mode. netcheck.Client then reports "unexpected
  HTTP status 426" (Upgrade Required).

This makes derp handle both /derp/probe and /derp/latency-check
equivalently, and in both cmd/derper and derphttp.Handler standalone
modes.
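
Roughly, the server side of the fix has this shape (a hedged sketch with
a stand-in handler and port, not the actual derp wiring):

```go
package main

import "net/http"

// probeHandler stands in for derp's probe endpoint: it answers 200 OK so
// netcheck's HTTPS latency check can measure round-trip time.
func probeHandler(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusOK)
}

func main() {
	mux := http.NewServeMux()
	// Register both the path cmd/derper historically served and the path
	// the netcheck client actually probes, so either spelling works.
	mux.HandleFunc("/derp/probe", probeHandler)
	mux.HandleFunc("/derp/latency-check", probeHandler)
	http.ListenAndServe(":8080", mux)
}
```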

I noticed this when wgengine/magicsock TestActiveDiscovery was failing
in airplane mode (no wifi). It still doesn't pass, but it gets
further.

Fixes #11989

Change-Id: I45213d4bd137e0f29aac8bd4a9ac92091065113f
2024-05-03 08:24:24 -07:00
Brad Fitzpatrick 1fe0983f2d cmd/derper,tstest/nettest: skip network-needing test in airplane mode
Not buying wifi on a short flight is a good way to find tests
that require network. Whoops.

Updates #cleanup

Change-Id: Ibe678e9c755d27269ad7206413ffe9971f07d298
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-05-03 08:24:24 -07:00
Brad Fitzpatrick 46f3feae96 ssh/tailssh: plumb health.Tracker in test
In prep for it being required in more places.

Updates #11874

Change-Id: Ib743205fc2a6c6ff3d2c4ed3a2b28cac79156539
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-05-03 08:24:24 -07:00
Brad Fitzpatrick 4fa6cbec27 ssh/tailssh: use ptr.To in test
Updates #cleanup

Change-Id: Ic98ba1b63c8205084b30f59f0ca343788edea5b0
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-05-03 08:24:24 -07:00
Brad Fitzpatrick ee3bd4dbda derp/derphttp, net/netcheck: plumb netmon.Monitor to derp netcheck client
Fixes #11981

Change-Id: I0e15a09f93aefb3cfddbc12d463c1c08b83e09fd
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-05-03 08:24:24 -07:00
Percy Wegmann a03cb866b4 drive: use secret token to authenticate access to file server on localhost
This prevents Mark-of-the-Web bypass attacks in case someone visits the
localhost WebDAV server directly.

Fixes tailscale/corp#19592

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-05-03 09:03:32 -05:00
Percy Wegmann 745fb31bd4 drive: use secret token to authenticate access to file server on localhost
This prevents Mark-of-the-Web bypass attacks in case someone visits the
localhost WebDAV server directly.

Fixes tailscale/corp#19592

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-05-03 09:03:32 -05:00
Percy Wegmann 07e783c7be drive: use secret token to authenticate access to file server on localhost
This prevents Mark-of-the-Web bypass attacks in case someone visits the
localhost WebDAV server directly.

Fixes tailscale/corp#19592

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-05-03 09:03:32 -05:00
Percy Wegmann 3349e86c0a drive: use secret token to authenticate access to file server on localhost
This prevents Mark-of-the-Web bypass attacks in case someone visits the
localhost WebDAV server directly.

Fixes tailscale/corp#19592

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-05-03 09:03:32 -05:00
Percy Wegmann 0c11fd978b drive: use secret token to authenticate access to file server on localhost
This prevents Mark-of-the-Web bypass attacks in case someone visits the
localhost WebDAV server directly.

Fixes tailscale/corp#19592

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-05-03 09:03:32 -05:00
Percy Wegmann 9d22ec0ba2 drive: use secret token to authenticate access to file server on localhost
This prevents Mark-of-the-Web bypass attacks in case someone visits the
localhost WebDAV server directly.

Fixes tailscale/corp#19592

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-05-03 09:03:32 -05:00
Irbe Krumina cd633a7252
cmd/k8s-operator/deploy,k8s-operator: document that metrics are unstable (#11979)
Updates #11292

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-05-03 14:02:10 +01:00
Andrew Dunham f97d0ac994 net/dns/resolver: add better error wrapping
To aid in debugging exactly what's going wrong, instead of the
not-particularly-useful "dns udp query: context deadline exceeded" error
that we currently get.

Updates #3786
Updates #10768
Updates #11620
(etc.)

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I76334bf0681a8a2c72c90700f636c4174931432c
2024-05-02 14:08:05 -04:00
Claire Wang e0287a4b33
wgengine: add exit destination logging enable for wgengine logger (#11952)
Updates tailscale/corp#18625
Co-authored-by: Kevin Liang <kevinliang@tailscale.com>
Signed-off-by: Claire Wang <claire@tailscale.com>
2024-05-02 13:55:05 -04:00
Irbe Krumina 19b31ac9a6
cmd/{k8s-operator,k8s-nameserver},k8s-operator: update nameserver config with records for ingress/egress proxies (#11019)
cmd/k8s-operator: optionally update dnsrecords Configmap with DNS records for proxies.

This commit adds functionality to automatically populate
DNS records for the in-cluster ts.net nameserver
to allow cluster workloads to resolve MagicDNS names
associated with operator's proxies.

The records are created as follows:
* For tailscale Ingress proxies there will be
a record mapping the MagicDNS name of the Ingress
device and each proxy Pod's IP address.
* For cluster egress proxies, configured via
tailscale.com/tailnet-fqdn annotation, there will be
a record for each proxy Pod, mapping
the MagicDNS name of the exposed
tailnet workload to the proxy Pod's IP.

No records will be created for any other proxy types.
Records will only be created if users have configured
the operator to deploy an in-cluster ts.net nameserver
by applying tailscale.com/v1alpha1.DNSConfig.

It is the user's responsibility to add the ts.net nameserver
as a stub nameserver for ts.net DNS names.
https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#configuration-of-stub-domain-and-upstream-nameserver-using-coredns
https://cloud.google.com/kubernetes-engine/docs/how-to/kube-dns#upstream_nameservers

See also https://github.com/tailscale/tailscale/pull/11017

Updates tailscale/tailscale#10499

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-05-02 17:29:46 +01:00
Maisem Ali a49ed2e145 derp,ipn/ipnlocal: stop calling rand.Seed
It's deprecated and using it gets us the old slow behavior
according to https://go.dev/blog/randv2.

> Having eliminated repeatability of the global output stream, Go 1.20
> was also able to make the global generator scale better in programs
> that don’t call rand.Seed, replacing the Go 1 generator with a very
> cheap per-thread wyrand generator already used inside the Go
> runtime. This removed the global mutex and made the top-level
> functions scale much better. Programs that do call rand.Seed fall
> back to the mutex-protected Go 1 generator.
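
In concrete terms, the change amounts to deleting the Seed call and
relying on the auto-seeded generators, e.g. (illustrative only):

```go
package main

import (
	"fmt"
	randv1 "math/rand"
	randv2 "math/rand/v2"
)

func main() {
	// Old pattern removed by this commit: calling Seed forces the global
	// generator onto the slower, mutex-protected Go 1 path.
	//   randv1.Seed(time.Now().UnixNano()) // no longer done

	// Without Seed, the global generator is auto-seeded and uses the
	// cheap per-thread generator described in the randv2 blog post.
	fmt.Println(randv1.Intn(10))

	// math/rand/v2 has no top-level Seed at all.
	fmt.Println(randv2.IntN(10))
}
```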

Updates #7123

Change-Id: Ia5452e66bd16b5457d4b1c290a59294545e13291
Signed-off-by: Maisem Ali <maisem@tailscale.com>
2024-05-02 09:09:09 -07:00
Brad Fitzpatrick 96712e10a7 health, ipn/ipnlocal: move more health warning code into health.Tracker
In prep for making health warnings rich objects with metadata rather
than a bunch of strings, start moving it all into the same place.

We'll still ultimately need the stringified form for the CLI and
LocalAPI for compatibility but we'll next convert all these warnings
into Warnables that have severity levels and such, and legacy
stringification will just be something each Warnable thing can do.

Updates #4136

Change-Id: I83e189435daae3664135ed53c98627c66e9e53da
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-05-01 15:03:21 -07:00
Andrew Dunham be663c84c1 net/tstun: rename natConfig to peerConfig
So that we can use this for additional, non-NAT configuration without it
being confusing.

Updates #cleanup

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I1658d59c9824217917a94ee76d2d08f0a682986f
2024-05-01 15:01:52 -04:00
Andrew Dunham 10497acc95 net/tstun: refactor natConfig to not be per-family
This was a holdover from the older, pre-BART days and is no longer
necessary.

Updates #cleanup

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I71b892bab1898077767b9ff51cef33d59c08faf8
2024-05-01 14:06:35 -04:00
Andrew Lytvynov 13e1355546
scripts/installer.sh: remove unnecessary escaping in grep (#11950)
Updates #11263

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2024-05-01 11:09:10 -06:00
Percy Wegmann 843afe7c53 ssh/tailssh: add integration test
Updates tailscale/corp#11854

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-05-01 11:19:36 -05:00
Jonathan Nobels 45b9aa0d83
net/netmon: remove spammy log statements (#11953)
Updates tailscale/corp#18960

Tests in corp called us using the wrong logging calls.  Removed.
This is logged downstream anyway.

Signed-off-by: Jonathan Nobels <jonathan@tailscale.com>
2024-05-01 12:02:16 -04:00
Paul Scott 4c08410011 cmd/tailscale/cli: set localClient.UseSocketOnly during flag parsing
This configures localClient correctly during flag parsing, so that the --socket
option is effective when generating tab-completion results. For example, the
following would not connect to the system Tailscale for tab-completion results:

    tailscale --socket=/tmp/tailscaled.socket switch <TAB>

Updates #3793

Signed-off-by: Paul Scott <paul@tailscale.com>
2024-05-01 17:01:03 +01:00
Paul Scott ba34943133 cmd/tailscale/cli/ffcomplete: omit and clean completion results
Updates #3793

Signed-off-by: Paul Scott <paul@tailscale.com>
2024-05-01 17:01:03 +01:00
Jonathan Nobels fa1303d632
net/netmon: swap to swift-derived defaultRoute on macos (#11936)
Updates tailscale/corp#18960

iOS uses Apple's NetworkMonitor to track the default interface and
there's no reason we shouldn't also use this on macOS, for the same
reasons noted in the comments for why this change was made on iOS.

This eliminates the need to load and parse the routing table when
querying the defaultRouter() in almost all cases.

A slight modification here (on both platforms) to fall back to the default
BSD logic in the unhappy path rather than making assumptions that
may not hold. If netmon is eventually parsing AF_ROUTE and able
to give a consistently correct answer for the default interface index,
we can fall back to that and eliminate the Swift dependency.

Signed-off-by: Jonathan Nobels <jonathan@tailscale.com>
2024-05-01 09:20:09 -04:00
Gabe Gorelick de85610be0
cmd/k8s-operator/deploy/chart: allow users to configure additional labels for the operator's Pod via Helm chart values.

Fixes #11947

Signed-off-by: Gabe Gorelick <gabe@hightouch.io>
2024-05-01 10:37:21 +01:00
Percy Wegmann 2648d475d7 drive: don't allow DELETE on read-only shares
Fixes tailscale/corp#19646

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-04-30 22:29:33 -05:00
Brad Fitzpatrick 7455e027e9 util/slicesx: add AppendMatching
We had this in a different repo, but we're moving it here, as this is a more
fitting package.
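
Since the signature isn't shown here, the sketch below is a self-contained
stand-in for the idea behind AppendMatching rather than the real
util/slicesx API:

```go
package main

import (
	"fmt"
	"strings"
)

// appendMatching appends to dst the elements of src for which f reports
// true; the real helper's signature may differ.
func appendMatching[T any](dst, src []T, f func(T) bool) []T {
	for _, v := range src {
		if f(v) {
			dst = append(dst, v)
		}
	}
	return dst
}

func main() {
	hosts := []string{"derp1.example", "node-a", "derp2.example"}
	derps := appendMatching[string](nil, hosts, func(s string) bool {
		return strings.HasPrefix(s, "derp")
	})
	fmt.Println(derps) // [derp1.example derp2.example]
}
```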

Updates #cleanup

Change-Id: I5fb9b10e465932aeef5841c67deba4d77d473d57
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-30 16:47:21 -07:00
Andrew Dunham fe009c134e ipn/ipnlocal: reset the dialPlan only when the URL is unchanged
Also, reset it in a few more places (e.g. logout, new blank profiles,
etc.) to avoid a few more cases where a pre-existing dialPlan can cause
a new Headscale server to take 10+ seconds to connect.

Updates #11938

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I3095173a5a3d9720507afe4452548491e9e45a3e
2024-04-30 18:33:48 -04:00
Brad Fitzpatrick c47f9303b0 types/views: use slices.Contains{,Func}
Updates #8419

Change-Id: Ib1a9cb3fb425284b7e02684072a4e7a35975f35c
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-30 15:29:23 -07:00
Joe Tsai 5db80cf2d8
syncs: fix AtomicValue for interface kinds (#11943)
If AtomicValue[T] is used with a T that is an interface kind,
then Store may panic if different concrete types are ever stored.

Fix this by always wrapping in a concrete type.
Technically, this is only needed if T is an interface kind,
but there is no harm in doing it also for non-interface kinds.
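
The underlying constraint is that sync/atomic.Value requires every Store
to use the same concrete type. A minimal sketch of the wrapping trick
(hypothetical wrapper, not the actual syncs.AtomicValue code):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// box gives every stored value the same concrete type, whatever concrete
// type sits behind the interface T.
type box[T any] struct{ val T }

// wrappedValue is a hypothetical stand-in for syncs.AtomicValue that
// always stores the concrete box[T] wrapper.
type wrappedValue[T any] struct{ v atomic.Value }

func (w *wrappedValue[T]) Store(val T) { w.v.Store(box[T]{val: val}) }

func (w *wrappedValue[T]) Load() (T, bool) {
	b, ok := w.v.Load().(box[T])
	return b.val, ok
}

type strVal string

func (s strVal) String() string { return string(s) }

type structVal struct{ s string }

func (s structVal) String() string { return s.s }

func main() {
	var w wrappedValue[fmt.Stringer]
	// Two different concrete types behind the same interface: without the
	// box, the second Store would panic ("inconsistently typed value").
	w.Store(strVal("first"))
	w.Store(structVal{s: "second"})
	if s, ok := w.Load(); ok {
		fmt.Println(s) // second
	}
}
```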

Updates #cleanup

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2024-04-30 14:27:58 -07:00
Irbe Krumina 44aa809cb0
cmd/{k8s-nameserver,k8s-operator},k8s-operator: add a kube nameserver, make operator deploy it (#11919)
* cmd/k8s-nameserver,k8s-operator: add a nameserver that can resolve ts.net DNS names in cluster.

Adds a simple nameserver that can respond to A record queries for ts.net DNS names.
It can respond to queries from in-memory records, populated from a ConfigMap
mounted at /config. It dynamically updates its records as the ConfigMap
contents change.
It will respond with NXDOMAIN to queries for any other record types
(AAAA to be implemented in the future).
It can respond to queries over UDP or TCP. It runs a miekg/dns
DNS server with a single registered handler for ts.net domain names.
Queries for other domain names will be refused.
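
For a sense of the moving parts, a minimal miekg/dns server of the same
general shape might look like this (a sketch under assumptions: the record
map, port, and hostnames here are illustrative, not the operator's actual
nameserver code):

```go
package main

import (
	"log"
	"net"

	"github.com/miekg/dns"
)

// records is a hypothetical in-memory table; the real nameserver
// populates and refreshes its records from the ConfigMap at /config.
var records = map[string]net.IP{
	"ingress.tailnet-xyz.ts.net.": net.ParseIP("10.0.12.7"),
}

func handle(w dns.ResponseWriter, r *dns.Msg) {
	m := new(dns.Msg)
	m.SetReply(r)
	if len(r.Question) == 1 {
		q := r.Question[0]
		if ip, ok := records[q.Name]; ok && q.Qtype == dns.TypeA {
			m.Answer = append(m.Answer, &dns.A{
				Hdr: dns.RR_Header{Name: q.Name, Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 300},
				A:   ip,
			})
		} else {
			// Unknown names and non-A queries get NXDOMAIN for now.
			m.SetRcode(r, dns.RcodeNameError)
		}
	}
	w.WriteMsg(m)
}

func main() {
	// One handler registered for the ts.net zone; queries for other
	// zones never reach it.
	dns.HandleFunc("ts.net.", handle)
	go func() {
		log.Fatal((&dns.Server{Addr: ":1053", Net: "tcp"}).ListenAndServe())
	}()
	log.Fatal((&dns.Server{Addr: ":1053", Net: "udp"}).ListenAndServe())
}
```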

The intended use of this is:
1) to allow non-tailnet cluster workloads to talk to HTTPS tailnet
services exposed via Tailscale operator egress over HTTPS
2) to allow non-tailnet cluster workloads to talk to workloads in
the same cluster that have been exposed to tailnet over their
MagicDNS names but on their cluster IPs.

DNSConfig CRD can be used to configure
the operator to deploy kube nameserver (./cmd/k8s-nameserver) to cluster.

Updates tailscale/tailscale#10499

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-04-30 20:18:23 +01:00
Shaw Drastin 1fe073098c
Reset dial plan when switching profile (#11933)
When switching profile, the server URL can change (e.g.
because of switching to a self-hosted headscale instance).

If it is not reset here, dial plans returned by old
server (e.g. tailscale control server) will be used to
connect to new server (e.g. self-hosted headscale server),
and the register request will be blocked by it until
timeout, leading to very slow profile switches.

Updates #11938

Signed-off-by: Shaw Drastin <showier.drastic0a@icloud.com>
2024-04-30 13:42:49 -04:00
Jordan Whited a47ce618bd
net/tstun: implement env var for disabling UDP GRO on Linux (#11924)
Certain device drivers (e.g. vxlan, geneve) do not properly handle
coalesced UDP packets later in the stack, resulting in packet loss.

Updates #11026

Signed-off-by: Jordan Whited <jordan@tailscale.com>
2024-04-30 09:14:02 -07:00
Mario Minardi ec04c677c0
api.md: add documentation for new split DNS endpoints (#11922)
Add documentation for GET/PATCH/PUT `api/v2/tailnet/<ID>/dns/split-dns`.
These endpoints allow for reading, partially updating, and replacing the
split DNS settings for a given tailnet.

Updates https://github.com/tailscale/corp/issues/19483

Signed-off-by: Mario Minardi <mario@tailscale.com>
2024-04-30 09:42:33 -06:00
Andrew Lytvynov 7ba8f03936
ipn/ipnlocal: fix TestOnTailnetDefaultAutoUpdate on unsupported platforms (#11921)
Fixes #11894

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2024-04-29 14:35:29 -06:00
Irbe Krumina 7d9c3f9897
cmd/k8s-operator/deploy/manifests: check if IPv6 module is loaded before using it (#11867)
Before attempting to enable IPv6 forwarding in the proxy init container,
check if the relevant module is present; otherwise the container crashes
on hosts that don't have it.

Updates #11860

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-04-29 21:12:23 +01:00
Andrew Lytvynov d02f1be46a
scripts/installer.sh: enable Alpine community repo if needed (#11837)
The tailscale package is in the community Alpine repo. Check if it's
commented out in `/etc/apk/repositories` and run `setup-apkrepos -c -1`
if it's not.

Fixes #11263

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2024-04-29 13:23:46 -06:00
Claire Wang 5254f6de06
tailcfg: add suggest exit node UI node attribute (#11918)
Add node attribute to determine whether or not to show suggested exit
node in UI.
Updates tailscale/corp#19515

Signed-off-by: Claire Wang <claire@tailscale.com>
2024-04-29 15:20:52 -04:00
Andrew Lytvynov ce5c80d0fe
clientupdate: exec systemctl instead of using dbus to restart (#11923)
Shell out to "systemctl", which lets us drop an extra dependency.

Updates https://github.com/tailscale/corp/issues/18935

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2024-04-29 13:16:40 -06:00
Fran Bull 6a0fbacc28 appc: setting AdvertiseRoutes explicitly discards app connector routes
This fixes bugs where, after using the CLI to set AdvertiseRoutes, users
found that they had to restart tailscaled before the app connector
would advertise previously learned routes again. It also seems more
in line with user expectations.

Fixes #11006
Signed-off-by: Fran Bull <fran@tailscale.com>
2024-04-29 11:40:04 -07:00
Fran Bull c27dc1ca31 appc: unadvertise routes when reconfiguring app connector
If the controlknob to persist app connector routes is enabled, when
reconfiguring an app connector unadvertise routes that are no longer
relevant.

Updates #11008
Signed-off-by: Fran Bull <fran@tailscale.com>
2024-04-29 11:40:04 -07:00
Fran Bull fea2e73bc1 appc: write discovered domains to StateStore
If the controlknob is on.
This will allow us to remove discovered routes associated with a
particular domain.

Updates #11008
Signed-off-by: Fran Bull <fran@tailscale.com>
2024-04-29 11:40:04 -07:00
Fran Bull 1bd1b387b2 appc: add flag shouldStoreRoutes and controlknob for it
When an app connector is reconfigured and domains to route are removed,
we would like to no longer advertise routes that were discovered for
those domains. In order to do this we plan to store which routes were
discovered for which domains.

Add a controlknob so that we can enable/disable the new behavior.

Updates #11008
Signed-off-by: Fran Bull <fran@tailscale.com>
2024-04-29 11:40:04 -07:00
Fran Bull 79836e7bfd appc: add RouteInfo struct and persist it to StateStore
Lays the groundwork for the ability to persist app connectors discovered
routes, which will allow us to stop advertising routes for a domain if
the app connector no longer monitors that domain.

Updates #11008
Signed-off-by: Fran Bull <fran@tailscale.com>
2024-04-29 11:40:04 -07:00
Andrew Dunham b2b49cb3d5 wgengine/wgcfg/nmcfg: skip expired peers
Updates tailscale/corp#19315

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I1ad0c8796efe3dd456280e51efaf81f6d2049772
2024-04-29 13:48:00 -04:00
Mario Minardi 74c399483c
api.md: explicitly set content-type headers in POST CURL examples (#11916)
Explicitly set `-H "Content-Type: application/json"` in CURL examples
for POST endpoints as the default content type used by CURL is otherwise
`application/x-www-form-urlencoded` and these endpoints expect JSON data.

Updates https://github.com/tailscale/tailscale/issues/11914

Signed-off-by: Mario Minardi <mario@tailscale.com>
2024-04-29 10:25:52 -06:00
Irbe Krumina 1452faf510
cmd/containerboot,kube,ipn/store/kubestore: allow interactive login on kube, check Secret create perms, allow empty state Secret (#11326)
cmd/containerboot,kube,ipn/store/kubestore: allow interactive login and empty state Secrets, check perms

* Allow users to pre-create empty state Secrets

* Add a fake internal kube client, test functionality that has dependencies on kube client operations.

* Fix an issue where interactive login was not allowed in an edge case where state Secret does not exist

* Make the CheckSecretPermissions method report whether we have permissions to create/patch a Secret if it's determined that these operations will be needed

Updates tailscale/tailscale#11170

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-04-29 17:03:48 +01:00
127 changed files with 5932 additions and 1051 deletions


@ -32,7 +32,6 @@ jobs:
- "ubuntu:18.04"
- "ubuntu:20.04"
- "ubuntu:22.04"
- "ubuntu:22.10"
- "ubuntu:23.04"
- "elementary/docker:stable"
- "elementary/docker:unstable"
@ -91,7 +90,10 @@ jobs:
|| contains(matrix.image, 'parrotsec')
|| contains(matrix.image, 'kalilinux')
- name: checkout
uses: actions/checkout@v4
# We cannot use v4, as it requires a newer glibc version than some of the
# tested images provide. See
# https://github.com/actions/checkout/issues/1487
uses: actions/checkout@v3
- name: run installer
run: scripts/installer.sh
# Package installation can fail in docker because systemd is not running

.gitignore (vendored): 1 changed line

@ -9,6 +9,7 @@
cmd/tailscale/tailscale
cmd/tailscaled/tailscaled
ssh/tailssh/testcontainers/tailscaled
# Test binary, built with `go test -c`
*.test


@ -100,6 +100,26 @@ publishdevoperator: ## Build and publish k8s-operator image to location specifie
@test "${REPO}" != "ghcr.io/tailscale/k8s-operator" || (echo "REPO=... must not be ghcr.io/tailscale/k8s-operator" && exit 1)
TAGS="${TAGS}" REPOS=${REPO} PLATFORM=${PLATFORM} PUSH=true TARGET=operator ./build_docker.sh
publishdevnameserver: ## Build and publish k8s-nameserver image to location specified by ${REPO}
@test -n "${REPO}" || (echo "REPO=... required; e.g. REPO=ghcr.io/${USER}/tailscale" && exit 1)
@test "${REPO}" != "tailscale/tailscale" || (echo "REPO=... must not be tailscale/tailscale" && exit 1)
@test "${REPO}" != "ghcr.io/tailscale/tailscale" || (echo "REPO=... must not be ghcr.io/tailscale/tailscale" && exit 1)
@test "${REPO}" != "tailscale/k8s-nameserver" || (echo "REPO=... must not be tailscale/k8s-nameserver" && exit 1)
@test "${REPO}" != "ghcr.io/tailscale/k8s-nameserver" || (echo "REPO=... must not be ghcr.io/tailscale/k8s-nameserver" && exit 1)
TAGS="${TAGS}" REPOS=${REPO} PLATFORM=${PLATFORM} PUSH=true TARGET=k8s-nameserver ./build_docker.sh
.PHONY: sshintegrationtest
sshintegrationtest: ## Run the SSH integration tests in various Docker containers
@GOOS=linux GOARCH=amd64 go test -tags integrationtest -c ./ssh/tailssh -o ssh/tailssh/testcontainers/tailssh.test && \
GOOS=linux GOARCH=amd64 go build -o ssh/tailssh/testcontainers/tailscaled ./cmd/tailscaled && \
echo "Testing on ubuntu:focal" && docker build --build-arg="BASE=ubuntu:focal" -t ssh-ubuntu-focal ssh/tailssh/testcontainers && \
echo "Testing on ubuntu:jammy" && docker build --build-arg="BASE=ubuntu:jammy" -t ssh-ubuntu-jammy ssh/tailssh/testcontainers && \
echo "Testing on ubuntu:mantic" && docker build --build-arg="BASE=ubuntu:mantic" -t ssh-ubuntu-mantic ssh/tailssh/testcontainers && \
echo "Testing on ubuntu:noble" && docker build --build-arg="BASE=ubuntu:noble" -t ssh-ubuntu-noble ssh/tailssh/testcontainers && \
echo "Testing on fedora:38" && docker build --build-arg="BASE=dokken/fedora-38" -t ssh-fedora-38 ssh/tailssh/testcontainers && \
echo "Testing on fedora:39" && docker build --build-arg="BASE=dokken/fedora-39" -t ssh-fedora-39 ssh/tailssh/testcontainers && \
echo "Testing on fedora:40" && docker build --build-arg="BASE=dokken/fedora-40" -t ssh-fedora-40 ssh/tailssh/testcontainers
help: ## Show this help
@echo "\nSpecify a command. The choices are:\n"
@grep -hE '^[0-9a-zA-Z_-]+:.*?## .*$$' ${MAKEFILE_LIST} | awk 'BEGIN {FS = ":.*?## "}; {printf " \033[0;36m%-20s\033[m %s\n", $$1, $$2}'

api.md: 188 changed lines

@ -93,6 +93,10 @@ The Tailscale API does not currently support pagination. All results are returne
- **Search paths**
- Get search paths: [`GET /api/v2/tailnet/{tailnet}/dns/searchpaths`](#get-search-paths)
- Set search paths: [`POST /api/v2/tailnet/{tailnet}/dns/searchpaths`](#set-search-paths)
- **Split DNS**
- Get split DNS: [`GET /api/v2/tailnet/{tailnet}/dns/split-dns`](#get-split-dns)
- Update split DNS: [`PATCH /api/v2/tailnet/{tailnet}/dns/split-dns`](#update-split-dns)
- Set split DNS: [`PUT /api/v2/tailnet/{tailnet}/dns/split-dns`](#set-split-dns)
# Device
@ -428,7 +432,8 @@ The ID of the device.
```sh
curl -X POST 'https://api.tailscale.com/api/v2/device/12345/expire' \
-u "tskey-api-xxxxx:"
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json"
```
### Response
@ -508,6 +513,7 @@ The new list of enabled subnet routes.
```sh
curl "https://api.tailscale.com/api/v2/device/11055/routes" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '{"routes": ["10.0.0.0/16", "192.168.1.0/24"]}'
```
@ -556,6 +562,7 @@ Specify whether the device is authorized. False to deauthorize an authorized dev
```sh
curl "https://api.tailscale.com/api/v2/device/11055/authorized" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '{"authorized": true}'
```
@ -605,6 +612,7 @@ The new list of tags for the device.
```sh
curl "https://api.tailscale.com/api/v2/device/11055/tags" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '{"tags": ["tag:foo", "tag:bar"]}'
```
@ -667,6 +675,7 @@ This returns a 2xx code on success, with an empty JSON object in the response bo
```sh
curl "https://api.tailscale.com/api/v2/device/11055/key" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '{"keyExpiryDisabled": true}'
```
@ -712,6 +721,7 @@ This returns a 2xx code on success, with an empty JSON object in the response bo
```sh
curl "https://api.tailscale.com/api/v2/device/11055/ip" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '{"ipv4": "100.80.0.1"}'
```
@ -808,6 +818,7 @@ A number value is an integer and must be a JSON safe number (up to 2^53 - 1).
```
curl "https://api.tailscale.com/api/v2/device/11055/attributes/custom:my_attribute" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '{"value": "my_value"}'
```
@ -918,7 +929,7 @@ The response will contain a JSON object with the fields:
```sh
curl "https://api.tailscale.com/api/v2/tailnet/example.com/acl" \
-u "tskey-api-xxxxx:" \
-u "tskey-api-xxxxx:"
```
### Response in HuJSON format
@ -958,8 +969,8 @@ Etag: "e0b2816b418b3f266309d94426ac7668ab3c1fa87798785bf82f1085cc2f6d9c"
```sh
curl "https://api.tailscale.com/api/v2/tailnet/example.com/acl" \
-u "tskey-api-xxxxx:" \
-H "Accept: application/json" \
-u "tskey-api-xxxxx:"
-H "Accept: application/json"
```
### Response in JSON format
@ -1000,7 +1011,7 @@ Etag: "e0b2816b418b3f266309d94426ac7668ab3c1fa87798785bf82f1085cc2f6d9c"
```sh
curl "https://api.tailscale.com/api/v2/tailnet/example.com/acl?details=1" \
-u "tskey-api-xxxxx:" \
-u "tskey-api-xxxxx:"
```
### Response (with details)
@ -1072,6 +1083,7 @@ Learn about the [ACL policy properties you can include in the request](https://t
POST /api/v2/tailnet/example.com/acl
curl "https://api.tailscale.com/api/v2/tailnet/example.com/acl" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
-H "If-Match: \"e0b2816b418b3f266309d94426ac7668ab3c1fa87798785bf82f1085cc2f6d9c\""
--data-binary '// Example/default ACLs for unrestricted connections.
{
@ -1184,6 +1196,7 @@ Learn about [tailnet policy file entries](https://tailscale.com/kb/1018).
```sh
curl "https://api.tailscale.com/api/v2/tailnet/example.com/acl/preview?previewFor=user1@example.com&type=user" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '// Example/default ACLs for unrestricted connections.
{
// Declare tests to check functionality of ACL rules. User must be a valid user with registered machines.
@ -1267,6 +1280,7 @@ Learn more about [tailnet policy file tests](https://tailscale.com/kb/1018/#test
```sh
curl "https://api.tailscale.com/api/v2/tailnet/example.com/acl/validate" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '
[
{"src": "user1@example.com", "accept": ["example-host-1:22"], "deny": ["example-host-2:100"]}
@ -1288,6 +1302,7 @@ The `POST` body should be a JSON object with a JSON or HuJSON representation of
```sh
curl "https://api.tailscale.com/api/v2/tailnet/example.com/acl/validate" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '
{
"acls": [
@ -1548,6 +1563,7 @@ Note the following about required vs. optional values:
```jsonc
curl "https://api.tailscale.com/api/v2/tailnet/example.com/keys" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '
{
"capabilities": {
@ -1759,6 +1775,7 @@ Adding DNS nameservers with the MagicDNS on:
```sh
curl "https://api.tailscale.com/api/v2/tailnet/example.com/dns/nameservers" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '{"dns": ["8.8.8.8"]}'
```
@ -1778,6 +1795,7 @@ The response is a JSON object containing the new list of nameservers and the sta
```sh
curl "https://api.tailscale.com/api/v2/tailnet/example.com/dns/nameservers" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '{"dns": []}'
```
@ -1864,6 +1882,7 @@ The DNS preferences in JSON. Currently, MagicDNS is the only setting available:
```sh
curl "https://api.tailscale.com/api/v2/tailnet/example.com/dns/preferences" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '{"magicDNS": true}'
```
@ -1947,6 +1966,7 @@ Specify a list of search paths in a JSON object:
```sh
curl "https://api.tailscale.com/api/v2/tailnet/example.com/dns/searchpaths" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '{"searchPaths": ["user1.example.com", "user2.example.com"]}'
```
@ -1959,3 +1979,161 @@ The response is a JSON object containing the new list of search paths.
"searchPaths": ["user1.example.com", "user2.example.com"],
}
```
## Get split DNS
```http
GET /api/v2/tailnet/{tailnet}/dns/split-dns
```
Retrieves the split DNS settings, a map from domains to lists of nameservers, currently set for the given tailnet.
### Parameters
#### `tailnet` (required in URL path)
The tailnet organization name.
### Request example
```sh
curl "https://api.tailscale.com/api/v2/tailnet/example.com/dns/split-dns" \
-u "tskey-api-xxxxx:"
```
### Response
```jsonc
{
"example.com": ["1.1.1.1", "1.2.3.4"],
"other.com": ["2.2.2.2"]
}
```
## Update split DNS
```http
PATCH /api/v2/tailnet/{tailnet}/dns/split-dns
```
Performs partial updates of the split DNS settings for a given tailnet. Only domains specified in the request map will be modified. Setting the value of a mapping to "null" clears the nameservers for that domain.
### Parameters
#### `tailnet` (required in URL path)
The tailnet organization name.
#### `PATCH` body format
Specify mappings from domain name to a list of nameservers in a JSON object:
```jsonc
{
"example.com": ["1.1.1.1", "1.2.3.4"],
"other.com": ["2.2.2.2"]
}
```
### Request example: updating split DNS settings for multiple domains
```sh
curl -X PATCH "https://api.tailscale.com/api/v2/tailnet/example.com/dns/split-dns" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '{"example.com": ["1.1.1.1", "1.2.3.4"], "other.com": ["2.2.2.2"]}'
```
### Response: updating split DNS settings for multiple domains
The response is a JSON object containing the updated map of split DNS settings.
```jsonc
{
"example.com": ["1.1.1.1", "1.2.3.4"],
"other.com": ["2.2.2.2"],
<existing unmodified key / value pairs>
}
```
### Request example: unsetting nameservers for a domain
```sh
curl -X PATCH "https://api.tailscale.com/api/v2/tailnet/example.com/dns/split-dns" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '{"example.com": null}'
```
### Response: unsetting nameservers for a domain
The response is a JSON object containing the updated map of split DNS settings.
```jsonc
{
<existing unmodified key / value pairs without example.com>
}
```
## Set split DNS
```http
PUT /api/v2/tailnet/{tailnet}/dns/split-dns
```
Replaces the split DNS settings for a given tailnet. Setting the value of a mapping to "null" clears the nameservers for that domain. Sending an empty object clears nameservers for all domains.
### Parameters
#### `tailnet` (required in URL path)
The tailnet organization name.
#### `PUT` body format
Specify mappings from domain name to a list of nameservers in a JSON object:
```jsonc
{
"example.com": ["1.2.3.4"],
"other.com": ["2.2.2.2"]
}
```
### Request example: setting multiple domains
```sh
curl -X PUT "https://api.tailscale.com/api/v2/tailnet/example.com/dns/split-dns" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '{"example.com": ["1.2.3.4"], "other.com": ["2.2.2.2"]}'
```
### Response: setting multiple domains
The response is a JSON object containing the updated map of split DNS settings.
```jsonc
{
"example.com": ["1.2.3.4"],
"other.com": ["2.2.2.2"]
}
```
### Request example: unsetting all domains
```sh
curl -X PUT "https://api.tailscale.com/api/v2/tailnet/example.com/dns/split-dns" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '{}'
```
### Response: unsetting all domains
The response is a JSON object containing the updated map of split DNS settings.
```jsonc
{
}
```


@ -23,6 +23,7 @@ import (
"tailscale.com/util/dnsname"
"tailscale.com/util/execqueue"
"tailscale.com/util/mak"
"tailscale.com/util/slicesx"
)
// RouteAdvertiser is an interface that allows the AppConnector to advertise
@ -36,6 +37,19 @@ type RouteAdvertiser interface {
UnadvertiseRoute(...netip.Prefix) error
}
// RouteInfo is a data structure used to persist the in memory state of an AppConnector
// so that we can know, even after a restart, which routes came from ACLs and which were
// learned from domains.
type RouteInfo struct {
// Control is the routes from the 'routes' section of an app connector acl.
Control []netip.Prefix `json:",omitempty"`
// Domains are the routes discovered by observing DNS lookups for configured domains.
Domains map[string][]netip.Addr `json:",omitempty"`
// Wildcards are the configured DNS lookup domains to observe. When a DNS query matches Wildcards,
// its result is added to Domains.
Wildcards []string `json:",omitempty"`
}
// AppConnector is an implementation of an AppConnector that performs
// its function as a subsystem inside of a tailscale node. At the control plane
// side App Connector routing is configured in terms of domains rather than IP
@ -49,6 +63,9 @@ type AppConnector struct {
logf logger.Logf
routeAdvertiser RouteAdvertiser
// storeRoutesFunc will be called to persist routes if it is not nil.
storeRoutesFunc func(*RouteInfo) error
// mu guards the fields that follow
mu sync.Mutex
@ -67,11 +84,46 @@ type AppConnector struct {
}
// NewAppConnector creates a new AppConnector.
func NewAppConnector(logf logger.Logf, routeAdvertiser RouteAdvertiser) *AppConnector {
return &AppConnector{
func NewAppConnector(logf logger.Logf, routeAdvertiser RouteAdvertiser, routeInfo *RouteInfo, storeRoutesFunc func(*RouteInfo) error) *AppConnector {
ac := &AppConnector{
logf: logger.WithPrefix(logf, "appc: "),
routeAdvertiser: routeAdvertiser,
storeRoutesFunc: storeRoutesFunc,
}
if routeInfo != nil {
ac.domains = routeInfo.Domains
ac.wildcards = routeInfo.Wildcards
ac.controlRoutes = routeInfo.Control
}
return ac
}
// ShouldStoreRoutes returns true if the appconnector was created with the controlknob on
// and is storing its discovered routes persistently.
func (e *AppConnector) ShouldStoreRoutes() bool {
return e.storeRoutesFunc != nil
}
// storeRoutesLocked takes the current state of the AppConnector and persists it
func (e *AppConnector) storeRoutesLocked() error {
if !e.ShouldStoreRoutes() {
return nil
}
return e.storeRoutesFunc(&RouteInfo{
Control: e.controlRoutes,
Domains: e.domains,
Wildcards: e.wildcards,
})
}
// ClearRoutes removes all route state from the AppConnector.
func (e *AppConnector) ClearRoutes() error {
e.mu.Lock()
defer e.mu.Unlock()
e.controlRoutes = nil
e.domains = nil
e.wildcards = nil
return e.storeRoutesLocked()
}
// UpdateDomainsAndRoutes starts an asynchronous update of the configuration
@ -125,10 +177,26 @@ func (e *AppConnector) updateDomains(domains []string) {
for _, wc := range e.wildcards {
if dnsname.HasSuffix(d, wc) {
e.domains[d] = addrs
delete(oldDomains, d)
break
}
}
}
// Everything left in oldDomains is a domain we're no longer tracking
// and if we are storing route info we can unadvertise the routes
if e.ShouldStoreRoutes() {
toRemove := []netip.Prefix{}
for _, addrs := range oldDomains {
for _, a := range addrs {
toRemove = append(toRemove, netip.PrefixFrom(a, a.BitLen()))
}
}
if err := e.routeAdvertiser.UnadvertiseRoute(toRemove...); err != nil {
e.logf("failed to unadvertise routes on domain removal: %v: %v: %v", xmaps.Keys(oldDomains), toRemove, err)
}
}
e.logf("handling domains: %v and wildcards: %v", xmaps.Keys(e.domains), e.wildcards)
}
@ -152,6 +220,14 @@ func (e *AppConnector) updateRoutes(routes []netip.Prefix) {
var toRemove []netip.Prefix
// If we're storing routes and know e.controlRoutes is a good
// representation of what should be in AdvertisedRoutes we can stop
// advertising routes that used to be in e.controlRoutes but are not
// in routes.
if e.ShouldStoreRoutes() {
toRemove = routesWithout(e.controlRoutes, routes)
}
nextRoute:
for _, r := range routes {
for _, addr := range e.domains {
@ -170,6 +246,9 @@ nextRoute:
}
e.controlRoutes = routes
if err := e.storeRoutesLocked(); err != nil {
e.logf("failed to store route info: %v", err)
}
}
// Domains returns the currently configured domain list.
@ -380,6 +459,9 @@ func (e *AppConnector) scheduleAdvertisement(domain string, routes ...netip.Pref
e.logf("[v2] advertised route for %v: %v", domain, addr)
}
}
if err := e.storeRoutesLocked(); err != nil {
e.logf("failed to store route info: %v", err)
}
})
}
@ -400,3 +482,15 @@ func (e *AppConnector) addDomainAddrLocked(domain string, addr netip.Addr) {
func compareAddr(l, r netip.Addr) int {
return l.Compare(r)
}
// routesWithout returns a without b where a and b
// are unsorted slices of netip.Prefix
func routesWithout(a, b []netip.Prefix) []netip.Prefix {
m := make(map[netip.Prefix]bool, len(b))
for _, p := range b {
m[p] = true
}
return slicesx.Filter(make([]netip.Prefix, 0, len(a)), a, func(p netip.Prefix) bool {
return !m[p]
})
}


@ -17,194 +17,238 @@ import (
"tailscale.com/util/must"
)
func fakeStoreRoutes(*RouteInfo) error { return nil }
func TestUpdateDomains(t *testing.T) {
ctx := context.Background()
a := NewAppConnector(t.Logf, nil)
a.UpdateDomains([]string{"example.com"})
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, &appctest.RouteCollector{}, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, &appctest.RouteCollector{}, nil, nil)
}
a.UpdateDomains([]string{"example.com"})
a.Wait(ctx)
if got, want := a.Domains().AsSlice(), []string{"example.com"}; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
a.Wait(ctx)
if got, want := a.Domains().AsSlice(), []string{"example.com"}; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
addr := netip.MustParseAddr("192.0.0.8")
a.domains["example.com"] = append(a.domains["example.com"], addr)
a.UpdateDomains([]string{"example.com"})
a.Wait(ctx)
addr := netip.MustParseAddr("192.0.0.8")
a.domains["example.com"] = append(a.domains["example.com"], addr)
a.UpdateDomains([]string{"example.com"})
a.Wait(ctx)
if got, want := a.domains["example.com"], []netip.Addr{addr}; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
if got, want := a.domains["example.com"], []netip.Addr{addr}; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
// domains are explicitly downcased on set.
a.UpdateDomains([]string{"UP.EXAMPLE.COM"})
a.Wait(ctx)
if got, want := xmaps.Keys(a.domains), []string{"up.example.com"}; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
// domains are explicitly downcased on set.
a.UpdateDomains([]string{"UP.EXAMPLE.COM"})
a.Wait(ctx)
if got, want := xmaps.Keys(a.domains), []string{"up.example.com"}; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
}
}
func TestUpdateRoutes(t *testing.T) {
ctx := context.Background()
rc := &appctest.RouteCollector{}
a := NewAppConnector(t.Logf, rc)
a.updateDomains([]string{"*.example.com"})
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
rc := &appctest.RouteCollector{}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
a.updateDomains([]string{"*.example.com"})
// This route should be collapsed into the range
a.ObserveDNSResponse(dnsResponse("a.example.com.", "192.0.2.1"))
a.Wait(ctx)
// This route should be collapsed into the range
a.ObserveDNSResponse(dnsResponse("a.example.com.", "192.0.2.1"))
a.Wait(ctx)
if !slices.Equal(rc.Routes(), []netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")}) {
t.Fatalf("got %v, want %v", rc.Routes(), []netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")})
}
if !slices.Equal(rc.Routes(), []netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")}) {
t.Fatalf("got %v, want %v", rc.Routes(), []netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")})
}
// This route should not be collapsed or removed
a.ObserveDNSResponse(dnsResponse("b.example.com.", "192.0.0.1"))
a.Wait(ctx)
// This route should not be collapsed or removed
a.ObserveDNSResponse(dnsResponse("b.example.com.", "192.0.0.1"))
a.Wait(ctx)
routes := []netip.Prefix{netip.MustParsePrefix("192.0.2.0/24"), netip.MustParsePrefix("192.0.0.1/32")}
a.updateRoutes(routes)
routes := []netip.Prefix{netip.MustParsePrefix("192.0.2.0/24"), netip.MustParsePrefix("192.0.0.1/32")}
a.updateRoutes(routes)
slices.SortFunc(rc.Routes(), prefixCompare)
rc.SetRoutes(slices.Compact(rc.Routes()))
slices.SortFunc(routes, prefixCompare)
slices.SortFunc(rc.Routes(), prefixCompare)
rc.SetRoutes(slices.Compact(rc.Routes()))
slices.SortFunc(routes, prefixCompare)
// Ensure that the non-matching /32 is preserved, even though it's in the domains table.
if !slices.EqualFunc(routes, rc.Routes(), prefixEqual) {
t.Errorf("added routes: got %v, want %v", rc.Routes(), routes)
}
// Ensure that the non-matching /32 is preserved, even though it's in the domains table.
if !slices.EqualFunc(routes, rc.Routes(), prefixEqual) {
t.Errorf("added routes: got %v, want %v", rc.Routes(), routes)
}
// Ensure that the contained /32 is removed, replaced by the /24.
wantRemoved := []netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")}
if !slices.EqualFunc(rc.RemovedRoutes(), wantRemoved, prefixEqual) {
t.Fatalf("unexpected removed routes: %v", rc.RemovedRoutes())
// Ensure that the contained /32 is removed, replaced by the /24.
wantRemoved := []netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")}
if !slices.EqualFunc(rc.RemovedRoutes(), wantRemoved, prefixEqual) {
t.Fatalf("unexpected removed routes: %v", rc.RemovedRoutes())
}
}
}
func TestUpdateRoutesUnadvertisesContainedRoutes(t *testing.T) {
rc := &appctest.RouteCollector{}
a := NewAppConnector(t.Logf, rc)
mak.Set(&a.domains, "example.com", []netip.Addr{netip.MustParseAddr("192.0.2.1")})
rc.SetRoutes([]netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")})
routes := []netip.Prefix{netip.MustParsePrefix("192.0.2.0/24")}
a.updateRoutes(routes)
for _, shouldStore := range []bool{false, true} {
rc := &appctest.RouteCollector{}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
mak.Set(&a.domains, "example.com", []netip.Addr{netip.MustParseAddr("192.0.2.1")})
rc.SetRoutes([]netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")})
routes := []netip.Prefix{netip.MustParsePrefix("192.0.2.0/24")}
a.updateRoutes(routes)
if !slices.EqualFunc(routes, rc.Routes(), prefixEqual) {
t.Fatalf("got %v, want %v", rc.Routes(), routes)
if !slices.EqualFunc(routes, rc.Routes(), prefixEqual) {
t.Fatalf("got %v, want %v", rc.Routes(), routes)
}
}
}
func TestDomainRoutes(t *testing.T) {
rc := &appctest.RouteCollector{}
a := NewAppConnector(t.Logf, rc)
a.updateDomains([]string{"example.com"})
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
a.Wait(context.Background())
for _, shouldStore := range []bool{false, true} {
rc := &appctest.RouteCollector{}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
a.updateDomains([]string{"example.com"})
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
a.Wait(context.Background())
want := map[string][]netip.Addr{
"example.com": {netip.MustParseAddr("192.0.0.8")},
}
want := map[string][]netip.Addr{
"example.com": {netip.MustParseAddr("192.0.0.8")},
}
if got := a.DomainRoutes(); !reflect.DeepEqual(got, want) {
t.Fatalf("DomainRoutes: got %v, want %v", got, want)
if got := a.DomainRoutes(); !reflect.DeepEqual(got, want) {
t.Fatalf("DomainRoutes: got %v, want %v", got, want)
}
}
}
func TestObserveDNSResponse(t *testing.T) {
ctx := context.Background()
rc := &appctest.RouteCollector{}
a := NewAppConnector(t.Logf, rc)
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
rc := &appctest.RouteCollector{}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
// a has no domains configured, so it should not advertise any routes
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
if got, want := rc.Routes(), ([]netip.Prefix)(nil); !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
// a has no domains configured, so it should not advertise any routes
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
if got, want := rc.Routes(), ([]netip.Prefix)(nil); !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
wantRoutes := []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}
wantRoutes := []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}
a.updateDomains([]string{"example.com"})
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
a.Wait(ctx)
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
a.updateDomains([]string{"example.com"})
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
a.Wait(ctx)
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
// a CNAME record chain should result in a route being added if the chain
// matches a routed domain.
a.updateDomains([]string{"www.example.com", "example.com"})
a.ObserveDNSResponse(dnsCNAMEResponse("192.0.0.9", "www.example.com.", "chain.example.com.", "example.com."))
a.Wait(ctx)
wantRoutes = append(wantRoutes, netip.MustParsePrefix("192.0.0.9/32"))
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
// a CNAME record chain should result in a route being added if the chain
// matches a routed domain.
a.updateDomains([]string{"www.example.com", "example.com"})
a.ObserveDNSResponse(dnsCNAMEResponse("192.0.0.9", "www.example.com.", "chain.example.com.", "example.com."))
a.Wait(ctx)
wantRoutes = append(wantRoutes, netip.MustParsePrefix("192.0.0.9/32"))
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
// a CNAME record chain should result in a route being added if the chain
// even if only found in the middle of the chain
a.ObserveDNSResponse(dnsCNAMEResponse("192.0.0.10", "outside.example.org.", "www.example.com.", "example.org."))
a.Wait(ctx)
wantRoutes = append(wantRoutes, netip.MustParsePrefix("192.0.0.10/32"))
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
// a CNAME record chain should result in a route being added if the chain
// even if only found in the middle of the chain
a.ObserveDNSResponse(dnsCNAMEResponse("192.0.0.10", "outside.example.org.", "www.example.com.", "example.org."))
a.Wait(ctx)
wantRoutes = append(wantRoutes, netip.MustParsePrefix("192.0.0.10/32"))
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
wantRoutes = append(wantRoutes, netip.MustParsePrefix("2001:db8::1/128"))
wantRoutes = append(wantRoutes, netip.MustParsePrefix("2001:db8::1/128"))
a.ObserveDNSResponse(dnsResponse("example.com.", "2001:db8::1"))
a.Wait(ctx)
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
a.ObserveDNSResponse(dnsResponse("example.com.", "2001:db8::1"))
a.Wait(ctx)
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
// don't re-advertise routes that have already been advertised
a.ObserveDNSResponse(dnsResponse("example.com.", "2001:db8::1"))
a.Wait(ctx)
if !slices.Equal(rc.Routes(), wantRoutes) {
t.Errorf("rc.Routes(): got %v; want %v", rc.Routes(), wantRoutes)
}
// don't re-advertise routes that have already been advertised
a.ObserveDNSResponse(dnsResponse("example.com.", "2001:db8::1"))
a.Wait(ctx)
if !slices.Equal(rc.Routes(), wantRoutes) {
t.Errorf("rc.Routes(): got %v; want %v", rc.Routes(), wantRoutes)
}
// don't advertise addresses that are already in a control provided route
pfx := netip.MustParsePrefix("192.0.2.0/24")
a.updateRoutes([]netip.Prefix{pfx})
wantRoutes = append(wantRoutes, pfx)
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.2.1"))
a.Wait(ctx)
if !slices.Equal(rc.Routes(), wantRoutes) {
t.Errorf("rc.Routes(): got %v; want %v", rc.Routes(), wantRoutes)
}
if !slices.Contains(a.domains["example.com"], netip.MustParseAddr("192.0.2.1")) {
t.Errorf("missing %v from %v", "192.0.2.1", a.domains["exmaple.com"])
// don't advertise addresses that are already in a control provided route
pfx := netip.MustParsePrefix("192.0.2.0/24")
a.updateRoutes([]netip.Prefix{pfx})
wantRoutes = append(wantRoutes, pfx)
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.2.1"))
a.Wait(ctx)
if !slices.Equal(rc.Routes(), wantRoutes) {
t.Errorf("rc.Routes(): got %v; want %v", rc.Routes(), wantRoutes)
}
if !slices.Contains(a.domains["example.com"], netip.MustParseAddr("192.0.2.1")) {
t.Errorf("missing %v from %v", "192.0.2.1", a.domains["exmaple.com"])
}
}
}
func TestWildcardDomains(t *testing.T) {
ctx := context.Background()
rc := &appctest.RouteCollector{}
a := NewAppConnector(t.Logf, rc)
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
rc := &appctest.RouteCollector{}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
a.updateDomains([]string{"*.example.com"})
a.ObserveDNSResponse(dnsResponse("foo.example.com.", "192.0.0.8"))
a.Wait(ctx)
if got, want := rc.Routes(), []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}; !slices.Equal(got, want) {
t.Errorf("routes: got %v; want %v", got, want)
}
if got, want := a.wildcards, []string{"example.com"}; !slices.Equal(got, want) {
t.Errorf("wildcards: got %v; want %v", got, want)
}
a.updateDomains([]string{"*.example.com"})
a.ObserveDNSResponse(dnsResponse("foo.example.com.", "192.0.0.8"))
a.Wait(ctx)
if got, want := rc.Routes(), []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}; !slices.Equal(got, want) {
t.Errorf("routes: got %v; want %v", got, want)
}
if got, want := a.wildcards, []string{"example.com"}; !slices.Equal(got, want) {
t.Errorf("wildcards: got %v; want %v", got, want)
}
a.updateDomains([]string{"*.example.com", "example.com"})
if _, ok := a.domains["foo.example.com"]; !ok {
t.Errorf("expected foo.example.com to be preserved in domains due to wildcard")
}
if got, want := a.wildcards, []string{"example.com"}; !slices.Equal(got, want) {
t.Errorf("wildcards: got %v; want %v", got, want)
}
a.updateDomains([]string{"*.example.com", "example.com"})
if _, ok := a.domains["foo.example.com"]; !ok {
t.Errorf("expected foo.example.com to be preserved in domains due to wildcard")
}
if got, want := a.wildcards, []string{"example.com"}; !slices.Equal(got, want) {
t.Errorf("wildcards: got %v; want %v", got, want)
}
// There was an early regression where the wildcard domain was added repeatedly, this guards against that.
a.updateDomains([]string{"*.example.com", "example.com"})
if len(a.wildcards) != 1 {
t.Errorf("expected only one wildcard domain, got %v", a.wildcards)
// There was an early regression where the wildcard domain was added repeatedly, this guards against that.
a.updateDomains([]string{"*.example.com", "example.com"})
if len(a.wildcards) != 1 {
t.Errorf("expected only one wildcard domain, got %v", a.wildcards)
}
}
}
@ -310,3 +354,169 @@ func prefixCompare(a, b netip.Prefix) int {
}
return a.Addr().Compare(b.Addr())
}
func prefixes(in ...string) []netip.Prefix {
toRet := make([]netip.Prefix, len(in))
for i, s := range in {
toRet[i] = netip.MustParsePrefix(s)
}
return toRet
}
func TestUpdateRouteRouteRemoval(t *testing.T) {
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
rc := &appctest.RouteCollector{}
assertRoutes := func(prefix string, routes, removedRoutes []netip.Prefix) {
if !slices.Equal(routes, rc.Routes()) {
t.Fatalf("%s: (shouldStore=%t) routes want %v, got %v", prefix, shouldStore, routes, rc.Routes())
}
if !slices.Equal(removedRoutes, rc.RemovedRoutes()) {
t.Fatalf("%s: (shouldStore=%t) removedRoutes want %v, got %v", prefix, shouldStore, removedRoutes, rc.RemovedRoutes())
}
}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
// nothing has yet been advertised
assertRoutes("appc init", []netip.Prefix{}, []netip.Prefix{})
a.UpdateDomainsAndRoutes([]string{}, prefixes("1.2.3.1/32", "1.2.3.2/32"))
a.Wait(ctx)
// the routes passed to UpdateDomainsAndRoutes have been advertised
assertRoutes("simple update", prefixes("1.2.3.1/32", "1.2.3.2/32"), []netip.Prefix{})
// one route the same, one different
a.UpdateDomainsAndRoutes([]string{}, prefixes("1.2.3.1/32", "1.2.3.3/32"))
a.Wait(ctx)
// old behavior: routes are not removed, resulting routes are both old and new
// (we have dupe 1.2.3.1 routes because the test RouteAdvertiser doesn't have the deduplication
// the real one does)
wantRoutes := prefixes("1.2.3.1/32", "1.2.3.2/32", "1.2.3.1/32", "1.2.3.3/32")
wantRemovedRoutes := []netip.Prefix{}
if shouldStore {
// new behavior: routes are removed, resulting routes are new only
wantRoutes = prefixes("1.2.3.1/32", "1.2.3.1/32", "1.2.3.3/32")
wantRemovedRoutes = prefixes("1.2.3.2/32")
}
assertRoutes("removal", wantRoutes, wantRemovedRoutes)
}
}
func TestUpdateDomainRouteRemoval(t *testing.T) {
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
rc := &appctest.RouteCollector{}
assertRoutes := func(prefix string, routes, removedRoutes []netip.Prefix) {
if !slices.Equal(routes, rc.Routes()) {
t.Fatalf("%s: (shouldStore=%t) routes want %v, got %v", prefix, shouldStore, routes, rc.Routes())
}
if !slices.Equal(removedRoutes, rc.RemovedRoutes()) {
t.Fatalf("%s: (shouldStore=%t) removedRoutes want %v, got %v", prefix, shouldStore, removedRoutes, rc.RemovedRoutes())
}
}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
assertRoutes("appc init", []netip.Prefix{}, []netip.Prefix{})
a.UpdateDomainsAndRoutes([]string{"a.example.com", "b.example.com"}, []netip.Prefix{})
a.Wait(ctx)
// adding domains doesn't immediately cause any routes to be advertised
assertRoutes("update domains", []netip.Prefix{}, []netip.Prefix{})
a.ObserveDNSResponse(dnsResponse("a.example.com.", "1.2.3.1"))
a.ObserveDNSResponse(dnsResponse("a.example.com.", "1.2.3.2"))
a.ObserveDNSResponse(dnsResponse("b.example.com.", "1.2.3.3"))
a.ObserveDNSResponse(dnsResponse("b.example.com.", "1.2.3.4"))
a.Wait(ctx)
// observing dns responses causes routes to be advertised
assertRoutes("observed dns", prefixes("1.2.3.1/32", "1.2.3.2/32", "1.2.3.3/32", "1.2.3.4/32"), []netip.Prefix{})
a.UpdateDomainsAndRoutes([]string{"a.example.com"}, []netip.Prefix{})
a.Wait(ctx)
// old behavior, routes are not removed
wantRoutes := prefixes("1.2.3.1/32", "1.2.3.2/32", "1.2.3.3/32", "1.2.3.4/32")
wantRemovedRoutes := []netip.Prefix{}
if shouldStore {
// new behavior, routes are removed for b.example.com
wantRoutes = prefixes("1.2.3.1/32", "1.2.3.2/32")
wantRemovedRoutes = prefixes("1.2.3.3/32", "1.2.3.4/32")
}
assertRoutes("removal", wantRoutes, wantRemovedRoutes)
}
}
func TestUpdateWildcardRouteRemoval(t *testing.T) {
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
rc := &appctest.RouteCollector{}
assertRoutes := func(prefix string, routes, removedRoutes []netip.Prefix) {
if !slices.Equal(routes, rc.Routes()) {
t.Fatalf("%s: (shouldStore=%t) routes want %v, got %v", prefix, shouldStore, routes, rc.Routes())
}
if !slices.Equal(removedRoutes, rc.RemovedRoutes()) {
t.Fatalf("%s: (shouldStore=%t) removedRoutes want %v, got %v", prefix, shouldStore, removedRoutes, rc.RemovedRoutes())
}
}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
assertRoutes("appc init", []netip.Prefix{}, []netip.Prefix{})
a.UpdateDomainsAndRoutes([]string{"a.example.com", "*.b.example.com"}, []netip.Prefix{})
a.Wait(ctx)
// adding domains doesn't immediately cause any routes to be advertised
assertRoutes("update domains", []netip.Prefix{}, []netip.Prefix{})
a.ObserveDNSResponse(dnsResponse("a.example.com.", "1.2.3.1"))
a.ObserveDNSResponse(dnsResponse("a.example.com.", "1.2.3.2"))
a.ObserveDNSResponse(dnsResponse("1.b.example.com.", "1.2.3.3"))
a.ObserveDNSResponse(dnsResponse("2.b.example.com.", "1.2.3.4"))
a.Wait(ctx)
// observing dns responses causes routes to be advertised
assertRoutes("observed dns", prefixes("1.2.3.1/32", "1.2.3.2/32", "1.2.3.3/32", "1.2.3.4/32"), []netip.Prefix{})
a.UpdateDomainsAndRoutes([]string{"a.example.com"}, []netip.Prefix{})
a.Wait(ctx)
// old behavior, routes are not removed
wantRoutes := prefixes("1.2.3.1/32", "1.2.3.2/32", "1.2.3.3/32", "1.2.3.4/32")
wantRemovedRoutes := []netip.Prefix{}
if shouldStore {
// new behavior, routes are removed for *.b.example.com
wantRoutes = prefixes("1.2.3.1/32", "1.2.3.2/32")
wantRemovedRoutes = prefixes("1.2.3.3/32", "1.2.3.4/32")
}
assertRoutes("removal", wantRoutes, wantRemovedRoutes)
}
}
func TestRoutesWithout(t *testing.T) {
assert := func(msg string, got, want []netip.Prefix) {
if !slices.Equal(want, got) {
t.Errorf("%s: want %v, got %v", msg, want, got)
}
}
assert("empty routes", routesWithout([]netip.Prefix{}, []netip.Prefix{}), []netip.Prefix{})
assert("a empty", routesWithout([]netip.Prefix{}, prefixes("1.1.1.1/32", "1.1.1.2/32")), []netip.Prefix{})
assert("b empty", routesWithout(prefixes("1.1.1.1/32", "1.1.1.2/32"), []netip.Prefix{}), prefixes("1.1.1.1/32", "1.1.1.2/32"))
assert("no overlap", routesWithout(prefixes("1.1.1.1/32", "1.1.1.2/32"), prefixes("1.1.1.3/32", "1.1.1.4/32")), prefixes("1.1.1.1/32", "1.1.1.2/32"))
assert("a has fewer", routesWithout(prefixes("1.1.1.1/32", "1.1.1.2/32"), prefixes("1.1.1.1/32", "1.1.1.2/32", "1.1.1.3/32", "1.1.1.4/32")), []netip.Prefix{})
assert("a has more", routesWithout(prefixes("1.1.1.1/32", "1.1.1.2/32", "1.1.1.3/32", "1.1.1.4/32"), prefixes("1.1.1.1/32", "1.1.1.3/32")), prefixes("1.1.1.2/32", "1.1.1.4/32"))
}
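The assertions above pin down routesWithout as a set difference over prefixes: keep the elements of the first slice that do not appear in the second. The implementation itself is not shown here; a minimal sketch consistent with these cases might look like the following (illustrative reimplementation, not the code under test):
// routesWithoutSketch keeps each prefix of a that is absent from b.
func routesWithoutSketch(a, b []netip.Prefix) []netip.Prefix {
	skip := make(map[netip.Prefix]bool, len(b))
	for _, p := range b {
		skip[p] = true
	}
	out := make([]netip.Prefix, 0, len(a))
	for _, p := range a {
		if !skip[p] {
			out = append(out, p)
		}
	}
	return out
}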

View File

@ -71,6 +71,22 @@ case "$TARGET" in
--target="${PLATFORM}" \
/usr/local/bin/operator
;;
k8s-nameserver)
DEFAULT_REPOS="tailscale/k8s-nameserver"
REPOS="${REPOS:-${DEFAULT_REPOS}}"
go run github.com/tailscale/mkctr \
--gopaths="tailscale.com/cmd/k8s-nameserver:/usr/local/bin/k8s-nameserver" \
--ldflags=" \
-X tailscale.com/version.longStamp=${VERSION_LONG} \
-X tailscale.com/version.shortStamp=${VERSION_SHORT} \
-X tailscale.com/version.gitCommitStamp=${VERSION_GIT_HASH}" \
--base="${BASE}" \
--tags="${TAGS}" \
--repos="${REPOS}" \
--push="${PUSH}" \
--target="${PLATFORM}" \
/usr/local/bin/k8s-nameserver
;;
*)
echo "unknown target: $TARGET"
exit 1

View File

@ -1019,6 +1019,20 @@ func (up *Updater) updateLinuxBinary() error {
return nil
}
func restartSystemdUnit(ctx context.Context) error {
if _, err := exec.LookPath("systemctl"); err != nil {
// Likely not a systemd-managed distro.
return errors.ErrUnsupported
}
if out, err := exec.Command("systemctl", "daemon-reload").CombinedOutput(); err != nil {
return fmt.Errorf("systemctl daemon-reload failed: %w\noutput: %s", err, out)
}
if out, err := exec.Command("systemctl", "restart", "tailscaled.service").CombinedOutput(); err != nil {
return fmt.Errorf("systemctl restart failed: %w\noutput: %s", err, out)
}
return nil
}
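A brief usage sketch for the new exec-based helper, assuming a caller that wants to tolerate non-systemd hosts (note that this version of restartSystemdUnit does not use its ctx argument, so any deadline is only advisory):
// Hypothetical caller: restart tailscaled via systemd where possible, and
// silently skip the restart on hosts that are not systemd-managed.
func maybeRestartTailscaled(ctx context.Context) error {
	if err := restartSystemdUnit(ctx); err != nil {
		if errors.Is(err, errors.ErrUnsupported) {
			// Not a systemd-managed distro; leave the restart to the user.
			return nil
		}
		return err
	}
	return nil
}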
func (up *Updater) downloadLinuxTarball(ver string) (string, error) {
dlDir, err := os.UserCacheDir()
if err != nil {

View File

@ -1,37 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package clientupdate
import (
"context"
"errors"
"fmt"
"github.com/coreos/go-systemd/v22/dbus"
)
func restartSystemdUnit(ctx context.Context) error {
c, err := dbus.NewWithContext(ctx)
if err != nil {
// Likely not a systemd-managed distro.
return errors.ErrUnsupported
}
defer c.Close()
if err := c.ReloadContext(ctx); err != nil {
return fmt.Errorf("failed to reload tailscaled.service: %w", err)
}
ch := make(chan string, 1)
if _, err := c.RestartUnitContext(ctx, "tailscaled.service", "replace", ch); err != nil {
return fmt.Errorf("failed to restart tailscaled.service: %w", err)
}
select {
case res := <-ch:
if res != "done" {
return fmt.Errorf("systemd service restart failed with result %q", res)
}
case <-ctx.Done():
return ctx.Err()
}
return nil
}

View File

@ -1,15 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !linux
package clientupdate
import (
"context"
"errors"
)
func restartSystemdUnit(ctx context.Context) error {
return errors.ErrUnsupported
}

View File

@ -8,6 +8,7 @@ package main
import (
"context"
"encoding/json"
"errors"
"fmt"
"log"
"net/http"
@ -18,20 +19,6 @@ import (
"tailscale.com/tailcfg"
)
// findKeyInKubeSecret inspects the kube secret secretName for a data
// field called "authkey", and returns its value if present.
func findKeyInKubeSecret(ctx context.Context, secretName string) (string, error) {
s, err := kc.GetSecret(ctx, secretName)
if err != nil {
return "", err
}
ak, ok := s.Data["authkey"]
if !ok {
return "", nil
}
return string(ak), nil
}
// storeDeviceInfo writes deviceID into the "device_id" data field of the kube
// secret secretName.
func storeDeviceInfo(ctx context.Context, secretName string, deviceID tailcfg.StableNodeID, fqdn string, addresses []netip.Prefix) error {
@ -88,9 +75,59 @@ func deleteAuthKey(ctx context.Context, secretName string) error {
return nil
}
var kc *kube.Client
var kc kube.Client
func initKube(root string) {
// setupKube is responsible for doing any necessary configuration and checks to
// ensure that tailscale state storage and the authentication mechanism will work on
// Kubernetes.
func (cfg *settings) setupKube(ctx context.Context) error {
if cfg.KubeSecret == "" {
return nil
}
canPatch, canCreate, err := kc.CheckSecretPermissions(ctx, cfg.KubeSecret)
if err != nil {
return fmt.Errorf("Some Kubernetes permissions are missing, please check your RBAC configuration: %v", err)
}
cfg.KubernetesCanPatch = canPatch
s, err := kc.GetSecret(ctx, cfg.KubeSecret)
if err != nil && kube.IsNotFoundErr(err) && !canCreate {
return fmt.Errorf("Tailscale state Secret %s does not exist and we don't have permissions to create it. "+
"If you intend to store tailscale state elsewhere than a Kubernetes Secret, "+
"you can explicitly set TS_KUBE_SECRET env var to an empty string. "+
"Else ensure that RBAC is set up that allows the service account associated with this installation to create Secrets.", cfg.KubeSecret)
} else if err != nil && !kube.IsNotFoundErr(err) {
return fmt.Errorf("Getting Tailscale state Secret %s: %v", cfg.KubeSecret, err)
}
if cfg.AuthKey == "" && !isOneStepConfig(cfg) {
if s == nil {
log.Print("TS_AUTHKEY not provided and kube secret does not exist, login will be interactive if needed.")
return nil
}
keyBytes, _ := s.Data["authkey"]
key := string(keyBytes)
if key != "" {
// This behavior of pulling authkeys from kube secrets was added
// at the same time as the patch permission, so we can enforce
// that we must be able to patch out the authkey after
// authenticating if you want to use this feature. This avoids
// us having to deal with the case where we might leave behind
// an unnecessary reusable authkey in a secret, like a rake in
// the grass.
if !cfg.KubernetesCanPatch {
return errors.New("authkey found in TS_KUBE_SECRET, but the pod doesn't have patch permissions on the secret to manage the authkey.")
}
cfg.AuthKey = key
} else {
log.Print("No authkey found in kube secret and TS_AUTHKEY not provided, login will be interactive if needed.")
}
}
return nil
}
func initKubeClient(root string) {
if root != "/" {
// If we are running in a test, we need to set the root path to the fake
// service account directory.

View File

@ -0,0 +1,206 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build linux
package main
import (
"context"
"errors"
"testing"
"github.com/google/go-cmp/cmp"
"tailscale.com/kube"
)
func TestSetupKube(t *testing.T) {
tests := []struct {
name string
cfg *settings
wantErr bool
wantCfg *settings
kc kube.Client
}{
{
name: "TS_AUTHKEY set, state Secret exists",
cfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return nil, nil
},
},
wantCfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
},
{
name: "TS_AUTHKEY set, state Secret does not exist, we have permissions to create it",
cfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, true, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return nil, &kube.Status{Code: 404}
},
},
wantCfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
},
{
name: "TS_AUTHKEY set, state Secret does not exist, we do not have permissions to create it",
cfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return nil, &kube.Status{Code: 404}
},
},
wantCfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
wantErr: true,
},
{
name: "TS_AUTHKEY set, we encounter a non-404 error when trying to retrieve the state Secret",
cfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return nil, &kube.Status{Code: 403}
},
},
wantCfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
wantErr: true,
},
{
name: "TS_AUTHKEY set, we encounter a non-404 error when trying to check Secret permissions",
cfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
wantCfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, errors.New("broken")
},
},
wantErr: true,
},
{
// Interactive login using URL in Pod logs
name: "TS_AUTHKEY not set, state Secret does not exist, we have permissions to create it",
cfg: &settings{
KubeSecret: "foo",
},
wantCfg: &settings{
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, true, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return nil, &kube.Status{Code: 404}
},
},
},
{
// Interactive login using URL in Pod logs
name: "TS_AUTHKEY not set, state Secret exists, but does not contain auth key",
cfg: &settings{
KubeSecret: "foo",
},
wantCfg: &settings{
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return &kube.Secret{}, nil
},
},
},
{
name: "TS_AUTHKEY not set, state Secret contains auth key, we do not have RBAC to patch it",
cfg: &settings{
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return &kube.Secret{Data: map[string][]byte{"authkey": []byte("foo")}}, nil
},
},
wantCfg: &settings{
KubeSecret: "foo",
},
wantErr: true,
},
{
name: "TS_AUTHKEY not set, state Secret contains auth key, we have RBAC to patch it",
cfg: &settings{
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return true, false, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return &kube.Secret{Data: map[string][]byte{"authkey": []byte("foo")}}, nil
},
},
wantCfg: &settings{
KubeSecret: "foo",
AuthKey: "foo",
KubernetesCanPatch: true,
},
},
}
for _, tt := range tests {
kc = tt.kc
t.Run(tt.name, func(t *testing.T) {
if err := tt.cfg.setupKube(context.Background()); (err != nil) != tt.wantErr {
t.Errorf("settings.setupKube() error = %v, wantErr %v", err, tt.wantErr)
}
if diff := cmp.Diff(*tt.cfg, *tt.wantCfg); diff != "" {
t.Errorf("unexpected contents of settings after running settings.setupKube()\n(-got +want):\n%s", diff)
}
})
}
}

View File

@ -171,44 +171,16 @@ func main() {
}
}
if cfg.InKubernetes {
initKube(cfg.Root)
}
// Context is used for all setup stuff until we're in steady
// state, so that if something is hanging we eventually time out
// and crashloop the container.
bootCtx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
defer cancel()
if cfg.InKubernetes && cfg.KubeSecret != "" {
canPatch, err := kc.CheckSecretPermissions(bootCtx, cfg.KubeSecret)
if err != nil {
log.Fatalf("Some Kubernetes permissions are missing, please check your RBAC configuration: %v", err)
}
cfg.KubernetesCanPatch = canPatch
if cfg.AuthKey == "" && !isOneStepConfig(cfg) {
key, err := findKeyInKubeSecret(bootCtx, cfg.KubeSecret)
if err != nil {
log.Fatalf("Getting authkey from kube secret: %v", err)
}
if key != "" {
// This behavior of pulling authkeys from kube secrets was added
// at the same time as the patch permission, so we can enforce
// that we must be able to patch out the authkey after
// authenticating if you want to use this feature. This avoids
// us having to deal with the case where we might leave behind
// an unnecessary reusable authkey in a secret, like a rake in
// the grass.
if !cfg.KubernetesCanPatch {
log.Fatalf("authkey found in TS_KUBE_SECRET, but the pod doesn't have patch permissions on the secret to manage the authkey.")
}
log.Print("Using authkey found in kube secret")
cfg.AuthKey = key
} else {
log.Print("No authkey found in kube secret and TS_AUTHKEY not provided, login will be interactive if needed.")
}
if cfg.InKubernetes {
initKubeClient(cfg.Root)
if err := cfg.setupKube(bootCtx); err != nil {
log.Fatalf("error setting up for running on Kubernetes: %v", err)
}
}

View File

@ -13,6 +13,7 @@ import (
"testing"
"tailscale.com/tstest"
"tailscale.com/tstest/nettest"
)
func BenchmarkHandleBootstrapDNS(b *testing.B) {
@ -55,6 +56,8 @@ func getBootstrapDNS(t *testing.T, q string) dnsEntryMap {
}
func TestUnpublishedDNS(t *testing.T) {
nettest.SkipIfNoNetwork(t)
const published = "login.tailscale.com"
const unpublished = "log.tailscale.io"

View File

@ -191,7 +191,12 @@ func main() {
http.Error(w, "derp server disabled", http.StatusNotFound)
}))
}
mux.HandleFunc("/derp/probe", probeHandler)
// These two endpoints are the same. Different versions of the clients
// have assumed different paths over time, so we support both.
mux.HandleFunc("/derp/probe", derphttp.ProbeHandler)
mux.HandleFunc("/derp/latency-check", derphttp.ProbeHandler)
go refreshBootstrapDNSLoop()
mux.HandleFunc("/bootstrap-dns", tsweb.BrowserHeaderHandlerFunc(handleBootstrapDNS))
mux.Handle("/", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
@ -370,17 +375,6 @@ func isChallengeChar(c rune) bool {
c == '.' || c == '-' || c == '_'
}
// probeHandler is the endpoint that js/wasm clients hit to measure
// DERP latency, since they can't do UDP STUN queries.
func probeHandler(w http.ResponseWriter, r *http.Request) {
switch r.Method {
case "HEAD", "GET":
w.Header().Set("Access-Control-Allow-Origin", "*")
default:
http.Error(w, "bogus probe method", http.StatusMethodNotAllowed)
}
}
var validProdHostname = regexp.MustCompile(`^derp([^.]*)\.tailscale\.com\.?$`)
func prodAutocertHostPolicy(_ context.Context, host string) error {

354
cmd/k8s-nameserver/main.go Normal file
View File

@ -0,0 +1,354 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
// k8s-nameserver is a simple nameserver implementation meant to be used with
// k8s-operator to resolve MagicDNS names associated with tailnet
// proxies in the cluster.
package main
import (
"context"
"encoding/json"
"fmt"
"log"
"net"
"os"
"os/signal"
"path/filepath"
"sync"
"syscall"
"github.com/fsnotify/fsnotify"
"github.com/miekg/dns"
operatorutils "tailscale.com/k8s-operator"
"tailscale.com/util/dnsname"
)
const (
// tsNetDomain is the domain that this DNS nameserver has registered a handler for.
tsNetDomain = "ts.net"
// addr is the address that the UDP and TCP listeners will listen on.
addr = ":1053"
// The following constants are specific to the nameserver configuration
// provided by a mounted Kubernetes Configmap. The Configmap mounted at
// /config is the only supported way for configuring this nameserver.
defaultDNSConfigDir = "/config"
kubeletMountedConfigLn = "..data"
)
// nameserver is a simple nameserver that responds to DNS queries for A records
// for ts.net domain names over UDP or TCP. It serves DNS responses from
// in-memory IPv4 host records. It is intended to be deployed on Kubernetes with
// a ConfigMap mounted at /config that should contain the host records. It
// dynamically reconfigures its in-memory mappings as the contents of the
// mounted ConfigMap changes.
type nameserver struct {
// configReader returns the latest desired configuration (host records)
// for the nameserver. By default it gets set to a reader that reads
// from a Kubernetes ConfigMap mounted at /config, but this can be
// overridden in tests.
configReader configReaderFunc
// configWatcher is a watcher that returns an event when the desired
// configuration has changed and the nameserver should update the
// in-memory records.
configWatcher <-chan string
mu sync.Mutex // protects following
// ip4 are the in-memory hostname -> IP4 mappings that the nameserver
// uses to respond to A record queries.
ip4 map[dnsname.FQDN][]net.IP
}
func main() {
ctx, cancel := context.WithCancel(context.Background())
// Ensure that we watch the kube Configmap mounted at /config for
// nameserver configuration updates and send events when updates happen.
c := ensureWatcherForKubeConfigMap(ctx)
ns := &nameserver{
configReader: configMapConfigReader,
configWatcher: c,
}
// Ensure that the in-memory records are up to date now and get
// reset when the configuration changes.
ns.runRecordsReconciler(ctx)
// Register a DNS server handle for ts.net domain names. Not having a
// handle registered for any other domain names is how we enforce that
// this nameserver can only be used for ts.net domains - querying any
// other domain names returns Rcode Refused.
dns.HandleFunc(tsNetDomain, ns.handleFunc())
// Listen for DNS queries over UDP and TCP.
udpSig := make(chan os.Signal)
tcpSig := make(chan os.Signal)
go listenAndServe("udp", addr, udpSig)
go listenAndServe("tcp", addr, tcpSig)
sig := make(chan os.Signal, 1)
signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM)
s := <-sig
log.Printf("OS signal (%s) received, shutting down", s)
cancel() // exit the records reconciler and configmap watcher goroutines
udpSig <- s // stop the UDP listener
tcpSig <- s // stop the TCP listener
}
// handleFunc is a DNS query handler that can respond to A record queries from
// the nameserver's in-memory records.
// - If an A record query is received and the
// nameserver's in-memory records contain records for the queried domain name,
// return a success response.
// - If an A record query is received, but the
// nameserver's in-memory records do not contain records for the queried domain name,
// return NXDOMAIN.
// - If an A record query is received, but the queried domain name is not valid, return Format Error.
// - If a query is received for any other record type than A, return Not Implemented.
func (n *nameserver) handleFunc() func(w dns.ResponseWriter, r *dns.Msg) {
h := func(w dns.ResponseWriter, r *dns.Msg) {
m := new(dns.Msg)
defer func() {
w.WriteMsg(m)
}()
if len(r.Question) < 1 {
log.Print("[unexpected] nameserver received a request with no questions")
m = r.SetRcodeFormatError(r)
return
}
// TODO (irbekrm): maybe set message compression
switch r.Question[0].Qtype {
case dns.TypeA:
q := r.Question[0].Name
fqdn, err := dnsname.ToFQDN(q)
if err != nil {
m = r.SetRcodeFormatError(r)
return
}
// The only supported use of this nameserver is as a
// single source of truth for MagicDNS names by
// non-tailnet Kubernetes workloads.
m.Authoritative = true
m.RecursionAvailable = false
ips := n.lookupIP4(fqdn)
if ips == nil || len(ips) == 0 {
// As we are the authoritative nameserver for MagicDNS
// names, if we do not have a record for this MagicDNS
// name, it does not exist.
m = m.SetRcode(r, dns.RcodeNameError)
return
}
// TODO (irbekrm): TTL is currently set to 0, meaning
// that cluster workloads will not cache the DNS
// records. Revisit this in the future when we understand
// the usage patterns better: is it putting too much
// load on the kube DNS server, or is this fine?
for _, ip := range ips {
rr := &dns.A{Hdr: dns.RR_Header{Name: q, Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 0}, A: ip}
m.SetRcode(r, dns.RcodeSuccess)
m.Answer = append(m.Answer, rr)
}
case dns.TypeAAAA:
// TODO (irbekrm): implement IPv6 support.
// Kubernetes distributions that I am most familiar with
// default to IPv4 for Pod CIDR ranges and in many cases don't
// support IPv6 at all, so this should not be crucial for now.
fallthrough
default:
log.Printf("[unexpected] nameserver received a query for an unsupported record type: %s", r.Question[0].String())
m.SetRcode(r, dns.RcodeNotImplemented)
}
}
return h
}
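For completeness, the handler's contract can be exercised with a small client built on the same github.com/miekg/dns package; the address and name below are illustrative (the process listens on :1053, and the Service manifest later in this change maps port 53 to it):
// queryA is an illustrative client sketch: it asks the nameserver for an A
// record and returns the addresses from the answer section.
func queryA(nameserverAddr, name string) ([]net.IP, error) {
	m := new(dns.Msg)
	m.SetQuestion(dns.Fqdn(name), dns.TypeA)
	c := new(dns.Client)
	resp, _, err := c.Exchange(m, nameserverAddr) // e.g. "127.0.0.1:1053"
	if err != nil {
		return nil, err
	}
	if resp.Rcode != dns.RcodeSuccess {
		return nil, fmt.Errorf("query for %q failed with rcode %s", name, dns.RcodeToString[resp.Rcode])
	}
	var ips []net.IP
	for _, rr := range resp.Answer {
		if a, ok := rr.(*dns.A); ok {
			ips = append(ips, a.A)
		}
	}
	return ips, nil
}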
// runRecordsReconciler ensures that nameserver's in-memory records are
// reset when the provided configuration changes.
func (n *nameserver) runRecordsReconciler(ctx context.Context) {
log.Print("updating nameserver's records from the provided configuration...")
if err := n.resetRecords(); err != nil { // ensure records are up to date before the nameserver starts
log.Fatalf("error setting nameserver's records: %v", err)
}
log.Print("nameserver's records were updated")
go func() {
for {
select {
case <-ctx.Done():
log.Printf("context cancelled, exiting records reconciler")
return
case <-n.configWatcher:
log.Print("configuration update detected, resetting records")
if err := n.resetRecords(); err != nil {
// TODO (irbekrm): this runs in a
// container that will be thrown away,
// so this should be ok. But maybe still
// need to ensure that the DNS server
// terminates connections more
// gracefully.
log.Fatalf("error resetting records: %v", err)
}
log.Print("nameserver records were reset")
}
}
}()
}
// resetRecords sets the in-memory DNS records of this nameserver from the
// provided configuration. It does not check for the diff, so the caller is
// expected to ensure that this is only called when reset is needed.
func (n *nameserver) resetRecords() error {
dnsCfgBytes, err := n.configReader()
if err != nil {
log.Printf("error reading nameserver's configuration: %v", err)
return err
}
if dnsCfgBytes == nil || len(dnsCfgBytes) < 1 {
log.Print("nameserver's configuration is empty, any in-memory records will be unset")
n.mu.Lock()
n.ip4 = make(map[dnsname.FQDN][]net.IP)
n.mu.Unlock()
return nil
}
dnsCfg := &operatorutils.Records{}
err = json.Unmarshal(dnsCfgBytes, dnsCfg)
if err != nil {
return fmt.Errorf("error unmarshalling nameserver configuration: %v\n", err)
}
if dnsCfg.Version != operatorutils.Alpha1Version {
return fmt.Errorf("unsupported configuration version %s, supported versions are %s\n", dnsCfg.Version, operatorutils.Alpha1Version)
}
ip4 := make(map[dnsname.FQDN][]net.IP)
defer func() {
n.mu.Lock()
defer n.mu.Unlock()
n.ip4 = ip4
}()
if len(dnsCfg.IP4) == 0 {
log.Print("nameserver's configuration contains no records, any in-memory records will be unset")
return nil
}
for fqdn, ips := range dnsCfg.IP4 {
fqdn, err := dnsname.ToFQDN(fqdn)
if err != nil {
log.Printf("invalid nameserver's configuration: %s is not a valid FQDN: %v; skipping this record", fqdn, err)
continue // one invalid hostname should not break the whole nameserver
}
for _, ipS := range ips {
ip := net.ParseIP(ipS).To4()
if ip == nil { // To4 returns nil if the IP is not an IPv4 address
log.Printf("invalid nameserver's configuration: %v does not appear to be an IPv4 address; skipping this record", ipS)
continue // one invalid IP address should not break the whole nameserver
}
ip4[fqdn] = []net.IP{ip}
}
}
return nil
}
// listenAndServe starts a DNS server for the provided network and address.
func listenAndServe(net, addr string, shutdown chan os.Signal) {
s := &dns.Server{Addr: addr, Net: net}
go func() {
<-shutdown
log.Printf("shutting down server for %s", net)
s.Shutdown()
}()
log.Printf("listening for %s queries on %s", net, addr)
if err := s.ListenAndServe(); err != nil {
log.Fatalf("error running %s server: %v", net, err)
}
}
// ensureWatcherForKubeConfigMap sets up a new file watcher for the ConfigMap
// that's expected to be mounted at /config. Returns a channel that receives an
// event every time the contents get updated.
func ensureWatcherForKubeConfigMap(ctx context.Context) chan string {
c := make(chan string)
watcher, err := fsnotify.NewWatcher()
if err != nil {
log.Fatalf("error creating a new watcher for the mounted ConfigMap: %v", err)
}
// kubelet mounts a ConfigMap into a Pod using a series of symlinks, one of
// which is <mount-dir>/..data, which Kubernetes recommends consumers
// use if they need to monitor changes
// https://github.com/kubernetes/kubernetes/blob/v1.28.1/pkg/volume/util/atomic_writer.go#L39-L61
toWatch := filepath.Join(defaultDNSConfigDir, kubeletMountedConfigLn)
go func() {
defer watcher.Close()
log.Printf("starting file watch for %s", defaultDNSConfigDir)
for {
select {
case <-ctx.Done():
log.Print("context cancelled, exiting ConfigMap watcher")
return
case event, ok := <-watcher.Events:
if !ok {
log.Fatal("watcher finished; exiting")
}
if event.Name == toWatch {
msg := fmt.Sprintf("ConfigMap update received: %s", event)
log.Print(msg)
c <- msg
}
case err, ok := <-watcher.Errors:
if err != nil {
// TODO (irbekrm): this runs in a
// container that will be thrown away,
// so this should be ok. But maybe still
// need to ensure that the DNS server
// terminates connections more
// gracefully.
log.Fatalf("[unexpected] error watching configuration: %v", err)
}
if !ok {
// TODO (irbekrm): this runs in a
// container that will be thrown away,
// so this should be ok. But maybe still
// need to ensure that the DNS server
// terminates connections more
// gracefully.
log.Fatalf("[unexpected] errors watcher exited")
}
}
}
}()
if err = watcher.Add(defaultDNSConfigDir); err != nil {
log.Fatalf("failed setting up a watcher for the mounted ConfigMap: %v", err)
}
return c
}
// configReaderFunc is a function that returns the desired nameserver configuration.
type configReaderFunc func() ([]byte, error)
// configMapConfigReader reads the desired nameserver configuration from a
// records.json file in a ConfigMap mounted at /config.
var configMapConfigReader configReaderFunc = func() ([]byte, error) {
if contents, err := os.ReadFile(filepath.Join(defaultDNSConfigDir, operatorutils.DNSRecordsCMKey)); err == nil {
return contents, nil
} else if os.IsNotExist(err) {
return nil, nil
} else {
return nil, err
}
}
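For reference, a records.json payload in the shape this reader and resetRecords expect; the hostname and address are made up, and the version string must match operatorutils.Alpha1Version, which the tests further down exercise as "v1alpha1":
// Illustrative only: a hand-written payload in the format resetRecords parses.
var exampleRecordsJSON = []byte(`{
  "version": "v1alpha1",
  "ip4": {
    "foo.tailnetxyz.ts.net": ["10.40.0.5"]
  }
}`)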
// lookupIP4 returns any IPv4 addresses for the given FQDN from nameserver's
// in-memory records.
func (n *nameserver) lookupIP4(fqdn dnsname.FQDN) []net.IP {
if n.ip4 == nil {
return nil
}
n.mu.Lock()
defer n.mu.Unlock()
f := n.ip4[fqdn]
return f
}

View File

@ -0,0 +1,227 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
package main
import (
"net"
"testing"
"github.com/google/go-cmp/cmp"
"github.com/miekg/dns"
"tailscale.com/util/dnsname"
)
func TestNameserver(t *testing.T) {
tests := []struct {
name string
ip4 map[dnsname.FQDN][]net.IP
query *dns.Msg
wantResp *dns.Msg
}{
{
name: "A record query, record exists",
ip4: map[dnsname.FQDN][]net.IP{dnsname.FQDN("foo.bar.com."): {{1, 2, 3, 4}}},
query: &dns.Msg{
Question: []dns.Question{{Name: "foo.bar.com", Qtype: dns.TypeA}},
MsgHdr: dns.MsgHdr{Id: 1, RecursionDesired: true},
},
wantResp: &dns.Msg{
Answer: []dns.RR{&dns.A{Hdr: dns.RR_Header{
Name: "foo.bar.com", Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 0},
A: net.IP{1, 2, 3, 4}}},
Question: []dns.Question{{Name: "foo.bar.com", Qtype: dns.TypeA}},
MsgHdr: dns.MsgHdr{
Id: 1,
Rcode: dns.RcodeSuccess,
RecursionAvailable: false,
RecursionDesired: true,
Response: true,
Opcode: dns.OpcodeQuery,
Authoritative: true,
}},
},
{
name: "A record query, record does not exist",
ip4: map[dnsname.FQDN][]net.IP{dnsname.FQDN("foo.bar.com."): {{1, 2, 3, 4}}},
query: &dns.Msg{
Question: []dns.Question{{Name: "baz.bar.com", Qtype: dns.TypeA}},
MsgHdr: dns.MsgHdr{Id: 1},
},
wantResp: &dns.Msg{
Question: []dns.Question{{Name: "baz.bar.com", Qtype: dns.TypeA}},
MsgHdr: dns.MsgHdr{
Id: 1,
Rcode: dns.RcodeNameError,
RecursionAvailable: false,
Response: true,
Opcode: dns.OpcodeQuery,
Authoritative: true,
}},
},
{
name: "A record query, but the name is not a valid FQDN",
ip4: map[dnsname.FQDN][]net.IP{dnsname.FQDN("foo.bar.com."): {{1, 2, 3, 4}}},
query: &dns.Msg{
Question: []dns.Question{{Name: "foo..bar.com", Qtype: dns.TypeA}},
MsgHdr: dns.MsgHdr{Id: 1},
},
wantResp: &dns.Msg{
Question: []dns.Question{{Name: "foo..bar.com", Qtype: dns.TypeA}},
MsgHdr: dns.MsgHdr{
Id: 1,
Rcode: dns.RcodeFormatError,
Response: true,
Opcode: dns.OpcodeQuery,
}},
},
{
name: "AAAA record query",
ip4: map[dnsname.FQDN][]net.IP{dnsname.FQDN("foo.bar.com."): {{1, 2, 3, 4}}},
query: &dns.Msg{
Question: []dns.Question{{Name: "foo.bar.com", Qtype: dns.TypeAAAA}},
MsgHdr: dns.MsgHdr{Id: 1},
},
wantResp: &dns.Msg{
Question: []dns.Question{{Name: "foo.bar.com", Qtype: dns.TypeAAAA}},
MsgHdr: dns.MsgHdr{
Id: 1,
Rcode: dns.RcodeNotImplemented,
Response: true,
Opcode: dns.OpcodeQuery,
}},
},
{
name: "AAAA record query",
ip4: map[dnsname.FQDN][]net.IP{dnsname.FQDN("foo.bar.com."): {{1, 2, 3, 4}}},
query: &dns.Msg{
Question: []dns.Question{{Name: "foo.bar.com", Qtype: dns.TypeAAAA}},
MsgHdr: dns.MsgHdr{Id: 1},
},
wantResp: &dns.Msg{
Question: []dns.Question{{Name: "foo.bar.com", Qtype: dns.TypeAAAA}},
MsgHdr: dns.MsgHdr{
Id: 1,
Rcode: dns.RcodeNotImplemented,
Response: true,
Opcode: dns.OpcodeQuery,
}},
},
{
name: "CNAME record query",
ip4: map[dnsname.FQDN][]net.IP{dnsname.FQDN("foo.bar.com."): {{1, 2, 3, 4}}},
query: &dns.Msg{
Question: []dns.Question{{Name: "foo.bar.com", Qtype: dns.TypeCNAME}},
MsgHdr: dns.MsgHdr{Id: 1},
},
wantResp: &dns.Msg{
Question: []dns.Question{{Name: "foo.bar.com", Qtype: dns.TypeCNAME}},
MsgHdr: dns.MsgHdr{
Id: 1,
Rcode: dns.RcodeNotImplemented,
Response: true,
Opcode: dns.OpcodeQuery,
}},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ns := &nameserver{
ip4: tt.ip4,
}
handler := ns.handleFunc()
fakeRespW := &fakeResponseWriter{}
handler(fakeRespW, tt.query)
if diff := cmp.Diff(*fakeRespW.msg, *tt.wantResp); diff != "" {
t.Fatalf("unexpected response (-got +want): \n%s", diff)
}
})
}
}
func TestResetRecords(t *testing.T) {
tests := []struct {
name string
config []byte
hasIp4 map[dnsname.FQDN][]net.IP
wantsIp4 map[dnsname.FQDN][]net.IP
wantsErr bool
}{
{
name: "previously empty nameserver.ip4 gets set",
config: []byte(`{"version": "v1alpha1", "ip4": {"foo.bar.com": ["1.2.3.4"]}}`),
wantsIp4: map[dnsname.FQDN][]net.IP{"foo.bar.com.": {{1, 2, 3, 4}}},
},
{
name: "nameserver.ip4 gets reset",
hasIp4: map[dnsname.FQDN][]net.IP{"baz.bar.com.": {{1, 1, 3, 3}}},
config: []byte(`{"version": "v1alpha1", "ip4": {"foo.bar.com": ["1.2.3.4"]}}`),
wantsIp4: map[dnsname.FQDN][]net.IP{"foo.bar.com.": {{1, 2, 3, 4}}},
},
{
name: "configuration with incompatible version",
hasIp4: map[dnsname.FQDN][]net.IP{"baz.bar.com.": {{1, 1, 3, 3}}},
config: []byte(`{"version": "v1beta1", "ip4": {"foo.bar.com": ["1.2.3.4"]}}`),
wantsIp4: map[dnsname.FQDN][]net.IP{"baz.bar.com.": {{1, 1, 3, 3}}},
wantsErr: true,
},
{
name: "nameserver.ip4 gets reset to empty config when no configuration is provided",
hasIp4: map[dnsname.FQDN][]net.IP{"baz.bar.com.": {{1, 1, 3, 3}}},
wantsIp4: make(map[dnsname.FQDN][]net.IP),
},
{
name: "nameserver.ip4 gets reset to empty config when the provided configuration is empty",
hasIp4: map[dnsname.FQDN][]net.IP{"baz.bar.com.": {{1, 1, 3, 3}}},
config: []byte(`{"version": "v1alpha1", "ip4": {}}`),
wantsIp4: make(map[dnsname.FQDN][]net.IP),
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ns := &nameserver{
ip4: tt.hasIp4,
configReader: func() ([]byte, error) { return tt.config, nil },
}
if err := ns.resetRecords(); err == nil == tt.wantsErr {
t.Errorf("resetRecords() returned err: %v, wantsErr: %v", err, tt.wantsErr)
}
if diff := cmp.Diff(ns.ip4, tt.wantsIp4); diff != "" {
t.Fatalf("unexpected nameserver.ip4 contents (-got +want): \n%s", diff)
}
})
}
}
// fakeResponseWriter is a faked out dns.ResponseWriter that can be used in
// tests that need to read the response message that was written.
type fakeResponseWriter struct {
msg *dns.Msg
}
var _ dns.ResponseWriter = &fakeResponseWriter{}
func (fr *fakeResponseWriter) WriteMsg(msg *dns.Msg) error {
fr.msg = msg
return nil
}
func (fr *fakeResponseWriter) LocalAddr() net.Addr {
return nil
}
func (fr *fakeResponseWriter) RemoteAddr() net.Addr {
return nil
}
func (fr *fakeResponseWriter) Write([]byte) (int, error) {
return 0, nil
}
func (fr *fakeResponseWriter) Close() error {
return nil
}
func (fr *fakeResponseWriter) TsigStatus() error {
return nil
}
func (fr *fakeResponseWriter) TsigTimersOnly(bool) {}
func (fr *fakeResponseWriter) Hijack() {}

View File

@ -21,6 +21,9 @@ spec:
{{- end }}
labels:
app: operator
{{- with .Values.operatorConfig.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:

View File

@ -24,6 +24,9 @@ rules:
- apiGroups: ["tailscale.com"]
resources: ["connectors", "connectors/status", "proxyclasses", "proxyclasses/status"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["tailscale.com"]
resources: ["dnsconfigs", "dnsconfigs/status"]
verbs: ["get", "list", "watch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
@ -45,11 +48,14 @@ metadata:
namespace: {{ .Release.Namespace }}
rules:
- apiGroups: [""]
resources: ["secrets"]
resources: ["secrets", "serviceaccounts", "configmaps"]
verbs: ["*"]
- apiGroups: ["apps"]
resources: ["statefulsets"]
resources: ["statefulsets", "deployments"]
verbs: ["*"]
- apiGroups: ["discovery.k8s.io"]
resources: ["endpointslices"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding

View File

@ -37,6 +37,7 @@ operatorConfig:
resources: {}
podAnnotations: {}
podLabels: {}
tolerations: []

View File

@ -0,0 +1,96 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.13.0
name: dnsconfigs.tailscale.com
spec:
group: tailscale.com
names:
kind: DNSConfig
listKind: DNSConfigList
plural: dnsconfigs
shortNames:
- dc
singular: dnsconfig
scope: Cluster
versions:
- additionalPrinterColumns:
- description: Service IP address of the nameserver
jsonPath: .status.nameserver.ip
name: NameserverIP
type: string
name: v1alpha1
schema:
openAPIV3Schema:
type: object
required:
- spec
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
type: object
required:
- nameserver
properties:
nameserver:
type: object
properties:
image:
type: object
properties:
repo:
type: string
tag:
type: string
status:
type: object
properties:
conditions:
type: array
items:
description: ConnectorCondition contains condition information for a Connector.
type: object
required:
- status
- type
properties:
lastTransitionTime:
description: LastTransitionTime is the timestamp corresponding to the last status change of this condition.
type: string
format: date-time
message:
description: Message is a human readable description of the details of the last transition, complementing reason.
type: string
observedGeneration:
description: If set, this represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.condition[x].observedGeneration is 9, the condition is out of date with respect to the current state of the Connector.
type: integer
format: int64
reason:
description: Reason is a brief machine readable explanation for the condition's last transition.
type: string
status:
description: Status of the condition, one of ('True', 'False', 'Unknown').
type: string
type:
description: Type of the condition, known values are (`SubnetRouterReady`).
type: string
x-kubernetes-list-map-keys:
- type
x-kubernetes-list-type: map
nameserver:
type: object
properties:
ip:
type: string
served: true
storage: true
subresources:
status: {}

View File

@ -39,7 +39,7 @@ spec:
type: object
properties:
metrics:
description: Configuration for proxy metrics. Metrics are currently not supported for egress proxies and for Ingress proxies that have been configured with tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation.
description: Configuration for proxy metrics. Metrics are currently not supported for egress proxies and for Ingress proxies that have been configured with tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation. Note that the metrics are currently considered unstable and will likely change in breaking ways in the future - we only recommend that you use those for debugging purposes.
type: object
required:
- enable

View File

@ -0,0 +1,9 @@
apiVersion: tailscale.com/v1alpha1
kind: DNSConfig
metadata:
name: ts-dns
spec:
nameserver:
image:
repo: tailscale/k8s-nameserver
tag: unstable-v1.65

View File

@ -0,0 +1,4 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: dnsrecords

View File

@ -0,0 +1,37 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: nameserver
spec:
replicas: 1
revisionHistoryLimit: 5
selector:
matchLabels:
app: nameserver
strategy:
type: Recreate
template:
metadata:
labels:
app: nameserver
spec:
containers:
- imagePullPolicy: IfNotPresent
name: nameserver
ports:
- name: tcp
protocol: TCP
containerPort: 1053
- name: udp
protocol: UDP
containerPort: 1053
volumeMounts:
- name: dnsrecords
mountPath: /config
restartPolicy: Always
serviceAccount: nameserver
serviceAccountName: nameserver
volumes:
- name: dnsrecords
configMap:
name: dnsrecords

View File

@ -0,0 +1,4 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: nameserver

View File

@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
name: nameserver
spec:
selector:
app: nameserver
ports:
- name: udp
targetPort: 1053
port: 53
protocol: UDP
- name: tcp
targetPort: 1053
port: 53
protocol: TCP

View File

@ -159,6 +159,103 @@ spec:
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.13.0
name: dnsconfigs.tailscale.com
spec:
group: tailscale.com
names:
kind: DNSConfig
listKind: DNSConfigList
plural: dnsconfigs
shortNames:
- dc
singular: dnsconfig
scope: Cluster
versions:
- additionalPrinterColumns:
- description: Service IP address of the nameserver
jsonPath: .status.nameserver.ip
name: NameserverIP
type: string
name: v1alpha1
schema:
openAPIV3Schema:
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
properties:
nameserver:
properties:
image:
properties:
repo:
type: string
tag:
type: string
type: object
type: object
required:
- nameserver
type: object
status:
properties:
conditions:
items:
description: ConnectorCondition contains condition information for a Connector.
properties:
lastTransitionTime:
description: LastTransitionTime is the timestamp corresponding to the last status change of this condition.
format: date-time
type: string
message:
description: Message is a human readable description of the details of the last transition, complementing reason.
type: string
observedGeneration:
description: If set, this represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.condition[x].observedGeneration is 9, the condition is out of date with respect to the current state of the Connector.
format: int64
type: integer
reason:
description: Reason is a brief machine readable explanation for the condition's last transition.
type: string
status:
description: Status of the condition, one of ('True', 'False', 'Unknown').
type: string
type:
description: Type of the condition, known values are (`SubnetRouterReady`).
type: string
required:
- status
- type
type: object
type: array
x-kubernetes-list-map-keys:
- type
x-kubernetes-list-type: map
nameserver:
properties:
ip:
type: string
type: object
type: object
required:
- spec
type: object
served: true
storage: true
subresources:
status: {}
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.13.0
@ -194,7 +291,7 @@ spec:
description: Specification of the desired state of the ProxyClass resource. https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
properties:
metrics:
description: Configuration for proxy metrics. Metrics are currently not supported for egress proxies and for Ingress proxies that have been configured with tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation.
description: Configuration for proxy metrics. Metrics are currently not supported for egress proxies and for Ingress proxies that have been configured with tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation. Note that the metrics are currently considered unstable and will likely change in breaking ways in the future - we only recommend that you use those for debugging purposes.
properties:
enable:
description: Setting enable to true will make the proxy serve Tailscale metrics at <pod-ip>:9001/debug/metrics. Defaults to false.
@ -1252,6 +1349,16 @@ rules:
- list
- watch
- update
- apiGroups:
- tailscale.com
resources:
- dnsconfigs
- dnsconfigs/status
verbs:
- get
- list
- watch
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
@ -1276,14 +1383,25 @@ rules:
- ""
resources:
- secrets
- serviceaccounts
- configmaps
verbs:
- '*'
- apiGroups:
- apps
resources:
- statefulsets
- deployments
verbs:
- '*'
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role

View File

@ -14,10 +14,8 @@ spec:
- name: sysctler
securityContext:
privileged: true
command: ["/bin/sh"]
args:
- -c
- sysctl -w net.ipv4.ip_forward=1 net.ipv6.conf.all.forwarding=1
command: ["/bin/sh", "-c"]
args: [sysctl -w net.ipv4.ip_forward=1 && if sysctl net.ipv6.conf.all.forwarding; then sysctl -w net.ipv6.conf.all.forwarding=1; fi]
resources:
requests:
cpu: 1m

View File

@ -0,0 +1,337 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
// tailscale-operator provides a way to expose services running in a Kubernetes
// cluster to your Tailnet and to make Tailscale nodes available to cluster
// workloads
package main
import (
"context"
"encoding/json"
"fmt"
"slices"
"go.uber.org/zap"
corev1 "k8s.io/api/core/v1"
discoveryv1 "k8s.io/api/discovery/v1"
networkingv1 "k8s.io/api/networking/v1"
apiequality "k8s.io/apimachinery/pkg/api/equality"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/types"
"k8s.io/utils/net"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
operatorutils "tailscale.com/k8s-operator"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/util/mak"
)
const (
dnsRecordsRecocilerFinalizer = "tailscale.com/dns-records-reconciler"
annotationTSMagicDNSName = "tailscale.com/magic-dnsname"
)
// dnsRecordsReconciler knows how to update dnsrecords ConfigMap with DNS
// records.
// The records that it creates are:
// - For tailscale Ingress, a mapping of the Ingress's MagicDNSName to the IP address of
// the ingress proxy Pod.
// - For egress proxies configured via tailscale.com/tailnet-fqdn annotation, a
// mapping of the tailnet FQDN to the IP address of the egress proxy Pod.
//
// Records will only be created if there is exactly one ready
// tailscale.com/v1alpha1.DNSConfig instance in the cluster (so that we know
// that there is a ts.net nameserver deployed in the cluster).
type dnsRecordsReconciler struct {
client.Client
tsNamespace string // namespace in which we provision tailscale resources
logger *zap.SugaredLogger
isDefaultLoadBalancer bool // true if operator is the default ingress controller in this cluster
}
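The dnsrecords ConfigMap this reconciler maintains holds the same Records structure that the k8s-nameserver above consumes. As a rough sketch, assuming only the Version and IP4 fields visible elsewhere in this change (the name and address are illustrative):
// Illustrative only: the kind of value maybeProvision ends up storing under
// operatorutils.DNSRecordsCMKey in the dnsrecords ConfigMap.
func exampleRecordsPayload() ([]byte, error) {
	rec := operatorutils.Records{
		Version: operatorutils.Alpha1Version,
		IP4:     map[string][]string{"my-ingress.tailnetxyz.ts.net": {"10.40.0.5"}},
	}
	return json.Marshal(rec)
}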
// Reconcile takes a reconcile.Request for a headless Service fronting a
// tailscale proxy and updates DNS Records in dnsrecords ConfigMap for the
// in-cluster ts.net nameserver if required.
func (dnsRR *dnsRecordsReconciler) Reconcile(ctx context.Context, req reconcile.Request) (res reconcile.Result, err error) {
logger := dnsRR.logger.With("Service", req.NamespacedName)
logger.Debugf("starting reconcile")
defer logger.Debugf("reconcile finished")
headlessSvc := new(corev1.Service)
err = dnsRR.Client.Get(ctx, req.NamespacedName, headlessSvc)
if apierrors.IsNotFound(err) {
logger.Debugf("Service not found")
return reconcile.Result{}, nil
}
if err != nil {
return reconcile.Result{}, fmt.Errorf("failed to get Service: %w", err)
}
if !(isManagedByType(headlessSvc, "svc") || isManagedByType(headlessSvc, "ingress")) {
logger.Debugf("Service is not a headless Service for a tailscale ingress or egress proxy; do nothing")
return reconcile.Result{}, nil
}
if !headlessSvc.DeletionTimestamp.IsZero() {
logger.Debug("Service is being deleted, clean up resources")
return reconcile.Result{}, dnsRR.maybeCleanup(ctx, headlessSvc, logger)
}
// Check that there is a ts.net nameserver deployed to the cluster by
// checking that there is tailscale.com/v1alpha1.DNSConfig resource in a
// Ready state.
dnsCfgLst := new(tsapi.DNSConfigList)
if err = dnsRR.List(ctx, dnsCfgLst); err != nil {
return reconcile.Result{}, fmt.Errorf("error listing DNSConfigs: %w", err)
}
if len(dnsCfgLst.Items) == 0 {
logger.Debugf("DNSConfig does not exist, not creating DNS records")
return reconcile.Result{}, nil
}
if len(dnsCfgLst.Items) > 1 {
logger.Errorf("Invalid cluster state - more than one DNSConfig found in cluster. Please ensure no more than one exists")
return reconcile.Result{}, nil
}
dnsCfg := dnsCfgLst.Items[0]
if !operatorutils.DNSCfgIsReady(&dnsCfg) {
logger.Info("DNSConfig is not ready yet, waiting...")
return reconcile.Result{}, nil
}
return reconcile.Result{}, dnsRR.maybeProvision(ctx, headlessSvc, logger)
}
// maybeProvision ensures that dnsrecords ConfigMap contains a record for the
// proxy associated with the headless Service.
// The record is only provisioned if the proxy is for a tailscale Ingress or
// egress configured via tailscale.com/tailnet-fqdn annotation.
//
// For Ingress, the record is a mapping between the MagicDNSName of the Ingress, retrieved from
// ingress.status.loadBalancer.ingress.hostname field and the proxy Pod IP addresses
// retrieved from the EndpointSlice associated with this headless Service, i.e.
// Records{IP4: <MagicDNS name of the Ingress>: <[IPs of the ingress proxy Pods]>}
//
// For egress, the record is a mapping between tailscale.com/tailnet-fqdn
// annotation and the proxy Pod IP addresses, retrieved from the EndpointSlice
// associated with this headless Service, i.e
// Records{IP4: {<tailscale.com/tailnet-fqdn>: <[IPs of the egress proxy Pods]>}
//
// If records need to be created for this proxy, maybeProvision will also:
// - update the headless Service with a tailscale.com/magic-dnsname annotation
// - update the headless Service with a finalizer
func (dnsRR *dnsRecordsReconciler) maybeProvision(ctx context.Context, headlessSvc *corev1.Service, logger *zap.SugaredLogger) error {
if headlessSvc == nil {
logger.Info("[unexpected] maybeProvision called with a nil Service")
return nil
}
isEgressFQDNSvc, err := dnsRR.isSvcForFQDNEgressProxy(ctx, headlessSvc)
if err != nil {
return fmt.Errorf("error checking whether the Service is for an egress proxy: %w", err)
}
if !(isEgressFQDNSvc || isManagedByType(headlessSvc, "ingress")) {
logger.Debug("Service is not fronting a proxy that we create DNS records for; do nothing")
return nil
}
fqdn, err := dnsRR.fqdnForDNSRecord(ctx, headlessSvc, logger)
if err != nil {
return fmt.Errorf("error determining DNS name for record: %w", err)
}
if fqdn == "" {
logger.Debugf("MagicDNS name does not (yet) exist, not provisioning DNS record")
return nil // a new reconcile will be triggered once it's added
}
oldHeadlessSvc := headlessSvc.DeepCopy()
// Ensure that headless Service is annotated with a finalizer to help
// with records cleanup when proxy resources are deleted.
if !slices.Contains(headlessSvc.Finalizers, dnsRecordsRecocilerFinalizer) {
headlessSvc.Finalizers = append(headlessSvc.Finalizers, dnsRecordsRecocilerFinalizer)
}
// Ensure that headless Service is annotated with the current MagicDNS
// name to help with records cleanup when proxy resources are deleted or
// MagicDNS name changes.
oldFqdn := headlessSvc.Annotations[annotationTSMagicDNSName]
if oldFqdn != "" && oldFqdn != fqdn { // i.e user has changed the value of tailscale.com/tailnet-fqdn annotation
logger.Debugf("MagicDNS name has changed, remvoving record for %s", oldFqdn)
updateFunc := func(rec *operatorutils.Records) {
delete(rec.IP4, oldFqdn)
}
if err = dnsRR.updateDNSConfig(ctx, updateFunc); err != nil {
return fmt.Errorf("error removing record for %s: %w", oldFqdn, err)
}
}
mak.Set(&headlessSvc.Annotations, annotationTSMagicDNSName, fqdn)
if !apiequality.Semantic.DeepEqual(oldHeadlessSvc, headlessSvc) {
logger.Infof("provisioning DNS record for MagicDNS name: %s", fqdn) // this will be printed exactly once
if err := dnsRR.Update(ctx, headlessSvc); err != nil {
return fmt.Errorf("error updating proxy headless Service metadata: %w", err)
}
}
// Get the Pod IP addresses for the proxy from the EndpointSlice for the
// headless Service.
labels := map[string]string{discoveryv1.LabelServiceName: headlessSvc.Name} // https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/#ownership
eps, err := getSingleObject[discoveryv1.EndpointSlice](ctx, dnsRR.Client, dnsRR.tsNamespace, labels)
if err != nil {
return fmt.Errorf("error getting the EndpointSlice for the proxy's headless Service: %w", err)
}
if eps == nil {
logger.Debugf("proxy's headless Service EndpointSlice does not yet exist. We will reconcile again once it's created")
return nil
}
// An EndpointSlice for a Service can have a list of endpoints that each
// can have multiple addresses - these are the IP addresses of any Pods
// selected by that Service. Pick all the IPv4 addresses.
ips := make([]string, 0)
for _, ep := range eps.Endpoints {
for _, ip := range ep.Addresses {
if !net.IsIPv4String(ip) {
logger.Infof("EndpointSlice contains IP address %q that is not IPv4, ignoring. Currently only IPv4 is supported", ip)
} else {
ips = append(ips, ip)
}
}
}
if len(ips) == 0 {
logger.Debugf("EndpointSlice for the Service contains no IPv4 addresses. We will reconcile again once they are created.")
return nil
}
updateFunc := func(rec *operatorutils.Records) {
mak.Set(&rec.IP4, fqdn, ips)
}
if err = dnsRR.updateDNSConfig(ctx, updateFunc); err != nil {
return fmt.Errorf("error updating DNS records: %w", err)
}
return nil
}
// maybeCleanup ensures that the DNS record for the proxy has been removed from
// dnsrecords ConfigMap and the tailscale.com/dns-records-reconciler finalizer
// has been removed from the Service. If the record is not found in the
// ConfigMap, the ConfigMap does not exist, or the Service does not have
// tailscale.com/magic-dnsname annotation, just remove the finalizer.
func (h *dnsRecordsReconciler) maybeCleanup(ctx context.Context, headlessSvc *corev1.Service, logger *zap.SugaredLogger) error {
ix := slices.Index(headlessSvc.Finalizers, dnsRecordsRecocilerFinalizer)
if ix == -1 {
logger.Debugf("no finalizer, nothing to do")
return nil
}
cm := &corev1.ConfigMap{}
err := h.Client.Get(ctx, types.NamespacedName{Name: operatorutils.DNSRecordsCMName, Namespace: h.tsNamespace}, cm)
if apierrors.IsNotFound(err) {
logger.Debug("'dsnrecords' ConfigMap not found")
return h.removeHeadlessSvcFinalizer(ctx, headlessSvc)
}
if err != nil {
return fmt.Errorf("error retrieving 'dnsrecords' ConfigMap: %w", err)
}
if cm.Data == nil {
logger.Debug("'dnsrecords' ConfigMap contains no records")
return h.removeHeadlessSvcFinalizer(ctx, headlessSvc)
}
_, ok := cm.Data[operatorutils.DNSRecordsCMKey]
if !ok {
logger.Debug("'dnsrecords' ConfigMap contains no records")
return h.removeHeadlessSvcFinalizer(ctx, headlessSvc)
}
fqdn, _ := headlessSvc.GetAnnotations()[annotationTSMagicDNSName]
if fqdn == "" {
return h.removeHeadlessSvcFinalizer(ctx, headlessSvc)
}
logger.Infof("removing DNS record for MagicDNS name %s", fqdn)
updateFunc := func(rec *operatorutils.Records) {
delete(rec.IP4, fqdn)
}
if err = h.updateDNSConfig(ctx, updateFunc); err != nil {
return fmt.Errorf("error updating DNS config: %w", err)
}
return h.removeHeadlessSvcFinalizer(ctx, headlessSvc)
}
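// removeHeadlessSvcFinalizer removes the dns-records-reconciler finalizer from
// the headless Service, if present, and persists the update.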
func (dnsRR *dnsRecordsReconciler) removeHeadlessSvcFinalizer(ctx context.Context, headlessSvc *corev1.Service) error {
idx := slices.Index(headlessSvc.Finalizers, dnsRecordsRecocilerFinalizer)
if idx == -1 {
return nil
}
headlessSvc.Finalizers = append(headlessSvc.Finalizers[:idx], headlessSvc.Finalizers[idx+1:]...)
return dnsRR.Update(ctx, headlessSvc)
}
// fqdnForDNSRecord returns MagicDNS name associated with a given headless Service.
// If the headless Service is for a tailscale Ingress proxy, returns ingress.status.loadBalancer.ingress.hostname.
// If the headless Service is for a tailscale egress proxy configured via the tailscale.com/tailnet-fqdn annotation, returns the annotation value.
// This function is not expected to be called with headless Services for other
// proxy types, or any other Services, but it just returns an empty string if
// that happens.
func (dnsRR *dnsRecordsReconciler) fqdnForDNSRecord(ctx context.Context, headlessSvc *corev1.Service, logger *zap.SugaredLogger) (string, error) {
parentName := parentFromObjectLabels(headlessSvc)
if isManagedByType(headlessSvc, "ingress") {
ing := new(networkingv1.Ingress)
if err := dnsRR.Get(ctx, parentName, ing); err != nil {
return "", err
}
if len(ing.Status.LoadBalancer.Ingress) == 0 {
return "", nil
}
return ing.Status.LoadBalancer.Ingress[0].Hostname, nil
}
if isManagedByType(headlessSvc, "svc") {
svc := new(corev1.Service)
if err := dnsRR.Get(ctx, parentName, svc); apierrors.IsNotFound(err) {
logger.Info("[unexpected] parent Service for egress proxy %s not found", headlessSvc.Name)
return "", nil
} else if err != nil {
return "", err
}
return svc.Annotations[AnnotationTailnetTargetFQDN], nil
}
return "", nil
}
// updateDNSConfig runs the provided update function against dnsrecords
// ConfigMap. At this point the in-cluster ts.net nameserver is expected to be
// successfully created together with the ConfigMap.
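// A minimal usage sketch (names as used elsewhere in this file; the FQDN and
// IP address are illustrative):
//
//	err := dnsRR.updateDNSConfig(ctx, func(rec *operatorutils.Records) {
//		mak.Set(&rec.IP4, "foo.example.ts.net", []string{"10.0.0.5"})
//	})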
func (dnsRR *dnsRecordsReconciler) updateDNSConfig(ctx context.Context, update func(*operatorutils.Records)) error {
cm := &corev1.ConfigMap{}
err := dnsRR.Get(ctx, types.NamespacedName{Name: operatorutils.DNSRecordsCMName, Namespace: dnsRR.tsNamespace}, cm)
if apierrors.IsNotFound(err) {
dnsRR.logger.Info("[unexpected] dnsrecords ConfigMap not found in cluster. Not updating DNS records. Please open an issue and attach operator logs.")
return nil
}
if err != nil {
return fmt.Errorf("error retrieving dnsrecords ConfigMap: %w", err)
}
dnsRecords := operatorutils.Records{Version: operatorutils.Alpha1Version, IP4: map[string][]string{}}
if cm.Data != nil && cm.Data[operatorutils.DNSRecordsCMKey] != "" {
if err := json.Unmarshal([]byte(cm.Data[operatorutils.DNSRecordsCMKey]), &dnsRecords); err != nil {
return err
}
}
update(&dnsRecords)
dnsRecordsBs, err := json.Marshal(dnsRecords)
if err != nil {
return fmt.Errorf("error marshalling DNS records: %w", err)
}
mak.Set(&cm.Data, operatorutils.DNSRecordsCMKey, string(dnsRecordsBs))
return dnsRR.Update(ctx, cm)
}
// isSvcForFQDNEgressProxy returns true if the Service is a headless Service
// created for a proxy for a tailscale egress Service configured via
// tailscale.com/tailnet-fqdn annotation.
func (dnsRR *dnsRecordsReconciler) isSvcForFQDNEgressProxy(ctx context.Context, svc *corev1.Service) (bool, error) {
if !isManagedByType(svc, "svc") {
return false, nil
}
parentName := parentFromObjectLabels(svc)
parentSvc := new(corev1.Service)
if err := dnsRR.Get(ctx, parentName, parentSvc); apierrors.IsNotFound(err) {
return false, nil
} else if err != nil {
return false, err
}
annots := parentSvc.Annotations
return annots != nil && annots[AnnotationTailnetTargetFQDN] != "", nil
}

View File

@ -0,0 +1,198 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
package main
import (
"context"
"encoding/json"
"testing"
"github.com/google/go-cmp/cmp"
"go.uber.org/zap"
corev1 "k8s.io/api/core/v1"
discoveryv1 "k8s.io/api/discovery/v1"
networkingv1 "k8s.io/api/networking/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
operatorutils "tailscale.com/k8s-operator"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/tstest"
"tailscale.com/types/ptr"
)
func TestDNSRecordsReconciler(t *testing.T) {
// Preconfigure a cluster with a DNSConfig
dnsConfig := &tsapi.DNSConfig{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
},
TypeMeta: metav1.TypeMeta{Kind: "DNSConfig"},
Spec: tsapi.DNSConfigSpec{
Nameserver: &tsapi.Nameserver{},
}}
ing := &networkingv1.Ingress{
ObjectMeta: metav1.ObjectMeta{
Name: "ts-ingress",
Namespace: "test",
},
Spec: networkingv1.IngressSpec{
IngressClassName: ptr.To("tailscale"),
},
Status: networkingv1.IngressStatus{
LoadBalancer: networkingv1.IngressLoadBalancerStatus{
Ingress: []networkingv1.IngressLoadBalancerIngress{{
Hostname: "cluster.ingress.ts.net"}},
},
},
}
cm := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "dnsrecords", Namespace: "tailscale"}}
fc := fake.NewClientBuilder().
WithScheme(tsapi.GlobalScheme).
WithObjects(cm).
WithObjects(dnsConfig).
WithObjects(ing).
WithStatusSubresource(dnsConfig, ing).
Build()
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
cl := tstest.NewClock(tstest.ClockOpts{})
// Set the ready condition of the DNSConfig
mustUpdateStatus[tsapi.DNSConfig](t, fc, "", "test", func(c *tsapi.DNSConfig) {
operatorutils.SetDNSConfigCondition(c, tsapi.NameserverReady, metav1.ConditionTrue, reasonNameserverCreated, reasonNameserverCreated, 0, cl, zl.Sugar())
})
dnsRR := &dnsRecordsReconciler{
Client: fc,
logger: zl.Sugar(),
tsNamespace: "tailscale",
}
// 1. DNS record is created for an egress proxy configured via
// tailscale.com/tailnet-fqdn annotation
egressSvcFQDN := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "egress-fqdn",
Namespace: "test",
Annotations: map[string]string{"tailscale.com/tailnet-fqdn": "foo.bar.ts.net"},
},
Spec: corev1.ServiceSpec{
ExternalName: "unused",
Type: corev1.ServiceTypeExternalName,
},
}
headlessForEgressSvcFQDN := headlessSvcForParent(egressSvcFQDN, "svc") // create the proxy headless Service
ep := endpointSliceForService(headlessForEgressSvcFQDN, "10.9.8.7")
mustCreate(t, fc, egressSvcFQDN)
mustCreate(t, fc, headlessForEgressSvcFQDN)
mustCreate(t, fc, ep)
expectReconciled(t, dnsRR, "tailscale", "egress-fqdn") // dns-records-reconciler should reconcile the headless Service
// ConfigMap should now have a record for foo.bar.ts.net -> 10.9.8.7
wantHosts := map[string][]string{"foo.bar.ts.net": {"10.9.8.7"}}
expectHostsRecords(t, fc, wantHosts)
// 2. DNS record is updated if tailscale.com/tailnet-fqdn annotation's
// value changes
mustUpdate(t, fc, "test", "egress-fqdn", func(svc *corev1.Service) {
svc.Annotations["tailscale.com/tailnet-fqdn"] = "baz.bar.ts.net"
})
expectReconciled(t, dnsRR, "tailscale", "egress-fqdn") // dns-records-reconciler should reconcile the headless Service
wantHosts = map[string][]string{"baz.bar.ts.net": {"10.9.8.7"}}
expectHostsRecords(t, fc, wantHosts)
// 3. DNS record is updated if the IP address of the proxy Pod changes.
ep = endpointSliceForService(headlessForEgressSvcFQDN, "10.6.5.4")
mustUpdate(t, fc, ep.Namespace, ep.Name, func(ep *discoveryv1.EndpointSlice) {
ep.Endpoints[0].Addresses = []string{"10.6.5.4"}
})
expectReconciled(t, dnsRR, "tailscale", "egress-fqdn") // dns-records-reconciler should reconcile the headless Service
wantHosts = map[string][]string{"baz.bar.ts.net": {"10.6.5.4"}}
expectHostsRecords(t, fc, wantHosts)
// 4. DNS record is created for an ingress proxy configured via Ingress
headlessForIngress := headlessSvcForParent(ing, "ingress")
ep = endpointSliceForService(headlessForIngress, "10.9.8.7")
mustCreate(t, fc, headlessForIngress)
mustCreate(t, fc, ep)
expectReconciled(t, dnsRR, "tailscale", "ts-ingress") // dns-records-reconciler should reconcile the headless Service
wantHosts["cluster.ingress.ts.net"] = []string{"10.9.8.7"}
expectHostsRecords(t, fc, wantHosts)
// 5. DNS records are updated if Ingress's MagicDNS name changes (i.e. users changed spec.tls.hosts[0])
t.Log("test case 5")
mustUpdateStatus(t, fc, "test", "ts-ingress", func(ing *networkingv1.Ingress) {
ing.Status.LoadBalancer.Ingress[0].Hostname = "another.ingress.ts.net"
})
expectReconciled(t, dnsRR, "tailscale", "ts-ingress") // dns-records-reconciler should reconcile the headless Service
delete(wantHosts, "cluster.ingress.ts.net")
wantHosts["another.ingress.ts.net"] = []string{"10.9.8.7"}
expectHostsRecords(t, fc, wantHosts)
// 6. DNS records are updated if Ingress proxy's Pod IP changes
mustUpdate(t, fc, ep.Namespace, ep.Name, func(ep *discoveryv1.EndpointSlice) {
ep.Endpoints[0].Addresses = []string{"7.8.9.10"}
})
expectReconciled(t, dnsRR, "tailscale", "ts-ingress")
wantHosts["another.ingress.ts.net"] = []string{"7.8.9.10"}
expectHostsRecords(t, fc, wantHosts)
}
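// headlessSvcForParent returns a headless Service in the tailscale namespace
// labeled as a child of the given parent object, mimicking the proxy headless
// Services created by the operator.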
func headlessSvcForParent(o client.Object, typ string) *corev1.Service {
return &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: o.GetName(),
Namespace: "tailscale",
Labels: map[string]string{
LabelManaged: "true",
LabelParentName: o.GetName(),
LabelParentNamespace: o.GetNamespace(),
LabelParentType: typ,
},
},
Spec: corev1.ServiceSpec{
ClusterIP: "None",
Type: corev1.ServiceTypeClusterIP,
Selector: map[string]string{"foo": "bar"},
},
}
}
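// endpointSliceForService returns an EndpointSlice associated with the given
// headless Service, containing a single endpoint with the given IP address.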
func endpointSliceForService(svc *corev1.Service, ip string) *discoveryv1.EndpointSlice {
return &discoveryv1.EndpointSlice{
ObjectMeta: metav1.ObjectMeta{
Name: svc.Name,
Namespace: svc.Namespace,
Labels: map[string]string{discoveryv1.LabelServiceName: svc.Name},
},
Endpoints: []discoveryv1.Endpoint{{
Addresses: []string{ip},
}},
}
}
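// expectHostsRecords asserts that the 'dnsrecords' ConfigMap in the tailscale
// namespace contains exactly the given MagicDNS name to IPv4 address mappings.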
func expectHostsRecords(t *testing.T, cl client.Client, wantsHosts map[string][]string) {
t.Helper()
cm := new(corev1.ConfigMap)
if err := cl.Get(context.Background(), types.NamespacedName{Name: "dnsrecords", Namespace: "tailscale"}, cm); err != nil {
t.Fatalf("getting dnsconfig ConfigMap: %v", err)
}
if cm.Data == nil {
t.Fatal("dnsconfig ConfigMap has no data")
}
dnsConfigString, ok := cm.Data[operatorutils.DNSRecordsCMKey]
if !ok {
t.Fatal("dnsconfig ConfigMap does not contain dnsconfig")
}
dnsConfig := &operatorutils.Records{}
if err := json.Unmarshal([]byte(dnsConfigString), dnsConfig); err != nil {
t.Fatalf("unmarshaling dnsconfig: %v", err)
}
if diff := cmp.Diff(dnsConfig.IP4, wantsHosts); diff != "" {
t.Fatalf("unexpected dns config (-got +want):\n%s", diff)
}
}

View File

@ -22,9 +22,11 @@ const (
operatorDeploymentFilesPath = "cmd/k8s-operator/deploy"
connectorCRDPath = operatorDeploymentFilesPath + "/crds/tailscale.com_connectors.yaml"
proxyClassCRDPath = operatorDeploymentFilesPath + "/crds/tailscale.com_proxyclasses.yaml"
dnsConfigCRDPath = operatorDeploymentFilesPath + "/crds/tailscale.com_dnsconfigs.yaml"
helmTemplatesPath = operatorDeploymentFilesPath + "/chart/templates"
connectorCRDHelmTemplatePath = helmTemplatesPath + "/connector.yaml"
proxyClassCRDHelmTemplatePath = helmTemplatesPath + "/proxyclass.yaml"
dnsConfigCRDHelmTemplatePath = helmTemplatesPath + "/dnsconfig.yaml"
helmConditionalStart = "{{ if .Values.installCRDs -}}\n"
helmConditionalEnd = "{{- end -}}"
@ -36,10 +38,10 @@ func main() {
}
repoRoot := "../../"
switch os.Args[1] {
case "helmcrd": // insert CRD to Helm templates behind a installCRDs=true conditional check
log.Print("Adding Connector CRD to Helm templates")
case "helmcrd": // insert CRDs to Helm templates behind a installCRDs=true conditional check
log.Print("Adding CRDs to Helm templates")
if err := generate("./"); err != nil {
log.Fatalf("error adding Connector CRD to Helm templates: %v", err)
log.Fatalf("error adding CRDs to Helm templates: %v", err)
}
return
case "staticmanifests": // generate static manifests from Helm templates (including the CRD)
@ -108,7 +110,7 @@ func main() {
}
}
// generate places tailscale.com CRDs (currently Connector and ProxyClass) into
// generate places tailscale.com CRDs (currently Connector, ProxyClass and DNSConfig) into
// the Helm chart templates behind .Values.installCRDs=true condition (true by
// default).
func generate(baseDir string) error {
@ -140,6 +142,9 @@ func generate(baseDir string) error {
if err := addCRDToHelm(proxyClassCRDPath, proxyClassCRDHelmTemplatePath); err != nil {
return fmt.Errorf("error adding ProxyClass CRD to Helm templates: %w", err)
}
if err := addCRDToHelm(dnsConfigCRDPath, dnsConfigCRDHelmTemplatePath); err != nil {
return fmt.Errorf("error adding DNSConfig CRD to Helm templates: %w", err)
}
return nil
}
@ -151,5 +156,8 @@ func cleanup(baseDir string) error {
if err := os.Remove(filepath.Join(baseDir, proxyClassCRDHelmTemplatePath)); err != nil && !os.IsNotExist(err) {
return fmt.Errorf("error cleaning up ProxyClass CRD template: %w", err)
}
if err := os.Remove(filepath.Join(baseDir, dnsConfigCRDHelmTemplatePath)); err != nil && !os.IsNotExist(err) {
return fmt.Errorf("error cleaning up DNSConfig CRD template: %w", err)
}
return nil
}

View File

@ -56,6 +56,9 @@ func Test_generate(t *testing.T) {
if !strings.Contains(installContentsWithCRD.String(), "name: proxyclasses.tailscale.com") {
t.Errorf("ProxyClass CRD not found in default chart install")
}
if !strings.Contains(installContentsWithCRD.String(), "name: dnsconfigs.tailscale.com") {
t.Errorf("DNSConfig CRD not found in default chart install")
}
// Test that CRDs can be excluded from Helm chart install
installContentsWithoutCRD := bytes.NewBuffer([]byte{})
@ -71,4 +74,7 @@ func Test_generate(t *testing.T) {
if strings.Contains(installContentsWithoutCRD.String(), "name: connectors.tailscale.com") {
t.Errorf("ProxyClass CRD found in chart install that should not contain a CRD")
}
if strings.Contains(installContentsWithoutCRD.String(), "name: dnsconfigs.tailscale.com") {
t.Errorf("DNSConfig CRD found in chart install that should not contain a CRD")
}
}

View File

@ -0,0 +1,275 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
package main
import (
"context"
"fmt"
"slices"
"sync"
_ "embed"
"github.com/pkg/errors"
"go.uber.org/zap"
xslices "golang.org/x/exp/slices"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
apiequality "k8s.io/apimachinery/pkg/api/equality"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/tools/record"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"sigs.k8s.io/yaml"
tsoperator "tailscale.com/k8s-operator"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/tstime"
"tailscale.com/util/clientmetric"
"tailscale.com/util/set"
)
const (
reasonNameserverCreationFailed = "NameserverCreationFailed"
reasonMultipleDNSConfigsPresent = "MultipleDNSConfigsPresent"
reasonNameserverCreated = "NameserverCreated"
messageNameserverCreationFailed = "Failed creating nameserver resources: %v"
messageMultipleDNSConfigsPresent = "Multiple DNSConfig resources found in cluster. Please ensure no more than one is present."
)
// NameserverReconciler knows how to create nameserver resources in cluster in
// response to users applying DNSConfig.
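//
// A minimal sketch of a DNSConfig that this reconciler acts on (names and
// values are illustrative, mirroring the fields exercised in the tests below):
//
//	dnsCfg := &tsapi.DNSConfig{
//		ObjectMeta: metav1.ObjectMeta{Name: "ts-dns"},
//		Spec: tsapi.DNSConfigSpec{
//			Nameserver: &tsapi.Nameserver{
//				Image: &tsapi.Image{Repo: "example.com/nameserver", Tag: "v0.0.1"},
//			},
//		},
//	}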
type NameserverReconciler struct {
client.Client
logger *zap.SugaredLogger
recorder record.EventRecorder
clock tstime.Clock
tsNamespace string
mu sync.Mutex // protects following
managedNameservers set.Slice[types.UID] // one or none
}
var (
gaugeNameserverResources = clientmetric.NewGauge("k8s_nameserver_resources")
)
func (a *NameserverReconciler) Reconcile(ctx context.Context, req reconcile.Request) (res reconcile.Result, err error) {
logger := a.logger.With("dnsConfig", req.Name)
logger.Debugf("starting reconcile")
defer logger.Debugf("reconcile finished")
var dnsCfg tsapi.DNSConfig
err = a.Get(ctx, req.NamespacedName, &dnsCfg)
if apierrors.IsNotFound(err) {
// Request object not found, could have been deleted after reconcile request.
logger.Debugf("dnsconfig not found, assuming it was deleted")
return reconcile.Result{}, nil
} else if err != nil {
return reconcile.Result{}, fmt.Errorf("failed to get dnsconfig: %w", err)
}
if !dnsCfg.DeletionTimestamp.IsZero() {
ix := xslices.Index(dnsCfg.Finalizers, FinalizerName)
if ix < 0 {
logger.Debugf("no finalizer, nothing to do")
return reconcile.Result{}, nil
}
logger.Info("Cleaning up DNSConfig resources")
if err := a.maybeCleanup(ctx, &dnsCfg, logger); err != nil {
logger.Errorf("error cleaning up reconciler resource: %v", err)
return res, err
}
dnsCfg.Finalizers = append(dnsCfg.Finalizers[:ix], dnsCfg.Finalizers[ix+1:]...)
if err := a.Update(ctx, &dnsCfg); err != nil {
logger.Errorf("error removing finalizer: %v", err)
return reconcile.Result{}, err
}
logger.Infof("Nameserver resources cleaned up")
return reconcile.Result{}, nil
}
oldCnStatus := dnsCfg.Status.DeepCopy()
setStatus := func(dnsCfg *tsapi.DNSConfig, conditionType tsapi.ConnectorConditionType, status metav1.ConditionStatus, reason, message string) (reconcile.Result, error) {
tsoperator.SetDNSConfigCondition(dnsCfg, tsapi.NameserverReady, status, reason, message, dnsCfg.Generation, a.clock, logger)
if !apiequality.Semantic.DeepEqual(oldCnStatus, dnsCfg.Status) {
// An error encountered here should get returned by the Reconcile function.
if updateErr := a.Client.Status().Update(ctx, dnsCfg); updateErr != nil {
err = errors.Wrap(err, updateErr.Error())
}
}
return res, err
}
var dnsCfgs tsapi.DNSConfigList
if err := a.List(ctx, &dnsCfgs); err != nil {
return res, fmt.Errorf("error listing DNSConfigs: %w", err)
}
if len(dnsCfgs.Items) > 1 { // enforce DNSConfig to be a singleton
msg := "invalid cluster configuration: more than one tailscale.com/dnsconfigs found. Please ensure that no more than one is created."
logger.Error(msg)
a.recorder.Event(&dnsCfg, corev1.EventTypeWarning, reasonMultipleDNSConfigsPresent, messageMultipleDNSConfigsPresent)
setStatus(&dnsCfg, tsapi.NameserverReady, metav1.ConditionFalse, reasonMultipleDNSConfigsPresent, messageMultipleDNSConfigsPresent)
}
if !slices.Contains(dnsCfg.Finalizers, FinalizerName) {
logger.Infof("ensuring nameserver resources")
dnsCfg.Finalizers = append(dnsCfg.Finalizers, FinalizerName)
if err := a.Update(ctx, &dnsCfg); err != nil {
msg := fmt.Sprintf(messageNameserverCreationFailed, err)
logger.Error(msg)
return setStatus(&dnsCfg, tsapi.NameserverReady, metav1.ConditionFalse, reasonNameserverCreationFailed, msg)
}
}
if err := a.maybeProvision(ctx, &dnsCfg, logger); err != nil {
return reconcile.Result{}, fmt.Errorf("error provisioning nameserver resources: %w", err)
}
a.mu.Lock()
a.managedNameservers.Add(dnsCfg.UID)
a.mu.Unlock()
gaugeNameserverResources.Set(int64(a.managedNameservers.Len()))
svc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{Name: "nameserver", Namespace: a.tsNamespace},
}
if err := a.Client.Get(ctx, client.ObjectKeyFromObject(svc), svc); err != nil {
return res, fmt.Errorf("error getting Service: %w", err)
}
if ip := svc.Spec.ClusterIP; ip != "" && ip != "None" {
dnsCfg.Status.Nameserver = &tsapi.NameserverStatus{
IP: ip,
}
return setStatus(&dnsCfg, tsapi.NameserverReady, metav1.ConditionTrue, reasonNameserverCreated, reasonNameserverCreated)
}
logger.Info("nameserver Service does not have an IP address allocated, waiting...")
return reconcile.Result{}, nil
}
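// nameserverResourceLabels returns the labels set on all nameserver resources
// created for the DNSConfig with the given name in the given namespace.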
func nameserverResourceLabels(name, namespace string) map[string]string {
labels := childResourceLabels(name, namespace, "nameserver")
labels["app.kubernetes.io/name"] = "tailscale"
labels["app.kubernetes.io/component"] = "nameserver"
return labels
}
func (a *NameserverReconciler) maybeProvision(ctx context.Context, tsDNSCfg *tsapi.DNSConfig, logger *zap.SugaredLogger) error {
labels := nameserverResourceLabels(tsDNSCfg.Name, a.tsNamespace)
dCfg := &deployConfig{
ownerRefs: []metav1.OwnerReference{*metav1.NewControllerRef(tsDNSCfg, tsapi.SchemeGroupVersion.WithKind("DNSConfig"))},
namespace: a.tsNamespace,
labels: labels,
}
if tsDNSCfg.Spec.Nameserver.Image.Repo != "" {
dCfg.imageRepo = tsDNSCfg.Spec.Nameserver.Image.Repo
}
if tsDNSCfg.Spec.Nameserver.Image.Tag != "" {
dCfg.imageTag = tsDNSCfg.Spec.Nameserver.Image.Tag
}
for _, deployable := range []deployable{saDeployable, deployDeployable, svcDeployable, cmDeployable} {
if err := deployable.updateObj(ctx, dCfg, a.Client); err != nil {
return fmt.Errorf("error reconciling %s: %w", deployable.kind, err)
}
}
return nil
}
// maybeCleanup removes the DNSConfig from being tracked. The cluster resources
// created for it will be automatically garbage collected, as they are owned by
// the DNSConfig.
func (a *NameserverReconciler) maybeCleanup(ctx context.Context, dnsCfg *tsapi.DNSConfig, logger *zap.SugaredLogger) error {
a.mu.Lock()
a.managedNameservers.Remove(dnsCfg.UID)
a.mu.Unlock()
gaugeNameserverResources.Set(int64(a.managedNameservers.Len()))
return nil
}
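// deployable is one of the nameserver resources (Deployment, ServiceAccount,
// Service, ConfigMap) that maybeProvision creates or updates from an embedded
// manifest.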
type deployable struct {
kind string
updateObj func(context.Context, *deployConfig, client.Client) error
}
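// deployConfig carries the per-DNSConfig values (image, labels, owner
// references, namespace) that are applied to each deployable before it is
// created or updated.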
type deployConfig struct {
imageRepo string
imageTag string
labels map[string]string
ownerRefs []metav1.OwnerReference
namespace string
}
var (
//go:embed deploy/manifests/nameserver/cm.yaml
cmYaml []byte
//go:embed deploy/manifests/nameserver/deploy.yaml
deployYaml []byte
//go:embed deploy/manifests/nameserver/sa.yaml
saYaml []byte
//go:embed deploy/manifests/nameserver/svc.yaml
svcYaml []byte
deployDeployable = deployable{
kind: "Deployment",
updateObj: func(ctx context.Context, cfg *deployConfig, kubeClient client.Client) error {
d := new(appsv1.Deployment)
if err := yaml.Unmarshal(deployYaml, &d); err != nil {
return fmt.Errorf("error unmarshalling Deployment yaml: %w", err)
}
d.Spec.Template.Spec.Containers[0].Image = fmt.Sprintf("%s:%s", cfg.imageRepo, cfg.imageTag)
d.ObjectMeta.Namespace = cfg.namespace
d.ObjectMeta.Labels = cfg.labels
d.ObjectMeta.OwnerReferences = cfg.ownerRefs
updateF := func(oldD *appsv1.Deployment) {
oldD.Spec = d.Spec
}
_, err := createOrUpdate[appsv1.Deployment](ctx, kubeClient, cfg.namespace, d, updateF)
return err
},
}
saDeployable = deployable{
kind: "ServiceAccount",
updateObj: func(ctx context.Context, cfg *deployConfig, kubeClient client.Client) error {
sa := new(corev1.ServiceAccount)
if err := yaml.Unmarshal(saYaml, &sa); err != nil {
return fmt.Errorf("error unmarshalling ServiceAccount yaml: %w", err)
}
sa.ObjectMeta.Labels = cfg.labels
sa.ObjectMeta.OwnerReferences = cfg.ownerRefs
sa.ObjectMeta.Namespace = cfg.namespace
_, err := createOrUpdate(ctx, kubeClient, cfg.namespace, sa, func(*corev1.ServiceAccount) {})
return err
},
}
svcDeployable = deployable{
kind: "Service",
updateObj: func(ctx context.Context, cfg *deployConfig, kubeClient client.Client) error {
svc := new(corev1.Service)
if err := yaml.Unmarshal(svcYaml, &svc); err != nil {
return fmt.Errorf("error unmarshalling Service yaml: %w", err)
}
svc.ObjectMeta.Labels = cfg.labels
svc.ObjectMeta.OwnerReferences = cfg.ownerRefs
svc.ObjectMeta.Namespace = cfg.namespace
_, err := createOrUpdate[corev1.Service](ctx, kubeClient, cfg.namespace, svc, func(*corev1.Service) {})
return err
},
}
cmDeployable = deployable{
kind: "ConfigMap",
updateObj: func(ctx context.Context, cfg *deployConfig, kubeClient client.Client) error {
cm := new(corev1.ConfigMap)
if err := yaml.Unmarshal(cmYaml, &cm); err != nil {
return fmt.Errorf("error unmarshalling ConfigMap yaml: %w", err)
}
cm.ObjectMeta.Labels = cfg.labels
cm.ObjectMeta.OwnerReferences = cfg.ownerRefs
cm.ObjectMeta.Namespace = cfg.namespace
_, err := createOrUpdate[corev1.ConfigMap](ctx, kubeClient, cfg.namespace, cm, func(cm *corev1.ConfigMap) {})
return err
},
}
)

View File

@ -0,0 +1,118 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
// tailscale-operator provides a way to expose services running in a Kubernetes
// cluster to your Tailnet and to make Tailscale nodes available to cluster
// workloads
package main
import (
"encoding/json"
"testing"
"time"
"go.uber.org/zap"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
"sigs.k8s.io/yaml"
operatorutils "tailscale.com/k8s-operator"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/tstest"
"tailscale.com/util/mak"
)
func TestNameserverReconciler(t *testing.T) {
dnsCfg := &tsapi.DNSConfig{
TypeMeta: metav1.TypeMeta{Kind: "DNSConfig", APIVersion: "tailscale.com/v1alpha1"},
ObjectMeta: metav1.ObjectMeta{
Name: "test",
},
Spec: tsapi.DNSConfigSpec{
Nameserver: &tsapi.Nameserver{
Image: &tsapi.Image{
Repo: "test",
Tag: "v0.0.1",
},
},
},
}
fc := fake.NewClientBuilder().
WithScheme(tsapi.GlobalScheme).
WithObjects(dnsCfg).
WithStatusSubresource(dnsCfg).
Build()
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
cl := tstest.NewClock(tstest.ClockOpts{})
nr := &NameserverReconciler{
Client: fc,
clock: cl,
logger: zl.Sugar(),
tsNamespace: "tailscale",
}
expectReconciled(t, nr, "", "test")
// Verify that nameserver Deployment has been created and has the expected fields.
wantsDeploy := &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{Name: "nameserver", Namespace: "tailscale"}, TypeMeta: metav1.TypeMeta{Kind: "Deployment", APIVersion: appsv1.SchemeGroupVersion.Identifier()}}
if err := yaml.Unmarshal(deployYaml, wantsDeploy); err != nil {
t.Fatalf("unmarshalling yaml: %v", err)
}
dnsCfgOwnerRef := metav1.NewControllerRef(dnsCfg, tsapi.SchemeGroupVersion.WithKind("DNSConfig"))
wantsDeploy.OwnerReferences = []metav1.OwnerReference{*dnsCfgOwnerRef}
wantsDeploy.Spec.Template.Spec.Containers[0].Image = "test:v0.0.1"
wantsDeploy.Namespace = "tailscale"
labels := nameserverResourceLabels("test", "tailscale")
wantsDeploy.ObjectMeta.Labels = labels
expectEqual(t, fc, wantsDeploy, nil)
// Verify that DNSConfig advertises the nameserver's Service IP address,
// has the ready status condition and tailscale finalizer.
mustUpdate(t, fc, "tailscale", "nameserver", func(svc *corev1.Service) {
svc.Spec.ClusterIP = "1.2.3.4"
})
expectReconciled(t, nr, "", "test")
dnsCfg.Status.Nameserver = &tsapi.NameserverStatus{
IP: "1.2.3.4",
}
dnsCfg.Finalizers = []string{FinalizerName}
dnsCfg.Status.Conditions = append(dnsCfg.Status.Conditions, tsapi.ConnectorCondition{
Type: tsapi.NameserverReady,
Status: metav1.ConditionTrue,
Reason: reasonNameserverCreated,
Message: reasonNameserverCreated,
LastTransitionTime: &metav1.Time{Time: cl.Now().Truncate(time.Second)},
})
expectEqual(t, fc, dnsCfg, nil)
// Verify that nameserver image gets updated to match DNSConfig spec.
mustUpdate(t, fc, "", "test", func(dnsCfg *tsapi.DNSConfig) {
dnsCfg.Spec.Nameserver.Image.Tag = "v0.0.2"
})
expectReconciled(t, nr, "", "test")
wantsDeploy.Spec.Template.Spec.Containers[0].Image = "test:v0.0.2"
expectEqual(t, fc, wantsDeploy, nil)
// Verify that when another actor sets ConfigMap data, it does not get
// overwritten by nameserver reconciler.
dnsRecords := &operatorutils.Records{Version: "v1alpha1", IP4: map[string][]string{"foo.ts.net": {"1.2.3.4"}}}
bs, err := json.Marshal(dnsRecords)
if err != nil {
t.Fatalf("error marshalling ConfigMap contents: %v", err)
}
mustUpdate(t, fc, "tailscale", "dnsrecords", func(cm *corev1.ConfigMap) {
mak.Set(&cm.Data, "records.json", string(bs))
})
expectReconciled(t, nr, "", "test")
wantCm := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "dnsrecords",
Namespace: "tailscale", Labels: labels, OwnerReferences: []metav1.OwnerReference{*dnsCfgOwnerRef}},
TypeMeta: metav1.TypeMeta{Kind: "ConfigMap", APIVersion: "v1"},
Data: map[string]string{"records.json": string(bs)},
}
expectEqual(t, fc, wantCm, nil)
}

View File

@ -20,6 +20,7 @@ import (
"golang.org/x/oauth2/clientcredentials"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
discoveryv1 "k8s.io/api/discovery/v1"
networkingv1 "k8s.io/api/networking/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/rest"
@ -59,12 +60,13 @@ func main() {
tailscale.I_Acknowledge_This_API_Is_Unstable = true
var (
tsNamespace = defaultEnv("OPERATOR_NAMESPACE", "")
tslogging = defaultEnv("OPERATOR_LOGGING", "info")
image = defaultEnv("PROXY_IMAGE", "tailscale/tailscale:latest")
priorityClassName = defaultEnv("PROXY_PRIORITY_CLASS_NAME", "")
tags = defaultEnv("PROXY_TAGS", "tag:k8s")
tsFirewallMode = defaultEnv("PROXY_FIREWALL_MODE", "")
tsNamespace = defaultEnv("OPERATOR_NAMESPACE", "")
tslogging = defaultEnv("OPERATOR_LOGGING", "info")
image = defaultEnv("PROXY_IMAGE", "tailscale/tailscale:latest")
priorityClassName = defaultEnv("PROXY_PRIORITY_CLASS_NAME", "")
tags = defaultEnv("PROXY_TAGS", "tag:k8s")
tsFirewallMode = defaultEnv("PROXY_FIREWALL_MODE", "")
isDefaultLoadBalancer = defaultBool("OPERATOR_DEFAULT_LOAD_BALANCER", false)
)
var opts []kzap.Opts
@ -93,9 +95,19 @@ func main() {
defer s.Close()
restConfig := config.GetConfigOrDie()
maybeLaunchAPIServerProxy(zlog, restConfig, s, mode)
// TODO (irbekrm): gather the reconciler options into an opts struct
// rather than passing a million of them in one by one.
runReconcilers(zlog, s, tsNamespace, restConfig, tsClient, image, priorityClassName, tags, tsFirewallMode)
rOpts := reconcilerOpts{
log: zlog,
tsServer: s,
tsClient: tsClient,
tailscaleNamespace: tsNamespace,
restConfig: restConfig,
proxyImage: image,
proxyPriorityClassName: priorityClassName,
proxyActAsDefaultLoadBalancer: isDefaultLoadBalancer,
proxyTags: tags,
proxyFirewallMode: tsFirewallMode,
}
runReconcilers(rOpts)
}
// initTSNet initializes the tsnet.Server and logs in to Tailscale. It uses the
@ -203,11 +215,8 @@ waitOnline:
// runReconcilers starts the controller-runtime manager and registers the
// ServiceReconciler. It blocks forever.
func runReconcilers(zlog *zap.SugaredLogger, s *tsnet.Server, tsNamespace string, restConfig *rest.Config, tsClient *tailscale.Client, image, priorityClassName, tags, tsFirewallMode string) {
var (
isDefaultLoadBalancer = defaultBool("OPERATOR_DEFAULT_LOAD_BALANCER", false)
)
startlog := zlog.Named("startReconcilers")
func runReconcilers(opts reconcilerOpts) {
startlog := opts.log.Named("startReconcilers")
// For secrets and statefulsets, we only get permission to touch the objects
// in the controller's own namespace. This cannot be expressed by
// .Watches(...) below, instead you have to add a per-type field selector to
@ -215,7 +224,7 @@ func runReconcilers(zlog *zap.SugaredLogger, s *tsnet.Server, tsNamespace string
// implicitly filter what parts of the world the builder code gets to see at
// all.
nsFilter := cache.ByObject{
Field: client.InNamespace(tsNamespace).AsSelector(),
Field: client.InNamespace(opts.tailscaleNamespace).AsSelector(),
}
mgrOpts := manager.Options{
// TODO (irbekrm): stricter filtering what we watch/cache/call
@ -223,33 +232,37 @@ func runReconcilers(zlog *zap.SugaredLogger, s *tsnet.Server, tsNamespace string
// resources that we GET via the controller manager's client.
Cache: cache.Options{
ByObject: map[client.Object]cache.ByObject{
&corev1.Secret{}: nsFilter,
&appsv1.StatefulSet{}: nsFilter,
&corev1.Secret{}: nsFilter,
&corev1.ServiceAccount{}: nsFilter,
&corev1.ConfigMap{}: nsFilter,
&appsv1.StatefulSet{}: nsFilter,
&appsv1.Deployment{}: nsFilter,
&discoveryv1.EndpointSlice{}: nsFilter,
},
},
Scheme: tsapi.GlobalScheme,
}
mgr, err := manager.New(restConfig, mgrOpts)
mgr, err := manager.New(opts.restConfig, mgrOpts)
if err != nil {
startlog.Fatalf("could not create manager: %v", err)
}
svcFilter := handler.EnqueueRequestsFromMapFunc(serviceHandler)
svcChildFilter := handler.EnqueueRequestsFromMapFunc(managedResourceHandlerForType("svc"))
// If a ProxyClassChanges, enqueue all Services labeled with that
// If a ProxyClass changes, enqueue all Services labeled with that
// ProxyClass's name.
proxyClassFilterForSvc := handler.EnqueueRequestsFromMapFunc(proxyClassHandlerForSvc(mgr.GetClient(), startlog))
eventRecorder := mgr.GetEventRecorderFor("tailscale-operator")
ssr := &tailscaleSTSReconciler{
Client: mgr.GetClient(),
tsnetServer: s,
tsClient: tsClient,
defaultTags: strings.Split(tags, ","),
operatorNamespace: tsNamespace,
proxyImage: image,
proxyPriorityClassName: priorityClassName,
tsFirewallMode: tsFirewallMode,
tsnetServer: opts.tsServer,
tsClient: opts.tsClient,
defaultTags: strings.Split(opts.proxyTags, ","),
operatorNamespace: opts.tailscaleNamespace,
proxyImage: opts.proxyImage,
proxyPriorityClassName: opts.proxyPriorityClassName,
tsFirewallMode: opts.proxyFirewallMode,
}
err = builder.
ControllerManagedBy(mgr).
@ -261,10 +274,10 @@ func runReconcilers(zlog *zap.SugaredLogger, s *tsnet.Server, tsNamespace string
Complete(&ServiceReconciler{
ssr: ssr,
Client: mgr.GetClient(),
logger: zlog.Named("service-reconciler"),
isDefaultLoadBalancer: isDefaultLoadBalancer,
logger: opts.log.Named("service-reconciler"),
isDefaultLoadBalancer: opts.proxyActAsDefaultLoadBalancer,
recorder: eventRecorder,
tsNamespace: tsNamespace,
tsNamespace: opts.tailscaleNamespace,
})
if err != nil {
startlog.Fatalf("could not create service reconciler: %v", err)
@ -286,7 +299,7 @@ func runReconcilers(zlog *zap.SugaredLogger, s *tsnet.Server, tsNamespace string
ssr: ssr,
recorder: eventRecorder,
Client: mgr.GetClient(),
logger: zlog.Named("ingress-reconciler"),
logger: opts.log.Named("ingress-reconciler"),
})
if err != nil {
startlog.Fatalf("could not create ingress reconciler: %v", err)
@ -305,29 +318,201 @@ func runReconcilers(zlog *zap.SugaredLogger, s *tsnet.Server, tsNamespace string
ssr: ssr,
recorder: eventRecorder,
Client: mgr.GetClient(),
logger: zlog.Named("connector-reconciler"),
logger: opts.log.Named("connector-reconciler"),
clock: tstime.DefaultClock{},
})
if err != nil {
startlog.Fatal("could not create connector reconciler: %v", err)
startlog.Fatalf("could not create connector reconciler: %v", err)
}
// TODO (irbekrm): switch to metadata-only watches for resources whose
// spec we don't need to inspect to reduce memory consumption.
// https://github.com/kubernetes-sigs/controller-runtime/issues/1159
nameserverFilter := handler.EnqueueRequestsFromMapFunc(managedResourceHandlerForType("nameserver"))
err = builder.ControllerManagedBy(mgr).
For(&tsapi.DNSConfig{}).
Watches(&appsv1.Deployment{}, nameserverFilter).
Watches(&corev1.ConfigMap{}, nameserverFilter).
Watches(&corev1.Service{}, nameserverFilter).
Watches(&corev1.ServiceAccount{}, nameserverFilter).
Complete(&NameserverReconciler{
recorder: eventRecorder,
tsNamespace: opts.tailscaleNamespace,
Client: mgr.GetClient(),
logger: opts.log.Named("nameserver-reconciler"),
clock: tstime.DefaultClock{},
})
if err != nil {
startlog.Fatalf("could not create nameserver reconciler: %v", err)
}
err = builder.ControllerManagedBy(mgr).
For(&tsapi.ProxyClass{}).
Complete(&ProxyClassReconciler{
Client: mgr.GetClient(),
recorder: eventRecorder,
logger: zlog.Named("proxyclass-reconciler"),
logger: opts.log.Named("proxyclass-reconciler"),
clock: tstime.DefaultClock{},
})
if err != nil {
startlog.Fatalf("could not create proxyclass reconciler: %v", err)
}
logger := startlog.Named("dns-records-reconciler-event-handlers")
// On EndpointSlice events, if it is an EndpointSlice for an
// ingress/egress proxy headless Service, reconcile the headless
// Service.
dnsRREpsOpts := handler.EnqueueRequestsFromMapFunc(dnsRecordsReconcilerEndpointSliceHandler)
// On DNSConfig changes, reconcile all headless Services for
// ingress/egress proxies in operator namespace.
dnsRRDNSConfigOpts := handler.EnqueueRequestsFromMapFunc(enqueueAllIngressEgressProxySvcsInNS(opts.tailscaleNamespace, mgr.GetClient(), logger))
// On Service events, if it is an ingress/egress proxy headless Service, reconcile it.
dnsRRServiceOpts := handler.EnqueueRequestsFromMapFunc(dnsRecordsReconcilerServiceHandler)
// On Ingress events, if it is a tailscale Ingress or if tailscale is the default ingress controller, reconcile the proxy
// headless Service.
dnsRRIngressOpts := handler.EnqueueRequestsFromMapFunc(dnsRecordsReconcilerIngressHandler(opts.tailscaleNamespace, opts.proxyActAsDefaultLoadBalancer, mgr.GetClient(), logger))
err = builder.ControllerManagedBy(mgr).
Named("dns-records-reconciler").
Watches(&corev1.Service{}, dnsRRServiceOpts).
Watches(&networkingv1.Ingress{}, dnsRRIngressOpts).
Watches(&discoveryv1.EndpointSlice{}, dnsRREpsOpts).
Watches(&tsapi.DNSConfig{}, dnsRRDNSConfigOpts).
Complete(&dnsRecordsReconciler{
Client: mgr.GetClient(),
tsNamespace: opts.tailscaleNamespace,
logger: opts.log.Named("dns-records-reconciler"),
isDefaultLoadBalancer: opts.proxyActAsDefaultLoadBalancer,
})
if err != nil {
startlog.Fatalf("could not create DNS records reconciler: %v", err)
}
startlog.Infof("Startup complete, operator running, version: %s", version.Long())
if err := mgr.Start(signals.SetupSignalHandler()); err != nil {
startlog.Fatalf("could not start manager: %v", err)
}
}
type reconcilerOpts struct {
log *zap.SugaredLogger
tsServer *tsnet.Server
tsClient *tailscale.Client
tailscaleNamespace string // namespace in which operator resources will be deployed
restConfig *rest.Config // config for connecting to the kube API server
proxyImage string // <proxy-image-repo>:<proxy-image-tag>
// proxyPriorityClassName is the PriorityClass to be set for proxy Pods. This
// is a legacy mechanism for cluster resource configuration options -
// going forward use ProxyClass.
// https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass
proxyPriorityClassName string
// proxyTags are ACL tags to tag proxy auth keys. Multiple tags should
// be provided as a string with comma-separated tag values. Proxy tags
// default to tag:k8s.
// https://tailscale.com/kb/1085/auth-keys
proxyTags string
// proxyActAsDefaultLoadBalancer determines whether this operator
// instance should act as the default ingress controller when looking at
// Ingress resources with unset ingress.spec.ingressClassName.
// TODO (irbekrm): this setting does not respect the default
// IngressClass.
// https://kubernetes.io/docs/concepts/services-networking/ingress/#default-ingress-class
// We should fix that and preferably integrate with that mechanism as
// well - perhaps make the operator itself create the default
// IngressClass if this is set to true.
proxyActAsDefaultLoadBalancer bool
// proxyFirewallMode determines whether non-userspace proxies should use
// iptables or nftables for firewall configuration. Accepted values are
// iptables, nftables and auto. If set to auto, proxy will automatically
// determine which mode is supported for a given host (prefer nftables).
// Auto is usually the best choice, unless you want to explicitly set
// specific mode for debugging purposes.
proxyFirewallMode string
}
// enqueueAllIngressEgressProxySvcsInNS returns a reconcile request for each
// ingress/egress proxy headless Service found in the provided namespace.
func enqueueAllIngressEgressProxySvcsInNS(ns string, cl client.Client, logger *zap.SugaredLogger) handler.MapFunc {
return func(ctx context.Context, _ client.Object) []reconcile.Request {
reqs := make([]reconcile.Request, 0)
// Get all headless Services for proxies configured using Service.
svcProxyLabels := map[string]string{
LabelManaged: "true",
LabelParentType: "svc",
}
svcHeadlessSvcList := &corev1.ServiceList{}
if err := cl.List(ctx, svcHeadlessSvcList, client.InNamespace(ns), client.MatchingLabels(svcProxyLabels)); err != nil {
logger.Errorf("error listing headless Services for tailscale ingress/egress Services in operator namespace: %v", err)
return nil
}
for _, svc := range svcHeadlessSvcList.Items {
reqs = append(reqs, reconcile.Request{NamespacedName: types.NamespacedName{Namespace: svc.Namespace, Name: svc.Name}})
}
// Get all headless Services for proxies configured using Ingress.
ingProxyLabels := map[string]string{
LabelManaged: "true",
LabelParentType: "ingress",
}
ingHeadlessSvcList := &corev1.ServiceList{}
if err := cl.List(ctx, ingHeadlessSvcList, client.InNamespace(ns), client.MatchingLabels(ingProxyLabels)); err != nil {
logger.Errorf("error listing headless Services for tailscale Ingresses in operator namespace: %v", err)
return nil
}
for _, svc := range ingHeadlessSvcList.Items {
reqs = append(reqs, reconcile.Request{NamespacedName: types.NamespacedName{Namespace: svc.Namespace, Name: svc.Name}})
}
return reqs
}
}
// dnsRecordsReconcilerEndpointSliceHandler filters EndpointSlice events for
// which dns-records-reconciler should reconcile a headless Service. The only
// events it should reconcile are those for EndpointSlices associated with
// proxy headless Services.
func dnsRecordsReconcilerEndpointSliceHandler(ctx context.Context, o client.Object) []reconcile.Request {
if !isManagedByType(o, "svc") && !isManagedByType(o, "ingress") {
return nil
}
headlessSvcName, ok := o.GetLabels()[discoveryv1.LabelServiceName] // https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/#ownership
if !ok {
return nil
}
return []reconcile.Request{{NamespacedName: types.NamespacedName{Namespace: o.GetNamespace(), Name: headlessSvcName}}}
}
// dnsRecordsReconcilerServiceHandler filters Service events that
// dns-records-reconciler should reconcile. If the event is for a cluster
// ingress/cluster egress proxy's headless Service, it returns that Service for
// reconciliation.
func dnsRecordsReconcilerServiceHandler(ctx context.Context, o client.Object) []reconcile.Request {
if isManagedByType(o, "svc") || isManagedByType(o, "ingress") {
return []reconcile.Request{{NamespacedName: types.NamespacedName{Namespace: o.GetNamespace(), Name: o.GetName()}}}
}
return nil
}
// dnsRecordsReconcilerIngressHandler filters Ingress events to ensure that
// dns-records-reconciler only reconciles on tailscale Ingress events. When an
// event is observed on a tailscale Ingress, reconcile the proxy headless Service.
func dnsRecordsReconcilerIngressHandler(ns string, isDefaultLoadBalancer bool, cl client.Client, logger *zap.SugaredLogger) handler.MapFunc {
return func(ctx context.Context, o client.Object) []reconcile.Request {
ing, ok := o.(*networkingv1.Ingress)
if !ok {
return nil
}
if !isDefaultLoadBalancer && (ing.Spec.IngressClassName == nil || *ing.Spec.IngressClassName != "tailscale") {
return nil
}
proxyResourceLabels := childResourceLabels(ing.Name, ing.Namespace, "ingress")
headlessSvc, err := getSingleObject[corev1.Service](ctx, cl, ns, proxyResourceLabels)
if err != nil {
logger.Errorf("error getting headless Service from parent labels: %v", err)
return nil
}
if headlessSvc == nil {
return nil
}
return []reconcile.Request{{NamespacedName: types.NamespacedName{Namespace: headlessSvc.Namespace, Name: headlessSvc.Name}}}
}
}
type tsClient interface {
CreateKey(ctx context.Context, caps tailscale.KeyCapabilities) (string, *tailscale.Key, error)
DeleteDevice(ctx context.Context, nodeStableID string) error

View File

@ -1196,7 +1196,6 @@ func TestTailscaledConfigfileHash(t *testing.T) {
expectReconciled(t, sr, "default", "test")
expectEqual(t, fc, expectedSTS(t, fc, o), nil)
}
func Test_isMagicDNSName(t *testing.T) {
tests := []struct {
in string

View File

@ -3,8 +3,6 @@
//go:build !plan9
// tailscale-operator provides a way to expose services running in a Kubernetes
// cluster to your Tailnet.
package main
import (

View File

@ -90,7 +90,7 @@ func (a *ServiceReconciler) Reconcile(ctx context.Context, req reconcile.Request
} else if err != nil {
return reconcile.Result{}, fmt.Errorf("failed to get svc: %w", err)
}
targetIP := a.tailnetTargetAnnotation(svc)
targetIP := tailnetTargetAnnotation(svc)
targetFQDN := svc.Annotations[AnnotationTailnetTargetFQDN]
if !svc.DeletionTimestamp.IsZero() || !a.shouldExpose(svc) && targetIP == "" && targetFQDN == "" {
logger.Debugf("service is being deleted or is (no longer) referring to Tailscale ingress/egress, ensuring any created resources are cleaned up")
@ -216,7 +216,7 @@ func (a *ServiceReconciler) maybeProvision(ctx context.Context, logger *zap.Suga
sts.ClusterTargetDNSName = svc.Spec.ExternalName
a.managedIngressProxies.Add(svc.UID)
gaugeIngressProxies.Set(int64(a.managedIngressProxies.Len()))
} else if ip := a.tailnetTargetAnnotation(svc); ip != "" {
} else if ip := tailnetTargetAnnotation(svc); ip != "" {
sts.TailnetTargetIP = ip
a.managedEgressProxies.Add(svc.UID)
gaugeEgressProxies.Set(int64(a.managedEgressProxies.Len()))
@ -250,7 +250,7 @@ func (a *ServiceReconciler) maybeProvision(ctx context.Context, logger *zap.Suga
return nil
}
if !a.hasLoadBalancerClass(svc) {
if !isTailscaleLoadBalancerService(svc, a.isDefaultLoadBalancer) {
logger.Debugf("service is not a LoadBalancer, so not updating ingress")
return nil
}
@ -310,29 +310,27 @@ func (a *ServiceReconciler) shouldExpose(svc *corev1.Service) bool {
return a.shouldExposeClusterIP(svc) || a.shouldExposeDNSName(svc)
}
func (a *ServiceReconciler) shouldExposeDNSName(svc *corev1.Service) bool {
return hasExposeAnnotation(svc) && svc.Spec.Type == corev1.ServiceTypeExternalName && svc.Spec.ExternalName != ""
}
func (a *ServiceReconciler) shouldExposeClusterIP(svc *corev1.Service) bool {
// Headless services can't be exposed, since there is no ClusterIP to
// forward to.
if svc.Spec.ClusterIP == "" || svc.Spec.ClusterIP == "None" {
return false
}
return a.hasLoadBalancerClass(svc) || a.hasExposeAnnotation(svc)
return isTailscaleLoadBalancerService(svc, a.isDefaultLoadBalancer) || hasExposeAnnotation(svc)
}
func (a *ServiceReconciler) shouldExposeDNSName(svc *corev1.Service) bool {
return a.hasExposeAnnotation(svc) && svc.Spec.Type == corev1.ServiceTypeExternalName && svc.Spec.ExternalName != ""
}
func (a *ServiceReconciler) hasLoadBalancerClass(svc *corev1.Service) bool {
func isTailscaleLoadBalancerService(svc *corev1.Service, isDefaultLoadBalancer bool) bool {
return svc != nil &&
svc.Spec.Type == corev1.ServiceTypeLoadBalancer &&
(svc.Spec.LoadBalancerClass != nil && *svc.Spec.LoadBalancerClass == "tailscale" ||
svc.Spec.LoadBalancerClass == nil && a.isDefaultLoadBalancer)
svc.Spec.LoadBalancerClass == nil && isDefaultLoadBalancer)
}
// hasExposeAnnotation reports whether Service has the tailscale.com/expose
// annotation set
func (a *ServiceReconciler) hasExposeAnnotation(svc *corev1.Service) bool {
func hasExposeAnnotation(svc *corev1.Service) bool {
return svc != nil && svc.Annotations[AnnotationExpose] == "true"
}
@ -340,7 +338,7 @@ func (a *ServiceReconciler) hasExposeAnnotation(svc *corev1.Service) bool {
// annotation or of the deprecated tailscale.com/ts-tailnet-target-ip
// annotation. If neither is set, it returns an empty string. If both are set,
// it returns the value of the new annotation.
func (a *ServiceReconciler) tailnetTargetAnnotation(svc *corev1.Service) string {
func tailnetTargetAnnotation(svc *corev1.Service) string {
if svc == nil {
return ""
}

View File

@ -189,8 +189,8 @@ func expectedSTS(t *testing.T, cl client.Client, opts configOpts) *appsv1.Statef
{
Name: "sysctler",
Image: "tailscale/tailscale",
Command: []string{"/bin/sh"},
Args: []string{"-c", "sysctl -w net.ipv4.ip_forward=1 net.ipv6.conf.all.forwarding=1"},
Command: []string{"/bin/sh", "-c"},
Args: []string{"sysctl -w net.ipv4.ip_forward=1 && if sysctl net.ipv6.conf.all.forwarding; then sysctl -w net.ipv6.conf.all.forwarding=1; fi"},
SecurityContext: &corev1.SecurityContext{
Privileged: ptr.To(true),
},

View File

@ -24,6 +24,7 @@ import (
"tailscale.com/tsnet"
"tailscale.com/tstest/integration"
"tailscale.com/tstest/integration/testcontrol"
"tailscale.com/tstest/nettest"
"tailscale.com/types/appctype"
"tailscale.com/types/ipproto"
"tailscale.com/types/key"
@ -111,6 +112,7 @@ func startNode(t *testing.T, ctx context.Context, controlURL, hostname string) (
}
func TestSNIProxyWithNetmapConfig(t *testing.T) {
nettest.SkipIfNoNetwork(t)
c, controlURL := startControl(t)
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
@ -189,6 +191,7 @@ func TestSNIProxyWithNetmapConfig(t *testing.T) {
}
func TestSNIProxyWithFlagConfig(t *testing.T) {
nettest.SkipIfNoNetwork(t)
_, controlURL := startControl(t)
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()

View File

@ -78,7 +78,9 @@ func CleanUpArgs(args []string) []string {
return out
}
var localClient tailscale.LocalClient
var localClient = tailscale.LocalClient{
Socket: paths.DefaultTailscaledSocket(),
}
// Run runs the CLI. The args do not include the binary name.
func Run(args []string) (err error) {
@ -139,13 +141,6 @@ func Run(args []string) (err error) {
return
}
localClient.Socket = rootArgs.socket
rootCmd.FlagSet.Visit(func(f *flag.Flag) {
if f.Name == "socket" {
localClient.UseSocketOnly = true
}
})
err = rootCmd.Run(context.Background())
if tailscale.IsAccessDeniedError(err) && os.Getuid() != 0 && runtime.GOOS != "windows" {
return fmt.Errorf("%v\n\nUse 'sudo tailscale %s' or 'tailscale up --operator=$USER' to not require root.", err, strings.Join(args, " "))
@ -158,7 +153,12 @@ func Run(args []string) (err error) {
func newRootCmd() *ffcli.Command {
rootfs := newFlagSet("tailscale")
rootfs.StringVar(&rootArgs.socket, "socket", paths.DefaultTailscaledSocket(), "path to tailscaled socket")
rootfs.Func("socket", "path to tailscaled socket", func(s string) error {
localClient.Socket = s
localClient.UseSocketOnly = true
return nil
})
rootfs.Lookup("socket").DefValue = localClient.Socket
rootCmd := &ffcli.Command{
Name: "tailscale",
@ -236,10 +236,6 @@ func fatalf(format string, a ...any) {
// Fatalf, if non-nil, is used instead of log.Fatalf.
var Fatalf func(format string, a ...any)
var rootArgs struct {
socket string
}
type cmdWalk struct {
*ffcli.Command
parents []*ffcli.Command

View File

@ -193,12 +193,26 @@ walk:
}
// Strip any descriptions if they were suppressed.
if !descs {
for i := range words {
words[i], _, _ = strings.Cut(words[i], "\t")
clean := words[:0]
for _, w := range words {
if !descs {
w, _, _ = strings.Cut(w, "\t")
}
w = cutAny(w, "\n\r")
if w == "" || w[0] == '\t' {
continue
}
clean = append(clean, w)
}
return words, dir, nil
return clean, dir, nil
}
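// cutAny returns s truncated before the first occurrence of any character in
// cutset, or s unchanged if no character from cutset is present.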
func cutAny(s, cutset string) string {
i := strings.IndexAny(s, cutset)
if i == -1 {
return s
}
return s[:i]
}
// splitFlagArgs separates a list of command-line arguments into arguments

View File

@ -50,11 +50,17 @@ func TestComplete(t *testing.T) {
cmd := &ffcli.Command{
Name: "ping",
FlagSet: newFlagSet("prog ping", flag.ContinueOnError, func(fs *flag.FlagSet) {
fs.String("until", "", "when pinging should end")
fs.String("until", "", "when pinging should end\nline break!")
ffcomplete.Flag(fs, "until", ffcomplete.Fixed("forever", "direct"))
}),
}
ffcomplete.Args(cmd, ffcomplete.Fixed("jupiter", "neptune", "venus"))
ffcomplete.Args(cmd, ffcomplete.Fixed(
"jupiter\t5th planet\nand largets",
"neptune\t8th planet",
"venus\t2nd planet",
"\tonly description",
"\nonly line break",
))
return cmd
}(),
},
@ -170,9 +176,9 @@ func TestComplete(t *testing.T) {
showDescs: true,
wantComp: []string{
"--until\twhen pinging should end",
"jupiter",
"neptune",
"venus",
"jupiter\t5th planet",
"neptune\t8th planet",
"venus\t2nd planet",
},
wantDir: ffcomplete.ShellCompDirectiveNoFileComp,
},

View File

@ -110,8 +110,8 @@ func runSSH(ctx context.Context, args []string) error {
// mode, so 'nc' isn't very useful.
if runtime.GOOS != "darwin" {
socketArg := ""
if rootArgs.socket != "" && rootArgs.socket != paths.DefaultTailscaledSocket() {
socketArg = fmt.Sprintf("--socket=%q", rootArgs.socket)
if localClient.Socket != "" && localClient.Socket != paths.DefaultTailscaledSocket() {
socketArg = fmt.Sprintf("--socket=%q", localClient.Socket)
}
argv = append(argv,

View File

@ -6,11 +6,9 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
W github.com/alexbrainman/sspi/internal/common from github.com/alexbrainman/sspi/negotiate
W 💣 github.com/alexbrainman/sspi/negotiate from tailscale.com/net/tshttpproxy
L github.com/coreos/go-iptables/iptables from tailscale.com/util/linuxfw
L github.com/coreos/go-systemd/v22/dbus from tailscale.com/clientupdate
W 💣 github.com/dblohm7/wingoes from github.com/dblohm7/wingoes/pe+
W 💣 github.com/dblohm7/wingoes/pe from tailscale.com/util/winutil/authenticode
github.com/fxamacker/cbor/v2 from tailscale.com/tka
L 💣 github.com/godbus/dbus/v5 from github.com/coreos/go-systemd/v22/dbus
github.com/golang/groupcache/lru from tailscale.com/net/dnscache
L github.com/google/nftables from tailscale.com/util/linuxfw
L 💣 github.com/google/nftables/alignedbuff from github.com/google/nftables/xt
@ -271,7 +269,7 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
image/png from github.com/skip2/go-qrcode
io from archive/tar+
io/fs from archive/tar+
io/ioutil from github.com/godbus/dbus/v5+
io/ioutil from github.com/mitchellh/go-ps+
log from expvar+
log/internal from log
maps from tailscale.com/clientupdate+

View File

@ -80,7 +80,6 @@ tailscale.com/cmd/tailscaled dependencies: (generated by github.com/tailscale/de
L github.com/aws/smithy-go/waiter from github.com/aws/aws-sdk-go-v2/service/ssm
github.com/bits-and-blooms/bitset from github.com/gaissmai/bart
L github.com/coreos/go-iptables/iptables from tailscale.com/util/linuxfw
L github.com/coreos/go-systemd/v22/dbus from tailscale.com/clientupdate
LD 💣 github.com/creack/pty from tailscale.com/ssh/tailssh
W 💣 github.com/dblohm7/wingoes from github.com/dblohm7/wingoes/com+
W 💣 github.com/dblohm7/wingoes/com from tailscale.com/cmd/tailscaled+
@ -98,7 +97,7 @@ tailscale.com/cmd/tailscaled dependencies: (generated by github.com/tailscale/de
github.com/go-json-experiment/json/jsontext from tailscale.com/logtail
W 💣 github.com/go-ole/go-ole from github.com/go-ole/go-ole/oleutil+
W 💣 github.com/go-ole/go-ole/oleutil from tailscale.com/wgengine/winnet
L 💣 github.com/godbus/dbus/v5 from github.com/coreos/go-systemd/v22/dbus+
L 💣 github.com/godbus/dbus/v5 from tailscale.com/net/dns+
github.com/golang/groupcache/lru from tailscale.com/net/dnscache
github.com/google/btree from gvisor.dev/gvisor/pkg/tcpip/header+
L github.com/google/nftables from tailscale.com/util/linuxfw

View File

@ -72,6 +72,10 @@ type Knobs struct {
// ProbeUDPLifetime is whether the node should probe UDP path lifetime on
// the tail end of an active direct connection in magicsock.
ProbeUDPLifetime atomic.Bool
// AppCStoreRoutes is whether the node should store RouteInfo to StateStore
// if it's an app connector.
AppCStoreRoutes atomic.Bool
}
// UpdateFromNodeAttributes updates k (if non-nil) based on the provided self
@ -96,6 +100,7 @@ func (k *Knobs) UpdateFromNodeAttributes(capMap tailcfg.NodeCapMap) {
forceNfTables = has(tailcfg.NodeAttrLinuxMustUseNfTables)
seamlessKeyRenewal = has(tailcfg.NodeAttrSeamlessKeyRenewal)
probeUDPLifetime = has(tailcfg.NodeAttrProbeUDPLifetime)
appCStoreRoutes = has(tailcfg.NodeAttrStoreAppCRoutes)
)
if has(tailcfg.NodeAttrOneCGNATEnable) {
@ -118,6 +123,7 @@ func (k *Knobs) UpdateFromNodeAttributes(capMap tailcfg.NodeCapMap) {
k.LinuxForceNfTables.Store(forceNfTables)
k.SeamlessKeyRenewal.Store(seamlessKeyRenewal)
k.ProbeUDPLifetime.Store(probeUDPLifetime)
k.AppCStoreRoutes.Store(appCStoreRoutes)
}
// AsDebugJSON returns k as something that can be marshalled with json.Marshal
@ -141,5 +147,6 @@ func (k *Knobs) AsDebugJSON() map[string]any {
"LinuxForceNfTables": k.LinuxForceNfTables.Load(),
"SeamlessKeyRenewal": k.SeamlessKeyRenewal.Load(),
"ProbeUDPLifetime": k.ProbeUDPLifetime.Load(),
"AppCStoreRoutes": k.AppCStoreRoutes.Load(),
}
}
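The new knob follows the pattern of the existing ones: UpdateFromNodeAttributes flips it when the self node's capability map carries tailcfg.NodeAttrStoreAppCRoutes, and callers read it with an atomic Load. A minimal sketch of that round trip, assuming the usual tailscale.com/control/controlknobs and tailscale.com/tailcfg import paths:

package main

import (
	"fmt"

	"tailscale.com/control/controlknobs"
	"tailscale.com/tailcfg"
)

func main() {
	// Simulate a netmap whose self node advertises the attribute, then
	// read the knob back the way the app connector setup code does.
	k := new(controlknobs.Knobs)
	k.UpdateFromNodeAttributes(tailcfg.NodeCapMap{
		tailcfg.NodeAttrStoreAppCRoutes: nil,
	})
	fmt.Println("AppCStoreRoutes:", k.AppCStoreRoutes.Load())
}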

View File

@ -69,10 +69,6 @@ func init() {
}
}
func init() {
rand.Seed(time.Now().UnixNano())
}
const (
perClientSendQueueDepth = 32 // packets buffered for sending
writeTimeout = 2 * time.Second

View File

@ -136,8 +136,15 @@ func NewRegionClient(privateKey key.NodePrivate, logf logger.Logf, netMon *netmo
// NewNetcheckClient returns a Client that's only able to have its DialRegionTLS method called.
// It's used by the netcheck package.
func NewNetcheckClient(logf logger.Logf) *Client {
return &Client{logf: logf, clock: tstime.StdClock{}}
func NewNetcheckClient(logf logger.Logf, netMon *netmon.Monitor) *Client {
if netMon == nil {
panic("nil netMon")
}
return &Client{
logf: logf,
clock: tstime.StdClock{},
netMon: netMon,
}
}
// NewClient returns a new DERP-over-HTTP client. It connects lazily.

View File

@ -20,6 +20,16 @@ const fastStartHeader = "Derp-Fast-Start"
func Handler(s *derp.Server) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// These are installed both here and in cmd/derper. The check here
// catches both cmd/derper run with DERP disabled (STUN only mode) as
// well as DERP being run in tests with derphttp.Handler directly,
// as netcheck still assumes this replies.
switch r.URL.Path {
case "/derp/probe", "/derp/latency-check":
ProbeHandler(w, r)
return
}
up := strings.ToLower(r.Header.Get("Upgrade"))
if up != "websocket" && up != "derp" {
if up != "" {
@ -58,3 +68,14 @@ func Handler(s *derp.Server) http.Handler {
s.Accept(r.Context(), netConn, conn, netConn.RemoteAddr().String())
})
}
// ProbeHandler is the endpoint that clients without UDP access (including js/wasm) hit to measure
// DERP latency, as a replacement for UDP STUN queries.
func ProbeHandler(w http.ResponseWriter, r *http.Request) {
switch r.Method {
case "HEAD", "GET":
w.Header().Set("Access-Control-Allow-Origin", "*")
default:
http.Error(w, "bogus probe method", http.StatusMethodNotAllowed)
}
}
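Registering these paths in the shared handler means a UDP-blocked client can estimate DERP reachability and latency with a plain HTTP round trip, whether or not the server has DERP enabled. A minimal sketch of such a probe, with a hypothetical server URL (real clients take the hostname from the DERP map):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	base := "https://derp.example.com" // hypothetical DERP server
	start := time.Now()
	resp, err := http.Head(base + "/derp/probe")
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	resp.Body.Close()
	// ProbeHandler answers HEAD and GET with a 200 and no body, so the
	// elapsed time approximates one HTTP round trip to the server.
	fmt.Printf("status %d in %v\n", resp.StatusCode, time.Since(start))
}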

View File

@ -10,6 +10,7 @@ import (
"fmt"
"net"
"net/http"
"net/http/httptest"
"net/netip"
"sync"
"testing"
@ -464,3 +465,24 @@ func TestLocalAddrNoMutex(t *testing.T) {
t.Errorf("got error %q; want %q", got, want)
}
}
func TestProbe(t *testing.T) {
h := Handler(nil)
tests := []struct {
path string
want int
}{
{"/derp/probe", 200},
{"/derp/latency-check", 200},
{"/derp/sdf", http.StatusUpgradeRequired},
}
for _, tt := range tests {
rec := httptest.NewRecorder()
h.ServeHTTP(rec, httptest.NewRequest("GET", tt.path, nil))
if got := rec.Result().StatusCode; got != tt.want {
t.Errorf("for path %q got HTTP status %v; want %v", tt.path, got, tt.want)
}
}
}

View File

@ -27,10 +27,10 @@ import (
type Child struct {
*dirfs.Child
// BaseURL is the base URL of the WebDAV service to which we'll proxy
// BaseURL returns the base URL of the WebDAV service to which we'll proxy
// requests for this Child. We will append the filename from the original
// URL to this.
BaseURL string
BaseURL func() (string, error)
// Transport (if specified) is the http transport to use when communicating
// with this Child's WebDAV service.
@ -154,9 +154,15 @@ func (h *Handler) delegate(mpl int, pathComponents []string, w http.ResponseWrit
return
}
u, err := url.Parse(child.BaseURL)
baseURL, err := child.BaseURL()
if err != nil {
h.logf("warning: parse base URL %s failed: %s", child.BaseURL, err)
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
u, err := url.Parse(baseURL)
if err != nil {
h.logf("warning: parse base URL %s failed: %s", baseURL, err)
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
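With BaseURL now a function, a caller that already knows the remote's address can pass a trivial closure, while a caller that resolves the address lazily (as the Taildrive changes later in this comparison do) can return an error until it is known. A sketch of the static case, assuming the compositedav and dirfs packages under tailscale.com/drive/driveimpl:

package main

import (
	"tailscale.com/drive/driveimpl/compositedav"
	"tailscale.com/drive/driveimpl/dirfs"
)

func main() {
	child := &compositedav.Child{
		Child: &dirfs.Child{Name: "share1"},
		// Static base URL: this closure can never fail. A dynamic child
		// would look its address up on each call instead.
		BaseURL: func() (string, error) {
			return "http://127.0.0.1:8080/share1", nil // hypothetical address
		},
	}
	_ = child
}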

View File

@ -24,33 +24,26 @@ func (h *Handler) handlePROPFIND(w http.ResponseWriter, r *http.Request) {
// Delegate to a Child.
depth := getDepth(r)
cached := h.StatCache.get(r.URL.Path, depth)
if cached != nil {
w.Header().Del("Content-Length")
w.WriteHeader(http.StatusMultiStatus)
w.Write(cached)
return
}
status, result := h.StatCache.getOr(r.URL.Path, depth, func() (int, []byte) {
// Use a buffering ResponseWriter so that we can manipulate the result.
// The only thing we use from the original ResponseWriter is Header().
bw := &bufferingResponseWriter{ResponseWriter: w}
// Use a buffering ResponseWriter so that we can manipulate the result.
// The only thing we use from the original ResponseWriter is Header().
bw := &bufferingResponseWriter{ResponseWriter: w}
mpl := h.maxPathLength(r)
h.delegate(mpl, pathComponents[mpl-1:], bw, r)
mpl := h.maxPathLength(r)
h.delegate(mpl, pathComponents[mpl-1:], bw, r)
// Fixup paths to add the requested path as a prefix.
pathPrefix := shared.Join(pathComponents[0:mpl]...)
b := hrefRegex.ReplaceAll(bw.buf.Bytes(), []byte(fmt.Sprintf("<D:href>%s/$1</D:href>", pathPrefix)))
// Fixup paths to add the requested path as a prefix.
pathPrefix := shared.Join(pathComponents[0:mpl]...)
b := hrefRegex.ReplaceAll(bw.buf.Bytes(), []byte(fmt.Sprintf("<D:href>%s/$1</D:href>", pathPrefix)))
if h.StatCache != nil && bw.status == http.StatusMultiStatus && b != nil {
h.StatCache.set(r.URL.Path, depth, b)
}
return bw.status, b
})
w.Header().Del("Content-Length")
w.WriteHeader(bw.status)
w.Write(b)
w.WriteHeader(status)
if result != nil {
w.Write(result)
}
return
}

View File

@ -4,6 +4,7 @@
package compositedav
import (
"net/http"
"sync"
"time"
@ -25,6 +26,23 @@ type StatCache struct {
cachesByDepthAndPath map[int]*ttlcache.Cache[string, []byte]
}
// getOr checks the cache for the named value at the given depth. If a cached
// value was found, it returns http.StatusMultiStatus along with the cached
// value. Otherwise, it executes the given function and returns the resulting
// status and value. If the function returned http.StatusMultiStatus, getOr
// caches the resulting value at the given name and depth before returning.
func (c *StatCache) getOr(name string, depth int, or func() (int, []byte)) (int, []byte) {
cached := c.get(name, depth)
if cached != nil {
return http.StatusMultiStatus, cached
}
status, next := or()
if c != nil && status == http.StatusMultiStatus && next != nil {
c.set(name, depth, next)
}
return status, next
}
func (c *StatCache) get(name string, depth int) []byte {
if c == nil {
return nil
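getOr centralizes the read-through pattern that handlePROPFIND previously spelled out inline: check the cache, otherwise compute, and cache only 207 Multi-Status results. A hypothetical in-package illustration (not part of the change) of how a caller uses it:

package compositedav

import "net/http"

// examplePropfindLookup is a sketch only: the closure runs on a cache miss,
// and because it returns http.StatusMultiStatus with a non-nil body, getOr
// stores the result for later calls with the same name and depth.
func examplePropfindLookup(c *StatCache) (int, []byte) {
	return c.getOr("/remote/share", 1, func() (int, []byte) {
		// Expensive path: the real handler delegates the PROPFIND to the
		// child WebDAV server and buffers its response here.
		return http.StatusMultiStatus, []byte("<D:multistatus/>")
	})
}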

View File

@ -10,10 +10,12 @@ import (
"log"
"net"
"net/http"
"net/url"
"os"
"path"
"path/filepath"
"slices"
"strings"
"sync"
"testing"
"time"
@ -88,11 +90,61 @@ func TestFileManipulation(t *testing.T) {
s.checkFileContents(remote1, share11, file112)
s.addShare(remote1, share12, drive.PermissionReadOnly)
s.writeFile("writing file to read-only remote should fail", remote1, share12, file111, "hello world", false)
s.writeFile("writing file to non-existent remote should fail", "non-existent", share11, file111, "hello world", false)
s.writeFile("writing file to non-existent share should fail", remote1, "non-existent", file111, "hello world", false)
}
func TestPermissions(t *testing.T) {
s := newSystem(t)
s.addRemote(remote1)
s.addShare(remote1, share12, drive.PermissionReadOnly)
s.writeFile("writing file to read-only remote should fail", remote1, share12, file111, "hello world", false)
if err := s.client.Mkdir(path.Join(remote1, share12), 0644); err == nil {
t.Error("making directory on read-only remote should fail")
}
// Now, write file directly to file system so that we can test permissions
// on other operations.
s.write(remote1, share12, file111, "hello world")
if err := s.client.Remove(pathTo(remote1, share12, file111)); err == nil {
t.Error("deleting file from read-only remote should fail")
}
if err := s.client.Rename(pathTo(remote1, share12, file111), pathTo(remote1, share12, file112), true); err == nil {
t.Error("moving file on read-only remote should fail")
}
}
// TestSecretTokenAuth verifies that the fileserver running at localhost cannot
// be accessed directly without the correct secret token. This matters because
// if a victim can be induced to visit the localhost URL and access a malicious
// file on their own share, it could allow a Mark-of-the-Web bypass attack.
func TestSecretTokenAuth(t *testing.T) {
s := newSystem(t)
fileserverAddr := s.addRemote(remote1)
s.addShare(remote1, share11, drive.PermissionReadWrite)
s.writeFile("writing file to read/write remote should succeed", remote1, share11, file111, "hello world", true)
client := &http.Client{
Transport: &http.Transport{DisableKeepAlives: true},
}
addr := strings.Split(fileserverAddr, "|")[1]
wrongSecret, err := generateSecretToken()
if err != nil {
t.Fatal(err)
}
u := fmt.Sprintf("http://%s/%s/%s", addr, wrongSecret, url.PathEscape(file111))
resp, err := client.Get(u)
if err != nil {
t.Fatal(err)
}
if resp.StatusCode != http.StatusForbidden {
t.Errorf("expected %d for incorrect secret token, but got %d", http.StatusForbidden, resp.StatusCode)
}
}
type local struct {
l net.Listener
fs *FileSystemForLocal
@ -132,7 +184,7 @@ func newSystem(t *testing.T) *system {
// Make sure we don't leak goroutines
tstest.ResourceCheck(t)
fs := NewFileSystemForLocal(log.Printf)
fs := newFileSystemForLocal(log.Printf, nil)
l, err := net.Listen("tcp", "127.0.0.1:0")
if err != nil {
t.Fatalf("failed to Listen: %s", err)
@ -161,7 +213,7 @@ func newSystem(t *testing.T) *system {
return s
}
func (s *system) addRemote(name string) {
func (s *system) addRemote(name string) string {
l, err := net.Listen("tcp", "127.0.0.1:0")
if err != nil {
s.t.Fatalf("failed to Listen: %s", err)
@ -200,6 +252,8 @@ func (s *system) addRemote(name string) {
DisableKeepAlives: true,
ResponseHeaderTimeout: 5 * time.Second,
})
return fileServer.Addr()
}
func (s *system) addShare(remoteName, shareName string, permission drive.Permission) {
@ -324,6 +378,14 @@ func (s *system) read(remoteName, shareName, name string) string {
return string(b)
}
func (s *system) write(remoteName, shareName, name, contents string) {
filename := filepath.Join(s.remotes[remoteName].shares[shareName], name)
err := os.WriteFile(filename, []byte(contents), 0644)
if err != nil {
s.t.Fatalf("failed to WriteFile: %s", err)
}
}
func (s *system) readViaWebDAV(remoteName, shareName, name string) string {
path := pathTo(remoteName, shareName, name)
b, err := s.client.Read(path)

View File

@ -4,6 +4,10 @@
package driveimpl
import (
"crypto/rand"
"crypto/subtle"
"encoding/hex"
"fmt"
"net"
"net/http"
"sync"
@ -17,6 +21,7 @@ import (
// serve up files as an unprivileged user.
type FileServer struct {
l net.Listener
secretToken string
shareHandlers map[string]http.Handler
sharesMu sync.RWMutex
}
@ -40,19 +45,36 @@ func NewFileServer() (*FileServer, error) {
// if err != nil {
// TODO(oxtoacart): actually get safesocket working in more environments (MacOS Sandboxed, Windows, ???)
l, err := net.Listen("tcp", "127.0.0.1:0")
if err != nil {
return nil, fmt.Errorf("listen: %w", err)
}
secretToken, err := generateSecretToken()
if err != nil {
return nil, err
}
// }
return &FileServer{
l: l,
secretToken: secretToken,
shareHandlers: make(map[string]http.Handler),
}, nil
}
// Addr returns the address at which this FileServer is listening.
// generateSecretToken generates a hex-encoded 256-bit secret.
func generateSecretToken() (string, error) {
tokenBytes := make([]byte, 32)
_, err := rand.Read(tokenBytes)
if err != nil {
return "", fmt.Errorf("generateSecretToken: %w", err)
}
return hex.EncodeToString(tokenBytes), nil
}
// Addr returns the address at which this FileServer is listening. This
// includes the secret token in front of the address, delimited by a pipe |.
func (s *FileServer) Addr() string {
return s.l.Addr().String()
return fmt.Sprintf("%s|%s", s.secretToken, s.l.Addr().String())
}
// Serve() starts serving files and blocks until it encounters a fatal error.
@ -95,11 +117,33 @@ func (s *FileServer) SetShares(shares map[string]string) {
}
}
// ServeHTTP implements the http.Handler interface.
// ServeHTTP implements the http.Handler interface. This requires a secret
// token in the path in order to prevent Mark-of-the-Web (MOTW) bypass attacks
// of the below sort:
//
// 1. Attacker with write access to the share puts a malicious file via
// http://100.100.100.100:8080/<tailnet>/<machine>/<share>/bad.exe
// 2. Attacker then induces victim to visit
// http://localhost:[PORT]/<share>/bad.exe
// 3. Because that is loaded from localhost, it does not get the MOTW
// and thereby bypasses some OS-level security.
//
// The path on this file server is actually not as above, but rather
// http://localhost:[PORT]/<secretToken>/<share>/bad.exe. Unless the attacker
// can discover the secretToken, the attacker cannot craft a localhost URL that
// will work.
func (s *FileServer) ServeHTTP(w http.ResponseWriter, r *http.Request) {
parts := shared.CleanAndSplit(r.URL.Path)
r.URL.Path = shared.Join(parts[1:]...)
share := parts[0]
token := parts[0]
a, b := []byte(token), []byte(s.secretToken)
if subtle.ConstantTimeCompare(a, b) != 1 {
w.WriteHeader(http.StatusForbidden)
return
}
r.URL.Path = shared.Join(parts[2:]...)
share := parts[1]
s.sharesMu.RLock()
h, found := s.shareHandlers[share]
s.sharesMu.RUnlock()
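The pipe-delimited value returned by Addr is parsed by the remote-filesystem code later in this comparison; a minimal standalone sketch of splitting it and building a request URL that passes the constant-time token check above (the token and port are hypothetical):

package main

import (
	"fmt"
	"net/url"
	"strings"
)

func main() {
	// As returned by (*FileServer).Addr(): "<secretToken>|<host:port>".
	// The real token is 64 hex characters; shortened here for readability.
	tokenAndAddr := "9f2c41d6a0b35e77c4881f0a6d3e5b29|127.0.0.1:49152"
	parts := strings.SplitN(tokenAndAddr, "|", 2)
	if len(parts) != 2 {
		panic("malformed token|addr value")
	}
	token, addr := parts[0], parts[1]

	// The secret token sits between the host and the share name, so a
	// localhost URL guessed without it is rejected with 403 Forbidden.
	fmt.Printf("http://%s/%s/%s/%s\n", addr, token,
		url.PathEscape("share1"), url.PathEscape("file1.txt"))
}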

View File

@ -27,6 +27,10 @@ const (
// NewFileSystemForLocal starts serving a filesystem for local clients.
// Inbound connections must be handed to HandleConn.
func NewFileSystemForLocal(logf logger.Logf) *FileSystemForLocal {
return newFileSystemForLocal(logf, &compositedav.StatCache{TTL: statCacheTTL})
}
func newFileSystemForLocal(logf logger.Logf, statCache *compositedav.StatCache) *FileSystemForLocal {
if logf == nil {
logf = log.Printf
}
@ -34,7 +38,7 @@ func NewFileSystemForLocal(logf logger.Logf) *FileSystemForLocal {
logf: logf,
h: &compositedav.Handler{
Logf: logf,
StatCache: &compositedav.StatCache{TTL: statCacheTTL},
StatCache: statCache,
},
listener: newConnListener(),
}
@ -77,7 +81,7 @@ func (s *FileSystemForLocal) SetRemotes(domain string, remotes []*drive.Remote,
Name: remote.Name,
Available: remote.Available,
},
BaseURL: remote.URL,
BaseURL: func() (string, error) { return remote.URL, nil },
Transport: transport,
})
}

View File

@ -51,17 +51,18 @@ type FileSystemForRemote struct {
// mu guards the below values. Acquire a write lock before updating any of
// them, acquire a read lock before reading any of them.
mu sync.RWMutex
fileServerAddr string
shares []*drive.Share
children map[string]*compositedav.Child
userServers map[string]*userServer
mu sync.RWMutex
// fileServerTokenAndAddr is the secretToken|fileserverAddress
fileServerTokenAndAddr string
shares []*drive.Share
children map[string]*compositedav.Child
userServers map[string]*userServer
}
// SetFileServerAddr implements drive.FileSystemForRemote.
func (s *FileSystemForRemote) SetFileServerAddr(addr string) {
s.mu.Lock()
s.fileServerAddr = addr
s.fileServerTokenAndAddr = addr
s.mu.Unlock()
}
@ -113,11 +114,58 @@ func (s *FileSystemForRemote) SetShares(shares []*drive.Share) {
}
func (s *FileSystemForRemote) buildChild(share *drive.Share) *compositedav.Child {
getTokenAndAddr := func(shareName string) (string, string, error) {
s.mu.RLock()
var share *drive.Share
i, shareFound := slices.BinarySearchFunc(s.shares, shareName, func(s *drive.Share, name string) int {
return strings.Compare(s.Name, name)
})
if shareFound {
share = s.shares[i]
}
userServers := s.userServers
fileServerTokenAndAddr := s.fileServerTokenAndAddr
s.mu.RUnlock()
if !shareFound {
return "", "", fmt.Errorf("unknown share %v", shareName)
}
var tokenAndAddr string
if !drive.AllowShareAs() {
tokenAndAddr = fileServerTokenAndAddr
} else {
userServer, found := userServers[share.As]
if found {
userServer.mu.RLock()
tokenAndAddr = userServer.tokenAndAddr
userServer.mu.RUnlock()
}
}
if tokenAndAddr == "" {
return "", "", fmt.Errorf("unable to determine address for share %v", shareName)
}
parts := strings.Split(tokenAndAddr, "|")
if len(parts) != 2 {
return "", "", fmt.Errorf("invalid address for share %v", shareName)
}
return parts[0], parts[1], nil
}
return &compositedav.Child{
Child: &dirfs.Child{
Name: share.Name,
},
BaseURL: fmt.Sprintf("http://%v/%v", hex.EncodeToString([]byte(share.Name)), url.PathEscape(share.Name)),
BaseURL: func() (string, error) {
secretToken, _, err := getTokenAndAddr(share.Name)
if err != nil {
return "", err
}
return fmt.Sprintf("http://%s/%s/%s", hex.EncodeToString([]byte(share.Name)), secretToken, url.PathEscape(share.Name)), nil
},
Transport: &http.Transport{
Dial: func(_, shareAddr string) (net.Conn, error) {
shareNameHex, _, err := net.SplitHostPort(shareAddr)
@ -132,36 +180,9 @@ func (s *FileSystemForRemote) buildChild(share *drive.Share) *compositedav.Child
}
shareName := string(shareNameBytes)
s.mu.RLock()
var share *drive.Share
i, shareFound := slices.BinarySearchFunc(s.shares, shareName, func(s *drive.Share, name string) int {
return strings.Compare(s.Name, name)
})
if shareFound {
share = s.shares[i]
}
userServers := s.userServers
fileServerAddr := s.fileServerAddr
s.mu.RUnlock()
if !shareFound {
return nil, fmt.Errorf("unknown share %v", shareName)
}
var addr string
if !drive.AllowShareAs() {
addr = fileServerAddr
} else {
userServer, found := userServers[share.As]
if found {
userServer.mu.RLock()
addr = userServer.addr
userServer.mu.RUnlock()
}
}
if addr == "" {
return nil, fmt.Errorf("unable to determine address for share %v", shareName)
_, addr, err := getTokenAndAddr(shareName)
if err != nil {
return nil, err
}
_, err = netip.ParseAddrPort(addr)
@ -253,10 +274,10 @@ type userServer struct {
// mu guards the below values. Acquire a write lock before updating any of
// them, acquire a read lock before reading any of them.
mu sync.RWMutex
cmd *exec.Cmd
addr string
closed bool
mu sync.RWMutex
cmd *exec.Cmd
tokenAndAddr string
closed bool
}
func (s *userServer) Close() error {
@ -366,7 +387,7 @@ func (s *userServer) run() error {
}
}()
s.mu.Lock()
s.addr = strings.TrimSpace(addr)
s.tokenAndAddr = strings.TrimSpace(addr)
s.mu.Unlock()
return cmd.Wait()
}
@ -380,6 +401,7 @@ var writeMethods = map[string]bool{
"MKCOL": true,
"MOVE": true,
"PROPPATCH": true,
"DELETE": true,
}
// canSudo checks whether we can sudo -u the configured executable as the

View File

@ -95,6 +95,9 @@ type FileSystemForRemote interface {
// sandboxed where we can't spawn user-specific sub-processes and instead
// rely on the UI application that's already running as an unprivileged
// user to access the filesystem for us.
//
// Note that this includes both the file server's secret token and its
// address, delimited by a pipe |.
SetFileServerAddr(addr string)
// SetShares sets the complete set of shares exposed by this node. If

go.mod
View File

@ -16,7 +16,6 @@ require (
github.com/aws/aws-sdk-go-v2/service/ssm v1.44.7
github.com/coreos/go-iptables v0.7.1-0.20240112124308-65c67c9f46e6
github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf
github.com/coreos/go-systemd/v22 v22.5.0
github.com/creack/pty v1.1.21
github.com/dave/courtney v0.4.0
github.com/dave/jennifer v1.7.0
@ -77,7 +76,7 @@ require (
github.com/tailscale/peercred v0.0.0-20240214030740-b535050b2aa4
github.com/tailscale/web-client-prebuilt v0.0.0-20240226180453-5db17b287bf1
github.com/tailscale/wf v0.0.0-20240214030419-6fbb0a674ee6
github.com/tailscale/wireguard-go v0.0.0-20240413175505-64040e66467d
github.com/tailscale/wireguard-go v0.0.0-20240429185444-03c5a0ccf754
github.com/tailscale/xnet v0.0.0-20240117122442-62b9a7c569f9
github.com/tc-hib/winres v0.2.1
github.com/tcnksm/go-httpstat v0.2.0
@ -372,7 +371,7 @@ require (
k8s.io/component-base v0.29.1 // indirect
k8s.io/klog/v2 v2.120.1 // indirect
k8s.io/kube-openapi v0.0.0-20240117194847-208609032b15 // indirect
k8s.io/utils v0.0.0-20240102154912-e7106e64919e // indirect
k8s.io/utils v0.0.0-20240102154912-e7106e64919e
mvdan.cc/gofumpt v0.5.0 // indirect
mvdan.cc/interfacer v0.0.0-20180901003855-c20040233aed // indirect
mvdan.cc/lint v0.0.0-20170908181259-adc824a0674b // indirect

go.sum
View File

@ -217,8 +217,6 @@ github.com/coreos/go-iptables v0.7.1-0.20240112124308-65c67c9f46e6 h1:8h5+bWd7R6
github.com/coreos/go-iptables v0.7.1-0.20240112124308-65c67c9f46e6/go.mod h1:Qe8Bv2Xik5FyTXwgIbLAnv2sWSBmvWdFETJConOQ//Q=
github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf h1:iW4rZ826su+pqaw19uhpSCzhj44qo35pNgKFGqzDKkU=
github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd/v22 v22.5.0 h1:RrqgGjYQKalulkV8NGVIfkXQf6YYmOyiJKk8iXXhfZs=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/cpuguy83/go-md2man/v2 v2.0.3/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/creack/pty v1.1.21 h1:1/QdRyBaHHJP61QkWMXlOIBfsgdDeeKfK8SYVUWJKf0=
@ -362,7 +360,6 @@ github.com/gobuffalo/flect v1.0.2 h1:eqjPGSo2WmjgY2XlpGwo2NXgL3RucAKo4k4qQMNA5sA
github.com/gobuffalo/flect v1.0.2/go.mod h1:A5msMlrHtLqh9umBSnvabjsMrCcCpAyzglnDvkbYKHs=
github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y=
github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/godbus/dbus/v5 v5.1.1-0.20230522191255-76236955d466 h1:sQspH8M4niEijh3PFscJRLDnkL547IeP7kpPe3uUhEg=
github.com/godbus/dbus/v5 v5.1.1-0.20230522191255-76236955d466/go.mod h1:ZiQxhyQ+bbbfxUKVvjfO498oPYvtYhZzycal3G/NHmU=
github.com/gofrs/flock v0.8.1 h1:+gYjHKf32LDeiEEFhQaotPbLuUXjY5ZqxKgXy7n59aw=
@ -889,8 +886,8 @@ github.com/tailscale/web-client-prebuilt v0.0.0-20240226180453-5db17b287bf1 h1:t
github.com/tailscale/web-client-prebuilt v0.0.0-20240226180453-5db17b287bf1/go.mod h1:agQPE6y6ldqCOui2gkIh7ZMztTkIQKH049tv8siLuNQ=
github.com/tailscale/wf v0.0.0-20240214030419-6fbb0a674ee6 h1:l10Gi6w9jxvinoiq15g8OToDdASBni4CyJOdHY1Hr8M=
github.com/tailscale/wf v0.0.0-20240214030419-6fbb0a674ee6/go.mod h1:ZXRML051h7o4OcI0d3AaILDIad/Xw0IkXaHM17dic1Y=
github.com/tailscale/wireguard-go v0.0.0-20240413175505-64040e66467d h1:IREg0cn4nDyMjnqTgPacjgl66qDYmoOnt1k/vb96ius=
github.com/tailscale/wireguard-go v0.0.0-20240413175505-64040e66467d/go.mod h1:BOm5fXUBFM+m9woLNBoxI9TaBXXhGNP50LX/TGIvGb4=
github.com/tailscale/wireguard-go v0.0.0-20240429185444-03c5a0ccf754 h1:iazWjqVHE6CbNam7WXRhi33Qad5o7a8LVYgVoILpZdI=
github.com/tailscale/wireguard-go v0.0.0-20240429185444-03c5a0ccf754/go.mod h1:BOm5fXUBFM+m9woLNBoxI9TaBXXhGNP50LX/TGIvGb4=
github.com/tailscale/xnet v0.0.0-20240117122442-62b9a7c569f9 h1:81P7rjnikHKTJ75EkjppvbwUfKHDHYk6LJpO5PZy8pA=
github.com/tailscale/xnet v0.0.0-20240117122442-62b9a7c569f9/go.mod h1:orPd6JZXXRyuDusYilywte7k094d7dycXXU5YnWsrwg=
github.com/tc-hib/winres v0.2.1 h1:YDE0FiP0VmtRaDn7+aaChp1KiF4owBiJa5l964l5ujA=

View File

@ -23,6 +23,7 @@ import (
"tailscale.com/util/mak"
"tailscale.com/util/multierr"
"tailscale.com/util/set"
"tailscale.com/version"
)
var (
@ -72,6 +73,9 @@ type Tracker struct {
watchers set.HandleSet[func(Subsystem, error)] // opt func to run if error state changes
timer *time.Timer
latestVersion *tailcfg.ClientVersion // or nil
checkForUpdates bool
inMapPoll bool
inMapPollSince time.Time
lastMapPollEndedAt time.Time
@ -543,7 +547,37 @@ func (t *Tracker) SetAuthRoutineInError(err error) {
}
t.mu.Lock()
defer t.mu.Unlock()
if err == nil && t.lastLoginErr == nil {
return
}
t.lastLoginErr = err
t.selfCheckLocked()
}
// SetLatestVersion records the latest version of the Tailscale client.
// v can be nil if unknown.
func (t *Tracker) SetLatestVersion(v *tailcfg.ClientVersion) {
if t.nil() {
return
}
t.mu.Lock()
defer t.mu.Unlock()
t.latestVersion = v
t.selfCheckLocked()
}
// SetCheckForUpdates sets whether the client wants to check for updates.
func (t *Tracker) SetCheckForUpdates(v bool) {
if t.nil() {
return
}
t.mu.Lock()
defer t.mu.Unlock()
if t.checkForUpdates == v {
return
}
t.checkForUpdates = v
t.selfCheckLocked()
}
func (t *Tracker) timerSelfCheck() {
@ -567,6 +601,23 @@ func (t *Tracker) selfCheckLocked() {
t.setLocked(SysOverall, t.overallErrorLocked())
}
// AppendWarnings appends all current health warnings to dst and returns the
// result.
func (t *Tracker) AppendWarnings(dst []string) []string {
err := t.OverallError()
if err == nil {
return dst
}
if me, ok := err.(multierr.Error); ok {
for _, err := range me.Errors() {
dst = append(dst, err.Error())
}
} else {
dst = append(dst, err.Error())
}
return dst
}
// OverallError returns a summary of the health state.
//
// If there are multiple problems, the error will be of type
@ -609,42 +660,76 @@ var errNetworkDown = errors.New("network down")
var errNotInMapPoll = errors.New("not in map poll")
var errNoDERPHome = errors.New("no DERP home")
var errNoUDP4Bind = errors.New("no udp4 bind")
var errUnstable = errors.New("This is an unstable (development) version of Tailscale; frequent updates and bugs are likely")
func (t *Tracker) overallErrorLocked() error {
var errs []error
add := func(err error) {
if err != nil {
errs = append(errs, err)
}
}
merged := func() error {
return multierr.New(errs...)
}
if t.checkForUpdates {
if cv := t.latestVersion; cv != nil && !cv.RunningLatest && cv.LatestVersion != "" {
if cv.UrgentSecurityUpdate {
add(fmt.Errorf("Security update available: %v -> %v, run `tailscale update` or `tailscale set --auto-update` to update", version.Short(), cv.LatestVersion))
} else {
add(fmt.Errorf("Update available: %v -> %v, run `tailscale update` or `tailscale set --auto-update` to update", version.Short(), cv.LatestVersion))
}
}
}
if version.IsUnstableBuild() {
add(errUnstable)
}
if v, ok := t.anyInterfaceUp.Get(); ok && !v {
return errNetworkDown
add(errNetworkDown)
return merged()
}
if t.localLogConfigErr != nil {
return t.localLogConfigErr
add(t.localLogConfigErr)
return merged()
}
if !t.ipnWantRunning {
return fmt.Errorf("state=%v, wantRunning=%v", t.ipnState, t.ipnWantRunning)
add(fmt.Errorf("state=%v, wantRunning=%v", t.ipnState, t.ipnWantRunning))
return merged()
}
if t.lastLoginErr != nil {
return fmt.Errorf("not logged in, last login error=%v", t.lastLoginErr)
add(fmt.Errorf("not logged in, last login error=%v", t.lastLoginErr))
return merged()
}
now := time.Now()
if !t.inMapPoll && (t.lastMapPollEndedAt.IsZero() || now.Sub(t.lastMapPollEndedAt) > 10*time.Second) {
return errNotInMapPoll
add(errNotInMapPoll)
return merged()
}
const tooIdle = 2*time.Minute + 5*time.Second
if d := now.Sub(t.lastStreamedMapResponse).Round(time.Second); d > tooIdle {
return t.networkErrorfLocked("no map response in %v", d)
add(t.networkErrorfLocked("no map response in %v", d))
return merged()
}
if !t.derpHomeless {
rid := t.derpHomeRegion
if rid == 0 {
return errNoDERPHome
add(errNoDERPHome)
return merged()
}
if !t.derpRegionConnected[rid] {
return t.networkErrorfLocked("not connected to home DERP region %v", rid)
add(t.networkErrorfLocked("not connected to home DERP region %v", rid))
return merged()
}
if d := now.Sub(t.derpRegionLastFrame[rid]).Round(time.Second); d > tooIdle {
return t.networkErrorfLocked("haven't heard from home DERP region %v in %v", rid, d)
add(t.networkErrorfLocked("haven't heard from home DERP region %v in %v", rid, d))
return merged()
}
}
if t.udp4Unbound {
return errNoUDP4Bind
add(errNoUDP4Bind)
return merged()
}
// TODO: use
@ -653,7 +738,6 @@ func (t *Tracker) overallErrorLocked() error {
_ = t.lastStreamedMapResponse
_ = t.lastMapRequestHeard
var errs []error
for i := range t.MagicSockReceiveFuncs {
f := &t.MagicSockReceiveFuncs[i]
if f.missing {
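AppendWarnings gives callers a single place to flatten the tracker's overall error into user-facing strings, which is how LocalBackend.UpdateStatus consumes it later in this comparison. A minimal sketch, assuming the tailscale.com/health package:

package main

import (
	"fmt"

	"tailscale.com/health"
)

func main() {
	// A multierr overall error expands to one entry per underlying error;
	// with no overall error, dst is returned unchanged.
	t := new(health.Tracker)
	var warnings []string
	warnings = t.AppendWarnings(warnings)
	fmt.Println(warnings)
}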

View File

@ -312,7 +312,7 @@ func (b *LocalBackend) updateDrivePeersLocked(nm *netmap.NetworkMap) {
driveRemotes = b.driveRemotesFromPeers(nm)
}
fs.SetRemotes(b.netMap.Domain, driveRemotes, &driveTransport{b: b})
fs.SetRemotes(b.netMap.Domain, driveRemotes, b.newDriveTransport())
}
func (b *LocalBackend) driveRemotesFromPeers(nm *netmap.NetworkMap) []*drive.Remote {

View File

@ -151,6 +151,12 @@ type watchSession struct {
sessionID string
}
// lastSuggestedExitNode stores the last suggested exit node ID and name in local backend.
type lastSuggestedExitNode struct {
id tailcfg.StableNodeID
name string
}
// LocalBackend is the glue between the major pieces of the Tailscale
// network software: the cloud control plane (via controlclient), the
// network data plane (via wgengine), and the user-facing UIs and CLIs
@ -324,6 +330,10 @@ type LocalBackend struct {
// outgoingFiles keeps track of Taildrop outgoing files keyed to their OutgoingFile.ID
outgoingFiles map[string]*ipn.OutgoingFile
// lastSuggestedExitNode stores the last suggested exit node ID and name.
// lastSuggestedExitNode updates whenever the suggestion changes.
lastSuggestedExitNode lastSuggestedExitNode
}
// HealthTracker returns the health tracker for the backend.
@ -364,7 +374,7 @@ func NewLocalBackend(logf logger.Logf, logID logid.PublicID, sys *tsd.System, lo
if loginFlags&controlclient.LocalBackendStartKeyOSNeutral != 0 {
goos = ""
}
pm, err := newProfileManagerWithGOOS(store, logf, goos)
pm, err := newProfileManagerWithGOOS(store, logf, sys.HealthTracker(), goos)
if err != nil {
return nil, err
}
@ -604,6 +614,8 @@ func (b *LocalBackend) ReloadConfig() (ok bool, err error) {
return true, nil
}
var assumeNetworkUpdateForTest = envknob.RegisterBool("TS_ASSUME_NETWORK_UP_FOR_TEST")
// pauseOrResumeControlClientLocked pauses b.cc if there is no network available
// or if the LocalBackend is in Stopped state with a valid NetMap. In all other
// cases, it unpauses it. It is a no-op if b.cc is nil.
@ -614,7 +626,7 @@ func (b *LocalBackend) pauseOrResumeControlClientLocked() {
return
}
networkUp := b.prevIfState.AnyInterfaceUp()
b.cc.SetPaused((b.state == ipn.Stopped && b.netMap != nil) || (!networkUp && !testenv.InTest()))
b.cc.SetPaused((b.state == ipn.Stopped && b.netMap != nil) || (!networkUp && !testenv.InTest() && !assumeNetworkUpdateForTest()))
}
// linkChange is our network monitor callback, called whenever the network changes.
@ -770,30 +782,14 @@ func (b *LocalBackend) UpdateStatus(sb *ipnstate.StatusBuilder) {
s.AuthURL = b.authURLSticky
if prefs := b.pm.CurrentPrefs(); prefs.Valid() && prefs.AutoUpdate().Check {
s.ClientVersion = b.lastClientVersion
if cv := b.lastClientVersion; cv != nil && !cv.RunningLatest && cv.LatestVersion != "" {
if cv.UrgentSecurityUpdate {
s.Health = append(s.Health, fmt.Sprintf("Security update available: %v -> %v, run `tailscale update` or `tailscale set --auto-update` to update", version.Short(), cv.LatestVersion))
} else {
s.Health = append(s.Health, fmt.Sprintf("Update available: %v -> %v, run `tailscale update` or `tailscale set --auto-update` to update", version.Short(), cv.LatestVersion))
}
}
}
if err := b.health.OverallError(); err != nil {
switch e := err.(type) {
case multierr.Error:
for _, err := range e.Errors() {
s.Health = append(s.Health, err.Error())
}
default:
s.Health = append(s.Health, err.Error())
}
}
s.Health = b.health.AppendWarnings(s.Health)
// TODO(bradfitz): move this health check into a health.Warnable
// and remove from here.
if m := b.sshOnButUnusableHealthCheckMessageLocked(); m != "" {
s.Health = append(s.Health, m)
}
if version.IsUnstableBuild() {
s.Health = append(s.Health, "This is an unstable (development) version of Tailscale; frequent updates and bugs are likely")
}
if b.netMap != nil {
s.CertDomains = append([]string(nil), b.netMap.DNS.CertDomains...)
s.MagicDNSSuffix = b.netMap.MagicDNSSuffix()
@ -2517,6 +2513,7 @@ func (b *LocalBackend) tellClientToBrowseToURL(url string) {
func (b *LocalBackend) onClientVersion(v *tailcfg.ClientVersion) {
b.mu.Lock()
b.lastClientVersion = v
b.health.SetLatestVersion(v)
b.mu.Unlock()
b.send(ipn.Notify{ClientVersion: v})
}
@ -3138,6 +3135,19 @@ func (b *LocalBackend) SetUseExitNodeEnabled(v bool) (ipn.PrefsView, error) {
return b.editPrefsLockedOnEntry(mp, unlock)
}
// MaybeClearAppConnector clears the routes from any AppConnector if
// AdvertiseRoutes has been set in the MaskedPrefs.
func (b *LocalBackend) MaybeClearAppConnector(mp *ipn.MaskedPrefs) error {
var err error
if b.appConnector != nil && mp.AdvertiseRoutesSet {
err = b.appConnector.ClearRoutes()
if err != nil {
b.logf("appc: clear routes error: %v", err)
}
}
return err
}
func (b *LocalBackend) EditPrefs(mp *ipn.MaskedPrefs) (ipn.PrefsView, error) {
if mp.SetsInternal() {
return ipn.PrefsView{}, errors.New("can't set Internal fields")
@ -3516,8 +3526,22 @@ func (b *LocalBackend) reconfigAppConnectorLocked(nm *netmap.NetworkMap, prefs i
return
}
if b.appConnector == nil {
b.appConnector = appc.NewAppConnector(b.logf, b)
shouldAppCStoreRoutes := b.ControlKnobs().AppCStoreRoutes.Load()
if b.appConnector == nil || b.appConnector.ShouldStoreRoutes() != shouldAppCStoreRoutes {
var ri *appc.RouteInfo
var storeFunc func(*appc.RouteInfo) error
if shouldAppCStoreRoutes {
var err error
ri, err = b.readRouteInfoLocked()
if err != nil {
ri = &appc.RouteInfo{}
if err != ipn.ErrStateNotExist {
b.logf("Unsuccessful Read RouteInfo: ", err)
}
}
storeFunc = b.storeRouteInfo
}
b.appConnector = appc.NewAppConnector(b.logf, b, ri, storeFunc)
}
if nm == nil {
return
@ -3674,8 +3698,8 @@ func dnsConfigForNetmap(nm *netmap.NetworkMap, peers map[tailcfg.NodeID]tailcfg.
}
// selfV6Only is whether we only have IPv6 addresses ourselves.
selfV6Only := views.SliceContainsFunc(nm.GetAddresses(), tsaddr.PrefixIs6) &&
!views.SliceContainsFunc(nm.GetAddresses(), tsaddr.PrefixIs4)
selfV6Only := nm.GetAddresses().ContainsFunc(tsaddr.PrefixIs6) &&
!nm.GetAddresses().ContainsFunc(tsaddr.PrefixIs4)
dcfg.OnlyIPv6 = selfV6Only
// Populate MagicDNS records. We do this unconditionally so that
@ -4567,6 +4591,7 @@ func (b *LocalBackend) ResetForClientDisconnect() {
b.authURLSticky = ""
b.authURLTime = time.Time{}
b.activeLogin = ""
b.resetDialPlan()
b.setAtomicValuesFromPrefsLocked(ipn.PrefsView{})
b.enterStateLockedOnEntry(ipn.Stopped, unlock)
}
@ -4648,7 +4673,7 @@ func (b *LocalBackend) Logout(ctx context.Context) error {
// b.mu is now unlocked, after editPrefsLockedOnEntry.
// Clear any previous dial plan(s), if set.
b.dialPlan.Store(nil)
b.resetDialPlan()
if cc == nil {
// Double Logout can happen via repeated IPN
@ -4800,13 +4825,6 @@ func (b *LocalBackend) updatePeersFromNetmapLocked(nm *netmap.NetworkMap) {
}
}
// driveTransport is an http.RoundTripper that uses the latest value of
// b.Dialer().PeerAPITransport() for each round trip and imposes a short
// dial timeout to avoid hanging on connecting to offline/unreachable hosts.
type driveTransport struct {
b *LocalBackend
}
// responseBodyWrapper wraps an io.ReadCloser and stores
// the number of bytesRead.
type responseBodyWrapper struct {
@ -4860,6 +4878,20 @@ func (rbw *responseBodyWrapper) Close() error {
return err
}
// driveTransport is an http.RoundTripper that wraps
// b.Dialer().PeerAPITransport() with metrics tracking.
type driveTransport struct {
b *LocalBackend
tr *http.Transport
}
func (b *LocalBackend) newDriveTransport() *driveTransport {
return &driveTransport{
b: b,
tr: b.Dialer().PeerAPITransport(),
}
}
func (dt *driveTransport) RoundTrip(req *http.Request) (resp *http.Response, err error) {
// Some WebDAV clients include origin and refer headers, which peerapi does
// not like. Remove them.
@ -4917,18 +4949,7 @@ func (dt *driveTransport) RoundTrip(req *http.Request) (resp *http.Response, err
}
}()
// dialTimeout is fairly aggressive to avoid hangs on contacting offline or
// unreachable hosts.
dialTimeout := 1 * time.Second // TODO(oxtoacart): tune this
tr := dt.b.Dialer().PeerAPITransport().Clone()
dialContext := tr.DialContext
tr.DialContext = func(ctx context.Context, network, addr string) (net.Conn, error) {
ctxWithTimeout, cancel := context.WithTimeout(ctx, dialTimeout)
defer cancel()
return dialContext(ctxWithTimeout, network, addr)
}
return tr.RoundTrip(req)
return dt.tr.RoundTrip(req)
}
// roundTraffic rounds bytes. This is used to preserve user privacy within logs.
@ -5851,9 +5872,18 @@ func (b *LocalBackend) SwitchProfile(profile ipn.ProfileID) error {
unlock := b.lockAndGetUnlock()
defer unlock()
oldControlURL := b.pm.CurrentPrefs().ControlURLOrDefault()
if err := b.pm.SwitchProfile(profile); err != nil {
return err
}
// As an optimization, only reset the dialPlan if the control URL
// changed; we treat an empty URL as "unknown" and always reset.
newControlURL := b.pm.CurrentPrefs().ControlURLOrDefault()
if oldControlURL != newControlURL || oldControlURL == "" || newControlURL == "" {
b.resetDialPlan()
}
return b.resetForProfileChangeLockedOnEntry(unlock)
}
@ -5904,6 +5934,17 @@ func (b *LocalBackend) initTKALocked() error {
return nil
}
// resetDialPlan resets the dialPlan for this LocalBackend. It will log if
// anything is reset.
//
// It is safe to call this concurrently, with or without b.mu held.
func (b *LocalBackend) resetDialPlan() {
old := b.dialPlan.Swap(nil)
if old != nil {
b.logf("resetDialPlan: did reset")
}
}
// resetForProfileChangeLockedOnEntry resets the backend for a profile change.
//
// b.mu must held on entry. It is released on exit.
@ -5924,7 +5965,8 @@ func (b *LocalBackend) resetForProfileChangeLockedOnEntry(unlock unlockOnce) err
}
b.lastServeConfJSON = mem.B(nil)
b.serveConfig = ipn.ServeConfigView{}
b.enterStateLockedOnEntry(ipn.NoState, unlock) // Reset state; releases b.mu
b.lastSuggestedExitNode = lastSuggestedExitNode{} // Reset last suggested exit node.
b.enterStateLockedOnEntry(ipn.NoState, unlock) // Reset state; releases b.mu
b.health.SetLocalLogConfigHealth(nil)
return b.Start(ipn.Options{})
}
@ -5962,6 +6004,11 @@ func (b *LocalBackend) NewProfile() error {
defer unlock()
b.pm.NewProfile()
// The new profile doesn't yet have a ControlURL because it hasn't been
// set. Conservatively reset the dialPlan.
b.resetDialPlan()
return b.resetForProfileChangeLockedOnEntry(unlock)
}
@ -5990,6 +6037,7 @@ func (b *LocalBackend) ResetAuth() error {
if err := b.pm.DeleteAllProfiles(); err != nil {
return err
}
b.resetDialPlan() // always reset if we're removing everything
return b.resetForProfileChangeLockedOnEntry(unlock)
}
@ -6210,6 +6258,43 @@ func (b *LocalBackend) UnadvertiseRoute(toRemove ...netip.Prefix) error {
return err
}
// namespace a key with the profile manager's current profile key, if any
func namespaceKeyForCurrentProfile(pm *profileManager, key ipn.StateKey) ipn.StateKey {
return pm.CurrentProfile().Key + "||" + key
}
const routeInfoStateStoreKey ipn.StateKey = "_routeInfo"
func (b *LocalBackend) storeRouteInfo(ri *appc.RouteInfo) error {
b.mu.Lock()
defer b.mu.Unlock()
if b.pm.CurrentProfile().ID == "" {
return nil
}
key := namespaceKeyForCurrentProfile(b.pm, routeInfoStateStoreKey)
bs, err := json.Marshal(ri)
if err != nil {
return err
}
return b.pm.WriteState(key, bs)
}
func (b *LocalBackend) readRouteInfoLocked() (*appc.RouteInfo, error) {
if b.pm.CurrentProfile().ID == "" {
return &appc.RouteInfo{}, nil
}
key := namespaceKeyForCurrentProfile(b.pm, routeInfoStateStoreKey)
bs, err := b.pm.Store().ReadState(key)
ri := &appc.RouteInfo{}
if err != nil {
return nil, err
}
if err := json.Unmarshal(bs, ri); err != nil {
return nil, err
}
return ri, nil
}
// seamlessRenewalEnabled reports whether seamless key renewals are enabled
// (i.e. we saw our self node with the SeamlessKeyRenewal attr in a netmap).
// This enables beta functionality of renewing node keys without breaking
@ -6258,6 +6343,7 @@ func mayDeref[T any](p *T) (v T) {
var ErrNoPreferredDERP = errors.New("no preferred DERP, try again later")
var ErrCannotSuggestExitNode = errors.New("unable to suggest an exit node, try again later")
var ErrUnableToSuggestLastExitNode = errors.New("unable to suggest last exit node")
// SuggestExitNode computes a suggestion based on the current netmap and last netcheck report. If
// there are multiple equally good options, one is selected at random, so the result is not stable. To be
@ -6270,13 +6356,41 @@ func (b *LocalBackend) SuggestExitNode() (response apitype.ExitNodeSuggestionRes
b.mu.Lock()
lastReport := b.MagicConn().GetLastNetcheckReport(b.ctx)
netMap := b.netMap
lastSuggestedExitNode := b.lastSuggestedExitNode
b.mu.Unlock()
if lastReport == nil || netMap == nil {
return response, ErrCannotSuggestExitNode
last, err := suggestLastExitNode(lastSuggestedExitNode)
if err != nil {
return response, ErrCannotSuggestExitNode
}
return last, err
}
seed := time.Now().UnixNano()
r := rand.New(rand.NewSource(seed))
return suggestExitNode(lastReport, netMap, r)
res, err := suggestExitNode(lastReport, netMap, r)
if err != nil {
last, err := suggestLastExitNode(lastSuggestedExitNode)
if err != nil {
return response, ErrCannotSuggestExitNode
}
return last, err
}
b.mu.Lock()
b.lastSuggestedExitNode.id = res.ID
b.lastSuggestedExitNode.name = res.Name
b.mu.Unlock()
return res, err
}
// suggestLastExitNode formats a response with the last suggested exit node's ID and name.
// Used as a fallback before returning a nil response and error.
func suggestLastExitNode(lastSuggestedExitNode lastSuggestedExitNode) (res apitype.ExitNodeSuggestionResponse, err error) {
if lastSuggestedExitNode.id != "" && lastSuggestedExitNode.name != "" {
res.ID = lastSuggestedExitNode.id
res.Name = lastSuggestedExitNode.name
return res, nil
}
return res, ErrUnableToSuggestLastExitNode
}
func suggestExitNode(report *netcheck.Report, netMap *netmap.NetworkMap, r *rand.Rand) (res apitype.ExitNodeSuggestionResponse, err error) {
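RouteInfo is stored per profile: the StateStore key is the current profile's key and "_routeInfo" joined with "||", so switching profiles (as the new TestReadWriteRouteInfo below exercises) reads and writes independent route sets. A tiny sketch of the key shape, with a hypothetical profile key:

package main

import "fmt"

func main() {
	const routeInfoStateStoreKey = "_routeInfo"
	profileKey := "profile-8f3a" // hypothetical ipn.LoginProfile.Key value
	fmt.Println(profileKey + "||" + routeInfoStateStoreKey)
	// Prints: profile-8f3a||_routeInfo
}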

View File

@ -15,7 +15,6 @@ import (
"net/netip"
"os"
"reflect"
"runtime"
"slices"
"sync"
"testing"
@ -26,10 +25,12 @@ import (
"golang.org/x/net/dns/dnsmessage"
"tailscale.com/appc"
"tailscale.com/appc/appctest"
"tailscale.com/client/tailscale/apitype"
"tailscale.com/clientupdate"
"tailscale.com/control/controlclient"
"tailscale.com/drive"
"tailscale.com/drive/driveimpl"
"tailscale.com/health"
"tailscale.com/ipn"
"tailscale.com/ipn/store/mem"
"tailscale.com/net/netcheck"
@ -55,6 +56,8 @@ import (
"tailscale.com/wgengine/wgcfg"
)
func fakeStoreRoutes(*appc.RouteInfo) error { return nil }
func inRemove(ip netip.Addr) bool {
for _, pfx := range removeFromDefaultRoute {
if pfx.Contains(ip) {
@ -1291,13 +1294,19 @@ func TestDNSConfigForNetmapForExitNodeConfigs(t *testing.T) {
}
func TestOfferingAppConnector(t *testing.T) {
b := newTestBackend(t)
if b.OfferingAppConnector() {
t.Fatal("unexpected offering app connector")
}
b.appConnector = appc.NewAppConnector(t.Logf, nil)
if !b.OfferingAppConnector() {
t.Fatal("unexpected not offering app connector")
for _, shouldStore := range []bool{false, true} {
b := newTestBackend(t)
if b.OfferingAppConnector() {
t.Fatal("unexpected offering app connector")
}
if shouldStore {
b.appConnector = appc.NewAppConnector(t.Logf, nil, &appc.RouteInfo{}, fakeStoreRoutes)
} else {
b.appConnector = appc.NewAppConnector(t.Logf, nil, nil, nil)
}
if !b.OfferingAppConnector() {
t.Fatal("unexpected not offering app connector")
}
}
}
@ -1342,21 +1351,27 @@ func TestRouterAdvertiserIgnoresContainedRoutes(t *testing.T) {
}
func TestObserveDNSResponse(t *testing.T) {
b := newTestBackend(t)
for _, shouldStore := range []bool{false, true} {
b := newTestBackend(t)
// ensure no error when no app connector is configured
b.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
// ensure no error when no app connector is configured
b.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
rc := &appctest.RouteCollector{}
b.appConnector = appc.NewAppConnector(t.Logf, rc)
b.appConnector.UpdateDomains([]string{"example.com"})
b.appConnector.Wait(context.Background())
rc := &appctest.RouteCollector{}
if shouldStore {
b.appConnector = appc.NewAppConnector(t.Logf, rc, &appc.RouteInfo{}, fakeStoreRoutes)
} else {
b.appConnector = appc.NewAppConnector(t.Logf, rc, nil, nil)
}
b.appConnector.UpdateDomains([]string{"example.com"})
b.appConnector.Wait(context.Background())
b.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
b.appConnector.Wait(context.Background())
wantRoutes := []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}
if !slices.Equal(rc.Routes(), wantRoutes) {
t.Fatalf("got routes %v, want %v", rc.Routes(), wantRoutes)
b.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
b.appConnector.Wait(context.Background())
wantRoutes := []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}
if !slices.Equal(rc.Routes(), wantRoutes) {
t.Fatalf("got routes %v, want %v", rc.Routes(), wantRoutes)
}
}
}
@ -1836,7 +1851,7 @@ func TestSetExitNodeIDPolicy(t *testing.T) {
if test.prefs == nil {
test.prefs = ipn.NewPrefs()
}
pm := must.Get(newProfileManager(new(mem.Store), t.Logf))
pm := must.Get(newProfileManager(new(mem.Store), t.Logf, new(health.Tracker)))
pm.prefs = test.prefs.View()
b.netMap = test.nm
b.pm = pm
@ -2118,7 +2133,7 @@ func TestApplySysPolicy(t *testing.T) {
wantPrefs.ControlURL = ipn.DefaultControlURL
}
pm := must.Get(newProfileManager(new(mem.Store), t.Logf))
pm := must.Get(newProfileManager(new(mem.Store), t.Logf, new(health.Tracker)))
pm.prefs = usePrefs.View()
b := newTestBackend(t)
@ -2269,9 +2284,6 @@ func TestPreferencePolicyInfo(t *testing.T) {
}
func TestOnTailnetDefaultAutoUpdate(t *testing.T) {
if runtime.GOOS == "darwin" {
t.Skip("test broken on macOS; see https://github.com/tailscale/tailscale/issues/11894")
}
tests := []struct {
before, after opt.Bool
tailnetDefault bool
@ -2316,7 +2328,14 @@ func TestOnTailnetDefaultAutoUpdate(t *testing.T) {
t.Fatal(err)
}
b.onTailnetDefaultAutoUpdate(tt.tailnetDefault)
if want, got := tt.after, b.pm.CurrentPrefs().AutoUpdate().Apply; got != want {
want := tt.after
// On platforms that don't support auto-update we can never
// transition to auto-updates being enabled. The value should
// remain unchanged after onTailnetDefaultAutoUpdate.
if !clientupdate.CanAutoUpdate() && want.EqualBool(true) {
want = tt.before
}
if got := b.pm.CurrentPrefs().AutoUpdate().Apply; got != want {
t.Errorf("got: %q, want %q", got, want)
}
})
@ -3419,6 +3438,357 @@ func TestMinLatencyDERPregion(t *testing.T) {
}
}
func TestSuggestLastExitNode(t *testing.T) {
tests := []struct {
name string
lastSuggestedExitNode lastSuggestedExitNode
wantRes apitype.ExitNodeSuggestionResponse
wantLastSuggestedExitNode lastSuggestedExitNode
wantErr error
}{
{
name: "last suggested exit node is populated",
lastSuggestedExitNode: lastSuggestedExitNode{id: "test", name: "test"},
wantRes: apitype.ExitNodeSuggestionResponse{ID: "test", Name: "test"},
wantLastSuggestedExitNode: lastSuggestedExitNode{id: "test", name: "test"},
},
{
name: "last suggested exit node is not populated",
wantErr: ErrUnableToSuggestLastExitNode,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := suggestLastExitNode(tt.lastSuggestedExitNode)
if got != tt.wantRes || err != tt.wantErr {
t.Errorf("got %v error %v, want %v error %v", got, err, tt.wantRes, tt.wantErr)
}
})
}
}
func TestLocalBackendSuggestExitNode(t *testing.T) {
tests := []struct {
name string
lastSuggestedExitNode lastSuggestedExitNode
report netcheck.Report
netMap netmap.NetworkMap
wantID tailcfg.StableNodeID
wantName string
wantErr error
wantLastSuggestedExitNode lastSuggestedExitNode
}{
{
name: "nil netmap, returns last suggested exit node",
lastSuggestedExitNode: lastSuggestedExitNode{name: "test", id: "test"},
report: netcheck.Report{
RegionLatency: map[int]time.Duration{
1: 0,
2: -1,
3: 0,
},
},
wantID: "test",
wantName: "test",
wantLastSuggestedExitNode: lastSuggestedExitNode{name: "test", id: "test"},
},
{
name: "nil report, returns last suggested exit node",
lastSuggestedExitNode: lastSuggestedExitNode{name: "test", id: "test"},
netMap: netmap.NetworkMap{
SelfNode: (&tailcfg.Node{
Addresses: []netip.Prefix{
netip.MustParsePrefix("100.64.1.1/32"),
netip.MustParsePrefix("fe70::1/128"),
},
}).View(),
DERPMap: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
1: {},
2: {},
3: {},
},
},
},
wantID: "test",
wantName: "test",
wantLastSuggestedExitNode: lastSuggestedExitNode{name: "test", id: "test"},
},
{
name: "found better derp node, last suggested exit node updates",
lastSuggestedExitNode: lastSuggestedExitNode{name: "test", id: "test"},
report: netcheck.Report{
RegionLatency: map[int]time.Duration{
1: 10,
2: 10,
3: 5,
},
PreferredDERP: 1,
},
netMap: netmap.NetworkMap{
SelfNode: (&tailcfg.Node{
Addresses: []netip.Prefix{
netip.MustParsePrefix("100.64.1.1/32"),
netip.MustParsePrefix("fe70::1/128"),
},
}).View(),
DERPMap: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
1: {},
2: {},
3: {},
},
},
Peers: []tailcfg.NodeView{
(&tailcfg.Node{
ID: 2,
StableID: "test",
Name: "test",
DERP: "127.3.3.40:1",
AllowedIPs: []netip.Prefix{
netip.MustParsePrefix("0.0.0.0/0"), netip.MustParsePrefix("::/0"),
},
CapMap: (tailcfg.NodeCapMap)(map[tailcfg.NodeCapability][]tailcfg.RawMessage{
tailcfg.NodeAttrSuggestExitNode: {},
}),
}).View(),
(&tailcfg.Node{
ID: 3,
StableID: "foo",
Name: "foo",
DERP: "127.3.3.40:3",
AllowedIPs: []netip.Prefix{
netip.MustParsePrefix("0.0.0.0/0"), netip.MustParsePrefix("::/0"),
},
CapMap: (tailcfg.NodeCapMap)(map[tailcfg.NodeCapability][]tailcfg.RawMessage{
tailcfg.NodeAttrSuggestExitNode: {},
}),
}).View(),
},
},
wantID: "foo",
wantName: "foo",
wantLastSuggestedExitNode: lastSuggestedExitNode{name: "foo", id: "foo"},
},
{
name: "found better mullvad node, last suggested exit node updates",
lastSuggestedExitNode: lastSuggestedExitNode{name: "San Jose", id: "3"},
report: netcheck.Report{
RegionLatency: map[int]time.Duration{
1: 0,
2: 0,
3: 0,
},
PreferredDERP: 1,
},
netMap: netmap.NetworkMap{
SelfNode: (&tailcfg.Node{
Addresses: []netip.Prefix{
netip.MustParsePrefix("100.64.1.1/32"),
netip.MustParsePrefix("fe70::1/128"),
},
}).View(),
DERPMap: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
1: {
Latitude: 40.73061,
Longitude: -73.935242,
},
2: {},
3: {},
},
},
Peers: []tailcfg.NodeView{
(&tailcfg.Node{
ID: 2,
StableID: "2",
AllowedIPs: []netip.Prefix{
netip.MustParsePrefix("0.0.0.0/0"), netip.MustParsePrefix("::/0"),
},
Name: "Dallas",
Hostinfo: (&tailcfg.Hostinfo{
Location: &tailcfg.Location{
Latitude: 32.89748,
Longitude: -97.040443,
Priority: 100,
},
}).View(),
CapMap: (tailcfg.NodeCapMap)(map[tailcfg.NodeCapability][]tailcfg.RawMessage{
tailcfg.NodeAttrSuggestExitNode: {},
}),
}).View(),
(&tailcfg.Node{
ID: 3,
StableID: "3",
AllowedIPs: []netip.Prefix{
netip.MustParsePrefix("0.0.0.0/0"), netip.MustParsePrefix("::/0"),
},
Name: "San Jose",
Hostinfo: (&tailcfg.Hostinfo{
Location: &tailcfg.Location{
Latitude: 37.3382082,
Longitude: -121.8863286,
Priority: 20,
},
}).View(),
CapMap: (tailcfg.NodeCapMap)(map[tailcfg.NodeCapability][]tailcfg.RawMessage{
tailcfg.NodeAttrSuggestExitNode: {},
}),
}).View(),
},
},
wantID: "2",
wantName: "Dallas",
wantLastSuggestedExitNode: lastSuggestedExitNode{name: "Dallas", id: "2"},
},
{
name: "ErrNoPreferredDERP, use last suggested exit node",
lastSuggestedExitNode: lastSuggestedExitNode{name: "test", id: "test"},
report: netcheck.Report{
RegionLatency: map[int]time.Duration{
1: 10,
2: 10,
3: 5,
},
PreferredDERP: 0,
},
netMap: netmap.NetworkMap{
SelfNode: (&tailcfg.Node{
Addresses: []netip.Prefix{
netip.MustParsePrefix("100.64.1.1/32"),
netip.MustParsePrefix("fe70::1/128"),
},
}).View(),
DERPMap: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
1: {},
2: {},
3: {},
},
},
Peers: []tailcfg.NodeView{
(&tailcfg.Node{
ID: 2,
StableID: "test",
Name: "test",
DERP: "127.3.3.40:1",
AllowedIPs: []netip.Prefix{
netip.MustParsePrefix("0.0.0.0/0"), netip.MustParsePrefix("::/0"),
},
CapMap: (tailcfg.NodeCapMap)(map[tailcfg.NodeCapability][]tailcfg.RawMessage{
tailcfg.NodeAttrSuggestExitNode: {},
}),
}).View(),
(&tailcfg.Node{
ID: 3,
StableID: "foo",
Name: "foo",
DERP: "127.3.3.40:3",
AllowedIPs: []netip.Prefix{
netip.MustParsePrefix("0.0.0.0/0"), netip.MustParsePrefix("::/0"),
},
CapMap: (tailcfg.NodeCapMap)(map[tailcfg.NodeCapability][]tailcfg.RawMessage{
tailcfg.NodeAttrSuggestExitNode: {},
}),
}).View(),
},
},
wantID: "test",
wantName: "test",
wantLastSuggestedExitNode: lastSuggestedExitNode{name: "test", id: "test"},
},
{
name: "unable to use last suggested exit node",
report: netcheck.Report{
RegionLatency: map[int]time.Duration{
1: 10,
2: 10,
3: 5,
},
PreferredDERP: 0,
},
wantErr: ErrCannotSuggestExitNode,
},
}
for _, tt := range tests {
lb := newTestLocalBackend(t)
lb.lastSuggestedExitNode = tt.lastSuggestedExitNode
lb.netMap = &tt.netMap
lb.sys.MagicSock.Get().SetLastNetcheckReport(context.Background(), tt.report)
got, err := lb.SuggestExitNode()
if got.ID != tt.wantID {
t.Errorf("ID=%v, want=%v", got.ID, tt.wantID)
}
if got.Name != tt.wantName {
t.Errorf("Name=%v, want=%v", got.Name, tt.wantName)
}
if lb.lastSuggestedExitNode != tt.wantLastSuggestedExitNode {
t.Errorf("lastSuggestedExitNode=%v, want=%v", lb.lastSuggestedExitNode, tt.wantLastSuggestedExitNode)
}
if err != tt.wantErr {
t.Errorf("Error=%v, want=%v", err, tt.wantErr)
}
}
}
func TestEnableAutoUpdates(t *testing.T) {
lb := newTestLocalBackend(t)
@ -3454,3 +3824,66 @@ func TestEnableAutoUpdates(t *testing.T) {
t.Fatalf("disabling auto-updates: got error: %v", err)
}
}
func TestReadWriteRouteInfo(t *testing.T) {
// set up a backend with more than one profile
b := newTestBackend(t)
prof1 := ipn.LoginProfile{ID: "id1", Key: "key1"}
prof2 := ipn.LoginProfile{ID: "id2", Key: "key2"}
b.pm.knownProfiles["id1"] = &prof1
b.pm.knownProfiles["id2"] = &prof2
b.pm.currentProfile = &prof1
// set up routeInfo
ri1 := &appc.RouteInfo{}
ri1.Wildcards = []string{"1"}
ri2 := &appc.RouteInfo{}
ri2.Wildcards = []string{"2"}
// read before write
readRi, err := b.readRouteInfoLocked()
if readRi != nil {
t.Fatalf("read before writing: want nil, got %v", readRi)
}
if err != ipn.ErrStateNotExist {
t.Fatalf("read before writing: want %v, got %v", ipn.ErrStateNotExist, err)
}
// write the first routeInfo
if err := b.storeRouteInfo(ri1); err != nil {
t.Fatal(err)
}
// write the other routeInfo as the other profile
if err := b.pm.SwitchProfile("id2"); err != nil {
t.Fatal(err)
}
if err := b.storeRouteInfo(ri2); err != nil {
t.Fatal(err)
}
// read the routeInfo of the first profile
if err := b.pm.SwitchProfile("id1"); err != nil {
t.Fatal(err)
}
readRi, err = b.readRouteInfoLocked()
if err != nil {
t.Fatal(err)
}
if !slices.Equal(readRi.Wildcards, ri1.Wildcards) {
t.Fatalf("read prof1 routeInfo wildcards: want %v, got %v", ri1.Wildcards, readRi.Wildcards)
}
// read the routeInfo of the second profile
if err := b.pm.SwitchProfile("id2"); err != nil {
t.Fatal(err)
}
readRi, err = b.readRouteInfoLocked()
if err != nil {
t.Fatal(err)
}
if !slices.Equal(readRi.Wildcards, ri2.Wildcards) {
t.Fatalf("read prof2 routeInfo wildcards: want %v, got %v", ri2.Wildcards, readRi.Wildcards)
}
}

View File

@ -17,6 +17,7 @@ import (
"github.com/google/go-cmp/cmp"
"tailscale.com/control/controlclient"
"tailscale.com/health"
"tailscale.com/hostinfo"
"tailscale.com/ipn"
"tailscale.com/ipn/store/mem"
@ -148,7 +149,7 @@ func TestTKAEnablementFlow(t *testing.T) {
temp := t.TempDir()
cc := fakeControlClient(t, client)
pm := must.Get(newProfileManager(new(mem.Store), t.Logf))
pm := must.Get(newProfileManager(new(mem.Store), t.Logf, new(health.Tracker)))
must.Do(pm.SetPrefs((&ipn.Prefs{
Persist: &persist.Persist{
PrivateNodeKey: nodePriv,
@ -188,7 +189,7 @@ func TestTKADisablementFlow(t *testing.T) {
nlPriv := key.NewNLPrivate()
key := tka.Key{Kind: tka.Key25519, Public: nlPriv.Public().Verifier(), Votes: 2}
pm := must.Get(newProfileManager(new(mem.Store), t.Logf))
pm := must.Get(newProfileManager(new(mem.Store), t.Logf, new(health.Tracker)))
must.Do(pm.SetPrefs((&ipn.Prefs{
Persist: &persist.Persist{
PrivateNodeKey: nodePriv,
@ -380,7 +381,7 @@ func TestTKASync(t *testing.T) {
t.Run(tc.name, func(t *testing.T) {
nodePriv := key.NewNode()
nlPriv := key.NewNLPrivate()
pm := must.Get(newProfileManager(new(mem.Store), t.Logf))
pm := must.Get(newProfileManager(new(mem.Store), t.Logf, new(health.Tracker)))
must.Do(pm.SetPrefs((&ipn.Prefs{
Persist: &persist.Persist{
PrivateNodeKey: nodePriv,
@ -602,7 +603,7 @@ func TestTKADisable(t *testing.T) {
disablementSecret := bytes.Repeat([]byte{0xa5}, 32)
nlPriv := key.NewNLPrivate()
pm := must.Get(newProfileManager(new(mem.Store), t.Logf))
pm := must.Get(newProfileManager(new(mem.Store), t.Logf, new(health.Tracker)))
must.Do(pm.SetPrefs((&ipn.Prefs{
Persist: &persist.Persist{
PrivateNodeKey: nodePriv,
@ -693,7 +694,7 @@ func TestTKASign(t *testing.T) {
toSign := key.NewNode()
nlPriv := key.NewNLPrivate()
pm := must.Get(newProfileManager(new(mem.Store), t.Logf))
pm := must.Get(newProfileManager(new(mem.Store), t.Logf, new(health.Tracker)))
must.Do(pm.SetPrefs((&ipn.Prefs{
Persist: &persist.Persist{
PrivateNodeKey: nodePriv,
@ -782,7 +783,7 @@ func TestTKAForceDisable(t *testing.T) {
nlPriv := key.NewNLPrivate()
key := tka.Key{Kind: tka.Key25519, Public: nlPriv.Public().Verifier(), Votes: 2}
pm := must.Get(newProfileManager(new(mem.Store), t.Logf))
pm := must.Get(newProfileManager(new(mem.Store), t.Logf, new(health.Tracker)))
must.Do(pm.SetPrefs((&ipn.Prefs{
Persist: &persist.Persist{
PrivateNodeKey: nodePriv,
@ -877,7 +878,7 @@ func TestTKAAffectedSigs(t *testing.T) {
// toSign := key.NewNode()
nlPriv := key.NewNLPrivate()
pm := must.Get(newProfileManager(new(mem.Store), t.Logf))
pm := must.Get(newProfileManager(new(mem.Store), t.Logf, new(health.Tracker)))
must.Do(pm.SetPrefs((&ipn.Prefs{
Persist: &persist.Persist{
PrivateNodeKey: nodePriv,
@ -1010,7 +1011,7 @@ func TestTKARecoverCompromisedKeyFlow(t *testing.T) {
cosignPriv := key.NewNLPrivate()
compromisedPriv := key.NewNLPrivate()
pm := must.Get(newProfileManager(new(mem.Store), t.Logf))
pm := must.Get(newProfileManager(new(mem.Store), t.Logf, new(health.Tracker)))
must.Do(pm.SetPrefs((&ipn.Prefs{
Persist: &persist.Persist{
PrivateNodeKey: nodePriv,
@ -1101,7 +1102,7 @@ func TestTKARecoverCompromisedKeyFlow(t *testing.T) {
// Cosign using the cosigning key.
{
pm := must.Get(newProfileManager(new(mem.Store), t.Logf))
pm := must.Get(newProfileManager(new(mem.Store), t.Logf, new(health.Tracker)))
must.Do(pm.SetPrefs((&ipn.Prefs{
Persist: &persist.Persist{
PrivateNodeKey: nodePriv,

View File

@ -26,6 +26,7 @@ import (
"tailscale.com/appc"
"tailscale.com/appc/appctest"
"tailscale.com/client/tailscale/apitype"
"tailscale.com/health"
"tailscale.com/ipn"
"tailscale.com/ipn/store/mem"
"tailscale.com/tailcfg"
@ -642,7 +643,7 @@ func TestPeerAPIReplyToDNSQueries(t *testing.T) {
h.remoteAddr = netip.MustParseAddrPort("100.150.151.152:12345")
eng, _ := wgengine.NewFakeUserspaceEngine(logger.Discard, 0)
pm := must.Get(newProfileManager(new(mem.Store), t.Logf))
pm := must.Get(newProfileManager(new(mem.Store), t.Logf, new(health.Tracker)))
h.ps = &peerAPIServer{
b: &LocalBackend{
e: eng,
@ -687,185 +688,209 @@ func TestPeerAPIReplyToDNSQueries(t *testing.T) {
}
func TestPeerAPIPrettyReplyCNAME(t *testing.T) {
var h peerAPIHandler
h.remoteAddr = netip.MustParseAddrPort("100.150.151.152:12345")
for _, shouldStore := range []bool{false, true} {
var h peerAPIHandler
h.remoteAddr = netip.MustParseAddrPort("100.150.151.152:12345")
eng, _ := wgengine.NewFakeUserspaceEngine(logger.Discard, 0)
pm := must.Get(newProfileManager(new(mem.Store), t.Logf))
h.ps = &peerAPIServer{
b: &LocalBackend{
e: eng,
pm: pm,
store: pm.Store(),
// configure as an app connector just to enable the API.
appConnector: appc.NewAppConnector(t.Logf, &appctest.RouteCollector{}),
},
}
eng, _ := wgengine.NewFakeUserspaceEngine(logger.Discard, 0)
pm := must.Get(newProfileManager(new(mem.Store), t.Logf, new(health.Tracker)))
var a *appc.AppConnector
if shouldStore {
a = appc.NewAppConnector(t.Logf, &appctest.RouteCollector{}, &appc.RouteInfo{}, fakeStoreRoutes)
} else {
a = appc.NewAppConnector(t.Logf, &appctest.RouteCollector{}, nil, nil)
}
h.ps = &peerAPIServer{
b: &LocalBackend{
e: eng,
pm: pm,
store: pm.Store(),
// configure as an app connector just to enable the API.
appConnector: a,
},
}
h.ps.resolver = &fakeResolver{build: func(b *dnsmessage.Builder) {
b.CNAMEResource(
dnsmessage.ResourceHeader{
Name: dnsmessage.MustNewName("www.example.com."),
Type: dnsmessage.TypeCNAME,
Class: dnsmessage.ClassINET,
TTL: 0,
},
dnsmessage.CNAMEResource{
CNAME: dnsmessage.MustNewName("example.com."),
},
)
b.AResource(
dnsmessage.ResourceHeader{
Name: dnsmessage.MustNewName("example.com."),
Type: dnsmessage.TypeA,
Class: dnsmessage.ClassINET,
TTL: 0,
},
dnsmessage.AResource{
A: [4]byte{192, 0, 0, 8},
},
)
}}
f := filter.NewAllowAllForTest(logger.Discard)
h.ps.b.setFilter(f)
h.ps.resolver = &fakeResolver{build: func(b *dnsmessage.Builder) {
b.CNAMEResource(
dnsmessage.ResourceHeader{
Name: dnsmessage.MustNewName("www.example.com."),
Type: dnsmessage.TypeCNAME,
Class: dnsmessage.ClassINET,
TTL: 0,
},
dnsmessage.CNAMEResource{
CNAME: dnsmessage.MustNewName("example.com."),
},
)
b.AResource(
dnsmessage.ResourceHeader{
Name: dnsmessage.MustNewName("example.com."),
Type: dnsmessage.TypeA,
Class: dnsmessage.ClassINET,
TTL: 0,
},
dnsmessage.AResource{
A: [4]byte{192, 0, 0, 8},
},
)
}}
f := filter.NewAllowAllForTest(logger.Discard)
h.ps.b.setFilter(f)
if !h.replyToDNSQueries() {
t.Errorf("unexpectedly deny; wanted to be a DNS server")
}
if !h.replyToDNSQueries() {
t.Errorf("unexpectedly deny; wanted to be a DNS server")
}
w := httptest.NewRecorder()
h.handleDNSQuery(w, httptest.NewRequest("GET", "/dns-query?q=www.example.com.", nil))
if w.Code != http.StatusOK {
t.Errorf("unexpected status code: %v", w.Code)
}
var addrs []string
json.NewDecoder(w.Body).Decode(&addrs)
if len(addrs) == 0 {
t.Fatalf("no addresses returned")
}
for _, addr := range addrs {
netip.MustParseAddr(addr)
w := httptest.NewRecorder()
h.handleDNSQuery(w, httptest.NewRequest("GET", "/dns-query?q=www.example.com.", nil))
if w.Code != http.StatusOK {
t.Errorf("unexpected status code: %v", w.Code)
}
var addrs []string
json.NewDecoder(w.Body).Decode(&addrs)
if len(addrs) == 0 {
t.Fatalf("no addresses returned")
}
for _, addr := range addrs {
netip.MustParseAddr(addr)
}
}
}
func TestPeerAPIReplyToDNSQueriesAreObserved(t *testing.T) {
ctx := context.Background()
var h peerAPIHandler
h.remoteAddr = netip.MustParseAddrPort("100.150.151.152:12345")
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
var h peerAPIHandler
h.remoteAddr = netip.MustParseAddrPort("100.150.151.152:12345")
rc := &appctest.RouteCollector{}
eng, _ := wgengine.NewFakeUserspaceEngine(logger.Discard, 0)
pm := must.Get(newProfileManager(new(mem.Store), t.Logf))
h.ps = &peerAPIServer{
b: &LocalBackend{
e: eng,
pm: pm,
store: pm.Store(),
appConnector: appc.NewAppConnector(t.Logf, rc),
},
}
h.ps.b.appConnector.UpdateDomains([]string{"example.com"})
h.ps.b.appConnector.Wait(ctx)
h.ps.resolver = &fakeResolver{build: func(b *dnsmessage.Builder) {
b.AResource(
dnsmessage.ResourceHeader{
Name: dnsmessage.MustNewName("example.com."),
Type: dnsmessage.TypeA,
Class: dnsmessage.ClassINET,
TTL: 0,
rc := &appctest.RouteCollector{}
eng, _ := wgengine.NewFakeUserspaceEngine(logger.Discard, 0)
pm := must.Get(newProfileManager(new(mem.Store), t.Logf, new(health.Tracker)))
var a *appc.AppConnector
if shouldStore {
a = appc.NewAppConnector(t.Logf, rc, &appc.RouteInfo{}, fakeStoreRoutes)
} else {
a = appc.NewAppConnector(t.Logf, rc, nil, nil)
}
h.ps = &peerAPIServer{
b: &LocalBackend{
e: eng,
pm: pm,
store: pm.Store(),
appConnector: a,
},
dnsmessage.AResource{
A: [4]byte{192, 0, 0, 8},
},
)
}}
f := filter.NewAllowAllForTest(logger.Discard)
h.ps.b.setFilter(f)
}
h.ps.b.appConnector.UpdateDomains([]string{"example.com"})
h.ps.b.appConnector.Wait(ctx)
if !h.ps.b.OfferingAppConnector() {
t.Fatal("expecting to be offering app connector")
}
if !h.replyToDNSQueries() {
t.Errorf("unexpectedly deny; wanted to be a DNS server")
}
h.ps.resolver = &fakeResolver{build: func(b *dnsmessage.Builder) {
b.AResource(
dnsmessage.ResourceHeader{
Name: dnsmessage.MustNewName("example.com."),
Type: dnsmessage.TypeA,
Class: dnsmessage.ClassINET,
TTL: 0,
},
dnsmessage.AResource{
A: [4]byte{192, 0, 0, 8},
},
)
}}
f := filter.NewAllowAllForTest(logger.Discard)
h.ps.b.setFilter(f)
w := httptest.NewRecorder()
h.handleDNSQuery(w, httptest.NewRequest("GET", "/dns-query?q=example.com.", nil))
if w.Code != http.StatusOK {
t.Errorf("unexpected status code: %v", w.Code)
}
h.ps.b.appConnector.Wait(ctx)
if !h.ps.b.OfferingAppConnector() {
t.Fatal("expecting to be offering app connector")
}
if !h.replyToDNSQueries() {
t.Errorf("unexpectedly deny; wanted to be a DNS server")
}
wantRoutes := []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}
if !slices.Equal(rc.Routes(), wantRoutes) {
t.Errorf("got %v; want %v", rc.Routes(), wantRoutes)
w := httptest.NewRecorder()
h.handleDNSQuery(w, httptest.NewRequest("GET", "/dns-query?q=example.com.", nil))
if w.Code != http.StatusOK {
t.Errorf("unexpected status code: %v", w.Code)
}
h.ps.b.appConnector.Wait(ctx)
wantRoutes := []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}
if !slices.Equal(rc.Routes(), wantRoutes) {
t.Errorf("got %v; want %v", rc.Routes(), wantRoutes)
}
}
}
func TestPeerAPIReplyToDNSQueriesAreObservedWithCNAMEFlattening(t *testing.T) {
ctx := context.Background()
var h peerAPIHandler
h.remoteAddr = netip.MustParseAddrPort("100.150.151.152:12345")
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
var h peerAPIHandler
h.remoteAddr = netip.MustParseAddrPort("100.150.151.152:12345")
rc := &appctest.RouteCollector{}
eng, _ := wgengine.NewFakeUserspaceEngine(logger.Discard, 0)
pm := must.Get(newProfileManager(new(mem.Store), t.Logf))
h.ps = &peerAPIServer{
b: &LocalBackend{
e: eng,
pm: pm,
store: pm.Store(),
appConnector: appc.NewAppConnector(t.Logf, rc),
},
}
h.ps.b.appConnector.UpdateDomains([]string{"www.example.com"})
h.ps.b.appConnector.Wait(ctx)
h.ps.resolver = &fakeResolver{build: func(b *dnsmessage.Builder) {
b.CNAMEResource(
dnsmessage.ResourceHeader{
Name: dnsmessage.MustNewName("www.example.com."),
Type: dnsmessage.TypeCNAME,
Class: dnsmessage.ClassINET,
TTL: 0,
rc := &appctest.RouteCollector{}
eng, _ := wgengine.NewFakeUserspaceEngine(logger.Discard, 0)
pm := must.Get(newProfileManager(new(mem.Store), t.Logf, new(health.Tracker)))
var a *appc.AppConnector
if shouldStore {
a = appc.NewAppConnector(t.Logf, rc, &appc.RouteInfo{}, fakeStoreRoutes)
} else {
a = appc.NewAppConnector(t.Logf, rc, nil, nil)
}
h.ps = &peerAPIServer{
b: &LocalBackend{
e: eng,
pm: pm,
store: pm.Store(),
appConnector: a,
},
dnsmessage.CNAMEResource{
CNAME: dnsmessage.MustNewName("example.com."),
},
)
b.AResource(
dnsmessage.ResourceHeader{
Name: dnsmessage.MustNewName("example.com."),
Type: dnsmessage.TypeA,
Class: dnsmessage.ClassINET,
TTL: 0,
},
dnsmessage.AResource{
A: [4]byte{192, 0, 0, 8},
},
)
}}
f := filter.NewAllowAllForTest(logger.Discard)
h.ps.b.setFilter(f)
}
h.ps.b.appConnector.UpdateDomains([]string{"www.example.com"})
h.ps.b.appConnector.Wait(ctx)
if !h.ps.b.OfferingAppConnector() {
t.Fatal("expecting to be offering app connector")
}
if !h.replyToDNSQueries() {
t.Errorf("unexpectedly deny; wanted to be a DNS server")
}
h.ps.resolver = &fakeResolver{build: func(b *dnsmessage.Builder) {
b.CNAMEResource(
dnsmessage.ResourceHeader{
Name: dnsmessage.MustNewName("www.example.com."),
Type: dnsmessage.TypeCNAME,
Class: dnsmessage.ClassINET,
TTL: 0,
},
dnsmessage.CNAMEResource{
CNAME: dnsmessage.MustNewName("example.com."),
},
)
b.AResource(
dnsmessage.ResourceHeader{
Name: dnsmessage.MustNewName("example.com."),
Type: dnsmessage.TypeA,
Class: dnsmessage.ClassINET,
TTL: 0,
},
dnsmessage.AResource{
A: [4]byte{192, 0, 0, 8},
},
)
}}
f := filter.NewAllowAllForTest(logger.Discard)
h.ps.b.setFilter(f)
w := httptest.NewRecorder()
h.handleDNSQuery(w, httptest.NewRequest("GET", "/dns-query?q=www.example.com.", nil))
if w.Code != http.StatusOK {
t.Errorf("unexpected status code: %v", w.Code)
}
h.ps.b.appConnector.Wait(ctx)
if !h.ps.b.OfferingAppConnector() {
t.Fatal("expecting to be offering app connector")
}
if !h.replyToDNSQueries() {
t.Errorf("unexpectedly deny; wanted to be a DNS server")
}
wantRoutes := []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}
if !slices.Equal(rc.Routes(), wantRoutes) {
t.Errorf("got %v; want %v", rc.Routes(), wantRoutes)
w := httptest.NewRecorder()
h.handleDNSQuery(w, httptest.NewRequest("GET", "/dns-query?q=www.example.com.", nil))
if w.Code != http.StatusOK {
t.Errorf("unexpected status code: %v", w.Code)
}
h.ps.b.appConnector.Wait(ctx)
wantRoutes := []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}
if !slices.Equal(rc.Routes(), wantRoutes) {
t.Errorf("got %v; want %v", rc.Routes(), wantRoutes)
}
}
}
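The loops above build the app connector in two modes: with route persistence (a seeded RouteInfo plus a store callback) and without. A minimal sketch of just those two constructions, assuming the store callback has the shape suggested by how fakeStoreRoutes is passed in these tests:

// Sketch (not part of the change): the two app connector modes exercised above.
// The storeRoutes callback type is an assumption inferred from fakeStoreRoutes.
import (
	"log"

	"tailscale.com/appc"
	"tailscale.com/appc/appctest"
)

func newConnectors() (withStore, withoutStore *appc.AppConnector) {
	rc := &appctest.RouteCollector{}
	storeRoutes := func(ri *appc.RouteInfo) error { // assumed callback shape
		log.Printf("persisting %d wildcard routes", len(ri.Wildcards))
		return nil
	}
	// Persisting variant: seeded RouteInfo plus a store callback.
	withStore = appc.NewAppConnector(log.Printf, rc, &appc.RouteInfo{}, storeRoutes)
	// Non-persisting variant: no RouteInfo, no store callback.
	withoutStore = appc.NewAppConnector(log.Printf, rc, nil, nil)
	return withStore, withoutStore
}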

View File

@ -12,10 +12,10 @@ import (
"runtime"
"slices"
"strings"
"time"
"tailscale.com/clientupdate"
"tailscale.com/envknob"
"tailscale.com/health"
"tailscale.com/ipn"
"tailscale.com/types/logger"
"tailscale.com/util/clientmetric"
@ -30,8 +30,9 @@ var debug = envknob.RegisterBool("TS_DEBUG_PROFILES")
//
// It is not safe for concurrent use.
type profileManager struct {
store ipn.StateStore
logf logger.Logf
store ipn.StateStore
logf logger.Logf
health *health.Tracker
currentUserID ipn.WindowsUserID
knownProfiles map[ipn.ProfileID]*ipn.LoginProfile // always non-nil
@ -102,6 +103,7 @@ func (pm *profileManager) SetCurrentUserID(uid ipn.WindowsUserID) error {
}
pm.currentProfile = prof
pm.prefs = prefs
pm.updateHealth()
return nil
}
@ -198,10 +200,6 @@ func (pm *profileManager) Reset() {
pm.NewProfile()
}
func init() {
rand.Seed(time.Now().UnixNano())
}
// SetPrefs sets the current profile's prefs to the provided value.
// It also saves the prefs to the StateStore. It stores a copy of the
// provided prefs, which may be accessed via CurrentPrefs.
@ -285,6 +283,7 @@ func newUnusedID(knownProfiles map[ipn.ProfileID]*ipn.LoginProfile) (ipn.Profile
// is not new.
func (pm *profileManager) setPrefsLocked(clonedPrefs ipn.PrefsView) error {
pm.prefs = clonedPrefs
pm.updateHealth()
if pm.currentProfile.ID == "" {
return nil
}
@ -336,6 +335,7 @@ func (pm *profileManager) SwitchProfile(id ipn.ProfileID) error {
return err
}
pm.prefs = prefs
pm.updateHealth()
pm.currentProfile = kp
return pm.setAsUserSelectedProfileLocked()
}
@ -443,12 +443,20 @@ func (pm *profileManager) writeKnownProfiles() error {
return pm.WriteState(ipn.KnownProfilesStateKey, b)
}
func (pm *profileManager) updateHealth() {
if !pm.prefs.Valid() {
return
}
pm.health.SetCheckForUpdates(pm.prefs.AutoUpdate().Check)
}
// NewProfile creates and switches to a new unnamed profile. The new profile is
// not persisted until SetPrefs is called with a logged-in user.
func (pm *profileManager) NewProfile() {
metricNewProfile.Add(1)
pm.prefs = defaultPrefs
pm.updateHealth()
pm.currentProfile = &ipn.LoginProfile{}
}
@ -477,7 +485,8 @@ func (pm *profileManager) CurrentPrefs() ipn.PrefsView {
// ReadStartupPrefsForTest reads the startup prefs from disk. It is only used for testing.
func ReadStartupPrefsForTest(logf logger.Logf, store ipn.StateStore) (ipn.PrefsView, error) {
pm, err := newProfileManager(store, logf)
ht := new(health.Tracker) // in tests, don't care about the health status
pm, err := newProfileManager(store, logf, ht)
if err != nil {
return ipn.PrefsView{}, err
}
@ -486,8 +495,8 @@ func ReadStartupPrefsForTest(logf logger.Logf, store ipn.StateStore) (ipn.PrefsV
// newProfileManager creates a new ProfileManager using the provided StateStore.
// It also loads the list of known profiles from the StateStore.
func newProfileManager(store ipn.StateStore, logf logger.Logf) (*profileManager, error) {
return newProfileManagerWithGOOS(store, logf, envknob.GOOS())
func newProfileManager(store ipn.StateStore, logf logger.Logf, health *health.Tracker) (*profileManager, error) {
return newProfileManagerWithGOOS(store, logf, health, envknob.GOOS())
}
func readAutoStartKey(store ipn.StateStore, goos string) (ipn.StateKey, error) {
@ -520,7 +529,7 @@ func readKnownProfiles(store ipn.StateStore) (map[ipn.ProfileID]*ipn.LoginProfil
return knownProfiles, nil
}
func newProfileManagerWithGOOS(store ipn.StateStore, logf logger.Logf, goos string) (*profileManager, error) {
func newProfileManagerWithGOOS(store ipn.StateStore, logf logger.Logf, ht *health.Tracker, goos string) (*profileManager, error) {
logf = logger.WithPrefix(logf, "pm: ")
stateKey, err := readAutoStartKey(store, goos)
if err != nil {
@ -536,6 +545,7 @@ func newProfileManagerWithGOOS(store ipn.StateStore, logf logger.Logf, goos stri
store: store,
knownProfiles: knownProfiles,
logf: logf,
health: ht,
}
if stateKey != "" {
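A minimal sketch of the new health plumbing, written as if inside the same package since the constructor is unexported: the profile manager now forwards the auto-update "check" preference to the health tracker whenever prefs change.

// Sketch: NewProfile, SetPrefs, SetCurrentUserID and SwitchProfile all call
// pm.updateHealth, which pushes prefs.AutoUpdate().Check into the tracker.
import (
	"tailscale.com/health"
	"tailscale.com/ipn/store/mem"
	"tailscale.com/types/logger"
)

func exampleHealthPlumbing() error {
	ht := new(health.Tracker) // zero value usable, as in the tests in this change
	pm, err := newProfileManagerWithGOOS(new(mem.Store), logger.Discard, ht, "linux")
	if err != nil {
		return err
	}
	// Resets prefs to defaults and calls pm.updateHealth, so ht now reflects
	// whether update checks are enabled.
	pm.NewProfile()
	return nil
}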

View File

@ -12,6 +12,7 @@ import (
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
"tailscale.com/clientupdate"
"tailscale.com/health"
"tailscale.com/ipn"
"tailscale.com/ipn/store/mem"
"tailscale.com/tailcfg"
@ -24,7 +25,7 @@ import (
func TestProfileCurrentUserSwitch(t *testing.T) {
store := new(mem.Store)
pm, err := newProfileManagerWithGOOS(store, logger.Discard, "linux")
pm, err := newProfileManagerWithGOOS(store, logger.Discard, new(health.Tracker), "linux")
if err != nil {
t.Fatal(err)
}
@ -61,7 +62,7 @@ func TestProfileCurrentUserSwitch(t *testing.T) {
t.Fatalf("CurrentPrefs() = %v, want emptyPrefs", pm.CurrentPrefs().Pretty())
}
pm, err = newProfileManagerWithGOOS(store, logger.Discard, "linux")
pm, err = newProfileManagerWithGOOS(store, logger.Discard, new(health.Tracker), "linux")
if err != nil {
t.Fatal(err)
}
@ -79,7 +80,7 @@ func TestProfileCurrentUserSwitch(t *testing.T) {
func TestProfileList(t *testing.T) {
store := new(mem.Store)
pm, err := newProfileManagerWithGOOS(store, logger.Discard, "linux")
pm, err := newProfileManagerWithGOOS(store, logger.Discard, new(health.Tracker), "linux")
if err != nil {
t.Fatal(err)
}
@ -283,7 +284,7 @@ func TestProfileDupe(t *testing.T) {
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
store := new(mem.Store)
pm, err := newProfileManagerWithGOOS(store, logger.Discard, "linux")
pm, err := newProfileManagerWithGOOS(store, logger.Discard, new(health.Tracker), "linux")
if err != nil {
t.Fatal(err)
}
@ -316,7 +317,7 @@ func TestProfileDupe(t *testing.T) {
func TestProfileManagement(t *testing.T) {
store := new(mem.Store)
pm, err := newProfileManagerWithGOOS(store, logger.Discard, "linux")
pm, err := newProfileManagerWithGOOS(store, logger.Discard, new(health.Tracker), "linux")
if err != nil {
t.Fatal(err)
}
@ -414,7 +415,7 @@ func TestProfileManagement(t *testing.T) {
t.Logf("Recreate profile manager from store")
// Recreate the profile manager to ensure that it can load the profiles
// from the store at startup.
pm, err = newProfileManagerWithGOOS(store, logger.Discard, "linux")
pm, err = newProfileManagerWithGOOS(store, logger.Discard, new(health.Tracker), "linux")
if err != nil {
t.Fatal(err)
}
@ -430,7 +431,7 @@ func TestProfileManagement(t *testing.T) {
t.Logf("Recreate profile manager from store after deleting default profile")
// Recreate the profile manager to ensure that it can load the profiles
// from the store at startup.
pm, err = newProfileManagerWithGOOS(store, logger.Discard, "linux")
pm, err = newProfileManagerWithGOOS(store, logger.Discard, new(health.Tracker), "linux")
if err != nil {
t.Fatal(err)
}
@ -472,7 +473,7 @@ func TestProfileManagement(t *testing.T) {
t.Fatal("SetPrefs failed to save auto-update setting")
}
// Re-load profiles to trigger migration for invalid auto-update value.
pm, err = newProfileManagerWithGOOS(store, logger.Discard, "linux")
pm, err = newProfileManagerWithGOOS(store, logger.Discard, new(health.Tracker), "linux")
if err != nil {
t.Fatal(err)
}
@ -494,7 +495,7 @@ func TestProfileManagementWindows(t *testing.T) {
store := new(mem.Store)
pm, err := newProfileManagerWithGOOS(store, logger.Discard, "windows")
pm, err := newProfileManagerWithGOOS(store, logger.Discard, new(health.Tracker), "windows")
if err != nil {
t.Fatal(err)
}
@ -565,7 +566,7 @@ func TestProfileManagementWindows(t *testing.T) {
t.Logf("Recreate profile manager from store, should reset prefs")
// Recreate the profile manager to ensure that it can load the profiles
// from the store at startup.
pm, err = newProfileManagerWithGOOS(store, logger.Discard, "windows")
pm, err = newProfileManagerWithGOOS(store, logger.Discard, new(health.Tracker), "windows")
if err != nil {
t.Fatal(err)
}
@ -590,7 +591,7 @@ func TestProfileManagementWindows(t *testing.T) {
}
// Recreate the profile manager to ensure that it starts with test profile.
pm, err = newProfileManagerWithGOOS(store, logger.Discard, "windows")
pm, err = newProfileManagerWithGOOS(store, logger.Discard, new(health.Tracker), "windows")
if err != nil {
t.Fatal(err)
}

View File

@ -24,6 +24,7 @@ import (
"testing"
"time"
"tailscale.com/health"
"tailscale.com/ipn"
"tailscale.com/ipn/store/mem"
"tailscale.com/tailcfg"
@ -686,7 +687,7 @@ func newTestBackend(t *testing.T) *LocalBackend {
dir := t.TempDir()
b.SetVarRoot(dir)
pm := must.Get(newProfileManager(new(mem.Store), logf))
pm := must.Get(newProfileManager(new(mem.Store), logf, new(health.Tracker)))
pm.currentProfile = &ipn.LoginProfile{ID: "id0"}
b.pm = pm

View File

@ -10,6 +10,7 @@ import (
"reflect"
"testing"
"tailscale.com/health"
"tailscale.com/ipn/store/mem"
"tailscale.com/tailcfg"
"tailscale.com/util/must"
@ -49,7 +50,7 @@ type fakeSSHServer struct {
}
func TestGetSSHUsernames(t *testing.T) {
pm := must.Get(newProfileManager(new(mem.Store), t.Logf))
pm := must.Get(newProfileManager(new(mem.Store), t.Logf, new(health.Tracker)))
b := &LocalBackend{pm: pm, store: pm.Store()}
b.sshServer = fakeSSHServer{}
res, err := b.getSSHUsernames(new(tailcfg.C2NSSHUsernamesRequest))

View File

@ -1366,6 +1366,12 @@ func (h *Handler) servePrefs(w http.ResponseWriter, r *http.Request) {
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
if err := h.b.MaybeClearAppConnector(mp); err != nil {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusInternalServerError)
json.NewEncoder(w).Encode(resJSON{Error: err.Error()})
return
}
var err error
prefs, err = h.b.EditPrefs(mp)
if err != nil {

View File

@ -7,6 +7,7 @@ package kubestore
import (
"context"
"fmt"
"net"
"strings"
"time"
@ -18,7 +19,7 @@ import (
// Store is an ipn.StateStore that uses a Kubernetes Secret for persistence.
type Store struct {
client *kube.Client
client kube.Client
canPatch bool
secretName string
}
@ -29,7 +30,7 @@ func New(_ logger.Logf, secretName string) (*Store, error) {
if err != nil {
return nil, err
}
canPatch, err := c.CheckSecretPermissions(context.Background(), secretName)
canPatch, _, err := c.CheckSecretPermissions(context.Background(), secretName)
if err != nil {
return nil, err
}
@ -83,7 +84,7 @@ func (s *Store) WriteState(id ipn.StateKey, bs []byte) error {
secret, err := s.client.GetSecret(ctx, s.secretName)
if err != nil {
if st, ok := err.(*kube.Status); ok && st.Code == 404 {
if kube.IsNotFoundErr(err) {
return s.client.CreateSecret(ctx, &kube.Secret{
TypeMeta: kube.TypeMeta{
APIVersion: "v1",
@ -100,6 +101,19 @@ func (s *Store) WriteState(id ipn.StateKey, bs []byte) error {
return err
}
if s.canPatch {
if len(secret.Data) == 0 { // if the user has pre-created a blank Secret
m := []kube.JSONPatch{
{
Op: "add",
Path: "/data",
Value: map[string][]byte{sanitizeKey(id): bs},
},
}
if err := s.client.JSONPatchSecret(ctx, s.secretName, m); err != nil {
return fmt.Errorf("error patching Secret %s with a /data field: %v", s.secretName, err)
}
return nil
}
m := []kube.JSONPatch{
{
Op: "add",
@ -108,7 +122,7 @@ func (s *Store) WriteState(id ipn.StateKey, bs []byte) error {
},
}
if err := s.client.JSONPatchSecret(ctx, s.secretName, m); err != nil {
return err
return fmt.Errorf("error patching Secret %s with /data/%s field", s.secretName, sanitizeKey(id))
}
return nil
}
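The blank-Secret branch above sends a single patch that creates the whole /data map; later writes patch one key under /data. A minimal sketch of the two patch bodies, assuming kube.JSONPatch serializes to standard RFC 6902 op/path/value objects; the key and values are illustrative.

// Sketch: the two JSON patch shapes WriteState can send. Key/values made up.
import (
	"encoding/json"
	"fmt"

	"tailscale.com/kube"
)

func examplePatches() {
	// Secret exists but its data map is empty: add /data in one operation.
	first := []kube.JSONPatch{{
		Op:    "add",
		Path:  "/data",
		Value: map[string][]byte{"examplekey": []byte("v1")},
	}}
	// Secret already has data: add (or replace) one key under /data.
	later := []kube.JSONPatch{{
		Op:    "add",
		Path:  "/data/examplekey",
		Value: []byte("v2"),
	}}
	for _, patch := range [][]kube.JSONPatch{first, later} {
		b, _ := json.Marshal(patch)
		fmt.Println(string(b)) // []byte values appear base64-encoded in JSON
	}
}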

View File

@ -10,6 +10,8 @@ Resource Types:
- [Connector](#connector)
- [DNSConfig](#dnsconfig)
- [ProxyClass](#proxyclass)
@ -259,6 +261,274 @@ ConnectorCondition contains condition information for a Connector.
</tr></tbody>
</table>
## DNSConfig
<sup><sup>[↩ Parent](#tailscalecomv1alpha1 )</sup></sup>
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
<th>Required</th>
</tr>
</thead>
<tbody><tr>
<td><b>apiVersion</b></td>
<td>string</td>
<td>tailscale.com/v1alpha1</td>
<td>true</td>
</tr>
<tr>
<td><b>kind</b></td>
<td>string</td>
<td>DNSConfig</td>
<td>true</td>
</tr>
<tr>
<td><b><a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#objectmeta-v1-meta">metadata</a></b></td>
<td>object</td>
<td>Refer to the Kubernetes API documentation for the fields of the `metadata` field.</td>
<td>true</td>
</tr><tr>
<td><b><a href="#dnsconfigspec">spec</a></b></td>
<td>object</td>
<td>
<br/>
</td>
<td>true</td>
</tr><tr>
<td><b><a href="#dnsconfigstatus">status</a></b></td>
<td>object</td>
<td>
<br/>
</td>
<td>false</td>
</tr></tbody>
</table>
### DNSConfig.spec
<sup><sup>[↩ Parent](#dnsconfig)</sup></sup>
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
<th>Required</th>
</tr>
</thead>
<tbody><tr>
<td><b><a href="#dnsconfigspecnameserver">nameserver</a></b></td>
<td>object</td>
<td>
<br/>
</td>
<td>true</td>
</tr></tbody>
</table>
### DNSConfig.spec.nameserver
<sup><sup>[↩ Parent](#dnsconfigspec)</sup></sup>
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
<th>Required</th>
</tr>
</thead>
<tbody><tr>
<td><b><a href="#dnsconfigspecnameserverimage">image</a></b></td>
<td>object</td>
<td>
<br/>
</td>
<td>false</td>
</tr></tbody>
</table>
### DNSConfig.spec.nameserver.image
<sup><sup>[↩ Parent](#dnsconfigspecnameserver)</sup></sup>
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
<th>Required</th>
</tr>
</thead>
<tbody><tr>
<td><b>repo</b></td>
<td>string</td>
<td>
<br/>
</td>
<td>false</td>
</tr><tr>
<td><b>tag</b></td>
<td>string</td>
<td>
<br/>
</td>
<td>false</td>
</tr></tbody>
</table>
### DNSConfig.status
<sup><sup>[↩ Parent](#dnsconfig)</sup></sup>
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
<th>Required</th>
</tr>
</thead>
<tbody><tr>
<td><b><a href="#dnsconfigstatusconditionsindex">conditions</a></b></td>
<td>[]object</td>
<td>
<br/>
</td>
<td>false</td>
</tr><tr>
<td><b><a href="#dnsconfigstatusnameserver">nameserver</a></b></td>
<td>object</td>
<td>
<br/>
</td>
<td>false</td>
</tr></tbody>
</table>
### DNSConfig.status.conditions[index]
<sup><sup>[↩ Parent](#dnsconfigstatus)</sup></sup>
ConnectorCondition contains condition information for a Connector.
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
<th>Required</th>
</tr>
</thead>
<tbody><tr>
<td><b>status</b></td>
<td>string</td>
<td>
Status of the condition, one of ('True', 'False', 'Unknown').<br/>
</td>
<td>true</td>
</tr><tr>
<td><b>type</b></td>
<td>string</td>
<td>
Type of the condition, known values are (`SubnetRouterReady`).<br/>
</td>
<td>true</td>
</tr><tr>
<td><b>lastTransitionTime</b></td>
<td>string</td>
<td>
LastTransitionTime is the timestamp corresponding to the last status change of this condition.<br/>
<br/>
<i>Format</i>: date-time<br/>
</td>
<td>false</td>
</tr><tr>
<td><b>message</b></td>
<td>string</td>
<td>
Message is a human readable description of the details of the last transition, complementing reason.<br/>
</td>
<td>false</td>
</tr><tr>
<td><b>observedGeneration</b></td>
<td>integer</td>
<td>
If set, this represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.condition[x].observedGeneration is 9, the condition is out of date with respect to the current state of the Connector.<br/>
<br/>
<i>Format</i>: int64<br/>
</td>
<td>false</td>
</tr><tr>
<td><b>reason</b></td>
<td>string</td>
<td>
Reason is a brief machine readable explanation for the condition's last transition.<br/>
</td>
<td>false</td>
</tr></tbody>
</table>
### DNSConfig.status.nameserver
<sup><sup>[↩ Parent](#dnsconfigstatus)</sup></sup>
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
<th>Required</th>
</tr>
</thead>
<tbody><tr>
<td><b>ip</b></td>
<td>string</td>
<td>
<br/>
</td>
<td>false</td>
</tr></tbody>
</table>
## ProxyClass
<sup><sup>[↩ Parent](#tailscalecomv1alpha1 )</sup></sup>
@ -333,7 +603,7 @@ Specification of the desired state of the ProxyClass resource. https://git.k8s.i
<td><b><a href="#proxyclassspecmetrics">metrics</a></b></td>
<td>object</td>
<td>
Configuration for proxy metrics. Metrics are currently not supported for egress proxies and for Ingress proxies that have been configured with tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation.<br/>
Configuration for proxy metrics. Metrics are currently not supported for egress proxies and for Ingress proxies that have been configured with tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation. Note that the metrics are currently considered unstable and will likely change in breaking ways in the future - we only recommend that you use those for debugging purposes.<br/>
</td>
<td>false</td>
</tr><tr>
@ -352,7 +622,7 @@ Specification of the desired state of the ProxyClass resource. https://git.k8s.i
Configuration for proxy metrics. Metrics are currently not supported for egress proxies and for Ingress proxies that have been configured with tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation.
Configuration for proxy metrics. Metrics are currently not supported for egress proxies and for Ingress proxies that have been configured with tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation. Note that the metrics are currently considered unstable and will likely change in breaking ways in the future - we only recommend that you use those for debugging purposes.
<table>
<thead>

View File

@ -49,7 +49,7 @@ func init() {
// Adds the list of known types to api.Scheme.
func addKnownTypes(scheme *runtime.Scheme) error {
scheme.AddKnownTypes(SchemeGroupVersion, &Connector{}, &ConnectorList{}, &ProxyClass{}, &ProxyClassList{})
scheme.AddKnownTypes(SchemeGroupVersion, &Connector{}, &ConnectorList{}, &ProxyClass{}, &ProxyClassList{}, &DNSConfig{}, &DNSConfigList{})
metav1.AddToGroupVersion(scheme, SchemeGroupVersion)
return nil

View File

@ -57,7 +57,9 @@ type ProxyClassSpec struct {
// Configuration for proxy metrics. Metrics are currently not supported
// for egress proxies and for Ingress proxies that have been configured
// with tailscale.com/experimental-forward-cluster-traffic-via-ingress
// annotation.
// annotation. Note that the metrics are currently considered unstable
// and will likely change in breaking ways in the future - we only
// recommend that you use those for debugging purposes.
// +optional
Metrics *Metrics `json:"metrics,omitempty"`
}

View File

@ -0,0 +1,71 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
package v1alpha1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// Code comments on these types should be treated as user-facing documentation:
// they will appear on the DNSConfig CRD, i.e. if someone runs kubectl explain dnsconfig.
var DNSConfigKind = "DNSConfig"
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:resource:scope=Cluster,shortName=dc
// +kubebuilder:printcolumn:name="NameserverIP",type="string",JSONPath=`.status.nameserver.ip`,description="Service IP address of the nameserver"
type DNSConfig struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec DNSConfigSpec `json:"spec"`
// +optional
Status DNSConfigStatus `json:"status"`
}
// +kubebuilder:object:root=true
type DNSConfigList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata"`
Items []DNSConfig `json:"items"`
}
type DNSConfigSpec struct {
Nameserver *Nameserver `json:"nameserver"`
}
type Nameserver struct {
// +optional
Image *Image `json:"image,omitempty"`
}
type Image struct {
// +optional
Repo string `json:"repo,omitempty"`
// +optional
Tag string `json:"tag,omitempty"`
}
type DNSConfigStatus struct {
// +listType=map
// +listMapKey=type
// +optional
Conditions []ConnectorCondition `json:"conditions"`
// +optional
Nameserver *NameserverStatus `json:"nameserver"`
}
type NameserverStatus struct {
// +optional
IP string `json:"ip"`
}
const NameserverReady ConnectorConditionType = `NameserverReady`
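A minimal sketch of constructing a DNSConfig with these types, e.g. as a test fixture; the object name and image override are illustrative, and the import path for the API package is an assumption.

// Sketch: a DNSConfig built from the v1alpha1 types above. Values illustrative.
import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	tsapi "tailscale.com/k8s-operator/apis/v1alpha1" // assumed import path
)

func exampleDNSConfig() *tsapi.DNSConfig {
	return &tsapi.DNSConfig{
		TypeMeta:   metav1.TypeMeta{Kind: tsapi.DNSConfigKind, APIVersion: "tailscale.com/v1alpha1"},
		ObjectMeta: metav1.ObjectMeta{Name: "ts-dns"},
		Spec: tsapi.DNSConfigSpec{
			Nameserver: &tsapi.Nameserver{
				// Optional image override; leave nil to use the default image.
				Image: &tsapi.Image{Repo: "tailscale/k8s-nameserver", Tag: "unstable"},
			},
		},
	}
}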

View File

@ -163,6 +163,112 @@ func (in *Container) DeepCopy() *Container {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DNSConfig) DeepCopyInto(out *DNSConfig) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DNSConfig.
func (in *DNSConfig) DeepCopy() *DNSConfig {
if in == nil {
return nil
}
out := new(DNSConfig)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *DNSConfig) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DNSConfigList) DeepCopyInto(out *DNSConfigList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]DNSConfig, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DNSConfigList.
func (in *DNSConfigList) DeepCopy() *DNSConfigList {
if in == nil {
return nil
}
out := new(DNSConfigList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *DNSConfigList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DNSConfigSpec) DeepCopyInto(out *DNSConfigSpec) {
*out = *in
if in.Nameserver != nil {
in, out := &in.Nameserver, &out.Nameserver
*out = new(Nameserver)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DNSConfigSpec.
func (in *DNSConfigSpec) DeepCopy() *DNSConfigSpec {
if in == nil {
return nil
}
out := new(DNSConfigSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DNSConfigStatus) DeepCopyInto(out *DNSConfigStatus) {
*out = *in
if in.Conditions != nil {
in, out := &in.Conditions, &out.Conditions
*out = make([]ConnectorCondition, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.Nameserver != nil {
in, out := &in.Nameserver, &out.Nameserver
*out = new(NameserverStatus)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DNSConfigStatus.
func (in *DNSConfigStatus) DeepCopy() *DNSConfigStatus {
if in == nil {
return nil
}
out := new(DNSConfigStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Env) DeepCopyInto(out *Env) {
*out = *in
@ -178,6 +284,21 @@ func (in *Env) DeepCopy() *Env {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Image) DeepCopyInto(out *Image) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Image.
func (in *Image) DeepCopy() *Image {
if in == nil {
return nil
}
out := new(Image)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Metrics) DeepCopyInto(out *Metrics) {
*out = *in
@ -193,6 +314,41 @@ func (in *Metrics) DeepCopy() *Metrics {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Nameserver) DeepCopyInto(out *Nameserver) {
*out = *in
if in.Image != nil {
in, out := &in.Image, &out.Image
*out = new(Image)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Nameserver.
func (in *Nameserver) DeepCopy() *Nameserver {
if in == nil {
return nil
}
out := new(Nameserver)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *NameserverStatus) DeepCopyInto(out *NameserverStatus) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NameserverStatus.
func (in *NameserverStatus) DeepCopy() *NameserverStatus {
if in == nil {
return nil
}
out := new(NameserverStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Pod) DeepCopyInto(out *Pod) {
*out = *in

View File

@ -24,7 +24,7 @@ func SetConnectorCondition(cn *tsapi.Connector, conditionType tsapi.ConnectorCon
cn.Status.Conditions = conds
}
// RemoveConnectorCondition will remove condition of the given type.
// RemoveConnectorCondition will remove the condition of the given type if it exists.
func RemoveConnectorCondition(conn *tsapi.Connector, conditionType tsapi.ConnectorConditionType) {
conn.Status.Conditions = slices.DeleteFunc(conn.Status.Conditions, func(cond tsapi.ConnectorCondition) bool {
return cond.Type == conditionType
@ -39,6 +39,14 @@ func SetProxyClassCondition(pc *tsapi.ProxyClass, conditionType tsapi.ConnectorC
pc.Status.Conditions = conds
}
// SetDNSConfigCondition ensures that DNSConfig status has a condition with the
// given attributes. LastTransitionTime gets set every time the condition's
// status changes.
func SetDNSConfigCondition(dnsCfg *tsapi.DNSConfig, conditionType tsapi.ConnectorConditionType, status metav1.ConditionStatus, reason, message string, gen int64, clock tstime.Clock, logger *zap.SugaredLogger) {
conds := updateCondition(dnsCfg.Status.Conditions, conditionType, status, reason, message, gen, clock, logger)
dnsCfg.Status.Conditions = conds
}
func updateCondition(conds []tsapi.ConnectorCondition, conditionType tsapi.ConnectorConditionType, status metav1.ConditionStatus, reason, message string, gen int64, clock tstime.Clock, logger *zap.SugaredLogger) []tsapi.ConnectorCondition {
newCondition := tsapi.ConnectorCondition{
Type: conditionType,
@ -61,8 +69,9 @@ func updateCondition(conds []tsapi.ConnectorCondition, conditionType tsapi.Conne
}
cond := conds[idx] // update the existing condition
// If this update doesn't contain a state transition, we don't update
// the conditions LastTransitionTime to Now().
// If this update doesn't contain a state transition, don't update last
// transition time.
if cond.Status == status {
newCondition.LastTransitionTime = cond.LastTransitionTime
} else {
@ -82,3 +91,14 @@ func ProxyClassIsReady(pc *tsapi.ProxyClass) bool {
cond := pc.Status.Conditions[idx]
return cond.Status == metav1.ConditionTrue && cond.ObservedGeneration == pc.Generation
}
func DNSCfgIsReady(cfg *tsapi.DNSConfig) bool {
idx := xslices.IndexFunc(cfg.Status.Conditions, func(cond tsapi.ConnectorCondition) bool {
return cond.Type == tsapi.NameserverReady
})
if idx == -1 {
return false
}
cond := cfg.Status.Conditions[idx]
return cond.Status == metav1.ConditionTrue && cond.ObservedGeneration == cfg.Generation
}
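A minimal sketch, written as if in the same package as these helpers, of a reconciler flipping the NameserverReady condition and then checking readiness; the clock is left as a parameter, the reason/message strings are illustrative, and the API import path is an assumption.

// Sketch: set NameserverReady on a DNSConfig, then test readiness. DNSCfgIsReady
// also requires the condition's ObservedGeneration to match the object's current
// Generation, which SetDNSConfigCondition records via the gen argument.
import (
	"go.uber.org/zap"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	tsapi "tailscale.com/k8s-operator/apis/v1alpha1" // assumed import path
	"tailscale.com/tstime"
)

func markNameserverReady(cfg *tsapi.DNSConfig, clock tstime.Clock) bool {
	logger := zap.NewNop().Sugar()
	SetDNSConfigCondition(cfg, tsapi.NameserverReady, metav1.ConditionTrue,
		"NameserverCreated", "nameserver resources provisioned", cfg.Generation, clock, logger)
	return DNSCfgIsReady(cfg)
}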

View File

@ -98,5 +98,4 @@ func TestSetConnectorCondition(t *testing.T) {
},
},
})
}

23
k8s-operator/tsdns.go Normal file
View File

@ -0,0 +1,23 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
package kube
const (
Alpha1Version = "v1alpha1"
DNSRecordsCMName = "dnsrecords"
DNSRecordsCMKey = "records.json"
)
type Records struct {
// Version is the version of this Records configuration. Version is
// written by the operator, i.e. when it first populates the Records.
// k8s-nameserver must verify that it knows how to parse a given
// version.
Version string `json:"version"`
// IP4 contains a mapping of DNS names to IPv4 address(es).
IP4 map[string][]string `json:"ip4"`
}
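A minimal sketch, as if in the same package as Records, of the payload the operator would write under the records.json key; the hostname and address are illustrative.

// Sketch: marshaling a Records value for the dnsrecords ConfigMap.
import (
	"encoding/json"
	"fmt"
)

func exampleRecords() {
	r := Records{
		Version: Alpha1Version, // "v1alpha1"
		IP4: map[string][]string{
			"ingress.example.ts.net": {"10.100.0.5"}, // illustrative name/IP
		},
	}
	b, _ := json.MarshalIndent(r, "", "  ")
	fmt.Printf("%s:\n%s\n", DNSRecordsCMKey, b)
}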

View File

@ -49,7 +49,18 @@ func readFile(n string) ([]byte, error) {
// Client handles connections to Kubernetes.
// It expects to be run inside a cluster.
type Client struct {
type Client interface {
GetSecret(context.Context, string) (*Secret, error)
UpdateSecret(context.Context, *Secret) error
CreateSecret(context.Context, *Secret) error
StrategicMergePatchSecret(context.Context, string, *Secret, string) error
JSONPatchSecret(context.Context, string, []JSONPatch) error
CheckSecretPermissions(context.Context, string) (bool, bool, error)
SetDialer(dialer func(context.Context, string, string) (net.Conn, error))
SetURL(string)
}
type client struct {
mu sync.Mutex
url string
ns string
@ -59,7 +70,7 @@ type Client struct {
}
// New returns a new client
func New() (*Client, error) {
func New() (Client, error) {
ns, err := readFile("namespace")
if err != nil {
return nil, err
@ -72,7 +83,7 @@ func New() (*Client, error) {
if ok := cp.AppendCertsFromPEM(caCert); !ok {
return nil, fmt.Errorf("kube: error in creating root cert pool")
}
return &Client{
return &client{
url: defaultURL,
ns: string(ns),
client: &http.Client{
@ -87,23 +98,23 @@ func New() (*Client, error) {
// SetURL sets the URL to use for the Kubernetes API.
// This is used only for testing.
func (c *Client) SetURL(url string) {
func (c *client) SetURL(url string) {
c.url = url
}
// SetDialer sets the dialer to use when establishing a connection
// to the Kubernetes API server.
func (c *Client) SetDialer(dialer func(ctx context.Context, network, addr string) (net.Conn, error)) {
func (c *client) SetDialer(dialer func(ctx context.Context, network, addr string) (net.Conn, error)) {
c.client.Transport.(*http.Transport).DialContext = dialer
}
func (c *Client) expireToken() {
func (c *client) expireToken() {
c.mu.Lock()
defer c.mu.Unlock()
c.tokenExpiry = time.Now()
}
func (c *Client) getOrRenewToken() (string, error) {
func (c *client) getOrRenewToken() (string, error) {
c.mu.Lock()
defer c.mu.Unlock()
tk, te := c.token, c.tokenExpiry
@ -120,7 +131,7 @@ func (c *Client) getOrRenewToken() (string, error) {
return c.token, nil
}
func (c *Client) secretURL(name string) string {
func (c *client) secretURL(name string) string {
if name == "" {
return fmt.Sprintf("%s/api/v1/namespaces/%s/secrets", c.url, c.ns)
}
@ -153,7 +164,7 @@ func setHeader(key, value string) func(*http.Request) {
// decoded from JSON.
// If the request fails with a 401, the token is expired and a new one is
// requested.
func (c *Client) doRequest(ctx context.Context, method, url string, in, out any, opts ...func(*http.Request)) error {
func (c *client) doRequest(ctx context.Context, method, url string, in, out any, opts ...func(*http.Request)) error {
req, err := c.newRequest(ctx, method, url, in)
if err != nil {
return err
@ -178,7 +189,7 @@ func (c *Client) doRequest(ctx context.Context, method, url string, in, out any,
return nil
}
func (c *Client) newRequest(ctx context.Context, method, url string, in any) (*http.Request, error) {
func (c *client) newRequest(ctx context.Context, method, url string, in any) (*http.Request, error) {
tk, err := c.getOrRenewToken()
if err != nil {
return nil, err
@ -209,7 +220,7 @@ func (c *Client) newRequest(ctx context.Context, method, url string, in any) (*h
}
// GetSecret fetches the secret from the Kubernetes API.
func (c *Client) GetSecret(ctx context.Context, name string) (*Secret, error) {
func (c *client) GetSecret(ctx context.Context, name string) (*Secret, error) {
s := &Secret{Data: make(map[string][]byte)}
if err := c.doRequest(ctx, "GET", c.secretURL(name), nil, s); err != nil {
return nil, err
@ -218,18 +229,18 @@ func (c *Client) GetSecret(ctx context.Context, name string) (*Secret, error) {
}
// CreateSecret creates a secret in the Kubernetes API.
func (c *Client) CreateSecret(ctx context.Context, s *Secret) error {
func (c *client) CreateSecret(ctx context.Context, s *Secret) error {
s.Namespace = c.ns
return c.doRequest(ctx, "POST", c.secretURL(""), s, nil)
}
// UpdateSecret updates a secret in the Kubernetes API.
func (c *Client) UpdateSecret(ctx context.Context, s *Secret) error {
func (c *client) UpdateSecret(ctx context.Context, s *Secret) error {
return c.doRequest(ctx, "PUT", c.secretURL(s.Name), s, nil)
}
// JSONPatch is a JSON patch operation.
// It currently (2023-03-02) only supports the "remove" operation.
// It currently (2023-03-02) only supports "add" and "remove" operations.
//
// https://tools.ietf.org/html/rfc6902
type JSONPatch struct {
@ -239,8 +250,8 @@ type JSONPatch struct {
}
// JSONPatchSecret updates a secret in the Kubernetes API using a JSON patch.
// It currently (2023-03-02) only supports the "remove" operation.
func (c *Client) JSONPatchSecret(ctx context.Context, name string, patch []JSONPatch) error {
// It currently (2023-03-02) only supports "add" and "remove" operations.
func (c *client) JSONPatchSecret(ctx context.Context, name string, patch []JSONPatch) error {
for _, p := range patch {
if p.Op != "remove" && p.Op != "add" {
panic(fmt.Errorf("unsupported JSON patch operation: %q", p.Op))
@ -252,7 +263,7 @@ func (c *Client) JSONPatchSecret(ctx context.Context, name string, patch []JSONP
// StrategicMergePatchSecret updates a secret in the Kubernetes API using a
// strategic merge patch.
// If a fieldManager is provided, it will be used to track the patch.
func (c *Client) StrategicMergePatchSecret(ctx context.Context, name string, s *Secret, fieldManager string) error {
func (c *client) StrategicMergePatchSecret(ctx context.Context, name string, s *Secret, fieldManager string) error {
surl := c.secretURL(name)
if fieldManager != "" {
uv := url.Values{
@ -267,7 +278,7 @@ func (c *Client) StrategicMergePatchSecret(ctx context.Context, name string, s *
// CheckSecretPermissions checks the secret access permissions of the current
// pod. It returns an error if the basic permissions tailscale needs are
// missing, and reports whether the patch permission is additionally present.
// missing, and reports whether the patch and create permissions are additionally present.
//
// Errors encountered during the access checking process are logged, but ignored
// so that the pod tries to fail alive if the permissions exist and there's just
@ -275,7 +286,7 @@ func (c *Client) StrategicMergePatchSecret(ctx context.Context, name string, s *
// should always be able to use SSARs to assess their own permissions, but since
// we didn't use to check permissions this way we'll be cautious in case some
// old version of k8s deviates from the current behavior.
func (c *Client) CheckSecretPermissions(ctx context.Context, secretName string) (canPatch bool, err error) {
func (c *client) CheckSecretPermissions(ctx context.Context, secretName string) (canPatch, canCreate bool, err error) {
var errs []error
for _, verb := range []string{"get", "update"} {
ok, err := c.checkPermission(ctx, verb, secretName)
@ -286,19 +297,24 @@ func (c *Client) CheckSecretPermissions(ctx context.Context, secretName string)
}
}
if len(errs) > 0 {
return false, multierr.New(errs...)
return false, false, multierr.New(errs...)
}
ok, err := c.checkPermission(ctx, "patch", secretName)
canPatch, err = c.checkPermission(ctx, "patch", secretName)
if err != nil {
log.Printf("error checking patch permission on secret %s: %v", secretName, err)
return false, nil
return false, false, nil
}
return ok, nil
canCreate, err = c.checkPermission(ctx, "create", secretName)
if err != nil {
log.Printf("error checking create permission on secret %s: %v", secretName, err)
return false, false, nil
}
return canPatch, canCreate, nil
}
// checkPermission reports whether the current pod has permission to use the
// given verb (e.g. get, update, patch) on secretName.
func (c *Client) checkPermission(ctx context.Context, verb, secretName string) (bool, error) {
// given verb (e.g. get, update, patch, create) on secretName.
func (c *client) checkPermission(ctx context.Context, verb, secretName string) (bool, error) {
sar := map[string]any{
"apiVersion": "authorization.k8s.io/v1",
"kind": "SelfSubjectAccessReview",
@ -322,3 +338,10 @@ func (c *Client) checkPermission(ctx context.Context, verb, secretName string) (
}
return res.Status.Allowed, nil
}
func IsNotFoundErr(err error) bool {
if st, ok := err.(*Status); ok && st.Code == 404 {
return true
}
return false
}
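A minimal sketch of how a caller such as the state store can use the Client interface with the new helpers: check permissions (now reporting patch and create separately), then fetch a Secret and create it when IsNotFoundErr reports it missing. Error handling is abbreviated and the Secret fields mirror the ones used above.

// Sketch: get-or-create a Secret via the Client interface.
import (
	"context"

	"tailscale.com/kube"
)

func ensureSecret(ctx context.Context, c kube.Client, name string) (*kube.Secret, error) {
	canPatch, canCreate, err := c.CheckSecretPermissions(ctx, name)
	if err != nil {
		return nil, err
	}
	_ = canPatch // callers can choose a patch vs. update strategy from this
	s, err := c.GetSecret(ctx, name)
	if err == nil {
		return s, nil
	}
	if !kube.IsNotFoundErr(err) || !canCreate {
		return nil, err
	}
	s = &kube.Secret{
		TypeMeta: kube.TypeMeta{APIVersion: "v1", Kind: "Secret"},
		Data:     map[string][]byte{},
	}
	s.Name = name // Name is what UpdateSecret/secretURL use, as shown above
	if err := c.CreateSecret(ctx, s); err != nil {
		return nil, err
	}
	return s, nil
}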

37
kube/fake_client.go Normal file
View File

@ -0,0 +1,37 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
// Package kube provides a client to interact with Kubernetes.
// This package is Tailscale-internal and not meant for external consumption.
// Further, the API should not be considered stable.
package kube
import (
"context"
"net"
)
var _ Client = &FakeClient{}
type FakeClient struct {
GetSecretImpl func(context.Context, string) (*Secret, error)
CheckSecretPermissionsImpl func(ctx context.Context, name string) (bool, bool, error)
}
func (fc *FakeClient) CheckSecretPermissions(ctx context.Context, name string) (bool, bool, error) {
return fc.CheckSecretPermissionsImpl(ctx, name)
}
func (fc *FakeClient) GetSecret(ctx context.Context, name string) (*Secret, error) {
return fc.GetSecretImpl(ctx, name)
}
func (fc *FakeClient) SetURL(_ string) {}
func (fc *FakeClient) SetDialer(dialer func(ctx context.Context, network, addr string) (net.Conn, error)) {
}
func (fc *FakeClient) StrategicMergePatchSecret(context.Context, string, *Secret, string) error {
return nil
}
func (fc *FakeClient) JSONPatchSecret(context.Context, string, []JSONPatch) error {
return nil
}
func (fc *FakeClient) UpdateSecret(context.Context, *Secret) error { return nil }
func (fc *FakeClient) CreateSecret(context.Context, *Secret) error { return nil }
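A minimal sketch of using FakeClient in a test; the names and stubbed values are illustrative.

// Sketch: FakeClient satisfies the Client interface, with only GetSecretImpl
// and CheckSecretPermissionsImpl injectable; the remaining methods are no-ops.
import (
	"context"
	"testing"

	"tailscale.com/kube"
)

func TestWithFakeKubeClient(t *testing.T) {
	fc := &kube.FakeClient{
		CheckSecretPermissionsImpl: func(_ context.Context, name string) (bool, bool, error) {
			return true, false, nil // canPatch but not canCreate
		},
		GetSecretImpl: func(_ context.Context, name string) (*kube.Secret, error) {
			return &kube.Secret{Data: map[string][]byte{"k": []byte("v")}}, nil
		},
	}
	var c kube.Client = fc
	canPatch, canCreate, err := c.CheckSecretPermissions(context.Background(), "ts-state")
	if err != nil || !canPatch || canCreate {
		t.Fatalf("CheckSecretPermissions = %v, %v, %v; want true, false, nil", canPatch, canCreate, err)
	}
}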

View File

@ -26,8 +26,7 @@ Client][]. See also the dependencies in the [Tailscale CLI][].
- [github.com/aws/smithy-go](https://pkg.go.dev/github.com/aws/smithy-go) ([Apache-2.0](https://github.com/aws/smithy-go/blob/v1.19.0/LICENSE))
- [github.com/aws/smithy-go/internal/sync/singleflight](https://pkg.go.dev/github.com/aws/smithy-go/internal/sync/singleflight) ([BSD-3-Clause](https://github.com/aws/smithy-go/blob/v1.19.0/internal/sync/singleflight/LICENSE))
- [github.com/bits-and-blooms/bitset](https://pkg.go.dev/github.com/bits-and-blooms/bitset) ([BSD-3-Clause](https://github.com/bits-and-blooms/bitset/blob/v1.13.0/LICENSE))
- [github.com/coreos/go-iptables/iptables](https://pkg.go.dev/github.com/coreos/go-iptables/iptables) ([Apache-2.0](https://github.com/coreos/go-iptables/blob/v0.7.0/LICENSE))
- [github.com/coreos/go-systemd/v22/dbus](https://pkg.go.dev/github.com/coreos/go-systemd/v22/dbus) ([Apache-2.0](https://github.com/coreos/go-systemd/blob/v22.5.0/LICENSE))
- [github.com/coreos/go-iptables/iptables](https://pkg.go.dev/github.com/coreos/go-iptables/iptables) ([Apache-2.0](https://github.com/coreos/go-iptables/blob/65c67c9f46e6/LICENSE))
- [github.com/djherbis/times](https://pkg.go.dev/github.com/djherbis/times) ([MIT](https://github.com/djherbis/times/blob/v1.6.0/LICENSE))
- [github.com/fxamacker/cbor/v2](https://pkg.go.dev/github.com/fxamacker/cbor/v2) ([MIT](https://github.com/fxamacker/cbor/blob/v2.5.0/LICENSE))
- [github.com/gaissmai/bart](https://pkg.go.dev/github.com/gaissmai/bart) ([MIT](https://github.com/gaissmai/bart/blob/v0.4.1/LICENSE))
@ -61,7 +60,7 @@ Client][]. See also the dependencies in the [Tailscale CLI][].
- [github.com/tailscale/netlink](https://pkg.go.dev/github.com/tailscale/netlink) ([Apache-2.0](https://github.com/tailscale/netlink/blob/cabfb018fe85/LICENSE))
- [github.com/tailscale/peercred](https://pkg.go.dev/github.com/tailscale/peercred) ([BSD-3-Clause](https://github.com/tailscale/peercred/blob/b535050b2aa4/LICENSE))
- [github.com/tailscale/tailscale-android/libtailscale](https://pkg.go.dev/github.com/tailscale/tailscale-android/libtailscale) ([BSD-3-Clause](https://github.com/tailscale/tailscale-android/blob/HEAD/LICENSE))
- [github.com/tailscale/wireguard-go](https://pkg.go.dev/github.com/tailscale/wireguard-go) ([MIT](https://github.com/tailscale/wireguard-go/blob/64040e66467d/LICENSE))
- [github.com/tailscale/wireguard-go](https://pkg.go.dev/github.com/tailscale/wireguard-go) ([MIT](https://github.com/tailscale/wireguard-go/blob/03c5a0ccf754/LICENSE))
- [github.com/tailscale/xnet/webdav](https://pkg.go.dev/github.com/tailscale/xnet/webdav) ([BSD-3-Clause](https://github.com/tailscale/xnet/blob/62b9a7c569f9/LICENSE))
- [github.com/tcnksm/go-httpstat](https://pkg.go.dev/github.com/tcnksm/go-httpstat) ([MIT](https://github.com/tcnksm/go-httpstat/blob/v0.2.0/LICENSE))
- [github.com/u-root/uio](https://pkg.go.dev/github.com/u-root/uio) ([BSD-3-Clause](https://github.com/u-root/uio/blob/a3c409a6018e/LICENSE))

View File

@ -12,25 +12,24 @@ See also the dependencies in the [Tailscale CLI][].
- [filippo.io/edwards25519](https://pkg.go.dev/filippo.io/edwards25519) ([BSD-3-Clause](https://github.com/FiloSottile/edwards25519/blob/v1.1.0/LICENSE))
- [github.com/aws/aws-sdk-go-v2](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/v1.24.1/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/config](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/config) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/config/v1.26.6/config/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/credentials](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/credentials) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/credentials/v1.16.16/credentials/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/feature/ec2/imds](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/feature/ec2/imds) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/feature/ec2/imds/v1.14.11/feature/ec2/imds/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/internal/configsources](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/internal/configsources) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/internal/configsources/v1.2.10/internal/configsources/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/internal/endpoints/v2](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/internal/endpoints/v2) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/internal/endpoints/v2.5.10/internal/endpoints/v2/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/internal/ini](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/internal/ini) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/internal/ini/v1.7.3/internal/ini/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/internal/sync/singleflight](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/internal/sync/singleflight) ([BSD-3-Clause](https://github.com/aws/aws-sdk-go-v2/blob/v1.24.1/internal/sync/singleflight/LICENSE))
- [github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/service/internal/accept-encoding/v1.10.4/service/internal/accept-encoding/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/service/internal/presigned-url](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/internal/presigned-url) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/service/internal/presigned-url/v1.10.10/service/internal/presigned-url/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/v1.26.1/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/config](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/config) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/config/v1.27.11/config/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/credentials](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/credentials) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/credentials/v1.17.11/credentials/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/feature/ec2/imds](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/feature/ec2/imds) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/feature/ec2/imds/v1.16.1/feature/ec2/imds/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/internal/configsources](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/internal/configsources) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/internal/configsources/v1.3.5/internal/configsources/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/internal/endpoints/v2](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/internal/endpoints/v2) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/internal/endpoints/v2.6.5/internal/endpoints/v2/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/internal/ini](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/internal/ini) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/internal/ini/v1.8.0/internal/ini/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/internal/sync/singleflight](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/internal/sync/singleflight) ([BSD-3-Clause](https://github.com/aws/aws-sdk-go-v2/blob/v1.26.1/internal/sync/singleflight/LICENSE))
- [github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/service/internal/accept-encoding/v1.11.2/service/internal/accept-encoding/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/service/internal/presigned-url](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/internal/presigned-url) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/service/internal/presigned-url/v1.11.7/service/internal/presigned-url/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/service/ssm](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/ssm) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/service/ssm/v1.45.0/service/ssm/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/service/sso](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/sso) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/service/sso/v1.18.7/service/sso/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/service/ssooidc](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/ssooidc) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/service/ssooidc/v1.21.7/service/ssooidc/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/service/sts](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/sts) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/service/sts/v1.26.7/service/sts/LICENSE.txt))
- [github.com/aws/smithy-go](https://pkg.go.dev/github.com/aws/smithy-go) ([Apache-2.0](https://github.com/aws/smithy-go/blob/v1.19.0/LICENSE))
- [github.com/aws/smithy-go/internal/sync/singleflight](https://pkg.go.dev/github.com/aws/smithy-go/internal/sync/singleflight) ([BSD-3-Clause](https://github.com/aws/smithy-go/blob/v1.19.0/internal/sync/singleflight/LICENSE))
- [github.com/aws/aws-sdk-go-v2/service/sso](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/sso) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/service/sso/v1.20.5/service/sso/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/service/ssooidc](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/ssooidc) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/service/ssooidc/v1.23.4/service/ssooidc/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/service/sts](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/sts) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/service/sts/v1.28.6/service/sts/LICENSE.txt))
- [github.com/aws/smithy-go](https://pkg.go.dev/github.com/aws/smithy-go) ([Apache-2.0](https://github.com/aws/smithy-go/blob/v1.20.2/LICENSE))
- [github.com/aws/smithy-go/internal/sync/singleflight](https://pkg.go.dev/github.com/aws/smithy-go/internal/sync/singleflight) ([BSD-3-Clause](https://github.com/aws/smithy-go/blob/v1.20.2/internal/sync/singleflight/LICENSE))
- [github.com/bits-and-blooms/bitset](https://pkg.go.dev/github.com/bits-and-blooms/bitset) ([BSD-3-Clause](https://github.com/bits-and-blooms/bitset/blob/v1.13.0/LICENSE))
- [github.com/coreos/go-iptables/iptables](https://pkg.go.dev/github.com/coreos/go-iptables/iptables) ([Apache-2.0](https://github.com/coreos/go-iptables/blob/v0.7.0/LICENSE))
- [github.com/coreos/go-systemd/v22/dbus](https://pkg.go.dev/github.com/coreos/go-systemd/v22/dbus) ([Apache-2.0](https://github.com/coreos/go-systemd/blob/v22.5.0/LICENSE))
- [github.com/coreos/go-iptables/iptables](https://pkg.go.dev/github.com/coreos/go-iptables/iptables) ([Apache-2.0](https://github.com/coreos/go-iptables/blob/65c67c9f46e6/LICENSE))
- [github.com/digitalocean/go-smbios/smbios](https://pkg.go.dev/github.com/digitalocean/go-smbios/smbios) ([Apache-2.0](https://github.com/digitalocean/go-smbios/blob/390a4f403a8e/LICENSE.md))
- [github.com/djherbis/times](https://pkg.go.dev/github.com/djherbis/times) ([MIT](https://github.com/djherbis/times/blob/v1.6.0/LICENSE))
- [github.com/fxamacker/cbor/v2](https://pkg.go.dev/github.com/fxamacker/cbor/v2) ([MIT](https://github.com/fxamacker/cbor/blob/v2.5.0/LICENSE))
@ -65,7 +64,7 @@ See also the dependencies in the [Tailscale CLI][].
- [github.com/tailscale/hujson](https://pkg.go.dev/github.com/tailscale/hujson) ([BSD-3-Clause](https://github.com/tailscale/hujson/blob/20486734a56a/LICENSE))
- [github.com/tailscale/netlink](https://pkg.go.dev/github.com/tailscale/netlink) ([Apache-2.0](https://github.com/tailscale/netlink/blob/cabfb018fe85/LICENSE))
- [github.com/tailscale/peercred](https://pkg.go.dev/github.com/tailscale/peercred) ([BSD-3-Clause](https://github.com/tailscale/peercred/blob/b535050b2aa4/LICENSE))
- [github.com/tailscale/wireguard-go](https://pkg.go.dev/github.com/tailscale/wireguard-go) ([MIT](https://github.com/tailscale/wireguard-go/blob/64040e66467d/LICENSE))
- [github.com/tailscale/wireguard-go](https://pkg.go.dev/github.com/tailscale/wireguard-go) ([MIT](https://github.com/tailscale/wireguard-go/blob/03c5a0ccf754/LICENSE))
- [github.com/tailscale/xnet/webdav](https://pkg.go.dev/github.com/tailscale/xnet/webdav) ([BSD-3-Clause](https://github.com/tailscale/xnet/blob/62b9a7c569f9/LICENSE))
- [github.com/tcnksm/go-httpstat](https://pkg.go.dev/github.com/tcnksm/go-httpstat) ([MIT](https://github.com/tcnksm/go-httpstat/blob/v0.2.0/LICENSE))
- [github.com/u-root/uio](https://pkg.go.dev/github.com/u-root/uio) ([BSD-3-Clause](https://github.com/u-root/uio/blob/a3c409a6018e/LICENSE))
@ -74,12 +73,12 @@ See also the dependencies in the [Tailscale CLI][].
- [github.com/x448/float16](https://pkg.go.dev/github.com/x448/float16) ([MIT](https://github.com/x448/float16/blob/v0.8.4/LICENSE))
- [go4.org/mem](https://pkg.go.dev/go4.org/mem) ([Apache-2.0](https://github.com/go4org/mem/blob/4f986261bf13/LICENSE))
- [go4.org/netipx](https://pkg.go.dev/go4.org/netipx) ([BSD-3-Clause](https://github.com/go4org/netipx/blob/fdeea329fbba/LICENSE))
- [golang.org/x/crypto](https://pkg.go.dev/golang.org/x/crypto) ([BSD-3-Clause](https://cs.opensource.google/go/x/crypto/+/v0.21.0:LICENSE))
- [golang.org/x/exp](https://pkg.go.dev/golang.org/x/exp) ([BSD-3-Clause](https://cs.opensource.google/go/x/exp/+/1b970713:LICENSE))
- [golang.org/x/net](https://pkg.go.dev/golang.org/x/net) ([BSD-3-Clause](https://cs.opensource.google/go/x/net/+/v0.23.0:LICENSE))
- [golang.org/x/sync](https://pkg.go.dev/golang.org/x/sync) ([BSD-3-Clause](https://cs.opensource.google/go/x/sync/+/v0.6.0:LICENSE))
- [golang.org/x/sys](https://pkg.go.dev/golang.org/x/sys) ([BSD-3-Clause](https://cs.opensource.google/go/x/sys/+/v0.18.0:LICENSE))
- [golang.org/x/term](https://pkg.go.dev/golang.org/x/term) ([BSD-3-Clause](https://cs.opensource.google/go/x/term/+/v0.18.0:LICENSE))
- [golang.org/x/crypto](https://pkg.go.dev/golang.org/x/crypto) ([BSD-3-Clause](https://cs.opensource.google/go/x/crypto/+/v0.22.0:LICENSE))
- [golang.org/x/exp](https://pkg.go.dev/golang.org/x/exp) ([BSD-3-Clause](https://cs.opensource.google/go/x/exp/+/fe59bbe5:LICENSE))
- [golang.org/x/net](https://pkg.go.dev/golang.org/x/net) ([BSD-3-Clause](https://cs.opensource.google/go/x/net/+/v0.24.0:LICENSE))
- [golang.org/x/sync](https://pkg.go.dev/golang.org/x/sync) ([BSD-3-Clause](https://cs.opensource.google/go/x/sync/+/v0.7.0:LICENSE))
- [golang.org/x/sys](https://pkg.go.dev/golang.org/x/sys) ([BSD-3-Clause](https://cs.opensource.google/go/x/sys/+/v0.19.0:LICENSE))
- [golang.org/x/term](https://pkg.go.dev/golang.org/x/term) ([BSD-3-Clause](https://cs.opensource.google/go/x/term/+/v0.19.0:LICENSE))
- [golang.org/x/text](https://pkg.go.dev/golang.org/x/text) ([BSD-3-Clause](https://cs.opensource.google/go/x/text/+/v0.14.0:LICENSE))
- [golang.org/x/time/rate](https://pkg.go.dev/golang.org/x/time/rate) ([BSD-3-Clause](https://cs.opensource.google/go/x/time/+/v0.5.0:LICENSE))
- [gvisor.dev/gvisor/pkg](https://pkg.go.dev/gvisor.dev/gvisor/pkg) ([Apache-2.0](https://github.com/google/gvisor/blob/ee1e1f6070e3/LICENSE))
@ -92,5 +91,5 @@ See also the dependencies in the [Tailscale CLI][].
- [Sparkle](https://sparkle-project.org/) ([MIT](https://github.com/sparkle-project/Sparkle/blob/2.x/LICENSE))
- [wireguard-apple](https://git.zx2c4.com/wireguard-apple) ([MIT](https://git.zx2c4.com/wireguard-apple/tree/COPYING))
- [apple-oss-distributions/configd](https://github.com/apple-oss-distributions/configd) ([APSL](https://github.com/apple-oss-distributions/configd/blob/main/APPLE_LICENSE))
- [WebDAV-Swift](https://github.com/skjiisa/WebDAV-Swift) ([BSD](https://github.com/skjiisa/WebDAV-Swift#BSD-2-Clause-1-ov-file)
- [WebDAV-Swift](https://github.com/skjiisa/WebDAV-Swift) ([BSD](https://github.com/skjiisa/WebDAV-Swift#BSD-2-Clause-1-ov-file))
- [CachedAsyncImage](https://github.com/lorenzofiamingo/swiftui-cached-async-image) ([MIT](https://github.com/lorenzofiamingo/swiftui-cached-async-image/blob/main/LICENSE.md))


@ -34,8 +34,7 @@ Some packages may only be included on certain architectures or operating systems
- [github.com/aws/smithy-go](https://pkg.go.dev/github.com/aws/smithy-go) ([Apache-2.0](https://github.com/aws/smithy-go/blob/v1.19.0/LICENSE))
- [github.com/aws/smithy-go/internal/sync/singleflight](https://pkg.go.dev/github.com/aws/smithy-go/internal/sync/singleflight) ([BSD-3-Clause](https://github.com/aws/smithy-go/blob/v1.19.0/internal/sync/singleflight/LICENSE))
- [github.com/bits-and-blooms/bitset](https://pkg.go.dev/github.com/bits-and-blooms/bitset) ([BSD-3-Clause](https://github.com/bits-and-blooms/bitset/blob/v1.13.0/LICENSE))
- [github.com/coreos/go-iptables/iptables](https://pkg.go.dev/github.com/coreos/go-iptables/iptables) ([Apache-2.0](https://github.com/coreos/go-iptables/blob/v0.7.0/LICENSE))
- [github.com/coreos/go-systemd/v22/dbus](https://pkg.go.dev/github.com/coreos/go-systemd/v22/dbus) ([Apache-2.0](https://github.com/coreos/go-systemd/blob/v22.5.0/LICENSE))
- [github.com/coreos/go-iptables/iptables](https://pkg.go.dev/github.com/coreos/go-iptables/iptables) ([Apache-2.0](https://github.com/coreos/go-iptables/blob/65c67c9f46e6/LICENSE))
- [github.com/creack/pty](https://pkg.go.dev/github.com/creack/pty) ([MIT](https://github.com/creack/pty/blob/v1.1.21/LICENSE))
- [github.com/dblohm7/wingoes](https://pkg.go.dev/github.com/dblohm7/wingoes) ([BSD-3-Clause](https://github.com/dblohm7/wingoes/blob/a09d6be7affa/LICENSE))
- [github.com/digitalocean/go-smbios/smbios](https://pkg.go.dev/github.com/digitalocean/go-smbios/smbios) ([Apache-2.0](https://github.com/digitalocean/go-smbios/blob/390a4f403a8e/LICENSE.md))
@ -84,7 +83,7 @@ Some packages may only be included on certain architectures or operating systems
- [github.com/tailscale/peercred](https://pkg.go.dev/github.com/tailscale/peercred) ([BSD-3-Clause](https://github.com/tailscale/peercred/blob/b535050b2aa4/LICENSE))
- [github.com/tailscale/web-client-prebuilt](https://pkg.go.dev/github.com/tailscale/web-client-prebuilt) ([BSD-3-Clause](https://github.com/tailscale/web-client-prebuilt/blob/5db17b287bf1/LICENSE))
- [github.com/tailscale/wf](https://pkg.go.dev/github.com/tailscale/wf) ([BSD-3-Clause](https://github.com/tailscale/wf/blob/6fbb0a674ee6/LICENSE))
- [github.com/tailscale/wireguard-go](https://pkg.go.dev/github.com/tailscale/wireguard-go) ([MIT](https://github.com/tailscale/wireguard-go/blob/64040e66467d/LICENSE))
- [github.com/tailscale/wireguard-go](https://pkg.go.dev/github.com/tailscale/wireguard-go) ([MIT](https://github.com/tailscale/wireguard-go/blob/03c5a0ccf754/LICENSE))
- [github.com/tailscale/xnet/webdav](https://pkg.go.dev/github.com/tailscale/xnet/webdav) ([BSD-3-Clause](https://github.com/tailscale/xnet/blob/62b9a7c569f9/LICENSE))
- [github.com/tcnksm/go-httpstat](https://pkg.go.dev/github.com/tcnksm/go-httpstat) ([MIT](https://github.com/tcnksm/go-httpstat/blob/v0.2.0/LICENSE))
- [github.com/toqueteos/webbrowser](https://pkg.go.dev/github.com/toqueteos/webbrowser) ([MIT](https://github.com/toqueteos/webbrowser/blob/v1.2.0/LICENSE.md))


@ -13,23 +13,23 @@ Windows][]. See also the dependencies in the [Tailscale CLI][].
- [github.com/alexbrainman/sspi](https://pkg.go.dev/github.com/alexbrainman/sspi) ([BSD-3-Clause](https://github.com/alexbrainman/sspi/blob/1a75b4708caa/LICENSE))
- [github.com/apenwarr/fixconsole](https://pkg.go.dev/github.com/apenwarr/fixconsole) ([Apache-2.0](https://github.com/apenwarr/fixconsole/blob/5a9f6489cc29/LICENSE))
- [github.com/apenwarr/w32](https://pkg.go.dev/github.com/apenwarr/w32) ([BSD-3-Clause](https://github.com/apenwarr/w32/blob/aa00fece76ab/LICENSE))
- [github.com/aws/aws-sdk-go-v2](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/v1.24.1/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/config](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/config) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/config/v1.26.6/config/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/credentials](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/credentials) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/credentials/v1.16.16/credentials/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/feature/ec2/imds](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/feature/ec2/imds) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/feature/ec2/imds/v1.14.11/feature/ec2/imds/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/internal/configsources](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/internal/configsources) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/internal/configsources/v1.2.10/internal/configsources/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/internal/endpoints/v2](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/internal/endpoints/v2) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/internal/endpoints/v2.5.10/internal/endpoints/v2/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/internal/ini](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/internal/ini) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/internal/ini/v1.7.3/internal/ini/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/internal/sync/singleflight](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/internal/sync/singleflight) ([BSD-3-Clause](https://github.com/aws/aws-sdk-go-v2/blob/v1.24.1/internal/sync/singleflight/LICENSE))
- [github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/service/internal/accept-encoding/v1.10.4/service/internal/accept-encoding/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/service/internal/presigned-url](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/internal/presigned-url) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/service/internal/presigned-url/v1.10.10/service/internal/presigned-url/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/v1.26.1/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/config](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/config) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/config/v1.27.11/config/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/credentials](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/credentials) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/credentials/v1.17.11/credentials/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/feature/ec2/imds](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/feature/ec2/imds) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/feature/ec2/imds/v1.16.1/feature/ec2/imds/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/internal/configsources](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/internal/configsources) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/internal/configsources/v1.3.5/internal/configsources/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/internal/endpoints/v2](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/internal/endpoints/v2) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/internal/endpoints/v2.6.5/internal/endpoints/v2/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/internal/ini](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/internal/ini) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/internal/ini/v1.8.0/internal/ini/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/internal/sync/singleflight](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/internal/sync/singleflight) ([BSD-3-Clause](https://github.com/aws/aws-sdk-go-v2/blob/v1.26.1/internal/sync/singleflight/LICENSE))
- [github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/service/internal/accept-encoding/v1.11.2/service/internal/accept-encoding/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/service/internal/presigned-url](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/internal/presigned-url) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/service/internal/presigned-url/v1.11.7/service/internal/presigned-url/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/service/ssm](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/ssm) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/service/ssm/v1.45.0/service/ssm/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/service/sso](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/sso) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/service/sso/v1.18.7/service/sso/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/service/ssooidc](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/ssooidc) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/service/ssooidc/v1.21.7/service/ssooidc/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/service/sts](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/sts) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/service/sts/v1.26.7/service/sts/LICENSE.txt))
- [github.com/aws/smithy-go](https://pkg.go.dev/github.com/aws/smithy-go) ([Apache-2.0](https://github.com/aws/smithy-go/blob/v1.19.0/LICENSE))
- [github.com/aws/smithy-go/internal/sync/singleflight](https://pkg.go.dev/github.com/aws/smithy-go/internal/sync/singleflight) ([BSD-3-Clause](https://github.com/aws/smithy-go/blob/v1.19.0/internal/sync/singleflight/LICENSE))
- [github.com/coreos/go-iptables/iptables](https://pkg.go.dev/github.com/coreos/go-iptables/iptables) ([Apache-2.0](https://github.com/coreos/go-iptables/blob/v0.7.0/LICENSE))
- [github.com/aws/aws-sdk-go-v2/service/sso](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/sso) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/service/sso/v1.20.5/service/sso/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/service/ssooidc](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/ssooidc) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/service/ssooidc/v1.23.4/service/ssooidc/LICENSE.txt))
- [github.com/aws/aws-sdk-go-v2/service/sts](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/sts) ([Apache-2.0](https://github.com/aws/aws-sdk-go-v2/blob/service/sts/v1.28.6/service/sts/LICENSE.txt))
- [github.com/aws/smithy-go](https://pkg.go.dev/github.com/aws/smithy-go) ([Apache-2.0](https://github.com/aws/smithy-go/blob/v1.20.2/LICENSE))
- [github.com/aws/smithy-go/internal/sync/singleflight](https://pkg.go.dev/github.com/aws/smithy-go/internal/sync/singleflight) ([BSD-3-Clause](https://github.com/aws/smithy-go/blob/v1.20.2/internal/sync/singleflight/LICENSE))
- [github.com/coreos/go-iptables/iptables](https://pkg.go.dev/github.com/coreos/go-iptables/iptables) ([Apache-2.0](https://github.com/coreos/go-iptables/blob/65c67c9f46e6/LICENSE))
- [github.com/dblohm7/wingoes](https://pkg.go.dev/github.com/dblohm7/wingoes) ([BSD-3-Clause](https://github.com/dblohm7/wingoes/blob/b75a8a7d7eb0/LICENSE))
- [github.com/djherbis/times](https://pkg.go.dev/github.com/djherbis/times) ([MIT](https://github.com/djherbis/times/blob/v1.6.0/LICENSE))
- [github.com/fxamacker/cbor/v2](https://pkg.go.dev/github.com/fxamacker/cbor/v2) ([MIT](https://github.com/fxamacker/cbor/blob/v2.5.0/LICENSE))
@ -66,14 +66,14 @@ Windows][]. See also the dependencies in the [Tailscale CLI][].
- [github.com/x448/float16](https://pkg.go.dev/github.com/x448/float16) ([MIT](https://github.com/x448/float16/blob/v0.8.4/LICENSE))
- [go4.org/mem](https://pkg.go.dev/go4.org/mem) ([Apache-2.0](https://github.com/go4org/mem/blob/4f986261bf13/LICENSE))
- [go4.org/netipx](https://pkg.go.dev/go4.org/netipx) ([BSD-3-Clause](https://github.com/go4org/netipx/blob/fdeea329fbba/LICENSE))
- [golang.org/x/crypto](https://pkg.go.dev/golang.org/x/crypto) ([BSD-3-Clause](https://cs.opensource.google/go/x/crypto/+/v0.21.0:LICENSE))
- [golang.org/x/exp/constraints](https://pkg.go.dev/golang.org/x/exp/constraints) ([BSD-3-Clause](https://cs.opensource.google/go/x/exp/+/1b970713:LICENSE))
- [golang.org/x/crypto](https://pkg.go.dev/golang.org/x/crypto) ([BSD-3-Clause](https://cs.opensource.google/go/x/crypto/+/v0.22.0:LICENSE))
- [golang.org/x/exp/constraints](https://pkg.go.dev/golang.org/x/exp/constraints) ([BSD-3-Clause](https://cs.opensource.google/go/x/exp/+/fe59bbe5:LICENSE))
- [golang.org/x/image/bmp](https://pkg.go.dev/golang.org/x/image/bmp) ([BSD-3-Clause](https://cs.opensource.google/go/x/image/+/v0.15.0:LICENSE))
- [golang.org/x/mod](https://pkg.go.dev/golang.org/x/mod) ([BSD-3-Clause](https://cs.opensource.google/go/x/mod/+/v0.14.0:LICENSE))
- [golang.org/x/net](https://pkg.go.dev/golang.org/x/net) ([BSD-3-Clause](https://cs.opensource.google/go/x/net/+/v0.23.0:LICENSE))
- [golang.org/x/sync](https://pkg.go.dev/golang.org/x/sync) ([BSD-3-Clause](https://cs.opensource.google/go/x/sync/+/v0.6.0:LICENSE))
- [golang.org/x/sys](https://pkg.go.dev/golang.org/x/sys) ([BSD-3-Clause](https://cs.opensource.google/go/x/sys/+/v0.18.0:LICENSE))
- [golang.org/x/term](https://pkg.go.dev/golang.org/x/term) ([BSD-3-Clause](https://cs.opensource.google/go/x/term/+/v0.18.0:LICENSE))
- [golang.org/x/mod](https://pkg.go.dev/golang.org/x/mod) ([BSD-3-Clause](https://cs.opensource.google/go/x/mod/+/v0.17.0:LICENSE))
- [golang.org/x/net](https://pkg.go.dev/golang.org/x/net) ([BSD-3-Clause](https://cs.opensource.google/go/x/net/+/v0.24.0:LICENSE))
- [golang.org/x/sync](https://pkg.go.dev/golang.org/x/sync) ([BSD-3-Clause](https://cs.opensource.google/go/x/sync/+/v0.7.0:LICENSE))
- [golang.org/x/sys](https://pkg.go.dev/golang.org/x/sys) ([BSD-3-Clause](https://cs.opensource.google/go/x/sys/+/v0.19.0:LICENSE))
- [golang.org/x/term](https://pkg.go.dev/golang.org/x/term) ([BSD-3-Clause](https://cs.opensource.google/go/x/term/+/v0.19.0:LICENSE))
- [golang.org/x/text](https://pkg.go.dev/golang.org/x/text) ([BSD-3-Clause](https://cs.opensource.google/go/x/text/+/v0.14.0:LICENSE))
- [golang.zx2c4.com/wintun](https://pkg.go.dev/golang.zx2c4.com/wintun) ([MIT](https://git.zx2c4.com/wintun-go/tree/LICENSE?id=0fa3db229ce2))
- [golang.zx2c4.com/wireguard/windows/tunnel/winipcfg](https://pkg.go.dev/golang.zx2c4.com/wireguard/windows/tunnel/winipcfg) ([MIT](https://git.zx2c4.com/wireguard-windows/tree/COPYING?h=v0.5.3))


@ -859,7 +859,7 @@ func (f *forwarder) forwardWithDestChan(ctx context.Context, query packet, respo
}
select {
case <-ctx.Done():
return ctx.Err()
return fmt.Errorf("waiting to send NXDOMAIN: %w", ctx.Err())
case responseChan <- res:
return nil
}
@ -885,7 +885,7 @@ func (f *forwarder) forwardWithDestChan(ctx context.Context, query packet, respo
}
select {
case <-ctx.Done():
return ctx.Err()
return fmt.Errorf("waiting to send SERVFAIL: %w", ctx.Err())
case responseChan <- res:
return nil
}
@ -915,6 +915,7 @@ func (f *forwarder) forwardWithDestChan(ctx context.Context, query packet, respo
}
resb, err := f.send(ctx, fq, *rr)
if err != nil {
err = fmt.Errorf("resolving using %q: %w", rr.name.Addr, err)
select {
case errc <- err:
case <-ctx.Done():
@ -936,7 +937,7 @@ func (f *forwarder) forwardWithDestChan(ctx context.Context, query packet, respo
select {
case <-ctx.Done():
metricDNSFwdErrorContext.Add(1)
return ctx.Err()
return fmt.Errorf("waiting to send response: %w", ctx.Err())
case responseChan <- packet{v, query.family, query.addr}:
metricDNSFwdSuccess.Add(1)
return nil
@ -969,7 +970,16 @@ func (f *forwarder) forwardWithDestChan(ctx context.Context, query packet, respo
metricDNSFwdErrorContextGotError.Add(1)
return firstErr
}
return ctx.Err()
// If we haven't got an error or a successful response,
// include all resolvers in the error message so we can
// at least see what servers we're trying to
// query.
var resolverAddrs []string
for _, rr := range resolvers {
resolverAddrs = append(resolverAddrs, rr.name.Addr)
}
return fmt.Errorf("waiting for response or error from %v: %w", resolverAddrs, ctx.Err())
}
}
}

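The forwarder hunks above replace bare `ctx.Err()` returns with errors that say what the code was waiting for, and the final timeout now names every resolver that was tried. A minimal standalone sketch of the same pattern, assuming an illustrative response channel and resolver list rather than the forwarder's real types:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// waitForResponse mirrors the pattern above: instead of returning a bare
// ctx.Err(), wrap the error with what we were waiting on so timeouts are
// debuggable from logs alone.
func waitForResponse(ctx context.Context, responses <-chan string, resolvers []string) (string, error) {
	select {
	case r := <-responses:
		return r, nil
	case <-ctx.Done():
		return "", fmt.Errorf("waiting for response or error from %v: %w", resolvers, ctx.Err())
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()
	_, err := waitForResponse(ctx, make(chan string), []string{"8.8.8.8:53", "1.1.1.1:53"})
	fmt.Println(err) // waiting for response or error from [8.8.8.8:53 1.1.1.1:53]: context deadline exceeded
}
```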

@ -21,6 +21,7 @@ import (
"sort"
"strings"
"sync"
"syscall"
"time"
"github.com/tcnksm/go-httpstat"
@ -395,9 +396,7 @@ func makeProbePlan(dm *tailcfg.DERPMap, ifState *netmon.State, last *Report) (pl
have6if := ifState.HaveV6
have4if := ifState.HaveV4
plan = make(probePlan)
if !have4if && !have6if {
return plan
}
had4 := len(last.RegionV4Latency) > 0
had6 := len(last.RegionV6Latency) > 0
hadBoth := have6if && had4 && had6
@ -451,10 +450,10 @@ func makeProbePlan(dm *tailcfg.DERPMap, ifState *netmon.State, last *Report) (pl
if try > 1 {
delay += time.Duration(try) * 50 * time.Millisecond
}
if do4 {
if do4 || n.IsTestNode() {
p4 = append(p4, probe{delay: delay, node: n.Name, proto: probeIPv4})
}
if do6 {
if do6 || n.IsTestNode() {
p6 = append(p6, probe{delay: delay, node: n.Name, proto: probeIPv6})
}
}
@ -477,10 +476,10 @@ func makeProbePlanInitial(dm *tailcfg.DERPMap, ifState *netmon.State) (plan prob
for try := 0; try < 3; try++ {
n := reg.Nodes[try%len(reg.Nodes)]
delay := time.Duration(try) * defaultInitialRetransmitTime
if ifState.HaveV4 && nodeMight4(n) {
if ifState.HaveV4 && nodeMight4(n) || n.IsTestNode() {
p4 = append(p4, probe{delay: delay, node: n.Name, proto: probeIPv4})
}
if ifState.HaveV6 && nodeMight6(n) {
if ifState.HaveV6 && nodeMight6(n) || n.IsTestNode() {
p6 = append(p6, probe{delay: delay, node: n.Name, proto: probeIPv6})
}
}
@ -1176,7 +1175,7 @@ func (c *Client) measureHTTPSLatency(ctx context.Context, reg *tailcfg.DERPRegio
var ip netip.Addr
dc := derphttp.NewNetcheckClient(c.logf)
dc := derphttp.NewNetcheckClient(c.logf, c.NetMon)
defer dc.Close()
tlsConn, tcpConn, node, err := dc.DialRegionTLS(ctx, reg)
@ -1257,9 +1256,9 @@ func (c *Client) measureAllICMPLatency(ctx context.Context, rs *reportState, nee
for _, reg := range need {
go func(reg *tailcfg.DERPRegion) {
defer wg.Done()
if d, err := c.measureICMPLatency(ctx, reg, p); err != nil {
if d, ok, err := c.measureICMPLatency(ctx, reg, p); err != nil {
c.logf("[v1] measuring ICMP latency of %v (%d): %v", reg.RegionCode, reg.RegionID, err)
} else {
} else if ok {
c.logf("[v1] ICMP latency of %v (%d): %v", reg.RegionCode, reg.RegionID, d)
rs.mu.Lock()
if l, ok := rs.report.RegionLatency[reg.RegionID]; !ok {
@ -1281,9 +1280,9 @@ func (c *Client) measureAllICMPLatency(ctx context.Context, rs *reportState, nee
return nil
}
func (c *Client) measureICMPLatency(ctx context.Context, reg *tailcfg.DERPRegion, p *ping.Pinger) (time.Duration, error) {
func (c *Client) measureICMPLatency(ctx context.Context, reg *tailcfg.DERPRegion, p *ping.Pinger) (_ time.Duration, ok bool, err error) {
if len(reg.Nodes) == 0 {
return 0, fmt.Errorf("no nodes for region %d (%v)", reg.RegionID, reg.RegionCode)
return 0, false, fmt.Errorf("no nodes for region %d (%v)", reg.RegionID, reg.RegionCode)
}
// Try pinging the first node in the region
@ -1295,7 +1294,7 @@ func (c *Client) measureICMPLatency(ctx context.Context, reg *tailcfg.DERPRegion
// TODO(andrew-d): this is a bit ugly
nodeAddr := c.nodeAddr(ctx, node, probeIPv4)
if !nodeAddr.IsValid() {
return 0, fmt.Errorf("no address for node %v", node.Name)
return 0, false, fmt.Errorf("no address for node %v", node.Name)
}
addr := &net.IPAddr{
IP: net.IP(nodeAddr.Addr().AsSlice()),
@ -1304,7 +1303,14 @@ func (c *Client) measureICMPLatency(ctx context.Context, reg *tailcfg.DERPRegion
// Use the unique node.Name field as the packet data to reduce the
// likelihood that we get a mismatched echo response.
return p.Send(ctx, addr, []byte(node.Name))
d, err := p.Send(ctx, addr, []byte(node.Name))
if err != nil {
if errors.Is(err, syscall.EPERM) {
return 0, false, nil
}
return 0, false, err
}
return d, true, nil
}
func (c *Client) logConciseReport(r *Report, dm *tailcfg.DERPMap) {

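The `measureICMPLatency` change above adds an `ok` result so a permission failure (unprivileged processes commonly get `EPERM` when opening ICMP sockets) is treated as "no measurement" rather than an error. A self-contained sketch of that return shape; `send` here is a stand-in for the real pinger:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"syscall"
	"time"
)

// measureLatency returns (latency, ok, err). ok is false when no measurement
// was possible but the condition isn't worth surfacing as an error, e.g. when
// the OS denies ICMP with EPERM on an unprivileged socket.
func measureLatency(ctx context.Context, send func(context.Context) (time.Duration, error)) (time.Duration, bool, error) {
	d, err := send(ctx)
	if err != nil {
		if errors.Is(err, syscall.EPERM) {
			return 0, false, nil // skip quietly: we simply can't ping from here
		}
		return 0, false, err
	}
	return d, true, nil
}

func main() {
	// sendPing is a stand-in for a real ICMP pinger.
	sendPing := func(ctx context.Context) (time.Duration, error) { return 0, syscall.EPERM }
	d, ok, err := measureLatency(context.Background(), sendPing)
	fmt.Println(d, ok, err) // 0s false <nil>
}
```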

@ -23,6 +23,7 @@ import (
"tailscale.com/net/stun/stuntest"
"tailscale.com/tailcfg"
"tailscale.com/tstest"
"tailscale.com/tstest/nettest"
)
func TestHairpinSTUN(t *testing.T) {
@ -882,6 +883,8 @@ func TestSortRegions(t *testing.T) {
}
func TestNoCaptivePortalWhenUDP(t *testing.T) {
nettest.SkipIfNoNetwork(t) // empirically. not sure why.
// Override noRedirectClient to handle the /generate_204 endpoint
var generate204Called atomic.Bool
tr := RoundTripFunc(func(req *http.Request) *http.Response {
@ -934,6 +937,7 @@ func (f RoundTripFunc) RoundTrip(req *http.Request) (*http.Response, error) {
}
func TestNodeAddrResolve(t *testing.T) {
nettest.SkipIfNoNetwork(t)
c := &Client{
Logf: t.Logf,
UseDNSCache: true,

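The tests above gain `nettest.SkipIfNoNetwork(t)` so they skip instead of failing when there is no outbound connectivity. The helper's body is not part of this diff; a plausible minimal version, purely an assumption about its behavior, could look like:

```go
package nettestsketch

import (
	"net"
	"testing"
	"time"
)

// SkipIfNoNetwork skips t when we can't open an outbound connection.
// This is a guess at the helper's shape; the real tailscale.com/tstest/nettest
// implementation may use different probes or environment checks.
func SkipIfNoNetwork(t *testing.T) {
	t.Helper()
	c, err := net.DialTimeout("tcp", "8.8.8.8:53", 2*time.Second)
	if err != nil {
		t.Skip("no network connectivity; skipping:", err)
	}
	c.Close()
}
```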

@ -1,11 +1,11 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
// Common code for FreeBSD and Darwin. This might also work on other
// Common code for FreeBSD. This might also work on other
// BSD systems (e.g. OpenBSD) but has not been tested.
// Not used on iOS. See defaultroute_ios.go.
// Not used on iOS or macOS. See defaultroute_darwin.go.
//go:build !ios && (darwin || freebsd)
//go:build freebsd
package netmon


@ -1,12 +1,13 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build ios
//go:build darwin || ios
package netmon
import (
"log"
"net"
"tailscale.com/syncs"
)
@ -22,12 +23,12 @@ func UpdateLastKnownDefaultRouteInterface(ifName string) {
return
}
if old := lastKnownDefaultRouteIfName.Swap(ifName); old != ifName {
log.Printf("defaultroute_ios: update from Swift, ifName = %s (was %s)", ifName, old)
log.Printf("defaultroute_darwin: update from Swift, ifName = %s (was %s)", ifName, old)
}
}
func defaultRoute() (d DefaultRouteDetails, err error) {
// We cannot rely on the delegated interface data on iOS. The NetworkExtension framework
// We cannot rely on the delegated interface data on darwin. The NetworkExtension framework
// seems to set the delegate interface only once, upon the *creation* of the VPN tunnel.
// If a network transition (e.g. from Wi-Fi to Cellular) happens while the tunnel is
// connected, it will be ignored and we will still try to set Wi-Fi as the default route
@ -37,16 +38,13 @@ func defaultRoute() (d DefaultRouteDetails, err error) {
// the interface name of the first currently satisfied network path. Our Swift code will
// call into `UpdateLastKnownDefaultRouteInterface`, so we can rely on that when it is set.
//
// If for any reason the Swift machinery didn't work and we don't get any updates, here
// we also have some fallback logic: we try finding a hardcoded Wi-Fi interface called en0.
// If en0 is down, we fall back to cellular (pdp_ip0) as a last resort. This doesn't handle
// all edge cases like USB-Ethernet adapters or multiple Ethernet interfaces, but is good
// enough to ensure connectivity isn't broken.
// If for any reason the Swift machinery didn't work and we don't get any updates, we will
// fall back to the BSD logic.
// Start by getting all available interfaces.
interfaces, err := netInterfaces()
if err != nil {
log.Printf("defaultroute_ios: could not get interfaces: %v", err)
log.Printf("defaultroute_darwin: could not get interfaces: %v", err)
return d, ErrNoGatewayIndexFound
}
@ -57,13 +55,13 @@ func defaultRoute() (d DefaultRouteDetails, err error) {
}
if !ifc.IsUp() {
log.Printf("defaultroute_ios: %s is down", name)
log.Printf("defaultroute_darwin: %s is down", name)
return nil
}
addrs, _ := ifc.Addrs()
if len(addrs) == 0 {
log.Printf("defaultroute_ios: %s has no addresses", name)
log.Printf("defaultroute_darwin: %s has no addresses", name)
return nil
}
return &ifc
@ -83,26 +81,16 @@ func defaultRoute() (d DefaultRouteDetails, err error) {
}
}
// Start of our fallback logic if Swift didn't give us an interface name, or gave us an invalid
// one.
// We start by attempting to use the Wi-Fi interface, which on iPhone is always called en0.
enZeroIf := getInterfaceByName("en0")
if enZeroIf != nil {
log.Println("defaultroute_ios: using en0 (fallback)")
d.InterfaceName = enZeroIf.Name
d.InterfaceIndex = enZeroIf.Index
return d, nil
// Fallback to the BSD logic
idx, err := DefaultRouteInterfaceIndex()
if err != nil {
return d, err
}
// Did it not work? Let's try with Cellular (pdp_ip0).
cellIf := getInterfaceByName("pdp_ip0")
if cellIf != nil {
log.Println("defaultroute_ios: using pdp_ip0 (fallback)")
d.InterfaceName = cellIf.Name
d.InterfaceIndex = cellIf.Index
return d, nil
iface, err := net.InterfaceByIndex(idx)
if err != nil {
return d, err
}
log.Println("defaultroute_ios: no running interfaces available")
return d, ErrNoGatewayIndexFound
d.InterfaceName = iface.Name
d.InterfaceIndex = idx
return d, nil
}

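The rewritten fallback above no longer hardcodes `en0`/`pdp_ip0`; when Swift has not reported an interface, it asks the routing table for the default-route interface index and resolves that index with the standard library. A small sketch of the resolution step, assuming the index comes from something like `DefaultRouteInterfaceIndex`:

```go
package main

import (
	"fmt"
	"net"
)

// interfaceNameForIndex resolves a routing-table interface index to its name,
// mirroring the fallback path above (index -> net.InterfaceByIndex -> name).
func interfaceNameForIndex(idx int) (string, error) {
	iface, err := net.InterfaceByIndex(idx)
	if err != nil {
		return "", fmt.Errorf("resolving interface index %d: %w", idx, err)
	}
	return iface.Name, nil
}

func main() {
	// Index 1 is typically the loopback interface; in the real code the index
	// would come from a routing-table query, not a constant.
	name, err := interfaceNameForIndex(1)
	fmt.Println(name, err)
}
```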

@ -118,13 +118,14 @@ func DERPMapOf(stun ...string) *tailcfg.DERPMap {
ipv4 = "none"
}
node := &tailcfg.DERPNode{
Name: fmt.Sprint(regionID) + "a",
RegionID: regionID,
HostName: fmt.Sprintf("d%d%s", regionID, tailcfg.DotInvalid),
IPv4: ipv4,
IPv6: ipv6,
STUNPort: port,
STUNOnly: true,
Name: fmt.Sprint(regionID) + "a",
RegionID: regionID,
HostName: fmt.Sprintf("d%d%s", regionID, tailcfg.DotInvalid),
IPv4: ipv4,
IPv6: ipv6,
STUNPort: port,
STUNOnly: true,
STUNTestIP: host,
}
m.Regions[regionID] = &tailcfg.DERPRegion{
RegionID: regionID,


@ -181,7 +181,7 @@ func NewContainsIPFunc(addrs views.Slice[netip.Prefix]) func(ip netip.Addr) bool
// If any addr is more than a single IP, then just do the slow
// linear thing until
// https://github.com/inetaf/netaddr/issues/139 is done.
if views.SliceContainsFunc(addrs, func(p netip.Prefix) bool { return !p.IsSingleIP() }) {
if addrs.ContainsFunc(func(p netip.Prefix) bool { return !p.IsSingleIP() }) {
acopy := addrs.AsSlice()
return func(ip netip.Addr) bool {
for _, a := range acopy {

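The hunk above swaps the free function `views.SliceContainsFunc` for the view's own `ContainsFunc` method; the underlying check is just a predicate scan. A standalone equivalent over a plain slice, using the standard library's `slices.ContainsFunc` and `net/netip` (illustrative only, not the real `views` API):

```go
package main

import (
	"fmt"
	"net/netip"
	"slices"
)

// containsIPFunc returns a func reporting whether ip falls inside any prefix.
// If every prefix were a single IP, a set could give O(1) lookups; here we
// only show the "slow linear thing" the comment above refers to.
func containsIPFunc(prefixes []netip.Prefix) func(netip.Addr) bool {
	// Same shape as the check in the diff: is any prefix wider than one IP?
	hasWide := slices.ContainsFunc(prefixes, func(p netip.Prefix) bool { return !p.IsSingleIP() })
	_ = hasWide // a real implementation would pick a faster strategy when !hasWide
	cp := slices.Clone(prefixes)
	return func(ip netip.Addr) bool {
		for _, p := range cp {
			if p.Contains(ip) {
				return true
			}
		}
		return false
	}
}

func main() {
	f := containsIPFunc([]netip.Prefix{netip.MustParsePrefix("100.64.0.0/10")})
	fmt.Println(f(netip.MustParseAddr("100.100.1.1"))) // true
	fmt.Println(f(netip.MustParseAddr("8.8.8.8")))     // false
}
```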

@ -53,6 +53,9 @@ func New(logf logger.Logf, tunName string) (tun.Device, string, error) {
dev.Close()
return nil, "", err
}
if err := setLinkFeatures(dev); err != nil {
logf("setting link features: %v", err)
}
if err := setLinkAttrs(dev); err != nil {
logf("setting link attributes: %v", err)
}


@ -0,0 +1,19 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package tstun
import (
"github.com/tailscale/wireguard-go/tun"
"tailscale.com/envknob"
)
func setLinkFeatures(dev tun.Device) error {
if envknob.Bool("TS_TUN_DISABLE_UDP_GRO") {
linuxDev, ok := dev.(tun.LinuxDevice)
if ok {
linuxDev.DisableUDPGRO()
}
}
return nil
}

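The new Linux-only `setLinkFeatures` above gates UDP GRO behind an environment knob plus a type assertion, while the `!linux` stub below keeps other platforms building unchanged. A generic sketch of that gate, with `os.Getenv` standing in for `envknob.Bool` and a hypothetical device interface:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// groDisabler is a stand-in for tun.LinuxDevice: only devices that actually
// support toggling UDP GRO implement it.
type groDisabler interface {
	DisableUDPGRO()
}

// maybeDisableUDPGRO mirrors the setLinkFeatures gate above: consult an env
// knob, then type-assert so the call is a no-op on devices without the method.
func maybeDisableUDPGRO(dev any) {
	if on, _ := strconv.ParseBool(os.Getenv("TS_TUN_DISABLE_UDP_GRO")); !on {
		return
	}
	if d, ok := dev.(groDisabler); ok {
		d.DisableUDPGRO()
	}
}

type fakeDev struct{ disabled bool }

func (f *fakeDev) DisableUDPGRO() { f.disabled = true }

func main() {
	os.Setenv("TS_TUN_DISABLE_UDP_GRO", "true")
	d := &fakeDev{}
	maybeDisableUDPGRO(d)
	fmt.Println(d.disabled) // true
}
```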

@ -0,0 +1,14 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !linux
package tstun
import (
"github.com/tailscale/wireguard-go/tun"
)
func setLinkFeatures(dev tun.Device) error {
return nil
}


@ -106,8 +106,8 @@ type Wrapper struct {
// timeNow, if non-nil, will be used to obtain the current time.
timeNow func() time.Time
// natConfig stores the current NAT configuration.
natConfig atomic.Pointer[natConfig]
// peerConfig stores the current NAT configuration.
peerConfig atomic.Pointer[peerConfig]
// vectorBuffer stores the oldest unconsumed packet vector from tdev. It is
// allocated in wrap() and the underlying arrays should never grow.
@ -505,9 +505,9 @@ func (t *Wrapper) sendVectorOutbound(r tunVectorReadResult) {
// snat does SNAT on p if the destination address requires a different source address.
func (t *Wrapper) snat(p *packet.Parsed) {
nc := t.natConfig.Load()
pc := t.peerConfig.Load()
oldSrc := p.Src.Addr()
newSrc := nc.selectSrcIP(oldSrc, p.Dst.Addr())
newSrc := pc.selectSrcIP(oldSrc, p.Dst.Addr())
if oldSrc != newSrc {
checksum.UpdateSrcAddr(p, newSrc)
}
@ -515,9 +515,9 @@ func (t *Wrapper) snat(p *packet.Parsed) {
// dnat does destination NAT on p.
func (t *Wrapper) dnat(p *packet.Parsed) {
nc := t.natConfig.Load()
pc := t.peerConfig.Load()
oldDst := p.Dst.Addr()
newDst := nc.mapDstIP(oldDst)
newDst := pc.mapDstIP(oldDst)
if newDst != oldDst {
checksum.UpdateDstAddr(p, newDst)
}
@ -545,66 +545,19 @@ func findV6(addrs []netip.Prefix) netip.Addr {
return netip.Addr{}
}
// natConfig is the configuration for NAT.
// peerConfig is the configuration for different peers.
// It should be treated as immutable.
//
// The nil value is a valid configuration.
type natConfig struct {
v4, v6 *natFamilyConfig
}
func (c *natConfig) String() string {
if c == nil {
return "<nil>"
}
var b strings.Builder
b.WriteString("natConfig{")
fmt.Fprintf(&b, "v4: %v, ", c.v4)
fmt.Fprintf(&b, "v6: %v", c.v6)
b.WriteString("}")
return b.String()
}
// mapDstIP returns the destination IP to use for a packet to dst.
// If dst is not one of the listen addresses, it is returned as-is,
// otherwise the native address is returned.
func (c *natConfig) mapDstIP(oldDst netip.Addr) netip.Addr {
if c == nil {
return oldDst
}
if oldDst.Is4() {
return c.v4.mapDstIP(oldDst)
}
if oldDst.Is6() {
return c.v6.mapDstIP(oldDst)
}
return oldDst
}
// selectSrcIP returns the source IP to use for a packet to dst.
// If the packet is not from the native address, it is returned as-is.
func (c *natConfig) selectSrcIP(oldSrc, dst netip.Addr) netip.Addr {
if c == nil {
return oldSrc
}
if oldSrc.Is4() {
return c.v4.selectSrcIP(oldSrc, dst)
}
if oldSrc.Is6() {
return c.v6.selectSrcIP(oldSrc, dst)
}
return oldSrc
}
// natFamilyConfig is the NAT configuration for a particular
// address family.
// It should be treated as immutable.
//
// The nil value is a valid configuration.
type natFamilyConfig struct {
// nativeAddr is the Tailscale Address of the current node.
nativeAddr netip.Addr
type peerConfig struct {
// nativeAddr4 and nativeAddr6 are the IPv4/IPv6 Tailscale Addresses of
// the current node.
//
// These are implicitly used as the address to rewrite to in the DNAT
// path (as configured by listenAddrs, below). The IPv4 address will be
// used if the inbound packet is IPv4, and the IPv6 address if the
// inbound packet is IPv6.
nativeAddr4, nativeAddr6 netip.Addr
// listenAddrs is the set of addresses that should be
// mapped to the native address. These are the addresses that
@ -620,13 +573,14 @@ type natFamilyConfig struct {
masqAddrCounts map[netip.Addr]int
}
func (c *natFamilyConfig) String() string {
func (c *peerConfig) String() string {
if c == nil {
return "natFamilyConfig(nil)"
return "peerConfig(nil)"
}
var b strings.Builder
b.WriteString("natFamilyConfig{")
fmt.Fprintf(&b, "nativeAddr: %v, ", c.nativeAddr)
b.WriteString("peerConfig{")
fmt.Fprintf(&b, "nativeAddr4: %v, ", c.nativeAddr4)
fmt.Fprintf(&b, "nativeAddr6: %v, ", c.nativeAddr6)
fmt.Fprint(&b, "listenAddrs: [")
i := 0
@ -656,23 +610,31 @@ func (c *natFamilyConfig) String() string {
// mapDstIP returns the destination IP to use for a packet to dst.
// If dst is not one of the listen addresses, it is returned as-is,
// otherwise the native address is returned.
func (c *natFamilyConfig) mapDstIP(oldDst netip.Addr) netip.Addr {
func (c *peerConfig) mapDstIP(oldDst netip.Addr) netip.Addr {
if c == nil {
return oldDst
}
if _, ok := c.listenAddrs.GetOk(oldDst); ok {
return c.nativeAddr
if oldDst.Is4() && c.nativeAddr4.IsValid() {
return c.nativeAddr4
}
if oldDst.Is6() && c.nativeAddr6.IsValid() {
return c.nativeAddr6
}
}
return oldDst
}
// selectSrcIP returns the source IP to use for a packet to dst.
// If the packet is not from the native address, it is returned as-is.
func (c *natFamilyConfig) selectSrcIP(oldSrc, dst netip.Addr) netip.Addr {
func (c *peerConfig) selectSrcIP(oldSrc, dst netip.Addr) netip.Addr {
if c == nil {
return oldSrc
}
if oldSrc != c.nativeAddr {
if oldSrc.Is4() && oldSrc != c.nativeAddr4 {
return oldSrc
}
if oldSrc.Is6() && oldSrc != c.nativeAddr6 {
return oldSrc
}
eip, ok := c.dstMasqAddrs.Get(dst)
@ -682,22 +644,16 @@ func (c *natFamilyConfig) selectSrcIP(oldSrc, dst netip.Addr) netip.Addr {
return eip
}
// natConfigFromWGConfig generates a natFamilyConfig from nm,
// for the indicated address family.
// If NAT is not required for that address family, it returns nil.
func natConfigFromWGConfig(wcfg *wgcfg.Config, addrFam ipproto.Version) *natFamilyConfig {
// peerConfigFromWGConfig generates a peerConfig from nm. If NAT is not required,
// and no additional configuration is present, it returns nil.
func peerConfigFromWGConfig(wcfg *wgcfg.Config) *peerConfig {
if wcfg == nil {
return nil
}
var nativeAddr netip.Addr
switch addrFam {
case ipproto.Version4:
nativeAddr = findV4(wcfg.Addresses)
case ipproto.Version6:
nativeAddr = findV6(wcfg.Addresses)
}
if !nativeAddr.IsValid() {
nativeAddr4 := findV4(wcfg.Addresses)
nativeAddr6 := findV6(wcfg.Addresses)
if !nativeAddr4.IsValid() && !nativeAddr6.IsValid() {
return nil
}
@ -714,10 +670,10 @@ func natConfigFromWGConfig(wcfg *wgcfg.Config, addrFam ipproto.Version) *natFami
for _, p := range wcfg.Peers {
isExitNode := slices.Contains(p.AllowedIPs, tsaddr.AllIPv4()) || slices.Contains(p.AllowedIPs, tsaddr.AllIPv6())
if isExitNode {
hasMasqAddrsForFamily := false ||
(addrFam == ipproto.Version4 && p.V4MasqAddr != nil && p.V4MasqAddr.IsValid()) ||
(addrFam == ipproto.Version6 && p.V6MasqAddr != nil && p.V6MasqAddr.IsValid())
if hasMasqAddrsForFamily {
hasMasqAddr := false ||
(p.V4MasqAddr != nil && p.V4MasqAddr.IsValid()) ||
(p.V6MasqAddr != nil && p.V6MasqAddr.IsValid())
if hasMasqAddr {
exitNodeRequiresMasq = true
}
break
@ -725,29 +681,56 @@ func natConfigFromWGConfig(wcfg *wgcfg.Config, addrFam ipproto.Version) *natFami
}
for i := range wcfg.Peers {
p := &wcfg.Peers[i]
var addrToUse netip.Addr
if addrFam == ipproto.Version4 && p.V4MasqAddr != nil && p.V4MasqAddr.IsValid() {
addrToUse = *p.V4MasqAddr
mak.Set(&listenAddrs, addrToUse, struct{}{})
} else if addrFam == ipproto.Version6 && p.V6MasqAddr != nil && p.V6MasqAddr.IsValid() {
addrToUse = *p.V6MasqAddr
mak.Set(&listenAddrs, addrToUse, struct{}{})
} else if exitNodeRequiresMasq {
addrToUse = nativeAddr
} else {
// Build a routing table that configures DNAT (i.e. changing
// the V4MasqAddr/V6MasqAddr for a given peer to the current
// peer's v4/v6 IP).
var addrToUse4, addrToUse6 netip.Addr
if p.V4MasqAddr != nil && p.V4MasqAddr.IsValid() {
addrToUse4 = *p.V4MasqAddr
mak.Set(&listenAddrs, addrToUse4, struct{}{})
masqAddrCounts[addrToUse4]++
}
if p.V6MasqAddr != nil && p.V6MasqAddr.IsValid() {
addrToUse6 = *p.V6MasqAddr
mak.Set(&listenAddrs, addrToUse6, struct{}{})
masqAddrCounts[addrToUse6]++
}
// If the exit node requires masquerading, set the masquerade
// addresses to our native addresses.
if exitNodeRequiresMasq {
if !addrToUse4.IsValid() && nativeAddr4.IsValid() {
addrToUse4 = nativeAddr4
}
if !addrToUse6.IsValid() && nativeAddr6.IsValid() {
addrToUse6 = nativeAddr6
}
}
if !addrToUse4.IsValid() && !addrToUse6.IsValid() {
// NAT not required for this peer.
continue
}
masqAddrCounts[addrToUse]++
// Build the SNAT table that maps each AllowedIP to the
// masquerade address.
for _, ip := range p.AllowedIPs {
rt.Insert(ip, addrToUse)
is4 := ip.Addr().Is4()
if is4 && addrToUse4.IsValid() {
rt.Insert(ip, addrToUse4)
}
if !is4 && addrToUse6.IsValid() {
rt.Insert(ip, addrToUse6)
}
}
}
if len(listenAddrs) == 0 && len(masqAddrCounts) == 0 {
return nil
}
return &natFamilyConfig{
nativeAddr: nativeAddr,
return &peerConfig{
nativeAddr4: nativeAddr4,
nativeAddr6: nativeAddr6,
listenAddrs: views.MapOf(listenAddrs),
dstMasqAddrs: &rt,
masqAddrCounts: masqAddrCounts,
@ -756,15 +739,11 @@ func natConfigFromWGConfig(wcfg *wgcfg.Config, addrFam ipproto.Version) *natFami
// SetWGConfig is called when a new wireguard config (derived from the NetworkMap) is received.
func (t *Wrapper) SetWGConfig(wcfg *wgcfg.Config) {
v4, v6 := natConfigFromWGConfig(wcfg, ipproto.Version4), natConfigFromWGConfig(wcfg, ipproto.Version6)
var cfg *natConfig
if v4 != nil || v6 != nil {
cfg = &natConfig{v4: v4, v6: v6}
}
cfg := peerConfigFromWGConfig(wcfg)
old := t.natConfig.Swap(cfg)
old := t.peerConfig.Swap(cfg)
if !reflect.DeepEqual(old, cfg) {
t.logf("nat config: %v", cfg)
t.logf("peer config: %v", cfg)
}
}

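The refactor above folds the per-family `natFamilyConfig` pair into a single `peerConfig` holding both native addresses and one destination-keyed masquerade table. A simplified, standalone sketch of the two lookups it performs (DNAT of inbound destinations, SNAT of outbound sources), using a plain map keyed by exact destination address instead of the real prefix routing table:

```go
package main

import (
	"fmt"
	"net/netip"
)

// peerCfg is a toy version of the peerConfig above: both native addresses,
// the set of masquerade addresses we answer on, and a dst -> masquerade map
// (the real code routes on AllowedIPs prefixes rather than exact addresses).
type peerCfg struct {
	nativeAddr4, nativeAddr6 netip.Addr
	listenAddrs              map[netip.Addr]bool
	dstMasqAddrs             map[netip.Addr]netip.Addr
}

// mapDstIP: inbound DNAT. Packets addressed to a masquerade address are
// rewritten to our native address of the matching family.
func (c *peerCfg) mapDstIP(oldDst netip.Addr) netip.Addr {
	if c == nil || !c.listenAddrs[oldDst] {
		return oldDst
	}
	if oldDst.Is4() && c.nativeAddr4.IsValid() {
		return c.nativeAddr4
	}
	if oldDst.Is6() && c.nativeAddr6.IsValid() {
		return c.nativeAddr6
	}
	return oldDst
}

// selectSrcIP: outbound SNAT. Packets from our native address to a peer that
// requires masquerading get the per-destination masquerade source.
func (c *peerCfg) selectSrcIP(oldSrc, dst netip.Addr) netip.Addr {
	if c == nil || (oldSrc != c.nativeAddr4 && oldSrc != c.nativeAddr6) {
		return oldSrc
	}
	if masq, ok := c.dstMasqAddrs[dst]; ok {
		return masq
	}
	return oldSrc
}

func main() {
	cfg := &peerCfg{
		nativeAddr4:  netip.MustParseAddr("100.64.0.1"),
		listenAddrs:  map[netip.Addr]bool{netip.MustParseAddr("100.99.0.1"): true},
		dstMasqAddrs: map[netip.Addr]netip.Addr{netip.MustParseAddr("100.64.0.2"): netip.MustParseAddr("100.99.0.1")},
	}
	fmt.Println(cfg.selectSrcIP(netip.MustParseAddr("100.64.0.1"), netip.MustParseAddr("100.64.0.2"))) // 100.99.0.1
	fmt.Println(cfg.mapDstIP(netip.MustParseAddr("100.99.0.1")))                                       // 100.64.0.1
}
```

The real code also reference-counts each masquerade address and uses a prefix routing table for the SNAT lookup; the map here only illustrates the direction of the two rewrites.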

@ -601,7 +601,9 @@ func TestFilterDiscoLoop(t *testing.T) {
}
}
func TestNATCfg(t *testing.T) {
// TODO(andrew-d): refactor this test to no longer use addrFam, after #11945
// removed it in peerConfigFromWGConfig
func TestPeerCfg_NAT(t *testing.T) {
node := func(ip, masqIP netip.Addr, otherAllowedIPs ...netip.Prefix) wgcfg.Peer {
p := wgcfg.Peer{
PublicKey: key.NewNode().Public(),
@ -800,19 +802,19 @@ func TestNATCfg(t *testing.T) {
for _, tc := range tests {
t.Run(fmt.Sprintf("%v/%v", addrFam, tc.name), func(t *testing.T) {
ncfg := natConfigFromWGConfig(tc.wcfg, addrFam)
pcfg := peerConfigFromWGConfig(tc.wcfg)
for peer, want := range tc.snatMap {
if got := ncfg.selectSrcIP(selfNativeIP, peer); got != want {
if got := pcfg.selectSrcIP(selfNativeIP, peer); got != want {
t.Errorf("selectSrcIP[%v]: got %v; want %v", peer, got, want)
}
}
for dstIP, want := range tc.dnatMap {
if got := ncfg.mapDstIP(dstIP); got != want {
if got := pcfg.mapDstIP(dstIP); got != want {
t.Errorf("mapDstIP[%v]: got %v; want %v", dstIP, got, want)
}
}
if t.Failed() {
t.Logf("%v", ncfg)
t.Logf("%v", pcfg)
}
})
}

Some files were not shown because too many files have changed in this diff.