Compare commits

...

541 Commits

Author SHA1 Message Date
Irbe Krumina 7ef2f72135
util/linuxfw: fix IPv6 availability check for nftables (#12009)
* util/linuxfw: fix IPv6 NAT availability check for nftables

When running firewall in nftables mode,
there is no need for a separate NAT availability check
(unlike with iptables, there are no hosts that support nftables but not IPv6 NAT - see tailscale/tailscale#11353).
This change fixes a firewall NAT availability check that was using the no-longer-set ipv6NATAvailable field
by removing the field and using a method that, for nftables, just checks that IPv6 is available.

Updates tailscale/tailscale#12008

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-05-14 08:51:53 +01:00
Brad Fitzpatrick 8aa5c3534d ipn/ipnlocal: simplify authURL vs authURLSticky, remove interact field
The previous LocalBackend & CLI 'up' changes improved some stuff, but
might've been too aggressive in some edge cases.

This simplifies the authURL vs authURLSticky distinction and removes
the interact field, which seemed to just be about duplicate URL
suppression in IPN bus, back from when the IPN bus was a single client
at a time. This moves that suppression to a different spot.

Fixes #12119
Updates #12028
Updates #12042

Change-Id: I1f8800b1e82ccc1c8a0d7abba559e7404ddf41e4
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-05-13 17:25:25 -07:00
Parker Higgins 7b3e30f391
words: add some fruit with scales (#8460)
Signed-off-by: Parker Higgins <parker@tailscale.com>
2024-05-13 09:26:24 -07:00
Maisem Ali 79b2d425cf types/views: move AsMap to Map from *Map
This was a typo in 2e19790f61.
It should have been on `Map` and not on `*Map` as otherwise
it doesn't allow for chaining like `someView.SomeMap().AsMap()`
and requires first assigning it to a variable.
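
A minimal sketch of why the receiver matters, using a hypothetical view type rather than the real views package: Go won't let you call a pointer-receiver method on a function's (non-addressable) return value, so a value receiver is what makes chaining work.

    package main

    import "fmt"

    // Map is a hypothetical read-only wrapper, standing in for views.Map.
    type Map struct{ m map[string]int }

    // AsMap has a value receiver, so it can be chained off a call expression.
    func (v Map) AsMap() map[string]int {
        out := make(map[string]int, len(v.m))
        for k, val := range v.m {
            out[k] = val
        }
        return out
    }

    func someMap() Map { return Map{m: map[string]int{"a": 1}} }

    func main() {
        fmt.Println(someMap().AsMap()) // chaining works with a value receiver

        // If AsMap were declared on *Map, the line above would not compile
        // (a call's return value is not addressable), forcing:
        //   v := someMap()
        //   _ = v.AsMap()
    }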

Updates #typo

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2024-05-11 08:39:14 -07:00
Charlotte Brandhorst-Satzkorn fc1ae97e10
words: I had a feline we were missing some words (#12098)
pspspsps

Updates #tailscale/corp#14698

Signed-off-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
2024-05-10 15:41:23 -07:00
Maisem Ali 486a423716 tsnet: split user facing and backend logging
This adds a new `UserLogf` field to the `Server` struct.
When set, any logs generated by Server are logged using
`UserLogf`, while all spammy backend logs are logged to `Logf`.

If `UserLogf` is unset, we default to `log.Printf`, and
if `Logf` is unset we discard all the spammy logs.
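
A rough sketch of how a caller might use this split, assuming the `UserLogf` field described above alongside the existing `Logf` field on `tsnet.Server`:

    package main

    import (
        "log"

        "tailscale.com/tsnet"
    )

    func main() {
        srv := &tsnet.Server{
            Hostname: "example",
            UserLogf: log.Printf, // user-facing messages (also the default)
            // Logf left nil: spammy backend logs are discarded, per the
            // defaults described in this change.
        }
        defer srv.Close()

        if err := srv.Start(); err != nil {
            log.Fatal(err)
        }
    }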

Fixes #12094

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2024-05-10 15:29:13 -07:00
Percy Wegmann 7209c4f91e drive: parse depth 1 PROPFIND results to include children in cache
Clients often perform a PROPFIND for the parent directory before
performing PROPFIND for specific children within that directory.
The PROPFIND for the parent directory is usually done at depth 1,
meaning that we already have information for all of the children.
By immediately adding that to the cache, we save a roundtrip to
the remote peer on the PROPFIND for the specific child.
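
A loose sketch of the caching idea, with hypothetical types rather than the actual Taildrive stat cache: a depth-1 PROPFIND on a parent already returns each child's properties, so they can be stored keyed by child path.

    package main

    import (
        "fmt"
        "path"
        "sync"
    )

    // statCache is a hypothetical cache of per-path stat-like results.
    type statCache struct {
        mu sync.Mutex
        m  map[string]int64 // path -> size, standing in for full properties
    }

    // putChildren stores results from a depth-1 listing of parent, so later
    // PROPFINDs for individual children are served from cache instead of
    // another roundtrip to the remote peer.
    func (c *statCache) putChildren(parent string, children map[string]int64) {
        c.mu.Lock()
        defer c.mu.Unlock()
        if c.m == nil {
            c.m = make(map[string]int64)
        }
        for name, size := range children {
            c.m[path.Join(parent, name)] = size
        }
    }

    func main() {
        var c statCache
        c.putChildren("/share/docs", map[string]int64{"a.txt": 10, "b.txt": 20})
        fmt.Println(c.m)
    }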

Updates tailscale/corp#19779

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-05-10 15:19:44 -05:00
Irbe Krumina d86d1e7601
cmd/k8s-operator,cmd/containerboot,ipn,k8s-operator: turn off stateful filter for egress proxies. (#12075)
Turn off stateful filtering for egress proxies to allow cluster
traffic to be forwarded to tailnet.

Allow configuring stateful filter via tailscaled config file.

Deprecate EXPERIMENTAL_TS_CONFIGFILE_PATH env var and introduce a new
TS_EXPERIMENTAL_VERSIONED_CONFIG env var that can be used to provide
containerboot with a directory that should contain one or more
tailscaled config files named cap-<tailscaled-cap-version>.hujson.
Containerboot will pick the one with the newest capability version
that is not newer than its current capability version.
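
A sketch of the selection rule described above (hypothetical helper, not containerboot's actual code): among files named cap-<N>.hujson, pick the largest N that does not exceed the running tailscaled's capability version.

    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    // pickConfig returns the config file with the newest capability version
    // that is not newer than curCapVer, or "" if none qualifies.
    func pickConfig(dir string, curCapVer int) (string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return "", err
        }
        best, bestVer := "", -1
        for _, e := range entries {
            name := e.Name()
            if !strings.HasPrefix(name, "cap-") || !strings.HasSuffix(name, ".hujson") {
                continue
            }
            v, err := strconv.Atoi(strings.TrimSuffix(strings.TrimPrefix(name, "cap-"), ".hujson"))
            if err != nil || v > curCapVer || v <= bestVer {
                continue
            }
            best, bestVer = name, v
        }
        return best, nil
    }

    func main() {
        // Directory path and capability version here are made up for the example.
        name, err := pickConfig("/etc/tsconfig", 95)
        fmt.Println(name, err)
    }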

Proxies with this change will not work with older Tailscale
Kubernetes operator versions - users must ensure that
the deployed operator is at the same version as the proxies
or newer (up to a 4-version skew).

Updates tailscale/tailscale#12061

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Co-authored-by: Maisem Ali <maisem@tailscale.com>
2024-05-10 16:32:37 +01:00
Claire Wang e070af7414
ipnlocal, magicsock: add more description to storing last suggested exit node related functions (#11998)
Updates tailscale/corp#19681

Signed-off-by: Claire Wang <claire@tailscale.com>
2024-05-10 10:30:10 -04:00
Andrew Dunham 5708fc0639 wgengine/router: print Docker warning when stateful filtering is enabled
When Docker is detected on the host and stateful filtering is enabled,
Docker containers may be unable to reach Tailscale nodes (depending on
the network settings of a container). Detect Docker when stateful
filtering is enabled and print a health warning to aid users in noticing
this issue.

We avoid printing the warning if the current node isn't advertising any
subnet routes and isn't an exit node, since without one of those being
true, the node wouldn't have the correct AllowedIPs in WireGuard to
allow a Docker container to connect to another Tailscale node anyway.

Updates #12070

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Idef538695f4d101b0ef6f3fb398c0eaafc3ae281
2024-05-09 12:26:11 -06:00
Andrew Dunham 25e32cc3ae util/linuxfw: fix table name in DelStatefulRule
Updates #12061
Follow-up to #12072

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I2ba8c4bff14d93816760ff5eaa1a16f17bad13c1
2024-05-09 11:44:16 -06:00
Maisem Ali 21abb7f402 cmd/tailscale: add missing set flags for linux
We were missing `snat-subnet-routes`, `stateful-filtering`
and `netfilter-mode`. Add those to set too.

Fixes #12061

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2024-05-09 09:02:23 -07:00
Anton Tolchanov ac638f32c0 util/linuxfw: fix stateful packet filtering in nftables mode
To match iptables:
b5dbf155b1/util/linuxfw/iptables_runner.go (L536)

Updates #12066

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2024-05-09 15:12:44 +01:00
Irbe Krumina b5dbf155b1
cmd/k8s-operator: default nameserver image to tailscale/k8s-nameserver:unstable (#11991)
We are now publishing nameserver images to tailscale/k8s-nameserver,
so we can start defaulting the images if users haven't set
them explicitly, same as we already do with proxy images.

The nameserver images are currently only published for unstable
track, so we have to use the static 'unstable' tag.
Once we start publishing to stable, we can make the operator
default to its own tag (because then we'll know that for each
operator tag X there is also a nameserver tag X, as we always
cut all images for a given tag).

Updates tailscale/tailscale#10499

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-05-09 07:29:10 +01:00
Andrew Dunham 8f7f9ac17e wgengine/netstack: handle 4via6 routes that are advertised by the same node
Previously, a node that was advertising a 4via6 route wouldn't be able
to make use of that same route; the packet would be delivered to
Tailscale, but since we weren't accepting it in handleLocalPackets, the
packet wouldn't be delivered to netstack and would never hit the 4via6
logic. Let's add that support so that usage of 4via6 is consistent
regardless of where the connection is initiated from.

Updates #11304

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ic28dc2e58080d76100d73b93360f4698605af7cb
2024-05-08 17:36:17 -06:00
Nick O'Neill 7901925ad3
VERSION.txt: this is v1.67.0 (#12063)
Signed-off-by: Nick O'Neill <nick@tailscale.com>
2024-05-08 14:00:17 -07:00
Sonia Appasamy 8130656780 api.md: remove extraneous commas in json examples
Updates #cleanup

Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
2024-05-08 16:36:52 -04:00
Anton Tolchanov 6f4a1dc6bf ipn/ipnlocal: fix another read of keyExpired outside mutex
Updates #12039

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2024-05-08 19:00:30 +01:00
Brad Fitzpatrick e968b0ecd7 cmd/tailscale,controlclient,ipnlocal: fix 'up', deflake tests more
The CLI's "up" is kinda chaotic and LocalBackend.Start is kinda
chaotic and they both need to be redone/deleted (respectively), but
this fixes some buggy behavior meanwhile. We were previously calling
StartLoginInteractive (to start the controlclient's RegisterRequest)
redundantly in some cases, causing test flakes depending on timing and
up's weird state machine.

We only need to call StartLoginInteractive in the client if Start itself
doesn't. But Start doesn't tell us that. So cheat a bit and put the
information about whether there's a current NodeKey in the ipn.Status.
It used to be accessible over LocalAPI via GetPrefs as a private key but
we removed that for security. But a bool is fine.

So then only call StartLoginInteractive if that bool is false and don't
do it in the WatchIPNBus loop.

Fixes #12028
Updates #12042

Change-Id: I0923c3f704a9d6afd825a858eb9a63ca7c1df294
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-05-07 22:34:45 -07:00
Brad Fitzpatrick e5ef35857f ipn/ipnlocal: fix read of keyExpired outside mutex
Fixes #12039

Change-Id: I28c8a282ce12619f17103e9535841f15394ce685
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-05-07 22:22:53 -07:00
Brad Fitzpatrick 21509db121 ipn/ipnlocal, all: plumb health trackers in tests
I saw some panics in CI, like:

    2024-05-08T04:30:25.9553518Z ## WARNING: (non-fatal) nil health.Tracker (being strict in CI):
    2024-05-08T04:30:25.9554043Z goroutine 801 [running]:
    2024-05-08T04:30:25.9554489Z tailscale.com/health.(*Tracker).nil(0x0)
    2024-05-08T04:30:25.9555086Z 	tailscale.com/health/health.go:185 +0x70
    2024-05-08T04:30:25.9555688Z tailscale.com/health.(*Tracker).SetUDP4Unbound(0x0, 0x0)
    2024-05-08T04:30:25.9556373Z 	tailscale.com/health/health.go:532 +0x2f
    2024-05-08T04:30:25.9557296Z tailscale.com/wgengine/magicsock.(*Conn).bindSocket(0xc0003b4808, 0xc0003b4878, {0x1fbca53, 0x4}, 0x0)
    2024-05-08T04:30:25.9558301Z 	tailscale.com/wgengine/magicsock/magicsock.go:2481 +0x12c5
    2024-05-08T04:30:25.9559026Z tailscale.com/wgengine/magicsock.(*Conn).rebind(0xc0003b4808, 0x0)
    2024-05-08T04:30:25.9559874Z 	tailscale.com/wgengine/magicsock/magicsock.go:2510 +0x16f
    2024-05-08T04:30:25.9561038Z tailscale.com/wgengine/magicsock.NewConn({0xc000063c80, 0x0, 0xc000197930, 0xc000197950, 0xc000197960, {0x0, 0x0}, 0xc000197970, 0xc000198ee0, 0x0, ...})
    2024-05-08T04:30:25.9562402Z 	tailscale.com/wgengine/magicsock/magicsock.go:476 +0xd5f
    2024-05-08T04:30:25.9563779Z tailscale.com/wgengine.NewUserspaceEngine(0xc000063c80, {{0x22c8750, 0xc0001976b0}, 0x0, {0x22c3210, 0xc000063c80}, {0x22c31d8, 0x2d3c900}, 0x0, 0x0, ...})
    2024-05-08T04:30:25.9564982Z 	tailscale.com/wgengine/userspace.go:389 +0x159d
    2024-05-08T04:30:25.9565529Z tailscale.com/ipn/ipnlocal.newTestBackend(0xc000358b60)
    2024-05-08T04:30:25.9566086Z 	tailscale.com/ipn/ipnlocal/serve_test.go:675 +0x2a5
    2024-05-08T04:30:25.9566612Z ta

Updates #11874

Change-Id: I3432ed52d670743e532be4642f38dbd6e3763b1b
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-05-07 22:22:10 -07:00
Brad Fitzpatrick 727c0d6cfd ipn/ipnserver: close a small race in ipnserver, ~simplify code
There was a small window in ipnserver after we assigned a LocalBackend
to the ipnserver's atomic but before we Start'ed it where our
initialization Start could conflict with API calls from the LocalAPI.

Simplify that a bit and lay out the rules in the docs.

Updates #12028

Change-Id: Ic5f5e4861e26340599184e20e308e709edec68b1
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-05-07 21:27:06 -07:00
Maisem Ali 32bc596062 ipn/ipnlocal: acquire b.mu once in Start
We used to Lock, Unlock, Lock, Unlock quite a few
times in Start resulting in all sorts of weird race
conditions. Simplify it all and only Lock/Unlock once.

Updates #11649

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2024-05-07 20:29:59 -07:00
Maisem Ali 9380e2dfc6 ipn/ipnlocal: use lockAndGetUnlock in Start
This removes one of the Lock,Unlock,Lock,Unlock at least in
the Start function. Still has 3 more of these.

Updates #11649

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2024-05-07 17:54:51 -07:00
Maisem Ali e1011f1387 ipn/ipnlocal: call SetNetInfoCallback from NewLocalBackend
Instead of calling it from Start every time, call it from NewLocalBackend
once.

Updates #11649

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2024-05-07 17:08:32 -07:00
Maisem Ali 85b9a6c601 net/netcheck: do not add derps if IPv4/IPv6 is set to "none"
It was documented as such but seems to have been dropped in a
refactor; restore the behavior. This brings down the time it
takes to run a single integration test by 2s, which adds up
quite a bit.

Updates tailscale/corp#19786

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2024-05-07 15:57:28 -07:00
Brad Fitzpatrick d7bdd8e2a7 go.toolchain.rev: update to Go 1.22.3
Updates #12044

Change-Id: I4ad16f2bfcec13735cb10713e028b2c5527501ed
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-05-07 13:32:51 -07:00
kari-ts 3c4c9dc1d2
web: use EditPrefs instead of passing UpdatePrefs to starting (#12040)
Web version of https://github.com/tailscale/tailscale-android/pull/370
This allows us to update the prefs rather than creating new prefs

Updates tailscale/tailscale#11731

Signed-off-by: kari-ts <kari@tailscale.com>
2024-05-07 13:25:20 -07:00
Brad Fitzpatrick 80df8ffb85 control/controlclient: early return and outdent some code
I found this too hard to read before.

This is pulled out of #12033 as it's unrelated cleanup in retrospect.

Updates #12028

Change-Id: I727c47e573217e3d1973c5b66a76748139cf79ee
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-05-07 11:02:55 -07:00
Andrew Lytvynov 471731771c
ipn/ipnlocal: set default NoStatefulFiltering in ipn.NewPrefs (#12031)
This way the default gets populated on first start, when no existing
state exists to migrate. Also fix `ipn.PrefsFromBytes` to preserve empty
fields, rather than layering `NewPrefs` values on top.

Updates https://github.com/tailscale/corp/issues/19623

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2024-05-07 11:28:22 -06:00
Paul Scott 78fa698fe6 cmd/tailscale/cli/ffcomplete: remove fullstop from ShortHelp
Updates #cleanup

Signed-off-by: Paul Scott <paul@tailscale.com>
2024-05-07 11:28:57 +01:00
Maisem Ali 482890b9ed tailcfg: bump capver for using NodeAttrUserDialUseRoutes for DNS
Missed in f62e678df8.

Updates tailscale/corp#18725
Updates #4529

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2024-05-06 15:52:50 -07:00
Maisem Ali af97e7a793 tailcfg,all: add/plumb Node.IsJailed
This adds a new bool that can be sent down from control
to do jailing on the client side. Previously this would
only be done from control by modifying the packet filter
we sent down to clients. This would result in a lot of
additional work/CPU on control; we could instead just
do this on the client. This has always been a TODO that
we kept putting off, so we might as well do it now.

Updates tailscale/corp#19623

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2024-05-06 15:32:22 -07:00
Maisem Ali e67069550b ipn/ipnlocal,net/tstun,wgengine: create and plumb jailed packet filter
This plumbs a packet filter for jailed nodes through to the
tstun.Wrapper; the filter for a jailed node is equivalent to a "shields
up" filter. Currently a no-op as there is no way for control to
tell the client whether a peer is jailed.

Updates tailscale/corp#19623

Co-authored-by: Andrew Dunham <andrew@du.nham.ca>
Signed-off-by: Maisem Ali <maisem@tailscale.com>
Change-Id: I5ccc5f00e197fde15dd567485b2a99d8254391ad
2024-05-06 15:32:22 -07:00
Nick Khyl f62e678df8 net/dns/resolver, control/controlknobs, tailcfg: use UserDial instead of SystemDial to dial DNS servers
Now that tsdial.Dialer.UserDial has been updated to honor the configured routes
and dial external network addresses without going through Tailscale, while also being
able to dial a node/subnet router on the tailnet, we can start using UserDial to forward
DNS requests. This is primarily needed for DNS over TCP when forwarding requests
to internal DNS servers, but we also update getKnownDoHClientForProvider to use it.

Updates tailscale/corp#18725

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-05-06 17:29:24 -05:00
Andrew Lytvynov c28f5767bf
various: implement stateful firewalling on Linux (#12025)
Updates https://github.com/tailscale/corp/issues/19623


Change-Id: I7980e1fb736e234e66fa000d488066466c96ec85

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Co-authored-by: Andrew Dunham <andrew@du.nham.ca>
2024-05-06 16:22:17 -06:00
Maisem Ali 5ef178fdca net/tstun: refactor peerConfig to allow storing more details
This refactors the peerConfig struct to allow storing more
details about a peer and not just the masq addresses. To be
used in a follow-up change.

As a side effect, this also makes the DNAT logic on the inbound
packet stricter. Previously it would only match against the packet's
dst IP; now it also takes the src IP into consideration. The behavior
is at parity with the SNAT case.
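
A simplified sketch of the stricter matching (hypothetical rule struct, not the real tstun peerConfig): the inbound DNAT rewrite now requires both the source and destination addresses to match, mirroring the SNAT direction.

    package main

    import (
        "fmt"
        "net/netip"
    )

    // dnatRule is a hypothetical per-peer rewrite rule.
    type dnatRule struct {
        peerSrc netip.Addr // expected source: the peer's address
        masqDst netip.Addr // masquerade address the peer targets
        realDst netip.Addr // our real address to rewrite the destination to
    }

    // rewriteInbound applies the rule only when both src and dst match;
    // previously only dst was checked.
    func rewriteInbound(src, dst netip.Addr, r dnatRule) (netip.Addr, bool) {
        if src == r.peerSrc && dst == r.masqDst {
            return r.realDst, true
        }
        return dst, false
    }

    func main() {
        r := dnatRule{
            peerSrc: netip.MustParseAddr("100.64.0.2"),
            masqDst: netip.MustParseAddr("100.64.0.99"),
            realDst: netip.MustParseAddr("100.64.0.1"),
        }
        fmt.Println(rewriteInbound(netip.MustParseAddr("100.64.0.2"), r.masqDst, r))
    }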

Updates tailscale/corp#19623

Co-authored-by: Andrew Dunham <andrew@du.nham.ca>
Signed-off-by: Maisem Ali <maisem@tailscale.com>
Change-Id: I5f40802bebbf0f055436eb8824e4511d0052772d
2024-05-06 15:15:30 -07:00
Brad Fitzpatrick f3d2fd22ef cmd/tailscale/cli: don't start WatchIPNBus until after up's initial Start
The CLI "up" command is a historical mess, both on the CLI side and
the LocalBackend side. We're getting closer to cleaning it up, but in
the meantime it was again implicated in flaky tests.

In this case, the background goroutine running WatchIPNBus was very
occasionally running enough to get to its StartLoginInteractive call
before the original goroutine did its Start call. That meant
integration tests were very rarely but sometimes logging in with the
default control plane URL out on the internet
(controlplane.tailscale.com) instead of the localhost control server
for tests.

This also might've affected new Headscale etc users on initial "up".

Fixes #11960
Fixes #11962

Change-Id: I36f8817b69267a99271b5ee78cb7dbf0fcc0bd34
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-05-06 15:03:06 -07:00
Brad Fitzpatrick aadb8d9d21 ipn/ipnlocal: don't send an empty BrowseToURL w/ WatchIPNBus NotifyInitialState
I noticed this while working on the following fix to #11962.

Updates #11962

Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Change-Id: I4c5894d8899d1ae8c42f54ecfd4d05a4a7ac598c
2024-05-06 15:03:06 -07:00
Brad Fitzpatrick e26f76a1c4 tstest/integration: add more debugging, logs to catch flaky test
Updates #11962

Change-Id: I1ab0db69bdf8d1d535aa2cef434c586311f0fe18
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-05-06 15:03:06 -07:00
Nick Khyl caa3d7594f ipn/ipnlocal, net/tsdial: plumb routes into tsdial and use them in UserDial
We'd like to use tsdial.Dialer.UserDial instead of SystemDial for DNS over TCP.
This is primarily necessary to properly dial internal DNS servers accessible
over Tailscale and subnet routes. However, to avoid issues when switching
between Wi-Fi and cellular, we need to ensure that we don't retain connections
to any external addresses on the old interface. Therefore, we need to determine
which dialer to use internally based on the configured routes.

This plumbs routes and localRoutes from router.Config to tsdial.Dialer,
and updates UserDial to use either the peer dialer or the system dialer,
depending on the network address and the configured routes.
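
A rough sketch of the routing decision described here, with a hypothetical chooser rather than tsdial's actual implementation: dial through the tailnet only when the destination falls inside a configured (non-local) route, otherwise use the system dialer. (Tailnet addresses themselves would also use the peer dialer; omitted here.)

    package main

    import (
        "fmt"
        "net/netip"
    )

    // usePeerDialer reports whether addr should be dialed via the tailnet:
    // it must match one of the configured routes and not a local route.
    func usePeerDialer(addr netip.Addr, routes, localRoutes []netip.Prefix) bool {
        for _, p := range localRoutes {
            if p.Contains(addr) {
                return false
            }
        }
        for _, p := range routes {
            if p.Contains(addr) {
                return true
            }
        }
        return false
    }

    func main() {
        routes := []netip.Prefix{netip.MustParsePrefix("10.0.0.0/8")}
        local := []netip.Prefix{netip.MustParsePrefix("10.1.0.0/16")}
        fmt.Println(usePeerDialer(netip.MustParseAddr("10.2.3.4"), routes, local)) // true: via peer dialer
        fmt.Println(usePeerDialer(netip.MustParseAddr("10.1.3.4"), routes, local)) // false: local route
        fmt.Println(usePeerDialer(netip.MustParseAddr("8.8.8.8"), routes, local))  // false: system dialer
    }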

Updates tailscale/corp#18725
Fixes #4529

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-05-06 15:44:44 -05:00
Brad Fitzpatrick ce8969d82b net/portmapper: add envknob to disable portmapper in localhost integration tests
Updates #11962

Change-Id: I8212cd814985b455d96986de0d4c45f119516cb3
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-05-06 11:15:56 -07:00
Brad Fitzpatrick 7e0dd61e61 ipn/ipnlocal, tstest/integration: add panic to catch flaky test in the act
Updates #11962

Change-Id: Ifa24b82f9c76639bfd83278a7c2fe9cf42897bbb
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-05-06 11:15:56 -07:00
License Updater 258b5042fe licenses: update license notices
Signed-off-by: License Updater <noreply+license-updater@tailscale.com>
2024-05-06 09:47:13 -07:00
Brad Fitzpatrick c3c18027c6 all: make more tests pass/skip in airplane mode
Updates tailscale/corp#19786

Change-Id: Iedc6730fe91c627b556bff5325bdbaf7bf79d8e6
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-05-06 09:19:53 -07:00
Claire Wang 41f2195899
util/syspolicy: add auto exit node related keys (#11996)
Updates tailscale/corp#19681

Signed-off-by: Claire Wang <claire@tailscale.com>
2024-05-06 12:14:10 -04:00
Brad Fitzpatrick 1a963342c7 util/set: add Of variant of SetOf that takes variadic parameter
set.Of(1, 2, 3) is prettier than set.SetOf([]int{1, 2, 3}).

I was going to change the signature of SetOf but then I noticed its
name has stutter anyway, so I kept it for compatibility. People can
prefer to use set.Of for new code or slowly migrate.

Also add a lazy Make method, which I often find myself wanting,
without having to resort to uglier mak.Set(&set, k, struct{}{}).
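
A small self-contained sketch of the two helpers described above (not the util/set package itself): a variadic constructor and a lazy make-before-add.

    package main

    import "fmt"

    // Set is a minimal map-backed set, mirroring the shape of util/set.Set.
    type Set[T comparable] map[T]struct{}

    // Of builds a set from a variadic list of elements.
    func Of[T comparable](vs ...T) Set[T] {
        s := make(Set[T], len(vs))
        for _, v := range vs {
            s[v] = struct{}{}
        }
        return s
    }

    // Make lazily initializes the set so a zero value can be added to
    // without the mak.Set(&set, k, struct{}{}) dance.
    func (s *Set[T]) Make() {
        if *s == nil {
            *s = make(Set[T])
        }
    }

    func main() {
        s := Of(1, 2, 3)
        fmt.Println(len(s)) // 3

        var lazy Set[string]
        lazy.Make()
        lazy["hello"] = struct{}{}
        fmt.Println(len(lazy)) // 1
    }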

Updates #cleanup

Change-Id: Ic6f3870115334efcbd65e79c437de2ad3edb7625
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-05-05 21:14:28 -07:00
Will Norris 80decd83c1 tsweb: remove redundant bumpStartIfNeeded func
Updates #12001

Signed-off-by: Will Norris <will@tailscale.com>
2024-05-05 18:04:58 -07:00
Maisem Ali ed843e643f types/views: add AppendStrings util func
Updates tailscale/corp#19623

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2024-05-03 19:19:33 -07:00
Maisem Ali fd6ba43b97 types/views: remove duplicate SliceContainsFunc
We already have `(Slice[T]).ContainsFunc`.

Updates #cleanup

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2024-05-03 19:19:33 -07:00
Will Norris 46980c9664 tsweb: ensure in-flight requests are always marked as finished
The inflight request tracker only starts recording a new bucket after
the first non-error request. Unfortunately, it's written in such a way
that ONLY successful requests are ever marked as being finished. Once a
bucket has had at least one successful request and begun to be tracked,
all subsequent error cases are never marked finished and always appear
as in-flight.

This change ensures that if a request is recorded as having been
started, we also mark it as finished at the end.
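
A generic sketch of the fix's shape (hypothetical tracker, not tsweb's): record the start, then use defer so the finish is recorded on error paths too, not just on success.

    package main

    import (
        "fmt"
        "sync"
    )

    // tracker counts in-flight requests per bucket.
    type tracker struct {
        mu       sync.Mutex
        inFlight map[string]int
    }

    func (t *tracker) start(bucket string) {
        t.mu.Lock()
        defer t.mu.Unlock()
        if t.inFlight == nil {
            t.inFlight = make(map[string]int)
        }
        t.inFlight[bucket]++
    }

    func (t *tracker) finish(bucket string) {
        t.mu.Lock()
        defer t.mu.Unlock()
        t.inFlight[bucket]--
    }

    func handle(t *tracker, bucket string) (err error) {
        t.start(bucket)
        defer t.finish(bucket) // runs even when err != nil, so errors don't linger as in-flight
        // ... serve the request ...
        return nil
    }

    func main() {
        var t tracker
        _ = handle(&t, "/api")
        fmt.Println(t.inFlight["/api"]) // 0
    }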

Updates tailscale/corp#19767

Signed-off-by: Will Norris <will@tailscale.com>
2024-05-03 15:36:14 -07:00
Percy Wegmann 817badf9ca ipn/ipnlocal: reuse transport across Taildrive remotes
This prevents us from opening a new connection on each HTTP
request.
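
The underlying idea, as a standard-library sketch: construct one http.Client (and thus one Transport with its connection pool) per remote and reuse it, instead of building a new Transport per request.

    package main

    import (
        "io"
        "net/http"
        "time"
    )

    // newRemoteClient returns a client whose Transport pools and reuses
    // connections across requests to the same remote.
    func newRemoteClient() *http.Client {
        return &http.Client{
            Transport: &http.Transport{
                MaxIdleConnsPerHost: 4,
                IdleConnTimeout:     90 * time.Second,
            },
        }
    }

    func fetchTwice(c *http.Client, url string) error {
        for i := 0; i < 2; i++ { // second request reuses the pooled connection
            resp, err := c.Get(url)
            if err != nil {
                return err
            }
            io.Copy(io.Discard, resp.Body)
            resp.Body.Close()
        }
        return nil
    }

    func main() {
        c := newRemoteClient()
        _ = fetchTwice(c, "http://example.com/")
    }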

Updates #11967

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-05-03 16:07:52 -05:00
Percy Wegmann 2cf764e998 drive: actually cache results on statcache
Updates #11967

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-05-03 16:07:52 -05:00
Irbe Krumina 406293682c
cmd/k8s-operator: cleanup runReconciler signature (#11993)
Updates #cleanup

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-05-03 19:05:37 +01:00
Claire Wang 35872e86d2
ipnlocal, magicsock: store last suggested exit node id in local backend (#11959)
Updates tailscale/corp#19681

Signed-off-by: Claire Wang <claire@tailscale.com>
2024-05-03 13:24:26 -04:00
Brad Fitzpatrick b62cfc430a tstest/integration/testcontrol: fix data race
Noticed in earlier GitHub actions failure.

Fixes #11994

Change-Id: Iba8d753caaa3dacbe2da9171d96c5f99b12e62d7
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-05-03 10:03:48 -07:00
Andrew Dunham e9505e5432 ipn/ipnlocal: plumb health.Tracker into profileManager constructor
Setting the field after-the-fact wasn't working because we could migrate
prefs on creation, which would set health status for auto updates.

Updates #11986

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I41d79ebd61d64829a3a9e70586ce56f62d24ccfd
2024-05-03 08:25:38 -07:00
Brad Fitzpatrick e42c4396cf net/netcheck: don't spam on ICMP socket permission denied errors
While debugging a failing test in airplane mode on macOS, I noticed
netcheck logspam about ICMP socket creation permission denied errors.

Apparently macOS just can't do those, or at least not in airplane
mode. Not worth spamming about.

Updates #cleanup

Change-Id: I302620cfd3c8eabb25202d7eef040c01bd8a843c
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-05-03 08:24:24 -07:00
Brad Fitzpatrick 15fc6cd966 derp/derphttp: fix netcheck HTTPS probes
The netcheck client, when no UDP is available, probes distance using
HTTPS.

Several problems:

* It probes using /derp/latency-check.
* But cmd/derper serves the handler at /derp/probe.
* Despite the difference, it worked by accident until c8f4dfc8c0,
  which made netcheck's probe require a 2xx status code.
* In tests, we only use derphttp.Handler, so the cmd/derper-installed
  mux routes aren't present, so there's no probe. That breaks
  tests in airplane mode. netcheck.Client then reports "unexpected
  HTTP status 426" (Upgrade Required).

This makes derp handle both /derp/probe and /derp/latency-check
equivalently, and in both cmd/derper and derphttp.Handler standalone
modes.
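
A minimal net/http sketch of the equivalence (not the derper code itself): register the same probe handler under both paths so either URL returns 200.

    package main

    import (
        "log"
        "net/http"
    )

    func probeHandler(w http.ResponseWriter, r *http.Request) {
        w.WriteHeader(http.StatusOK) // netcheck requires a 2xx status
    }

    func main() {
        mux := http.NewServeMux()
        // Serve the same handler at both the historical and the probed path.
        mux.HandleFunc("/derp/probe", probeHandler)
        mux.HandleFunc("/derp/latency-check", probeHandler)
        log.Fatal(http.ListenAndServe("127.0.0.1:8080", mux))
    }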

I noticed this when wgengine/magicsock TestActiveDiscovery was failing
in airplane mode (no wifi). It still doesn't pass, but it gets
further.

Fixes #11989

Change-Id: I45213d4bd137e0f29aac8bd4a9ac92091065113f
2024-05-03 08:24:24 -07:00
Brad Fitzpatrick 1fe0983f2d cmd/derper,tstest/nettest: skip network-needing test in airplane mode
Not buying wifi on a short flight is a good way to find tests
that require network. Whoops.

Updates #cleanup

Change-Id: Ibe678e9c755d27269ad7206413ffe9971f07d298
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-05-03 08:24:24 -07:00
Brad Fitzpatrick 46f3feae96 ssh/tailssh: plumb health.Tracker in test
In prep for it being required in more places.

Updates #11874

Change-Id: Ib743205fc2a6c6ff3d2c4ed3a2b28cac79156539
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-05-03 08:24:24 -07:00
Brad Fitzpatrick 4fa6cbec27 ssh/tailssh: use ptr.To in test
Updates #cleanup

Change-Id: Ic98ba1b63c8205084b30f59f0ca343788edea5b0
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-05-03 08:24:24 -07:00
Brad Fitzpatrick ee3bd4dbda derp/derphttp, net/netcheck: plumb netmon.Monitor to derp netcheck client
Fixes #11981

Change-Id: I0e15a09f93aefb3cfddbc12d463c1c08b83e09fd
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-05-03 08:24:24 -07:00
Percy Wegmann a03cb866b4 drive: use secret token to authenticate access to file server on localhost
This prevents Mark-of-the-Web bypass attacks in case someone visits the
localhost WebDAV server directly.
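
The general shape of such a guard, sketched with a hypothetical header name and standard-library middleware (not Taildrive's actual mechanism): only requests carrying the per-process secret reach the localhost file server.

    package main

    import (
        "crypto/rand"
        "crypto/subtle"
        "encoding/hex"
        "log"
        "net/http"
    )

    // requireToken rejects requests that don't present the expected secret,
    // so a browser pointed straight at the localhost port gets a 403.
    func requireToken(secret string, next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            got := r.Header.Get("X-Local-Access-Token") // hypothetical header
            if subtle.ConstantTimeCompare([]byte(got), []byte(secret)) != 1 {
                http.Error(w, "forbidden", http.StatusForbidden)
                return
            }
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        buf := make([]byte, 32)
        if _, err := rand.Read(buf); err != nil {
            log.Fatal(err)
        }
        secret := hex.EncodeToString(buf)

        fileServer := http.FileServer(http.Dir("/tmp/share")) // stand-in for the WebDAV handler
        log.Fatal(http.ListenAndServe("127.0.0.1:8080", requireToken(secret, fileServer)))
    }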

Fixes tailscale/corp#19592

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-05-03 09:03:32 -05:00
Percy Wegmann 745fb31bd4 drive: use secret token to authenticate access to file server on localhost
This prevents Mark-of-the-Web bypass attacks in case someone visits the
localhost WebDAV server directly.

Fixes tailscale/corp#19592

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-05-03 09:03:32 -05:00
Percy Wegmann 07e783c7be drive: use secret token to authenticate access to file server on localhost
This prevents Mark-of-the-Web bypass attacks in case someone visits the
localhost WebDAV server directly.

Fixes tailscale/corp#19592

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-05-03 09:03:32 -05:00
Percy Wegmann 3349e86c0a drive: use secret token to authenticate access to file server on localhost
This prevents Mark-of-the-Web bypass attacks in case someone visits the
localhost WebDAV server directly.

Fixes tailscale/corp#19592

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-05-03 09:03:32 -05:00
Percy Wegmann 0c11fd978b drive: use secret token to authenticate access to file server on localhost
This prevents Mark-of-the-Web bypass attacks in case someone visits the
localhost WebDAV server directly.

Fixes tailscale/corp#19592

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-05-03 09:03:32 -05:00
Percy Wegmann 9d22ec0ba2 drive: use secret token to authenticate access to file server on localhost
This prevents Mark-of-the-Web bypass attacks in case someone visits the
localhost WebDAV server directly.

Fixes tailscale/corp#19592

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-05-03 09:03:32 -05:00
Irbe Krumina cd633a7252
cmd/k8s-operator/deploy,k8s-operator: document that metrics are unstable (#11979)
Updates #11292

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-05-03 14:02:10 +01:00
Andrew Dunham f97d0ac994 net/dns/resolver: add better error wrapping
To aid in debugging exactly what's going wrong, instead of the
not-particularly-useful "dns udp query: context deadline exceeded" error
that we currently get.

Updates #3786
Updates #10768
Updates #11620
(etc.)

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I76334bf0681a8a2c72c90700f636c4174931432c
2024-05-02 14:08:05 -04:00
Claire Wang e0287a4b33
wgengine: add exit destination logging enable for wgengine logger (#11952)
Updates tailscale/corp#18625
Co-authored-by: Kevin Liang <kevinliang@tailscale.com>
Signed-off-by: Claire Wang <claire@tailscale.com>
2024-05-02 13:55:05 -04:00
Irbe Krumina 19b31ac9a6
cmd/{k8s-operator,k8s-nameserver},k8s-operator: update nameserver config with records for ingress/egress proxies (#11019)
cmd/k8s-operator: optionally update dnsrecords Configmap with DNS records for proxies.

This commit adds functionality to automatically populate
DNS records for the in-cluster ts.net nameserver
to allow cluster workloads to resolve MagicDNS names
associated with operator's proxies.

The records are created as follows:
* For tailscale Ingress proxies there will be
a record mapping the MagicDNS name of the Ingress
device and each proxy Pod's IP address.
* For cluster egress proxies, configured via
tailscale.com/tailnet-fqdn annotation, there will be
a record for each proxy Pod, mapping
the MagicDNS name of the exposed
tailnet workload to the proxy Pod's IP.

No records will be created for any other proxy types.
Records will only be created if users have configured
the operator to deploy an in-cluster ts.net nameserver
by applying tailscale.com/v1alpha1.DNSConfig.

It is the user's responsibility to add the ts.net nameserver
as a stub nameserver for ts.net DNS names.
https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#configuration-of-stub-domain-and-upstream-nameserver-using-coredns
https://cloud.google.com/kubernetes-engine/docs/how-to/kube-dns#upstream_nameservers

See also https://github.com/tailscale/tailscale/pull/11017

Updates tailscale/tailscale#10499

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-05-02 17:29:46 +01:00
Maisem Ali a49ed2e145 derp,ipn/ipnlocal: stop calling rand.Seed
It's deprecated and using it gets us the old slow behavior
according to https://go.dev/blog/randv2.

> Having eliminated repeatability of the global output stream, Go 1.20
> was also able to make the global generator scale better in programs
> that don’t call rand.Seed, replacing the Go 1 generator with a very
> cheap per-thread wyrand generator already used inside the Go
> runtime. This removed the global mutex and made the top-level
> functions scale much better. Programs that do call rand.Seed fall
> back to the mutex-protected Go 1 generator.
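
For new code, the replacement is straightforward; a small sketch using math/rand/v2, which needs no seeding and keeps the fast per-thread generator:

    package main

    import (
        "fmt"
        "math/rand/v2"
    )

    func main() {
        // No rand.Seed call: the top-level functions are automatically
        // seeded and scale well, per the Go 1.20+ behavior quoted above.
        fmt.Println(rand.IntN(100))
        fmt.Println(rand.Float64())
    }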

Updates #7123

Change-Id: Ia5452e66bd16b5457d4b1c290a59294545e13291
Signed-off-by: Maisem Ali <maisem@tailscale.com>
2024-05-02 09:09:09 -07:00
Brad Fitzpatrick 96712e10a7 health, ipn/ipnlocal: move more health warning code into health.Tracker
In prep for making health warnings rich objects with metadata rather
than a bunch of strings, start moving it all into the same place.

We'll still ultimately need the stringified form for the CLI and
LocalAPI for compatibility but we'll next convert all these warnings
into Warnables that have severity levels and such, and legacy
stringification will just be something each Warnable thing can do.

Updates #4136

Change-Id: I83e189435daae3664135ed53c98627c66e9e53da
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-05-01 15:03:21 -07:00
Andrew Dunham be663c84c1 net/tstun: rename natConfig to peerConfig
So that we can use this for additional, non-NAT configuration without it
being confusing.

Updates #cleanup

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I1658d59c9824217917a94ee76d2d08f0a682986f
2024-05-01 15:01:52 -04:00
Andrew Dunham 10497acc95 net/tstun: refactor natConfig to not be per-family
This was a holdover from the older, pre-BART days and is no longer
necessary.

Updates #cleanup

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I71b892bab1898077767b9ff51cef33d59c08faf8
2024-05-01 14:06:35 -04:00
Andrew Lytvynov 13e1355546
scripts/installer.sh: remove unnecessary escaping in grep (#11950)
Updates #11263

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2024-05-01 11:09:10 -06:00
Percy Wegmann 843afe7c53 ssh/tailssh: add integration test
Updates tailscale/corp#11854

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-05-01 11:19:36 -05:00
Jonathan Nobels 45b9aa0d83
net/netmon: remove spammy log statements (#11953)
Updates tailscale/corp#18960

Tests in corp called us using the wrong logging calls.  Removed.
This is logged downstream anyway.

Signed-off-by: Jonathan Nobels <jonathan@tailscale.com>
2024-05-01 12:02:16 -04:00
Paul Scott 4c08410011 cmd/tailscale/cli: set localClient.UseSocketOnly during flag parsing
This configures localClient correctly during flag parsing, so that the --socket
option is effective when generating tab-completion results. For example, the
following would not connect to the system Tailscale for tab-completion results:

    tailscale --socket=/tmp/tailscaled.socket switch <TAB>

Updates #3793

Signed-off-by: Paul Scott <paul@tailscale.com>
2024-05-01 17:01:03 +01:00
Paul Scott ba34943133 cmd/tailscale/cli/ffcomplete: omit and clean completion results
Updates #3793

Signed-off-by: Paul Scott <paul@tailscale.com>
2024-05-01 17:01:03 +01:00
Jonathan Nobels fa1303d632
net/netmon: swap to swift-derived defaultRoute on macos (#11936)
Updates tailscale/corp#18960

iOS uses Apple's NetworkMonitor to track the default interface and
there's no reason we shouldn't also use this on macOS, for the same
reasons noted in the comments for why this change was made on iOS.

This eliminates the need to load and parse the routing table when
querying the defaultRouter() in almost all cases.

A slight modification here (on both platforms) to fall back to the default
BSD logic in the unhappy path rather than making assumptions that
may not hold. If netmon is eventually parsing AF_ROUTE and able
to give a consistently correct answer for the default interface index,
we can fall back to that and eliminate the Swift dependency.

Signed-off-by: Jonathan Nobels <jonathan@tailscale.com>
2024-05-01 09:20:09 -04:00
Gabe Gorelick de85610be0
cmd/k8s-operator/deploy/chart: allow users to configure additional labels for the operator's Pod via Helm chart values.

Fixes #11947

Signed-off-by: Gabe Gorelick <gabe@hightouch.io>
2024-05-01 10:37:21 +01:00
Percy Wegmann 2648d475d7 drive: don't allow DELETE on read-only shares
Fixes tailscale/corp#19646

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-04-30 22:29:33 -05:00
Brad Fitzpatrick 7455e027e9 util/slicesx: add AppendMatching
We had this in a different repo, but moving it here, as this a more
fitting package.

Updates #cleanup

Change-Id: I5fb9b10e465932aeef5841c67deba4d77d473d57
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-30 16:47:21 -07:00
Andrew Dunham fe009c134e ipn/ipnlocal: reset the dialPlan only when the URL is unchanged
Also, reset it in a few more places (e.g. logout, new blank profiles,
etc.) to avoid a few more cases where a pre-existing dialPlan can cause
a new Headscale server to take 10+ seconds to connect.

Updates #11938

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I3095173a5a3d9720507afe4452548491e9e45a3e
2024-04-30 18:33:48 -04:00
Brad Fitzpatrick c47f9303b0 types/views: use slices.Contains{,Func}
Updates #8419

Change-Id: Ib1a9cb3fb425284b7e02684072a4e7a35975f35c
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-30 15:29:23 -07:00
Joe Tsai 5db80cf2d8
syncs: fix AtomicValue for interface kinds (#11943)
If AtomicValue[T] is used with a T that is an interface kind,
then Store may panic if different concrete types are ever stored.

Fix this by always wrapping in a concrete type.
Technically, this is only needed if T is an interface kind,
but there is no harm in doing it also for non-interface kinds.
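
A standard-library sketch of both the failure and the fix: sync/atomic.Value panics when consecutive Stores use different concrete types, and wrapping in a single concrete box type avoids that.

    package main

    import (
        "errors"
        "fmt"
        "sync/atomic"
    )

    // box is the single concrete wrapper type; every Store sees the same
    // concrete type regardless of what T's dynamic type is.
    type box[T any] struct{ v T }

    // AtomicValue is a minimal sketch of the wrapper described above.
    type AtomicValue[T any] struct{ av atomic.Value }

    func (a *AtomicValue[T]) Store(v T) { a.av.Store(box[T]{v}) }

    func (a *AtomicValue[T]) Load() T {
        b, _ := a.av.Load().(box[T])
        return b.v
    }

    var errSentinel = errors.New("sentinel")

    func main() {
        var a AtomicValue[error] // T is an interface kind
        a.Store(fmt.Errorf("wrapped: %w", errSentinel)) // concrete type *fmt.wrapError
        a.Store(errSentinel)                            // different concrete type: fine, thanks to box
        fmt.Println(a.Load())

        // Storing two different concrete error types directly in an
        // atomic.Value would panic with "inconsistently typed value":
        //   var raw atomic.Value
        //   raw.Store(fmt.Errorf("wrapped: %w", errSentinel)) // *fmt.wrapError
        //   raw.Store(errSentinel)                            // *errors.errorString -> panic
    }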

Updates #cleanup

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2024-04-30 14:27:58 -07:00
Irbe Krumina 44aa809cb0
cmd/{k8s-nameserver,k8s-operator},k8s-operator: add a kube nameserver, make operator deploy it (#11919)
* cmd/k8s-nameserver,k8s-operator: add a nameserver that can resolve ts.net DNS names in cluster.

Adds a simple nameserver that can respond to A record queries for ts.net DNS names.
It can respond to queries from in-memory records, populated from a ConfigMap
mounted at /config. It dynamically updates its records as the ConfigMap
contents changes.
It will respond with NXDOMAIN to queries for any other record types
(AAAA to be implemented in the future).
It can respond to queries over UDP or TCP. It runs a miekg/dns
DNS server with a single registered handler for ts.net domain names.
Queries for other domain names will be refused.

The intended use of this is:
1) to allow non-tailnet cluster workloads to talk to HTTPS tailnet
services exposed via Tailscale operator egress over HTTPS
2) to allow non-tailnet cluster workloads to talk to workloads in
the same cluster that have been exposed to tailnet over their
MagicDNS names but on their cluster IPs.

DNSConfig CRD can be used to configure
the operator to deploy kube nameserver (./cmd/k8s-nameserver) to cluster.

Updates tailscale/tailscale#10499

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-04-30 20:18:23 +01:00
Shaw Drastin 1fe073098c
Reset dial plan when switching profile (#11933)
When switching profile, the server URL can change (e.g.
because of switching to a self-hosted headscale instance).

If it is not reset here, dial plans returned by the old
server (e.g. the Tailscale control server) will be used to
connect to the new server (e.g. a self-hosted headscale server),
and the register request will be blocked until it times out,
leading to very slow profile switches.

Updates #11938

Signed-off-by: Shaw Drastin <showier.drastic0a@icloud.com>
2024-04-30 13:42:49 -04:00
Jordan Whited a47ce618bd
net/tstun: implement env var for disabling UDP GRO on Linux (#11924)
Certain device drivers (e.g. vxlan, geneve) do not properly handle
coalesced UDP packets later in the stack, resulting in packet loss.

Updates #11026

Signed-off-by: Jordan Whited <jordan@tailscale.com>
2024-04-30 09:14:02 -07:00
Mario Minardi ec04c677c0
api.md: add documentation for new split DNS endpoints (#11922)
Add documentation for GET/PATCH/PUT `api/v2/tailnet/<ID>/dns/split-dns`.
These endpoints allow for reading, partially updating, and replacing the
split DNS settings for a given tailnet.

Updates https://github.com/tailscale/corp/issues/19483

Signed-off-by: Mario Minardi <mario@tailscale.com>
2024-04-30 09:42:33 -06:00
Andrew Lytvynov 7ba8f03936
ipn/ipnlocal: fix TestOnTailnetDefaultAutoUpdate on unsupported platforms (#11921)
Fixes #11894

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2024-04-29 14:35:29 -06:00
Irbe Krumina 7d9c3f9897
cmd/k8s-operator/deploy/manifests: check if IPv6 module is loaded before using it (#11867)
Before attempting to enable IPv6 forwarding in the proxy init container
check if the relevant module is found, else the container crashes
on hosts that don't have it.

Updates #11860

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-04-29 21:12:23 +01:00
Andrew Lytvynov d02f1be46a
scripts/installer.sh: enable Alpine community repo if needed (#11837)
The tailscale package is in the community Alpine repo. Check if it's
commented out in `/etc/apk/repositories` and run `setup-apkrepos -c -1`
if it's not.

Fixes #11263

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2024-04-29 13:23:46 -06:00
Claire Wang 5254f6de06
tailcfg: add suggest exit node UI node attribute (#11918)
Add node attribute to determine whether or not to show suggested exit
node in UI.
Updates tailscale/corp#19515

Signed-off-by: Claire Wang <claire@tailscale.com>
2024-04-29 15:20:52 -04:00
Andrew Lytvynov ce5c80d0fe
clientupdate: exec systemctl instead of using dbus to restart (#11923)
Shell out to "systemctl", which lets us drop an extra dependency.

Updates https://github.com/tailscale/corp/issues/18935

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2024-04-29 13:16:40 -06:00
Fran Bull 6a0fbacc28 appc: setting AdvertiseRoutes explicitly discards app connector routes
This fixes bugs where, after using the CLI to set AdvertiseRoutes, users
were finding that they had to restart tailscaled before the app
connector would advertise previously learned routes again. It also seems
more in line with user expectations.

Fixes #11006
Signed-off-by: Fran Bull <fran@tailscale.com>
2024-04-29 11:40:04 -07:00
Fran Bull c27dc1ca31 appc: unadvertise routes when reconfiguring app connector
If the controlknob to persist app connector routes is enabled, when
reconfiguring an app connector unadvertise routes that are no longer
relevant.

Updates #11008
Signed-off-by: Fran Bull <fran@tailscale.com>
2024-04-29 11:40:04 -07:00
Fran Bull fea2e73bc1 appc: write discovered domains to StateStore
If the controlknob is on.
This will allow us to remove discovered routes associated with a
particular domain.

Updates #11008
Signed-off-by: Fran Bull <fran@tailscale.com>
2024-04-29 11:40:04 -07:00
Fran Bull 1bd1b387b2 appc: add flag shouldStoreRoutes and controlknob for it
When an app connector is reconfigured and domains to route are removed,
we would like to no longer advertise routes that were discovered for
those domains. In order to do this we plan to store which routes were
discovered for which domains.

Add a controlknob so that we can enable/disable the new behavior.

Updates #11008
Signed-off-by: Fran Bull <fran@tailscale.com>
2024-04-29 11:40:04 -07:00
Fran Bull 79836e7bfd appc: add RouteInfo struct and persist it to StateStore
Lays the groundwork for the ability to persist app connectors discovered
routes, which will allow us to stop advertising routes for a domain if
the app connector no longer monitors that domain.

Updates #11008
Signed-off-by: Fran Bull <fran@tailscale.com>
2024-04-29 11:40:04 -07:00
Andrew Dunham b2b49cb3d5 wgengine/wgcfg/nmcfg: skip expired peers
Updates tailscale/corp#19315

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I1ad0c8796efe3dd456280e51efaf81f6d2049772
2024-04-29 13:48:00 -04:00
Mario Minardi 74c399483c
api.md: explicitly set content-type headers in POST CURL examples (#11916)
Explicitly set `-H "Content-Type: application/json"` in CURL examples
for POST endpoints as the default content type used by CURL is otherwise
`application/x-www-form-urlencoded` and these endpoints expect JSON data.

Updates https://github.com/tailscale/tailscale/issues/11914

Signed-off-by: Mario Minardi <mario@tailscale.com>
2024-04-29 10:25:52 -06:00
Irbe Krumina 1452faf510
cmd/containerboot,kube,ipn/store/kubestore: allow interactive login on kube, check Secret create perms, allow empty state Secret (#11326)
cmd/containerboot,kube,ipn/store/kubestore: allow interactive login and empty state Secrets, check perms

* Allow users to pre-create empty state Secrets

* Add a fake internal kube client, test functionality that has dependencies on kube client operations.

* Fix an issue where interactive login was not allowed in an edge case where state Secret does not exist

* Make the CheckSecretPermissions method report whether we have permissions to create/patch a Secret if it's determined that these operations will be needed

Updates tailscale/tailscale#11170

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-04-29 17:03:48 +01:00
Kristoffer Dalby 1e6cdb7d86 api.md: fix missing links after move of device posture
Updates tailscale/corp#18572

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-04-29 10:35:03 +02:00
Brad Fitzpatrick b9adbe2002 net/{interfaces,netmon}, all: merge net/interfaces package into net/netmon
In prep for most of the package funcs in net/interfaces becoming
methods on a long-lived netmon.Monitor that can cache things. (Many
of the funcs are very heavy to call regularly, whereas the long-lived
netmon.Monitor can subscribe to things from the OS and remember
answers to questions it's asked regularly later.)

Updates tailscale/corp#10910
Updates tailscale/corp#18960
Updates #7967
Updates #3299

Change-Id: Ie4e8dedb70136af2d611b990b865a822cd1797e5
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-28 07:34:52 -07:00
Brad Fitzpatrick 6b95219e3a net/netmon, all: add netmon.State type alias of interfaces.State
... in prep for merging the net/interfaces package into net/netmon.

This is a no-op change that updates a bunch of the API signatures ahead of
a future change to actually move things (and remove the type alias)

Updates tailscale/corp#10910
Updates tailscale/corp#18960
Updates #7967
Updates #3299

Change-Id: I477613388f09389214db0d77ccf24a65bff2199c
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-28 07:34:52 -07:00
Irbe Krumina 45f0721530
cmd/containerboot: wait on tailscaled process only (#11897)
Modifies containerboot to wait on the tailscaled process
only, not on any child process of containerboot.
Waiting on any subprocess was racing with Go's
exec.Cmd.Run, which is used to run iptables commands and
starts its own subprocesses and waits on them.

Containerboot itself does not run anything else
except for tailscaled, so there shouldn't be a need
to wait on anything else.

Updates tailscale/tailscale#11593

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-04-27 20:28:09 +01:00
Brad Fitzpatrick 3672f29a4e net/netns, net/dns/resolver, etc: make netmon required in most places
The goal is to move more network state accessors to netmon.Monitor
where they can be cheaper/cached. But first (this change and others)
we need to make sure the one netmon.Monitor is plumbed everywhere.

Some notable bits:

* tsdial.NewDialer is added, taking a now-required netmon

* because a tsdial.Dialer always has a netmon, anything taking both
  a Dialer and a NetMon is now redundant; take only the Dialer and
  get the NetMon from that if/when needed.

* netmon.NewStatic is added, primarily for tests

Updates tailscale/corp#10910
Updates tailscale/corp#18960
Updates #7967
Updates #3299

Change-Id: I877f9cb87618c4eb037cee098241d18da9c01691
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-27 12:17:45 -07:00
Brad Fitzpatrick 4f73a26ea5 ipn/ipnlocal: skip TestOnTailnetDefaultAutoUpdate on macOS for now
While it's broken.

Updates #11894

Change-Id: I24698707ffe405471a14ab2683aea7e836531da8
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-27 08:37:16 -07:00
Brad Fitzpatrick 7a62dddeac net/netcheck, wgengine/magicsock: make netmon.Monitor required
This has been a TODO for ages. Time to do it.

The goal is to move more network state accessors to netmon.Monitor
where they can be cheaper/cached.

Updates tailscale/corp#10910
Updates tailscale/corp#18960
Updates #7967
Updates #3299

Change-Id: I60fc6508cd2d8d079260bda371fc08b6318bcaf1
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-26 20:23:43 -07:00
Brad Fitzpatrick 4dece0c359 net/netutil: remove a use of deprecated interfaces.GetState
I'm working on moving all network state queries to be on
netmon.Monitor, removing old APIs.

Updates tailscale/corp#10910
Updates tailscale/corp#18960
Updates #7967
Updates #3299

Change-Id: If0de137e0e2e145520f69e258597fb89cf39a2a3
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-26 18:17:27 -07:00
Brad Fitzpatrick 7f587d0321 health, wgengine/magicsock: remove last of health package globals
Fixes #11874
Updates #4136

Change-Id: Ib70e6831d4c19c32509fe3d7eee4aa0e9f233564
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-26 17:36:19 -07:00
Jonathan Nobels 71e9258ad9
ipn/ipnlocal: fix null dereference for early suggested exit node queries (#11885)
Fixes tailscale/corp#19558

A request for the suggested exit nodes that occurs too early in the
VPN lifecycle would result in a null deref of the netmap and/or
the netcheck report.  This checks both and errors out.

Signed-off-by: Jonathan Nobels <jonathan@tailscale.com>
2024-04-26 14:35:11 -07:00
Brad Fitzpatrick 745931415c health, all: remove health.Global, finish plumbing health.Tracker
Updates #11874
Updates #4136

Change-Id: I414470f71d90be9889d44c3afd53956d9f26cd61
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-26 12:03:11 -07:00
Brad Fitzpatrick a4a282cd49 control/controlclient: plumb health.Tracker
Updates #11874
Updates #4136

Change-Id: Ia941153bd83523f0c8b56852010f5231d774d91a
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-26 10:12:33 -07:00
Brad Fitzpatrick 6d69fc137f ipn/{ipnlocal,localapi},wgengine{,/magicsock}: plumb health.Tracker
Down to 25 health.Global users. After this remains controlclient &
net/dns & wgengine/router.

Updates #11874
Updates #4136

Change-Id: I6dd1856e3d9bf523bdd44b60fb3b8f7501d5dc0d
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-26 09:43:28 -07:00
Irbe Krumina df8f40905b
cmd/k8s-operator,k8s-operator: optionally serve tailscaled metrics on Pod IP (#11699)
Adds a new .spec.metrics field to ProxyClass to allow users to optionally serve
client metrics (tailscaled --debug) on <Pod-IP>:9001.
Metrics cannot currently be enabled for proxies that egress traffic to tailnet
and for Ingress proxies with tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation
(because they currently forward all cluster traffic to their respective backends).

The assumption is that users will want to have these metrics enabled
continuously to be able to monitor proxy behaviour (as opposed to enabling
them temporarily for debugging). Hence we expose them on Pod IP to make it
easier to consume them i.e via Prometheus PodMonitor.

Updates tailscale/tailscale#11292

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-04-26 08:25:06 +01:00
Brad Fitzpatrick 723c775dbb tsd, ipnlocal, etc: add tsd.System.HealthTracker, start some plumbing
This adds a health.Tracker to tsd.System, accessible via
a new tsd.System.HealthTracker method.

In the future, that new method will return a tsd.System-specific
HealthTracker, so multiple tsnet.Servers in the same process are
isolated. For now, though, it just always returns the temporary
health.Global value. That permits incremental plumbing over a number
of changes. When the second to last health.Global reference is gone,
then the tsd.System.HealthTracker implementation can return a private
Tracker.

The primary plumbing this does is adding it to LocalBackend and its
dozen and change health calls. A few misc other callers are also
plumbed. Subsequent changes will flesh out other parts of the tree
(magicsock, controlclient, etc).

Updates #11874
Updates #4136

Change-Id: Id51e73cfc8a39110425b6dc19d18b3975eac75ce
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-25 22:13:04 -07:00
Brad Fitzpatrick cb66952a0d health: permit Tracker method calls on nil receiver
In prep for tsd.System Tracker plumbing throughout tailscaled,
defensively permit all methods on Tracker to accept a nil receiver
without crashing, lest I screw something up later. (A health tracking
system that itself causes crashes would be no good.) Methods on nil
receivers should not be called, so a future change will also collect
their stacks (and panic during dev/test), but we should at least not
crash in prod.

This also locks that in with a test using reflect to automatically
call all methods on a nil receiver and check they don't crash.
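
The pattern itself is plain Go: a method with a pointer receiver may be invoked on a nil receiver, so each method can guard for nil and no-op instead of dereferencing. A minimal sketch (not the health package's actual code):

    package main

    import (
        "fmt"
        "sync"
    )

    // Tracker is a stand-in for a health-style tracker whose methods must
    // tolerate a nil receiver.
    type Tracker struct {
        mu       sync.Mutex
        warnings map[string]string
    }

    func (t *Tracker) SetWarning(key, msg string) {
        if t == nil {
            return // calling a method on a nil *Tracker is legal; just no-op
        }
        t.mu.Lock()
        defer t.mu.Unlock()
        if t.warnings == nil {
            t.warnings = make(map[string]string)
        }
        t.warnings[key] = msg
    }

    func main() {
        var t *Tracker // nil
        t.SetWarning("udp4", "unbound") // does not crash
        fmt.Println("ok")
    }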

Updates #11874
Updates #4136

Change-Id: I8e955046ebf370ec8af0c1fb63e5123e6282a9d3
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-25 20:45:57 -07:00
Chris Palmer 7349b274bd
safeweb: handle mux pattern collisions more generally (#11801)
Fixes #11800

Signed-off-by: Chris Palmer <cpalmer@tailscale.com>
2024-04-25 16:08:30 -07:00
Brad Fitzpatrick 5b32264033 health: break Warnable into a global and per-Tracker value halves
Previously it was both metadata about the class of warnable item as
well as the value.

Now it's only metadata and the value is per-Tracker.

Updates #11874
Updates #4136

Change-Id: Ia1ed1b6c95d34bc5aae36cffdb04279e6ba77015
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-25 14:40:11 -07:00
Brad Fitzpatrick ebc552d2e0 health: add Tracker type, in prep for removing global variables
This moves most of the health package global variables to a new
`health.Tracker` type.

But then rather than plumbing the Tracker in tsd.System everywhere,
this only goes halfway and makes one new global Tracker
(`health.Global`) that all the existing callers now use.

A future change will eliminate that global.

Updates #11874
Updates #4136

Change-Id: I6ee27e0b2e35f68cb38fecdb3b2dc4c3f2e09d68
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-25 13:46:22 -07:00
Claire Wang d5fc52a0f5
tailcfg: add auto exit node attribute (#11871)
Updates tailscale/corp#19515

Signed-off-by: Claire Wang <claire@tailscale.com>
2024-04-25 15:05:39 -04:00
Sonia Appasamy 18765cd4f9 release/dist/qnap: omit .qpkg.codesigning files
Updates tailscale/tailscale-qpkg#135

Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
2024-04-25 11:20:40 -04:00
Percy Wegmann 955ad12489 ipn/ipnlocal: only show Taildrive peers to which ACLs grant us access
This improves convenience and security.

* Convenience - no need to see nodes that can't share anything with you.
* Security - malicious nodes can't expose shares to peers that aren't
             allowed to access their shares.

Updates tailscale/corp#19432

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-04-24 17:49:04 -05:00
Sonia Appasamy 5d4b4ffc3c release/dist/qnap: update perms for tmpDir files
Allows all users to read all files, and .sh/.cgi files to be
executable.

Updates tailscale/tailscale-qpkg#135

Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
2024-04-24 14:48:20 -04:00
Lee Briggs 14ac41febc
cmd/k8s-operator,k8s-operator: proxyclass affinity (#11862)
add ability to set affinity rules to proxyclass

Updates #11861

Signed-off-by: Lee Briggs <lee@leebriggs.co.uk>
2024-04-24 09:31:35 -07:00
Anton Tolchanov 31e6bdbc82 ipn/ipnlocal: always stop the engine on auth when key has expired
If seamless key renewal is enabled, we typically do not stop the engine
(deconfigure networking). However, if the node key has expired there is
no point in keeping the connection up, and it might actually prevent
key renewal if auth relies on endpoints routed via app connectors.

Fixes tailscale/corp#5800

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2024-04-24 14:47:57 +01:00
Andrea Gottardo 1d3e77f373
util/syspolicy: add ReadStringArray interface (#11857)
Fixes tailscale/corp#19459

This PR adds the ability for users of the syspolicy handler to read string arrays from the MDM solution configured on the system.

Signed-off-by: Andrea Gottardo <andrea@gottardo.me>
2024-04-23 22:23:48 -07:00
Sonia Appasamy 0cce456ee5 release/dist/qnap: use tmp file directory for qpkg building
This change allows for the release/dist/qnap package to be used
outside of the tailscale repo (notably, will be used from corp),
by using an embedded file system for build files which gets
temporarily written to a new folder during qnap build runs.

Without this change, when used from corp, the release/dist/qnap
folder will fail to be found within the corp repo, causing
various steps of the build to fail.

The file renames in this change are to combine the build files
into a /files folder, separated into /scripts and /Tailscale.

Updates tailscale/tailscale-qpkg#135

Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
2024-04-23 21:34:45 -04:00
Percy Wegmann c8e912896e wgengine/router: consolidate routes before reconfiguring router for mobile clients
This helps reduce memory pressure on tailnets with large numbers
of routes.

Updates tailscale/corp#19332

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-04-23 20:15:56 -05:00
Irbe Krumina add62af7c6
util/linuxfw,go.{mod,sum}: don't log errors when deleting non-existent chains and rules (#11852)
This PR bumps iptables to a newer version that has a function to detect
'NotExists' errors and uses that function to determine whether errors
received during iptables rule and chain cleanup are because the rule/chain
does not exist; if so, the error is not logged.

Updates corp#19336

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-04-23 21:08:18 +01:00
Irbe Krumina 3af0f526b8
cmd{containerboot,k8s-operator},util/linuxfw: support ExternalName Services (#11802)
* cmd/containerboot,util/linuxfw: support proxy backends specified by DNS name

Adds support for optionally configuring containerboot to proxy
traffic to backends configured by passing TS_EXPERIMENTAL_DEST_DNS_NAME env var
to containerboot.
Containerboot will periodically (every 10 minutes) attempt to resolve
the DNS name and ensure that all traffic sent to the node's
tailnet IP gets forwarded to the resolved backend IP addresses.

Currently:
- if the firewall mode is iptables, traffic will be load balanced
across the backend IP addresses using round robin. There are
no health checks for whether the IPs are reachable.
- if the firewall mode is nftables, traffic will only be forwarded
to the first IP address in the list. This is to be improved.

* cmd/k8s-operator: support ExternalName Services

 Adds support for exposing endpoints accessible from within
a cluster to the tailnet via DNS names, using ExternalName Services.
This can be done by annotating the ExternalName Service with
tailscale.com/expose: "true" annotation.
The operator will deploy a proxy configured to route tailnet
traffic to the backend IPs that service.spec.externalName
resolves to. The backend IPs must be reachable from the operator's
namespace.

Updates tailscale/tailscale#10606

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-04-23 17:30:00 +01:00
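
For illustration, a minimal sketch of the ExternalName Service shape the operator watches for, expressed with the standard Kubernetes Go types. The annotation key tailscale.com/expose comes from the commit message; the Service name and externalName value are made up.

```
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical ExternalName Service annotated for exposure to the tailnet.
	svc := corev1.Service{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Service"},
		ObjectMeta: metav1.ObjectMeta{
			Name:        "example-backend", // illustrative name
			Annotations: map[string]string{"tailscale.com/expose": "true"},
		},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "backend.example.internal", // must be resolvable from the operator's namespace
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}
```
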
License Updater bf46bff678 licenses: update license notices
Signed-off-by: License Updater <noreply+license-updater@tailscale.com>
2024-04-23 09:10:39 -07:00
Percy Wegmann b7e5122226 util/osuser: add unit test for parseGroupIds
Updates #11682

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-04-23 08:54:17 -05:00
Andrew Dunham e985c6e58f ssh/tailssh: try fetching group IDs for user with the 'id' command
Since the tailscaled binaries that we distribute are static and don't
link cgo, we previously wouldn't fetch group IDs that are returned via
NSS. Try shelling out to the 'id' command, similar to how we call
'getent', to detect such cases.

Updates #11682

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I9bdc938bd76c71bc130d44a97cc2233064d64799
2024-04-23 08:54:17 -05:00
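
A rough sketch of the technique described above: shelling out to the standard `id` command so NSS-provided groups are visible even from a static, cgo-free binary. The helper name and error handling are illustrative, not the actual tailssh code.

```
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// groupIDsViaIDCmd is a hypothetical helper that asks the system `id`
// command for a user's numeric group IDs.
func groupIDsViaIDCmd(username string) ([]string, error) {
	// `id -G user` prints space-separated numeric group IDs and is portable
	// across Linux and the BSDs.
	out, err := exec.Command("id", "-G", username).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	ids, err := groupIDsViaIDCmd("nobody")
	fmt.Println(ids, err)
}
```
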
Kristoffer Dalby 9779eb6dba api.md: move device posture api to api.md
Updates tailscale/corp#18572

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-04-23 10:51:39 +02:00
Brad Fitzpatrick c07aa2cfed syncs: fix flaky test by deleting the code it tested (Watch)
Fixes #11766

Change-Id: Id5a875aab23eb1b48a57dc379d0cdd42412fd18b
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-22 21:16:14 -07:00
Joe Tsai 63b3c82587
ipn/local: log OS-specific diagnostic information as JSON (#11700)
There is an undocumented 16KiB limit for text log messages.
However, the limit for JSON messages is 256KiB.
Even worse, logging JSON as text results in significant overhead
since each double quote needs to be escaped.

Instead, use logger.Logf.JSON to explicitly log the info as JSON.

We also modify osdiag to return the information as structured data
rather than implicitly have the package log on our behalf.
This gives more control to the caller on how to log.

Updates #7802

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2024-04-22 16:45:01 -07:00
Andrew Lytvynov 06502b9048
ipn/ipnlocal: reset auto-updates if unsupported on profile load (#11838)
Prior to
1613b18f82 (diff-314ba0d799f70c8998940903efb541e511f352b39a9eeeae8d475c921d66c2ac),
nodes could set AutoUpdate.Apply=true on unsupported platforms via
`EditPrefs`. Specifically, this affects tailnets where default
auto-updates are on.

Fix up those invalid prefs on profile reload, as a migration.

Updates #11544

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2024-04-22 16:55:25 -06:00
Sonia Appasamy 0a84215036 release/dist/qnap: add qnap target builder
Creates a new QNAP builder target, which builds Go binaries and then uses
Docker to build them into QNAP packages. Much of the docker/script code
here is pulled over from https://github.com/tailscale/tailscale-qpkg,
with adaptation into our builder structures.

The qnap/Tailscale folder contains static resources needed to build
Tailscale qpkg packages, and is an exact copy of the existing folder
in the tailscale-qpkg repo.

Builds can be run with:
```
sudo ./tool/go run ./cmd/dist build qnap
```

Updates tailscale/tailscale-qpkg#135

Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
2024-04-22 17:43:28 -04:00
Andrew Lytvynov b743b85dad
ipn/ipnlocal,ssh/tailssh: reject c2n /update if SSH conns are active (#11820)
Since we already track active SSH connections, it's not hard to
proactively reject updates until those finish. We attempt to do the same
on the control side, but the detection latency for new connections is in
the minutes, which is not fast enough for common short sessions.

Handle a `force=true` query parameter to override this behavior, so that
control can still trigger an update on a server where some long-running
abandoned SSH session is open.

Updates https://github.com/tailscale/corp/issues/18556

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2024-04-22 10:27:12 -06:00
Brad Fitzpatrick 5100bdeba7 types/persist: remove unused field Persist.Provider
It was only obviously unused after the previous change, c39cde79d.

Updates #19334

Change-Id: I9896d5fa692cb4346c070b4a339d0d12340c18f7
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-21 10:48:25 -07:00
Brad Fitzpatrick c39cde79d2 tailcfg: remove some unused fields from RegisterResponseAuth
Fixes #19334

Change-Id: Id6463f28af23078a7bc25b9280c99d4491bd9651
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-21 10:29:19 -07:00
Brad Fitzpatrick 05bfa022f2 tailcfg: pointerify RegisterRequest.Auth, omitemptify RegisterResponseAuth
We were storing server-side lots of:

    "Auth":{"Provider":"","LoginName":"","Oauth2Token":null,"AuthKey":""},

That was about 7% of our total storage of pending RegisterRequest
bodies.

Updates tailscale/corp#19327

Change-Id: Ib73842759a2b303ff5fe4c052a76baea0d68ae7d
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-21 07:10:43 -07:00
Andrew Dunham 375617c5c8 net/tsdial: assume all connections are affected if no default route is present
If this happens, it results in us pessimistically closing more
connections than might be necessary, but is more correct since we won't
"miss" a change to the default route interface and keep trying to send
data over a nonexistent interface, or one that can't reach the internet.

Updates tailscale/corp#19124

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ia0b8b04cb8cdcb0da0155fd08751c9dccba62c1a
2024-04-19 22:14:36 -04:00
Nick Khyl 9e1c86901b wgengine\router: fix the Tailscale-In firewall rule to work on domain networks
The Network Location Awareness service identifies networks authenticated against
an Active Directory domain and categorizes them as "Domain Authenticated".
This includes the Tailscale network if a Domain Controller is reachable through it.

If a network is categorized as NLM_NETWORK_CATEGORY_DOMAIN_AUTHENTICATED,
it is not possible to override its category, and we shouldn't attempt to do so.
Additionally, our Windows Firewall rules should be compatible with both private
and domain networks.

This fixes both issues.

Fixes #11813

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-04-19 15:43:15 -05:00
Andrew Lytvynov bff527622d
ipn/ipnlocal,clientupdate: disallow auto-updates in containers (#11814)
Containers are typically immutable and should be updated as a whole (and
not individual packages within). Deny enablement of auto-updates in
containers.

Also, add the missing check in EditPrefs in LocalAPI, to catch cases
like tailnet default auto-updates getting enabled for nodes that don't
support it.

Updates #11544

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2024-04-19 14:37:21 -06:00
Andrew Lytvynov b3fb3bf084
clientupdate: return OS-specific version from LatestTailscaleVersion (#11812)
We don't always have the same latest version for all platforms (for
example, 1.64.2 is Synology+Windows only), so we should use the OS-specific
result from the pkgs JSON response instead of the main Version field.

Updates #11795

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2024-04-19 13:04:11 -06:00
Irbe Krumina bbe194c80d
cmd/k8s-operator: correctly determine cluster domain (#11512)
Kubernetes cluster domain defaults to 'cluster.local', but can also be customized.
We need to determine cluster domain to set up in-cluster forwarding to our egress proxies.
This was previously hardcoded to 'cluster.local', so the egress proxies were not usable in clusters with custom domains.
This PR ensures that we attempt to determine the cluster domain by parsing /etc/resolv.conf.
In case the cluster domain cannot be determined from /etc/resolv.conf, we fall back to 'cluster.local'.

Updates tailscale/tailscale#10399,tailscale/tailscale#11445

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-04-19 16:49:46 +01:00
Percy Wegmann d16c1293e9 ipn/ipnlocal: remove origin and referer headers from Taildrive requests
peerapi does not want these, but rclone includes them.
Removing them allows rclone to work with Taildrive configured
as a WebDAV remote.

Updates #cleanup

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-04-18 17:00:22 -05:00
Percy Wegmann 94c0403104 ipn/ipnlocal: strip origin and referer headers from Taildrive requests
peerapi does not want these, but rclone includes them.
Stripping them out allows rclone to work with Taildrive configured
as a WebDAV remote.

Updates #cleanup

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-04-18 17:00:22 -05:00
Percy Wegmann 787f8c08ec drive: rewrite Location headers
This ensures that MOVE, LOCK and any other verbs that use the Location
header work correctly.

Fixes #11758

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-04-18 15:50:18 -05:00
Claire Wang c24f2eee34
tailcfg: rename exit node destination network flow log node attribute (#11779)
Updates tailscale/corp#18625

Signed-off-by: Claire Wang <claire@tailscale.com>
2024-04-18 16:07:08 -04:00
kari-ts 048cb61dd0
interfaces: create android impl (#11784)
- Move Android impl into interfaces_android.go
- Instead of using ip route to get the interface name, use the one passed in by Android (ip route is restricted in Android 13+ per termux/termux-app#2993)

Follow-up will be to do the same for router

Fixes tailscale/corp#19215
Fixes tailscale/corp#19124

Signed-off-by: kari-ts <kari@tailscale.com>
2024-04-18 12:49:02 -07:00
Aaron Klotz 7132b782d4 hostinfo: use Distro field for distinguishing Windows Server builds
Some editions of Windows Server share the same build number as their
client counterpart; we must use an additional field found in the OS
version information to distinguish between them.

Even though "Distro" has Linux connotations, it is the most appropriate
hostinfo field. What is Windows Server if not an alternate distribution
of Windows? This PR populates Distro with "Server" when applicable.

Fixes #11785

Signed-off-by: Aaron Klotz <aaron@tailscale.com>
2024-04-18 13:48:50 -06:00
Percy Wegmann 02c6af2a69 cmd/tailscale: clarify Taildrive grants in help text
Fixes #cleanup

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-04-18 13:27:15 -05:00
Chris Palmer bdfaef4879
safeweb: allow object-src: self in CSP (#11782)
This change is safe (self is still safe, by
definition), and makes the code match the comment.

Updates #cleanup

Signed-off-by: Chris Palmer <cpalmer@tailscale.com>
2024-04-18 10:39:11 -07:00
Andrew Lytvynov e775de3c63
go.mod: bump golang.org/x/net (#11775)
One more place to pick up a fix for
https://pkg.go.dev/vuln/GO-2024-2687.

Updates https://github.com/tailscale/corp/issues/18893

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2024-04-18 09:55:34 -06:00
Adrian Dewhurst c8b0adb382 docs/windows/policy: add missing key expiration warning interval
Fixes #11345

Change-Id: Ib53b639690b77d1b7d857304dca2119f197227ce
Signed-off-by: Adrian Dewhurst <adrian@tailscale.com>
2024-04-18 10:49:14 -04:00
Brad Fitzpatrick 03d5d1f0f9 wgengine/magicsock: disable portmapper in tunchan-faked tests
Most of the magicsock tests fake the network, simulating packets going
out and coming in. There's no reason to actually hit your router to do
UPnP/NAT-PMP/PCP in tests. But while debugging thousands of
iterations of tests to deflake some things, I saw it slamming my
router. This stops that.

Updates #11762

Change-Id: I59b9f48f8f5aff1fa16b4935753d786342e87744
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-17 21:47:38 -07:00
Andrew Lytvynov 22bd506129
ipn/ipnlocal: hold the mutex when in onTailnetDefaultAutoUpdate (#11786)
Turns out, profileManager is not safe for concurrent use and I missed
all the locking infrastructure in LocalBackend, oops.

I was not able to reproduce the race even with `go test -count 100`, but
this seems like an obvious fix.

Fixes #11773

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2024-04-17 21:15:09 -06:00
Chris Palmer 88a7767492
safeweb: set SameSite=Strict, with an option for Lax (#11781)
Fixes #11780

Signed-off-by: Chris Palmer <cpalmer@tailscale.com>
2024-04-17 16:20:14 -07:00
dependabot[bot] dd48cad89a build(deps-dev): bump vite from 5.1.4 to 5.1.7 in /client/web
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 5.1.4 to 5.1.7.
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/v5.1.7/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/v5.1.7/packages/vite)

---
updated-dependencies:
- dependency-name: vite
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-17 15:16:35 -07:00
Andrew Dunham b85c2b2313 net/dns/resolver: use SystemDial in DoH forwarder
This ensures that we close the underlying connection(s) when a major
link change happens. If we don't do this, on mobile platforms switching
between WiFi and cellular can result in leftover connections in the
http.Client's connection pool which are bound to the "wrong" interface.

Updates #10821
Updates tailscale/corp#19124

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ibd51ce2efcaf4bd68e14f6fdeded61d4e99f9a01
2024-04-17 17:24:38 -04:00
Paul Scott 82394debb7 cmd/tailscale: add shell tab-completion
The approach is lifted from cobra: `tailscale completion bash` emits a bash
script for configuring the shell's autocomplete:

    . <( tailscale completion bash )

so that typing:

    tailscale st<TAB>

invokes:

    tailscale completion __complete -- st

RELNOTE=tailscale CLI now supports shell tab-completion

Fixes #3793

Signed-off-by: Paul Scott <paul@tailscale.com>
2024-04-17 18:54:10 +01:00
Brad Fitzpatrick 21a0fe1b9b ipn/store: omit AWS & Kubernetes support on 'small' Linux GOARCHes
This removes AWS and Kubernetes support from Linux binaries by default
on GOARCH values where people don't typically run on AWS or use
Kubernetes, such as 32-bit mips CPUs.

It primarily focuses on optimizing for the static binaries we
distribute. But for people building it themselves, they can set
ts_kube or ts_aws (the opposite of ts_omit_kube or ts_omit_aws) to
force it back on.

Makes tailscaled binary ~2.3MB (~7%) smaller.

Updates #7272, #10627 etc

Change-Id: I42a8775119ce006fa321462cb2d28bc985d1c146
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-17 10:20:11 -07:00
dependabot[bot] 449be38e03
build(deps): bump google.golang.org/protobuf from 1.32.0 to 1.33.0 (#11410)
* build(deps): bump google.golang.org/protobuf from 1.32.0 to 1.33.0

Bumps google.golang.org/protobuf from 1.32.0 to 1.33.0.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

* cmd/{derper,stund}: update depaware.txt

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Andrew Lytvynov <awly@tailscale.com>
2024-04-17 10:24:31 -06:00
Irbe Krumina 3ef7f895c8
go.{mod,sum}: bump nftables to the latest commit (#11772)
Updates #deps

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-04-17 16:39:10 +01:00
Andrew Dunham 226486eb9a net/interfaces: handle removed interfaces in State.Equal
This previously didn't handle the case where an interface present in s2
had been removed and was not present in s1, which caused the Equal method
to incorrectly report that the states were equal.

Updates tailscale/corp#19124

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I3af22bc631015d1ddd0a1d01bfdf312161b9532d
2024-04-17 10:34:40 -04:00
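
As a minimal sketch of the fix described above (the map shape here is illustrative; the real interfaces.State is richer), the comparison has to account for keys missing on either side:

```
package main

import (
	"fmt"
	"net/netip"
	"slices"
)

// equalInterfaces compares two interface-name -> addresses maps.
// The length check plus the one-directional key lookup together catch
// both added and removed interfaces; without the length check, an
// interface present only in s2 would be silently ignored.
func equalInterfaces(s1, s2 map[string][]netip.Prefix) bool {
	if len(s1) != len(s2) {
		return false
	}
	for name, a1 := range s1 {
		a2, ok := s2[name]
		if !ok || !slices.Equal(a1, a2) {
			return false
		}
	}
	return true
}

func main() {
	old := map[string][]netip.Prefix{"eth0": {netip.MustParsePrefix("192.0.2.1/24")}}
	cur := map[string][]netip.Prefix{} // eth0 was removed
	fmt.Println(equalInterfaces(cur, old)) // false
}
```
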
Paul Scott 454a03a766 cmd/tailscale/cli: prepend "tailscale" to usage errors
Updates #11626

Signed-off-by: Paul Scott <paul@tailscale.com>
2024-04-17 09:25:34 +01:00
Paul Scott d07ede461a cmd/tailscale/cli: fix "subcommand required" errors when typod
Fixes #11672

Signed-off-by: Paul Scott <paul@tailscale.com>
2024-04-17 09:25:34 +01:00
Paul Scott 3ff3445e9d cmd/tailscale/cli: improve ShortHelp/ShortUsage unit test, fix new errors
Updates #11364

Signed-off-by: Paul Scott <paul@tailscale.com>
2024-04-17 09:25:34 +01:00
Paul Scott eb34b8a173 cmd/tailscale/cli: remove explicit usageFunc - its default
Updates #cleanup

Signed-off-by: Paul Scott <paul@tailscale.com>
2024-04-17 09:25:34 +01:00
Paul Scott a50e4e604e cmd/tailscale/cli: remove duplicate "tailscale " in drive subcmd usage
Updates #cleanup

Signed-off-by: Paul Scott <paul@tailscale.com>
2024-04-17 09:25:34 +01:00
Paul Scott 62d4be873d cmd/tailscale/cli: fix drive --help usage identation
Updates #cleanup

Signed-off-by: Paul Scott <paul@tailscale.com>
2024-04-17 09:25:34 +01:00
Brad Fitzpatrick 7c1d6e35a5 all: use Go 1.22 range-over-int
Updates #11058

Change-Id: I35e7ef9b90e83cac04ca93fd964ad00ed5b48430
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-16 15:32:38 -07:00
Brad Fitzpatrick 068db1f972 net/interfaces: delete unused unexported function
It should've been deleted in 11ece02f52.

Updates #9040

Change-Id: If8a136bdb6c82804af658c9d2b0a8c63ce02d509
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-16 15:19:33 -07:00
Jonathan Nobels 7e2b4268d6
ipn/{localapi, ipnlocal}: forget the prior exit node when localAPI is used to zero the ExitNodeID (#11681)
Updates tailscale/corp#18724

When localAPI clients directly set ExitNodeID to "", the expected behaviour is that the prior exit node also gets zeroed - effectively setting the UI state back to 'no exit node was ever selected'.

The InternalExitNodePrior has been changed to be a non-opaque type, as it is read by the UI to render the user's last selected exit node, and must be concrete. Future-us can either break this, or deprecate it and replace it with something more interesting.

Signed-off-by: Jonathan Nobels <jonathan@tailscale.com>
2024-04-16 14:53:56 -04:00
Brad Fitzpatrick 0fba9e7570 cmd/tailscale/cli: prevent concurrent Start calls in 'up'
Seems to deflake tstest/integration tests. I can't reproduce it
anymore on one of my VMs that was consistently flaking after a dozen
runs before. Now I can run hundreds of times.

Updates #11649
Fixes #7036

Change-Id: I2f7d4ae97500d507bdd78af9e92cd1242e8e44b8
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-16 10:03:53 -07:00
Irbe Krumina 26f9bbc02b
cmd/k8s-operator,k8s-operator: document tailscale.com Custom Resource Definitions better. (#11665)
Updates tailscale/tailscale#10880

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-04-16 17:52:10 +01:00
Adrian Dewhurst ca5cb41b43 tailcfg: document use of CapMap for peers
Updates tailscale/corp#17516
Updates #11508

Change-Id: Iad2dafb38ffb9948bc2f3dfaf9c268f7d772cf56
Signed-off-by: Adrian Dewhurst <adrian@tailscale.com>
2024-04-16 11:18:29 -04:00
Brad Fitzpatrick 3c1e2bba5b ipn/ipnlocal: remove outdated iOS hacky workaround in Start
We haven't needed this hack for quite some time, Andrea says.

Updates #11649

Change-Id: Ie854b7edd0a01e92495669daa466c7c0d57e7438
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-15 22:32:30 -07:00
Brad Fitzpatrick dd6c76ea24 ipn: remove unused Options.LegacyMigrationPrefs
I'm on a mission to simplify LocalBackend.Start and its locking
and deflake some tests.

I noticed this hasn't been used since March 2023 when it was removed
from the Windows client in corp 66be796d33c.

So, delete.

Updates #11649

Change-Id: I40f2cb75fb3f43baf23558007655f65a8ec5e1b2
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-15 22:13:53 -07:00
Brad Fitzpatrick 7ec0dc3834 ipn/ipnlocal: make StartLoginInteractive take (yet unused) context
In prep for future fix to undermentioned issue.

Updates tailscale/tailscale#7036

Change-Id: Ide114db917dcba43719482ffded6a9a54630d99e
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-15 15:23:48 -07:00
Claire Wang 9171b217ba
cmd/tailscale, ipn/ipnlocal: add suggest exit node CLI option (#11407)
Updates tailscale/corp#17516

Signed-off-by: Claire Wang <claire@tailscale.com>
2024-04-15 18:14:20 -04:00
Charlotte Brandhorst-Satzkorn 449f46c207
wgengine/magicsock: rebind/restun if a syscall.EPERM error is returned (#11711)
We have seen in macOS client logs that an "operation not permitted"
(syscall.EPERM) error is returned when we attempt to send traffic.
This may be caused by security software on the client.

This change will perform a rebind and restun if we receive a
syscall.EPERM error on clients running darwin. Rebinds will only be
called if we haven't performed one specifically for an EPERM error in
the past 5 seconds.

Updates #11710

Signed-off-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
2024-04-15 13:57:55 -07:00
Will Norris 14c8b674ea Revert "licenses: add gliderlabs/ssh license"
The gliderlabs/ssh license is actually already included in the standard
package listing.  I'm not sure why I thought it wasn't.

Updates tailscale/corp#5780

This reverts commit 11dca08e93.

Signed-off-by: Will Norris <will@tailscale.com>
2024-04-15 11:21:13 -07:00
Brad Fitzpatrick 952e06aa46 wgengine/router: don't attempt route cleanup on Synology
Trying to run iptables/nftables on Synology pauses for minutes with
lots of errors and ultimately does nothing as it's not used and we
lack permissions.

This fixes a regression from db760d0bac (#11601) that landed
between Synology testing on unstable 1.63.110 and 1.64.0 being cut.

Fixes #11737

Change-Id: Iaf9563363b8e45319a9b6fe94c8d5ffaecc9ccef
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-15 09:49:25 -07:00
Irbe Krumina 38fb23f120
cmd/k8s-operator,k8s-operator: allow users to configure proxy env vars via ProxyClass (#11743)
Adds a new ProxyClass.spec.statefulSet.pod.{tailscaleContainer,tailscaleInitContainer}.Env field
that allows users to provide key-value pairs that will be set as env vars for the respective containers.
Allows overriding all containerboot env vars,
but warns that this is not supported and might break (in docs + a warning when validating ProxyClass).

Updates tailscale/tailscale#10709

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-04-15 17:24:59 +01:00
Brad Fitzpatrick 9258bcc360 Makefile: fix default SYNO_ARCH in Makefile
It was broken with the move to dist in 32e0ba5e68 which doesn't accept
amd64 anymore.

Updates #cleanup

Change-Id: Iaaaba2d73c6a09a226934fe8e5c18b16731ee7a6
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-15 08:59:48 -07:00
Brad Fitzpatrick b9aa7421d6 ipn/ipnlocal: remove some dead code (legacyBackend methods) from LocalBackend
Nothing used it.

Updates #11649

Change-Id: Ic1c331d947974cd7d4738ff3aafe9c498853689e
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-14 21:02:56 -07:00
Brad Fitzpatrick a6739c49df paths: set default state path on AIX
Updates #11361

Change-Id: I196727a540be6b7c75303f9958490b1d76189fd6
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-13 21:31:52 -07:00
Brad Fitzpatrick 271cfdb3d3 util/syspolicy: clean up doc grammar and consistency
Updates #cleanup

Change-Id: I912574cbd5ef4d8b7417b8b2a9b9a2ccfef88840
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-13 18:40:05 -07:00
Brad Fitzpatrick bad3159b62 ipn/ipnlocal: delete useless SetControlClientGetterForTesting use
Updates #11649

Change-Id: I56c069b9c97bd3e30ff87ec6655ec57e1698427c
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-13 18:06:06 -07:00
Brad Fitzpatrick 8186cd0349 ipn/ipnlocal: delete redundant TestStatusWithoutPeers
We have tstest/integration nowadays.

And this test was one of the lone holdouts using the to-be-nuked
SetControlClientGetterForTesting.

Updates #11649

Change-Id: Icf8a6a2e9b8ae1ac534754afa898c00dc0b7623b
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-13 16:35:02 -07:00
Brad Fitzpatrick 68043a17c2 ipn/ipnlocal: centralize assignments to cc + ccAuto in new method
cc vs ccAuto is a mess. It needs to go. But this is a baby step towards
getting there.

Updates #11649

Change-Id: I34f33934844e580bd823a7d8f2b945cf26c87b3b
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-13 16:35:02 -07:00
Brad Fitzpatrick 970b1e21d0 ipn/ipnlocal: inline assertClientLocked into its now sole caller
Updates #11649

Change-Id: I8e2a5e59125a0cad5c0a8c9ed8930585f1735d03
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-13 16:35:02 -07:00
Brad Fitzpatrick 170c618483 ipn/ipnlocal: remove dead code now that Android uses LocalAPI instead
The new Android app and its libtailscale don't use this anymore;
it uses LocalAPI like other clients now.

Updates #11649

Change-Id: Ic9f42b41e0e0280b82294329093dc6c275f41d50
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-13 15:57:50 -07:00
Flakes Updater 65f215115f go.mod.sri: update SRI hash for go.mod changes
Signed-off-by: Flakes Updater <noreply+flakes-updater@tailscale.com>
2024-04-13 11:12:06 -07:00
Brad Fitzpatrick a1abd12f35 cmd/tailscaled, net/tstun: build for aix/ppc64
At least in userspace-networking mode.

Fixes #11361

Change-Id: I78d33f0f7e05fe9e9ee95b97c99b593f8fe498f2
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-13 11:03:22 -07:00
kari-ts 1cd51f95c7
ipnlocal: enable allow LAN for android (#11709)
Updates tailscale/corp#18984
Updates tailscale/corp#18202
2024-04-12 17:01:32 -07:00
Claire Wang 976d3c7b5f
tailcfg: add exit destination for network flow logs node attribute (#11698)
Updates tailscale/corp#18625

Signed-off-by: Claire Wang <claire@tailscale.com>
2024-04-12 16:31:27 -04:00
Joe Tsai 7a77a2edf1
logtail: optimize JSON processing (#11671)
Changes made:

* Avoid "encoding/json" for JSON processing, and instead use
"github.com/go-json-experiment/json/jsontext".
Use jsontext.Value.IsValid for validation, which is much faster.
Use jsontext.AppendQuote instead of our own JSON escaping.

* In drainPending, use a different maxLen depending on lowMem.
In lowMem mode, it is better to perform multiple uploads
than it is to construct a large body that OOMs the process.

* In drainPending, if an error is encountered draining,
construct an error message in the logtail JSON format
rather than something that is invalid JSON.

* In appendTextOrJSONLocked, use jsontext.Decoder to check
whether the input is a valid JSON object. This is faster than
the previous approach of unmarshaling into map[string]any and
then re-marshaling that data structure.
This is especially beneficial for network flow logging,
which produces relatively large JSON objects.

* In appendTextOrJSONLocked, enforce maxSize on the input.
If too large, then we may end up in a situation where the logs
can never be uploaded because it exceeds the maximum body size
that the Tailscale logs service accepts.

* Use "tailscale.com/util/truncate" to properly truncate a string
on valid UTF-8 boundaries.

* In general, remove unnecessary spaces in JSON output.

Performance:

    name       old time/op    new time/op    delta
    WriteText     776ns ± 2%     596ns ± 1%   -23.24%  (p=0.000 n=10+10)
    WriteJSON     110µs ± 0%       9µs ± 0%   -91.77%  (p=0.000 n=8+8)

    name       old alloc/op   new alloc/op   delta
    WriteText      448B ± 0%        0B       -100.00%  (p=0.000 n=10+10)
    WriteJSON    37.9kB ± 0%     0.0kB ± 0%   -99.87%  (p=0.000 n=10+10)

    name       old allocs/op  new allocs/op  delta
    WriteText      1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
    WriteJSON     1.08k ± 0%     0.00k ± 0%   -99.91%  (p=0.000 n=10+10)

For text payloads, this is 1.30x faster.
For JSON payloads, this is 12.2x faster.

Updates #cleanup
Updates tailscale/corp#18514

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2024-04-12 12:05:36 -07:00
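
A tiny sketch of the validation switch described above, using the jsontext helpers the commit names; treat the exact signatures and flow as assumptions for illustration rather than an excerpt of logtail.

```
package main

import (
	"fmt"

	"github.com/go-json-experiment/json/jsontext"
)

func main() {
	line := []byte(`{"text":"hello","v":1}`)

	// jsontext.Value.IsValid checks the bytes directly, avoiding the old
	// expensive path of unmarshaling into map[string]any and re-marshaling.
	if jsontext.Value(line).IsValid() {
		fmt.Println("already valid JSON; append as-is")
	} else {
		fmt.Println("not JSON; quote it as a text log entry") // e.g. via jsontext.AppendQuote
	}
}
```
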
Aaron Klotz 4d5d669cd5 net/dns: unconditionally write NRPT rules to local settings
We were being too aggressive when deciding whether to write our NRPT rules
to the local registry key or the group policy registry key.

After once again reviewing the document which calls itself a spec
(see issue), it is clear that the presence of the DnsPolicyConfig subkey
is the important part, not the presence of values set in the DNSClient
subkey. Furthermore, a footnote indicates that the presence of
DnsPolicyConfig in the GPO key will always override its counterpart in
the local key. The implication of this is important: we may unconditionally
write our NRPT rules to the local key. We copy our rules to the policy
key only when it contains NRPT rules belonging to somebody other than us.

Fixes https://github.com/tailscale/corp/issues/19071

Signed-off-by: Aaron Klotz <aaron@tailscale.com>
2024-04-12 11:56:26 -06:00
License Updater 9d021579e7 licenses: update license notices
Signed-off-by: License Updater <noreply+license-updater@tailscale.com>
2024-04-12 10:47:24 -07:00
Will Norris 11dca08e93 licenses: add gliderlabs/ssh license
This package is included in the tempfork directory, rather than as a Go
module dependency, so it is not included in the normal package list.

Updates tailscale/corp#5780

Signed-off-by: Will Norris <will@tailscale.com>
2024-04-11 16:22:23 -07:00
Jenny Zhang 2207643312 VERSION.txt: this is v1.65.0
Signed-off-by: Jenny Zhang <jz@tailscale.com>
2024-04-11 14:20:42 -04:00
Jenny Zhang 09524b58f3 VERSION.txt: this is v1.64.0
Signed-off-by: Jenny Zhang <jz@tailscale.com>
2024-04-11 14:00:11 -04:00
James Tucker a2eb1c22b0 wgengine/magicsock: allow disco communication without known endpoints
Just because we don't have known endpoints for a peer does not mean that
the peer should become unreachable. If we know the peer's key, it should
be able to call us, and then we can talk back via whatever path it called
us on. First step: don't drop the packet in this context.

Updates tailscale/corp#19106

Signed-off-by: James Tucker <james@tailscale.com>
2024-04-11 09:29:49 -07:00
Patrick O'Doherty 7f4cda23ac
scripts/installer.sh: add rpm GPG key import (#11686)
Extend the `zypper` install to import the GPG key used to sign
the repository packages.

Updates #11635

Signed-off-by: Patrick O'Doherty <patrick@tailscale.com>
2024-04-10 16:58:35 -07:00
James Tucker 8fa3026614 tsweb: switch to fastuuid for request ID generation
Request ID generation appears prominently in some services' cumulative
allocation rate, and while this does not eradicate the issue (the API
still makes UUID objects), it does reduce the overhead of this API and
the amount of garbage that it produces.

Updates tailscale/corp#18266
Updates tailscale/corp#19054

Signed-off-by: James Tucker <james@tailscale.com>
2024-04-09 14:05:20 -07:00
James Tucker d0f3fa7d7e util/fastuuid: add a more efficient uuid generator
This still generates github.com/google/uuid UUID objects, but does so
using a ChaCha8 CSPRNG from the stdlib rand/v2 package. The public API
is backed by a sync.Pool to provide good performance in highly
concurrent operation.

Under high load the read API produces a lot of extra garbage and
overhead by way of temporaries and syscalls. This implementation reduces
both to minimal levels, and avoids any long held global lock by
utilizing sync.Pool.

Updates tailscale/corp#18266
Updates tailscale/corp#19054

Signed-off-by: James Tucker <james@tailscale.com>
2024-04-09 14:05:20 -07:00
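
A rough sketch of the generator described above: a ChaCha8 source from math/rand/v2 seeded from crypto/rand, kept in a sync.Pool, producing github.com/google/uuid values. Names and structure here are assumptions for illustration, not the util/fastuuid source.

```
package main

import (
	cryptorand "crypto/rand"
	"encoding/binary"
	"fmt"
	randv2 "math/rand/v2"
	"sync"

	"github.com/google/uuid"
)

// pool hands out per-use ChaCha8 generators so hot paths don't serialize
// on one global lock.
var pool = sync.Pool{
	New: func() any {
		var seed [32]byte
		if _, err := cryptorand.Read(seed[:]); err != nil {
			panic(err)
		}
		return randv2.NewChaCha8(seed)
	},
}

// newUUID returns a random, version-4-shaped UUID from a pooled ChaCha8 source.
func newUUID() uuid.UUID {
	rng := pool.Get().(*randv2.ChaCha8)
	defer pool.Put(rng)

	var u uuid.UUID
	binary.BigEndian.PutUint64(u[0:8], rng.Uint64())
	binary.BigEndian.PutUint64(u[8:16], rng.Uint64())
	u[6] = (u[6] & 0x0f) | 0x40 // version 4
	u[8] = (u[8] & 0x3f) | 0x80 // RFC 4122 variant
	return u
}

func main() {
	fmt.Println(newUUID())
}
```
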
James Tucker db760d0bac cmd/tailscaled: move cleanup to an implicit action during startup
This removes a potential boot delay for certain boot topologies that
block on ExecStartPre steps which may have socket-activation
dependencies on other system services (such as systemd-resolved and
NetworkManager).

Also rename "cleanup" to "clean up" in affected/immediately nearby
places per code review commentary.

Fixes #11599

Signed-off-by: James Tucker <james@tailscale.com>
2024-04-09 12:44:08 -07:00
Nick Khyl 8d83adde07 util/winutil/winenv: add package for current Windows environment details
Package winenv provides information about the current Windows environment.
This includes details such as whether the device is a server or workstation,
and if it is AD domain-joined, MDM-registered, or neither.

Updates tailscale/corp#18342

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-04-09 13:25:37 -05:00
Paul Scott da4e92bf01 cmd/tailscale/cli: prefix all --help usages with "tailscale ...", some tidying
Also capitalises the start of all ShortHelp, allows subcommands to be hidden
with a "HIDDEN: " prefix in their ShortHelp, and adds a TS_DUMP_HELP envknob
to look at all --help messages together.

Fixes #11664

Signed-off-by: Paul Scott <paul@tailscale.com>
2024-04-09 12:52:34 +01:00
Percy Wegmann 9da135dd64 cmd/tailscale/cli: moved share.go to drive.go
Updates tailscale/corp#16827

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-04-08 20:11:20 -05:00
Percy Wegmann 1e0ebc6c6d cmd/tailscale/cli: rename share command to drive
Updates tailscale/corp#16827

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-04-08 20:11:20 -05:00
Joe Tsai b4ba492701
logtail: require Buffer.Write to not retain the provided slice (#11617)
Buffer.Write has the exact same signature as io.Writer.Write.
The latter requires that implementations never retain
the provided input buffer, which is an expectation that most
users will have when they see a Write signature.

The current behavior of Buffer.Write where it does retain
the input buffer is a risky precedent to set.
Switch the behavior to match io.Writer.Write.

There are only two implementations of Buffer in existence:
* logtail.memBuffer
* filch.Filch

The former can be fixed by cloning the input to Write.
This will cause an extra allocation in every Write,
but we can fix that with pooling on the caller side
in a follow-up PR.

The latter only passes the input to os.File.Write,
which does respect the io.Writer.Write requirements.

Updates #cleanup
Updates tailscale/corp#18514

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2024-04-08 15:01:07 -07:00
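
A minimal sketch of what the new contract implies for an in-memory buffer; the type here is a stand-in, not the actual logtail.memBuffer.

```
package main

import (
	"fmt"
	"sync"
)

// memBuffer is an illustrative in-memory log buffer.
type memBuffer struct {
	mu      sync.Mutex
	pending [][]byte
}

// Write copies p before retaining it, matching io.Writer's rule that
// implementations must not hold on to the caller's slice. The extra
// allocation per Write is the cost the commit mentions; callers can pool
// buffers to win it back.
func (b *memBuffer) Write(p []byte) (int, error) {
	buf := append([]byte(nil), p...)
	b.mu.Lock()
	defer b.mu.Unlock()
	b.pending = append(b.pending, buf)
	return len(p), nil
}

func main() {
	var b memBuffer
	line := []byte("hello")
	b.Write(line)
	line[0] = 'X' // mutating the caller's slice no longer corrupts the buffered copy
	fmt.Println(string(b.pending[0]))
}
```
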
Irbe Krumina 231e44e742
Revert "cmd/{k8s-nameserver,k8s-operator},k8s-operator: add a kube nameserver, make operator deploy it (#11017)" (#11669)
Temporarily reverting this PR to avoid releasing a
half-finished feature.

This reverts commit 9e2f58f846.

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-04-08 21:31:52 +01:00
Andrea Gottardo 0001237253 docs/policy: update ADMX and ADML files with new Windows 1.62 syspolicies
Updates ENG-2776

Updates the .admx and .adml files to include the new ManagedByOrganizationName, ManagedByCaption and ManagedByURL system policies, added in Tailscale v1.62 for Windows.

Co-authored-by: Andrea Gottardo <andrea@gottardo.me>
Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-04-08 15:21:27 -05:00
Brad Fitzpatrick b27238b654 derp/derphttp: don't block in LocalAddr method
The derphttp.Client mutex is held during connects (for up to 10
seconds) so this LocalAddr method (blocking on said mutex) could also
block for up to 10 seconds, causing a pileup upstream in
magicsock/wgengine and ultimately a watchdog timeout resulting in a
crash.

Updates #11519

Change-Id: Idd1d94ee00966be1b901f6899d8b9492f18add0f
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-08 10:57:05 -07:00
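
A sketch of the general shape of such a fix: keep the last known local address in its own small, uncontended location so LocalAddr never waits on the connect mutex. This illustrates the technique under assumed names, not the derphttp diff itself.

```
package main

import (
	"fmt"
	"net"
	"sync"
	"sync/atomic"
	"time"
)

type client struct {
	mu        sync.Mutex   // held for up to ~10s during connects
	localAddr atomic.Value // stores a net.Addr, updated once a connection is made
}

func (c *client) connect() {
	c.mu.Lock()
	defer c.mu.Unlock()
	time.Sleep(50 * time.Millisecond) // stand-in for a slow dial/handshake
	c.localAddr.Store(&net.TCPAddr{IP: net.IPv4(192, 0, 2, 1), Port: 4443})
}

// LocalAddr reads the cached address and never touches the connect mutex,
// so callers (e.g. magicsock/wgengine) can't pile up behind a slow connect.
func (c *client) LocalAddr() net.Addr {
	if a, ok := c.localAddr.Load().(net.Addr); ok {
		return a
	}
	return nil
}

func main() {
	c := new(client)
	go c.connect()
	fmt.Println(c.LocalAddr()) // returns immediately, possibly nil while still connecting
	time.Sleep(100 * time.Millisecond)
	fmt.Println(c.LocalAddr())
}
```
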
Brad Fitzpatrick e6983baa73
cmd/tailscale/cli: fix macOS crash reading envknob in init (#11667)
And add a test.

Regression from a5e1f7d703

Fixes tailscale/corp#19036

Change-Id: If90984049af0a4820c96e1f77ddf2fce8cb3043f

Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-08 10:22:31 -07:00
Chloé Vulquin 0f3a292ebd
cli/configure: respect $KUBECONFIG (#11604)
cmd/tailscale/cli: respect $KUBECONFIG

* `$KUBECONFIG` is `$PATH`-like: it defines a *list*.
`tailscale config kubeconfig` works like the rest of the
ecosystem: if $KUBECONFIG is set, it writes to the first existent file in the list; if none exist, it writes to
the final entry in the list.
* if `$KUBECONFIG` is an empty string, the old logic takes over.

Notes:

* The logic for file detection is inlined based on what `kind` does.
Technically it's a race condition, since the file could be removed/added
in between the processing steps, but the fallout shouldn't be too bad.
https://github.com/kubernetes-sigs/kind/blob/v0.23.0-alpha/pkg/cluster/internal/kubeconfig/internal/kubeconfig/paths.go

* The sandboxed (App Store) variant relies on a specific temporary
entitlement to access the ~/.kube/config file.
The entitlement is only granted to specific files, and so is not
applicable to paths supplied by the user at runtime.
While there may be other ways to achieve this access to arbitrary
kubeconfig files, it's out of scope for now.

Updates #11645

Signed-off-by: Chloé Vulquin <code@toast.bunkerlabs.net>
2024-04-08 16:49:43 +01:00
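
A small sketch of the path-selection rule described above (first existent entry in the $KUBECONFIG list, otherwise the last entry); the helper name and default path are illustrative.

```
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// kubeconfigPath returns the file to write to: the first existing entry in
// the $KUBECONFIG list, else the last entry; if $KUBECONFIG is empty, the
// caller-provided default (the old logic) applies.
func kubeconfigPath(defaultPath string) string {
	env := os.Getenv("KUBECONFIG")
	if env == "" {
		return defaultPath
	}
	paths := filepath.SplitList(env) // ":"-separated on Unix, ";" on Windows
	for _, p := range paths {
		if _, err := os.Stat(p); err == nil {
			return p
		}
	}
	return paths[len(paths)-1]
}

func main() {
	fmt.Println(kubeconfigPath(filepath.Join(os.Getenv("HOME"), ".kube", "config")))
}
```
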
Brad Fitzpatrick c71e8db058 cmd/tailscale/cli: stop spamming os.Stdout/os.Stderr in tests
After:

    bradfitz@book1pro tailscale.com % ./tool/go test -c ./cmd/tailscale/cli
    bradfitz@book1pro tailscale.com % ./cli.test
    bradfitz@book1pro tailscale.com %

Before:

    bradfitz@book1pro tailscale.com % ./tool/go test -c ./cmd/tailscale/cli
    bradfitz@book1pro tailscale.com % ./cli.test

    Warning: funnel=on for foo.test.ts.net:443, but no serve config
             run: `tailscale serve --help` to see how to configure handlers

    Warning: funnel=on for foo.test.ts.net:443, but no serve config
             run: `tailscale serve --help` to see how to configure handlers
    USAGE
      funnel <serve-port> {on|off}
      funnel status [--json]

    Funnel allows you to publish a 'tailscale serve'
    server publicly, open to the entire internet.

    Turning off Funnel only turns off serving to the internet.
    It does not affect serving to your tailnet.

    SUBCOMMANDS
      status  show current serve/funnel status
    error: path must be absolute

    error: invalid TCP source "localhost:5432": missing port in address

    error: invalid TCP source "tcp://somehost:5432"
    must be one of: localhost or 127.0.0.1

    tcp://somehost:5432error: invalid TCP source "tcp://somehost:0"
    must be one of: localhost or 127.0.0.1

    tcp://somehost:0error: invalid TCP source "tcp://somehost:65536"
    must be one of: localhost or 127.0.0.1

    tcp://somehost:65536error: path must be absolute

    error: cannot serve web; already serving TCP

    You don't have permission to enable this feature.

This also moves the color handling up to a generic spot so it's
not just one subcommand doing it itself. See
https://github.com/tailscale/tailscale/issues/11626#issuecomment-2041795129

Fixes #11643
Updates #11626

Change-Id: I3a49e659dcbce491f4a2cb784be20bab53f72303
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-08 06:46:45 -07:00
Anton Tolchanov 5336362e64 prober: export probe class and metrics from bandwidth prober
- Wrap each prober function into a probe class that allows associating
  metric labels and custom metrics with a given probe;
- Make sure all existing probe classes set a `class` metric label;
- Move bandwidth probe size from being a metric label to a separate
  gauge metric; this will make it possible to use it to calculate
  average used bandwidth using a PromQL query;
- Also export transfer time for the bandwidth prober (more accurate than
  the total probe time, since it excludes connection establishment
  time).

Updates tailscale/corp#17912

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2024-04-08 12:02:58 +01:00
Anton Tolchanov 21671ca374 prober: remove unused notification code
Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2024-04-08 12:02:58 +01:00
Brad Fitzpatrick b0fbd85592 net/tsdial: partially fix "tailscale nc" (UserDial) on macOS
At least in the case of dialing a Tailscale IP.

Updates #4529

Change-Id: I9fd667d088a14aec4a56e23aabc2b1ffddafa3fe
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-07 16:04:32 -07:00
Brad Fitzpatrick a5e1f7d703 ipn/{ipnlocal,localapi}: add API to toggle use of exit node
This is primarily for GUIs, so they don't need to remember the most
recently used exit node themselves.

This adds some CLI commands, but they're disabled and behind the WIP
envknob, as we need to consider naming (on/off is ambiguous with
running an exit node, etc) as well as automatic exit node selection in
the future. For now the CLI commands are effectively developer debug
things to test the LocalAPI.

Updates tailscale/corp#18724

Change-Id: I9a32b00e3ffbf5b29bfdcad996a4296b5e37be7e
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-07 16:01:00 -07:00
Maisem Ali 3f4c5daa15 wgengine/netstack: remove SubnetRouterWrapper
It was used when we only supported subnet routers on linux
and would nil out the SubnetRoutes slice as no other router
worked with it, but now we support subnet routers on ~all platforms.

The field it was setting to nil is now only used for network logging
and nowhere else, so keep the field but drop the SubnetRouterWrapper
as it's not useful.

Updates #cleanup

Change-Id: Id03f9b6ec33e47ad643e7b66e07911945f25db79
Signed-off-by: Maisem Ali <maisem@tailscale.com>
2024-04-07 15:44:41 -07:00
alexelisenko fe22032fb3
net/dns/{publicdns,resolver}: add start of Control D support
Updates #7946

[@bradfitz fixed up version of #8417]

Change-Id: I1dbf6fa8d525b25c0d7ad5c559a7f937c3cd142a
Signed-off-by: alexelisenko <39712468+alexelisenko@users.noreply.github.com>
Signed-off-by: Alex Paguis <alex@windscribe.com>
2024-04-07 11:55:37 -07:00
Brad Fitzpatrick aa084a29c6 ipn/ipnlocal: name the unlockOnce type, plumb more, add Unlock method
This names the func() that once-unlocks LocalBackend.mu. It does so
both for docs and because it can then have a method: Unlock, for the
few points that need to explicitly unlock early (the cause of all this
mess). This makes those ugly points easy to find, and also makes them
stricter, panicking if the mutex is already unlocked. So a
normal call to the func just once-releases the mutex, returning false
if it's already done, but the Unlock method is the strict one.

Then this uses it more, so most of the remaining b.mu.Unlock calls are
simple cases and usually defers.

Updates #11649

Change-Id: Ia070db66c54a55e59d2f76fdc26316abf0dd4627
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-06 21:49:23 -07:00
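
A compact sketch of the shape described above, simplified and under assumed names; the real type lives in ipnlocal and guards LocalBackend.mu specifically.

```
package main

import (
	"fmt"
	"sync"
)

// unlockOnce releases a mutex at most once. A plain call reports whether it
// did the release; the Unlock method is the strict variant that panics if
// the mutex was already released, making explicit early-unlock points loud.
type unlockOnce func() bool

// lockAndGetUnlock locks mu and returns its once-unlocker.
// (Not concurrency-safe as written; an illustration of the API shape only.)
func lockAndGetUnlock(mu *sync.Mutex) unlockOnce {
	mu.Lock()
	done := false
	return func() bool {
		if done {
			return false
		}
		done = true
		mu.Unlock()
		return true
	}
}

func (u unlockOnce) Unlock() {
	if !u() {
		panic("mutex already unlocked")
	}
}

func main() {
	var mu sync.Mutex
	unlock := lockAndGetUnlock(&mu)
	defer unlock() // harmless even after an explicit early Unlock
	// ... critical section ...
	unlock.Unlock() // explicit early unlock; a second Unlock would panic
	fmt.Println("done")
}
```
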
Brad Fitzpatrick 5e7c0b025c ipn/ipnlocal: add some "lockedOnEntry" helpers + guardrails, fix bug
A number of methods in LocalBackend (with suffixed "LockedOnEntry")
require b.mu be held but unlock it on the way out. That's asymmetric
and atypical and error prone.

This adds a helper method to LocalBackend that locks the mutex and
returns a sync.OnceFunc that unlocks the mutex. Then we pass around
that unlocker func down the chain to make it explicit (and somewhat
type check the passing of ownership) but also let the caller defer
unlock it, in the case of errors/panics that happen before the callee
gets around to calling the unlock.

This revealed a latent bug in LocalBackend.DeleteProfile which double
unlocked the mutex.

Updates #11649

Change-Id: I002f77567973bd77b8906bfa4ec9a2049b89836a
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-06 20:43:54 -07:00
Flakes Updater efb710d0e5 go.mod.sri: update SRI hash for go.mod changes
Signed-off-by: Flakes Updater <noreply+flakes-updater@tailscale.com>
2024-04-06 15:12:24 -07:00
Brad Fitzpatrick 38377c37b5 ipn/localapi: sort localapi handler map keys
Updates #cleanup

Change-Id: I750ed8d033954f1f8786fb35dd16895bb1c5af8e
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-05 20:44:11 -07:00
Maisem Ali 21b32b467e tsweb: handle panics in retHandler
We would have incomplete stats and missing logs in cases
of panics.

Updates tailscale/corp#18687

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2024-04-05 18:47:21 -07:00
Brad Fitzpatrick ac2522092d cmd/tailscale/cli: make exit-node list not random
The output was changing randomly per run, due to range over a map.

Then some misc style tweaks I noticed while debugging.

Fixes #11629

Change-Id: I67aef0e68566994e5744d4828002f6eb70810ee1
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-05 18:19:50 -07:00
James Tucker 6e334e64a1 net/netcheck,wgengine/magicsock: align DERP frame receive time heuristics
The netcheck package and the magicsock package coordinate via the
health package, but both sides have time-based heuristics through
indirect dependencies. These were misaligned, so the implemented
heuristic aimed at reducing DERP moves while there is active traffic
was non-operational about 3/5ths of the time.

It is problematic to set up a good test for this integration presently,
so instead I added comment breadcrumbs along with the initial fix.
Updates #8603

Signed-off-by: James Tucker <james@tailscale.com>
2024-04-05 13:04:42 -07:00
Irbe Krumina 1fbaf26106
util/linuxfw: fix chain comparison (#11639)
Don't compare pointer fields by pointer value, but by the actual value

Updates #cleanup

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-04-05 19:43:58 +01:00
Charlotte Brandhorst-Satzkorn 8c75da27fc
drive: move normalizeShareName into pkg drive and make func public (#11638)
This change makes the normalizeShareName function public, so it can be
used for validation in control.

Updates tailscale/corp#16827

Signed-off-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
2024-04-05 11:43:13 -07:00
Will Morrison 306bacc669 cmd/tailscale/cli: Add CLI command to update certs on Synology devices.
Fixes #4674

Signed-off-by: Will Morrison <william.barr.morrison@gmail.com>
2024-04-05 07:08:46 -07:00
Brad Fitzpatrick 9699bb0a20 metrics: fix outdated docs on MultiLabelMap
And make NewMultiLabelMap panic earlier (at construction time)
if the comparable struct type T violates the documented rules,
rather than panicking at Add time.

Updates #cleanup

Change-Id: Ib1a03babdd501b8d699c4f18b1097a56c916c6d5
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-04 20:53:47 -07:00
Joonas Kuorilehto fe0cfec4ad wgengine/router: enable ip forwarding on gokrazy
On Gokrazy only, set sysctls to enable IP forwarding so that subnet
routing and advertised exit nodes work.

Fixes #11405

Signed-off-by: Joonas Kuorilehto <joneskoo@derbian.fi>
2024-04-04 20:48:55 -07:00
Joe Tsai 4bbac72868
util/truncate: support []byte as well (#11614)
There are no mutations to the input,
so we can support both ~string and ~[]byte just fine.

Updates #cleanup

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2024-04-04 14:38:16 -07:00
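
A sketch of the generic signature this enables; the function body is an assumption about how a UTF-8-safe truncation typically looks, not a copy of util/truncate.

```
package main

import (
	"fmt"
	"unicode/utf8"
)

// truncate shortens s to at most n bytes without mutating the input and
// without cutting a multi-byte rune in half. The ~string | ~[]byte
// constraint is what the commit above adds support for.
func truncate[T ~string | ~[]byte](s T, n int) T {
	if len(s) <= n {
		return s
	}
	// Back up to the start of a rune so the result stays valid UTF-8.
	for n > 0 && !utf8.RuneStart(s[n]) {
		n--
	}
	return s[:n]
}

func main() {
	fmt.Println(truncate("héllo wörld", 7))           // string in, string out
	fmt.Println(string(truncate([]byte("héllo"), 3))) // []byte in, []byte out
}
```
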
Charlotte Brandhorst-Satzkorn 98cf71cd73
tailscale: switch tailfs to drive syntax for api and logs (#11625)
This change switches the API to /drive, rather than the previous /tailfs,
and updates the log lines to reflect the new value. It also
cleans up some existing tailfs references.

Updates tailscale/corp#16827

Signed-off-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
2024-04-04 13:07:58 -07:00
Percy Wegmann 853e3e29a0 wgengine/router: provide explicit hook to signal Android when VPN needs to be reconfigured
This allows clients to avoid establishing their VPN multiple times when
both routes and DNS are changing in rapid succession.

Updates tailscale/corp#18928

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-04-04 12:56:49 -05:00
Joe Tsai 1a38d2a3b4
util/zstdframe: support specifying a MaxWindowSize (#11595)
Specifying a smaller window size during compression
provides a knob to tweak the tradeoff between memory usage
and the compression ratio.

Updates tailscale/corp#18514

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2024-04-04 10:46:20 -07:00
Andrew Dunham 7d7d159824 prober: support creating multiple probes in ForEachAddr
So that we can e.g. check TLS on multiple ports for a given IP.

Updates tailscale/corp#16367

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I81d840a4c88138de1cbb2032b917741c009470e6
2024-04-04 13:04:16 -04:00
Andrew Dunham ac574d875c prober: add helper function to check all IPs for a DNS hostname
This allows us to check all IP addresses (and address families) for a
given DNS hostname while dynamically discovering new IPs and removing
old ones as they're no longer valid.

Also add a testable example that demonstrates how to use it.

Alternative to #11610
Updates tailscale/corp#16367

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I6d6f39bafc30e6dfcf6708185d09faee2a374599
2024-04-04 11:11:33 -04:00
Brad Fitzpatrick 8d7894c68e clientupdate, net/dns: fix some "tailsacle" typos
Updates #cleanup

Change-Id: I982175e74b0c8c5b3e01a573e5785e6596b7ac39
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-03 21:08:25 -07:00
Brad Fitzpatrick 92d3f64e95 go.toolchain.rev: bump to Go 1.22.2
Update tailscale/corp#18893

Change-Id: I4c04f5153ad43429d7f510c9ac2194c3b2fbc6c1
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-03 11:11:07 -07:00
Charlotte Brandhorst-Satzkorn 93618a3518
tailscale: update tailfs functions and vars to use drive naming (#11597)
This change updates all tailfs functions and the majority of the tailfs
variables to use the new drive naming.

Updates tailscale/corp#16827

Signed-off-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
2024-04-03 10:09:58 -07:00
Brad Fitzpatrick 2409661a0d control/controlclient: delete old naclbox code, require ts2021 Noise
Updates #11585
Updates tailscale/corp#18882

Change-Id: I90e2e4a211c58d429e2b128604614dde18986442
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-03 09:17:27 -07:00
Brad Fitzpatrick b9611461e5 ipn/ipnlocal: q-encode (RFC 2047) Tailscale serve header values
Updates #11603

RELNOTE=Tailscale serve headers are now RFC 2047 Q-encoded

Change-Id: I1314b65ecf5d39a5a601676346ec2c334fdef042
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-03 09:08:29 -07:00
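
For reference, a minimal illustration of RFC 2047 Q-encoding using the standard library's mime package; this shows the encoding itself, not the ipnlocal serve code.

```
package main

import (
	"fmt"
	"mime"
)

func main() {
	// Plain ASCII values pass through unchanged; values with non-ASCII
	// bytes come out as an encoded-word like =?utf-8?q?...?=.
	for _, v := range []string{"hello", "héllo wörld"} {
		fmt.Println(mime.QEncoding.Encode("utf-8", v))
	}

	// Decoding on the receiving side:
	dec := new(mime.WordDecoder)
	s, err := dec.DecodeHeader("=?utf-8?q?h=C3=A9llo_w=C3=B6rld?=")
	fmt.Println(s, err)
}
```
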
Claire Wang 262fa8a01e
ipn/ipnlocal: populate peers' capabilities (#11365)
Populates the capabilities field of peers in ipn status.
Updates tailscale/corp#17516

Signed-off-by: Claire Wang <claire@tailscale.com>
2024-04-03 10:55:28 -04:00
James Tucker 9eaa56df93 tsweb: update doc on BucketedStatsOptions.Finish to match behavior
I originally came to update this to match the documented behavior, but
the code is deliberately avoiding this behavior currently, making it
hard to decide how to update this. For now just align the documentation
to the behavior.

Updates #cleanup

Signed-off-by: James Tucker <james@tailscale.com>
2024-04-02 17:22:59 -07:00
Charlotte Brandhorst-Satzkorn 14683371ee
tailscale: update tailfs file and package names (#11590)
This change updates the tailfs file and package names to their new
naming convention.

Updates tailscale/corp#16827

Signed-off-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
2024-04-02 13:32:30 -07:00
Brad Fitzpatrick 1c259100b0 cmd/{derper,derpprobe}: add --version flag
Fixes #11582

Change-Id: If99fc1ab6b89d624fbb07bd104dd882d2c7b50b4
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-02 12:48:07 -07:00
Patrick O'Doherty 1535d0feca
safeweb: move http.Serve for HTTP redirects into lib (#11592)
Refactor the interaction between caller/library when establishing the
HTTP to HTTPS redirects by moving the call to http.Serve into safeweb.
This makes linting for other uses of http.Serve easier without having to
account for false positives created by the old interface.

Updates https://github.com/tailscale/corp/issues/8027

Signed-off-by: Patrick O'Doherty <patrick@tailscale.com>
2024-04-02 12:04:24 -07:00
James Tucker f384742375 net/packet: allow more ICMP errors
We now allow some more ICMP errors to flow, specifically:

- ICMP parameter problem in both IPv4 and IPv6 (corrupt headers)
- ICMP Packet Too Big (for IPv6 PMTU)

Updates #311
Updates #8102
Updates #11002

Signed-off-by: James Tucker <james@tailscale.com>
2024-04-02 11:31:49 -07:00
Irbe Krumina 92ca770b8d
util/linuxfw: fix MSS clamping in nftables mode (#11588)
MSS clamping for nftables was mostly not run due to an earlier rule in the FORWARD chain issuing an accept verdict.
This commit places the clamping rule into a chain of its own to ensure that it gets run.

Updates tailscale/tailscale#11002

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-04-02 19:31:33 +01:00
Kyle Carberry 27038ee3c2 hostinfo: cache device model to speed up init
This was causing a relatively consistent ~10ms of delay on Linux.

Signed-off-by: Kyle Carberry <kyle@carberry.com>
2024-04-02 09:09:43 -07:00
Brad Fitzpatrick ec87e219ae logtail: delete unused code from old way to configure zstd
Updates #cleanup

Change-Id: I666ecf08ea67e461adf2a3f4daa9d1753b2dc1e4
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-01 20:43:06 -07:00
Joe Tsai e2586bc674
logtail: always zstd compress with FastestCompression and LowMemory (#11583)
This is based on empirical testing using actual logs data.

FastestCompression only incurs a marginal <1% compression ratio hit
for a 2.25x reduction in memory use for small payloads
(which are common if log uploads happen at a decently high frequency).
The memory savings for large payloads is much lower
(less than 1.1x reduction).

LowMemory only incurs a marginal <5% hit on performance
for a 1.6-2.0x reduction in memory use for small or large payloads.

The memory gains from both settings justify the loss of benefits,
which is arguably minimal.

tailscale/corp#18514

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2024-04-01 18:12:09 -07:00
James Tucker 7558a1d594 ipn/ipnlocal: disable sockstats on (unstable) mobile by default
We're tracking down a new instance of high memory usage, and excessive memory usage
from sockstats is definitely not going to help with debugging, so disable it by
default on mobile.

Updates tailscale/corp#18514

Signed-off-by: James Tucker <james@tailscale.com>
2024-04-01 14:44:20 -07:00
Asutorufa e20ce7bf0c net/dns: close ctx when close dns directManager
Signed-off-by: Asutorufa <16442314+Asutorufa@users.noreply.github.com>
2024-03-29 20:47:03 -07:00
Will Norris 1d2af801fa .github/workflows: remove go-licenses action
This is now handled by an action running in corp.

Updates tailscale/corp#18803

Signed-off-by: Will Norris <will@tailscale.com>
2024-03-29 19:38:13 -07:00
License Updater e80b99cdd1 licenses: update license notices
Signed-off-by: License Updater <noreply+license-updater@tailscale.com>
2024-03-29 16:32:25 -07:00
Andrew Lytvynov 5aa4cfad06
safeweb: detect mux handler conflicts (#11562)
When both muxes match, and one of them is a wildcard "/" pattern (which
is common in browser muxes), choose the more specific pattern.
If both are non-wildcard matches, there is a pattern overlap, so return
an error.

Updates https://github.com/tailscale/corp/issues/8027

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2024-03-29 16:07:09 -06:00
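
A sketch of the conflict-resolution rule described above, using net/http's ServeMux.Handler to inspect which pattern each mux would match; the mux names and overall wiring are illustrative rather than safeweb's actual fields.

```
package main

import (
	"fmt"
	"net/http"
)

// pick resolves a request matched by both an API mux and a browser mux:
// prefer the more specific pattern when the other match is just the
// wildcard "/", and report an error when two non-wildcard patterns overlap.
func pick(apiMux, browserMux *http.ServeMux, r *http.Request) (http.Handler, error) {
	apiH, apiPat := apiMux.Handler(r) // pattern is "" if nothing matches
	browserH, browserPat := browserMux.Handler(r)
	switch {
	case apiPat == "":
		return browserH, nil
	case browserPat == "":
		return apiH, nil
	case browserPat == "/" && apiPat != "/":
		return apiH, nil
	case apiPat == "/" && browserPat != "/":
		return browserH, nil
	default:
		return nil, fmt.Errorf("conflicting patterns: api %q vs browser %q", apiPat, browserPat)
	}
}

func main() {
	api, browser := http.NewServeMux(), http.NewServeMux()
	api.HandleFunc("/api/", func(w http.ResponseWriter, r *http.Request) {})
	browser.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {})

	r, _ := http.NewRequest("GET", "http://example.com/api/foo", nil)
	_, err := pick(api, browser, r)
	fmt.Println("conflict:", err) // nil: /api/ wins over the browser wildcard "/"
}
```
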
Brad Fitzpatrick e7599c1f7e logtail: prevent js/wasm clients from picking TLS client cert
Corp details:
https://github.com/tailscale/corp/issues/18177#issuecomment-2026598715
https://github.com/tailscale/corp/pull/18775#issuecomment-2027505036

Updates tailscale/corp#18177

Change-Id: I7c03a4884540b8519e0996088d085af77991f477
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-03-29 13:18:33 -07:00
Irbe Krumina 5fb721d4ad
util/linuxfw,wgengine/router: skip IPv6 firewall configuration in partial iptables mode (#11546)
We have hosts that support IPv6, but not IPv6 firewall configuration
in iptables mode.
We also have hosts that have some support for IPv6 firewall
configuration in iptables mode, but do not have iptables filter table.
We should:
- configure ip rules for all hosts that support IPv6
- only configure firewall rules in iptables mode if the host
has iptables filter table.

Updates tailscale/tailscale#11540

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-03-29 05:23:03 +00:00
Patrick O'Doherty af61179c2f
safeweb: add opt-in inline style CSP toggle (#11551)
Allow the use of inline styles with safeweb via an opt-in configuration
item. This will append `style-src "self" "unsafe-inline"` to the default
CSP. The `style-src` directive will be used in lieu of the fallback
`default-src "self"` directive.

Updates tailscale/corp#8027

Signed-off-by: Patrick O'Doherty <patrick@tailscale.com>
2024-03-28 13:15:01 -07:00
Brad Fitzpatrick b0941b79d6 tsweb: make BucketedStats not track 400s, 404s, etc
Updates tailscale/corp#18687

Change-Id: I142ccb1301ec4201c70350799ff03222bce96668
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-03-28 08:56:33 -07:00
Brad Fitzpatrick 354cac74a9 tsweb/varz: add charset=utf-8 to varz handler
Some of our labels contain UTF-8 and get mojibaked in the browser
right now.

Updates tailscale/corp#18687

Change-Id: I6069cffd6cc8813df415f06bb308bc2fc3ab65c4
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-03-27 19:56:22 -07:00
James Tucker 9401b09028 control/controlclient: move client watchdog to cover initial request
The initial control client request can get stuck in the event that a
connection is established but then lost part way through, without any
ICMP or RST. Ensure that the control client will be restarted by timing
out that initial request as well.

Fixes #11542

Signed-off-by: James Tucker <james@tailscale.com>
2024-03-27 16:02:52 -07:00
Irbe Krumina 9b5176c4d9
cmd/k8s-operator: fix failing tests (#11541)
Updates #cleanup

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-03-27 20:56:07 +00:00
Irbe Krumina 9e2f58f846
cmd/{k8s-nameserver,k8s-operator},k8s-operator: add a kube nameserver, make operator deploy it (#11017)
* cmd/k8s-nameserver,k8s-operator: add a nameserver that can resolve ts.net DNS names in cluster.

Adds a simple nameserver that can respond to A record queries for ts.net DNS names.
It can respond to queries from in-memory records, populated from a ConfigMap
mounted at /config. It dynamically updates its records as the ConfigMap
contents change.
It will respond with NXDOMAIN to queries for any other record types
(AAAA to be implemented in the future).
It can respond to queries over UDP or TCP. It runs a miekg/dns
DNS server with a single registered handler for ts.net domain names.
Queries for other domain names will be refused.

The intended use of this is:
1) to allow non-tailnet cluster workloads to talk to HTTPS tailnet
services exposed via Tailscale operator egress over HTTPS
2) to allow non-tailnet cluster workloads to talk to workloads in
the same cluster that have been exposed to tailnet over their
MagicDNS names but on their cluster IPs.

Updates tailscale/tailscale#10499

Signed-off-by: Irbe Krumina <irbe@tailscale.com>

* cmd/k8s-operator/deploy/crds,k8s-operator: add DNSConfig CustomResource Definition

DNSConfig CRD can be used to configure
the operator to deploy kube nameserver (./cmd/k8s-nameserver) to cluster.

Signed-off-by: Irbe Krumina <irbe@tailscale.com>

* cmd/k8s-operator,k8s-operator: optionally reconcile nameserver resources

Adds a new reconciler that reconciles DNSConfig resources.
If a DNSConfig is deployed to cluster,
the reconciler creates kube nameserver resources.
This reconciler is only responsible for creating
nameserver resources and not for populating nameserver's records.

Signed-off-by: Irbe Krumina <irbe@tailscale.com>

* cmd/{k8s-operator,k8s-nameserver}: generate DNSConfig CRD for charts, append to static manifests

Signed-off-by: Irbe Krumina <irbe@tailscale.com>

---------

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-03-27 20:18:17 +00:00
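A minimal sketch of a ts.net-only A-record responder of the kind described above, built on github.com/miekg/dns with an in-memory record map; the record contents, addresses, and listen port are made up, and the real nameserver populates its records from the mounted ConfigMap:

```go
package main

import (
	"log"
	"net"
	"sync"

	"github.com/miekg/dns"
)

var (
	mu      sync.RWMutex
	records = map[string]string{ // FQDN -> IPv4; illustrative contents only
		"svc.tailnet-xyz.ts.net.": "100.64.0.10",
	}
)

func handleTSNet(w dns.ResponseWriter, r *dns.Msg) {
	m := new(dns.Msg)
	if len(r.Question) == 0 {
		m.SetRcode(r, dns.RcodeFormatError)
		w.WriteMsg(m)
		return
	}
	q := r.Question[0]
	mu.RLock()
	ip, ok := records[q.Name]
	mu.RUnlock()
	if q.Qtype != dns.TypeA || !ok { // AAAA etc. not implemented yet
		m.SetRcode(r, dns.RcodeNameError) // NXDOMAIN
		w.WriteMsg(m)
		return
	}
	m.SetReply(r)
	m.Answer = append(m.Answer, &dns.A{
		Hdr: dns.RR_Header{Name: q.Name, Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 300},
		A:   net.ParseIP(ip).To4(),
	})
	w.WriteMsg(m)
}

func main() {
	// Only ts.net names are handled; anything else is refused.
	dns.HandleFunc("ts.net.", handleTSNet)
	dns.HandleFunc(".", func(w dns.ResponseWriter, r *dns.Msg) {
		m := new(dns.Msg)
		m.SetRcode(r, dns.RcodeRefused)
		w.WriteMsg(m)
	})
	go func() {
		log.Fatal((&dns.Server{Addr: ":1053", Net: "tcp"}).ListenAndServe())
	}()
	log.Fatal((&dns.Server{Addr: ":1053", Net: "udp"}).ListenAndServe())
}
```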
Patrick O'Doherty b60c4664c7
safeweb: return http.Handler from safeweb.RedirectHTTP (#11538)
Updates #cleanup

Change the return type of the safeweb.RedirectHTTP method to a handler
that can be passed directly to http.Serve without any http.HandlerFunc
wrapping necessary.

Signed-off-by: Patrick O'Doherty <patrick@tailscale.com>
2024-03-27 11:44:17 -07:00
Brad Fitzpatrick 3e6306a782 derp/derphttp: make CONNECT Host match request-target's authority-form
This CONNECT client doesn't match what Go's net/http.Transport does
(making the two values match).  This makes it match.

This is all pretty unspecified but most clients & doc examples show
these matching. And some proxy implementations (such as Zscaler) care.

Updates tailscale/corp#18716

Change-Id: I135c5facbbcec9276faa772facbde1bb0feb2d26
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-03-27 11:36:28 -07:00
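For reference, in a CONNECT request the request-target is the authority form (host:port), and the point of the change above is that the Host header should carry the same value. A hand-rolled sketch over a raw TCP connection (the addresses are placeholders, and this is not the derphttp code):

```go
package example

import (
	"bufio"
	"fmt"
	"net"
	"net/http"
)

// dialViaProxy establishes a CONNECT tunnel through an HTTP proxy.
func dialViaProxy(proxyAddr, target string) (net.Conn, error) {
	c, err := net.Dial("tcp", proxyAddr)
	if err != nil {
		return nil, err
	}
	// Request-target and Host header are both the authority form
	// ("host:port"), matching what net/http.Transport sends.
	fmt.Fprintf(c, "CONNECT %s HTTP/1.1\r\nHost: %s\r\n\r\n", target, target)
	resp, err := http.ReadResponse(bufio.NewReader(c), &http.Request{Method: "CONNECT"})
	if err != nil {
		c.Close()
		return nil, err
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		c.Close()
		return nil, fmt.Errorf("proxy refused CONNECT: %v", resp.Status)
	}
	return c, nil // tunnel established; speak TLS over c
}
```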
Patrick O'Doherty 8f27520633
safeweb: init (#11467)
Updates https://github.com/tailscale/corp/issues/8027

Safeweb is a wrapper around http.Server & tsnet that encodes some
application security defaults.

Safeweb asks developers to split their HTTP routes into two
http.ServeMuxs for serving browser and API-facing endpoints
respectively. It then wraps these HTTP routes with the
context-appropriate security controls.

safeweb.Server#Serve will serve the HTTP muxes over the provided
listener. Callers are responsible for creating and tearing down their
application's listeners. Applications being served over HTTPS that wish
to implement HTTP redirects can use the Server#HTTPRedirect handler to
do so.

Signed-off-by: Patrick O'Doherty <patrick@tailscale.com>
2024-03-27 10:10:59 -07:00
Andrea Gottardo 008676f76e
cmd/serve: update warning for sandboxed macOS builds (#11530) 2024-03-27 09:03:52 -07:00
Percy Wegmann 66e4d843c1 ipn/localapi: add support for multipart POST to file-put
This allows sending multiple files via Taildrop in one request.
Progress is tracked via ipn.Notify.

Updates tailscale/corp#18202

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-03-27 08:53:52 -05:00
Percy Wegmann bed818a978 ipn/localapi: add support for multipart POST to file-put
This allows sending multiple files via Taildrop in one request.
Progress is tracked via ipn.Notify.

Updates tailscale/corp#18202

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-03-27 08:53:52 -05:00
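Sending several files in one request generally means a multipart/form-data body. A client-side sketch of building such a request with the standard library; the URL and form field name are placeholders rather than the documented localapi interface:

```go
package example

import (
	"bytes"
	"io"
	"mime/multipart"
	"net/http"
	"os"
	"path/filepath"
)

// putFiles builds one multipart POST carrying every named file.
func putFiles(url string, paths []string) (*http.Request, error) {
	var body bytes.Buffer
	mw := multipart.NewWriter(&body)
	for _, p := range paths {
		f, err := os.Open(p)
		if err != nil {
			return nil, err
		}
		part, err := mw.CreateFormFile("file", filepath.Base(p)) // field name is illustrative
		if err == nil {
			_, err = io.Copy(part, f)
		}
		f.Close()
		if err != nil {
			return nil, err
		}
	}
	if err := mw.Close(); err != nil {
		return nil, err
	}
	req, err := http.NewRequest("POST", url, &body)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", mw.FormDataContentType())
	return req, nil
}
```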
Maisem Ali 0d8cd1645a go.mod: bump github.com/gaissmai/bart
To pick up https://github.com/gaissmai/bart/pull/17.

Updates #deps

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2024-03-26 16:06:52 -07:00
Percy Wegmann eb42a16da9 ipn/ipnlocal: report Taildrive access message on failed responses
For example, if we get a 404 when downloading a file, we'll report access.

Also, to reduce log verbosity, this elides zero-length files.

Updates tailscale/corp#17818

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-03-26 16:37:08 -05:00
Andrew Lytvynov 5d41259a63
cmd/tailscale/cli: remove Beta tag from tailscale update (#11529)
Updates #cleanup

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2024-03-26 15:28:34 -06:00
Charlotte Brandhorst-Satzkorn acb611f034
ipn/localipn: introduce logs for tailfs (#11496)
This change introduces some basic logging into the access and share
pathways for tailfs.

Updates tailscale/corp#17818

Signed-off-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
2024-03-26 13:14:43 -07:00
Irbe Krumina 4cbef20569
cmd/k8s-operator: redact auth key from debug logs (#11523)
Updates #cleanup

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-03-26 16:20:32 +00:00
Brad Fitzpatrick 55baf9474f metrics, tsweb/varz: add multi-label map metrics
Updates tailscale/corp#18640

Change-Id: Ia9ae25956038e9d3266ea165537ac6f02485b74c
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-03-26 09:01:04 -07:00
Flakes Updater 90a4d6ce69 go.mod.sri: update SRI hash for go.mod changes
Signed-off-by: Flakes Updater <noreply+flakes-updater@tailscale.com>
2024-03-26 08:37:52 -07:00
Brad Fitzpatrick 6d90966c1f logtail: move a scratch buffer to Logger
Rather than pass around a scratch buffer, put it on the Logger.

This is a baby step towards removing the background uploading
goroutine and starting it as needed.

Updates tailscale/corp#18514 (insofar as it led me to look at this code)

Change-Id: I6fd94581c28bde40fdb9fca788eb9590bcedae1b
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-03-25 17:33:42 -07:00
Irbe Krumina 06e22a96b1
.github/workflows: fix path filter for 'Kubernetes manifests' test job (#11520)
Updates #cleanup

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-03-25 19:44:30 +00:00
Chris Milson-Tokunaga b6dfd7443a
Change type of installCRDs (#11478)
Including the double quotes (`"`) around the value made it appear that the Helm chart expects a string value for `installCRDs`.

Signed-off-by: Chris Milson-Tokunaga <chris.w.milson@gmail.com>
2024-03-25 19:11:55 +00:00
Percy Wegmann 8b8b315258 net/tstun: use gaissmai/bart instead of tempfork/device
This implementation uses less memory than tempfork/device,
which helps avoid OOM conditions in the iOS VPN extension when
switching to a Tailnet with ExitNode routing enabled.

Updates tailscale/corp#18514

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-03-25 12:31:14 -05:00
Andrew Lytvynov 1e7050e73a
go.mod: bump github.com/docker/docker (#11515)
There's a vulnerability https://pkg.go.dev/vuln/GO-2024-2659 that
govulncheck flags, even though it's only reachable from tests and
cmd/sync-containers and cannot be exploited there.

Updates #cleanup

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2024-03-25 10:45:35 -06:00
Brad Fitzpatrick a36cfb4d3d tailcfg, ipn/ipnlocal, wgengine/magicsock: add only-tcp-443 node attr
Updates tailscale/corp#17879

Change-Id: I0dc305d147b76c409cf729b599a94fa723aef0e0
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-03-25 08:48:25 -07:00
Brad Fitzpatrick 7b34154df2 all: deprecate Node.Capabilities (more), remove PeerChange.Capabilities [capver 89]
First we had Capabilities []string. Then
https://tailscale.com/blog/acl-grants (#4217) brought CapMap, a
superset of Capabilities. Except we never really finished the
transition inside the codebase to go all-in on CapMap. This does so.

Notably, this converts Capabilities on the wire early to CapMap
internally so the code can only deal in CapMap, even against an old
control server.

In the process, this removes PeerChange.Capabilities support, which no
known control plane sent anyway. They can and should use
PeerChange.CapMap instead.

Updates #11508
Updates #4217

Change-Id: I872074e226b873f9a578d9603897b831d50b25d9
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-03-24 21:08:46 -07:00
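The conversion described above amounts to folding the legacy string slice into the capability map with empty grant values, so everything downstream consults only the map. A schematic sketch with stand-in types (the real tailcfg types differ in detail):

```go
package example

import "encoding/json"

type Capability string
type CapMap map[Capability][]json.RawMessage

// mergeLegacyCaps folds old-style Capabilities entries into the CapMap
// so the rest of the code only ever consults the map.
func mergeLegacyCaps(capMap CapMap, legacy []string) CapMap {
	if capMap == nil && len(legacy) > 0 {
		capMap = make(CapMap, len(legacy))
	}
	for _, c := range legacy {
		key := Capability(c)
		if _, ok := capMap[key]; !ok {
			capMap[key] = nil // present, but with no associated grant values
		}
	}
	return capMap
}
```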
Brad Fitzpatrick 4992aca6ec tsweb/varz: flesh out munging of expvar keys into valid Prometheus metrics
From a problem we hit with how badger registers expvars; it broke
trunkd's exported metrics.

Updates tailscale/corp#1297

Change-Id: I42e1552e25f734c6f521b6e993d57a82849464b2
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-03-24 20:06:06 -07:00
Brad Fitzpatrick b104688e04 ipn/ipnlocal, types/netmap: replace hasCapability with set lookup on NetworkMap
When node attributes were super rare, the O(n) slice scans looking for
node attributes was more acceptable. But now more code and more users
are using increasingly more node attributes. Time to make it a map.

Noticed while working on tailscale/corp#17879

Updates #cleanup

Change-Id: Ic17c80341f418421002fbceb47490729048756d2
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-03-22 15:30:46 -07:00
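The gist of the change above, with illustrative types rather than the real netmap code: build the attribute set once, then each capability check is a constant-time map lookup instead of a slice scan.

```go
package example

type nodeAttr string

// buildAttrSet is done once when the network map is constructed...
func buildAttrSet(attrs []nodeAttr) map[nodeAttr]bool {
	set := make(map[nodeAttr]bool, len(attrs))
	for _, a := range attrs {
		set[a] = true
	}
	return set
}

// ...so each capability check is O(1) rather than rescanning the slice.
func hasCapability(set map[nodeAttr]bool, a nodeAttr) bool {
	return set[a]
}
```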
Percy Wegmann 8c88853db6 ipn/ipnlocal: add c2n /debug/pprof/allocs endpoint
This behaves the same as typical debug/pprof/allocs.

Updates tailscale/corp#18514

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-03-22 17:29:59 -05:00
Brad Fitzpatrick f45594d2c9 control/controlclient: free memory on iOS before full netmap work
Updates tailscale/corp#18514

Change-Id: I8d0330334b030ed8692b25549a0ee887ac6d7188
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-03-22 09:02:19 -07:00
James Tucker e0f97738ee localapi: reduce garbage production in bus watcher
Updates #optimization

Signed-off-by: James Tucker <james@tailscale.com>
2024-03-21 17:50:20 -07:00
James Tucker 3f7313dbdb util/linuxfw,wgengine/router: enable IPv6 configuration when netfilter is disabled
Updates #11434

Signed-off-by: James Tucker <james@tailscale.com>
2024-03-21 16:10:47 -07:00
Brad Fitzpatrick 8444937c89 control/controlclient: fix panic regression from earlier load balancer hint header
In the recent 20e9f3369 we made HealthChangeRequest machine requests
include a NodeKey, as it was the oddball machine request that didn't
include one. Unfortunately, that code was sometimes being called (at
least in some of our integration tests) without a node key due to its
registration with health.RegisterWatcher(direct.ReportHealthChange).

Fortunately tests in corp caught this before we cut a release. It's
possible this only affects this particular integration test's
environment, but still worth fixing.

Updates tailscale/corp#1297

Change-Id: I84046779955105763dc1be5121c69fec3c138672
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-03-21 12:54:58 -07:00
Joe Tsai 85febda86d
all: use zstdframe where sensible (#11491)
Use the zstdframe package where sensible instead of plumbing
around our own zstd.Encoder just for stateless operations.

This causes logtail to have a dependency on zstd,
but that's arguably okay since zstd support is implicit
to the protocol between a client and the logging service.
Also, virtually every caller to logger.NewLogger was
manually setting up a zstd.Encoder anyway,
meaning that zstd was functionally always a dependency.

Updates #cleanup
Updates tailscale/corp#18514

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2024-03-21 12:20:38 -07:00
Joe Tsai d4bfe34ba7
util/zstdframe: add package for stateless zstd compression (#11481)
The Go zstd package is not friendly for stateless zstd compression.
Passing around multiple zstd.Encoder just for stateless compression
is a waste of memory since the memory is never freed and seldom
used if no compression operations are happening.

For performance, we pool the relevant Encoder/Decoder
with the specific options set.

Functionally, this package is a wrapper over the Go zstd package
with a more ergonomic API for stateless operations.

This package can be used to cleanup various pre-existing zstd.Encoder
pools or one-off handlers spread throughout our codebases.

Performance:

	BenchmarkEncode/Best               1690        610926 ns/op      25.78 MB/s           1 B/op          0 allocs/op
	    zstd_test.go:137: memory: 50.336 MiB
	    zstd_test.go:138: ratio:  3.269x
	BenchmarkEncode/Better            10000        100939 ns/op     156.04 MB/s           0 B/op          0 allocs/op
	    zstd_test.go:137: memory: 20.399 MiB
	    zstd_test.go:138: ratio:  3.131x
	BenchmarkEncode/Default            15775         74976 ns/op     210.08 MB/s         105 B/op          0 allocs/op
	    zstd_test.go:137: memory: 1.586 MiB
	    zstd_test.go:138: ratio:  3.064x
	BenchmarkEncode/Fastest            23222         53977 ns/op     291.81 MB/s          26 B/op          0 allocs/op
	    zstd_test.go:137: memory: 599.458 KiB
	    zstd_test.go:138: ratio:  2.898x
	BenchmarkEncode/FastestLowMemory                   23361         50789 ns/op     310.13 MB/s          15 B/op          0 allocs/op
	    zstd_test.go:137: memory: 334.458 KiB
	    zstd_test.go:138: ratio:  2.898x
	BenchmarkEncode/FastestNoChecksum                  23086         50253 ns/op     313.44 MB/s          26 B/op          0 allocs/op
	    zstd_test.go:137: memory: 599.458 KiB
	    zstd_test.go:138: ratio:  2.900x

	BenchmarkDecode/Checksum                           70794         17082 ns/op     300.96 MB/s           4 B/op          0 allocs/op
	    zstd_test.go:163: memory: 316.438 KiB
	BenchmarkDecode/NoChecksum                         74935         15990 ns/op     321.51 MB/s           4 B/op          0 allocs/op
	    zstd_test.go:163: memory: 316.438 KiB
	BenchmarkDecode/LowMemory                          71043         16739 ns/op     307.13 MB/s           0 B/op          0 allocs/op
	    zstd_test.go:163: memory: 79.347 KiB

We can see that the options are taking effect where compression ratio improves
with higher levels and compression speed diminishes.
We can also see that LowMemory takes effect where the pooled coder object
references less memory than other cases.
We can see that the pooling is taking effect as there are 0 amortized allocations.

Additional performance:

	BenchmarkEncodeParallel/zstd-24                     1857        619264 ns/op        1796 B/op         49 allocs/op
	BenchmarkEncodeParallel/zstdframe-24                1954        532023 ns/op        4293 B/op         49 allocs/op
	BenchmarkDecodeParallel/zstd-24                     5288        197281 ns/op        2516 B/op         49 allocs/op
	BenchmarkDecodeParallel/zstdframe-24                6441        196254 ns/op        2513 B/op         49 allocs/op

In concurrent usage, handling the pooling in this package
has a marginal benefit over the zstd package,
which relies on a Go channel as the pooling mechanism.
In particular, coders can be freed by the GC when not in use.
Coders can be shared throughout the program if they use this package
instead of multiple independent pools doing the same thing.
The allocations are unrelated to pooling as they're caused by the spawning of goroutines.

Updates #cleanup
Updates tailscale/corp#18514
Updates tailscale/corp#17653
Updates tailscale/corp#18005

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2024-03-21 11:39:20 -07:00
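A condensed sketch of the pooling idea behind such a package: keep configured coders in a sync.Pool so idle ones can be reclaimed by the GC, and expose stateless encode/decode helpers. This is an illustration built directly on github.com/klauspost/compress/zstd, not the zstdframe API:

```go
package example

import (
	"sync"

	"github.com/klauspost/compress/zstd"
)

var encPool = sync.Pool{
	New: func() any {
		enc, err := zstd.NewWriter(nil, zstd.WithEncoderLevel(zstd.SpeedFastest))
		if err != nil {
			panic(err) // options are static, so this cannot fail in practice
		}
		return enc
	},
}

// AppendEncode compresses src and appends the zstd frame to dst,
// borrowing a pooled encoder so steady-state use allocates nothing.
func AppendEncode(dst, src []byte) []byte {
	enc := encPool.Get().(*zstd.Encoder)
	defer encPool.Put(enc)
	return enc.EncodeAll(src, dst)
}

var decPool = sync.Pool{
	New: func() any {
		dec, err := zstd.NewReader(nil)
		if err != nil {
			panic(err)
		}
		return dec
	},
}

// AppendDecode decompresses the zstd frame in src, appending to dst.
func AppendDecode(dst, src []byte) ([]byte, error) {
	dec := decPool.Get().(*zstd.Decoder)
	defer decPool.Put(dec)
	return dec.DecodeAll(src, dst)
}
```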
Brad Fitzpatrick 6a860cfb35 ipn/ipnlocal: add c2n pprof option to force a GC
Like net/http/pprof has.

Updates tailscale/corp#18514

Change-Id: I264adb6dcf5732d19707783b29b7273b4ca69cf4
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-03-21 10:24:06 -07:00
Brad Fitzpatrick 5d1c72f76b wgengine/magicsock: don't use endpoint debug ringbuffer on mobile.
Save some memory.

Updates tailscale/corp#18514

Change-Id: Ibcaf3c6d8e5cc275c81f04141d0f176e2249509b
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-03-21 06:58:55 -07:00
Andrew Dunham 512fc0b502 util/reload: add new package to handle periodic value loading
This can be used to reload a value periodically, whether from disk or
another source, while handling jitter and graceful shutdown.

Updates tailscale/corp#1297

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Iee2b4385c9abae59805f642a7308837877cb5b3f
2024-03-20 18:23:54 -04:00
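A generic sketch of that shape of helper: load a value now, then reload it periodically with a little jitter, stopping when the context is canceled. The function name and signature are made up for illustration and mirror the idea rather than the util/reload API:

```go
package example

import (
	"context"
	"math/rand"
	"sync/atomic"
	"time"
)

// ReloadEvery loads a value with getValue now, then again every interval
// plus up to ~10% jitter, storing the result atomically until ctx is
// canceled (the graceful-shutdown path).
func ReloadEvery[T any](ctx context.Context, interval time.Duration, getValue func(context.Context) (T, error)) (*atomic.Pointer[T], error) {
	var cur atomic.Pointer[T]
	v, err := getValue(ctx)
	if err != nil {
		return nil, err
	}
	cur.Store(&v)

	go func() {
		for {
			jitter := time.Duration(rand.Int63n(int64(interval)/10 + 1))
			select {
			case <-ctx.Done():
				return // graceful shutdown
			case <-time.After(interval + jitter):
			}
			if v, err := getValue(ctx); err == nil {
				cur.Store(&v) // keep the previous value on error
			}
		}
	}()
	return &cur, nil
}
```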
Adrian Dewhurst 2f7e7be2ea control/controlclient: do not alias peer CapMap
Updates #cleanup

Change-Id: I10fd5e04310cdd7894a3caa3045b86eb0a06b6a0
Signed-off-by: Adrian Dewhurst <adrian@tailscale.com>
2024-03-20 15:42:44 -04:00
Percy Wegmann 067ed0bf6f ipnlocal: ensure TailFS share notifications are non-nil
This allows the UI to distinguish between 'no shares' versus
'not being notified about shares'.

Updates ENG-2843

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-03-20 13:54:33 -05:00
Brad Fitzpatrick 20e9f3369d control/controlclient: send load balancing hint HTTP request header
Updates tailscale/corp#1297

Change-Id: I0b102081e81dfc1261f4b05521ab248a2e4a1298
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-03-20 09:39:41 -07:00
Percy Wegmann 15c58cb77c tailfs: include whitespace in test share and filenames
Since TailFS allows spaces in folder and file names, test with spaces.

Updates tailscale/corp#16827

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-03-20 09:41:18 -05:00
James Tucker e37eded256
tool/gocross: add android autoflags (#11465)
Updates tailscale/corp#18202

Signed-off-by: James Tucker <james@tailscale.com>
2024-03-19 16:08:20 -07:00
Claire Wang 221de01745
control/controlclient: fix sending peer capmap changes (#11457)
Instead of just checking if a peer capmap is nil, compare the previous
state peer capmap with the new peer capmap.
Updates tailscale/corp#17516

Signed-off-by: Claire Wang <claire@tailscale.com>
2024-03-19 18:56:06 -04:00
Andrew Dunham 6da1dc84de wgengine: fix logger data race in tests
Observed in:
    https://github.com/tailscale/tailscale/actions/runs/8350904950/job/22858266932?pr=11463

Updates #11226

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I9b57db4b34b6ad91d240cd9fa7e344fc0376d52d
2024-03-19 18:51:25 -04:00
Andrew Dunham e382e4cee6 syncs: add Swap method
To mimic sync.Map.Swap, sync/atomic.Value.Swap, etc.

Updates tailscale/corp#1297

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: If7627da1bce8b552873b21d7e5ebb98904e9a650
2024-03-19 18:39:44 -04:00
Andrea Gottardo 6288c9b41e
version/prop: remove IsMacAppSandboxEnabled (#11461)
Fixes tailscale/corp#18441

For a few days, IsMacAppStore() has been returning `false` on App Store builds (IPN-macOS target in Xcode).

I regressed this in #11369 by introducing logic to detect the sandbox by checking for the APP_SANDBOX_CONTAINER_ID environment variable. I thought that was a more robust approach instead of checking the name of the executable. However, it appears that on recent macOS versions this environment variable is no longer getting set, so we should go back to the previous logic that checks for the executable path, or HOME containing references to macsys.

This PR also strengthens the checks by looking at XPC_SERVICE_NAME in addition to HOME where possible. That environment variable is set inside the network extension (both macos and macsys) and is a good fallback if for any reason HOME is not set.
2024-03-19 14:50:34 -07:00
Mario Minardi 68d9e49a5b
api.md: add missing backtick to GET searchpaths doc (#11459)
Add missing backtick to GET searchpaths api documentation.

Updates #cleanup

Signed-off-by: Mario Minardi <mario@tailscale.com>
2024-03-19 11:31:03 -06:00
Will Norris 349799a1ba api.md: format API docs with prettier
Mostly inconsequential minor fixes for consistency.  A couple of changes
to actual JSON examples, but all still very readable, so I think it's
fine.

Updates #cleanup

Signed-off-by: Will Norris <will@tailscale.com>
2024-03-19 09:23:50 -07:00
Irbe Krumina b0c3e6f6c5
cmd/k8s-operator,ipn/conf.go: fix --accept-routes for proxies (#11453)
Fix a bug where all proxies got configured with --accept-routes set to true.
The bug was introduced in https://github.com/tailscale/tailscale/pull/11238.

Updates #cleanup

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-03-19 14:54:17 +00:00
James Tucker 7fe4cbbaf3
types/views: optimize slices contains under some conditions (#11449)
In control there are conditions where the leaf functions are not being
optimized away (i.e. At is not inlined), resulting in undesirable time
spent copying during SliceContains. This optimization is likely
irrelevant to simpler code or smaller structures.

Updates #optimization

Signed-off-by: James Tucker <james@tailscale.com>
2024-03-18 16:19:16 -07:00
Mario Minardi d2ccfa4edd
cmd/tailscale,ipn/ipnlocal: enable web client over quad 100 by default (#11419)
Enable the web client over 100.100.100.100 by default. Accepting traffic
from [tailnet IP]:5252 still requires setting the `webclient` user pref.

Updates https://github.com/tailscale/tailscale/issues/10261

Signed-off-by: Mario Minardi <mario@tailscale.com>
2024-03-18 15:47:21 -06:00
Will Norris 4d747c1833 api.md: document device expiration endpoint
This was originally built for testing node expiration flows, but is also
useful for customers to force device re-auth without actually deleting
the device from the tailnet.

Updates tailscale/corp#18408

Signed-off-by: Will Norris <will@tailscale.com>
2024-03-18 12:49:55 -07:00
Mario Minardi e0886ad167
ipn/ipnlocal, tailcfg: add disable-web-client node attribute (#11418)
Add a disable-web-client node attribute and add handling for disabling
the web client when this node attribute is set.

Updates https://github.com/tailscale/tailscale/issues/10261

Signed-off-by: Mario Minardi <mario@tailscale.com>
2024-03-18 10:32:33 -06:00
Marwan Sulaiman da7c3d1753 envknob: ensure f is not nil before using it
This PR fixes a panic that I saw in the mac app where
parsing the env file fails but we never get to see the
error because calling f.Name() on the nil file panics first.

Fixes #11425

Signed-off-by: Marwan Sulaiman <marwan@tailscale.com>
2024-03-15 12:46:41 -04:00
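The pattern behind the fix, sketched generically: when os.Open fails the returned *os.File is nil, so the error report has to use the path string rather than f.Name().

```go
package example

import (
	"fmt"
	"os"
)

// parseEnvFile shows the pattern the fix points at.
func parseEnvFile(path string) error {
	f, err := os.Open(path)
	if err != nil {
		// Using f.Name() here would panic on the nil *os.File and
		// hide the real open/parse error from the user.
		return fmt.Errorf("parsing env file %q: %w", path, err)
	}
	defer f.Close()
	// ... read and parse f ...
	return nil
}
```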
Andrea Gottardo 08ebac9acb
version,cli,safesocket: detect non-sandboxed macOS GUI (#11369)
Updates ENG-2848

We can safely disable the App Sandbox for our macsys GUI, allowing us to use `tailscale ssh` and do a few other things that we've wanted to do for a while. This PR:

- allows Tailscale SSH to be used from the macsys GUI binary when called from a CLI
- tweaks the detection of client variants in prop.go, with new functions `IsMacSys()`, `IsMacSysApp()` and `IsMacAppSandboxEnabled()`

Signed-off-by: Andrea Gottardo <andrea@gottardo.me>
2024-03-14 14:28:06 -07:00
Irbe Krumina ea55f96310
cmd/tailscale/cli: fix configuring partially empty kubeconfig (#11417)
When a user deletes the last cluster/user/context from their
kubeconfig via a 'kubectl delete-[cluster|user|context]' command,
kubectx sets the relevant field in kubeconfig to 'null'.
This was breaking our conversion logic, which assumed that the field
is either non-existent or an array.

Updates tailscale/corp#18320

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-03-14 20:26:20 +00:00
Anton Tolchanov cf8948da5f net/routetable: increase route limit used by the test
I was running all tests while preparing a recent stable release, and
this was failing because my computer is connected to a fairly large
tailnet.

```
--- FAIL: TestGetRouteTable (0.01s)
    routetable_linux_test.go:32: expected at least one default route;
    ...
```

```
$ ip route show table 52  | wc -l
1051
```

Updates #cleanup

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2024-03-14 16:10:40 +00:00
Andrew Lytvynov decd9893e4
ipn/ipnlocal: validate domain of PopBrowserURL on default control URL (#11394)
If the client uses the default Tailscale control URL, validate that all
PopBrowserURLs are under tailscale.com or *.tailscale.com. This reduces
the risk of a compromised control plane opening phishing pages for
example.

The client trusts control for many other things, but this is one easy
way to reduce that trust a bit.

Fixes #11393

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2024-03-13 18:31:07 -06:00
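A sketch of the described check, using only the standard library; the exact rules applied (scheme handling, subdomain matching) are assumptions for illustration:

```go
package example

import (
	"fmt"
	"net/url"
	"strings"
)

// validPopBrowserURL only enforces the restriction when the client is
// using the default control plane, and then requires an https URL whose
// host is tailscale.com or a subdomain of it.
func validPopBrowserURL(usingDefaultControl bool, raw string) error {
	if !usingDefaultControl {
		return nil // self-hosted control servers are not restricted
	}
	u, err := url.Parse(raw)
	if err != nil {
		return err
	}
	host := u.Hostname()
	if u.Scheme == "https" && (host == "tailscale.com" || strings.HasSuffix(host, ".tailscale.com")) {
		return nil
	}
	return fmt.Errorf("refusing to open non-Tailscale popup URL %q", raw)
}
```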
Andrew Lytvynov 48eef9e6eb
clientupdate: do not allow msiexec to reboot the OS (#11409)
According to
https://learn.microsoft.com/en-us/windows/win32/msi/standard-installer-command-line-options#promptrestart,
`/promptrestart` is ignored when `/quiet` is set, so msiexec.exe can
sometimes silently trigger a reboot. The best we can do to reduce
unexpected disruption is to just prevent restarts, until the user
chooses to do it. Restarts aren't normally needed for Tailscale updates,
but there seem to be some situations where it's triggered.

Updates #18254

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2024-03-13 16:55:24 -06:00
Anton Tolchanov da3cf12194 VERSION.txt: this is v1.63.0
Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2024-03-13 14:51:52 +00:00
Anton Tolchanov f12d2557f9 prober: add a DERP bandwidth probe
Updates tailscale/corp#17912

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2024-03-13 13:36:45 +00:00
Anton Tolchanov 5018683d58 prober: remove unused derp prober latency measurements
Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2024-03-13 13:36:45 +00:00
Anton Tolchanov 205a10b51a prober: export probe counters and cumulative latency
Updates #cleanup

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2024-03-13 13:36:45 +00:00
Andrew Dunham 7429e8912a wgengine/netstack: fix bug with duplicate SYN packets in client limit
This fixes a bug that was introduced in #11258 where the handling of the
per-client limit didn't properly account for the fact that the gVisor
TCP forwarder will return 'true' to indicate that it's handled a
duplicate SYN packet, but not launch the handler goroutine.

In such a case, we neither decremented our per-client limit in the
wrapper function, nor did we do so in the handler function, leading to
our per-client limit table slowly filling up without bound.

Fix this by doing the same duplicate-tracking logic that the TCP
forwarder does so we can detect such cases and appropriately decrement
our in-flight counter.

Updates tailscale/corp#12184

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ib6011a71d382a10d68c0802593f34b8153d06892
2024-03-11 08:05:00 -04:00
Brad Fitzpatrick ad33e47270 ipn/{ipnlocal,localapi}: add debug verb to force spam IPN bus NetMap
To force the problem in its worst case scenario before fixing it.

Updates tailscale/corp#17859

Change-Id: I2c8b8e5f15c7801e1ab093feeafac52ec175a763
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-03-09 17:47:15 -08:00
Flakes Updater 04fceae898 go.mod.sri: update SRI hash for go.mod changes
Signed-off-by: Flakes Updater <noreply+flakes-updater@tailscale.com>
2024-03-09 11:51:43 -08:00
James Tucker 055117ad45
util/linuxfw: fix support for containers without IPv6 iptables filters (#11381)
There are container environments such as GitHub codespaces that have
partial IPv6 support - routing support is enabled at the kernel level,
but lacking IPv6 filter support in the iptables module.

In the specific example of the codespaces environment, this also has
pre-existing legacy iptables rules in the IPv4 tables, as such the
nascent firewall mode detection will always pick iptables.

We would previously fault trying to install rules to the filter table,
this catches that condition earlier, and disables IPv6 support under
these conditions.

Updates #5621
Updates #11344
Updates #11354

Signed-off-by: James Tucker <james@tailscale.com>
2024-03-08 15:46:21 -08:00
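One cheap way to probe for the condition described above is to look at which ip6tables tables the kernel reports as loaded; a sketch of that single check (the real detection logic is more involved):

```go
package example

import (
	"os"
	"strings"
)

// ip6FilterTableAvailable reports whether the kernel's ip6tables
// "filter" table appears to be loaded, by consulting the proc file
// that lists registered ip6tables tables.
func ip6FilterTableAvailable() bool {
	b, err := os.ReadFile("/proc/net/ip6_tables_names")
	if err != nil {
		return false // no IPv6 iptables support visible at all
	}
	for _, t := range strings.Fields(string(b)) {
		if t == "filter" {
			return true
		}
	}
	return false
}
```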
James Tucker 43fba6e04d
util/linuxfw: correct logical error in NAT table check (#11380)
Updates #11344
Updates #11354

Signed-off-by: James Tucker <james@tailscale.com>
2024-03-08 15:35:13 -08:00
panchajanya 50a570a83f
Code Improvements (#11311)
build_docker, update-flake: cleanup and apply shellcheck fixes

While editing this file to match my needs, shellcheck warnings
kept bugging me, so I applied the suggested fixes.
REV isn't used anywhere, so it's better to remove it.

Updates #cleanup

Signed-off-by: Panchajanya1999 <kernel@panchajanya.dev>
Signed-off-by: James Tucker <james@tailscale.com>
2024-03-08 15:24:36 -08:00
Percy Wegmann e496451928 ipn,cmd/tailscale,client/tailscale: add support for renaming TailFS shares
- Updates API to support renaming TailFS shares.
- Adds a CLI rename subcommand for renaming a share.
- Renames the CLI subcommand 'add' to 'set' to make it clear that
  this is an add or update.
- Adds a unit test for TailFS in ipnlocal

Updates tailscale/corp#16827

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-03-08 14:48:26 -06:00
Percy Wegmann 6c160e6321 ipn,tailfs: tie TailFS share configuration to user profile
Previously, the configuration of which folders to share persisted across
profile changes. Now, it is tied to the user's profile.

Updates tailscale/corp#16827

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-03-08 14:48:26 -06:00
Percy Wegmann 16ae0f65c0 cmd/viewer: import views when generating byteSliceField
Updates #cleanup

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-03-08 14:48:26 -06:00
Andrew Dunham f072d017bd wgengine/magicsock: don't change DERP home when not connected to control
This pretty much always results in an outage because peers won't
discover our new home region and thus won't be able to establish
connectivity.

Updates tailscale/corp#18095

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ic0d09133f198b528dd40c6383b16d7663d9d37a7
2024-03-08 14:15:13 -05:00
Sonia Appasamy 54e52532eb version/mkversion: enforce synology versions within int32 range
Synology requires version numbers are within int32 range. This
change updates the version logic to keep things closer within the
range, and errors on building when the range is exceeded.

Updates #cleanup

Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
2024-03-08 12:47:59 -05:00
Claire Wang 74e33b9c50
tailcfg: bump CapabilityVersion (#11368)
Bump the capability version for the new NodeAttrSuggestExitNode attribute.
Remove the extra 's' from NodeAttrSuggestExitNode.
Updates tailscale/corp#17516

Signed-off-by: Claire Wang <claire@tailscale.com>
2024-03-07 14:17:40 -05:00
Mario Minardi c662bd9fe7
client/web: dedupe packages in yarn.lock (#11327)
Run yarn-deduplicate on yarn.lock to dedupe packages. This is being done
to reduce the number of redundant packages fetched by yarn when existing
versions in the lockfile satisfy the version dependency we need.

See https://github.com/scinos/yarn-deduplicate for details on the tool
used to perform this deduplication.

Updates #cleanup

Signed-off-by: Mario Minardi <mario@tailscale.com>
2024-03-07 09:29:20 -07:00
Andrew Dunham 34176432d6 cmd/derper, types/logger: move log filter to shared package
So we can use it in trunkd to quiet down the logs there.

Updates #5563

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ie3177dc33f5ad103db832aab5a3e0e4f128f973f
2024-03-07 11:05:03 -05:00
Irbe Krumina 3047b6274c
docs/k8s: don't run subnet router in userspace mode (#11363)
There should not be a need to do that unless we run on the host network.

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-03-07 13:56:11 +00:00
Andrew Dunham 9884d06b80 net/interfaces: fix test hang on Darwin
This test could hang because the subprocess was blocked on writing to
the stdout pipe if we find the address we're looking for early in the
output.

Updates #cleanup

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I68d82c22a5d782098187ae6d8577e43063b72573
2024-03-06 22:37:40 -05:00
Andrew Dunham 62cf83eb92 go.mod: bump gvisor
The `stack.PacketBufferPtr` type no longer exists; replace it with
`*stack.PacketBuffer` instead.

Updates #8043

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ib56ceff09166a042aa3d9b80f50b2aa2d34b3683
2024-03-06 20:22:20 -05:00
Andrew Dunham 8f27d519bb tsweb: add String method to tsweb.RequestID
In case we want to change the format to something opaque later.

Updates tailscale/corp#2549

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ie2eac8b885b694be607e9d5101d24b650026d89c
2024-03-06 19:48:04 -05:00
Irbe Krumina 90c4067010
util/linuxfw: add container-friendly IPv6 NAT check (#11353)
Remove IPv6 NAT check when routing is being set up
using nftables.
This is unnecessary as support for nftables was
added after support for IPv6.
https://tldp.org/HOWTO/Linux+IPv6-HOWTO/ch18s04.html
https://wiki.nftables.org/wiki-nftables/index.php/Building_and_installing_nftables_from_sources

Additionally, run an extra check for IPv6 NAT support
when the routing is set up with iptables.
This is because the earlier checks rely on
being able to use modprobe and on /proc/net/ip6_tables_names
being populated on start - these conditions are usually not
true in container environments.

Updates tailscale/tailscale#11344

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-03-06 21:53:51 +00:00
Percy Wegmann fd942b5384 ipn/ipnlocal: reduce allocations in TailFS share notifications
This eliminates unnecessary map.Clone() calls and also eliminates
repetitive notifications about the same set of shares.

Updates tailscale/corp#16827

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-03-06 14:21:53 -06:00
Percy Wegmann 6f66f5a75a ipn: add comment about thread-safety to StateStore
Updates #cleanup

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-03-06 12:42:18 -06:00
Andrea Gottardo 0cb86468ca ipn/localapi: add set-gui-visible endpoint
Updates tailscale/corp#17859

Provides a local API endpoint to be called from the GUI to inform the backend when the client menu is opened or closed.

cc @bradfitz

Signed-off-by: Andrea Gottardo <andrea@gottardo.me>
Signed-off-by: Andrea Gottardo <andrea@tailscale.com>
Signed-off-by: Andrea Gottardo <andrea@gottardo.me>
2024-03-06 10:39:52 -08:00
Percy Wegmann 00373f07ac ipn/ipnlocal: exclude mullvad exit nodes from TailFS peers list
This is a temporary solution to at least omit Mullvad exit nodes
from the list of TailFS peers. Once we can identify peers that are
actually sharing via TailFS, we can remove this, but for alpha it'll
be sufficient to just omit Mullvad.

Updates tailscale/corp#17766

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-03-06 12:27:32 -06:00
Sonia Appasamy c58c59ee54 {ipn,cmd/tailscale/cli}: move ServeConfig mutation logic to ipn/serve
Moving logic that manipulates a ServeConfig into receivers on the
ServeConfig in the ipn package. This is setup work to allow the
web client and cli to both utilize these shared functions to edit
the serve config.

Any logic specific to flag parsing or validation is left untouched
in the cli command. The web client will similarly manage its
validation of user's requested changes. If validation logic becomes
similar-enough, we can make a serve util for shared functionality,
which likely does not make sense in ipn.

Updates #10261

Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
2024-03-06 13:26:03 -05:00
Kristoffer Dalby 65255b060b client/tailscale: add postures to UserRuleMatch
Updates tailscale/corp#17770

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-03-06 15:36:17 +01:00
License Updater d59878e457 licenses: update android licenses
Signed-off-by: License Updater <noreply+license-updater@tailscale.com>
2024-03-05 17:55:26 -08:00
License Updater 797d75c50a licenses: update win/apple licenses
Signed-off-by: License Updater <noreply+license-updater@tailscale.com>
2024-03-05 17:53:52 -08:00
License Updater 6a4e5329c3 licenses: update tailscale{,d} licenses
Signed-off-by: License Updater <noreply+license-updater@tailscale.com>
2024-03-05 17:53:05 -08:00
Andrew Dunham 4338db28f7 wgengine/magicsock: prefer link-local addresses to private ones
Since link-local addresses are definitionally more likely to be a direct
(lower-latency, more reliable) connection than a non-link-local private
address, give those a bit of a boost when selecting endpoints.

Updates #8097

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I93fdeb07de55ba39ba5fcee0834b579ca05c2a4e
2024-03-05 20:32:45 -05:00
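A sketch of the preference described above using net/netip classification; the numeric scores are arbitrary and only the ordering matters:

```go
package example

import "net/netip"

// endpointScore gives a small boost to link-local peer addresses over
// private (RFC 1918 / ULA) ones, since a link-local address is more
// likely to be a direct LAN path.
func endpointScore(ap netip.AddrPort) int {
	switch a := ap.Addr(); {
	case a.IsLinkLocalUnicast():
		return 2
	case a.IsPrivate():
		return 1
	default:
		return 0
	}
}
```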
Sonia Appasamy 65c3c690cf {ipn/serve,cmd/tailscale/cli}: move some shared funcs to ipn
In preparation for changes to allow configuration of serve/funnel
from the web client, this commit moves some functionality that will
be shared between the CLI and web client to the ipn package's
serve.go file, where some other util funcs are already defined.

Updates #10261

Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
2024-03-05 14:30:38 -05:00
Brad Fitzpatrick 8780e33500 go.toolchain.rev: bump Go toolchain to 1.22.1
Updates tailscale/corp#18000

Change-Id: I45de95e974ea55b0dac2218b3c82d124c4793390
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-03-05 10:51:13 -08:00
Paul Scott 2fa20e3787 util/cmpver: add Less/LessEq helper funcs
Updates tailscale/corp#17199

Signed-off-by: Paul Scott <paul@tailscale.com>
2024-03-05 16:57:04 +00:00
Claire Wang d610f8eec0
tailcfg: add suggest exit node related node attribute (#11329)
Updates tailscale/corp#17516

Signed-off-by: Claire Wang <claire@tailscale.com>
2024-03-05 10:54:41 -05:00
Chris Palmer 13853e7f29
tsweb: add more test cases for TestCleanRedirectURL (#11331)
Updates #cleanup

Signed-off-by: Chris Palmer <cpalmer@tailscale.com>
2024-03-04 17:13:36 -08:00
Irbe Krumina dff6f3377f
docs/k8s: update docs (#11307)
Update docs for static Tailscale deployments on kube
to always use firewall mode autodetection when not in userspace mode.
Also add a note about running multiple replicas and a few suggestions for how folks could do that.

Updates #cleanup

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Co-authored-by: Anton Tolchanov <1687799+knyar@users.noreply.github.com>
2024-03-04 14:59:51 +00:00
Percy Wegmann 232a2d627c tailfs: only impersonate unprivileged user if able to sudo -u as that user
When serving TailFS shares, tailscaled executes another tailscaled to act as a
file server. It attempts to execute this child process as an unprivileged user
using sudo -u. This is important to avoid accessing files as root, which would
result in potential privilege escalation.

Previously, tailscaled assumed that it was running as someone who can sudo -u,
and would fail if it was unable to sudo -u.

With this commit, if tailscaled is unable to sudo -u as the requested user, and
tailscaled is not running as root, then tailscaled executes the file server
process under the same identity that ran tailscaled, since this is already an
unprivileged identity.

In the unlikely event that tailscaled is running as root but is unable to
sudo -u, it will refuse to run the child file server process in order to avoid
privilege escalation.

Updates tailscale/corp#16827

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-03-01 17:42:19 -06:00
Flakes Updater 00554ad277 go.mod.sri: update SRI hash for go.mod changes
Signed-off-by: Flakes Updater <noreply+flakes-updater@tailscale.com>
2024-02-29 19:53:19 -08:00
Andrew Lytvynov 23fbf0003f
clientupdate: handle multiple versions in "apk info tailscale" output (#11310)
The package info output can list multiple package versions, and not in
descending order. Find the newest version in the output, instead of the
first one.

Fixes #11309

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2024-02-29 11:54:46 -07:00
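The essence of the fix above is "scan every version the package manager reports and keep the maximum" rather than trusting the first line. A self-contained sketch with a deliberately simple dotted-numeric comparison (the real code uses a proper version comparator):

```go
package example

import (
	"strconv"
	"strings"
)

// newestVersion picks the highest of the versions reported by the
// package manager, since the list is not guaranteed to be sorted.
func newestVersion(versions []string) string {
	best := ""
	for _, v := range versions {
		if best == "" || compareVer(v, best) > 0 {
			best = v
		}
	}
	return best
}

// compareVer compares dotted numeric components; missing components
// compare as zero (illustrative only).
func compareVer(a, b string) int {
	as, bs := strings.FieldsFunc(a, notDigit), strings.FieldsFunc(b, notDigit)
	for i := 0; i < len(as) || i < len(bs); i++ {
		var ai, bi int
		if i < len(as) {
			ai, _ = strconv.Atoi(as[i])
		}
		if i < len(bs) {
			bi, _ = strconv.Atoi(bs[i])
		}
		if ai != bi {
			if ai > bi {
				return 1
			}
			return -1
		}
	}
	return 0
}

func notDigit(r rune) bool { return r < '0' || r > '9' }
```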
Irbe Krumina 097c5ed927
util/linuxfw: insert rather than append nftables DNAT rule (#11303)
Ensure that the latest DNATNonTailscaleTraffic rule
gets inserted on top of any pre-existing rules.

Updates tailscale/tailscale#11281

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-02-29 16:53:43 +00:00
Percy Wegmann e324a5660f ipn: include full tailfs shares in ipn notifications
This allows the Mac application to regain access to restricted
folders after restarts.

Updates tailscale/corp#16827

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-02-29 10:16:44 -06:00
Percy Wegmann 80f1cb6227 tailfs: support storing bookmark data on shares
This allows the sandboxed Mac application to store security-
scoped URL bookmarks in order to maintain access to restricted
folders across restarts.

Updates tailscale/corp#16827

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-02-29 10:16:44 -06:00
Brad Fitzpatrick f18f591bc6 wgengine: plumb the PeerByKey from wgengine to magicsock
This was just added in 69f4b459 which doesn't yet use it. This still
doesn't yet use it. It just pushes it down deeper into magicsock where
it'll used later.

Updates #7617

Change-Id: If2f8fd380af150ffc763489e1ff4f8ca2899fac6
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-02-28 19:36:34 -08:00
Andrew Lytvynov c7474431f1
tsweb: allow empty redirect URL in CleanRedirectURL (#11295)
Updates #cleanup

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2024-02-28 15:57:42 -08:00
Brad Fitzpatrick b68a09cb34 ipn/ipnlocal: make active IPN sessions keyed by sessionID
We used a HandleSet before when we didn't have a unique handle. But a
sessionID is a unique handle, so use that instead. Then that replaces
the other map we had.

And now we'll have a way to look up an IPN session by sessionID for
later.

Updates tailscale/corp#17859

Change-Id: I5f647f367563ec8783c643e49f93817b341d9064
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-02-28 15:16:10 -08:00
Percy Wegmann 2d5d6f5403 ipn,wgengine: only intercept TailFS traffic on quad 100
This fixes a regression introduced with 993acf4 and released in
v1.60.0.

The regression caused us to intercept all userspace traffic to port
8080 which prevented users from exposing their own services to their
tailnet at port 8080.

Now, we only intercept traffic to port 8080 if it's bound for
100.100.100.100 or fd7a:115c:a1e0::53.

Fixes #11283

Signed-off-by: Percy Wegmann <percy@tailscale.com>
(cherry picked from commit 17cd0626f3)
2024-02-28 17:09:14 -06:00
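The corrected condition boils down to a destination check before intercepting; a sketch with net/netip:

```go
package example

import "net/netip"

var (
	quad100   = netip.MustParseAddr("100.100.100.100")
	quad100v6 = netip.MustParseAddr("fd7a:115c:a1e0::53")
)

// shouldInterceptTailFS only grabs port-8080 flows whose destination is
// one of Tailscale's service IPs, leaving user services that listen on
// other addresses' port 8080 alone.
func shouldInterceptTailFS(dst netip.AddrPort) bool {
	if dst.Port() != 8080 {
		return false
	}
	a := dst.Addr().Unmap()
	return a == quad100 || a == quad100v6
}
```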
Ross Zurowski e83e2e881b
client/web: fix Vite CJS deprecation warning (#11288)
Starting in Vite 5, Vite now issues a deprecation warning when using
a CJS-based Vite config file. This commit fixes it by adding the
`"type": "module"` to our package.json to opt our files into ESM module
behaviours.

Fixes #cleanup

Signed-off-by: Ross Zurowski <ross@rosszurowski.com>
2024-02-28 16:28:22 -05:00
Brad Fitzpatrick 69f4b4595a wgengine{,/wgint}: add wgint.Peer wrapper type, add to wgengine.Engine
This adds a method to wgengine.Engine and plumbed down into magicsock
to add a way to get a type-safe Tailscale-safe wrapper around a
wireguard-go device.Peer that only exposes methods that are safe for
Tailscale to use internally.

It also removes HandshakeAttempts from PeerStatusLite that was just
added as it wasn't needed yet and is now accessible ala cart as needed
from the Peer type accessor.

None of this is used yet.

Updates #7617

Change-Id: I07be0c4e6679883e6eeddf8dbed7394c9e79c5f4
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-02-28 09:50:18 -08:00
James Tucker 7e17aeb36b .github/workflows: fix regular breakage of go toolchains
This server recently had a common ansible applied, which added a
periodic /tmp cleaner, as is needed on other CI machines to deal with
test tempfile leakage. The setting of $HOME to /tmp means that the go
toolchain in there was regularly getting pruned by the tmp cleaner, but
often incompletely, because it was also in use.

Move HOME to a runner owned directory.

Updates #11248

Signed-off-by: James Tucker <james@tailscale.com>
2024-02-28 08:17:35 -08:00
Brad Fitzpatrick b4ff9a578f wgengine: rename local variable from 'found' to conventional 'ok'
Updates #cleanup

Change-Id: I799dc86ea9e4a3a949592abdd8e74282e7e5d086
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-02-28 07:33:57 -08:00
Brad Fitzpatrick a8a525282c wgengine: use slices.Clone in two places
Updates #cleanup

Change-Id: I1cb30efb6d09180e82b807d6146f37897ef99307
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-02-28 07:33:57 -08:00
Brad Fitzpatrick 74b8985e19 ipn/ipnstate, wgengine: make PeerStatusLite.LastHandshake zero Time means none
... rather than 1970. Code was using IsZero against the 1970 time
(which isn't a zero value), but fortunately not anywhere that seems to
have mattered.

Updates #cleanup

Change-Id: I708a3f2a9398aaaedc9503678b4a8a311e0e019e
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-02-28 07:33:57 -08:00
Andrew Dunham 3dd8ae2f26 net/tstun: fix spelling of "WireGuard"
Updates #cleanup

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ida7e30f4689bc18f5f7502f53a0adb5ac3c7981a
2024-02-28 00:00:18 -05:00
Andrew Dunham a20e46a80f
util/cache: fix missing interface methods (#11275)
Updates #cleanup


Change-Id: Ib3a33a7609530ef8c9f3f58fc607a61e8655c4b5

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
2024-02-27 23:03:49 -05:00
Andrew Dunham 23e9447871 tsweb: expose function to generate request IDs
For use in corp.

Updates tailscale/corp#2549

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I71debae1ce9ae48cf69cc44c2ab5c443fc3b2005
2024-02-27 18:57:53 -05:00
Mario Minardi 7912d76da0
client/web: update to typescript 5.3.3 (#11267)
Update typescript to 5.3.3. This is a major bump from the previous
version of 4.8.3. This also requires adding newer versions of
@typescript-eslint/eslint-plugin and @typescript-eslint/parser to our
resolutions as eslint-config-react-app pulls in versions that otherwise
do not support typescript 5.x.

eslint-config-react-app has not been updated in 2 years and is seemingly
abandoned, so we may wish to fork it or move to a different eslint config
in the future.

Updates https://github.com/tailscale/corp/issues/17810

Signed-off-by: Mario Minardi <mario@tailscale.com>
2024-02-27 14:17:30 -07:00
Andrew Dunham c5abbcd4b4 wgengine/netstack: add a per-client limit for in-flight TCP forwards
This is a fun one. Right now, when a client is connecting through a
subnet router, here's roughly what happens:

1. The client initiates a connection to an IP address behind a subnet
   router, and sends a TCP SYN
2. The subnet router gets the SYN packet from netstack, and after
   running through acceptTCP, starts DialContext-ing the destination IP,
   without accepting the connection¹
3. The client retransmits the SYN packet a few times while the dial is
   in progress, until either...
4. The subnet router successfully establishes a connection to the
   destination IP and sends the SYN-ACK back to the client, or...
5. The subnet router times out and sends a RST to the client.
6. If the connection was successful, the client ACKs the SYN-ACK it
   received, and traffic starts flowing

As a result, the notification code in forwardTCP never notices when a
new connection attempt is aborted, and it will wait until either the
connection is established, or until the OS-level connection timeout is
reached and it aborts.

To mitigate this, add a per-client limit on how many in-flight TCP
forwarding connections can be in-progress; after this, clients will see
a similar behaviour to the global limit, where new connection attempts
are aborted instead of waiting. This prevents a single misbehaving
client from blocking all other clients of a subnet router by ensuring
that it doesn't starve the global limiter.

Also, bump the global limit again to a higher value.

¹ We can't accept the connection before establishing a connection to the
remote server since otherwise we'd be opening the connection and then
immediately closing it, which breaks a bunch of stuff; see #5503 for
more details.

Updates tailscale/corp#12184

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I76e7008ddd497303d75d473f534e32309c8a5144
2024-02-27 15:25:40 -05:00
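A generic sketch of such a per-client in-flight cap; the key discipline, per the bug above, is that every successful acquire is paired with exactly one release on every exit path, including paths that never spawn a handler:

```go
package example

import (
	"net/netip"
	"sync"
)

// inFlightLimiter caps how many forwarded connections a single client
// IP may have still being dialed; extra attempts are rejected
// immediately instead of queueing behind a slow or unreachable
// destination.
type inFlightLimiter struct {
	mu       sync.Mutex
	limit    int
	inFlight map[netip.Addr]int
}

func newInFlightLimiter(limit int) *inFlightLimiter {
	return &inFlightLimiter{limit: limit, inFlight: make(map[netip.Addr]int)}
}

// acquire reports whether client may start another forward. Callers
// must pair every successful acquire with exactly one release.
func (l *inFlightLimiter) acquire(client netip.Addr) bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.inFlight[client] >= l.limit {
		return false
	}
	l.inFlight[client]++
	return true
}

func (l *inFlightLimiter) release(client netip.Addr) {
	l.mu.Lock()
	defer l.mu.Unlock()
	if n := l.inFlight[client]; n > 1 {
		l.inFlight[client] = n - 1
	} else {
		delete(l.inFlight, client) // keep the map from growing without bound
	}
}
```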
Claire Wang 352c1ac96c
tailcfg: add latitude, longitude for node location (#11162)
Updates tailscale/corp#17590

Signed-off-by: Claire Wang <claire@tailscale.com>
2024-02-27 15:02:06 -05:00
Irbe Krumina 95dcc1745b
cmd/k8s-operator: reconcile tailscale Ingresses when their backend Services change. (#11255)
This is so that if a backend Service gets created after the Ingress, it gets picked up by the operator.

Updates tailscale/tailscale#11251

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Co-authored-by: Anton Tolchanov <1687799+knyar@users.noreply.github.com>
2024-02-27 15:19:53 +00:00
Irbe Krumina 303125d96d
cmd/k8s-operator: configure all proxies with declarative config (#11238)
Containerboot containers created for the operator's ingress and egress proxies
are now always configured by passing a config file to tailscaled
(tailscaled --config <configfile-path>).
They do not run 'tailscale set' or 'tailscale up'.
Upgrading existing setups to this version, as well as
downgrading from this version, works.

Updates tailscale/tailscale#10869

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-02-27 15:14:09 +00:00
Irbe Krumina 45d27fafd6
cmd/k8s-operator,k8s-operator,go.{mod,sum},tstest/tools: add Tailscale Kubernetes operator API docs (#11246)
Add logic to autogenerate CRD docs.
.github/workflows/kubemanifests.yaml CI workflow will fail if the doc is out of date with regard to the current CRDs.
Docs can be refreshed by running make kube-generate-all.

Updates tailscale/tailscale#11023

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-02-27 14:51:53 +00:00
Percy Wegmann 05acf76392 tailfs: fix race condition in tailfs_test
Uses a noop authenticator to avoid potential races in gowebdav's
built-in authenticator.

Fixes #11259

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-02-27 08:31:04 -06:00
Keli 086ef19439
scripts/installer.sh: auto-start tailscale on Alpine (#11214)
On Alpine, we add the tailscale service but fail to call start.
This means that tailscale does not start up until the user reboots the machine.

Fixes #11161

Signed-off-by: Keli Velazquez <keli@tailscale.com>
2024-02-27 09:17:12 -05:00
Brad Fitzpatrick 1cf85822d0 ipn/ipnstate, wgengine/wgint: add handshake attempts accessors
Not yet used. This is being made available so magicsock/wgengine can
use it to ignore certain sends (UDP + DERP) later on at least mobile,
letting wireguard-go think it's doing its full attempt schedule, but
we can cut it short conditionally based on what we know from the
control plane.

Updates #7617

Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Change-Id: Ia367cf6bd87b2aeedd3c6f4989528acdb6773ca7
2024-02-26 19:09:12 -08:00
Brad Fitzpatrick eb28818403 wgengine: make pendOpen time later, after dup check
Otherwise on OS retransmits, we'd make redundant timers in Go's timer
heap that upon firing just do nothing (well, grab a mutex and check a
map and see that there's nothing to do).

Updates #cleanup

Change-Id: Id30b8b2d629cf9c7f8133a3f7eca5dc79e81facb
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-02-26 19:09:12 -08:00
Brad Fitzpatrick 219efebad4 wgengine: reduce critical section
No need to hold wgLock while using the device to LookupPeer;
that has its own mutex already.

Updates #cleanup

Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Change-Id: Ib56049fcc7163cf5a2c2e7e12916f07b4f9d67cb
2024-02-26 19:09:12 -08:00
Brad Fitzpatrick 9a8c2f47f2 types/key: remove copy returning array by value
It's unnecessary. Returning an array value is already a copy.

Updates #cleanup

Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Change-Id: If7f350b61003ea08f16a531b7b4e8ae483617939
2024-02-26 19:09:12 -08:00
Anton Tolchanov 8cc5c51888 health: warn about reverse path filtering and exit nodes
When reverse path filtering is in strict mode on Linux, using an exit
node blocks all network connectivity. This change adds a warning about
this to `tailscale status` and the logs.

Example in `tailscale status`:

```
- not connected to home DERP region 22
- The following issues on your machine will likely make usage of exit nodes impossible: [interface "eth0" has strict reverse-path filtering enabled], please set rp_filter=2 instead of rp_filter=1; see https://github.com/tailscale/tailscale/issues/3310
```

Example in the logs:
```
2024/02/21 21:17:07 health("overall"): error: multiple errors:
	not in map poll
	The following issues on your machine will likely make usage of exit nodes impossible: [interface "eth0" has strict reverse-path filtering enabled], please set rp_filter=2 instead of rp_filter=1; see https://github.com/tailscale/tailscale/issues/3310
```

Updates #3310

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2024-02-27 00:43:01 +00:00
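The condition being warned about can be read straight from procfs; a sketch of the per-interface check (strict mode is rp_filter=1; loose mode 2 and off 0 are fine):

```go
package example

import (
	"fmt"
	"os"
	"strings"
)

// strictRPFilter reports whether the given interface has strict
// reverse-path filtering enabled (rp_filter=1), the condition that
// breaks exit-node traffic as described above.
func strictRPFilter(iface string) (bool, error) {
	b, err := os.ReadFile(fmt.Sprintf("/proc/sys/net/ipv4/conf/%s/rp_filter", iface))
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(b)) == "1", nil
}
```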
Nick Khyl 7ef1fb113d cmd/tailscaled, ipn/ipnlocal, wgengine: shutdown tailscaled if wgdevice is closed
Tailscaled becomes inoperative if the Tailscale Tunnel wintun adapter is abruptly removed.
wireguard-go closes the device in case of a read error, but tailscaled keeps running.
This adds detection of a closed WireGuard device, triggering a graceful shutdown of tailscaled.
It is then restarted by the tailscaled watchdog service process.

Fixes #11222

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-02-26 14:45:35 -06:00
Nick Khyl b42b9817b0 net/dns: do not wait for the interface registry key to appear if the windowsManager is being closed
The WinTun adapter may have been removed by the time we're closing
the dns.windowsManager, and its associated interface registry key might
also have been deleted. We shouldn't use winutil.OpenKeyWait and wait
for the interface key to appear when performing a cleanup as a part of
the windowsManager shutdown.

Updates #11222

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-02-26 14:45:35 -06:00
OSS Updater 82c569a83a go.mod: update web-client-prebuilt module
Signed-off-by: OSS Updater <noreply+oss-updater@tailscale.com>
2024-02-26 13:26:47 -05:00
Sonia Appasamy 95f26565db client/web: use grants on web UI frontend
Starts using peer capabilities to restrict the management client
on a per-view basis. This change also includes a bulky cleanup
of the login-toggle.tsx file, which was getting pretty unwieldy
in its previous form.

Updates tailscale/corp#16695

Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
2024-02-26 12:59:37 -05:00
Sonia Appasamy 9aa704a05d client/web: restrict serveAPI endpoints to peer capabilities
This change adds a new apiHandler struct for use from serveAPI
to aid with restricting endpoints to specific peer capabilities.

Updates tailscale/corp#16695

Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
2024-02-26 12:59:37 -05:00
Anton Tolchanov cd9cf93de6 wgengine/netstack: expose TCP forwarder drops via clientmetrics
- add a clientmetric with a counter of TCP forwarder drops due to the
  max attempts;
- fix varz metric types, as they are all counters.

Updates #8210

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2024-02-26 17:32:34 +00:00
Percy Wegmann 50fb8b9123 tailfs: replace webdavfs with reverse proxies
Instead of modeling remote WebDAV servers as actual
webdav.FS instances, we now just proxy traffic to them.
This not only simplifies the code, but it also allows
WebDAV locking to work correctly by making sure locks are
handled by the servers that need to (i.e. the ones actually
serving the files).
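
A minimal sketch of the proxying approach, not the TailFS implementation: forwarding WebDAV traffic to a remote peer with net/http/httputil so the remote server handles LOCK/UNLOCK itself (the peer URL is hypothetical):

```
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	remote, err := url.Parse("http://100.64.0.2:8080") // hypothetical peer WebDAV endpoint
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(remote)
	// All WebDAV verbs (PROPFIND, LOCK, MKCOL, ...) pass through unchanged.
	log.Fatal(http.ListenAndServe("100.100.100.100:8080", proxy))
}
```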

Updates tailscale/corp#16827

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-02-26 09:30:22 -06:00
Brad Fitzpatrick e1bd7488d0 all: remove LenIter, use Go 1.22 range-over-int instead
Updates #11058
Updates golang/go#65685
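
A tiny illustration of the Go 1.22 range-over-int form that replaces the LenIter pattern:

```
package main

import "fmt"

func main() {
	n := 3
	for i := range n { // iterates i = 0, 1, 2
		fmt.Println(i)
	}
}
```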

Change-Id: Ibb216b346e511d486271ab3d84e4546c521e4e22
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-02-25 12:29:45 -08:00
mrrfv ff1391a97e net/dns/publicdns: add Mullvad family DNS to the list of known DoH servers
Adds the new Mullvad family DNS server to the known DNS over HTTPS server list.

Signed-off-by: mrrfv <rm-rfv-no-preserve-root@protonmail.com>
2024-02-25 07:51:33 -08:00
Brad Fitzpatrick 6ad6d6b252 wgengine/wglog: add TS_DEBUG_RAW_WGLOG envknob for raw wg logs
Updates #7617 (part of debugging it)

Change-Id: I1bcbdcf0f929e3bcf83f244b1033fd438aa6dac1
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-02-24 14:59:48 -08:00
Brad Fitzpatrick 8b9474b06a wgengine/wgcfg: don't send UAPI to disable keep-alives on new peers
That's already the default. Avoid the overhead of writing it on one
side and reading it on the other to do nothing.

Updates #cleanup (noticed while researching something else)

Change-Id: I449c88a022271afb9be5da876bfaf438fe5d3f58
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-02-24 14:22:56 -08:00
James Tucker 8d0d46462b net/dns: timeout DOH requests after 10s without response headers
If a client socket is remotely lost but the client is not sent an RST in
response to the next request, the socket might sit in RTO for extended
lengths of time, resulting in "no internet" for users. Instead, timeout
after 10s, which will close the underlying socket, recovering from the
situation more promptly.
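
One way to express such a header timeout with the standard library; this is a hedged sketch, not the actual DoH client wiring:

```
package main

import (
	"net/http"
	"time"
)

// newDoHClient returns a client that gives up (and closes the underlying
// socket) if no response headers arrive within 10 seconds.
func newDoHClient() *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			ResponseHeaderTimeout: 10 * time.Second,
		},
	}
}
```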

Updates #10967

Signed-off-by: James Tucker <james@tailscale.com>
2024-02-23 23:08:12 -08:00
James Tucker 0c5e65eb3f cmd/derper: apply TCP keepalive and timeout to TLS as well
I missed a case in the earlier patch, and so we're still sending 15s TCP
keepalive for TLS connections, now adjusted there too.

Updates tailscale/corp#17587
Updates #3363

Signed-off-by: James Tucker <james@tailscale.com>
2024-02-23 19:19:38 -08:00
James Tucker c9b6d19fc9 .github/workflows: fix typo in XDG_CACHE_HOME
This appears to be one of the contributors to this CI target regularly
entering a bad state with a partially written toolchain.

Updates #self

Signed-off-by: James Tucker <james@tailscale.com>
2024-02-23 16:44:29 -08:00
James Tucker 651c4899ac net/interfaces: reduce & cleanup logs on iOS
We don't need a log line every time defaultRoute is read in the good
case, and we now only log default interface updates that are actually
changes.

Updates #3363

Signed-off-by: James Tucker <james@tailscale.com>
2024-02-23 16:37:06 -08:00
Andrea Gottardo c8c999d7a9
cli/debug: rename DERP debug mode (#11220)
Renames a debug flag in the CLI.

Signed-off-by: Andrea Gottardo <andrea@gottardo.me>
2024-02-23 15:48:37 -08:00
Mario Minardi ac281dd493
client/web: update vite and vitest to latest versions (#11200)
Update vite to 5.1.4, and vitest to 1.3.1 (their latest versions). Also
remove vite-plugin-rewrite-all as this is no longer necessary with vite
5.x and has a dependency on vite 4.x.

Updates https://github.com/tailscale/corp/issues/17715

Signed-off-by: Mario Minardi <mario@tailscale.com>
2024-02-23 14:50:41 -07:00
Percy Wegmann 15b2c674bf cmd/tailscale: add node attribute instructions to share command help
This adds details on how to configure node attributes to allow
sharing and accessing shares.

Updates tailscale/corp#16827

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-02-23 13:13:01 -06:00
James Tucker 131f9094fd wgengine/wglog: quieten WireGuard logs for allowedips
An increasing number of users have very large subnet route
configurations, which can produce very large amounts of log data when
WireGuard is reconfigured. The logs don't contain the actual routes, so
they're largely useless for diagnostics, so we'll just suppress them.

Fixes tailscale/corp#17532

Signed-off-by: James Tucker <james@tailscale.com>
2024-02-23 10:06:22 -08:00
Andrew Dunham e8d2fc7f7f net/tshttpproxy: log when we're using a proxy
Updates #11196

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Id6334c10f52f4cfbda9f03dc8096ab7a6c54a088
2024-02-22 19:22:50 -05:00
Mario Minardi 713d2928b1
client/web: update plugin-react-swc to latest version (#11199)
Update plugin-react-swc to the latest version (3.6.0) ahead of updating vite to 5.x.

Updates https://github.com/tailscale/corp/issues/17715

Signed-off-by: Mario Minardi <mario@tailscale.com>
2024-02-22 14:00:36 -07:00
Mario Minardi 72140da000
client/web: update vite-plugin-svgr to latest version (#11197)
Update vite-plugin-svgr to the latest version (4.2.0) ahead of updating
vite to 5.x. This is a major version bump from our previous 3.x, and
requires changing the import paths used for SVGs.

Updates https://github.com/tailscale/corp/issues/17715

Signed-off-by: Mario Minardi <mario@tailscale.com>
2024-02-22 13:16:44 -07:00
James Tucker edbad6d274 cmd/derper: add user timeout and reduce TCP keepalive
The derper sends an in-protocol keepalive every 60-65s, so frequent TCP
keepalives are unnecessary. With this tuning, TCP keepalives should never
occur for a DERP client connection, as the L7 keepalive is sent often
enough to always reset the TCP keepalive timer. If, however, a connection
does not receive an ACK promptly, it will now be shut down, which happens
sooner than it would with normal TCP keepalive tuning.


This re-tuning reduces the frequency of network traffic from derp to
client, reducing battery cost.

Updates tailscale/corp#17587
Updates #3363

Signed-off-by: James Tucker <james@tailscale.com>
2024-02-22 11:22:08 -08:00
Andrea Gottardo 0359c2f94e
util/syspolicy: add 'ResetToDefaults' (#11194)
Updates ENG-2133. Adds the ResetToDefaults visibility policy currently only available on macOS, so that the Windows client can read its value.

Signed-off-by: Andrea Gottardo <andrea@gottardo.me>
2024-02-22 10:10:31 -08:00
Brad Fitzpatrick 10d130b845 cmd/derper, derp, tailcfg: add admission controller URL option
So derpers can check an external URL for whether to permit access
to a certain public key.

Updates tailscale/corp#17693

Change-Id: I8594de58f54a08be3e2dbef8bcd1ff9b728ab297
Co-authored-by: Maisem Ali <maisem@tailscale.com>
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-02-21 16:57:45 -08:00
Brad Fitzpatrick 2988c1ec52 derp: plumb context to Server.verifyClient
Updates tailscale/corp#17693

Change-Id: If17e02c77d5ad86b820e639176da2d3e61296bae
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-02-21 16:24:44 -08:00
Paul Scott 7708ab68c0 cmd/tailscale/cli: pass "-o 'CanonicalizeHostname no'" to ssh
Fixes #10348

Signed-off-by: Paul Scott <paul@tailscale.com>
2024-02-21 22:34:34 +00:00
Percy Wegmann 91a1019ee2 cmd/testwrapper: apply results of all unit tests to coverage for all packages
This allows coverage from tests that hit multiple packages at once
to be reflected in all those packages' coverage.

Updates #cleanup

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-02-21 13:08:17 -06:00
Andrea Gottardo d756622432
util/syspolicy: add ManagedBy keys for Windows (#11183) 2024-02-20 15:08:06 -08:00
James Tucker 8fe504241d net/ktimeout: add a package to set TCP user timeout
Setting a user timeout will be a more practical tuning knob for a number
of endpoints, this provides a way to set it.
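
A minimal sketch of setting TCP_USER_TIMEOUT with golang.org/x/sys/unix via a net.ListenConfig; this is illustrative and not the actual net/ktimeout API:

```
package main

import (
	"context"
	"net"
	"syscall"
	"time"

	"golang.org/x/sys/unix"
)

// listenWithUserTimeout returns a TCP listener whose accepted connections
// have TCP_USER_TIMEOUT set to d (Linux only).
func listenWithUserTimeout(addr string, d time.Duration) (net.Listener, error) {
	lc := net.ListenConfig{
		Control: func(network, address string, c syscall.RawConn) error {
			var setErr error
			err := c.Control(func(fd uintptr) {
				setErr = unix.SetsockoptInt(int(fd), unix.IPPROTO_TCP,
					unix.TCP_USER_TIMEOUT, int(d.Milliseconds()))
			})
			if err != nil {
				return err
			}
			return setErr
		},
	}
	return lc.Listen(context.Background(), "tcp", addr)
}
```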

Updates tailscale/corp#17587

Signed-off-by: James Tucker <james@tailscale.com>
2024-02-20 10:49:58 -08:00
Brad Fitzpatrick a4a909a20b prober: add TLS probe constructor to split dial addr from cert name
So we can probe load balancers by their unique DNS name but without
asking for that cert name.
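
A minimal sketch of the split between dial address and certificate name, not the prober package API; the hostnames are hypothetical:

```
package main

import (
	"crypto/tls"
	"log"
	"net"
)

func main() {
	dialAddr := "lb-123.example.net:443" // hypothetical per-load-balancer DNS name
	certName := "derp.example.com"       // name we expect on the served certificate

	raw, err := net.Dial("tcp", dialAddr)
	if err != nil {
		log.Fatal(err)
	}
	defer raw.Close()

	conn := tls.Client(raw, &tls.Config{ServerName: certName})
	if err := conn.Handshake(); err != nil {
		log.Fatalf("TLS to %s failed verification for %s: %v", dialAddr, certName, err)
	}
	log.Printf("certificate for %s served at %s is valid", certName, dialAddr)
}
```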

Updates tailscale/corp#13050

Change-Id: Ie4c0a2f951328df64281ed1602b4e624e3c8cf2e
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-02-19 09:03:13 -08:00
Brad Fitzpatrick 794af40f68 ipn/ipnlocal: remove ancient transition mechanism for https certs
And confusing error message that duplicated the valid cert domains.

Fixes tailscale/corp#15876

Change-Id: I098bc45d83c8d1e0a233dcdf3188869cce66e128
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-02-17 10:33:11 -08:00
James Tucker 6c3899e6ee logpolicy: allow longer idle log upload connections
From a packet trace we have seen log connections being closed
prematurely by the client, resulting in unnecessary extra TLS setup
traffic.

Updates #3363
Updates tailscale/corp#9230
Updates tailscale/corp#8564

Signed-off-by: James Tucker <james@tailscale.com>
2024-02-16 18:06:09 -08:00
Andrew Dunham 70b7201744 net/dns: fix infinite loop when run on Amazon Linux 2023
This fixes an infinite loop caused by the configuration of
systemd-resolved on Amazon Linux 2023 and how that interacts with
Tailscale's "direct" mode. We now drop the Tailscale service IP from the
OS's "base configuration" when we detect this configuration.

Updates #7816

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I73a4ea8e65571eb368c7e179f36af2c049a588ee
2024-02-16 18:07:32 -05:00
Andrea Gottardo 44e337cc0e
tool/gocross: pass flags for visionOS and visionOS Simulator (#11127)
Adds logic in gocross to detect environment variables and pass the right flags so that the backend can be built with the visionOS SDK.

Signed-off-by: Andrea Gottardo <andrea@tailscale.com>
Signed-off-by: Andrea Gottardo <andrea@gottardo.me>
2024-02-16 11:14:17 -08:00
Will Norris 6b582cb8b6 cmd/tailscale: support clickable IPv6 web client addresses
Instead of constructing the `ip:port` string ourselves, use
netip.AddrPortFrom which handles IPv6 correctly.
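
A tiny illustration of why netip.AddrPortFrom is the right tool here: it brackets IPv6 hosts, unlike naive string concatenation:

```
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	addr := netip.MustParseAddr("fd7a:115c:a1e0::1")
	fmt.Println(netip.AddrPortFrom(addr, 5252).String()) // [fd7a:115c:a1e0::1]:5252
	fmt.Println(addr.String() + ":5252")                 // fd7a:115c:a1e0::1:5252 (ambiguous)
}
```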

Updates #11164

Signed-off-by: Will Norris <will@tailscale.com>
2024-02-16 11:00:37 -08:00
Will Norris 24487815e1 cmd/tailscale: make web client URL clickable
Updates #11151

Signed-off-by: Will Norris <will@tailscale.com>
2024-02-16 10:28:34 -08:00
San 69f5664075
ipn/ipnlocal: fix doctor API endpoint (#11155)
Small fix to make sure the doctor API endpoint returns correctly - I spotted it when checking my tailscaled node and noticed it was handled slightly differently compared to the rest.

Signed-off-by: San <santrancisco@users.noreply.github.com>
2024-02-16 12:17:34 -05:00
Percy Wegmann 3aca29e00e VERSION.txt: this is v1.61.0
Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-02-15 17:24:49 -06:00
Jason Barnett 4d668416b8 wgengine/router: fix ip rule restoration
Fixes #10857

Signed-off-by: Jason Barnett <J@sonBarnett.com>
2024-02-15 11:36:40 -05:00
Andrew Dunham 52f16b5d10 doctor/ethtool, ipn/ipnlocal: add ethtool bugreport check
Updates #11137

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Idbe862d80e428adb044249c47d9096b87f29d5d8
2024-02-15 10:17:05 -05:00
Patrick O'Doherty 38bba2d23a
clientupdate: disable auto update on NixOS (#11136)
Updates #cleanup

NixOS packages are immutable and attempts to update via our tarball
mechanism will always fail as a result. Instead we now direct users to
update their nix channel or nixpkgs flake input to receive the latest
Tailscale release.

Signed-off-by: Patrick O'Doherty <patrick@tailscale.com>
2024-02-14 11:58:29 -08:00
Andrew Dunham b7104cde4a util/topk: add package containing a probabilistic top-K tracker
This package uses a count-min sketch and a heap to track the top K items
in a stream of data. Tracking a new item and adding a count to an
existing item both require no memory allocations and is at worst
O(log(k)) complexity.
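
A minimal sketch of the technique, not the util/topk API: a count-min sketch provides approximate counts, and a min-heap of distinct keys keeps the K heaviest items.

```
package main

import (
	"container/heap"
	"fmt"
	"hash/maphash"
)

type countMin struct {
	seeds []maphash.Seed
	rows  [][]uint64
}

func newCountMin(depth, width int) *countMin {
	cm := &countMin{seeds: make([]maphash.Seed, depth), rows: make([][]uint64, depth)}
	for i := range cm.rows {
		cm.seeds[i] = maphash.MakeSeed()
		cm.rows[i] = make([]uint64, width)
	}
	return cm
}

// add increments key by n and returns the estimated count (possibly an
// overestimate, never an undercount).
func (cm *countMin) add(key string, n uint64) uint64 {
	var est uint64
	for i, row := range cm.rows {
		idx := maphash.String(cm.seeds[i], key) % uint64(len(row))
		row[idx] += n
		if i == 0 || row[idx] < est {
			est = row[idx]
		}
	}
	return est
}

type entry struct {
	key   string
	count uint64
}

type topK struct {
	k      int
	cm     *countMin
	heap   []entry        // min-heap ordered by count
	inHeap map[string]int // key -> index in heap
}

func (t *topK) Len() int           { return len(t.heap) }
func (t *topK) Less(i, j int) bool { return t.heap[i].count < t.heap[j].count }
func (t *topK) Swap(i, j int) {
	t.heap[i], t.heap[j] = t.heap[j], t.heap[i]
	t.inHeap[t.heap[i].key] = i
	t.inHeap[t.heap[j].key] = j
}
func (t *topK) Push(x any) {
	e := x.(entry)
	t.inHeap[e.key] = len(t.heap)
	t.heap = append(t.heap, e)
}
func (t *topK) Pop() any {
	e := t.heap[len(t.heap)-1]
	t.heap = t.heap[:len(t.heap)-1]
	delete(t.inHeap, e.key)
	return e
}

// Add records one occurrence of key and keeps at most k distinct keys on the heap.
func (t *topK) Add(key string) {
	est := t.cm.add(key, 1)
	if i, ok := t.inHeap[key]; ok {
		t.heap[i].count = est
		heap.Fix(t, i)
		return
	}
	if len(t.heap) < t.k {
		heap.Push(t, entry{key, est})
	} else if est > t.heap[0].count {
		heap.Pop(t) // evict the current minimum
		heap.Push(t, entry{key, est})
	}
}

func main() {
	t := &topK{k: 2, cm: newCountMin(4, 1024), inHeap: map[string]int{}}
	for _, key := range []string{"a", "b", "a", "c", "a", "b", "d", "b"} {
		t.Add(key)
	}
	fmt.Println(t.heap) // roughly the two heaviest keys: a and b
}
```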

Change-Id: I0553381be3fef2470897e2bd806d43396f2dbb36
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
2024-02-14 13:28:58 -05:00
Flakes Updater 7ad2bb87a6 go.mod.sri: update SRI hash for go.mod changes
Signed-off-by: Flakes Updater <noreply+flakes-updater@tailscale.com>
2024-02-13 19:39:53 -08:00
Brad Fitzpatrick 61a1644c2a go.mod, all: move away from inet.af domain seized by Taliban
Updates inetaf/tcpproxy#39

Change-Id: I7fee276b116bd08397347c6c949011d76a2842cf
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-02-13 19:21:09 -08:00
Andrew Dunham b0e96a6c39 net/dns: log more info when openresolv commands fail
Updates #11129

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ic594868ba3bc31f6d3b0721ecba4090749a81f7f
2024-02-13 20:48:54 -05:00
Nathan Woodburn 7c0651aea6 scripts/installer.sh: add tuxedoOS to the Ubuntu copies
Signed-off-by: Nathan Woodburn <github@nathan.woodburn.au>
2024-02-13 15:37:15 -08:00
Patrick O'Doherty 256ecd0e8f
Revert "tsweb: update ServeMux matching to 1.22.0 syntax (#11090)" (#11125)
This reverts commit 30c9189ed3.

Signed-off-by: Patrick O'Doherty <patrick@tailscale.com>
2024-02-13 10:49:36 -08:00
Aaron Klotz f7acbefbbb wgengine/router: make the Windows ifconfig implementation reuse existing MibIPforwardRow2 when possible
Looking at profiles, we spend a lot of time in winipcfg.LUID.DeleteRoute
looking up the routing table entry for the provided RouteData.

But we already have the row! We previously obtained that data via the full
table dump we did in getInterfaceRoutes. We can make this a lot faster by
hanging onto a reference to the winipcfg.MibIPforwardRow2 and executing
the delete operation directly on that.

Fixes #11123

Signed-off-by: Aaron Klotz <aaron@tailscale.com>
2024-02-13 11:17:01 -07:00
Patrick O'Doherty 30c9189ed3
tsweb: update ServeMux matching to 1.22.0 syntax (#11090)
* tsweb: update ServeMux matching to 1.22.0 syntax

Updates #cleanup

Go 1.22.0 introduced the ability to use more expressive routing patterns
that include HTTP method when constructing ServeMux entries.
Applications that attempted to use these patterns in combination with
the old `tsweb.Debugger` would experience a panic as Go would not permit
the use of matching rules with mixed level of specificity. We now
specify the method for each `/debug` handler to prevent
incompatibilities.
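
A minimal sketch of the Go 1.22 method-qualified ServeMux patterns involved here; the paths and handlers are illustrative, not tsweb's actual debug routes:

```
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	// Specifying the method on each /debug handler avoids the pattern-conflict
	// panic described above.
	mux.HandleFunc("GET /debug/vars", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "vars")
	})
	mux.HandleFunc("POST /debug/gc", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "gc triggered")
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", mux))
}
```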

Signed-off-by: Patrick O'Doherty <patrick@tailscale.com>
2024-02-13 09:56:00 -08:00
Irbe Krumina 5bd19fd3e3
cmd/k8s-operator,k8s-operator: proxy configuration mechanism via a new ProxyClass custom resource (#11074)
* cmd/k8s-operator,k8s-operator: introduce proxy configuration mechanism via ProxyClass custom resource.

ProxyClass custom resource can be used to specify customizations
for the proxy resources created by the operator.

Add a reconciler that validates ProxyClass resources
and sets a Ready condition to True or False with a corresponding reason and message.
This is required because some fields (labels and annotations)
require complex validations that cannot be performed at custom resource apply time.
Reconcilers that use the ProxyClass to configure proxy resources are expected to
verify that the ProxyClass is Ready and not proceed with resource creation
if configuration from a ProxyClass that is not yet Ready is required.

If a tailscale ingress/egress Service is annotated with a tailscale.com/proxy-class annotation, look up the corresponding ProxyClass and, if it is Ready, apply the configuration from the ProxyClass to the proxy's StatefulSet.

If a tailscale Ingress has a tailscale.com/proxy-class annotation
and the referenced ProxyClass custom resource is available and Ready,
apply configuration from the ProxyClass to the proxy resources
that will be created for the Ingress.

Add a new .proxyClass field to the Connector spec.
If connector.spec.proxyClass is set to a ProxyClass that is available and Ready,
apply configuration from the ProxyClass to the proxy resources created for the Connector.

Ensure that when Helm chart is packaged, the ProxyClass yaml is added to chart templates. Ensure that static manifest generator adds ProxyClass yaml to operator.yaml. Regenerate operator.yaml


Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-02-13 05:27:54 +00:00
Brad Fitzpatrick f7f496025a types/views: add test that LenIter doesn't allocate
For a second we thought this was allocating but we were looking
at a CPU profile (which showed calls to mallocgc via makeslice)
instead of the alloc profile.

Updates golang/go#65685 (which if fixed wouldn't have confused us)

Change-Id: Ic0132310d52d8a65758a516142525339aa23b1ed
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-02-12 18:04:06 -08:00
Percy Wegmann c42a4e407a tailfs: listen for local clients only on 100.100.100.100
FileSystemForLocal was listening on the node's Tailscale address,
which potentially exposes the user's view of TailFS shares to other
Tailnet users. Remote nodes should connect to exported shares via
the peerapi.

This removes that code so that FileSystemForLocal is only available
on 100.100.100.100:8080.

Updates tailscale/corp#16827

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-02-12 14:08:00 -06:00
Percy Wegmann d0ef3a25df cmd/tailscale: hide share subcommand
Fixes #1115

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-02-12 14:03:01 -06:00
David Anderson 58b8f78e7e flake.nix: build tailscale with go 1.22
Updates #cleanup

Signed-off-by: David Anderson <danderson@tailscale.com>
2024-02-11 20:43:40 -08:00
Maisem Ali 370ecb4654 tailcfg: remove UserProfile.Groups
Removing as per go/group-all-the-things.

Updates tailscale/corp#17445

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2024-02-11 09:44:11 -08:00
Andrew Dunham c1c50cfcc0 util/cloudenv: add support for DigitalOcean
Updates #4984

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ib229eb40af36a80e6b0fd1dd0cabb07f0d50a7d1
2024-02-10 14:36:20 -05:00
Percy Wegmann 55b372a79f tailscaled: revert to using pointers for subcommands
As part of #10631, we stopped using function pointers for subcommands,
preventing us from registering platform-specific installSystemDaemon
and uninstallSystemDaemon subcommands.

Fixes #11099

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-02-10 11:55:24 -06:00
Percy Wegmann 87154a2f88 tailfs: fix startup issues on windows
Starts TailFS for Windows too, initializes shares on startup.

Updates tailscale/corp#16827

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-02-09 22:13:27 -06:00
Percy Wegmann ddcffaef7a tailfs: disable TailFSForLocal via policy
Adds support for node attribute tailfs:access. If this attribute is
not present, Tailscale will not accept connections to the local TailFS
server at 100.100.100.100:8080.

Updates tailscale/corp#16827

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-02-09 20:00:42 -06:00
Percy Wegmann abab0d4197 tailfs: clean up naming and package structure
- Restyles tailfs -> tailFS
- Defines interfaces for main TailFS types
- Moves implementation of TailFS into tailfsimpl package

Updates tailscale/corp#16827

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-02-09 20:00:42 -06:00
dependabot[bot] 79b547804b build(deps-dev): bump vite from 4.4.9 to 4.5.2 in /client/web
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 4.4.9 to 4.5.2.
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/v4.5.2/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/v4.5.2/packages/vite)

---
updated-dependencies:
- dependency-name: vite
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-02-09 18:42:56 -05:00
James Tucker 24bac27632 util/rands: add Shuffle and Perm functions with on-stack RNG state
The new math/rand/v2 package includes an m-local global random number
generator that can not be reseeded by the user, which is suitable for
most uses without the RNG pools we have in a number of areas of the code
base.

The new API still does not have an allocation-free way of performing
seeded operations, due to the long-term compiler bug around interface
parameter escapes, and the Source interface.

This change introduces the two APIs that math/rand/v2 can not yet
replace efficiently: seeded Perm() and Shuffle() operations. This
implementation chooses to use the PCG random source from math/rand/v2,
as with sufficient compiler optimization, this source should boil down
to only two on-stack registers for random state under ideal conditions.
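
A minimal sketch using the standard math/rand/v2 API rather than the util/rands helpers: a seeded PCG source driving Shuffle and Perm deterministically:

```
package main

import (
	"fmt"
	"math/rand/v2"
)

func main() {
	rng := rand.New(rand.NewPCG(42, 7)) // two uint64 seeds, small on-stack state
	fmt.Println(rng.Perm(5))            // deterministic permutation of 0..4

	xs := []string{"a", "b", "c", "d"}
	rng.Shuffle(len(xs), func(i, j int) { xs[i], xs[j] = xs[j], xs[i] })
	fmt.Println(xs)
}
```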

Updates #17243

Signed-off-by: James Tucker <james@tailscale.com>
2024-02-09 15:19:27 -08:00
Sonia Appasamy 2bb837a9cf client/web: only check policy caps for tagged nodes
For user-owned nodes, only the owner is ever allowed to manage the
node.

Updates tailscale/corp#16695

Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
2024-02-09 18:17:14 -05:00
Andrew Dunham 6f6383f69e .github: fuzzing is now unbroken
Updates #cleanup

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I10dca601c79411b412180a46b3f82136e40544b0
2024-02-09 17:02:58 -05:00
Keisuke Umegaki 7039c06d9b
fix toolchain not available error (#11083)
Relates to golang/go#62278
Updates #11058

Signed-off-by: keisku <keisuke.umegaki.630@gmail.com>
2024-02-09 16:46:32 -05:00
Patrick O'Doherty 7c52b27daf
Revert "tsweb: update ServeMux matching to 1.22.0 syntax (#11087)" (#11089)
This reverts commit 291f91d164.

Updates #cleanup

This PR needs additional changes to the registration of child handlers under /debug

Signed-off-by: Patrick O'Doherty <patrick@tailscale.com>
2024-02-09 12:23:49 -08:00
Patrick O'Doherty 291f91d164
tsweb: update ServeMux matching to 1.22.0 syntax (#11087)
Updates #cleanup

Go 1.22.0 introduced the ability to use more expressive routing patterns
that include HTTP method when constructing ServeMux entries.
Applications that attempted to use these patterns in combination with
the old `tsweb.Debugger` would experience a panic as Go would not permit
the use of matching rules with mixed level of specificity.

Signed-off-by: Patrick O'Doherty <patrick@tailscale.com>
2024-02-09 11:27:05 -08:00
Jenny Zhang c446451bfa cmd/gitops-pusher: only use OAuth creds if non-empty string
`os.LookupEnv` may return true if the variable is present in
the environment but set to an empty string. We should only attempt
to set the OAuth config if those values are non-empty.

Updates gitops-acl-action#33

Signed-off-by: Jenny Zhang <jz@tailscale.com>
2024-02-09 10:55:59 -05:00
Percy Wegmann 993acf4475 tailfs: initial implementation
Add a WebDAV-based folder sharing mechanism that is exposed to local clients at
100.100.100.100:8080 and to remote peers via a new peerapi endpoint at
/v0/tailfs.

Add the ability to manage folder sharing via the new 'share' CLI sub-command.

Updates tailscale/corp#16827

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-02-09 09:13:51 -06:00
Joe Tsai 2e404b769d
all: use new AppendEncode methods available in Go 1.22 (#11079)
Updates #cleanup
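
A tiny illustration of the Go 1.22 AppendEncode methods on the standard encoders:

```
package main

import (
	"encoding/base64"
	"encoding/hex"
	"fmt"
)

func main() {
	buf := make([]byte, 0, 64)
	buf = base64.StdEncoding.AppendEncode(buf, []byte("hi"))
	buf = append(buf, ' ')
	buf = hex.AppendEncode(buf, []byte("hi"))
	fmt.Println(string(buf)) // "aGk= 6869"
}
```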

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2024-02-08 17:55:03 -08:00
Joe Tsai 94a4f701c2
all: use reflect.TypeFor now available in Go 1.22 (#11078)
Updates #cleanup
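
A tiny illustration of reflect.TypeFor, new in Go 1.22:

```
package main

import (
	"fmt"
	"reflect"
	"time"
)

func main() {
	// Before: reflect.TypeOf((*time.Duration)(nil)).Elem()
	fmt.Println(reflect.TypeFor[time.Duration]()) // time.Duration
}
```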

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2024-02-08 17:34:22 -08:00
Joe Tsai efddad7d7d
util/deephash: cleanup TODO in TestHash (#11080)
Updates #cleanup

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2024-02-08 17:33:25 -08:00
Charlotte Brandhorst-Satzkorn 0f042b9814
cmd/tailscale/cli: fix exit node status output (#11076)
This change fixes the format of tailscale status output when location
based exit nodes are present.

Fixes #11065

Signed-off-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
2024-02-08 14:29:01 -08:00
Andrea Gottardo 6c79f55d48
ipnlocal: force-regen new authURL when it is too old (#10971)
Fixes tailscale/support-escalations#23.

authURLs returned by control expire 1 hour after creation. A customer reported that the Tailscale client on macOS would send users to a stale authentication page when clicking on the `Login...` menu item. This can happen when clicking on Login after leaving the device unattended for several days. The device key expires, leading to the creation of a new authURL; however, the client doesn't keep track of when the authURL was created, meaning that `login-interactive` would send the user to an authURL that had expired server-side long before.

This PR ensures that whenever `login-interactive` is called via LocalAPI, an authURL that is too old won't be used. We force control to give us a new authURL whenever it's been more than 30 minutes since the last authURL was sent down from control.

Apply suggestions from code review

Set interval to 6 days and 23 hours

Signed-off-by: Andrea Gottardo <andrea@tailscale.com>
Signed-off-by: Andrea Gottardo <andrea@gottardo.me>
2024-02-08 13:04:01 -08:00
Sonia Appasamy 1217f655c0 cmd/dist: update logs for synology builds
Update logs for synology builds to more clearly callout which variant
is being built. The two existing variants are:

1. Sideloaded (can be manually installed on a device by anyone)
2. Package center distribution (by the tailscale team)

Updates #cleanup

Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
2024-02-08 14:36:55 -05:00
OSS Updater 664b861cd4 go.mod: update web-client-prebuilt module
Signed-off-by: OSS Updater <noreply+oss-updater@tailscale.com>
2024-02-08 10:57:10 -08:00
Will Norris 6f0c5e0c05 client/web: use smart quotes in web UI frontend
add the curly-quotes eslint plugin (same that we use for the admin
panel), and fix existing straight quotes in the current web UI.

Updates #cleanup

Signed-off-by: Will Norris <will@tailscale.com>
2024-02-08 10:13:46 -08:00
Will Norris 128c99d4ae client/web: add new readonly mode
The new read-only mode is only accessible when running `tailscale web`
by passing a new `-readonly` flag. This new mode is identical to the
existing login mode with two exceptions:

 - the management client in tailscaled is not started (though if it is
   already running, it is left alone)

 - the client does not prompt the user to login or switch to the
   management client. Instead, a message is shown instructing the user
   to use other means to manage the device.

Updates #10979

Signed-off-by: Will Norris <will@tailscale.com>
2024-02-08 10:11:23 -08:00
License Updater 9f0eaa4464 licenses: update win/apple licenses
Signed-off-by: License Updater <noreply+license-updater@tailscale.com>
2024-02-08 08:42:15 -08:00
License Updater 78f257d9f8 licenses: update android licenses
Signed-off-by: License Updater <noreply+license-updater@tailscale.com>
2024-02-08 08:41:12 -08:00
License Updater 5486d8aaf9 licenses: update tailscale{,d} licenses
Signed-off-by: License Updater <noreply+license-updater@tailscale.com>
2024-02-08 08:40:53 -08:00
Irbe Krumina a6cc2fdc3e
cmd/{containerboot,k8s-operator/deploy/manifests}: optionally allow proxying cluster traffic to a cluster target via ingress proxy (#11036)
* cmd/containerboot,cmd/k8s-operator/deploy/manifests: optionally forward cluster traffic via ingress proxy.

If a tailscale Ingress has tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation, configure the associated ingress proxy to have its tailscale serve proxy to listen on Pod's IP address. This ensures that cluster traffic too can be forwarded via this proxy to the ingress backend(s).

In containerboot, if EXPERIMENTAL_PROXY_CLUSTER_TRAFFIC_VIA_INGRESS is set to true
and the node is a Kubernetes operator ingress proxy configured via an Ingress,
make sure that traffic from within the cluster can be proxied to the ingress target.

Updates tailscale/tailscale#10499

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-02-08 06:45:42 +00:00
Flakes Updater 2404b1444e go.mod.sri: update SRI hash for go.mod changes
Signed-off-by: Flakes Updater <noreply+flakes-updater@tailscale.com>
2024-02-07 18:18:32 -08:00
Brad Fitzpatrick c424e192c0 .github/workflows: temporarily disable broken oss-fuzz action
Updates #11064
Updates #11058

Change-Id: I63acc13dece3379a0b2df573afecfd245b7cd6c2
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-02-07 18:10:15 -08:00
Brad Fitzpatrick 2bd3c1474b util/cmpx: delete now that we're using Go 1.22
Updates #11058

Change-Id: I09dea8e86f03ec148b715efca339eab8b1f0f644
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-02-07 18:10:15 -08:00
Brad Fitzpatrick 5ea071186e Dockerfile: use Go 1.22
Updates #11058

Change-Id: I0f63be498be33d71bd90b7956f9fe9666fd7a696
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-02-07 18:10:15 -08:00
Brad Fitzpatrick 9612001cf4 .github/workflows: update golangci-lint for Go 1.22
Updates #11058

Change-Id: I3785c1f1bea4a4663e7e5fb6d209d3caedae436d
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-02-07 18:10:15 -08:00
Brad Fitzpatrick b6153efb7d go.mod, README.md: use Go 1.22
Updates #11058

Change-Id: I95eecdc7afe2b5f8189016fdb8a773f78e9f5c42
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-02-07 18:10:15 -08:00
Tom DNetto 653721541c tsweb: normalize passkey identities in bucketed stats
Signed-off-by: Tom DNetto <tom@tailscale.com>
Updates: corp#17075
2024-02-07 16:02:36 -08:00
Tom DNetto 8d6d9d28ba tsweb: normalize common StableID's in bucketed stats, export as LabelMap
Signed-off-by: Tom DNetto <tom@tailscale.com>
Updates: corp#17075
2024-02-07 15:36:39 -08:00
James Tucker e0762fe331 words: add a list of things you should yahoo!
Updates #self

Signed-off-by: James Tucker <james@tailscale.com>
2024-02-07 14:47:20 -08:00
James Tucker 0b16620b80 .github/workflows: add privileged tests workflow
We had missed regressions from privileged tests not running, now they
can run.

Updates #cleanup
Signed-off-by: James Tucker <james@tailscale.com>
2024-02-07 14:45:22 -08:00
James Tucker 0f5e031133 appc: optimize dns response observation for large route tables
Advertise DNS discovered addresses as a single preference update rather
than one at a time.

Sort the list of observed addresses and use binary search to consult the
list.
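
A minimal sketch of the sorted-slice-plus-binary-search approach using the standard slices package, not the appc implementation itself:

```
package main

import (
	"fmt"
	"net/netip"
	"slices"
)

func main() {
	addrs := []netip.Addr{
		netip.MustParseAddr("10.0.0.5"),
		netip.MustParseAddr("10.0.0.1"),
		netip.MustParseAddr("192.168.1.9"),
	}
	slices.SortFunc(addrs, netip.Addr.Compare)

	target := netip.MustParseAddr("10.0.0.5")
	_, found := slices.BinarySearchFunc(addrs, target, netip.Addr.Compare)
	fmt.Println(found) // true
}
```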

Updates tailscale/corp#16636

Signed-off-by: James Tucker <james@tailscale.com>
2024-02-07 14:11:41 -08:00
Andrew Lytvynov db3776d5bf
go.toolchain.rev: bump to Go 1.22.0 (#11055)
Updates #cleanup

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2024-02-07 14:57:57 -07:00
Tom DNetto af931dcccd tsweb: replace domains/emails in paths when bucketing stats
Signed-off-by: Tom DNetto <tom@tailscale.com>
Updates: corp#17075
2024-02-07 13:31:59 -08:00
Tom DNetto 36efc50817 tsweb: implementing bucketed statistics for started/finished counts
Signed-off-by: Tom DNetto <tom@tailscale.com>
Updates: corp#17075
2024-02-06 16:54:17 -08:00
Maisem Ali b752bde280 types/views: add SliceMapKey[T]
views.Slice values are meant to be immutable, and when used that way it
is at times desirable to use them as a map key. For non-viewed slices this
was achievable by creating a custom key struct, but views.Slice didn't
allow for the same, so add a method here that creates such a struct.

Updates tailscale/corp#17122

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2024-02-06 12:50:28 -08:00
kari-ts 5595b61b96
ipn/localapi: more http status cleanup (#10995)
Use http.StatusOK instead of 200

Updates #cleanup
2024-02-05 09:34:41 -08:00
Chris Palmer a633a30711
cmd/hello: link to the Hello KB article (#11022)
Fixes https://github.com/tailscale/corp/issues/17104

Signed-off-by: Chris Palmer <cpalmer@tailscale.com>
2024-02-02 15:48:31 -08:00
Joe Tsai 60657ac83f
util/deephash: tighten up SelfHasher API (#11012)
Providing a hash.Block512 is an implementation detail of how deephash
works today, but providing an opaque type with mostly equivalent API
(i.e., HashUint8, HashBytes, etc. methods) is still sensible.
Thus, define a public Hasher type that exposes exactly the API
that an implementation of SelfHasher would want to call.
This gives us freedom to change the hashing algorithm of deephash
at some point in the future.

Also, this type is likely going to be called by types that are
going to memoize their own hash results, we additionally add
a HashSum method to simplify this use case.

Add documentation to SelfHasher on how a type might implement it.

Updates: corp#16409

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2024-02-01 17:07:41 -08:00
Joe Tsai 84f8311bcd
util/deephash: document pathological deephash behavior (#11010)
Updates #cleanup

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2024-02-01 13:49:36 -08:00
James Tucker ba70cbb930 ipn/ipnlocal: fix app connector route advertisements on exit nodes
If an app connector is also configured as an exit node, it should still
advertise discovered routes that are not covered by advertised routes,
excluding the exit node routes.

Updates tailscale/corp#16928

Signed-off-by: James Tucker <james@tailscale.com>
2024-02-01 11:56:24 -08:00
James Tucker e1a4b89dbe appc,ipn/ipnlocal: add app connector routes if any part of a CNAME chain is routed
If any domain along a CNAME chain matches any of the routed domains, add
routes for the discovered domains.

Fixes tailscale/corp#16928

Signed-off-by: James Tucker <james@tailscale.com>
2024-02-01 11:43:07 -08:00
Tom DNetto 2aeef4e610 util/deephash: implement SelfHasher to allow types to hash themselves
Updates: corp#16409
Signed-off-by: Tom DNetto <tom@tailscale.com>
2024-02-01 10:18:03 -08:00
James Tucker b4b2ec7801 ipn/ipnlocal: fix pretty printing of multi-record peer DNS results
The API on the DNS record parser is slightly subtle and requires
explicit handling of unhandled records. Failure to advance previously
resulted in an infinite loop in the pretty responder for any reply that
contains a record other than A/AAAA/TXT.

Updates tailscale/corp#16928

Signed-off-by: James Tucker <james@tailscale.com>
2024-01-31 15:59:17 -08:00
Percy Wegmann fad6bae764 ipnlocal: log failure to get ssh host keys
When reporting ssh host keys to control, log a warning
if we're unable to get the SSH host keys.

Updates tailscale/escalations#21

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-01-30 16:57:16 -06:00
Chris Palmer 9744ad47e3
cmd/hello: avoid deprecated apis (#10957)
Updates #cleanup

Signed-off-by: Chris Palmer <cpalmer@tailscale.com>
2024-01-29 14:18:49 -08:00
Will Norris 13f8a669d5 cmd/gitops-pusher: fix logic for checking credentials
gitops-pusher supports authenticating with an API key or OAuth
credentials (added in #7393). You shouldn't ever use both of those
together, so we error if both are set.

In tailscale/gitops-acl-action#24, OAuth support is being added to the
GitHub action. In that environment, both the TS_API_KEY and OAuth
variables will be set, even if they are empty values.  This causes an
error in gitops-pusher which expects only one to be set.

Update gitops-pusher to check that only one set of environment variables
is non-empty, rather than just checking whether they are set.

Updates #7393

Signed-off-by: Will Norris <will@tailscale.com>
2024-01-29 13:01:29 -08:00
Charlotte Brandhorst-Satzkorn cce189bde1 words: i like the direction this list is taking
Updates tailscale/corp#14698

Signed-off-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
2024-01-26 19:42:37 -08:00
Andrew Lytvynov fbfc3b7e51
cmd/tailscale/cli: run Watch with NotifyNoPrivateKeys (#10950)
When running as non-root non-operator user, you get this error:
```
$ tailscale serve 8080
Access denied: watch IPN bus access denied, must set ipn.NotifyNoPrivateKeys when not running as admin/root or operator

Use 'sudo tailscale serve 8080' or 'tailscale up --operator=$USER' to not require root.
```

It should fail, but the error message is confusing.

With this fix:
```
$ tailscale serve 8080
sending serve config: Access denied: serve config denied

Use 'sudo tailscale serve 8080' or 'tailscale up --operator=$USER' to not require root.
```

Updates #cleanup

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2024-01-25 14:59:34 -07:00
James Tucker 0f3b2e7b86 util/expvarx: add a time and concurrency limiting expvar.Func wrapper
expvarx.SafeFunc wraps an expvar.Func with a time limit. On reaching the
time limit, calls to Value return nil, and no new concurrent calls to
the underlying expvar.Func will be started until the call completes.
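
A minimal sketch of the idea, not the expvarx API: wrap an expvar.Func so slow calls time out and only one underlying call runs at a time; names here are illustrative:

```
package main

import (
	"expvar"
	"sync"
	"time"
)

type safeFunc struct {
	fn      expvar.Func
	limit   time.Duration
	mu      sync.Mutex // guards running
	running bool
}

// Value returns the wrapped value, or nil if the call times out or another
// call is still in flight.
func (s *safeFunc) Value() any {
	s.mu.Lock()
	if s.running {
		s.mu.Unlock()
		return nil // a call is already in flight; don't pile on
	}
	s.running = true
	s.mu.Unlock()

	done := make(chan any, 1)
	go func() {
		v := s.fn.Value()
		s.mu.Lock()
		s.running = false
		s.mu.Unlock()
		done <- v
	}()

	select {
	case v := <-done:
		return v
	case <-time.After(s.limit):
		return nil // timed out; running is cleared once the call finishes
	}
}

func main() {
	slow := expvar.Func(func() any { time.Sleep(2 * time.Second); return 42 })
	sf := &safeFunc{fn: slow, limit: 100 * time.Millisecond}
	expvar.Publish("slow_metric", expvar.Func(func() any { return sf.Value() }))
	_ = sf.Value() // returns nil after 100ms; the underlying call keeps running
}
```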

Updates tailscale/corp#16999
Signed-off-by: James Tucker <james@tailscale.com>
2024-01-24 17:47:16 -08:00
Andrew Dunham fd94d96e2b net/portmapper: support legacy "urn:dslforum-org" portmapping services
These are functionally the same as the "urn:schemas-upnp-org" services
with a few minor changes, and are still used by older devices. Support
them to improve our ability to obtain an external IP on such networks.

Updates #10911

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I05501fad9d6f0a3b8cf19fc95eee80e7d16cc2cf
2024-01-23 21:29:29 -05:00
Irbe Krumina 75f1d3e7d7
ipn/ipnlocal: fix failing test (#10937)
Updates #cleanup

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-01-23 19:02:07 +00:00
Irbe Krumina 6ee956333f
ipn/ipnlocal: fix proxy path that matches mount point (#10864)
Don't append a trailing slash to a request path
to the reverse proxy that matches the mount point exactly.

Updates tailscale/tailscale#10730

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-01-23 18:12:56 +00:00
Jordan Whited 8b47322acc
wgengine/magicsock: implement probing of UDP path lifetime (#10844)
This commit implements probing of UDP path lifetime on the tail end of
an active direct connection. Probing configuration has two parts -
Cliffs, which are various timeout cliffs of interest, and
CycleCanStartEvery, which limits how often a probing cycle can start,
per-endpoint. Initially a statically defined default configuration will
be used. The default configuration has cliffs of 10s, 30s, and 60s,
with a CycleCanStartEvery of 24h. Probing results are communicated via
clientmetric counters. Probing is off by default, and can be enabled
via control knob. Probing is purely informational and does not yet
drive any magicsock behaviors.

Updates #540

Signed-off-by: Jordan Whited <jordan@tailscale.com>
2024-01-23 09:37:32 -08:00
James Tucker 0e2cb76abe appc: add test to ensure that individual IPs are not removed during route updates
If control advised the connector to advertise a route that had already
been discovered by DNS it would be incorrectly removed. Now those routes
are preserved.

Updates tailscale/corp#16833

Signed-off-by: James Tucker <james@tailscale.com>
2024-01-22 17:50:55 -08:00
Charlotte Brandhorst-Satzkorn ce4553b988 appc,ipn/ipnlocal: optimize preference adjustments when routes update
This change allows us to perform batch modification for new route
advertisements and route removals. Additionally, we now handle the case
where newly added routes are covered by existing ranges.

This change also introduces a new appctest package that contains some
shared functions used for testing.

Updates tailscale/corp#16833

Signed-off-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
2024-01-22 17:37:16 -08:00
Irbe Krumina 370ec6b46b
cmd/k8s-operator: don't proceed with Ingress that has no valid backends (#10919)
Do not provision resources for a tailscale Ingress that has no valid backends.

Updates tailscale/tailscale#10910

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-01-22 19:20:23 +00:00
Andrew Dunham b45089ad85 net/portmapper: handle cases where we have no supported clients
This no longer results in a nil pointer exception when we get a valid
UPnP response with no supported clients.

Updates #10911

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I6e3715a49a193ff5261013871ad7fff197a4d77e
2024-01-22 12:46:19 -05:00
James Tucker 4e822c031f go.toolchain.rev: bump Tailscale Go version to 1.21.6
Updates tailscale/go#83

Signed-off-by: James Tucker <james@tailscale.com>
2024-01-19 18:30:35 -08:00
Flakes Updater b787c27c00 go.mod.sri: update SRI hash for go.mod changes
Signed-off-by: Flakes Updater <noreply+flakes-updater@tailscale.com>
2024-01-19 18:24:58 -08:00
James Tucker 7e3bcd297e go.mod,wgengine/netstack: bump gvisor
Updates #8043

Signed-off-by: James Tucker <james@tailscale.com>
2024-01-19 18:23:53 -08:00
David Anderson 17eae5b0d3 tool/gocross: force use of our custom toolchain
The new 'toolchain' directive in go.mod can sometimes force
the use of an upstream toolchain against our wishes. Concurrently,
some of our dependencies have added the 'toolchain' directive, which
transitively adds it to our own go.mod. Force all uses of gocross to
ignore that directive and stick to our customized toolchain.

Updates #cleanup

Signed-off-by: David Anderson <danderson@tailscale.com>
2024-01-19 18:23:21 -08:00
David Anderson ae79b2e784 tsweb: add a helper to validate redirect URLs
We issue redirects in a few different places, it's time to have
a common helper to do target validation.
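
A minimal sketch of redirect-target validation, not the tsweb helper itself: accept only same-site relative paths or absolute URLs on an allow-listed host (the hostnames are hypothetical):

```
package main

import (
	"fmt"
	"net/url"
	"strings"
)

func validRedirect(target, allowedHost string) bool {
	u, err := url.Parse(target)
	if err != nil {
		return false
	}
	if u.Scheme == "" && u.Host == "" {
		// Relative path: must not be scheme-relative ("//evil.example").
		return strings.HasPrefix(target, "/") && !strings.HasPrefix(target, "//")
	}
	return (u.Scheme == "https" || u.Scheme == "http") && u.Host == allowedHost
}

func main() {
	fmt.Println(validRedirect("/admin", "login.example.com"))                      // true
	fmt.Println(validRedirect("//evil.example", "login.example.com"))              // false
	fmt.Println(validRedirect("https://login.example.com/x", "login.example.com")) // true
}
```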

Updates tailscale/corp#16875

Signed-off-by: David Anderson <danderson@tailscale.com>
2024-01-19 17:57:54 -08:00
Claire Wang 213d696db0
magicsock: mute noisy expected peer mtu related error (#10870) 2024-01-19 20:04:22 -05:00
kari-ts 62b056d677
VERSION.txt: this is v1.59.0 (#10884)
* VERSION.txt: this is v1.58.0

Signed-off-by: kari-ts <kari@tailscale.com>

* VERSION.txt: this is v1.59.0

---------

Signed-off-by: kari-ts <kari@tailscale.com>
2024-01-19 17:03:50 -08:00
Flakes Updater 5b4eb47300 go.mod.sri: update SRI hash for go.mod changes
Signed-off-by: Flakes Updater <noreply+flakes-updater@tailscale.com>
2024-01-19 16:53:05 -08:00
James Tucker 457102d070 go.mod: bump most deps for start of cycle
Plan9 CI is disabled. 3p dependencies do not build for the target.
Contributor enthusiasm appears to have ceased again, and no usage has
been made.

Skipped gvisor, nfpm, and k8s.

Updates #5794
Updates #8043

Signed-off-by: James Tucker <james@tailscale.com>
2024-01-19 16:51:39 -08:00
Andrew Dunham 7a0392a8a3 wgengine/netstack: expose gVisor metrics through expvar
When tailscaled is run with "-debug 127.0.0.1:12345", these metrics are
available at:
    http://localhost:12345/debug/metrics

Updates #8210

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I19db6c445ac1f8344df2bc1066a3d9c9030606f8
2024-01-19 18:36:54 -05:00
as2643 832e5c781d
util/nocasemaps: add AppendSliceElem method to nocasemaps (#10871)
Updates #7667

Signed-off-by: Anishka Singh <anishkasingh66@gmail.com>
2024-01-19 15:30:12 -08:00
ChandonPierre 2ce596ea7a
cmd/k8s-operator/deploy: allow modifying operator tags via Helm values
Updates tailscale/tailscale#10659

Signed-off-by: Chandon Pierre <cpierre@coreweave.com>
2024-01-19 21:22:23 +00:00
Andrew Dunham 2ac7c0161b util/slicesx: add Filter function
For use in corp, where we appear to have re-implemented this in a few
places with varying signatures.
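
A minimal generic sketch of the idea; the signature is assumed for illustration and the real util/slicesx signature may differ:

```
package main

import "fmt"

// Filter returns the elements of src for which keep reports true.
func Filter[T any](src []T, keep func(T) bool) []T {
	dst := make([]T, 0, len(src))
	for _, v := range src {
		if keep(v) {
			dst = append(dst, v)
		}
	}
	return dst
}

func main() {
	evens := Filter([]int{1, 2, 3, 4, 5}, func(n int) bool { return n%2 == 0 })
	fmt.Println(evens) // [2 4]
}
```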

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Id863a87e674f3caa87945519be8e09650e9c1d76
2024-01-19 11:17:05 -05:00
Irbe Krumina 2aec4f2c43
./github/workflows/kubemanifests.yaml: fix the paths whose changes should trigger test runs (#10885)
Updates #cleanup

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-01-19 01:51:10 +00:00
James Tucker 8250582fe6 ipn/ipnlocal: make app connector configuration concurrent
If there are route changes as a side effect of an app connector
configuration update, the connector configuration may want to re-enter a
lock, so it must be started asynchronously.

Updates tailscale/corp#16833
Signed-off-by: James Tucker <james@tailscale.com>
2024-01-18 12:26:58 -08:00
James Tucker 38a1cf748a control/controlclient,util/execqueue: extract execqueue into a package
This is a useful primitive for asynchronous execution of ordered work I
want to use in another change.

Updates tailscale/corp#16833
Signed-off-by: James Tucker <james@tailscale.com>
2024-01-18 12:08:13 -08:00
689 changed files with 45154 additions and 10165 deletions

View File

@ -47,6 +47,12 @@ jobs:
- name: Checkout repository
uses: actions/checkout@v4
# Install a more recent Go that understands modern go.mod content.
- name: Install Go
uses: actions/setup-go@v4
with:
go-version-file: go.mod
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v2

View File

@ -1,64 +0,0 @@
name: go-licenses
on:
# run action when a change lands in the main branch which updates go.mod or
# our license template file. Also allow manual triggering.
push:
branches:
- main
paths:
- go.mod
- .github/licenses.tmpl
- .github/workflows/go-licenses.yml
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
update-licenses:
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version-file: go.mod
- name: Install go-licenses
run: |
go install github.com/google/go-licenses@v1.2.2-0.20220825154955-5eedde1c6584
- name: Run go-licenses
env:
# include all build tags to include platform-specific dependencies
GOFLAGS: "-tags=android,cgo,darwin,freebsd,ios,js,linux,openbsd,wasm,windows"
run: |
[ -d licenses ] || mkdir licenses
go-licenses report tailscale.com/cmd/tailscale tailscale.com/cmd/tailscaled > licenses/tailscale.md --template .github/licenses.tmpl
- name: Get access token
uses: tibdex/github-app-token@b62528385c34dbc9f38e5f4225ac829252d1ea92 # v1.8.0
id: generate-token
with:
app_id: ${{ secrets.LICENSING_APP_ID }}
installation_id: ${{ secrets.LICENSING_APP_INSTALLATION_ID }}
private_key: ${{ secrets.LICENSING_APP_PRIVATE_KEY }}
- name: Send pull request
uses: peter-evans/create-pull-request@284f54f989303d2699d373481a0cfa13ad5a6666 #v5.0.1
with:
token: ${{ steps.generate-token.outputs.token }}
author: License Updater <noreply+license-updater@tailscale.com>
committer: License Updater <noreply+license-updater@tailscale.com>
branch: licenses/cli
commit-message: "licenses: update tailscale{,d} licenses"
title: "licenses: update tailscale{,d} licenses"
body: Triggered by ${{ github.repository }}@${{ github.sha }}
signoff: true
delete-branch: true
team-reviewers: opensource-license-reviewers

View File

@ -34,7 +34,7 @@ jobs:
# Note: this is the 'v3' tag as of 2023-08-14
uses: golangci/golangci-lint-action@639cd343e1d3b897ff35927a75193d57cfcba299
with:
version: v1.54.2
version: v1.56
# Show only new issues if it's a pull request.
only-new-issues: true

View File

@ -32,7 +32,6 @@ jobs:
- "ubuntu:18.04"
- "ubuntu:20.04"
- "ubuntu:22.04"
- "ubuntu:22.10"
- "ubuntu:23.04"
- "elementary/docker:stable"
- "elementary/docker:unstable"
@ -91,7 +90,10 @@ jobs:
|| contains(matrix.image, 'parrotsec')
|| contains(matrix.image, 'kalilinux')
- name: checkout
uses: actions/checkout@v4
# We cannot use v4, as it requires a newer glibc version than some of the
# tested images provide. See
# https://github.com/actions/checkout/issues/1487
uses: actions/checkout@v3
- name: run installer
run: scripts/installer.sh
# Package installation can fail in docker because systemd is not running

View File

@ -2,8 +2,8 @@ name: "Kubernetes manifests"
on:
pull_request:
paths:
- './cmd/k8s-operator/'
- './k8s-operator/'
- 'cmd/k8s-operator/**'
- 'k8s-operator/**'
- '.github/workflows/kubemanifests.yaml'
# Cancel workflow run if there is a newer push to the same PR for which it is

View File

@ -183,6 +183,19 @@ jobs:
# the equals signs cause great confusion.
run: go test ./... -bench . -benchtime 1x -run "^$"
privileged:
runs-on: ubuntu-22.04
container:
image: golang:latest
options: --privileged
steps:
- name: checkout
uses: actions/checkout@v4
- name: chown
run: chown -R $(id -u):$(id -g) $PWD
- name: privileged tests
run: ./tool/go test ./util/linuxfw
vm:
runs-on: ["self-hosted", "linux", "vm"]
# VM tests run with some privileges, don't let them run on 3p PRs.
@ -193,9 +206,9 @@ jobs:
- name: Run VM tests
run: ./tool/go test ./tstest/integration/vms -v -no-s3 -run-vm-tests -run=TestRunUbuntu2004
env:
HOME: "/tmp"
HOME: "/var/lib/ghrunner/home"
TMPDIR: "/tmp"
XDB_CACHE_HOME: "/var/lib/ghrunner/cache"
XDG_CACHE_HOME: "/var/lib/ghrunner/cache"
race-build:
runs-on: ubuntu-22.04
@ -241,9 +254,6 @@ jobs:
goarch: amd64
- goos: openbsd
goarch: amd64
# Plan9
- goos: plan9
goarch: amd64
runs-on: ubuntu-22.04
steps:
@ -292,6 +302,47 @@ jobs:
GOOS: ios
GOARCH: arm64
crossmin: # cross-compile for platforms where we only check cmd/tailscale{,d}
strategy:
fail-fast: false # don't abort the entire matrix if one element fails
matrix:
include:
# Plan9
- goos: plan9
goarch: amd64
# AIX
- goos: aix
goarch: ppc64
runs-on: ubuntu-22.04
steps:
- name: checkout
uses: actions/checkout@v4
- name: Restore Cache
uses: actions/cache@v3
with:
# Note: unlike the other setups, this is only grabbing the mod download
# cache, rather than the whole mod directory, as the download cache
# contains zips that can be unpacked in parallel faster than they can be
# fetched and extracted by tar
path: |
~/.cache/go-build
~/go/pkg/mod/cache
~\AppData\Local\go-build
# The -2- here should be incremented when the scheme of data to be
# cached changes (e.g. path above changes).
key: ${{ github.job }}-${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-2-${{ hashFiles('**/go.sum') }}-${{ github.run_id }}
restore-keys: |
${{ github.job }}-${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-2-${{ hashFiles('**/go.sum') }}
${{ github.job }}-${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-2-
- name: build core
run: ./tool/go build ./cmd/tailscale ./cmd/tailscaled
env:
GOOS: ${{ matrix.goos }}
GOARCH: ${{ matrix.goarch }}
GOARM: ${{ matrix.goarm }}
CGO_ENABLED: "0"
android:
# similar to cross above, but android fails to build a few pieces of the
# repo. We should fix those pieces, they're small, but as a stepping stone,
@ -305,7 +356,7 @@ jobs:
# some Android breakages early.
# TODO(bradfitz): better; see https://github.com/tailscale/tailscale/issues/4482
- name: build some
run: ./tool/go install ./net/netns ./ipn/ipnlocal ./wgengine/magicsock/ ./wgengine/ ./wgengine/router/ ./wgengine/netstack ./util/dnsname/ ./ipn/ ./net/interfaces ./wgengine/router/ ./tailcfg/ ./types/logger/ ./net/dns ./hostinfo ./version
run: ./tool/go install ./net/netns ./ipn/ipnlocal ./wgengine/magicsock/ ./wgengine/ ./wgengine/router/ ./wgengine/netstack ./util/dnsname/ ./ipn/ ./net/netmon ./wgengine/router/ ./tailcfg/ ./types/logger/ ./net/dns ./hostinfo ./version
env:
GOOS: android
GOARCH: arm64

1
.gitignore vendored
View File

@ -9,6 +9,7 @@
cmd/tailscale/tailscale
cmd/tailscaled/tailscaled
ssh/tailssh/testcontainers/tailscaled
# Test binary, built with `go test -c`
*.test

View File

@ -31,7 +31,7 @@
# $ docker exec tailscaled tailscale status
FROM golang:1.21-alpine AS build-env
FROM golang:1.22-alpine AS build-env
WORKDIR /go/src/tailscale

View File

@ -1,5 +1,5 @@
IMAGE_REPO ?= tailscale/tailscale
SYNO_ARCH ?= "amd64"
SYNO_ARCH ?= "x86_64"
SYNO_DSM ?= "7"
TAGS ?= "latest"
@ -100,6 +100,26 @@ publishdevoperator: ## Build and publish k8s-operator image to location specifie
@test "${REPO}" != "ghcr.io/tailscale/k8s-operator" || (echo "REPO=... must not be ghcr.io/tailscale/k8s-operator" && exit 1)
TAGS="${TAGS}" REPOS=${REPO} PLATFORM=${PLATFORM} PUSH=true TARGET=operator ./build_docker.sh
publishdevnameserver: ## Build and publish k8s-nameserver image to location specified by ${REPO}
@test -n "${REPO}" || (echo "REPO=... required; e.g. REPO=ghcr.io/${USER}/tailscale" && exit 1)
@test "${REPO}" != "tailscale/tailscale" || (echo "REPO=... must not be tailscale/tailscale" && exit 1)
@test "${REPO}" != "ghcr.io/tailscale/tailscale" || (echo "REPO=... must not be ghcr.io/tailscale/tailscale" && exit 1)
@test "${REPO}" != "tailscale/k8s-nameserver" || (echo "REPO=... must not be tailscale/k8s-nameserver" && exit 1)
@test "${REPO}" != "ghcr.io/tailscale/k8s-nameserver" || (echo "REPO=... must not be ghcr.io/tailscale/k8s-nameserver" && exit 1)
TAGS="${TAGS}" REPOS=${REPO} PLATFORM=${PLATFORM} PUSH=true TARGET=k8s-nameserver ./build_docker.sh
.PHONY: sshintegrationtest
sshintegrationtest: ## Run the SSH integration tests in various Docker containers
@GOOS=linux GOARCH=amd64 go test -tags integrationtest -c ./ssh/tailssh -o ssh/tailssh/testcontainers/tailssh.test && \
GOOS=linux GOARCH=amd64 go build -o ssh/tailssh/testcontainers/tailscaled ./cmd/tailscaled && \
echo "Testing on ubuntu:focal" && docker build --build-arg="BASE=ubuntu:focal" -t ssh-ubuntu-focal ssh/tailssh/testcontainers && \
echo "Testing on ubuntu:jammy" && docker build --build-arg="BASE=ubuntu:jammy" -t ssh-ubuntu-jammy ssh/tailssh/testcontainers && \
echo "Testing on ubuntu:mantic" && docker build --build-arg="BASE=ubuntu:mantic" -t ssh-ubuntu-mantic ssh/tailssh/testcontainers && \
echo "Testing on ubuntu:noble" && docker build --build-arg="BASE=ubuntu:noble" -t ssh-ubuntu-noble ssh/tailssh/testcontainers && \
echo "Testing on fedora:38" && docker build --build-arg="BASE=dokken/fedora-38" -t ssh-fedora-38 ssh/tailssh/testcontainers && \
echo "Testing on fedora:39" && docker build --build-arg="BASE=dokken/fedora-39" -t ssh-fedora-39 ssh/tailssh/testcontainers && \
echo "Testing on fedora:40" && docker build --build-arg="BASE=dokken/fedora-40" -t ssh-fedora-40 ssh/tailssh/testcontainers
help: ## Show this help
@echo "\nSpecify a command. The choices are:\n"
@grep -hE '^[0-9a-zA-Z_-]+:.*?## .*$$' ${MAKEFILE_LIST} | awk 'BEGIN {FS = ":.*?## "}; {printf " \033[0;36m%-20s\033[m %s\n", $$1, $$2}'

View File

@ -37,7 +37,7 @@ not open source.
## Building
We always require the latest Go release, currently Go 1.21. (While we build
We always require the latest Go release, currently Go 1.22. (While we build
releases with our [Go fork](https://github.com/tailscale/go/), its use is not
required.)

View File

@ -1 +1 @@
1.57.0
1.67.0

720
api.md

File diff suppressed because it is too large

View File

@ -10,6 +10,7 @@
package appc
import (
"context"
"net/netip"
"slices"
"strings"
@ -20,17 +21,33 @@ import (
"tailscale.com/types/logger"
"tailscale.com/types/views"
"tailscale.com/util/dnsname"
"tailscale.com/util/execqueue"
"tailscale.com/util/mak"
"tailscale.com/util/slicesx"
)
// RouteAdvertiser is an interface that allows the AppConnector to advertise
// newly discovered routes that need to be served through the AppConnector.
type RouteAdvertiser interface {
// AdvertiseRoute adds a new route advertisement if the route is not already
// being advertised.
AdvertiseRoute(netip.Prefix) error
// AdvertiseRoute adds one or more route advertisements skipping any that
// are already advertised.
AdvertiseRoute(...netip.Prefix) error
// UnadvertiseRoute removes a route advertisement.
UnadvertiseRoute(netip.Prefix) error
// UnadvertiseRoute removes any matching route advertisements.
UnadvertiseRoute(...netip.Prefix) error
}
// RouteInfo is a data structure used to persist the in memory state of an AppConnector
// so that we can know, even after a restart, which routes came from ACLs and which were
// learned from domains.
type RouteInfo struct {
// Control is the routes from the 'routes' section of an app connector acl.
Control []netip.Prefix `json:",omitempty"`
// Domains are the routes discovered by observing DNS lookups for configured domains.
Domains map[string][]netip.Addr `json:",omitempty"`
// Wildcards are the configured DNS lookup domains to observe. When a DNS query matches Wildcards,
// its result is added to Domains.
Wildcards []string `json:",omitempty"`
}
// AppConnector is an implementation of an AppConnector that performs
@ -46,11 +63,14 @@ type AppConnector struct {
logf logger.Logf
routeAdvertiser RouteAdvertiser
// storeRoutesFunc will be called to persist routes if it is not nil.
storeRoutesFunc func(*RouteInfo) error
// mu guards the fields that follow
mu sync.Mutex
// domains is a map of lower case domain names with no trailing dot, to a
// list of resolved IP addresses.
// domains is a map of lower case domain names with no trailing dot, to an
// ordered list of resolved IP addresses.
domains map[string][]netip.Addr
// controlRoutes is the list of routes that were last supplied by control.
@ -58,21 +78,81 @@ type AppConnector struct {
// wildcards is the list of domain strings that match subdomains.
wildcards []string
// queue provides ordering for update operations
queue execqueue.ExecQueue
}
// NewAppConnector creates a new AppConnector.
func NewAppConnector(logf logger.Logf, routeAdvertiser RouteAdvertiser) *AppConnector {
return &AppConnector{
func NewAppConnector(logf logger.Logf, routeAdvertiser RouteAdvertiser, routeInfo *RouteInfo, storeRoutesFunc func(*RouteInfo) error) *AppConnector {
ac := &AppConnector{
logf: logger.WithPrefix(logf, "appc: "),
routeAdvertiser: routeAdvertiser,
storeRoutesFunc: storeRoutesFunc,
}
if routeInfo != nil {
ac.domains = routeInfo.Domains
ac.wildcards = routeInfo.Wildcards
ac.controlRoutes = routeInfo.Control
}
return ac
}
// UpdateDomains replaces the current set of configured domains with the
// supplied set of domains. Domains must not contain a trailing dot, and should
// be lower case. If the domain contains a leading '*' label it matches all
// subdomains of a domain.
// ShouldStoreRoutes returns true if the AppConnector was created with the control knob on
// and is storing its discovered routes persistently.
func (e *AppConnector) ShouldStoreRoutes() bool {
return e.storeRoutesFunc != nil
}
// storeRoutesLocked takes the current state of the AppConnector and persists it
func (e *AppConnector) storeRoutesLocked() error {
if !e.ShouldStoreRoutes() {
return nil
}
return e.storeRoutesFunc(&RouteInfo{
Control: e.controlRoutes,
Domains: e.domains,
Wildcards: e.wildcards,
})
}
// ClearRoutes removes all route state from the AppConnector.
func (e *AppConnector) ClearRoutes() error {
e.mu.Lock()
defer e.mu.Unlock()
e.controlRoutes = nil
e.domains = nil
e.wildcards = nil
return e.storeRoutesLocked()
}
// UpdateDomainsAndRoutes starts an asynchronous update of the configuration
// given the new domains and routes.
func (e *AppConnector) UpdateDomainsAndRoutes(domains []string, routes []netip.Prefix) {
e.queue.Add(func() {
// Add the new routes first.
e.updateRoutes(routes)
e.updateDomains(domains)
})
}
// UpdateDomains asynchronously replaces the current set of configured domains
// with the supplied set of domains. Domains must not contain a trailing dot,
// and should be lower case. If the domain contains a leading '*' label it
// matches all subdomains of a domain.
func (e *AppConnector) UpdateDomains(domains []string) {
e.queue.Add(func() {
e.updateDomains(domains)
})
}
// Wait waits for the currently scheduled asynchronous configuration changes to
// complete.
func (e *AppConnector) Wait(ctx context.Context) {
e.queue.Wait(ctx)
}
func (e *AppConnector) updateDomains(domains []string) {
e.mu.Lock()
defer e.mu.Unlock()
@ -97,18 +177,34 @@ func (e *AppConnector) UpdateDomains(domains []string) {
for _, wc := range e.wildcards {
if dnsname.HasSuffix(d, wc) {
e.domains[d] = addrs
delete(oldDomains, d)
break
}
}
}
// Everything left in oldDomains is a domain we're no longer tracking
// and if we are storing route info we can unadvertise the routes
if e.ShouldStoreRoutes() {
toRemove := []netip.Prefix{}
for _, addrs := range oldDomains {
for _, a := range addrs {
toRemove = append(toRemove, netip.PrefixFrom(a, a.BitLen()))
}
}
if err := e.routeAdvertiser.UnadvertiseRoute(toRemove...); err != nil {
e.logf("failed to unadvertise routes on domain removal: %v: %v: %v", xmaps.Keys(oldDomains), toRemove, err)
}
}
e.logf("handling domains: %v and wildcards: %v", xmaps.Keys(e.domains), e.wildcards)
}
// UpdateRoutes merges the supplied routes into the currently configured routes. The routes supplied
// updateRoutes merges the supplied routes into the currently configured routes. The routes supplied
// by control for UpdateRoutes are supplemental to the routes discovered by DNS resolution, but are
// also more often whole ranges. UpdateRoutes will remove any single address routes that are now
// covered by new ranges.
func (e *AppConnector) UpdateRoutes(routes []netip.Prefix) {
func (e *AppConnector) updateRoutes(routes []netip.Prefix) {
e.mu.Lock()
defer e.mu.Unlock()
@ -117,27 +213,42 @@ func (e *AppConnector) UpdateRoutes(routes []netip.Prefix) {
return
}
if err := e.routeAdvertiser.AdvertiseRoute(routes...); err != nil {
e.logf("failed to advertise routes: %v: %v", routes, err)
return
}
var toRemove []netip.Prefix
// If we're storing routes and know e.controlRoutes is a good
// representation of what should be in AdvertisedRoutes we can stop
// advertising routes that used to be in e.controlRoutes but are not
// in routes.
if e.ShouldStoreRoutes() {
toRemove = routesWithout(e.controlRoutes, routes)
}
nextRoute:
for _, r := range routes {
if err := e.routeAdvertiser.AdvertiseRoute(r); err != nil {
e.logf("failed to advertise route: %v: %v", r, err)
continue
}
for _, addr := range e.domains {
for _, a := range addr {
if r.Contains(a) {
if r.Contains(a) && netip.PrefixFrom(a, a.BitLen()) != r {
pfx := netip.PrefixFrom(a, a.BitLen())
if err := e.routeAdvertiser.UnadvertiseRoute(pfx); err != nil {
e.logf("failed to unadvertise route: %v: %v", pfx, err)
}
toRemove = append(toRemove, pfx)
continue nextRoute
}
}
}
}
if err := e.routeAdvertiser.UnadvertiseRoute(toRemove...); err != nil {
e.logf("failed to unadvertise routes: %v: %v", toRemove, err)
}
e.controlRoutes = routes
if err := e.storeRoutesLocked(); err != nil {
e.logf("failed to store route info: %v", err)
}
}
// Domains returns the currently configured domain list.
@ -175,7 +286,16 @@ func (e *AppConnector) ObserveDNSResponse(res []byte) {
return
}
nextAnswer:
// cnameChain tracks a chain of CNAMEs for a given query in order to reverse
// a CNAME chain back to the original query for flattening. The keys are
// CNAME record targets, and the value is the name the record answers, so
// for www.example.com CNAME example.com, the map would contain
// ["example.com"] = "www.example.com".
var cnameChain map[string]string
// addressRecords is a list of address records found in the response.
var addressRecords map[string][]netip.Addr
for {
h, err := p.AnswerHeader()
if err == dnsmessage.ErrSectionDone {
@ -191,83 +311,186 @@ nextAnswer:
}
continue
}
if h.Type != dnsmessage.TypeA && h.Type != dnsmessage.TypeAAAA {
switch h.Type {
case dnsmessage.TypeCNAME, dnsmessage.TypeA, dnsmessage.TypeAAAA:
default:
if err := p.SkipAnswer(); err != nil {
return
}
continue
}
domain := h.Name.String()
domain := strings.TrimSuffix(strings.ToLower(h.Name.String()), ".")
if len(domain) == 0 {
return
}
domain = strings.TrimSuffix(domain, ".")
domain = strings.ToLower(domain)
e.logf("[v2] observed DNS response for %s", domain)
e.mu.Lock()
addrs, ok := e.domains[domain]
// match wildcard domains
if !ok {
for _, wc := range e.wildcards {
if dnsname.HasSuffix(domain, wc) {
e.domains[domain] = nil
ok = true
break
}
}
}
e.mu.Unlock()
if !ok {
if err := p.SkipAnswer(); err != nil {
return
}
continue
}
var addr netip.Addr
if h.Type == dnsmessage.TypeCNAME {
res, err := p.CNAMEResource()
if err != nil {
return
}
cname := strings.TrimSuffix(strings.ToLower(res.CNAME.String()), ".")
if len(cname) == 0 {
continue
}
mak.Set(&cnameChain, cname, domain)
continue
}
switch h.Type {
case dnsmessage.TypeA:
r, err := p.AResource()
if err != nil {
return
}
addr = netip.AddrFrom4(r.A)
addr := netip.AddrFrom4(r.A)
mak.Set(&addressRecords, domain, append(addressRecords[domain], addr))
case dnsmessage.TypeAAAA:
r, err := p.AAAAResource()
if err != nil {
return
}
addr = netip.AddrFrom16(r.AAAA)
addr := netip.AddrFrom16(r.AAAA)
mak.Set(&addressRecords, domain, append(addressRecords[domain], addr))
default:
if err := p.SkipAnswer(); err != nil {
return
}
continue
}
if slices.Contains(addrs, addr) {
}
e.mu.Lock()
defer e.mu.Unlock()
for domain, addrs := range addressRecords {
domain, isRouted := e.findRoutedDomainLocked(domain, cnameChain)
// skip if neither the domain nor any of the CNAMEs in the chain is routed
if !isRouted {
continue
}
for _, route := range e.controlRoutes {
if route.Contains(addr) {
// record the new address associated with the domain for faster matching in subsequent
// requests and for diagnostic records.
e.mu.Lock()
e.domains[domain] = append(addrs, addr)
e.mu.Unlock()
continue nextAnswer
// advertise each address we have learned for the routed domain that
// was not already known.
var toAdvertise []netip.Prefix
for _, addr := range addrs {
if !e.isAddrKnownLocked(domain, addr) {
toAdvertise = append(toAdvertise, netip.PrefixFrom(addr, addr.BitLen()))
}
}
if err := e.routeAdvertiser.AdvertiseRoute(netip.PrefixFrom(addr, addr.BitLen())); err != nil {
e.logf("failed to advertise route for %s: %v: %v", domain, addr, err)
continue
}
e.logf("[v2] advertised route for %v: %v", domain, addr)
e.mu.Lock()
e.domains[domain] = append(addrs, addr)
e.mu.Unlock()
e.logf("[v2] observed new routes for %s: %s", domain, toAdvertise)
e.scheduleAdvertisement(domain, toAdvertise...)
}
}
// starting from the given domain that resolved to an address, find it, or any
// of the domains in the CNAME chain toward resolving it, that are routed
// domains, returning the routed domain name and a bool indicating whether a
// routed domain was found.
// e.mu must be held.
func (e *AppConnector) findRoutedDomainLocked(domain string, cnameChain map[string]string) (string, bool) {
var isRouted bool
for {
_, isRouted = e.domains[domain]
if isRouted {
break
}
// match wildcard domains
for _, wc := range e.wildcards {
if dnsname.HasSuffix(domain, wc) {
e.domains[domain] = nil
isRouted = true
break
}
}
next, ok := cnameChain[domain]
if !ok {
break
}
domain = next
}
return domain, isRouted
}
// isAddrKnownLocked returns true if the address is known to be associated with
// the given domain. Known domain tables are updated for covered routes to speed
// up future matches.
// e.mu must be held.
func (e *AppConnector) isAddrKnownLocked(domain string, addr netip.Addr) bool {
if e.hasDomainAddrLocked(domain, addr) {
return true
}
for _, route := range e.controlRoutes {
if route.Contains(addr) {
// record the new address associated with the domain for faster matching in subsequent
// requests and for diagnostic records.
e.addDomainAddrLocked(domain, addr)
return true
}
}
return false
}
// scheduleAdvertisement schedules an advertisement of the given routes
// associated with the given domain.
func (e *AppConnector) scheduleAdvertisement(domain string, routes ...netip.Prefix) {
e.queue.Add(func() {
if err := e.routeAdvertiser.AdvertiseRoute(routes...); err != nil {
e.logf("failed to advertise routes for %s: %v: %v", domain, routes, err)
return
}
e.mu.Lock()
defer e.mu.Unlock()
for _, route := range routes {
if !route.IsSingleIP() {
continue
}
addr := route.Addr()
if !e.hasDomainAddrLocked(domain, addr) {
e.addDomainAddrLocked(domain, addr)
e.logf("[v2] advertised route for %v: %v", domain, addr)
}
}
if err := e.storeRoutesLocked(); err != nil {
e.logf("failed to store route info: %v", err)
}
})
}
// hasDomainAddrLocked returns true if the address has been observed in a
// resolution of domain.
func (e *AppConnector) hasDomainAddrLocked(domain string, addr netip.Addr) bool {
_, ok := slices.BinarySearchFunc(e.domains[domain], addr, compareAddr)
return ok
}
// addDomainAddrLocked adds the address to the list of addresses resolved for
// domain and ensures the list remains sorted. Does not attempt to deduplicate.
func (e *AppConnector) addDomainAddrLocked(domain string, addr netip.Addr) {
e.domains[domain] = append(e.domains[domain], addr)
slices.SortFunc(e.domains[domain], compareAddr)
}
func compareAddr(l, r netip.Addr) int {
return l.Compare(r)
}
// routesWithout returns the prefixes of a that are not present in b, where a and b
// are unsorted slices of netip.Prefix
func routesWithout(a, b []netip.Prefix) []netip.Prefix {
m := make(map[netip.Prefix]bool, len(b))
for _, p := range b {
m[p] = true
}
return slicesx.Filter(make([]netip.Prefix, 0, len(a)), a, func(p netip.Prefix) bool {
return !m[p]
})
}
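The storeRoutesFunc hook accepted by NewAppConnector is just a func(*RouteInfo) error, so any persistence backend can be plugged in. The following is a minimal sketch, not part of this change; the package name, file path, and helper name are invented, and real callers supply their own store function:
package example
import (
	"encoding/json"
	"os"
	"tailscale.com/appc"
)
// fileStoreRoutes returns a store function that writes RouteInfo as JSON to
// path. It could be passed as the last argument to appc.NewAppConnector.
func fileStoreRoutes(path string) func(*appc.RouteInfo) error {
	return func(ri *appc.RouteInfo) error {
		data, err := json.Marshal(ri)
		if err != nil {
			return err
		}
		return os.WriteFile(path, data, 0o600)
	}
}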

View File

@ -4,6 +4,7 @@
package appc
import (
"context"
"net/netip"
"reflect"
"slices"
@ -11,140 +12,243 @@ import (
xmaps "golang.org/x/exp/maps"
"golang.org/x/net/dns/dnsmessage"
"tailscale.com/appc/appctest"
"tailscale.com/util/mak"
"tailscale.com/util/must"
)
func fakeStoreRoutes(*RouteInfo) error { return nil }
func TestUpdateDomains(t *testing.T) {
a := NewAppConnector(t.Logf, nil)
a.UpdateDomains([]string{"example.com"})
if got, want := a.Domains().AsSlice(), []string{"example.com"}; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, &appctest.RouteCollector{}, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, &appctest.RouteCollector{}, nil, nil)
}
a.UpdateDomains([]string{"example.com"})
addr := netip.MustParseAddr("192.0.0.8")
a.domains["example.com"] = append(a.domains["example.com"], addr)
a.UpdateDomains([]string{"example.com"})
a.Wait(ctx)
if got, want := a.Domains().AsSlice(), []string{"example.com"}; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
if got, want := a.domains["example.com"], []netip.Addr{addr}; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
addr := netip.MustParseAddr("192.0.0.8")
a.domains["example.com"] = append(a.domains["example.com"], addr)
a.UpdateDomains([]string{"example.com"})
a.Wait(ctx)
// domains are explicitly downcased on set.
a.UpdateDomains([]string{"UP.EXAMPLE.COM"})
if got, want := xmaps.Keys(a.domains), []string{"up.example.com"}; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
if got, want := a.domains["example.com"], []netip.Addr{addr}; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
// domains are explicitly downcased on set.
a.UpdateDomains([]string{"UP.EXAMPLE.COM"})
a.Wait(ctx)
if got, want := xmaps.Keys(a.domains), []string{"up.example.com"}; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
}
}
func TestUpdateRoutes(t *testing.T) {
rc := &routeCollector{}
a := NewAppConnector(t.Logf, rc)
routes := []netip.Prefix{netip.MustParsePrefix("192.0.2.0/24")}
a.UpdateRoutes(routes)
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
rc := &appctest.RouteCollector{}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
a.updateDomains([]string{"*.example.com"})
if !slices.EqualFunc(routes, rc.routes, prefixEqual) {
t.Fatalf("got %v, want %v", rc.routes, routes)
// This route should be collapsed into the range
a.ObserveDNSResponse(dnsResponse("a.example.com.", "192.0.2.1"))
a.Wait(ctx)
if !slices.Equal(rc.Routes(), []netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")}) {
t.Fatalf("got %v, want %v", rc.Routes(), []netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")})
}
// This route should not be collapsed or removed
a.ObserveDNSResponse(dnsResponse("b.example.com.", "192.0.0.1"))
a.Wait(ctx)
routes := []netip.Prefix{netip.MustParsePrefix("192.0.2.0/24"), netip.MustParsePrefix("192.0.0.1/32")}
a.updateRoutes(routes)
slices.SortFunc(rc.Routes(), prefixCompare)
rc.SetRoutes(slices.Compact(rc.Routes()))
slices.SortFunc(routes, prefixCompare)
// Ensure that the non-matching /32 is preserved, even though it's in the domains table.
if !slices.EqualFunc(routes, rc.Routes(), prefixEqual) {
t.Errorf("added routes: got %v, want %v", rc.Routes(), routes)
}
// Ensure that the contained /32 is removed, replaced by the /24.
wantRemoved := []netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")}
if !slices.EqualFunc(rc.RemovedRoutes(), wantRemoved, prefixEqual) {
t.Fatalf("unexpected removed routes: %v", rc.RemovedRoutes())
}
}
}
func TestUpdateRoutesUnadvertisesContainedRoutes(t *testing.T) {
rc := &routeCollector{}
a := NewAppConnector(t.Logf, rc)
mak.Set(&a.domains, "example.com", []netip.Addr{netip.MustParseAddr("192.0.2.1")})
rc.routes = []netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")}
routes := []netip.Prefix{netip.MustParsePrefix("192.0.2.0/24")}
a.UpdateRoutes(routes)
for _, shouldStore := range []bool{false, true} {
rc := &appctest.RouteCollector{}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
mak.Set(&a.domains, "example.com", []netip.Addr{netip.MustParseAddr("192.0.2.1")})
rc.SetRoutes([]netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")})
routes := []netip.Prefix{netip.MustParsePrefix("192.0.2.0/24")}
a.updateRoutes(routes)
if !slices.EqualFunc(routes, rc.routes, prefixEqual) {
t.Fatalf("got %v, want %v", rc.routes, routes)
if !slices.EqualFunc(routes, rc.Routes(), prefixEqual) {
t.Fatalf("got %v, want %v", rc.Routes(), routes)
}
}
}
func TestDomainRoutes(t *testing.T) {
rc := &routeCollector{}
a := NewAppConnector(t.Logf, rc)
a.UpdateDomains([]string{"example.com"})
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
for _, shouldStore := range []bool{false, true} {
rc := &appctest.RouteCollector{}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
a.updateDomains([]string{"example.com"})
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
a.Wait(context.Background())
want := map[string][]netip.Addr{
"example.com": {netip.MustParseAddr("192.0.0.8")},
}
want := map[string][]netip.Addr{
"example.com": {netip.MustParseAddr("192.0.0.8")},
}
if got := a.DomainRoutes(); !reflect.DeepEqual(got, want) {
t.Fatalf("DomainRoutes: got %v, want %v", got, want)
if got := a.DomainRoutes(); !reflect.DeepEqual(got, want) {
t.Fatalf("DomainRoutes: got %v, want %v", got, want)
}
}
}
func TestObserveDNSResponse(t *testing.T) {
rc := &routeCollector{}
a := NewAppConnector(t.Logf, rc)
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
rc := &appctest.RouteCollector{}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
// a has no domains configured, so it should not advertise any routes
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
if got, want := rc.routes, ([]netip.Prefix)(nil); !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
// a has no domains configured, so it should not advertise any routes
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
if got, want := rc.Routes(), ([]netip.Prefix)(nil); !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
wantRoutes := []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}
wantRoutes := []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}
a.UpdateDomains([]string{"example.com"})
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
if got, want := rc.routes, wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
a.updateDomains([]string{"example.com"})
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
a.Wait(ctx)
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
wantRoutes = append(wantRoutes, netip.MustParsePrefix("2001:db8::1/128"))
// a CNAME record chain should result in a route being added if the chain
// matches a routed domain.
a.updateDomains([]string{"www.example.com", "example.com"})
a.ObserveDNSResponse(dnsCNAMEResponse("192.0.0.9", "www.example.com.", "chain.example.com.", "example.com."))
a.Wait(ctx)
wantRoutes = append(wantRoutes, netip.MustParsePrefix("192.0.0.9/32"))
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
a.ObserveDNSResponse(dnsResponse("example.com.", "2001:db8::1"))
if got, want := rc.routes, wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
// a CNAME record chain should result in a route being added if a routed domain
// appears in the chain, even if only found in the middle of the chain
a.ObserveDNSResponse(dnsCNAMEResponse("192.0.0.10", "outside.example.org.", "www.example.com.", "example.org."))
a.Wait(ctx)
wantRoutes = append(wantRoutes, netip.MustParsePrefix("192.0.0.10/32"))
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
// don't re-advertise routes that have already been advertised
a.ObserveDNSResponse(dnsResponse("example.com.", "2001:db8::1"))
if !slices.Equal(rc.routes, wantRoutes) {
t.Errorf("rc.routes: got %v; want %v", rc.routes, wantRoutes)
}
wantRoutes = append(wantRoutes, netip.MustParsePrefix("2001:db8::1/128"))
// don't advertise addresses that are already in a control provided route
pfx := netip.MustParsePrefix("192.0.2.0/24")
a.UpdateRoutes([]netip.Prefix{pfx})
wantRoutes = append(wantRoutes, pfx)
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.2.1"))
if !slices.Equal(rc.routes, wantRoutes) {
t.Errorf("rc.routes: got %v; want %v", rc.routes, wantRoutes)
}
if !slices.Contains(a.domains["example.com"], netip.MustParseAddr("192.0.2.1")) {
t.Errorf("missing %v from %v", "192.0.2.1", a.domains["exmaple.com"])
a.ObserveDNSResponse(dnsResponse("example.com.", "2001:db8::1"))
a.Wait(ctx)
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
// don't re-advertise routes that have already been advertised
a.ObserveDNSResponse(dnsResponse("example.com.", "2001:db8::1"))
a.Wait(ctx)
if !slices.Equal(rc.Routes(), wantRoutes) {
t.Errorf("rc.Routes(): got %v; want %v", rc.Routes(), wantRoutes)
}
// don't advertise addresses that are already in a control provided route
pfx := netip.MustParsePrefix("192.0.2.0/24")
a.updateRoutes([]netip.Prefix{pfx})
wantRoutes = append(wantRoutes, pfx)
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.2.1"))
a.Wait(ctx)
if !slices.Equal(rc.Routes(), wantRoutes) {
t.Errorf("rc.Routes(): got %v; want %v", rc.Routes(), wantRoutes)
}
if !slices.Contains(a.domains["example.com"], netip.MustParseAddr("192.0.2.1")) {
t.Errorf("missing %v from %v", "192.0.2.1", a.domains["exmaple.com"])
}
}
}
func TestWildcardDomains(t *testing.T) {
rc := &routeCollector{}
a := NewAppConnector(t.Logf, rc)
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
rc := &appctest.RouteCollector{}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
a.UpdateDomains([]string{"*.example.com"})
a.ObserveDNSResponse(dnsResponse("foo.example.com.", "192.0.0.8"))
if got, want := rc.routes, []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}; !slices.Equal(got, want) {
t.Errorf("routes: got %v; want %v", got, want)
}
if got, want := a.wildcards, []string{"example.com"}; !slices.Equal(got, want) {
t.Errorf("wildcards: got %v; want %v", got, want)
}
a.updateDomains([]string{"*.example.com"})
a.ObserveDNSResponse(dnsResponse("foo.example.com.", "192.0.0.8"))
a.Wait(ctx)
if got, want := rc.Routes(), []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}; !slices.Equal(got, want) {
t.Errorf("routes: got %v; want %v", got, want)
}
if got, want := a.wildcards, []string{"example.com"}; !slices.Equal(got, want) {
t.Errorf("wildcards: got %v; want %v", got, want)
}
a.UpdateDomains([]string{"*.example.com", "example.com"})
if _, ok := a.domains["foo.example.com"]; !ok {
t.Errorf("expected foo.example.com to be preserved in domains due to wildcard")
}
if got, want := a.wildcards, []string{"example.com"}; !slices.Equal(got, want) {
t.Errorf("wildcards: got %v; want %v", got, want)
}
a.updateDomains([]string{"*.example.com", "example.com"})
if _, ok := a.domains["foo.example.com"]; !ok {
t.Errorf("expected foo.example.com to be preserved in domains due to wildcard")
}
if got, want := a.wildcards, []string{"example.com"}; !slices.Equal(got, want) {
t.Errorf("wildcards: got %v; want %v", got, want)
}
// There was an early regression where the wildcard domain was added repeatedly; this guards against that.
a.UpdateDomains([]string{"*.example.com", "example.com"})
if len(a.wildcards) != 1 {
t.Errorf("expected only one wildcard domain, got %v", a.wildcards)
// There was an early regression where the wildcard domain was added repeatedly; this guards against that.
a.updateDomains([]string{"*.example.com", "example.com"})
if len(a.wildcards) != 1 {
t.Errorf("expected only one wildcard domain, got %v", a.wildcards)
}
}
}
@ -185,30 +289,234 @@ func dnsResponse(domain, address string) []byte {
return must.Get(b.Finish())
}
// routeCollector is a test helper that collects the list of routes advertised
type routeCollector struct {
routes []netip.Prefix
}
func dnsCNAMEResponse(address string, domains ...string) []byte {
addr := netip.MustParseAddr(address)
b := dnsmessage.NewBuilder(nil, dnsmessage.Header{})
b.EnableCompression()
b.StartAnswers()
// routeCollector implements RouteAdvertiser
var _ RouteAdvertiser = (*routeCollector)(nil)
func (rc *routeCollector) AdvertiseRoute(pfx netip.Prefix) error {
rc.routes = append(rc.routes, pfx)
return nil
}
func (rc *routeCollector) UnadvertiseRoute(pfx netip.Prefix) error {
routes := rc.routes
rc.routes = rc.routes[:0]
for _, r := range routes {
if r != pfx {
rc.routes = append(rc.routes, r)
if len(domains) >= 2 {
for i, domain := range domains[:len(domains)-1] {
b.CNAMEResource(
dnsmessage.ResourceHeader{
Name: dnsmessage.MustNewName(domain),
Type: dnsmessage.TypeCNAME,
Class: dnsmessage.ClassINET,
TTL: 0,
},
dnsmessage.CNAMEResource{
CNAME: dnsmessage.MustNewName(domains[i+1]),
},
)
}
}
return nil
domain := domains[len(domains)-1]
switch addr.BitLen() {
case 32:
b.AResource(
dnsmessage.ResourceHeader{
Name: dnsmessage.MustNewName(domain),
Type: dnsmessage.TypeA,
Class: dnsmessage.ClassINET,
TTL: 0,
},
dnsmessage.AResource{
A: addr.As4(),
},
)
case 128:
b.AAAAResource(
dnsmessage.ResourceHeader{
Name: dnsmessage.MustNewName(domain),
Type: dnsmessage.TypeAAAA,
Class: dnsmessage.ClassINET,
TTL: 0,
},
dnsmessage.AAAAResource{
AAAA: addr.As16(),
},
)
default:
panic("invalid address length")
}
return must.Get(b.Finish())
}
func prefixEqual(a, b netip.Prefix) bool {
return a.Addr().Compare(b.Addr()) == 0 && a.Bits() == b.Bits()
return a == b
}
func prefixCompare(a, b netip.Prefix) int {
if a.Addr().Compare(b.Addr()) == 0 {
return a.Bits() - b.Bits()
}
return a.Addr().Compare(b.Addr())
}
func prefixes(in ...string) []netip.Prefix {
toRet := make([]netip.Prefix, len(in))
for i, s := range in {
toRet[i] = netip.MustParsePrefix(s)
}
return toRet
}
func TestUpdateRouteRouteRemoval(t *testing.T) {
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
rc := &appctest.RouteCollector{}
assertRoutes := func(prefix string, routes, removedRoutes []netip.Prefix) {
if !slices.Equal(routes, rc.Routes()) {
t.Fatalf("%s: (shouldStore=%t) routes want %v, got %v", prefix, shouldStore, routes, rc.Routes())
}
if !slices.Equal(removedRoutes, rc.RemovedRoutes()) {
t.Fatalf("%s: (shouldStore=%t) removedRoutes want %v, got %v", prefix, shouldStore, removedRoutes, rc.RemovedRoutes())
}
}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
// nothing has yet been advertised
assertRoutes("appc init", []netip.Prefix{}, []netip.Prefix{})
a.UpdateDomainsAndRoutes([]string{}, prefixes("1.2.3.1/32", "1.2.3.2/32"))
a.Wait(ctx)
// the routes passed to UpdateDomainsAndRoutes have been advertised
assertRoutes("simple update", prefixes("1.2.3.1/32", "1.2.3.2/32"), []netip.Prefix{})
// one route the same, one different
a.UpdateDomainsAndRoutes([]string{}, prefixes("1.2.3.1/32", "1.2.3.3/32"))
a.Wait(ctx)
// old behavior: routes are not removed, resulting routes are both old and new
// (we have dupe 1.2.3.1 routes because the test RouteAdvertiser doesn't have the deduplication
// that the real one does)
wantRoutes := prefixes("1.2.3.1/32", "1.2.3.2/32", "1.2.3.1/32", "1.2.3.3/32")
wantRemovedRoutes := []netip.Prefix{}
if shouldStore {
// new behavior: routes are removed, resulting routes are new only
wantRoutes = prefixes("1.2.3.1/32", "1.2.3.1/32", "1.2.3.3/32")
wantRemovedRoutes = prefixes("1.2.3.2/32")
}
assertRoutes("removal", wantRoutes, wantRemovedRoutes)
}
}
func TestUpdateDomainRouteRemoval(t *testing.T) {
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
rc := &appctest.RouteCollector{}
assertRoutes := func(prefix string, routes, removedRoutes []netip.Prefix) {
if !slices.Equal(routes, rc.Routes()) {
t.Fatalf("%s: (shouldStore=%t) routes want %v, got %v", prefix, shouldStore, routes, rc.Routes())
}
if !slices.Equal(removedRoutes, rc.RemovedRoutes()) {
t.Fatalf("%s: (shouldStore=%t) removedRoutes want %v, got %v", prefix, shouldStore, removedRoutes, rc.RemovedRoutes())
}
}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
assertRoutes("appc init", []netip.Prefix{}, []netip.Prefix{})
a.UpdateDomainsAndRoutes([]string{"a.example.com", "b.example.com"}, []netip.Prefix{})
a.Wait(ctx)
// adding domains doesn't immediately cause any routes to be advertised
assertRoutes("update domains", []netip.Prefix{}, []netip.Prefix{})
a.ObserveDNSResponse(dnsResponse("a.example.com.", "1.2.3.1"))
a.ObserveDNSResponse(dnsResponse("a.example.com.", "1.2.3.2"))
a.ObserveDNSResponse(dnsResponse("b.example.com.", "1.2.3.3"))
a.ObserveDNSResponse(dnsResponse("b.example.com.", "1.2.3.4"))
a.Wait(ctx)
// observing dns responses causes routes to be advertised
assertRoutes("observed dns", prefixes("1.2.3.1/32", "1.2.3.2/32", "1.2.3.3/32", "1.2.3.4/32"), []netip.Prefix{})
a.UpdateDomainsAndRoutes([]string{"a.example.com"}, []netip.Prefix{})
a.Wait(ctx)
// old behavior, routes are not removed
wantRoutes := prefixes("1.2.3.1/32", "1.2.3.2/32", "1.2.3.3/32", "1.2.3.4/32")
wantRemovedRoutes := []netip.Prefix{}
if shouldStore {
// new behavior, routes are removed for b.example.com
wantRoutes = prefixes("1.2.3.1/32", "1.2.3.2/32")
wantRemovedRoutes = prefixes("1.2.3.3/32", "1.2.3.4/32")
}
assertRoutes("removal", wantRoutes, wantRemovedRoutes)
}
}
func TestUpdateWildcardRouteRemoval(t *testing.T) {
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
rc := &appctest.RouteCollector{}
assertRoutes := func(prefix string, routes, removedRoutes []netip.Prefix) {
if !slices.Equal(routes, rc.Routes()) {
t.Fatalf("%s: (shouldStore=%t) routes want %v, got %v", prefix, shouldStore, routes, rc.Routes())
}
if !slices.Equal(removedRoutes, rc.RemovedRoutes()) {
t.Fatalf("%s: (shouldStore=%t) removedRoutes want %v, got %v", prefix, shouldStore, removedRoutes, rc.RemovedRoutes())
}
}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
assertRoutes("appc init", []netip.Prefix{}, []netip.Prefix{})
a.UpdateDomainsAndRoutes([]string{"a.example.com", "*.b.example.com"}, []netip.Prefix{})
a.Wait(ctx)
// adding domains doesn't immediately cause any routes to be advertised
assertRoutes("update domains", []netip.Prefix{}, []netip.Prefix{})
a.ObserveDNSResponse(dnsResponse("a.example.com.", "1.2.3.1"))
a.ObserveDNSResponse(dnsResponse("a.example.com.", "1.2.3.2"))
a.ObserveDNSResponse(dnsResponse("1.b.example.com.", "1.2.3.3"))
a.ObserveDNSResponse(dnsResponse("2.b.example.com.", "1.2.3.4"))
a.Wait(ctx)
// observing dns responses causes routes to be advertised
assertRoutes("observed dns", prefixes("1.2.3.1/32", "1.2.3.2/32", "1.2.3.3/32", "1.2.3.4/32"), []netip.Prefix{})
a.UpdateDomainsAndRoutes([]string{"a.example.com"}, []netip.Prefix{})
a.Wait(ctx)
// old behavior, routes are not removed
wantRoutes := prefixes("1.2.3.1/32", "1.2.3.2/32", "1.2.3.3/32", "1.2.3.4/32")
wantRemovedRoutes := []netip.Prefix{}
if shouldStore {
// new behavior, routes are removed for *.b.example.com
wantRoutes = prefixes("1.2.3.1/32", "1.2.3.2/32")
wantRemovedRoutes = prefixes("1.2.3.3/32", "1.2.3.4/32")
}
assertRoutes("removal", wantRoutes, wantRemovedRoutes)
}
}
func TestRoutesWithout(t *testing.T) {
assert := func(msg string, got, want []netip.Prefix) {
if !slices.Equal(want, got) {
t.Errorf("%s: want %v, got %v", msg, want, got)
}
}
assert("empty routes", routesWithout([]netip.Prefix{}, []netip.Prefix{}), []netip.Prefix{})
assert("a empty", routesWithout([]netip.Prefix{}, prefixes("1.1.1.1/32", "1.1.1.2/32")), []netip.Prefix{})
assert("b empty", routesWithout(prefixes("1.1.1.1/32", "1.1.1.2/32"), []netip.Prefix{}), prefixes("1.1.1.1/32", "1.1.1.2/32"))
assert("no overlap", routesWithout(prefixes("1.1.1.1/32", "1.1.1.2/32"), prefixes("1.1.1.3/32", "1.1.1.4/32")), prefixes("1.1.1.1/32", "1.1.1.2/32"))
assert("a has fewer", routesWithout(prefixes("1.1.1.1/32", "1.1.1.2/32"), prefixes("1.1.1.1/32", "1.1.1.2/32", "1.1.1.3/32", "1.1.1.4/32")), []netip.Prefix{})
assert("a has more", routesWithout(prefixes("1.1.1.1/32", "1.1.1.2/32", "1.1.1.3/32", "1.1.1.4/32"), prefixes("1.1.1.1/32", "1.1.1.3/32")), prefixes("1.1.1.2/32", "1.1.1.4/32"))
}

49
appc/appctest/appctest.go Normal file
View File

@ -0,0 +1,49 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package appctest
import (
"net/netip"
"slices"
)
// RouteCollector is a test helper that collects the list of routes advertised
type RouteCollector struct {
routes []netip.Prefix
removedRoutes []netip.Prefix
}
func (rc *RouteCollector) AdvertiseRoute(pfx ...netip.Prefix) error {
rc.routes = append(rc.routes, pfx...)
return nil
}
func (rc *RouteCollector) UnadvertiseRoute(toRemove ...netip.Prefix) error {
routes := rc.routes
rc.routes = rc.routes[:0]
for _, r := range routes {
if !slices.Contains(toRemove, r) {
rc.routes = append(rc.routes, r)
} else {
rc.removedRoutes = append(rc.removedRoutes, r)
}
}
return nil
}
// RemovedRoutes returns the list of routes that were removed.
func (rc *RouteCollector) RemovedRoutes() []netip.Prefix {
return rc.removedRoutes
}
// Routes returns the ordered list of routes that were added, including
// possible duplicates.
func (rc *RouteCollector) Routes() []netip.Prefix {
return rc.routes
}
func (rc *RouteCollector) SetRoutes(routes []netip.Prefix) error {
rc.routes = routes
return nil
}
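RouteCollector satisfies the new variadic RouteAdvertiser interface, which is how the appc tests above exercise an AppConnector without touching real route advertisement. A condensed, test-style usage sketch (the test name is invented; imports mirror appc_test.go):
package appc_test
import (
	"context"
	"testing"
	"tailscale.com/appc"
	"tailscale.com/appc/appctest"
)
func TestRouteCollectorSketch(t *testing.T) {
	rc := &appctest.RouteCollector{}
	// nil RouteInfo and nil store function: discovered routes are not persisted.
	a := appc.NewAppConnector(t.Logf, rc, nil, nil)
	a.UpdateDomainsAndRoutes([]string{"example.com"}, nil)
	a.Wait(context.Background())
	t.Logf("advertised: %v removed: %v", rc.Routes(), rc.RemovedRoutes())
}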

View File

@ -37,7 +37,7 @@ while [ "$#" -gt 1 ]; do
--extra-small)
shift
ldflags="$ldflags -w -s"
tags="${tags:+$tags,}ts_omit_aws,ts_omit_bird,ts_omit_tap,ts_omit_kube"
tags="${tags:+$tags,}ts_omit_aws,ts_omit_bird,ts_omit_tap,ts_omit_kube,ts_omit_completion"
;;
--box)
shift

View File

@ -20,9 +20,9 @@
set -eu
# Use the "go" binary from the "tool" directory (which is github.com/tailscale/go)
export PATH=$PWD/tool:$PATH
export PATH="$PWD"/tool:"$PATH"
eval $(./build_dist.sh shellvars)
eval "$(./build_dist.sh shellvars)"
DEFAULT_TARGET="client"
DEFAULT_TAGS="v${VERSION_SHORT},v${VERSION_MINOR}"
@ -49,6 +49,7 @@ case "$TARGET" in
-X tailscale.com/version.gitCommitStamp=${VERSION_GIT_HASH}" \
--base="${BASE}" \
--tags="${TAGS}" \
--gotags="ts_kube" \
--repos="${REPOS}" \
--push="${PUSH}" \
--target="${PLATFORM}" \
@ -70,8 +71,24 @@ case "$TARGET" in
--target="${PLATFORM}" \
/usr/local/bin/operator
;;
k8s-nameserver)
DEFAULT_REPOS="tailscale/k8s-nameserver"
REPOS="${REPOS:-${DEFAULT_REPOS}}"
go run github.com/tailscale/mkctr \
--gopaths="tailscale.com/cmd/k8s-nameserver:/usr/local/bin/k8s-nameserver" \
--ldflags=" \
-X tailscale.com/version.longStamp=${VERSION_LONG} \
-X tailscale.com/version.shortStamp=${VERSION_SHORT} \
-X tailscale.com/version.gitCommitStamp=${VERSION_GIT_HASH}" \
--base="${BASE}" \
--tags="${TAGS}" \
--repos="${REPOS}" \
--push="${PUSH}" \
--target="${PLATFORM}" \
/usr/local/bin/k8s-nameserver
;;
*)
echo "unknown target: $TARGET"
exit 1
;;
esac
esac

View File

@ -270,6 +270,14 @@ type UserRuleMatch struct {
Users []string `json:"users"`
Ports []string `json:"ports"`
LineNumber int `json:"lineNumber"`
// Postures is a list of posture policies that are
// associated with this match. The rules can be looked
// up in the ACLPreviewResponse parent struct.
// The source of the list is from srcPosture on
// an ACL or Grant rule:
// https://tailscale.com/kb/1288/device-posture#posture-conditions
Postures []string `json:"postures"`
}
// ACLPreviewResponse is the response type of previewACLPostRequest
@ -277,6 +285,12 @@ type ACLPreviewResponse struct {
Matches []UserRuleMatch `json:"matches"` // ACL rules that match the specified user or ipport.
Type string `json:"type"` // The request type: currently only "user" or "ipport".
PreviewFor string `json:"previewFor"` // A specific user or ipport.
// Postures is a map of postures and associated rules that apply
// to this preview.
// For more details about the posture mapping, see:
// https://tailscale.com/kb/1288/device-posture#postures
Postures map[string][]string `json:"postures,omitempty"`
}
// ACLPreview is the response type of PreviewACLForUser, PreviewACLForIPPort, PreviewACLHuJSONForUser, and PreviewACLHuJSONForIPPort
@ -284,6 +298,12 @@ type ACLPreview struct {
Matches []UserRuleMatch `json:"matches"`
User string `json:"user,omitempty"` // Filled if response of PreviewACLForUser or PreviewACLHuJSONForUser
IPPort string `json:"ipport,omitempty"` // Filled if response of PreviewACLForIPPort or PreviewACLHuJSONForIPPort
// Postures is a map of postures and associated rules that apply
// to this preview.
// For more details about the posture mapping, see:
// https://tailscale.com/kb/1288/device-posture#postures
Postures map[string][]string `json:"postures,omitempty"`
}
func (c *Client) previewACLPostRequest(ctx context.Context, body []byte, previewType string, previewFor string) (res *ACLPreviewResponse, err error) {
@ -341,8 +361,9 @@ func (c *Client) PreviewACLForUser(ctx context.Context, acl ACL, user string) (r
}
return &ACLPreview{
Matches: b.Matches,
User: b.PreviewFor,
Matches: b.Matches,
User: b.PreviewFor,
Postures: b.Postures,
}, nil
}
@ -369,8 +390,9 @@ func (c *Client) PreviewACLForIPPort(ctx context.Context, acl ACL, ipport netip.
}
return &ACLPreview{
Matches: b.Matches,
IPPort: b.PreviewFor,
Matches: b.Matches,
IPPort: b.PreviewFor,
Postures: b.Postures,
}, nil
}
@ -394,8 +416,9 @@ func (c *Client) PreviewACLHuJSONForUser(ctx context.Context, acl ACLHuJSON, use
}
return &ACLPreview{
Matches: b.Matches,
User: b.PreviewFor,
Matches: b.Matches,
User: b.PreviewFor,
Postures: b.Postures,
}, nil
}
@ -419,8 +442,9 @@ func (c *Client) PreviewACLHuJSONForIPPort(ctx context.Context, acl ACLHuJSON, i
}
return &ACLPreview{
Matches: b.Matches,
IPPort: b.PreviewFor,
Matches: b.Matches,
IPPort: b.PreviewFor,
Postures: b.Postures,
}, nil
}
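The new Postures fields are plain maps and slices, so consumers just range over them once a preview comes back. A hedged usage sketch (function name and output format are invented; field names follow the structs above):
package example
import (
	"fmt"
	"tailscale.com/client/tailscale"
)
// printPostures dumps the posture information attached to an ACL preview.
// Illustrative only.
func printPostures(p *tailscale.ACLPreview) {
	for _, m := range p.Matches {
		fmt.Printf("rule at line %d (users %v, ports %v): postures %v\n",
			m.LineNumber, m.Users, m.Ports, m.Postures)
	}
	for posture, rules := range p.Postures {
		fmt.Printf("posture %q is referenced by rules %v\n", posture, rules)
	}
}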

View File

@ -49,3 +49,11 @@ type ReloadConfigResponse struct {
Reloaded bool // whether the config was reloaded
Err string // any error message
}
// ExitNodeSuggestionResponse is the response to a LocalAPI suggest-exit-node GET request.
// It returns the StableNodeID, name, and location of a suggested exit node for the client making the request.
type ExitNodeSuggestionResponse struct {
ID tailcfg.StableNodeID
Name string
Location tailcfg.LocationView `json:",omitempty"`
}

View File

@ -7,6 +7,7 @@ package tailscale
import (
"bytes"
"cmp"
"context"
"crypto/tls"
"encoding/json"
@ -27,6 +28,7 @@ import (
"go4.org/mem"
"tailscale.com/client/tailscale/apitype"
"tailscale.com/drive"
"tailscale.com/envknob"
"tailscale.com/ipn"
"tailscale.com/ipn/ipnstate"
@ -37,7 +39,6 @@ import (
"tailscale.com/tka"
"tailscale.com/types/key"
"tailscale.com/types/tkatype"
"tailscale.com/util/cmpx"
)
// defaultLocalClient is the default LocalClient when using the legacy
@ -479,7 +480,7 @@ func (lc *LocalClient) DebugPortmap(ctx context.Context, opts *DebugPortmapOpts)
opts = &DebugPortmapOpts{}
}
vals.Set("duration", cmpx.Or(opts.Duration, 5*time.Second).String())
vals.Set("duration", cmp.Or(opts.Duration, 5*time.Second).String())
vals.Set("type", opts.Type)
vals.Set("log_http", strconv.FormatBool(opts.LogHTTP))
@ -1417,6 +1418,66 @@ func (lc *LocalClient) CheckUpdate(ctx context.Context) (*tailcfg.ClientVersion,
return &cv, nil
}
// SetUseExitNode toggles the use of an exit node on or off.
// To turn it on, there must have been a previously used exit node.
// The most recently used one is reused.
// This is a convenience method for GUIs. To select a specific exit node, update the prefs.
func (lc *LocalClient) SetUseExitNode(ctx context.Context, on bool) error {
_, err := lc.send(ctx, "POST", "/localapi/v0/set-use-exit-node-enabled?enabled="+strconv.FormatBool(on), http.StatusOK, nil)
return err
}
// DriveSetServerAddr instructs Taildrive to use the server at addr to access
// the filesystem. This is used on platforms like Windows and macOS to let
// Taildrive know to use the file server running in the GUI app.
func (lc *LocalClient) DriveSetServerAddr(ctx context.Context, addr string) error {
_, err := lc.send(ctx, "PUT", "/localapi/v0/drive/fileserver-address", http.StatusCreated, strings.NewReader(addr))
return err
}
// DriveShareSet adds or updates the given share in the list of shares that
// Taildrive will serve to remote nodes. If a share with the same name already
// exists, the existing share is replaced/updated.
func (lc *LocalClient) DriveShareSet(ctx context.Context, share *drive.Share) error {
_, err := lc.send(ctx, "PUT", "/localapi/v0/drive/shares", http.StatusCreated, jsonBody(share))
return err
}
// DriveShareRemove removes the share with the given name from the list of
// shares that Taildrive will serve to remote nodes.
func (lc *LocalClient) DriveShareRemove(ctx context.Context, name string) error {
_, err := lc.send(
ctx,
"DELETE",
"/localapi/v0/drive/shares",
http.StatusNoContent,
strings.NewReader(name))
return err
}
// DriveShareRename renames the share from old to new name.
func (lc *LocalClient) DriveShareRename(ctx context.Context, oldName, newName string) error {
_, err := lc.send(
ctx,
"POST",
"/localapi/v0/drive/shares",
http.StatusNoContent,
jsonBody([2]string{oldName, newName}))
return err
}
// DriveShareList returns the list of shares that drive is currently serving
// to remote nodes.
func (lc *LocalClient) DriveShareList(ctx context.Context) ([]*drive.Share, error) {
result, err := lc.get200(ctx, "/localapi/v0/drive/shares")
if err != nil {
return nil, err
}
var shares []*drive.Share
err = json.Unmarshal(result, &shares)
return shares, err
}
// IPNBusWatcher is an active subscription (watch) of the local tailscaled IPN bus.
// It's returned by LocalClient.WatchIPNBus.
//
@ -1453,3 +1514,12 @@ func (w *IPNBusWatcher) Next() (ipn.Notify, error) {
}
return n, nil
}
// SuggestExitNode requests an exit node suggestion and returns the exit node's details.
func (lc *LocalClient) SuggestExitNode(ctx context.Context) (apitype.ExitNodeSuggestionResponse, error) {
body, err := lc.get200(ctx, "/localapi/v0/suggest-exit-node")
if err != nil {
return apitype.ExitNodeSuggestionResponse{}, err
}
return decodeJSON[apitype.ExitNodeSuggestionResponse](body)
}
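Like the existing LocalClient methods, the additions above are thin wrappers over LocalAPI endpoints. A hedged sketch of calling two of them (the helper name and output are invented):
package example
import (
	"context"
	"fmt"
	"tailscale.com/client/tailscale"
)
// showExitNodeAndShares prints the suggested exit node and the current
// Taildrive shares. Illustrative only.
func showExitNodeAndShares(ctx context.Context, lc *tailscale.LocalClient) error {
	res, err := lc.SuggestExitNode(ctx)
	if err != nil {
		return err
	}
	fmt.Printf("suggested exit node: %s (%s)\n", res.Name, res.ID)
	shares, err := lc.DriveShareList(ctx)
	if err != nil {
		return err
	}
	for _, share := range shares {
		fmt.Printf("taildrive share: %+v\n", share)
	}
	return nil
}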

View File

@ -5,7 +5,11 @@
package tailscale
import "testing"
import (
"testing"
"tailscale.com/tstest/deptest"
)
func TestGetServeConfigFromJSON(t *testing.T) {
sc, err := getServeConfigFromJSON([]byte("null"))
@ -25,3 +29,14 @@ func TestGetServeConfigFromJSON(t *testing.T) {
t.Errorf("want non-nil TCP for object")
}
}
func TestDeps(t *testing.T) {
deptest.DepChecker{
BadDeps: map[string]string{
// Make sure we don't again accidentally bring in a dependency on
// drive or its transitive dependencies
"tailscale.com/drive/driveimpl": "https://github.com/tailscale/tailscale/pull/10631",
"github.com/studio-b12/gowebdav": "https://github.com/tailscale/tailscale/pull/10631",
},
}.Check(t)
}

View File

@ -11,6 +11,7 @@ import (
"fmt"
"net/http"
"net/url"
"slices"
"strings"
"time"
@ -222,7 +223,7 @@ func (s *Server) awaitUserAuth(ctx context.Context, session *browserSession) err
func (s *Server) newSessionID() (string, error) {
raw := make([]byte, 16)
for i := 0; i < 5; i++ {
for range 5 {
if _, err := rand.Read(raw); err != nil {
return "", err
}
@ -234,7 +235,12 @@ func (s *Server) newSessionID() (string, error) {
return "", errors.New("too many collisions generating new session; please refresh page")
}
type peerCapabilities map[capFeature]bool // value is true if the peer can edit the given feature
// peerCapabilities holds information about what a source
// peer is allowed to edit via the web UI.
//
// map value is true if the peer can edit the given feature.
// Only capFeatures listed in validCaps are ever included.
type peerCapabilities map[capFeature]bool
// canEdit is true if the peerCapabilities grant edit access
// to the given feature.
@ -248,39 +254,85 @@ func (p peerCapabilities) canEdit(feature capFeature) bool {
return p[feature]
}
// isEmpty is true if p is either nil or has no capabilities
// with value true.
func (p peerCapabilities) isEmpty() bool {
if p == nil {
return true
}
for _, v := range p {
if v == true {
return false
}
}
return true
}
type capFeature string
const (
// The following values should not be edited.
// New caps can be added, but existing ones should not be changed,
// as these exact values are used by users in tailnet policy files.
//
// IMPORTANT: When adding a new cap, also update validCaps slice below.
capFeatureAll capFeature = "*" // grants peer management of all features
capFeatureFunnel capFeature = "funnel" // grants peer serve/funnel management
capFeatureSSH capFeature = "ssh" // grants peer SSH server management
capFeatureSubnet capFeature = "subnet" // grants peer subnet routes management
capFeatureExitNode capFeature = "exitnode" // grants peer ability to advertise-as and use exit nodes
capFeatureAccount capFeature = "account" // grants peer ability to turn on auto updates and log out of node
capFeatureAll capFeature = "*" // grants peer management of all features
capFeatureSSH capFeature = "ssh" // grants peer SSH server management
capFeatureSubnets capFeature = "subnets" // grants peer subnet routes management
capFeatureExitNodes capFeature = "exitnodes" // grants peer ability to advertise-as and use exit nodes
capFeatureAccount capFeature = "account" // grants peer ability to turn on auto updates and log out of node
)
// validCaps contains the list of valid capabilities used in the web client.
// Any capabilities included in a peer's grants that do not fall into this
// list will be ignored.
var validCaps []capFeature = []capFeature{
capFeatureAll,
capFeatureSSH,
capFeatureSubnets,
capFeatureExitNodes,
capFeatureAccount,
}
type capRule struct {
CanEdit []string `json:"canEdit,omitempty"` // list of features peer is allowed to edit
}
// toPeerCapabilities parses out the web ui capabilities from the
// given whois response.
func toPeerCapabilities(whois *apitype.WhoIsResponse) (peerCapabilities, error) {
caps := peerCapabilities{}
if whois == nil {
return caps, nil
func toPeerCapabilities(status *ipnstate.Status, whois *apitype.WhoIsResponse) (peerCapabilities, error) {
if whois == nil || status == nil {
return peerCapabilities{}, nil
}
if whois.Node.IsTagged() {
// We don't allow management *from* tagged nodes, so ignore caps.
// The web client auth flow relies on having a true user identity
// that can be verified through login.
return peerCapabilities{}, nil
}
if !status.Self.IsTagged() {
// User owned nodes are only ever manageable by the owner.
if status.Self.UserID != whois.UserProfile.ID {
return peerCapabilities{}, nil
} else {
return peerCapabilities{capFeatureAll: true}, nil // owner can edit all features
}
}
// For tagged nodes, we actually look at the granted capabilities.
caps := peerCapabilities{}
rules, err := tailcfg.UnmarshalCapJSON[capRule](whois.CapMap, tailcfg.PeerCapabilityWebUI)
if err != nil {
return nil, fmt.Errorf("failed to unmarshal capability: %v", err)
}
for _, c := range rules {
for _, f := range c.CanEdit {
caps[capFeature(strings.ToLower(f))] = true
cap := capFeature(strings.ToLower(f))
if slices.Contains(validCaps, cap) {
caps[cap] = true
}
}
}
return caps, nil
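With grants filtered through validCaps, callers only ever check the constants above via canEdit. A hedged in-package sketch (the helper itself is invented; ipnstate and apitype are already imported by this file):
// canManageExitNodes reports whether the source peer may edit exit node
// settings via the web UI. Illustrative helper, not part of this change.
func canManageExitNodes(status *ipnstate.Status, whois *apitype.WhoIsResponse) (bool, error) {
	caps, err := toPeerCapabilities(status, whois)
	if err != nil {
		return false, err
	}
	return caps.canEdit(capFeatureExitNodes), nil
}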

View File

@ -6,6 +6,7 @@
"node": "18.16.1",
"yarn": "1.22.19"
},
"type": "module",
"private": true,
"dependencies": {
"@radix-ui/react-collapsible": "^1.0.3",
@ -19,23 +20,28 @@
"zustand": "^4.4.7"
},
"devDependencies": {
"@types/node": "^18.16.1",
"@types/react": "^18.0.20",
"@types/react-dom": "^18.0.6",
"@vitejs/plugin-react-swc": "^3.3.2",
"@vitejs/plugin-react-swc": "^3.6.0",
"autoprefixer": "^10.4.15",
"eslint": "^8.23.1",
"eslint-config-react-app": "^7.0.1",
"eslint-plugin-curly-quotes": "^1.0.4",
"jsdom": "^23.0.1",
"postcss": "^8.4.31",
"prettier": "^2.5.1",
"prettier-plugin-organize-imports": "^3.2.2",
"tailwindcss": "^3.3.3",
"typescript": "^4.7.4",
"vite": "^4.3.9",
"vite-plugin-rewrite-all": "^1.0.1",
"vite-plugin-svgr": "^3.2.0",
"typescript": "^5.3.3",
"vite": "^5.1.7",
"vite-plugin-svgr": "^4.2.0",
"vite-tsconfig-paths": "^3.5.0",
"vitest": "^0.32.0"
"vitest": "^1.3.1"
},
"resolutions": {
"@typescript-eslint/eslint-plugin": "^6.2.1",
"@typescript-eslint/parser": "^6.2.1"
},
"scripts": {
"build": "vite build",
@ -50,9 +56,11 @@
"react-app"
],
"plugins": [
"curly-quotes",
"react-hooks"
],
"rules": {
"curly-quotes/no-straight-quotes": "warn",
"react-hooks/rules-of-hooks": "error",
"react-hooks/exhaustive-deps": "error"
},

View File

@ -4,8 +4,8 @@
import * as Primitive from "@radix-ui/react-popover"
import cx from "classnames"
import React, { useCallback } from "react"
import { ReactComponent as ChevronDown } from "src/assets/icons/chevron-down.svg"
import { ReactComponent as Copy } from "src/assets/icons/copy.svg"
import ChevronDown from "src/assets/icons/chevron-down.svg?react"
import Copy from "src/assets/icons/copy.svg?react"
import NiceIP from "src/components/nice-ip"
import useToaster from "src/hooks/toaster"
import Button from "src/ui/button"

View File

@ -2,7 +2,7 @@
// SPDX-License-Identifier: BSD-3-Clause
import React from "react"
import { ReactComponent as TailscaleIcon } from "src/assets/icons/tailscale-icon.svg"
import TailscaleIcon from "src/assets/icons/tailscale-icon.svg?react"
import LoginToggle from "src/components/login-toggle"
import DeviceDetailsView from "src/components/views/device-details-view"
import DisconnectedView from "src/components/views/disconnected-view"
@ -11,8 +11,8 @@ import LoginView from "src/components/views/login-view"
import SSHView from "src/components/views/ssh-view"
import SubnetRouterView from "src/components/views/subnet-router-view"
import { UpdatingView } from "src/components/views/updating-view"
import useAuth, { AuthResponse } from "src/hooks/auth"
import { Feature, featureDescription, NodeData } from "src/types"
import useAuth, { AuthResponse, canEdit } from "src/hooks/auth"
import { Feature, NodeData, featureDescription } from "src/types"
import Card from "src/ui/card"
import EmptyState from "src/ui/empty-state"
import LoadingDots from "src/ui/loading-dots"
@ -56,16 +56,19 @@ function WebClient({
<Header node={node} auth={auth} newSession={newSession} />
<Switch>
<Route path="/">
<HomeView readonly={!auth.canManageNode} node={node} />
<HomeView node={node} auth={auth} />
</Route>
<Route path="/details">
<DeviceDetailsView readonly={!auth.canManageNode} node={node} />
<DeviceDetailsView node={node} auth={auth} />
</Route>
<FeatureRoute path="/subnets" feature="advertise-routes" node={node}>
<SubnetRouterView readonly={!auth.canManageNode} node={node} />
<SubnetRouterView
readonly={!canEdit("subnets", auth)}
node={node}
/>
</FeatureRoute>
<FeatureRoute path="/ssh" feature="ssh" node={node}>
<SSHView readonly={!auth.canManageNode} node={node} />
<SSHView readonly={!canEdit("ssh", auth)} node={node} />
</FeatureRoute>
{/* <Route path="/serve">Share local content</Route> */}
<FeatureRoute path="/update" feature="auto-update" node={node}>

View File

@ -4,8 +4,8 @@
import cx from "classnames"
import React, { useCallback, useEffect, useMemo, useRef, useState } from "react"
import { useAPI } from "src/api"
import { ReactComponent as Check } from "src/assets/icons/check.svg"
import { ReactComponent as ChevronDown } from "src/assets/icons/chevron-down.svg"
import Check from "src/assets/icons/check.svg?react"
import ChevronDown from "src/assets/icons/chevron-down.svg?react"
import useExitNodes, {
noExitNode,
runAsExitNode,
@ -180,7 +180,7 @@ export default function ExitNodeSelector({
)}
{pending && (
<p className="text-white p-3">
Pending approval to run as exit node. This device won't be usable as
Pending approval to run as exit node. This device won’t be usable as
an exit node until then.
</p>
)}

View File

@ -2,15 +2,17 @@
// SPDX-License-Identifier: BSD-3-Clause
import cx from "classnames"
import React, { useCallback, useEffect, useState } from "react"
import { ReactComponent as ChevronDown } from "src/assets/icons/chevron-down.svg"
import { ReactComponent as Eye } from "src/assets/icons/eye.svg"
import { ReactComponent as User } from "src/assets/icons/user.svg"
import { AuthResponse, AuthType } from "src/hooks/auth"
import React, { useCallback, useMemo, useState } from "react"
import ChevronDown from "src/assets/icons/chevron-down.svg?react"
import Eye from "src/assets/icons/eye.svg?react"
import User from "src/assets/icons/user.svg?react"
import { AuthResponse, hasAnyEditCapabilities } from "src/hooks/auth"
import { useTSWebConnected } from "src/hooks/ts-web-connected"
import { NodeData } from "src/types"
import Button from "src/ui/button"
import Popover from "src/ui/popover"
import ProfilePic from "src/ui/profile-pic"
import { assertNever, isHTTPS } from "src/utils/util"
export default function LoginToggle({
node,
@ -22,12 +24,29 @@ export default function LoginToggle({
newSession: () => Promise<void>
}) {
const [open, setOpen] = useState<boolean>(false)
const { tsWebConnected, checkTSWebConnection } = useTSWebConnected(
auth.serverMode,
node.IPv4
)
return (
<Popover
className="p-3 bg-white rounded-lg shadow flex flex-col gap-2 max-w-[317px]"
className="p-3 bg-white rounded-lg shadow flex flex-col max-w-[317px]"
content={
<LoginPopoverContent node={node} auth={auth} newSession={newSession} />
auth.serverMode === "readonly" ? (
<ReadonlyModeContent auth={auth} />
) : auth.serverMode === "login" ? (
<LoginModeContent
auth={auth}
node={node}
tsWebConnected={tsWebConnected}
checkTSWebConnection={checkTSWebConnection}
/>
) : auth.serverMode === "manage" ? (
<ManageModeContent auth={auth} node={node} newSession={newSession} />
) : (
assertNever(auth.serverMode)
)
}
side="bottom"
align="end"
@ -35,216 +54,303 @@ export default function LoginToggle({
onOpenChange={setOpen}
asChild
>
{!auth.canManageNode ? (
<button
className={cx(
"pl-3 py-1 bg-gray-700 rounded-full flex justify-start items-center h-[34px]",
{ "pr-1": auth.viewerIdentity, "pr-3": !auth.viewerIdentity }
)}
onClick={() => setOpen(!open)}
>
<Eye />
<div className="text-white leading-snug ml-2 mr-1">Viewing</div>
<ChevronDown className="stroke-white w-[15px] h-[15px]" />
{auth.viewerIdentity && (
<ProfilePic
className="ml-2"
size="medium"
url={auth.viewerIdentity.profilePicUrl}
/>
)}
</button>
) : (
<div
className={cx(
"w-[34px] h-[34px] p-1 rounded-full justify-center items-center inline-flex hover:bg-gray-300",
{
"bg-transparent": !open,
"bg-gray-300": open,
}
)}
>
<button onClick={() => setOpen(!open)}>
<ProfilePic
size="medium"
url={auth.viewerIdentity?.profilePicUrl}
/>
</button>
</div>
)}
<div>
{auth.authorized ? (
<TriggerWhenManaging auth={auth} open={open} setOpen={setOpen} />
) : (
<TriggerWhenReading auth={auth} open={open} setOpen={setOpen} />
)}
</div>
</Popover>
)
}
function LoginPopoverContent({
/**
* TriggerWhenManaging is displayed as the trigger for the login popover
* when the user has an active authorized management session.
*/
function TriggerWhenManaging({
auth,
open,
setOpen,
}: {
auth: AuthResponse
open: boolean
setOpen: (next: boolean) => void
}) {
return (
<div
className={cx(
"w-[34px] h-[34px] p-1 rounded-full justify-center items-center inline-flex hover:bg-gray-300",
{
"bg-transparent": !open,
"bg-gray-300": open,
}
)}
>
<button onClick={() => setOpen(!open)}>
<ProfilePic size="medium" url={auth.viewerIdentity?.profilePicUrl} />
</button>
</div>
)
}
/**
* TriggerWhenReading is displayed as the trigger for the login popover
* when the user is currently in read mode (doesn't have an authorized
* management session).
*/
function TriggerWhenReading({
auth,
open,
setOpen,
}: {
auth: AuthResponse
open: boolean
setOpen: (next: boolean) => void
}) {
return (
<button
className={cx(
"pl-3 py-1 bg-gray-700 rounded-full flex justify-start items-center h-[34px]",
{ "pr-1": auth.viewerIdentity, "pr-3": !auth.viewerIdentity }
)}
onClick={() => setOpen(!open)}
>
<Eye />
<div className="text-white leading-snug ml-2 mr-1">Viewing</div>
<ChevronDown className="stroke-white w-[15px] h-[15px]" />
{auth.viewerIdentity && (
<ProfilePic
className="ml-2"
size="medium"
url={auth.viewerIdentity.profilePicUrl}
/>
)}
</button>
)
}
/**
* PopoverContentHeader is the header for the login popover.
*/
function PopoverContentHeader({ auth }: { auth: AuthResponse }) {
return (
<div className="text-black text-sm font-medium leading-tight mb-1">
{auth.authorized ? "Managing" : "Viewing"}
{auth.viewerIdentity && ` as ${auth.viewerIdentity.loginName}`}
</div>
)
}
/**
* PopoverContentFooter is the footer for the login popover.
*/
function PopoverContentFooter({ auth }: { auth: AuthResponse }) {
return auth.viewerIdentity ? (
<>
<hr className="my-2" />
<div className="flex items-center">
<User className="flex-shrink-0" />
<p className="text-gray-500 text-xs ml-2">
We recognize you because you are accessing this page from{" "}
<span className="font-medium">
{auth.viewerIdentity.nodeName || auth.viewerIdentity.nodeIP}
</span>
</p>
</div>
</>
) : null
}
/**
* ReadonlyModeContent is the body of the login popover when the web
* client is being run in "readonly" server mode.
*/
function ReadonlyModeContent({ auth }: { auth: AuthResponse }) {
return (
<>
<PopoverContentHeader auth={auth} />
<p className="text-gray-500 text-xs">
This web interface is running in read-only mode.{" "}
<a
href="https://tailscale.com/s/web-client-read-only"
className="text-blue-700"
target="_blank"
rel="noreferrer"
>
Learn more &rarr;
</a>
</p>
<PopoverContentFooter auth={auth} />
</>
)
}
/**
* LoginModeContent is the body of the login popover when the web
* client is being run in "login" server mode.
*/
function LoginModeContent({
node,
auth,
tsWebConnected,
checkTSWebConnection,
}: {
node: NodeData
auth: AuthResponse
tsWebConnected: boolean
checkTSWebConnection: () => void
}) {
const https = isHTTPS()
// We can't run the ts web connection test when the webpage is loaded
// over HTTPS. So in this case, we default to presenting a login button
// with some helper text reminding the user to check their connection
// themselves.
const hasACLAccess = https || tsWebConnected
const hasEditCaps = useMemo(() => {
if (!auth.viewerIdentity) {
// If not connected to login client over tailscale, we won't know the viewer's
// identity. So we must assume they may be able to edit something and have the
// management client handle permissions once the user gets there.
return true
}
return hasAnyEditCapabilities(auth)
}, [auth])
const handleLogin = useCallback(() => {
// Must be connected over Tailscale to log in.
// Send user to Tailscale IP and start check mode
const manageURL = `http://${node.IPv4}:5252/?check=now`
if (window.self !== window.top) {
// If we're inside an iframe, open management client in new window.
window.open(manageURL, "_blank")
} else {
window.location.href = manageURL
}
}, [node.IPv4])
return (
<div
onMouseEnter={
hasEditCaps && !hasACLAccess ? checkTSWebConnection : undefined
}
>
<PopoverContentHeader auth={auth} />
{!hasACLAccess || !hasEditCaps ? (
<>
<p className="text-gray-500 text-xs">
{!hasEditCaps ? (
// ACLs allow access, but user isn't allowed to edit any features,
// restricted to readonly. No point in sending them over to the
// tailscaleIP:5252 address.
<>
You don’t have permission to make changes to this device, but
you can view most of its details.
</>
) : !node.ACLAllowsAnyIncomingTraffic ? (
// Tailnet ACLs don't allow access to anyone.
<>
The current tailnet policy file does not allow connecting to
this device.
</>
) : (
// ACLs don't allow access to this user specifically.
<>
Cannot access this device’s Tailscale IP. Make sure you are
connected to your tailnet, and that your policy file allows
access.
</>
)}{" "}
<a
href="https://tailscale.com/s/web-client-access"
className="text-blue-700"
target="_blank"
rel="noreferrer"
>
Learn more &rarr;
</a>
</p>
</>
) : (
// User can connect to Tailscale IP; sign in when ready.
<>
<p className="text-gray-500 text-xs">
You can see most of this device’s details. To make changes, you need
to sign in.
</p>
{https && (
// we don't know if the user can connect over TS, so
// provide extra tips in case they have trouble.
<p className="text-gray-500 text-xs font-semibold pt-2">
Make sure you are connected to your tailnet, and that your policy
file allows access.
</p>
)}
<SignInButton auth={auth} onClick={handleLogin} />
</>
)}
<PopoverContentFooter auth={auth} />
</div>
)
}
/**
* ManageModeContent is the body of the login popover when the web
* client is being run in "manage" server mode.
*/
function ManageModeContent({
auth,
newSession,
}: {
node: NodeData
auth: AuthResponse
newSession: () => Promise<void>
newSession: () => void
}) {
/**
* canConnectOverTS indicates whether the current viewer
* is able to hit the node's web client that's being served
* at http://${node.IP}:5252. If false, this means that the
* viewer must connect to the correct tailnet before being
* able to sign in.
*/
const [canConnectOverTS, setCanConnectOverTS] = useState<boolean>(false)
const [isRunningCheck, setIsRunningCheck] = useState<boolean>(false)
// Whether the current page is loaded over HTTPS.
// If it is, then the connectivity check to the management client
// will fail with a mixed-content error.
const isHTTPS = window.location.protocol === "https:"
const checkTSConnection = useCallback(() => {
if (auth.viewerIdentity || isHTTPS) {
// Skip the connectivity check if we either already know we're connected over Tailscale,
// or know the connectivity check will fail because the current page is loaded over HTTPS.
setCanConnectOverTS(true)
return
}
// Otherwise, test connection to the ts IP.
if (isRunningCheck) {
return // already checking
}
setIsRunningCheck(true)
fetch(`http://${node.IPv4}:5252/ok`, { mode: "no-cors" })
.then(() => {
setCanConnectOverTS(true)
setIsRunningCheck(false)
})
.catch(() => setIsRunningCheck(false))
}, [auth.viewerIdentity, isRunningCheck, node.IPv4, isHTTPS])
/**
* Checking connection for first time on page load.
*
* While not connected, we check again whenever the mouse
* enters the popover component, to pick up on the user
* leaving to turn on Tailscale then returning to the view.
* See `onMouseEnter` on the div below.
*/
// eslint-disable-next-line react-hooks/exhaustive-deps
useEffect(() => checkTSConnection(), [])
const handleSignInClick = useCallback(() => {
if (auth.viewerIdentity && auth.serverMode === "manage") {
if (window.self !== window.top) {
// if we're inside an iframe, start session in new window
let url = new URL(window.location.href)
url.searchParams.set("check", "now")
window.open(url, "_blank")
} else {
newSession()
}
const handleLogin = useCallback(() => {
if (window.self !== window.top) {
// If we're inside an iframe, start session in new window.
let url = new URL(window.location.href)
url.searchParams.set("check", "now")
window.open(url, "_blank")
} else {
// Must be connected over Tailscale to log in.
// Send user to Tailscale IP and start check mode
const manageURL = `http://${node.IPv4}:5252/?check=now`
if (window.self !== window.top) {
// if we're inside an iframe, open management client in new window
window.open(manageURL, "_blank")
} else {
window.location.href = manageURL
}
newSession()
}
}, [auth.viewerIdentity, auth.serverMode, newSession, node.IPv4])
}, [newSession])
const hasAnyPermissions = useMemo(() => hasAnyEditCapabilities(auth), [auth])
return (
<div onMouseEnter={!canConnectOverTS ? checkTSConnection : undefined}>
<div className="text-black text-sm font-medium leading-tight mb-1">
{!auth.canManageNode ? "Viewing" : "Managing"}
{auth.viewerIdentity && ` as ${auth.viewerIdentity.loginName}`}
</div>
{!auth.canManageNode && (
<>
{!auth.viewerIdentity ? (
// User is not connected over Tailscale.
// These states are only possible on the login client.
<>
{!canConnectOverTS ? (
<>
<p className="text-gray-500 text-xs">
{!node.ACLAllowsAnyIncomingTraffic ? (
// Tailnet ACLs don't allow access.
<>
The current tailnet policy file does not allow
connecting to this device.
</>
) : (
// ACLs allow access, but user can't connect.
<>
Cannot access this device's Tailscale IP. Make sure you
are connected to your tailnet, and that your policy file
allows access.
</>
)}{" "}
<a
href="https://tailscale.com/s/web-client-connection"
className="text-blue-700"
target="_blank"
rel="noreferrer"
>
Learn more &rarr;
</a>
</p>
</>
) : (
// User can connect to Tailscale IP; sign in when ready.
<>
<p className="text-gray-500 text-xs">
You can see most of this device's details. To make changes,
you need to sign in.
</p>
{isHTTPS && (
// we don't know if the user can connect over TS, so
// provide extra tips in case they have trouble.
<p className="text-gray-500 text-xs font-semibold pt-2">
Make sure you are connected to your tailnet, and that your
policy file allows access.
</p>
)}
<SignInButton auth={auth} onClick={handleSignInClick} />
</>
)}
</>
) : auth.authNeeded === AuthType.tailscale ? (
// User is connected over Tailscale, but needs to complete check mode.
<>
<p className="text-gray-500 text-xs">
To make changes, sign in to confirm your identity. This extra
step helps us keep your device secure.
</p>
<SignInButton auth={auth} onClick={handleSignInClick} />
</>
) : (
// User is connected over tailscale, but doesn't have permission to manage.
<>
<PopoverContentHeader auth={auth} />
{!auth.authorized &&
(hasAnyPermissions ? (
// User is connected over Tailscale, but needs to complete check mode.
<>
<p className="text-gray-500 text-xs">
You don’t have permission to make changes to this device, but you
can view most of its details.
To make changes, sign in to confirm your identity. This extra step
helps us keep your device secure.
</p>
)}
</>
)}
{auth.viewerIdentity && (
<>
<hr className="my-2" />
<div className="flex items-center">
<User className="flex-shrink-0" />
<p className="text-gray-500 text-xs ml-2">
We recognize you because you are accessing this page from{" "}
<span className="font-medium">
{auth.viewerIdentity.nodeName || auth.viewerIdentity.nodeIP}
</span>
</p>
</div>
</>
)}
</div>
<SignInButton auth={auth} onClick={handleLogin} />
</>
) : (
// User is connected over tailscale, but doesn't have permission to manage.
<p className="text-gray-500 text-xs">
You don’t have permission to make changes to this device, but you
can view most of its details.{" "}
<a
href="https://tailscale.com/s/web-client-access"
className="text-blue-700"
target="_blank"
rel="noreferrer"
>
Learn more &rarr;
</a>
</p>
))}
<PopoverContentFooter auth={auth} />
</>
)
}

View File

@ -61,7 +61,7 @@ export function ChangelogText({ version }: { version?: string }) {
<a href="https://tailscale.com/changelog/" className="link">
release notes
</a>{" "}
to find out what's new!
to find out what’s new!
</>
)
}

View File

@ -8,6 +8,7 @@ import ACLTag from "src/components/acl-tag"
import * as Control from "src/components/control-components"
import NiceIP from "src/components/nice-ip"
import { UpdateAvailableNotification } from "src/components/update-available"
import { AuthResponse, canEdit } from "src/hooks/auth"
import { NodeData } from "src/types"
import Button from "src/ui/button"
import Card from "src/ui/card"
@ -16,11 +17,11 @@ import QuickCopy from "src/ui/quick-copy"
import { useLocation } from "wouter"
export default function DeviceDetailsView({
readonly,
node,
auth,
}: {
readonly: boolean
node: NodeData
auth: AuthResponse
}) {
return (
<>
@ -37,11 +38,11 @@ export default function DeviceDetailsView({
})}
/>
</div>
{!readonly && <DisconnectDialog />}
{canEdit("account", auth) && <DisconnectDialog />}
</div>
</Card>
{node.Features["auto-update"] &&
!readonly &&
canEdit("account", auth) &&
node.ClientVersion &&
!node.ClientVersion.RunningLatest && (
<UpdateAvailableNotification details={node.ClientVersion} />

View File

@ -2,7 +2,7 @@
// SPDX-License-Identifier: BSD-3-Clause
import React from "react"
import { ReactComponent as TailscaleIcon } from "src/assets/icons/tailscale-icon.svg"
import TailscaleIcon from "src/assets/icons/tailscale-icon.svg?react"
/**
* DisconnectedView is rendered after node logout.

View File

@ -4,21 +4,22 @@
import cx from "classnames"
import React, { useMemo } from "react"
import { apiFetch } from "src/api"
import { ReactComponent as ArrowRight } from "src/assets/icons/arrow-right.svg"
import { ReactComponent as Machine } from "src/assets/icons/machine.svg"
import ArrowRight from "src/assets/icons/arrow-right.svg?react"
import Machine from "src/assets/icons/machine.svg?react"
import AddressCard from "src/components/address-copy-card"
import ExitNodeSelector from "src/components/exit-node-selector"
import { AuthResponse, canEdit } from "src/hooks/auth"
import { NodeData } from "src/types"
import Card from "src/ui/card"
import { pluralize } from "src/utils/util"
import { Link, useLocation } from "wouter"
export default function HomeView({
readonly,
node,
auth,
}: {
readonly: boolean
node: NodeData
auth: AuthResponse
}) {
const [allSubnetRoutes, pendingSubnetRoutes] = useMemo(
() => [
@ -63,7 +64,11 @@ export default function HomeView({
</div>
{(node.Features["advertise-exit-node"] ||
node.Features["use-exit-node"]) && (
<ExitNodeSelector className="mb-5" node={node} disabled={readonly} />
<ExitNodeSelector
className="mb-5"
node={node}
disabled={!canEdit("exitnodes", auth)}
/>
)}
<Link
className="link font-medium"

View File

@ -3,7 +3,7 @@
import React, { useState } from "react"
import { useAPI } from "src/api"
import { ReactComponent as TailscaleIcon } from "src/assets/icons/tailscale-icon.svg"
import TailscaleIcon from "src/assets/icons/tailscale-icon.svg?react"
import { NodeData } from "src/types"
import Button from "src/ui/button"
import Collapsible from "src/ui/collapsible"
@ -41,7 +41,7 @@ export default function LoginView({ data }: { data: NodeData }) {
<>
<div className="mb-6">
<p className="text-gray-700">
Your device's key has expired. Reauthenticate this device by
Your device’s key has expired. Reauthenticate this device by
logging in again, or{" "}
<a
href="https://tailscale.com/kb/1028/key-expiry"

View File

@ -4,9 +4,9 @@
import cx from "classnames"
import React, { useCallback, useMemo, useState } from "react"
import { useAPI } from "src/api"
import { ReactComponent as CheckCircle } from "src/assets/icons/check-circle.svg"
import { ReactComponent as Clock } from "src/assets/icons/clock.svg"
import { ReactComponent as Plus } from "src/assets/icons/plus.svg"
import CheckCircle from "src/assets/icons/check-circle.svg?react"
import Clock from "src/assets/icons/clock.svg?react"
import Plus from "src/assets/icons/plus.svg?react"
import * as Control from "src/components/control-components"
import { NodeData } from "src/types"
import Button from "src/ui/button"

View File

@ -2,8 +2,8 @@
// SPDX-License-Identifier: BSD-3-Clause
import React from "react"
import { ReactComponent as CheckCircleIcon } from "src/assets/icons/check-circle.svg"
import { ReactComponent as XCircleIcon } from "src/assets/icons/x-circle.svg"
import CheckCircleIcon from "src/assets/icons/check-circle.svg?react"
import XCircleIcon from "src/assets/icons/x-circle.svg?react"
import { ChangelogText } from "src/components/update-available"
import { UpdateState, useInstallUpdate } from "src/hooks/self-update"
import { VersionInfo } from "src/types"
@ -35,7 +35,7 @@ export function UpdatingView({
<Spinner size="sm" className="text-gray-400" />
<h1 className="text-2xl m-3">Update in progress</h1>
<p className="text-gray-400">
The update shouldn't take more than a couple of minutes. Once it's
The update shouldn’t take more than a couple of minutes. Once it’s
completed, you will be asked to log in again.
</p>
</>

View File

@ -4,25 +4,50 @@
import { useCallback, useEffect, useState } from "react"
import { apiFetch, setSynoToken } from "src/api"
export enum AuthType {
synology = "synology",
tailscale = "tailscale",
}
export type AuthResponse = {
authNeeded?: AuthType
canManageNode: boolean
serverMode: "login" | "manage"
serverMode: AuthServerMode
authorized: boolean
viewerIdentity?: {
loginName: string
nodeName: string
nodeIP: string
profilePicUrl?: string
capabilities: { [key in PeerCapability]: boolean }
}
needsSynoAuth?: boolean
}
// useAuth reports and refreshes Tailscale auth status
// for the web client.
export type AuthServerMode = "login" | "readonly" | "manage"
export type PeerCapability = "*" | "ssh" | "subnets" | "exitnodes" | "account"
/**
* canEdit reports whether the given auth response specifies that the viewer
* has the ability to edit the given capability.
*/
export function canEdit(cap: PeerCapability, auth: AuthResponse): boolean {
if (!auth.authorized || !auth.viewerIdentity) {
return false
}
if (auth.viewerIdentity.capabilities["*"] === true) {
return true // can edit all features
}
return auth.viewerIdentity.capabilities[cap] === true
}
/**
* hasAnyEditCapabilities reports whether the given auth response specifies
* that the viewer has at least one edit capability. If this is true, the
* user is able to go through the auth flow to authenticate a management
* session.
*/
export function hasAnyEditCapabilities(auth: AuthResponse): boolean {
return Object.values(auth.viewerIdentity?.capabilities || {}).includes(true)
}
/**
* useAuth reports and refreshes Tailscale auth status for the web client.
*/
export default function useAuth() {
const [data, setData] = useState<AuthResponse>()
const [loading, setLoading] = useState<boolean>(true)
@ -33,18 +58,16 @@ export default function useAuth() {
return apiFetch<AuthResponse>("/auth", "GET")
.then((d) => {
setData(d)
switch (d.authNeeded) {
case AuthType.synology:
fetch("/webman/login.cgi")
.then((r) => r.json())
.then((a) => {
setSynoToken(a.SynoToken)
setRanSynoAuth(true)
setLoading(false)
})
break
default:
setLoading(false)
if (d.needsSynoAuth) {
fetch("/webman/login.cgi")
.then((r) => r.json())
.then((a) => {
setSynoToken(a.SynoToken)
setRanSynoAuth(true)
setLoading(false)
})
} else {
setLoading(false)
}
return d
})
@ -72,8 +95,13 @@ export default function useAuth() {
useEffect(() => {
loadAuth().then((d) => {
if (!d) {
return
}
if (
!d?.canManageNode &&
!d.authorized &&
hasAnyEditCapabilities(d) &&
// Start auth flow immediately if browser has requested it.
new URLSearchParams(window.location.search).get("check") === "now"
) {
newSession()

View File

@ -0,0 +1,46 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
import { useCallback, useEffect, useState } from "react"
import { isHTTPS } from "src/utils/util"
import { AuthServerMode } from "./auth"
/**
* useTSWebConnected hook is used to check whether the browser is able to
* connect to the web client served at http://${nodeIPv4}:5252
*/
export function useTSWebConnected(mode: AuthServerMode, nodeIPv4: string) {
const [tsWebConnected, setTSWebConnected] = useState<boolean>(
mode === "manage" // browser already on the web client
)
const [isLoading, setIsLoading] = useState<boolean>(false)
const checkTSWebConnection = useCallback(() => {
if (mode === "manage") {
// Already connected to the web client.
setTSWebConnected(true)
return
}
if (isHTTPS()) {
// When page is loaded over HTTPS, the connectivity check will always
// fail with a mixed-content error. In this case don't bother doing
// the check.
return
}
if (isLoading) {
return // already checking
}
setIsLoading(true)
fetch(`http://${nodeIPv4}:5252/ok`, { mode: "no-cors" })
.then(() => {
setTSWebConnected(true)
setIsLoading(false)
})
.catch(() => setIsLoading(false))
}, [isLoading, mode, nodeIPv4])
// eslint-disable-next-line react-hooks/exhaustive-deps
useEffect(() => checkTSWebConnection(), []) // checking connection for first time on page load
return { tsWebConnected, checkTSWebConnection, isLoading }
}

View File

@ -3,7 +3,7 @@
import * as Primitive from "@radix-ui/react-collapsible"
import React, { useState } from "react"
import { ReactComponent as ChevronDown } from "src/assets/icons/chevron-down.svg"
import ChevronDown from "src/assets/icons/chevron-down.svg?react"
type CollapsibleProps = {
trigger?: string

View File

@ -4,7 +4,7 @@
import * as DialogPrimitive from "@radix-ui/react-dialog"
import cx from "classnames"
import React, { Component, ComponentProps, FormEvent } from "react"
import { ReactComponent as X } from "src/assets/icons/x.svg"
import X from "src/assets/icons/x.svg?react"
import Button from "src/ui/button"
import PortalContainerContext from "src/ui/portal-container-context"
import { isObject } from "src/utils/util"

View File

@ -3,7 +3,7 @@
import cx from "classnames"
import React, { forwardRef, InputHTMLAttributes } from "react"
import { ReactComponent as Search } from "src/assets/icons/search.svg"
import Search from "src/assets/icons/search.svg?react"
type Props = {
className?: string

View File

@ -10,7 +10,7 @@ import React, {
useState,
} from "react"
import { createPortal } from "react-dom"
import { ReactComponent as X } from "src/assets/icons/x.svg"
import X from "src/assets/icons/x.svg?react"
import { noop } from "src/utils/util"
import { create } from "zustand"
import { shallow } from "zustand/shallow"

View File

@ -49,3 +49,10 @@ export function isPromise<T = unknown>(val: unknown): val is Promise<T> {
}
return typeof val === "object" && "then" in val
}
/**
* isHTTPS reports whether the current page is loaded over HTTPS.
*/
export function isHTTPS() {
return window.location.protocol === "https:"
}

View File

@ -1,7 +1,7 @@
const plugin = require("tailwindcss/plugin")
const styles = require("./styles.json")
import plugin from "tailwindcss/plugin"
import styles from "./styles.json"
module.exports = {
const config = {
theme: {
screens: {
sm: "420px",
@ -96,20 +96,22 @@ module.exports = {
plugins: [
plugin(function ({ addVariant }) {
addVariant("state-open", [
'&[data-state="open"]',
'[data-state="open"] &',
"&[data-state=“open”]",
"[data-state=“open”] &",
])
addVariant("state-closed", [
'&[data-state="closed"]',
'[data-state="closed"] &',
"&[data-state=“closed”]",
"[data-state=“closed”] &",
])
addVariant("state-delayed-open", [
'&[data-state="delayed-open"]',
'[data-state="delayed-open"] &',
"&[data-state=“delayed-open”]",
"[data-state=“delayed-open”] &",
])
addVariant("state-active", ['&[data-state="active"]'])
addVariant("state-inactive", ['&[data-state="inactive"]'])
addVariant("state-active", ["&[data-state=“active”]"])
addVariant("state-inactive", ["&[data-state=“inactive”]"])
}),
],
content: ["./src/**/*.html", "./src/**/*.{ts,tsx}", "./index.html"],
}
export default config

View File

@ -5,6 +5,7 @@
"module": "ES2020",
"strict": true,
"sourceMap": true,
"skipLibCheck": true,
"isolatedModules": true,
"moduleResolution": "node",
"forceConsistentCasingInFileNames": true,

View File

@ -1,6 +1,5 @@
/// <reference types="vitest" />
import { createLogger, defineConfig } from "vite"
import rewrite from "vite-plugin-rewrite-all"
import svgr from "vite-plugin-svgr"
import paths from "vite-tsconfig-paths"
@ -24,11 +23,6 @@ export default defineConfig({
plugins: [
paths(),
svgr(),
// By default, the Vite dev server doesn't handle dots
// in path names and treats them as static files.
// This plugin changes Vite's routing logic to fix this.
// See: https://github.com/vitejs/vite/issues/2415
rewrite(),
],
build: {
outDir: "build",

View File

@ -95,6 +95,14 @@ const (
// In this mode, API calls are authenticated via platform auth.
LoginServerMode ServerMode = "login"
// ReadOnlyServerMode is identical to LoginServerMode,
// but does not present a login button to switch to manage mode,
// even if the management client is running and reachable.
//
// This is designed for platforms where the device is configured by other means,
// such as Home Assistant's declarative YAML configuration.
ReadOnlyServerMode ServerMode = "readonly"
// ManageServerMode serves a management client for editing tailscale
// settings of a node.
//
@ -154,7 +162,7 @@ type ServerOpts struct {
// and not the lifespan of the web server.
func NewServer(opts ServerOpts) (s *Server, err error) {
switch opts.Mode {
case LoginServerMode, ManageServerMode:
case LoginServerMode, ReadOnlyServerMode, ManageServerMode:
// valid types
case "":
return nil, fmt.Errorf("must specify a Mode")
@ -207,10 +215,14 @@ func NewServer(opts ServerOpts) (s *Server, err error) {
// The client is secured by limiting the interface it listens on,
// or by authenticating requests before they reach the web client.
csrfProtect := csrf.Protect(s.csrfKey(), csrf.Secure(false))
if s.mode == LoginServerMode {
switch s.mode {
case LoginServerMode:
s.apiHandler = csrfProtect(http.HandlerFunc(s.serveLoginAPI))
metric = "web_login_client_initialization"
} else {
case ReadOnlyServerMode:
s.apiHandler = csrfProtect(http.HandlerFunc(s.serveLoginAPI))
metric = "web_readonly_client_initialization"
case ManageServerMode:
s.apiHandler = csrfProtect(http.HandlerFunc(s.serveAPI))
metric = "web_client_initialization"
}
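
For orientation, here is a minimal sketch of how an embedding program might select the new read-only mode. It is not part of this diff; the tailscale.com/client/web import path, the use of the Server as an http.Handler, and the omission of the remaining ServerOpts fields are assumptions based on the surrounding code.

    package main

    import (
        "log"
        "net/http"

        "tailscale.com/client/web"
    )

    func main() {
        // ReadOnlyServerMode serves the same viewer UI as LoginServerMode but
        // never offers the switch into manage mode.
        srv, err := web.NewServer(web.ServerOpts{
            Mode: web.ReadOnlyServerMode,
            // Other options (such as the local client used to reach tailscaled)
            // are elided from this sketch.
        })
        if err != nil {
            log.Fatal(err)
        }
        // Mount the web client like any other handler; Server exposes ServeHTTP.
        log.Fatal(http.ListenAndServe("localhost:8080", srv))
    }
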
@ -433,18 +445,188 @@ func (s *Server) serveLoginAPI(w http.ResponseWriter, r *http.Request) {
}
}
type authType string
type apiHandler[data any] struct {
s *Server
w http.ResponseWriter
r *http.Request
var (
synoAuth authType = "synology" // user needs a SynoToken for subsequent API calls
tailscaleAuth authType = "tailscale" // user needs to complete Tailscale check mode
)
// permissionCheck allows for defining whether a requesting peer's
// capabilities grant them access to make the given data update.
// If permissionCheck reports false, the request fails as unauthorized.
permissionCheck func(data data, peer peerCapabilities) bool
}
// newHandler constructs a new api handler which restricts the given request
// to the specified permission check. If the permission check fails for
// the peer associated with the request, an unauthorized error is returned
// to the client.
func newHandler[data any](s *Server, w http.ResponseWriter, r *http.Request, permissionCheck func(data data, peer peerCapabilities) bool) *apiHandler[data] {
return &apiHandler[data]{
s: s,
w: w,
r: r,
permissionCheck: permissionCheck,
}
}
// alwaysAllowed can be passed as the permissionCheck argument to newHandler
// for requests that are always allowed to complete regardless of a peer's
// capabilities.
func alwaysAllowed[data any](_ data, _ peerCapabilities) bool { return true }
func (a *apiHandler[data]) getPeer() (peerCapabilities, error) {
// TODO(tailscale/corp#16695,sonia): We also call StatusWithoutPeers and
// WhoIs when originally checking for a session from authorizeRequest.
// Would be nice if we could pipe those through to here so we don't end
// up having to re-call them to grab the peer capabilities.
status, err := a.s.lc.StatusWithoutPeers(a.r.Context())
if err != nil {
return nil, err
}
whois, err := a.s.lc.WhoIs(a.r.Context(), a.r.RemoteAddr)
if err != nil {
return nil, err
}
peer, err := toPeerCapabilities(status, whois)
if err != nil {
return nil, err
}
return peer, nil
}
type noBodyData any // empty type, for use from serveAPI for endpoints with empty body
// handle runs the given handler if the source peer satisfies the
// constraints for running this request.
//
// handle is expected for use when `data` type is empty, or set to
// `noBodyData` in practice. For requests that expect JSON body data
// to be attached, use handleJSON instead.
func (a *apiHandler[data]) handle(h http.HandlerFunc) {
peer, err := a.getPeer()
if err != nil {
http.Error(a.w, err.Error(), http.StatusInternalServerError)
return
}
var body data // not used
if !a.permissionCheck(body, peer) {
http.Error(a.w, "not allowed", http.StatusUnauthorized)
return
}
h(a.w, a.r)
}
// handleJSON manages decoding the request's body JSON and passing
// it on to the provided function if the source peer satisfies the
// constraints for running this request.
func (a *apiHandler[data]) handleJSON(h func(ctx context.Context, data data) error) {
defer a.r.Body.Close()
var body data
if err := json.NewDecoder(a.r.Body).Decode(&body); err != nil {
http.Error(a.w, err.Error(), http.StatusInternalServerError)
return
}
peer, err := a.getPeer()
if err != nil {
http.Error(a.w, err.Error(), http.StatusInternalServerError)
return
}
if !a.permissionCheck(body, peer) {
http.Error(a.w, "not allowed", http.StatusUnauthorized)
return
}
if err := h(a.r.Context(), body); err != nil {
http.Error(a.w, err.Error(), http.StatusInternalServerError)
return
}
a.w.WriteHeader(http.StatusOK)
}
// serveAPI serves requests for the web client api.
// It should only be called by Server.ServeHTTP, via Server.apiHandler,
// which protects the handler using gorilla csrf.
func (s *Server) serveAPI(w http.ResponseWriter, r *http.Request) {
if r.Method == httpm.PATCH {
// Enforce that PATCH requests are always application/json.
if ct := r.Header.Get("Content-Type"); ct != "application/json" {
http.Error(w, "invalid request", http.StatusBadRequest)
return
}
}
w.Header().Set("X-CSRF-Token", csrf.Token(r))
path := strings.TrimPrefix(r.URL.Path, "/api")
switch {
case path == "/data" && r.Method == httpm.GET:
newHandler[noBodyData](s, w, r, alwaysAllowed).
handle(s.serveGetNodeData)
return
case path == "/exit-nodes" && r.Method == httpm.GET:
newHandler[noBodyData](s, w, r, alwaysAllowed).
handle(s.serveGetExitNodes)
return
case path == "/routes" && r.Method == httpm.POST:
peerAllowed := func(d postRoutesRequest, p peerCapabilities) bool {
if d.SetExitNode && !p.canEdit(capFeatureExitNodes) {
return false
} else if d.SetRoutes && !p.canEdit(capFeatureSubnets) {
return false
}
return true
}
newHandler[postRoutesRequest](s, w, r, peerAllowed).
handleJSON(s.servePostRoutes)
return
case path == "/device-details-click" && r.Method == httpm.POST:
newHandler[noBodyData](s, w, r, alwaysAllowed).
handle(s.serveDeviceDetailsClick)
return
case path == "/local/v0/logout" && r.Method == httpm.POST:
peerAllowed := func(_ noBodyData, peer peerCapabilities) bool {
return peer.canEdit(capFeatureAccount)
}
newHandler[noBodyData](s, w, r, peerAllowed).
handle(s.proxyRequestToLocalAPI)
return
case path == "/local/v0/prefs" && r.Method == httpm.PATCH:
peerAllowed := func(data maskedPrefs, peer peerCapabilities) bool {
if data.RunSSHSet && !peer.canEdit(capFeatureSSH) {
return false
}
return true
}
newHandler[maskedPrefs](s, w, r, peerAllowed).
handleJSON(s.serveUpdatePrefs)
return
case path == "/local/v0/update/check" && r.Method == httpm.GET:
newHandler[noBodyData](s, w, r, alwaysAllowed).
handle(s.proxyRequestToLocalAPI)
return
case path == "/local/v0/update/check" && r.Method == httpm.POST:
peerAllowed := func(_ noBodyData, peer peerCapabilities) bool {
return peer.canEdit(capFeatureAccount)
}
newHandler[noBodyData](s, w, r, peerAllowed).
handle(s.proxyRequestToLocalAPI)
return
case path == "/local/v0/update/progress" && r.Method == httpm.POST:
newHandler[noBodyData](s, w, r, alwaysAllowed).
handle(s.proxyRequestToLocalAPI)
return
case path == "/local/v0/upload-client-metrics" && r.Method == httpm.POST:
newHandler[noBodyData](s, w, r, alwaysAllowed).
handle(s.proxyRequestToLocalAPI)
return
}
http.Error(w, "invalid endpoint", http.StatusNotFound)
}
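
The switch above is now the single place where per-endpoint permissions are declared. As a sketch of the pattern (the path below is hypothetical and not part of this diff), an endpoint that should only be reachable by peers allowed to edit SSH settings would add one more case to the switch:

    case path == "/local/v0/example" && r.Method == httpm.POST: // hypothetical endpoint
        peerAllowed := func(_ noBodyData, peer peerCapabilities) bool {
            return peer.canEdit(capFeatureSSH)
        }
        newHandler[noBodyData](s, w, r, peerAllowed).
            handle(s.proxyRequestToLocalAPI)
        return
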
type authResponse struct {
AuthNeeded authType `json:"authNeeded,omitempty"` // filled when user needs to complete a specific type of auth
CanManageNode bool `json:"canManageNode"`
ViewerIdentity *viewerIdentity `json:"viewerIdentity,omitempty"`
ServerMode ServerMode `json:"serverMode"`
Authorized bool `json:"authorized"` // has an authorized management session
ViewerIdentity *viewerIdentity `json:"viewerIdentity,omitempty"`
NeedsSynoAuth bool `json:"needsSynoAuth,omitempty"`
}
// viewerIdentity is the Tailscale identity of the source node
@ -463,9 +645,11 @@ func (s *Server) serveAPIAuth(w http.ResponseWriter, r *http.Request) {
var resp authResponse
resp.ServerMode = s.mode
session, whois, status, sErr := s.getSession(r)
var caps peerCapabilities
if whois != nil {
caps, err := toPeerCapabilities(whois)
var err error
caps, err = toPeerCapabilities(status, whois)
if err != nil {
http.Error(w, sErr.Error(), http.StatusInternalServerError)
return
@ -483,7 +667,7 @@ func (s *Server) serveAPIAuth(w http.ResponseWriter, r *http.Request) {
// First verify platform auth.
// If platform auth is needed, this should happen first.
if s.mode == LoginServerMode {
if s.mode == LoginServerMode || s.mode == ReadOnlyServerMode {
switch distro.Get() {
case distro.Synology:
authorized, err := authorizeSynology(r)
@ -492,7 +676,7 @@ func (s *Server) serveAPIAuth(w http.ResponseWriter, r *http.Request) {
return
}
if !authorized {
resp.AuthNeeded = synoAuth
resp.NeedsSynoAuth = true
writeJSON(w, resp)
return
}
@ -508,21 +692,17 @@ func (s *Server) serveAPIAuth(w http.ResponseWriter, r *http.Request) {
switch {
case sErr != nil && errors.Is(sErr, errNotUsingTailscale):
// Restricted to the readonly view, no auth action to take.
s.lc.IncrementCounter(r.Context(), "web_client_viewing_local", 1)
resp.AuthNeeded = ""
resp.Authorized = false // restricted to the readonly view
case sErr != nil && errors.Is(sErr, errNotOwner):
// Restricted to the readonly view, no auth action to take.
s.lc.IncrementCounter(r.Context(), "web_client_viewing_not_owner", 1)
resp.AuthNeeded = ""
resp.Authorized = false // restricted to the readonly view
case sErr != nil && errors.Is(sErr, errTaggedLocalSource):
// Restricted to the readonly view, no auth action to take.
s.lc.IncrementCounter(r.Context(), "web_client_viewing_local_tag", 1)
resp.AuthNeeded = ""
resp.Authorized = false // restricted to the readonly view
case sErr != nil && errors.Is(sErr, errTaggedRemoteSource):
// Restricted to the readonly view, no auth action to take.
s.lc.IncrementCounter(r.Context(), "web_client_viewing_remote_tag", 1)
resp.AuthNeeded = ""
resp.Authorized = false // restricted to the readonly view
case sErr != nil && !errors.Is(sErr, errNoSession):
// Any other error.
http.Error(w, sErr.Error(), http.StatusInternalServerError)
@ -533,16 +713,26 @@ func (s *Server) serveAPIAuth(w http.ResponseWriter, r *http.Request) {
} else {
s.lc.IncrementCounter(r.Context(), "web_client_managing_remote", 1)
}
resp.CanManageNode = true
resp.AuthNeeded = ""
// User has a valid session. They're now authorized to edit if they
// have any edit capabilities. In practice, they won't be sent through
// the auth flow if they don't have edit caps, but their ACL granted
// permissions may change at any time. The frontend views and backend
// endpoints are always restricted to their current capabilities in
// addition to a valid session.
//
// But, we also check the caps here for a better user experience on
// the frontend login toggle, which uses resp.Authorized to display
// "viewing" vs "managing" copy. If they don't have caps, we want to
// display "viewing" even if they have a valid session.
resp.Authorized = !caps.isEmpty()
default:
// whois being nil implies local as the request did not come over Tailscale
if whois == nil || (whois.Node.StableID == status.Self.ID) {
// whois being nil implies local as the request did not come over Tailscale.
s.lc.IncrementCounter(r.Context(), "web_client_viewing_local", 1)
} else {
s.lc.IncrementCounter(r.Context(), "web_client_viewing_remote", 1)
}
resp.AuthNeeded = tailscaleAuth
resp.Authorized = false // not yet authorized
}
writeJSON(w, resp)
@ -606,32 +796,6 @@ func (s *Server) serveAPIAuthSessionWait(w http.ResponseWriter, r *http.Request)
}
}
// serveAPI serves requests for the web client api.
// It should only be called by Server.ServeHTTP, via Server.apiHandler,
// which protects the handler using gorilla csrf.
func (s *Server) serveAPI(w http.ResponseWriter, r *http.Request) {
w.Header().Set("X-CSRF-Token", csrf.Token(r))
path := strings.TrimPrefix(r.URL.Path, "/api")
switch {
case path == "/data" && r.Method == httpm.GET:
s.serveGetNodeData(w, r)
return
case path == "/exit-nodes" && r.Method == httpm.GET:
s.serveGetExitNodes(w, r)
return
case path == "/routes" && r.Method == httpm.POST:
s.servePostRoutes(w, r)
return
case path == "/device-details-click" && r.Method == httpm.POST:
s.serveDeviceDetailsClick(w, r)
return
case strings.HasPrefix(path, "/local/"):
s.proxyRequestToLocalAPI(w, r)
return
}
http.Error(w, "invalid endpoint", http.StatusNotFound)
}
type nodeData struct {
ID tailcfg.StableNodeID
Status string
@ -868,6 +1032,23 @@ func (s *Server) serveGetExitNodes(w http.ResponseWriter, r *http.Request) {
writeJSON(w, exitNodes)
}
// maskedPrefs is the subset of ipn.MaskedPrefs that are
// allowed to be editable via the web UI.
type maskedPrefs struct {
RunSSHSet bool
RunSSH bool
}
func (s *Server) serveUpdatePrefs(ctx context.Context, prefs maskedPrefs) error {
_, err := s.lc.EditPrefs(ctx, &ipn.MaskedPrefs{
RunSSHSet: prefs.RunSSHSet,
Prefs: ipn.Prefs{
RunSSH: prefs.RunSSH,
},
})
return err
}
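
For reference, a sketch of the request this handler ultimately serves; the frontend issues it as PATCH /api/local/v0/prefs. The IP placeholder and the exact JSON field casing are assumptions (Go's decoder matches field names case-insensitively), and the snippet is illustrative rather than part of the diff.

    nodeIPv4 := "100.64.0.1" // placeholder for the node's Tailscale IPv4 address
    body := strings.NewReader(`{"RunSSHSet": true, "RunSSH": true}`)
    req, _ := http.NewRequest(http.MethodPatch, "http://"+nodeIPv4+":5252/api/local/v0/prefs", body)
    // serveAPI rejects PATCH requests whose Content-Type is not application/json.
    req.Header.Set("Content-Type", "application/json")
    // A valid browser session cookie and the X-CSRF-Token header from a prior
    // GET are also required; both are omitted from this sketch.
    _ = req
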
type postRoutesRequest struct {
SetExitNode bool // when set, UseExitNode and AdvertiseExitNode values are applied
SetRoutes bool // when set, AdvertiseRoutes value is applied
@ -876,18 +1057,10 @@ type postRoutesRequest struct {
AdvertiseRoutes []string
}
func (s *Server) servePostRoutes(w http.ResponseWriter, r *http.Request) {
defer r.Body.Close()
var data postRoutesRequest
if err := json.NewDecoder(r.Body).Decode(&data); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
prefs, err := s.lc.GetPrefs(r.Context())
func (s *Server) servePostRoutes(ctx context.Context, data postRoutesRequest) error {
prefs, err := s.lc.GetPrefs(ctx)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
return err
}
var currNonExitRoutes []string
var currAdvertisingExitNode bool
@ -910,8 +1083,7 @@ func (s *Server) servePostRoutes(w http.ResponseWriter, r *http.Request) {
routesStr := strings.Join(data.AdvertiseRoutes, ",")
routes, err := netutil.CalcAdvertiseRoutes(routesStr, data.AdvertiseExitNode)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
return err
}
hasExitNodeRoute := func(all []netip.Prefix) bool {
@ -920,8 +1092,7 @@ func (s *Server) servePostRoutes(w http.ResponseWriter, r *http.Request) {
}
if !data.UseExitNode.IsZero() && hasExitNodeRoute(routes) {
http.Error(w, "cannot use and advertise exit node at same time", http.StatusBadRequest)
return
return errors.New("cannot use and advertise exit node at same time")
}
// Make prefs update.
@ -933,12 +1104,8 @@ func (s *Server) servePostRoutes(w http.ResponseWriter, r *http.Request) {
AdvertiseRoutes: routes,
},
}
if _, err := s.lc.EditPrefs(r.Context(), p); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
w.WriteHeader(http.StatusOK)
_, err = s.lc.EditPrefs(ctx, p)
return err
}
// tailscaleUp starts the daemon with the provided options.
@ -983,7 +1150,15 @@ func (s *Server) tailscaleUp(ctx context.Context, st *ipnstate.Status, opt tails
if !isRunning {
ipnOptions := ipn.Options{AuthKey: opt.AuthKey}
if opt.ControlURL != "" {
ipnOptions.UpdatePrefs = &ipn.Prefs{ControlURL: opt.ControlURL}
_, err := s.lc.EditPrefs(ctx, &ipn.MaskedPrefs{
Prefs: ipn.Prefs{
ControlURL: opt.ControlURL,
},
ControlURLSet: true,
})
if err != nil {
s.logf("edit prefs: %v", err)
}
}
if err := s.lc.Start(ctx, ipnOptions); err != nil {
s.logf("start: %v", err)
@ -1077,26 +1252,12 @@ func (s *Server) serveDeviceDetailsClick(w http.ResponseWriter, r *http.Request)
//
// The web API request path is expected to exactly match a localapi path,
// with prefix /api/local/ rather than /localapi/.
//
// If the localapi path is not included in localapiAllowlist,
// the request is rejected.
func (s *Server) proxyRequestToLocalAPI(w http.ResponseWriter, r *http.Request) {
path := strings.TrimPrefix(r.URL.Path, "/api/local")
if r.URL.Path == path { // missing prefix
http.Error(w, "invalid request", http.StatusBadRequest)
return
}
if r.Method == httpm.PATCH {
// enforce that PATCH requests are always application/json
if ct := r.Header.Get("Content-Type"); ct != "application/json" {
http.Error(w, "invalid request", http.StatusBadRequest)
return
}
}
if !slices.Contains(localapiAllowlist, path) {
http.Error(w, fmt.Sprintf("%s not allowed from localapi proxy", path), http.StatusForbidden)
return
}
localAPIURL := "http://" + apitype.LocalAPIHost + "/localapi" + path
req, err := http.NewRequestWithContext(r.Context(), r.Method, localAPIURL, r.Body)
@ -1121,21 +1282,6 @@ func (s *Server) proxyRequestToLocalAPI(w http.ResponseWriter, r *http.Request)
}
}
// localapiAllowlist is an allowlist of localapi endpoints the
// web client is allowed to proxy to the client's localapi.
//
// Rather than exposing all localapi endpoints over the proxy,
// this limits to just the ones actually used from the web
// client frontend.
var localapiAllowlist = []string{
"/v0/logout",
"/v0/prefs",
"/v0/update/check",
"/v0/update/install",
"/v0/update/progress",
"/v0/upload-client-metrics",
}
// csrfKey returns a key that can be used for CSRF protection.
// If an error occurs during key creation, the error is logged and the active process terminated.
// If the server is running in CGI mode, the key is cached to disk and reused between requests.

View File

@ -4,6 +4,7 @@
package web
import (
"bytes"
"context"
"encoding/json"
"errors"
@ -86,75 +87,172 @@ func TestQnapAuthnURL(t *testing.T) {
// TestServeAPI tests the web client api's handling of
// 1. invalid endpoint errors
// 2. localapi proxy allowlist
// 2. permissioning of api endpoints based on node capabilities
func TestServeAPI(t *testing.T) {
selfTags := views.SliceOf([]string{"tag:server"})
self := &ipnstate.PeerStatus{ID: "self", Tags: &selfTags}
prefs := &ipn.Prefs{}
remoteUser := &tailcfg.UserProfile{ID: tailcfg.UserID(1)}
remoteIPWithAllCapabilities := "100.100.100.101"
remoteIPWithNoCapabilities := "100.100.100.102"
lal := memnet.Listen("local-tailscaled.sock:80")
defer lal.Close()
// Serve dummy localapi. Just returns "success".
localapi := &http.Server{Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
fmt.Fprintf(w, "success")
})}
localapi := mockLocalAPI(t,
map[string]*apitype.WhoIsResponse{
remoteIPWithAllCapabilities: {
Node: &tailcfg.Node{StableID: "node1"},
UserProfile: remoteUser,
CapMap: tailcfg.PeerCapMap{tailcfg.PeerCapabilityWebUI: []tailcfg.RawMessage{"{\"canEdit\":[\"*\"]}"}},
},
remoteIPWithNoCapabilities: {
Node: &tailcfg.Node{StableID: "node2"},
UserProfile: remoteUser,
},
},
func() *ipnstate.PeerStatus { return self },
func() *ipn.Prefs { return prefs },
nil,
)
defer localapi.Close()
go localapi.Serve(lal)
s := &Server{lc: &tailscale.LocalClient{Dial: lal.Dial}}
s := &Server{
mode: ManageServerMode,
lc: &tailscale.LocalClient{Dial: lal.Dial},
timeNow: time.Now,
}
type requestTest struct {
remoteIP string
wantResponse string
wantStatus int
}
tests := []struct {
name string
reqMethod string
reqPath string
reqMethod string
reqContentType string
wantResp string
wantStatus int
reqBody string
tests []requestTest
}{{
name: "invalid_endpoint",
reqMethod: httpm.POST,
reqPath: "/not-an-endpoint",
wantResp: "invalid endpoint",
wantStatus: http.StatusNotFound,
reqPath: "/not-an-endpoint",
reqMethod: httpm.POST,
tests: []requestTest{{
remoteIP: remoteIPWithNoCapabilities,
wantResponse: "invalid endpoint",
wantStatus: http.StatusNotFound,
}, {
remoteIP: remoteIPWithAllCapabilities,
wantResponse: "invalid endpoint",
wantStatus: http.StatusNotFound,
}},
}, {
name: "not_in_localapi_allowlist",
reqMethod: httpm.POST,
reqPath: "/local/v0/not-allowlisted",
wantResp: "/v0/not-allowlisted not allowed from localapi proxy",
wantStatus: http.StatusForbidden,
reqPath: "/local/v0/not-an-endpoint",
reqMethod: httpm.POST,
tests: []requestTest{{
remoteIP: remoteIPWithNoCapabilities,
wantResponse: "invalid endpoint",
wantStatus: http.StatusNotFound,
}, {
remoteIP: remoteIPWithAllCapabilities,
wantResponse: "invalid endpoint",
wantStatus: http.StatusNotFound,
}},
}, {
name: "in_localapi_allowlist",
reqMethod: httpm.POST,
reqPath: "/local/v0/logout",
wantResp: "success", // Successfully allowed to hit localapi.
wantStatus: http.StatusOK,
reqPath: "/local/v0/logout",
reqMethod: httpm.POST,
tests: []requestTest{{
remoteIP: remoteIPWithNoCapabilities,
wantResponse: "not allowed", // requesting node has insufficient permissions
wantStatus: http.StatusUnauthorized,
}, {
remoteIP: remoteIPWithAllCapabilities,
wantResponse: "success", // requesting node has sufficient permissions
wantStatus: http.StatusOK,
}},
}, {
reqPath: "/exit-nodes",
reqMethod: httpm.GET,
tests: []requestTest{{
remoteIP: remoteIPWithNoCapabilities,
wantResponse: "null",
wantStatus: http.StatusOK, // allowed, no additional capabilities required
}, {
remoteIP: remoteIPWithAllCapabilities,
wantResponse: "null",
wantStatus: http.StatusOK,
}},
}, {
reqPath: "/routes",
reqMethod: httpm.POST,
reqBody: "{\"setExitNode\":true}",
tests: []requestTest{{
remoteIP: remoteIPWithNoCapabilities,
wantResponse: "not allowed",
wantStatus: http.StatusUnauthorized,
}, {
remoteIP: remoteIPWithAllCapabilities,
wantStatus: http.StatusOK,
}},
}, {
name: "patch_bad_contenttype",
reqMethod: httpm.PATCH,
reqPath: "/local/v0/prefs",
reqMethod: httpm.PATCH,
reqBody: "{\"runSSHSet\":true}",
reqContentType: "application/json",
tests: []requestTest{{
remoteIP: remoteIPWithNoCapabilities,
wantResponse: "not allowed",
wantStatus: http.StatusUnauthorized,
}, {
remoteIP: remoteIPWithAllCapabilities,
wantStatus: http.StatusOK,
}},
}, {
reqPath: "/local/v0/prefs",
reqMethod: httpm.PATCH,
reqContentType: "multipart/form-data",
wantResp: "invalid request",
wantStatus: http.StatusBadRequest,
tests: []requestTest{{
remoteIP: remoteIPWithNoCapabilities,
wantResponse: "invalid request",
wantStatus: http.StatusBadRequest,
}, {
remoteIP: remoteIPWithAllCapabilities,
wantResponse: "invalid request",
wantStatus: http.StatusBadRequest,
}},
}}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
r := httptest.NewRequest(tt.reqMethod, "/api"+tt.reqPath, nil)
if tt.reqContentType != "" {
r.Header.Add("Content-Type", tt.reqContentType)
}
w := httptest.NewRecorder()
for _, req := range tt.tests {
t.Run(req.remoteIP+"_requesting_"+tt.reqPath, func(t *testing.T) {
var reqBody io.Reader
if tt.reqBody != "" {
reqBody = bytes.NewBuffer([]byte(tt.reqBody))
}
r := httptest.NewRequest(tt.reqMethod, "/api"+tt.reqPath, reqBody)
r.RemoteAddr = req.remoteIP
if tt.reqContentType != "" {
r.Header.Add("Content-Type", tt.reqContentType)
}
w := httptest.NewRecorder()
s.serveAPI(w, r)
res := w.Result()
defer res.Body.Close()
if gotStatus := res.StatusCode; tt.wantStatus != gotStatus {
t.Errorf("wrong status; want=%v, got=%v", tt.wantStatus, gotStatus)
}
body, err := io.ReadAll(res.Body)
if err != nil {
t.Fatal(err)
}
gotResp := strings.TrimSuffix(string(body), "\n") // trim trailing newline
if tt.wantResp != gotResp {
t.Errorf("wrong response; want=%q, got=%q", tt.wantResp, gotResp)
}
})
s.serveAPI(w, r)
res := w.Result()
defer res.Body.Close()
if gotStatus := res.StatusCode; req.wantStatus != gotStatus {
t.Errorf("wrong status; want=%v, got=%v", req.wantStatus, gotStatus)
}
body, err := io.ReadAll(res.Body)
if err != nil {
t.Fatal(err)
}
gotResp := strings.TrimSuffix(string(body), "\n") // trim trailing newline
if req.wantResponse != gotResp {
t.Errorf("wrong response; want=%q, got=%q", req.wantResponse, gotResp)
}
})
}
}
}
@ -450,7 +548,7 @@ func TestServeAuth(t *testing.T) {
NodeName: remoteNode.Node.Name,
NodeIP: remoteIP,
ProfilePicURL: user.ProfilePicURL,
Capabilities: peerCapabilities{},
Capabilities: peerCapabilities{capFeatureAll: true},
}
testControlURL := &defaultControlURL
@ -524,7 +622,7 @@ func TestServeAuth(t *testing.T) {
name: "no-session",
path: "/api/auth",
wantStatus: http.StatusOK,
wantResp: &authResponse{AuthNeeded: tailscaleAuth, ViewerIdentity: vi, ServerMode: ManageServerMode},
wantResp: &authResponse{ViewerIdentity: vi, ServerMode: ManageServerMode},
wantNewCookie: false,
wantSession: nil,
},
@ -549,7 +647,7 @@ func TestServeAuth(t *testing.T) {
path: "/api/auth",
cookie: successCookie,
wantStatus: http.StatusOK,
wantResp: &authResponse{AuthNeeded: tailscaleAuth, ViewerIdentity: vi, ServerMode: ManageServerMode},
wantResp: &authResponse{ViewerIdentity: vi, ServerMode: ManageServerMode},
wantSession: &browserSession{
ID: successCookie,
SrcNode: remoteNode.Node.ID,
@ -597,7 +695,7 @@ func TestServeAuth(t *testing.T) {
path: "/api/auth",
cookie: successCookie,
wantStatus: http.StatusOK,
wantResp: &authResponse{CanManageNode: true, ViewerIdentity: vi, ServerMode: ManageServerMode},
wantResp: &authResponse{Authorized: true, ViewerIdentity: vi, ServerMode: ManageServerMode},
wantSession: &browserSession{
ID: successCookie,
SrcNode: remoteNode.Node.ID,
@ -1099,20 +1197,56 @@ func TestRequireTailscaleIP(t *testing.T) {
}
func TestPeerCapabilities(t *testing.T) {
userOwnedStatus := &ipnstate.Status{Self: &ipnstate.PeerStatus{UserID: tailcfg.UserID(1)}}
tags := views.SliceOf[string]([]string{"tag:server"})
tagOwnedStatus := &ipnstate.Status{Self: &ipnstate.PeerStatus{Tags: &tags}}
// Testing web.toPeerCapabilities
toPeerCapsTests := []struct {
name string
status *ipnstate.Status
whois *apitype.WhoIsResponse
wantCaps peerCapabilities
}{
{
name: "empty-whois",
status: userOwnedStatus,
whois: nil,
wantCaps: peerCapabilities{},
},
{
name: "no-webui-caps",
name: "user-owned-node-non-owner-caps-ignored",
status: userOwnedStatus,
whois: &apitype.WhoIsResponse{
UserProfile: &tailcfg.UserProfile{ID: tailcfg.UserID(2)},
Node: &tailcfg.Node{ID: tailcfg.NodeID(1)},
CapMap: tailcfg.PeerCapMap{
tailcfg.PeerCapabilityWebUI: []tailcfg.RawMessage{
"{\"canEdit\":[\"ssh\",\"subnets\"]}",
},
},
},
wantCaps: peerCapabilities{},
},
{
name: "user-owned-node-owner-caps-ignored",
status: userOwnedStatus,
whois: &apitype.WhoIsResponse{
UserProfile: &tailcfg.UserProfile{ID: tailcfg.UserID(1)},
Node: &tailcfg.Node{ID: tailcfg.NodeID(1)},
CapMap: tailcfg.PeerCapMap{
tailcfg.PeerCapabilityWebUI: []tailcfg.RawMessage{
"{\"canEdit\":[\"ssh\",\"subnets\"]}",
},
},
},
wantCaps: peerCapabilities{capFeatureAll: true}, // should just have wildcard
},
{
name: "tag-owned-no-webui-caps",
status: tagOwnedStatus,
whois: &apitype.WhoIsResponse{
Node: &tailcfg.Node{ID: tailcfg.NodeID(1)},
CapMap: tailcfg.PeerCapMap{
tailcfg.PeerCapabilityDebugPeer: []tailcfg.RawMessage{},
},
@ -1120,66 +1254,74 @@ func TestPeerCapabilities(t *testing.T) {
wantCaps: peerCapabilities{},
},
{
name: "one-webui-cap",
name: "tag-owned-one-webui-cap",
status: tagOwnedStatus,
whois: &apitype.WhoIsResponse{
Node: &tailcfg.Node{ID: tailcfg.NodeID(1)},
CapMap: tailcfg.PeerCapMap{
tailcfg.PeerCapabilityWebUI: []tailcfg.RawMessage{
"{\"canEdit\":[\"ssh\",\"subnet\"]}",
"{\"canEdit\":[\"ssh\",\"subnets\"]}",
},
},
},
wantCaps: peerCapabilities{
capFeatureSSH: true,
capFeatureSubnet: true,
capFeatureSSH: true,
capFeatureSubnets: true,
},
},
{
name: "multiple-webui-cap",
name: "tag-owned-multiple-webui-cap",
status: tagOwnedStatus,
whois: &apitype.WhoIsResponse{
Node: &tailcfg.Node{ID: tailcfg.NodeID(1)},
CapMap: tailcfg.PeerCapMap{
tailcfg.PeerCapabilityWebUI: []tailcfg.RawMessage{
"{\"canEdit\":[\"ssh\",\"subnet\"]}",
"{\"canEdit\":[\"subnet\",\"exitnode\",\"*\"]}",
"{\"canEdit\":[\"ssh\",\"subnets\"]}",
"{\"canEdit\":[\"subnets\",\"exitnodes\",\"*\"]}",
},
},
},
wantCaps: peerCapabilities{
capFeatureSSH: true,
capFeatureSubnet: true,
capFeatureExitNode: true,
capFeatureAll: true,
capFeatureSSH: true,
capFeatureSubnets: true,
capFeatureExitNodes: true,
capFeatureAll: true,
},
},
{
name: "case=insensitive-caps",
name: "tag-owned-case-insensitive-caps",
status: tagOwnedStatus,
whois: &apitype.WhoIsResponse{
Node: &tailcfg.Node{ID: tailcfg.NodeID(1)},
CapMap: tailcfg.PeerCapMap{
tailcfg.PeerCapabilityWebUI: []tailcfg.RawMessage{
"{\"canEdit\":[\"SSH\",\"sUBnet\"]}",
"{\"canEdit\":[\"SSH\",\"sUBnets\"]}",
},
},
},
wantCaps: peerCapabilities{
capFeatureSSH: true,
capFeatureSubnet: true,
capFeatureSSH: true,
capFeatureSubnets: true,
},
},
{
name: "random-canEdit-contents-dont-error",
name: "tag-owned-random-canEdit-contents-get-dropped",
status: tagOwnedStatus,
whois: &apitype.WhoIsResponse{
Node: &tailcfg.Node{ID: tailcfg.NodeID(1)},
CapMap: tailcfg.PeerCapMap{
tailcfg.PeerCapabilityWebUI: []tailcfg.RawMessage{
"{\"canEdit\":[\"unknown-feature\"]}",
},
},
},
wantCaps: peerCapabilities{
"unknown-feature": true,
},
wantCaps: peerCapabilities{},
},
{
name: "no-canEdit-section",
name: "tag-owned-no-canEdit-section",
status: tagOwnedStatus,
whois: &apitype.WhoIsResponse{
Node: &tailcfg.Node{ID: tailcfg.NodeID(1)},
CapMap: tailcfg.PeerCapMap{
tailcfg.PeerCapabilityWebUI: []tailcfg.RawMessage{
"{\"canDoSomething\":[\"*\"]}",
@ -1188,10 +1330,23 @@ func TestPeerCapabilities(t *testing.T) {
},
wantCaps: peerCapabilities{},
},
{
name: "tagged-source-caps-ignored",
status: tagOwnedStatus,
whois: &apitype.WhoIsResponse{
Node: &tailcfg.Node{ID: tailcfg.NodeID(1), Tags: tags.AsSlice()},
CapMap: tailcfg.PeerCapMap{
tailcfg.PeerCapabilityWebUI: []tailcfg.RawMessage{
"{\"canEdit\":[\"ssh\",\"subnets\"]}",
},
},
},
wantCaps: peerCapabilities{},
},
}
for _, tt := range toPeerCapsTests {
t.Run("toPeerCapabilities-"+tt.name, func(t *testing.T) {
got, err := toPeerCapabilities(tt.whois)
got, err := toPeerCapabilities(tt.status, tt.whois)
if err != nil {
t.Fatalf("unexpected: %v", err)
}
@ -1211,36 +1366,33 @@ func TestPeerCapabilities(t *testing.T) {
name: "empty-caps",
caps: nil,
wantCanEdit: map[capFeature]bool{
capFeatureAll: false,
capFeatureFunnel: false,
capFeatureSSH: false,
capFeatureSubnet: false,
capFeatureExitNode: false,
capFeatureAccount: false,
capFeatureAll: false,
capFeatureSSH: false,
capFeatureSubnets: false,
capFeatureExitNodes: false,
capFeatureAccount: false,
},
},
{
name: "some-caps",
caps: peerCapabilities{capFeatureSSH: true, capFeatureAccount: true},
wantCanEdit: map[capFeature]bool{
capFeatureAll: false,
capFeatureFunnel: false,
capFeatureSSH: true,
capFeatureSubnet: false,
capFeatureExitNode: false,
capFeatureAccount: true,
capFeatureAll: false,
capFeatureSSH: true,
capFeatureSubnets: false,
capFeatureExitNodes: false,
capFeatureAccount: true,
},
},
{
name: "wildcard-in-caps",
caps: peerCapabilities{capFeatureAll: true, capFeatureAccount: true},
wantCanEdit: map[capFeature]bool{
capFeatureAll: true,
capFeatureFunnel: true,
capFeatureSSH: true,
capFeatureSubnet: true,
capFeatureExitNode: true,
capFeatureAccount: true,
capFeatureAll: true,
capFeatureSSH: true,
capFeatureSubnets: true,
capFeatureExitNodes: true,
capFeatureAccount: true,
},
},
}
@ -1301,6 +1453,9 @@ func mockLocalAPI(t *testing.T, whoIs map[string]*apitype.WhoIsResponse, self fu
metricCapture(metricNames[0].Name)
writeJSON(w, struct{}{})
return
case "/localapi/v0/logout":
fmt.Fprintf(w, "success")
return
default:
t.Fatalf("unhandled localapi test endpoint %q, add to localapi handler func in test", r.URL.Path)
}

File diff suppressed because it is too large.

View File

@ -29,6 +29,7 @@ import (
"github.com/google/uuid"
"tailscale.com/clientupdate/distsign"
"tailscale.com/hostinfo"
"tailscale.com/types/logger"
"tailscale.com/util/cmpver"
"tailscale.com/util/winutil"
@ -162,11 +163,16 @@ func NewUpdater(args Arguments) (*Updater, error) {
type updateFunction func() error
func (up *Updater) getUpdateFunction() (fn updateFunction, canAutoUpdate bool) {
canAutoUpdate = !hostinfo.New().Container.EqualBool(true) // EqualBool(false) would return false if the value is not set.
switch runtime.GOOS {
case "windows":
return up.updateWindows, true
return up.updateWindows, canAutoUpdate
case "linux":
switch distro.Get() {
case distro.NixOS:
// NixOS packages are immutable and managed with a system-wide
// configuration.
return up.updateNixos, false
case distro.Synology:
// Synology updates use our own pkgs.tailscale.com instead of the
// Synology Package Center. We should eventually get to a regular
@ -174,20 +180,20 @@ func (up *Updater) getUpdateFunction() (fn updateFunction, canAutoUpdate bool) {
// auto-update mechanism.
return up.updateSynology, false
case distro.Debian: // includes Ubuntu
return up.updateDebLike, true
return up.updateDebLike, canAutoUpdate
case distro.Arch:
if up.archPackageInstalled() {
// Arch update func just prints a message about how to update,
// it doesn't support auto-updates.
return up.updateArchLike, false
}
return up.updateLinuxBinary, true
return up.updateLinuxBinary, canAutoUpdate
case distro.Alpine:
return up.updateAlpineLike, true
return up.updateAlpineLike, canAutoUpdate
case distro.Unraid:
return up.updateUnraid, true
return up.updateUnraid, canAutoUpdate
case distro.QNAP:
return up.updateQNAP, true
return up.updateQNAP, canAutoUpdate
}
switch {
case haveExecutable("pacman"):
@ -196,21 +202,21 @@ func (up *Updater) getUpdateFunction() (fn updateFunction, canAutoUpdate bool) {
// it doesn't support auto-updates.
return up.updateArchLike, false
}
return up.updateLinuxBinary, true
return up.updateLinuxBinary, canAutoUpdate
case haveExecutable("apt-get"): // TODO(awly): add support for "apt"
// The distro.Debian switch case above should catch most apt-based
// systems, but add this fallback just in case.
return up.updateDebLike, true
return up.updateDebLike, canAutoUpdate
case haveExecutable("dnf"):
return up.updateFedoraLike("dnf"), true
return up.updateFedoraLike("dnf"), canAutoUpdate
case haveExecutable("yum"):
return up.updateFedoraLike("yum"), true
return up.updateFedoraLike("yum"), canAutoUpdate
case haveExecutable("apk"):
return up.updateAlpineLike, true
return up.updateAlpineLike, canAutoUpdate
}
// If nothing matched, fall back to tarball updates.
if up.Update == nil {
return up.updateLinuxBinary, true
return up.updateLinuxBinary, canAutoUpdate
}
case "darwin":
switch {
@ -226,7 +232,7 @@ func (up *Updater) getUpdateFunction() (fn updateFunction, canAutoUpdate bool) {
return nil, false
}
case "freebsd":
return up.updateFreeBSD, true
return up.updateFreeBSD, canAutoUpdate
}
return nil, false
}
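For context on the canAutoUpdate guard above: Hostinfo.Container is an opt.Bool, so an unset value compares equal to neither true nor false. A minimal illustrative sketch of that three-state behavior (assuming only opt.Bool's Set and EqualBool helpers, as used above):

package main

import (
	"fmt"

	"tailscale.com/types/opt"
)

func main() {
	var unset opt.Bool // empty: neither true nor false
	fmt.Println(unset.EqualBool(true), unset.EqualBool(false)) // false false

	var inContainer opt.Bool
	inContainer.Set(true)
	// Mirrors the guard above: auto-updates are ruled out only when the
	// container hint is affirmatively true.
	canAutoUpdate := !inContainer.EqualBool(true)
	fmt.Println(canAutoUpdate) // false
}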
@ -432,7 +438,7 @@ func (up *Updater) updateDebLike() error {
return fmt.Errorf("apt-get update failed: %w; output:\n%s", err, out)
}
for i := 0; i < 2; i++ {
for range 2 {
out, err := exec.Command("apt-get", "install", "--yes", "--allow-downgrades", "tailscale="+ver).CombinedOutput()
if err != nil {
if !bytes.Contains(out, []byte(`dpkg was interrupted`)) {
@ -522,6 +528,13 @@ func (up *Updater) updateArchLike() error {
you can use "pacman --sync --refresh --sysupgrade" or "pacman -Syu" to upgrade the system, including Tailscale.`)
}
func (up *Updater) updateNixos() error {
// NixOS package updates are managed on a system level and not individually.
// Direct users to update their nix channel or nixpkgs flake input to
// receive the latest version.
return errors.New(`individual package updates are not supported on NixOS installations. Update your system channel or flake inputs to get the latest Tailscale version from nixpkgs.`)
}
const yumRepoConfigFile = "/etc/yum.repos.d/tailscale.repo"
// updateFedoraLike updates tailscale on any distros in the Fedora family,
@ -654,6 +667,7 @@ func (up *Updater) updateAlpineLike() (err error) {
func parseAlpinePackageVersion(out []byte) (string, error) {
s := bufio.NewScanner(bytes.NewReader(out))
var maxVer string
for s.Scan() {
// The line should look like this:
// tailscale-1.44.2-r0 description:
@ -665,7 +679,13 @@ func parseAlpinePackageVersion(out []byte) (string, error) {
if len(parts) < 3 {
return "", fmt.Errorf("malformed info line: %q", line)
}
return parts[1], nil
ver := parts[1]
if cmpver.Compare(ver, maxVer) == 1 {
maxVer = ver
}
}
if maxVer != "" {
return maxVer, nil
}
return "", errors.New("tailscale version not found in output")
}
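As an aside on the max-version selection above, a small illustrative sketch of the numeric-aware ordering that tailscale.com/util/cmpver provides (assuming its -1/0/+1 return convention):

package main

import (
	"fmt"

	"tailscale.com/util/cmpver"
)

func main() {
	// Numeric runs are compared as numbers, not lexicographically,
	// which is what makes picking the highest package version reliable.
	fmt.Println(cmpver.Compare("1.58.2", "1.54.1")) // 1
	fmt.Println(cmpver.Compare("1.9.0", "1.10.0"))  // -1 (a plain string compare would say 1)
	fmt.Println(cmpver.Compare("1.44.2", "1.44.2")) // 0
}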
@ -818,7 +838,7 @@ func (up *Updater) switchOutputToFile() (io.Closer, error) {
func (up *Updater) installMSI(msi string) error {
var err error
for tries := 0; tries < 2; tries++ {
cmd := exec.Command("msiexec.exe", "/i", filepath.Base(msi), "/quiet", "/promptrestart", "/qn")
cmd := exec.Command("msiexec.exe", "/i", filepath.Base(msi), "/quiet", "/norestart", "/qn")
cmd.Dir = filepath.Dir(msi)
cmd.Stdout = up.Stdout
cmd.Stderr = up.Stderr
@ -999,6 +1019,20 @@ func (up *Updater) updateLinuxBinary() error {
return nil
}
func restartSystemdUnit(ctx context.Context) error {
if _, err := exec.LookPath("systemctl"); err != nil {
// Likely not a systemd-managed distro.
return errors.ErrUnsupported
}
if out, err := exec.Command("systemctl", "daemon-reload").CombinedOutput(); err != nil {
return fmt.Errorf("systemctl daemon-reload failed: %w\noutput: %s", err, out)
}
if out, err := exec.Command("systemctl", "restart", "tailscaled.service").CombinedOutput(); err != nil {
return fmt.Errorf("systemctl restart failed: %w\noutput: %s", err, out)
}
return nil
}
func (up *Updater) downloadLinuxTarball(ver string) (string, error) {
dlDir, err := os.UserCacheDir()
if err != nil {
@ -1277,10 +1311,23 @@ func LatestTailscaleVersion(track string) (string, error) {
if err != nil {
return "", err
}
if latest.Version == "" {
return "", fmt.Errorf("no latest version found for %q track", track)
ver := latest.Version
switch runtime.GOOS {
case "windows":
ver = latest.MSIsVersion
case "darwin":
ver = latest.MacZipsVersion
case "linux":
ver = latest.TarballsVersion
if distro.Get() == distro.Synology {
ver = latest.SPKsVersion
}
}
return latest.Version, nil
if ver == "" {
return "", fmt.Errorf("no latest version found for OS %q on %q track", runtime.GOOS, track)
}
return ver, nil
}
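A hedged usage sketch of the per-OS selection above (it assumes the package's exported StableTrack constant; which field is returned depends on runtime.GOOS as in the switch above):

package main

import (
	"fmt"
	"log"

	"tailscale.com/clientupdate"
)

func main() {
	// Resolves to the MSI version on Windows, the macOS zip version on
	// darwin, and the tarball (or Synology SPK) version on Linux.
	ver, err := clientupdate.LatestTailscaleVersion(clientupdate.StableTrack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("latest stable:", ver)
}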
type trackPackages struct {

View File

@ -251,6 +251,29 @@ tailscale installed size:
out: "",
wantErr: true,
},
{
desc: "multiple versions",
out: `
tailscale-1.54.1-r0 description:
The easiest, most secure way to use WireGuard and 2FA
tailscale-1.54.1-r0 webpage:
https://tailscale.com/
tailscale-1.54.1-r0 installed size:
34 MiB
tailscale-1.58.2-r0 description:
The easiest, most secure way to use WireGuard and 2FA
tailscale-1.58.2-r0 webpage:
https://tailscale.com/
tailscale-1.58.2-r0 installed size:
35 MiB
`,
want: "1.58.2",
},
}
for _, tt := range tests {
@ -640,7 +663,7 @@ func genTarball(t *testing.T, path string, files map[string]string) {
func TestWriteFileOverwrite(t *testing.T) {
path := filepath.Join(t.TempDir(), "test")
for i := 0; i < 2; i++ {
for i := range 2 {
content := fmt.Sprintf("content %d", i)
if err := writeFile(strings.NewReader(content), path, 0600); err != nil {
t.Fatal(err)

View File

@ -445,7 +445,7 @@ type testServer struct {
func newTestServer(t *testing.T) *testServer {
var roots []rootKeyPair
for i := 0; i < 3; i++ {
for range 3 {
roots = append(roots, newRootKeyPair(t))
}

View File

@ -1,37 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package clientupdate
import (
"context"
"errors"
"fmt"
"github.com/coreos/go-systemd/v22/dbus"
)
func restartSystemdUnit(ctx context.Context) error {
c, err := dbus.NewWithContext(ctx)
if err != nil {
// Likely not a systemd-managed distro.
return errors.ErrUnsupported
}
defer c.Close()
if err := c.ReloadContext(ctx); err != nil {
return fmt.Errorf("failed to reload tailsacled.service: %w", err)
}
ch := make(chan string, 1)
if _, err := c.RestartUnitContext(ctx, "tailscaled.service", "replace", ch); err != nil {
return fmt.Errorf("failed to restart tailsacled.service: %w", err)
}
select {
case res := <-ch:
if res != "done" {
return fmt.Errorf("systemd service restart failed with result %q", res)
}
case <-ctx.Done():
return ctx.Err()
}
return nil
}

View File

@ -1,15 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !linux
package clientupdate
import (
"context"
"errors"
)
func restartSystemdUnit(ctx context.Context) error {
return errors.ErrUnsupported
}

View File

@ -102,7 +102,7 @@ func gen(buf *bytes.Buffer, it *codegen.ImportTracker, typ *types.Named) {
writef("}")
writef("dst := new(%s)", name)
writef("*dst = *src")
for i := 0; i < t.NumFields(); i++ {
for i := range t.NumFields() {
fname := t.Field(i).Name()
ft := t.Field(i).Type()
if !codegen.ContainsPointers(ft) || codegen.HasNoClone(t.Tag(i)) {

View File

@ -8,6 +8,7 @@ package main
import (
"context"
"encoding/json"
"errors"
"fmt"
"log"
"net/http"
@ -18,20 +19,6 @@ import (
"tailscale.com/tailcfg"
)
// findKeyInKubeSecret inspects the kube secret secretName for a data
// field called "authkey", and returns its value if present.
func findKeyInKubeSecret(ctx context.Context, secretName string) (string, error) {
s, err := kc.GetSecret(ctx, secretName)
if err != nil {
return "", err
}
ak, ok := s.Data["authkey"]
if !ok {
return "", nil
}
return string(ak), nil
}
// storeDeviceInfo writes deviceID into the "device_id" data field of the kube
// secret secretName.
func storeDeviceInfo(ctx context.Context, secretName string, deviceID tailcfg.StableNodeID, fqdn string, addresses []netip.Prefix) error {
@ -88,9 +75,59 @@ func deleteAuthKey(ctx context.Context, secretName string) error {
return nil
}
var kc *kube.Client
var kc kube.Client
func initKube(root string) {
// setupKube is responsible for doing any necessary configuration and checks to
// ensure that tailscale state storage and authentication mechanism will work on
// Kubernetes.
func (cfg *settings) setupKube(ctx context.Context) error {
if cfg.KubeSecret == "" {
return nil
}
canPatch, canCreate, err := kc.CheckSecretPermissions(ctx, cfg.KubeSecret)
if err != nil {
return fmt.Errorf("Some Kubernetes permissions are missing, please check your RBAC configuration: %v", err)
}
cfg.KubernetesCanPatch = canPatch
s, err := kc.GetSecret(ctx, cfg.KubeSecret)
if err != nil && kube.IsNotFoundErr(err) && !canCreate {
return fmt.Errorf("Tailscale state Secret %s does not exist and we don't have permissions to create it. "+
"If you intend to store tailscale state elsewhere than a Kubernetes Secret, "+
"you can explicitly set TS_KUBE_SECRET env var to an empty string. "+
"Else ensure that RBAC is set up that allows the service account associated with this installation to create Secrets.", cfg.KubeSecret)
} else if err != nil && !kube.IsNotFoundErr(err) {
return fmt.Errorf("Getting Tailscale state Secret %s: %v", cfg.KubeSecret, err)
}
if cfg.AuthKey == "" && !isOneStepConfig(cfg) {
if s == nil {
log.Print("TS_AUTHKEY not provided and kube secret does not exist, login will be interactive if needed.")
return nil
}
keyBytes, _ := s.Data["authkey"]
key := string(keyBytes)
if key != "" {
// This behavior of pulling authkeys from kube secrets was added
// at the same time as the patch permission, so we can enforce
// that we must be able to patch out the authkey after
// authenticating if you want to use this feature. This avoids
// us having to deal with the case where we might leave behind
// an unnecessary reusable authkey in a secret, like a rake in
// the grass.
if !cfg.KubernetesCanPatch {
return errors.New("authkey found in TS_KUBE_SECRET, but the pod doesn't have patch permissions on the secret to manage the authkey.")
}
cfg.AuthKey = key
} else {
log.Print("No authkey found in kube secret and TS_AUTHKEY not provided, login will be interactive if needed.")
}
}
return nil
}
func initKubeClient(root string) {
if root != "/" {
// If we are running in a test, we need to set the root path to the fake
// service account directory.

View File

@ -0,0 +1,206 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build linux
package main
import (
"context"
"errors"
"testing"
"github.com/google/go-cmp/cmp"
"tailscale.com/kube"
)
func TestSetupKube(t *testing.T) {
tests := []struct {
name string
cfg *settings
wantErr bool
wantCfg *settings
kc kube.Client
}{
{
name: "TS_AUTHKEY set, state Secret exists",
cfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return nil, nil
},
},
wantCfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
},
{
name: "TS_AUTHKEY set, state Secret does not exist, we have permissions to create it",
cfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, true, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return nil, &kube.Status{Code: 404}
},
},
wantCfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
},
{
name: "TS_AUTHKEY set, state Secret does not exist, we do not have permissions to create it",
cfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return nil, &kube.Status{Code: 404}
},
},
wantCfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
wantErr: true,
},
{
name: "TS_AUTHKEY set, we encounter a non-404 error when trying to retrieve the state Secret",
cfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return nil, &kube.Status{Code: 403}
},
},
wantCfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
wantErr: true,
},
{
name: "TS_AUTHKEY set, we encounter a non-404 error when trying to check Secret permissions",
cfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
wantCfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, errors.New("broken")
},
},
wantErr: true,
},
{
// Interactive login using URL in Pod logs
name: "TS_AUTHKEY not set, state Secret does not exist, we have permissions to create it",
cfg: &settings{
KubeSecret: "foo",
},
wantCfg: &settings{
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, true, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return nil, &kube.Status{Code: 404}
},
},
},
{
// Interactive login using URL in Pod logs
name: "TS_AUTHKEY not set, state Secret exists, but does not contain auth key",
cfg: &settings{
KubeSecret: "foo",
},
wantCfg: &settings{
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return &kube.Secret{}, nil
},
},
},
{
name: "TS_AUTHKEY not set, state Secret contains auth key, we do not have RBAC to patch it",
cfg: &settings{
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return &kube.Secret{Data: map[string][]byte{"authkey": []byte("foo")}}, nil
},
},
wantCfg: &settings{
KubeSecret: "foo",
},
wantErr: true,
},
{
name: "TS_AUTHKEY not set, state Secret contains auth key, we have RBAC to patch it",
cfg: &settings{
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return true, false, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return &kube.Secret{Data: map[string][]byte{"authkey": []byte("foo")}}, nil
},
},
wantCfg: &settings{
KubeSecret: "foo",
AuthKey: "foo",
KubernetesCanPatch: true,
},
},
}
for _, tt := range tests {
kc = tt.kc
t.Run(tt.name, func(t *testing.T) {
if err := tt.cfg.setupKube(context.Background()); (err != nil) != tt.wantErr {
t.Errorf("settings.setupKube() error = %v, wantErr %v", err, tt.wantErr)
}
if diff := cmp.Diff(*tt.cfg, *tt.wantCfg); diff != "" {
t.Errorf("unexpected contents of settings after running settings.setupKube()\n(-got +want):\n%s", diff)
}
})
}
}

View File

@ -18,7 +18,11 @@
// previously advertised routes. To accept routes, use TS_EXTRA_ARGS to pass
// in --accept-routes.
// - TS_DEST_IP: proxy all incoming Tailscale traffic to the given
// destination.
// destination defined by an IP address.
// - TS_EXPERIMENTAL_DEST_DNS_NAME: proxy all incoming Tailscale traffic to the given
// destination defined by a DNS name. The DNS name will be periodically resolved and firewall rules updated accordingly.
// This is currently intended to be used by the Kubernetes operator (ExternalName Services).
// This is an experimental env var and will likely change in the future.
// - TS_TAILNET_TARGET_IP: proxy all incoming non-Tailscale traffic to the given
// destination defined by an IP.
// - TS_TAILNET_TARGET_FQDN: proxy all incoming non-Tailscale traffic to the given
@ -48,13 +52,24 @@
// ${TS_CERT_DOMAIN}, it will be replaced with the value of the available FQDN.
// It cannot be used in conjunction with TS_DEST_IP. The file is watched for changes,
// and will be re-applied when it changes.
// - EXPERIMENTAL_TS_CONFIGFILE_PATH: if specified, a path to tailscaled
// config. If this is set, TS_HOSTNAME, TS_EXTRA_ARGS, TS_AUTHKEY,
// - TS_EXPERIMENTAL_VERSIONED_CONFIG_DIR: if specified, a path to a
// directory that contains a tailscaled config file. The config file needs to be
// named cap-<current-tailscaled-cap>.hujson. If this is set, TS_HOSTNAME,
// TS_EXTRA_ARGS, TS_AUTHKEY,
// TS_ROUTES, TS_ACCEPT_DNS env vars must not be set. If this is set,
// containerboot only runs `tailscaled --config <path-to-this-configfile>`
// and not `tailscale up` or `tailscale set`.
// The config file contents are currently read once on container start.
// NB: This env var is currently experimental and the logic will likely change!
// - EXPERIMENTAL_ALLOW_PROXYING_CLUSTER_TRAFFIC_VIA_INGRESS: if set to true
// and if this containerboot instance is an L7 ingress proxy (created by
// the Kubernetes operator), set up rules to allow proxying cluster traffic,
// received on the Pod IP of this node, to the ingress target in the cluster.
// This, in conjunction with MagicDNS name resolution in the cluster, can be
// useful for cases where a cluster workload needs to access a target in the
// cluster using the same hostname (in this case, the MagicDNS name of the
// ingress proxy) as a non-cluster workload on the tailnet.
// This is only meant to be configured by the Kubernetes operator.
//
// When running on Kubernetes, containerboot defaults to storing state in the
// "tailscale" kube secret. To store state on local disk instead, set
@ -73,12 +88,16 @@ import (
"fmt"
"io/fs"
"log"
"math"
"net"
"net/netip"
"os"
"os/exec"
"os/signal"
"path"
"path/filepath"
"reflect"
"slices"
"strconv"
"strings"
"sync"
@ -91,6 +110,7 @@ import (
"tailscale.com/client/tailscale"
"tailscale.com/ipn"
"tailscale.com/ipn/conffile"
kubeutils "tailscale.com/k8s-operator"
"tailscale.com/tailcfg"
"tailscale.com/types/logger"
"tailscale.com/types/ptr"
@ -108,29 +128,32 @@ func newNetfilterRunner(logf logger.Logf) (linuxfw.NetfilterRunner, error) {
func main() {
log.SetPrefix("boot: ")
tailscale.I_Acknowledge_This_API_Is_Unstable = true
cfg := &settings{
AuthKey: defaultEnvs([]string{"TS_AUTHKEY", "TS_AUTH_KEY"}, ""),
Hostname: defaultEnv("TS_HOSTNAME", ""),
Routes: defaultEnvStringPointer("TS_ROUTES"),
ServeConfigPath: defaultEnv("TS_SERVE_CONFIG", ""),
ProxyTo: defaultEnv("TS_DEST_IP", ""),
TailnetTargetIP: defaultEnv("TS_TAILNET_TARGET_IP", ""),
TailnetTargetFQDN: defaultEnv("TS_TAILNET_TARGET_FQDN", ""),
DaemonExtraArgs: defaultEnv("TS_TAILSCALED_EXTRA_ARGS", ""),
ExtraArgs: defaultEnv("TS_EXTRA_ARGS", ""),
InKubernetes: os.Getenv("KUBERNETES_SERVICE_HOST") != "",
UserspaceMode: defaultBool("TS_USERSPACE", true),
StateDir: defaultEnv("TS_STATE_DIR", ""),
AcceptDNS: defaultEnvBoolPointer("TS_ACCEPT_DNS"),
KubeSecret: defaultEnv("TS_KUBE_SECRET", "tailscale"),
SOCKSProxyAddr: defaultEnv("TS_SOCKS5_SERVER", ""),
HTTPProxyAddr: defaultEnv("TS_OUTBOUND_HTTP_PROXY_LISTEN", ""),
Socket: defaultEnv("TS_SOCKET", "/tmp/tailscaled.sock"),
AuthOnce: defaultBool("TS_AUTH_ONCE", false),
Root: defaultEnv("TS_TEST_ONLY_ROOT", "/"),
TailscaledConfigFilePath: defaultEnv("EXPERIMENTAL_TS_CONFIGFILE_PATH", ""),
AuthKey: defaultEnvs([]string{"TS_AUTHKEY", "TS_AUTH_KEY"}, ""),
Hostname: defaultEnv("TS_HOSTNAME", ""),
Routes: defaultEnvStringPointer("TS_ROUTES"),
ServeConfigPath: defaultEnv("TS_SERVE_CONFIG", ""),
ProxyTargetIP: defaultEnv("TS_DEST_IP", ""),
ProxyTargetDNSName: defaultEnv("TS_EXPERIMENTAL_DEST_DNS_NAME", ""),
TailnetTargetIP: defaultEnv("TS_TAILNET_TARGET_IP", ""),
TailnetTargetFQDN: defaultEnv("TS_TAILNET_TARGET_FQDN", ""),
DaemonExtraArgs: defaultEnv("TS_TAILSCALED_EXTRA_ARGS", ""),
ExtraArgs: defaultEnv("TS_EXTRA_ARGS", ""),
InKubernetes: os.Getenv("KUBERNETES_SERVICE_HOST") != "",
UserspaceMode: defaultBool("TS_USERSPACE", true),
StateDir: defaultEnv("TS_STATE_DIR", ""),
AcceptDNS: defaultEnvBoolPointer("TS_ACCEPT_DNS"),
KubeSecret: defaultEnv("TS_KUBE_SECRET", "tailscale"),
SOCKSProxyAddr: defaultEnv("TS_SOCKS5_SERVER", ""),
HTTPProxyAddr: defaultEnv("TS_OUTBOUND_HTTP_PROXY_LISTEN", ""),
Socket: defaultEnv("TS_SOCKET", "/tmp/tailscaled.sock"),
AuthOnce: defaultBool("TS_AUTH_ONCE", false),
Root: defaultEnv("TS_TEST_ONLY_ROOT", "/"),
TailscaledConfigFilePath: tailscaledConfigFilePath(),
AllowProxyingClusterTrafficViaIngress: defaultBool("EXPERIMENTAL_ALLOW_PROXYING_CLUSTER_TRAFFIC_VIA_INGRESS", false),
PodIP: defaultEnv("POD_IP", ""),
}
if err := cfg.validate(); err != nil {
log.Fatalf("invalid configuration: %v", err)
}
@ -139,8 +162,8 @@ func main() {
if err := ensureTunFile(cfg.Root); err != nil {
log.Fatalf("Unable to create tuntap device file: %v", err)
}
if cfg.ProxyTo != "" || cfg.Routes != nil || cfg.TailnetTargetIP != "" || cfg.TailnetTargetFQDN != "" {
if err := ensureIPForwarding(cfg.Root, cfg.ProxyTo, cfg.TailnetTargetIP, cfg.TailnetTargetFQDN, cfg.Routes); err != nil {
if cfg.ProxyTargetIP != "" || cfg.ProxyTargetDNSName != "" || cfg.Routes != nil || cfg.TailnetTargetIP != "" || cfg.TailnetTargetFQDN != "" {
if err := ensureIPForwarding(cfg.Root, cfg.ProxyTargetIP, cfg.TailnetTargetIP, cfg.TailnetTargetFQDN, cfg.Routes); err != nil {
log.Printf("Failed to enable IP forwarding: %v", err)
log.Printf("To run tailscale as a proxy or router container, IP forwarding must be enabled.")
if cfg.InKubernetes {
@ -152,44 +175,16 @@ func main() {
}
}
if cfg.InKubernetes {
initKube(cfg.Root)
}
// Context is used for all setup stuff until we're in steady
// state, so that if something is hanging we eventually time out
// and crashloop the container.
bootCtx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
defer cancel()
if cfg.InKubernetes && cfg.KubeSecret != "" {
canPatch, err := kc.CheckSecretPermissions(bootCtx, cfg.KubeSecret)
if err != nil {
log.Fatalf("Some Kubernetes permissions are missing, please check your RBAC configuration: %v", err)
}
cfg.KubernetesCanPatch = canPatch
if cfg.AuthKey == "" && !isOneStepConfig(cfg) {
key, err := findKeyInKubeSecret(bootCtx, cfg.KubeSecret)
if err != nil {
log.Fatalf("Getting authkey from kube secret: %v", err)
}
if key != "" {
// This behavior of pulling authkeys from kube secrets was added
// at the same time as the patch permission, so we can enforce
// that we must be able to patch out the authkey after
// authenticating if you want to use this feature. This avoids
// us having to deal with the case where we might leave behind
// an unnecessary reusable authkey in a secret, like a rake in
// the grass.
if !cfg.KubernetesCanPatch {
log.Fatalf("authkey found in TS_KUBE_SECRET, but the pod doesn't have patch permissions on the secret to manage the authkey.")
}
log.Print("Using authkey found in kube secret")
cfg.AuthKey = key
} else {
log.Print("No authkey found in kube secret and TS_AUTHKEY not provided, login will be interactive if needed.")
}
if cfg.InKubernetes {
initKubeClient(cfg.Root)
if err := cfg.setupKube(bootCtx); err != nil {
log.Fatalf("error setting up for running on Kubernetes: %v", err)
}
}
@ -330,7 +325,7 @@ authLoop:
}
var (
wantProxy = cfg.ProxyTo != "" || cfg.TailnetTargetIP != "" || cfg.TailnetTargetFQDN != ""
wantProxy = cfg.ProxyTargetIP != "" || cfg.ProxyTargetDNSName != "" || cfg.TailnetTargetIP != "" || cfg.TailnetTargetFQDN != "" || cfg.AllowProxyingClusterTrafficViaIngress
wantDeviceInfo = cfg.InKubernetes && cfg.KubeSecret != "" && cfg.KubernetesCanPatch
startupTasksDone = false
currentIPs deephash.Sum // tailscale IPs assigned to device
@ -338,6 +333,9 @@ authLoop:
currentEgressIPs deephash.Sum
addrs []netip.Prefix
backendAddrs []net.IP
certDomain = new(atomic.Pointer[string])
certDomainChanged = make(chan bool, 1)
)
@ -351,6 +349,44 @@ authLoop:
log.Fatalf("error creating new netfilter runner: %v", err)
}
}
// Setup for proxies that are configured to proxy to a target specified
// by a DNS name (TS_EXPERIMENTAL_DEST_DNS_NAME).
const defaultCheckPeriod = time.Minute * 10 // how often to check what IPs the DNS name resolves to
var (
tc = make(chan string, 1)
failedResolveAttempts int
t *time.Timer = time.AfterFunc(defaultCheckPeriod, func() {
if cfg.ProxyTargetDNSName != "" {
tc <- "recheck"
}
})
)
defer t.Stop()
// resetTimer resets the timer for when to next attempt to resolve the
// DNS name for the proxy configured with TS_EXPERIMENTAL_DEST_DNS_NAME.
// The timer gets reset to 10 minutes from now unless the last resolution
// attempt failed. If one or more consecutive previous resolution
// attempts failed, the next attempt happens after the smaller of
// 10 minutes and 2^number-of-consecutive-failed-attempts seconds,
// i.e. 1s, 2s, 4s, ... capped at 10 minutes.
resetTimer := func(lastResolveFailed bool) {
if !lastResolveFailed {
log.Printf("reconfigureTimer: next DNS resolution attempt in %s", defaultCheckPeriod)
t.Reset(defaultCheckPeriod)
failedResolveAttempts = 0
return
}
minDelay := 2 // 2 seconds
nextTick := time.Second * time.Duration(math.Pow(float64(minDelay), float64(failedResolveAttempts)))
if nextTick > defaultCheckPeriod {
nextTick = defaultCheckPeriod // cap at 10 minutes
}
log.Printf("reconfigureTimer: last DNS resolution attempt failed, next DNS resolution attempt in %v", nextTick)
t.Reset(nextTick)
failedResolveAttempts++
}
notifyChan := make(chan ipn.Notify)
errChan := make(chan error)
go func() {
@ -365,6 +401,7 @@ authLoop:
}
}()
var wg sync.WaitGroup
runLoop:
for {
select {
@ -387,7 +424,7 @@ runLoop:
log.Fatalf("tailscaled left running state (now in state %q), exiting", *n.State)
}
if n.NetMap != nil {
addrs := n.NetMap.SelfNode.Addresses().AsSlice()
addrs = n.NetMap.SelfNode.Addresses().AsSlice()
newCurrentIPs := deephash.Hash(&addrs)
ipsHaveChanged := newCurrentIPs != currentIPs
@ -413,7 +450,7 @@ runLoop:
egressAddrs = node.Addresses().AsSlice()
newCurentEgressIPs = deephash.Hash(&egressAddrs)
egressIPsHaveChanged = newCurentEgressIPs != currentEgressIPs
if egressIPsHaveChanged && len(egressAddrs) > 0 {
if egressIPsHaveChanged && len(egressAddrs) != 0 {
for _, egressAddr := range egressAddrs {
ea := egressAddr.Addr()
// TODO (irbekrm): make it work for IPv6 too.
@ -429,13 +466,32 @@ runLoop:
}
currentEgressIPs = newCurentEgressIPs
}
if cfg.ProxyTo != "" && len(addrs) > 0 && ipsHaveChanged {
if cfg.ProxyTargetIP != "" && len(addrs) != 0 && ipsHaveChanged {
log.Printf("Installing proxy rules")
if err := installIngressForwardingRule(ctx, cfg.ProxyTo, addrs, nfr); err != nil {
if err := installIngressForwardingRule(ctx, cfg.ProxyTargetIP, addrs, nfr); err != nil {
log.Fatalf("installing ingress proxy rules: %v", err)
}
}
if cfg.ServeConfigPath != "" && len(n.NetMap.DNS.CertDomains) > 0 {
if cfg.ProxyTargetDNSName != "" && len(addrs) != 0 && ipsHaveChanged {
newBackendAddrs, err := resolveDNS(ctx, cfg.ProxyTargetDNSName)
if err != nil {
log.Printf("[unexpected] error resolving DNS name %s: %v", cfg.ProxyTargetDNSName, err)
resetTimer(true)
continue
}
backendsHaveChanged := !(slices.EqualFunc(backendAddrs, newBackendAddrs, func(ip1 net.IP, ip2 net.IP) bool {
return slices.ContainsFunc(newBackendAddrs, func(ip net.IP) bool { return ip.Equal(ip1) })
}))
if backendsHaveChanged {
log.Printf("installing ingress proxy rules for backends %v", newBackendAddrs)
if err := installIngressForwardingRuleForDNSTarget(ctx, newBackendAddrs, addrs, nfr); err != nil {
log.Fatalf("error installing ingress proxy rules: %v", err)
}
}
resetTimer(false)
backendAddrs = newBackendAddrs
}
if cfg.ServeConfigPath != "" && len(n.NetMap.DNS.CertDomains) != 0 {
cd := n.NetMap.DNS.CertDomains[0]
prev := certDomain.Swap(ptr.To(cd))
if prev == nil || *prev != cd {
@ -445,12 +501,24 @@ runLoop:
}
}
}
if cfg.TailnetTargetIP != "" && ipsHaveChanged && len(addrs) > 0 {
if cfg.TailnetTargetIP != "" && ipsHaveChanged && len(addrs) != 0 {
log.Printf("Installing forwarding rules for destination %v", cfg.TailnetTargetIP)
if err := installEgressForwardingRule(ctx, cfg.TailnetTargetIP, addrs, nfr); err != nil {
log.Fatalf("installing egress proxy rules: %v", err)
}
}
// If this is a L7 cluster ingress proxy (set up
// by Kubernetes operator) and proxying of
// cluster traffic to the ingress target is
// enabled, set up proxy rule each time the
// tailnet IPs of this node change (including
// the first time they become available).
if cfg.AllowProxyingClusterTrafficViaIngress && cfg.ServeConfigPath != "" && ipsHaveChanged && len(addrs) != 0 {
log.Printf("installing rules to forward traffic for %s to node's tailnet IP", cfg.PodIP)
if err := installTSForwardingRuleForDestination(ctx, cfg.PodIP, addrs, nfr); err != nil {
log.Fatalf("installing rules to forward traffic to node's tailnet IP: %v", err)
}
}
currentIPs = newCurrentIPs
deviceInfo := []any{n.NetMap.SelfNode.StableID(), n.NetMap.SelfNode.Name()}
@ -467,32 +535,50 @@ runLoop:
log.Println("Startup complete, waiting for shutdown signal")
startupTasksDone = true
// Reap all processes, since we are PID1 and need to collect zombies. We can
// only start doing this once we've stopped shelling out to things
// `tailscale up`, otherwise this goroutine can reap the CLI subprocesses
// and wedge bringup.
// Wait on tailscaled process. It won't
// be cleaned up by default when the
// container exits as it is not PID1.
// TODO (irbekrm): perhaps we can
// replace the reaper by a running
// cmd.Wait in a goroutine immediately
// after starting tailscaled?
reaper := func() {
defer wg.Done()
for {
var status unix.WaitStatus
pid, err := unix.Wait4(-1, &status, 0, nil)
_, err := unix.Wait4(daemonProcess.Pid, &status, 0, nil)
if errors.Is(err, unix.EINTR) {
continue
}
if err != nil {
log.Fatalf("Waiting for exited processes: %v", err)
}
if pid == daemonProcess.Pid {
log.Printf("Tailscaled exited")
os.Exit(0)
log.Fatalf("Waiting for tailscaled to exit: %v", err)
}
log.Print("tailscaled exited")
os.Exit(0)
}
}
wg.Add(1)
go reaper()
}
}
case <-tc:
newBackendAddrs, err := resolveDNS(ctx, cfg.ProxyTargetDNSName)
if err != nil {
log.Printf("[unexpected] error resolving DNS name %s: %v", cfg.ProxyTargetDNSName, err)
resetTimer(true)
continue
}
backendsHaveChanged := !(slices.EqualFunc(backendAddrs, newBackendAddrs, func(ip1 net.IP, ip2 net.IP) bool {
return slices.ContainsFunc(newBackendAddrs, func(ip net.IP) bool { return ip.Equal(ip1) })
}))
if backendsHaveChanged && len(addrs) != 0 {
log.Printf("Backend address change detected, installing proxy rules for backends %v", newBackendAddrs)
if err := installIngressForwardingRuleForDNSTarget(ctx, newBackendAddrs, addrs, nfr); err != nil {
log.Fatalf("installing ingress proxy rules for DNS target %s: %v", cfg.ProxyTargetDNSName, err)
}
}
backendAddrs = newBackendAddrs
resetTimer(false)
}
}
wg.Wait()
@ -733,12 +819,12 @@ func ensureTunFile(root string) error {
}
// ensureIPForwarding enables IPv4/IPv6 forwarding for the container.
func ensureIPForwarding(root, clusterProxyTarget, tailnetTargetiP, tailnetTargetFQDN string, routes *string) error {
func ensureIPForwarding(root, clusterProxyTargetIP, tailnetTargetIP, tailnetTargetFQDN string, routes *string) error {
var (
v4Forwarding, v6Forwarding bool
)
if clusterProxyTarget != "" {
proxyIP, err := netip.ParseAddr(clusterProxyTarget)
if clusterProxyTargetIP != "" {
proxyIP, err := netip.ParseAddr(clusterProxyTargetIP)
if err != nil {
return fmt.Errorf("invalid cluster destination IP: %v", err)
}
@ -748,8 +834,8 @@ func ensureIPForwarding(root, clusterProxyTarget, tailnetTargetiP, tailnetTarget
v6Forwarding = true
}
}
if tailnetTargetiP != "" {
proxyIP, err := netip.ParseAddr(tailnetTargetiP)
if tailnetTargetIP != "" {
proxyIP, err := netip.ParseAddr(tailnetTargetIP)
if err != nil {
return fmt.Errorf("invalid tailnet destination IP: %v", err)
}
@ -777,7 +863,10 @@ func ensureIPForwarding(root, clusterProxyTarget, tailnetTargetiP, tailnetTarget
}
}
}
return enableIPForwarding(v4Forwarding, v6Forwarding, root)
}
func enableIPForwarding(v4Forwarding, v6Forwarding bool, root string) error {
var paths []string
if v4Forwarding {
paths = append(paths, filepath.Join(root, "proc/sys/net/ipv4/ip_forward"))
@ -837,6 +926,35 @@ func installEgressForwardingRule(ctx context.Context, dstStr string, tsIPs []net
return nil
}
// installTSForwardingRuleForDestination accepts a destination address and a
// list of the node's tailnet addresses, and sets up rules to forward traffic
// for the destination to the tailnet IP matching the destination's IP family.
// The destination can be the Pod IP of this node.
func installTSForwardingRuleForDestination(ctx context.Context, dstFilter string, tsIPs []netip.Prefix, nfr linuxfw.NetfilterRunner) error {
dst, err := netip.ParseAddr(dstFilter)
if err != nil {
return err
}
var local netip.Addr
for _, pfx := range tsIPs {
if !pfx.IsSingleIP() {
continue
}
if pfx.Addr().Is4() != dst.Is4() {
continue
}
local = pfx.Addr()
break
}
if !local.IsValid() {
return fmt.Errorf("no tailscale IP matching family of %s found in %v", dstFilter, tsIPs)
}
if err := nfr.AddDNATRule(dst, local); err != nil {
return fmt.Errorf("installing rule for forwarding traffic to tailnet IP: %w", err)
}
return nil
}
func installIngressForwardingRule(ctx context.Context, dstStr string, tsIPs []netip.Prefix, nfr linuxfw.NetfilterRunner) error {
dst, err := netip.ParseAddr(dstStr)
if err != nil {
@ -865,15 +983,89 @@ func installIngressForwardingRule(ctx context.Context, dstStr string, tsIPs []ne
return nil
}
func installIngressForwardingRuleForDNSTarget(ctx context.Context, backendAddrs []net.IP, tsIPs []netip.Prefix, nfr linuxfw.NetfilterRunner) error {
var (
tsv4 netip.Addr
tsv6 netip.Addr
v4Backends []netip.Addr
v6Backends []netip.Addr
)
for _, pfx := range tsIPs {
if pfx.IsSingleIP() && pfx.Addr().Is4() {
tsv4 = pfx.Addr()
continue
}
if pfx.IsSingleIP() && pfx.Addr().Is6() {
tsv6 = pfx.Addr()
continue
}
}
// TODO: if more than one backend address is found and the firewall is in
// nftables mode, log that only the first IP will be used.
for _, ip := range backendAddrs {
if ip.To4() != nil {
v4Backends = append(v4Backends, netip.AddrFrom4([4]byte(ip.To4())))
}
if ip.To16() != nil {
v6Backends = append(v6Backends, netip.AddrFrom16([16]byte(ip.To16())))
}
}
// Enable IP forwarding here as opposed to at the start of containerboot
// as the IPv4/IPv6 requirements might have changed.
// For Kubernetes operator proxies, forwarding for both IPv4 and IPv6 is
// enabled by an init container, so in practice enabling forwarding here
// is only needed if this proxy has been configured by manually setting
// TS_EXPERIMENTAL_DEST_DNS_NAME env var for a containerboot instance.
if err := enableIPForwarding(len(v4Backends) != 0, len(v6Backends) != 0, ""); err != nil {
log.Printf("[unexpected] failed to ensure IP forwarding: %v", err)
}
updateFirewall := func(dst netip.Addr, backendTargets []netip.Addr) error {
if err := nfr.DNATWithLoadBalancer(dst, backendTargets); err != nil {
return fmt.Errorf("installing DNAT rules for ingress backends %+#v: %w", backendTargets, err)
}
// The backend might advertise an MSS higher than that of the
// tailscale interfaces. Clamp MSS of packets going out via
// tailscale0 interface to its MTU to prevent broken connections
// in environments where path MTU discovery is not working.
if err := nfr.ClampMSSToPMTU("tailscale0", dst); err != nil {
return fmt.Errorf("adding rule to clamp traffic via tailscale0: %v", err)
}
return nil
}
if len(v4Backends) != 0 {
if !tsv4.IsValid() {
log.Printf("backend targets %v contain at least one IPv4 address, but this node's Tailscale IPs do not contain a valid IPv4 address: %v", backendAddrs, tsIPs)
} else if err := updateFirewall(tsv4, v4Backends); err != nil {
return fmt.Errorf("Installing IPv4 firewall rules: %w", err)
}
}
if len(v6Backends) != 0 && !tsv6.IsValid() {
if !tsv6.IsValid() {
log.Printf("backend targets %v contain at least one IPv6 address, but this node's Tailscale IPs do not contain a valid IPv6 address: %v", backendAddrs, tsIPs)
} else if !nfr.HasIPV6NAT() {
log.Printf("backend targets %v contain at least one IPv6 address, but the chosen firewall mode does not support IPv6 NAT", backendAddrs)
} else if err := updateFirewall(tsv6, v6Backends); err != nil {
return fmt.Errorf("Installing IPv6 firewall rules: %w", err)
}
}
return nil
}
// settings is all the configuration for containerboot.
type settings struct {
AuthKey string
Hostname string
Routes *string
// ProxyTo is the destination IP to which all incoming
// ProxyTargetIP is the destination IP to which all incoming
// Tailscale traffic should be proxied. If empty, no proxying
// is done. This is typically a locally reachable IP.
ProxyTo string
ProxyTargetIP string
// ProxyTargetDNSName is a DNS name to whose backing IP addresses all
// incoming Tailscale traffic should be proxied.
ProxyTargetDNSName string
// TailnetTargetIP is the destination IP to which all incoming
// non-Tailscale traffic should be proxied. This is typically a
// Tailscale IP.
@ -897,17 +1089,38 @@ type settings struct {
Root string
KubernetesCanPatch bool
TailscaledConfigFilePath string
// If set to true and this containerboot instance is a Kubernetes ingress
// proxy, set up rules to forward incoming cluster traffic to the ingress
// target in the cluster.
AllowProxyingClusterTrafficViaIngress bool
// PodIP is the IP of the Pod if running in Kubernetes. This is used
// when setting up rules to proxy cluster traffic to cluster ingress
// target.
PodIP string
}
func (s *settings) validate() error {
if s.TailscaledConfigFilePath != "" {
dir, file := path.Split(s.TailscaledConfigFilePath)
if _, err := os.Stat(dir); err != nil {
return fmt.Errorf("error validating whether directory with tailscaled config file %s exists: %w", dir, err)
}
if _, err := os.Stat(s.TailscaledConfigFilePath); err != nil {
return fmt.Errorf("error validating whether tailscaled config directory %q contains tailscaled config for current capability version %q: %w. If this is a Tailscale Kubernetes operator proxy, please ensure that the version of the operator is not older than the version of the proxy", dir, file, err)
}
if _, err := conffile.Load(s.TailscaledConfigFilePath); err != nil {
return fmt.Errorf("error validating tailscaled configfile contents: %w", err)
}
}
if s.ProxyTo != "" && s.UserspaceMode {
if s.ProxyTargetIP != "" && s.UserspaceMode {
return errors.New("TS_DEST_IP is not supported with TS_USERSPACE")
}
if s.ProxyTargetDNSName != "" && s.UserspaceMode {
return errors.New("TS_EXPERIMENTAL_DEST_DNS_NAME is not supported with TS_USERSPACE")
}
if s.ProxyTargetDNSName != "" && s.ProxyTargetIP != "" {
return errors.New("TS_EXPERIMENTAL_DEST_DNS_NAME and TS_DEST_IP cannot both be set")
}
if s.TailnetTargetIP != "" && s.UserspaceMode {
return errors.New("TS_TAILNET_TARGET_IP is not supported with TS_USERSPACE")
}
@ -918,11 +1131,42 @@ func (s *settings) validate() error {
return errors.New("Both TS_TAILNET_TARGET_IP and TS_TAILNET_FQDN cannot be set")
}
if s.TailscaledConfigFilePath != "" && (s.AcceptDNS != nil || s.AuthKey != "" || s.Routes != nil || s.ExtraArgs != "" || s.Hostname != "") {
return errors.New("EXPERIMENTAL_TS_CONFIGFILE_PATH cannot be set in combination with TS_HOSTNAME, TS_EXTRA_ARGS, TS_AUTHKEY, TS_ROUTES, TS_ACCEPT_DNS.")
return errors.New("TS_EXPERIMENTAL_VERSIONED_CONFIG_DIR cannot be set in combination with TS_HOSTNAME, TS_EXTRA_ARGS, TS_AUTHKEY, TS_ROUTES, TS_ACCEPT_DNS.")
}
if s.AllowProxyingClusterTrafficViaIngress && s.UserspaceMode {
return errors.New("EXPERIMENTAL_ALLOW_PROXYING_CLUSTER_TRAFFIC_VIA_INGRESS is not supported in userspace mode")
}
if s.AllowProxyingClusterTrafficViaIngress && s.ServeConfigPath == "" {
return errors.New("EXPERIMENTAL_ALLOW_PROXYING_CLUSTER_TRAFFIC_VIA_INGRESS is set but this is not a cluster ingress proxy")
}
if s.AllowProxyingClusterTrafficViaIngress && s.PodIP == "" {
return errors.New("EXPERIMENTAL_ALLOW_PROXYING_CLUSTER_TRAFFIC_VIA_INGRESS is set but POD_IP is not set")
}
return nil
}
func resolveDNS(ctx context.Context, name string) ([]net.IP, error) {
// TODO (irbekrm): look at using recursive.Resolver instead to resolve
// the DNS names as well as retrieve TTLs. It looks though that this
// seems to return very short TTLs (shorter than on the actual records).
ip4s, err := net.DefaultResolver.LookupIP(ctx, "ip4", name)
if err != nil {
if e, ok := err.(*net.DNSError); !(ok && e.IsNotFound) {
return nil, fmt.Errorf("error looking up IPv4 addresses: %v", err)
}
}
ip6s, err := net.DefaultResolver.LookupIP(ctx, "ip6", name)
if err != nil {
if e, ok := err.(*net.DNSError); !(ok && e.IsNotFound) {
return nil, fmt.Errorf("error looking up IPv6 addresses: %v", err)
}
}
if len(ip4s) == 0 && len(ip6s) == 0 {
return nil, fmt.Errorf("no IPv4 or IPv6 addresses found for host: %s", name)
}
return append(ip4s, ip6s...), nil
}
// defaultEnv returns the value of the given envvar name, or defVal if
// unset.
func defaultEnv(name, defVal string) string {
@ -1019,3 +1263,42 @@ func isTwoStepConfigAlwaysAuth(cfg *settings) bool {
func isOneStepConfig(cfg *settings) bool {
return cfg.TailscaledConfigFilePath != ""
}
// tailscaledConfigFilePath returns the path to the tailscaled config file that
// should be used for the current capability version. It is determined by the
// TS_EXPERIMENTAL_VERSIONED_CONFIG_DIR environment variable and looks for a
// file named cap-<capability_version>.hujson in the directory. It searches for
// the highest capability version that is less than or equal to the current
// capability version.
func tailscaledConfigFilePath() string {
dir := os.Getenv("TS_EXPERIMENTAL_VERSIONED_CONFIG_DIR")
if dir == "" {
return ""
}
fe, err := os.ReadDir(dir)
if err != nil {
log.Fatalf("error reading tailscaled config directory %q: %v", dir, err)
}
maxCompatVer := tailcfg.CapabilityVersion(-1)
for _, e := range fe {
// We don't check whether the entry is a regular file, as in most
// cases this will come from a mounted kube Secret, where the
// directory contents will be various symlinks.
if e.Type().IsDir() {
continue
}
cv, err := kubeutils.CapVerFromFileName(e.Name())
if err != nil {
log.Printf("skipping file %q in tailscaled config directory %q: %v", e.Name(), dir, err)
continue
}
if cv > maxCompatVer && cv <= tailcfg.CurrentCapabilityVersion {
maxCompatVer = cv
}
}
if maxCompatVer == -1 {
log.Fatalf("no tailscaled config file found in %q for current capability version %q", dir, tailcfg.CurrentCapabilityVersion)
}
log.Printf("Using tailscaled config file %q for capability version %q", maxCompatVer, tailcfg.CurrentCapabilityVersion)
return path.Join(dir, kubeutils.TailscaledConfigFileNameForCap(maxCompatVer))
}
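To make the selection rule concrete, here is a self-contained sketch; verFromName is a hypothetical stand-in for kubeutils.CapVerFromFileName, and capability version 106 is an arbitrary example value:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// verFromName is a hypothetical stand-in for kubeutils.CapVerFromFileName:
// it extracts N from a file named "cap-N.hujson".
func verFromName(name string) (int, error) {
	trimmed := strings.TrimSuffix(strings.TrimPrefix(name, "cap-"), ".hujson")
	if !strings.HasPrefix(name, "cap-") || trimmed == name {
		return 0, fmt.Errorf("unexpected config file name %q", name)
	}
	return strconv.Atoi(trimmed)
}

func main() {
	const currentCapVer = 106 // example current capability version
	files := []string{"cap-87.hujson", "cap-95.hujson", "cap-107.hujson"}

	best := -1
	for _, f := range files {
		v, err := verFromName(f)
		if err != nil || v > currentCapVer {
			continue // skip unparseable names and too-new configs
		}
		if v > best {
			best = v
		}
	}
	// Picks the highest version that is <= currentCapVer: cap-95.hujson.
	fmt.Printf("cap-%d.hujson\n", best)
}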

View File

@ -65,7 +65,7 @@ func TestContainerBoot(t *testing.T) {
"dev/net",
"proc/sys/net/ipv4",
"proc/sys/net/ipv6/conf/all",
"etc",
"etc/tailscaled",
}
for _, path := range dirs {
if err := os.MkdirAll(filepath.Join(d, path), 0700); err != nil {
@ -80,7 +80,7 @@ func TestContainerBoot(t *testing.T) {
"dev/net/tun": []byte(""),
"proc/sys/net/ipv4/ip_forward": []byte("0"),
"proc/sys/net/ipv6/conf/all/forwarding": []byte("0"),
"etc/tailscaled": tailscaledConfBytes,
"etc/tailscaled/cap-95.hujson": tailscaledConfBytes,
}
resetFiles := func() {
for path, content := range files {
@ -638,14 +638,14 @@ func TestContainerBoot(t *testing.T) {
},
},
{
Name: "experimental tailscaled configfile",
Name: "experimental tailscaled config path",
Env: map[string]string{
"EXPERIMENTAL_TS_CONFIGFILE_PATH": filepath.Join(d, "etc/tailscaled"),
"TS_EXPERIMENTAL_VERSIONED_CONFIG_DIR": filepath.Join(d, "etc/tailscaled/"),
},
Phases: []phase{
{
WantCmds: []string{
"/usr/bin/tailscaled --socket=/tmp/tailscaled.sock --state=mem: --statedir=/tmp --tun=userspace-networking --config=/etc/tailscaled",
"/usr/bin/tailscaled --socket=/tmp/tailscaled.sock --state=mem: --statedir=/tmp --tun=userspace-networking --config=/etc/tailscaled/cap-95.hujson",
},
}, {
Notify: runningNotify,

View File

@ -13,6 +13,7 @@ import (
"testing"
"tailscale.com/tstest"
"tailscale.com/tstest/nettest"
)
func BenchmarkHandleBootstrapDNS(b *testing.B) {
@ -55,6 +56,8 @@ func getBootstrapDNS(t *testing.T, q string) dnsEntryMap {
}
func TestUnpublishedDNS(t *testing.T) {
nettest.SkipIfNoNetwork(t)
const published = "login.tailscale.com"
const unpublished = "log.tailscale.io"

View File

@ -11,21 +11,18 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
W 💣 github.com/dblohm7/wingoes from tailscale.com/util/winutil
github.com/fxamacker/cbor/v2 from tailscale.com/tka
github.com/golang/groupcache/lru from tailscale.com/net/dnscache
github.com/golang/protobuf/proto from github.com/matttproud/golang_protobuf_extensions/pbutil
L github.com/google/nftables from tailscale.com/util/linuxfw
L 💣 github.com/google/nftables/alignedbuff from github.com/google/nftables/xt
L 💣 github.com/google/nftables/binaryutil from github.com/google/nftables+
L github.com/google/nftables/expr from github.com/google/nftables+
L github.com/google/nftables/internal/parseexprfunc from github.com/google/nftables+
L github.com/google/nftables/xt from github.com/google/nftables/expr+
github.com/google/uuid from tailscale.com/tsweb
github.com/google/uuid from tailscale.com/util/fastuuid
github.com/hdevalence/ed25519consensus from tailscale.com/tka
L github.com/josharian/native from github.com/mdlayher/netlink+
L 💣 github.com/jsimonetti/rtnetlink from tailscale.com/net/interfaces+
L 💣 github.com/jsimonetti/rtnetlink from tailscale.com/net/netmon
L github.com/jsimonetti/rtnetlink/internal/unix from github.com/jsimonetti/rtnetlink
github.com/klauspost/compress/flate from nhooyr.io/websocket
github.com/matttproud/golang_protobuf_extensions/pbutil from github.com/prometheus/common/expfmt
L 💣 github.com/mdlayher/netlink from github.com/jsimonetti/rtnetlink+
L 💣 github.com/mdlayher/netlink from github.com/google/nftables+
L 💣 github.com/mdlayher/netlink/nlenc from github.com/jsimonetti/rtnetlink+
L github.com/mdlayher/netlink/nltest from github.com/google/nftables
L 💣 github.com/mdlayher/socket from github.com/mdlayher/netlink
@ -49,13 +46,15 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
L github.com/vishvananda/netns from github.com/tailscale/netlink+
github.com/x448/float16 from github.com/fxamacker/cbor/v2
💣 go4.org/mem from tailscale.com/client/tailscale+
go4.org/netipx from tailscale.com/wgengine/filter+
W 💣 golang.zx2c4.com/wireguard/windows/tunnel/winipcfg from tailscale.com/net/interfaces+
google.golang.org/protobuf/encoding/prototext from github.com/golang/protobuf/proto+
google.golang.org/protobuf/encoding/protowire from github.com/golang/protobuf/proto+
go4.org/netipx from tailscale.com/net/tsaddr+
W 💣 golang.zx2c4.com/wireguard/windows/tunnel/winipcfg from tailscale.com/net/netmon+
google.golang.org/protobuf/encoding/protodelim from github.com/prometheus/common/expfmt
google.golang.org/protobuf/encoding/prototext from github.com/prometheus/common/expfmt+
google.golang.org/protobuf/encoding/protowire from google.golang.org/protobuf/encoding/protodelim+
google.golang.org/protobuf/internal/descfmt from google.golang.org/protobuf/internal/filedesc
google.golang.org/protobuf/internal/descopts from google.golang.org/protobuf/internal/filedesc+
google.golang.org/protobuf/internal/detrand from google.golang.org/protobuf/internal/descfmt+
google.golang.org/protobuf/internal/editiondefaults from google.golang.org/protobuf/internal/filedesc
google.golang.org/protobuf/internal/encoding/defval from google.golang.org/protobuf/internal/encoding/tag+
google.golang.org/protobuf/internal/encoding/messageset from google.golang.org/protobuf/encoding/prototext+
google.golang.org/protobuf/internal/encoding/tag from google.golang.org/protobuf/internal/impl
@ -71,16 +70,15 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
google.golang.org/protobuf/internal/set from google.golang.org/protobuf/encoding/prototext
💣 google.golang.org/protobuf/internal/strs from google.golang.org/protobuf/encoding/prototext+
google.golang.org/protobuf/internal/version from google.golang.org/protobuf/runtime/protoimpl
google.golang.org/protobuf/proto from github.com/golang/protobuf/proto+
google.golang.org/protobuf/reflect/protodesc from github.com/golang/protobuf/proto
💣 google.golang.org/protobuf/reflect/protoreflect from github.com/golang/protobuf/proto+
google.golang.org/protobuf/reflect/protoregistry from github.com/golang/protobuf/proto+
google.golang.org/protobuf/runtime/protoiface from github.com/golang/protobuf/proto+
google.golang.org/protobuf/runtime/protoimpl from github.com/golang/protobuf/proto+
google.golang.org/protobuf/types/descriptorpb from google.golang.org/protobuf/reflect/protodesc
google.golang.org/protobuf/proto from github.com/prometheus/client_golang/prometheus+
💣 google.golang.org/protobuf/reflect/protoreflect from github.com/prometheus/client_model/go+
google.golang.org/protobuf/reflect/protoregistry from google.golang.org/protobuf/encoding/prototext+
google.golang.org/protobuf/runtime/protoiface from google.golang.org/protobuf/internal/impl+
google.golang.org/protobuf/runtime/protoimpl from github.com/prometheus/client_model/go+
google.golang.org/protobuf/types/known/timestamppb from github.com/prometheus/client_golang/prometheus+
nhooyr.io/websocket from tailscale.com/cmd/derper+
nhooyr.io/websocket/internal/errd from nhooyr.io/websocket
nhooyr.io/websocket/internal/util from nhooyr.io/websocket
nhooyr.io/websocket/internal/xsync from nhooyr.io/websocket
tailscale.com from tailscale.com/version
tailscale.com/atomicfile from tailscale.com/cmd/derper+
@ -89,18 +87,19 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
tailscale.com/derp from tailscale.com/cmd/derper+
tailscale.com/derp/derphttp from tailscale.com/cmd/derper
tailscale.com/disco from tailscale.com/derp
tailscale.com/envknob from tailscale.com/derp+
tailscale.com/health from tailscale.com/net/tlsdial
tailscale.com/hostinfo from tailscale.com/net/interfaces+
tailscale.com/drive from tailscale.com/client/tailscale+
tailscale.com/envknob from tailscale.com/client/tailscale+
tailscale.com/health from tailscale.com/net/tlsdial+
tailscale.com/hostinfo from tailscale.com/net/netmon+
tailscale.com/ipn from tailscale.com/client/tailscale
tailscale.com/ipn/ipnstate from tailscale.com/client/tailscale+
tailscale.com/metrics from tailscale.com/cmd/derper+
tailscale.com/net/dnscache from tailscale.com/derp/derphttp
tailscale.com/net/flowtrack from tailscale.com/net/packet+
💣 tailscale.com/net/interfaces from tailscale.com/net/netns+
tailscale.com/net/ktimeout from tailscale.com/cmd/derper
tailscale.com/net/netaddr from tailscale.com/ipn+
tailscale.com/net/netknob from tailscale.com/net/netns
tailscale.com/net/netmon from tailscale.com/net/sockstats+
💣 tailscale.com/net/netmon from tailscale.com/derp/derphttp+
tailscale.com/net/netns from tailscale.com/derp/derphttp
tailscale.com/net/netutil from tailscale.com/client/tailscale
tailscale.com/net/packet from tailscale.com/wgengine/filter
@ -117,17 +116,17 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
tailscale.com/syncs from tailscale.com/cmd/derper+
tailscale.com/tailcfg from tailscale.com/client/tailscale+
tailscale.com/tka from tailscale.com/client/tailscale+
W tailscale.com/tsconst from tailscale.com/net/interfaces
W tailscale.com/tsconst from tailscale.com/net/netmon
tailscale.com/tstime from tailscale.com/derp+
tailscale.com/tstime/mono from tailscale.com/tstime/rate
tailscale.com/tstime/rate from tailscale.com/wgengine/filter+
tailscale.com/tstime/rate from tailscale.com/derp+
tailscale.com/tsweb from tailscale.com/cmd/derper
tailscale.com/tsweb/promvarz from tailscale.com/tsweb
tailscale.com/tsweb/varz from tailscale.com/tsweb+
tailscale.com/types/dnstype from tailscale.com/tailcfg
tailscale.com/types/empty from tailscale.com/ipn
tailscale.com/types/ipproto from tailscale.com/net/flowtrack+
tailscale.com/types/key from tailscale.com/cmd/derper+
tailscale.com/types/key from tailscale.com/client/tailscale+
tailscale.com/types/lazy from tailscale.com/version+
tailscale.com/types/logger from tailscale.com/cmd/derper+
tailscale.com/types/netmap from tailscale.com/ipn
@ -136,34 +135,36 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
tailscale.com/types/preftype from tailscale.com/ipn
tailscale.com/types/ptr from tailscale.com/hostinfo+
tailscale.com/types/structs from tailscale.com/ipn+
tailscale.com/types/tkatype from tailscale.com/types/key+
tailscale.com/types/views from tailscale.com/ipn/ipnstate+
tailscale.com/util/clientmetric from tailscale.com/net/tshttpproxy+
tailscale.com/types/tkatype from tailscale.com/client/tailscale+
tailscale.com/types/views from tailscale.com/ipn+
tailscale.com/util/cibuild from tailscale.com/health
tailscale.com/util/clientmetric from tailscale.com/net/netmon+
tailscale.com/util/cloudenv from tailscale.com/hostinfo+
W tailscale.com/util/cmpver from tailscale.com/net/tshttpproxy
tailscale.com/util/cmpx from tailscale.com/cmd/derper+
tailscale.com/util/ctxkey from tailscale.com/tsweb+
L 💣 tailscale.com/util/dirwalk from tailscale.com/metrics
tailscale.com/util/dnsname from tailscale.com/hostinfo+
tailscale.com/util/fastuuid from tailscale.com/tsweb
tailscale.com/util/httpm from tailscale.com/client/tailscale
tailscale.com/util/lineread from tailscale.com/hostinfo+
L tailscale.com/util/linuxfw from tailscale.com/net/netns
tailscale.com/util/mak from tailscale.com/syncs+
tailscale.com/util/mak from tailscale.com/health+
tailscale.com/util/multierr from tailscale.com/health+
tailscale.com/util/nocasemaps from tailscale.com/types/ipproto
tailscale.com/util/set from tailscale.com/health+
tailscale.com/util/set from tailscale.com/derp+
tailscale.com/util/singleflight from tailscale.com/net/dnscache
tailscale.com/util/slicesx from tailscale.com/cmd/derper+
tailscale.com/util/syspolicy from tailscale.com/ipn
tailscale.com/util/vizerror from tailscale.com/tsweb+
tailscale.com/util/vizerror from tailscale.com/tailcfg+
W 💣 tailscale.com/util/winutil from tailscale.com/hostinfo+
W 💣 tailscale.com/util/winutil/winenv from tailscale.com/hostinfo
tailscale.com/version from tailscale.com/derp+
tailscale.com/version/distro from tailscale.com/hostinfo+
tailscale.com/version/distro from tailscale.com/envknob+
tailscale.com/wgengine/filter from tailscale.com/types/netmap
golang.org/x/crypto/acme from golang.org/x/crypto/acme/autocert
golang.org/x/crypto/acme/autocert from tailscale.com/cmd/derper
golang.org/x/crypto/argon2 from tailscale.com/tka
golang.org/x/crypto/blake2b from golang.org/x/crypto/nacl/box+
golang.org/x/crypto/blake2b from golang.org/x/crypto/argon2+
golang.org/x/crypto/blake2s from tailscale.com/tka
golang.org/x/crypto/chacha20 from golang.org/x/crypto/chacha20poly1305
golang.org/x/crypto/chacha20poly1305 from crypto/tls
@ -183,10 +184,10 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
golang.org/x/net/proxy from tailscale.com/net/netns
D golang.org/x/net/route from net+
golang.org/x/sync/errgroup from github.com/mdlayher/socket+
golang.org/x/sys/cpu from golang.org/x/crypto/blake2b+
LD golang.org/x/sys/unix from github.com/jsimonetti/rtnetlink/internal/unix+
W golang.org/x/sys/windows from golang.org/x/sys/windows/registry+
W golang.org/x/sys/windows/registry from golang.zx2c4.com/wireguard/windows/tunnel/winipcfg+
golang.org/x/sys/cpu from github.com/josharian/native+
LD golang.org/x/sys/unix from github.com/google/nftables+
W golang.org/x/sys/windows from github.com/dblohm7/wingoes+
W golang.org/x/sys/windows/registry from github.com/dblohm7/wingoes+
W golang.org/x/sys/windows/svc from golang.org/x/sys/windows/svc/mgr+
W golang.org/x/sys/windows/svc/mgr from tailscale.com/util/winutil
golang.org/x/text/secure/bidirule from golang.org/x/net/idna
@ -198,10 +199,10 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
bytes from bufio+
cmp from slices+
compress/flate from compress/gzip+
compress/gzip from internal/profile+
compress/gzip from google.golang.org/protobuf/internal/impl+
container/list from crypto/tls+
context from crypto/tls+
crypto from crypto/ecdsa+
crypto from crypto/ecdh+
crypto/aes from crypto/ecdsa+
crypto/cipher from crypto/aes+
crypto/des from crypto/tls+
@ -226,14 +227,14 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
embed from crypto/internal/nistec+
encoding from encoding/json+
encoding/asn1 from crypto/x509+
encoding/base32 from tailscale.com/tka+
encoding/base32 from github.com/fxamacker/cbor/v2+
encoding/base64 from encoding/json+
encoding/binary from compress/gzip+
encoding/hex from crypto/x509+
encoding/json from expvar+
encoding/pem from crypto/tls+
errors from bufio+
expvar from tailscale.com/cmd/derper+
expvar from github.com/prometheus/client_golang/prometheus+
flag from tailscale.com/cmd/derper+
fmt from compress/flate+
go/token from google.golang.org/protobuf/internal/strs
@ -247,12 +248,13 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
io/ioutil from github.com/mitchellh/go-ps+
log from expvar+
log/internal from log
maps from tailscale.com/types/views+
maps from tailscale.com/ipn+
math from compress/flate+
math/big from crypto/dsa+
math/bits from compress/flate+
math/rand from github.com/mdlayher/netlink+
mime from mime/multipart+
math/rand/v2 from tailscale.com/util/fastuuid
mime from github.com/prometheus/common/expfmt+
mime/multipart from net/http
mime/quotedprintable from mime/multipart
net from crypto/tls+
@ -264,15 +266,15 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
net/textproto from golang.org/x/net/http/httpguts+
net/url from crypto/x509+
os from crypto/rand+
os/exec from golang.zx2c4.com/wireguard/windows/tunnel/winipcfg+
os/exec from github.com/coreos/go-iptables/iptables+
os/signal from tailscale.com/cmd/derper
W os/user from tailscale.com/util/winutil
path from golang.org/x/crypto/acme/autocert+
path from github.com/prometheus/client_golang/prometheus/internal+
path/filepath from crypto/x509+
reflect from crypto/x509+
regexp from internal/profile+
regexp from github.com/coreos/go-iptables/iptables+
regexp/syntax from regexp
runtime/debug from golang.org/x/crypto/acme+
runtime/debug from github.com/prometheus/client_golang/prometheus+
runtime/metrics from github.com/prometheus/client_golang/prometheus+
runtime/pprof from net/http/pprof
runtime/trace from net/http/pprof+


@ -5,6 +5,7 @@
package main // import "tailscale.com/cmd/derper"
import (
"cmp"
"context"
"crypto/tls"
"encoding/json"
@ -25,38 +26,47 @@ import (
"syscall"
"time"
"go4.org/mem"
"golang.org/x/time/rate"
"tailscale.com/atomicfile"
"tailscale.com/derp"
"tailscale.com/derp/derphttp"
"tailscale.com/metrics"
"tailscale.com/net/ktimeout"
"tailscale.com/net/stunserver"
"tailscale.com/tsweb"
"tailscale.com/types/key"
"tailscale.com/util/cmpx"
"tailscale.com/types/logger"
"tailscale.com/version"
)
var (
dev = flag.Bool("dev", false, "run in localhost development mode (overrides -a)")
addr = flag.String("a", ":443", "server HTTP/HTTPS listen address, in form \":port\", \"ip:port\", or for IPv6 \"[ip]:port\". If the IP is omitted, it defaults to all interfaces. Serves HTTPS if the port is 443 and/or -certmode is manual, otherwise HTTP.")
httpPort = flag.Int("http-port", 80, "The port on which to serve HTTP. Set to -1 to disable. The listener is bound to the same IP (if any) as specified in the -a flag.")
stunPort = flag.Int("stun-port", 3478, "The UDP port on which to serve STUN. The listener is bound to the same IP (if any) as specified in the -a flag.")
configPath = flag.String("c", "", "config file path")
certMode = flag.String("certmode", "letsencrypt", "mode for getting a cert. possible options: manual, letsencrypt")
certDir = flag.String("certdir", tsweb.DefaultCertDir("derper-certs"), "directory to store LetsEncrypt certs, if addr's port is :443")
hostname = flag.String("hostname", "derp.tailscale.com", "LetsEncrypt host name, if addr's port is :443")
runSTUN = flag.Bool("stun", true, "whether to run a STUN server. It will bind to the same IP (if any) as the --addr flag value.")
runDERP = flag.Bool("derp", true, "whether to run a DERP server. The only reason to set this false is if you're decommissioning a server but want to keep its bootstrap DNS functionality still running.")
dev = flag.Bool("dev", false, "run in localhost development mode (overrides -a)")
versionFlag = flag.Bool("version", false, "print version and exit")
addr = flag.String("a", ":443", "server HTTP/HTTPS listen address, in form \":port\", \"ip:port\", or for IPv6 \"[ip]:port\". If the IP is omitted, it defaults to all interfaces. Serves HTTPS if the port is 443 and/or -certmode is manual, otherwise HTTP.")
httpPort = flag.Int("http-port", 80, "The port on which to serve HTTP. Set to -1 to disable. The listener is bound to the same IP (if any) as specified in the -a flag.")
stunPort = flag.Int("stun-port", 3478, "The UDP port on which to serve STUN. The listener is bound to the same IP (if any) as specified in the -a flag.")
configPath = flag.String("c", "", "config file path")
certMode = flag.String("certmode", "letsencrypt", "mode for getting a cert. possible options: manual, letsencrypt")
certDir = flag.String("certdir", tsweb.DefaultCertDir("derper-certs"), "directory to store LetsEncrypt certs, if addr's port is :443")
hostname = flag.String("hostname", "derp.tailscale.com", "LetsEncrypt host name, if addr's port is :443")
runSTUN = flag.Bool("stun", true, "whether to run a STUN server. It will bind to the same IP (if any) as the --addr flag value.")
runDERP = flag.Bool("derp", true, "whether to run a DERP server. The only reason to set this false is if you're decommissioning a server but want to keep its bootstrap DNS functionality still running.")
meshPSKFile = flag.String("mesh-psk-file", defaultMeshPSKFile(), "if non-empty, path to file containing the mesh pre-shared key file. It should contain some hex string; whitespace is trimmed.")
meshWith = flag.String("mesh-with", "", "optional comma-separated list of hostnames to mesh with; the server's own hostname can be in the list")
bootstrapDNS = flag.String("bootstrap-dns-names", "", "optional comma-separated list of hostnames to make available at /bootstrap-dns")
unpublishedDNS = flag.String("unpublished-bootstrap-dns-names", "", "optional comma-separated list of hostnames to make available at /bootstrap-dns and not publish in the list")
verifyClients = flag.Bool("verify-clients", false, "verify clients to this DERP server through a local tailscaled instance.")
meshPSKFile = flag.String("mesh-psk-file", defaultMeshPSKFile(), "if non-empty, path to file containing the mesh pre-shared key file. It should contain some hex string; whitespace is trimmed.")
meshWith = flag.String("mesh-with", "", "optional comma-separated list of hostnames to mesh with; the server's own hostname can be in the list")
bootstrapDNS = flag.String("bootstrap-dns-names", "", "optional comma-separated list of hostnames to make available at /bootstrap-dns")
unpublishedDNS = flag.String("unpublished-bootstrap-dns-names", "", "optional comma-separated list of hostnames to make available at /bootstrap-dns and not publish in the list")
verifyClients = flag.Bool("verify-clients", false, "verify clients to this DERP server through a local tailscaled instance.")
verifyClientURL = flag.String("verify-client-url", "", "if non-empty, an admission controller URL for permitting client connections; see tailcfg.DERPAdmitClientRequest")
verifyFailOpen = flag.Bool("verify-client-url-fail-open", true, "whether we fail open if --verify-client-url is unreachable")
acceptConnLimit = flag.Float64("accept-connection-limit", math.Inf(+1), "rate limit for accepting new connection")
acceptConnBurst = flag.Int("accept-connection-burst", math.MaxInt, "burst limit for accepting new connection")
// tcpKeepAlive is intentionally long, to reduce battery cost. There is an L7 keepalive on a higher frequency schedule.
tcpKeepAlive = flag.Duration("tcp-keepalive-time", 10*time.Minute, "TCP keepalive time")
// tcpUserTimeout is intentionally short, so that hung connections are cleaned up promptly. DERPs should be nearby users.
tcpUserTimeout = flag.Duration("tcp-user-timeout", 15*time.Second, "TCP user timeout")
)
var (
@ -121,6 +131,10 @@ func writeNewConfig() config {
func main() {
flag.Parse()
if *versionFlag {
fmt.Println(version.Long())
return
}
ctx, cancel := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
defer cancel()
@ -147,6 +161,8 @@ func main() {
s := derp.NewServer(cfg.PrivateKey, log.Printf)
s.SetVerifyClient(*verifyClients)
s.SetVerifyClientURL(*verifyClientURL)
s.SetVerifyClientURLFailOpen(*verifyFailOpen)
if *meshPSKFile != "" {
b, err := os.ReadFile(*meshPSKFile)
@ -175,7 +191,12 @@ func main() {
http.Error(w, "derp server disabled", http.StatusNotFound)
}))
}
mux.HandleFunc("/derp/probe", probeHandler)
// These two endpoints are the same. Different versions of the clients
// have assumed different paths over time, so we support both.
mux.HandleFunc("/derp/probe", derphttp.ProbeHandler)
mux.HandleFunc("/derp/latency-check", derphttp.ProbeHandler)
go refreshBootstrapDNSLoop()
mux.HandleFunc("/bootstrap-dns", tsweb.BrowserHeaderHandlerFunc(handleBootstrapDNS))
mux.Handle("/", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
@ -216,7 +237,16 @@ func main() {
}))
debug.Handle("traffic", "Traffic check", http.HandlerFunc(s.ServeDebugTraffic))
quietLogger := log.New(logFilter{}, "", 0)
// Longer-lived DERP connections send an application-layer keepalive. Note
// that if the keepalive is hit, the user timeout takes precedence over the
// keepalive counter, so an unanswered probe takes effect promptly. This is
// less tolerant of high loss, but high loss is not expected.
lc := net.ListenConfig{
Control: ktimeout.UserTimeout(*tcpUserTimeout),
KeepAlive: *tcpKeepAlive,
}
quietLogger := log.New(logger.HTTPServerLogFilter{Inner: log.Printf}, "", 0)
httpsrv := &http.Server{
Addr: *addr,
Handler: mux,
@ -292,7 +322,12 @@ func main() {
// duration exceeds server's WriteTimeout".
WriteTimeout: 5 * time.Minute,
}
err := port80srv.ListenAndServe()
ln, err := lc.Listen(context.Background(), "tcp", port80srv.Addr)
if err != nil {
log.Fatal(err)
}
defer ln.Close()
err = port80srv.Serve(ln)
if err != nil {
if err != http.ErrServerClosed {
log.Fatal(err)
@ -300,10 +335,15 @@ func main() {
}
}()
}
err = rateLimitedListenAndServeTLS(httpsrv)
err = rateLimitedListenAndServeTLS(httpsrv, &lc)
} else {
log.Printf("derper: serving on %s", *addr)
err = httpsrv.ListenAndServe()
var ln net.Listener
ln, err = lc.Listen(context.Background(), "tcp", httpsrv.Addr)
if err != nil {
log.Fatal(err)
}
err = httpsrv.Serve(ln)
}
if err != nil && err != http.ErrServerClosed {
log.Fatalf("derper: %v", err)
@ -335,17 +375,6 @@ func isChallengeChar(c rune) bool {
c == '.' || c == '-' || c == '_'
}
// probeHandler is the endpoint that js/wasm clients hit to measure
// DERP latency, since they can't do UDP STUN queries.
func probeHandler(w http.ResponseWriter, r *http.Request) {
switch r.Method {
case "HEAD", "GET":
w.Header().Set("Access-Control-Allow-Origin", "*")
default:
http.Error(w, "bogus probe method", http.StatusMethodNotAllowed)
}
}
var validProdHostname = regexp.MustCompile(`^derp([^.]*)\.tailscale\.com\.?$`)
func prodAutocertHostPolicy(_ context.Context, host string) error {
@ -368,8 +397,8 @@ func defaultMeshPSKFile() string {
return ""
}
func rateLimitedListenAndServeTLS(srv *http.Server) error {
ln, err := net.Listen("tcp", cmpx.Or(srv.Addr, ":https"))
func rateLimitedListenAndServeTLS(srv *http.Server, lc *net.ListenConfig) error {
ln, err := lc.Listen(context.Background(), "tcp", cmp.Or(srv.Addr, ":https"))
if err != nil {
return err
}
@ -423,22 +452,3 @@ func (l *rateLimitedListener) Accept() (net.Conn, error) {
l.numAccepts.Add(1)
return cn, nil
}
// logFilter is used to filter out useless error logs that are logged to
// the net/http.Server.ErrorLog logger.
type logFilter struct{}
func (logFilter) Write(p []byte) (int, error) {
b := mem.B(p)
if mem.HasSuffix(b, mem.S(": EOF\n")) ||
mem.HasSuffix(b, mem.S(": i/o timeout\n")) ||
mem.HasSuffix(b, mem.S(": read: connection reset by peer\n")) ||
mem.HasSuffix(b, mem.S(": remote error: tls: bad certificate\n")) ||
mem.HasSuffix(b, mem.S(": tls: first record does not look like a TLS handshake\n")) {
// Skip this log message, but say that we processed it
return len(p), nil
}
log.Printf("%s", p)
return len(p), nil
}
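For reference, a minimal standalone sketch (not part of this diff) of the listener pattern the derper changes above adopt: build a net.ListenConfig with a long keepalive period and a TCP user-timeout Control hook, listen explicitly, and hand the listener to (*http.Server).Serve instead of calling ListenAndServe. The address and durations are illustrative.

package main

import (
	"context"
	"log"
	"net"
	"net/http"
	"time"

	"tailscale.com/net/ktimeout"
)

func main() {
	lc := net.ListenConfig{
		// Long keepalive to reduce battery cost; DERP runs its own L7 keepalive.
		KeepAlive: 10 * time.Minute,
		// Short user timeout so hung connections are torn down promptly.
		Control: ktimeout.UserTimeout(15 * time.Second),
	}
	ln, err := lc.Listen(context.Background(), "tcp", ":8080")
	if err != nil {
		log.Fatal(err)
	}
	defer ln.Close()
	srv := &http.Server{Handler: http.DefaultServeMux}
	log.Fatal(srv.Serve(ln))
}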


@ -15,6 +15,7 @@ import (
"tailscale.com/derp"
"tailscale.com/derp/derphttp"
"tailscale.com/net/netmon"
"tailscale.com/types/key"
"tailscale.com/types/logger"
)
@ -36,7 +37,8 @@ func startMesh(s *derp.Server) error {
func startMeshWithHost(s *derp.Server, host string) error {
logf := logger.WithPrefix(log.Printf, fmt.Sprintf("mesh(%q): ", host))
c, err := derphttp.NewClient(s.PrivateKey(), "https://"+host+"/derp", logf)
netMon := netmon.NewStatic() // good enough for cmd/derper; no need for netns fanciness
c, err := derphttp.NewClient(s.PrivateKey(), "https://"+host+"/derp", logf, netMon)
if err != nil {
return err
}


@ -16,21 +16,40 @@ import (
"tailscale.com/prober"
"tailscale.com/tsweb"
"tailscale.com/version"
)
var (
derpMapURL = flag.String("derp-map", "https://login.tailscale.com/derpmap/default", "URL to DERP map (https:// or file://)")
listen = flag.String("listen", ":8030", "HTTP listen address")
probeOnce = flag.Bool("once", false, "probe once and print results, then exit; ignores the listen flag")
spread = flag.Bool("spread", true, "whether to spread probing over time")
interval = flag.Duration("interval", 15*time.Second, "probe interval")
derpMapURL = flag.String("derp-map", "https://login.tailscale.com/derpmap/default", "URL to DERP map (https:// or file://)")
versionFlag = flag.Bool("version", false, "print version and exit")
listen = flag.String("listen", ":8030", "HTTP listen address")
probeOnce = flag.Bool("once", false, "probe once and print results, then exit; ignores the listen flag")
spread = flag.Bool("spread", true, "whether to spread probing over time")
interval = flag.Duration("interval", 15*time.Second, "probe interval")
meshInterval = flag.Duration("mesh-interval", 15*time.Second, "mesh probe interval")
stunInterval = flag.Duration("stun-interval", 15*time.Second, "STUN probe interval")
tlsInterval = flag.Duration("tls-interval", 15*time.Second, "TLS probe interval")
bwInterval = flag.Duration("bw-interval", 0, "bandwidth probe interval (0 = no bandwidth probing)")
bwSize = flag.Int64("bw-probe-size-bytes", 1_000_000, "bandwidth probe size")
)
func main() {
flag.Parse()
if *versionFlag {
fmt.Println(version.Long())
return
}
p := prober.New().WithSpread(*spread).WithOnce(*probeOnce).WithMetricNamespace("derpprobe")
dp, err := prober.DERP(p, *derpMapURL, *interval, *interval, *interval)
opts := []prober.DERPOpt{
prober.WithMeshProbing(*meshInterval),
prober.WithSTUNProbing(*stunInterval),
prober.WithTLSProbing(*tlsInterval),
}
if *bwInterval > 0 {
opts = append(opts, prober.WithBandwidthProbing(*bwInterval, *bwSize))
}
dp, err := prober.DERP(p, *derpMapURL, opts...)
if err != nil {
log.Fatal(err)
}
@ -53,6 +72,7 @@ func main() {
mux := http.NewServeMux()
tsweb.Debugger(mux)
mux.HandleFunc("/", http.HandlerFunc(serveFunc(p)))
log.Printf("Listening on %s", *listen)
log.Fatal(http.ListenAndServe(*listen, mux))
}
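The probing intervals are now passed to prober.DERP as variadic options instead of positional durations. As a rough, hypothetical illustration of the functional-options pattern used here (the names below are invented; the real prober.DERPOpt implementation may differ):

package main

import "time"

// derpOpt mimics an option type like prober.DERPOpt.
type derpOpt func(*derpProbeSettings)

type derpProbeSettings struct {
	meshInterval time.Duration
	bwInterval   time.Duration
	bwSize       int64
}

func withMeshProbing(d time.Duration) derpOpt {
	return func(s *derpProbeSettings) { s.meshInterval = d }
}

func withBandwidthProbing(d time.Duration, size int64) derpOpt {
	return func(s *derpProbeSettings) { s.bwInterval, s.bwSize = d, size }
}

// newDERPProbe applies each option to a default settings struct.
func newDERPProbe(opts ...derpOpt) *derpProbeSettings {
	s := &derpProbeSettings{}
	for _, o := range opts {
		o(s)
	}
	return s
}

func main() {
	s := newDERPProbe(withMeshProbing(15*time.Second), withBandwidthProbing(time.Minute, 1_000_000))
	_ = s
}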

cmd/dist/dist.go

@ -13,11 +13,16 @@ import (
"tailscale.com/release/dist"
"tailscale.com/release/dist/cli"
"tailscale.com/release/dist/qnap"
"tailscale.com/release/dist/synology"
"tailscale.com/release/dist/unixpkgs"
)
var synologyPackageCenter bool
var (
synologyPackageCenter bool
qnapPrivateKeyPath string
qnapCertificatePath string
)
func getTargets() ([]dist.Target, error) {
var ret []dist.Target
@ -33,7 +38,14 @@ func getTargets() ([]dist.Target, error) {
// Since only we can provide packages to Synology for
// distribution, we default to building the "sideload" variant of
// packages that we distribute on pkgs.tailscale.com.
//
// To build for package center, run
// ./tool/go run ./cmd/dist build --synology-package-center synology
ret = append(ret, synology.Targets(synologyPackageCenter, nil)...)
if (qnapPrivateKeyPath == "") != (qnapCertificatePath == "") {
return nil, errors.New("both --qnap-private-key-path and --qnap-certificate-path must be set")
}
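// Illustrative invocation (the trailing "qnap" target filter is an assumption,
// mirroring the synology example above):
//   ./tool/go run ./cmd/dist build --qnap-private-key-path=key.pem --qnap-certificate-path=cert.pem qnap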
ret = append(ret, qnap.Targets(qnapPrivateKeyPath, qnapCertificatePath)...)
return ret, nil
}
@ -42,6 +54,8 @@ func main() {
for _, subcmd := range cmd.Subcommands {
if subcmd.Name == "build" {
subcmd.FlagSet.BoolVar(&synologyPackageCenter, "synology-package-center", false, "build synology packages with extra metadata for the official package center")
subcmd.FlagSet.StringVar(&qnapPrivateKeyPath, "qnap-private-key-path", "", "sign qnap packages with given key (must also provide --qnap-certificate-path)")
subcmd.FlagSet.StringVar(&qnapCertificatePath, "qnap-certificate-path", "", "sign qnap packages with given certificate (must also provide --qnap-private-key-path)")
}
}


@ -7,6 +7,7 @@
package main
import (
"cmp"
"context"
"flag"
"fmt"
@ -16,7 +17,6 @@ import (
"golang.org/x/oauth2/clientcredentials"
"tailscale.com/client/tailscale"
"tailscale.com/util/cmpx"
)
func main() {
@ -40,7 +40,7 @@ func main() {
log.Fatal("at least one tag must be specified")
}
baseURL := cmpx.Or(os.Getenv("TS_BASE_URL"), "https://api.tailscale.com")
baseURL := cmp.Or(os.Getenv("TS_BASE_URL"), "https://api.tailscale.com")
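// cmp.Or, added to the standard library's cmp package in Go 1.22, returns the
// first of its arguments that is not the zero value, so an unset TS_BASE_URL
// falls back to the default API URL. It replaces the tailscale.com/util/cmpx.Or
// helper that is removed above.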
credentials := clientcredentials.Config{
ClientID: clientID,


@ -158,11 +158,13 @@ func main() {
if !ok && (!oiok || !osok) {
log.Fatal("set envvar TS_API_KEY to your Tailscale API key or TS_OAUTH_ID and TS_OAUTH_SECRET to your Tailscale OAuth ID and Secret")
}
if ok && (oiok || osok) {
if apiKey != "" && (oauthId != "" || oauthSecret != "") {
log.Fatal("set either the envvar TS_API_KEY or TS_OAUTH_ID and TS_OAUTH_SECRET")
}
var client *http.Client
if oiok {
if oiok && (oauthId != "" || oauthSecret != "") {
// Both should ideally be set, but if either are non-empty it means the user had an intent
// to set _something_, so they should receive the oauth error flow.
oauthConfig := &clientcredentials.Config{
ClientID: oauthId,
ClientSecret: oauthSecret,


@ -31,10 +31,12 @@ var (
//go:embed hello.tmpl.html
var embeddedTemplate string
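// localClient is a zero-valued tailscale.LocalClient, which talks to the local
// tailscaled; its WhoIs and GetCertificate methods replace the package-level
// helpers previously called below.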
var localClient tailscale.LocalClient
func main() {
flag.Parse()
if *testIP != "" {
res, err := tailscale.WhoIs(context.Background(), *testIP)
res, err := localClient.WhoIs(context.Background(), *testIP)
if err != nil {
log.Fatal(err)
}
@ -76,7 +78,7 @@ func main() {
GetCertificate: func(hi *tls.ClientHelloInfo) (*tls.Certificate, error) {
switch hi.ServerName {
case "hello.ts.net":
return tailscale.GetCertificate(hi)
return localClient.GetCertificate(hi)
case "hello.ipn.dev":
c, err := tls.LoadX509KeyPair(
"/etc/hello/hello.ipn.dev.crt",
@ -170,7 +172,7 @@ func root(w http.ResponseWriter, r *http.Request) {
return
}
who, err := tailscale.WhoIs(r.Context(), r.RemoteAddr)
who, err := localClient.WhoIs(r.Context(), r.RemoteAddr)
var data tmplData
if err != nil {
if devMode() {


@ -430,6 +430,8 @@
<footer class="footer text-gray-600 text-center mb-12">
<p>Read about <a href="https://tailscale.com/kb/1017/install#advanced-features" class="text-blue-600 hover:text-blue-800"
target="_blank">what you can do next &rarr;</a></p>
<p>Read about <a href="https://tailscale.com/kb/1073/hello" class="text-blue-600 hover:text-blue-800"
target="_blank">the Hello service &rarr;</a></p>
</footer>
</main>
</body>

cmd/k8s-nameserver/main.go (new file)

@ -0,0 +1,354 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
// k8s-nameserver is a simple nameserver implementation meant to be used with
// the k8s-operator to resolve MagicDNS names associated with tailnet
// proxies in the cluster.
package main
import (
"context"
"encoding/json"
"fmt"
"log"
"net"
"os"
"os/signal"
"path/filepath"
"sync"
"syscall"
"github.com/fsnotify/fsnotify"
"github.com/miekg/dns"
operatorutils "tailscale.com/k8s-operator"
"tailscale.com/util/dnsname"
)
const (
// tsNetDomain is the domain that this DNS nameserver has registered a handler for.
tsNetDomain = "ts.net"
// addr is the address that the UDP and TCP listeners will listen on.
addr = ":1053"
// The following constants are specific to the nameserver configuration
// provided by a mounted Kubernetes Configmap. The Configmap mounted at
// /config is the only supported way for configuring this nameserver.
defaultDNSConfigDir = "/config"
kubeletMountedConfigLn = "..data"
)
// nameserver is a simple nameserver that responds to DNS queries for A records
// for ts.net domain names over UDP or TCP. It serves DNS responses from
// in-memory IPv4 host records. It is intended to be deployed on Kubernetes with
// a ConfigMap mounted at /config that should contain the host records. It
// dynamically reconfigures its in-memory mappings as the contents of the
// mounted ConfigMap changes.
type nameserver struct {
// configReader returns the latest desired configuration (host records)
// for the nameserver. By default it gets set to a reader that reads
// from a Kubernetes ConfigMap mounted at /config, but this can be
// overridden in tests.
configReader configReaderFunc
// configWatcher is a watcher that returns an event when the desired
// configuration has changed and the nameserver should update the
// in-memory records.
configWatcher <-chan string
mu sync.Mutex // protects following
// ip4 are the in-memory hostname -> IP4 mappings that the nameserver
// uses to respond to A record queries.
ip4 map[dnsname.FQDN][]net.IP
}
func main() {
ctx, cancel := context.WithCancel(context.Background())
// Ensure that we watch the kube Configmap mounted at /config for
// nameserver configuration updates and send events when updates happen.
c := ensureWatcherForKubeConfigMap(ctx)
ns := &nameserver{
configReader: configMapConfigReader,
configWatcher: c,
}
// Ensure that in-memory records get set up to date now and will get
// reset when the configuration changes.
ns.runRecordsReconciler(ctx)
// Register a DNS server handle for ts.net domain names. Not having a
// handle registered for any other domain names is how we enforce that
// this nameserver can only be used for ts.net domains - querying any
// other domain names returns Rcode Refused.
dns.HandleFunc(tsNetDomain, ns.handleFunc())
// Listen for DNS queries over UDP and TCP.
udpSig := make(chan os.Signal)
tcpSig := make(chan os.Signal)
go listenAndServe("udp", addr, udpSig)
go listenAndServe("tcp", addr, tcpSig)
sig := make(chan os.Signal, 1)
signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM)
s := <-sig
log.Printf("OS signal (%s) received, shutting down", s)
cancel() // exit the records reconciler and configmap watcher goroutines
udpSig <- s // stop the UDP listener
tcpSig <- s // stop the TCP listener
}
// handleFunc is a DNS query handler that can respond to A record queries from
// the nameserver's in-memory records.
// - If an A record query is received and the
// nameserver's in-memory records contain records for the queried domain name,
// return a success response.
// - If an A record query is received, but the
// nameserver's in-memory records do not contain records for the queried domain name,
// return NXDOMAIN.
// - If an A record query is received, but the queried domain name is not valid, return Format Error.
// - If a query is received for any other record type than A, return Not Implemented.
func (n *nameserver) handleFunc() func(w dns.ResponseWriter, r *dns.Msg) {
h := func(w dns.ResponseWriter, r *dns.Msg) {
m := new(dns.Msg)
defer func() {
w.WriteMsg(m)
}()
if len(r.Question) < 1 {
log.Print("[unexpected] nameserver received a request with no questions")
m = r.SetRcodeFormatError(r)
return
}
// TODO (irbekrm): maybe set message compression
switch r.Question[0].Qtype {
case dns.TypeA:
q := r.Question[0].Name
fqdn, err := dnsname.ToFQDN(q)
if err != nil {
m = r.SetRcodeFormatError(r)
return
}
// The only supported use of this nameserver is as a
// single source of truth for MagicDNS names by
// non-tailnet Kubernetes workloads.
m.Authoritative = true
m.RecursionAvailable = false
ips := n.lookupIP4(fqdn)
if ips == nil || len(ips) == 0 {
// As we are the authoritative nameserver for MagicDNS
// names, if we do not have a record for this MagicDNS
// name, it does not exist.
m = m.SetRcode(r, dns.RcodeNameError)
return
}
// TODO (irbekrm): TTL is currently set to 0, meaning
// that cluster workloads will not cache the DNS
// records. Revisit this in the future when we understand
// the usage patterns better - is it putting too much
// load on the kube DNS server, or is this fine?
for _, ip := range ips {
rr := &dns.A{Hdr: dns.RR_Header{Name: q, Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 0}, A: ip}
m.SetRcode(r, dns.RcodeSuccess)
m.Answer = append(m.Answer, rr)
}
case dns.TypeAAAA:
// TODO (irbekrm): implement IPv6 support.
// Kubernetes distributions that I am most familiar with
// default to IPv4 for Pod CIDR ranges and in many cases don't
// support IPv6 at all, so this should not be crucial for now.
fallthrough
default:
log.Printf("[unexpected] nameserver received a query for an unsupported record type: %s", r.Question[0].String())
m.SetRcode(r, dns.RcodeNotImplemented)
}
}
return h
}
// runRecordsReconciler ensures that nameserver's in-memory records are
// reset when the provided configuration changes.
func (n *nameserver) runRecordsReconciler(ctx context.Context) {
log.Print("updating nameserver's records from the provided configuration...")
if err := n.resetRecords(); err != nil { // ensure records are up to date before the nameserver starts
log.Fatalf("error setting nameserver's records: %v", err)
}
log.Print("nameserver's records were updated")
go func() {
for {
select {
case <-ctx.Done():
log.Printf("context cancelled, exiting records reconciler")
return
case <-n.configWatcher:
log.Print("configuration update detected, resetting records")
if err := n.resetRecords(); err != nil {
// TODO (irbekrm): this runs in a
// container that will be thrown away,
// so this should be ok. But maybe still
// need to ensure that the DNS server
// terminates connections more
// gracefully.
log.Fatalf("error resetting records: %v", err)
}
log.Print("nameserver records were reset")
}
}
}()
}
// resetRecords sets the in-memory DNS records of this nameserver from the
// provided configuration. It does not check for the diff, so the caller is
// expected to ensure that this is only called when reset is needed.
func (n *nameserver) resetRecords() error {
dnsCfgBytes, err := n.configReader()
if err != nil {
log.Printf("error reading nameserver's configuration: %v", err)
return err
}
if dnsCfgBytes == nil || len(dnsCfgBytes) < 1 {
log.Print("nameserver's configuration is empty, any in-memory records will be unset")
n.mu.Lock()
n.ip4 = make(map[dnsname.FQDN][]net.IP)
n.mu.Unlock()
return nil
}
dnsCfg := &operatorutils.Records{}
err = json.Unmarshal(dnsCfgBytes, dnsCfg)
if err != nil {
return fmt.Errorf("error unmarshalling nameserver configuration: %v\n", err)
}
if dnsCfg.Version != operatorutils.Alpha1Version {
return fmt.Errorf("unsupported configuration version %s, supported versions are %s\n", dnsCfg.Version, operatorutils.Alpha1Version)
}
ip4 := make(map[dnsname.FQDN][]net.IP)
defer func() {
n.mu.Lock()
defer n.mu.Unlock()
n.ip4 = ip4
}()
if len(dnsCfg.IP4) == 0 {
log.Print("nameserver's configuration contains no records, any in-memory records will be unset")
return nil
}
for fqdn, ips := range dnsCfg.IP4 {
fqdn, err := dnsname.ToFQDN(fqdn)
if err != nil {
log.Printf("invalid nameserver's configuration: %s is not a valid FQDN: %v; skipping this record", fqdn, err)
continue // one invalid hostname should not break the whole nameserver
}
for _, ipS := range ips {
ip := net.ParseIP(ipS).To4()
if ip == nil { // To4 returns nil if IP is not a IPv4 address
log.Printf("invalid nameserver's configuration: %v does not appear to be an IPv4 address; skipping this record", ipS)
continue // one invalid IP address should not break the whole nameserver
}
ip4[fqdn] = []net.IP{ip}
}
}
return nil
}
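// For illustration, the mounted records.json that resetRecords consumes looks
// roughly like this (shape inferred from the tests in this change; the hostname
// and IP are hypothetical):
//
//	{
//	  "version": "v1alpha1",
//	  "ip4": {
//	    "foo.bar.ts.net": ["100.64.0.1"]
//	  }
//	}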
// listenAndServe starts a DNS server for the provided network and address.
func listenAndServe(net, addr string, shutdown chan os.Signal) {
s := &dns.Server{Addr: addr, Net: net}
go func() {
<-shutdown
log.Printf("shutting down server for %s", net)
s.Shutdown()
}()
log.Printf("listening for %s queries on %s", net, addr)
if err := s.ListenAndServe(); err != nil {
log.Fatalf("error running %s server: %v", net, err)
}
}
// ensureWatcherForKubeConfigMap sets up a new file watcher for the ConfigMap
// that's expected to be mounted at /config. Returns a channel that receives an
// event every time the contents get updated.
func ensureWatcherForKubeConfigMap(ctx context.Context) chan string {
c := make(chan string)
watcher, err := fsnotify.NewWatcher()
if err != nil {
log.Fatalf("error creating a new watcher for the mounted ConfigMap: %v", err)
}
// kubelet mounts the ConfigMap into a Pod using a series of symlinks, one of
// which is <mount-dir>/..data, which Kubernetes recommends consumers
// use if they need to monitor changes
// https://github.com/kubernetes/kubernetes/blob/v1.28.1/pkg/volume/util/atomic_writer.go#L39-L61
toWatch := filepath.Join(defaultDNSConfigDir, kubeletMountedConfigLn)
go func() {
defer watcher.Close()
log.Printf("starting file watch for %s", defaultDNSConfigDir)
for {
select {
case <-ctx.Done():
log.Print("context cancelled, exiting ConfigMap watcher")
return
case event, ok := <-watcher.Events:
if !ok {
log.Fatal("watcher finished; exiting")
}
if event.Name == toWatch {
msg := fmt.Sprintf("ConfigMap update received: %s", event)
log.Print(msg)
c <- msg
}
case err, ok := <-watcher.Errors:
if err != nil {
// TODO (irbekrm): this runs in a
// container that will be thrown away,
// so this should be ok. But maybe still
// need to ensure that the DNS server
// terminates connections more
// gracefully.
log.Fatalf("[unexpected] error watching configuration: %v", err)
}
if !ok {
// TODO (irbekrm): this runs in a
// container that will be thrown away,
// so this should be ok. But maybe still
// need to ensure that the DNS server
// terminates connections more
// gracefully.
log.Fatalf("[unexpected] errors watcher exited")
}
}
}
}()
if err = watcher.Add(defaultDNSConfigDir); err != nil {
log.Fatalf("failed setting up a watcher for the mounted ConfigMap: %v", err)
}
return c
}
// configReaderFunc is a function that returns the desired nameserver configuration.
type configReaderFunc func() ([]byte, error)
// configMapConfigReader reads the desired nameserver configuration from a
// records.json file in a ConfigMap mounted at /config.
var configMapConfigReader configReaderFunc = func() ([]byte, error) {
if contents, err := os.ReadFile(filepath.Join(defaultDNSConfigDir, operatorutils.DNSRecordsCMKey)); err == nil {
return contents, nil
} else if os.IsNotExist(err) {
return nil, nil
} else {
return nil, err
}
}
// lookupIP4 returns any IPv4 addresses for the given FQDN from nameserver's
// in-memory records.
func (n *nameserver) lookupIP4(fqdn dnsname.FQDN) []net.IP {
if n.ip4 == nil {
return nil
}
n.mu.Lock()
defer n.mu.Unlock()
f := n.ip4[fqdn]
return f
}
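A quick way to exercise the nameserver, sketched with the same github.com/miekg/dns library used above (the MagicDNS name is hypothetical and the example assumes the server is reachable on 127.0.0.1:1053):

package main

import (
	"fmt"
	"log"

	"github.com/miekg/dns"
)

func main() {
	c := new(dns.Client)
	m := new(dns.Msg)
	m.SetQuestion("foo.bar.ts.net.", dns.TypeA) // hypothetical MagicDNS name
	resp, _, err := c.Exchange(m, "127.0.0.1:1053")
	if err != nil {
		log.Fatal(err)
	}
	for _, rr := range resp.Answer {
		fmt.Println(rr)
	}
}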


@ -0,0 +1,227 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
package main
import (
"net"
"testing"
"github.com/google/go-cmp/cmp"
"github.com/miekg/dns"
"tailscale.com/util/dnsname"
)
func TestNameserver(t *testing.T) {
tests := []struct {
name string
ip4 map[dnsname.FQDN][]net.IP
query *dns.Msg
wantResp *dns.Msg
}{
{
name: "A record query, record exists",
ip4: map[dnsname.FQDN][]net.IP{dnsname.FQDN("foo.bar.com."): {{1, 2, 3, 4}}},
query: &dns.Msg{
Question: []dns.Question{{Name: "foo.bar.com", Qtype: dns.TypeA}},
MsgHdr: dns.MsgHdr{Id: 1, RecursionDesired: true},
},
wantResp: &dns.Msg{
Answer: []dns.RR{&dns.A{Hdr: dns.RR_Header{
Name: "foo.bar.com", Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 0},
A: net.IP{1, 2, 3, 4}}},
Question: []dns.Question{{Name: "foo.bar.com", Qtype: dns.TypeA}},
MsgHdr: dns.MsgHdr{
Id: 1,
Rcode: dns.RcodeSuccess,
RecursionAvailable: false,
RecursionDesired: true,
Response: true,
Opcode: dns.OpcodeQuery,
Authoritative: true,
}},
},
{
name: "A record query, record does not exist",
ip4: map[dnsname.FQDN][]net.IP{dnsname.FQDN("foo.bar.com."): {{1, 2, 3, 4}}},
query: &dns.Msg{
Question: []dns.Question{{Name: "baz.bar.com", Qtype: dns.TypeA}},
MsgHdr: dns.MsgHdr{Id: 1},
},
wantResp: &dns.Msg{
Question: []dns.Question{{Name: "baz.bar.com", Qtype: dns.TypeA}},
MsgHdr: dns.MsgHdr{
Id: 1,
Rcode: dns.RcodeNameError,
RecursionAvailable: false,
Response: true,
Opcode: dns.OpcodeQuery,
Authoritative: true,
}},
},
{
name: "A record query, but the name is not a valid FQDN",
ip4: map[dnsname.FQDN][]net.IP{dnsname.FQDN("foo.bar.com."): {{1, 2, 3, 4}}},
query: &dns.Msg{
Question: []dns.Question{{Name: "foo..bar.com", Qtype: dns.TypeA}},
MsgHdr: dns.MsgHdr{Id: 1},
},
wantResp: &dns.Msg{
Question: []dns.Question{{Name: "foo..bar.com", Qtype: dns.TypeA}},
MsgHdr: dns.MsgHdr{
Id: 1,
Rcode: dns.RcodeFormatError,
Response: true,
Opcode: dns.OpcodeQuery,
}},
},
{
name: "AAAA record query",
ip4: map[dnsname.FQDN][]net.IP{dnsname.FQDN("foo.bar.com."): {{1, 2, 3, 4}}},
query: &dns.Msg{
Question: []dns.Question{{Name: "foo.bar.com", Qtype: dns.TypeAAAA}},
MsgHdr: dns.MsgHdr{Id: 1},
},
wantResp: &dns.Msg{
Question: []dns.Question{{Name: "foo.bar.com", Qtype: dns.TypeAAAA}},
MsgHdr: dns.MsgHdr{
Id: 1,
Rcode: dns.RcodeNotImplemented,
Response: true,
Opcode: dns.OpcodeQuery,
}},
},
{
name: "CNAME record query",
ip4: map[dnsname.FQDN][]net.IP{dnsname.FQDN("foo.bar.com."): {{1, 2, 3, 4}}},
query: &dns.Msg{
Question: []dns.Question{{Name: "foo.bar.com", Qtype: dns.TypeCNAME}},
MsgHdr: dns.MsgHdr{Id: 1},
},
wantResp: &dns.Msg{
Question: []dns.Question{{Name: "foo.bar.com", Qtype: dns.TypeCNAME}},
MsgHdr: dns.MsgHdr{
Id: 1,
Rcode: dns.RcodeNotImplemented,
Response: true,
Opcode: dns.OpcodeQuery,
}},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ns := &nameserver{
ip4: tt.ip4,
}
handler := ns.handleFunc()
fakeRespW := &fakeResponseWriter{}
handler(fakeRespW, tt.query)
if diff := cmp.Diff(*fakeRespW.msg, *tt.wantResp); diff != "" {
t.Fatalf("unexpected response (-got +want): \n%s", diff)
}
})
}
}
func TestResetRecords(t *testing.T) {
tests := []struct {
name string
config []byte
hasIp4 map[dnsname.FQDN][]net.IP
wantsIp4 map[dnsname.FQDN][]net.IP
wantsErr bool
}{
{
name: "previously empty nameserver.ip4 gets set",
config: []byte(`{"version": "v1alpha1", "ip4": {"foo.bar.com": ["1.2.3.4"]}}`),
wantsIp4: map[dnsname.FQDN][]net.IP{"foo.bar.com.": {{1, 2, 3, 4}}},
},
{
name: "nameserver.ip4 gets reset",
hasIp4: map[dnsname.FQDN][]net.IP{"baz.bar.com.": {{1, 1, 3, 3}}},
config: []byte(`{"version": "v1alpha1", "ip4": {"foo.bar.com": ["1.2.3.4"]}}`),
wantsIp4: map[dnsname.FQDN][]net.IP{"foo.bar.com.": {{1, 2, 3, 4}}},
},
{
name: "configuration with incompatible version",
hasIp4: map[dnsname.FQDN][]net.IP{"baz.bar.com.": {{1, 1, 3, 3}}},
config: []byte(`{"version": "v1beta1", "ip4": {"foo.bar.com": ["1.2.3.4"]}}`),
wantsIp4: map[dnsname.FQDN][]net.IP{"baz.bar.com.": {{1, 1, 3, 3}}},
wantsErr: true,
},
{
name: "nameserver.ip4 gets reset to empty config when no configuration is provided",
hasIp4: map[dnsname.FQDN][]net.IP{"baz.bar.com.": {{1, 1, 3, 3}}},
wantsIp4: make(map[dnsname.FQDN][]net.IP),
},
{
name: "nameserver.ip4 gets reset to empty config when the provided configuration is empty",
hasIp4: map[dnsname.FQDN][]net.IP{"baz.bar.com.": {{1, 1, 3, 3}}},
config: []byte(`{"version": "v1alpha1", "ip4": {}}`),
wantsIp4: make(map[dnsname.FQDN][]net.IP),
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ns := &nameserver{
ip4: tt.hasIp4,
configReader: func() ([]byte, error) { return tt.config, nil },
}
if err := ns.resetRecords(); err == nil == tt.wantsErr {
t.Errorf("resetRecords() returned err: %v, wantsErr: %v", err, tt.wantsErr)
}
if diff := cmp.Diff(ns.ip4, tt.wantsIp4); diff != "" {
t.Fatalf("unexpected nameserver.ip4 contents (-got +want): \n%s", diff)
}
})
}
}
// fakeResponseWriter is a faked out dns.ResponseWriter that can be used in
// tests that need to read the response message that was written.
type fakeResponseWriter struct {
msg *dns.Msg
}
var _ dns.ResponseWriter = &fakeResponseWriter{}
func (fr *fakeResponseWriter) WriteMsg(msg *dns.Msg) error {
fr.msg = msg
return nil
}
func (fr *fakeResponseWriter) LocalAddr() net.Addr {
return nil
}
func (fr *fakeResponseWriter) RemoteAddr() net.Addr {
return nil
}
func (fr *fakeResponseWriter) Write([]byte) (int, error) {
return 0, nil
}
func (fr *fakeResponseWriter) Close() error {
return nil
}
func (fr *fakeResponseWriter) TsigStatus() error {
return nil
}
func (fr *fakeResponseWriter) TsigTimersOnly(bool) {}
func (fr *fakeResponseWriter) Hijack() {}


@ -164,6 +164,17 @@ func (a *ConnectorReconciler) maybeProvisionConnector(ctx context.Context, logge
hostname = string(cn.Spec.Hostname)
}
crl := childResourceLabels(cn.Name, a.tsnamespace, "connector")
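// If the Connector specifies a ProxyClass, wait until that ProxyClass is
// marked Ready before applying its configuration to the proxy resources.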
proxyClass := cn.Spec.ProxyClass
if proxyClass != "" {
if ready, err := proxyClassIsReady(ctx, proxyClass, a.Client); err != nil {
return fmt.Errorf("error verifying ProxyClass for Connector: %w", err)
} else if !ready {
logger.Infof("ProxyClass %s specified for the Connector, but is not (yet) Ready, waiting..", proxyClass)
return nil
}
}
sts := &tailscaleSTSConfig{
ParentResourceName: cn.Name,
ParentResourceUID: string(cn.UID),
@ -173,6 +184,7 @@ func (a *ConnectorReconciler) maybeProvisionConnector(ctx context.Context, logge
Connector: &connector{
isExitNode: cn.Spec.ExitNode,
},
ProxyClass: proxyClass,
}
if cn.Spec.SubnetRouter != nil && len(cn.Spec.SubnetRouter.AdvertiseRoutes) > 0 {


@ -67,45 +67,40 @@ func TestConnector(t *testing.T) {
fullName, shortName := findGenName(t, fc, "", "test", "connector")
opts := configOpts{
stsName: shortName,
secretName: fullName,
parentType: "connector",
hostname: "test-connector",
shouldUseDeclarativeConfig: true,
isExitNode: true,
subnetRoutes: "10.40.0.0/14",
confFileHash: "9321660203effb80983eaecc7b5ac5a8c53934926f46e895b9fe295dcfc5a904",
stsName: shortName,
secretName: fullName,
parentType: "connector",
hostname: "test-connector",
isExitNode: true,
subnetRoutes: "10.40.0.0/14",
}
expectEqual(t, fc, expectedSecret(t, opts))
expectEqual(t, fc, expectedSTS(opts))
expectEqual(t, fc, expectedSecret(t, opts), nil)
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// Add another route to be advertised.
mustUpdate[tsapi.Connector](t, fc, "", "test", func(conn *tsapi.Connector) {
conn.Spec.SubnetRouter.AdvertiseRoutes = []tsapi.Route{"10.40.0.0/14", "10.44.0.0/20"}
})
opts.subnetRoutes = "10.40.0.0/14,10.44.0.0/20"
opts.confFileHash = "fb6c4daf67425f983985750cd8d6f2beae77e614fcb34176604571f5623d6862"
expectReconciled(t, cr, "", "test")
expectEqual(t, fc, expectedSTS(opts))
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// Remove a route.
mustUpdate[tsapi.Connector](t, fc, "", "test", func(conn *tsapi.Connector) {
conn.Spec.SubnetRouter.AdvertiseRoutes = []tsapi.Route{"10.44.0.0/20"}
})
opts.subnetRoutes = "10.44.0.0/20"
opts.confFileHash = "bacba177bcfe3849065cf6fee53d658a9bb4144197ac5b861727d69ea99742bb"
expectReconciled(t, cr, "", "test")
expectEqual(t, fc, expectedSTS(opts))
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// Remove the subnet router.
mustUpdate[tsapi.Connector](t, fc, "", "test", func(conn *tsapi.Connector) {
conn.Spec.SubnetRouter = nil
})
opts.subnetRoutes = ""
opts.confFileHash = "7c421a99128eb80e79a285a82702f19f8f720615542a15bd794858a6275d8079"
expectReconciled(t, cr, "", "test")
expectEqual(t, fc, expectedSTS(opts))
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// Re-add the subnet router.
mustUpdate[tsapi.Connector](t, fc, "", "test", func(conn *tsapi.Connector) {
@ -114,9 +109,8 @@ func TestConnector(t *testing.T) {
}
})
opts.subnetRoutes = "10.44.0.0/20"
opts.confFileHash = "bacba177bcfe3849065cf6fee53d658a9bb4144197ac5b861727d69ea99742bb"
expectReconciled(t, cr, "", "test")
expectEqual(t, fc, expectedSTS(opts))
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// Delete the Connector.
if err = fc.Delete(context.Background(), cn); err != nil {
@ -152,25 +146,22 @@ func TestConnector(t *testing.T) {
fullName, shortName = findGenName(t, fc, "", "test", "connector")
opts = configOpts{
stsName: shortName,
secretName: fullName,
parentType: "connector",
shouldUseDeclarativeConfig: true,
subnetRoutes: "10.40.0.0/14",
hostname: "test-connector",
confFileHash: "57d922331890c9b1c8c6ae664394cb254334c551d9cd9db14537b5d9da9fb17e",
stsName: shortName,
secretName: fullName,
parentType: "connector",
subnetRoutes: "10.40.0.0/14",
hostname: "test-connector",
}
expectEqual(t, fc, expectedSecret(t, opts))
expectEqual(t, fc, expectedSTS(opts))
expectEqual(t, fc, expectedSecret(t, opts), nil)
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// Add an exit node.
mustUpdate[tsapi.Connector](t, fc, "", "test", func(conn *tsapi.Connector) {
conn.Spec.ExitNode = true
})
opts.isExitNode = true
opts.confFileHash = "1499b591fd97a50f0330db6ec09979792c49890cf31f5da5bb6a3f50dba1e77a"
expectReconciled(t, cr, "", "test")
expectEqual(t, fc, expectedSTS(opts))
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// Delete the Connector.
if err = fc.Delete(context.Background(), cn); err != nil {
@ -183,3 +174,103 @@ func TestConnector(t *testing.T) {
expectMissing[appsv1.StatefulSet](t, fc, "operator-ns", shortName)
expectMissing[corev1.Secret](t, fc, "operator-ns", fullName)
}
func TestConnectorWithProxyClass(t *testing.T) {
// Setup
pc := &tsapi.ProxyClass{
ObjectMeta: metav1.ObjectMeta{Name: "custom-metadata"},
Spec: tsapi.ProxyClassSpec{StatefulSet: &tsapi.StatefulSet{
Labels: map[string]string{"foo": "bar"},
Annotations: map[string]string{"bar.io/foo": "some-val"},
Pod: &tsapi.Pod{Annotations: map[string]string{"foo.io/bar": "some-val"}}}},
}
cn := &tsapi.Connector{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
UID: types.UID("1234-UID"),
},
TypeMeta: metav1.TypeMeta{
Kind: tsapi.ConnectorKind,
APIVersion: "tailscale.io/v1alpha1",
},
Spec: tsapi.ConnectorSpec{
SubnetRouter: &tsapi.SubnetRouter{
AdvertiseRoutes: []tsapi.Route{"10.40.0.0/14"},
},
ExitNode: true,
},
}
fc := fake.NewClientBuilder().
WithScheme(tsapi.GlobalScheme).
WithObjects(pc, cn).
WithStatusSubresource(pc, cn).
Build()
ft := &fakeTSClient{}
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
cl := tstest.NewClock(tstest.ClockOpts{})
cr := &ConnectorReconciler{
Client: fc,
clock: cl,
ssr: &tailscaleSTSReconciler{
Client: fc,
tsClient: ft,
defaultTags: []string{"tag:k8s"},
operatorNamespace: "operator-ns",
proxyImage: "tailscale/tailscale",
},
logger: zl.Sugar(),
}
// 1. Connector is created with no ProxyClass specified, create
// resources with the default configuration.
expectReconciled(t, cr, "", "test")
fullName, shortName := findGenName(t, fc, "", "test", "connector")
opts := configOpts{
stsName: shortName,
secretName: fullName,
parentType: "connector",
hostname: "test-connector",
isExitNode: true,
subnetRoutes: "10.40.0.0/14",
}
expectEqual(t, fc, expectedSecret(t, opts), nil)
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// 2. Update Connector to specify a ProxyClass. ProxyClass is not yet
// ready, so its configuration is NOT applied to the Connector
// resources.
mustUpdate(t, fc, "", "test", func(conn *tsapi.Connector) {
conn.Spec.ProxyClass = "custom-metadata"
})
expectReconciled(t, cr, "", "test")
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// 3. ProxyClass is set to Ready by proxy-class reconciler. Connector
// get reconciled and configuration from the ProxyClass is applied to
// its resources.
mustUpdateStatus(t, fc, "", "custom-metadata", func(pc *tsapi.ProxyClass) {
pc.Status = tsapi.ProxyClassStatus{
Conditions: []tsapi.ConnectorCondition{{
Status: metav1.ConditionTrue,
Type: tsapi.ProxyClassready,
ObservedGeneration: pc.Generation,
}}}
})
opts.proxyClass = pc.Name
expectReconciled(t, cr, "", "test")
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// 4. Connector.spec.proxyClass field is unset, Connector gets
// reconciled and configuration from the ProxyClass is removed from the
// cluster resources for the Connector.
mustUpdate(t, fc, "", "test", func(conn *tsapi.Connector) {
conn.Spec.ProxyClass = ""
})
opts.proxyClass = ""
expectReconciled(t, cr, "", "test")
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
}
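For context, a Connector manifest exercising the new spec.proxyClass field might look like this (names and routes are illustrative):

apiVersion: tailscale.com/v1alpha1
kind: Connector
metadata:
  name: prod-connector
spec:
  proxyClass: prod
  hostname: prod-connector
  exitNode: true
  subnetRouter:
    advertiseRoutes:
      - "10.40.0.0/14"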


@ -21,6 +21,9 @@ spec:
{{- end }}
labels:
app: operator
{{- with .Values.operatorConfig.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
@ -49,6 +52,8 @@ spec:
image: {{ .Values.operatorConfig.image.repo }}{{- if .Values.operatorConfig.image.digest -}}{{ printf "@%s" .Values.operatorConfig.image.digest}}{{- else -}}{{ printf "%s" $operatorTag }}{{- end }}
imagePullPolicy: {{ .Values.operatorConfig.image.pullPolicy }}
env:
- name: OPERATOR_INITIAL_TAGS
value: {{ join "," .Values.operatorConfig.defaultTags }}
- name: OPERATOR_HOSTNAME
value: {{ .Values.operatorConfig.hostname }}
- name: OPERATOR_SECRET


@ -22,7 +22,10 @@ rules:
resources: ["ingressclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: ["tailscale.com"]
resources: ["connectors", "connectors/status"]
resources: ["connectors", "connectors/status", "proxyclasses", "proxyclasses/status"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["tailscale.com"]
resources: ["dnsconfigs", "dnsconfigs/status"]
verbs: ["get", "list", "watch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
@ -45,11 +48,14 @@ metadata:
namespace: {{ .Release.Namespace }}
rules:
- apiGroups: [""]
resources: ["secrets"]
resources: ["secrets", "serviceaccounts", "configmaps"]
verbs: ["*"]
- apiGroups: ["apps"]
resources: ["statefulsets"]
resources: ["statefulsets", "deployments"]
verbs: ["*"]
- apiGroups: ["discovery.k8s.io"]
resources: ["endpointslices"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding


@ -12,9 +12,16 @@ oauth: {}
# of chart installation. We do not use Helm's CRD installation mechanism as that
# does not allow for upgrading CRDs.
# https://helm.sh/docs/chart_best_practices/custom_resource_definitions/
installCRDs: "true"
installCRDs: true
operatorConfig:
# ACL tag that operator will be tagged with. Operator must be made owner of
# these tags
# https://tailscale.com/kb/1236/kubernetes-operator/?q=operator#setting-up-the-kubernetes-operator
# Multiple tags are defined as array items and passed to the operator as a comma-separated string
defaultTags:
- "tag:k8s-operator"
image:
repo: tailscale/k8s-operator
# Digest will be prioritized over tag. If neither are set appVersion will be
@ -30,6 +37,7 @@ operatorConfig:
resources: {}
podAnnotations: {}
podLabels: {}
tolerations: []
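As an illustrative values override, an installation that wants the operator tagged with an additional tag could set:

operatorConfig:
  defaultTags:
    - "tag:k8s-operator"
    - "tag:prod"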


@ -31,6 +31,7 @@ spec:
name: v1alpha1
schema:
openAPIV3Schema:
description: 'Connector defines a Tailscale node that will be deployed in the cluster. The node can be configured to act as a Tailscale subnet router and/or a Tailscale exit node. Connector is a cluster-scoped resource. More info: https://tailscale.com/kb/1236/kubernetes-operator#deploying-exit-nodes-and-subnet-routers-on-kubernetes-using-connector-custom-resource'
type: object
required:
- spec
@ -44,7 +45,7 @@ spec:
metadata:
type: object
spec:
description: ConnectorSpec describes the desired Tailscale component.
description: 'ConnectorSpec describes the desired Tailscale component. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status'
type: object
properties:
exitNode:
@ -54,6 +55,9 @@ spec:
description: Hostname is the tailnet hostname that should be assigned to the Connector node. If unset, hostname defaults to <connector name>-connector. Hostname can contain lower case letters, numbers and dashes, it must not start or end with a dash and must be between 2 and 63 characters long.
type: string
pattern: ^[a-z0-9][a-z0-9-]{0,61}[a-z0-9]$
proxyClass:
description: ProxyClass is the name of the ProxyClass custom resource that contains configuration options that should be applied to the resources created for this Connector. If unset, the operator will create resources with the default configuration.
type: string
subnetRouter:
description: SubnetRouter defines subnet routes that the Connector node should expose to tailnet. If unset, none are exposed. https://tailscale.com/kb/1019/subnets/
type: object


@ -0,0 +1,96 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.13.0
name: dnsconfigs.tailscale.com
spec:
group: tailscale.com
names:
kind: DNSConfig
listKind: DNSConfigList
plural: dnsconfigs
shortNames:
- dc
singular: dnsconfig
scope: Cluster
versions:
- additionalPrinterColumns:
- description: Service IP address of the nameserver
jsonPath: .status.nameserver.ip
name: NameserverIP
type: string
name: v1alpha1
schema:
openAPIV3Schema:
type: object
required:
- spec
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
type: object
required:
- nameserver
properties:
nameserver:
type: object
properties:
image:
type: object
properties:
repo:
type: string
tag:
type: string
status:
type: object
properties:
conditions:
type: array
items:
description: ConnectorCondition contains condition information for a Connector.
type: object
required:
- status
- type
properties:
lastTransitionTime:
description: LastTransitionTime is the timestamp corresponding to the last status change of this condition.
type: string
format: date-time
message:
description: Message is a human readable description of the details of the last transition, complementing reason.
type: string
observedGeneration:
description: If set, this represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.condition[x].observedGeneration is 9, the condition is out of date with respect to the current state of the Connector.
type: integer
format: int64
reason:
description: Reason is a brief machine readable explanation for the condition's last transition.
type: string
status:
description: Status of the condition, one of ('True', 'False', 'Unknown').
type: string
type:
description: Type of the condition, known values are (`SubnetRouterReady`).
type: string
x-kubernetes-list-map-keys:
- type
x-kubernetes-list-type: map
nameserver:
type: object
properties:
ip:
type: string
served: true
storage: true
subresources:
status: {}

File diff suppressed because it is too large.


@ -0,0 +1,9 @@
apiVersion: tailscale.com/v1alpha1
kind: DNSConfig
metadata:
name: ts-dns
spec:
nameserver:
image:
repo: tailscale/k8s-nameserver
tag: unstable-v1.65


@ -0,0 +1,17 @@
apiVersion: tailscale.com/v1alpha1
kind: ProxyClass
metadata:
name: prod
spec:
metrics:
enable: true
statefulSet:
annotations:
platform-component: infra
pod:
labels:
team: eng
nodeSelector:
kubernetes.io/os: "linux"
imagePullSecrets:
- name: "foo"


@ -0,0 +1,4 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: dnsrecords


@ -0,0 +1,37 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: nameserver
spec:
replicas: 1
revisionHistoryLimit: 5
selector:
matchLabels:
app: nameserver
strategy:
type: Recreate
template:
metadata:
labels:
app: nameserver
spec:
containers:
- imagePullPolicy: IfNotPresent
name: nameserver
ports:
- name: tcp
protocol: TCP
containerPort: 1053
- name: udp
protocol: UDP
containerPort: 1053
volumeMounts:
- name: dnsrecords
mountPath: /config
restartPolicy: Always
serviceAccount: nameserver
serviceAccountName: nameserver
volumes:
- name: dnsrecords
configMap:
name: dnsrecords


@ -0,0 +1,4 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: nameserver


@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
name: nameserver
spec:
selector:
app: nameserver
ports:
- name: udp
targetPort: 1053
port: 53
protocol: UDP
- name: tcp
targetPort: 1053
port: 53
protocol: TCP

File diff suppressed because it is too large.


@ -14,10 +14,8 @@ spec:
- name: sysctler
securityContext:
privileged: true
command: ["/bin/sh"]
args:
- -c
- sysctl -w net.ipv4.ip_forward=1 net.ipv6.conf.all.forwarding=1
command: ["/bin/sh", "-c"]
args: [sysctl -w net.ipv4.ip_forward=1 && if sysctl net.ipv6.conf.all.forwarding; then sysctl -w net.ipv6.conf.all.forwarding=1; fi]
resources:
requests:
cpu: 1m
@ -28,8 +26,10 @@ spec:
env:
- name: TS_USERSPACE
value: "false"
- name: TS_AUTH_ONCE
value: "true"
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
securityContext:
capabilities:
add:


@ -20,5 +20,7 @@ spec:
env:
- name: TS_USERSPACE
value: "true"
- name: TS_AUTH_ONCE
value: "true"
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP


@ -0,0 +1,337 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
// tailscale-operator provides a way to expose services running in a Kubernetes
// cluster to your Tailnet and to make Tailscale nodes available to cluster
// workloads
package main
import (
"context"
"encoding/json"
"fmt"
"slices"
"go.uber.org/zap"
corev1 "k8s.io/api/core/v1"
discoveryv1 "k8s.io/api/discovery/v1"
networkingv1 "k8s.io/api/networking/v1"
apiequality "k8s.io/apimachinery/pkg/api/equality"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/types"
"k8s.io/utils/net"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
operatorutils "tailscale.com/k8s-operator"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/util/mak"
)
const (
dnsRecordsRecocilerFinalizer = "tailscale.com/dns-records-reconciler"
annotationTSMagicDNSName = "tailscale.com/magic-dnsname"
)
// dnsRecordsReconciler knows how to update dnsrecords ConfigMap with DNS
// records.
// The records that it creates are:
// - For tailscale Ingress, a mapping of the Ingress's MagicDNSName to the IP address of
// the ingress proxy Pod.
// - For egress proxies configured via tailscale.com/tailnet-fqdn annotation, a
// mapping of the tailnet FQDN to the IP address of the egress proxy Pod.
//
// Records will only be created if there is exactly one ready
// tailscale.com/v1alpha1.DNSConfig instance in the cluster (so that we know
// that there is a ts.net nameserver deployed in the cluster).
type dnsRecordsReconciler struct {
client.Client
tsNamespace string // namespace in which we provision tailscale resources
logger *zap.SugaredLogger
isDefaultLoadBalancer bool // true if operator is the default ingress controller in this cluster
}
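// For illustration (the FQDN is hypothetical): an egress proxy configured via
// the tailscale.com/tailnet-fqdn annotation, e.g.
//
//	metadata:
//	  annotations:
//	    tailscale.com/tailnet-fqdn: db.example.ts.net
//
// results in a dnsrecords entry mapping that FQDN to the egress proxy Pod IPs.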
// Reconcile takes a reconcile.Request for a headless Service fronting a
// tailscale proxy and updates DNS Records in dnsrecords ConfigMap for the
// in-cluster ts.net nameserver if required.
func (dnsRR *dnsRecordsReconciler) Reconcile(ctx context.Context, req reconcile.Request) (res reconcile.Result, err error) {
logger := dnsRR.logger.With("Service", req.NamespacedName)
logger.Debugf("starting reconcile")
defer logger.Debugf("reconcile finished")
headlessSvc := new(corev1.Service)
err = dnsRR.Client.Get(ctx, req.NamespacedName, headlessSvc)
if apierrors.IsNotFound(err) {
logger.Debugf("Service not found")
return reconcile.Result{}, nil
}
if err != nil {
return reconcile.Result{}, fmt.Errorf("failed to get Service: %w", err)
}
if !(isManagedByType(headlessSvc, "svc") || isManagedByType(headlessSvc, "ingress")) {
logger.Debugf("Service is not a headless Service for a tailscale ingress or egress proxy; do nothing")
return reconcile.Result{}, nil
}
if !headlessSvc.DeletionTimestamp.IsZero() {
logger.Debug("Service is being deleted, clean up resources")
return reconcile.Result{}, dnsRR.maybeCleanup(ctx, headlessSvc, logger)
}
// Check that there is a ts.net nameserver deployed to the cluster by
// checking that there is tailscale.com/v1alpha1.DNSConfig resource in a
// Ready state.
dnsCfgLst := new(tsapi.DNSConfigList)
if err = dnsRR.List(ctx, dnsCfgLst); err != nil {
return reconcile.Result{}, fmt.Errorf("error listing DNSConfigs: %w", err)
}
if len(dnsCfgLst.Items) == 0 {
logger.Debugf("DNSConfig does not exist, not creating DNS records")
return reconcile.Result{}, nil
}
if len(dnsCfgLst.Items) > 1 {
logger.Errorf("Invalid cluster state - more than one DNSConfig found in cluster. Please ensure no more than one exists")
return reconcile.Result{}, nil
}
dnsCfg := dnsCfgLst.Items[0]
if !operatorutils.DNSCfgIsReady(&dnsCfg) {
logger.Info("DNSConfig is not ready yet, waiting...")
return reconcile.Result{}, nil
}
return reconcile.Result{}, dnsRR.maybeProvision(ctx, headlessSvc, logger)
}
// maybeProvision ensures that dnsrecords ConfigMap contains a record for the
// proxy associated with the headless Service.
// The record is only provisioned if the proxy is for a tailscale Ingress or
// egress configured via tailscale.com/tailnet-fqdn annotation.
//
// For Ingress, the record is a mapping between the MagicDNSName of the Ingress, retrieved from
// ingress.status.loadBalancer.ingress.hostname field and the proxy Pod IP addresses
// retrieved from the EndpointSlice associated with this headless Service, i.e.
// Records{IP4: {<MagicDNS name of the Ingress>: <[IPs of the ingress proxy Pods]>}}
//
// For egress, the record is a mapping between tailscale.com/tailnet-fqdn
// annotation and the proxy Pod IP addresses, retrieved from the EndpointSlice
// associated with this headless Service, i.e.
// Records{IP4: {<tailscale.com/tailnet-fqdn>: <[IPs of the egress proxy Pods]>}}
//
// If records need to be created for this proxy, maybeProvision will also:
// - update the headless Service with a tailscale.com/magic-dnsname annotation
// - update the headless Service with a finalizer
func (dnsRR *dnsRecordsReconciler) maybeProvision(ctx context.Context, headlessSvc *corev1.Service, logger *zap.SugaredLogger) error {
if headlessSvc == nil {
logger.Info("[unexpected] maybeProvision called with a nil Service")
return nil
}
isEgressFQDNSvc, err := dnsRR.isSvcForFQDNEgressProxy(ctx, headlessSvc)
if err != nil {
return fmt.Errorf("error checking whether the Service is for an egress proxy: %w", err)
}
if !(isEgressFQDNSvc || isManagedByType(headlessSvc, "ingress")) {
logger.Debug("Service is not fronting a proxy that we create DNS records for; do nothing")
return nil
}
fqdn, err := dnsRR.fqdnForDNSRecord(ctx, headlessSvc, logger)
if err != nil {
return fmt.Errorf("error determining DNS name for record: %w", err)
}
if fqdn == "" {
logger.Debugf("MagicDNS name does not (yet) exist, not provisioning DNS record")
return nil // a new reconcile will be triggered once it's added
}
oldHeadlessSvc := headlessSvc.DeepCopy()
// Ensure that headless Service is annotated with a finalizer to help
// with records cleanup when proxy resources are deleted.
if !slices.Contains(headlessSvc.Finalizers, dnsRecordsRecocilerFinalizer) {
headlessSvc.Finalizers = append(headlessSvc.Finalizers, dnsRecordsRecocilerFinalizer)
}
// Ensure that headless Service is annotated with the current MagicDNS
// name to help with records cleanup when proxy resources are deleted or
// MagicDNS name changes.
oldFqdn := headlessSvc.Annotations[annotationTSMagicDNSName]
if oldFqdn != "" && oldFqdn != fqdn { // i.e user has changed the value of tailscale.com/tailnet-fqdn annotation
logger.Debugf("MagicDNS name has changed, remvoving record for %s", oldFqdn)
updateFunc := func(rec *operatorutils.Records) {
delete(rec.IP4, oldFqdn)
}
if err = dnsRR.updateDNSConfig(ctx, updateFunc); err != nil {
return fmt.Errorf("error removing record for %s: %w", oldFqdn, err)
}
}
mak.Set(&headlessSvc.Annotations, annotationTSMagicDNSName, fqdn)
if !apiequality.Semantic.DeepEqual(oldHeadlessSvc, headlessSvc) {
logger.Infof("provisioning DNS record for MagicDNS name: %s", fqdn) // this will be printed exactly once
if err := dnsRR.Update(ctx, headlessSvc); err != nil {
return fmt.Errorf("error updating proxy headless Service metadata: %w", err)
}
}
// Get the Pod IP addresses for the proxy from the EndpointSlice for the
// headless Service.
labels := map[string]string{discoveryv1.LabelServiceName: headlessSvc.Name} // https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/#ownership
eps, err := getSingleObject[discoveryv1.EndpointSlice](ctx, dnsRR.Client, dnsRR.tsNamespace, labels)
if err != nil {
return fmt.Errorf("error getting the EndpointSlice for the proxy's headless Service: %w", err)
}
if eps == nil {
logger.Debugf("proxy's headless Service EndpointSlice does not yet exist. We will reconcile again once it's created")
return nil
}
// An EndpointSlice for a Service can have a list of endpoints that each
// can have multiple addresses - these are the IP addresses of any Pods
// selected by that Service. Pick all the IPv4 addresses.
ips := make([]string, 0)
for _, ep := range eps.Endpoints {
for _, ip := range ep.Addresses {
if !net.IsIPv4String(ip) {
logger.Infof("EndpointSlice contains IP address %q that is not IPv4, ignoring. Currently only IPv4 is supported", ip)
} else {
ips = append(ips, ip)
}
}
}
if len(ips) == 0 {
logger.Debugf("EndpointSlice for the Service contains no IPv4 addresses. We will reconcile again once they are created.")
return nil
}
updateFunc := func(rec *operatorutils.Records) {
mak.Set(&rec.IP4, fqdn, ips)
}
if err = dnsRR.updateDNSConfig(ctx, updateFunc); err != nil {
return fmt.Errorf("error updating DNS records: %w", err)
}
return nil
}
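// After a successful run of maybeProvision, the headless Service carries the
// finalizer and annotation defined above, roughly (sketch, values illustrative):
//
//	metadata:
//	  annotations:
//	    tailscale.com/magic-dnsname: foo.bar.ts.net
//	  finalizers:
//	    - tailscale.com/dns-records-reconciler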
// maybeCleanup ensures that the DNS record for the proxy has been removed from
// dnsrecords ConfigMap and the tailscale.com/dns-records-reconciler finalizer
// has been removed from the Service. If the record is not found in the
// ConfigMap, the ConfigMap does not exist, or the Service does not have
// tailscale.com/magic-dnsname annotation, just remove the finalizer.
func (h *dnsRecordsReconciler) maybeCleanup(ctx context.Context, headlessSvc *corev1.Service, logger *zap.SugaredLogger) error {
ix := slices.Index(headlessSvc.Finalizers, dnsRecordsRecocilerFinalizer)
if ix == -1 {
logger.Debugf("no finalizer, nothing to do")
return nil
}
cm := &corev1.ConfigMap{}
err := h.Client.Get(ctx, types.NamespacedName{Name: operatorutils.DNSRecordsCMName, Namespace: h.tsNamespace}, cm)
if apierrors.IsNotFound(err) {
logger.Debug("'dsnrecords' ConfigMap not found")
return h.removeHeadlessSvcFinalizer(ctx, headlessSvc)
}
if err != nil {
return fmt.Errorf("error retrieving 'dnsrecords' ConfigMap: %w", err)
}
if cm.Data == nil {
logger.Debug("'dnsrecords' ConfigMap contains no records")
return h.removeHeadlessSvcFinalizer(ctx, headlessSvc)
}
_, ok := cm.Data[operatorutils.DNSRecordsCMKey]
if !ok {
logger.Debug("'dnsrecords' ConfigMap contains no records")
return h.removeHeadlessSvcFinalizer(ctx, headlessSvc)
}
fqdn, _ := headlessSvc.GetAnnotations()[annotationTSMagicDNSName]
if fqdn == "" {
return h.removeHeadlessSvcFinalizer(ctx, headlessSvc)
}
logger.Infof("removing DNS record for MagicDNS name %s", fqdn)
updateFunc := func(rec *operatorutils.Records) {
delete(rec.IP4, fqdn)
}
if err = h.updateDNSConfig(ctx, updateFunc); err != nil {
return fmt.Errorf("error updating DNS config: %w", err)
}
return h.removeHeadlessSvcFinalizer(ctx, headlessSvc)
}
func (dnsRR *dnsRecordsReconciler) removeHeadlessSvcFinalizer(ctx context.Context, headlessSvc *corev1.Service) error {
idx := slices.Index(headlessSvc.Finalizers, dnsRecordsRecocilerFinalizer)
if idx == -1 {
return nil
}
headlessSvc.Finalizers = append(headlessSvc.Finalizers[:idx], headlessSvc.Finalizers[idx+1:]...)
return dnsRR.Update(ctx, headlessSvc)
}
// fqdnForDNSRecord returns MagicDNS name associated with a given headless Service.
// If the headless Service is for a tailscale Ingress proxy, returns ingress.status.loadBalancer.ingress.hostname.
// If the headless Service is for a tailscale egress proxy configured via the tailscale.com/tailnet-fqdn annotation, returns the annotation value.
// This function is not expected to be called with headless Services for other
// proxy types, or with any other Services, but if that happens it just returns
// an empty string.
func (dnsRR *dnsRecordsReconciler) fqdnForDNSRecord(ctx context.Context, headlessSvc *corev1.Service, logger *zap.SugaredLogger) (string, error) {
parentName := parentFromObjectLabels(headlessSvc)
if isManagedByType(headlessSvc, "ingress") {
ing := new(networkingv1.Ingress)
if err := dnsRR.Get(ctx, parentName, ing); err != nil {
return "", err
}
if len(ing.Status.LoadBalancer.Ingress) == 0 {
return "", nil
}
return ing.Status.LoadBalancer.Ingress[0].Hostname, nil
}
if isManagedByType(headlessSvc, "svc") {
svc := new(corev1.Service)
if err := dnsRR.Get(ctx, parentName, svc); apierrors.IsNotFound(err) {
logger.Info("[unexpected] parent Service for egress proxy %s not found", headlessSvc.Name)
return "", nil
} else if err != nil {
return "", err
}
return svc.Annotations[AnnotationTailnetTargetFQDN], nil
}
return "", nil
}
// updateDNSConfig runs the provided update function against dnsrecords
// ConfigMap. At this point the in-cluster ts.net nameserver is expected to be
// successfully created together with the ConfigMap.
func (dnsRR *dnsRecordsReconciler) updateDNSConfig(ctx context.Context, update func(*operatorutils.Records)) error {
cm := &corev1.ConfigMap{}
err := dnsRR.Get(ctx, types.NamespacedName{Name: operatorutils.DNSRecordsCMName, Namespace: dnsRR.tsNamespace}, cm)
if apierrors.IsNotFound(err) {
dnsRR.logger.Info("[unexpected] dnsrecords ConfigMap not found in cluster. Not updating DNS records. Please open an isue and attach operator logs.")
return nil
}
if err != nil {
return fmt.Errorf("error retrieving dnsrecords ConfigMap: %w", err)
}
dnsRecords := operatorutils.Records{Version: operatorutils.Alpha1Version, IP4: map[string][]string{}}
if cm.Data != nil && cm.Data[operatorutils.DNSRecordsCMKey] != "" {
if err := json.Unmarshal([]byte(cm.Data[operatorutils.DNSRecordsCMKey]), &dnsRecords); err != nil {
return err
}
}
update(&dnsRecords)
dnsRecordsBs, err := json.Marshal(dnsRecords)
if err != nil {
return fmt.Errorf("error marshalling DNS records: %w", err)
}
mak.Set(&cm.Data, operatorutils.DNSRecordsCMKey, string(dnsRecordsBs))
return dnsRR.Update(ctx, cm)
}
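// Example usage (a sketch mirroring how maybeProvision and maybeCleanup call
// this): add or replace the IPv4 record for a MagicDNS name.
//
//	err := dnsRR.updateDNSConfig(ctx, func(rec *operatorutils.Records) {
//		mak.Set(&rec.IP4, "foo.bar.ts.net", []string{"10.9.8.7"})
//	})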
// isSvcForFQDNEgressProxy returns true if the Service is a headless Service
// created for a proxy for a tailscale egress Service configured via
// tailscale.com/tailnet-fqdn annotation.
func (dnsRR *dnsRecordsReconciler) isSvcForFQDNEgressProxy(ctx context.Context, svc *corev1.Service) (bool, error) {
if !isManagedByType(svc, "svc") {
return false, nil
}
parentName := parentFromObjectLabels(svc)
parentSvc := new(corev1.Service)
if err := dnsRR.Get(ctx, parentName, parentSvc); apierrors.IsNotFound(err) {
return false, nil
} else if err != nil {
return false, err
}
annots := parentSvc.Annotations
return annots != nil && annots[AnnotationTailnetTargetFQDN] != "", nil
}
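// For reference, the kind of parent Service this matches is an egress Service
// configured via the tailnet FQDN annotation, roughly (sketch mirroring the
// test fixtures):
//
//	apiVersion: v1
//	kind: Service
//	metadata:
//	  name: egress-fqdn
//	  annotations:
//	    tailscale.com/tailnet-fqdn: foo.bar.ts.net
//	spec:
//	  type: ExternalName
//	  externalName: unused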

View File

@ -0,0 +1,198 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
package main
import (
"context"
"encoding/json"
"testing"
"github.com/google/go-cmp/cmp"
"go.uber.org/zap"
corev1 "k8s.io/api/core/v1"
discoveryv1 "k8s.io/api/discovery/v1"
networkingv1 "k8s.io/api/networking/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
operatorutils "tailscale.com/k8s-operator"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/tstest"
"tailscale.com/types/ptr"
)
func TestDNSRecordsReconciler(t *testing.T) {
// Preconfigure a cluster with a DNSConfig
dnsConfig := &tsapi.DNSConfig{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
},
TypeMeta: metav1.TypeMeta{Kind: "DNSConfig"},
Spec: tsapi.DNSConfigSpec{
Nameserver: &tsapi.Nameserver{},
}}
ing := &networkingv1.Ingress{
ObjectMeta: metav1.ObjectMeta{
Name: "ts-ingress",
Namespace: "test",
},
Spec: networkingv1.IngressSpec{
IngressClassName: ptr.To("tailscale"),
},
Status: networkingv1.IngressStatus{
LoadBalancer: networkingv1.IngressLoadBalancerStatus{
Ingress: []networkingv1.IngressLoadBalancerIngress{{
Hostname: "cluster.ingress.ts.net"}},
},
},
}
cm := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "dnsrecords", Namespace: "tailscale"}}
fc := fake.NewClientBuilder().
WithScheme(tsapi.GlobalScheme).
WithObjects(cm).
WithObjects(dnsConfig).
WithObjects(ing).
WithStatusSubresource(dnsConfig, ing).
Build()
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
cl := tstest.NewClock(tstest.ClockOpts{})
// Set the ready condition of the DNSConfig
mustUpdateStatus[tsapi.DNSConfig](t, fc, "", "test", func(c *tsapi.DNSConfig) {
operatorutils.SetDNSConfigCondition(c, tsapi.NameserverReady, metav1.ConditionTrue, reasonNameserverCreated, reasonNameserverCreated, 0, cl, zl.Sugar())
})
dnsRR := &dnsRecordsReconciler{
Client: fc,
logger: zl.Sugar(),
tsNamespace: "tailscale",
}
// 1. DNS record is created for an egress proxy configured via
// tailscale.com/tailnet-fqdn annotation
egressSvcFQDN := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "egress-fqdn",
Namespace: "test",
Annotations: map[string]string{"tailscale.com/tailnet-fqdn": "foo.bar.ts.net"},
},
Spec: corev1.ServiceSpec{
ExternalName: "unused",
Type: corev1.ServiceTypeExternalName,
},
}
headlessForEgressSvcFQDN := headlessSvcForParent(egressSvcFQDN, "svc") // create the proxy headless Service
ep := endpointSliceForService(headlessForEgressSvcFQDN, "10.9.8.7")
mustCreate(t, fc, egressSvcFQDN)
mustCreate(t, fc, headlessForEgressSvcFQDN)
mustCreate(t, fc, ep)
expectReconciled(t, dnsRR, "tailscale", "egress-fqdn") // dns-records-reconciler reconcile the headless Service
// ConfigMap should now have a record for foo.bar.ts.net -> 10.8.8.7
wantHosts := map[string][]string{"foo.bar.ts.net": {"10.9.8.7"}}
expectHostsRecords(t, fc, wantHosts)
// 2. DNS record is updated if tailscale.com/tailnet-fqdn annotation's
// value changes
mustUpdate(t, fc, "test", "egress-fqdn", func(svc *corev1.Service) {
svc.Annotations["tailscale.com/tailnet-fqdn"] = "baz.bar.ts.net"
})
expectReconciled(t, dnsRR, "tailscale", "egress-fqdn") // dns-records-reconciler reconcile the headless Service
wantHosts = map[string][]string{"baz.bar.ts.net": {"10.9.8.7"}}
expectHostsRecords(t, fc, wantHosts)
// 3. DNS record is updated if the IP address of the proxy Pod changes.
ep = endpointSliceForService(headlessForEgressSvcFQDN, "10.6.5.4")
mustUpdate(t, fc, ep.Namespace, ep.Name, func(ep *discoveryv1.EndpointSlice) {
ep.Endpoints[0].Addresses = []string{"10.6.5.4"}
})
expectReconciled(t, dnsRR, "tailscale", "egress-fqdn") // dns-records-reconciler reconcile the headless Service
wantHosts = map[string][]string{"baz.bar.ts.net": {"10.6.5.4"}}
expectHostsRecords(t, fc, wantHosts)
// 4. DNS record is created for an ingress proxy configured via Ingress
headlessForIngress := headlessSvcForParent(ing, "ingress")
ep = endpointSliceForService(headlessForIngress, "10.9.8.7")
mustCreate(t, fc, headlessForIngress)
mustCreate(t, fc, ep)
expectReconciled(t, dnsRR, "tailscale", "ts-ingress") // dns-records-reconciler should reconcile the headless Service
wantHosts["cluster.ingress.ts.net"] = []string{"10.9.8.7"}
expectHostsRecords(t, fc, wantHosts)
// 5. DNS records are updated if Ingress's MagicDNS name changes (i.e users changed spec.tls.hosts[0])
t.Log("test case 5")
mustUpdateStatus(t, fc, "test", "ts-ingress", func(ing *networkingv1.Ingress) {
ing.Status.LoadBalancer.Ingress[0].Hostname = "another.ingress.ts.net"
})
expectReconciled(t, dnsRR, "tailscale", "ts-ingress") // dns-records-reconciler should reconcile the headless Service
delete(wantHosts, "cluster.ingress.ts.net")
wantHosts["another.ingress.ts.net"] = []string{"10.9.8.7"}
expectHostsRecords(t, fc, wantHosts)
// 6. DNS records are updated if Ingress proxy's Pod IP changes
mustUpdate(t, fc, ep.Namespace, ep.Name, func(ep *discoveryv1.EndpointSlice) {
ep.Endpoints[0].Addresses = []string{"7.8.9.10"}
})
expectReconciled(t, dnsRR, "tailscale", "ts-ingress")
wantHosts["another.ingress.ts.net"] = []string{"7.8.9.10"}
expectHostsRecords(t, fc, wantHosts)
}
func headlessSvcForParent(o client.Object, typ string) *corev1.Service {
return &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: o.GetName(),
Namespace: "tailscale",
Labels: map[string]string{
LabelManaged: "true",
LabelParentName: o.GetName(),
LabelParentNamespace: o.GetNamespace(),
LabelParentType: typ,
},
},
Spec: corev1.ServiceSpec{
ClusterIP: "None",
Type: corev1.ServiceTypeClusterIP,
Selector: map[string]string{"foo": "bar"},
},
}
}
func endpointSliceForService(svc *corev1.Service, ip string) *discoveryv1.EndpointSlice {
return &discoveryv1.EndpointSlice{
ObjectMeta: metav1.ObjectMeta{
Name: svc.Name,
Namespace: svc.Namespace,
Labels: map[string]string{discoveryv1.LabelServiceName: svc.Name},
},
Endpoints: []discoveryv1.Endpoint{{
Addresses: []string{ip},
}},
}
}
func expectHostsRecords(t *testing.T, cl client.Client, wantsHosts map[string][]string) {
t.Helper()
cm := new(corev1.ConfigMap)
if err := cl.Get(context.Background(), types.NamespacedName{Name: "dnsrecords", Namespace: "tailscale"}, cm); err != nil {
t.Fatalf("getting dnsconfig ConfigMap: %v", err)
}
if cm.Data == nil {
t.Fatal("dnsconfig ConfigMap has no data")
}
dnsConfigString, ok := cm.Data[operatorutils.DNSRecordsCMKey]
if !ok {
t.Fatal("dnsconfig ConfigMap does not contain dnsconfig")
}
dnsConfig := &operatorutils.Records{}
if err := json.Unmarshal([]byte(dnsConfigString), dnsConfig); err != nil {
t.Fatalf("unmarshaling dnsconfig: %v", err)
}
if diff := cmp.Diff(dnsConfig.IP4, wantsHosts); diff != "" {
t.Fatalf("unexpected dns config (-got +want):\n%s", diff)
}
}
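// To run just these tests (illustrative invocation):
//
//	go test ./cmd/k8s-operator/ -run TestDNSRecordsReconciler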

View File

@ -19,10 +19,14 @@ import (
)
const (
operatorDeploymentFilesPath = "cmd/k8s-operator/deploy"
crdPath = operatorDeploymentFilesPath + "/crds/tailscale.com_connectors.yaml"
helmTemplatesPath = operatorDeploymentFilesPath + "/chart/templates"
crdTemplatePath = helmTemplatesPath + "/connectors.yaml"
operatorDeploymentFilesPath = "cmd/k8s-operator/deploy"
connectorCRDPath = operatorDeploymentFilesPath + "/crds/tailscale.com_connectors.yaml"
proxyClassCRDPath = operatorDeploymentFilesPath + "/crds/tailscale.com_proxyclasses.yaml"
dnsConfigCRDPath = operatorDeploymentFilesPath + "/crds/tailscale.com_dnsconfigs.yaml"
helmTemplatesPath = operatorDeploymentFilesPath + "/chart/templates"
connectorCRDHelmTemplatePath = helmTemplatesPath + "/connector.yaml"
proxyClassCRDHelmTemplatePath = helmTemplatesPath + "/proxyclass.yaml"
dnsConfigCRDHelmTemplatePath = helmTemplatesPath + "/dnsconfig.yaml"
helmConditionalStart = "{{ if .Values.installCRDs -}}\n"
helmConditionalEnd = "{{- end -}}"
@ -34,19 +38,19 @@ func main() {
}
repoRoot := "../../"
switch os.Args[1] {
case "helmcrd": // insert CRD to Helm templates behind a installCRDs=true conditional check
log.Print("Adding Connector CRD to Helm templates")
case "helmcrd": // insert CRDs to Helm templates behind a installCRDs=true conditional check
log.Print("Adding CRDs to Helm templates")
if err := generate("./"); err != nil {
log.Fatalf("error adding Connector CRD to Helm templates: %v", err)
log.Fatalf("error adding CRDs to Helm templates: %v", err)
}
return
case "staticmanifests": // generate static manifests from Helm templates (including the CRD)
default:
log.Fatalf("unknown option %s, known options are 'staticmanifests', 'helmcrd'", os.Args[1])
}
log.Printf("Inserting CRD into the Helm templates")
log.Printf("Inserting CRDs Helm templates")
if err := generate(repoRoot); err != nil {
log.Fatalf("error adding Connector CRD to Helm templates: %v", err)
log.Fatalf("error adding CRDs to Helm templates: %v", err)
}
defer func() {
if err := cleanup(repoRoot); err != nil {
@ -106,34 +110,54 @@ func main() {
}
}
// generate places tailscale.com CRDs (currently Connector, ProxyClass and DNSConfig) into
// the Helm chart templates behind .Values.installCRDs=true condition (true by
// default).
func generate(baseDir string) error {
log.Print("Placing Connector CRD into Helm templates..")
chartBytes, err := os.ReadFile(filepath.Join(baseDir, crdPath))
if err != nil {
return fmt.Errorf("error reading CRD contents: %w", err)
addCRDToHelm := func(crdPath, crdTemplatePath string) error {
chartBytes, err := os.ReadFile(filepath.Join(baseDir, crdPath))
if err != nil {
return fmt.Errorf("error reading CRD contents: %w", err)
}
// Place a new temporary Helm template file with the templated CRD
// contents into Helm templates.
file, err := os.Create(filepath.Join(baseDir, crdTemplatePath))
if err != nil {
return fmt.Errorf("error creating CRD template file: %w", err)
}
if _, err := file.Write([]byte(helmConditionalStart)); err != nil {
return fmt.Errorf("error writing helm if statement start: %w", err)
}
if _, err := file.Write(chartBytes); err != nil {
return fmt.Errorf("error writing chart bytes: %w", err)
}
if _, err := file.Write([]byte(helmConditionalEnd)); err != nil {
return fmt.Errorf("error writing helm if-statement end: %w", err)
}
return nil
}
// Place a new temporary Helm template file with the templated CRD
// contents into Helm templates.
file, err := os.Create(filepath.Join(baseDir, crdTemplatePath))
if err != nil {
return fmt.Errorf("error creating CRD template file: %w", err)
if err := addCRDToHelm(connectorCRDPath, connectorCRDHelmTemplatePath); err != nil {
return fmt.Errorf("error adding Connector CRD to Helm templates: %w", err)
}
if _, err := file.Write([]byte(helmConditionalStart)); err != nil {
return fmt.Errorf("error writing helm if statement start: %w", err)
if err := addCRDToHelm(proxyClassCRDPath, proxyClassCRDHelmTemplatePath); err != nil {
return fmt.Errorf("error adding ProxyClass CRD to Helm templates: %w", err)
}
if _, err := file.Write(chartBytes); err != nil {
return fmt.Errorf("error writing chart bytes: %w", err)
}
if _, err := file.Write([]byte(helmConditionalEnd)); err != nil {
return fmt.Errorf("error writing helm if-statement end: %w", err)
if err := addCRDToHelm(dnsConfigCRDPath, dnsConfigCRDHelmTemplatePath); err != nil {
return fmt.Errorf("error adding DNSConfig CRD to Helm templates: %w", err)
}
return nil
}
func cleanup(baseDir string) error {
log.Print("Cleaning up CRD from Helm templates")
if err := os.Remove(filepath.Join(baseDir, crdTemplatePath)); err != nil && !os.IsNotExist(err) {
return fmt.Errorf("error cleaning up CRD template: %w", err)
if err := os.Remove(filepath.Join(baseDir, connectorCRDHelmTemplatePath)); err != nil && !os.IsNotExist(err) {
return fmt.Errorf("error cleaning up Connector CRD template: %w", err)
}
if err := os.Remove(filepath.Join(baseDir, proxyClassCRDHelmTemplatePath)); err != nil && !os.IsNotExist(err) {
return fmt.Errorf("error cleaning up ProxyClass CRD template: %w", err)
}
if err := os.Remove(filepath.Join(baseDir, dnsConfigCRDHelmTemplatePath)); err != nil && !os.IsNotExist(err) {
return fmt.Errorf("error cleaning up DNSConfig CRD template: %w", err)
}
return nil
}
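// For reference, each generated Helm template file is simply the CRD yaml
// wrapped in the installCRDs conditional defined above, roughly (sketch):
//
//	{{ if .Values.installCRDs -}}
//	<contents of e.g. tailscale.com_dnsconfigs.yaml>
//	{{- end -}}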

View File

@ -42,7 +42,7 @@ func Test_generate(t *testing.T) {
t.Fatalf("Helm chart linter failed: %v", err)
}
// Test that default Helm install contains the CRD
// Test that the default Helm install contains the Connector, ProxyClass and DNSConfig CRDs.
installContentsWithCRD := bytes.NewBuffer([]byte{})
helmTemplateWithCRDCmd := exec.Command(helmCLIPath, "template", helmPackagePath)
helmTemplateWithCRDCmd.Stderr = os.Stderr
@ -51,10 +51,16 @@ func Test_generate(t *testing.T) {
t.Fatalf("templating Helm chart with CRDs failed: %v", err)
}
if !strings.Contains(installContentsWithCRD.String(), "name: connectors.tailscale.com") {
t.Errorf("CRD not found in default chart install")
t.Errorf("Connector CRD not found in default chart install")
}
if !strings.Contains(installContentsWithCRD.String(), "name: proxyclasses.tailscale.com") {
t.Errorf("ProxyClass CRD not found in default chart install")
}
if !strings.Contains(installContentsWithCRD.String(), "name: dnsconfigs.tailscale.com") {
t.Errorf("DNSConfig CRD not found in default chart install")
}
// Test that CRD can be excluded from Helm chart install
// Test that CRDs can be excluded from Helm chart install
installContentsWithoutCRD := bytes.NewBuffer([]byte{})
helmTemplateWithoutCRDCmd := exec.Command(helmCLIPath, "template", helmPackagePath, "--set", "installCRDs=false")
helmTemplateWithoutCRDCmd.Stderr = os.Stderr
@ -63,6 +69,12 @@ func Test_generate(t *testing.T) {
t.Fatalf("templating Helm chart without CRDs failed: %v", err)
}
if strings.Contains(installContentsWithoutCRD.String(), "name: connectors.tailscale.com") {
t.Errorf("CRD found in chart install that should not contain a CRD")
t.Errorf("Connector CRD found in chart install that should not contain a CRD")
}
if strings.Contains(installContentsWithoutCRD.String(), "name: connectors.tailscale.com") {
t.Errorf("ProxyClass CRD found in chart install that should not contain a CRD")
}
if strings.Contains(installContentsWithoutCRD.String(), "name: dnsconfigs.tailscale.com") {
t.Errorf("DNSConfig CRD found in chart install that should not contain a CRD")
}
}

View File

@ -132,6 +132,17 @@ func (a *IngressReconciler) maybeProvision(ctx context.Context, logger *zap.Suga
return fmt.Errorf("failed to add finalizer: %w", err)
}
}
proxyClass := proxyClassForObject(ing)
if proxyClass != "" {
if ready, err := proxyClassIsReady(ctx, proxyClass, a.Client); err != nil {
return fmt.Errorf("error verifying ProxyClass for Ingress: %w", err)
} else if !ready {
logger.Infof("ProxyClass %s specified for the Ingress, but is not (yet) Ready, waiting..", proxyClass)
return nil
}
}
a.mu.Lock()
a.managedIngresses.Add(ing.UID)
gaugeIngressResources.Set(int64(a.managedIngresses.Len()))
@ -230,6 +241,12 @@ func (a *IngressReconciler) maybeProvision(ctx context.Context, logger *zap.Suga
}
}
if len(web.Handlers) == 0 {
logger.Warn("Ingress contains no valid backends")
a.recorder.Eventf(ing, corev1.EventTypeWarning, "NoValidBackends", "no valid backends")
return nil
}
crl := childResourceLabels(ing.Name, ing.Namespace, "ingress")
var tags []string
if tstr, ok := ing.Annotations[AnnotationTags]; ok {
@ -247,6 +264,11 @@ func (a *IngressReconciler) maybeProvision(ctx context.Context, logger *zap.Suga
ServeConfig: sc,
Tags: tags,
ChildResourceLabels: crl,
ProxyClass: proxyClass,
}
if val := ing.GetAnnotations()[AnnotationExperimentalForwardClusterTrafficViaL7IngresProxy]; val == "true" {
sts.ForwardClusterTrafficViaL7IngressProxy = true
}
if _, err := a.ssr.Provision(ctx, logger, sts); err != nil {

View File

@ -0,0 +1,270 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
package main
import (
"testing"
"go.uber.org/zap"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
networkingv1 "k8s.io/api/networking/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
"tailscale.com/ipn"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/types/ptr"
"tailscale.com/util/mak"
)
func TestTailscaleIngress(t *testing.T) {
tsIngressClass := &networkingv1.IngressClass{ObjectMeta: metav1.ObjectMeta{Name: "tailscale"}, Spec: networkingv1.IngressClassSpec{Controller: "tailscale.com/ts-ingress"}}
fc := fake.NewFakeClient(tsIngressClass)
ft := &fakeTSClient{}
fakeTsnetServer := &fakeTSNetServer{certDomains: []string{"foo.com"}}
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
ingR := &IngressReconciler{
Client: fc,
ssr: &tailscaleSTSReconciler{
Client: fc,
tsClient: ft,
tsnetServer: fakeTsnetServer,
defaultTags: []string{"tag:k8s"},
operatorNamespace: "operator-ns",
proxyImage: "tailscale/tailscale",
},
logger: zl.Sugar(),
}
// 1. Resources get created for regular Ingress
ing := &networkingv1.Ingress{
TypeMeta: metav1.TypeMeta{Kind: "Ingress", APIVersion: "networking.k8s.io/v1"},
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
// The apiserver is supposed to set the UID, but the fake client
// doesn't. So, set it explicitly because other code later depends
// on it being set.
UID: types.UID("1234-UID"),
},
Spec: networkingv1.IngressSpec{
IngressClassName: ptr.To("tailscale"),
DefaultBackend: &networkingv1.IngressBackend{
Service: &networkingv1.IngressServiceBackend{
Name: "test",
Port: networkingv1.ServiceBackendPort{
Number: 8080,
},
},
},
TLS: []networkingv1.IngressTLS{
{Hosts: []string{"default-test"}},
},
},
}
mustCreate(t, fc, ing)
mustCreate(t, fc, &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
},
Spec: corev1.ServiceSpec{
ClusterIP: "1.2.3.4",
Ports: []corev1.ServicePort{{
Port: 8080,
Name: "http"},
},
},
})
expectReconciled(t, ingR, "default", "test")
fullName, shortName := findGenName(t, fc, "default", "test", "ingress")
opts := configOpts{
stsName: shortName,
secretName: fullName,
namespace: "default",
parentType: "ingress",
hostname: "default-test",
}
serveConfig := &ipn.ServeConfig{
TCP: map[uint16]*ipn.TCPPortHandler{443: {HTTPS: true}},
Web: map[ipn.HostPort]*ipn.WebServerConfig{"${TS_CERT_DOMAIN}:443": {Handlers: map[string]*ipn.HTTPHandler{"/": {Proxy: "http://1.2.3.4:8080/"}}}},
}
opts.serveConfig = serveConfig
expectEqual(t, fc, expectedSecret(t, opts), nil)
expectEqual(t, fc, expectedHeadlessService(shortName, "ingress"), nil)
expectEqual(t, fc, expectedSTSUserspace(t, fc, opts), removeHashAnnotation)
// 2. Ingress status gets updated with ingress proxy's MagicDNS name
// once that becomes available.
mustUpdate(t, fc, "operator-ns", opts.secretName, func(secret *corev1.Secret) {
mak.Set(&secret.Data, "device_id", []byte("1234"))
mak.Set(&secret.Data, "device_fqdn", []byte("foo.tailnetxyz.ts.net"))
})
expectReconciled(t, ingR, "default", "test")
ing.Finalizers = append(ing.Finalizers, "tailscale.com/finalizer")
ing.Status.LoadBalancer = networkingv1.IngressLoadBalancerStatus{
Ingress: []networkingv1.IngressLoadBalancerIngress{
{Hostname: "foo.tailnetxyz.ts.net", Ports: []networkingv1.IngressPortStatus{{Port: 443, Protocol: "TCP"}}},
},
}
expectEqual(t, fc, ing, nil)
// 3. Resources get created for Ingress that should allow forwarding
// cluster traffic
mustUpdate(t, fc, "default", "test", func(ing *networkingv1.Ingress) {
mak.Set(&ing.ObjectMeta.Annotations, AnnotationExperimentalForwardClusterTrafficViaL7IngresProxy, "true")
})
opts.shouldEnableForwardingClusterTrafficViaIngress = true
expectReconciled(t, ingR, "default", "test")
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// 4. Resources get cleaned up when Ingress class is unset
mustUpdate(t, fc, "default", "test", func(ing *networkingv1.Ingress) {
ing.Spec.IngressClassName = ptr.To("nginx")
})
expectReconciled(t, ingR, "default", "test")
expectReconciled(t, ingR, "default", "test") // deleting Ingress STS requires two reconciles
expectMissing[appsv1.StatefulSet](t, fc, "operator-ns", shortName)
expectMissing[corev1.Service](t, fc, "operator-ns", shortName)
expectMissing[corev1.Secret](t, fc, "operator-ns", fullName)
}
func TestTailscaleIngressWithProxyClass(t *testing.T) {
// Setup
pc := &tsapi.ProxyClass{
ObjectMeta: metav1.ObjectMeta{Name: "custom-metadata"},
Spec: tsapi.ProxyClassSpec{StatefulSet: &tsapi.StatefulSet{
Labels: map[string]string{"foo": "bar"},
Annotations: map[string]string{"bar.io/foo": "some-val"},
Pod: &tsapi.Pod{Annotations: map[string]string{"foo.io/bar": "some-val"}}}},
}
tsIngressClass := &networkingv1.IngressClass{ObjectMeta: metav1.ObjectMeta{Name: "tailscale"}, Spec: networkingv1.IngressClassSpec{Controller: "tailscale.com/ts-ingress"}}
fc := fake.NewClientBuilder().
WithScheme(tsapi.GlobalScheme).
WithObjects(pc, tsIngressClass).
WithStatusSubresource(pc).
Build()
ft := &fakeTSClient{}
fakeTsnetServer := &fakeTSNetServer{certDomains: []string{"foo.com"}}
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
ingR := &IngressReconciler{
Client: fc,
ssr: &tailscaleSTSReconciler{
Client: fc,
tsClient: ft,
tsnetServer: fakeTsnetServer,
defaultTags: []string{"tag:k8s"},
operatorNamespace: "operator-ns",
proxyImage: "tailscale/tailscale",
},
logger: zl.Sugar(),
}
// 1. Ingress is created with no ProxyClass specified, default proxy
// resources get configured.
ing := &networkingv1.Ingress{
TypeMeta: metav1.TypeMeta{Kind: "Ingress", APIVersion: "networking.k8s.io/v1"},
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
// The apiserver is supposed to set the UID, but the fake client
// doesn't. So, set it explicitly because other code later depends
// on it being set.
UID: types.UID("1234-UID"),
},
Spec: networkingv1.IngressSpec{
IngressClassName: ptr.To("tailscale"),
DefaultBackend: &networkingv1.IngressBackend{
Service: &networkingv1.IngressServiceBackend{
Name: "test",
Port: networkingv1.ServiceBackendPort{
Number: 8080,
},
},
},
TLS: []networkingv1.IngressTLS{
{Hosts: []string{"default-test"}},
},
},
}
mustCreate(t, fc, ing)
mustCreate(t, fc, &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
},
Spec: corev1.ServiceSpec{
ClusterIP: "1.2.3.4",
Ports: []corev1.ServicePort{{
Port: 8080,
Name: "http"},
},
},
})
expectReconciled(t, ingR, "default", "test")
fullName, shortName := findGenName(t, fc, "default", "test", "ingress")
opts := configOpts{
stsName: shortName,
secretName: fullName,
namespace: "default",
parentType: "ingress",
hostname: "default-test",
}
serveConfig := &ipn.ServeConfig{
TCP: map[uint16]*ipn.TCPPortHandler{443: {HTTPS: true}},
Web: map[ipn.HostPort]*ipn.WebServerConfig{"${TS_CERT_DOMAIN}:443": {Handlers: map[string]*ipn.HTTPHandler{"/": {Proxy: "http://1.2.3.4:8080/"}}}},
}
opts.serveConfig = serveConfig
expectEqual(t, fc, expectedSecret(t, opts), nil)
expectEqual(t, fc, expectedHeadlessService(shortName, "ingress"), nil)
expectEqual(t, fc, expectedSTSUserspace(t, fc, opts), removeHashAnnotation)
// 2. Ingress is updated to specify a ProxyClass, ProxyClass is not yet
// ready, so proxy resource configuration does not change.
mustUpdate(t, fc, "default", "test", func(ing *networkingv1.Ingress) {
mak.Set(&ing.ObjectMeta.Labels, LabelProxyClass, "custom-metadata")
})
expectReconciled(t, ingR, "default", "test")
expectEqual(t, fc, expectedSTSUserspace(t, fc, opts), removeHashAnnotation)
// 3. ProxyClass is set to Ready by proxy-class reconciler. Ingress get
// reconciled and configuration from the ProxyClass is applied to the
// created proxy resources.
mustUpdateStatus(t, fc, "", "custom-metadata", func(pc *tsapi.ProxyClass) {
pc.Status = tsapi.ProxyClassStatus{
Conditions: []tsapi.ConnectorCondition{{
Status: metav1.ConditionTrue,
Type: tsapi.ProxyClassready,
ObservedGeneration: pc.Generation,
}}}
})
expectReconciled(t, ingR, "default", "test")
opts.proxyClass = pc.Name
expectEqual(t, fc, expectedSTSUserspace(t, fc, opts), removeHashAnnotation)
// 4. tailscale.com/proxy-class label is removed from the Ingress, the
// Ingress gets reconciled and the custom ProxyClass configuration is
// removed from the proxy resources.
mustUpdate(t, fc, "default", "test", func(ing *networkingv1.Ingress) {
delete(ing.ObjectMeta.Labels, LabelProxyClass)
})
expectReconciled(t, ingR, "default", "test")
opts.proxyClass = ""
expectEqual(t, fc, expectedSTSUserspace(t, fc, opts), removeHashAnnotation)
}
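// For reference, opting an Ingress into a ProxyClass is done with a label on
// the resource, roughly (sketch mirroring the steps above):
//
//	metadata:
//	  labels:
//	    tailscale.com/proxy-class: custom-metadata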

View File

@ -0,0 +1,283 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
package main
import (
"context"
"fmt"
"slices"
"sync"
_ "embed"
"github.com/pkg/errors"
"go.uber.org/zap"
xslices "golang.org/x/exp/slices"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
apiequality "k8s.io/apimachinery/pkg/api/equality"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/tools/record"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"sigs.k8s.io/yaml"
tsoperator "tailscale.com/k8s-operator"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/tstime"
"tailscale.com/util/clientmetric"
"tailscale.com/util/set"
)
const (
reasonNameserverCreationFailed = "NameserverCreationFailed"
reasonMultipleDNSConfigsPresent = "MultipleDNSConfigsPresent"
reasonNameserverCreated = "NameserverCreated"
messageNameserverCreationFailed = "Failed creating nameserver resources: %v"
messageMultipleDNSConfigsPresent = "Multiple DNSConfig resources found in cluster. Please ensure no more than one is present."
defaultNameserverImageRepo = "tailscale/k8s-nameserver"
// TODO (irbekrm): once we start publishing nameserver images for stable
// track, replace 'unstable' here with the version of this operator
// instance.
defaultNameserverImageTag = "unstable"
)
// NameserverReconciler knows how to create nameserver resources in cluster in
// response to users applying DNSConfig.
type NameserverReconciler struct {
client.Client
logger *zap.SugaredLogger
recorder record.EventRecorder
clock tstime.Clock
tsNamespace string
mu sync.Mutex // protects following
managedNameservers set.Slice[types.UID] // one or none
}
var (
gaugeNameserverResources = clientmetric.NewGauge("k8s_nameserver_resources")
)
func (a *NameserverReconciler) Reconcile(ctx context.Context, req reconcile.Request) (res reconcile.Result, err error) {
logger := a.logger.With("dnsConfig", req.Name)
logger.Debugf("starting reconcile")
defer logger.Debugf("reconcile finished")
var dnsCfg tsapi.DNSConfig
err = a.Get(ctx, req.NamespacedName, &dnsCfg)
if apierrors.IsNotFound(err) {
// Request object not found, could have been deleted after reconcile request.
logger.Debugf("dnsconfig not found, assuming it was deleted")
return reconcile.Result{}, nil
} else if err != nil {
return reconcile.Result{}, fmt.Errorf("failed to get dnsconfig: %w", err)
}
if !dnsCfg.DeletionTimestamp.IsZero() {
ix := xslices.Index(dnsCfg.Finalizers, FinalizerName)
if ix < 0 {
logger.Debugf("no finalizer, nothing to do")
return reconcile.Result{}, nil
}
logger.Info("Cleaning up DNSConfig resources")
if err := a.maybeCleanup(ctx, &dnsCfg, logger); err != nil {
logger.Errorf("error cleaning up reconciler resource: %v", err)
return res, err
}
dnsCfg.Finalizers = append(dnsCfg.Finalizers[:ix], dnsCfg.Finalizers[ix+1:]...)
if err := a.Update(ctx, &dnsCfg); err != nil {
logger.Errorf("error removing finalizer: %v", err)
return reconcile.Result{}, err
}
logger.Infof("Nameserver resources cleaned up")
return reconcile.Result{}, nil
}
oldCnStatus := dnsCfg.Status.DeepCopy()
setStatus := func(dnsCfg *tsapi.DNSConfig, conditionType tsapi.ConnectorConditionType, status metav1.ConditionStatus, reason, message string) (reconcile.Result, error) {
tsoperator.SetDNSConfigCondition(dnsCfg, tsapi.NameserverReady, status, reason, message, dnsCfg.Generation, a.clock, logger)
if !apiequality.Semantic.DeepEqual(oldCnStatus, dnsCfg.Status) {
// An error encountered here should get returned by the Reconcile function.
if updateErr := a.Client.Status().Update(ctx, dnsCfg); updateErr != nil {
err = errors.Wrap(err, updateErr.Error())
}
}
return res, err
}
var dnsCfgs tsapi.DNSConfigList
if err := a.List(ctx, &dnsCfgs); err != nil {
return res, fmt.Errorf("error listing DNSConfigs: %w", err)
}
if len(dnsCfgs.Items) > 1 { // enforce DNSConfig to be a singleton
msg := "invalid cluster configuration: more than one tailscale.com/dnsconfigs found. Please ensure that no more than one is created."
logger.Error(msg)
a.recorder.Event(&dnsCfg, corev1.EventTypeWarning, reasonMultipleDNSConfigsPresent, messageMultipleDNSConfigsPresent)
setStatus(&dnsCfg, tsapi.NameserverReady, metav1.ConditionFalse, reasonMultipleDNSConfigsPresent, messageMultipleDNSConfigsPresent)
}
if !slices.Contains(dnsCfg.Finalizers, FinalizerName) {
logger.Infof("ensuring nameserver resources")
dnsCfg.Finalizers = append(dnsCfg.Finalizers, FinalizerName)
if err := a.Update(ctx, &dnsCfg); err != nil {
msg := fmt.Sprintf(messageNameserverCreationFailed, err)
logger.Error(msg)
return setStatus(&dnsCfg, tsapi.NameserverReady, metav1.ConditionFalse, reasonNameserverCreationFailed, msg)
}
}
if err := a.maybeProvision(ctx, &dnsCfg, logger); err != nil {
return reconcile.Result{}, fmt.Errorf("error provisioning nameserver resources: %w", err)
}
a.mu.Lock()
a.managedNameservers.Add(dnsCfg.UID)
a.mu.Unlock()
gaugeNameserverResources.Set(int64(a.managedNameservers.Len()))
svc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{Name: "nameserver", Namespace: a.tsNamespace},
}
if err := a.Client.Get(ctx, client.ObjectKeyFromObject(svc), svc); err != nil {
return res, fmt.Errorf("error getting Service: %w", err)
}
if ip := svc.Spec.ClusterIP; ip != "" && ip != "None" {
dnsCfg.Status.Nameserver = &tsapi.NameserverStatus{
IP: ip,
}
return setStatus(&dnsCfg, tsapi.NameserverReady, metav1.ConditionTrue, reasonNameserverCreated, reasonNameserverCreated)
}
logger.Info("nameserver Service does not have an IP address allocated, waiting...")
return reconcile.Result{}, nil
}
func nameserverResourceLabels(name, namespace string) map[string]string {
labels := childResourceLabels(name, namespace, "nameserver")
labels["app.kubernetes.io/name"] = "tailscale"
labels["app.kubernetes.io/component"] = "nameserver"
return labels
}
func (a *NameserverReconciler) maybeProvision(ctx context.Context, tsDNSCfg *tsapi.DNSConfig, logger *zap.SugaredLogger) error {
labels := nameserverResourceLabels(tsDNSCfg.Name, a.tsNamespace)
dCfg := &deployConfig{
ownerRefs: []metav1.OwnerReference{*metav1.NewControllerRef(tsDNSCfg, tsapi.SchemeGroupVersion.WithKind("DNSConfig"))},
namespace: a.tsNamespace,
labels: labels,
imageRepo: defaultNameserverImageRepo,
imageTag: defaultNameserverImageTag,
}
if tsDNSCfg.Spec.Nameserver.Image != nil && tsDNSCfg.Spec.Nameserver.Image.Repo != "" {
dCfg.imageRepo = tsDNSCfg.Spec.Nameserver.Image.Repo
}
if tsDNSCfg.Spec.Nameserver.Image != nil && tsDNSCfg.Spec.Nameserver.Image.Tag != "" {
dCfg.imageTag = tsDNSCfg.Spec.Nameserver.Image.Tag
}
for _, deployable := range []deployable{saDeployable, deployDeployable, svcDeployable, cmDeployable} {
if err := deployable.updateObj(ctx, dCfg, a.Client); err != nil {
return fmt.Errorf("error reconciling %s: %w", deployable.kind, err)
}
}
return nil
}
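// A DNSConfig spec that overrides the nameserver image might look roughly like
// this (a sketch; field names are illustrative, see the tsapi.Nameserver type):
//
//	spec:
//	  nameserver:
//	    image:
//	      repo: tailscale/k8s-nameserver
//	      tag: unstable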
// maybeCleanup removes DNSConfig from being tracked. The cluster resources
// created will be automatically garbage collected as they are owned by the
// DNSConfig.
func (a *NameserverReconciler) maybeCleanup(ctx context.Context, dnsCfg *tsapi.DNSConfig, logger *zap.SugaredLogger) error {
a.mu.Lock()
a.managedNameservers.Remove(dnsCfg.UID)
a.mu.Unlock()
gaugeNameserverResources.Set(int64(a.managedNameservers.Len()))
return nil
}
type deployable struct {
kind string
updateObj func(context.Context, *deployConfig, client.Client) error
}
type deployConfig struct {
imageRepo string
imageTag string
labels map[string]string
ownerRefs []metav1.OwnerReference
namespace string
}
var (
//go:embed deploy/manifests/nameserver/cm.yaml
cmYaml []byte
//go:embed deploy/manifests/nameserver/deploy.yaml
deployYaml []byte
//go:embed deploy/manifests/nameserver/sa.yaml
saYaml []byte
//go:embed deploy/manifests/nameserver/svc.yaml
svcYaml []byte
deployDeployable = deployable{
kind: "Deployment",
updateObj: func(ctx context.Context, cfg *deployConfig, kubeClient client.Client) error {
d := new(appsv1.Deployment)
if err := yaml.Unmarshal(deployYaml, &d); err != nil {
return fmt.Errorf("error unmarshalling Deployment yaml: %w", err)
}
d.Spec.Template.Spec.Containers[0].Image = fmt.Sprintf("%s:%s", cfg.imageRepo, cfg.imageTag)
d.ObjectMeta.Namespace = cfg.namespace
d.ObjectMeta.Labels = cfg.labels
d.ObjectMeta.OwnerReferences = cfg.ownerRefs
updateF := func(oldD *appsv1.Deployment) {
oldD.Spec = d.Spec
}
_, err := createOrUpdate[appsv1.Deployment](ctx, kubeClient, cfg.namespace, d, updateF)
return err
},
}
saDeployable = deployable{
kind: "ServiceAccount",
updateObj: func(ctx context.Context, cfg *deployConfig, kubeClient client.Client) error {
sa := new(corev1.ServiceAccount)
if err := yaml.Unmarshal(saYaml, &sa); err != nil {
return fmt.Errorf("error unmarshalling ServiceAccount yaml: %w", err)
}
sa.ObjectMeta.Labels = cfg.labels
sa.ObjectMeta.OwnerReferences = cfg.ownerRefs
sa.ObjectMeta.Namespace = cfg.namespace
_, err := createOrUpdate(ctx, kubeClient, cfg.namespace, sa, func(*corev1.ServiceAccount) {})
return err
},
}
svcDeployable = deployable{
kind: "Service",
updateObj: func(ctx context.Context, cfg *deployConfig, kubeClient client.Client) error {
svc := new(corev1.Service)
if err := yaml.Unmarshal(svcYaml, &svc); err != nil {
return fmt.Errorf("error unmarshalling Service yaml: %w", err)
}
svc.ObjectMeta.Labels = cfg.labels
svc.ObjectMeta.OwnerReferences = cfg.ownerRefs
svc.ObjectMeta.Namespace = cfg.namespace
_, err := createOrUpdate[corev1.Service](ctx, kubeClient, cfg.namespace, svc, func(*corev1.Service) {})
return err
},
}
cmDeployable = deployable{
kind: "ConfigMap",
updateObj: func(ctx context.Context, cfg *deployConfig, kubeClient client.Client) error {
cm := new(corev1.ConfigMap)
if err := yaml.Unmarshal(cmYaml, &cm); err != nil {
return fmt.Errorf("error unmarshalling ConfigMap yaml: %w", err)
}
cm.ObjectMeta.Labels = cfg.labels
cm.ObjectMeta.OwnerReferences = cfg.ownerRefs
cm.ObjectMeta.Namespace = cfg.namespace
_, err := createOrUpdate[corev1.ConfigMap](ctx, kubeClient, cfg.namespace, cm, func(cm *corev1.ConfigMap) {})
return err
},
}
)
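// Extending the nameserver deployment with another component would follow the
// same pattern (sketch; "pdbDeployable" and its manifest are hypothetical):
//
//	pdbDeployable = deployable{
//		kind: "PodDisruptionBudget",
//		updateObj: func(ctx context.Context, cfg *deployConfig, c client.Client) error {
//			// Unmarshal the embedded manifest, set labels/owner refs/namespace,
//			// then createOrUpdate it, mirroring the deployables above.
//			return nil
//		},
//	}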

View File

@ -0,0 +1,127 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
// tailscale-operator provides a way to expose services running in a Kubernetes
// cluster to your Tailnet and to make Tailscale nodes available to cluster
// workloads
package main
import (
"encoding/json"
"testing"
"time"
"go.uber.org/zap"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
"sigs.k8s.io/yaml"
operatorutils "tailscale.com/k8s-operator"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/tstest"
"tailscale.com/util/mak"
)
func TestNameserverReconciler(t *testing.T) {
dnsCfg := &tsapi.DNSConfig{
TypeMeta: metav1.TypeMeta{Kind: "DNSConfig", APIVersion: "tailscale.com/v1alpha1"},
ObjectMeta: metav1.ObjectMeta{
Name: "test",
},
Spec: tsapi.DNSConfigSpec{
Nameserver: &tsapi.Nameserver{
Image: &tsapi.Image{
Repo: "test",
Tag: "v0.0.1",
},
},
},
}
fc := fake.NewClientBuilder().
WithScheme(tsapi.GlobalScheme).
WithObjects(dnsCfg).
WithStatusSubresource(dnsCfg).
Build()
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
cl := tstest.NewClock(tstest.ClockOpts{})
nr := &NameserverReconciler{
Client: fc,
clock: cl,
logger: zl.Sugar(),
tsNamespace: "tailscale",
}
expectReconciled(t, nr, "", "test")
// Verify that nameserver Deployment has been created and has the expected fields.
wantsDeploy := &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{Name: "nameserver", Namespace: "tailscale"}, TypeMeta: metav1.TypeMeta{Kind: "Deployment", APIVersion: appsv1.SchemeGroupVersion.Identifier()}}
if err := yaml.Unmarshal(deployYaml, wantsDeploy); err != nil {
t.Fatalf("unmarshalling yaml: %v", err)
}
dnsCfgOwnerRef := metav1.NewControllerRef(dnsCfg, tsapi.SchemeGroupVersion.WithKind("DNSConfig"))
wantsDeploy.OwnerReferences = []metav1.OwnerReference{*dnsCfgOwnerRef}
wantsDeploy.Spec.Template.Spec.Containers[0].Image = "test:v0.0.1"
wantsDeploy.Namespace = "tailscale"
labels := nameserverResourceLabels("test", "tailscale")
wantsDeploy.ObjectMeta.Labels = labels
expectEqual(t, fc, wantsDeploy, nil)
// Verify that the DNSConfig advertises the nameserver's Service IP address,
// has the ready status condition and tailscale finalizer.
mustUpdate(t, fc, "tailscale", "nameserver", func(svc *corev1.Service) {
svc.Spec.ClusterIP = "1.2.3.4"
})
expectReconciled(t, nr, "", "test")
dnsCfg.Status.Nameserver = &tsapi.NameserverStatus{
IP: "1.2.3.4",
}
dnsCfg.Finalizers = []string{FinalizerName}
dnsCfg.Status.Conditions = append(dnsCfg.Status.Conditions, tsapi.ConnectorCondition{
Type: tsapi.NameserverReady,
Status: metav1.ConditionTrue,
Reason: reasonNameserverCreated,
Message: reasonNameserverCreated,
LastTransitionTime: &metav1.Time{Time: cl.Now().Truncate(time.Second)},
})
expectEqual(t, fc, dnsCfg, nil)
// Verify that the nameserver image gets updated to match the DNSConfig spec.
mustUpdate(t, fc, "", "test", func(dnsCfg *tsapi.DNSConfig) {
dnsCfg.Spec.Nameserver.Image.Tag = "v0.0.2"
})
expectReconciled(t, nr, "", "test")
wantsDeploy.Spec.Template.Spec.Containers[0].Image = "test:v0.0.2"
expectEqual(t, fc, wantsDeploy, nil)
// Verify that when another actor sets ConfigMap data, it does not get
// overwritten by the nameserver reconciler.
dnsRecords := &operatorutils.Records{Version: "v1alpha1", IP4: map[string][]string{"foo.ts.net": {"1.2.3.4"}}}
bs, err := json.Marshal(dnsRecords)
if err != nil {
t.Fatalf("error marshalling ConfigMap contents: %v", err)
}
mustUpdate(t, fc, "tailscale", "dnsrecords", func(cm *corev1.ConfigMap) {
mak.Set(&cm.Data, "records.json", string(bs))
})
expectReconciled(t, nr, "", "test")
wantCm := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "dnsrecords",
Namespace: "tailscale", Labels: labels, OwnerReferences: []metav1.OwnerReference{*dnsCfgOwnerRef}},
TypeMeta: metav1.TypeMeta{Kind: "ConfigMap", APIVersion: "v1"},
Data: map[string]string{"records.json": string(bs)},
}
expectEqual(t, fc, wantCm, nil)
// Verify that if dnsconfig.spec.nameserver.image.{repo,tag} are unset,
// the nameserver image defaults to tailscale/k8s-nameserver:unstable.
mustUpdate(t, fc, "", "test", func(dnsCfg *tsapi.DNSConfig) {
dnsCfg.Spec.Nameserver.Image = nil
})
expectReconciled(t, nr, "", "test")
wantsDeploy.Spec.Template.Spec.Containers[0].Image = "tailscale/k8s-nameserver:unstable"
expectEqual(t, fc, wantsDeploy, nil)
}

View File

@ -20,6 +20,7 @@ import (
"golang.org/x/oauth2/clientcredentials"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
discoveryv1 "k8s.io/api/discovery/v1"
networkingv1 "k8s.io/api/networking/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/rest"
@ -47,21 +48,25 @@ import (
// Generate static manifests for deploying Tailscale operator on Kubernetes from the operator's Helm chart.
//go:generate go run tailscale.com/cmd/k8s-operator/generate staticmanifests
// Generate Connector CustomResourceDefinition yaml from its Go types.
// Generate Connector, ProxyClass and DNSConfig CustomResourceDefinition yamls from their Go types.
//go:generate go run sigs.k8s.io/controller-tools/cmd/controller-gen crd schemapatch:manifests=./deploy/crds output:dir=./deploy/crds paths=../../k8s-operator/apis/...
// Generate CRD docs from the yamls
//go:generate go run fybrik.io/crdoc --resources=./deploy/crds --output=../../k8s-operator/api.md
func main() {
// Required to use our client API. We're fine with the instability since the
// client lives in the same repo as this code.
tailscale.I_Acknowledge_This_API_Is_Unstable = true
var (
tsNamespace = defaultEnv("OPERATOR_NAMESPACE", "")
tslogging = defaultEnv("OPERATOR_LOGGING", "info")
image = defaultEnv("PROXY_IMAGE", "tailscale/tailscale:latest")
priorityClassName = defaultEnv("PROXY_PRIORITY_CLASS_NAME", "")
tags = defaultEnv("PROXY_TAGS", "tag:k8s")
tsFirewallMode = defaultEnv("PROXY_FIREWALL_MODE", "")
tsNamespace = defaultEnv("OPERATOR_NAMESPACE", "")
tslogging = defaultEnv("OPERATOR_LOGGING", "info")
image = defaultEnv("PROXY_IMAGE", "tailscale/tailscale:latest")
priorityClassName = defaultEnv("PROXY_PRIORITY_CLASS_NAME", "")
tags = defaultEnv("PROXY_TAGS", "tag:k8s")
tsFirewallMode = defaultEnv("PROXY_FIREWALL_MODE", "")
isDefaultLoadBalancer = defaultBool("OPERATOR_DEFAULT_LOAD_BALANCER", false)
)
var opts []kzap.Opts
@ -90,9 +95,19 @@ func main() {
defer s.Close()
restConfig := config.GetConfigOrDie()
maybeLaunchAPIServerProxy(zlog, restConfig, s, mode)
// TODO (irbekrm): gather the reconciler options into an opts struct
// rather than passing a million of them in one by one.
runReconcilers(zlog, s, tsNamespace, restConfig, tsClient, image, priorityClassName, tags, tsFirewallMode)
rOpts := reconcilerOpts{
log: zlog,
tsServer: s,
tsClient: tsClient,
tailscaleNamespace: tsNamespace,
restConfig: restConfig,
proxyImage: image,
proxyPriorityClassName: priorityClassName,
proxyActAsDefaultLoadBalancer: isDefaultLoadBalancer,
proxyTags: tags,
proxyFirewallMode: tsFirewallMode,
}
runReconcilers(rOpts)
}
// initTSNet initializes the tsnet.Server and logs in to Tailscale. It uses the
@ -200,11 +215,8 @@ waitOnline:
// runReconcilers starts the controller-runtime manager and registers the
// ServiceReconciler. It blocks forever.
func runReconcilers(zlog *zap.SugaredLogger, s *tsnet.Server, tsNamespace string, restConfig *rest.Config, tsClient *tailscale.Client, image, priorityClassName, tags, tsFirewallMode string) {
var (
isDefaultLoadBalancer = defaultBool("OPERATOR_DEFAULT_LOAD_BALANCER", false)
)
startlog := zlog.Named("startReconcilers")
func runReconcilers(opts reconcilerOpts) {
startlog := opts.log.Named("startReconcilers")
// For secrets and statefulsets, we only get permission to touch the objects
// in the controller's own namespace. This cannot be expressed by
// .Watches(...) below, instead you have to add a per-type field selector to
@ -212,7 +224,7 @@ func runReconcilers(zlog *zap.SugaredLogger, s *tsnet.Server, tsNamespace string
// implicitly filter what parts of the world the builder code gets to see at
// all.
nsFilter := cache.ByObject{
Field: client.InNamespace(tsNamespace).AsSelector(),
Field: client.InNamespace(opts.tailscaleNamespace).AsSelector(),
}
mgrOpts := manager.Options{
// TODO (irbekrm): stricter filtering what we watch/cache/call
@ -220,30 +232,37 @@ func runReconcilers(zlog *zap.SugaredLogger, s *tsnet.Server, tsNamespace string
// resources that we GET via the controller manager's client.
Cache: cache.Options{
ByObject: map[client.Object]cache.ByObject{
&corev1.Secret{}: nsFilter,
&appsv1.StatefulSet{}: nsFilter,
&corev1.Secret{}: nsFilter,
&corev1.ServiceAccount{}: nsFilter,
&corev1.ConfigMap{}: nsFilter,
&appsv1.StatefulSet{}: nsFilter,
&appsv1.Deployment{}: nsFilter,
&discoveryv1.EndpointSlice{}: nsFilter,
},
},
Scheme: tsapi.GlobalScheme,
}
mgr, err := manager.New(restConfig, mgrOpts)
mgr, err := manager.New(opts.restConfig, mgrOpts)
if err != nil {
startlog.Fatalf("could not create manager: %v", err)
}
svcFilter := handler.EnqueueRequestsFromMapFunc(serviceHandler)
svcChildFilter := handler.EnqueueRequestsFromMapFunc(managedResourceHandlerForType("svc"))
// If a ProxyClass changes, enqueue all Services labeled with that
// ProxyClass's name.
proxyClassFilterForSvc := handler.EnqueueRequestsFromMapFunc(proxyClassHandlerForSvc(mgr.GetClient(), startlog))
eventRecorder := mgr.GetEventRecorderFor("tailscale-operator")
ssr := &tailscaleSTSReconciler{
Client: mgr.GetClient(),
tsnetServer: s,
tsClient: tsClient,
defaultTags: strings.Split(tags, ","),
operatorNamespace: tsNamespace,
proxyImage: image,
proxyPriorityClassName: priorityClassName,
tsFirewallMode: tsFirewallMode,
tsnetServer: opts.tsServer,
tsClient: opts.tsClient,
defaultTags: strings.Split(opts.proxyTags, ","),
operatorNamespace: opts.tailscaleNamespace,
proxyImage: opts.proxyImage,
proxyPriorityClassName: opts.proxyPriorityClassName,
tsFirewallMode: opts.proxyFirewallMode,
}
err = builder.
ControllerManagedBy(mgr).
@ -251,47 +270,118 @@ func runReconcilers(zlog *zap.SugaredLogger, s *tsnet.Server, tsNamespace string
Watches(&corev1.Service{}, svcFilter).
Watches(&appsv1.StatefulSet{}, svcChildFilter).
Watches(&corev1.Secret{}, svcChildFilter).
Watches(&tsapi.ProxyClass{}, proxyClassFilterForSvc).
Complete(&ServiceReconciler{
ssr: ssr,
Client: mgr.GetClient(),
logger: zlog.Named("service-reconciler"),
isDefaultLoadBalancer: isDefaultLoadBalancer,
logger: opts.log.Named("service-reconciler"),
isDefaultLoadBalancer: opts.proxyActAsDefaultLoadBalancer,
recorder: eventRecorder,
tsNamespace: opts.tailscaleNamespace,
})
if err != nil {
startlog.Fatalf("could not create controller: %v", err)
startlog.Fatalf("could not create service reconciler: %v", err)
}
ingressChildFilter := handler.EnqueueRequestsFromMapFunc(managedResourceHandlerForType("ingress"))
// If a ProxyClass changes, enqueue all Ingresses labeled with that
// ProxyClass's name.
proxyClassFilterForIngress := handler.EnqueueRequestsFromMapFunc(proxyClassHandlerForIngress(mgr.GetClient(), startlog))
// Enqueue Ingress if a managed Service or backend Service associated with a tailscale Ingress changes.
svcHandlerForIngress := handler.EnqueueRequestsFromMapFunc(serviceHandlerForIngress(mgr.GetClient(), startlog))
err = builder.
ControllerManagedBy(mgr).
For(&networkingv1.Ingress{}).
Watches(&appsv1.StatefulSet{}, ingressChildFilter).
Watches(&corev1.Secret{}, ingressChildFilter).
Watches(&corev1.Service{}, ingressChildFilter).
Watches(&corev1.Service{}, svcHandlerForIngress).
Watches(&tsapi.ProxyClass{}, proxyClassFilterForIngress).
Complete(&IngressReconciler{
ssr: ssr,
recorder: eventRecorder,
Client: mgr.GetClient(),
logger: zlog.Named("ingress-reconciler"),
logger: opts.log.Named("ingress-reconciler"),
})
if err != nil {
startlog.Fatalf("could not create controller: %v", err)
startlog.Fatalf("could not create ingress reconciler: %v", err)
}
connectorFilter := handler.EnqueueRequestsFromMapFunc(managedResourceHandlerForType("connector"))
// If a ProxyClass changes, enqueue all Connectors that have
// .spec.proxyClass set to the name of this ProxyClass.
proxyClassFilterForConnector := handler.EnqueueRequestsFromMapFunc(proxyClassHandlerForConnector(mgr.GetClient(), startlog))
err = builder.ControllerManagedBy(mgr).
For(&tsapi.Connector{}).
Watches(&appsv1.StatefulSet{}, connectorFilter).
Watches(&corev1.Secret{}, connectorFilter).
Watches(&tsapi.ProxyClass{}, proxyClassFilterForConnector).
Complete(&ConnectorReconciler{
ssr: ssr,
recorder: eventRecorder,
Client: mgr.GetClient(),
logger: zlog.Named("connector-reconciler"),
logger: opts.log.Named("connector-reconciler"),
clock: tstime.DefaultClock{},
})
if err != nil {
startlog.Fatal("could not create connector reconciler: %v", err)
startlog.Fatalf("could not create connector reconciler: %v", err)
}
// TODO (irbekrm): switch to metadata-only watches for resources whose
// spec we don't need to inspect to reduce memory consumption.
// https://github.com/kubernetes-sigs/controller-runtime/issues/1159
nameserverFilter := handler.EnqueueRequestsFromMapFunc(managedResourceHandlerForType("nameserver"))
err = builder.ControllerManagedBy(mgr).
For(&tsapi.DNSConfig{}).
Watches(&appsv1.Deployment{}, nameserverFilter).
Watches(&corev1.ConfigMap{}, nameserverFilter).
Watches(&corev1.Service{}, nameserverFilter).
Watches(&corev1.ServiceAccount{}, nameserverFilter).
Complete(&NameserverReconciler{
recorder: eventRecorder,
tsNamespace: opts.tailscaleNamespace,
Client: mgr.GetClient(),
logger: opts.log.Named("nameserver-reconciler"),
clock: tstime.DefaultClock{},
})
if err != nil {
startlog.Fatalf("could not create nameserver reconciler: %v", err)
}
err = builder.ControllerManagedBy(mgr).
For(&tsapi.ProxyClass{}).
Complete(&ProxyClassReconciler{
Client: mgr.GetClient(),
recorder: eventRecorder,
logger: opts.log.Named("proxyclass-reconciler"),
clock: tstime.DefaultClock{},
})
if err != nil {
startlog.Fatal("could not create proxyclass reconciler: %v", err)
}
logger := startlog.Named("dns-records-reconciler-event-handlers")
// On EndpointSlice events, if it is an EndpointSlice for an
// ingress/egress proxy headless Service, reconcile the headless
// Service.
dnsRREpsOpts := handler.EnqueueRequestsFromMapFunc(dnsRecordsReconcilerEndpointSliceHandler)
// On DNSConfig changes, reconcile all headless Services for
// ingress/egress proxies in operator namespace.
dnsRRDNSConfigOpts := handler.EnqueueRequestsFromMapFunc(enqueueAllIngressEgressProxySvcsInNS(opts.tailscaleNamespace, mgr.GetClient(), logger))
// On Service events, if it is an ingress/egress proxy headless Service, reconcile it.
dnsRRServiceOpts := handler.EnqueueRequestsFromMapFunc(dnsRecordsReconcilerServiceHandler)
// On Ingress events, if it is a tailscale Ingress or if tailscale is the default ingress controller, reconcile the proxy
// headless Service.
dnsRRIngressOpts := handler.EnqueueRequestsFromMapFunc(dnsRecordsReconcilerIngressHandler(opts.tailscaleNamespace, opts.proxyActAsDefaultLoadBalancer, mgr.GetClient(), logger))
err = builder.ControllerManagedBy(mgr).
Named("dns-records-reconciler").
Watches(&corev1.Service{}, dnsRRServiceOpts).
Watches(&networkingv1.Ingress{}, dnsRRIngressOpts).
Watches(&discoveryv1.EndpointSlice{}, dnsRREpsOpts).
Watches(&tsapi.DNSConfig{}, dnsRRDNSConfigOpts).
Complete(&dnsRecordsReconciler{
Client: mgr.GetClient(),
tsNamespace: opts.tailscaleNamespace,
logger: opts.log.Named("dns-records-reconciler"),
isDefaultLoadBalancer: opts.proxyActAsDefaultLoadBalancer,
})
if err != nil {
startlog.Fatalf("could not create DNS records reconciler: %v", err)
}
startlog.Infof("Startup complete, operator running, version: %s", version.Long())
if err := mgr.Start(signals.SetupSignalHandler()); err != nil {
@ -299,6 +389,130 @@ func runReconcilers(zlog *zap.SugaredLogger, s *tsnet.Server, tsNamespace string
}
}
type reconcilerOpts struct {
log *zap.SugaredLogger
tsServer *tsnet.Server
tsClient *tailscale.Client
tailscaleNamespace string // namespace in which operator resources will be deployed
restConfig *rest.Config // config for connecting to the kube API server
proxyImage string // <proxy-image-repo>:<proxy-image-tag>
// proxyPriorityClassName is the name of the PriorityClass to be set for proxy Pods. This
// is a legacy mechanism for cluster resource configuration options -
// going forward use ProxyClass.
// https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass
proxyPriorityClassName string
// proxyTags are ACL tags to tag proxy auth keys. Multiple tags should
// be provided as a string with comma-separated tag values. Proxy tags
// default to tag:k8s.
// https://tailscale.com/kb/1085/auth-keys
proxyTags string
// proxyActAsDefaultLoadBalancer determines whether this operator
// instance should act as the default ingress controller when looking at
// Ingress resources with unset ingress.spec.ingressClassName.
// TODO (irbekrm): this setting does not respect the default
// IngressClass.
// https://kubernetes.io/docs/concepts/services-networking/ingress/#default-ingress-class
// We should fix that and preferably integrate with that mechanism as
// well - perhaps make the operator itself create the default
// IngressClass if this is set to true.
proxyActAsDefaultLoadBalancer bool
// proxyFirewallMode determines whether non-userspace proxies should use
// iptables or nftables for firewall configuration. Accepted values are
// iptables, nftables and auto. If set to auto, the proxy will automatically
// determine which mode is supported for a given host (preferring nftables).
// Auto is usually the best choice, unless you want to explicitly set a
// specific mode for debugging purposes.
proxyFirewallMode string
}
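// Illustrative sketch (not part of this diff): one way main() might populate
// reconcilerOpts from the operator's existing flag/env values and hand them to
// runReconcilers. The variable names and the exact runReconcilers signature
// are assumptions for illustration only.
//
//	opts := reconcilerOpts{
//		log:                           zlog,
//		tsServer:                      s,
//		tsClient:                      tsClient,
//		tailscaleNamespace:            tsNamespace,
//		restConfig:                    restConfig,
//		proxyImage:                    image,
//		proxyPriorityClassName:        priorityClassName,
//		proxyTags:                     tags,
//		proxyActAsDefaultLoadBalancer: isDefaultLoadBalancer,
//		proxyFirewallMode:             tsFirewallMode,
//	}
//	runReconcilers(opts)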
// enqueueAllIngressEgressProxySvcsInNS returns a reconcile request for each
// ingress/egress proxy headless Service found in the provided namespace.
func enqueueAllIngressEgressProxySvcsInNS(ns string, cl client.Client, logger *zap.SugaredLogger) handler.MapFunc {
return func(ctx context.Context, _ client.Object) []reconcile.Request {
reqs := make([]reconcile.Request, 0)
// Get all headless Services for proxies configured using Service.
svcProxyLabels := map[string]string{
LabelManaged: "true",
LabelParentType: "svc",
}
svcHeadlessSvcList := &corev1.ServiceList{}
if err := cl.List(ctx, svcHeadlessSvcList, client.InNamespace(ns), client.MatchingLabels(svcProxyLabels)); err != nil {
logger.Errorf("error listing headless Services for tailscale ingress/egress Services in operator namespace: %v", err)
return nil
}
for _, svc := range svcHeadlessSvcList.Items {
reqs = append(reqs, reconcile.Request{NamespacedName: types.NamespacedName{Namespace: svc.Namespace, Name: svc.Name}})
}
// Get all headless Services for proxies configured using Ingress.
ingProxyLabels := map[string]string{
LabelManaged: "true",
LabelParentType: "ingress",
}
ingHeadlessSvcList := &corev1.ServiceList{}
if err := cl.List(ctx, ingHeadlessSvcList, client.InNamespace(ns), client.MatchingLabels(ingProxyLabels)); err != nil {
logger.Errorf("error listing headless Services for tailscale Ingresses in operator namespace: %v", err)
return nil
}
for _, svc := range ingHeadlessSvcList.Items {
reqs = append(reqs, reconcile.Request{NamespacedName: types.NamespacedName{Namespace: svc.Namespace, Name: svc.Name}})
}
return reqs
}
}
// dnsRecordsReconcilerEndpointSliceHandler filters EndpointSlice events for
// which dns-records-reconciler should reconcile a headless Service. The only
// events it should reconcile are those for EndpointSlices associated with
// proxy headless Services.
func dnsRecordsReconcilerEndpointSliceHandler(ctx context.Context, o client.Object) []reconcile.Request {
if !isManagedByType(o, "svc") && !isManagedByType(o, "ingress") {
return nil
}
headlessSvcName, ok := o.GetLabels()[discoveryv1.LabelServiceName] // https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/#ownership
if !ok {
return nil
}
return []reconcile.Request{{NamespacedName: types.NamespacedName{Namespace: o.GetNamespace(), Name: headlessSvcName}}}
}
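// Illustrative sketch (not part of this diff): EndpointSlices are tied back to
// their Service via the standard kubernetes.io/service-name label
// (discoveryv1.LabelServiceName), so an operator-managed slice labeled like the
// one below would map to a reconcile request for the headless Service
// "ts-svc-1-abcd0" in the same namespace. Names are made up for illustration,
// and the exact label set checked by isManagedByType is assumed.
//
//	eps := &discoveryv1.EndpointSlice{
//		ObjectMeta: metav1.ObjectMeta{
//			Name:      "ts-svc-1-abcd0-ipv4",
//			Namespace: "tailscale",
//			Labels: map[string]string{
//				LabelManaged:                 "true",
//				LabelParentType:              "svc",
//				discoveryv1.LabelServiceName: "ts-svc-1-abcd0",
//			},
//		},
//	}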
// dnsRecordsReconcilerServiceHandler filters Service events down to those that
// dns-records-reconciler should act on. If the event is for a cluster
// ingress/cluster egress proxy's headless Service, it returns a reconcile
// request for that Service.
func dnsRecordsReconcilerServiceHandler(ctx context.Context, o client.Object) []reconcile.Request {
if isManagedByType(o, "svc") || isManagedByType(o, "ingress") {
return []reconcile.Request{{NamespacedName: types.NamespacedName{Namespace: o.GetNamespace(), Name: o.GetName()}}}
}
return nil
}
// dnsRecordsReconcilerIngressHandler filters Ingress events to ensure that
// dns-records-reconciler only reconciles on tailscale Ingress events. When an
// event is observed on a tailscale Ingress, it enqueues the proxy headless
// Service for reconciliation.
func dnsRecordsReconcilerIngressHandler(ns string, isDefaultLoadBalancer bool, cl client.Client, logger *zap.SugaredLogger) handler.MapFunc {
return func(ctx context.Context, o client.Object) []reconcile.Request {
ing, ok := o.(*networkingv1.Ingress)
if !ok {
return nil
}
if !isDefaultLoadBalancer && (ing.Spec.IngressClassName == nil || *ing.Spec.IngressClassName != "tailscale") {
return nil
}
proxyResourceLabels := childResourceLabels(ing.Name, ing.Namespace, "ingress")
headlessSvc, err := getSingleObject[corev1.Service](ctx, cl, ns, proxyResourceLabels)
if err != nil {
logger.Errorf("error getting headless Service from parent labels: %v", err)
return nil
}
if headlessSvc == nil {
return nil
}
return []reconcile.Request{{NamespacedName: types.NamespacedName{Namespace: headlessSvc.Namespace, Name: headlessSvc.Name}}}
}
}
type tsClient interface {
CreateKey(ctx context.Context, caps tailscale.KeyCapabilities) (string, *tailscale.Key, error)
DeleteDevice(ctx context.Context, nodeStableID string) error
@ -321,6 +535,7 @@ func parentFromObjectLabels(o client.Object) types.NamespacedName {
Name: ls[LabelParentName],
}
}
func managedResourceHandlerForType(typ string) handler.MapFunc {
return func(_ context.Context, o client.Object) []reconcile.Request {
if !isManagedByType(o, typ) {
@ -330,14 +545,115 @@ func managedResourceHandlerForType(typ string) handler.MapFunc {
{NamespacedName: parentFromObjectLabels(o)},
}
}
}
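// Illustrative sketch (not part of this diff): managedResourceHandlerForType
// relies on the tailscale.com/* parent labels that the operator stamps onto
// the child resources it creates. A Secret labeled like the example below
// would be mapped back to a reconcile request for its parent Service
// ns-1/svc-1; concrete names are made up for illustration.
//
//	secret := &corev1.Secret{
//		ObjectMeta: metav1.ObjectMeta{
//			Name:      "ts-svc-1-abcd0",
//			Namespace: "tailscale",
//			Labels: map[string]string{
//				LabelManaged:         "true",
//				LabelParentType:      "svc",
//				LabelParentName:      "svc-1",
//				LabelParentNamespace: "ns-1",
//			},
//		},
//	}
//	reqs := managedResourceHandlerForType("svc")(ctx, secret)
//	// reqs == []reconcile.Request{{NamespacedName: types.NamespacedName{Namespace: "ns-1", Name: "svc-1"}}}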
// proxyClassHandlerForSvc returns a handler that, for a given ProxyClass,
// returns a list of reconcile requests for all Services labeled with
// tailscale.com/proxy-class: <proxy class name>.
func proxyClassHandlerForSvc(cl client.Client, logger *zap.SugaredLogger) handler.MapFunc {
return func(ctx context.Context, o client.Object) []reconcile.Request {
svcList := new(corev1.ServiceList)
labels := map[string]string{
LabelProxyClass: o.GetName(),
}
if err := cl.List(ctx, svcList, client.MatchingLabels(labels)); err != nil {
logger.Debugf("error listing Services for ProxyClass: %v", err)
return nil
}
reqs := make([]reconcile.Request, 0)
for _, svc := range svcList.Items {
reqs = append(reqs, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&svc)})
}
return reqs
}
}
// proxyClassHandlerForIngress returns a handler that, for a given ProxyClass,
// returns a list of reconcile requests for all Ingresses labeled with
// tailscale.com/proxy-class: <proxy class name>.
func proxyClassHandlerForIngress(cl client.Client, logger *zap.SugaredLogger) handler.MapFunc {
return func(ctx context.Context, o client.Object) []reconcile.Request {
ingList := new(networkingv1.IngressList)
labels := map[string]string{
LabelProxyClass: o.GetName(),
}
if err := cl.List(ctx, ingList, client.MatchingLabels(labels)); err != nil {
logger.Debugf("error listing Ingresses for ProxyClass: %v", err)
return nil
}
reqs := make([]reconcile.Request, 0)
for _, ing := range ingList.Items {
reqs = append(reqs, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&ing)})
}
return reqs
}
}
// proxyClassHandlerForConnector returns a handler that, for a given ProxyClass,
// returns a list of reconcile requests for all Connectors that have
// .spec.proxyClass set to the name of that ProxyClass.
func proxyClassHandlerForConnector(cl client.Client, logger *zap.SugaredLogger) handler.MapFunc {
return func(ctx context.Context, o client.Object) []reconcile.Request {
connList := new(tsapi.ConnectorList)
if err := cl.List(ctx, connList); err != nil {
logger.Debugf("error listing Connectors for ProxyClass: %v", err)
return nil
}
reqs := make([]reconcile.Request, 0)
proxyClassName := o.GetName()
for _, conn := range connList.Items {
if conn.Spec.ProxyClass == proxyClassName {
reqs = append(reqs, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&conn)})
}
}
return reqs
}
}
// serviceHandlerForIngress returns a handler for Service events for the ingress
// reconciler that ensures that if the Service associated with an event is of
// interest to the reconciler, the associated Ingress(es) get reconciled. The
// Services of interest are backend Services for a tailscale Ingress and managed
// Services for a StatefulSet of a proxy configured for a tailscale Ingress.
func serviceHandlerForIngress(cl client.Client, logger *zap.SugaredLogger) handler.MapFunc {
return func(ctx context.Context, o client.Object) []reconcile.Request {
if isManagedByType(o, "ingress") {
ingName := parentFromObjectLabels(o)
return []reconcile.Request{{NamespacedName: ingName}}
}
ingList := networkingv1.IngressList{}
if err := cl.List(ctx, &ingList, client.InNamespace(o.GetNamespace())); err != nil {
logger.Debugf("error listing Ingresses: %v", err)
return nil
}
reqs := make([]reconcile.Request, 0)
for _, ing := range ingList.Items {
if ing.Spec.IngressClassName == nil || *ing.Spec.IngressClassName != tailscaleIngressClassName {
return nil
}
if ing.Spec.DefaultBackend != nil && ing.Spec.DefaultBackend.Service != nil && ing.Spec.DefaultBackend.Service.Name == o.GetName() {
reqs = append(reqs, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&ing)})
}
for _, rule := range ing.Spec.Rules {
if rule.HTTP == nil {
continue
}
for _, path := range rule.HTTP.Paths {
if path.Backend.Service != nil && path.Backend.Service.Name == o.GetName() {
reqs = append(reqs, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&ing)})
}
}
}
}
return reqs
}
}
func serviceHandler(_ context.Context, o client.Object) []reconcile.Request {
if isManagedByType(o, "svc") {
// If this is a Service managed by a Service we want to enqueue its parent
return []reconcile.Request{{NamespacedName: parentFromObjectLabels(o)}}
}
if isManagedResource(o) {
// If this is a Service managed by a resource that is not a Service, we leave it alone
@ -352,7 +668,6 @@ func serviceHandler(_ context.Context, o client.Object) []reconcile.Request {
},
},
}
}
// isMagicDNSName reports whether name is a full tailnet node FQDN (with or


@ -6,16 +6,24 @@
package main
import (
"context"
"fmt"
"testing"
"github.com/google/go-cmp/cmp"
"go.uber.org/zap"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
networkingv1 "k8s.io/api/networking/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/net/dns/resolvconffile"
"tailscale.com/types/ptr"
"tailscale.com/util/dnsname"
"tailscale.com/util/mak"
)
func TestLoadBalancerClass(t *testing.T) {
@ -67,9 +75,9 @@ func TestLoadBalancerClass(t *testing.T) {
clusterTargetIP: "10.20.30.40",
}
expectEqual(t, fc, expectedSecret(t, opts))
expectEqual(t, fc, expectedHeadlessService(shortName))
expectEqual(t, fc, expectedSTS(opts))
expectEqual(t, fc, expectedSecret(t, opts), nil)
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"), nil)
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// Normally the Tailscale proxy pod would come up here and write its info
// into the secret. Simulate that, then verify reconcile again and verify
@ -112,7 +120,7 @@ func TestLoadBalancerClass(t *testing.T) {
},
},
}
expectEqual(t, fc, want)
expectEqual(t, fc, want, nil)
// Turn the service back into a ClusterIP service, which should make the
// operator clean up.
@ -151,7 +159,7 @@ func TestLoadBalancerClass(t *testing.T) {
Type: corev1.ServiceTypeClusterIP,
},
}
expectEqual(t, fc, want)
expectEqual(t, fc, want, nil)
}
func TestTailnetTargetFQDNAnnotation(t *testing.T) {
@ -208,9 +216,9 @@ func TestTailnetTargetFQDNAnnotation(t *testing.T) {
hostname: "default-test",
}
expectEqual(t, fc, expectedSecret(t, o))
expectEqual(t, fc, expectedHeadlessService(shortName))
expectEqual(t, fc, expectedSTS(o))
expectEqual(t, fc, expectedSecret(t, o), nil)
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"), nil)
expectEqual(t, fc, expectedSTS(t, fc, o), removeHashAnnotation)
want := &corev1.Service{
TypeMeta: metav1.TypeMeta{
Kind: "Service",
@ -231,10 +239,10 @@ func TestTailnetTargetFQDNAnnotation(t *testing.T) {
Selector: nil,
},
}
expectEqual(t, fc, want)
expectEqual(t, fc, expectedSecret(t, o))
expectEqual(t, fc, expectedHeadlessService(shortName))
expectEqual(t, fc, expectedSTS(o))
expectEqual(t, fc, want, nil)
expectEqual(t, fc, expectedSecret(t, o), nil)
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"), nil)
expectEqual(t, fc, expectedSTS(t, fc, o), removeHashAnnotation)
// Change the tailscale-target-fqdn annotation which should update the
// StatefulSet
@ -318,9 +326,9 @@ func TestTailnetTargetIPAnnotation(t *testing.T) {
hostname: "default-test",
}
expectEqual(t, fc, expectedSecret(t, o))
expectEqual(t, fc, expectedHeadlessService(shortName))
expectEqual(t, fc, expectedSTS(o))
expectEqual(t, fc, expectedSecret(t, o), nil)
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"), nil)
expectEqual(t, fc, expectedSTS(t, fc, o), removeHashAnnotation)
want := &corev1.Service{
TypeMeta: metav1.TypeMeta{
Kind: "Service",
@ -341,10 +349,10 @@ func TestTailnetTargetIPAnnotation(t *testing.T) {
Selector: nil,
},
}
expectEqual(t, fc, want)
expectEqual(t, fc, expectedSecret(t, o))
expectEqual(t, fc, expectedHeadlessService(shortName))
expectEqual(t, fc, expectedSTS(o))
expectEqual(t, fc, want, nil)
expectEqual(t, fc, expectedSecret(t, o), nil)
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"), nil)
expectEqual(t, fc, expectedSTS(t, fc, o), removeHashAnnotation)
// Change the tailscale-target-ip annotation which should update the
// StatefulSet
@ -425,9 +433,9 @@ func TestAnnotations(t *testing.T) {
clusterTargetIP: "10.20.30.40",
}
expectEqual(t, fc, expectedSecret(t, o))
expectEqual(t, fc, expectedHeadlessService(shortName))
expectEqual(t, fc, expectedSTS(o))
expectEqual(t, fc, expectedSecret(t, o), nil)
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"), nil)
expectEqual(t, fc, expectedSTS(t, fc, o), removeHashAnnotation)
want := &corev1.Service{
TypeMeta: metav1.TypeMeta{
Kind: "Service",
@ -447,7 +455,7 @@ func TestAnnotations(t *testing.T) {
Type: corev1.ServiceTypeClusterIP,
},
}
expectEqual(t, fc, want)
expectEqual(t, fc, want, nil)
// Turn the service back into a ClusterIP service, which should make the
// operator clean up.
@ -479,7 +487,7 @@ func TestAnnotations(t *testing.T) {
Type: corev1.ServiceTypeClusterIP,
},
}
expectEqual(t, fc, want)
expectEqual(t, fc, want, nil)
}
func TestAnnotationIntoLB(t *testing.T) {
@ -533,9 +541,9 @@ func TestAnnotationIntoLB(t *testing.T) {
clusterTargetIP: "10.20.30.40",
}
expectEqual(t, fc, expectedSecret(t, o))
expectEqual(t, fc, expectedHeadlessService(shortName))
expectEqual(t, fc, expectedSTS(o))
expectEqual(t, fc, expectedSecret(t, o), nil)
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"), nil)
expectEqual(t, fc, expectedSTS(t, fc, o), removeHashAnnotation)
// Normally the Tailscale proxy pod would come up here and write its info
// into the secret. Simulate that, since it would have normally happened at
@ -568,7 +576,7 @@ func TestAnnotationIntoLB(t *testing.T) {
Type: corev1.ServiceTypeClusterIP,
},
}
expectEqual(t, fc, want)
expectEqual(t, fc, want, nil)
// Remove Tailscale's annotation, and at the same time convert the service
// into a tailscale LoadBalancer.
@ -579,8 +587,8 @@ func TestAnnotationIntoLB(t *testing.T) {
})
expectReconciled(t, sr, "default", "test")
// None of the proxy machinery should have changed...
expectEqual(t, fc, expectedHeadlessService(shortName))
expectEqual(t, fc, expectedSTS(o))
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"), nil)
expectEqual(t, fc, expectedSTS(t, fc, o), removeHashAnnotation)
// ... but the service should have a LoadBalancer status.
want = &corev1.Service{
@ -612,7 +620,7 @@ func TestAnnotationIntoLB(t *testing.T) {
},
},
}
expectEqual(t, fc, want)
expectEqual(t, fc, want, nil)
}
func TestLBIntoAnnotation(t *testing.T) {
@ -664,9 +672,9 @@ func TestLBIntoAnnotation(t *testing.T) {
clusterTargetIP: "10.20.30.40",
}
expectEqual(t, fc, expectedSecret(t, o))
expectEqual(t, fc, expectedHeadlessService(shortName))
expectEqual(t, fc, expectedSTS(o))
expectEqual(t, fc, expectedSecret(t, o), nil)
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"), nil)
expectEqual(t, fc, expectedSTS(t, fc, o), removeHashAnnotation)
// Normally the Tailscale proxy pod would come up here and write its info
// into the secret. Simulate that, then verify reconcile again and verify
@ -709,7 +717,7 @@ func TestLBIntoAnnotation(t *testing.T) {
},
},
}
expectEqual(t, fc, want)
expectEqual(t, fc, want, nil)
// Turn the service back into a ClusterIP service, but also add the
// tailscale annotation.
@ -728,8 +736,8 @@ func TestLBIntoAnnotation(t *testing.T) {
})
expectReconciled(t, sr, "default", "test")
expectEqual(t, fc, expectedHeadlessService(shortName))
expectEqual(t, fc, expectedSTS(o))
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"), nil)
expectEqual(t, fc, expectedSTS(t, fc, o), removeHashAnnotation)
want = &corev1.Service{
TypeMeta: metav1.TypeMeta{
@ -750,7 +758,7 @@ func TestLBIntoAnnotation(t *testing.T) {
Type: corev1.ServiceTypeClusterIP,
},
}
expectEqual(t, fc, want)
expectEqual(t, fc, want, nil)
}
func TestCustomHostname(t *testing.T) {
@ -805,9 +813,9 @@ func TestCustomHostname(t *testing.T) {
clusterTargetIP: "10.20.30.40",
}
expectEqual(t, fc, expectedSecret(t, o))
expectEqual(t, fc, expectedHeadlessService(shortName))
expectEqual(t, fc, expectedSTS(o))
expectEqual(t, fc, expectedSecret(t, o), nil)
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"), nil)
expectEqual(t, fc, expectedSTS(t, fc, o), removeHashAnnotation)
want := &corev1.Service{
TypeMeta: metav1.TypeMeta{
Kind: "Service",
@ -828,7 +836,7 @@ func TestCustomHostname(t *testing.T) {
Type: corev1.ServiceTypeClusterIP,
},
}
expectEqual(t, fc, want)
expectEqual(t, fc, want, nil)
// Turn the service back into a ClusterIP service, which should make the
// operator clean up.
@ -863,7 +871,7 @@ func TestCustomHostname(t *testing.T) {
Type: corev1.ServiceTypeClusterIP,
},
}
expectEqual(t, fc, want)
expectEqual(t, fc, want, nil)
}
func TestCustomPriorityClassName(t *testing.T) {
@ -920,7 +928,104 @@ func TestCustomPriorityClassName(t *testing.T) {
clusterTargetIP: "10.20.30.40",
}
expectEqual(t, fc, expectedSTS(o))
expectEqual(t, fc, expectedSTS(t, fc, o), removeHashAnnotation)
}
func TestProxyClassForService(t *testing.T) {
// Setup
pc := &tsapi.ProxyClass{
ObjectMeta: metav1.ObjectMeta{Name: "custom-metadata"},
Spec: tsapi.ProxyClassSpec{StatefulSet: &tsapi.StatefulSet{
Labels: map[string]string{"foo": "bar"},
Annotations: map[string]string{"bar.io/foo": "some-val"},
Pod: &tsapi.Pod{Annotations: map[string]string{"foo.io/bar": "some-val"}}}},
}
fc := fake.NewClientBuilder().
WithScheme(tsapi.GlobalScheme).
WithObjects(pc).
WithStatusSubresource(pc).
Build()
ft := &fakeTSClient{}
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
sr := &ServiceReconciler{
Client: fc,
ssr: &tailscaleSTSReconciler{
Client: fc,
tsClient: ft,
defaultTags: []string{"tag:k8s"},
operatorNamespace: "operator-ns",
proxyImage: "tailscale/tailscale",
},
logger: zl.Sugar(),
}
// 1. A new tailscale LoadBalancer Service is created without any
// ProxyClass. Resources get created for it as usual.
mustCreate(t, fc, &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
// The apiserver is supposed to set the UID, but the fake client
// doesn't. So, set it explicitly because other code later depends
// on it being set.
UID: types.UID("1234-UID"),
},
Spec: corev1.ServiceSpec{
ClusterIP: "10.20.30.40",
Type: corev1.ServiceTypeLoadBalancer,
LoadBalancerClass: ptr.To("tailscale"),
},
})
expectReconciled(t, sr, "default", "test")
fullName, shortName := findGenName(t, fc, "default", "test", "svc")
opts := configOpts{
stsName: shortName,
secretName: fullName,
namespace: "default",
parentType: "svc",
hostname: "default-test",
clusterTargetIP: "10.20.30.40",
}
expectEqual(t, fc, expectedSecret(t, opts), nil)
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"), nil)
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// 2. The Service gets updated with tailscale.com/proxy-class label
// pointing at the 'custom-metadata' ProxyClass. The ProxyClass is not
// yet ready, so no changes are actually applied to the proxy resources.
mustUpdate(t, fc, "default", "test", func(svc *corev1.Service) {
mak.Set(&svc.Labels, LabelProxyClass, "custom-metadata")
})
expectReconciled(t, sr, "default", "test")
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// 3. ProxyClass is set to Ready, the Service gets reconciled by the
// services-reconciler and the customization from the ProxyClass is
// applied to the proxy resources.
mustUpdateStatus(t, fc, "", "custom-metadata", func(pc *tsapi.ProxyClass) {
pc.Status = tsapi.ProxyClassStatus{
Conditions: []tsapi.ConnectorCondition{{
Status: metav1.ConditionTrue,
Type: tsapi.ProxyClassready,
ObservedGeneration: pc.Generation,
}}}
})
opts.proxyClass = pc.Name
expectReconciled(t, sr, "default", "test")
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// 4. tailscale.com/proxy-class label is removed from the Service, the
// configuration from the ProxyClass is removed from the cluster
// resources.
mustUpdate(t, fc, "default", "test", func(svc *corev1.Service) {
delete(svc.Labels, LabelProxyClass)
})
opts.proxyClass = ""
expectReconciled(t, sr, "default", "test")
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
}
func TestDefaultLoadBalancer(t *testing.T) {
@ -964,7 +1069,7 @@ func TestDefaultLoadBalancer(t *testing.T) {
fullName, shortName := findGenName(t, fc, "default", "test", "svc")
expectEqual(t, fc, expectedHeadlessService(shortName))
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"), nil)
o := configOpts{
stsName: shortName,
secretName: fullName,
@ -973,7 +1078,8 @@ func TestDefaultLoadBalancer(t *testing.T) {
hostname: "default-test",
clusterTargetIP: "10.20.30.40",
}
expectEqual(t, fc, expectedSTS(o))
expectEqual(t, fc, expectedSTS(t, fc, o), removeHashAnnotation)
}
func TestProxyFirewallMode(t *testing.T) {
@ -1026,10 +1132,70 @@ func TestProxyFirewallMode(t *testing.T) {
firewallMode: "nftables",
clusterTargetIP: "10.20.30.40",
}
expectEqual(t, fc, expectedSTS(o))
expectEqual(t, fc, expectedSTS(t, fc, o), removeHashAnnotation)
}
func TestTailscaledConfigfileHash(t *testing.T) {
fc := fake.NewFakeClient()
ft := &fakeTSClient{}
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
sr := &ServiceReconciler{
Client: fc,
ssr: &tailscaleSTSReconciler{
Client: fc,
tsClient: ft,
defaultTags: []string{"tag:k8s"},
operatorNamespace: "operator-ns",
proxyImage: "tailscale/tailscale",
},
logger: zl.Sugar(),
isDefaultLoadBalancer: true,
}
// Create a service that we should manage, and check that the initial round
// of objects looks right.
mustCreate(t, fc, &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
// The apiserver is supposed to set the UID, but the fake client
// doesn't. So, set it explicitly because other code later depends
// on it being set.
UID: types.UID("1234-UID"),
},
Spec: corev1.ServiceSpec{
ClusterIP: "10.20.30.40",
Type: corev1.ServiceTypeLoadBalancer,
},
})
expectReconciled(t, sr, "default", "test")
fullName, shortName := findGenName(t, fc, "default", "test", "svc")
o := configOpts{
stsName: shortName,
secretName: fullName,
namespace: "default",
parentType: "svc",
hostname: "default-test",
clusterTargetIP: "10.20.30.40",
confFileHash: "e09bededa0379920141cbd0b0dbdf9b8b66545877f9e8397423f5ce3e1ba439e",
}
expectEqual(t, fc, expectedSTS(t, fc, o), nil)
// 2. Hostname gets changed, configfile is updated and a new hash value
// is produced.
mustUpdate(t, fc, "default", "test", func(svc *corev1.Service) {
mak.Set(&svc.Annotations, AnnotationHostname, "another-test")
})
o.hostname = "another-test"
o.confFileHash = "5d754cf55463135ee34aa9821f2fd8483b53eb0570c3740c84a086304f427684"
expectReconciled(t, sr, "default", "test")
expectEqual(t, fc, expectedSTS(t, fc, o), nil)
}
func Test_isMagicDNSName(t *testing.T) {
tests := []struct {
in string
@ -1056,3 +1222,279 @@ func Test_isMagicDNSName(t *testing.T) {
})
}
}
func Test_serviceHandlerForIngress(t *testing.T) {
fc := fake.NewFakeClient()
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
// 1. An event on a headless Service for a tailscale Ingress results in
// the Ingress being reconciled.
mustCreate(t, fc, &networkingv1.Ingress{
ObjectMeta: metav1.ObjectMeta{
Name: "ing-1",
Namespace: "ns-1",
},
Spec: networkingv1.IngressSpec{IngressClassName: ptr.To(tailscaleIngressClassName)},
})
svc1 := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "headless-1",
Namespace: "tailscale",
Labels: map[string]string{
LabelManaged: "true",
LabelParentName: "ing-1",
LabelParentNamespace: "ns-1",
LabelParentType: "ingress",
},
},
}
mustCreate(t, fc, svc1)
wantReqs := []reconcile.Request{{NamespacedName: types.NamespacedName{Namespace: "ns-1", Name: "ing-1"}}}
gotReqs := serviceHandlerForIngress(fc, zl.Sugar())(context.Background(), svc1)
if diff := cmp.Diff(gotReqs, wantReqs); diff != "" {
t.Fatalf("unexpected reconcile requests (-got +want):\n%s", diff)
}
// 2. An event on a Service that is the default backend for a tailscale
// Ingress results in the Ingress being reconciled.
mustCreate(t, fc, &networkingv1.Ingress{
ObjectMeta: metav1.ObjectMeta{
Name: "ing-2",
Namespace: "ns-2",
},
Spec: networkingv1.IngressSpec{
DefaultBackend: &networkingv1.IngressBackend{
Service: &networkingv1.IngressServiceBackend{Name: "def-backend"},
},
IngressClassName: ptr.To(tailscaleIngressClassName),
},
})
backendSvc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "def-backend",
Namespace: "ns-2",
},
}
mustCreate(t, fc, backendSvc)
wantReqs = []reconcile.Request{{NamespacedName: types.NamespacedName{Namespace: "ns-2", Name: "ing-2"}}}
gotReqs = serviceHandlerForIngress(fc, zl.Sugar())(context.Background(), backendSvc)
if diff := cmp.Diff(gotReqs, wantReqs); diff != "" {
t.Fatalf("unexpected reconcile requests (-got +want):\n%s", diff)
}
// 3. An event on a Service that is one of the non-default backends for
// a tailscale Ingress results in the Ingress being reconciled.
mustCreate(t, fc, &networkingv1.Ingress{
ObjectMeta: metav1.ObjectMeta{
Name: "ing-3",
Namespace: "ns-3",
},
Spec: networkingv1.IngressSpec{
IngressClassName: ptr.To(tailscaleIngressClassName),
Rules: []networkingv1.IngressRule{{IngressRuleValue: networkingv1.IngressRuleValue{HTTP: &networkingv1.HTTPIngressRuleValue{
Paths: []networkingv1.HTTPIngressPath{
{Backend: networkingv1.IngressBackend{Service: &networkingv1.IngressServiceBackend{Name: "backend"}}}},
}}}},
},
})
backendSvc2 := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "backend",
Namespace: "ns-3",
},
}
mustCreate(t, fc, backendSvc2)
wantReqs = []reconcile.Request{{NamespacedName: types.NamespacedName{Namespace: "ns-3", Name: "ing-3"}}}
gotReqs = serviceHandlerForIngress(fc, zl.Sugar())(context.Background(), backendSvc2)
if diff := cmp.Diff(gotReqs, wantReqs); diff != "" {
t.Fatalf("unexpected reconcile requests (-got +want):\n%s", diff)
}
// 4. An event on a Service that is a backend for an Ingress that is not
// tailscale Ingress does not result in an Ingress reconcile.
mustCreate(t, fc, &networkingv1.Ingress{
ObjectMeta: metav1.ObjectMeta{
Name: "ing-4",
Namespace: "ns-4",
},
Spec: networkingv1.IngressSpec{
Rules: []networkingv1.IngressRule{{IngressRuleValue: networkingv1.IngressRuleValue{HTTP: &networkingv1.HTTPIngressRuleValue{
Paths: []networkingv1.HTTPIngressPath{
{Backend: networkingv1.IngressBackend{Service: &networkingv1.IngressServiceBackend{Name: "non-ts-backend"}}}},
}}}},
},
})
nonTSBackend := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "non-ts-backend",
Namespace: "ns-4",
},
}
mustCreate(t, fc, nonTSBackend)
gotReqs = serviceHandlerForIngress(fc, zl.Sugar())(context.Background(), nonTSBackend)
if len(gotReqs) > 0 {
t.Errorf("unexpected reconcile request for a Service that does not belong to a Tailscale Ingress: %#+v\n", gotReqs)
}
// 5. An event on a Service not related to any Ingress does not result
// in an Ingress reconcile.
someSvc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "some-svc",
Namespace: "ns-4",
},
}
mustCreate(t, fc, someSvc)
gotReqs = serviceHandlerForIngress(fc, zl.Sugar())(context.Background(), someSvc)
if len(gotReqs) > 0 {
t.Errorf("unexpected reconcile request for a Service that does not belong to any Ingress: %#+v\n", gotReqs)
}
}
func Test_clusterDomainFromResolverConf(t *testing.T) {
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
tests := []struct {
name string
conf *resolvconffile.Config
namespace string
want string
}{
{
name: "success- custom domain",
conf: &resolvconffile.Config{
SearchDomains: []dnsname.FQDN{toFQDN(t, "foo.svc.department.org.io"), toFQDN(t, "svc.department.org.io"), toFQDN(t, "department.org.io")},
},
namespace: "foo",
want: "department.org.io",
},
{
name: "success- default domain",
conf: &resolvconffile.Config{
SearchDomains: []dnsname.FQDN{toFQDN(t, "foo.svc.cluster.local."), toFQDN(t, "svc.cluster.local."), toFQDN(t, "cluster.local.")},
},
namespace: "foo",
want: "cluster.local",
},
{
name: "only two search domains found",
conf: &resolvconffile.Config{
SearchDomains: []dnsname.FQDN{toFQDN(t, "svc.department.org.io"), toFQDN(t, "department.org.io")},
},
namespace: "foo",
want: "cluster.local",
},
{
name: "first search domain does not match the expected structure",
conf: &resolvconffile.Config{
SearchDomains: []dnsname.FQDN{toFQDN(t, "foo.bar.department.org.io"), toFQDN(t, "svc.department.org.io"), toFQDN(t, "some.other.fqdn")},
},
namespace: "foo",
want: "cluster.local",
},
{
name: "second search domain does not match the expected structure",
conf: &resolvconffile.Config{
SearchDomains: []dnsname.FQDN{toFQDN(t, "foo.svc.department.org.io"), toFQDN(t, "foo.department.org.io"), toFQDN(t, "some.other.fqdn")},
},
namespace: "foo",
want: "cluster.local",
},
{
name: "third search domain does not match the expected structure",
conf: &resolvconffile.Config{
SearchDomains: []dnsname.FQDN{toFQDN(t, "foo.svc.department.org.io"), toFQDN(t, "svc.department.org.io"), toFQDN(t, "some.other.fqdn")},
},
namespace: "foo",
want: "cluster.local",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := clusterDomainFromResolverConf(tt.conf, tt.namespace, zl.Sugar()); got != tt.want {
t.Errorf("clusterDomainFromResolverConf() = %v, want %v", got, tt.want)
}
})
}
}
func Test_externalNameService(t *testing.T) {
fc := fake.NewFakeClient()
ft := &fakeTSClient{}
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
// 1. An ExternalName Service that should be exposed via Tailscale gets
// created.
sr := &ServiceReconciler{
Client: fc,
ssr: &tailscaleSTSReconciler{
Client: fc,
tsClient: ft,
defaultTags: []string{"tag:k8s"},
operatorNamespace: "operator-ns",
proxyImage: "tailscale/tailscale",
},
logger: zl.Sugar(),
}
// 1. Create an ExternalName Service that we should manage, and check that the initial round
// of objects looks right.
mustCreate(t, fc, &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
// The apiserver is supposed to set the UID, but the fake client
// doesn't. So, set it explicitly because other code later depends
// on it being set.
UID: types.UID("1234-UID"),
Annotations: map[string]string{
AnnotationExpose: "true",
},
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeExternalName,
ExternalName: "foo.com",
},
})
expectReconciled(t, sr, "default", "test")
fullName, shortName := findGenName(t, fc, "default", "test", "svc")
opts := configOpts{
stsName: shortName,
secretName: fullName,
namespace: "default",
parentType: "svc",
hostname: "default-test",
clusterTargetDNS: "foo.com",
}
expectEqual(t, fc, expectedSecret(t, opts), nil)
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"), nil)
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// 2. Change the ExternalName and verify that changes get propagated.
mustUpdate(t, fc, "default", "test", func(s *corev1.Service) {
s.Spec.ExternalName = "bar.com"
})
expectReconciled(t, sr, "default", "test")
opts.clusterTargetDNS = "bar.com"
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
}
func toFQDN(t *testing.T, s string) dnsname.FQDN {
t.Helper()
fqdn, err := dnsname.ToFQDN(s)
if err != nil {
t.Fatalf("error coverting %q to dnsname.FQDN: %v", s, err)
}
return fqdn
}


@ -0,0 +1,122 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
package main
import (
"context"
"fmt"
"strings"
"go.uber.org/zap"
corev1 "k8s.io/api/core/v1"
apiequality "k8s.io/apimachinery/pkg/api/equality"
apierrors "k8s.io/apimachinery/pkg/api/errors"
apivalidation "k8s.io/apimachinery/pkg/api/validation"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
metavalidation "k8s.io/apimachinery/pkg/apis/meta/v1/validation"
"k8s.io/apimachinery/pkg/util/validation/field"
"k8s.io/client-go/tools/record"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
tsoperator "tailscale.com/k8s-operator"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/tstime"
)
const (
reasonProxyClassInvalid = "ProxyClassInvalid"
reasonProxyClassValid = "ProxyClassValid"
reasonCustomTSEnvVar = "CustomTSEnvVar"
messageProxyClassInvalid = "ProxyClass is not valid: %v"
messageCustomTSEnvVar = "ProxyClass overrides the default value for %s env var for %s container. Running with custom values for Tailscale env vars is not recommended and might break in the future."
)
type ProxyClassReconciler struct {
client.Client
recorder record.EventRecorder
logger *zap.SugaredLogger
clock tstime.Clock
}
func (pcr *ProxyClassReconciler) Reconcile(ctx context.Context, req reconcile.Request) (res reconcile.Result, err error) {
logger := pcr.logger.With("ProxyClass", req.Name)
logger.Debugf("starting reconcile")
defer logger.Debugf("reconcile finished")
pc := new(tsapi.ProxyClass)
err = pcr.Get(ctx, req.NamespacedName, pc)
if apierrors.IsNotFound(err) {
logger.Debugf("ProxyClass not found, assuming it was deleted")
return reconcile.Result{}, nil
} else if err != nil {
return reconcile.Result{}, fmt.Errorf("failed to get tailscale.com ProxyClass: %w", err)
}
if !pc.DeletionTimestamp.IsZero() {
logger.Debugf("ProxyClass is being deleted, do nothing")
return reconcile.Result{}, nil
}
oldPCStatus := pc.Status.DeepCopy()
if errs := pcr.validate(pc); errs != nil {
msg := fmt.Sprintf(messageProxyClassInvalid, errs.ToAggregate().Error())
pcr.recorder.Event(pc, corev1.EventTypeWarning, reasonProxyClassInvalid, msg)
tsoperator.SetProxyClassCondition(pc, tsapi.ProxyClassready, metav1.ConditionFalse, reasonProxyClassInvalid, msg, pc.Generation, pcr.clock, logger)
} else {
tsoperator.SetProxyClassCondition(pc, tsapi.ProxyClassready, metav1.ConditionTrue, reasonProxyClassValid, reasonProxyClassValid, pc.Generation, pcr.clock, logger)
}
if !apiequality.Semantic.DeepEqual(oldPCStatus, pc.Status) {
if err := pcr.Client.Status().Update(ctx, pc); err != nil {
logger.Errorf("error updating ProxyClass status: %v", err)
return reconcile.Result{}, err
}
}
return reconcile.Result{}, nil
}
func (a *ProxyClassReconciler) validate(pc *tsapi.ProxyClass) (violations field.ErrorList) {
if sts := pc.Spec.StatefulSet; sts != nil {
if len(sts.Labels) > 0 {
if errs := metavalidation.ValidateLabels(sts.Labels, field.NewPath(".spec.statefulSet.labels")); errs != nil {
violations = append(violations, errs...)
}
}
if len(sts.Annotations) > 0 {
if errs := apivalidation.ValidateAnnotations(sts.Annotations, field.NewPath(".spec.statefulSet.annotations")); errs != nil {
violations = append(violations, errs...)
}
}
if pod := sts.Pod; pod != nil {
if len(pod.Labels) > 0 {
if errs := metavalidation.ValidateLabels(pod.Labels, field.NewPath(".spec.statefulSet.pod.labels")); errs != nil {
violations = append(violations, errs...)
}
}
if len(pod.Annotations) > 0 {
if errs := apivalidation.ValidateAnnotations(pod.Annotations, field.NewPath(".spec.statefulSet.pod.annotations")); errs != nil {
violations = append(violations, errs...)
}
}
if tc := pod.TailscaleContainer; tc != nil {
for _, e := range tc.Env {
if strings.HasPrefix(string(e.Name), "TS_") {
a.recorder.Event(pc, corev1.EventTypeWarning, reasonCustomTSEnvVar, fmt.Sprintf(messageCustomTSEnvVar, string(e.Name), "tailscale"))
}
if strings.EqualFold(string(e.Name), "EXPERIMENTAL_TS_CONFIGFILE_PATH") {
a.recorder.Event(pc, corev1.EventTypeWarning, reasonCustomTSEnvVar, fmt.Sprintf(messageCustomTSEnvVar, string(e.Name), "tailscale"))
}
if strings.EqualFold(string(e.Name), "EXPERIMENTAL_ALLOW_PROXYING_CLUSTER_TRAFFIC_VIA_INGRESS") {
a.recorder.Event(pc, corev1.EventTypeWarning, reasonCustomTSEnvVar, fmt.Sprintf(messageCustomTSEnvVar, string(e.Name), "tailscale"))
}
}
}
}
}
// We do not validate embedded fields (security context, resource
// requirements etc) as we inherit upstream validation for those fields.
// Invalid values would get rejected by upstream validations at apply
// time.
return violations
}


@ -0,0 +1,98 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
// tailscale-operator provides a way to expose services running in a Kubernetes
// cluster to your Tailnet.
package main
import (
"testing"
"time"
"go.uber.org/zap"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/tools/record"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
tsoperator "tailscale.com/k8s-operator"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/tstest"
)
func TestProxyClass(t *testing.T) {
pc := &tsapi.ProxyClass{
TypeMeta: metav1.TypeMeta{Kind: "ProxyClass", APIVersion: "tailscale.com/v1alpha1"},
ObjectMeta: metav1.ObjectMeta{
Name: "test",
// The apiserver is supposed to set the UID, but the fake client
// doesn't. So, set it explicitly because other code later depends
// on it being set.
UID: types.UID("1234-UID"),
},
Spec: tsapi.ProxyClassSpec{
StatefulSet: &tsapi.StatefulSet{
Labels: map[string]string{"foo": "bar", "xyz1234": "abc567"},
Annotations: map[string]string{"foo.io/bar": "{'key': 'val1232'}"},
Pod: &tsapi.Pod{
Labels: map[string]string{"foo": "bar", "xyz1234": "abc567"},
Annotations: map[string]string{"foo.io/bar": "{'key': 'val1232'}"},
TailscaleContainer: &tsapi.Container{Env: []tsapi.Env{{Name: "FOO", Value: "BAR"}}},
},
},
},
}
fc := fake.NewClientBuilder().
WithScheme(tsapi.GlobalScheme).
WithObjects(pc).
WithStatusSubresource(pc).
Build()
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
fr := record.NewFakeRecorder(3) // bump this if you expect a test case to throw more events
cl := tstest.NewClock(tstest.ClockOpts{})
pcr := &ProxyClassReconciler{
Client: fc,
logger: zl.Sugar(),
clock: cl,
recorder: fr,
}
// 1. A valid ProxyClass resource gets its status updated to Ready.
expectReconciled(t, pcr, "", "test")
pc.Status.Conditions = append(pc.Status.Conditions, tsapi.ConnectorCondition{
Type: tsapi.ProxyClassready,
Status: metav1.ConditionTrue,
Reason: reasonProxyClassValid,
Message: reasonProxyClassValid,
LastTransitionTime: &metav1.Time{Time: cl.Now().Truncate(time.Second)},
})
expectEqual(t, fc, pc, nil)
// 2. An invalid ProxyClass resource gets its status updated to Invalid.
pc.Spec.StatefulSet.Labels["foo"] = "?!someVal"
mustUpdate(t, fc, "", "test", func(proxyClass *tsapi.ProxyClass) {
proxyClass.Spec.StatefulSet.Labels = pc.Spec.StatefulSet.Labels
})
expectReconciled(t, pcr, "", "test")
msg := `ProxyClass is not valid: .spec.statefulSet.labels: Invalid value: "?!someVal": a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue', or 'my_value', or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?')`
tsoperator.SetProxyClassCondition(pc, tsapi.ProxyClassready, metav1.ConditionFalse, reasonProxyClassInvalid, msg, 0, cl, zl.Sugar())
expectEqual(t, fc, pc, nil)
expectedEvent := "Warning ProxyClassInvalid ProxyClass is not valid: .spec.statefulSet.labels: Invalid value: \"?!someVal\": a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue', or 'my_value', or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?')"
expectEvents(t, fr, []string{expectedEvent})
// 3. A valid ProxyClass, but with Tailscale env vars set, results in warning events.
mustUpdate(t, fc, "", "test", func(proxyClass *tsapi.ProxyClass) {
proxyClass.Spec.StatefulSet.Labels = nil // unset invalid labels from the previous test
proxyClass.Spec.StatefulSet.Pod.TailscaleContainer.Env = []tsapi.Env{{Name: "TS_USERSPACE", Value: "true"}, {Name: "EXPERIMENTAL_TS_CONFIGFILE_PATH"}, {Name: "EXPERIMENTAL_ALLOW_PROXYING_CLUSTER_TRAFFIC_VIA_INGRESS"}}
})
expectedEvents := []string{"Warning CustomTSEnvVar ProxyClass overrides the default value for TS_USERSPACE env var for tailscale container. Running with custom values for Tailscale env vars is not recommended and might break in the future.",
"Warning CustomTSEnvVar ProxyClass overrides the default value for EXPERIMENTAL_TS_CONFIGFILE_PATH env var for tailscale container. Running with custom values for Tailscale env vars is not recommended and might break in the future.",
"Warning CustomTSEnvVar ProxyClass overrides the default value for EXPERIMENTAL_ALLOW_PROXYING_CLUSTER_TRAFFIC_VIA_INGRESS env var for tailscale container. Running with custom values for Tailscale env vars is not recommended and might break in the future."}
expectReconciled(t, pcr, "", "test")
expectEvents(t, fr, expectedEvents)
}


@ -14,6 +14,7 @@ import (
"fmt"
"net/http"
"os"
"slices"
"strings"
"go.uber.org/zap"
@ -22,25 +23,38 @@ import (
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apiserver/pkg/storage/names"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/yaml"
"tailscale.com/client/tailscale"
"tailscale.com/ipn"
kubeutils "tailscale.com/k8s-operator"
tsoperator "tailscale.com/k8s-operator"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/net/netutil"
"tailscale.com/tailcfg"
"tailscale.com/tsnet"
"tailscale.com/types/opt"
"tailscale.com/types/ptr"
"tailscale.com/util/dnsname"
"tailscale.com/util/mak"
)
const (
// Labels that the operator sets on StatefulSets and Pods. If you add a
// new label here, do also add it to tailscaleManagedLabels var to
// ensure that it does not get overwritten by ProxyClass configuration.
LabelManaged = "tailscale.com/managed"
LabelParentType = "tailscale.com/parent-resource-type"
LabelParentName = "tailscale.com/parent-resource"
LabelParentNamespace = "tailscale.com/parent-resource-ns"
// LabelProxyClass can be set by users on Connectors, tailscale
// Ingresses and Services that define cluster ingress or cluster egress,
// to specify that configuration in this ProxyClass should be applied to
// resources created for the Connector, Ingress or Service.
LabelProxyClass = "tailscale.com/proxy-class"
FinalizerName = "tailscale.com/finalizer"
// Annotations settable by users on services.
@ -55,18 +69,37 @@ const (
// Annotations settable by users on ingresses.
AnnotationFunnel = "tailscale.com/funnel"
// If set to true, set up iptables/nftables rules in the proxy to forward
// cluster traffic to the tailnet IP of that proxy. This can only be set
// on an Ingress. This is useful in cases where a cluster target needs
// to be able to reach a cluster workload exposed to tailnet via Ingress
// using the same hostname as a tailnet workload (in this case, the
// MagicDNS name of the ingress proxy). This annotation is experimental.
// If it is set to true, the proxy set up for the Ingress will run
// tailscale in non-userspace mode, with the NET_ADMIN capability for the
// tailscale container, and will also run a privileged init container that
// enables forwarding.
// Eventually this behaviour might become the default.
AnnotationExperimentalForwardClusterTrafficViaL7IngresProxy = "tailscale.com/experimental-forward-cluster-traffic-via-ingress"
// Annotations set by the operator on pods to trigger restarts when the
// hostname, IP, FQDN or tailscaled config changes.
// hostname, IP, FQDN or tailscaled config changes. If you add a new
// annotation here, also add it to tailscaleManagedAnnotations var to
// ensure that it does not get removed when a ProxyClass configuration
// is applied.
podAnnotationLastSetClusterIP = "tailscale.com/operator-last-set-cluster-ip"
podAnnotationLastSetHostname = "tailscale.com/operator-last-set-hostname"
podAnnotationLastSetClusterDNSName = "tailscale.com/operator-last-set-cluster-dns-name"
podAnnotationLastSetTailnetTargetIP = "tailscale.com/operator-last-set-ts-tailnet-target-ip"
podAnnotationLastSetTailnetTargetFQDN = "tailscale.com/operator-last-set-ts-tailnet-target-fqdn"
// podAnnotationLastSetConfigFileHash is sha256 hash of the current tailscaled configuration contents.
podAnnotationLastSetConfigFileHash = "tailscale.com/operator-last-set-config-file-hash"
)
// tailscaledConfigKey is the name of the key in proxy Secret Data that
// holds the tailscaled config contents.
tailscaledConfigKey = "tailscaled"
var (
// tailscaleManagedLabels are label keys that tailscale operator sets on StatefulSets and Pods.
tailscaleManagedLabels = []string{LabelManaged, LabelParentType, LabelParentName, LabelParentNamespace, "app"}
// tailscaleManagedAnnotations are annotation keys that tailscale operator sets on StatefulSets and Pods.
tailscaleManagedAnnotations = []string{podAnnotationLastSetClusterIP, podAnnotationLastSetTailnetTargetIP, podAnnotationLastSetTailnetTargetFQDN, podAnnotationLastSetConfigFileHash}
)
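// Illustrative sketch (not part of this diff): the two lists above exist so
// that ProxyClass-supplied metadata can be merged onto StatefulSets and Pods
// without clobbering keys the operator itself manages. The helper below is an
// assumption about how such a merge might look (the actual merge logic lives
// elsewhere in this change); it only shows the intended use of
// tailscaleManagedLabels together with the newly imported slices package.
//
//	func mergeCustomLabels(current, custom map[string]string) map[string]string {
//		merged := make(map[string]string, len(current)+len(custom))
//		for k, v := range custom {
//			if slices.Contains(tailscaleManagedLabels, k) {
//				continue // never let ProxyClass override operator-managed labels
//			}
//			merged[k] = v
//		}
//		for _, k := range tailscaleManagedLabels {
//			if v, ok := current[k]; ok {
//				merged[k] = v // preserve what the operator set
//			}
//		}
//		return merged
//	}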
type tailscaleSTSConfig struct {
@ -74,8 +107,12 @@ type tailscaleSTSConfig struct {
ParentResourceUID string
ChildResourceLabels map[string]string
ServeConfig *ipn.ServeConfig
ClusterTargetIP string // ingress target
ServeConfig *ipn.ServeConfig // if serve config is set, this is a proxy for Ingress
ClusterTargetIP string // ingress target IP
ClusterTargetDNSName string // ingress target DNS name
// If set to true, operator should configure containerboot to forward
// cluster traffic via the proxy set up for Kubernetes Ingress.
ForwardClusterTrafficViaL7IngressProxy bool
TailnetTargetIP string // egress target IP
@ -87,6 +124,8 @@ type tailscaleSTSConfig struct {
// Connector specifies a configuration of a Connector instance if that's
// what this StatefulSet should be created for.
Connector *connector
ProxyClass string
}
type connector struct {
@ -95,10 +134,13 @@ type connector struct {
// isExitNode defines whether this Connector should act as an exit node.
isExitNode bool
}
type tsnetServer interface {
CertDomains() []string
}
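// Illustrative sketch (not part of this diff): narrowing the dependency from
// *tsnet.Server to the single-method tsnetServer interface makes the
// reconciler straightforward to unit test with a stub. A test double might
// look like this (the fakeTSNetServer name is an assumption for illustration):
//
//	type fakeTSNetServer struct {
//		certDomains []string
//	}
//
//	func (f *fakeTSNetServer) CertDomains() []string { return f.certDomains }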
type tailscaleSTSReconciler struct {
client.Client
tsnetServer *tsnet.Server
tsnetServer tsnetServer
tsClient tsClient
defaultTags []string
operatorNamespace string
@ -129,11 +171,11 @@ func (a *tailscaleSTSReconciler) Provision(ctx context.Context, logger *zap.Suga
return nil, fmt.Errorf("failed to reconcile headless service: %w", err)
}
secretName, tsConfigHash, err := a.createOrGetSecret(ctx, logger, sts, hsvc)
secretName, tsConfigHash, configs, err := a.createOrGetSecret(ctx, logger, sts, hsvc)
if err != nil {
return nil, fmt.Errorf("failed to create or get API key secret: %w", err)
}
_, err = a.reconcileSTS(ctx, logger, sts, hsvc, secretName, tsConfigHash)
_, err = a.reconcileSTS(ctx, logger, sts, hsvc, secretName, tsConfigHash, configs)
if err != nil {
return nil, fmt.Errorf("failed to reconcile statefulset: %w", err)
}
@ -246,7 +288,7 @@ func (a *tailscaleSTSReconciler) reconcileHeadlessService(ctx context.Context, l
return createOrUpdate(ctx, a.Client, a.operatorNamespace, hsvc, func(svc *corev1.Service) { svc.Spec = hsvc.Spec })
}
func (a *tailscaleSTSReconciler) createOrGetSecret(ctx context.Context, logger *zap.SugaredLogger, stsC *tailscaleSTSConfig, hsvc *corev1.Service) (string, string, error) {
func (a *tailscaleSTSReconciler) createOrGetSecret(ctx context.Context, logger *zap.SugaredLogger, stsC *tailscaleSTSConfig, hsvc *corev1.Service) (secretName, hash string, configs tailscaleConfigs, _ error) {
secret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
// Hardcode a -0 suffix so that in future, if we support
@ -262,25 +304,23 @@ func (a *tailscaleSTSReconciler) createOrGetSecret(ctx context.Context, logger *
logger.Debugf("secret %s/%s already exists", secret.GetNamespace(), secret.GetName())
orig = secret.DeepCopy()
} else if !apierrors.IsNotFound(err) {
return "", "", err
return "", "", nil, err
}
var (
authKey, hash string
)
var authKey string
if orig == nil {
// Secret doesn't exist yet, create one. Initially it contains
// only the Tailscale authkey, but once Tailscale starts it'll
// also store the daemon state.
// Initially it contains only tailscaled config, but when the
// proxy starts, it will also store there the state, certs and
// ACME account key.
sts, err := getSingleObject[appsv1.StatefulSet](ctx, a.Client, a.operatorNamespace, stsC.ChildResourceLabels)
if err != nil {
return "", "", err
return "", "", nil, err
}
if sts != nil {
// StatefulSet exists, so we have already created the secret.
// If the secret is missing, they should delete the StatefulSet.
logger.Errorf("Tailscale proxy secret doesn't exist, but the corresponding StatefulSet %s/%s already does. Something is wrong, please delete the StatefulSet.", sts.GetNamespace(), sts.GetName())
return "", "", nil
return "", "", nil, nil
}
// Create API Key secret which is going to be used by the statefulset
// to authenticate with Tailscale.
@ -291,40 +331,66 @@ func (a *tailscaleSTSReconciler) createOrGetSecret(ctx context.Context, logger *
}
authKey, err = a.newAuthKey(ctx, tags)
if err != nil {
return "", "", err
return "", "", nil, err
}
}
if !shouldDoTailscaledDeclarativeConfig(stsC) && authKey != "" {
mak.Set(&secret.StringData, "authkey", authKey)
configs, err := tailscaledConfig(stsC, authKey, orig)
if err != nil {
return "", "", nil, fmt.Errorf("error creating tailscaled config: %w", err)
}
if shouldDoTailscaledDeclarativeConfig(stsC) {
confFileBytes, h, err := tailscaledConfig(stsC, authKey, orig)
hash, err = tailscaledConfigHash(configs)
if err != nil {
return "", "", nil, fmt.Errorf("error calculating hash of tailscaled configs: %w", err)
}
latest := tailcfg.CapabilityVersion(-1)
var latestConfig ipn.ConfigVAlpha
for key, val := range configs {
fn := kubeutils.TailscaledConfigFileNameForCap(key)
b, err := json.Marshal(val)
if err != nil {
return "", "", fmt.Errorf("error creating tailscaled config: %w", err)
return "", "", nil, fmt.Errorf("error marshalling tailscaled config: %w", err)
}
mak.Set(&secret.StringData, fn, string(b))
if key > latest {
latest = key
latestConfig = val
}
hash = h
mak.Set(&secret.StringData, tailscaledConfigKey, string(confFileBytes))
}
if stsC.ServeConfig != nil {
j, err := json.Marshal(stsC.ServeConfig)
if err != nil {
return "", "", err
return "", "", nil, err
}
mak.Set(&secret.StringData, "serve-config", string(j))
}
if orig != nil {
logger.Debugf("patching existing state Secret with values %s", secret.Data[tailscaledConfigKey])
logger.Debugf("patching the existing proxy Secret with tailscaled config %s", sanitizeConfigBytes(latestConfig))
if err := a.Patch(ctx, secret, client.MergeFrom(orig)); err != nil {
return "", "", err
return "", "", nil, err
}
} else {
logger.Debugf("creating new state Secret with authkey %s", secret.Data[tailscaledConfigKey])
logger.Debugf("creating a new Secret for the proxy with tailscaled config %s", sanitizeConfigBytes(latestConfig))
if err := a.Create(ctx, secret); err != nil {
return "", "", err
return "", "", nil, err
}
}
return secret.Name, hash, nil
return secret.Name, hash, configs, nil
}
// sanitizeConfigBytes returns ipn.ConfigVAlpha in string form with redacted
// auth key.
func sanitizeConfigBytes(c ipn.ConfigVAlpha) string {
if c.AuthKey != nil {
c.AuthKey = ptr.To("**redacted**")
}
sanitizedBytes, err := json.Marshal(c)
if err != nil {
return "invalid config"
}
return string(sanitizedBytes)
}
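A minimal usage sketch (hypothetical values, assuming it runs inside this package with a *zap.SugaredLogger named logger in scope) of what the redaction above produces:
// Illustrative only: the real auth key never reaches the debug logs.
cfg := ipn.ConfigVAlpha{
	Version: "alpha0",
	AuthKey: ptr.To("tskey-auth-example"), // hypothetical key
}
logger.Debugf("tailscaled config: %s", sanitizeConfigBytes(cfg))
// The logged JSON carries "**redacted**" in place of the auth key value.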
// DeviceInfo returns the device ID and hostname for the Tailscale device
@@ -379,11 +445,11 @@ var proxyYaml []byte
//go:embed deploy/manifests/userspace-proxy.yaml
var userspaceProxyYaml []byte
func (a *tailscaleSTSReconciler) reconcileSTS(ctx context.Context, logger *zap.SugaredLogger, sts *tailscaleSTSConfig, headlessSvc *corev1.Service, proxySecret, tsConfigHash string) (*appsv1.StatefulSet, error) {
var ss appsv1.StatefulSet
if sts.ServeConfig != nil {
func (a *tailscaleSTSReconciler) reconcileSTS(ctx context.Context, logger *zap.SugaredLogger, sts *tailscaleSTSConfig, headlessSvc *corev1.Service, proxySecret, tsConfigHash string, configs map[tailcfg.CapabilityVersion]ipn.ConfigVAlpha) (*appsv1.StatefulSet, error) {
ss := new(appsv1.StatefulSet)
if sts.ServeConfig != nil && sts.ForwardClusterTrafficViaL7IngressProxy != true { // If forwarding cluster traffic via Ingress is required, we need non-userspace + NET_ADMIN + forwarding
if err := yaml.Unmarshal(userspaceProxyYaml, &ss); err != nil {
return nil, fmt.Errorf("failed to unmarshal proxy spec: %w", err)
return nil, fmt.Errorf("failed to unmarshal userspace proxy spec: %v", err)
}
} else {
if err := yaml.Unmarshal(proxyYaml, &ss); err != nil {
@@ -397,12 +463,25 @@ func (a *tailscaleSTSReconciler) reconcileSTS(ctx context.Context, logger *zap.S
}
}
}
container := &ss.Spec.Template.Spec.Containers[0]
pod := &ss.Spec.Template
container := &pod.Spec.Containers[0]
proxyClass := new(tsapi.ProxyClass)
if sts.ProxyClass != "" {
if err := a.Get(ctx, types.NamespacedName{Name: sts.ProxyClass}, proxyClass); err != nil {
return nil, fmt.Errorf("failed to get ProxyClass: %w", err)
}
if !tsoperator.ProxyClassIsReady(proxyClass) {
logger.Infof("ProxyClass %s specified for the proxy, but it is not (yet) in a ready state, waiting..")
return nil, nil
}
}
container.Image = a.proxyImage
ss.ObjectMeta = metav1.ObjectMeta{
Name: headlessSvc.Name,
Namespace: a.operatorNamespace,
Labels: sts.ChildResourceLabels,
}
for key, val := range sts.ChildResourceLabels {
mak.Set(&ss.ObjectMeta.Labels, key, val)
}
ss.Spec.ServiceName = headlessSvc.Name
ss.Spec.Selector = &metav1.LabelSelector{
@@ -410,9 +489,9 @@ func (a *tailscaleSTSReconciler) reconcileSTS(ctx context.Context, logger *zap.S
"app": sts.ParentResourceUID,
},
}
mak.Set(&ss.Spec.Template.Labels, "app", sts.ParentResourceUID)
mak.Set(&pod.Labels, "app", sts.ParentResourceUID)
for key, val := range sts.ChildResourceLabels {
ss.Spec.Template.Labels[key] = val // sync StatefulSet labels to Pod to make it easier for users to select the Pod
pod.Labels[key] = val // sync StatefulSet labels to Pod to make it easier for users to select the Pod
}
// Generic containerboot configuration options.
@@ -421,43 +500,40 @@ func (a *tailscaleSTSReconciler) reconcileSTS(ctx context.Context, logger *zap.S
Name: "TS_KUBE_SECRET",
Value: proxySecret,
},
)
if !shouldDoTailscaledDeclarativeConfig(sts) {
container.Env = append(container.Env, corev1.EnvVar{
Name: "TS_HOSTNAME",
Value: sts.Hostname,
})
// containerboot currently doesn't have a way to re-read the hostname/ip as
// it is passed via an environment variable. So we need to restart the
// container when the value changes. We do this by adding an annotation to
// the pod template that contains the last value we set.
mak.Set(&ss.Spec.Template.Annotations, podAnnotationLastSetHostname, sts.Hostname)
}
// Configure containerboot to run tailscaled with a configfile read from the state Secret.
if shouldDoTailscaledDeclarativeConfig(sts) {
mak.Set(&ss.Spec.Template.Annotations, podAnnotationLastSetConfigFileHash, tsConfigHash)
ss.Spec.Template.Spec.Volumes = append(ss.Spec.Template.Spec.Volumes, corev1.Volume{
Name: "tailscaledconfig",
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: proxySecret,
Items: []corev1.KeyToPath{{
Key: tailscaledConfigKey,
Path: tailscaledConfigKey,
}},
},
},
})
container.VolumeMounts = append(container.VolumeMounts, corev1.VolumeMount{
Name: "tailscaledconfig",
ReadOnly: true,
MountPath: "/etc/tsconfig",
})
container.Env = append(container.Env, corev1.EnvVar{
corev1.EnvVar{
// Old tailscaled config key is still used for backwards compatibility.
Name: "EXPERIMENTAL_TS_CONFIGFILE_PATH",
Value: "/etc/tsconfig/tailscaled",
},
corev1.EnvVar{
// New style is in the form of cap-<capability-version>.hujson.
Name: "TS_EXPERIMENTAL_VERSIONED_CONFIG_DIR",
Value: "/etc/tsconfig",
},
)
if sts.ForwardClusterTrafficViaL7IngressProxy {
container.Env = append(container.Env, corev1.EnvVar{
Name: "EXPERIMENTAL_ALLOW_PROXYING_CLUSTER_TRAFFIC_VIA_INGRESS",
Value: "true",
})
}
// Configure containerboot to run tailscaled with a configfile read from the state Secret.
mak.Set(&ss.Spec.Template.Annotations, podAnnotationLastSetConfigFileHash, tsConfigHash)
configVolume := corev1.Volume{
Name: "tailscaledconfig",
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: proxySecret,
},
},
}
pod.Spec.Volumes = append(ss.Spec.Template.Spec.Volumes, configVolume)
container.VolumeMounts = append(container.VolumeMounts, corev1.VolumeMount{
Name: "tailscaledconfig",
ReadOnly: true,
MountPath: "/etc/tsconfig",
})
if a.tsFirewallMode != "" {
container.Env = append(container.Env, corev1.EnvVar{
@@ -465,7 +541,7 @@ func (a *tailscaleSTSReconciler) reconcileSTS(ctx context.Context, logger *zap.S
Value: a.tsFirewallMode,
})
}
ss.Spec.Template.Spec.PriorityClassName = a.proxyPriorityClassName
pod.Spec.PriorityClassName = a.proxyPriorityClassName
// Ingress/egress proxy configuration options.
if sts.ClusterTargetIP != "" {
@@ -474,6 +550,12 @@ func (a *tailscaleSTSReconciler) reconcileSTS(ctx context.Context, logger *zap.S
Value: sts.ClusterTargetIP,
})
mak.Set(&ss.Spec.Template.Annotations, podAnnotationLastSetClusterIP, sts.ClusterTargetIP)
} else if sts.ClusterTargetDNSName != "" {
container.Env = append(container.Env, corev1.EnvVar{
Name: "TS_EXPERIMENTAL_DEST_DNS_NAME",
Value: sts.ClusterTargetDNSName,
})
mak.Set(&ss.Spec.Template.Annotations, podAnnotationLastSetClusterDNSName, sts.ClusterTargetDNSName)
} else if sts.TailnetTargetIP != "" {
container.Env = append(container.Env, corev1.EnvVar{
Name: "TS_TAILNET_TARGET_IP",
@@ -496,58 +578,229 @@ func (a *tailscaleSTSReconciler) reconcileSTS(ctx context.Context, logger *zap.S
ReadOnly: true,
MountPath: "/etc/tailscaled",
})
ss.Spec.Template.Spec.Volumes = append(ss.Spec.Template.Spec.Volumes, corev1.Volume{
pod.Spec.Volumes = append(ss.Spec.Template.Spec.Volumes, corev1.Volume{
Name: "serve-config",
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: proxySecret,
Items: []corev1.KeyToPath{{
Key: "serve-config",
Path: "serve-config",
}},
Items: []corev1.KeyToPath{{Key: "serve-config", Path: "serve-config"}},
},
},
})
}
logger.Debugf("reconciling statefulset %s/%s", ss.GetNamespace(), ss.GetName())
return createOrUpdate(ctx, a.Client, a.operatorNamespace, &ss, func(s *appsv1.StatefulSet) { s.Spec = ss.Spec })
if sts.ProxyClass != "" {
logger.Debugf("configuring proxy resources with ProxyClass %s", sts.ProxyClass)
ss = applyProxyClassToStatefulSet(proxyClass, ss, sts, logger)
}
updateSS := func(s *appsv1.StatefulSet) {
s.Spec = ss.Spec
s.ObjectMeta.Labels = ss.Labels
s.ObjectMeta.Annotations = ss.Annotations
}
return createOrUpdate(ctx, a.Client, a.operatorNamespace, ss, updateSS)
}
// mergeStatefulSetLabelsOrAnnots returns a map that contains all keys/values
// present in 'custom' map as well as those keys/values from the current map
// whose keys are present in the 'managed' map. The reason why this merge is
// necessary is to ensure that labels/annotations applied from a ProxyClass get removed
// if they are removed from a ProxyClass or if the ProxyClass no longer applies
// to this StatefulSet whilst any tailscale managed labels/annotations remain present.
func mergeStatefulSetLabelsOrAnnots(current, custom map[string]string, managed []string) map[string]string {
if custom == nil {
custom = make(map[string]string)
}
if current == nil {
return custom
}
for key, val := range current {
if slices.Contains(managed, key) {
custom[key] = val
}
}
return custom
}
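A small illustration of the merge semantics described above, using hypothetical custom labels:
// current holds an operator-managed label plus a stale custom one;
// custom is the new desired set from the ProxyClass.
current := map[string]string{LabelManaged: "true", "stale": "yes"}
custom := map[string]string{"team": "infra"}
merged := mergeStatefulSetLabelsOrAnnots(current, custom, tailscaleManagedLabels)
// merged now contains only LabelManaged and "team": the managed label is
// preserved, the stale custom label is dropped, the new custom label is added.
_ = merged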
func applyProxyClassToStatefulSet(pc *tsapi.ProxyClass, ss *appsv1.StatefulSet, stsCfg *tailscaleSTSConfig, logger *zap.SugaredLogger) *appsv1.StatefulSet {
if pc == nil || ss == nil {
return ss
}
if pc.Spec.Metrics != nil && pc.Spec.Metrics.Enable {
if stsCfg.TailnetTargetFQDN == "" && stsCfg.TailnetTargetIP == "" && !stsCfg.ForwardClusterTrafficViaL7IngressProxy {
enableMetrics(ss, pc)
} else if stsCfg.ForwardClusterTrafficViaL7IngressProxy {
// TODO (irbekrm): fix this
// For Ingress proxies that have been configured with
// tailscale.com/experimental-forward-cluster-traffic-via-ingress
// annotation, all cluster traffic is forwarded to the
// Ingress backend(s).
logger.Info("ProxyClass specifies that metrics should be enabled, but this is currently not supported for Ingress proxies that accept cluster traffic.")
} else {
// TODO (irbekrm): fix this
// For egress proxies, currently all cluster traffic is forwarded to the tailnet target.
logger.Info("ProxyClass specifies that metrics should be enabled, but this is currently not supported for Ingress proxies that accept cluster traffic.")
}
}
if pc.Spec.StatefulSet == nil {
return ss
}
// Update StatefulSet metadata.
if wantsSSLabels := pc.Spec.StatefulSet.Labels; len(wantsSSLabels) > 0 {
ss.ObjectMeta.Labels = mergeStatefulSetLabelsOrAnnots(ss.ObjectMeta.Labels, wantsSSLabels, tailscaleManagedLabels)
}
if wantsSSAnnots := pc.Spec.StatefulSet.Annotations; len(wantsSSAnnots) > 0 {
ss.ObjectMeta.Annotations = mergeStatefulSetLabelsOrAnnots(ss.ObjectMeta.Annotations, wantsSSAnnots, tailscaleManagedAnnotations)
}
// Update Pod fields.
if pc.Spec.StatefulSet.Pod == nil {
return ss
}
wantsPod := pc.Spec.StatefulSet.Pod
if wantsPodLabels := wantsPod.Labels; len(wantsPodLabels) > 0 {
ss.Spec.Template.ObjectMeta.Labels = mergeStatefulSetLabelsOrAnnots(ss.Spec.Template.ObjectMeta.Labels, wantsPodLabels, tailscaleManagedLabels)
}
if wantsPodAnnots := wantsPod.Annotations; len(wantsPodAnnots) > 0 {
ss.Spec.Template.ObjectMeta.Annotations = mergeStatefulSetLabelsOrAnnots(ss.Spec.Template.ObjectMeta.Annotations, wantsPodAnnots, tailscaleManagedAnnotations)
}
ss.Spec.Template.Spec.SecurityContext = wantsPod.SecurityContext
ss.Spec.Template.Spec.ImagePullSecrets = wantsPod.ImagePullSecrets
ss.Spec.Template.Spec.NodeName = wantsPod.NodeName
ss.Spec.Template.Spec.NodeSelector = wantsPod.NodeSelector
ss.Spec.Template.Spec.Affinity = wantsPod.Affinity
ss.Spec.Template.Spec.Tolerations = wantsPod.Tolerations
// Update containers.
updateContainer := func(overlay *tsapi.Container, base corev1.Container) corev1.Container {
if overlay == nil {
return base
}
if overlay.SecurityContext != nil {
base.SecurityContext = overlay.SecurityContext
}
base.Resources = overlay.Resources
for _, e := range overlay.Env {
// Env vars configured via ProxyClass might override env
// vars that have been specified by the operator, i.e. TS_USERSPACE. The intended behaviour is to allow this
// TS_USERSPACE. The intended behaviour is to allow this
// and in practice it works without explicitly removing
// the operator configured value here as a later value
// in the env var list overrides an earlier one.
base.Env = append(base.Env, corev1.EnvVar{Name: string(e.Name), Value: e.Value})
}
return base
}
for i, c := range ss.Spec.Template.Spec.Containers {
if c.Name == "tailscale" {
ss.Spec.Template.Spec.Containers[i] = updateContainer(wantsPod.TailscaleContainer, ss.Spec.Template.Spec.Containers[i])
break
}
}
if initContainers := ss.Spec.Template.Spec.InitContainers; len(initContainers) > 0 {
for i, c := range initContainers {
if c.Name == "sysctler" {
ss.Spec.Template.Spec.InitContainers[i] = updateContainer(wantsPod.TailscaleInitContainer, initContainers[i])
break
}
}
}
return ss
}
func enableMetrics(ss *appsv1.StatefulSet, pc *tsapi.ProxyClass) {
for i, c := range ss.Spec.Template.Spec.Containers {
if c.Name == "tailscale" {
// Serve metrics on <pod-ip>:9001/debug/metrics. If
// we didn't specify the Pod IP here, the proxy would, in
// some cases, also listen on its Tailscale IP; we don't
// want folks to start relying on this side-effect as a
// feature.
ss.Spec.Template.Spec.Containers[i].Env = append(ss.Spec.Template.Spec.Containers[i].Env, corev1.EnvVar{Name: "TS_TAILSCALED_EXTRA_ARGS", Value: "--debug=$(POD_IP):9001"})
ss.Spec.Template.Spec.Containers[i].Ports = append(ss.Spec.Template.Spec.Containers[i].Ports, corev1.ContainerPort{Name: "metrics", Protocol: "TCP", HostPort: 9001, ContainerPort: 9001})
break
}
}
}
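The $(POD_IP) reference above is expanded by the kubelet from an env var that the embedded proxy templates are assumed to define via the downward API; a sketch of that definition:
// Assumption: the proxy Pod template declares POD_IP roughly like this, so the
// kubelet can substitute $(POD_IP) inside TS_TAILSCALED_EXTRA_ARGS.
podIPEnv := corev1.EnvVar{
	Name: "POD_IP",
	ValueFrom: &corev1.EnvVarSource{
		FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"},
	},
}
_ = podIPEnv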
func readAuthKey(secret *corev1.Secret, key string) (*string, error) {
origConf := &ipn.ConfigVAlpha{}
if err := json.Unmarshal([]byte(secret.Data[key]), origConf); err != nil {
return nil, fmt.Errorf("error unmarshaling previous tailscaled config in %q: %w", key, err)
}
return origConf.AuthKey, nil
}
// tailscaledConfig takes a proxy config, a newly generated auth key if
// generated and a Secret with the previous proxy state and auth key and
// produces returns tailscaled configuration and a hash of that configuration.
func tailscaledConfig(stsC *tailscaleSTSConfig, newAuthkey string, oldSecret *corev1.Secret) ([]byte, string, error) {
conf := ipn.ConfigVAlpha{
Version: "alpha0",
AcceptDNS: "false",
Locked: "false",
Hostname: &stsC.Hostname,
// returns tailscaled configuration for each currently supported capability version.
//
// As of 2024-05-09 it also returns legacy tailscaled config without the
// later added NoStatefulFiltering field to support proxies older than cap95.
// TODO (irbekrm): remove the legacy config once we no longer need to support
// versions older than cap95,
// https://tailscale.com/kb/1236/kubernetes-operator#operator-and-proxies
func tailscaledConfig(stsC *tailscaleSTSConfig, newAuthkey string, oldSecret *corev1.Secret) (tailscaleConfigs, error) {
conf := &ipn.ConfigVAlpha{
Version: "alpha0",
AcceptDNS: "false",
AcceptRoutes: "false", // AcceptRoutes defaults to true
Locked: "false",
Hostname: &stsC.Hostname,
NoStatefulFiltering: "false",
}
// For egress proxies only, we need to ensure that stateful filtering is
// not in place so that traffic from cluster can be forwarded via
// Tailscale IPs.
if stsC.TailnetTargetFQDN != "" || stsC.TailnetTargetIP != "" {
conf.NoStatefulFiltering = "true"
}
if stsC.Connector != nil {
routes, err := netutil.CalcAdvertiseRoutes(stsC.Connector.routes, stsC.Connector.isExitNode)
if err != nil {
return nil, "", fmt.Errorf("error calculating routes: %w", err)
return nil, fmt.Errorf("error calculating routes: %w", err)
}
conf.AdvertiseRoutes = routes
}
if newAuthkey != "" {
conf.AuthKey = &newAuthkey
} else if oldSecret != nil && len(oldSecret.Data[tailscaledConfigKey]) > 0 { // write to StringData, read from Data as StringData is write-only
origConf := &ipn.ConfigVAlpha{}
if err := json.Unmarshal([]byte(oldSecret.Data[tailscaledConfigKey]), origConf); err != nil {
return nil, "", fmt.Errorf("error unmarshaling previous tailscaled config: %w", err)
} else if oldSecret != nil {
var err error
latest := tailcfg.CapabilityVersion(-1)
latestStr := ""
for k, data := range oldSecret.Data {
// write to StringData, read from Data as StringData is write-only
if len(data) == 0 {
continue
}
v, err := kubeutils.CapVerFromFileName(k)
if err != nil {
continue
}
if v > latest {
latestStr = k
latest = v
}
}
// Allow for configs that don't contain an auth key. Perhaps
// users have some mechanisms to delete them. Auth key is
// normally not needed after the initial login.
if latestStr != "" {
conf.AuthKey, err = readAuthKey(oldSecret, latestStr)
if err != nil {
return nil, err
}
}
conf.AuthKey = origConf.AuthKey
}
confFileBytes, err := json.Marshal(conf)
if err != nil {
return nil, "", fmt.Errorf("error marshaling tailscaled config : %w", err)
}
hash, err := hashBytes(confFileBytes)
if err != nil {
return nil, "", fmt.Errorf("error calculating config hash: %w", err)
}
return confFileBytes, hash, nil
capVerConfigs := make(map[tailcfg.CapabilityVersion]ipn.ConfigVAlpha)
capVerConfigs[95] = *conf
// legacy config should not contain NoStatefulFiltering field.
conf.NoStatefulFiltering.Clear()
capVerConfigs[94] = *conf
return capVerConfigs, nil
}
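For illustration, a sketch (hypothetical hostname and auth key) that inspects the two capability-versioned configs returned above; per the comment, only the cap-95 entry carries NoStatefulFiltering while the cap-94 entry omits it for older proxies:
// Sketch only: print both config variants that would later be written to the proxy Secret.
configs, err := tailscaledConfig(&tailscaleSTSConfig{Hostname: "example-proxy"}, "tskey-auth-example", nil)
if err != nil {
	panic(err) // sketch only; real callers return the error
}
for capVer, cfg := range configs {
	b, _ := json.Marshal(cfg)
	fmt.Printf("cap%d: %s\n", capVer, b)
}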
// ptrObject is a type constraint for pointer types that implement
@@ -557,7 +810,9 @@ type ptrObject[T any] interface {
*T
}
// hashBytes produces a hash for the provided bytes that is the same across
type tailscaleConfigs map[tailcfg.CapabilityVersion]ipn.ConfigVAlpha
// tailscaledConfigHash produces a hash for the provided tailscaled configs that is the same across
// different invocations of this code. We do not use the
// tailscale.com/deephash.Hash here because that produces a different hash for
// the same value in different tailscale builds. The hash we are producing here
@@ -566,10 +821,13 @@ type ptrObject[T any] interface {
// thing that changed is operator version (the hash is also exposed to users via
// an annotation and might be confusing if it changes without the config having
// changed).
func hashBytes(b []byte) (string, error) {
h := sha256.New()
_, err := h.Write(b)
func tailscaledConfigHash(c tailscaleConfigs) (string, error) {
b, err := json.Marshal(c)
if err != nil {
return "", fmt.Errorf("error marshalling tailscaled configs: %w", err)
}
h := sha256.New()
if _, err = h.Write(b); err != nil {
return "", fmt.Errorf("error calculating hash: %w", err)
}
return fmt.Sprintf("%x", h.Sum(nil)), nil
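A quick sketch (hypothetical config) of the stability property the comment above relies on: the Pod annotation derived from this hash only changes when the configs themselves change.
cfgs := tailscaleConfigs{95: ipn.ConfigVAlpha{Version: "alpha0", AcceptDNS: "false"}}
h1, _ := tailscaledConfigHash(cfgs)
h2, _ := tailscaledConfigHash(cfgs)
fmt.Println(h1 == h2) // true: identical configs always produce the identical hash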
@@ -678,10 +936,3 @@ func nameForService(svc *corev1.Service) (string, error) {
func isValidFirewallMode(m string) bool {
return m == "auto" || m == "nftables" || m == "iptables"
}
// shouldDoTailscaledDeclarativeConfig determines whether the proxy instance
// should be configured to run tailscaled only with a all config opts passed to
// tailscaled.
func shouldDoTailscaledDeclarativeConfig(stsC *tailscaleSTSConfig) bool {
return stsC.Connector != nil
}


@@ -6,10 +6,21 @@
package main
import (
_ "embed"
"fmt"
"reflect"
"regexp"
"strings"
"testing"
"github.com/google/go-cmp/cmp"
"go.uber.org/zap"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
"sigs.k8s.io/yaml"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/types/ptr"
)
// Test_statefulSetNameBase tests that parent name portion in a StatefulSet name
@@ -39,3 +50,272 @@ func Test_statefulSetNameBase(t *testing.T) {
}
}
}
func Test_applyProxyClassToStatefulSet(t *testing.T) {
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
// Setup
proxyClassAllOpts := &tsapi.ProxyClass{
Spec: tsapi.ProxyClassSpec{
StatefulSet: &tsapi.StatefulSet{
Labels: map[string]string{"foo": "bar"},
Annotations: map[string]string{"foo.io/bar": "foo"},
Pod: &tsapi.Pod{
Labels: map[string]string{"bar": "foo"},
Annotations: map[string]string{"bar.io/foo": "foo"},
SecurityContext: &corev1.PodSecurityContext{
RunAsUser: ptr.To(int64(0)),
},
ImagePullSecrets: []corev1.LocalObjectReference{{Name: "docker-creds"}},
NodeName: "some-node",
NodeSelector: map[string]string{"beta.kubernetes.io/os": "linux"},
Affinity: &corev1.Affinity{NodeAffinity: &corev1.NodeAffinity{RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{}}},
Tolerations: []corev1.Toleration{{Key: "", Operator: "Exists"}},
TailscaleContainer: &tsapi.Container{
SecurityContext: &corev1.SecurityContext{
Privileged: ptr.To(true),
},
Resources: corev1.ResourceRequirements{
Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("1000m"), corev1.ResourceMemory: resource.MustParse("128Mi")},
Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m"), corev1.ResourceMemory: resource.MustParse("64Mi")},
},
Env: []tsapi.Env{{Name: "foo", Value: "bar"}, {Name: "TS_USERSPACE", Value: "true"}, {Name: "bar"}},
},
TailscaleInitContainer: &tsapi.Container{
SecurityContext: &corev1.SecurityContext{
Privileged: ptr.To(true),
RunAsUser: ptr.To(int64(0)),
},
Resources: corev1.ResourceRequirements{
Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("1000m"), corev1.ResourceMemory: resource.MustParse("128Mi")},
Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m"), corev1.ResourceMemory: resource.MustParse("64Mi")},
},
Env: []tsapi.Env{{Name: "foo", Value: "bar"}, {Name: "TS_USERSPACE", Value: "true"}, {Name: "bar"}},
},
},
},
},
}
proxyClassJustLabels := &tsapi.ProxyClass{
Spec: tsapi.ProxyClassSpec{
StatefulSet: &tsapi.StatefulSet{
Labels: map[string]string{"foo": "bar"},
Annotations: map[string]string{"foo.io/bar": "foo"},
Pod: &tsapi.Pod{
Labels: map[string]string{"bar": "foo"},
Annotations: map[string]string{"bar.io/foo": "foo"},
},
},
},
}
proxyClassMetrics := &tsapi.ProxyClass{
Spec: tsapi.ProxyClassSpec{
Metrics: &tsapi.Metrics{Enable: true},
},
}
var userspaceProxySS, nonUserspaceProxySS appsv1.StatefulSet
if err := yaml.Unmarshal(userspaceProxyYaml, &userspaceProxySS); err != nil {
t.Fatalf("unmarshaling userspace proxy template: %v", err)
}
if err := yaml.Unmarshal(proxyYaml, &nonUserspaceProxySS); err != nil {
t.Fatalf("unmarshaling non-userspace proxy template: %v", err)
}
// Set a couple additional fields so we can test that we don't
// mistakenly override those.
labels := map[string]string{
LabelManaged: "true",
LabelParentName: "foo",
}
annots := map[string]string{
podAnnotationLastSetClusterIP: "1.2.3.4",
}
env := []corev1.EnvVar{{Name: "TS_HOSTNAME", Value: "nginx"}}
userspaceProxySS.Labels = labels
userspaceProxySS.Annotations = annots
userspaceProxySS.Spec.Template.Spec.Containers[0].Env = env
nonUserspaceProxySS.ObjectMeta.Labels = labels
nonUserspaceProxySS.ObjectMeta.Annotations = annots
nonUserspaceProxySS.Spec.Template.Spec.Containers[0].Env = env
// 1. Test that a ProxyClass with all fields set gets correctly applied
// to a Statefulset built from non-userspace proxy template.
wantSS := nonUserspaceProxySS.DeepCopy()
wantSS.ObjectMeta.Labels = mergeMapKeys(wantSS.ObjectMeta.Labels, proxyClassAllOpts.Spec.StatefulSet.Labels)
wantSS.ObjectMeta.Annotations = mergeMapKeys(wantSS.ObjectMeta.Annotations, proxyClassAllOpts.Spec.StatefulSet.Annotations)
wantSS.Spec.Template.Labels = proxyClassAllOpts.Spec.StatefulSet.Pod.Labels
wantSS.Spec.Template.Annotations = proxyClassAllOpts.Spec.StatefulSet.Pod.Annotations
wantSS.Spec.Template.Spec.SecurityContext = proxyClassAllOpts.Spec.StatefulSet.Pod.SecurityContext
wantSS.Spec.Template.Spec.ImagePullSecrets = proxyClassAllOpts.Spec.StatefulSet.Pod.ImagePullSecrets
wantSS.Spec.Template.Spec.NodeName = proxyClassAllOpts.Spec.StatefulSet.Pod.NodeName
wantSS.Spec.Template.Spec.NodeSelector = proxyClassAllOpts.Spec.StatefulSet.Pod.NodeSelector
wantSS.Spec.Template.Spec.Affinity = proxyClassAllOpts.Spec.StatefulSet.Pod.Affinity
wantSS.Spec.Template.Spec.Tolerations = proxyClassAllOpts.Spec.StatefulSet.Pod.Tolerations
wantSS.Spec.Template.Spec.Containers[0].SecurityContext = proxyClassAllOpts.Spec.StatefulSet.Pod.TailscaleContainer.SecurityContext
wantSS.Spec.Template.Spec.InitContainers[0].SecurityContext = proxyClassAllOpts.Spec.StatefulSet.Pod.TailscaleInitContainer.SecurityContext
wantSS.Spec.Template.Spec.Containers[0].Resources = proxyClassAllOpts.Spec.StatefulSet.Pod.TailscaleContainer.Resources
wantSS.Spec.Template.Spec.InitContainers[0].Resources = proxyClassAllOpts.Spec.StatefulSet.Pod.TailscaleInitContainer.Resources
wantSS.Spec.Template.Spec.InitContainers[0].Env = append(wantSS.Spec.Template.Spec.InitContainers[0].Env, []corev1.EnvVar{{Name: "foo", Value: "bar"}, {Name: "TS_USERSPACE", Value: "true"}, {Name: "bar"}}...)
wantSS.Spec.Template.Spec.Containers[0].Env = append(wantSS.Spec.Template.Spec.Containers[0].Env, []corev1.EnvVar{{Name: "foo", Value: "bar"}, {Name: "TS_USERSPACE", Value: "true"}, {Name: "bar"}}...)
gotSS := applyProxyClassToStatefulSet(proxyClassAllOpts, nonUserspaceProxySS.DeepCopy(), new(tailscaleSTSConfig), zl.Sugar())
if diff := cmp.Diff(gotSS, wantSS); diff != "" {
t.Fatalf("Unexpected result applying ProxyClass with all fields set to a StatefulSet for non-userspace proxy (-got +want):\n%s", diff)
}
// 2. Test that a ProxyClass with custom labels and annotations for
// StatefulSet and Pod set gets correctly applied to a Statefulset built
// from non-userspace proxy template.
wantSS = nonUserspaceProxySS.DeepCopy()
wantSS.ObjectMeta.Labels = mergeMapKeys(wantSS.ObjectMeta.Labels, proxyClassJustLabels.Spec.StatefulSet.Labels)
wantSS.ObjectMeta.Annotations = mergeMapKeys(wantSS.ObjectMeta.Annotations, proxyClassJustLabels.Spec.StatefulSet.Annotations)
wantSS.Spec.Template.Labels = proxyClassJustLabels.Spec.StatefulSet.Pod.Labels
wantSS.Spec.Template.Annotations = proxyClassJustLabels.Spec.StatefulSet.Pod.Annotations
gotSS = applyProxyClassToStatefulSet(proxyClassJustLabels, nonUserspaceProxySS.DeepCopy(), new(tailscaleSTSConfig), zl.Sugar())
if diff := cmp.Diff(gotSS, wantSS); diff != "" {
t.Fatalf("Unexpected result applying ProxyClass with custom labels and annotations to a StatefulSet for non-userspace proxy (-got +want):\n%s", diff)
}
// 3. Test that a ProxyClass with all fields set gets correctly applied
// to a Statefulset built from a userspace proxy template.
wantSS = userspaceProxySS.DeepCopy()
wantSS.ObjectMeta.Labels = mergeMapKeys(wantSS.ObjectMeta.Labels, proxyClassAllOpts.Spec.StatefulSet.Labels)
wantSS.ObjectMeta.Annotations = mergeMapKeys(wantSS.ObjectMeta.Annotations, proxyClassAllOpts.Spec.StatefulSet.Annotations)
wantSS.Spec.Template.Labels = proxyClassAllOpts.Spec.StatefulSet.Pod.Labels
wantSS.Spec.Template.Annotations = proxyClassAllOpts.Spec.StatefulSet.Pod.Annotations
wantSS.Spec.Template.Spec.SecurityContext = proxyClassAllOpts.Spec.StatefulSet.Pod.SecurityContext
wantSS.Spec.Template.Spec.ImagePullSecrets = proxyClassAllOpts.Spec.StatefulSet.Pod.ImagePullSecrets
wantSS.Spec.Template.Spec.NodeName = proxyClassAllOpts.Spec.StatefulSet.Pod.NodeName
wantSS.Spec.Template.Spec.NodeSelector = proxyClassAllOpts.Spec.StatefulSet.Pod.NodeSelector
wantSS.Spec.Template.Spec.Affinity = proxyClassAllOpts.Spec.StatefulSet.Pod.Affinity
wantSS.Spec.Template.Spec.Tolerations = proxyClassAllOpts.Spec.StatefulSet.Pod.Tolerations
wantSS.Spec.Template.Spec.Containers[0].SecurityContext = proxyClassAllOpts.Spec.StatefulSet.Pod.TailscaleContainer.SecurityContext
wantSS.Spec.Template.Spec.Containers[0].Resources = proxyClassAllOpts.Spec.StatefulSet.Pod.TailscaleContainer.Resources
wantSS.Spec.Template.Spec.Containers[0].Env = append(wantSS.Spec.Template.Spec.Containers[0].Env, []corev1.EnvVar{{Name: "foo", Value: "bar"}, {Name: "TS_USERSPACE", Value: "true"}, {Name: "bar"}}...)
gotSS = applyProxyClassToStatefulSet(proxyClassAllOpts, userspaceProxySS.DeepCopy(), new(tailscaleSTSConfig), zl.Sugar())
if diff := cmp.Diff(gotSS, wantSS); diff != "" {
t.Fatalf("Unexpected result applying ProxyClass with custom labels and annotations to a StatefulSet for a userspace proxy (-got +want):\n%s", diff)
}
// 4. Test that a ProxyClass with custom labels and annotations gets correctly applied
// to a Statefulset built from a userspace proxy template.
wantSS = userspaceProxySS.DeepCopy()
wantSS.ObjectMeta.Labels = mergeMapKeys(wantSS.ObjectMeta.Labels, proxyClassJustLabels.Spec.StatefulSet.Labels)
wantSS.ObjectMeta.Annotations = mergeMapKeys(wantSS.ObjectMeta.Annotations, proxyClassJustLabels.Spec.StatefulSet.Annotations)
wantSS.Spec.Template.Labels = proxyClassJustLabels.Spec.StatefulSet.Pod.Labels
wantSS.Spec.Template.Annotations = proxyClassJustLabels.Spec.StatefulSet.Pod.Annotations
gotSS = applyProxyClassToStatefulSet(proxyClassJustLabels, userspaceProxySS.DeepCopy(), new(tailscaleSTSConfig), zl.Sugar())
if diff := cmp.Diff(gotSS, wantSS); diff != "" {
t.Fatalf("Unexpected result applying ProxyClass with custom labels and annotations to a StatefulSet for a userspace proxy (-got +want):\n%s", diff)
}
// 5. Test that a ProxyClass with metrics enabled gets correctly applied to a StatefulSet.
wantSS = nonUserspaceProxySS.DeepCopy()
wantSS.Spec.Template.Spec.Containers[0].Env = append(wantSS.Spec.Template.Spec.Containers[0].Env, corev1.EnvVar{Name: "TS_TAILSCALED_EXTRA_ARGS", Value: "--debug=$(POD_IP):9001"})
wantSS.Spec.Template.Spec.Containers[0].Ports = []corev1.ContainerPort{{Name: "metrics", Protocol: "TCP", ContainerPort: 9001, HostPort: 9001}}
gotSS = applyProxyClassToStatefulSet(proxyClassMetrics, nonUserspaceProxySS.DeepCopy(), new(tailscaleSTSConfig), zl.Sugar())
if diff := cmp.Diff(gotSS, wantSS); diff != "" {
t.Fatalf("Unexpected result applying ProxyClass with metrics enabled to a StatefulSet (-got +want):\n%s", diff)
}
}
func mergeMapKeys(a, b map[string]string) map[string]string {
for key, val := range b {
a[key] = val
}
return a
}
func Test_mergeStatefulSetLabelsOrAnnots(t *testing.T) {
tests := []struct {
name string
current map[string]string
custom map[string]string
managed []string
want map[string]string
}{
{
name: "no custom labels specified and none present in current labels, return current labels",
current: map[string]string{LabelManaged: "true", LabelParentName: "foo", LabelParentType: "svc", LabelParentNamespace: "foo"},
want: map[string]string{LabelManaged: "true", LabelParentName: "foo", LabelParentType: "svc", LabelParentNamespace: "foo"},
managed: tailscaleManagedLabels,
},
{
name: "no custom labels specified, but some present in current labels, return tailscale managed labels only from the current labels",
current: map[string]string{"foo": "bar", "something.io/foo": "bar", LabelManaged: "true", LabelParentName: "foo", LabelParentType: "svc", LabelParentNamespace: "foo"},
want: map[string]string{LabelManaged: "true", LabelParentName: "foo", LabelParentType: "svc", LabelParentNamespace: "foo"},
managed: tailscaleManagedLabels,
},
{
name: "custom labels specified, current labels only contain tailscale managed labels, return a union of both",
current: map[string]string{LabelManaged: "true", LabelParentName: "foo", LabelParentType: "svc", LabelParentNamespace: "foo"},
custom: map[string]string{"foo": "bar", "something.io/foo": "bar"},
want: map[string]string{"foo": "bar", "something.io/foo": "bar", LabelManaged: "true", LabelParentName: "foo", LabelParentType: "svc", LabelParentNamespace: "foo"},
managed: tailscaleManagedLabels,
},
{
name:    "custom labels specified, current labels contain tailscale managed labels and custom labels, some of which are not present in the new custom labels, return a union of managed labels and the desired custom labels",
current: map[string]string{"foo": "bar", "bar": "baz", "app": "1234", LabelManaged: "true", LabelParentName: "foo", LabelParentType: "svc", LabelParentNamespace: "foo"},
custom: map[string]string{"foo": "bar", "something.io/foo": "bar"},
want: map[string]string{"foo": "bar", "something.io/foo": "bar", "app": "1234", LabelManaged: "true", LabelParentName: "foo", LabelParentType: "svc", LabelParentNamespace: "foo"},
managed: tailscaleManagedLabels,
},
{
name: "no current labels present, return custom labels only",
custom: map[string]string{"foo": "bar", "something.io/foo": "bar"},
want: map[string]string{"foo": "bar", "something.io/foo": "bar"},
managed: tailscaleManagedLabels,
},
{
name: "no current labels present, no custom labels specified, return empty map",
want: map[string]string{},
managed: tailscaleManagedLabels,
},
{
name: "no custom annots specified and none present in current annots, return current annots",
current: map[string]string{podAnnotationLastSetClusterIP: "1.2.3.4"},
want: map[string]string{podAnnotationLastSetClusterIP: "1.2.3.4"},
managed: tailscaleManagedAnnotations,
},
{
name: "no custom annots specified, but some present in current annots, return tailscale managed annots only from the current annots",
current: map[string]string{"foo": "bar", "something.io/foo": "bar", podAnnotationLastSetClusterIP: "1.2.3.4"},
want: map[string]string{podAnnotationLastSetClusterIP: "1.2.3.4"},
managed: tailscaleManagedAnnotations,
},
{
name: "custom annots specified, current annots only contain tailscale managed annots, return a union of both",
current: map[string]string{podAnnotationLastSetClusterIP: "1.2.3.4"},
custom: map[string]string{"foo": "bar", "something.io/foo": "bar"},
want: map[string]string{"foo": "bar", "something.io/foo": "bar", podAnnotationLastSetClusterIP: "1.2.3.4"},
managed: tailscaleManagedAnnotations,
},
{
name: "custom annots specified, current annots contain tailscale managed annots and custom annots, some of which are not present in the new custom annots, return a union of managed annots and the desired custom annots",
current: map[string]string{"foo": "bar", "something.io/foo": "bar", podAnnotationLastSetClusterIP: "1.2.3.4"},
custom: map[string]string{"something.io/foo": "bar"},
want: map[string]string{"something.io/foo": "bar", podAnnotationLastSetClusterIP: "1.2.3.4"},
managed: tailscaleManagedAnnotations,
},
{
name: "no current annots present, return custom annots only",
custom: map[string]string{"foo": "bar", "something.io/foo": "bar"},
want: map[string]string{"foo": "bar", "something.io/foo": "bar"},
managed: tailscaleManagedAnnotations,
},
{
name:    "no current annots present, no custom annots specified, return empty map",
want: map[string]string{},
managed: tailscaleManagedAnnotations,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := mergeStatefulSetLabelsOrAnnots(tt.current, tt.custom, tt.managed); !reflect.DeepEqual(got, tt.want) {
t.Errorf("mergeStatefulSetLabels() = %v, want %v", got, tt.want)
}
})
}
}
