Compare commits


19 Commits

Author SHA1 Message Date
franbull 5fe3c1400f
Merge 36194a037e into 4dece0c359 2024-04-26 21:17:30 -04:00
Brad Fitzpatrick 4dece0c359 net/netutil: remove a use of deprecated interfaces.GetState
I'm working on moving all network state queries to be on
netmon.Monitor, removing old APIs.

Updates tailscale/corp#10910
Updates tailscale/corp#18960
Updates #7967
Updates #3299

Change-Id: If0de137e0e2e145520f69e258597fb89cf39a2a3
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-26 18:17:27 -07:00
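A minimal sketch of the direction this commit describes: callers take a plumbed *netmon.Monitor instead of calling the deprecated package-level interfaces.GetState. The InterfaceState accessor and DefaultRouteInterface field below are assumed names, not shown in this compare.

```go
// Hedged sketch: ask a plumbed *netmon.Monitor for the current network state
// rather than calling the deprecated package-level interfaces.GetState.
// InterfaceState and DefaultRouteInterface are assumed names, for
// illustration only.
package netstate

import "tailscale.com/net/netmon"

func defaultRouteInterface(mon *netmon.Monitor) (string, bool) {
	st := mon.InterfaceState() // cached state owned by the monitor (assumed accessor)
	if st == nil {
		return "", false
	}
	return st.DefaultRouteInterface, true
}
```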
Brad Fitzpatrick 7f587d0321 health, wgengine/magicsock: remove last of health package globals
Fixes #11874
Updates #4136

Change-Id: Ib70e6831d4c19c32509fe3d7eee4aa0e9f233564
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-26 17:36:19 -07:00
Jonathan Nobels 71e9258ad9
ipn/ipnlocal: fix null dereference for early suggested exit node queries (#11885)
Fixes tailscale/corp#19558

A request for the suggested exit nodes that occurs too early in the
VPN lifecycle would result in a null deref of the netmap and/or
the netcheck report. This checks both and errors out.

Signed-off-by: Jonathan Nobels <jonathan@tailscale.com>
2024-04-26 14:35:11 -07:00
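A small self-contained sketch of the guard this commit describes; the backend, netMap, and netcheckReport types below are illustrative stand-ins rather than the actual ipn/ipnlocal types.

```go
package main

import (
	"errors"
	"fmt"
)

// Illustrative stand-ins for the real netmap and netcheck report types.
type netMap struct{}
type netcheckReport struct{}

type backend struct {
	netMap     *netMap
	lastReport *netcheckReport
}

// suggestExitNode errors out early if the backend has not yet received a
// netmap or a netcheck report, rather than dereferencing a nil pointer.
func (b *backend) suggestExitNode() (string, error) {
	if b.netMap == nil || b.lastReport == nil {
		return "", errors.New("suggested exit nodes: no netmap or netcheck report yet")
	}
	// ...rank candidate exit nodes by measured latency here...
	return "example-exit-node", nil
}

func main() {
	b := &backend{} // too early in the VPN lifecycle: nothing populated yet
	if _, err := b.suggestExitNode(); err != nil {
		fmt.Println(err)
	}
}
```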
Fran Bull 36194a037e appc: setting AdvertiseRoutes explicitly discards app connector routes
This fixes bugs where, after using the CLI to set AdvertiseRoutes, users
found that they had to restart tailscaled before the app connector would
advertise previously learned routes again. It also seems more in line with
user expectations.

Fixes #11006
Signed-off-by: Fran Bull <fran@tailscale.com>
2024-04-26 13:57:07 -07:00
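A hedged sketch of the intended behaviour, built on the ClearRoutes and UpdateDomainsAndRoutes methods that appear in this compare; the helper and its call site are hypothetical, not the actual ipnlocal wiring.

```go
package appcsketch

import (
	"net/netip"

	"tailscale.com/appc"
)

// applyExplicitAdvertiseRoutes sketches the intent described above: when the
// user sets AdvertiseRoutes explicitly, drop the connector's remembered route
// state so it re-learns and re-advertises routes from the new configuration,
// without a tailscaled restart. (Hypothetical helper, not the real call site.)
func applyExplicitAdvertiseRoutes(a *appc.AppConnector, domains []string, routes []netip.Prefix) error {
	if err := a.ClearRoutes(); err != nil {
		return err
	}
	a.UpdateDomainsAndRoutes(domains, routes)
	return nil
}
```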
Fran Bull fd096680f0 appc: unadvertise routes when reconfiguring app connector
If the controlknob to persist app connector routes is enabled, unadvertise
routes that are no longer relevant when reconfiguring an app connector.

Updates #11008
Signed-off-by: Fran Bull <fran@tailscale.com>
2024-04-26 13:57:07 -07:00
Fran Bull 63f66ad4ad appc: write discovered domains to StateStore
This happens only if the controlknob is on. It will allow us to remove the
discovered routes associated with a particular domain.

Updates #11008
Signed-off-by: Fran Bull <fran@tailscale.com>
2024-04-26 12:26:47 -07:00
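A minimal sketch of the persistence idea, assuming a simple key/value state store; the store interface, key name, and helper functions are illustrative, not the actual ipn.StateStore plumbing, while appc.RouteInfo is the struct added in this compare.

```go
package routestore

import (
	"encoding/json"

	"tailscale.com/appc"
)

// stateStore is an illustrative key/value store, standing in for whatever
// state-store plumbing tailscaled actually uses.
type stateStore interface {
	WriteState(key string, data []byte) error
	ReadState(key string) ([]byte, error)
}

const routeInfoKey = "_appc-route-info" // illustrative key name

// storeRouteInfo marshals the connector's RouteInfo to JSON and writes it
// under a fixed key, so learned routes survive a tailscaled restart.
func storeRouteInfo(s stateStore, ri *appc.RouteInfo) error {
	data, err := json.Marshal(ri)
	if err != nil {
		return err
	}
	return s.WriteState(routeInfoKey, data)
}

// loadRouteInfo is the inverse, run at startup to seed a new AppConnector
// with previously discovered routes.
func loadRouteInfo(s stateStore) (*appc.RouteInfo, error) {
	data, err := s.ReadState(routeInfoKey)
	if err != nil {
		return nil, err
	}
	ri := new(appc.RouteInfo)
	if err := json.Unmarshal(data, ri); err != nil {
		return nil, err
	}
	return ri, nil
}
```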
Fran Bull 9494209767 appc: add flag shouldStoreRoutes and controlknob for it
When an app connector is reconfigured and domains to route are removed,
we would like to no longer advertise routes that were discovered for
those domains. In order to do this we plan to store which routes were
discovered for which domains.

Add a controlknob so that we can enable/disable the new behavior.

Updates #11008
Signed-off-by: Fran Bull <fran@tailscale.com>
2024-04-26 12:26:43 -07:00
Fran Bull af32580cfb appc: add RouteInfo struct and persist it to StateStore
Lays the groundwork for the ability to persist app connectors discovered
routes, which will allow us to stop advertising routes for a domain if
the app connector no longer monitors that domain.

Updates #11008
Signed-off-by: Fran Bull <fran@tailscale.com>
2024-04-26 12:26:08 -07:00
Brad Fitzpatrick 745931415c health, all: remove health.Global, finish plumbing health.Tracker
Updates #11874
Updates #4136

Change-Id: I414470f71d90be9889d44c3afd53956d9f26cd61
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-26 12:03:11 -07:00
Brad Fitzpatrick a4a282cd49 control/controlclient: plumb health.Tracker
Updates #11874
Updates #4136

Change-Id: Ia941153bd83523f0c8b56852010f5231d774d91a
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-26 10:12:33 -07:00
Brad Fitzpatrick 6d69fc137f ipn/{ipnlocal,localapi},wgengine{,/magicsock}: plumb health.Tracker
Down to 25 health.Global users. After this, only controlclient, net/dns &
wgengine/router remain.

Updates #11874
Updates #4136

Change-Id: I6dd1856e3d9bf523bdd44b60fb3b8f7501d5dc0d
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-26 09:43:28 -07:00
Irbe Krumina df8f40905b
cmd/k8s-operator,k8s-operator: optionally serve tailscaled metrics on Pod IP (#11699)
Adds a new .spec.metrics field to ProxyClass to allow users to optionally serve
client metrics (tailscaled --debug) on <Pod-IP>:9001.
Metrics cannot currently be enabled for proxies that egress traffic to the tailnet,
nor for Ingress proxies with the tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation
(because those currently forward all cluster traffic to their respective backends).

The assumption is that users will want these metrics enabled continuously
to monitor proxy behaviour (as opposed to enabling them temporarily for
debugging). Hence we expose them on the Pod IP to make them easier to
consume, e.g. via a Prometheus PodMonitor.

Updates tailscale/tailscale#11292

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-04-26 08:25:06 +01:00
Brad Fitzpatrick 723c775dbb tsd, ipnlocal, etc: add tsd.System.HealthTracker, start some plumbing
This adds a health.Tracker to tsd.System, accessible via
a new tsd.System.HealthTracker method.

In the future, that new method will return a tsd.System-specific
HealthTracker, so multiple tsnet.Servers in the same process are
isolated. For now, though, it just always returns the temporary
health.Global value. That permits incremental plumbing over a number
of changes. When the second to last health.Global reference is gone,
then the tsd.System.HealthTracker implementation can return a private
Tracker.

The primary plumbing this does is adding it to LocalBackend and its
dozen and change health calls. A few misc other callers are also
plumbed. Subsequent changes will flesh out other parts of the tree
(magicsock, controlclient, etc).

Updates #11874
Updates #4136

Change-Id: Id51e73cfc8a39110425b6dc19d18b3975eac75ce
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-25 22:13:04 -07:00
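A compact sketch of the transitional accessor this commit describes; the types below are self-contained stand-ins for tsd.System and health.Tracker, so the names are illustrative only.

```go
package tsdsketch

// Tracker stands in for health.Tracker in this sketch.
type Tracker struct{}

// Global stands in for the temporary process-wide health.Global tracker that
// exists at this point in the compare (and is removed by a later commit).
var Global = new(Tracker)

// System stands in for tsd.System.
type System struct {
	healthTracker *Tracker // future per-System tracker; unused for now
}

// HealthTracker returns the temporary global tracker for now, so callers can
// be plumbed one at a time; once the last global reference is gone it can
// return a private per-System Tracker instead.
func (s *System) HealthTracker() *Tracker {
	return Global
}
```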
Brad Fitzpatrick cb66952a0d health: permit Tracker method calls on nil receiver
In prep for tsd.System Tracker plumbing throughout tailscaled,
defensively permit all methods on Tracker to accept a nil receiver
without crashing, lest I screw something up later. (A health tracking
system that itself causes crashes would be no good.) Methods on nil
receivers should not be called, so a future change will also collect
their stacks (and panic during dev/test), but we should at least not
crash in prod.

This also locks that in with a test using reflect to automatically
call all methods on a nil receiver and check they don't crash.

Updates #11874
Updates #4136

Change-Id: I8e955046ebf370ec8af0c1fb63e5123e6282a9d3
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-25 20:45:57 -07:00
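A self-contained sketch of the two ideas in this commit: a Tracker method that tolerates a nil receiver, and a reflect-driven test that calls every method on a nil *Tracker and fails if any panic. Tracker and SetUnhealthy are illustrative names, not the real health API.

```go
package healthsketch

import (
	"reflect"
	"sync"
	"testing"
)

// Tracker is an illustrative stand-in for health.Tracker.
type Tracker struct {
	mu       sync.Mutex
	problems map[string]error
}

// SetUnhealthy records a problem. A nil receiver is tolerated so that code
// which hasn't been plumbed with a Tracker yet degrades gracefully instead
// of crashing.
func (t *Tracker) SetUnhealthy(key string, err error) {
	if t == nil {
		return
	}
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.problems == nil {
		t.problems = map[string]error{}
	}
	t.problems[key] = err
}

// TestNilReceiverMethodsDontPanic calls every exported method on a nil
// *Tracker with zero-value arguments and fails if any of them panic.
func TestNilReceiverMethodsDontPanic(t *testing.T) {
	v := reflect.ValueOf((*Tracker)(nil))
	for i := 0; i < v.NumMethod(); i++ {
		m := v.Type().Method(i)
		if m.Type.IsVariadic() {
			continue // keep the sketch simple; a real test would handle these too
		}
		args := make([]reflect.Value, 0, m.Type.NumIn()-1)
		for j := 1; j < m.Type.NumIn(); j++ { // skip the receiver parameter
			args = append(args, reflect.Zero(m.Type.In(j)))
		}
		func() {
			defer func() {
				if r := recover(); r != nil {
					t.Errorf("method %s panicked on nil receiver: %v", m.Name, r)
				}
			}()
			v.Method(i).Call(args)
		}()
	}
}
```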
Chris Palmer 7349b274bd
safeweb: handle mux pattern collisions more generally (#11801)
Fixes #11800

Signed-off-by: Chris Palmer <cpalmer@tailscale.com>
2024-04-25 16:08:30 -07:00
Brad Fitzpatrick 5b32264033 health: break Warnable into a global and per-Tracker value halves
Previously it was both metadata about the class of warnable item and the
value itself.

Now it's only metadata, and the value is per-Tracker.

Updates #11874
Updates #4136

Change-Id: Ia1ed1b6c95d34bc5aae36cffdb04279e6ba77015
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-25 14:40:11 -07:00
Brad Fitzpatrick ebc552d2e0 health: add Tracker type, in prep for removing global variables
This moves most of the health package global variables to a new
`health.Tracker` type.

But rather than plumbing the Tracker through tsd.System everywhere, this
only goes halfway and makes one new global Tracker (`health.Global`) that
all the existing callers now use.

A future change will eliminate that global.

Updates #11874
Updates #4136

Change-Id: I6ee27e0b2e35f68cb38fecdb3b2dc4c3f2e09d68
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-04-25 13:46:22 -07:00
Claire Wang d5fc52a0f5
tailcfg: add auto exit node attribute (#11871)
Updates tailscale/corp#19515

Signed-off-by: Claire Wang <claire@tailscale.com>
2024-04-25 15:05:39 -04:00
82 changed files with 1813 additions and 821 deletions

View File

@ -23,6 +23,7 @@ import (
"tailscale.com/util/dnsname" "tailscale.com/util/dnsname"
"tailscale.com/util/execqueue" "tailscale.com/util/execqueue"
"tailscale.com/util/mak" "tailscale.com/util/mak"
"tailscale.com/util/slicesx"
) )
// RouteAdvertiser is an interface that allows the AppConnector to advertise // RouteAdvertiser is an interface that allows the AppConnector to advertise
@ -36,6 +37,19 @@ type RouteAdvertiser interface {
UnadvertiseRoute(...netip.Prefix) error UnadvertiseRoute(...netip.Prefix) error
} }
// RouteInfo is a data structure used to persist the in memory state of an AppConnector
// so that we can know, even after a restart, which routes came from ACLs and which were
// learned from domains.
type RouteInfo struct {
// Control is the routes from the 'routes' section of an app connector acl.
Control []netip.Prefix `json:",omitempty"`
// Domains are the routes discovered by observing DNS lookups for configured domains.
Domains map[string][]netip.Addr `json:",omitempty"`
// Wildcards are the configured DNS lookup domains to observe. When a DNS query matches Wildcards,
// its result is added to Domains.
Wildcards []string `json:",omitempty"`
}
// AppConnector is an implementation of an AppConnector that performs // AppConnector is an implementation of an AppConnector that performs
// its function as a subsystem inside of a tailscale node. At the control plane // its function as a subsystem inside of a tailscale node. At the control plane
// side App Connector routing is configured in terms of domains rather than IP // side App Connector routing is configured in terms of domains rather than IP
@ -49,6 +63,9 @@ type AppConnector struct {
logf logger.Logf logf logger.Logf
routeAdvertiser RouteAdvertiser routeAdvertiser RouteAdvertiser
// storeRoutesFunc will be called to persist routes if it is not nil.
storeRoutesFunc func(*RouteInfo) error
// mu guards the fields that follow // mu guards the fields that follow
mu sync.Mutex mu sync.Mutex
@ -67,11 +84,46 @@ type AppConnector struct {
} }
// NewAppConnector creates a new AppConnector. // NewAppConnector creates a new AppConnector.
func NewAppConnector(logf logger.Logf, routeAdvertiser RouteAdvertiser) *AppConnector { func NewAppConnector(logf logger.Logf, routeAdvertiser RouteAdvertiser, routeInfo *RouteInfo, storeRoutesFunc func(*RouteInfo) error) *AppConnector {
return &AppConnector{ ac := &AppConnector{
logf: logger.WithPrefix(logf, "appc: "), logf: logger.WithPrefix(logf, "appc: "),
routeAdvertiser: routeAdvertiser, routeAdvertiser: routeAdvertiser,
storeRoutesFunc: storeRoutesFunc,
} }
if routeInfo != nil {
ac.domains = routeInfo.Domains
ac.wildcards = routeInfo.Wildcards
ac.controlRoutes = routeInfo.Control
}
return ac
}
// ShouldStoreRoutes returns true if the appconnector was created with the controlknob on
// and is storing its discovered routes persistently.
func (e *AppConnector) ShouldStoreRoutes() bool {
return e.storeRoutesFunc != nil
}
// storeRoutesLocked takes the current state of the AppConnector and persists it
func (e *AppConnector) storeRoutesLocked() error {
if !e.ShouldStoreRoutes() {
return nil
}
return e.storeRoutesFunc(&RouteInfo{
Control: e.controlRoutes,
Domains: e.domains,
Wildcards: e.wildcards,
})
}
// ClearRoutes removes all route state from the AppConnector.
func (e *AppConnector) ClearRoutes() error {
e.mu.Lock()
defer e.mu.Unlock()
e.controlRoutes = nil
e.domains = nil
e.wildcards = nil
return e.storeRoutesLocked()
} }
// UpdateDomainsAndRoutes starts an asynchronous update of the configuration // UpdateDomainsAndRoutes starts an asynchronous update of the configuration
@ -125,10 +177,26 @@ func (e *AppConnector) updateDomains(domains []string) {
for _, wc := range e.wildcards { for _, wc := range e.wildcards {
if dnsname.HasSuffix(d, wc) { if dnsname.HasSuffix(d, wc) {
e.domains[d] = addrs e.domains[d] = addrs
delete(oldDomains, d)
break break
} }
} }
} }
// Everything left in oldDomains is a domain we're no longer tracking
// and if we are storing route info we can unadvertise the routes
if e.ShouldStoreRoutes() {
toRemove := []netip.Prefix{}
for _, addrs := range oldDomains {
for _, a := range addrs {
toRemove = append(toRemove, netip.PrefixFrom(a, a.BitLen()))
}
}
if err := e.routeAdvertiser.UnadvertiseRoute(toRemove...); err != nil {
e.logf("failed to unadvertise routes on domain removal: %v: %v: %v", xmaps.Keys(oldDomains), toRemove, err)
}
}
e.logf("handling domains: %v and wildcards: %v", xmaps.Keys(e.domains), e.wildcards) e.logf("handling domains: %v and wildcards: %v", xmaps.Keys(e.domains), e.wildcards)
} }
@ -152,6 +220,14 @@ func (e *AppConnector) updateRoutes(routes []netip.Prefix) {
var toRemove []netip.Prefix var toRemove []netip.Prefix
// If we're storing routes and know e.controlRoutes is a good
// representation of what should be in AdvertisedRoutes we can stop
// advertising routes that used to be in e.controlRoutes but are not
// in routes.
if e.ShouldStoreRoutes() {
toRemove = routesWithout(e.controlRoutes, routes)
}
nextRoute: nextRoute:
for _, r := range routes { for _, r := range routes {
for _, addr := range e.domains { for _, addr := range e.domains {
@ -170,6 +246,9 @@ nextRoute:
} }
e.controlRoutes = routes e.controlRoutes = routes
if err := e.storeRoutesLocked(); err != nil {
e.logf("failed to store route info: %v", err)
}
} }
// Domains returns the currently configured domain list. // Domains returns the currently configured domain list.
@ -380,6 +459,9 @@ func (e *AppConnector) scheduleAdvertisement(domain string, routes ...netip.Pref
e.logf("[v2] advertised route for %v: %v", domain, addr) e.logf("[v2] advertised route for %v: %v", domain, addr)
} }
} }
if err := e.storeRoutesLocked(); err != nil {
e.logf("failed to store route info: %v", err)
}
}) })
} }
@ -400,3 +482,15 @@ func (e *AppConnector) addDomainAddrLocked(domain string, addr netip.Addr) {
func compareAddr(l, r netip.Addr) int { func compareAddr(l, r netip.Addr) int {
return l.Compare(r) return l.Compare(r)
} }
// routesWithout returns a without b where a and b
// are unsorted slices of netip.Prefix
func routesWithout(a, b []netip.Prefix) []netip.Prefix {
m := make(map[netip.Prefix]bool, len(b))
for _, p := range b {
m[p] = true
}
return slicesx.Filter(make([]netip.Prefix, 0, len(a)), a, func(p netip.Prefix) bool {
return !m[p]
})
}

View File

@ -17,194 +17,238 @@ import (
"tailscale.com/util/must" "tailscale.com/util/must"
) )
func fakeStoreRoutes(*RouteInfo) error { return nil }
func TestUpdateDomains(t *testing.T) { func TestUpdateDomains(t *testing.T) {
ctx := context.Background() for _, shouldStore := range []bool{false, true} {
a := NewAppConnector(t.Logf, nil) ctx := context.Background()
a.UpdateDomains([]string{"example.com"}) var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, &appctest.RouteCollector{}, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, &appctest.RouteCollector{}, nil, nil)
}
a.UpdateDomains([]string{"example.com"})
a.Wait(ctx) a.Wait(ctx)
if got, want := a.Domains().AsSlice(), []string{"example.com"}; !slices.Equal(got, want) { if got, want := a.Domains().AsSlice(), []string{"example.com"}; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want) t.Errorf("got %v; want %v", got, want)
} }
addr := netip.MustParseAddr("192.0.0.8") addr := netip.MustParseAddr("192.0.0.8")
a.domains["example.com"] = append(a.domains["example.com"], addr) a.domains["example.com"] = append(a.domains["example.com"], addr)
a.UpdateDomains([]string{"example.com"}) a.UpdateDomains([]string{"example.com"})
a.Wait(ctx) a.Wait(ctx)
if got, want := a.domains["example.com"], []netip.Addr{addr}; !slices.Equal(got, want) { if got, want := a.domains["example.com"], []netip.Addr{addr}; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want) t.Errorf("got %v; want %v", got, want)
} }
// domains are explicitly downcased on set. // domains are explicitly downcased on set.
a.UpdateDomains([]string{"UP.EXAMPLE.COM"}) a.UpdateDomains([]string{"UP.EXAMPLE.COM"})
a.Wait(ctx) a.Wait(ctx)
if got, want := xmaps.Keys(a.domains), []string{"up.example.com"}; !slices.Equal(got, want) { if got, want := xmaps.Keys(a.domains), []string{"up.example.com"}; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want) t.Errorf("got %v; want %v", got, want)
}
} }
} }
func TestUpdateRoutes(t *testing.T) { func TestUpdateRoutes(t *testing.T) {
ctx := context.Background() for _, shouldStore := range []bool{false, true} {
rc := &appctest.RouteCollector{} ctx := context.Background()
a := NewAppConnector(t.Logf, rc) rc := &appctest.RouteCollector{}
a.updateDomains([]string{"*.example.com"}) var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
a.updateDomains([]string{"*.example.com"})
// This route should be collapsed into the range // This route should be collapsed into the range
a.ObserveDNSResponse(dnsResponse("a.example.com.", "192.0.2.1")) a.ObserveDNSResponse(dnsResponse("a.example.com.", "192.0.2.1"))
a.Wait(ctx) a.Wait(ctx)
if !slices.Equal(rc.Routes(), []netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")}) { if !slices.Equal(rc.Routes(), []netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")}) {
t.Fatalf("got %v, want %v", rc.Routes(), []netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")}) t.Fatalf("got %v, want %v", rc.Routes(), []netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")})
} }
// This route should not be collapsed or removed // This route should not be collapsed or removed
a.ObserveDNSResponse(dnsResponse("b.example.com.", "192.0.0.1")) a.ObserveDNSResponse(dnsResponse("b.example.com.", "192.0.0.1"))
a.Wait(ctx) a.Wait(ctx)
routes := []netip.Prefix{netip.MustParsePrefix("192.0.2.0/24"), netip.MustParsePrefix("192.0.0.1/32")} routes := []netip.Prefix{netip.MustParsePrefix("192.0.2.0/24"), netip.MustParsePrefix("192.0.0.1/32")}
a.updateRoutes(routes) a.updateRoutes(routes)
slices.SortFunc(rc.Routes(), prefixCompare) slices.SortFunc(rc.Routes(), prefixCompare)
rc.SetRoutes(slices.Compact(rc.Routes())) rc.SetRoutes(slices.Compact(rc.Routes()))
slices.SortFunc(routes, prefixCompare) slices.SortFunc(routes, prefixCompare)
// Ensure that the non-matching /32 is preserved, even though it's in the domains table. // Ensure that the non-matching /32 is preserved, even though it's in the domains table.
if !slices.EqualFunc(routes, rc.Routes(), prefixEqual) { if !slices.EqualFunc(routes, rc.Routes(), prefixEqual) {
t.Errorf("added routes: got %v, want %v", rc.Routes(), routes) t.Errorf("added routes: got %v, want %v", rc.Routes(), routes)
} }
// Ensure that the contained /32 is removed, replaced by the /24. // Ensure that the contained /32 is removed, replaced by the /24.
wantRemoved := []netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")} wantRemoved := []netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")}
if !slices.EqualFunc(rc.RemovedRoutes(), wantRemoved, prefixEqual) { if !slices.EqualFunc(rc.RemovedRoutes(), wantRemoved, prefixEqual) {
t.Fatalf("unexpected removed routes: %v", rc.RemovedRoutes()) t.Fatalf("unexpected removed routes: %v", rc.RemovedRoutes())
}
} }
} }
func TestUpdateRoutesUnadvertisesContainedRoutes(t *testing.T) { func TestUpdateRoutesUnadvertisesContainedRoutes(t *testing.T) {
rc := &appctest.RouteCollector{} for _, shouldStore := range []bool{false, true} {
a := NewAppConnector(t.Logf, rc) rc := &appctest.RouteCollector{}
mak.Set(&a.domains, "example.com", []netip.Addr{netip.MustParseAddr("192.0.2.1")}) var a *AppConnector
rc.SetRoutes([]netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")}) if shouldStore {
routes := []netip.Prefix{netip.MustParsePrefix("192.0.2.0/24")} a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
a.updateRoutes(routes) } else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
mak.Set(&a.domains, "example.com", []netip.Addr{netip.MustParseAddr("192.0.2.1")})
rc.SetRoutes([]netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")})
routes := []netip.Prefix{netip.MustParsePrefix("192.0.2.0/24")}
a.updateRoutes(routes)
if !slices.EqualFunc(routes, rc.Routes(), prefixEqual) { if !slices.EqualFunc(routes, rc.Routes(), prefixEqual) {
t.Fatalf("got %v, want %v", rc.Routes(), routes) t.Fatalf("got %v, want %v", rc.Routes(), routes)
}
} }
} }
func TestDomainRoutes(t *testing.T) { func TestDomainRoutes(t *testing.T) {
rc := &appctest.RouteCollector{} for _, shouldStore := range []bool{false, true} {
a := NewAppConnector(t.Logf, rc) rc := &appctest.RouteCollector{}
a.updateDomains([]string{"example.com"}) var a *AppConnector
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8")) if shouldStore {
a.Wait(context.Background()) a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
a.updateDomains([]string{"example.com"})
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
a.Wait(context.Background())
want := map[string][]netip.Addr{ want := map[string][]netip.Addr{
"example.com": {netip.MustParseAddr("192.0.0.8")}, "example.com": {netip.MustParseAddr("192.0.0.8")},
} }
if got := a.DomainRoutes(); !reflect.DeepEqual(got, want) { if got := a.DomainRoutes(); !reflect.DeepEqual(got, want) {
t.Fatalf("DomainRoutes: got %v, want %v", got, want) t.Fatalf("DomainRoutes: got %v, want %v", got, want)
}
} }
} }
func TestObserveDNSResponse(t *testing.T) { func TestObserveDNSResponse(t *testing.T) {
ctx := context.Background() for _, shouldStore := range []bool{false, true} {
rc := &appctest.RouteCollector{} ctx := context.Background()
a := NewAppConnector(t.Logf, rc) rc := &appctest.RouteCollector{}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
// a has no domains configured, so it should not advertise any routes // a has no domains configured, so it should not advertise any routes
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8")) a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
if got, want := rc.Routes(), ([]netip.Prefix)(nil); !slices.Equal(got, want) { if got, want := rc.Routes(), ([]netip.Prefix)(nil); !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want) t.Errorf("got %v; want %v", got, want)
} }
wantRoutes := []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")} wantRoutes := []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}
a.updateDomains([]string{"example.com"}) a.updateDomains([]string{"example.com"})
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8")) a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
a.Wait(ctx) a.Wait(ctx)
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) { if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want) t.Errorf("got %v; want %v", got, want)
} }
// a CNAME record chain should result in a route being added if the chain // a CNAME record chain should result in a route being added if the chain
// matches a routed domain. // matches a routed domain.
a.updateDomains([]string{"www.example.com", "example.com"}) a.updateDomains([]string{"www.example.com", "example.com"})
a.ObserveDNSResponse(dnsCNAMEResponse("192.0.0.9", "www.example.com.", "chain.example.com.", "example.com.")) a.ObserveDNSResponse(dnsCNAMEResponse("192.0.0.9", "www.example.com.", "chain.example.com.", "example.com."))
a.Wait(ctx) a.Wait(ctx)
wantRoutes = append(wantRoutes, netip.MustParsePrefix("192.0.0.9/32")) wantRoutes = append(wantRoutes, netip.MustParsePrefix("192.0.0.9/32"))
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) { if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want) t.Errorf("got %v; want %v", got, want)
} }
// a CNAME record chain should result in a route being added if the chain // a CNAME record chain should result in a route being added if the chain
// even if only found in the middle of the chain // even if only found in the middle of the chain
a.ObserveDNSResponse(dnsCNAMEResponse("192.0.0.10", "outside.example.org.", "www.example.com.", "example.org.")) a.ObserveDNSResponse(dnsCNAMEResponse("192.0.0.10", "outside.example.org.", "www.example.com.", "example.org."))
a.Wait(ctx) a.Wait(ctx)
wantRoutes = append(wantRoutes, netip.MustParsePrefix("192.0.0.10/32")) wantRoutes = append(wantRoutes, netip.MustParsePrefix("192.0.0.10/32"))
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) { if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want) t.Errorf("got %v; want %v", got, want)
} }
wantRoutes = append(wantRoutes, netip.MustParsePrefix("2001:db8::1/128")) wantRoutes = append(wantRoutes, netip.MustParsePrefix("2001:db8::1/128"))
a.ObserveDNSResponse(dnsResponse("example.com.", "2001:db8::1")) a.ObserveDNSResponse(dnsResponse("example.com.", "2001:db8::1"))
a.Wait(ctx) a.Wait(ctx)
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) { if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want) t.Errorf("got %v; want %v", got, want)
} }
// don't re-advertise routes that have already been advertised // don't re-advertise routes that have already been advertised
a.ObserveDNSResponse(dnsResponse("example.com.", "2001:db8::1")) a.ObserveDNSResponse(dnsResponse("example.com.", "2001:db8::1"))
a.Wait(ctx) a.Wait(ctx)
if !slices.Equal(rc.Routes(), wantRoutes) { if !slices.Equal(rc.Routes(), wantRoutes) {
t.Errorf("rc.Routes(): got %v; want %v", rc.Routes(), wantRoutes) t.Errorf("rc.Routes(): got %v; want %v", rc.Routes(), wantRoutes)
} }
// don't advertise addresses that are already in a control provided route // don't advertise addresses that are already in a control provided route
pfx := netip.MustParsePrefix("192.0.2.0/24") pfx := netip.MustParsePrefix("192.0.2.0/24")
a.updateRoutes([]netip.Prefix{pfx}) a.updateRoutes([]netip.Prefix{pfx})
wantRoutes = append(wantRoutes, pfx) wantRoutes = append(wantRoutes, pfx)
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.2.1")) a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.2.1"))
a.Wait(ctx) a.Wait(ctx)
if !slices.Equal(rc.Routes(), wantRoutes) { if !slices.Equal(rc.Routes(), wantRoutes) {
t.Errorf("rc.Routes(): got %v; want %v", rc.Routes(), wantRoutes) t.Errorf("rc.Routes(): got %v; want %v", rc.Routes(), wantRoutes)
} }
if !slices.Contains(a.domains["example.com"], netip.MustParseAddr("192.0.2.1")) { if !slices.Contains(a.domains["example.com"], netip.MustParseAddr("192.0.2.1")) {
t.Errorf("missing %v from %v", "192.0.2.1", a.domains["exmaple.com"]) t.Errorf("missing %v from %v", "192.0.2.1", a.domains["exmaple.com"])
}
} }
} }
func TestWildcardDomains(t *testing.T) { func TestWildcardDomains(t *testing.T) {
ctx := context.Background() for _, shouldStore := range []bool{false, true} {
rc := &appctest.RouteCollector{} ctx := context.Background()
a := NewAppConnector(t.Logf, rc) rc := &appctest.RouteCollector{}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
a.updateDomains([]string{"*.example.com"}) a.updateDomains([]string{"*.example.com"})
a.ObserveDNSResponse(dnsResponse("foo.example.com.", "192.0.0.8")) a.ObserveDNSResponse(dnsResponse("foo.example.com.", "192.0.0.8"))
a.Wait(ctx) a.Wait(ctx)
if got, want := rc.Routes(), []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}; !slices.Equal(got, want) { if got, want := rc.Routes(), []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}; !slices.Equal(got, want) {
t.Errorf("routes: got %v; want %v", got, want) t.Errorf("routes: got %v; want %v", got, want)
} }
if got, want := a.wildcards, []string{"example.com"}; !slices.Equal(got, want) { if got, want := a.wildcards, []string{"example.com"}; !slices.Equal(got, want) {
t.Errorf("wildcards: got %v; want %v", got, want) t.Errorf("wildcards: got %v; want %v", got, want)
} }
a.updateDomains([]string{"*.example.com", "example.com"}) a.updateDomains([]string{"*.example.com", "example.com"})
if _, ok := a.domains["foo.example.com"]; !ok { if _, ok := a.domains["foo.example.com"]; !ok {
t.Errorf("expected foo.example.com to be preserved in domains due to wildcard") t.Errorf("expected foo.example.com to be preserved in domains due to wildcard")
} }
if got, want := a.wildcards, []string{"example.com"}; !slices.Equal(got, want) { if got, want := a.wildcards, []string{"example.com"}; !slices.Equal(got, want) {
t.Errorf("wildcards: got %v; want %v", got, want) t.Errorf("wildcards: got %v; want %v", got, want)
} }
// There was an early regression where the wildcard domain was added repeatedly, this guards against that. // There was an early regression where the wildcard domain was added repeatedly, this guards against that.
a.updateDomains([]string{"*.example.com", "example.com"}) a.updateDomains([]string{"*.example.com", "example.com"})
if len(a.wildcards) != 1 { if len(a.wildcards) != 1 {
t.Errorf("expected only one wildcard domain, got %v", a.wildcards) t.Errorf("expected only one wildcard domain, got %v", a.wildcards)
}
} }
} }
@ -310,3 +354,169 @@ func prefixCompare(a, b netip.Prefix) int {
} }
return a.Addr().Compare(b.Addr()) return a.Addr().Compare(b.Addr())
} }
func prefixes(in ...string) []netip.Prefix {
toRet := make([]netip.Prefix, len(in))
for i, s := range in {
toRet[i] = netip.MustParsePrefix(s)
}
return toRet
}
func TestUpdateRouteRouteRemoval(t *testing.T) {
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
rc := &appctest.RouteCollector{}
assertRoutes := func(prefix string, routes, removedRoutes []netip.Prefix) {
if !slices.Equal(routes, rc.Routes()) {
t.Fatalf("%s: (shouldStore=%t) routes want %v, got %v", prefix, shouldStore, routes, rc.Routes())
}
if !slices.Equal(removedRoutes, rc.RemovedRoutes()) {
t.Fatalf("%s: (shouldStore=%t) removedRoutes want %v, got %v", prefix, shouldStore, removedRoutes, rc.RemovedRoutes())
}
}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
// nothing has yet been advertised
assertRoutes("appc init", []netip.Prefix{}, []netip.Prefix{})
a.UpdateDomainsAndRoutes([]string{}, prefixes("1.2.3.1/32", "1.2.3.2/32"))
a.Wait(ctx)
// the routes passed to UpdateDomainsAndRoutes have been advertised
assertRoutes("simple update", prefixes("1.2.3.1/32", "1.2.3.2/32"), []netip.Prefix{})
// one route the same, one different
a.UpdateDomainsAndRoutes([]string{}, prefixes("1.2.3.1/32", "1.2.3.3/32"))
a.Wait(ctx)
// old behavior: routes are not removed, resulting routes are both old and new
// (we have dupe 1.2.3.1 routes because the test RouteAdvertiser doesn't have the deduplication
// the real one does)
wantRoutes := prefixes("1.2.3.1/32", "1.2.3.2/32", "1.2.3.1/32", "1.2.3.3/32")
wantRemovedRoutes := []netip.Prefix{}
if shouldStore {
// new behavior: routes are removed, resulting routes are new only
wantRoutes = prefixes("1.2.3.1/32", "1.2.3.1/32", "1.2.3.3/32")
wantRemovedRoutes = prefixes("1.2.3.2/32")
}
assertRoutes("removal", wantRoutes, wantRemovedRoutes)
}
}
func TestUpdateDomainRouteRemoval(t *testing.T) {
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
rc := &appctest.RouteCollector{}
assertRoutes := func(prefix string, routes, removedRoutes []netip.Prefix) {
if !slices.Equal(routes, rc.Routes()) {
t.Fatalf("%s: (shouldStore=%t) routes want %v, got %v", prefix, shouldStore, routes, rc.Routes())
}
if !slices.Equal(removedRoutes, rc.RemovedRoutes()) {
t.Fatalf("%s: (shouldStore=%t) removedRoutes want %v, got %v", prefix, shouldStore, removedRoutes, rc.RemovedRoutes())
}
}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
assertRoutes("appc init", []netip.Prefix{}, []netip.Prefix{})
a.UpdateDomainsAndRoutes([]string{"a.example.com", "b.example.com"}, []netip.Prefix{})
a.Wait(ctx)
// adding domains doesn't immediately cause any routes to be advertised
assertRoutes("update domains", []netip.Prefix{}, []netip.Prefix{})
a.ObserveDNSResponse(dnsResponse("a.example.com.", "1.2.3.1"))
a.ObserveDNSResponse(dnsResponse("a.example.com.", "1.2.3.2"))
a.ObserveDNSResponse(dnsResponse("b.example.com.", "1.2.3.3"))
a.ObserveDNSResponse(dnsResponse("b.example.com.", "1.2.3.4"))
a.Wait(ctx)
// observing dns responses causes routes to be advertised
assertRoutes("observed dns", prefixes("1.2.3.1/32", "1.2.3.2/32", "1.2.3.3/32", "1.2.3.4/32"), []netip.Prefix{})
a.UpdateDomainsAndRoutes([]string{"a.example.com"}, []netip.Prefix{})
a.Wait(ctx)
// old behavior, routes are not removed
wantRoutes := prefixes("1.2.3.1/32", "1.2.3.2/32", "1.2.3.3/32", "1.2.3.4/32")
wantRemovedRoutes := []netip.Prefix{}
if shouldStore {
// new behavior, routes are removed for b.example.com
wantRoutes = prefixes("1.2.3.1/32", "1.2.3.2/32")
wantRemovedRoutes = prefixes("1.2.3.3/32", "1.2.3.4/32")
}
assertRoutes("removal", wantRoutes, wantRemovedRoutes)
}
}
func TestUpdateWildcardRouteRemoval(t *testing.T) {
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
rc := &appctest.RouteCollector{}
assertRoutes := func(prefix string, routes, removedRoutes []netip.Prefix) {
if !slices.Equal(routes, rc.Routes()) {
t.Fatalf("%s: (shouldStore=%t) routes want %v, got %v", prefix, shouldStore, routes, rc.Routes())
}
if !slices.Equal(removedRoutes, rc.RemovedRoutes()) {
t.Fatalf("%s: (shouldStore=%t) removedRoutes want %v, got %v", prefix, shouldStore, removedRoutes, rc.RemovedRoutes())
}
}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
assertRoutes("appc init", []netip.Prefix{}, []netip.Prefix{})
a.UpdateDomainsAndRoutes([]string{"a.example.com", "*.b.example.com"}, []netip.Prefix{})
a.Wait(ctx)
// adding domains doesn't immediately cause any routes to be advertised
assertRoutes("update domains", []netip.Prefix{}, []netip.Prefix{})
a.ObserveDNSResponse(dnsResponse("a.example.com.", "1.2.3.1"))
a.ObserveDNSResponse(dnsResponse("a.example.com.", "1.2.3.2"))
a.ObserveDNSResponse(dnsResponse("1.b.example.com.", "1.2.3.3"))
a.ObserveDNSResponse(dnsResponse("2.b.example.com.", "1.2.3.4"))
a.Wait(ctx)
// observing dns responses causes routes to be advertised
assertRoutes("observed dns", prefixes("1.2.3.1/32", "1.2.3.2/32", "1.2.3.3/32", "1.2.3.4/32"), []netip.Prefix{})
a.UpdateDomainsAndRoutes([]string{"a.example.com"}, []netip.Prefix{})
a.Wait(ctx)
// old behavior, routes are not removed
wantRoutes := prefixes("1.2.3.1/32", "1.2.3.2/32", "1.2.3.3/32", "1.2.3.4/32")
wantRemovedRoutes := []netip.Prefix{}
if shouldStore {
// new behavior, routes are removed for *.b.example.com
wantRoutes = prefixes("1.2.3.1/32", "1.2.3.2/32")
wantRemovedRoutes = prefixes("1.2.3.3/32", "1.2.3.4/32")
}
assertRoutes("removal", wantRoutes, wantRemovedRoutes)
}
}
func TestRoutesWithout(t *testing.T) {
assert := func(msg string, got, want []netip.Prefix) {
if !slices.Equal(want, got) {
t.Errorf("%s: want %v, got %v", msg, want, got)
}
}
assert("empty routes", routesWithout([]netip.Prefix{}, []netip.Prefix{}), []netip.Prefix{})
assert("a empty", routesWithout([]netip.Prefix{}, prefixes("1.1.1.1/32", "1.1.1.2/32")), []netip.Prefix{})
assert("b empty", routesWithout(prefixes("1.1.1.1/32", "1.1.1.2/32"), []netip.Prefix{}), prefixes("1.1.1.1/32", "1.1.1.2/32"))
assert("no overlap", routesWithout(prefixes("1.1.1.1/32", "1.1.1.2/32"), prefixes("1.1.1.3/32", "1.1.1.4/32")), prefixes("1.1.1.1/32", "1.1.1.2/32"))
assert("a has fewer", routesWithout(prefixes("1.1.1.1/32", "1.1.1.2/32"), prefixes("1.1.1.1/32", "1.1.1.2/32", "1.1.1.3/32", "1.1.1.4/32")), []netip.Prefix{})
assert("a has more", routesWithout(prefixes("1.1.1.1/32", "1.1.1.2/32", "1.1.1.3/32", "1.1.1.4/32"), prefixes("1.1.1.1/32", "1.1.1.3/32")), prefixes("1.1.1.2/32", "1.1.1.4/32"))
}

View File

@ -89,7 +89,7 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
tailscale.com/disco from tailscale.com/derp tailscale.com/disco from tailscale.com/derp
tailscale.com/drive from tailscale.com/client/tailscale+ tailscale.com/drive from tailscale.com/client/tailscale+
tailscale.com/envknob from tailscale.com/client/tailscale+ tailscale.com/envknob from tailscale.com/client/tailscale+
tailscale.com/health from tailscale.com/net/tlsdial tailscale.com/health from tailscale.com/net/tlsdial+
tailscale.com/hostinfo from tailscale.com/net/interfaces+ tailscale.com/hostinfo from tailscale.com/net/interfaces+
tailscale.com/ipn from tailscale.com/client/tailscale tailscale.com/ipn from tailscale.com/client/tailscale
tailscale.com/ipn/ipnstate from tailscale.com/client/tailscale+ tailscale.com/ipn/ipnstate from tailscale.com/client/tailscale+
@ -138,6 +138,7 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
tailscale.com/types/structs from tailscale.com/ipn+ tailscale.com/types/structs from tailscale.com/ipn+
tailscale.com/types/tkatype from tailscale.com/client/tailscale+ tailscale.com/types/tkatype from tailscale.com/client/tailscale+
tailscale.com/types/views from tailscale.com/ipn+ tailscale.com/types/views from tailscale.com/ipn+
tailscale.com/util/cibuild from tailscale.com/health
tailscale.com/util/clientmetric from tailscale.com/net/netmon+ tailscale.com/util/clientmetric from tailscale.com/net/netmon+
tailscale.com/util/cloudenv from tailscale.com/hostinfo+ tailscale.com/util/cloudenv from tailscale.com/hostinfo+
W tailscale.com/util/cmpver from tailscale.com/net/tshttpproxy W tailscale.com/util/cmpver from tailscale.com/net/tshttpproxy

View File

@ -37,9 +37,16 @@ spec:
spec: spec:
description: Specification of the desired state of the ProxyClass resource. https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status description: Specification of the desired state of the ProxyClass resource. https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
type: object type: object
required:
- statefulSet
properties: properties:
metrics:
description: Configuration for proxy metrics. Metrics are currently not supported for egress proxies and for Ingress proxies that have been configured with tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation.
type: object
required:
- enable
properties:
enable:
description: Setting enable to true will make the proxy serve Tailscale metrics at <pod-ip>:9001/debug/metrics. Defaults to false.
type: boolean
statefulSet: statefulSet:
description: Configuration parameters for the proxy's StatefulSet. Tailscale Kubernetes operator deploys a StatefulSet for each of the user configured proxies (Tailscale Ingress, Tailscale Service, Connector). description: Configuration parameters for the proxy's StatefulSet. Tailscale Kubernetes operator deploys a StatefulSet for each of the user configured proxies (Tailscale Ingress, Tailscale Service, Connector).
type: object type: object

View File

@ -3,13 +3,15 @@ kind: ProxyClass
metadata: metadata:
name: prod name: prod
spec: spec:
metrics:
enable: true
statefulSet: statefulSet:
annotations: annotations:
platform-component: infra platform-component: infra
pod: pod:
labels: labels:
team: eng team: eng
nodeSelector: nodeSelector:
beta.kubernetes.io/os: "linux" kubernetes.io/os: "linux"
imagePullSecrets: imagePullSecrets:
- name: "foo" - name: "foo"

View File

@ -193,6 +193,15 @@ spec:
spec: spec:
description: Specification of the desired state of the ProxyClass resource. https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status description: Specification of the desired state of the ProxyClass resource. https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
properties: properties:
metrics:
description: Configuration for proxy metrics. Metrics are currently not supported for egress proxies and for Ingress proxies that have been configured with tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation.
properties:
enable:
description: Setting enable to true will make the proxy serve Tailscale metrics at <pod-ip>:9001/debug/metrics. Defaults to false.
type: boolean
required:
- enable
type: object
statefulSet: statefulSet:
description: Configuration parameters for the proxy's StatefulSet. Tailscale Kubernetes operator deploys a StatefulSet for each of the user configured proxies (Tailscale Ingress, Tailscale Service, Connector). description: Configuration parameters for the proxy's StatefulSet. Tailscale Kubernetes operator deploys a StatefulSet for each of the user configured proxies (Tailscale Ingress, Tailscale Service, Connector).
properties: properties:
@ -1157,8 +1166,6 @@ spec:
type: array type: array
type: object type: object
type: object type: object
required:
- statefulSet
type: object type: object
status: status:
description: Status of the ProxyClass. This is set and managed automatically. https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status description: Status of the ProxyClass. This is set and managed automatically. https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

View File

@ -20,3 +20,7 @@ spec:
env: env:
- name: TS_USERSPACE - name: TS_USERSPACE
value: "true" value: "true"
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP

View File

@ -582,7 +582,7 @@ func (a *tailscaleSTSReconciler) reconcileSTS(ctx context.Context, logger *zap.S
logger.Debugf("reconciling statefulset %s/%s", ss.GetNamespace(), ss.GetName()) logger.Debugf("reconciling statefulset %s/%s", ss.GetNamespace(), ss.GetName())
if sts.ProxyClass != "" { if sts.ProxyClass != "" {
logger.Debugf("configuring proxy resources with ProxyClass %s", sts.ProxyClass) logger.Debugf("configuring proxy resources with ProxyClass %s", sts.ProxyClass)
ss = applyProxyClassToStatefulSet(proxyClass, ss) ss = applyProxyClassToStatefulSet(proxyClass, ss, sts, logger)
} }
updateSS := func(s *appsv1.StatefulSet) { updateSS := func(s *appsv1.StatefulSet) {
s.Spec = ss.Spec s.Spec = ss.Spec
@ -613,8 +613,28 @@ func mergeStatefulSetLabelsOrAnnots(current, custom map[string]string, managed [
return custom return custom
} }
func applyProxyClassToStatefulSet(pc *tsapi.ProxyClass, ss *appsv1.StatefulSet) *appsv1.StatefulSet { func applyProxyClassToStatefulSet(pc *tsapi.ProxyClass, ss *appsv1.StatefulSet, stsCfg *tailscaleSTSConfig, logger *zap.SugaredLogger) *appsv1.StatefulSet {
if pc == nil || ss == nil || pc.Spec.StatefulSet == nil { if pc == nil || ss == nil {
return ss
}
if pc.Spec.Metrics != nil && pc.Spec.Metrics.Enable {
if stsCfg.TailnetTargetFQDN == "" && stsCfg.TailnetTargetIP == "" && !stsCfg.ForwardClusterTrafficViaL7IngressProxy {
enableMetrics(ss, pc)
} else if stsCfg.ForwardClusterTrafficViaL7IngressProxy {
// TODO (irbekrm): fix this
// For Ingress proxies that have been configured with
// tailscale.com/experimental-forward-cluster-traffic-via-ingress
// annotation, all cluster traffic is forwarded to the
// Ingress backend(s).
logger.Info("ProxyClass specifies that metrics should be enabled, but this is currently not supported for Ingress proxies that accept cluster traffic.")
} else {
// TODO (irbekrm): fix this
// For egress proxies, currently all cluster traffic is forwarded to the tailnet target.
logger.Info("ProxyClass specifies that metrics should be enabled, but this is currently not supported for Ingress proxies that accept cluster traffic.")
}
}
if pc.Spec.StatefulSet == nil {
return ss return ss
} }
@ -681,6 +701,21 @@ func applyProxyClassToStatefulSet(pc *tsapi.ProxyClass, ss *appsv1.StatefulSet)
return ss return ss
} }
func enableMetrics(ss *appsv1.StatefulSet, pc *tsapi.ProxyClass) {
for i, c := range ss.Spec.Template.Spec.Containers {
if c.Name == "tailscale" {
// Serve metrics on on <pod-ip>:9001/debug/metrics. If
// we didn't specify Pod IP here, the proxy would, in
// some cases, also listen to its Tailscale IP- we don't
// want folks to start relying on this side-effect as a
// feature.
ss.Spec.Template.Spec.Containers[i].Env = append(ss.Spec.Template.Spec.Containers[i].Env, corev1.EnvVar{Name: "TS_TAILSCALED_EXTRA_ARGS", Value: "--debug=$(POD_IP):9001"})
ss.Spec.Template.Spec.Containers[i].Ports = append(ss.Spec.Template.Spec.Containers[i].Ports, corev1.ContainerPort{Name: "metrics", Protocol: "TCP", HostPort: 9001, ContainerPort: 9001})
break
}
}
}
// tailscaledConfig takes a proxy config, a newly generated auth key if // tailscaledConfig takes a proxy config, a newly generated auth key if
// generated and a Secret with the previous proxy state and auth key and // generated and a Secret with the previous proxy state and auth key and
// produces returns tailscaled configuration and a hash of that configuration. // produces returns tailscaled configuration and a hash of that configuration.

View File

@ -14,6 +14,7 @@ import (
"testing" "testing"
"github.com/google/go-cmp/cmp" "github.com/google/go-cmp/cmp"
"go.uber.org/zap"
appsv1 "k8s.io/api/apps/v1" appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1" corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource" "k8s.io/apimachinery/pkg/api/resource"
@ -51,6 +52,10 @@ func Test_statefulSetNameBase(t *testing.T) {
} }
func Test_applyProxyClassToStatefulSet(t *testing.T) { func Test_applyProxyClassToStatefulSet(t *testing.T) {
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
// Setup // Setup
proxyClassAllOpts := &tsapi.ProxyClass{ proxyClassAllOpts := &tsapi.ProxyClass{
Spec: tsapi.ProxyClassSpec{ Spec: tsapi.ProxyClassSpec{
@ -105,6 +110,12 @@ func Test_applyProxyClassToStatefulSet(t *testing.T) {
}, },
}, },
} }
proxyClassMetrics := &tsapi.ProxyClass{
Spec: tsapi.ProxyClassSpec{
Metrics: &tsapi.Metrics{Enable: true},
},
}
var userspaceProxySS, nonUserspaceProxySS appsv1.StatefulSet var userspaceProxySS, nonUserspaceProxySS appsv1.StatefulSet
if err := yaml.Unmarshal(userspaceProxyYaml, &userspaceProxySS); err != nil { if err := yaml.Unmarshal(userspaceProxyYaml, &userspaceProxySS); err != nil {
t.Fatalf("unmarshaling userspace proxy template: %v", err) t.Fatalf("unmarshaling userspace proxy template: %v", err)
@ -149,7 +160,7 @@ func Test_applyProxyClassToStatefulSet(t *testing.T) {
wantSS.Spec.Template.Spec.InitContainers[0].Env = append(wantSS.Spec.Template.Spec.InitContainers[0].Env, []corev1.EnvVar{{Name: "foo", Value: "bar"}, {Name: "TS_USERSPACE", Value: "true"}, {Name: "bar"}}...) wantSS.Spec.Template.Spec.InitContainers[0].Env = append(wantSS.Spec.Template.Spec.InitContainers[0].Env, []corev1.EnvVar{{Name: "foo", Value: "bar"}, {Name: "TS_USERSPACE", Value: "true"}, {Name: "bar"}}...)
wantSS.Spec.Template.Spec.Containers[0].Env = append(wantSS.Spec.Template.Spec.Containers[0].Env, []corev1.EnvVar{{Name: "foo", Value: "bar"}, {Name: "TS_USERSPACE", Value: "true"}, {Name: "bar"}}...) wantSS.Spec.Template.Spec.Containers[0].Env = append(wantSS.Spec.Template.Spec.Containers[0].Env, []corev1.EnvVar{{Name: "foo", Value: "bar"}, {Name: "TS_USERSPACE", Value: "true"}, {Name: "bar"}}...)
gotSS := applyProxyClassToStatefulSet(proxyClassAllOpts, nonUserspaceProxySS.DeepCopy()) gotSS := applyProxyClassToStatefulSet(proxyClassAllOpts, nonUserspaceProxySS.DeepCopy(), new(tailscaleSTSConfig), zl.Sugar())
if diff := cmp.Diff(gotSS, wantSS); diff != "" { if diff := cmp.Diff(gotSS, wantSS); diff != "" {
t.Fatalf("Unexpected result applying ProxyClass with all fields set to a StatefulSet for non-userspace proxy (-got +want):\n%s", diff) t.Fatalf("Unexpected result applying ProxyClass with all fields set to a StatefulSet for non-userspace proxy (-got +want):\n%s", diff)
} }
@ -162,7 +173,7 @@ func Test_applyProxyClassToStatefulSet(t *testing.T) {
wantSS.ObjectMeta.Annotations = mergeMapKeys(wantSS.ObjectMeta.Annotations, proxyClassJustLabels.Spec.StatefulSet.Annotations) wantSS.ObjectMeta.Annotations = mergeMapKeys(wantSS.ObjectMeta.Annotations, proxyClassJustLabels.Spec.StatefulSet.Annotations)
wantSS.Spec.Template.Labels = proxyClassJustLabels.Spec.StatefulSet.Pod.Labels wantSS.Spec.Template.Labels = proxyClassJustLabels.Spec.StatefulSet.Pod.Labels
wantSS.Spec.Template.Annotations = proxyClassJustLabels.Spec.StatefulSet.Pod.Annotations wantSS.Spec.Template.Annotations = proxyClassJustLabels.Spec.StatefulSet.Pod.Annotations
gotSS = applyProxyClassToStatefulSet(proxyClassJustLabels, nonUserspaceProxySS.DeepCopy()) gotSS = applyProxyClassToStatefulSet(proxyClassJustLabels, nonUserspaceProxySS.DeepCopy(), new(tailscaleSTSConfig), zl.Sugar())
if diff := cmp.Diff(gotSS, wantSS); diff != "" { if diff := cmp.Diff(gotSS, wantSS); diff != "" {
t.Fatalf("Unexpected result applying ProxyClass with custom labels and annotations to a StatefulSet for non-userspace proxy (-got +want):\n%s", diff) t.Fatalf("Unexpected result applying ProxyClass with custom labels and annotations to a StatefulSet for non-userspace proxy (-got +want):\n%s", diff)
} }
@ -183,7 +194,7 @@ func Test_applyProxyClassToStatefulSet(t *testing.T) {
wantSS.Spec.Template.Spec.Containers[0].SecurityContext = proxyClassAllOpts.Spec.StatefulSet.Pod.TailscaleContainer.SecurityContext wantSS.Spec.Template.Spec.Containers[0].SecurityContext = proxyClassAllOpts.Spec.StatefulSet.Pod.TailscaleContainer.SecurityContext
wantSS.Spec.Template.Spec.Containers[0].Resources = proxyClassAllOpts.Spec.StatefulSet.Pod.TailscaleContainer.Resources wantSS.Spec.Template.Spec.Containers[0].Resources = proxyClassAllOpts.Spec.StatefulSet.Pod.TailscaleContainer.Resources
wantSS.Spec.Template.Spec.Containers[0].Env = append(wantSS.Spec.Template.Spec.Containers[0].Env, []corev1.EnvVar{{Name: "foo", Value: "bar"}, {Name: "TS_USERSPACE", Value: "true"}, {Name: "bar"}}...) wantSS.Spec.Template.Spec.Containers[0].Env = append(wantSS.Spec.Template.Spec.Containers[0].Env, []corev1.EnvVar{{Name: "foo", Value: "bar"}, {Name: "TS_USERSPACE", Value: "true"}, {Name: "bar"}}...)
gotSS = applyProxyClassToStatefulSet(proxyClassAllOpts, userspaceProxySS.DeepCopy()) gotSS = applyProxyClassToStatefulSet(proxyClassAllOpts, userspaceProxySS.DeepCopy(), new(tailscaleSTSConfig), zl.Sugar())
if diff := cmp.Diff(gotSS, wantSS); diff != "" { if diff := cmp.Diff(gotSS, wantSS); diff != "" {
t.Fatalf("Unexpected result applying ProxyClass with custom labels and annotations to a StatefulSet for a userspace proxy (-got +want):\n%s", diff) t.Fatalf("Unexpected result applying ProxyClass with custom labels and annotations to a StatefulSet for a userspace proxy (-got +want):\n%s", diff)
} }
@ -195,10 +206,19 @@ func Test_applyProxyClassToStatefulSet(t *testing.T) {
wantSS.ObjectMeta.Annotations = mergeMapKeys(wantSS.ObjectMeta.Annotations, proxyClassJustLabels.Spec.StatefulSet.Annotations) wantSS.ObjectMeta.Annotations = mergeMapKeys(wantSS.ObjectMeta.Annotations, proxyClassJustLabels.Spec.StatefulSet.Annotations)
wantSS.Spec.Template.Labels = proxyClassJustLabels.Spec.StatefulSet.Pod.Labels wantSS.Spec.Template.Labels = proxyClassJustLabels.Spec.StatefulSet.Pod.Labels
wantSS.Spec.Template.Annotations = proxyClassJustLabels.Spec.StatefulSet.Pod.Annotations wantSS.Spec.Template.Annotations = proxyClassJustLabels.Spec.StatefulSet.Pod.Annotations
gotSS = applyProxyClassToStatefulSet(proxyClassJustLabels, userspaceProxySS.DeepCopy()) gotSS = applyProxyClassToStatefulSet(proxyClassJustLabels, userspaceProxySS.DeepCopy(), new(tailscaleSTSConfig), zl.Sugar())
if diff := cmp.Diff(gotSS, wantSS); diff != "" { if diff := cmp.Diff(gotSS, wantSS); diff != "" {
t.Fatalf("Unexpected result applying ProxyClass with custom labels and annotations to a StatefulSet for a userspace proxy (-got +want):\n%s", diff) t.Fatalf("Unexpected result applying ProxyClass with custom labels and annotations to a StatefulSet for a userspace proxy (-got +want):\n%s", diff)
} }
// 5. Test that a ProxyClass with metrics enabled gets correctly applied to a StatefulSet.
wantSS = nonUserspaceProxySS.DeepCopy()
wantSS.Spec.Template.Spec.Containers[0].Env = append(wantSS.Spec.Template.Spec.Containers[0].Env, corev1.EnvVar{Name: "TS_TAILSCALED_EXTRA_ARGS", Value: "--debug=$(POD_IP):9001"})
wantSS.Spec.Template.Spec.Containers[0].Ports = []corev1.ContainerPort{{Name: "metrics", Protocol: "TCP", ContainerPort: 9001, HostPort: 9001}}
gotSS = applyProxyClassToStatefulSet(proxyClassMetrics, nonUserspaceProxySS.DeepCopy(), new(tailscaleSTSConfig), zl.Sugar())
if diff := cmp.Diff(gotSS, wantSS); diff != "" {
t.Fatalf("Unexpected result applying ProxyClass with metrics enabled to a StatefulSet (-got +want):\n%s", diff)
}
} }
func mergeMapKeys(a, b map[string]string) map[string]string { func mergeMapKeys(a, b map[string]string) map[string]string {

View File

@ -15,6 +15,7 @@ import (
"time" "time"
"github.com/google/go-cmp/cmp" "github.com/google/go-cmp/cmp"
"go.uber.org/zap"
appsv1 "k8s.io/api/apps/v1" appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1" corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors" apierrors "k8s.io/apimachinery/pkg/api/errors"
@ -54,6 +55,10 @@ type configOpts struct {
func expectedSTS(t *testing.T, cl client.Client, opts configOpts) *appsv1.StatefulSet { func expectedSTS(t *testing.T, cl client.Client, opts configOpts) *appsv1.StatefulSet {
t.Helper() t.Helper()
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
tsContainer := corev1.Container{ tsContainer := corev1.Container{
Name: "tailscale", Name: "tailscale",
Image: "tailscale/tailscale", Image: "tailscale/tailscale",
@ -205,18 +210,23 @@ func expectedSTS(t *testing.T, cl client.Client, opts configOpts) *appsv1.Statef
if err := cl.Get(context.Background(), types.NamespacedName{Name: opts.proxyClass}, proxyClass); err != nil { if err := cl.Get(context.Background(), types.NamespacedName{Name: opts.proxyClass}, proxyClass); err != nil {
t.Fatalf("error getting ProxyClass: %v", err) t.Fatalf("error getting ProxyClass: %v", err)
} }
return applyProxyClassToStatefulSet(proxyClass, ss) return applyProxyClassToStatefulSet(proxyClass, ss, new(tailscaleSTSConfig), zl.Sugar())
} }
return ss return ss
} }
func expectedSTSUserspace(t *testing.T, cl client.Client, opts configOpts) *appsv1.StatefulSet { func expectedSTSUserspace(t *testing.T, cl client.Client, opts configOpts) *appsv1.StatefulSet {
t.Helper() t.Helper()
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
tsContainer := corev1.Container{ tsContainer := corev1.Container{
Name: "tailscale", Name: "tailscale",
Image: "tailscale/tailscale", Image: "tailscale/tailscale",
Env: []corev1.EnvVar{ Env: []corev1.EnvVar{
{Name: "TS_USERSPACE", Value: "true"}, {Name: "TS_USERSPACE", Value: "true"},
{Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{APIVersion: "", FieldPath: "status.podIP"}, ResourceFieldRef: nil, ConfigMapKeyRef: nil, SecretKeyRef: nil}},
{Name: "TS_KUBE_SECRET", Value: opts.secretName}, {Name: "TS_KUBE_SECRET", Value: opts.secretName},
{Name: "EXPERIMENTAL_TS_CONFIGFILE_PATH", Value: "/etc/tsconfig/tailscaled"}, {Name: "EXPERIMENTAL_TS_CONFIGFILE_PATH", Value: "/etc/tsconfig/tailscaled"},
{Name: "TS_SERVE_CONFIG", Value: "/etc/tailscaled/serve-config"}, {Name: "TS_SERVE_CONFIG", Value: "/etc/tailscaled/serve-config"},
@ -301,7 +311,7 @@ func expectedSTSUserspace(t *testing.T, cl client.Client, opts configOpts) *apps
if err := cl.Get(context.Background(), types.NamespacedName{Name: opts.proxyClass}, proxyClass); err != nil { if err := cl.Get(context.Background(), types.NamespacedName{Name: opts.proxyClass}, proxyClass); err != nil {
t.Fatalf("error getting ProxyClass: %v", err) t.Fatalf("error getting ProxyClass: %v", err)
} }
return applyProxyClassToStatefulSet(proxyClass, ss) return applyProxyClassToStatefulSet(proxyClass, ss, new(tailscaleSTSConfig), zl.Sugar())
} }
return ss return ss
} }

View File

@@ -88,7 +88,7 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
 	tailscale.com/disco from tailscale.com/derp
 	tailscale.com/drive from tailscale.com/client/tailscale+
 	tailscale.com/envknob from tailscale.com/client/tailscale+
-	tailscale.com/health from tailscale.com/net/tlsdial
+	tailscale.com/health from tailscale.com/net/tlsdial+
 	tailscale.com/health/healthmsg from tailscale.com/cmd/tailscale/cli
 	tailscale.com/hostinfo from tailscale.com/client/web+
 	tailscale.com/ipn from tailscale.com/client/tailscale+
@@ -142,6 +142,7 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
 	tailscale.com/types/structs from tailscale.com/ipn+
 	tailscale.com/types/tkatype from tailscale.com/types/key+
 	tailscale.com/types/views from tailscale.com/tailcfg+
+	tailscale.com/util/cibuild from tailscale.com/health
 	tailscale.com/util/clientmetric from tailscale.com/net/netcheck+
 	tailscale.com/util/cloudenv from tailscale.com/net/dnscache+
 	tailscale.com/util/cmpver from tailscale.com/net/tshttpproxy+


@@ -21,6 +21,7 @@ import (
 	"time"
 
 	"tailscale.com/derp/derphttp"
+	"tailscale.com/health"
 	"tailscale.com/ipn"
 	"tailscale.com/net/interfaces"
 	"tailscale.com/net/netmon"
@@ -157,6 +158,7 @@ func getURL(ctx context.Context, urlStr string) error {
 }
 
 func checkDerp(ctx context.Context, derpRegion string) (err error) {
+	ht := new(health.Tracker)
 	req, err := http.NewRequestWithContext(ctx, "GET", ipn.DefaultControlURL+"/derpmap/default", nil)
 	if err != nil {
 		return fmt.Errorf("create derp map request: %w", err)
@@ -195,6 +197,8 @@ func checkDerp(ctx context.Context, derpRegion string) (err error) {
 	c1 := derphttp.NewRegionClient(priv1, log.Printf, nil, getRegion)
 	c2 := derphttp.NewRegionClient(priv2, log.Printf, nil, getRegion)
+	c1.HealthTracker = ht
+	c2.HealthTracker = ht
 
 	defer func() {
 		if err != nil {
 			c1.Close()


@@ -358,6 +358,7 @@ tailscale.com/cmd/tailscaled dependencies: (generated by github.com/tailscale/de
 	tailscale.com/types/structs from tailscale.com/control/controlclient+
 	tailscale.com/types/tkatype from tailscale.com/tka+
 	tailscale.com/types/views from tailscale.com/ipn/ipnlocal+
+	tailscale.com/util/cibuild from tailscale.com/health
 	tailscale.com/util/clientmetric from tailscale.com/control/controlclient+
 	tailscale.com/util/cloudenv from tailscale.com/net/dns/resolver+
 	tailscale.com/util/cmpver from tailscale.com/net/dns+


@@ -358,7 +358,7 @@ func run() (err error) {
 		sys.Set(netMon)
 	}
 
-	pol := logpolicy.New(logtail.CollectionNode, netMon, nil /* use log.Printf */)
+	pol := logpolicy.New(logtail.CollectionNode, netMon, sys.HealthTracker(), nil /* use log.Printf */)
 	pol.SetVerbosityLevel(args.verbose)
 	logPol = pol
 	defer func() {
@@ -651,6 +651,7 @@ func tryEngine(logf logger.Logf, sys *tsd.System, name string) (onlyNetstack boo
 	conf := wgengine.Config{
 		ListenPort: args.port,
 		NetMon: sys.NetMon.Get(),
+		HealthTracker: sys.HealthTracker(),
 		Dialer: sys.Dialer.Get(),
 		SetSubsystem: sys.Set,
 		ControlKnobs: sys.ControlKnobs(),
@@ -676,7 +677,7 @@ func tryEngine(logf logger.Logf, sys *tsd.System, name string) (onlyNetstack boo
 		// configuration being unavailable (from the noop
 		// manager). More in Issue 4017.
 		// TODO(bradfitz): add a Synology-specific DNS manager.
-		conf.DNS, err = dns.NewOSConfigurator(logf, "") // empty interface name
+		conf.DNS, err = dns.NewOSConfigurator(logf, sys.HealthTracker(), "") // empty interface name
 		if err != nil {
 			return false, fmt.Errorf("dns.NewOSConfigurator: %w", err)
 		}
@@ -698,13 +699,13 @@ func tryEngine(logf logger.Logf, sys *tsd.System, name string) (onlyNetstack boo
 		return false, err
 	}
-	r, err := router.New(logf, dev, sys.NetMon.Get())
+	r, err := router.New(logf, dev, sys.NetMon.Get(), sys.HealthTracker())
 	if err != nil {
 		dev.Close()
 		return false, fmt.Errorf("creating router: %w", err)
 	}
-	d, err := dns.NewOSConfigurator(logf, devName)
+	d, err := dns.NewOSConfigurator(logf, sys.HealthTracker(), devName)
 	if err != nil {
 		dev.Close()
 		r.Close()


@@ -104,9 +104,10 @@ func newIPN(jsConfig js.Value) map[string]any {
 	sys.Set(store)
 	dialer := &tsdial.Dialer{Logf: logf}
 	eng, err := wgengine.NewUserspaceEngine(logf, wgengine.Config{
 		Dialer: dialer,
 		SetSubsystem: sys.Set,
 		ControlKnobs: sys.ControlKnobs(),
+		HealthTracker: sys.HealthTracker(),
 	})
 	if err != nil {
 		log.Fatal(err)


@@ -12,7 +12,6 @@ import (
 	"sync/atomic"
 	"time"
 
-	"tailscale.com/health"
 	"tailscale.com/logtail/backoff"
 	"tailscale.com/net/sockstats"
 	"tailscale.com/tailcfg"
@@ -195,7 +194,7 @@ func NewNoStart(opts Options) (_ *Auto, err error) {
 	c.mapCtx, c.mapCancel = context.WithCancel(context.Background())
 	c.mapCtx = sockstats.WithSockStats(c.mapCtx, sockstats.LabelControlClientAuto, opts.Logf)
 
-	c.unregisterHealthWatch = health.RegisterWatcher(direct.ReportHealthChange)
+	c.unregisterHealthWatch = opts.HealthTracker.RegisterWatcher(direct.ReportHealthChange)
 	return c, nil
 }
 
@@ -316,7 +315,7 @@ func (c *Auto) authRoutine() {
 		}
 		if goal == nil {
-			health.SetAuthRoutineInError(nil)
+			c.direct.health.SetAuthRoutineInError(nil)
 			// Wait for user to Login or Logout.
 			<-ctx.Done()
 			c.logf("[v1] authRoutine: context done.")
@@ -343,7 +342,7 @@ func (c *Auto) authRoutine() {
 				f = "TryLogin"
 			}
 			if err != nil {
-				health.SetAuthRoutineInError(err)
+				c.direct.health.SetAuthRoutineInError(err)
 				report(err, f)
 				bo.BackOff(ctx, err)
 				continue
@@ -373,7 +372,7 @@ func (c *Auto) authRoutine() {
 		}
 
 		// success
-		health.SetAuthRoutineInError(nil)
+		c.direct.health.SetAuthRoutineInError(nil)
 		c.mu.Lock()
 		c.urlToVisit = ""
 		c.loggedIn = true
@@ -503,11 +502,11 @@ func (c *Auto) mapRoutine() {
 			c.logf("[v1] mapRoutine: context done.")
 			continue
 		}
-		health.SetOutOfPollNetMap()
+		c.direct.health.SetOutOfPollNetMap()
 
 		err := c.direct.PollNetMap(ctx, mrs)
 
-		health.SetOutOfPollNetMap()
+		c.direct.health.SetOutOfPollNetMap()
 		c.mu.Lock()
 		c.inMapPoll = false
 		if c.state == StateSynchronized {


@@ -69,6 +69,7 @@ type Direct struct {
 	clock tstime.Clock
 	logf logger.Logf
 	netMon *netmon.Monitor // or nil
+	health *health.Tracker
 	discoPubKey key.DiscoPublic
 	getMachinePrivKey func() (key.MachinePrivate, error)
 	debugFlags []string
@@ -119,10 +120,11 @@ type Options struct {
 	Hostinfo *tailcfg.Hostinfo // non-nil passes ownership, nil means to use default using os.Hostname, etc
 	DiscoPublicKey key.DiscoPublic
 	Logf logger.Logf
 	HTTPTestClient *http.Client // optional HTTP client to use (for tests only)
 	NoiseTestClient *http.Client // optional HTTP client to use for noise RPCs (tests only)
 	DebugFlags []string // debug settings to send to control
 	NetMon *netmon.Monitor // optional network monitor
+	HealthTracker *health.Tracker
 	PopBrowserURL func(url string) // optional func to open browser
 	OnClientVersion func(*tailcfg.ClientVersion) // optional func to inform GUI of client version status
 	OnControlTime func(time.Time) // optional func to notify callers of new time from control
@@ -248,7 +250,7 @@ func NewDirect(opts Options) (*Direct, error) {
 	tr := http.DefaultTransport.(*http.Transport).Clone()
 	tr.Proxy = tshttpproxy.ProxyFromEnvironment
 	tshttpproxy.SetTransportGetProxyConnectHeader(tr)
-	tr.TLSClientConfig = tlsdial.Config(serverURL.Hostname(), tr.TLSClientConfig)
+	tr.TLSClientConfig = tlsdial.Config(serverURL.Hostname(), opts.HealthTracker, tr.TLSClientConfig)
 	tr.DialContext = dnscache.Dialer(opts.Dialer.SystemDial, dnsCache)
 	tr.DialTLSContext = dnscache.TLSDialer(opts.Dialer.SystemDial, dnsCache, tr.TLSClientConfig)
 	tr.ForceAttemptHTTP2 = true
@@ -271,6 +273,7 @@ func NewDirect(opts Options) (*Direct, error) {
 		discoPubKey: opts.DiscoPublicKey,
 		debugFlags: opts.DebugFlags,
 		netMon: opts.NetMon,
+		health: opts.HealthTracker,
 		skipIPForwardingCheck: opts.SkipIPForwardingCheck,
 		pinger: opts.Pinger,
 		popBrowser: opts.PopBrowserURL,
@@ -894,10 +897,10 @@ func (c *Direct) sendMapRequest(ctx context.Context, isStreaming bool, nu Netmap
 		ipForwardingBroken(hi.RoutableIPs, c.netMon.InterfaceState()) {
 		extraDebugFlags = append(extraDebugFlags, "warn-ip-forwarding-off")
 	}
-	if health.RouterHealth() != nil {
+	if c.health.RouterHealth() != nil {
 		extraDebugFlags = append(extraDebugFlags, "warn-router-unhealthy")
 	}
-	extraDebugFlags = health.AppendWarnableDebugFlags(extraDebugFlags)
+	extraDebugFlags = c.health.AppendWarnableDebugFlags(extraDebugFlags)
 	if hostinfo.DisabledEtcAptSource() {
 		extraDebugFlags = append(extraDebugFlags, "warn-etc-apt-source-disabled")
 	}
@@ -970,7 +973,7 @@ func (c *Direct) sendMapRequest(ctx context.Context, isStreaming bool, nu Netmap
 	}
 	defer res.Body.Close()
 
-	health.NoteMapRequestHeard(request)
+	c.health.NoteMapRequestHeard(request)
 	watchdogTimer.Reset(watchdogTimeout)
 
 	if nu == nil {
@@ -1041,7 +1044,7 @@ func (c *Direct) sendMapRequest(ctx context.Context, isStreaming bool, nu Netmap
 		metricMapResponseMessages.Add(1)
 
 		if isStreaming {
-			health.GotStreamedMapResponse()
+			c.health.GotStreamedMapResponse()
 		}
 
 		if pr := resp.PingRequest; pr != nil && c.isUniquePingRequest(pr) {
@@ -1450,14 +1453,15 @@ func (c *Direct) getNoiseClient() (*NoiseClient, error) {
 	}
 	c.logf("[v1] creating new noise client")
 	nc, err := NewNoiseClient(NoiseOpts{
 		PrivKey: k,
 		ServerPubKey: serverNoiseKey,
 		ServerURL: c.serverURL,
 		Dialer: c.dialer,
 		DNSCache: c.dnsCache,
 		Logf: c.logf,
 		NetMon: c.netMon,
+		HealthTracker: c.health,
 		DialPlan: dp,
 	})
 	if err != nil {
 		return nil, err


@@ -19,6 +19,7 @@ import (
 	"golang.org/x/net/http2"
 	"tailscale.com/control/controlbase"
 	"tailscale.com/control/controlhttp"
+	"tailscale.com/health"
 	"tailscale.com/net/dnscache"
 	"tailscale.com/net/netmon"
 	"tailscale.com/net/tsdial"
@@ -174,6 +175,7 @@ type NoiseClient struct {
 	logf logger.Logf
 	netMon *netmon.Monitor
+	health *health.Tracker
 
 	// mu only protects the following variables.
 	mu sync.Mutex
@@ -204,6 +206,8 @@ type NoiseOpts struct {
 	// network interface state. This field can be nil; if so, the current
 	// state will be looked up dynamically.
 	NetMon *netmon.Monitor
+	// HealthTracker, if non-nil, is the health tracker to use.
+	HealthTracker *health.Tracker
 	// DialPlan, if set, is a function that should return an explicit plan
 	// on how to connect to the server.
 	DialPlan func() *tailcfg.ControlDialPlan
@@ -247,6 +251,7 @@ func NewNoiseClient(opts NoiseOpts) (*NoiseClient, error) {
 		dialPlan: opts.DialPlan,
 		logf: opts.Logf,
 		netMon: opts.NetMon,
+		health: opts.HealthTracker,
 	}
 
 	// Create the HTTP/2 Transport using a net/http.Transport
@@ -453,6 +458,7 @@ func (nc *NoiseClient) dial(ctx context.Context) (*noiseConn, error) {
 		DialPlan: dialPlan,
 		Logf: nc.logf,
 		NetMon: nc.netMon,
+		HealthTracker: nc.health,
 		Clock: tstime.StdClock{},
 	}).Dial(ctx)
 	if err != nil {


@@ -433,7 +433,7 @@ func (a *Dialer) tryURLUpgrade(ctx context.Context, u *url.URL, addr netip.Addr,
 	// Disable HTTP2, since h2 can't do protocol switching.
 	tr.TLSClientConfig.NextProtos = []string{}
 	tr.TLSNextProto = map[string]func(string, *tls.Conn) http.RoundTripper{}
-	tr.TLSClientConfig = tlsdial.Config(a.Hostname, tr.TLSClientConfig)
+	tr.TLSClientConfig = tlsdial.Config(a.Hostname, a.HealthTracker, tr.TLSClientConfig)
 	if !tr.TLSClientConfig.InsecureSkipVerify {
 		panic("unexpected") // should be set by tlsdial.Config
 	}


@@ -8,6 +8,7 @@ import (
 	"net/url"
 	"time"
 
+	"tailscale.com/health"
 	"tailscale.com/net/dnscache"
 	"tailscale.com/net/netmon"
 	"tailscale.com/tailcfg"
@@ -79,6 +80,9 @@ type Dialer struct {
 	NetMon *netmon.Monitor
 
+	// HealthTracker, if non-nil, is the health tracker to use.
+	HealthTracker *health.Tracker
+
 	// DialPlan, if set, contains instructions from the control server on
 	// how to connect to it. If present, we will try the methods in this
 	// plan before falling back to DNS.


@@ -72,6 +72,10 @@ type Knobs struct {
 	// ProbeUDPLifetime is whether the node should probe UDP path lifetime on
 	// the tail end of an active direct connection in magicsock.
 	ProbeUDPLifetime atomic.Bool
+
+	// AppCStoreRoutes is whether the node should store RouteInfo to StateStore
+	// if it's an app connector.
+	AppCStoreRoutes atomic.Bool
 }
 
 // UpdateFromNodeAttributes updates k (if non-nil) based on the provided self
@@ -96,6 +100,7 @@ func (k *Knobs) UpdateFromNodeAttributes(capMap tailcfg.NodeCapMap) {
 		forceNfTables = has(tailcfg.NodeAttrLinuxMustUseNfTables)
 		seamlessKeyRenewal = has(tailcfg.NodeAttrSeamlessKeyRenewal)
 		probeUDPLifetime = has(tailcfg.NodeAttrProbeUDPLifetime)
+		appCStoreRoutes = has(tailcfg.NodeAttrStoreAppCRoutes)
 	)
 
 	if has(tailcfg.NodeAttrOneCGNATEnable) {
@@ -118,6 +123,7 @@ func (k *Knobs) UpdateFromNodeAttributes(capMap tailcfg.NodeCapMap) {
 	k.LinuxForceNfTables.Store(forceNfTables)
 	k.SeamlessKeyRenewal.Store(seamlessKeyRenewal)
 	k.ProbeUDPLifetime.Store(probeUDPLifetime)
+	k.AppCStoreRoutes.Store(appCStoreRoutes)
 }
 
 // AsDebugJSON returns k as something that can be marshalled with json.Marshal
@@ -141,5 +147,6 @@ func (k *Knobs) AsDebugJSON() map[string]any {
 		"LinuxForceNfTables": k.LinuxForceNfTables.Load(),
 		"SeamlessKeyRenewal": k.SeamlessKeyRenewal.Load(),
 		"ProbeUDPLifetime": k.ProbeUDPLifetime.Load(),
+		"AppCStoreRoutes": k.AppCStoreRoutes.Load(),
 	}
 }
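As context for the knob added above, consumers typically gate the optional behavior on an atomic load of the knob, guarding against a nil *Knobs early in startup. A minimal hypothetical sketch in Go (the helper name and wiring are illustrative, not part of this change):

// storeRoutesEnabled reports whether discovered app connector routes should
// be persisted to the StateStore, per the AppCStoreRoutes control knob above.
// The knobs pointer may be nil before control has been reached, so guard it.
func storeRoutesEnabled(knobs *controlknobs.Knobs) bool {
	return knobs != nil && knobs.AppCStoreRoutes.Load()
}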


@@ -31,6 +31,7 @@ import (
 	"go4.org/mem"
 	"tailscale.com/derp"
 	"tailscale.com/envknob"
+	"tailscale.com/health"
 	"tailscale.com/net/dnscache"
 	"tailscale.com/net/netmon"
 	"tailscale.com/net/netns"
@@ -51,10 +52,11 @@ import (
 // Send/Recv will completely re-establish the connection (unless Close
 // has been called).
 type Client struct {
 	TLSConfig *tls.Config // optional; nil means default
+	HealthTracker *health.Tracker // optional; used if non-nil only
 	DNSCache *dnscache.Resolver // optional; nil means no caching
 	MeshKey string // optional; for trusted clients
 	IsProber bool // optional; for probers to optional declare themselves as such
 
 	// WatchConnectionChanges is whether the client wishes to subscribe to
 	// notifications about clients connecting & disconnecting.
@@ -115,6 +117,7 @@ func (c *Client) String() string {
 // NewRegionClient returns a new DERP-over-HTTP client. It connects lazily.
 // To trigger a connection, use Connect.
 // The netMon parameter is optional; if non-nil it's used to do faster interface lookups.
+// The healthTracker parameter is also optional.
 func NewRegionClient(privateKey key.NodePrivate, logf logger.Logf, netMon *netmon.Monitor, getRegion func() *tailcfg.DERPRegion) *Client {
 	ctx, cancel := context.WithCancel(context.Background())
 	c := &Client{
@@ -612,7 +615,7 @@ func (c *Client) dialRegion(ctx context.Context, reg *tailcfg.DERPRegion) (net.C
 }
 
 func (c *Client) tlsClient(nc net.Conn, node *tailcfg.DERPNode) *tls.Conn {
-	tlsConf := tlsdial.Config(c.tlsServerName(node), c.TLSConfig)
+	tlsConf := tlsdial.Config(c.tlsServerName(node), c.HealthTracker, c.TLSConfig)
 	if node != nil {
 		if node.InsecureForTests {
 			tlsConf.InsecureSkipVerify = true


@@ -9,6 +9,7 @@ import (
 	"errors"
 	"fmt"
 	"net/http"
+	"os"
 	"runtime"
 	"sort"
 	"sync"
@@ -17,20 +18,59 @@ import (
 	"tailscale.com/envknob"
 	"tailscale.com/tailcfg"
+	"tailscale.com/types/opt"
+	"tailscale.com/util/cibuild"
+	"tailscale.com/util/mak"
 	"tailscale.com/util/multierr"
 	"tailscale.com/util/set"
 )
 
 var (
-	// mu guards everything in this var block.
-	mu sync.Mutex
-
-	sysErr = map[Subsystem]error{} // error key => err (or nil for no error)
-	watchers = set.HandleSet[func(Subsystem, error)]{} // opt func to run if error state changes
-	warnables = set.Set[*Warnable]{}
-	timer *time.Timer
-
-	debugHandler = map[string]http.Handler{}
+	mu sync.Mutex
+	debugHandler map[string]http.Handler
+)
+
+// ReceiveFunc is one of the three magicsock Receive funcs (IPv4, IPv6, or
+// DERP).
+type ReceiveFunc int
+
+// ReceiveFunc indices for Tracker.MagicSockReceiveFuncs.
+const (
+	ReceiveIPv4 ReceiveFunc = 0
+	ReceiveIPv6 ReceiveFunc = 1
+	ReceiveDERP ReceiveFunc = 2
+)
+
+func (f ReceiveFunc) String() string {
+	if f < 0 || int(f) >= len(receiveNames) {
+		return fmt.Sprintf("ReceiveFunc(%d)", f)
+	}
+	return receiveNames[f]
+}
+
+var receiveNames = []string{
+	ReceiveIPv4: "ReceiveIPv4",
+	ReceiveIPv6: "ReceiveIPv6",
+	ReceiveDERP: "ReceiveDERP",
+}
+
+// Tracker tracks the health of various Tailscale subsystems,
+// comparing each subsystems' state with each other to make sure
+// they're consistent based on the user's intended state.
+type Tracker struct {
+	// MagicSockReceiveFuncs tracks the state of the three
+	// magicsock receive functions: IPv4, IPv6, and DERP.
+	MagicSockReceiveFuncs [3]ReceiveFuncStats // indexed by ReceiveFunc values
+
+	// mu guards everything that follows.
+	mu sync.Mutex
+
+	warnables []*Warnable // keys ever set
+	warnableVal map[*Warnable]error
+
+	sysErr map[Subsystem]error // subsystem => err (or nil for no error)
+	watchers set.HandleSet[func(Subsystem, error)] // opt func to run if error state changes
+	timer *time.Timer
 
 	inMapPoll bool
 	inMapPollSince time.Time
@@ -38,19 +78,19 @@ var (
 	lastStreamedMapResponse time.Time
 	derpHomeRegion int
 	derpHomeless bool
-	derpRegionConnected = map[int]bool{}
-	derpRegionHealthProblem = map[int]string{}
-	derpRegionLastFrame = map[int]time.Time{}
+	derpRegionConnected map[int]bool
+	derpRegionHealthProblem map[int]string
+	derpRegionLastFrame map[int]time.Time
 	lastMapRequestHeard time.Time // time we got a 200 from control for a MapRequest
 	ipnState string
 	ipnWantRunning bool
-	anyInterfaceUp = true // until told otherwise
+	anyInterfaceUp opt.Bool // empty means unknown (assume true)
 	udp4Unbound bool
 	controlHealth []string
 	lastLoginErr error
 	localLogConfigErr error
-	tlsConnectionErrors = map[string]error{} // map[ServerName]error
-)
+	tlsConnectionErrors map[string]error // map[ServerName]error
+}
 
 // Subsystem is the name of a subsystem whose health can be monitored.
 type Subsystem string
@@ -76,16 +116,16 @@ const (
 	SysTKA = Subsystem("tailnet-lock")
 )
 
-// NewWarnable returns a new warnable item that the caller can mark
-// as health or in warning state.
+// NewWarnable returns a new warnable item that the caller can mark as health or
+// in warning state via Tracker.SetWarnable.
+//
+// NewWarnable is generally called in init and stored in a package global. It
+// can be used by multiple Trackers.
 func NewWarnable(opts ...WarnableOpt) *Warnable {
 	w := new(Warnable)
 	for _, o := range opts {
 		o.mod(w)
 	}
-	mu.Lock()
-	defer mu.Unlock()
-	warnables.Add(w)
 	return w
 }
@@ -118,49 +158,66 @@ type warnOptFunc func(*Warnable)
 func (f warnOptFunc) mod(w *Warnable) { f(w) }
 
 // Warnable is a health check item that may or may not be in a bad warning state.
-// The caller of NewWarnable is responsible for calling Set to update the state.
+// The caller of NewWarnable is responsible for calling Tracker.SetWarnable to update the state.
 type Warnable struct {
 	debugFlag string // optional MapRequest.DebugFlag to send when unhealthy
 
 	// If true, this warning is related to configuration of networking stack
 	// on the machine that impacts connectivity.
 	hasConnectivityImpact bool
+}
 
-	isSet atomic.Bool
-	mu sync.Mutex
-	err error
+// nil reports whether t is nil.
+// It exists to accept nil *Tracker receivers on all methods
+// to at least not crash. But because a nil receiver indicates
+// some lost Tracker plumbing, we want to capture stack trace
+// samples when it occurs.
+func (t *Tracker) nil() bool {
+	if t != nil {
+		return false
+	}
+	if cibuild.On() {
+		stack := make([]byte, 1<<10)
+		stack = stack[:runtime.Stack(stack, false)]
+		fmt.Fprintf(os.Stderr, "## WARNING: (non-fatal) nil health.Tracker (being strict in CI):\n%s\n", stack)
+	}
+	// TODO(bradfitz): open source our "unexpected" package
+	// and use it here to capture samples of stacks where
+	// t is nil.
+	return true
 }
 
 // Set updates the Warnable's state.
 // If non-nil, it's considered unhealthy.
-func (w *Warnable) Set(err error) {
-	w.mu.Lock()
-	defer w.mu.Unlock()
-	w.err = err
-	w.isSet.Store(err != nil)
-}
-
-func (w *Warnable) get() error {
-	if !w.isSet.Load() {
-		return nil
-	}
-	w.mu.Lock()
-	defer w.mu.Unlock()
-	return w.err
+func (t *Tracker) SetWarnable(w *Warnable, err error) {
+	if t.nil() {
+		return
+	}
+	t.mu.Lock()
+	defer t.mu.Unlock()
+	l0 := len(t.warnableVal)
+	mak.Set(&t.warnableVal, w, err)
+	if len(t.warnableVal) != l0 {
+		t.warnables = append(t.warnables, w)
+	}
 }
 
 // AppendWarnableDebugFlags appends to base any health items that are currently in failed
 // state and were created with MapDebugFlag.
-func AppendWarnableDebugFlags(base []string) []string {
+func (t *Tracker) AppendWarnableDebugFlags(base []string) []string {
+	if t.nil() {
+		return base
+	}
 	ret := base
-	mu.Lock()
-	defer mu.Unlock()
-	for w := range warnables {
+	t.mu.Lock()
+	defer t.mu.Unlock()
+	for w, err := range t.warnableVal {
 		if w.debugFlag == "" {
 			continue
 		}
-		if err := w.get(); err != nil {
+		if err != nil {
 			ret = append(ret, w.debugFlag)
 		}
 	}
@@ -172,75 +229,87 @@ func AppendWarnableDebugFlags(base []string) []string {
 // error changes state either to unhealthy or from unhealthy. It is
 // not called on transition from unknown to healthy. It must be non-nil
 // and is run in its own goroutine. The returned func unregisters it.
-func RegisterWatcher(cb func(key Subsystem, err error)) (unregister func()) {
-	mu.Lock()
-	defer mu.Unlock()
-	handle := watchers.Add(cb)
-	if timer == nil {
-		timer = time.AfterFunc(time.Minute, timerSelfCheck)
+func (t *Tracker) RegisterWatcher(cb func(key Subsystem, err error)) (unregister func()) {
+	if t.nil() {
+		return func() {}
+	}
+	t.mu.Lock()
+	defer t.mu.Unlock()
+	if t.watchers == nil {
+		t.watchers = set.HandleSet[func(Subsystem, error)]{}
+	}
+	handle := t.watchers.Add(cb)
+	if t.timer == nil {
+		t.timer = time.AfterFunc(time.Minute, t.timerSelfCheck)
 	}
 	return func() {
-		mu.Lock()
-		defer mu.Unlock()
-		delete(watchers, handle)
-		if len(watchers) == 0 && timer != nil {
-			timer.Stop()
-			timer = nil
+		t.mu.Lock()
+		defer t.mu.Unlock()
+		delete(t.watchers, handle)
+		if len(t.watchers) == 0 && t.timer != nil {
+			t.timer.Stop()
+			t.timer = nil
 		}
 	}
 }
 
 // SetRouterHealth sets the state of the wgengine/router.Router.
-func SetRouterHealth(err error) { setErr(SysRouter, err) }
+func (t *Tracker) SetRouterHealth(err error) { t.setErr(SysRouter, err) }
 
 // RouterHealth returns the wgengine/router.Router error state.
-func RouterHealth() error { return get(SysRouter) }
+func (t *Tracker) RouterHealth() error { return t.get(SysRouter) }
 
 // SetDNSHealth sets the state of the net/dns.Manager
-func SetDNSHealth(err error) { setErr(SysDNS, err) }
+func (t *Tracker) SetDNSHealth(err error) { t.setErr(SysDNS, err) }
 
 // DNSHealth returns the net/dns.Manager error state.
-func DNSHealth() error { return get(SysDNS) }
+func (t *Tracker) DNSHealth() error { return t.get(SysDNS) }
 
 // SetDNSOSHealth sets the state of the net/dns.OSConfigurator
-func SetDNSOSHealth(err error) { setErr(SysDNSOS, err) }
+func (t *Tracker) SetDNSOSHealth(err error) { t.setErr(SysDNSOS, err) }
 
 // SetDNSManagerHealth sets the state of the Linux net/dns manager's
 // discovery of the /etc/resolv.conf situation.
-func SetDNSManagerHealth(err error) { setErr(SysDNSManager, err) }
+func (t *Tracker) SetDNSManagerHealth(err error) { t.setErr(SysDNSManager, err) }
 
 // DNSOSHealth returns the net/dns.OSConfigurator error state.
-func DNSOSHealth() error { return get(SysDNSOS) }
+func (t *Tracker) DNSOSHealth() error { return t.get(SysDNSOS) }
 
 // SetTKAHealth sets the health of the tailnet key authority.
-func SetTKAHealth(err error) { setErr(SysTKA, err) }
+func (t *Tracker) SetTKAHealth(err error) { t.setErr(SysTKA, err) }
 
 // TKAHealth returns the tailnet key authority error state.
-func TKAHealth() error { return get(SysTKA) }
+func (t *Tracker) TKAHealth() error { return t.get(SysTKA) }
 
 // SetLocalLogConfigHealth sets the error state of this client's local log configuration.
-func SetLocalLogConfigHealth(err error) {
-	mu.Lock()
-	defer mu.Unlock()
-	localLogConfigErr = err
+func (t *Tracker) SetLocalLogConfigHealth(err error) {
+	if t.nil() {
+		return
+	}
+	t.mu.Lock()
+	defer t.mu.Unlock()
+	t.localLogConfigErr = err
 }
 
 // SetTLSConnectionError sets the error state for connections to a specific
 // host. Setting the error to nil will clear any previously-set error.
-func SetTLSConnectionError(host string, err error) {
-	mu.Lock()
-	defer mu.Unlock()
+func (t *Tracker) SetTLSConnectionError(host string, err error) {
+	if t.nil() {
+		return
+	}
+	t.mu.Lock()
+	defer t.mu.Unlock()
 	if err == nil {
-		delete(tlsConnectionErrors, host)
+		delete(t.tlsConnectionErrors, host)
 	} else {
-		tlsConnectionErrors[host] = err
+		mak.Set(&t.tlsConnectionErrors, host, err)
 	}
 }
 
 func RegisterDebugHandler(typ string, h http.Handler) {
 	mu.Lock()
 	defer mu.Unlock()
-	debugHandler[typ] = h
+	mak.Set(&debugHandler, typ, h)
 }
 
 func DebugHandler(typ string) http.Handler {
@@ -249,24 +318,33 @@ func DebugHandler(typ string) http.Handler {
 	return debugHandler[typ]
 }
 
-func get(key Subsystem) error {
-	mu.Lock()
-	defer mu.Unlock()
-	return sysErr[key]
+func (t *Tracker) get(key Subsystem) error {
+	if t.nil() {
+		return nil
+	}
+	t.mu.Lock()
+	defer t.mu.Unlock()
+	return t.sysErr[key]
}


@@ -8,17 +8,15 @@ import (
 	"fmt"
 	"reflect"
 	"testing"
-
-	"tailscale.com/util/set"
 )
 
 func TestAppendWarnableDebugFlags(t *testing.T) {
-	resetWarnables()
+	var tr Tracker
 	for i := range 10 {
 		w := NewWarnable(WithMapDebugFlag(fmt.Sprint(i)))
 		if i%2 == 0 {
-			w.Set(errors.New("boom"))
+			tr.SetWarnable(w, errors.New("boom"))
 		}
 	}
@@ -27,15 +25,27 @@ func TestAppendWarnableDebugFlags(t *testing.T) {
 	var got []string
 	for range 20 {
 		got = append(got[:0], "z", "y")
-		got = AppendWarnableDebugFlags(got)
+		got = tr.AppendWarnableDebugFlags(got)
 		if !reflect.DeepEqual(got, want) {
 			t.Fatalf("AppendWarnableDebugFlags = %q; want %q", got, want)
 		}
 	}
 }
 
-func resetWarnables() {
-	mu.Lock()
-	defer mu.Unlock()
-	warnables = set.Set[*Warnable]{}
+// Test that all exported methods on *Tracker don't panic with a nil receiver.
+func TestNilMethodsDontCrash(t *testing.T) {
+	var nilt *Tracker
+	rv := reflect.ValueOf(nilt)
+	for i := 0; i < rv.NumMethod(); i++ {
+		mt := rv.Type().Method(i)
+		t.Logf("calling Tracker.%s ...", mt.Name)
+		var args []reflect.Value
+		for j := 0; j < mt.Type.NumIn(); j++ {
+			if j == 0 && mt.Type.In(j) == reflect.TypeFor[*Tracker]() {
+				continue
+			}
+			args = append(args, reflect.Zero(mt.Type.In(j)))
+		}
+		rv.Method(i).Call(args)
+	}
 }


@@ -170,6 +170,7 @@ type LocalBackend struct {
    keyLogf   logger.Logf // for printing list of peers on change
    statsLogf logger.Logf // for printing peers stats on change
    sys       *tsd.System
    health    *health.Tracker // always non-nil
    e         wgengine.Engine // non-nil; TODO(bradfitz): remove; use sys
    store     ipn.StateStore  // non-nil; TODO(bradfitz): remove; use sys
    dialer    *tsdial.Dialer  // non-nil; TODO(bradfitz): remove; use sys
@@ -326,6 +327,16 @@ type LocalBackend struct {
    outgoingFiles map[string]*ipn.OutgoingFile
}

// HealthTracker returns the health tracker for the backend.
func (b *LocalBackend) HealthTracker() *health.Tracker {
    return b.health
}

// NetMon returns the network monitor for the backend.
func (b *LocalBackend) NetMon() *netmon.Monitor {
    return b.sys.NetMon.Get()
}

type updateStatus struct {
    started bool
}
@@ -386,6 +397,7 @@ func NewLocalBackend(logf logger.Logf, logID logid.PublicID, sys *tsd.System, lo
        keyLogf:   logger.LogOnChange(logf, 5*time.Minute, clock.Now),
        statsLogf: logger.LogOnChange(logf, 5*time.Minute, clock.Now),
        sys:       sys,
        health:    sys.HealthTracker(),
        conf:      sys.InitialConfig,
        e:         e,
        dialer:    dialer,
@@ -403,7 +415,7 @@ func NewLocalBackend(logf logger.Logf, logID logid.PublicID, sys *tsd.System, lo
    }

    netMon := sys.NetMon.Get()
    b.sockstatLogger, err = sockstatlog.NewLogger(logpolicy.LogsDir(logf), logf, logID, netMon, sys.HealthTracker())
    if err != nil {
        log.Printf("error setting up sockstat logger: %v", err)
    }
@@ -426,7 +438,7 @@ func NewLocalBackend(logf logger.Logf, logID logid.PublicID, sys *tsd.System, lo
    b.linkChange(&netmon.ChangeDelta{New: netMon.InterfaceState()})
    b.unregisterNetMon = netMon.RegisterChangeCallback(b.linkChange)
    b.unregisterHealthWatch = b.health.RegisterWatcher(b.onHealthChange)

    if tunWrap, ok := b.sys.Tun.GetOK(); ok {
        tunWrap.PeerAPIPort = b.GetPeerAPIPort
@@ -625,7 +637,7 @@ func (b *LocalBackend) linkChange(delta *netmon.ChangeDelta) {
    // If the local network configuration has changed, our filter may
    // need updating to tweak default routes.
    b.updateFilterLocked(b.netMap, b.pm.CurrentPrefs())
    updateExitNodeUsageWarning(b.pm.CurrentPrefs(), delta.New, b.health)

    if peerAPIListenAsync && b.netMap != nil && b.state == ipn.Running {
        want := b.netMap.GetAddresses().Len()
@@ -761,7 +773,7 @@ func (b *LocalBackend) UpdateStatus(sb *ipnstate.StatusBuilder) {
            }
        }
    }
    if err := b.health.OverallError(); err != nil {
        switch e := err.(type) {
        case multierr.Error:
            for _, err := range e.Errors() {
@@ -820,7 +832,7 @@ func (b *LocalBackend) UpdateStatus(sb *ipnstate.StatusBuilder) {
    sb.MutateSelfStatus(func(ss *ipnstate.PeerStatus) {
        ss.OS = version.OS()
        ss.Online = b.health.GetInPollNetMap()
        if b.netMap != nil {
            ss.InNetworkMap = true
            if hi := b.netMap.SelfNode.Hostinfo(); hi.Valid() {
@@ -1221,7 +1233,7 @@ func (b *LocalBackend) SetControlClientStatus(c controlclient.Client, st control
    if st.NetMap != nil {
        if envknob.NoLogsNoSupport() && st.NetMap.HasCap(tailcfg.CapabilityDataPlaneAuditLogs) {
            msg := "tailnet requires logging to be enabled. Remove --no-logs-no-support from tailscaled command line."
            b.health.SetLocalLogConfigHealth(errors.New(msg))
            // Connecting to this tailnet without logging is forbidden; boot us outta here.
            b.mu.Lock()
            prefs.WantRunning = false
@@ -1751,6 +1763,7 @@ func (b *LocalBackend) Start(opts ipn.Options) error {
        DiscoPublicKey:  discoPublic,
        DebugFlags:      debugFlags,
        NetMon:          b.sys.NetMon.Get(),
        HealthTracker:   b.health,
        Pinger:          b,
        PopBrowserURL:   b.tellClientToBrowseToURL,
        OnClientVersion: b.onClientVersion,
@@ -1851,10 +1864,10 @@ func (b *LocalBackend) updateFilterLocked(netMap *netmap.NetworkMap, prefs ipn.P
        if packetFilterPermitsUnlockedNodes(b.peers, packetFilter) {
            err := errors.New("server sent invalid packet filter permitting traffic to unlocked nodes; rejecting all packets for safety")
            b.health.SetWarnable(warnInvalidUnsignedNodes, err)
            packetFilter = nil
        } else {
            b.health.SetWarnable(warnInvalidUnsignedNodes, nil)
        }
    }
    if prefs.Valid() {
@@ -3048,7 +3061,7 @@ var warnExitNodeUsage = health.NewWarnable(health.WithConnectivityImpact())

// updateExitNodeUsageWarning updates a warnable meant to notify users of
// configuration issues that could break exit node usage.
func updateExitNodeUsageWarning(p ipn.PrefsView, state *interfaces.State, health *health.Tracker) {
    var result error
    if p.ExitNodeIP().IsValid() || p.ExitNodeID() != "" {
        warn, _ := netutil.CheckReversePathFiltering(state)
@@ -3057,7 +3070,7 @@ func updateExitNodeUsageWarning(p ipn.PrefsView, state *interfaces.State) {
            result = fmt.Errorf("%s: %v, %s", healthmsg.WarnExitNodeUsage, warn, comment)
        }
    }
    health.SetWarnable(warnExitNodeUsage, result)
}
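For readers unfamiliar with the warnable plumbing being threaded through this change, the set/clear pattern against a Tracker looks roughly like this; the warnable name and error text below are illustrative only:

package example

import (
    "errors"

    "tailscale.com/health"
)

// warnExample is an illustrative package-level warnable, registered once.
var warnExample = health.NewWarnable()

// reportCondition shows the pattern used above: a non-nil error raises the
// warning on the given tracker, and nil clears it again.
func reportCondition(t *health.Tracker, broken bool) {
    if broken {
        t.SetWarnable(warnExample, errors.New("example condition failed"))
        return
    }
    t.SetWarnable(warnExample, nil)
}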
func (b *LocalBackend) checkExitNodePrefsLocked(p *ipn.Prefs) error {
@@ -3121,6 +3134,19 @@ func (b *LocalBackend) SetUseExitNodeEnabled(v bool) (ipn.PrefsView, error) {
    return b.editPrefsLockedOnEntry(mp, unlock)
}
// MaybeClearAppConnector clears the routes from any AppConnector if
// AdvertiseRoutes has been set in the MaskedPrefs.
func (b *LocalBackend) MaybeClearAppConnector(mp *ipn.MaskedPrefs) error {
var err error
if b.appConnector != nil && mp.AdvertiseRoutesSet {
err = b.appConnector.ClearRoutes()
if err != nil {
b.logf("appc: clear routes error: %v", err)
}
}
return err
}
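A minimal sketch of the intended call order for this helper, mirroring the LocalAPI servePrefs change further down; the wrapper function itself is illustrative and not part of the tree, written as if it lived in package ipnlocal:

package ipnlocal

import "tailscale.com/ipn"

// applyAdvertiseRoutes (illustrative only) clears previously learned app
// connector routes before applying the masked prefs, so that an explicit
// AdvertiseRoutes set replaces, rather than merges with, discovered routes.
func applyAdvertiseRoutes(b *LocalBackend, mp *ipn.MaskedPrefs) (ipn.PrefsView, error) {
    if err := b.MaybeClearAppConnector(mp); err != nil {
        return ipn.PrefsView{}, err
    }
    return b.EditPrefs(mp)
}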
func (b *LocalBackend) EditPrefs(mp *ipn.MaskedPrefs) (ipn.PrefsView, error) {
    if mp.SetsInternal() {
        return ipn.PrefsView{}, errors.New("can't set Internal fields")
@@ -3499,8 +3525,22 @@ func (b *LocalBackend) reconfigAppConnectorLocked(nm *netmap.NetworkMap, prefs i
        return
    }

    shouldAppCStoreRoutes := b.ControlKnobs().AppCStoreRoutes.Load()
    if b.appConnector == nil || b.appConnector.ShouldStoreRoutes() != shouldAppCStoreRoutes {
        var ri *appc.RouteInfo
        var storeFunc func(*appc.RouteInfo) error
        if shouldAppCStoreRoutes {
            var err error
            ri, err = b.readRouteInfoLocked()
            if err != nil {
                ri = &appc.RouteInfo{}
                if err != ipn.ErrStateNotExist {
                    b.logf("Unsuccessful Read RouteInfo: %v", err)
                }
            }
            storeFunc = b.storeRouteInfo
        }
        b.appConnector = appc.NewAppConnector(b.logf, b, ri, storeFunc)
    }
    if nm == nil {
        return
    }
@@ -4254,7 +4294,7 @@ func (b *LocalBackend) enterStateLockedOnEntry(newState ipn.State, unlock unlock
    // prefs may change irrespective of state; WantRunning should be explicitly
    // set before potential early return even if the state is unchanged.
    b.health.SetIPNState(newState.String(), prefs.Valid() && prefs.WantRunning())
    if oldState == newState {
        return
    }
@@ -4692,9 +4732,9 @@ func (b *LocalBackend) setNetMapLocked(nm *netmap.NetworkMap) {
    b.pauseOrResumeControlClientLocked()

    if nm != nil {
        b.health.SetControlHealth(nm.ControlHealth)
    } else {
        b.health.SetControlHealth(nil)
    }

    // Determine if file sharing is enabled
@@ -5679,9 +5719,9 @@ var warnSSHSELinux = health.NewWarnable()

func (b *LocalBackend) updateSELinuxHealthWarning() {
    if hostinfo.IsSELinuxEnforcing() {
        b.health.SetWarnable(warnSSHSELinux, errors.New("SELinux is enabled; Tailscale SSH may not work. See https://tailscale.com/s/ssh-selinux"))
    } else {
        b.health.SetWarnable(warnSSHSELinux, nil)
    }
}
@@ -5908,7 +5948,7 @@ func (b *LocalBackend) resetForProfileChangeLockedOnEntry(unlock unlockOnce) err
    b.lastServeConfJSON = mem.B(nil)
    b.serveConfig = ipn.ServeConfigView{}
    b.enterStateLockedOnEntry(ipn.NoState, unlock) // Reset state; releases b.mu
    b.health.SetLocalLogConfigHealth(nil)
    return b.Start(ipn.Options{})
}
@@ -6193,6 +6233,43 @@ func (b *LocalBackend) UnadvertiseRoute(toRemove ...netip.Prefix) error {
    return err
}
// namespace a key with the profile manager's current profile key, if any
func namespaceKeyForCurrentProfile(pm *profileManager, key ipn.StateKey) ipn.StateKey {
return pm.CurrentProfile().Key + "||" + key
}
const routeInfoStateStoreKey ipn.StateKey = "_routeInfo"
func (b *LocalBackend) storeRouteInfo(ri *appc.RouteInfo) error {
b.mu.Lock()
defer b.mu.Unlock()
if b.pm.CurrentProfile().ID == "" {
return nil
}
key := namespaceKeyForCurrentProfile(b.pm, routeInfoStateStoreKey)
bs, err := json.Marshal(ri)
if err != nil {
return err
}
return b.pm.WriteState(key, bs)
}
func (b *LocalBackend) readRouteInfoLocked() (*appc.RouteInfo, error) {
if b.pm.CurrentProfile().ID == "" {
return &appc.RouteInfo{}, nil
}
key := namespaceKeyForCurrentProfile(b.pm, routeInfoStateStoreKey)
bs, err := b.pm.Store().ReadState(key)
ri := &appc.RouteInfo{}
if err != nil {
return nil, err
}
if err := json.Unmarshal(bs, ri); err != nil {
return nil, err
}
return ri, nil
}
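As a concrete illustration of the namespacing above: with a current profile whose Key is "profile-abc" (a made-up value), the route info is stored under the state key "profile-abc||_routeInfo" as JSON. A self-contained sketch of that round trip, using a stand-in struct for appc.RouteInfo:

package main

import (
    "encoding/json"
    "fmt"
)

// routeInfo is a stand-in for appc.RouteInfo, enough to show the round trip.
type routeInfo struct {
    Wildcards []string
}

func main() {
    const profileKey = "profile-abc"        // hypothetical ipn.LoginProfile.Key
    key := profileKey + "||" + "_routeInfo" // same shape as namespaceKeyForCurrentProfile

    bs, _ := json.Marshal(routeInfo{Wildcards: []string{"example.com"}}) // storeRouteInfo's encoding
    fmt.Printf("WriteState(%q, %s)\n", key, bs)

    var ri routeInfo
    _ = json.Unmarshal(bs, &ri) // readRouteInfoLocked's decoding
    fmt.Println(ri.Wildcards)
}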
// seamlessRenewalEnabled reports whether seamless key renewals are enabled
// (i.e. we saw our self node with the SeamlessKeyRenewal attr in a netmap).
// This enables beta functionality of renewing node keys without breaking
@@ -6240,6 +6317,7 @@ func mayDeref[T any](p *T) (v T) {
}

var ErrNoPreferredDERP = errors.New("no preferred DERP, try again later")
var ErrCannotSuggestExitNode = errors.New("unable to suggest an exit node, try again later")
// SuggestExitNode computes a suggestion based on the current netmap and last netcheck report. If
// there are multiple equally good options, one is selected at random, so the result is not stable. To be
@@ -6253,6 +6331,9 @@ func (b *LocalBackend) SuggestExitNode() (response apitype.ExitNodeSuggestionRes
    lastReport := b.MagicConn().GetLastNetcheckReport(b.ctx)
    netMap := b.netMap
    b.mu.Unlock()
    if lastReport == nil || netMap == nil {
        return response, ErrCannotSuggestExitNode
    }
    seed := time.Now().UnixNano()
    r := rand.New(rand.NewSource(seed))
    return suggestExitNode(lastReport, netMap, r)
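With the new early return, a caller that races backend startup can treat ErrCannotSuggestExitNode as retryable; here is a hedged sketch of such a wrapper (not part of the tree, written as if in package ipnlocal, and the one-second backoff is arbitrary):

package ipnlocal

import (
    "errors"
    "time"

    "tailscale.com/client/tailscale/apitype"
)

// suggestWithRetry (illustrative only) retries while the netmap or netcheck
// report has not arrived yet, which is when SuggestExitNode returns
// ErrCannotSuggestExitNode.
func suggestWithRetry(b *LocalBackend, tries int) (apitype.ExitNodeSuggestionResponse, error) {
    for range tries {
        res, err := b.SuggestExitNode()
        if !errors.Is(err, ErrCannotSuggestExitNode) {
            return res, err
        }
        time.Sleep(time.Second)
    }
    return apitype.ExitNodeSuggestionResponse{}, ErrCannotSuggestExitNode
}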

View File

@@ -54,6 +54,8 @@ import (
    "tailscale.com/wgengine/wgcfg"
)
func fakeStoreRoutes(*appc.RouteInfo) error { return nil }
func inRemove(ip netip.Addr) bool {
    for _, pfx := range removeFromDefaultRoute {
        if pfx.Contains(ip) {
@@ -1290,13 +1292,19 @@ func TestDNSConfigForNetmapForExitNodeConfigs(t *testing.T) {
}

func TestOfferingAppConnector(t *testing.T) {
    for _, shouldStore := range []bool{false, true} {
        b := newTestBackend(t)
        if b.OfferingAppConnector() {
            t.Fatal("unexpected offering app connector")
        }
        if shouldStore {
            b.appConnector = appc.NewAppConnector(t.Logf, nil, &appc.RouteInfo{}, fakeStoreRoutes)
        } else {
            b.appConnector = appc.NewAppConnector(t.Logf, nil, nil, nil)
        }
        if !b.OfferingAppConnector() {
            t.Fatal("unexpected not offering app connector")
        }
    }
}
@@ -1341,21 +1349,27 @@ func TestRouterAdvertiserIgnoresContainedRoutes(t *testing.T) {
}

func TestObserveDNSResponse(t *testing.T) {
    for _, shouldStore := range []bool{false, true} {
        b := newTestBackend(t)

        // ensure no error when no app connector is configured
        b.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))

        rc := &appctest.RouteCollector{}
        if shouldStore {
            b.appConnector = appc.NewAppConnector(t.Logf, rc, &appc.RouteInfo{}, fakeStoreRoutes)
        } else {
            b.appConnector = appc.NewAppConnector(t.Logf, rc, nil, nil)
        }
        b.appConnector.UpdateDomains([]string{"example.com"})
        b.appConnector.Wait(context.Background())

        b.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
        b.appConnector.Wait(context.Background())
        wantRoutes := []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}
        if !slices.Equal(rc.Routes(), wantRoutes) {
            t.Fatalf("got routes %v, want %v", rc.Routes(), wantRoutes)
        }
    }
}
@@ -3451,3 +3465,66 @@ func TestEnableAutoUpdates(t *testing.T) {
        t.Fatalf("disabling auto-updates: got error: %v", err)
    }
}
func TestReadWriteRouteInfo(t *testing.T) {
// set up a backend with more than one profile
b := newTestBackend(t)
prof1 := ipn.LoginProfile{ID: "id1", Key: "key1"}
prof2 := ipn.LoginProfile{ID: "id2", Key: "key2"}
b.pm.knownProfiles["id1"] = &prof1
b.pm.knownProfiles["id2"] = &prof2
b.pm.currentProfile = &prof1
// set up routeInfo
ri1 := &appc.RouteInfo{}
ri1.Wildcards = []string{"1"}
ri2 := &appc.RouteInfo{}
ri2.Wildcards = []string{"2"}
// read before write
readRi, err := b.readRouteInfoLocked()
if readRi != nil {
t.Fatalf("read before writing: want nil, got %v", readRi)
}
if err != ipn.ErrStateNotExist {
t.Fatalf("read before writing: want %v, got %v", ipn.ErrStateNotExist, err)
}
// write the first routeInfo
if err := b.storeRouteInfo(ri1); err != nil {
t.Fatal(err)
}
// write the other routeInfo as the other profile
if err := b.pm.SwitchProfile("id2"); err != nil {
t.Fatal(err)
}
if err := b.storeRouteInfo(ri2); err != nil {
t.Fatal(err)
}
// read the routeInfo of the first profile
if err := b.pm.SwitchProfile("id1"); err != nil {
t.Fatal(err)
}
readRi, err = b.readRouteInfoLocked()
if err != nil {
t.Fatal(err)
}
if !slices.Equal(readRi.Wildcards, ri1.Wildcards) {
t.Fatalf("read prof1 routeInfo wildcards: want %v, got %v", ri1.Wildcards, readRi.Wildcards)
}
// read the routeInfo of the second profile
if err := b.pm.SwitchProfile("id2"); err != nil {
t.Fatal(err)
}
readRi, err = b.readRouteInfoLocked()
if err != nil {
t.Fatal(err)
}
if !slices.Equal(readRi.Wildcards, ri2.Wildcards) {
t.Fatalf("read prof2 routeInfo wildcards: want %v, got %v", ri2.Wildcards, readRi.Wildcards)
}
}

View File

@@ -20,7 +20,6 @@ import (
    "path/filepath"
    "time"
"tailscale.com/health"
"tailscale.com/health/healthmsg" "tailscale.com/health/healthmsg"
"tailscale.com/ipn" "tailscale.com/ipn"
"tailscale.com/ipn/ipnstate" "tailscale.com/ipn/ipnstate"
@@ -59,11 +58,11 @@ type tkaState struct {
// b.mu must be held.
func (b *LocalBackend) tkaFilterNetmapLocked(nm *netmap.NetworkMap) {
    if b.tka == nil && !b.capTailnetLock {
        b.health.SetTKAHealth(nil)
        return
    }
    if b.tka == nil {
        b.health.SetTKAHealth(nil)
        return // TKA not enabled.
    }
@@ -117,9 +116,9 @@ func (b *LocalBackend) tkaFilterNetmapLocked(nm *netmap.NetworkMap) {
    // Check that we ourselves are not locked out, report a health issue if so.
    if nm.SelfNode.Valid() && b.tka.authority.NodeKeyAuthorized(nm.SelfNode.Key(), nm.SelfNode.KeySignature().AsSlice()) != nil {
        b.health.SetTKAHealth(errors.New(healthmsg.LockedOut))
    } else {
        b.health.SetTKAHealth(nil)
    }
}
@@ -188,7 +187,7 @@ func (b *LocalBackend) tkaSyncIfNeeded(nm *netmap.NetworkMap, prefs ipn.PrefsVie
            b.logf("Disablement failed, leaving TKA enabled. Error: %v", err)
        } else {
            isEnabled = false
            b.health.SetTKAHealth(nil)
        }
    } else {
        return fmt.Errorf("[bug] unreachable invariant of wantEnabled w/ isEnabled")

View File

@@ -687,185 +687,209 @@ func TestPeerAPIReplyToDNSQueries(t *testing.T) {
}

func TestPeerAPIPrettyReplyCNAME(t *testing.T) {
    for _, shouldStore := range []bool{false, true} {
        var h peerAPIHandler
        h.remoteAddr = netip.MustParseAddrPort("100.150.151.152:12345")

        eng, _ := wgengine.NewFakeUserspaceEngine(logger.Discard, 0)
        pm := must.Get(newProfileManager(new(mem.Store), t.Logf))
        var a *appc.AppConnector
        if shouldStore {
            a = appc.NewAppConnector(t.Logf, &appctest.RouteCollector{}, &appc.RouteInfo{}, fakeStoreRoutes)
        } else {
            a = appc.NewAppConnector(t.Logf, &appctest.RouteCollector{}, nil, nil)
        }
        h.ps = &peerAPIServer{
            b: &LocalBackend{
                e:     eng,
                pm:    pm,
                store: pm.Store(),
                // configure as an app connector just to enable the API.
                appConnector: a,
            },
        }

        h.ps.resolver = &fakeResolver{build: func(b *dnsmessage.Builder) {
            b.CNAMEResource(
                dnsmessage.ResourceHeader{
                    Name:  dnsmessage.MustNewName("www.example.com."),
                    Type:  dnsmessage.TypeCNAME,
                    Class: dnsmessage.ClassINET,
                    TTL:   0,
                },
                dnsmessage.CNAMEResource{
                    CNAME: dnsmessage.MustNewName("example.com."),
                },
            )

            b.AResource(
                dnsmessage.ResourceHeader{
                    Name:  dnsmessage.MustNewName("example.com."),
                    Type:  dnsmessage.TypeA,
                    Class: dnsmessage.ClassINET,
                    TTL:   0,
                },
                dnsmessage.AResource{
                    A: [4]byte{192, 0, 0, 8},
                },
            )
        }}
        f := filter.NewAllowAllForTest(logger.Discard)
        h.ps.b.setFilter(f)

        if !h.replyToDNSQueries() {
            t.Errorf("unexpectedly deny; wanted to be a DNS server")
        }

        w := httptest.NewRecorder()
        h.handleDNSQuery(w, httptest.NewRequest("GET", "/dns-query?q=www.example.com.", nil))
        if w.Code != http.StatusOK {
            t.Errorf("unexpected status code: %v", w.Code)
        }
        var addrs []string
        json.NewDecoder(w.Body).Decode(&addrs)
        if len(addrs) == 0 {
            t.Fatalf("no addresses returned")
        }
        for _, addr := range addrs {
            netip.MustParseAddr(addr)
        }
    }
}
func TestPeerAPIReplyToDNSQueriesAreObserved(t *testing.T) {
    for _, shouldStore := range []bool{false, true} {
        ctx := context.Background()
        var h peerAPIHandler
        h.remoteAddr = netip.MustParseAddrPort("100.150.151.152:12345")

        rc := &appctest.RouteCollector{}
        eng, _ := wgengine.NewFakeUserspaceEngine(logger.Discard, 0)
        pm := must.Get(newProfileManager(new(mem.Store), t.Logf))
        var a *appc.AppConnector
        if shouldStore {
            a = appc.NewAppConnector(t.Logf, rc, &appc.RouteInfo{}, fakeStoreRoutes)
        } else {
            a = appc.NewAppConnector(t.Logf, rc, nil, nil)
        }
        h.ps = &peerAPIServer{
            b: &LocalBackend{
                e:            eng,
                pm:           pm,
                store:        pm.Store(),
                appConnector: a,
            },
        }
        h.ps.b.appConnector.UpdateDomains([]string{"example.com"})
        h.ps.b.appConnector.Wait(ctx)

        h.ps.resolver = &fakeResolver{build: func(b *dnsmessage.Builder) {
            b.AResource(
                dnsmessage.ResourceHeader{
                    Name:  dnsmessage.MustNewName("example.com."),
                    Type:  dnsmessage.TypeA,
                    Class: dnsmessage.ClassINET,
                    TTL:   0,
                },
                dnsmessage.AResource{
                    A: [4]byte{192, 0, 0, 8},
                },
            )
        }}
        f := filter.NewAllowAllForTest(logger.Discard)
        h.ps.b.setFilter(f)

        if !h.ps.b.OfferingAppConnector() {
            t.Fatal("expecting to be offering app connector")
        }
        if !h.replyToDNSQueries() {
            t.Errorf("unexpectedly deny; wanted to be a DNS server")
        }

        w := httptest.NewRecorder()
        h.handleDNSQuery(w, httptest.NewRequest("GET", "/dns-query?q=example.com.", nil))
        if w.Code != http.StatusOK {
            t.Errorf("unexpected status code: %v", w.Code)
        }
        h.ps.b.appConnector.Wait(ctx)

        wantRoutes := []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}
        if !slices.Equal(rc.Routes(), wantRoutes) {
            t.Errorf("got %v; want %v", rc.Routes(), wantRoutes)
        }
    }
}
func TestPeerAPIReplyToDNSQueriesAreObservedWithCNAMEFlattening(t *testing.T) {
    for _, shouldStore := range []bool{false, true} {
        ctx := context.Background()
        var h peerAPIHandler
        h.remoteAddr = netip.MustParseAddrPort("100.150.151.152:12345")

        rc := &appctest.RouteCollector{}
        eng, _ := wgengine.NewFakeUserspaceEngine(logger.Discard, 0)
        pm := must.Get(newProfileManager(new(mem.Store), t.Logf))
        var a *appc.AppConnector
        if shouldStore {
            a = appc.NewAppConnector(t.Logf, rc, &appc.RouteInfo{}, fakeStoreRoutes)
        } else {
            a = appc.NewAppConnector(t.Logf, rc, nil, nil)
        }
        h.ps = &peerAPIServer{
            b: &LocalBackend{
                e:            eng,
                pm:           pm,
                store:        pm.Store(),
                appConnector: a,
            },
        }
        h.ps.b.appConnector.UpdateDomains([]string{"www.example.com"})
        h.ps.b.appConnector.Wait(ctx)

        h.ps.resolver = &fakeResolver{build: func(b *dnsmessage.Builder) {
            b.CNAMEResource(
                dnsmessage.ResourceHeader{
                    Name:  dnsmessage.MustNewName("www.example.com."),
                    Type:  dnsmessage.TypeCNAME,
                    Class: dnsmessage.ClassINET,
                    TTL:   0,
                },
                dnsmessage.CNAMEResource{
                    CNAME: dnsmessage.MustNewName("example.com."),
                },
            )

            b.AResource(
                dnsmessage.ResourceHeader{
                    Name:  dnsmessage.MustNewName("example.com."),
                    Type:  dnsmessage.TypeA,
                    Class: dnsmessage.ClassINET,
                    TTL:   0,
                },
                dnsmessage.AResource{
                    A: [4]byte{192, 0, 0, 8},
                },
            )
        }}
        f := filter.NewAllowAllForTest(logger.Discard)
        h.ps.b.setFilter(f)

        if !h.ps.b.OfferingAppConnector() {
            t.Fatal("expecting to be offering app connector")
        }
        if !h.replyToDNSQueries() {
            t.Errorf("unexpectedly deny; wanted to be a DNS server")
        }

        w := httptest.NewRecorder()
        h.handleDNSQuery(w, httptest.NewRequest("GET", "/dns-query?q=www.example.com.", nil))
        if w.Code != http.StatusOK {
            t.Errorf("unexpected status code: %v", w.Code)
        }
        h.ps.b.appConnector.Wait(ctx)

        wantRoutes := []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}
        if !slices.Equal(rc.Routes(), wantRoutes) {
            t.Errorf("got %v; want %v", rc.Routes(), wantRoutes)
        }
    }
}

View File

@@ -199,7 +199,7 @@ func (s *Server) serveHTTP(w http.ResponseWriter, r *http.Request) {
    defer onDone()

    if strings.HasPrefix(r.URL.Path, "/localapi/") {
        lah := localapi.NewHandler(lb, s.logf, s.backendLogID)
        lah.PermitRead, lah.PermitWrite = s.localAPIPermissions(ci)
        lah.PermitCert = s.connCanFetchCerts(ci)
        lah.ConnIdentity = ci

View File

@@ -140,7 +140,7 @@ func (h *Handler) serveDebugDERPRegion(w http.ResponseWriter, r *http.Request) {
    }
    checkSTUN4 := func(derpNode *tailcfg.DERPNode) {
        u4, err := nettype.MakePacketListenerWithNetIP(netns.Listener(h.logf, h.b.NetMon())).ListenPacket(ctx, "udp4", ":0")
        if err != nil {
            st.Errors = append(st.Errors, fmt.Sprintf("Error creating IPv4 STUN listener: %v", err))
            return
@@ -249,7 +249,7 @@ func (h *Handler) serveDebugDERPRegion(w http.ResponseWriter, r *http.Request) {
    serverPubKeys := make(map[key.NodePublic]bool)
    for i := range 5 {
        func() {
            rc := derphttp.NewRegionClient(fakePrivKey, h.logf, h.b.NetMon(), func() *tailcfg.DERPRegion {
                return &tailcfg.DERPRegion{
                    RegionID:   reg.RegionID,
                    RegionCode: reg.RegionCode,

View File

@@ -36,7 +36,6 @@ import (
    "tailscale.com/clientupdate"
    "tailscale.com/drive"
    "tailscale.com/envknob"
"tailscale.com/health"
"tailscale.com/hostinfo" "tailscale.com/hostinfo"
"tailscale.com/ipn" "tailscale.com/ipn"
"tailscale.com/ipn/ipnauth" "tailscale.com/ipn/ipnauth"
@ -156,8 +155,8 @@ var (
// NewHandler creates a new LocalAPI HTTP handler. All parameters except netMon // NewHandler creates a new LocalAPI HTTP handler. All parameters except netMon
// are required (if non-nil it's used to do faster interface lookups). // are required (if non-nil it's used to do faster interface lookups).
func NewHandler(b *ipnlocal.LocalBackend, logf logger.Logf, netMon *netmon.Monitor, logID logid.PublicID) *Handler { func NewHandler(b *ipnlocal.LocalBackend, logf logger.Logf, logID logid.PublicID) *Handler {
return &Handler{b: b, logf: logf, netMon: netMon, backendLogID: logID, clock: tstime.StdClock{}} return &Handler{b: b, logf: logf, backendLogID: logID, clock: tstime.StdClock{}}
} }
type Handler struct { type Handler struct {
@ -188,7 +187,6 @@ type Handler struct {
b *ipnlocal.LocalBackend b *ipnlocal.LocalBackend
logf logger.Logf logf logger.Logf
netMon *netmon.Monitor // optional; nil means interfaces will be looked up on-demand
    backendLogID logid.PublicID
    clock        tstime.Clock
}
@@ -358,7 +356,7 @@ func (h *Handler) serveBugReport(w http.ResponseWriter, r *http.Request) {
    }
    hi, _ := json.Marshal(hostinfo.New())
    h.logf("user bugreport hostinfo: %s", hi)
    if err := h.b.HealthTracker().OverallError(); err != nil {
        h.logf("user bugreport health: %s", err.Error())
    } else {
        h.logf("user bugreport health: ok")
@@ -748,7 +746,7 @@ func (h *Handler) serveDebugPortmap(w http.ResponseWriter, r *http.Request) {
    done := make(chan bool, 1)

    var c *portmapper.Client
    c = portmapper.NewClient(logger.WithPrefix(logf, "portmapper: "), h.b.NetMon(), debugKnobs, h.b.ControlKnobs(), func() {
        logf("portmapping changed.")
        logf("have mapping: %v", c.HaveMapping())
@@ -1368,6 +1366,12 @@ func (h *Handler) servePrefs(w http.ResponseWriter, r *http.Request) {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
if err := h.b.MaybeClearAppConnector(mp); err != nil {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusInternalServerError)
json.NewEncoder(w).Encode(resJSON{Error: err.Error()})
return
}
        var err error
        prefs, err = h.b.EditPrefs(mp)
        if err != nil {

View File

@@ -330,11 +330,45 @@ Specification of the desired state of the ProxyClass resource. https://git.k8s.i
</tr>
</thead>
<tbody><tr>
<td><b><a href="#proxyclassspecmetrics">metrics</a></b></td>
<td>object</td>
<td>
Configuration for proxy metrics. Metrics are currently not supported for egress proxies and for Ingress proxies that have been configured with tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation.<br/>
</td>
<td>false</td>
</tr><tr>
<td><b><a href="#proxyclassspecstatefulset">statefulSet</a></b></td> <td><b><a href="#proxyclassspecstatefulset">statefulSet</a></b></td>
<td>object</td> <td>object</td>
<td> <td>
Configuration parameters for the proxy's StatefulSet. Tailscale Kubernetes operator deploys a StatefulSet for each of the user configured proxies (Tailscale Ingress, Tailscale Service, Connector).<br/> Configuration parameters for the proxy's StatefulSet. Tailscale Kubernetes operator deploys a StatefulSet for each of the user configured proxies (Tailscale Ingress, Tailscale Service, Connector).<br/>
</td> </td>
<td>false</td>
</tr></tbody>
</table>
### ProxyClass.spec.metrics
<sup><sup>[↩ Parent](#proxyclassspec)</sup></sup>
Configuration for proxy metrics. Metrics are currently not supported for egress proxies and for Ingress proxies that have been configured with tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation.
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
<th>Required</th>
</tr>
</thead>
<tbody><tr>
<td><b>enable</b></td>
<td>boolean</td>
<td>
Setting enable to true will make the proxy serve Tailscale metrics at <pod-ip>:9001/debug/metrics. Defaults to false.<br/>
</td>
<td>true</td> <td>true</td>
</tr></tbody> </tr></tbody>
</table> </table>

View File

@@ -52,7 +52,14 @@ type ProxyClassSpec struct {
    // Configuration parameters for the proxy's StatefulSet. Tailscale
    // Kubernetes operator deploys a StatefulSet for each of the user
    // configured proxies (Tailscale Ingress, Tailscale Service, Connector).
// +optional
    StatefulSet *StatefulSet `json:"statefulSet"`
// Configuration for proxy metrics. Metrics are currently not supported
// for egress proxies and for Ingress proxies that have been configured
// with tailscale.com/experimental-forward-cluster-traffic-via-ingress
// annotation.
// +optional
Metrics *Metrics `json:"metrics,omitempty"`
}

type StatefulSet struct {
@@ -131,6 +138,14 @@ type Pod struct {
    // https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling
    // +optional
    Tolerations []corev1.Toleration `json:"tolerations,omitempty"`
// +optional
}
type Metrics struct {
// Setting enable to true will make the proxy serve Tailscale metrics
// at <pod-ip>:9001/debug/metrics.
// Defaults to false.
Enable bool `json:"enable"`
}
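A short sketch of what enabling the new field looks like from Go; the import path and package alias below are assumptions about where these CRD types live, so treat this as illustrative rather than as the operator's actual API surface:

package main

import (
    "fmt"

    // Assumed import path for the operator's CRD types; adjust to the real package.
    tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
)

func main() {
    // Turn on the proxy's /debug/metrics endpoint via the ProxyClass spec.
    spec := tsapi.ProxyClassSpec{
        Metrics: &tsapi.Metrics{Enable: true},
    }
    fmt.Printf("metrics enabled: %v\n", spec.Metrics.Enable)
}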

type Container struct {

View File

@@ -178,6 +178,21 @@ func (in *Env) DeepCopy() *Env {
    return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Metrics) DeepCopyInto(out *Metrics) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Metrics.
func (in *Metrics) DeepCopy() *Metrics {
if in == nil {
return nil
}
out := new(Metrics)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Pod) DeepCopyInto(out *Pod) {
    *out = *in
@@ -313,6 +328,11 @@ func (in *ProxyClassSpec) DeepCopyInto(out *ProxyClassSpec) {
        *out = new(StatefulSet)
        (*in).DeepCopyInto(*out)
    }
if in.Metrics != nil {
in, out := &in.Metrics, &out.Metrics
*out = new(Metrics)
**out = **in
}
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ProxyClassSpec.

View File

@@ -17,6 +17,7 @@ import (
    "sync/atomic"
    "time"
"tailscale.com/health"
"tailscale.com/logpolicy" "tailscale.com/logpolicy"
"tailscale.com/logtail" "tailscale.com/logtail"
"tailscale.com/logtail/filch" "tailscale.com/logtail/filch"
@ -93,7 +94,7 @@ func SockstatLogID(logID logid.PublicID) logid.PrivateID {
// The returned Logger is not yet enabled, and must be shut down with Shutdown when it is no longer needed. // The returned Logger is not yet enabled, and must be shut down with Shutdown when it is no longer needed.
// Logs will be uploaded to the log server using a new log ID derived from the provided backend logID. // Logs will be uploaded to the log server using a new log ID derived from the provided backend logID.
// The netMon parameter is optional; if non-nil it's used to do faster interface lookups. // The netMon parameter is optional; if non-nil it's used to do faster interface lookups.
func NewLogger(logdir string, logf logger.Logf, logID logid.PublicID, netMon *netmon.Monitor) (*Logger, error) { func NewLogger(logdir string, logf logger.Logf, logID logid.PublicID, netMon *netmon.Monitor, health *health.Tracker) (*Logger, error) {
if !sockstats.IsAvailable { if !sockstats.IsAvailable {
return nil, nil return nil, nil
} }
@ -113,7 +114,7 @@ func NewLogger(logdir string, logf logger.Logf, logID logid.PublicID, netMon *ne
logger := &Logger{ logger := &Logger{
logf: logf, logf: logf,
filch: filch, filch: filch,
tr: logpolicy.NewLogtailTransport(logtail.DefaultHost, netMon, logf), tr: logpolicy.NewLogtailTransport(logtail.DefaultHost, netMon, health, logf),
} }
logger.logger = logtail.NewLogger(logtail.Config{ logger.logger = logtail.NewLogger(logtail.Config{
BaseURL: logpolicy.LogURL(), BaseURL: logpolicy.LogURL(),

View File

@@ -23,7 +23,7 @@ func TestResourceCleanup(t *testing.T) {
    if err != nil {
        t.Fatal(err)
    }
    lg, err := NewLogger(td, logger.Discard, id.Public(), nil, nil)
    if err != nil {
        t.Fatal(err)
    }

View File

@@ -30,6 +30,7 @@ import (
    "golang.org/x/term"
    "tailscale.com/atomicfile"
    "tailscale.com/envknob"
"tailscale.com/health"
"tailscale.com/log/filelogger" "tailscale.com/log/filelogger"
"tailscale.com/logtail" "tailscale.com/logtail"
"tailscale.com/logtail/filch" "tailscale.com/logtail/filch"
@ -452,13 +453,13 @@ func tryFixLogStateLocation(dir, cmdname string, logf logger.Logf) {
// The logf parameter is optional; if non-nil, information logs (e.g. when // The logf parameter is optional; if non-nil, information logs (e.g. when
// migrating state) are sent to that logger, and global changes to the log // migrating state) are sent to that logger, and global changes to the log
// package are avoided. If nil, logs will be printed using log.Printf. // package are avoided. If nil, logs will be printed using log.Printf.
func New(collection string, netMon *netmon.Monitor, logf logger.Logf) *Policy { func New(collection string, netMon *netmon.Monitor, health *health.Tracker, logf logger.Logf) *Policy {
return NewWithConfigPath(collection, "", "", netMon, logf) return NewWithConfigPath(collection, "", "", netMon, health, logf)
} }
// NewWithConfigPath is identical to New, but uses the specified directory and // NewWithConfigPath is identical to New, but uses the specified directory and
// command name. If either is empty, it derives them automatically. // command name. If either is empty, it derives them automatically.
func NewWithConfigPath(collection, dir, cmdName string, netMon *netmon.Monitor, logf logger.Logf) *Policy { func NewWithConfigPath(collection, dir, cmdName string, netMon *netmon.Monitor, health *health.Tracker, logf logger.Logf) *Policy {
var lflags int var lflags int
if term.IsTerminal(2) || runtime.GOOS == "windows" { if term.IsTerminal(2) || runtime.GOOS == "windows" {
lflags = 0 lflags = 0
@ -554,7 +555,7 @@ func NewWithConfigPath(collection, dir, cmdName string, netMon *netmon.Monitor,
PrivateID: newc.PrivateID, PrivateID: newc.PrivateID,
Stderr: logWriter{console}, Stderr: logWriter{console},
CompressLogs: true, CompressLogs: true,
HTTPC: &http.Client{Transport: NewLogtailTransport(logtail.DefaultHost, netMon, logf)}, HTTPC: &http.Client{Transport: NewLogtailTransport(logtail.DefaultHost, netMon, health, logf)},
} }
if collection == logtail.CollectionNode { if collection == logtail.CollectionNode {
conf.MetricsDelta = clientmetric.EncodeLogTailMetricsDelta conf.MetricsDelta = clientmetric.EncodeLogTailMetricsDelta
@ -569,7 +570,7 @@ func NewWithConfigPath(collection, dir, cmdName string, netMon *netmon.Monitor,
logf("You have enabled a non-default log target. Doing without being told to by Tailscale staff or your network administrator will make getting support difficult.") logf("You have enabled a non-default log target. Doing without being told to by Tailscale staff or your network administrator will make getting support difficult.")
conf.BaseURL = val conf.BaseURL = val
u, _ := url.Parse(val) u, _ := url.Parse(val)
conf.HTTPC = &http.Client{Transport: NewLogtailTransport(u.Host, netMon, logf)} conf.HTTPC = &http.Client{Transport: NewLogtailTransport(u.Host, netMon, health, logf)}
} }
filchOptions := filch.Options{ filchOptions := filch.Options{
@ -741,7 +742,7 @@ func dialContext(ctx context.Context, netw, addr string, netMon *netmon.Monitor,
// //
// The logf parameter is optional; if non-nil, logs are printed using the // The logf parameter is optional; if non-nil, logs are printed using the
// provided function; if nil, log.Printf will be used instead. // provided function; if nil, log.Printf will be used instead.
func NewLogtailTransport(host string, netMon *netmon.Monitor, logf logger.Logf) http.RoundTripper { func NewLogtailTransport(host string, netMon *netmon.Monitor, health *health.Tracker, logf logger.Logf) http.RoundTripper {
if testenv.InTest() { if testenv.InTest() {
return noopPretendSuccessTransport{} return noopPretendSuccessTransport{}
} }
@ -782,7 +783,7 @@ func NewLogtailTransport(host string, netMon *netmon.Monitor, logf logger.Logf)
tr.TLSNextProto = map[string]func(authority string, c *tls.Conn) http.RoundTripper{} tr.TLSNextProto = map[string]func(authority string, c *tls.Conn) http.RoundTripper{}
} }
tr.TLSClientConfig = tlsdial.Config(host, tr.TLSClientConfig) tr.TLSClientConfig = tlsdial.Config(host, health, tr.TLSClientConfig)
return tr return tr
} }
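For code outside this package, the widened signature is used roughly as below; a minimal sketch where the nil netMon and nil health tracker are assumptions made for brevity (the doc comments above only promise that netMon and logf are optional):

package main

import (
    "net/http"

    "tailscale.com/logpolicy"
    "tailscale.com/logtail"
    "tailscale.com/types/logger"
)

func main() {
    // Build an HTTP client whose transport ships logs to the default logtail host.
    httpc := &http.Client{
        Transport: logpolicy.NewLogtailTransport(logtail.DefaultHost, nil, nil, logger.Discard),
    }
    _ = httpc
}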

View File

@@ -21,6 +21,7 @@ import (
    "sync"
    "time"
"tailscale.com/health"
"tailscale.com/net/dns/resolvconffile" "tailscale.com/net/dns/resolvconffile"
"tailscale.com/net/tsaddr" "tailscale.com/net/tsaddr"
"tailscale.com/types/logger" "tailscale.com/types/logger"
@ -116,8 +117,9 @@ func restartResolved() error {
// The caller must call Down before program shutdown // The caller must call Down before program shutdown
// or as cleanup if the program terminates unexpectedly. // or as cleanup if the program terminates unexpectedly.
type directManager struct { type directManager struct {
logf logger.Logf logf logger.Logf
fs wholeFileFS health *health.Tracker
fs wholeFileFS
// renameBroken is set if fs.Rename to or from /etc/resolv.conf // renameBroken is set if fs.Rename to or from /etc/resolv.conf
// fails. This can happen in some container runtimes, where // fails. This can happen in some container runtimes, where
// /etc/resolv.conf is bind-mounted from outside the container, // /etc/resolv.conf is bind-mounted from outside the container,
@ -140,14 +142,15 @@ type directManager struct {
} }
//lint:ignore U1000 used in manager_{freebsd,openbsd}.go //lint:ignore U1000 used in manager_{freebsd,openbsd}.go
func newDirectManager(logf logger.Logf) *directManager { func newDirectManager(logf logger.Logf, health *health.Tracker) *directManager {
return newDirectManagerOnFS(logf, directFS{}) return newDirectManagerOnFS(logf, health, directFS{})
} }
func newDirectManagerOnFS(logf logger.Logf, fs wholeFileFS) *directManager { func newDirectManagerOnFS(logf logger.Logf, health *health.Tracker, fs wholeFileFS) *directManager {
ctx, cancel := context.WithCancel(context.Background()) ctx, cancel := context.WithCancel(context.Background())
m := &directManager{ m := &directManager{
logf: logf, logf: logf,
health: health,
fs: fs, fs: fs,
ctx: ctx, ctx: ctx,
ctxClose: cancel, ctxClose: cancel,

View File

@@ -78,7 +78,7 @@ func (m *directManager) checkForFileTrample() {
        return
    }
    if bytes.Equal(cur, want) {
        m.health.SetWarnable(warnTrample, nil)
        if lastWarn != nil {
            m.mu.Lock()
            m.lastWarnContents = nil
@@ -101,7 +101,7 @@
        show = show[:1024]
    }
    m.logf("trample: resolv.conf changed from what we expected. did some other program interfere? current contents: %q", show)
    m.health.SetWarnable(warnTrample, errors.New("Linux DNS config not ideal. /etc/resolv.conf overwritten. See https://tailscale.com/s/dns-fight"))
}

func (m *directManager) closeInotifyOnDone(ctx context.Context, in *gonotify.Inotify) {

View File

@@ -42,7 +42,8 @@ const maxActiveQueries = 256

// Manager manages system DNS settings.
type Manager struct {
    logf   logger.Logf
    health *health.Tracker

    activeQueriesAtomic int32
@@ -55,7 +56,7 @@

// NewManager creates a new manager from the given config.
// The netMon parameter is optional; if non-nil it's used to do faster interface lookups.
func NewManager(logf logger.Logf, oscfg OSConfigurator, netMon *netmon.Monitor, health *health.Tracker, dialer *tsdial.Dialer, linkSel resolver.ForwardLinkSelector, knobs *controlknobs.Knobs) *Manager {
    if dialer == nil {
        panic("nil Dialer")
    }
@@ -64,6 +65,7 @@
        logf:     logf,
        resolver: resolver.New(logf, netMon, linkSel, dialer, knobs),
        os:       oscfg,
        health:   health,
    }
    m.ctx, m.ctxCancel = context.WithCancel(context.Background())
    m.logf("using %T", m.os)
@@ -94,10 +96,10 @@ func (m *Manager) Set(cfg Config) error {
        return err
    }
    if err := m.os.SetDNS(ocfg); err != nil {
        m.health.SetDNSOSHealth(err)
        return err
    }

    m.health.SetDNSOSHealth(nil)
    return nil
}
@@ -248,7 +250,7 @@ func (m *Manager) compileConfig(cfg Config) (rcfg resolver.Config, ocfg OSConfig
        // This is currently (2022-10-13) expected on certain iOS and macOS
        // builds.
    } else {
        m.health.SetDNSOSHealth(err)
        return resolver.Config{}, OSConfig{}, err
    }
}
@@ -453,12 +455,12 @@ func (m *Manager) FlushCaches() error {
// in case the Tailscale daemon terminated without closing the router.
// No other state needs to be instantiated before this runs.
func CleanUp(logf logger.Logf, interfaceName string) {
    oscfg, err := NewOSConfigurator(logf, nil, interfaceName)
    if err != nil {
        logf("creating dns cleanup: %v", err)
        return
    }
    dns := NewManager(logf, oscfg, nil, nil, &tsdial.Dialer{Logf: logf}, nil, nil)
    if err := dns.Down(); err != nil {
        logf("dns down: %v", err)
    }

View File

@@ -8,11 +8,12 @@ import (
    "os"

    "go4.org/mem"
    "tailscale.com/health"
    "tailscale.com/types/logger"
    "tailscale.com/util/mak"
)

func NewOSConfigurator(logf logger.Logf, health *health.Tracker, ifName string) (OSConfigurator, error) {
    return &darwinConfigurator{logf: logf, ifName: ifName}, nil
}

View File

@@ -5,11 +5,11 @@
package dns

import (
    "tailscale.com/health"
    "tailscale.com/types/logger"
)

func NewOSConfigurator(logger.Logf, *health.Tracker, string) (OSConfigurator, error) {
// TODO(dmytro): on darwin, we should use a macOS-specific method such as scutil.
// This is currently not implemented. Editing /etc/resolv.conf does not work,
// as most applications use the system resolver, which disregards it.
    return NewNoopManager()
}

View File

@@ -7,13 +7,14 @@ import (
    "fmt"
    "os"

    "tailscale.com/health"
    "tailscale.com/types/logger"
)

func NewOSConfigurator(logf logger.Logf, health *health.Tracker, _ string) (OSConfigurator, error) {
    bs, err := os.ReadFile("/etc/resolv.conf")
    if os.IsNotExist(err) {
        return newDirectManager(logf, health), nil
    }
    if err != nil {
        return nil, fmt.Errorf("reading /etc/resolv.conf: %w", err)
@@ -23,16 +24,16 @@ func NewOSConfigurator(logf logger.Logf, _ string) (OSConfigurator, error) {
    case "resolvconf":
        switch resolvconfStyle() {
        case "":
            return newDirectManager(logf, health), nil
        case "debian":
            return newDebianResolvconfManager(logf)
        case "openresolv":
            return newOpenresolvManager(logf)
        default:
            logf("[unexpected] got unknown flavor of resolvconf %q, falling back to direct manager", resolvconfStyle())
            return newDirectManager(logf, health), nil
        }
    default:
        return newDirectManager(logf, health), nil
    }
}

View File

@@ -31,7 +31,7 @@ func (kv kv) String() string {

var publishOnce sync.Once

func NewOSConfigurator(logf logger.Logf, health *health.Tracker, interfaceName string) (ret OSConfigurator, err error) {
    env := newOSConfigEnv{
        fs:       directFS{},
        dbusPing: dbusPing,
@@ -40,7 +40,7 @@ func NewOSConfigurator(logf logger.Logf, interfaceName string) (ret OSConfigurat
        nmVersionBetween: nmVersionBetween,
        resolvconfStyle:  resolvconfStyle,
    }
    mode, err := dnsMode(logf, health, env)
    if err != nil {
        return nil, err
    }
@@ -52,9 +52,9 @@ func NewOSConfigurator(logf logger.Logf, interfaceName string) (ret OSConfigurat
    logf("dns: using %q mode", mode)
    switch mode {
    case "direct":
        return newDirectManagerOnFS(logf, health, env.fs), nil
    case "systemd-resolved":
        return newResolvedManager(logf, health, interfaceName)
    case "network-manager":
        return newNMManager(interfaceName)
    case "debian-resolvconf":
@@ -63,7 +63,7 @@ func NewOSConfigurator(logf logger.Logf, interfaceName string) (ret OSConfigurat
        return newOpenresolvManager(logf)
    default:
        logf("[unexpected] detected unknown DNS mode %q, using direct manager as last resort", mode)
        return newDirectManagerOnFS(logf, health, env.fs), nil
    }
}
@@ -77,7 +77,7 @@ type newOSConfigEnv struct {
    resolvconfStyle func() string
}

func dnsMode(logf logger.Logf, health *health.Tracker, env newOSConfigEnv) (ret string, err error) {
    var debug []kv
    dbg := func(k, v string) {
        debug = append(debug, kv{k, v})

View File

@ -286,7 +286,7 @@ func TestLinuxDNSMode(t *testing.T) {
for _, tt := range tests { for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) { t.Run(tt.name, func(t *testing.T) {
var logBuf tstest.MemLogger var logBuf tstest.MemLogger
got, err := dnsMode(logBuf.Logf, tt.env) got, err := dnsMode(logBuf.Logf, nil, tt.env)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
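The test passes nil for the tracker, which only works if the tracker's methods tolerate a nil receiver. An illustrative reduction of that pattern (a stand-in type, not the actual health.Tracker source):

```go
package example

import "sync"

// Tracker is a stand-in for the real health.Tracker; its methods no-op on a
// nil receiver so call sites and tests may pass nil.
type Tracker struct {
	mu      sync.Mutex
	lastErr error
}

func (t *Tracker) SetDNSOSHealth(err error) {
	if t == nil {
		return // nil tracker: health reporting is a no-op
	}
	t.mu.Lock()
	defer t.mu.Unlock()
	t.lastErr = err
}
```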

View File

@ -8,6 +8,7 @@ import (
"fmt" "fmt"
"os" "os"
"tailscale.com/health"
"tailscale.com/types/logger" "tailscale.com/types/logger"
) )
@ -19,8 +20,8 @@ func (kv kv) String() string {
return fmt.Sprintf("%s=%s", kv.k, kv.v) return fmt.Sprintf("%s=%s", kv.k, kv.v)
} }
func NewOSConfigurator(logf logger.Logf, interfaceName string) (OSConfigurator, error) { func NewOSConfigurator(logf logger.Logf, health *health.Tracker, interfaceName string) (OSConfigurator, error) {
return newOSConfigurator(logf, interfaceName, return newOSConfigurator(logf, health, interfaceName,
newOSConfigEnv{ newOSConfigEnv{
rcIsResolvd: rcIsResolvd, rcIsResolvd: rcIsResolvd,
fs: directFS{}, fs: directFS{},
@ -33,7 +34,7 @@ type newOSConfigEnv struct {
rcIsResolvd func(resolvConfContents []byte) bool rcIsResolvd func(resolvConfContents []byte) bool
} }
func newOSConfigurator(logf logger.Logf, interfaceName string, env newOSConfigEnv) (ret OSConfigurator, err error) { func newOSConfigurator(logf logger.Logf, health *health.Tracker, interfaceName string, env newOSConfigEnv) (ret OSConfigurator, err error) {
var debug []kv var debug []kv
dbg := func(k, v string) { dbg := func(k, v string) {
debug = append(debug, kv{k, v}) debug = append(debug, kv{k, v})
@ -48,7 +49,7 @@ func newOSConfigurator(logf logger.Logf, interfaceName string, env newOSConfigEn
bs, err := env.fs.ReadFile(resolvConf) bs, err := env.fs.ReadFile(resolvConf)
if os.IsNotExist(err) { if os.IsNotExist(err) {
dbg("rc", "missing") dbg("rc", "missing")
return newDirectManager(logf), nil return newDirectManager(logf, health), nil
} }
if err != nil { if err != nil {
return nil, fmt.Errorf("reading /etc/resolv.conf: %w", err) return nil, fmt.Errorf("reading /etc/resolv.conf: %w", err)
@ -60,7 +61,7 @@ func newOSConfigurator(logf logger.Logf, interfaceName string, env newOSConfigEn
} }
dbg("resolvd", "missing") dbg("resolvd", "missing")
return newDirectManager(logf), nil return newDirectManager(logf, health), nil
} }
func rcIsResolvd(resolvConfContents []byte) bool { func rcIsResolvd(resolvConfContents []byte) bool {

View File

@ -87,7 +87,7 @@ func TestDNSOverTCP(t *testing.T) {
SearchDomains: fqdns("coffee.shop"), SearchDomains: fqdns("coffee.shop"),
}, },
} }
m := NewManager(t.Logf, &f, nil, new(tsdial.Dialer), nil, nil) m := NewManager(t.Logf, &f, nil, nil, new(tsdial.Dialer), nil, nil)
m.resolver.TestOnlySetHook(f.SetResolver) m.resolver.TestOnlySetHook(f.SetResolver)
m.Set(Config{ m.Set(Config{
Hosts: hosts( Hosts: hosts(
@ -172,7 +172,7 @@ func TestDNSOverTCP_TooLarge(t *testing.T) {
SearchDomains: fqdns("coffee.shop"), SearchDomains: fqdns("coffee.shop"),
}, },
} }
m := NewManager(log, &f, nil, new(tsdial.Dialer), nil, nil) m := NewManager(log, &f, nil, nil, new(tsdial.Dialer), nil, nil)
m.resolver.TestOnlySetHook(f.SetResolver) m.resolver.TestOnlySetHook(f.SetResolver)
m.Set(Config{ m.Set(Config{
Hosts: hosts("andrew.ts.com.", "1.2.3.4"), Hosts: hosts("andrew.ts.com.", "1.2.3.4"),

View File

@ -613,7 +613,7 @@ func TestManager(t *testing.T) {
SplitDNS: test.split, SplitDNS: test.split,
BaseConfig: test.bs, BaseConfig: test.bs,
} }
m := NewManager(t.Logf, &f, nil, new(tsdial.Dialer), nil, nil) m := NewManager(t.Logf, &f, nil, nil, new(tsdial.Dialer), nil, nil)
m.resolver.TestOnlySetHook(f.SetResolver) m.resolver.TestOnlySetHook(f.SetResolver)
if err := m.Set(test.in); err != nil { if err := m.Set(test.in); err != nil {

View File

@ -23,6 +23,7 @@ import (
"golang.zx2c4.com/wireguard/windows/tunnel/winipcfg" "golang.zx2c4.com/wireguard/windows/tunnel/winipcfg"
"tailscale.com/atomicfile" "tailscale.com/atomicfile"
"tailscale.com/envknob" "tailscale.com/envknob"
"tailscale.com/health"
"tailscale.com/types/logger" "tailscale.com/types/logger"
"tailscale.com/util/dnsname" "tailscale.com/util/dnsname"
"tailscale.com/util/winutil" "tailscale.com/util/winutil"
@ -44,11 +45,11 @@ type windowsManager struct {
closing bool closing bool
} }
func NewOSConfigurator(logf logger.Logf, interfaceName string) (OSConfigurator, error) { func NewOSConfigurator(logf logger.Logf, health *health.Tracker, interfaceName string) (OSConfigurator, error) {
ret := &windowsManager{ ret := &windowsManager{
logf: logf, logf: logf,
guid: interfaceName, guid: interfaceName,
wslManager: newWSLManager(logf), wslManager: newWSLManager(logf, health),
} }
if isWindows10OrBetter() { if isWindows10OrBetter() {

View File

@ -84,7 +84,7 @@ func TestManagerWindowsGPCopy(t *testing.T) {
} }
defer delIfKey() defer delIfKey()
cfg, err := NewOSConfigurator(logf, fakeInterface.String()) cfg, err := NewOSConfigurator(logf, nil, fakeInterface.String())
if err != nil { if err != nil {
t.Fatalf("NewOSConfigurator: %v\n", err) t.Fatalf("NewOSConfigurator: %v\n", err)
} }
@ -213,7 +213,7 @@ func runTest(t *testing.T, isLocal bool) {
} }
defer delIfKey() defer delIfKey()
cfg, err := NewOSConfigurator(logf, fakeInterface.String()) cfg, err := NewOSConfigurator(logf, nil, fakeInterface.String())
if err != nil { if err != nil {
t.Fatalf("NewOSConfigurator: %v\n", err) t.Fatalf("NewOSConfigurator: %v\n", err)
} }

View File

@ -63,13 +63,14 @@ type resolvedManager struct {
ctx context.Context ctx context.Context
cancel func() // terminate the context, for close cancel func() // terminate the context, for close
logf logger.Logf logf logger.Logf
ifidx int health *health.Tracker
ifidx int
configCR chan changeRequest // tracks OSConfigs changes and error responses configCR chan changeRequest // tracks OSConfigs changes and error responses
} }
func newResolvedManager(logf logger.Logf, interfaceName string) (*resolvedManager, error) { func newResolvedManager(logf logger.Logf, health *health.Tracker, interfaceName string) (*resolvedManager, error) {
iface, err := net.InterfaceByName(interfaceName) iface, err := net.InterfaceByName(interfaceName)
if err != nil { if err != nil {
return nil, err return nil, err
@ -82,8 +83,9 @@ func newResolvedManager(logf logger.Logf, interfaceName string) (*resolvedManage
ctx: ctx, ctx: ctx,
cancel: cancel, cancel: cancel,
logf: logf, logf: logf,
ifidx: iface.Index, health: health,
ifidx: iface.Index,
configCR: make(chan changeRequest), configCR: make(chan changeRequest),
} }
@ -163,7 +165,7 @@ func (m *resolvedManager) run(ctx context.Context) {
// Reset backoff and SetDNSOSHealth after a successful reconnect. // Reset backoff and SetDNSOSHealth after a successful reconnect.
bo.BackOff(ctx, nil) bo.BackOff(ctx, nil)
health.SetDNSOSHealth(nil) m.health.SetDNSOSHealth(nil)
return nil return nil
} }
@ -241,7 +243,7 @@ func (m *resolvedManager) run(ctx context.Context) {
// Set health while holding the lock, because this will // Set health while holding the lock, because this will
// graciously serialize the resync's health outcome with a // graciously serialize the resync's health outcome with a
// concurrent SetDNS call. // concurrent SetDNS call.
health.SetDNSOSHealth(err) m.health.SetDNSOSHealth(err)
if err != nil { if err != nil {
m.logf("failed to configure systemd-resolved: %v", err) m.logf("failed to configure systemd-resolved: %v", err)
} }
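The comment above states the key invariant: the resync's health outcome is published while the config mutex is still held, so it cannot interleave with a concurrent SetDNS. A schematic version of that shape (not the resolvedManager source; the apply callback is a placeholder):

```go
package example

import (
	"sync"

	"tailscale.com/health"
)

type manager struct {
	mu sync.Mutex
	ht *health.Tracker // may be nil
}

// resync applies a new configuration and reports the outcome to the health
// tracker while still holding mu, serializing it with concurrent SetDNS calls.
func (m *manager) resync(apply func() error) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.ht.SetDNSOSHealth(apply())
}
```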

View File

@ -16,6 +16,7 @@ import (
"time" "time"
"golang.org/x/sys/windows" "golang.org/x/sys/windows"
"tailscale.com/health"
"tailscale.com/types/logger" "tailscale.com/types/logger"
"tailscale.com/util/winutil" "tailscale.com/util/winutil"
) )
@ -54,12 +55,14 @@ func wslDistros() ([]string, error) {
// wslManager is a DNS manager for WSL2 linux distributions. // wslManager is a DNS manager for WSL2 linux distributions.
// It configures /etc/wsl.conf and /etc/resolv.conf. // It configures /etc/wsl.conf and /etc/resolv.conf.
type wslManager struct { type wslManager struct {
logf logger.Logf logf logger.Logf
health *health.Tracker
} }
func newWSLManager(logf logger.Logf) *wslManager { func newWSLManager(logf logger.Logf, health *health.Tracker) *wslManager {
m := &wslManager{ m := &wslManager{
logf: logf, logf: logf,
health: health,
} }
return m return m
} }
@ -73,7 +76,7 @@ func (wm *wslManager) SetDNS(cfg OSConfig) error {
} }
managers := make(map[string]*directManager) managers := make(map[string]*directManager)
for _, distro := range distros { for _, distro := range distros {
managers[distro] = newDirectManagerOnFS(wm.logf, wslFS{ managers[distro] = newDirectManagerOnFS(wm.logf, wm.health, wslFS{
user: "root", user: "root",
distro: distro, distro: distro,
}) })

View File

@ -28,6 +28,7 @@ import (
"tailscale.com/atomicfile" "tailscale.com/atomicfile"
"tailscale.com/envknob" "tailscale.com/envknob"
"tailscale.com/health"
"tailscale.com/net/dns/recursive" "tailscale.com/net/dns/recursive"
"tailscale.com/net/netmon" "tailscale.com/net/netmon"
"tailscale.com/net/netns" "tailscale.com/net/netns"
@ -64,9 +65,10 @@ func MakeLookupFunc(logf logger.Logf, netMon *netmon.Monitor) func(ctx context.C
// fallbackResolver contains the state and configuration for a DNS resolution // fallbackResolver contains the state and configuration for a DNS resolution
// function. // function.
type fallbackResolver struct { type fallbackResolver struct {
logf logger.Logf logf logger.Logf
netMon *netmon.Monitor // or nil netMon *netmon.Monitor // or nil
sf singleflight.Group[string, resolveResult] healthTracker *health.Tracker // or nil
sf singleflight.Group[string, resolveResult]
// for tests // for tests
waitForCompare bool waitForCompare bool
@ -79,7 +81,7 @@ func (fr *fallbackResolver) Lookup(ctx context.Context, host string) ([]netip.Ad
// recursive resolver. (tailscale/corp#15261) In the future, we might // recursive resolver. (tailscale/corp#15261) In the future, we might
// change the default (the opt.Bool being unset) to mean enabled. // change the default (the opt.Bool being unset) to mean enabled.
if disableRecursiveResolver() || !optRecursiveResolver().EqualBool(true) { if disableRecursiveResolver() || !optRecursiveResolver().EqualBool(true) {
return lookup(ctx, host, fr.logf, fr.netMon) return lookup(ctx, host, fr.logf, fr.healthTracker, fr.netMon)
} }
addrsCh := make(chan []netip.Addr, 1) addrsCh := make(chan []netip.Addr, 1)
@ -99,7 +101,7 @@ func (fr *fallbackResolver) Lookup(ctx context.Context, host string) ([]netip.Ad
go fr.compareWithRecursive(ctx, addrsCh, host) go fr.compareWithRecursive(ctx, addrsCh, host)
} }
addrs, err := lookup(ctx, host, fr.logf, fr.netMon) addrs, err := lookup(ctx, host, fr.logf, fr.healthTracker, fr.netMon)
if err != nil { if err != nil {
addrsCh <- nil addrsCh <- nil
return nil, err return nil, err
@ -207,7 +209,7 @@ func (fr *fallbackResolver) compareWithRecursive(
} }
} }
func lookup(ctx context.Context, host string, logf logger.Logf, netMon *netmon.Monitor) ([]netip.Addr, error) { func lookup(ctx context.Context, host string, logf logger.Logf, ht *health.Tracker, netMon *netmon.Monitor) ([]netip.Addr, error) {
if ip, err := netip.ParseAddr(host); err == nil && ip.IsValid() { if ip, err := netip.ParseAddr(host); err == nil && ip.IsValid() {
return []netip.Addr{ip}, nil return []netip.Addr{ip}, nil
} }
@ -255,7 +257,7 @@ func lookup(ctx context.Context, host string, logf logger.Logf, netMon *netmon.M
logf("trying bootstrapDNS(%q, %q) for %q ...", cand.dnsName, cand.ip, host) logf("trying bootstrapDNS(%q, %q) for %q ...", cand.dnsName, cand.ip, host)
ctx, cancel := context.WithTimeout(ctx, 3*time.Second) ctx, cancel := context.WithTimeout(ctx, 3*time.Second)
defer cancel() defer cancel()
dm, err := bootstrapDNSMap(ctx, cand.dnsName, cand.ip, host, logf, netMon) dm, err := bootstrapDNSMap(ctx, cand.dnsName, cand.ip, host, logf, ht, netMon)
if err != nil { if err != nil {
logf("bootstrapDNS(%q, %q) for %q error: %v", cand.dnsName, cand.ip, host, err) logf("bootstrapDNS(%q, %q) for %q error: %v", cand.dnsName, cand.ip, host, err)
continue continue
@ -274,14 +276,16 @@ func lookup(ctx context.Context, host string, logf logger.Logf, netMon *netmon.M
// serverName and serverIP are, say, "derpN.tailscale.com". // serverName and serverIP are, say, "derpN.tailscale.com".
// queryName is the name being sought (e.g. "controlplane.tailscale.com"), passed as hint. // queryName is the name being sought (e.g. "controlplane.tailscale.com"), passed as hint.
func bootstrapDNSMap(ctx context.Context, serverName string, serverIP netip.Addr, queryName string, logf logger.Logf, netMon *netmon.Monitor) (dnsMap, error) { //
// ht may be nil.
func bootstrapDNSMap(ctx context.Context, serverName string, serverIP netip.Addr, queryName string, logf logger.Logf, ht *health.Tracker, netMon *netmon.Monitor) (dnsMap, error) {
dialer := netns.NewDialer(logf, netMon) dialer := netns.NewDialer(logf, netMon)
tr := http.DefaultTransport.(*http.Transport).Clone() tr := http.DefaultTransport.(*http.Transport).Clone()
tr.Proxy = tshttpproxy.ProxyFromEnvironment tr.Proxy = tshttpproxy.ProxyFromEnvironment
tr.DialContext = func(ctx context.Context, netw, addr string) (net.Conn, error) { tr.DialContext = func(ctx context.Context, netw, addr string) (net.Conn, error) {
return dialer.DialContext(ctx, "tcp", net.JoinHostPort(serverIP.String(), "443")) return dialer.DialContext(ctx, "tcp", net.JoinHostPort(serverIP.String(), "443"))
} }
tr.TLSClientConfig = tlsdial.Config(serverName, tr.TLSClientConfig) tr.TLSClientConfig = tlsdial.Config(serverName, ht, tr.TLSClientConfig)
c := &http.Client{Transport: tr} c := &http.Client{Transport: tr}
req, err := http.NewRequestWithContext(ctx, "GET", "https://"+serverName+"/bootstrap-dns?q="+url.QueryEscape(queryName), nil) req, err := http.NewRequestWithContext(ctx, "GET", "https://"+serverName+"/bootstrap-dns?q="+url.QueryEscape(queryName), nil)
if err != nil { if err != nil {

View File

@ -6,6 +6,7 @@ package netutil
import ( import (
"bytes" "bytes"
"errors"
"fmt" "fmt"
"net/netip" "net/netip"
"os" "os"
@ -145,8 +146,6 @@ func CheckIPForwarding(routes []netip.Prefix, state *interfaces.State) (warn, er
// disabled or set to 'loose' mode for exit node functionality on any // disabled or set to 'loose' mode for exit node functionality on any
// interface. // interface.
// //
// The state param can be nil, in which case interfaces.GetState is used.
//
// The routes should only be advertised routes, and should not contain the // The routes should only be advertised routes, and should not contain the
// node's Tailscale IPs. // node's Tailscale IPs.
// //
@ -159,11 +158,7 @@ func CheckReversePathFiltering(state *interfaces.State) (warn []string, err erro
} }
if state == nil { if state == nil {
var err error return nil, errors.New("no link state")
state, err = interfaces.GetState()
if err != nil {
return nil, err
}
} }
// The kernel uses the maximum value for rp_filter between the 'all' // The kernel uses the maximum value for rp_filter between the 'all'

View File

@ -8,6 +8,8 @@ import (
"net" "net"
"runtime" "runtime"
"testing" "testing"
"tailscale.com/net/netmon"
) )
type conn struct { type conn struct {
@ -70,7 +72,13 @@ func TestCheckReversePathFiltering(t *testing.T) {
if runtime.GOOS != "linux" { if runtime.GOOS != "linux" {
t.Skipf("skipping on %s", runtime.GOOS) t.Skipf("skipping on %s", runtime.GOOS)
} }
warn, err := CheckReversePathFiltering(nil) netMon, err := netmon.New(t.Logf)
if err != nil {
t.Fatal(err)
}
defer netMon.Close()
warn, err := CheckReversePathFiltering(netMon.InterfaceState())
t.Logf("err: %v", err) t.Logf("err: %v", err)
t.Logf("warnings: %v", warn) t.Logf("warnings: %v", warn)
} }

View File

@ -46,7 +46,8 @@ var tlsdialWarningPrinted sync.Map // map[string]bool
// Config returns a tls.Config for connecting to a server. // Config returns a tls.Config for connecting to a server.
// If base is non-nil, it's cloned as the base config before // If base is non-nil, it's cloned as the base config before
// being configured and returned. // being configured and returned.
func Config(host string, base *tls.Config) *tls.Config { // If ht is non-nil, it's used to report health errors.
func Config(host string, ht *health.Tracker, base *tls.Config) *tls.Config {
var conf *tls.Config var conf *tls.Config
if base == nil { if base == nil {
conf = new(tls.Config) conf = new(tls.Config)
@ -78,12 +79,14 @@ func Config(host string, base *tls.Config) *tls.Config {
conf.VerifyConnection = func(cs tls.ConnectionState) error { conf.VerifyConnection = func(cs tls.ConnectionState) error {
// Perform some health checks on this certificate before we do // Perform some health checks on this certificate before we do
// any verification. // any verification.
if certIsSelfSigned(cs.PeerCertificates[0]) { if ht != nil {
// Self-signed certs are never valid. if certIsSelfSigned(cs.PeerCertificates[0]) {
health.SetTLSConnectionError(cs.ServerName, fmt.Errorf("certificate is self-signed")) // Self-signed certs are never valid.
} else { ht.SetTLSConnectionError(cs.ServerName, fmt.Errorf("certificate is self-signed"))
// Ensure we clear any error state for this ServerName. } else {
health.SetTLSConnectionError(cs.ServerName, nil) // Ensure we clear any error state for this ServerName.
ht.SetTLSConnectionError(cs.ServerName, nil)
}
} }
// First try doing x509 verification with the system's // First try doing x509 verification with the system's
@ -204,7 +207,7 @@ func NewTransport() *http.Transport {
return nil, err return nil, err
} }
var d tls.Dialer var d tls.Dialer
d.Config = Config(host, nil) d.Config = Config(host, nil, nil)
return d.DialContext(ctx, network, addr) return d.DialContext(ctx, network, addr)
}, },
} }
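For callers outside this file, the new tlsdial.Config signature means passing a tracker (or nil) alongside the hostname. A small sketch based on the signature shown above; the host name is illustrative:

```go
package main

import (
	"crypto/tls"
	"log"

	"tailscale.com/health"
	"tailscale.com/net/tlsdial"
)

func main() {
	ht := new(health.Tracker)
	// ht may also be nil, in which case the self-signed-certificate health
	// reporting in VerifyConnection is skipped.
	conf := tlsdial.Config("controlplane.tailscale.com", ht, nil)

	d := tls.Dialer{Config: conf}
	conn, err := d.Dial("tcp", "controlplane.tailscale.com:443")
	if err != nil {
		log.Fatal(err)
	}
	conn.Close()
}
```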

View File

@ -15,6 +15,8 @@ import (
"runtime" "runtime"
"sync/atomic" "sync/atomic"
"testing" "testing"
"tailscale.com/health"
) )
func resetOnce() { func resetOnce() {
@ -105,7 +107,8 @@ func TestFallbackRootWorks(t *testing.T) {
}, },
DisableKeepAlives: true, // for test cleanup ease DisableKeepAlives: true, // for test cleanup ease
} }
tr.TLSClientConfig = Config("tlsdial.test", tr.TLSClientConfig) ht := new(health.Tracker)
tr.TLSClientConfig = Config("tlsdial.test", ht, tr.TLSClientConfig)
c := &http.Client{Transport: tr} c := &http.Client{Transport: tr}
ctr0 := atomic.LoadInt32(&counterFallbackOK) ctr0 := atomic.LoadInt32(&counterFallbackOK)

View File

@ -70,12 +70,14 @@
package safeweb package safeweb
import ( import (
"cmp"
crand "crypto/rand" crand "crypto/rand"
"fmt" "fmt"
"log" "log"
"net" "net"
"net/http" "net/http"
"net/url" "net/url"
"path"
"strings" "strings"
"github.com/gorilla/csrf" "github.com/gorilla/csrf"
@ -195,6 +197,30 @@ func NewServer(config Config) (*Server, error) {
return s, nil return s, nil
} }
type handlerType int
const (
unknownHandler handlerType = iota
apiHandler
browserHandler
)
// checkHandlerType returns either apiHandler or browserHandler, depending on
// whether apiPattern or browserPattern is more specific (i.e. which pattern
// contains more pathname components). If they are equally specific, it returns
// unknownHandler.
func checkHandlerType(apiPattern, browserPattern string) handlerType {
c := cmp.Compare(strings.Count(path.Clean(apiPattern), "/"), strings.Count(path.Clean(browserPattern), "/"))
switch {
case c > 0:
return apiHandler
case c < 0:
return browserHandler
default:
return unknownHandler
}
}
func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) { func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
_, bp := s.BrowserMux.Handler(r) _, bp := s.BrowserMux.Handler(r)
_, ap := s.APIMux.Handler(r) _, ap := s.APIMux.Handler(r)
@ -206,24 +232,25 @@ func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
case bp == "" && ap == "": // neither match case bp == "" && ap == "": // neither match
http.NotFound(w, r) http.NotFound(w, r)
case bp != "" && ap != "": case bp != "" && ap != "":
// Both muxes match the path. This can be because: // Both muxes match the path. Route to the more-specific handler (as
// * one of them registers a wildcard "/" handler // determined by the number of components in the path). If it somehow
// * there are overlapping specific handlers // happens that both patterns are equally specific, something strange
// has happened; say so.
// //
// If it's the former, route to the more-specific handler. If it's the // NOTE: checkHandlerType does not know about what the serve* handlers
// latter - that's a bug so return an error to avoid mis-routing the // will do — including, possibly, redirecting to more specific patterns.
// request. // If you have a less-specific pattern that redirects to something more
// // specific, this logic will not do what you wanted.
// TODO(awly): match the longest path instead of only special-casing handler := checkHandlerType(ap, bp)
// "/". switch handler {
switch { case apiHandler:
case bp == "/":
s.serveAPI(w, r) s.serveAPI(w, r)
case ap == "/": case browserHandler:
s.serveBrowser(w, r) s.serveBrowser(w, r)
default: default:
log.Printf("conflicting mux paths in safeweb: request %q matches browser mux pattern %q and API mux patter %q; returning 500", r.URL.Path, bp, ap) s := http.StatusInternalServerError
http.Error(w, "multiple handlers match this request", http.StatusInternalServerError) log.Printf("conflicting mux paths in safeweb: request %q matches browser mux pattern %q and API mux pattern %q; returning %d", r.URL.Path, bp, ap, s)
http.Error(w, "multiple handlers match this request", s)
} }
} }
} }
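The effect of the new specificity rule, end to end: with a browser handler at /foo/ and an API handler at /foo/bar/, a request for /foo/bar/baz is now routed to the API mux instead of returning a 500. A sketch, assuming safeweb.Config exposes APIMux and BrowserMux as referenced in ServeHTTP above and that NewServer needs no other fields for a plain GET:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"net/http/httptest"

	"tailscale.com/safeweb"
)

func main() {
	browser := http.NewServeMux()
	browser.HandleFunc("/foo/", func(w http.ResponseWriter, r *http.Request) { fmt.Fprint(w, "browser") })
	api := http.NewServeMux()
	api.HandleFunc("/foo/bar/", func(w http.ResponseWriter, r *http.Request) { fmt.Fprint(w, "api") })

	srv, err := safeweb.NewServer(safeweb.Config{BrowserMux: browser, APIMux: api})
	if err != nil {
		log.Fatal(err)
	}

	rec := httptest.NewRecorder()
	srv.ServeHTTP(rec, httptest.NewRequest("GET", "/foo/bar/baz", nil))
	fmt.Println(rec.Body.String()) // "api": /foo/bar/ has more path components than /foo/
}
```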

View File

@ -447,7 +447,7 @@ func TestRouting(t *testing.T) {
browserPatterns: []string{"/foo/"}, browserPatterns: []string{"/foo/"},
apiPatterns: []string{"/foo/bar/"}, apiPatterns: []string{"/foo/bar/"},
requestPath: "/foo/bar/baz", requestPath: "/foo/bar/baz",
want: "multiple handlers match this request", want: "api",
}, },
{ {
desc: "no match", desc: "no match",
@ -488,3 +488,68 @@ func TestRouting(t *testing.T) {
}) })
} }
} }
func TestGetMoreSpecificPattern(t *testing.T) {
for _, tt := range []struct {
desc string
a string
b string
want handlerType
}{
{
desc: "identical",
a: "/foo/bar",
b: "/foo/bar",
want: unknownHandler,
},
{
desc: "identical prefix",
a: "/foo/bar/",
b: "/foo/bar/",
want: unknownHandler,
},
{
desc: "trailing slash",
a: "/foo",
b: "/foo/", // path.Clean will strip the trailing slash.
want: unknownHandler,
},
{
desc: "same prefix",
a: "/foo/bar/quux",
b: "/foo/bar/",
want: apiHandler,
},
{
desc: "almost same prefix, but not a path component",
a: "/goat/sheep/cheese",
b: "/goat/sheepcheese/",
want: apiHandler,
},
{
desc: "attempt to make less-specific pattern look more specific",
a: "/goat/cat/buddy",
b: "/goat/../../../../../../../cat", // path.Clean catches this foolishness
want: apiHandler,
},
{
desc: "2 names for / (1)",
a: "/",
b: "/../../../../../../",
want: unknownHandler,
},
{
desc: "2 names for / (2)",
a: "/",
b: "///////",
want: unknownHandler,
},
} {
t.Run(tt.desc, func(t *testing.T) {
got := checkHandlerType(tt.a, tt.b)
if got != tt.want {
t.Errorf("got %q, want %q", got, tt.want)
}
})
}
}
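The table above boils down to one metric: count path separators after path.Clean. A standalone check of that metric (the specificity helper here is hypothetical and mirrors what checkHandlerType computes for each pattern):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

func specificity(pattern string) int {
	return strings.Count(path.Clean(pattern), "/")
}

func main() {
	fmt.Println(specificity("/foo/bar/"))                      // 2
	fmt.Println(specificity("/foo/"))                          // 1
	fmt.Println(specificity("/goat/../../../../../../../cat")) // 1: path.Clean collapses the ".." trickery to "/cat"
	fmt.Println(specificity("///////"))                        // 1: cleans to "/", same as the root pattern
}
```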

View File

@ -2250,6 +2250,12 @@ const (
// NodeAttrLogExitFlows enables exit node destinations in network flow logs. // NodeAttrLogExitFlows enables exit node destinations in network flow logs.
NodeAttrLogExitFlows NodeCapability = "log-exit-flows" NodeAttrLogExitFlows NodeCapability = "log-exit-flows"
// NodeAttrAutoExitNode permits the automatic exit nodes feature.
NodeAttrAutoExitNode NodeCapability = "auto-exit-node"
// NodeAttrStoreAppCRoutes enables storing app connector routes persistently.
NodeAttrStoreAppCRoutes NodeCapability = "store-appc-routes"
) )
// SetDNSRequest is a request to add a DNS record. // SetDNSRequest is a request to add a DNS record.
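A sketch of how a feature might gate on the new capability; the hasCap callback is hypothetical and stands in for however the daemon exposes the node's capability set (for example, a netmap lookup):

```go
package main

import (
	"fmt"

	"tailscale.com/tailcfg"
)

// shouldStoreAppCRoutes reports whether discovered app connector routes
// should be persisted, based on the node's capabilities.
func shouldStoreAppCRoutes(hasCap func(tailcfg.NodeCapability) bool) bool {
	return hasCap(tailcfg.NodeAttrStoreAppCRoutes)
}

func main() {
	caps := map[tailcfg.NodeCapability]bool{tailcfg.NodeAttrStoreAppCRoutes: true}
	fmt.Println(shouldStoreAppCRoutes(func(c tailcfg.NodeCapability) bool { return caps[c] }))
}
```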

View File

@ -23,6 +23,7 @@ import (
"tailscale.com/control/controlknobs" "tailscale.com/control/controlknobs"
"tailscale.com/drive" "tailscale.com/drive"
"tailscale.com/health"
"tailscale.com/ipn" "tailscale.com/ipn"
"tailscale.com/ipn/conffile" "tailscale.com/ipn/conffile"
"tailscale.com/net/dns" "tailscale.com/net/dns"
@ -63,6 +64,8 @@ type System struct {
controlKnobs controlknobs.Knobs controlKnobs controlknobs.Knobs
proxyMap proxymap.Mapper proxyMap proxymap.Mapper
healthTracker health.Tracker
} }
// NetstackImpl is the interface that *netstack.Impl implements. // NetstackImpl is the interface that *netstack.Impl implements.
@ -134,6 +137,11 @@ func (s *System) ProxyMapper() *proxymap.Mapper {
return &s.proxyMap return &s.proxyMap
} }
// HealthTracker returns the system health tracker.
func (s *System) HealthTracker() *health.Tracker {
return &s.healthTracker
}
// SubSystem represents some subsystem of the Tailscale node daemon. // SubSystem represents some subsystem of the Tailscale node daemon.
// //
// A subsystem can be set to a value, and then later retrieved. A subsystem // A subsystem can be set to a value, and then later retrieved. A subsystem

View File

@ -31,6 +31,7 @@ import (
"tailscale.com/client/tailscale" "tailscale.com/client/tailscale"
"tailscale.com/control/controlclient" "tailscale.com/control/controlclient"
"tailscale.com/envknob" "tailscale.com/envknob"
"tailscale.com/health"
"tailscale.com/hostinfo" "tailscale.com/hostinfo"
"tailscale.com/ipn" "tailscale.com/ipn"
"tailscale.com/ipn/ipnlocal" "tailscale.com/ipn/ipnlocal"
@ -233,7 +234,7 @@ func (s *Server) Loopback() (addr string, proxyCred, localAPICred string, err er
// out the CONNECT code from tailscaled/proxy.go that uses // out the CONNECT code from tailscaled/proxy.go that uses
// httputil.ReverseProxy and adding auth support. // httputil.ReverseProxy and adding auth support.
go func() { go func() {
lah := localapi.NewHandler(s.lb, s.logf, s.netMon, s.logid) lah := localapi.NewHandler(s.lb, s.logf, s.logid)
lah.PermitWrite = true lah.PermitWrite = true
lah.PermitRead = true lah.PermitRead = true
lah.RequiredPassword = s.localAPICred lah.RequiredPassword = s.localAPICred
@ -504,7 +505,8 @@ func (s *Server) start() (reterr error) {
return fmt.Errorf("%v is not a directory", s.rootPath) return fmt.Errorf("%v is not a directory", s.rootPath)
} }
if err := s.startLogger(&closePool); err != nil { sys := new(tsd.System)
if err := s.startLogger(&closePool, sys.HealthTracker()); err != nil {
return err return err
} }
@ -514,14 +516,14 @@ func (s *Server) start() (reterr error) {
} }
closePool.add(s.netMon) closePool.add(s.netMon)
sys := new(tsd.System)
s.dialer = &tsdial.Dialer{Logf: logf} // mutated below (before used) s.dialer = &tsdial.Dialer{Logf: logf} // mutated below (before used)
eng, err := wgengine.NewUserspaceEngine(logf, wgengine.Config{ eng, err := wgengine.NewUserspaceEngine(logf, wgengine.Config{
ListenPort: s.Port, ListenPort: s.Port,
NetMon: s.netMon, NetMon: s.netMon,
Dialer: s.dialer, Dialer: s.dialer,
SetSubsystem: sys.Set, SetSubsystem: sys.Set,
ControlKnobs: sys.ControlKnobs(), ControlKnobs: sys.ControlKnobs(),
HealthTracker: sys.HealthTracker(),
}) })
if err != nil { if err != nil {
return err return err
@ -606,7 +608,7 @@ func (s *Server) start() (reterr error) {
go s.printAuthURLLoop() go s.printAuthURLLoop()
// Run the localapi handler, to allow fetching LetsEncrypt certs. // Run the localapi handler, to allow fetching LetsEncrypt certs.
lah := localapi.NewHandler(lb, logf, s.netMon, s.logid) lah := localapi.NewHandler(lb, logf, s.logid)
lah.PermitWrite = true lah.PermitWrite = true
lah.PermitRead = true lah.PermitRead = true
@ -626,7 +628,7 @@ func (s *Server) start() (reterr error) {
return nil return nil
} }
func (s *Server) startLogger(closePool *closeOnErrorPool) error { func (s *Server) startLogger(closePool *closeOnErrorPool, health *health.Tracker) error {
if testenv.InTest() { if testenv.InTest() {
return nil return nil
} }
@ -657,7 +659,7 @@ func (s *Server) startLogger(closePool *closeOnErrorPool) error {
Stderr: io.Discard, // log everything to Buffer Stderr: io.Discard, // log everything to Buffer
Buffer: s.logbuffer, Buffer: s.logbuffer,
CompressLogs: true, CompressLogs: true,
HTTPC: &http.Client{Transport: logpolicy.NewLogtailTransport(logtail.DefaultHost, s.netMon, s.logf)}, HTTPC: &http.Client{Transport: logpolicy.NewLogtailTransport(logtail.DefaultHost, s.netMon, health, s.logf)},
MetricsDelta: clientmetric.EncodeLogTailMetricsDelta, MetricsDelta: clientmetric.EncodeLogTailMetricsDelta,
} }
s.logtail = logtail.NewLogger(c, s.logf) s.logtail = logtail.NewLogger(c, s.logf)

View File

@ -17,6 +17,7 @@ import (
_ "tailscale.com/derp/derphttp" _ "tailscale.com/derp/derphttp"
_ "tailscale.com/drive/driveimpl" _ "tailscale.com/drive/driveimpl"
_ "tailscale.com/envknob" _ "tailscale.com/envknob"
_ "tailscale.com/health"
_ "tailscale.com/ipn" _ "tailscale.com/ipn"
_ "tailscale.com/ipn/conffile" _ "tailscale.com/ipn/conffile"
_ "tailscale.com/ipn/ipnlocal" _ "tailscale.com/ipn/ipnlocal"

View File

@ -17,6 +17,7 @@ import (
_ "tailscale.com/derp/derphttp" _ "tailscale.com/derp/derphttp"
_ "tailscale.com/drive/driveimpl" _ "tailscale.com/drive/driveimpl"
_ "tailscale.com/envknob" _ "tailscale.com/envknob"
_ "tailscale.com/health"
_ "tailscale.com/ipn" _ "tailscale.com/ipn"
_ "tailscale.com/ipn/conffile" _ "tailscale.com/ipn/conffile"
_ "tailscale.com/ipn/ipnlocal" _ "tailscale.com/ipn/ipnlocal"

View File

@ -17,6 +17,7 @@ import (
_ "tailscale.com/derp/derphttp" _ "tailscale.com/derp/derphttp"
_ "tailscale.com/drive/driveimpl" _ "tailscale.com/drive/driveimpl"
_ "tailscale.com/envknob" _ "tailscale.com/envknob"
_ "tailscale.com/health"
_ "tailscale.com/ipn" _ "tailscale.com/ipn"
_ "tailscale.com/ipn/conffile" _ "tailscale.com/ipn/conffile"
_ "tailscale.com/ipn/ipnlocal" _ "tailscale.com/ipn/ipnlocal"

View File

@ -17,6 +17,7 @@ import (
_ "tailscale.com/derp/derphttp" _ "tailscale.com/derp/derphttp"
_ "tailscale.com/drive/driveimpl" _ "tailscale.com/drive/driveimpl"
_ "tailscale.com/envknob" _ "tailscale.com/envknob"
_ "tailscale.com/health"
_ "tailscale.com/ipn" _ "tailscale.com/ipn"
_ "tailscale.com/ipn/conffile" _ "tailscale.com/ipn/conffile"
_ "tailscale.com/ipn/ipnlocal" _ "tailscale.com/ipn/ipnlocal"

View File

@ -24,6 +24,7 @@ import (
_ "tailscale.com/derp/derphttp" _ "tailscale.com/derp/derphttp"
_ "tailscale.com/drive/driveimpl" _ "tailscale.com/drive/driveimpl"
_ "tailscale.com/envknob" _ "tailscale.com/envknob"
_ "tailscale.com/health"
_ "tailscale.com/ipn" _ "tailscale.com/ipn"
_ "tailscale.com/ipn/conffile" _ "tailscale.com/ipn/conffile"
_ "tailscale.com/ipn/ipnlocal" _ "tailscale.com/ipn/ipnlocal"

View File

@ -165,7 +165,7 @@ func (c *Conn) maybeSetNearestDERP(report *netcheck.Report) (preferredDERP int)
if testenv.InTest() && !checkControlHealthDuringNearestDERPInTests { if testenv.InTest() && !checkControlHealthDuringNearestDERPInTests {
connectedToControl = true connectedToControl = true
} else { } else {
connectedToControl = health.GetInPollNetMap() connectedToControl = c.health.GetInPollNetMap()
} }
if !connectedToControl { if !connectedToControl {
c.mu.Lock() c.mu.Lock()
@ -201,12 +201,12 @@ func (c *Conn) setNearestDERP(derpNum int) (wantDERP bool) {
defer c.mu.Unlock() defer c.mu.Unlock()
if !c.wantDerpLocked() { if !c.wantDerpLocked() {
c.myDerp = 0 c.myDerp = 0
health.SetMagicSockDERPHome(0, c.homeless) c.health.SetMagicSockDERPHome(0, c.homeless)
return false return false
} }
if c.homeless { if c.homeless {
c.myDerp = 0 c.myDerp = 0
health.SetMagicSockDERPHome(0, c.homeless) c.health.SetMagicSockDERPHome(0, c.homeless)
return false return false
} }
if derpNum == c.myDerp { if derpNum == c.myDerp {
@ -217,7 +217,7 @@ func (c *Conn) setNearestDERP(derpNum int) (wantDERP bool) {
metricDERPHomeChange.Add(1) metricDERPHomeChange.Add(1)
} }
c.myDerp = derpNum c.myDerp = derpNum
health.SetMagicSockDERPHome(derpNum, c.homeless) c.health.SetMagicSockDERPHome(derpNum, c.homeless)
if c.privateKey.IsZero() { if c.privateKey.IsZero() {
// No private key yet, so DERP connections won't come up anyway. // No private key yet, so DERP connections won't come up anyway.
@ -400,6 +400,7 @@ func (c *Conn) derpWriteChanOfAddr(addr netip.AddrPort, peer key.NodePublic) cha
} }
return derpMap.Regions[regionID] return derpMap.Regions[regionID]
}) })
dc.HealthTracker = c.health
dc.SetCanAckPings(true) dc.SetCanAckPings(true)
dc.NotePreferred(c.myDerp == regionID) dc.NotePreferred(c.myDerp == regionID)
@ -525,8 +526,8 @@ func (c *Conn) runDerpReader(ctx context.Context, derpFakeAddr netip.AddrPort, d
return n return n
} }
defer health.SetDERPRegionConnectedState(regionID, false) defer c.health.SetDERPRegionConnectedState(regionID, false)
defer health.SetDERPRegionHealth(regionID, "") defer c.health.SetDERPRegionHealth(regionID, "")
// peerPresent is the set of senders we know are present on this // peerPresent is the set of senders we know are present on this
// connection, based on messages we've received from the server. // connection, based on messages we've received from the server.
@ -538,7 +539,7 @@ func (c *Conn) runDerpReader(ctx context.Context, derpFakeAddr netip.AddrPort, d
for { for {
msg, connGen, err := dc.RecvDetail() msg, connGen, err := dc.RecvDetail()
if err != nil { if err != nil {
health.SetDERPRegionConnectedState(regionID, false) c.health.SetDERPRegionConnectedState(regionID, false)
// Forget that all these peers have routes. // Forget that all these peers have routes.
for peer := range peerPresent { for peer := range peerPresent {
delete(peerPresent, peer) delete(peerPresent, peer)
@ -576,14 +577,14 @@ func (c *Conn) runDerpReader(ctx context.Context, derpFakeAddr netip.AddrPort, d
now := time.Now() now := time.Now()
if lastPacketTime.IsZero() || now.Sub(lastPacketTime) > frameReceiveRecordRate { if lastPacketTime.IsZero() || now.Sub(lastPacketTime) > frameReceiveRecordRate {
health.NoteDERPRegionReceivedFrame(regionID) c.health.NoteDERPRegionReceivedFrame(regionID)
lastPacketTime = now lastPacketTime = now
} }
switch m := msg.(type) { switch m := msg.(type) {
case derp.ServerInfoMessage: case derp.ServerInfoMessage:
health.SetDERPRegionConnectedState(regionID, true) c.health.SetDERPRegionConnectedState(regionID, true)
health.SetDERPRegionHealth(regionID, "") // until declared otherwise c.health.SetDERPRegionHealth(regionID, "") // until declared otherwise
c.logf("magicsock: derp-%d connected; connGen=%v", regionID, connGen) c.logf("magicsock: derp-%d connected; connGen=%v", regionID, connGen)
continue continue
case derp.ReceivedPacket: case derp.ReceivedPacket:
@ -623,7 +624,7 @@ func (c *Conn) runDerpReader(ctx context.Context, derpFakeAddr netip.AddrPort, d
}() }()
continue continue
case derp.HealthMessage: case derp.HealthMessage:
health.SetDERPRegionHealth(regionID, m.Problem) c.health.SetDERPRegionHealth(regionID, m.Problem)
continue continue
case derp.PeerGoneMessage: case derp.PeerGoneMessage:
switch m.Reason { switch m.Reason {
@ -680,8 +681,10 @@ func (c *Conn) runDerpWriter(ctx context.Context, dc *derphttp.Client, ch <-chan
} }
func (c *connBind) receiveDERP(buffs [][]byte, sizes []int, eps []conn.Endpoint) (int, error) { func (c *connBind) receiveDERP(buffs [][]byte, sizes []int, eps []conn.Endpoint) (int, error) {
health.ReceiveDERP.Enter() if s := c.Conn.health.ReceiveFuncStats(health.ReceiveDERP); s != nil {
defer health.ReceiveDERP.Exit() s.Enter()
defer s.Exit()
}
for dm := range c.derpRecvCh { for dm := range c.derpRecvCh {
if c.isClosed() { if c.isClosed() {

View File

@ -91,6 +91,7 @@ type Conn struct {
testOnlyPacketListener nettype.PacketListener testOnlyPacketListener nettype.PacketListener
noteRecvActivity func(key.NodePublic) // or nil, see Options.NoteRecvActivity noteRecvActivity func(key.NodePublic) // or nil, see Options.NoteRecvActivity
netMon *netmon.Monitor // or nil netMon *netmon.Monitor // or nil
health *health.Tracker // or nil
controlKnobs *controlknobs.Knobs // or nil controlKnobs *controlknobs.Knobs // or nil
// ================================================================ // ================================================================
@ -369,9 +370,13 @@ type Options struct {
NoteRecvActivity func(key.NodePublic) NoteRecvActivity func(key.NodePublic)
// NetMon is the network monitor to use. // NetMon is the network monitor to use.
// With one, the portmapper won't be used. // If nil, the portmapper won't be used.
NetMon *netmon.Monitor NetMon *netmon.Monitor
// HealthTracker optionally specifies the health tracker to
// report errors and warnings to.
HealthTracker *health.Tracker
// ControlKnobs are the set of control knobs to use. // ControlKnobs are the set of control knobs to use.
// If nil, they're ignored and not updated. // If nil, they're ignored and not updated.
ControlKnobs *controlknobs.Knobs ControlKnobs *controlknobs.Knobs
@ -463,6 +468,7 @@ func NewConn(opts Options) (*Conn, error) {
c.portMapper.SetGatewayLookupFunc(opts.NetMon.GatewayAndSelfIP) c.portMapper.SetGatewayLookupFunc(opts.NetMon.GatewayAndSelfIP)
} }
c.netMon = opts.NetMon c.netMon = opts.NetMon
c.health = opts.HealthTracker
c.onPortUpdate = opts.OnPortUpdate c.onPortUpdate = opts.OnPortUpdate
c.getPeerByKey = opts.PeerByKeyFunc c.getPeerByKey = opts.PeerByKeyFunc
@ -666,7 +672,7 @@ func (c *Conn) updateNetInfo(ctx context.Context) (*netcheck.Report, error) {
// NOTE(andrew-d): I don't love that we're depending on the // NOTE(andrew-d): I don't love that we're depending on the
// health package here, but I'd rather do that and not store // health package here, but I'd rather do that and not store
// the exact same state in two different places. // the exact same state in two different places.
GetLastDERPActivity: health.GetDERPRegionReceivedTime, GetLastDERPActivity: c.health.GetDERPRegionReceivedTime,
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@ -1197,12 +1203,12 @@ func (c *Conn) putReceiveBatch(batch *receiveBatch) {
// receiveIPv4 creates an IPv4 ReceiveFunc reading from c.pconn4. // receiveIPv4 creates an IPv4 ReceiveFunc reading from c.pconn4.
func (c *Conn) receiveIPv4() conn.ReceiveFunc { func (c *Conn) receiveIPv4() conn.ReceiveFunc {
return c.mkReceiveFunc(&c.pconn4, &health.ReceiveIPv4, metricRecvDataIPv4) return c.mkReceiveFunc(&c.pconn4, c.health.ReceiveFuncStats(health.ReceiveIPv4), metricRecvDataIPv4)
} }
// receiveIPv6 creates an IPv6 ReceiveFunc reading from c.pconn6. // receiveIPv6 creates an IPv6 ReceiveFunc reading from c.pconn6.
func (c *Conn) receiveIPv6() conn.ReceiveFunc { func (c *Conn) receiveIPv6() conn.ReceiveFunc {
return c.mkReceiveFunc(&c.pconn6, &health.ReceiveIPv6, metricRecvDataIPv6) return c.mkReceiveFunc(&c.pconn6, c.health.ReceiveFuncStats(health.ReceiveIPv6), metricRecvDataIPv6)
} }
// mkReceiveFunc creates a ReceiveFunc reading from ruc. // mkReceiveFunc creates a ReceiveFunc reading from ruc.
@ -2471,7 +2477,7 @@ func (c *Conn) bindSocket(ruc *RebindingUDPConn, network string, curPortFate cur
} }
ruc.setConnLocked(pconn, network, c.bind.BatchSize()) ruc.setConnLocked(pconn, network, c.bind.BatchSize())
if network == "udp4" { if network == "udp4" {
health.SetUDP4Unbound(false) c.health.SetUDP4Unbound(false)
} }
return nil return nil
} }
@ -2482,7 +2488,7 @@ func (c *Conn) bindSocket(ruc *RebindingUDPConn, network string, curPortFate cur
// we get a link change and we can try binding again. // we get a link change and we can try binding again.
ruc.setConnLocked(newBlockForeverConn(), "", c.bind.BatchSize()) ruc.setConnLocked(newBlockForeverConn(), "", c.bind.BatchSize())
if network == "udp4" { if network == "udp4" {
health.SetUDP4Unbound(true) c.health.SetUDP4Unbound(true)
} }
return fmt.Errorf("failed to bind any ports (tried %v)", ports) return fmt.Errorf("failed to bind any ports (tried %v)", ports)
} }
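Both the DERP and UDP receive paths now fetch their per-function stats from the connection's tracker rather than from package globals, and guard against a nil result when no tracker was supplied. An isolated sketch of that guard (the helper itself is hypothetical; only ReceiveFuncStats, ReceiveDERP, Enter, and Exit come from the diff above):

```go
package example

import "tailscale.com/health"

// trackDERPReceive begins accounting for one DERP receive call and returns
// the function to call when it finishes. With no tracker or stats, it is a no-op.
func trackDERPReceive(ht *health.Tracker) func() {
	s := ht.ReceiveFuncStats(health.ReceiveDERP)
	if s == nil {
		return func() {}
	}
	s.Enter()
	return s.Exit
}
```

A caller would use it as `defer trackDERPReceive(c.health)()`, which enters immediately and exits when the surrounding function returns.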

View File

@ -3113,21 +3113,23 @@ func TestMaybeSetNearestDERP(t *testing.T) {
} }
for _, tt := range testCases { for _, tt := range testCases {
t.Run(tt.name, func(t *testing.T) { t.Run(tt.name, func(t *testing.T) {
ht := new(health.Tracker)
c := newConn() c := newConn()
c.logf = t.Logf c.logf = t.Logf
c.myDerp = tt.old c.myDerp = tt.old
c.derpMap = derpMap c.derpMap = derpMap
c.health = ht
report := &netcheck.Report{PreferredDERP: tt.reportDERP} report := &netcheck.Report{PreferredDERP: tt.reportDERP}
oldConnected := health.GetInPollNetMap() oldConnected := ht.GetInPollNetMap()
if tt.connectedToControl != oldConnected { if tt.connectedToControl != oldConnected {
if tt.connectedToControl { if tt.connectedToControl {
health.GotStreamedMapResponse() ht.GotStreamedMapResponse()
t.Cleanup(health.SetOutOfPollNetMap) t.Cleanup(ht.SetOutOfPollNetMap)
} else { } else {
health.SetOutOfPollNetMap() ht.SetOutOfPollNetMap()
t.Cleanup(health.GotStreamedMapResponse) t.Cleanup(ht.GotStreamedMapResponse)
} }
} }

View File

@ -16,6 +16,7 @@ import (
"sync" "sync"
"time" "time"
"tailscale.com/health"
"tailscale.com/logpolicy" "tailscale.com/logpolicy"
"tailscale.com/logtail" "tailscale.com/logtail"
"tailscale.com/net/connstats" "tailscale.com/net/connstats"
@ -92,7 +93,7 @@ var testClient *http.Client
// The IP protocol and source port are always zero. // The IP protocol and source port are always zero.
// The sock is used to populate the PhysicalTraffic field in Message. // The sock is used to populate the PhysicalTraffic field in Message.
// The netMon parameter is optional; if non-nil it's used to do faster interface lookups. // The netMon parameter is optional; if non-nil it's used to do faster interface lookups.
func (nl *Logger) Startup(nodeID tailcfg.StableNodeID, nodeLogID, domainLogID logid.PrivateID, tun, sock Device, netMon *netmon.Monitor) error { func (nl *Logger) Startup(nodeID tailcfg.StableNodeID, nodeLogID, domainLogID logid.PrivateID, tun, sock Device, netMon *netmon.Monitor, health *health.Tracker) error {
nl.mu.Lock() nl.mu.Lock()
defer nl.mu.Unlock() defer nl.mu.Unlock()
if nl.logger != nil { if nl.logger != nil {
@ -101,7 +102,7 @@ func (nl *Logger) Startup(nodeID tailcfg.StableNodeID, nodeLogID, domainLogID lo
// Startup a log stream to Tailscale's logging service. // Startup a log stream to Tailscale's logging service.
logf := log.Printf logf := log.Printf
httpc := &http.Client{Transport: logpolicy.NewLogtailTransport(logtail.DefaultHost, netMon, logf)} httpc := &http.Client{Transport: logpolicy.NewLogtailTransport(logtail.DefaultHost, netMon, health, logf)}
if testClient != nil { if testClient != nil {
httpc = testClient httpc = testClient
} }

View File

@ -237,7 +237,7 @@ func interfaceFromLUID(luid winipcfg.LUID, flags winipcfg.GAAFlags) (*winipcfg.I
var networkCategoryWarning = health.NewWarnable(health.WithMapDebugFlag("warn-network-category-unhealthy")) var networkCategoryWarning = health.NewWarnable(health.WithMapDebugFlag("warn-network-category-unhealthy"))
func configureInterface(cfg *Config, tun *tun.NativeTun) (retErr error) { func configureInterface(cfg *Config, tun *tun.NativeTun, health *health.Tracker) (retErr error) {
var mtu = tstun.DefaultTUNMTU() var mtu = tstun.DefaultTUNMTU()
luid := winipcfg.LUID(tun.LUID()) luid := winipcfg.LUID(tun.LUID())
iface, err := interfaceFromLUID(luid, iface, err := interfaceFromLUID(luid,
@ -268,10 +268,10 @@ func configureInterface(cfg *Config, tun *tun.NativeTun) (retErr error) {
for i := range tries { for i := range tries {
found, err := setPrivateNetwork(luid) found, err := setPrivateNetwork(luid)
if err != nil { if err != nil {
networkCategoryWarning.Set(fmt.Errorf("set-network-category: %w", err)) health.SetWarnable(networkCategoryWarning, fmt.Errorf("set-network-category: %w", err))
log.Printf("setPrivateNetwork(try=%d): %v", i, err) log.Printf("setPrivateNetwork(try=%d): %v", i, err)
} else { } else {
networkCategoryWarning.Set(nil) health.SetWarnable(networkCategoryWarning, nil)
if found { if found {
if i > 0 { if i > 0 {
log.Printf("setPrivateNetwork(try=%d): success", i) log.Printf("setPrivateNetwork(try=%d): success", i)

View File

@ -10,6 +10,7 @@ import (
"reflect" "reflect"
"github.com/tailscale/wireguard-go/tun" "github.com/tailscale/wireguard-go/tun"
"tailscale.com/health"
"tailscale.com/net/netmon" "tailscale.com/net/netmon"
"tailscale.com/types/logger" "tailscale.com/types/logger"
"tailscale.com/types/preftype" "tailscale.com/types/preftype"
@ -44,9 +45,9 @@ type Router interface {
// //
// If netMon is nil, it's not used. It's currently (2021-07-20) only // If netMon is nil, it's not used. It's currently (2021-07-20) only
// used on Linux in some situations. // used on Linux in some situations.
func New(logf logger.Logf, tundev tun.Device, netMon *netmon.Monitor) (Router, error) { func New(logf logger.Logf, tundev tun.Device, netMon *netmon.Monitor, health *health.Tracker) (Router, error) {
logf = logger.WithPrefix(logf, "router: ") logf = logger.WithPrefix(logf, "router: ")
return newUserspaceRouter(logf, tundev, netMon) return newUserspaceRouter(logf, tundev, netMon, health)
} }
// CleanUp restores the system network configuration to its original state // CleanUp restores the system network configuration to its original state
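Every platform's newUserspaceRouter now threads the tracker through, so the public constructor grows a fourth parameter. A compile-only sketch of the updated call (the caller is assumed to already own the TUN device and network monitor; ht may be nil):

```go
package example

import (
	"log"

	"github.com/tailscale/wireguard-go/tun"
	"tailscale.com/health"
	"tailscale.com/net/netmon"
	"tailscale.com/wgengine/router"
)

// newRouter wires an existing TUN device and network monitor into the
// router with the new explicit health tracker parameter.
func newRouter(dev tun.Device, netMon *netmon.Monitor, ht *health.Tracker) (router.Router, error) {
	return router.New(log.Printf, dev, netMon, ht)
}
```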

View File

@ -5,12 +5,13 @@ package router
import ( import (
"github.com/tailscale/wireguard-go/tun" "github.com/tailscale/wireguard-go/tun"
"tailscale.com/health"
"tailscale.com/net/netmon" "tailscale.com/net/netmon"
"tailscale.com/types/logger" "tailscale.com/types/logger"
) )
func newUserspaceRouter(logf logger.Logf, tundev tun.Device, netMon *netmon.Monitor) (Router, error) { func newUserspaceRouter(logf logger.Logf, tundev tun.Device, netMon *netmon.Monitor, health *health.Tracker) (Router, error) {
return newUserspaceBSDRouter(logf, tundev, netMon) return newUserspaceBSDRouter(logf, tundev, netMon, health)
} }
func cleanUp(logger.Logf, string) { func cleanUp(logger.Logf, string) {

View File

@ -10,11 +10,12 @@ import (
"runtime" "runtime"
"github.com/tailscale/wireguard-go/tun" "github.com/tailscale/wireguard-go/tun"
"tailscale.com/health"
"tailscale.com/net/netmon" "tailscale.com/net/netmon"
"tailscale.com/types/logger" "tailscale.com/types/logger"
) )
func newUserspaceRouter(logf logger.Logf, tunDev tun.Device, netMon *netmon.Monitor) (Router, error) { func newUserspaceRouter(logf logger.Logf, tunDev tun.Device, netMon *netmon.Monitor, health *health.Tracker) (Router, error) {
return nil, fmt.Errorf("unsupported OS %q", runtime.GOOS) return nil, fmt.Errorf("unsupported OS %q", runtime.GOOS)
} }

View File

@ -5,6 +5,7 @@ package router
import ( import (
"github.com/tailscale/wireguard-go/tun" "github.com/tailscale/wireguard-go/tun"
"tailscale.com/health"
"tailscale.com/net/netmon" "tailscale.com/net/netmon"
"tailscale.com/types/logger" "tailscale.com/types/logger"
) )
@ -14,8 +15,8 @@ import (
// Work is currently underway for an in-kernel FreeBSD implementation of wireguard // Work is currently underway for an in-kernel FreeBSD implementation of wireguard
// https://svnweb.freebsd.org/base?view=revision&revision=357986 // https://svnweb.freebsd.org/base?view=revision&revision=357986
func newUserspaceRouter(logf logger.Logf, tundev tun.Device, netMon *netmon.Monitor) (Router, error) { func newUserspaceRouter(logf logger.Logf, tundev tun.Device, netMon *netmon.Monitor, health *health.Tracker) (Router, error) {
return newUserspaceBSDRouter(logf, tundev, netMon) return newUserspaceBSDRouter(logf, tundev, netMon, health)
} }
func cleanUp(logf logger.Logf, interfaceName string) { func cleanUp(logf logger.Logf, interfaceName string) {

View File

@ -22,6 +22,7 @@ import (
"golang.org/x/sys/unix" "golang.org/x/sys/unix"
"golang.org/x/time/rate" "golang.org/x/time/rate"
"tailscale.com/envknob" "tailscale.com/envknob"
"tailscale.com/health"
"tailscale.com/net/netmon" "tailscale.com/net/netmon"
"tailscale.com/types/logger" "tailscale.com/types/logger"
"tailscale.com/types/preftype" "tailscale.com/types/preftype"
@ -69,7 +70,7 @@ type linuxRouter struct {
magicsockPortV6 uint16 magicsockPortV6 uint16
} }
func newUserspaceRouter(logf logger.Logf, tunDev tun.Device, netMon *netmon.Monitor) (Router, error) { func newUserspaceRouter(logf logger.Logf, tunDev tun.Device, netMon *netmon.Monitor, health *health.Tracker) (Router, error) {
tunname, err := tunDev.Name() tunname, err := tunDev.Name()
if err != nil { if err != nil {
return nil, err return nil, err

View File

@ -886,7 +886,7 @@ func newLinuxRootTest(t *testing.T) *linuxTest {
mon.Start() mon.Start()
lt.mon = mon lt.mon = mon
r, err := newUserspaceRouter(logf, lt.tun, mon) r, err := newUserspaceRouter(logf, lt.tun, mon, nil)
if err != nil { if err != nil {
lt.Close() lt.Close()
t.Fatal(err) t.Fatal(err)

View File

@ -12,6 +12,7 @@ import (
"github.com/tailscale/wireguard-go/tun" "github.com/tailscale/wireguard-go/tun"
"go4.org/netipx" "go4.org/netipx"
"tailscale.com/health"
"tailscale.com/net/netmon" "tailscale.com/net/netmon"
"tailscale.com/types/logger" "tailscale.com/types/logger"
"tailscale.com/util/set" "tailscale.com/util/set"
@ -30,7 +31,7 @@ type openbsdRouter struct {
routes set.Set[netip.Prefix] routes set.Set[netip.Prefix]
} }
func newUserspaceRouter(logf logger.Logf, tundev tun.Device, netMon *netmon.Monitor) (Router, error) { func newUserspaceRouter(logf logger.Logf, tundev tun.Device, netMon *netmon.Monitor, health *health.Tracker) (Router, error) {
tunname, err := tundev.Name() tunname, err := tundev.Name()
if err != nil { if err != nil {
return nil, err return nil, err

View File

@ -14,6 +14,7 @@ import (
"github.com/tailscale/wireguard-go/tun" "github.com/tailscale/wireguard-go/tun"
"go4.org/netipx" "go4.org/netipx"
"tailscale.com/health"
"tailscale.com/net/netmon" "tailscale.com/net/netmon"
"tailscale.com/net/tsaddr" "tailscale.com/net/tsaddr"
"tailscale.com/types/logger" "tailscale.com/types/logger"
@ -23,12 +24,13 @@ import (
type userspaceBSDRouter struct { type userspaceBSDRouter struct {
logf logger.Logf logf logger.Logf
netMon *netmon.Monitor netMon *netmon.Monitor
health *health.Tracker
tunname string tunname string
local []netip.Prefix local []netip.Prefix
routes map[netip.Prefix]bool routes map[netip.Prefix]bool
} }
func newUserspaceBSDRouter(logf logger.Logf, tundev tun.Device, netMon *netmon.Monitor) (Router, error) { func newUserspaceBSDRouter(logf logger.Logf, tundev tun.Device, netMon *netmon.Monitor, health *health.Tracker) (Router, error) {
tunname, err := tundev.Name() tunname, err := tundev.Name()
if err != nil { if err != nil {
return nil, err return nil, err
@ -37,6 +39,7 @@ func newUserspaceBSDRouter(logf logger.Logf, tundev tun.Device, netMon *netmon.M
return &userspaceBSDRouter{ return &userspaceBSDRouter{
logf: logf, logf: logf,
netMon: netMon, netMon: netMon,
health: health,
tunname: tunname, tunname: tunname,
}, nil }, nil
} }

View File

@ -22,6 +22,7 @@ import (
"github.com/tailscale/wireguard-go/tun" "github.com/tailscale/wireguard-go/tun"
"golang.org/x/sys/windows" "golang.org/x/sys/windows"
"golang.zx2c4.com/wireguard/windows/tunnel/winipcfg" "golang.zx2c4.com/wireguard/windows/tunnel/winipcfg"
"tailscale.com/health"
"tailscale.com/logtail/backoff" "tailscale.com/logtail/backoff"
"tailscale.com/net/dns" "tailscale.com/net/dns"
"tailscale.com/net/netmon" "tailscale.com/net/netmon"
@ -31,12 +32,13 @@ import (
type winRouter struct { type winRouter struct {
logf func(fmt string, args ...any) logf func(fmt string, args ...any)
netMon *netmon.Monitor // may be nil netMon *netmon.Monitor // may be nil
health *health.Tracker
nativeTun *tun.NativeTun nativeTun *tun.NativeTun
routeChangeCallback *winipcfg.RouteChangeCallback routeChangeCallback *winipcfg.RouteChangeCallback
firewall *firewallTweaker firewall *firewallTweaker
} }
func newUserspaceRouter(logf logger.Logf, tundev tun.Device, netMon *netmon.Monitor) (Router, error) { func newUserspaceRouter(logf logger.Logf, tundev tun.Device, netMon *netmon.Monitor, health *health.Tracker) (Router, error) {
nativeTun := tundev.(*tun.NativeTun) nativeTun := tundev.(*tun.NativeTun)
luid := winipcfg.LUID(nativeTun.LUID()) luid := winipcfg.LUID(nativeTun.LUID())
guid, err := luid.GUID() guid, err := luid.GUID()
@ -47,6 +49,7 @@ func newUserspaceRouter(logf logger.Logf, tundev tun.Device, netMon *netmon.Moni
return &winRouter{ return &winRouter{
logf: logf, logf: logf,
netMon: netMon, netMon: netMon,
health: health,
nativeTun: nativeTun, nativeTun: nativeTun,
firewall: &firewallTweaker{ firewall: &firewallTweaker{
logf: logger.WithPrefix(logf, "firewall: "), logf: logger.WithPrefix(logf, "firewall: "),
@ -80,7 +83,7 @@ func (r *winRouter) Set(cfg *Config) error {
} }
r.firewall.set(localAddrs, cfg.Routes, cfg.LocalRoutes) r.firewall.set(localAddrs, cfg.Routes, cfg.LocalRoutes)
err := configureInterface(cfg, r.nativeTun) err := configureInterface(cfg, r.nativeTun, r.health)
if err != nil { if err != nil {
r.logf("ConfigureInterface: %v", err) r.logf("ConfigureInterface: %v", err)
return err return err

View File

@ -98,6 +98,7 @@ type userspaceEngine struct {
dns *dns.Manager dns *dns.Manager
magicConn *magicsock.Conn magicConn *magicsock.Conn
netMon *netmon.Monitor netMon *netmon.Monitor
health *health.Tracker
netMonOwned bool // whether we created netMon (and thus need to close it) netMonOwned bool // whether we created netMon (and thus need to close it)
netMonUnregister func() // unsubscribes from changes; used regardless of netMonOwned netMonUnregister func() // unsubscribes from changes; used regardless of netMonOwned
birdClient BIRDClient // or nil birdClient BIRDClient // or nil
@ -188,6 +189,9 @@ type Config struct {
// If nil, a new network monitor is created. // If nil, a new network monitor is created.
NetMon *netmon.Monitor NetMon *netmon.Monitor
// HealthTracker, if non-nil, is the health tracker to use.
HealthTracker *health.Tracker
// Dialer is the dialer to use for outbound connections. // Dialer is the dialer to use for outbound connections.
// If nil, a new Dialer is created // If nil, a new Dialer is created
Dialer *tsdial.Dialer Dialer *tsdial.Dialer
@ -310,6 +314,7 @@ func NewUserspaceEngine(logf logger.Logf, conf Config) (_ Engine, reterr error)
birdClient: conf.BIRDClient, birdClient: conf.BIRDClient,
controlKnobs: conf.ControlKnobs, controlKnobs: conf.ControlKnobs,
reconfigureVPN: conf.ReconfigureVPN, reconfigureVPN: conf.ReconfigureVPN,
health: conf.HealthTracker,
} }
if e.birdClient != nil { if e.birdClient != nil {
@ -336,7 +341,7 @@ func NewUserspaceEngine(logf logger.Logf, conf Config) (_ Engine, reterr error)
tunName, _ := conf.Tun.Name() tunName, _ := conf.Tun.Name()
conf.Dialer.SetTUNName(tunName) conf.Dialer.SetTUNName(tunName)
conf.Dialer.SetNetMon(e.netMon) conf.Dialer.SetNetMon(e.netMon)
e.dns = dns.NewManager(logf, conf.DNS, e.netMon, conf.Dialer, fwdDNSLinkSelector{e, tunName}, conf.ControlKnobs) e.dns = dns.NewManager(logf, conf.DNS, e.netMon, e.health, conf.Dialer, fwdDNSLinkSelector{e, tunName}, conf.ControlKnobs)
// TODO: there's probably a better place for this // TODO: there's probably a better place for this
sockstats.SetNetMon(e.netMon) sockstats.SetNetMon(e.netMon)
@ -372,6 +377,7 @@ func NewUserspaceEngine(logf logger.Logf, conf Config) (_ Engine, reterr error)
IdleFunc: e.tundev.IdleDuration, IdleFunc: e.tundev.IdleDuration,
NoteRecvActivity: e.noteRecvActivity, NoteRecvActivity: e.noteRecvActivity,
NetMon: e.netMon, NetMon: e.netMon,
HealthTracker: e.health,
ControlKnobs: conf.ControlKnobs, ControlKnobs: conf.ControlKnobs,
OnPortUpdate: onPortUpdate, OnPortUpdate: onPortUpdate,
PeerByKeyFunc: e.PeerByKey, PeerByKeyFunc: e.PeerByKey,
@ -960,7 +966,7 @@ func (e *userspaceEngine) Reconfig(cfg *wgcfg.Config, routerCfg *router.Config,
nid := cfg.NetworkLogging.NodeID nid := cfg.NetworkLogging.NodeID
tid := cfg.NetworkLogging.DomainID tid := cfg.NetworkLogging.DomainID
e.logf("wgengine: Reconfig: starting up network logger (node:%s tailnet:%s)", nid.Public(), tid.Public()) e.logf("wgengine: Reconfig: starting up network logger (node:%s tailnet:%s)", nid.Public(), tid.Public())
if err := e.networkLogger.Startup(cfg.NodeID, nid, tid, e.tundev, e.magicConn, e.netMon); err != nil { if err := e.networkLogger.Startup(cfg.NodeID, nid, tid, e.tundev, e.magicConn, e.netMon, e.health); err != nil {
e.logf("wgengine: Reconfig: error starting up network logger: %v", err) e.logf("wgengine: Reconfig: error starting up network logger: %v", err)
} }
e.networkLogger.ReconfigRoutes(routerCfg) e.networkLogger.ReconfigRoutes(routerCfg)
@ -970,7 +976,7 @@ func (e *userspaceEngine) Reconfig(cfg *wgcfg.Config, routerCfg *router.Config,
e.logf("wgengine: Reconfig: configuring router") e.logf("wgengine: Reconfig: configuring router")
e.networkLogger.ReconfigRoutes(routerCfg) e.networkLogger.ReconfigRoutes(routerCfg)
err := e.router.Set(routerCfg) err := e.router.Set(routerCfg)
health.SetRouterHealth(err) e.health.SetRouterHealth(err)
if err != nil { if err != nil {
return err return err
} }
@ -979,7 +985,7 @@ func (e *userspaceEngine) Reconfig(cfg *wgcfg.Config, routerCfg *router.Config,
// assigned address. // assigned address.
e.logf("wgengine: Reconfig: configuring DNS") e.logf("wgengine: Reconfig: configuring DNS")
err = e.dns.Set(*dnsCfg) err = e.dns.Set(*dnsCfg)
health.SetDNSHealth(err) e.health.SetDNSHealth(err)
if err != nil { if err != nil {
return err return err
} }
@ -1183,7 +1189,7 @@ func (e *userspaceEngine) linkChange(delta *netmon.ChangeDelta) {
e.logf("[v1] LinkChange: minor") e.logf("[v1] LinkChange: minor")
} }
health.SetAnyInterfaceUp(up) e.health.SetAnyInterfaceUp(up)
e.magicConn.SetNetworkUp(up) e.magicConn.SetNetworkUp(up)
if !up || changed { if !up || changed {
if err := e.dns.FlushCaches(); err != nil { if err := e.dns.FlushCaches(); err != nil {