This change introduces a new subcommand, `exit-node`, along with a
`list` subcommand and a `--filter` flag.
Exit nodes without location data will continue to be displayed when
`status` is used. Exit nodes with location data will only be displayed
behind `exit-node list`, and in `status` if they are the active exit node.
The `--filter` flag can be used to filter exit nodes with location data
by country.
Exit nodes with Location.Priority data will have only the highest
priority option for each country and city listed. For countries with
multiple cities, a <Country> <Any> option will be displayed, indicating
the highest priority node within that country.
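For illustration, the selection rule could be sketched in Go roughly like
this (the types and names here are hypothetical, not the CLI's actual data
structures):
```go
// exitNode is a hypothetical, simplified view of an exit node's
// location data, used only to illustrate the selection rule above.
type exitNode struct {
	Name, Country, City string
	Priority            int
}

// bestPerLocation keeps only the highest-priority node for each
// country+city pair, plus a "<Country>/Any" entry holding the
// highest-priority node anywhere in that country.
func bestPerLocation(nodes []exitNode) map[string]exitNode {
	best := make(map[string]exitNode) // key: "Country/City" or "Country/Any"
	for _, n := range nodes {
		for _, key := range []string{n.Country + "/" + n.City, n.Country + "/Any"} {
			if cur, ok := best[key]; !ok || n.Priority > cur.Priority {
				best[key] = n
			}
		}
	}
	return best
}
```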
Updates tailscale/corp#13025
Signed-off-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
Implement `tailscale update` on FreeBSD. This is much simpler than other
platforms because `pkg rquery` lets us get the version in their repos
without any extra parsing.
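A rough Go sketch of the idea (the query format and command construction
here are illustrative, not the actual clientupdate code):
```go
import (
	"fmt"
	"os/exec"
	"strings"
)

// latestFreeBSDVersion asks the configured pkg repos which tailscale
// version they carry; `pkg rquery` prints it directly, so no parsing
// beyond trimming whitespace is needed.
func latestFreeBSDVersion() (string, error) {
	out, err := exec.Command("pkg", "rquery", "%v", "tailscale").Output()
	if err != nil {
		return "", fmt.Errorf("pkg rquery: %w", err)
	}
	return strings.TrimSpace(string(out)), nil
}
```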
Updates #6995
Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
Define PeerCapability and PeerCapMap as the new way of sending down
inter-peer capability information.
Previously, this was unstructured and you could only send down strings,
which got too limiting for certain use cases. Instead, add the ability
to send down raw JSON messages that are opaque to Tailscale but allow
applications to define them however they wish.
Also update accessors to use the new values.
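A simplified sketch of the shape of these types and of how an application
might decode its own capability payloads (the real definitions live in
tailcfg; the helper below is hypothetical):
```go
import "encoding/json"

// PeerCapability names an inter-peer capability; PeerCapMap carries
// opaque JSON values for each capability.
type (
	PeerCapability string
	PeerCapMap     map[PeerCapability][]json.RawMessage
)

// unmarshalCap decodes every raw value of one capability into T,
// leaving the payload format entirely up to the application.
func unmarshalCap[T any](m PeerCapMap, c PeerCapability) ([]T, error) {
	var out []T
	for _, raw := range m[c] {
		var v T
		if err := json.Unmarshal(raw, &v); err != nil {
			return nil, err
		}
		out = append(out, v)
	}
	return out, nil
}
```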
Updates #4217
Signed-off-by: Maisem Ali <maisem@tailscale.com>
Similar to Arch support, use the latest version info from the official
`apk` repo and don't offer explicit track or version switching.
Add detection for Alpine Linux in version/distro along the way.
Updates #6995
Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
This is the Fedora family of distros, including CentOS, RHEL and others.
Tested in `fedora:latest` and `centos:7` containers.
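Illustrative sketch of the package-manager selection, assuming dnf on
modern releases and yum on older ones such as CentOS 7 (flags and
structure here are not the actual updater code):
```go
import "os/exec"

// fedoraPkgTool picks whichever package manager exists: dnf on modern
// Fedora/RHEL, yum on older releases such as CentOS 7.
func fedoraPkgTool() string {
	if _, err := exec.LookPath("dnf"); err == nil {
		return "dnf"
	}
	return "yum"
}

// updateFedoraLike upgrades the tailscale package with that tool.
// Exact flags are illustrative.
func updateFedoraLike() error {
	return exec.Command(fedoraPkgTool(), "upgrade", "-y", "tailscale").Run()
}
```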
Updates #6995
Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
util/linuxfw/iptables.go had a bunch of code that wasn't yet used
(in prep for future work) but, because of its imports, ended up
initializing code deep within gvisor that panicked on init on arm64
systems not using 4KB pages.
This deletes the unused code to drop those imports and remove the
panic. We can then cherry-pick this back to the branch and restore it
later in a different way.
A new test makes sure we don't regress in the future by depending on
the panicking package in question.
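The regression guard could look roughly like this (a sketch using
`go list`; the banned import path below is a placeholder, not the real
package):
```go
import (
	"os/exec"
	"strings"
	"testing"
)

// TestNoBadDeps fails if this package's dependency graph ever pulls in
// the package whose init panicked. The path below is a placeholder.
func TestNoBadDeps(t *testing.T) {
	const banned = "example.com/pkg/that/panics" // placeholder import path
	out, err := exec.Command("go", "list", "-deps", ".").Output()
	if err != nil {
		t.Fatalf("go list: %v", err)
	}
	for _, dep := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if dep == banned {
			t.Fatalf("package unexpectedly depends on %s", banned)
		}
	}
}
```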
Fixes #8658
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
The Arch version of tailscale is not maintained by us, but is generally
up-to-date with our releases. Therefore "tailscale update" is just a
thin wrapper around "pacman -Sy tailscale" with different flags.
Updates #6995
Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
* cmd/tailscale/cli: make `tailscale update` query `softwareupdate`
Even on macOS when Tailscale was installed via the App Store, we can check for
and even install new versions if people ask explicitly. Also, warn if App Store
AutoUpdate is not turned on.
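A rough sketch of the query side (the real command construction and
output parsing differ):
```go
import (
	"os/exec"
	"strings"
)

// macOSUpdateAvailable reports whether `softwareupdate --list` mentions
// Tailscale. A sketch; the real check parses the listing more carefully.
func macOSUpdateAvailable() (bool, error) {
	out, err := exec.Command("softwareupdate", "--list").CombinedOutput()
	if err != nil {
		return false, err
	}
	return strings.Contains(string(out), "Tailscale"), nil
}
```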
Updates #6995
This changes the ACLTestError type to reuse the existing/identical
types from the ACL implementation, to avoid issues in the future if
the two types fall out of sync.
Updates #8645
Signed-off-by: Jenny Zhang <jz@tailscale.com>
When using a custom http port like 8080, this was resulting in a
constructed hostname of `host.tailnet.ts.net:8080.tailnet.ts.net` when
looking up the serve handler. Instead, strip off the port before adding
the MagicDNS suffix.
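A minimal sketch of the fix's idea (names are illustrative):
```go
import "net"

// hostDNSName strips any :port before appending the MagicDNS suffix,
// so "host:8080" maps to "host.tailnet.ts.net" rather than
// "host:8080.tailnet.ts.net" when looking up the serve handler.
func hostDNSName(hostport, magicDNSSuffix string) string {
	host := hostport
	if h, _, err := net.SplitHostPort(hostport); err == nil {
		host = h // drop the custom port
	}
	return host + "." + magicDNSSuffix
}
```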
Also use the actual hostname in `serve status` rather than the literal
string "host".
Fixes #8635
Signed-off-by: Will Norris <will@tailscale.com>
Add a few helper functions in tsweb to add common security headers to handlers. Use those functions for all non-tailscaled-facing endpoints in derper.
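A sketch of the kind of wrapper this adds (the header set and helper name
here are illustrative, not tsweb's actual API):
```go
import "net/http"

// withSecurityHeaders wraps a handler and sets common security headers
// on every response before delegating to it.
func withSecurityHeaders(h http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		hdr := w.Header()
		hdr.Set("X-Content-Type-Options", "nosniff")
		hdr.Set("X-Frame-Options", "DENY")
		hdr.Set("Content-Security-Policy", "default-src 'self'")
		h.ServeHTTP(w, r)
	})
}
```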
Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
`go test -json` outputs invalid JSON when a build fails.
Handle that case by resetting the json.Decoder and continuing to read.
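Roughly, the decoding loop now tolerates non-JSON lines like this (a
sketch, not the actual testwrapper code):
```go
import (
	"bufio"
	"encoding/json"
	"io"
)

// testEvent mirrors the fields emitted by `go test -json`, trimmed for
// the sketch.
type testEvent struct {
	Action, Package, Test, Output string
}

// readEvents decodes the event stream but tolerates the plain-text
// lines that appear when a build fails: it skips the offending line
// and resets the decoder on the remaining input.
func readEvents(r io.Reader, handle func(testEvent)) error {
	br := bufio.NewReader(r)
	dec := json.NewDecoder(br)
	for {
		var ev testEvent
		switch err := dec.Decode(&ev); {
		case err == nil:
			handle(ev)
		case err == io.EOF:
			return nil
		default:
			// Not JSON: drop the rest of this line, then keep decoding.
			br = bufio.NewReader(io.MultiReader(dec.Buffered(), br))
			if _, err := br.ReadString('\n'); err != nil {
				return nil
			}
			dec = json.NewDecoder(br)
		}
	}
}
```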
Updates #8493
Signed-off-by: Maisem Ali <maisem@tailscale.com>
This allows sending logs from the "logpolicy" package (and associated
callees) to something other than the log package. The behaviour for
tailscaled remains the same, passing in log.Printf.
Updates #8249
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ie1d43b75fa7281933d9225bffd388462c08a5f31
When performing a fallback DNS query, run the recursive resolver in a
separate goroutine and compare the results returned by the recursive
resolver with the results we get from "regular" bootstrap DNS. This will
allow us to gather data about whether the recursive DNS resolver works
better, worse, or about the same as "regular" bootstrap DNS.
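A sketch of the comparison flow (the lookup functions and logf here are
assumptions for illustration):
```go
import (
	"context"
	"net/netip"
	"time"
)

// lookupWithComparison returns the regular bootstrap DNS answer, and in
// the background compares it against the recursive resolver purely so
// the difference can be logged and analyzed.
func lookupWithComparison(ctx context.Context, host string,
	regular, recursive func(context.Context, string) ([]netip.Addr, error),
	logf func(format string, args ...any),
) ([]netip.Addr, error) {
	addrs, err := regular(ctx, host) // this result is what the caller actually uses
	go func() {
		rctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		raddrs, rerr := recursive(rctx, host)
		logf("recursive DNS for %q: %v (err=%v); regular: %v (err=%v)",
			host, raddrs, rerr, addrs, err)
	}()
	return addrs, err
}
```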
Updates #5853
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ifa0b0cc9eeb0dccd6f7a3d91675fe44b3b34bd48
Previously it would wait for all tests to run before printing anything.
Instead, stream the results over a channel so that they can be emitted
immediately.
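A minimal sketch of the producer/consumer shape (not the actual
testwrapper code):
```go
import (
	"fmt"
	"os/exec"
)

// testResult is a trimmed, illustrative result record.
type testResult struct {
	Pkg    string
	Passed bool
}

// runAndStream runs the tests in one goroutine and prints results as
// they arrive instead of waiting for everything to finish.
func runAndStream(pkgs []string) {
	ch := make(chan testResult)
	go func() {
		defer close(ch)
		for _, pkg := range pkgs {
			err := exec.Command("go", "test", pkg).Run()
			ch <- testResult{Pkg: pkg, Passed: err == nil}
		}
	}()
	for res := range ch { // emitted immediately, not after all packages finish
		if res.Passed {
			fmt.Println("ok  ", res.Pkg)
		} else {
			fmt.Println("FAIL", res.Pkg)
		}
	}
}
```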
Updates #8493
Signed-off-by: Maisem Ali <maisem@tailscale.com>
Previously it would only print the failures without providing
more information on which package the failures came from.
This commit makes it so that it prints out the package information
as well as the attempt numbers.
```
➜ tailscale.com git:(main) ✗ go run ./cmd/testwrapper ./cmd/...
ok tailscale.com/cmd/derper
ok tailscale.com/cmd/k8s-operator
ok tailscale.com/cmd/tailscale/cli
ok tailscale.com/cmd/tailscaled
=== RUN TestFlakeRun
flakytest.go:38: flakytest: issue tracking this flaky test: https://github.com/tailscale/tailscale/issues/0
flakytest_test.go:41: First run in testwrapper, failing so that test is retried. This is expected.
--- FAIL: TestFlakeRun (0.00s)
FAIL tailscale.com/cmd/testwrapper/flakytest
Attempt #2: Retrying flaky tests:
ok tailscale.com/cmd/testwrapper/flakytest
```
Updates #8493
Signed-off-by: Maisem Ali <maisem@tailscale.com>
This change introduces a new netfilterRunner interface and moves iptables manipulation to a lower-level iptables runner.
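A hypothetical sketch of the seam this creates (method names are
illustrative, not the actual interface definition):
```go
import "net/netip"

// netfilterRunner, as sketched here, is the kind of seam described
// above: callers program firewall rules through the interface, and an
// iptables-backed runner supplies the implementation.
type netfilterRunner interface {
	AddLoopbackRule(addr netip.Addr) error
	DelLoopbackRule(addr netip.Addr) error
	AddHooks() error
	DelHooks() error
}
```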
For #391
Signed-off-by: KevinLiang10 <kevinliang@tailscale.com>
Redo the testwrapper to track flaky tests and retry only those, instead
of retrying the entire package. It also fails early if a non-flaky test fails.
This also makes it so that the go test caches are used.
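A sketch of how only the flaky failures can be re-run, by building a
`-run` pattern for `go test` (illustrative, not the testwrapper's actual
code):
```go
import (
	"regexp"
	"strings"
)

// retryArgs builds the `go test` arguments to re-run only the flaky
// tests that failed in a package, rather than the whole package.
func retryArgs(pkg string, failed []string) []string {
	pats := make([]string, 0, len(failed))
	for _, name := range failed {
		pats = append(pats, "^"+regexp.QuoteMeta(name)+"$")
	}
	return []string{"test", "-json", "-run", strings.Join(pats, "|"), pkg}
}
```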
Fixes #7975
Signed-off-by: Maisem Ali <maisem@tailscale.com>
In order to improve our ability to understand the state of policies and
registry settings when troubleshooting, we enumerate all values in all subkeys.
x/sys/windows does not already offer this, so we need to call RegEnumValue
directly.
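For illustration, a simplified sketch using the higher-level
x/sys/windows/registry helpers; the change itself calls RegEnumValue
directly so it can fetch names, types, and data in one pass:
```go
//go:build windows

import "golang.org/x/sys/windows/registry"

// logValues enumerates the values under one key and logs each value's
// type and size. logf is assumed for illustration.
func logValues(root registry.Key, path string, logf func(string, ...any)) error {
	k, err := registry.OpenKey(root, path, registry.READ)
	if err != nil {
		return err
	}
	defer k.Close()
	names, err := k.ReadValueNames(-1) // -1 means all values
	if err != nil {
		return err
	}
	for _, name := range names {
		size, valType, err := k.GetValue(name, nil) // nil buf: just size+type
		logf("%s\\%s: type=%d size=%d err=%v", path, name, valType, size, err)
	}
	return nil
}
```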
For now we're just logging this during startup; in a future PR I plan to
also trigger this code during a bugreport. I also want to log more than
just the registry.
Fixes #8141
Signed-off-by: Aaron Klotz <aaron@tailscale.com>
The client/tailscale package is a stable-ish API we try not to break.
Revert the Client.CreateKey method to what it was and add a new
CreateKeyWithExpiry method to do the new thing. Also document the
expiry field and enforce that the expiry duration, if non-zero, must be
at least one second.
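A minimal sketch of that validation (the function name is illustrative):
```go
import (
	"fmt"
	"time"
)

// checkKeyExpiry enforces the constraint described above: zero means
// "use the default", and any non-zero expiry must be at least one second.
func checkKeyExpiry(d time.Duration) error {
	if d > 0 && d < time.Second {
		return fmt.Errorf("key expiry %v is less than one second", d)
	}
	return nil
}
```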
Updates #7143
Updates #8124 (reverts it, effectively)
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Adds a parameter for create key that allows an expiry, in seconds
(up to 90 days), to be specified for new keys.
Fixes https://github.com/tailscale/tailscale/issues/7965
Signed-off-by: Matthew Brown <matthew@bargrove.com>
getSingleObject can return `nil, nil`; getDeviceInfo was not handling
that case, which resulted in panics.
Fixes #7303
Signed-off-by: Maisem Ali <maisem@tailscale.com>
The retry logic was pathological in the following ways:
* If we restarted the logging service, any pending uploads
would be placed in a retry-loop where it depended on backoff.Backoff,
which was too aggressive. It would retry failures within milliseconds,
taking at least 10 retries to hit a delay of 1 second.
* In the event where a logstream was rate limited,
the aggressive retry logic would severely exacerbate the problem
since each retry would also log an error message.
It is only by chance that the rate of log error spam
does not itself exceed the rate limit.
We modify the retry logic in the following ways:
* We now respect the "Retry-After" header sent by the logging service.
* Lacking a "Retry-After" header, we retry after a hard-coded period of
30 to 60 seconds. This avoids the thundering-herd effect when all nodes
try reconnecting to the logging service at the same time after a restart
(see the sketch below).
* We do not treat a status 400 as having been uploaded.
This is simply not the behavior of the logging service.
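A sketch of the retry-delay selection described above (illustrative, not
the actual logtail code):
```go
import (
	"math/rand"
	"net/http"
	"strconv"
	"time"
)

// retryDelay honors a Retry-After header when the logging service sends
// one, and otherwise waits a jittered 30-60 seconds to avoid a
// thundering herd after a service restart.
func retryDelay(resp *http.Response) time.Duration {
	if resp != nil {
		if s := resp.Header.Get("Retry-After"); s != "" {
			if secs, err := strconv.Atoi(s); err == nil && secs > 0 {
				return time.Duration(secs) * time.Second
			}
		}
	}
	return 30*time.Second + time.Duration(rand.Int63n(int64(30*time.Second)))
}
```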
Updates tailscale/corp#11213
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
I noticed cmd/{cloner,viewer} didn't support structs with embedded
fields while working on a change in another repo. This adds support.
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
In commit 6e96744 the tsd system type was added, which causes the daemon
to crash on some OSes (Windows, darwin, and so on).
The root cause is that on those OSes, handleSubnetsInNetstack() returns
true and sets conf.Router with a wrapper.
Later, NewUserspaceEngine() sets the subsystem and finds that the
previously set router does not match the current value, then panics.
Signed-off-by: Chenyang Gao <gps949@outlook.com>
This is part of an effort to clean up tailscaled initialization between
tailscaled, tailscaled Windows service, tsnet, and the mac GUI.
Updates #8036
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>