Commit Graph

168 Commits

Brad Fitzpatrick ca08e316af util/endian: delete package; use updated josharian/native instead
See josharian/native#3

Updates golang/go#57237

Change-Id: I238c04c6654e5b9e7d9cfb81a7bbc5e1043a84a2
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2022-12-12 20:12:45 -08:00
Joe Tsai c47578b528
util/multierr: add Range (#6643)
Errors in Go are no longer viewed as a linear chain, but as a tree.
See golang/go#53435.

Add a Range function that iterates through an error
in a pre-order, depth-first order.
This matches the iteration order of errors.As in Go 1.20.

This adds the logic (currently commented out) for having
Error implement the multi-error version of Unwrap in Go 1.20.
It is commented out for now because it makes "go vet"
complain about having the "wrong" signature.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-12-12 16:48:11 -08:00
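
For illustration, the traversal described above can be sketched as follows; rangeErrors is a hypothetical stand-in for, not the actual signature of, multierr.Range.

    package multierrsketch

    // rangeErrors visits err and every error reachable through Unwrap in
    // pre-order, depth-first order, stopping early when fn returns false.
    func rangeErrors(err error, fn func(error) bool) bool {
        if err == nil {
            return true
        }
        if !fn(err) {
            return false
        }
        switch e := err.(type) {
        case interface{ Unwrap() error }: // single-wrap chain
            return rangeErrors(e.Unwrap(), fn)
        case interface{ Unwrap() []error }: // Go 1.20 multi-error tree
            for _, child := range e.Unwrap() {
                if !rangeErrors(child, fn) {
                    return false
                }
            }
        }
        return true
    }
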
Brad Fitzpatrick 197a4f1ae8 types/ptr: move all the ptrTo funcs to one new package's ptr.To
Change-Id: Ia0b820ffe7aa72897515f19bd415204b6fe743c7
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2022-11-30 17:50:51 -08:00
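
The consolidated helper is presumably along these lines (a minimal sketch; details in types/ptr may differ), letting callers write, e.g., ptr.To(5 * time.Second) wherever a *time.Duration is needed.

    package ptr

    // To returns a pointer to a shallow copy of v, replacing the scattered
    // per-type ptrTo helpers.
    func To[T any](v T) *T {
        return &v
    }
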
Aaron Klotz 6e33d2da2b ipn/ipnauth, util/winutil: add temporary LookupPseudoUser workaround to address os/user.LookupId errors on Windows
I added util/winutil/LookupPseudoUser, which essentially consists of the bits
that I am in the process of adding to Go's standard library.

We check the provided SID for "S-1-5-x" where 17 <= x <= 20 (the
known pseudo-users) and then manually populate an os/user.User struct with
the correct information.

Fixes https://github.com/tailscale/tailscale/issues/869
Fixes https://github.com/tailscale/tailscale/issues/2894

Signed-off-by: Aaron Klotz <aaron@tailscale.com>
2022-11-28 15:53:34 -06:00
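
A simplified sketch of the check described above, using a hard-coded table of the well-known pseudo-user SIDs; the real winutil.LookupPseudoUser resolves account names through the Windows APIs rather than a table.

    package winutilsketch

    import (
        "fmt"
        "os/user"
    )

    // lookupPseudoUser recognizes the well-known pseudo-user SIDs
    // (S-1-5-17 through S-1-5-20) and populates an os/user.User by hand
    // instead of calling user.LookupId.
    func lookupPseudoUser(sid string) (*user.User, error) {
        // Display names are hard-coded for this sketch only.
        names := map[string]string{
            "S-1-5-17": `NT AUTHORITY\IUSR`,
            "S-1-5-18": `NT AUTHORITY\SYSTEM`,
            "S-1-5-19": `NT AUTHORITY\LOCAL SERVICE`,
            "S-1-5-20": `NT AUTHORITY\NETWORK SERVICE`,
        }
        name, ok := names[sid]
        if !ok {
            return nil, fmt.Errorf("%q is not a recognized pseudo-user SID", sid)
        }
        return &user.User{
            Uid:      sid,
            Gid:      sid, // pseudo-users serve as their own primary group
            Username: name,
            Name:     name,
        }, nil
    }
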
Brad Fitzpatrick ea25ef8236 util/set: add new set package for SetHandle type
We use this pattern in a number of places (in this repo and elsewhere),
and I was about to add a fourth to this repo, which was crossing the line.
Add this type instead so they're all the same.

Also, we have another Set type (SliceSet, which tracks its keys in
order) in another repo we can move to this package later.

Change-Id: Ibbdcdba5443fae9b6956f63990bdb9e9443cefa9
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2022-11-28 10:44:17 -08:00
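
The pattern being consolidated is roughly the following (hypothetical names; the actual util/set API may differ): a map keyed by an opaque handle, where adding an element returns the handle later used to remove it.

    package setsketch

    // Handle is an opaque, comparable key for an element added to a HandleSet.
    // Using a fresh pointer gives every Add call a unique identity.
    type Handle struct{ v *byte }

    // HandleSet holds values keyed by the Handle returned from Add; callers
    // keep the Handle and later delete(set, handle) to remove the entry.
    type HandleSet[T any] map[Handle]T

    // Add inserts e and returns the Handle that identifies it.
    func (s *HandleSet[T]) Add(e T) Handle {
        h := Handle{new(byte)}
        if *s == nil {
            *s = make(map[Handle]T)
        }
        (*s)[h] = e
        return h
    }
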
Andrew Dunham 0af61f7c40 cmd/tailscale, util/quarantine: set quarantine flags on files from Taildrop
This sets the "com.apple.quarantine" flag on macOS, and the
"Zone.Identifier" alternate data stream on Windows.

Change-Id: If14f805467b0e2963067937d7f34e08ba1d1fa85
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
2022-11-17 15:06:02 -05:00
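
For reference, the Windows half of this boils down to writing the well-known Zone.Identifier alternate data stream next to the file; a minimal sketch (the real util/quarantine package also handles the macOS com.apple.quarantine xattr):

    //go:build windows

    package quarantinesketch

    import "os"

    // markFromInternet applies the Windows "mark of the web": a Zone.Identifier
    // alternate data stream with ZoneId=3 (Internet zone), which makes Explorer
    // and SmartScreen treat the file as downloaded content.
    func markFromInternet(path string) error {
        return os.WriteFile(path+":Zone.Identifier", []byte("[ZoneTransfer]\r\nZoneId=3\r\n"), 0o644)
    }
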
ysicing cba1312dab util/endian: add support on Loongnix-Server (loong64)
Change-Id: I8275d158779c749a4cae2b4457d390871b4b8e3c
Signed-off-by: ysicing <i@ysicing.me>
2022-11-08 08:24:15 -08:00
Brad Fitzpatrick db2cc393af util/dirwalk, metrics, portlist: add new package for fast directory walking
This is similar to the golang.org/x/tools/internal/fastwalk I'd
previously written, but it is not recursive and it uses mem.RO.

The metrics package already had some Linux-specific directory reading
code in it. Move that out to a new general package that can be reused
by portlist too, which helps its scanning of all /proc files:

    name                old time/op    new time/op    delta
    FindProcessNames-8    2.79ms ± 6%    2.45ms ± 7%  -12.11%  (p=0.000 n=10+10)

    name                old alloc/op   new alloc/op   delta
    FindProcessNames-8    62.9kB ± 0%    33.5kB ± 0%  -46.76%  (p=0.000 n=9+10)

    name                old allocs/op  new allocs/op  delta
    FindProcessNames-8     2.25k ± 0%     0.38k ± 0%  -82.98%  (p=0.000 n=9+10)

Change-Id: I75db393032c328f12d95c39f71c9742c375f207a
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2022-11-05 16:26:51 -07:00
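
The shape of the fast walk is roughly: open the directory once and stream names in batches through a callback instead of building a sorted []DirEntry per directory. A sketch under those assumptions (the real util/dirwalk API uses mem.RO and platform-specific readdir):

    package dirwalksketch

    import (
        "io"
        "os"
    )

    // walkShallow is a non-recursive directory walk: it reads names in batches
    // and hands each one to fn, avoiding a full per-directory slice allocation.
    func walkShallow(dir string, fn func(name string) error) error {
        f, err := os.Open(dir)
        if err != nil {
            return err
        }
        defer f.Close()
        for {
            names, err := f.Readdirnames(256) // batch size bounds memory use
            for _, name := range names {
                if cbErr := fn(name); cbErr != nil {
                    return cbErr
                }
            }
            if err == io.EOF {
                return nil
            }
            if err != nil {
                return err
            }
        }
    }
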
Brad Fitzpatrick da8def8e13 all: remove old +build tags
The //go:build syntax was introduced in Go 1.17:

https://go.dev/doc/go1.17#build-lines

gofmt has kept the +build and go:build lines in sync since
then, but enough time has passed. Time to remove them.

Done with:

    perl -i -npe 's,^// \+build.*\n,,' $(git grep -l -F '+build')

Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2022-11-04 07:25:42 -07:00
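
For context, these are the two constraint forms that gofmt had been keeping in sync; after this change only the //go:build line remains:

    //go:build linux && amd64
    // +build linux,amd64
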
Adrian Dewhurst c3a5489e72 util/winutil: remove log spam for missing registry keys
It's normal for HKLM\SOFTWARE\Policies\Tailscale to not exist but that
currently produces a lot of log spam.

Signed-off-by: Adrian Dewhurst <adrian@tailscale.com>
2022-11-03 18:55:39 -04:00
Brad Fitzpatrick e55ae53169 tailcfg: add Node.UnsignedPeerAPIOnly to let server mark node as peerapi-only
capver 48

Change-Id: I20b2fa81d61ef8cc8a84e5f2afeefb68832bd904
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2022-11-02 21:55:04 -07:00
Mihai Parparita 63ad49890f cmd/tsconnect: pre-compress main.wasm when building the NPM package
This way we can do that once (out of band, in the GitHub action),
instead of increasing the time of each deploy that uses the package.

.wasm is removed from the list of automatically pre-compressed
extensions; an OSS bump and a small change on the corp side are needed to
make use of this change.

Signed-off-by: Mihai Parparita <mihai@tailscale.com>
2022-10-14 15:08:06 -07:00
Cuong Manh Le a7efc7bd17 util/singleflight: sync with upstream
Sync with golang.org/x/sync/singleflight at commit
8fcdb60fdcc0539c5e357b2308249e4e752147f1

Fixes #5790

Signed-off-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
2022-09-30 06:55:04 -07:00
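
Typical usage collapses concurrent calls with the same key into a single execution, in the style of the upstream golang.org/x/sync/singleflight API that this package tracks (a sketch; the util/singleflight fork is generic, so exact signatures may differ):

    package fetchsketch

    import "golang.org/x/sync/singleflight"

    var group singleflight.Group

    // load collapses concurrent lookups of the same key into one call to fetch;
    // duplicate callers block and receive the shared result and error.
    func load(key string, fetch func() (any, error)) (any, error) {
        v, err, _ := group.Do(key, fetch)
        return v, err
    }
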
Josh Soref d4811f11a0 all: fix spelling mistakes
Signed-off-by: Josh Soref <2119212+jsoref@users.noreply.github.com>
2022-09-29 13:36:13 -07:00
Aaron Klotz 44f13d32d7 cmd/tailscaled, util/winutil: log Windows service diagnostics when the wintun device fails to install
I added new functions to winutil to obtain the state of a service and all
its dependencies, serialize them to JSON, and write them to a Logf.

When tstun.New returns a wrapped ERROR_DEVICE_NOT_AVAILABLE, we know that wintun
installation failed. We then log the service graph rooted at "NetSetupSvc".
We are interested in that specific service because network devices will not
install if that service is not running.

Updates https://github.com/tailscale/tailscale/issues/5531

Signed-off-by: Aaron Klotz <aaron@tailscale.com>
2022-09-28 16:09:10 -06:00
Andrew Dunham 4b996ad5e3
util/deephash: add AppendSum method (#5768)
This method can be used to obtain the hex-formatted deephash.Sum
instance without allocations.

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
2022-09-27 18:22:31 -04:00
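
A sketch of the allocation-free hex append; the Sum layout and names here are hypothetical stand-ins for the deephash types.

    package deephashsketch

    import "encoding/hex"

    // Sum stands in for deephash.Sum in this sketch.
    type Sum struct{ sum [32]byte }

    // AppendSum appends the hex form of s to b and returns the extended slice,
    // avoiding the intermediate string that fmt.Sprintf("%x", ...) would allocate.
    func AppendSum(b []byte, s *Sum) []byte {
        var buf [64]byte // 2 hex digits per byte of s.sum
        hex.Encode(buf[:], s.sum[:])
        return append(b, buf[:]...)
    }
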
Brad Fitzpatrick 9bdf0cd8cd ipn/ipnlocal: add c2n /debug/{goroutines,prefs,metrics}
* and move goroutine scrubbing code to its own package for reuse
* bump capver to 45

Change-Id: I9b4dfa5af44d2ecada6cc044cd1b5674ee427575
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2022-09-26 11:16:38 -07:00
Aaron Klotz acc7baac6d tailcfg, util/deephash: add DataPlaneAuditLogID to Node and DomainDataPlaneAuditLogID to MapResponse
We're adding two log IDs to facilitate data-plane audit logging: a node-specific
log ID, and a domain-specific log ID.

Updated util/deephash/deephash_test.go with revised expectations for tailcfg.Node.

Updates https://github.com/tailscale/corp/issues/6991

Signed-off-by: Aaron Klotz <aaron@tailscale.com>
2022-09-22 17:18:28 -06:00
Brad Fitzpatrick f3ce1e2536 util/mak: deprecate NonNil, add type-safe NonNilSliceForJSON, NonNilMapForJSON
And put the rationale in the name too, to save callers the need for a comment.

Change-Id: I090f51b749a5a0641897ee89a8fb2e2080c8b782
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2022-09-10 12:19:22 -07:00
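
The idea, in sketch form (the actual util/mak signatures may differ): replace a nil slice or map with an empty one so it encodes as [] or {} rather than null.

    package maksketch

    // nonNilSliceForJSON returns s, or an empty slice if s is nil, so that it
    // marshals to [] rather than null.
    func nonNilSliceForJSON[T any](s []T) []T {
        if s == nil {
            return []T{}
        }
        return s
    }

    // nonNilMapForJSON returns m, or an empty map if m is nil, so that it
    // marshals to {} rather than null.
    func nonNilMapForJSON[K comparable, V any](m map[K]V) map[K]V {
        if m == nil {
            return map[K]V{}
        }
        return m
    }
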
Andrew Dunham 70ed22ccf9
util/uniq: add ModifySliceFunc (#5504)
Follow-up to #5491. This was used in control... oops!

Signed-off-by: Andrew Dunham <andrew@tailscale.com>
2022-08-30 18:51:18 -04:00
Andrew Dunham e945d87d76
util/uniq: use generics instead of reflect (#5491)
This takes 75% less time per operation according to some benchmarks on my Mac.

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
2022-08-30 17:56:51 -04:00
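
A minimal sketch of the reflect-free approach: a type parameter constrained to comparable lets the comparison compile to a direct ==, with no reflect.Value traffic (hypothetical name; the real util/uniq helpers may differ in detail).

    package uniqsketch

    // modifySlice removes adjacent duplicate elements from *s in place,
    // reusing the existing backing array.
    func modifySlice[E comparable](s *[]E) {
        out := (*s)[:0]
        for i, v := range *s {
            if i == 0 || v != out[len(out)-1] {
                out = append(out, v)
            }
        }
        *s = out
    }
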
Joe Tsai 7e40071571
util/deephash: handle slice edge-cases (#5471)
It is unclear whether the lack of checking nil-ness of slices
was an oversight or a deliberate feature.
Lacking a comment, the assumption is that this was an oversight.

Also, expand the logic to perform cycle detection for recursive slices.
We do this on a per-element basis since a slice is semantically
equivalent to a list of pointers.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-30 00:33:18 -07:00
Andrew Dunham 58cc049a9f
util/cstruct: add package for decoding padded C structures (#5429)
I was working on my "dump iptables rules using only syscalls" branch and
had a bunch of C structure decoding to do. Rather than manually
calculating the padding or using unsafe trickery to cast
variable-length structures to Go types, I'd prefer a helper package
that deals with the padding for me.

Padding rules were taken from the following article:
  http://www.catb.org/esr/structure-packing/

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
2022-08-28 11:12:09 -04:00
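
The core of such a package is tracking an offset and rounding it up to each field's alignment before reading; a sketch of that idea (not the util/cstruct API):

    package cstructsketch

    import "encoding/binary"

    // decoder reads little-endian fields out of a C-struct layout, aligning
    // the offset to each field's natural size first, the way a C compiler
    // would have padded it.
    type decoder struct {
        buf []byte
        off int
    }

    func (d *decoder) align(n int) {
        if rem := d.off % n; rem != 0 {
            d.off += n - rem
        }
    }

    func (d *decoder) Uint16() uint16 {
        d.align(2)
        v := binary.LittleEndian.Uint16(d.buf[d.off:])
        d.off += 2
        return v
    }

    func (d *decoder) Uint32() uint32 {
        d.align(4)
        v := binary.LittleEndian.Uint32(d.buf[d.off:])
        d.off += 4
        return v
    }

Decoding a layout such as struct { uint16_t a; uint32_t b; }, for instance, skips the two padding bytes the compiler inserts before b.
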
Joe Tsai 9bf13fc3d1
util/deephash: remove getTypeInfo (#5469)
Add a new lookupTypeHasher function that is just a cached front-end
around the makeTypeHasher function.
We do not need to worry about the recursive type cycle issue that
made getTypeInfo more complicated since makeTypeHasher
is not directly recursive. All calls to itself happen lazily
through a sync.Once upon first use.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-27 17:39:51 -07:00
Joe Tsai ab7e6f3f11
util/deephash: require pointer in API (#5467)
The entry logic of Hash has extra complexity to make sure
we always have an addressable value on hand.
If not, we heap allocate the input.
For this reason we document that there are performance benefits
to always providing a pointer.
Rather than documenting this, just enforce it through generics.

Also, delete the unused HasherForType function.
It's an interesting use of generics, but not well tested.
We can resurrect it from code history if there's a need for it.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-27 16:08:31 -07:00
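
In sketch form, enforcing the pointer through generics looks like the following; only the shape of the signature matters here, and the body is a placeholder rather than the real algorithm.

    package deephashapisketch

    import (
        "crypto/sha256"
        "fmt"
    )

    // Sum stands in for deephash.Sum in this sketch.
    type Sum [sha256.Size]byte

    // Hash takes *T rather than any: callers must pass a pointer, so the value
    // is always addressable and no defensive heap copy is needed on entry.
    func Hash[T any](v *T) Sum {
        // Placeholder body; the real implementation walks *v reflectively.
        return Sum(sha256.Sum256([]byte(fmt.Sprintf("%#v", *v))))
    }
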
Joe Tsai c5b1565337
util/deephash: move pointer and interface logic to separate function (#5465)
This helps pprof better identify which Go kinds take the most time
since the kind is always in the function name.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-27 15:51:34 -07:00
Joe Tsai d2e2d8438b
util/deephash: move map logic to separate function (#5464)
This helps pprof better identify which Go kinds take the most time
since the kind is always in the function name.

There is a minor adjustment where we hash the length of the map,
to err on the side of caution.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-27 15:49:26 -07:00
Joe Tsai 23c3831ff9
util/deephash: coalesce struct logic (#5466)
Rather than having two copies of []fieldInfo,
just maintain one and perform the merging in the same pass.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-27 15:39:46 -07:00
Joe Tsai 296b008b9f
util/deephash: move array and slice logic to separate function (#5463)
This helps pprof better identify which Go kinds take the most time
since the kind is always in the function name.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-27 15:37:36 -07:00
Joe Tsai 31bf3874d6
util/deephash: use unsafe.Pointer instead of reflect.Value (#5459)
Use of reflect.Value.SetXXX panics if the provided argument was
obtained from an unexported struct field.
Instead, pass an unsafe.Pointer around and convert to a
reflect.Value when necessary (i.e., for maps and interfaces).
Converting from unsafe.Pointer to reflect.Value guarantees that
none of the read-only bits will be populated.

When running in race mode, we attach type information to the pointer
so that we can type check every pointer operation.
This also type-checks that direct memory hashing is within
the valid range of a struct value.

We add test cases that previously caused deephash to panic,
but now pass.

Performance:

	name              old time/op    new time/op    delta
	Hash              14.1µs ± 1%    14.1µs ± 1%    ~     (p=0.590 n=10+9)
	HashPacketFilter  2.53µs ± 2%    2.44µs ± 1%  -3.79%  (p=0.000 n=9+10)
	TailcfgNode       1.45µs ± 1%    1.43µs ± 0%  -1.36%  (p=0.000 n=9+9)
	HashArray         318ns ± 2%     318ns ± 2%    ~      (p=0.541 n=10+10)
	HashMapAcyclic    32.9µs ± 1%    31.6µs ± 1%  -4.16%  (p=0.000 n=10+9)

There is a slight performance gain due to the use of unsafe.Pointer
over reflect.Value methods. Also, passing an unsafe.Pointer (1 word)
on the stack is cheaper than passing a reflect.Value (3 words).

Performance gains are diminishing since SHA-256 hashing now dominates the runtime.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-27 12:30:35 -07:00
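
For reference, the conversion back in that direction is done with reflect.NewAt, which produces a Value free of the read-only flag that values pulled out of unexported fields carry; a sketch of the technique:

    package reflectsketch

    import (
        "reflect"
        "unsafe"
    )

    // valueAt reconstructs a reflect.Value of type t from a raw pointer to its
    // data. Because the Value is derived from a pointer rather than from a
    // struct field, none of the read-only bits are set, so it can be used
    // freely for map and interface operations.
    func valueAt(t reflect.Type, p unsafe.Pointer) reflect.Value {
        return reflect.NewAt(t, p).Elem()
    }
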
Joe Tsai e0c5ac1f02
util/deephash: add debug printer (#5460)
When built with "deephash_debug", print the set of HashXXX methods.

Example usage:

	$ go test -run=GetTypeHasher/string_slice -tags=deephash_debug
	U64(2)+U64(3)+S("foo")+U64(3)+S("bar")+FIN

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-27 12:14:07 -07:00
Joe Tsai 70f9fc8c7a
util/deephash: rely on direct memory hashing for primitive kinds (#5457)
Rather than separate functions to hash each kind,
just rely on the fact that these are direct memory hashable,
thus simplifying the code.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-26 20:50:56 -07:00
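
A sketch of what direct memory hashing of a primitive amounts to: view the value's bytes in place and feed them to the hash, so no kind-specific function is needed (assumes unsafe.Slice, available since Go 1.17).

    package memhashsketch

    import (
        "hash"
        "unsafe"
    )

    // hashMem writes the size bytes starting at p straight into h, which is all
    // that is needed for fixed-width kinds such as int64, float64, or [N]byte.
    func hashMem(h hash.Hash, p unsafe.Pointer, size uintptr) {
        h.Write(unsafe.Slice((*byte)(p), size))
    }
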
Joe Tsai 531ccca648
util/deephash: delete slow path (#5423)
Every implementation of typeHasherFunc always returns true,
which implies that the slow path is no longer executed.
Delete it.

h.hashValueWithType(v, ti, ...) is deleted as it is equivalent to:
	ti.hasher()(h, v)

h.hashValue(v, ...) is deleted as it is equivalent to:
	ti := getTypeInfo(v.Type())
	ti.hasher()(h, v)

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-26 17:46:22 -07:00
Joe Tsai 3fc8683585
util/deephash: expand fast-path capabilities (#5404)
Add support for maps and interfaces to the fast path.
Add cycle-detection to the pointer handling logic.
This logic is mostly copied from the slow path.

A future commit will delete the slow path once
the fast path never falls back to the slow path.

Performance:

	name                 old time/op    new time/op    delta
	Hash-24                18.5µs ± 1%    14.9µs ± 2%  -19.52%  (p=0.000 n=10+10)
	HashPacketFilter-24    2.54µs ± 1%    2.60µs ± 1%   +2.19%  (p=0.000 n=10+10)
	HashMapAcyclic-24      31.6µs ± 1%    30.5µs ± 1%   -3.42%  (p=0.000 n=9+8)
	TailcfgNode-24         1.44µs ± 2%    1.43µs ± 1%     ~     (p=0.171 n=10+10)
	HashArray-24            324ns ± 1%     324ns ± 2%     ~     (p=0.425 n=9+9)

The additional cycle-detection logic doesn't incur much of a slowdown
since it only activates if a type is recursive, which does not apply
to any of the types that we care about.

There is a notable performance boost since we switch from the fast path
to the slow path less often. Most notably, a struct with a field that
could not be handled by the fast path would previously cause
the entire struct to go through the slow path.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-24 01:31:01 -07:00
Tom DNetto 18edd79421 control/controlclient,tailcfg: [capver 40] create KeySignature field in tailcfg.Node
We carve out a space to put the node-key signature (used on tailnets where network lock is enabled).

Signed-off-by: Tom DNetto <tom@tailscale.com>
2022-08-22 11:25:41 -07:00
Joe Tsai d32700c7b2
util/deephash: specialize for netip.Addr and drop AppendTo support (#5402)
There are 5 types that we care about that implement AppendTo:

	key.DiscoPublic
	key.NodePublic
	netip.Prefix
	netipx.IPRange
	netip.Addr

The key types are thin wrappers around [32]byte and are memory hashable.
The netip.Prefix and netipx.IPRange types are thin wrappers over netip.Addr
and are hashable by default if netip.Addr is hashable.
The netip.Addr type is the only one with a complex structure where
the default behavior of deephash does not hash it correctly due to the presence
of the intern.Value type.

Drop support for AppendTo and instead add specialized hashing for netip.Addr
that would be semantically equivalent to == on the netip.Addr values.

The AppendTo support was already broken prior to this change.
It was fully removed (intentionally or not) in #4870.
It was partially restored in #4858 for the fast path,
but still broken in the slow path.
Just drop support for it altogether.

This does mean we lack any ability for types to hash themselves.
In the future we can add support for types that implement:

	interface { DeepHash() Sum }

Test and fuzz cases were added for the relevant types that
used to rely on the AppendTo method.
FuzzAddr has been executed on 1 billion samples without issues.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-18 22:54:56 -07:00
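
A sketch of hashing netip.Addr in a way that is semantically equivalent to ==: fold in the bit length (which distinguishes an IPv4 address from its 4-in-6 form), the 16-byte representation, and the zone. The real deephash specialization may encode these differently.

    package addrhashsketch

    import (
        "encoding/binary"
        "hash"
        "net/netip"
    )

    // hashAddr writes the same bytes for two addresses if and only if they
    // compare equal with ==.
    func hashAddr(h hash.Hash, a netip.Addr) {
        var bitLen [2]byte
        binary.LittleEndian.PutUint16(bitLen[:], uint16(a.BitLen())) // 0, 32, or 128
        h.Write(bitLen[:])
        b := a.As16()
        h.Write(b[:])
        h.Write([]byte(a.Zone())) // IPv6 zone, if any
    }
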
Joe Tsai 03f7e4e577
util/hashx: move from sha256x (#5388)
2022-08-16 13:15:33 -07:00
Joe Tsai f061d20c9d
util/sha256x: rename Hash as Block512 (#5351)
Rename Hash as Block512 to indicate that this is a general-purpose
hash.Hash for any algorithm that operates on 512-bit block sizes.

While we rename the package as hashx in this commit,
a subsequent commit will move the sha256x package to hashx.
This is done separately to avoid confusing git.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-16 09:49:48 -07:00
Joe Tsai 44d62b65d0
util/deephash: move typeIsRecursive and canMemHash to types.go (#5386)
Also, rename canMemHash to typeIsMemHashable to be consistent.
There are zero changes to the semantics.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-16 09:31:19 -07:00
Joe Tsai d53eb6fa11
util/deephash: simplify typeIsRecursive (#5385)
Any type that is memory hashable must not be recursive since
there are definitely no pointers involved to make a cycle.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-15 23:29:06 -07:00
Joe Tsai 23ec3c104a
util/deephash: remove unused stack slice in typeIsRecursive (#5363)
No operation ever reads from this variable.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-15 22:36:08 -07:00
Joe Tsai c200229f9e
util/deephash: simplify canMemHash (#5384)
Put the t.Size() == 0 check first since this is applicable in all cases.
Drop the last struct field conditional since this is covered by the
sumFieldSize check at the end.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-15 22:22:18 -07:00
Joe Tsai 32a1a3d1c0
util/deephash: avoid variadic argument for Update (#5372)
Hashing []any is slow since hashing of interfaces is slow.
Hashing of interfaces is slow since we pessimistically assume
that cycles can occur through them and start cycle tracking.

Drop the variadic signature of Update and fix callers to pass in
an anonymous struct so that we are hashing concrete types
near the root of the value tree.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-15 11:22:28 -07:00
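
In caller terms the change looks roughly like the following fragment (hypothetical caller; variable and field names are illustrative):

    // Before: variadic, so the values were boxed into a []any at the root,
    // forcing interface hashing and cycle tracking:
    //
    //	deephash.Update(&lastSum, hostname, port)
    //
    // After: the same values wrapped in an anonymous struct, so the root of
    // the hashed tree is a concrete type:
    deephash.Update(&lastSum, struct {
        Hostname string
        Port     uint16
    }{hostname, port})
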
Maisem Ali 545639ee44 util/winutil: consolidate interface specific registry keys
Code movement to allow reuse in a follow up PR.

Updates #1659

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2022-08-14 22:34:59 -07:00
Joe Tsai 548fa63e49
util/deephash: use binary encoding of time.Time (#5352)
Formatting a time.Time as RFC3339 is slow.
See https://go.dev/issue/54093

Now that we have efficient hashing of fixed-width integers,
just hash the time.Time as a binary value.

Performance:

	name                   old time/op    new time/op    delta
	Hash-24                19.0µs ± 1%    18.6µs ± 1%   -2.03%  (p=0.000 n=10+9)
	TailcfgNode-24         1.79µs ± 1%    1.40µs ± 1%  -21.74%  (p=0.000 n=10+9)

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-12 14:42:51 -07:00
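
A sketch of hashing time.Time as a fixed-width binary value instead of an RFC 3339 string; this particular encoding (Unix seconds, nanoseconds, zone offset) is an assumption of the sketch, not necessarily what deephash emits.

    package timehashsketch

    import (
        "encoding/binary"
        "hash"
        "time"
    )

    // hashTime writes a fixed 16-byte encoding of t: Unix seconds, nanoseconds,
    // and the UTC offset, avoiding the cost of formatting a string.
    func hashTime(h hash.Hash, t time.Time) {
        _, offset := t.Zone()
        var buf [16]byte
        binary.LittleEndian.PutUint64(buf[0:8], uint64(t.Unix()))
        binary.LittleEndian.PutUint32(buf[8:12], uint32(t.Nanosecond()))
        binary.LittleEndian.PutUint32(buf[12:16], uint32(offset))
        h.Write(buf[:])
    }
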
Joe Tsai 1f7479466e
util/deephash: use sha256x (#5339)
Switch deephash to use sha256x.Hash.

We add sha256x.HashString to efficiently hash a string.
It uses unsafe under the hood to convert a string to a []byte.
We also modify sha256x.Hash to export the underlying hash.Hash
for testing purposes so that we can intercept all hash.Hash calls.

Performance:

	name                 old time/op    new time/op    delta
	Hash-24                19.8µs ± 1%    19.2µs ± 1%  -3.01%  (p=0.000 n=10+10)
	HashPacketFilter-24    2.61µs ± 0%    2.53µs ± 1%  -3.01%  (p=0.000 n=8+10)
	HashMapAcyclic-24      31.3µs ± 1%    29.8µs ± 0%  -4.80%  (p=0.000 n=10+9)
	TailcfgNode-24         1.83µs ± 1%    1.82µs ± 2%    ~     (p=0.305 n=10+10)
	HashArray-24            344ns ± 2%     323ns ± 1%  -6.02%  (p=0.000 n=9+10)

The performance gain is not as dramatic as sha256x over sha256 because:
1. most of the hashing already occurs through the direct memory hashing logic, and
2. whatever does not go through direct memory hashing is slowed down by reflect.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-11 17:44:09 -07:00
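
The zero-copy string hashing mentioned above can be sketched as below, using unsafe.StringData (Go 1.20+) to view the string's bytes without copying; the actual sha256x.HashString predates that helper and may also write a length prefix.

    package hashstringsketch

    import (
        "hash"
        "unsafe"
    )

    // writeString hashes the bytes of s without allocating or copying them.
    // The resulting slice aliases the string and must not be modified.
    func writeString(h hash.Hash, s string) {
        if len(s) == 0 {
            return
        }
        h.Write(unsafe.Slice(unsafe.StringData(s), len(s)))
    }
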
Joe Tsai 77a92f326d
util/deephash: avoid using sync.Pool for reflect.MapIter (#5333)
In Go 1.19, the reflect.Value.MapRange method uses "function outlining"
so that the allocation of reflect.MapIter is inlinable by the caller.
If the iterator doesn't escape the caller, it can be stack allocated.
See https://go.dev/cl/400675

Performance:

	name               old time/op    new time/op    delta
	HashMapAcyclic-24    31.9µs ± 2%    32.1µs ± 1%   ~     (p=0.075 n=10+10)

	name               old alloc/op   new alloc/op   delta
	HashMapAcyclic-24     0.00B          0.00B        ~     (all equal)

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-11 00:33:40 -07:00
Joe Tsai 1c3c6b5382
util/sha256x: make Hash.Sum non-escaping (#5338)
Since Hash is a concrete type, we can make it such that
Sum never causes its input to escape.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-10 17:31:44 -07:00
Joe Tsai 76b0e578c5
util/sha256x: new package (#5337)
The hash.Hash provided by sha256.New is much more efficient
if we always provide it with data in multiples of its block size.
This avoids double-copying of data into the internal block
of sha256.digest.x. Effectively, we manage a block ourselves
to ensure we only ever call hash.Hash.Write with full blocks.

Performance:

	name    old time/op    new time/op    delta
	Hash    33.5µs ± 1%    20.6µs ± 1%  -38.40%  (p=0.000 n=10+9)

The logic has gone through CPU-hours of fuzzing.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-10 15:49:36 -07:00
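
The essence of the optimization is a small staging buffer so that the underlying hash.Hash only ever sees whole 64-byte blocks; a sketch of that idea (finalization of the trailing partial block is omitted, and the real Block512 type has a larger buffer and more methods):

    package blockhashsketch

    import "hash"

    const blockSize = 64 // SHA-256 block size in bytes

    // blocked stages writes in buf and flushes to h only in multiples of
    // blockSize, so the inner digest never copies partial blocks into its
    // own internal buffer.
    type blocked struct {
        h   hash.Hash
        buf [blockSize]byte
        n   int // bytes currently staged in buf
    }

    func (b *blocked) Write(p []byte) {
        // Top up a partially filled staging buffer first.
        if b.n > 0 {
            c := copy(b.buf[b.n:], p)
            b.n += c
            p = p[c:]
            if b.n == blockSize {
                b.h.Write(b.buf[:])
                b.n = 0
            }
        }
        // Pass whole blocks straight through.
        if full := len(p) / blockSize * blockSize; full > 0 {
            b.h.Write(p[:full])
            p = p[full:]
        }
        // Stage the remainder for a later write.
        b.n += copy(b.buf[b.n:], p)
    }
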
Joe Tsai 539c5e44c5
util/deephash: always keep values addressable (#5328)
The logic of deephash is both simpler and easier to reason about
if values are always addressable.

In Go, the composite kinds are slices, arrays, maps, structs,
interfaces, pointers, channels, and functions,
where we define "composite" as a Go value that encapsulates
some other Go value (e.g., a map is a collection of key-value entries).

In the cases of pointers and slices, the sub-values are always addressable.

In the cases of arrays and structs, the sub-values are addressable
if and only if the parent value is addressable.

In the case of maps and interfaces, the sub-values are never addressable.
To make them addressable, we need to copy them onto the heap.

For the purposes of deephash, we do not care about channels and functions.

Non-composite kinds (e.g., strings and ints) are only addressable
if obtained from one of the composite kinds that produce addressable values
(i.e., pointers, slices, addressable arrays, and addressable structs).
A non-addressable, non-composite value can be made addressable by
allocating it on the heap, obtaining a pointer to it, and dereferencing it.

Thus, if we can ensure that values are addressable at the entry points,
and shallow copy sub-values whenever we encounter an interface or map,
then we can ensure that all values are always addressable and
can assume that property throughout all the logic.

Performance:

	name                 old time/op    new time/op    delta
	Hash-24                21.5µs ± 1%    19.7µs ± 1%  -8.29%  (p=0.000 n=9+9)
	HashPacketFilter-24    2.61µs ± 1%    2.62µs ± 0%  +0.29%  (p=0.037 n=10+9)
	HashMapAcyclic-24      30.8µs ± 1%    30.9µs ± 1%    ~     (p=0.400 n=9+10)
	TailcfgNode-24         1.84µs ± 1%    1.84µs ± 2%    ~     (p=0.928 n=10+10)
	HashArray-24            324ns ± 2%     332ns ± 2%  +2.45%  (p=0.000 n=10+10)

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-09 22:00:02 -07:00
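
A small sketch of the "shallow copy onto the heap" step used for interface contents and map entries, which are never addressable in place:

    package addressablesketch

    import "reflect"

    // addressableCopy returns an addressable reflect.Value holding a shallow
    // copy of v: heap-allocate a *T, copy v into it, and work with the element.
    func addressableCopy(v reflect.Value) reflect.Value {
        p := reflect.New(v.Type())
        p.Elem().Set(v)
        return p.Elem()
    }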