# goop

Yet another tool to dump a git repository from a website. Unlike other tools, which tend to settle for bare-minimum dumps, goop focuses on producing as-complete-as-possible dumps and on handling as many edge cases as possible. The original codebase was heavily inspired by arthaud/git-dumper.

## Usage

```
Usage:
  goop [flags] url [DIR]

Flags:
  -f, --force   overrides DIR if it already exists
  -h, --help    help for goop
  -k, --keep    keeps already downloaded files in DIR, useful if you keep being ratelimited by server
  -l, --list    allows you to supply the name of a file containing a list of domain names instead of just one domain
```

## Example

```
$ goop example.com
```

## Installation

```
go get -u github.com/deletescape/goop@latest
```

## How does it work?

The tool first checks whether directory listing is available. If it is, it simply downloads the `.git` directory recursively (essentially what you would do with wget).
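For reference, that recursive download amounts to something like the following wget invocation (with example.com standing in for the target host):

```
# mirror the exposed .git directory without ascending to the parent directory
wget --recursive --no-parent http://example.com/.git/
```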

If directory listing is not available, it will use several methods to find as many files as possible. Step by step, goop will:

- Fetch all common files (`.gitignore`, `.git/HEAD`, `.git/index`, etc.);
- Find as many refs as possible (such as `refs/heads/master`, `refs/remotes/origin/HEAD`, etc.) by analyzing `.git/HEAD`, `.git/logs/HEAD`, `.git/config`, `.git/packed-refs` and so on;
- Find as many objects (sha1) as possible by analyzing `.git/packed-refs`, `.git/index`, `.git/refs/*` and `.git/logs/*` (see the first sketch below);
- Fetch all objects recursively, analyzing each commit to find its parents (see the second sketch below);
- Run `git checkout .` to recover the current working tree;
- Attempt to fetch missing files listed in the git index;
- Attempt to create objects for manually fetched files (see the note below);
- Attempt to fetch files listed in `.gitignore`.
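To make the object-discovery step concrete, here is a minimal Go sketch of how candidate object hashes can be harvested from files such as `.git/packed-refs` or `.git/logs/HEAD` and mapped to their loose-object paths on the server. The regex-based scan and the `objectPath` helper are illustrative assumptions, not goop's exact implementation:

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// sha1Re matches any 40-character hex string, the format of git object IDs.
var sha1Re = regexp.MustCompile(`[0-9a-f]{40}`)

// objectPath maps an object ID to its loose-object location,
// e.g. "aabbcc..." -> ".git/objects/aa/bbcc...".
func objectPath(hash string) string {
	return fmt.Sprintf(".git/objects/%s/%s", hash[:2], hash[2:])
}

func main() {
	// scan a ref or log file (e.g. .git/packed-refs) passed as the first argument
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, hash := range sha1Re.FindAllString(string(data), -1) {
		fmt.Println(objectPath(hash))
	}
}
```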
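The recursive fetch step relies on git's loose-object format: objects are zlib-compressed and start with a `<type> <size>\0` header, and a commit body names a `tree` and zero or more `parent` objects, each of which is another candidate to download. A hedged sketch of that parsing (again illustrative, not goop's actual code):

```go
package main

import (
	"bytes"
	"compress/zlib"
	"fmt"
	"io"
	"os"
	"regexp"
)

// refLine matches the "tree"/"parent" lines inside a decompressed commit body.
var refLine = regexp.MustCompile(`(?m)^(tree|parent) ([0-9a-f]{40})$`)

func main() {
	// first argument: path to a downloaded loose object, e.g. .git/objects/aa/bbcc...
	f, err := os.Open(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// loose objects are zlib-compressed
	zr, err := zlib.NewReader(f)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	raw, err := io.ReadAll(zr)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// decompressed form is "<type> <size>\x00<body>"
	nul := bytes.IndexByte(raw, 0)
	if nul < 0 || !bytes.HasPrefix(raw, []byte("commit ")) {
		return // not a commit object
	}
	// each tree/parent hash is another object worth fetching
	for _, m := range refLine.FindAllStringSubmatch(string(raw[nul+1:]), -1) {
		fmt.Println(m[1], m[2])
	}
}
```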
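As for the object-creation step, stock git can write a manually recovered file back into the object database; a minimal example (the file path is hypothetical):

```
git hash-object -w path/to/recovered-file.txt
```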