From 9c9969aa999cf0c75038ead0cb61218a1636a7bd Mon Sep 17 00:00:00 2001
From: maia tillie arson crimew
Date: Wed, 20 Oct 2021 22:22:59 +0200
Subject: [PATCH] update README

---
 README.md | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/README.md b/README.md
index 61ddff0..bc952f1 100644
--- a/README.md
+++ b/README.md
@@ -1,14 +1,17 @@
 # goop
 
-Yet another tool to dump a git repository from a website. Code structure and console outputs heavily inspired by [arthaud/git-dumper](https://github.com/arthaud/git-dumper).
+Yet another tool to dump a git repository from a website. Original codebase heavily inspired by [arthaud/git-dumper](https://github.com/arthaud/git-dumper).
 
 ## Usage
 
 ```bash
-usage: goop [flags] url [dir]
+Usage:
+  goop [flags] url [DIR]
 
 Flags:
-  -f, --force   overrides DIR if it already exists
-  -h, --help    help for goop
+  -f, --force   overrides DIR if it already exists
+  -h, --help    help for goop
+  -k, --keep    keeps already downloaded files in DIR, useful if you keep being ratelimited by server
+  -l, --list    allows you to supply the name of a file containing a list of domain names instead of just one domain
 ```
 ### Example
@@ -19,16 +22,18 @@ $ goop example.com
 ## Installation
 
 ```bash
-GO111MODULE=on go get -u github.com/deletescape/goop
+go get -u github.com/deletescape/goop@latest
 ```
 
 ## How does it work?
 
 The tool will first check if directory listing is available. If it is, then it will just recursively download the .git directory (what you would do with `wget`).
 
-If directory listing is not available, it will use several methods to find as many files as possible. Step by step, git-dumper will:
+If directory listing is not available, it will use several methods to find as many files as possible. Step by step, goop will:
 * Fetch all common files (`.gitignore`, `.git/HEAD`, `.git/index`, etc.);
 * Find as many refs as possible (such as `refs/heads/master`, `refs/remotes/origin/HEAD`, etc.) by analyzing `.git/HEAD`, `.git/logs/HEAD`, `.git/config`, `.git/packed-refs` and so on;
 * Find as many objects (sha1) as possible by analyzing `.git/packed-refs`, `.git/index`, `.git/refs/*` and `.git/logs/*`;
 * Fetch all objects recursively, analyzing each commits to find their parents;
-* Run `git checkout .` to recover the current working tree
+* Run `git checkout .` to recover the current working tree;
+* Attempt to fetch missing files listed in the git index;
+* Attempt to fetch files listed in .gitignore
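
As a rough illustration of the ref-and-object walk the patched "How does it work?" section describes, here is a minimal sketch; it is not goop's actual code. It reads `.git/HEAD` to find the current ref, resolves that ref to a commit hash, and pulls the zlib-compressed loose object behind it. The target URL and the `fetch` helper are made up for the example, and real repositories also involve packed refs and packfiles, which this sketch ignores.

```go
// Minimal sketch of the ref/object discovery idea; NOT goop's implementation.
package main

import (
	"bytes"
	"compress/zlib"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetch retrieves a single path from the target's exposed .git directory.
func fetch(base, path string) ([]byte, error) {
	resp, err := http.Get(base + "/.git/" + path)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", path, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://example.com" // hypothetical target

	// Step 1: .git/HEAD usually points at the current branch,
	// e.g. "ref: refs/heads/master".
	head, err := fetch(base, "HEAD")
	if err != nil {
		panic(err)
	}
	ref := strings.TrimSpace(strings.TrimPrefix(string(head), "ref: "))

	// Step 2: a loose ref file contains a commit hash (sha1).
	sha, err := fetch(base, ref)
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(sha))

	// Step 3: loose objects live at .git/objects/<first 2 chars>/<rest>,
	// zlib-compressed; the commit body names its tree and parent objects,
	// which can be fetched the same way.
	raw, err := fetch(base, "objects/"+hash[:2]+"/"+hash[2:])
	if err != nil {
		panic(err)
	}
	zr, err := zlib.NewReader(bytes.NewReader(raw))
	if err != nil {
		panic(err)
	}
	defer zr.Close()
	commit, _ := io.ReadAll(zr)
	fmt.Printf("%s -> %s\n%s\n", ref, hash, commit)
}
```

Recursing from the `tree` and `parent` lines of each fetched commit is what turns this into the full object walk the README outlines.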