squirrel

curl https://getsquirrel.schollz.com | bash

Downloading the web can be cumbersome if you end up with thousands or millions of files. This tool downloads websites directly into a file-based SQLite database, since SQLite can be faster than the filesystem for reading and writing many small files.
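The idea can be sketched in a few lines: each downloaded page is stored as a blob keyed by its URL in a single database file, so reads and writes are indexed lookups instead of filesystem operations. This is only an illustration of the concept; the table name `pages` and its columns are assumptions, not squirrel's actual schema.

```python
import sqlite3

# One database file instead of thousands of small files on disk.
# (":memory:" here for a self-contained demo; squirrel uses urls.db.)
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE IF NOT EXISTS pages (url TEXT PRIMARY KEY, body BLOB)")

# Saving a downloaded page is a single INSERT rather than a file write.
con.execute("INSERT OR REPLACE INTO pages VALUES (?, ?)",
            ("https://www.sqlite.org/fasterthanfs.html", b"<html>...</html>"))
con.commit()

# Reading it back is a single indexed lookup by URL.
body, = con.execute("SELECT body FROM pages WHERE url = ?",
                    ("https://www.sqlite.org/fasterthanfs.html",)).fetchone()
```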

Install

Download the latest release for your system, or install a release from the command-line:

$ curl https://getsquirrel.schollz.com | bash

On macOS you can install the latest release with Homebrew:

$ brew install schollz/tap/squirrel

Or, you can install Go and build from source (requires Go 1.11+):

$ go get -v github.com/schollz/squirrel

Usage

Basic usage

squirrel should be compatible with Firefox's "Copy as cURL": just replace curl with squirrel. By default it saves the downloaded data in a SQLite database, urls.db.

$ squirrel "https://www.sqlite.org/fasterthanfs.html" -H "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:66.0) Gecko/20100101 Firefox/66.0" -H "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" -H "Accept-Language: en-US,en;q=0.5" --compressed -H "Referer: https://www.google.com/" -H "Connection: keep-alive" -H "Upgrade-Insecure-Requests: 1" -H "If-Modified-Since: Thu, 02 May 2019 16:25:06 +0000" -H "If-None-Match: \"m5ccb19e2s6076\"" -H "Cache-Control: max-age=0"
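A curl-style command line like the one above is just a URL plus repeated `-H` header flags. As an illustration of how such arguments decompose (this is a hypothetical sketch, not squirrel's actual parser, and it handles only the URL and `-H` flags):

```python
import shlex

def parse_curl_style(cmd):
    """Split a curl/squirrel-style command into (url, headers).

    Simplified sketch: collects -H "Name: value" pairs and treats the
    first non-flag token after the program name as the URL.
    """
    tokens = iter(shlex.split(cmd)[1:])  # skip the program name
    url, headers = None, {}
    for tok in tokens:
        if tok == "-H":
            name, _, value = next(tokens).partition(": ")
            headers[name] = value
        elif not tok.startswith("-") and url is None:
            url = tok
    return url, headers

url, headers = parse_curl_style(
    'squirrel "https://www.sqlite.org/fasterthanfs.html" '
    '-H "Referer: https://www.google.com/" --compressed'
)
# url is "https://www.sqlite.org/fasterthanfs.html"
# headers is {"Referer": "https://www.google.com/"}
```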

Contributing

Pull requests are welcome. Feel free to...

  • Revise documentation
  • Add new features
  • Fix bugs
  • Suggest improvements

Thanks

Thanks Dr. H for the idea.

License

MIT
