Formed in 2009, the Archive Team (not to be confused with the archive.org Archive-It Team) is a rogue archivist collective dedicated to saving copies of rapidly dying or deleted websites for the sake of history and digital heritage. The group is composed entirely of volunteers and interested parties, and has expanded into a large number of related projects for saving online and digital history.
History is littered with hundreds of conflicts over the future of a community, group, location or business that were "resolved" when one of the parties stepped ahead and destroyed what was there. With the original point of contention destroyed, the debates would fall to the wayside. Archive Team believes that by duplicating condemned data, the conversation and debate can continue, as can the richness and insight gained by keeping the materials. Our projects have ranged in size from a single volunteer downloading the data of a small-but-critical site to over 100 volunteers stepping forward to acquire terabytes of user-created data to save for future generations.
The main site for Archive Team is at archiveteam.org and contains up-to-date information on various projects, manifestos, plans and walkthroughs.
This collection contains the output of many Archive Team projects, both ongoing and completed. Thanks to the Internet Archive's generous provision of disk space, multi-terabyte datasets can be made available here and put to use by the Wayback Machine, providing a path back to lost websites and work.
Our collection has grown to the point of having sub-collections for the type of data we acquire. If you are seeking to browse the contents of these collections, the Wayback Machine is the best first stop. Otherwise, you are free to dig into the stacks to see what you may find.
The Archive Team Panic Downloads are full pulldowns of currently extant websites, meant to serve as emergency backups for sites that are in danger of closing or that would be dearly missed if suddenly lost to hard drive crashes or server failures.
ArchiveBot is an IRC bot designed to automate the archival of smaller websites (e.g. up to a few hundred thousand URLs). You give it a URL to start at, and it grabs all content under that URL, records it in a WARC, and then uploads that WARC to ArchiveTeam servers for eventual injection into the Internet Archive (or other archive sites).
To use ArchiveBot, drop by #archivebot on EFNet. You interact with ArchiveBot by typing commands into the channel; note that you will need channel operator permissions in order to issue archiving jobs. The dashboard shows the sites currently being downloaded.
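For example, once you have the necessary permissions, a job is typically started by giving the bot a seed URL in the channel, roughly along these lines (example.com is a placeholder; see the Archive Team wiki for the current command set):

!archive https://example.com/

ArchiveBot then crawls everything under that URL, writes it to a WARC, and reports progress in the channel and on the dashboard.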
"Crawler" is a generic term for any program (such as a robot or spider) that is used to
automatically discover and scan websites by following links from one webpage to another. Google's main crawler is called Googlebot. This table lists information about the common Google crawlers you may see in your referrer logs, and how to specify them in robots.txt, the robots meta tags, and the X-Robots-Tag HTTP directives.
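As a brief illustration of those three mechanisms for a single crawler (the /nogooglebot/ path and the noindex directive here are only examples), a rule for Googlebot can be written in robots.txt:

User-agent: Googlebot
Disallow: /nogooglebot/

as a robots meta tag inside a page's <head>:

<meta name="googlebot" content="noindex">

or as an X-Robots-Tag HTTP response header:

X-Robots-Tag: googlebot: noindex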
The following table shows the crawlers used by various products and services at Google:
The user agent token is used in the User-agent: line in robots.txt to match a crawler type when writing crawl rules for your site. Some crawlers have more than one token, as shown in the table; you need to match only one crawler token for a rule to apply. This list is not complete, but covers most of the crawlers you might see on your website.
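For instance, a crawl rule keyed to the Storebot-Google token from the table below might look like this (the /checkout/ path is purely illustrative):

User-agent: Storebot-Google
Disallow: /checkout/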
The full user agent string is a full description of the crawler, and appears in the request and your web logs.
AdsBot Mobile Web
User agent token: AdsBot-Google-Mobile
Full user agent string:
Mozilla/5.0 (iPhone; CPU iPhone OS 9_1 like Mac OS X) AppleWebKit/601.1.46 (KHTML, like Gecko) Version/9.0 Mobile/13B143 Safari/601.1 (compatible; AdsBot-Google-Mobile; +http://www.google.com/mobile/adsbot.html)

Web Light
User agent token: googleweblight
Full user agent string:
Mozilla/5.0 (Linux; Android 4.2.1; en-us; Nexus 5 Build/JOP40D) AppleWebKit/535.19 (KHTML, like Gecko; googleweblight) Chrome/38.0.1025.166 Mobile Safari/535.19

Google StoreBot
User agent token: Storebot-Google
Full user agent strings:
Desktop agent:
Mozilla/5.0 (X11; Linux x86_64; Storebot-Google/1.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36
Mobile agent:
Mozilla/5.0 (Linux; Android 8.0; Pixel 2 Build/OPD3.170816.012; Storebot-Google/1.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Mobile Safari/537.36
User agents in robots.txt
Where several user agents are recognized in the robots.txt file, Google will follow the most specific. If you want all of Google to be able to crawl your pages, you don't need a robots.txt file at all. If you want to block or allow all of Google's crawlers access to some of your content, you can do so by specifying Googlebot as the user agent. For example, if you want all your pages to appear in Google Search, and if you want AdSense ads to appear on your pages, you don't need a robots.txt file. Similarly, if you want to block some pages from Google altogether, blocking the Googlebot user agent will also block all of Google's other user agents.
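As a sketch of that last case, a robots.txt group naming Googlebot is enough to keep Google out of a hypothetical /private/ section (the path is illustrative):

User-agent: Googlebot
Disallow: /private/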
But if you want more fine-grained control, you can get more specific. For example, you might want all your pages to appear in Google Search, but you don't want images in your personal directory to be crawled. In this case, use robots.txt to disallow the Googlebot-Image user agent from crawling the files in your personal directory (while allowing Googlebot to crawl all files), like this:
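User-agent: Googlebot-Image
Disallow: /personal

(Here /personal stands in for whatever your personal directory is called; since Googlebot has no group of its own, it remains free to crawl all files.)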
To take another example, say that you want ads on all your pages, but you don't want those pages to appear in Google Search. Here, you'd block Googlebot, but allow the Mediapartners-Google user agent, like this:
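User-agent: Googlebot
Disallow: /

User-agent: Mediapartners-Google
Allow: /

(Googlebot is blocked from the whole site, while Mediapartners-Google, the AdSense crawler, can still fetch the pages to determine which ads to show.)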