This guide helps you identify and fix JavaScript issues that may be blocking your page, or
specific content on JavaScript-powered pages, from showing up in Google Search. While
Googlebot does run JavaScript, there are some differences and limitations that you need to
account for when designing your pages and applications to accommodate how crawlers access and
render your content. Our guide on JavaScript SEO basics has
more information on how you can optimize your JavaScript site for Google Search.
Googlebot is designed to be a good citizen of the web. Crawling is its main priority, while
also making sure it doesn't degrade the experience of users visiting the site. Googlebot and
its Web Rendering Service (WRS) component continuously analyze and identify resources that
don't contribute to essential page content and may not fetch such resources. For example,
reporting and error requests, and other similar types of requests, aren't needed to extract
essential page content and may be skipped. Because of this, client-side analytics may not
provide a full or accurate representation of Googlebot and WRS activity on your site. Use
Search Console to monitor Googlebot and WRS activity and feedback on your site.
If you suspect that JavaScript issues might be blocking your page, or specific content on
JavaScript-powered pages, from showing up in Google Search, follow these steps. If you're not sure if JavaScript is the main cause, follow our
general debugging guide to determine the specific
issue.
To test how Google crawls and renders a URL, use the
Mobile-Friendly Test
or the URL Inspection Tool in Search
Console. You can see loaded resources, JavaScript console output and exceptions, rendered
DOM, and more information.
Optionally, we also recommend collecting and auditing JavaScript errors encountered by
users, including Googlebot, on your site to identify potential issues that may affect how
content is rendered.
Show example
Here's an example that shows how to log JavaScript errors that are caught in the
global onerror handler.
window.addEventListener('error', function(e) {
  var errorText = [
    e.message,
    'URL: ' + e.filename,
    'Line: ' + e.lineno + ', Column: ' + e.colno,
    'Stack: ' + (e.error && e.error.stack || '(no stack trace)')
  ].join('\n');

  // Example: log errors as visual output into the host page.
  // Note: you probably don't want to show such errors to users, or
  // have the errors get indexed by Googlebot; however, it may
  // be a useful feature while actively debugging the page.
  var DOM_ID = 'rendering-debug-pre';
  if (!document.getElementById(DOM_ID)) {
    var log = document.createElement('pre');
    log.id = DOM_ID;
    log.style.whiteSpace = 'pre-wrap';
    log.textContent = errorText;
    if (!document.body) document.body = document.createElement('body');
    document.body.insertBefore(log, document.body.firstChild);
  } else {
    document.getElementById(DOM_ID).textContent += '\n\n' + errorText;
  }

  // Example: log the error to a remote service.
  // Note: you can log errors to a remote service, to understand
  // and monitor the types of errors encountered by regular users,
  // Googlebot, and other crawlers.
  var client = new XMLHttpRequest();
  client.open('POST', 'https://example.com/logError');
  client.setRequestHeader('Content-Type', 'text/plain;charset=UTF-8');
  client.send(errorText);
});
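The onerror handler above doesn't catch errors from rejected promises, which are common on pages that load content asynchronously. If you want to record those as well, a minimal sketch, reusing the same example logging endpoint as above:

window.addEventListener('unhandledrejection', function(e) {
  // e.reason may be an Error object or any other rejected value.
  var reason = e.reason;
  var errorText = 'Unhandled promise rejection: ' +
    (reason && reason.stack ? reason.stack : String(reason));

  // Send to the same plain-text logging endpoint used in the onerror example.
  var client = new XMLHttpRequest();
  client.open('POST', 'https://example.com/logError');
  client.setRequestHeader('Content-Type', 'text/plain;charset=UTF-8');
  client.send(errorText);
});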
Make sure to prevent soft 404 errors.
In a single-page application (SPA), this can be especially difficult.
To prevent error pages from being indexed, you can use one or both of the following strategies:
Redirect to a URL where the server responds with a 404 status code.
Add or change the robots meta tag to noindex (a sketch of this approach follows the redirect example below).
Show example
fetch(`https://api.kitten.club/cats/${id}`)
  .then(res => res.json())
  .then((cat) => {
    if (!cat.exists) {
      // redirect to a page that the server serves with a 404 status code
      window.location.href = '/not-found';
    }
  });
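For the robots meta tag strategy, here is a minimal sketch, assuming the same hypothetical cat API as above and that no other robots meta tag is already present on the page:

fetch(`https://api.kitten.club/cats/${id}`)
  .then(res => res.json())
  .then((cat) => {
    if (!cat.exists) {
      // Mark this error page as noindex so Googlebot drops it from the
      // index instead of treating it as a soft 404.
      const metaRobots = document.createElement('meta');
      metaRobots.name = 'robots';
      metaRobots.content = 'noindex';
      document.head.appendChild(metaRobots);
    }
  });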
Don't use fragment URLs to load different content. Googlebot may not reliably crawl URLs that differ only in the fragment (for example, #/products), so give each piece of content its own path, for instance via the History API as sketched below.
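As an illustration, a minimal client-side router sketch that uses the History API instead of fragment URLs; the routes and render functions here are hypothetical placeholders:

// Hypothetical render functions for this sketch.
function renderHome() { document.body.textContent = 'Home'; }
function renderProducts() { document.body.textContent = 'Products'; }
function renderNotFound() { document.body.textContent = 'Not found'; }

const routes = {
  '/': renderHome,
  '/products': renderProducts
};

function navigate(path) {
  // Use the History API instead of a fragment (#/products) so each
  // piece of content gets its own crawlable URL.
  history.pushState({}, '', path);
  (routes[path] || renderNotFound)();
}

// Re-render when the user navigates with back/forward.
window.addEventListener('popstate', function() {
  (routes[location.pathname] || renderNotFound)();
});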
Don't rely on data persistence to serve content. WRS loads each URL statelessly: cookies, local storage, and session storage are cleared across page loads, so content must not depend on data saved during a previous visit.
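One way to treat stored data as an optional cache only, sketched with a hypothetical /api/greeting endpoint and a hypothetical element with the id "greeting":

function loadGreeting() {
  // Local storage is only an optional cache here; WRS clears it between
  // page loads, so the content can always be fetched from the server.
  var cached = localStorage.getItem('greeting');
  if (cached) {
    document.getElementById('greeting').textContent = cached;
    return;
  }
  fetch('/api/greeting')
    .then(function(res) { return res.text(); })
    .then(function(text) {
      document.getElementById('greeting').textContent = text;
      localStorage.setItem('greeting', text);
    });
}
loadGreeting();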
Use content fingerprinting to avoid caching issues with Googlebot. WRS caches JavaScript and CSS aggressively and may ignore caching headers, so embed a fingerprint of each file's content in its name (for example, main.2bb85551.js) so that updated files get new URLs.
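One possible way to generate fingerprinted filenames, sketched as a webpack configuration using the built-in [contenthash] substitution; the entry path is hypothetical:

// webpack.config.js
module.exports = {
  entry: './src/main.js',
  output: {
    // [contenthash] embeds a hash of the bundle's content in the filename,
    // e.g. main.2bb85551.js, so a changed bundle gets a new URL.
    filename: '[name].[contenthash].js',
    clean: true
  }
};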
Ensure that your application uses
feature detection
for all critical APIs that it needs, and provide a fallback
behavior or polyfill where applicable.
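For example, a sketch of feature detection with a fallback, here for lazy-loading images with IntersectionObserver; the img[data-src] markup convention is assumed for the sake of the example:

if ('IntersectionObserver' in window) {
  // Lazy-load images only when the API is available.
  var observer = new IntersectionObserver(function(entries, obs) {
    entries.forEach(function(entry) {
      if (entry.isIntersecting) {
        entry.target.src = entry.target.dataset.src;
        obs.unobserve(entry.target);
      }
    });
  });
  document.querySelectorAll('img[data-src]').forEach(function(img) {
    observer.observe(img);
  });
} else {
  // Fallback: load all images immediately when the API is missing.
  document.querySelectorAll('img[data-src]').forEach(function(img) {
    img.src = img.dataset.src;
  });
}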
Make sure your content works with HTTP connections. Googlebot uses HTTP requests to retrieve content from your server and doesn't support other connection types, such as WebSockets or WebRTC, so provide an HTTP fallback for content that is normally delivered over those channels.
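A rough sketch of graceful degradation from a WebSocket to plain HTTP, assuming hypothetical wss://example.com/realtime and /api/messages endpoints:

function showMessage(text) {
  // Hypothetical rendering helper for this sketch.
  var p = document.createElement('p');
  p.textContent = text;
  document.body.appendChild(p);
}

function loadMessagesOverHttp() {
  // HTTP fallback that Googlebot (and clients without WebSocket
  // support) can use to retrieve the same content.
  fetch('/api/messages')
    .then(function(res) { return res.json(); })
    .then(function(messages) { messages.forEach(showMessage); });
}

if ('WebSocket' in window) {
  var socket = new WebSocket('wss://example.com/realtime');
  socket.addEventListener('message', function(event) { showMessage(event.data); });
  // If the socket can't be opened, fall back to HTTP.
  socket.addEventListener('error', loadMessagesOverHttp);
} else {
  loadMessagesOverHttp();
}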
Make sure your web components render as expected. WRS flattens light DOM and shadow DOM content when it renders the page, so only content that ends up in the rendered HTML is visible to Google; use the slot mechanism to project light DOM content into the shadow DOM.
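A small sketch of a custom element that uses a slot, so its light DOM content stays part of the flattened, rendered HTML; the element name is made up for this example:

class GreetingCard extends HTMLElement {
  constructor() {
    super();
    // Project light DOM children into the shadow DOM via <slot>,
    // so the text remains visible in the rendered HTML.
    this.attachShadow({ mode: 'open' }).innerHTML =
      '<div class="card"><slot></slot></div>';
  }
}
customElements.define('greeting-card', GreetingCard);

// Usage in HTML: <greeting-card>Hello, world!</greeting-card>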
Use the Mobile-Friendly Test or the
URL Inspection Tool to
check if the rendered HTML has all content you expect.