Rate limiting has been through a few iterations, starting with the original `LimitableRequestPublisher`, which actually imposed no upper limit and was exposed to issues like #514. It was later replaced by `RateLimitableRequestPublisher` in #672, which works like `Flux#rateLimit` but without batching up requests below the upper limit, and therefore does not require an internal queue. In #736 this was further refactored to `RateLimitableRequestSubscriber`. There is also a PR against Reactor Core, #1879, creating a `rateLimit` variant based on the work in RSocket but with a more evolved implementation.

Rate limiting is important and should probably be more pluggable, as well as visible in the RSocket API, rather than being completely embedded. This could be an interceptor, for example, with a default but configurable upper limit that can also be customized, replaced, or turned off completely.
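As a rough sketch of what such a pluggable variant might look like, the hypothetical `RateLimitingInterceptor` below wraps the delegate `RSocket` via `RSocketProxy` and caps stream demand with Reactor's existing `Flux#limitRate` operator (the proposed `rateLimit` variant could be swapped in here). The class name, the `limit` parameter, and the choice to cover only `requestStream` are assumptions for illustration, not an existing API:

```java
import io.rsocket.Payload;
import io.rsocket.RSocket;
import io.rsocket.plugins.RSocketInterceptor;
import io.rsocket.util.RSocketProxy;
import reactor.core.publisher.Flux;

/**
 * Hypothetical sketch of a pluggable rate-limiting interceptor.
 * Wraps the delegate RSocket and caps the demand propagated upstream
 * on streaming interactions.
 */
public class RateLimitingInterceptor implements RSocketInterceptor {

  private final int limit; // default but configurable upper bound on demand

  public RateLimitingInterceptor(int limit) {
    this.limit = limit;
  }

  @Override
  public RSocket apply(RSocket delegate) {
    return new RSocketProxy(delegate) {
      @Override
      public Flux<Payload> requestStream(Payload payload) {
        // Cap (and batch) the demand signalled toward the transport,
        // regardless of what the downstream subscriber requests.
        return super.requestStream(payload).limitRate(limit);
      }
    };
  }
}
```

Turning the feature off would then simply mean not registering the interceptor, and an alternative strategy would be a different `RSocketInterceptor` implementation.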
This would allow varying the implementation. For example, using `Flux#rateLimit` could be a dead simple default implementation to start, but other strategies would be possible as well. A further option to consider, now or eventually, would be varying rate limiting by specific streams.

Considering that most RSocket applications are likely using Reactor to compose logic, it's important to have rate limiting close to the transport, because it may not be easy to apply `rateLimit` in application code, e.g. if subsequent code needs to apply async serialization of Objects to byte buffers, which could bring in operators like `flatMap` with their own prefetch.
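To make the prefetch point concrete, here is a minimal self-contained demo (assumed for illustration; the `transport` Flux merely stands in for a transport-level stream). Even though the application limits demand to 16 elements at a time, the `flatMap` used for async serialization subscribes upstream with its own default prefetch of 256, and that is the demand the transport actually sees:

```java
import java.nio.charset.StandardCharsets;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class PrefetchDemo {

  public static void main(String[] args) {
    // Stands in for a transport-level stream of payloads.
    Flux<Integer> transport = Flux.range(1, 1_000)
        // Log the demand that actually reaches the "transport".
        .doOnRequest(n -> System.out.println("transport sees request(" + n + ")"));

    transport
        // Async serialization of values to byte buffers; flatMap
        // subscribes upstream with its own prefetch (256 by default),
        // regardless of downstream demand.
        .flatMap(value -> Mono.fromCallable(
            () -> String.valueOf(value).getBytes(StandardCharsets.UTF_8)))
        // The application's limit only constrains demand to flatMap;
        // the transport still sees request(256), not request(16).
        .limitRate(16)
        .blockLast();
  }
}
```

Running this prints `transport sees request(256)` first: the operator's prefetch, not the application's limit, is what reaches the transport, which is why the limiting belongs below such operators, close to the transport itself.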