The Wayback Machine - https://web.archive.org/web/20200205191335/https://github.com/topics/availability
Is your feature request related to a problem? Please describe.
Testing every URL in a list takes far too long on a slow internet connection.
It would therefore be very helpful to be able to check a list of URLs for malformed entries only.
No DNS lookups or similar: just a check that every URL is syntactically valid and free of errors such as a trailing dot at the end of a URL.
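A minimal sketch of such an offline check, assuming Python. It validates URL syntax without any network or DNS access and, following the request above, treats a trailing dot as an error; the function name `find_malformed` and the accepted schemes are assumptions, not part of the original request.

```python
from urllib.parse import urlparse

def find_malformed(urls):
    """Return the URLs that fail a purely offline syntax check (no DNS lookups)."""
    bad = []
    for url in urls:
        parts = urlparse(url)
        # Require a recognised scheme and a hostname component.
        if parts.scheme not in ("http", "https") or not parts.netloc:
            bad.append(url)
        # Flag a stray dot at the end of the URL, as described in the request.
        elif url.endswith("."):
            bad.append(url)
    return bad

urls = [
    "https://example.com",        # fine
    "https://example.com/page.",  # trailing dot
    "notaurl",                    # no scheme or host
]
print(find_malformed(urls))
```

Because the check is pure string parsing, it runs in microseconds per URL regardless of connection speed; the slow network-based verification can then be limited to the URLs that pass.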
Environment.monitor is a solution for continuously tracking the availability of any environment component over time. It is designed to provide a single, comprehensive interface for tracking and evaluating environment health.
[DEPRECATED] Tradle bot framework; lets you drive user interactions in the Tradle mobile and web apps and (soon) use smart contracts for critical functions
🔖 Daily-updated reading list for designing High Scalability 🍒, High Availability 🔥, High Stability 🗻 back-end systems - Pull requests are greatly welcome 👬 I hope you will find this project helpful 🍀 Please help me share it to more and more people ❤️ Thank you - 谢谢 - धन्यवाद - ধন্যবাদ - Спасибо - شكرا - Merci - Gracias - Danke - Cảm ơn! 🙇
Describe the solution you'd like
When I