FeedReader
FeedReader is a .NET library for reading and parsing RSS and ATOM feeds. It supports RSS 0.91, 0.92, 1.0, 2.0 and ATOM. It was developed because the existing libraries that were tested did not work with different languages and encodings, or had other issues. The library has been tested with feeds in multiple languages and encodings.
The FeedReader library is available as a NuGet package: https://www.nuget.org/packages/CodeHollow.FeedReader/
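It can be installed via the NuGet Package Manager or, assuming the .NET CLI is available, with the standard add command:

dotnet add package CodeHollow.FeedReader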
Usage
The simplest way to read a feed and show the information is:
var feed = FeedReader.Read("https://codehollow.com/feed");
Console.WriteLine("Feed Title: " + feed.Title);
Console.WriteLine("Feed Description: " + feed.Description);
Console.WriteLine("Feed Image: " + feed.ImageUrl);
// ...
foreach(var item in feed.Items)
{
    Console.WriteLine(item.Title + " - " + item.Link);
}
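Each item also exposes the publishing date. A minimal sketch that prints it, assuming your version of the library provides the FeedItem properties PublishingDate (a nullable DateTime) and PublishingDateString (the raw value from the feed):

var feed = FeedReader.Read("https://codehollow.com/feed");
foreach(var item in feed.Items)
{
    // PublishingDate is null when the date could not be parsed - fall back to the raw string
    if (item.PublishingDate.HasValue)
        Console.WriteLine(item.Title + " - " + item.PublishingDate.Value.ToString("u"));
    else
        Console.WriteLine(item.Title + " - " + item.PublishingDateString);
}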
Some properties are only available in specific feed formats, e.g. RSS 2.0. To access those, use the "SpecificFeed" property:

var feed = FeedReader.Read("https://codehollow.com/feed");
Console.WriteLine("Feed Title: " + feed.Title);
if(feed.Type == FeedType.Rss_2_0)
{
    var rss20feed = (Feeds.Rss20Feed)feed.SpecificFeed;
    Console.WriteLine("Generator: " + rss20feed.Generator);
}
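The same pattern should work on item level. A sketch, assuming each FeedItem exposes a SpecificItem property that can be cast to Feeds.Rss20FeedItem for an RSS 2.0 feed:

if (feed.Type == FeedType.Rss_2_0)
{
    foreach(var item in feed.Items)
    {
        // SpecificItem gives access to RSS 2.0-only fields such as the guid
        var rss20item = (Feeds.Rss20FeedItem)item.SpecificItem;
        Console.WriteLine("Guid: " + rss20item.Guid);
    }
}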
If the URL of the feed is not known, you can use FeedReader.GetFeedUrlsFromUrl(url) to parse the feed URLs out of the HTML page:

string url = "codehollow.com";
var urls = FeedReader.GetFeedUrlsFromUrl(url);
string feedUrl;
if (urls.Count() < 1) // no url found - the url is probably already the feed url
    feedUrl = url;
else if (urls.Count() == 1)
    feedUrl = urls.First().Url;
else if (urls.Count() == 2) // if there are two urls, it's usually the feed and the comments feed, so take the first one by default
    feedUrl = urls.First().Url;
else
{
    // show all urls and let the user select (or take the first one or ...)
    // ...
}
var feed = FeedReader.Read(feedUrl);
// ...

The repository contains a sample console application: https://github.com/codehollow/FeedReader/tree/master/FeedReader.ConsoleSample
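Newer versions of the library also offer asynchronous variants of these methods. A minimal sketch (to be run inside an async method), assuming your version exposes FeedReader.GetFeedUrlsFromUrlAsync and FeedReader.ReadAsync:

// resolve the feed url from the web page, then read the feed without blocking
var urls = await FeedReader.GetFeedUrlsFromUrlAsync("codehollow.com");
var feedUrl = urls.First().Url;
var feed = await FeedReader.ReadAsync(feedUrl);
Console.WriteLine("Feed Title: " + feed.Title);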
