Several governments are starting to publish open data: datasets generated by the government, made freely available for citizens to use for value-added app development, analysis, and feedback. For instance, the City of Vancouver (Canada) Open Data Catalogue publishes 130 datasets. The subject matter ranges from tabular files of city councillor contact information to geographical datasets of zoning districts. Formats range from Comma-Separated Values (CSV) to SHP to KML and beyond.
It would be nice for each of these open data portals to have a dataset of datasets: their catalogue of datasets, itself published as a structured data file. The catalogue dataset should have metadata describing each dataset: the name of the dataset, the URL of its download page, the formats it is available in, and perhaps a description of the dataset's format and attributes (or a URL to same).
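To make that concrete, here is a minimal sketch of what one catalogue entry might look like, assuming a JSON serialization. The field names and URLs are illustrative placeholders, not a proposed standard:

```python
import json

# A minimal sketch of one possible catalogue data model. Field names
# are illustrative assumptions, not an existing standard.
catalogue = {
    "publisher": "City of Vancouver",
    "datasets": [
        {
            "name": "Zoning districts",
            "description": "Boundaries of zoning districts within the city",
            "landing_page": "http://example.org/datasets/zoning-districts",
            "distributions": [
                {"format": "SHP", "download_url": "http://example.org/downloads/zoning.shp.zip"},
                {"format": "KML", "download_url": "http://example.org/downloads/zoning.kml"},
            ],
        },
        {
            "name": "City councillor contact information",
            "landing_page": "http://example.org/datasets/councillors",
            "distributions": [
                {"format": "CSV", "download_url": "http://example.org/downloads/councillors.csv"},
            ],
        },
    ],
}

# The catalogue could itself be published as just another dataset.
with open("catalogue.json", "w") as f:
    json.dump(catalogue, f, indent=2)
```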
What is a good data model and a good format for such a catalogue dataset? If this is a solved problem, I'd like to suggest that Vancouver reuse that solution, instead of inventing one.
Update: in response to the question of why it is desirable to have the catalogue as a structured dataset, I can think of three classes of use case.
Analysis across all the datasets of a data provider. It is convenient to get a list of all datasets, with links to descriptions and so on, which I can import into a spreadsheet and annotate. Someone else may want to count the total number of records published, or measure the breadth of government activity covered by the data. In my case, I'm working on a Vancouver Open Data language census.
Analysis of corresponding datasets across multiple data providers. For instance, one might want to aggregate a list of all zoning boundary datasets published by Canadian cities. That is easier if one can sift through dataset lists by machine instead of by hand (see the sketch after this list).
Analysis of how a dataset catalogue changes over time. It might be interesting to analyse the growth in Open Data from one year to the next. Structured catalogues make this easier to automate.
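As a sketch of the second use case, suppose each provider published a catalogue in the hypothetical JSON shape above; the filenames below are placeholders. A few lines of Python could then tally formats and pull out zoning datasets across cities:

```python
import json

# Hypothetical: each provider publishes a catalogue file in the shape
# sketched earlier. The filenames here are placeholders.
provider_files = ["vancouver_catalogue.json", "toronto_catalogue.json"]

zoning_datasets = []
format_counts = {}
for path in provider_files:
    with open(path) as f:
        catalogue = json.load(f)
    for dataset in catalogue["datasets"]:
        # Use case 1: tally the formats each provider publishes in.
        for dist in dataset.get("distributions", []):
            fmt = dist["format"]
            format_counts[fmt] = format_counts.get(fmt, 0) + 1
        # Use case 2: aggregate zoning datasets across providers.
        if "zoning" in dataset["name"].lower():
            zoning_datasets.append((catalogue["publisher"], dataset["landing_page"]))

print("Formats published:", format_counts)
print("Zoning datasets found:", zoning_datasets)
```

None of this is possible (without scraping HTML) when the catalogue exists only as a web page, which is the point of asking for a standard structured form.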