The Wayback Machine - https://web.archive.org/web/20211113204600/https://github.com/topics/data-catalog

data-catalog

Here are 67 public repositories matching this topic...

feng-tao
feng-tao commented May 14, 2021

Currently we only support DB store publishers (e.g., Neo4j, MySQL, Neptune), but it would be fairly easy to support a message queue publisher through the same interface (e.g., SQS, Kinesis, Event Hubs, Kafka), which would enable a push ETL model.

There is a PR (amundsen-io/amundsendatabuilder#431) which unfortunately wasn't merged. The PR could be used as an example of how to support t
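The interface-based extension described above could look like the following minimal Python sketch. All class and method names here are invented for illustration; this is not Amundsen's actual publisher API:

```python
from abc import ABC, abstractmethod


class Publisher(ABC):
    """Hypothetical publisher interface; concrete publishers decide
    where extracted records go (a DB store, a message queue, ...)."""

    @abstractmethod
    def publish(self, records):
        """Push an iterable of records to the backing store or queue."""


class InMemoryQueuePublisher(Publisher):
    """Stand-in for an SQS/Kinesis/Kafka publisher: instead of writing
    to a graph or relational store, it enqueues each record."""

    def __init__(self):
        self.queue = []

    def publish(self, records):
        for record in records:
            self.queue.append(record)
```

A real message queue publisher would implement the same interface but forward each record to the queue client, which is what makes the push ETL model a drop-in extension.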

phixMe
phixMe commented Oct 12, 2021

We have recently made dataset versions traversable via the dataset tab on our lineage page, and we would like to do the same for job versions. We want to be able to start with a job, navigate across its versions, then navigate across the runs for a given job version. We would also like this intermediate page to show detailed information about each job version. One prereq for this is
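The navigation described above (job, then versions, then runs for a chosen version) can be sketched with a toy data model. All names below are invented for illustration, not the project's actual schema:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Run:
    run_id: str
    state: str


@dataclass
class JobVersion:
    version: str
    runs: List[Run] = field(default_factory=list)


@dataclass
class Job:
    name: str
    versions: List[JobVersion] = field(default_factory=list)


def runs_for_version(job: Job, version: str) -> List[Run]:
    """Start with a job, pick one of its versions, then list that
    version's runs: the traversal described above."""
    for jv in job.versions:
        if jv.version == version:
            return jv.runs
    return []
```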

vrajat
vrajat commented Feb 14, 2020

It is not surprising that deep and shallow scans show different results. A shallow scan only looks at column names, while a deep scan looks at a sample of the data. I've even noticed that two different runs of a deep scan show different results because the sampled rows differ. This is the challenge of not scanning all of the data: it's a trade-off between performance/cost and accuracy, and there is no right answer.
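That trade-off can be seen in a toy sketch: a shallow scan inspects only column names, while a deep scan inspects a random sample of values, so two deep runs over different samples can disagree. All function names and patterns here are illustrative:

```python
import random
import re

# Shallow scan: match on column *names* only.
PII_NAME = re.compile(r"email", re.IGNORECASE)
# Deep scan: match on sampled *values*.
EMAIL_VALUE = re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+")


def shallow_scan(columns):
    """Flag columns whose names look like PII."""
    return {c for c in columns if PII_NAME.search(c)}


def deep_scan(columns, rows, sample_size=100, seed=None):
    """Flag columns whose sampled values look like email addresses.
    A different seed draws a different sample, which is why two deep
    runs can report different results on the same table."""
    rng = random.Random(seed)
    sample = rng.sample(rows, min(sample_size, len(rows)))
    flagged = set()
    for i, col in enumerate(columns):
        if any(EMAIL_VALUE.fullmatch(str(row[i])) for row in sample):
            flagged.add(col)
    return flagged
```

Here a column named `contact` holding email values is caught only by the deep scan, while a column named `user_email` holding no emails is caught only by the shallow scan, which is exactly the disagreement described above.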

jbusecke
jbusecke commented Feb 18, 2021

Intake-esm adds the attribute intake_esm_varname to datasets, and I have encountered cases where that ends up being None (still looking for the exact model).

Zarr does not like that type of metadata:

import xarray as xr
ds_test = xr.DataArray(5).to_dataset(name='test')
ds_test.attrs['test'] = None

ds_test.to_zarr('test.zarr')

gives an error.
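One workaround (a minimal sketch, not intake-esm's or zarr's own API) is to scrub None-valued attributes before calling to_zarr:

```python
def drop_none_attrs(attrs):
    """Return a copy of an attrs mapping without None values,
    since zarr rejects None as attribute metadata."""
    return {k: v for k, v in attrs.items() if v is not None}


# Applied to the snippet above (assuming `ds_test` exists):
#   ds_test.attrs = drop_none_attrs(ds_test.attrs)
#   ds_test.to_zarr('test.zarr')
```

Note that the same scrub would also have to be applied to each variable's own .attrs, since zarr serializes those too.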
scortier
scortier commented Sep 23, 2021

Deliverables

  • add unit tests
  • add the extractor
  • add a README.md in plugins/extractors/mariadb, defining the output
  • register your extractor in plugins/extractors/populate.go
  • add the extractor to the extractor list in docs/reference/extractor.md

Output must contain a Table

Table

Field  Sample Value
urn    `my_database.my_t
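The "register your extractor" step above amounts to adding the new extractor to a central registry. Conceptually it looks like the following sketch; the actual project does this in Go, in plugins/extractors/populate.go, and every name below is invented:

```python
# Conceptual registry sketch; the real registration happens in Go.
EXTRACTORS = {}


def register(name):
    """Decorator that adds an extractor class to the registry."""
    def wrap(cls):
        EXTRACTORS[name] = cls
        return cls
    return wrap


@register("mariadb")
class MariaDBExtractor:
    def extract(self):
        # A real extractor would connect to MariaDB and yield one
        # record per table, including its urn.
        yield {"urn": "example_db.example_table", "type": "table"}
```

With the extractor registered, the host application can look it up by name and consume the Table records it yields.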

National Data Archive (NADA) is an open source data cataloging system that serves as a portal for researchers to browse, search, compare, apply for access, and download relevant census or survey information. It was originally developed to support the establishment of national survey data archives.

  • Updated Nov 12, 2021
  • PHP

The Data Marketplace frontend repository is part of the Corporate Linked Data Catalog (COLID) application. Users can search for registered resources in COLID. It provides a search bar, aggregation filters, and search result display including term highlighting.

  • Updated Jul 9, 2021
  • TypeScript
