DataOps
DataOps is an automated, process-oriented methodology used by analytics and data teams to improve the quality and reduce the cycle time of data analytics. While DataOps began as a set of best practices, it has since matured into an independent approach to data analytics. DataOps applies to the entire data lifecycle, from data preparation to reporting, and recognizes the interconnected nature of the data analytics team and information technology operations.
Here are 113 public repositories matching this topic...
We need to support writing to and reading from TFRecord format.
Reference doc: https://www.tensorflow.org/tutorials/load_data/tfrecord
More on TypeTransformers can be found here.
Related PR that adds PyTorch tensor and module as Flyte types: flyteorg/flytekit#1032
The default RubrixLogHTTPMiddleware record mapper for token classification expects a structured input including a text field. This can make preparing prediction model inputs a bit cumbersome. The default mapper could also accept flat strings as inputs:
def token_classification_mapper(inputs, outputs):
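The issue body is truncated above; as a sketch of the suggested behavior, a mapper could branch on the input type. The record fields below are illustrative assumptions, not Rubrix's actual schema.

```python
def token_classification_mapper(inputs, outputs):
    # Accept either the structured form ({"text": ...}) or a flat string.
    text = inputs if isinstance(inputs, str) else inputs["text"]
    return {
        "text": text,
        "tokens": text.split(),   # naive whitespace tokenization, for illustration
        "prediction": outputs,
    }
```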
Support COPY INTO queries
Currently, both the Kafka and Influx sinks log only the data (Row) being sent.
Add support for logging column names along with the data points, similar to the implementation in the log sink.
This will enable users to correlate the data points with column names.
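The idea can be sketched language-neutrally: pair each row value with its column name before logging, so log lines are self-describing. This is an illustrative Python sketch, not the project's sink code.

```python
import logging

def log_row_with_columns(logger, columns, row):
    # Emit e.g. {'ts': 1657200000, 'temp': 21.5} instead of (1657200000, 21.5),
    # so a reader can correlate each data point with its column.
    record = dict(zip(columns, row))
    logger.info("sink row: %s", record)
    return record
```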
Is your feature request related to a problem? Please describe.
I was trying to store extra group information with the details below:
[PUT] /v1beta1/groups/{id}
{
"name": "My Group",
"slug": "my-group",
"orgId": "{ORG_ID}",
"metadata": {
"description": "my-group-description",
"is_active": true
}
}
But shield responded with
Status: 400 Bad Request
{
What is the feature request? What problem does it solve?
As employees leave the organization/company or users change email addresses, the notification list configured for a job eventually accumulates many invalid addresses. This causes issues with the SMTP relay (e.g. Postfix), which may buffer all the invalid requests until its queue is full, which then affects mail delivery for all jobs.
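One partial mitigation is screening the notification list before handing it to the relay. The sketch below is a purely syntactic filter (an assumption about scope): it cannot detect mailboxes that no longer exist, which would require a directory lookup or bounce/NDR feedback handling.

```python
import re

# Loose syntactic check: something@something.tld, no whitespace.
_EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def prune_notification_list(addresses):
    # Drop addresses that are not even well-formed before the job notifies.
    return [a for a in addresses if _EMAIL_RE.match(a)]
```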
In the Go client, consumers get a dynamic message instance after parsing. Add an example to the docs showing how to use the dynamic message instance to read values of different types in consumer code.
List of protobuf types to cover
- timestamp
- duration
- bytes
- message type
- struct
- map
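The requested docs concern the Go client's dynamic message API; as a language-neutral illustration of what "dynamic" parsing means underneath, here is a minimal pure-Python decoder of the protobuf wire format. Note that all the types listed above except scalars (timestamp, duration, bytes, message, struct, map entries) arrive as length-delimited fields (wire type 2), while ints and bools arrive as varints. This is illustrative only, not the client's API.

```python
def read_varint(buf, pos):
    # Decode a base-128 varint starting at pos; return (value, new_pos).
    result = shift = 0
    while True:
        b = buf[pos]
        pos += 1
        result |= (b & 0x7F) << shift
        if not b & 0x80:
            return result, pos
        shift += 7

def parse_fields(buf):
    # Return a list of (field_number, wire_type, raw_value) triples.
    pos, out = 0, []
    while pos < len(buf):
        key, pos = read_varint(buf, pos)
        field, wire = key >> 3, key & 7
        if wire == 0:            # varint: int, bool, enum
            val, pos = read_varint(buf, pos)
        elif wire == 2:          # length-delimited: bytes, string, message, map entry
            length, pos = read_varint(buf, pos)
            val = buf[pos:pos + length]
            pos += length
        elif wire == 1:          # fixed 64-bit: double, fixed64
            val = buf[pos:pos + 8]
            pos += 8
        elif wire == 5:          # fixed 32-bit: float, fixed32
            val = buf[pos:pos + 4]
            pos += 4
        else:
            raise ValueError(f"unsupported wire type {wire}")
        out.append((field, wire, val))
    return out
```

A dynamic message library does exactly this, then uses the descriptor to map field numbers back to names and concrete types.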
Deliverables
- add unit tests
- add extractor
- add README.md in plugins/extractors/neo4j, defining the output
- register your extractor in plugins/extractors/populate.go
- add the extractor to the extractor list in docs/reference/extractor.md
Output must contain a Table.
Explore the Table Data Model and add as many features as possible.
Table
| Fi

Formed in 2009, the Archive Team (not to be confused with the archive.org Archive-It team) is a rogue archivist collective dedicated to saving copies of rapidly dying or deleted websites for the sake of history and digital heritage. The group is composed entirely of volunteers and interested parties, and has expanded into a large number of related projects for saving online and digital history.

When there are many connectors, it is inconvenient for users to locate a failed task, because tasks cannot be filtered by status.
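The requested fix amounts to a status filter over the connector task list. The task shape below is a hypothetical sketch, not the project's actual data model.

```python
def filter_tasks_by_status(tasks, status):
    # tasks: list of dicts with a "status" key (hypothetical shape,
    # e.g. "RUNNING" / "FAILED" / "UNASSIGNED").
    return [t for t in tasks if t.get("status") == status]
```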