big-data
Here are 2,552 public repositories matching this topic...
New Metric Request
It would be great to have FBeta metrics (for example F2 or F0.5) implemented natively, without the need for a custom metric class defined by the user.
catboost version: 0.26
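For reference, a minimal sketch of the custom eval metric class that is currently required to get an F-beta score, using CatBoost's documented is_max_optimal / evaluate / get_final_error protocol; the class name, the beta value, and the 0.5 decision threshold here are illustrative assumptions, not part of the request.

```python
import math

class FBetaMetric:
    """Hypothetical user-defined F-beta metric for binary classification.

    This is the kind of boilerplate the request above wants to avoid by
    having FBeta/F2/F0.5 available as built-in metrics.
    """

    def __init__(self, beta=0.5):
        self.beta = beta

    def is_max_optimal(self):
        # Larger F-beta values are better.
        return True

    def evaluate(self, approxes, target, weight):
        # approxes[0] holds raw scores for the positive class.
        approx = approxes[0]
        tp = fp = fn = 0.0
        for i in range(len(approx)):
            w = 1.0 if weight is None else weight[i]
            pred = 1.0 / (1.0 + math.exp(-approx[i])) > 0.5  # assumed 0.5 threshold
            if pred and target[i] > 0.5:
                tp += w
            elif pred:
                fp += w
            elif target[i] > 0.5:
                fn += w
        b2 = self.beta ** 2
        denom = (1 + b2) * tp + b2 * fn + fp
        fbeta = (1 + b2) * tp / denom if denom > 0 else 0.0
        # CatBoost expects (error_sum, weight_sum); the whole metric is
        # folded into a single "sample" here.
        return fbeta, 1.0

    def get_final_error(self, error, weight):
        return error

# Usage sketch (assuming a CatBoostClassifier and an eval set exist):
# model = CatBoostClassifier(eval_metric=FBetaMetric(beta=0.5))
# model.fit(X_train, y_train, eval_set=(X_val, y_val))
```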
Remove the initial shortcut sync code in viewer/db.js for v3.1. Note that to upgrade to v3.1+ you must upgrade to v3.0 first.
x-arkime-cookies
There is no technical difficulty in supporting the includeValue option; it looks like we are just missing it at the API level.
See SO question
Currently we are able to map schemas and tables using file-based mapping. This is great, as it makes it possible to handle the case where schema and table names differ only in casing. However, we do not support this for columns yet, so if a user has a table where two column names differ only in casing (like columns A and a), we are not able to read it.
See `io.trino.plugin.jdbc.mappin
... to make it easier to read Vespa documentation on an e-reader or offline.
Vespa documentation is generated with Jekyll from .md and .html files; look into options for generating the artifact as part of site generation (there might be plugins we can use here).
Hi,
I am running deltaTable = DeltaTable.convertToDelta(spark, f"parquet.{data_path}") to read a DeltaTable from the Parquet files, but it doesn't return one as suggested in the docs. It does, however, successfully convert them. If I read them again right after that line using forPath, it gives me the DeltaTable.
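A minimal reproduction sketch under stated assumptions: data_path is a hypothetical Parquet directory, the session config is the standard Delta Lake setup, and the forPath call at the end is the workaround described in the report above.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

# Standard Delta Lake session configuration (assumed environment).
spark = (SparkSession.builder
         .appName("convert-to-delta-repro")
         .config("spark.sql.extensions",
                 "io.delta.sql.DeltaSparkSessionExtension")
         .config("spark.sql.catalog.spark_catalog",
                 "org.apache.spark.sql.delta.catalog.DeltaCatalog")
         .getOrCreate())

data_path = "/tmp/parquet_data"  # placeholder path to the existing Parquet files

# Converts the Parquet directory in place; per the report above, the returned
# object is not usable as a DeltaTable at this point.
DeltaTable.convertToDelta(spark, f"parquet.`{data_path}`")

# Workaround from the report: re-read the now-converted table via forPath,
# which does return a DeltaTable.
delta_table = DeltaTable.forPath(spark, data_path)
print(delta_table.toDF().count())
```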
1.) A user may want to back up all tables but no metadata (users, privileges, etc.) without explicitly listing each table inside the CREATE SNAPSHOT statement (see the sketch below).
2.) A user may want to transfer users & privileges, custom analyzers, or user-defined functions from one cluster to another without backing up a complete cluster including all its data (tables).
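For the first case, a sketch of the current workaround using the CrateDB Python client: every table has to be spelled out in the CREATE SNAPSHOT statement. The connection URL, repository name, and table names are placeholders.

```python
from crate import client

# Placeholder connection; adjust host/port for the target cluster.
connection = client.connect("http://localhost:4200")
cursor = connection.cursor()

# Current workaround: list every table explicitly so that only table data
# (and no cluster metadata such as users or privileges) ends up in the snapshot.
cursor.execute("""
    CREATE SNAPSHOT my_repo.tables_only
    TABLE doc.events, doc.metrics
    WITH (wait_for_completion = true)
""")
```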
Feature description
After adding the patch which fixes #4209, I found that Sphinx emits some warnings.