cuda
Here are 2,488 public repositories matching this topic...
Problem:
catboost version: 0.23.2
Operating System: all
Tutorial: https://github.com/catboost/tutorials/blob/master/custom_loss/custom_metric_tutorial.md
It is impossible to use a custom metric (C++).
Code example:
```python
from catboost import CatBoost
train_data = [[1, 4, 5, 6],
```
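The C++ path aside, the tutorial linked above also covers metrics implemented on the Python side. Below is a minimal sketch of that interface, assuming the method names from the tutorial (`is_max_optimal` / `evaluate` / `get_final_error`); the RMSE logic itself is purely illustrative, and the class is exercised directly here without catboost:

```python
import math

class RmseMetric:
    """Sketch of a Python-side custom eval metric for CatBoost."""

    def is_max_optimal(self):
        # Smaller RMSE is better, so this metric is minimized.
        return False

    def evaluate(self, approxes, target, weight):
        # approxes is a list of per-dimension prediction lists; a
        # single-target task has exactly one element.
        preds = approxes[0]
        error_sum = 0.0
        weight_sum = 0.0
        for i, pred in enumerate(preds):
            w = 1.0 if weight is None else weight[i]
            error_sum += w * (pred - target[i]) ** 2
            weight_sum += w
        return error_sum, weight_sum

    def get_final_error(self, error, weight):
        return math.sqrt(error / weight) if weight else 0.0

# Direct check without catboost: perfect predictions give zero error.
metric = RmseMetric()
err, w = metric.evaluate([[1.0, 2.0]], [1.0, 2.0], None)
print(metric.get_final_error(err, w))  # 0.0
```

In actual training the instance would be passed to the model via the `eval_metric` parameter, as shown in the tutorial.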
Improve the readability of thread-id-based branches by giving them more descriptive names.
e.g.
`if (!t)` is actually a `t == 0`, and
https://github.com/rapidsai/cudf/blob/57ef76927373d7260b6a0eda781e59a4c563d36e/cpp/src/io/statistics/column_stats.cu#L285
is actually a `lane_id == 0`.
As demonstrated in rapidsai/cudf#6241 (comment), pr
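The renaming asked for here amounts to introducing named intermediates for the warp arithmetic. The cudf code itself is CUDA C++; this Python sketch only shows the idea, assuming the usual NVIDIA warp width of 32:

```python
WARP_SIZE = 32  # fixed warp width on current NVIDIA GPUs

def thread_roles(t):
    """Derive descriptively named indices from a flat thread id `t`,
    instead of branching on opaque expressions like `if (!t)`."""
    lane_id = t % WARP_SIZE         # position within the warp
    warp_id = t // WARP_SIZE        # which warp within the block
    is_first_lane = (lane_id == 0)  # what `if (!t)` often really means
    return lane_id, warp_id, is_first_lane

print(thread_roles(0))   # (0, 0, True)
print(thread_roles(33))  # (1, 1, False)
```

A branch like `if (is_first_lane)` documents intent where `if (!t)` forces the reader to reconstruct it.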
The current implementation of join can be improved by performing the operation in a single call to the backend kernel instead of multiple calls.
This is a fairly easy kernel and may be a good issue for someone getting to know CUDA/ArrayFire internals. Ping me if you want additional info.
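The single-call idea can be sketched in plain Python (ArrayFire's join concatenates arrays; all names here are illustrative, not ArrayFire code): pairwise joins re-copy earlier inputs on every call, while one call writes each input exactly once.

```python
from functools import reduce

def join_pairwise(arrays):
    # Mimics the current approach: each binary join is a separate
    # backend call, so n inputs cost n-1 passes over growing buffers.
    return reduce(lambda a, b: a + b, arrays)

def join_single(arrays):
    # Mimics the proposed approach: a single call that writes every
    # input into the output exactly once.
    out = []
    for a in arrays:
        out.extend(a)
    return out

parts = [[1, 2], [3, 4], [5, 6]]
print(join_single(parts))  # [1, 2, 3, 4, 5, 6]
```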
It would be wonderful if I could inspect the contents of Thrust containers (host_vector and device_vector) in GDB (and, more importantly, in VSCode). GDB allows customizing this.
It would save a lot of time if I could inspect device vectors without having to bring them to the host (e.g. the pretty-printer script would do that behind the scenes).
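GDB pretty printers are written in Python, so a sketch of the shape such a printer could take is below, with heavy caveats: `m_size` is an assumption about Thrust's internal layout and must be checked against the Thrust version in use, and actually showing device elements is the hard part (the printer would have to copy them to the host behind the scenes).

```python
class DeviceVectorPrinter:
    """Hypothetical pretty-printer for thrust::device_vector.

    In GDB, `val` would be a gdb.Value; any object supporting
    `val["member"]` works, which lets us exercise it standalone."""

    def __init__(self, val):
        self.val = val

    def to_string(self):
        size = int(self.val["m_size"])  # assumed member name
        return f"thrust::device_vector of length {size}"

# In a real setup this would be registered via gdb.pretty_printers;
# here we exercise it with a stand-in for gdb.Value.
fake_val = {"m_size": 5}
print(DeviceVectorPrinter(fake_val).to_string())  # thrust::device_vector of length 5
```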
The functions under metrics and score in src_prims can be semantically categorized under a common name (whether that should be 'metrics' or 'scores' is open for discussion).
It is exposed in cuml under a common rubric as [metrics](https://github.
I often use -v just to see that something is going on, but a progress bar (enabled by default) would serve the same purpose and be more concise.
We can just factor out the code from futhark bench for this.
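Futhark itself is written in Haskell, so the following is only a language-agnostic sketch of the kind of bar meant here (all names illustrative): render a fixed-width line and rewrite it in place with a carriage return, rather than streaming `-v` output.

```python
import sys

def render_bar(done, total, width=30):
    """Render a one-line textual progress bar, e.g. [#####.....] 5/10."""
    frac = done / total if total else 1.0
    filled = int(frac * width)
    return f"[{'#' * filled}{'.' * (width - filled)}] {done}/{total}"

def show(done, total):
    # \r rewrites the same terminal line; more concise than -v logging.
    sys.stderr.write("\r" + render_bar(done, total))
    if done == total:
        sys.stderr.write("\n")

print(render_bar(5, 10, width=10))  # [#####.....] 5/10
```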
Thank you for this fantastic work!
Would it be possible for the fit_transform() method to return the KL divergence of the run?
Thanks!
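For comparison, scikit-learn's TSNE exposes this value as the `kl_divergence_` attribute after fitting, rather than changing the `fit_transform` return type. A small sketch of that pattern, assuming scikit-learn is installed:

```python
import numpy as np
from sklearn.manifold import TSNE

# Tiny random dataset; perplexity must stay below the sample count.
X = np.random.RandomState(0).rand(20, 4)
tsne = TSNE(n_components=2, perplexity=5, random_state=0)
emb = tsne.fit_transform(X)

print(emb.shape)            # (20, 2)
print(tsne.kl_divergence_)  # final KL divergence of the run
```

Storing the value as an attribute keeps `fit_transform` signature-compatible while still making the run's KL divergence inspectable.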
See: numba/numba#6368 (comment)
The values tested will be random for each invocation of the tests, because there is no RNG seeding. The RNG should be seeded for each test, so that values are stable.
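Independent of numba's test harness, the general pattern is to construct a seeded generator inside each test so repeated invocations draw identical values (names here are illustrative):

```python
import numpy as np

def make_rng(seed=1234):
    # Each test builds its own seeded generator, so the tested values
    # are stable across invocations instead of varying with global state.
    return np.random.default_rng(seed)

def test_values_are_stable():
    a = make_rng().integers(0, 100, size=5)
    b = make_rng().integers(0, 100, size=5)
    assert (a == b).all()

test_values_are_stable()
print("stable")  # same seed -> identical draws on every run
```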