cuda
Here are 2,770 public repositories matching this topic...
Problem: the approximate method can still be slow for many trees
catboost version: master
Operating System: ubuntu 18.04
CPU: i9
GPU: RTX2080
It would be good to be able to specify how many trees to use for Shapley value computation. The model.predict and prediction_type interfaces already allow this, and LightGBM/XGBoost support it as well.
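SHAP values for a tree ensemble are additive over trees, so a tree limit simply truncates the per-tree sum, which is why it cuts cost roughly linearly. A minimal pure-Python sketch of that additivity (the per-tree contribution numbers are made up for illustration; this is not CatBoost code):

```python
# Sketch: ensemble SHAP values are the sum of per-tree contributions,
# so an ntree_end-style limit just sums fewer terms.

def ensemble_shap(per_tree_contribs, ntree_end=None):
    """Sum per-tree SHAP contributions for the first `ntree_end` trees."""
    trees = per_tree_contribs[:ntree_end]
    n_features = len(per_tree_contribs[0])
    return [sum(tree[f] for tree in trees) for f in range(n_features)]

# Three trees, two features, hypothetical contributions (dyadic floats):
contribs = [[0.5, -0.25], [0.25, 0.5], [-0.125, 0.25]]
print(ensemble_shap(contribs))               # [0.625, 0.5]  (all 3 trees)
print(ensemble_shap(contribs, ntree_end=2))  # [0.75, 0.25]  (first 2 trees)
```

Limiting `ntree_end` here only changes how many terms enter the sum; nothing else about the computation changes.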
Describe the bug
After applying the unstack function, the variable names change to numeric format.
Steps/Code to reproduce bug
def get_df(length, num_cols, num_months, acc_offset):
    cols = ['var_{}'.format(i) for i in range(num_cols)]
    df = cudf.DataFrame({col: cupy.random.rand(length * num_months) for col in cols})
    df['acc_id'] = cupy.repeat(cupy.arange(length), num_months)
The current implementation of join can be improved by performing the operation in a single call to the backend kernel instead of multiple calls.
This is a fairly easy kernel and may be a good issue for someone getting to know CUDA/ArrayFire internals. Ping me if you want additional info.
Problem
CUB allows itself to be placed into a custom namespace via CUB_NS_PREFIX and CUB_NS_POSTFIX, so that multiple shared libraries can each use their own copy of it (and thus different versions can safely coexist). Static variables used for caching could otherwise cause problems (e.g., https://github.com/NVIDIA/cub/blob/main/cub/util_device.cuh#L212).
Thrust, however, depends on CUB and
confusion_matrix should automatically convert dtypes as appropriate in order to avoid failing, like other metric functions do.

from sklearn.metrics import confusion_matrix
import numpy as np
import cuml

y = np.array([0.0, 1.0, 0.0])
y_pred = np.array([0.0, 1.0, 1.0])
print(confusion_matrix(y, y_pred))        # sklearn handles the float labels
cuml.metrics.confusion_matrix(y, y_pred)  # fails on the same input

The sklearn call prints:
[[1 1]
 [0 1]]
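A sketch of the kind of dtype coercion the issue asks for, in plain Python (a hypothetical helper, not cuML's implementation): coerce integer-valued float labels to ints before counting, and only then fail if a label is genuinely non-integral.

```python
from collections import Counter

def coerce_labels(values):
    """Convert integer-valued float labels (e.g. 1.0) to ints; reject others."""
    out = []
    for v in values:
        if float(v).is_integer():
            out.append(int(v))
        else:
            raise ValueError(f"label {v!r} is not integer-valued")
    return out

def confusion_matrix_coerced(y_true, y_pred):
    """Tiny confusion matrix over the sorted union of observed labels."""
    y_true, y_pred = coerce_labels(y_true), coerce_labels(y_pred)
    labels = sorted(set(y_true) | set(y_pred))
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in labels] for t in labels]

print(confusion_matrix_coerced([0.0, 1.0, 0.0], [0.0, 1.0, 1.0]))
# [[1, 1], [0, 1]]
```

With this coercion the float-labelled input above yields the same matrix sklearn produces, instead of an error.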
I often use -v just to see that something is going on, but a progress bar (enabled by default) would serve the same purpose and be more concise.
We can just factor out the code from futhark bench for this.
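For illustration, a minimal sketch of the kind of in-place terminal progress bar being requested (plain Python, not Futhark's actual implementation): render a fixed-width bar and rewrite the same line with a carriage return instead of scrolling like -v output does.

```python
import sys

def render_bar(done, total, width=30):
    """Render a fixed-width text progress bar like [#####.....]  50%."""
    filled = width * done // total
    pct = 100 * done // total
    return "[{}{}] {:3d}%".format("#" * filled, "." * (width - filled), pct)

def show_progress(done, total, out=sys.stderr):
    # "\r" moves the cursor to the start of the line, so each call
    # overwrites the previous bar instead of printing a new line.
    out.write("\r" + render_bar(done, total))
    if done == total:
        out.write("\n")
    out.flush()

for step in range(0, 31, 10):
    show_progress(step, 30)
```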
Thank you for this fantastic work!
Would it be possible for the fit_transform() method to return the KL divergence of the run?
Thanks!
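For reference, the quantity being asked for: t-SNE minimizes the Kullback–Leibler divergence KL(P||Q) = Σ p_i log(p_i / q_i) between the high- and low-dimensional affinities. A small standalone sketch of the computation itself (not the library's API):

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(P||Q) = sum p_i * log(p_i / q_i); eps guards against log(0)."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

p = [0.25, 0.25, 0.5]
print(kl_divergence(p, p))                  # ~0.0: identical distributions
print(kl_divergence(p, [0.5, 0.25, 0.25]))  # > 0: distributions differ
```

Returning this scalar from fit_transform() (or exposing it as an attribute, as scikit-learn does with kl_divergence_) would let callers judge the quality of a run.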
In numba/stencils/stencil.py, there are various places (like line 552, "if isinstance(kernel_size[i][0], int):") where we check for "int" in relation to neighborhoods. I ran across a case where I was creating a neighborhood tuple by extracting values from a NumPy array. This causes a problem because those NumPy values will not match these isinstance int checks. I worked around it by converting the values to plain Python ints.
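A hedged sketch of the usual fix for this class of problem: check against numbers.Integral (which NumPy registers its integer scalar types with) or coerce via operator.index(), rather than isinstance(x, int). The helper name below is hypothetical, not Numba's code.

```python
import numbers
import operator

def as_py_int(x):
    """Accept anything integer-like (int, bool, NumPy integer scalars,
    user-defined __index__ types) and return a plain Python int."""
    if isinstance(x, numbers.Integral):
        return int(x)
    # operator.index() raises TypeError for genuinely non-integer values.
    return operator.index(x)

print(as_py_int(5))     # 5
print(as_py_int(True))  # 1 (bool is an Integral subtype)
# With NumPy available, as_py_int(np.int64(3)) would also return 3,
# even though isinstance(np.int64(3), int) is False.
```

Replacing the bare isinstance-int checks with a coercion like this would make neighborhood tuples built from NumPy values work without the caller having to convert them manually.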