CUDA
CUDA® is a parallel computing platform and programming model developed by NVIDIA for general computing on graphical processing units (GPUs). With CUDA, developers are able to dramatically speed up computing applications by harnessing the power of GPUs.
Here are 2,876 public repositories matching this topic...
Problem: the approximate method can still be slow for many trees
catboost version: master
Operating System: ubuntu 18.04
CPU: i9
GPU: RTX2080
It would be good to be able to specify how many trees to use when computing Shapley values. The model.predict and prediction_type variants already allow this, and LightGBM/XGBoost allow it as well.
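A hedged sketch of why a tree limit helps: SHAP contributions for a tree ensemble are additive over trees, so truncating to the first k trees gives a cheaper approximation. The function and data layout below are purely illustrative, not the catboost API:

```python
def truncated_shap(per_tree_contribs, k):
    """Sum per-tree SHAP contribution vectors over the first k trees only.

    per_tree_contribs: one contribution vector per tree (a hypothetical
    layout for illustration; real libraries return a summed matrix).
    """
    n_features = len(per_tree_contribs[0])
    totals = [0.0] * n_features
    for tree in per_tree_contribs[:k]:
        for i, contrib in enumerate(tree):
            totals[i] += contrib
    return totals

# Using 2 of 3 trees skips a third of the work at some accuracy cost.
print(truncated_shap([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], 2))  # [4.0, 6.0]
```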
-
Updated
Jun 10, 2021 - Python
-
Updated
Jun 10, 2021 - C++
-
Updated
Jun 11, 2021 - C++
-
Updated
Jun 9, 2021 - Go
Describe the bug
Integer columns that are enclosed in quotes are not correctly inferred as integer columns.
Steps/Code to reproduce bug
import cudf
import pandas as pd
from io import StringIO
from cudf.tests.utils import assert_eq
buffer = '"intcol","stringcol"\n"1","some string"\n"2","some other string"'
pd_df = pd.read_csv(StringIO(buffer))
cu_df = cudf.read_csv(StringIO(buffer))
assert_eq(pd_df, cu_df)
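For reference, a self-contained sketch of the expected behavior, using only the standard library: quoting should be resolved before dtype inference, so a quoted "1" is still an integer value. This illustrates the inference rule, not cudf internals:

```python
import csv
import io

def infer_column_types(text):
    """Minimal sketch of quote-aware type inference: the csv module strips
    the quotes first, then integer conversion is attempted on the
    already-unquoted values."""
    rows = list(csv.reader(io.StringIO(text)))
    header, data = rows[0], rows[1:]
    types = {}
    for i, name in enumerate(header):
        try:
            for row in data:
                int(row[i])
            types[name] = "int"
        except ValueError:
            types[name] = "str"
    return types

buffer = '"intcol","stringcol"\n"1","some string"\n"2","some other string"'
print(infer_column_types(buffer))  # {'intcol': 'int', 'stringcol': 'str'}
```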
Current implementation of join can be improved by performing the operation in a single call to the backend kernel instead of multiple calls.
This is a fairly easy kernel and may be a good issue for someone getting to know CUDA/ArrayFire internals. Ping me if you want additional info.
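The idea can be sketched in plain Python: allocate the output once and copy every input at its offset in a single pass, analogous to one fused kernel launch instead of one launch per input. Names here are illustrative, not the ArrayFire API:

```python
def join_single_pass(arrays):
    """Concatenate 1-D arrays with one allocation and one copy pass,
    analogous to a single backend kernel launch."""
    total = sum(len(a) for a in arrays)
    out = [None] * total
    offset = 0
    for a in arrays:
        # Each input lands at its precomputed offset in the output.
        out[offset:offset + len(a)] = a
        offset += len(a)
    return out

print(join_single_pass([[1, 2], [3], [4, 5, 6]]))  # [1, 2, 3, 4, 5, 6]
```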
Problem
CUB allows itself to be placed into a custom namespace via CUB_NS_PREFIX and CUB_NS_POSTFIX, so that multiple shared libraries can each use their own copy of it (and thus different versions can safely coexist). Static variables used for caching could otherwise cause problems (e.g., https://github.com/NVIDIA/cub/blob/main/cub/util_device.cuh#L212).
Thrust, however, depends on CUB and
Is it possible to produce a tmfile directly from training? The tengine-convert-tool conversion gives an error:
tengine-lite library version: 1.4-dev
Get input tensor failed

Or is there an example of training that produces the tmfile below?
…would serve the same purpose and be more concise.
We can just factor out the code from futhark bench for this.
Thank you for this fantastic work!
Would it be possible for the fit_transform() method to also return the KL divergence of the run?
Thanks!
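For context, the quantity being asked for is the Kullback-Leibler divergence that t-SNE minimizes between the high- and low-dimensional affinity distributions. A minimal sketch of the formula itself, not the library's implementation:

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) = sum_i p_i * log(p_i / q_i), the objective t-SNE
    minimizes; zero-probability terms in P contribute nothing by
    convention."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

print(kl_divergence([0.5, 0.5], [0.5, 0.5]))  # 0.0
```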
Created by Nvidia
Released June 23, 2007
- Website: developer.nvidia.com/cuda-zone
- Wikipedia


In numba/stencils/stencil.py, there are various places (like line 552, "if isinstance(kernel_size[i][0], int):") where we check for "int" in relation to neighborhoods. I ran across a case where I was creating a neighborhood tuple by extracting values from a NumPy array. This causes a problem because those NumPy values will not match these isinstance int checks (NumPy integer scalars are not subclasses of the built-in int). I worked around it by converting the values to Python ints.
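A hedged sketch of the mismatch: a value can be integer-like without subclassing the built-in int (NumPy integer scalars behave this way), so checking against numbers.Integral is more permissive than isinstance(x, int). BoxedInt below is a hypothetical stand-in for a NumPy scalar:

```python
import numbers

class BoxedInt:
    """Hypothetical stand-in for a NumPy integer scalar: integer-valued
    but not a subclass of the built-in int."""
    def __init__(self, value):
        self.value = value

# Registering makes BoxedInt a virtual subclass of numbers.Integral,
# much as NumPy registers its integer scalar types.
numbers.Integral.register(BoxedInt)

print(isinstance(BoxedInt(3), int))               # False
print(isinstance(BoxedInt(3), numbers.Integral))  # True
```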