gpu
Here are 2,535 public repositories matching this topic...
These APIs were deprecated a while ago; we'll want to get rid of them.
λ ~/github/taichi master rg "@deprecated" python
python/taichi/lang/matrix.py
516: @deprecated('ti.Matrix.transposed(a)', 'a.transpose()')
520: @deprecated('a.T()', 'a.transpose()')
902: @deprecated('ti.Matrix.var', 'ti.Matrix.field')
915: @deprecated('ti.Vector.var', 'ti.Vector.field')
1142: @depr
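A `deprecated(old, new)` decorator like the one in the listing above is typically implemented by emitting a warning that points the caller at the replacement API. A minimal sketch of that pattern (hypothetical, not taichi's actual implementation):

```python
import functools
import warnings

def deprecated(old, new):
    """Mark a function as deprecated, telling callers to use `new` instead of `old`."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(f"{old} is deprecated; use {new} instead",
                          DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@deprecated('ti.Matrix.transposed(a)', 'a.transpose()')
def transposed(a):
    return a  # placeholder body for the sketch
```

Removing the APIs then amounts to deleting the decorated wrappers once a deprecation window has passed.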
At the moment the relu_layer op doesn't allow threshold configuration, while the legacy RELU op does.
We should add a threshold configuration option to relu_layer.
Usually, after training a model, I save it in C++ format with this code:
cat_model.save_model('a', format="cpp")
cat_model.save_model('b', format="cpp")
But my C++ code needs to use multiple models.
In my main.cpp:
#include "a.hpp"
#include "b.hpp"
int main() {
// do something
double a_pv = ApplyCatboostModel({1.2, 2.3}); // I want to use a.hpp's model here
double b_pv
Hi,
I have tried both loss.backward() and model_engine.backward(loss) in my code. I have observed several subtle differences; for one, retain_graph=True does not work with model_engine.backward(loss). This is causing a problem, since for some reason the buffers are not retained between runs.
Please look into this if you can.
Our users are often confused when the output from programs such as zip2john is very large (multi-gigabyte). Maybe we should identify these programs and enhance them to print a message to stderr explaining that very large output is normal, either always or only when the output size exceeds a threshold (e.g., 1 million bytes?).
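The threshold logic itself is simple. A sketch of the idea (in Python with hypothetical names; the actual tools are C programs):

```python
import sys

# Hypothetical cutoff; the issue suggests something like 1 million bytes.
LARGE_OUTPUT_THRESHOLD = 1_000_000

def maybe_warn_large_output(n_bytes, tool="zip2john"):
    """Print a stderr notice when the produced output is unusually large.

    Returns True if a warning was emitted, so callers (and tests) can check it.
    """
    if n_bytes > LARGE_OUTPUT_THRESHOLD:
        print(f"{tool}: note: the output is {n_bytes} bytes; "
              "very large output is normal for this input format.",
              file=sys.stderr)
        return True
    return False
```

Writing to stderr keeps the warning out of the hash output itself, so piping `zip2john archive.zip > hashes` still works unchanged.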
Implement GPU versions of numpy.* functions in the cupy.* namespace.
This is a tracker issue listing the remaining numpy.* APIs (see also: comparison table). I've categorized them by difficulty so that new contributors can pick the right task. Your contributions are highly welcomed and appreciated!
List of A
Is your feature request related to a problem? Please describe.
I have to add nvim\bin to the PATH environment variable just so that neovide can find nvim.exe.
Describe the solution you'd like
Add an option to specify the nvim path in neovide's settings.
Describe the bug
W-THU is not supported right now; it only works if the user specifies seasonal_period in setup.
Expected behavior
I would have expected it to automatically infer freq = 7 from there.
For pandas API compatibility, we can implement Series.autocorr. autocorr calculates the Pearson correlation between the Series and a copy of itself lagged by N steps. Conceptually, this is a combination of shift and corr.
import pandas as pd
s = pd.Series([0.25, 0.5, 0.2, -0.05])
print(s.autocorr())
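The shift-and-correlate idea can be sketched in plain Python (a hypothetical reference implementation for clarity, not an actual GPU kernel):

```python
import math

def autocorr(values, lag=1):
    """Pearson correlation between a sequence and itself lagged by `lag` steps."""
    a, b = values[lag:], values[:-lag]          # s and s.shift(lag), aligned
    mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

print(autocorr([0.25, 0.5, 0.2, -0.05]))  # matches pandas' s.autocorr()
```

A GPU version would express the same computation as a correlation between the column and its shifted copy, skipping the null rows the shift introduces.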
Environment
1. System environment:
2. MegEngine version: 1.6.0rc1
3. Python version: Python 3.8.10
The program got stuck at net.load when I was trying to use MegFlow. I waited for more than 10 minutes and there was no sign of it finishing.
The current implementation of join can be improved by performing the operation in a single call to the backend kernel instead of multiple calls.
This is a fairly easy kernel and may be a good issue for someone getting to know CUDA/ArrayFire internals. Ping me if you want additional info.
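For intuition, the difference in copy traffic between repeated pairwise joins and a single fused pass can be sketched in plain Python (the real work would happen in a single CUDA kernel launch):

```python
def join_pairwise(arrays):
    """Repeated pairwise concatenation: earlier elements get recopied on every call."""
    out = []
    for arr in arrays:
        out = out + arr          # each '+' allocates a new buffer and recopies the prefix
    return out

def join_single_pass(arrays):
    """Single fused pass: every element is copied exactly once into the output."""
    total = sum(len(a) for a in arrays)
    out = [None] * total         # preallocate once, as a single kernel launch would
    pos = 0
    for arr in arrays:
        out[pos:pos + len(arr)] = arr
        pos += len(arr)
    return out
```

Both produce the same result, but the single-pass version does O(total) copies instead of O(total * num_arrays) in the worst case, which is the same win a fused backend kernel gives over repeated kernel calls.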
EDIT: The failure is due to a change in Python 3.10 behaviour.
The following OpInfo tests fail locally (with Python 3.10) but pass on CI for the gradient op:
https://github.com/pytorch/pytorch/blob/97f29bda59deab8c063cf01f0a8ff4321b93c55e/torch/testing/_internal/common_methods_invocations.py#L8277-L8282
Local failure log