microsoft/onnxruntime: open issues
#11189: Evaluating onnxruntime .ORT format models with Python
  Label: component:coreruntime (related to core runtime)
  Opened Apr 12, 2022 by binhpht
#11186: Custom Op does not support dynamic input/output number
  Label: feature:customops (issues related to the creation/usage of custom ops)
  Opened Apr 12, 2022 by luchangli03
#11181: onnxruntime-web is 11-17x slower than native inference
  Label: component:ort-web (related to ONNX Runtime Web, JavaScript/TypeScript code)
  Opened Apr 12, 2022 by CanyonWind
#11178: Build from source issue on Windows
  Label: component:build (related to builds)
  Opened Apr 11, 2022 by Darshcg
#11173: Multiplying a normal convolution-layer output by infinity results in NaN
  Opened Apr 11, 2022 by maybeLee
#11141: C# - InferenceSession fails with "invalid weights type of Int8" even though Int8 is enabled in TensorRT
  Label: ep:TensorRT (questions/issues related to the TensorRT EP)
  Opened Apr 7, 2022 by dannetsecure
#11137: onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from ./onnx_files/GPT2_opt_with_fusion.onnx failed:Node (Reshape_Fuse_0) Op (Reshape) [ShapeInferenceError] Cannot parse data from external tensors. Please load external data into raw data for tensor: constant_shape_0
  Opened Apr 7, 2022 by lileilai
#11133: Documentation for io binding
  Labels: component:documentation (related to documentation), type:enhancement (request for unsupported feature or enhancement)
  Opened Apr 7, 2022 by dashesy
#11124: [Documentation Request] Undocumented removed args in mobile performance tuning page
  Opened Apr 6, 2022 by gqgs
#11123: Different detection output values for C++ and Python with onnxruntime
  Opened Apr 6, 2022 by omerwer
#11122: Using DnnlExecutionProvider for inference is much slower than using CPUExecutionProvider
  Opened Apr 6, 2022 by xiaoxiaohehe001
#11107: Slice behavior wrong with negative step and end = INT_MAX
  Label: component:operator (related to specific ONNX operator support)
  Opened Apr 4, 2022 by garymm
#11099: Performance reduction due to copying of output OrtValues to numpy arrays
  Opened Apr 4, 2022 by vvchernov
#11098: ONNX Runtime Mobile Training (Android/iOS)
  Labels: component:training-core (related to training core), feature: mobile, type:enhancement (request for unsupported feature or enhancement)
  Opened Apr 4, 2022 by danieljanes
#11090: Compile with CUDA error: "Couldn't find CUDA library root."
  Labels: component:build (related to builds), ep:TensorRT (questions/issues related to the TensorRT EP)
  Opened Apr 2, 2022 by vinceyzw
#11085: Keeping the hidden-state vector of RNN layers inside the model during online operation
  Label: type:enhancement (request for unsupported feature or enhancement)
  Opened Apr 1, 2022 by okankop

