inference
Here are 408 public repositories matching this topic...
Hi, I'm a beginner trying to learn how to customize or modify my own MediaPipe pipeline. I trained a neural network on landmarks extracted from MediaPipe. Is there any way I can put my trained model back into MediaPipe to implement real-time gesture recognition? Thanks for your help.
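One common approach is to run the trained model alongside MediaPipe rather than inside it: extract the landmarks each frame, flatten them into a feature vector, and feed that vector to the classifier. A minimal sketch with a stand-in nearest-centroid model (the centroids, labels, and the 21-landmark hand layout are assumptions for illustration; any trained network can replace the distance test):

```python
import math

# Hypothetical centroids learned offline from MediaPipe hand landmarks
# (21 landmarks x 3 coordinates = 63 features per frame).
GESTURE_CENTROIDS = {
    "open_palm": [0.5] * 63,
    "fist": [0.1] * 63,
}

def classify_landmarks(landmarks):
    """Flatten (x, y, z) landmark tuples and pick the nearest centroid.

    In a real pipeline `landmarks` would come from MediaPipe's per-frame
    results; a trained model's predict() call can replace the loop below.
    """
    features = [coord for point in landmarks for coord in point]
    best_label, best_dist = None, math.inf
    for label, centroid in GESTURE_CENTROIDS.items():
        dist = math.dist(features, centroid)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```

Calling this once per captured frame keeps the recognition real-time, since the feature vector is tiny compared to the image itself.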
GCP QUICKSTART GUIDE
To get started quickly with this repo on a Google Cloud Platform (GCP) Deep Learning Virtual Machine (VM), follow the instructions below. New GCP users are eligible for a $300 free credit offer. Other quickstart options for this repo include our [Google Colab Notebook](https://colab.research.google.com/github/ultralytics/yolov3/blob
🚀 Feature request
Current Behavior
flow(
SomeIOType.decode,
... etc
)
Accessing .decode of a type by passing it causes a lint warning:
warning Avoid referencing unbound methods which may cause unintentional scoping of this @typescript-eslint/unbound-method
However, the function is explicitly bound: this.decode = this.decode.bind(this);
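For readers unfamiliar with the binding pattern under discussion, the same idea can be sketched in Python (class name borrowed from the snippet; the payload logic is invented for illustration):

```python
class SomeIOType:
    """Hypothetical Python analogue of the codec in the snippet."""

    def __init__(self, payload):
        self.payload = payload
        # Analogue of `this.decode = this.decode.bind(this)`:
        # shadow the class method with a bound method stored on the
        # instance, so a bare reference to `decode` keeps its receiver.
        self.decode = self.decode

    def decode(self):
        return str(self.payload)

# A bare reference passed elsewhere still works, because it carries
# its instance with it -- the guarantee the linter cannot see in the
# TypeScript case.
codec = SomeIOType(42)
callback = codec.decode
```

Here `callback()` returns "42" even though the method reference was detached from the object, which is exactly what the constructor-time binding is meant to guarantee.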
Desired Behavior
When attempting to download cityscapes_2048x1024 I got: ./download-models.sh: line 721: download_fcn_resnet18_cityscapes_2048x512: command not found
It looks like there is a typo: line 721 of the script calls download_fcn_resnet18_cityscapes_2048x512 and needs to be changed to download_fcn_resnet18_cityscapes_2048x1024.
Thanks for the amazing repo!
I followed the instructions from https://dev.grakn.ai/docs/general/quickstart and immediately hit these errors:
grakn-core-all-windows-1.7.1>grakn.bat console --keyspace social_network --file ./schema.gql
22:08:17,078 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
22:08:17,078 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could
The dldt/get-started-linux.md document references <DLDT_DIR>/inference-engine/samples/sample_data in several places, but I'm not able to find that directory or any references to it outside the documentation.
Hi NVIDIA Team,
To make this project successful, I would like to suggest adding a few things. I would love to assist with this.
- Complete technical installation steps, to add more value
- Details of all prerequisites needed to build this project successfully
- Overall technical background, design, and architecture, just like the technical guides and documentation we provide for other software engine
'max_request_size' seems to refer to bytes, not MB.
I found that in examples/retinaface.cpp, if OMP acceleration is enabled, a memory leak seems to occur whenever a face is detected, but I cannot pin down the exact cause.
Notably, if the OMP directive in the qsort_descent_inplace function is commented out, the problem disappears.