inference
Hello, dear Mediapipe team.
I want to run inference on hand pose with the Mediapipe model and my own model.
I have my own TFLite models, which work on RGB bitmaps.
I am trying to query the RGB bitmap from the input frame via a data packet.
My code is:
private static final String INPUT_VIDEO_STREAM_NAME = "input_video";
processor.addPacketCallback(INPUT_VIDEO_STREAM_NAME, (packet) -> {
    // query the RGB bitmap from the packet here
});
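For the bitmap query itself, a minimal sketch of what the callback body might look like, assuming the stream carries RGBA frames and MediaPipe's AndroidPacketGetter is on the classpath; runMyTfliteModel is a hypothetical stand-in for the poster's own model:

import android.graphics.Bitmap;
import com.google.mediapipe.framework.AndroidPacketGetter;

// Inside the existing setup code, alongside the processor:
processor.addPacketCallback(INPUT_VIDEO_STREAM_NAME, (packet) -> {
    // Convert the video packet to an Android Bitmap (assumes RGBA frames;
    // AndroidPacketGetter.getBitmapFromRgb would cover RGB streams).
    Bitmap bitmap = AndroidPacketGetter.getBitmapFromRgba(packet);

    // Hand the frame to the custom TFLite model (hypothetical helper).
    runMyTfliteModel(bitmap);
});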
GCP QUICKSTART GUIDE
To get started quickly with this repo on a Google Cloud Platform (GCP) Deep Learning Virtual Machine (VM), follow the instructions below. New GCP users are eligible for a $300 free credit offer. Other quickstart options for this repo include our [Google Colab Notebook](https://colab.research.google.com/github/ultralytics/yolov3/blob
🚀 Feature request
Current Behavior
flow(
  SomeIOType.decode,
  // ... etc
)
Accessing .decode of a type by passing it causes a lint warning:
warning Avoid referencing unbound methods which may cause unintentional scoping of this @typescript-eslint/unbound-method
However, the function is explicitly bound: this.decode = this.decode.bind(this);
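In the meantime, a workaround sketch that avoids the warning, assuming io-ts with fp-ts's flow; SomeIOType below is a hypothetical codec. Wrapping the method reference in an arrow function keeps the behavior and satisfies the rule:

import * as t from "io-ts";
import { flow } from "fp-ts/function";

const SomeIOType = t.type({ id: t.number }); // hypothetical codec

// Bare method reference: trips @typescript-eslint/unbound-method.
// const parse = flow(SomeIOType.decode);

// Arrow-function wrapper: same behavior, no warning.
const parse = flow((input: unknown) => SomeIOType.decode(input));

The wrapper is a no-op at runtime; the rule flags bare method references because it cannot see the runtime bind in the constructor.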
Desired Behavior
When attempting to download cityscapes_2048x1024, I got: ./download-models.sh: line 721: download_fcn_resnet18_cityscapes_2048x512: command not found
It looks like there was a typo; line 721 needs to be changed from
download_fcn_resnet18_cityscapes_2048x512 to download_fcn_resnet18_cityscapes_2048x1024
Thanks for the amazing repo!
Problem to Solve
At the moment, a Grakn user running the Docker container may not know how to capture logs from inside a running container, which is especially important when the run has failed.
Current Workaround
There may not be a clean one, but in a pinch we could log into the Docker container:
$ docker exec -it [container id] /bin/bash
$ cd /path/to/logs
$ [somehow upload via ...
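A sketch of a cleaner capture path using standard Docker commands; the /path/to/logs placeholder comes from the issue above, not the actual Grakn layout:

$ # Copy the log directory out of the container (works even after it has stopped)
$ docker cp [container id]:/path/to/logs ./grakn-logs
$ # Or capture whatever the container wrote to stdout/stderr
$ docker logs [container id] > grakn-container.log 2>&1

docker cp avoids the need for an upload step from inside the container, since the files land on the host.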
The dldt get-started-linux.md documentation references <DLDT_DIR>/inference-engine/samples/sample_data in several places, but I'm not able to find that directory or any references to it outside the documentation.
Hi NVIDIA Team,
To make this project a success, I would like to suggest adding a few things, and I would love to assist with this.
- Complete technical installation steps, to add more value
- Details of all prerequisites needed to build this project successfully
- Overall technical background, design, and architecture, just like the technical guides and documentation provided for other software engines
'max_request_size' seems to refer to bytes, not MB.
I found that in examples/retinaface.cpp, if OMP acceleration is enabled, a memory leak seems to occur when a face is detected, but I cannot pinpoint the exact cause of the problem.
Notably, the problem disappears if the OMP directives in the qsort_descent_inplace function are commented out.
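For context, the pattern in question looks roughly like this: the example's qsort_descent_inplace parallelizes its two recursive quicksort calls with OpenMP sections. A simplified sketch from memory, with the FaceObject fields trimmed; not the exact source:

#include <utility>
#include <vector>

struct FaceObject { float prob; /* bbox and landmarks elided */ };

// Sort face objects by descending score; the two recursive calls run as
// parallel OpenMP sections. Commenting out the pragmas below is what the
// report says makes the leak disappear.
static void qsort_descent_inplace(std::vector<FaceObject>& objects, int left, int right)
{
    int i = left;
    int j = right;
    float p = objects[(left + right) / 2].prob;

    while (i <= j)
    {
        while (objects[i].prob > p) i++;
        while (objects[j].prob < p) j--;
        if (i <= j)
            std::swap(objects[i++], objects[j--]);
    }

    #pragma omp parallel sections
    {
        #pragma omp section
        {
            if (left < j) qsort_descent_inplace(objects, left, j);
        }
        #pragma omp section
        {
            if (i < right) qsort_descent_inplace(objects, i, right);
        }
    }
}

Since the recursion spawns a new parallel region at every level, this pattern stresses the OpenMP runtime far more than a single parallel loop would, which may be why the pragma is implicated.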