stream-processing
Here are 704 public repositories matching this topic...
There is no technical difficulty in supporting the includeValue option; it looks like we are just missing it at the API level.
See the SO question.
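If this refers to Hazelcast's map entry-listener API (an assumption on my part; the excerpt doesn't name the project), includeValue is the flag that controls whether listener events carry the entry's value. A minimal sketch of how that flag behaves on the existing IMap API:

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import com.hazelcast.map.listener.EntryAddedListener;

public class IncludeValueExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, String> map = hz.getMap("example");
        // The second argument is includeValue: when true, the event carries the
        // entry's value, so event.getValue() is populated instead of null.
        map.addEntryListener((EntryAddedListener<String, String>) event ->
                System.out.println(event.getKey() + " -> " + event.getValue()), true);
        map.put("hello", "world");
    }
}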
I was looking at some logs, and for a cluster that has no lag and only a single host (which was alive at the time) I was seeing the following exception:
Unable to execute pull query. All nodes are dead or exceed max allowed lag.
The error message doesn't help me understand what happened, which partition failed, or whether there's some other issue. We should print more debugging information.
Bug description
There is no fallback component when an unhandled error occurs in a React component.
Please describe.
If this affects the front-end, screenshots would be of great help.
Expected behavior
For an implementation of #126 (PostgreSQL driver with SKIP LOCKED), I create a SQL table for each consumer group containing the offsets ready to be consumed. The name for these tables is built by concatenating some prefix, the name of the topic, and the name of the consumer group. In some of the test cases in the test suite, UUIDs are used for both the topic and the consumer group. Each UUID has
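The excerpt is cut off, but for a rough sense of the lengths involved: a canonical UUID string is 36 characters, and PostgreSQL truncates identifiers at 63 bytes, so a prefix plus two UUID-based names can easily exceed that limit (whether that is the exact problem here is my guess). A hypothetical sketch of the naming scheme, in Java rather than the driver's own language, with a made-up "offsets" prefix:

import java.util.UUID;

public class OffsetsTableName {
    // Hypothetical helper mirroring the naming scheme described above:
    // some prefix, the topic name, and the consumer-group name concatenated.
    static String offsetsTableName(String prefix, String topic, String group) {
        return prefix + "_" + topic + "_" + group;
    }

    public static void main(String[] args) {
        String topic = UUID.randomUUID().toString();  // 36 characters
        String group = UUID.randomUUID().toString();  // 36 characters
        String name = offsetsTableName("offsets", topic, group);
        // With two UUIDs the name is already ~80 characters, well past
        // PostgreSQL's 63-byte identifier limit.
        System.out.println(name + " (" + name.length() + " chars)");
    }
}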
It can be very difficult to piece together a reasonable estimate of the history of events from the current worker logs, because none of them have timestamps.
To that end, I think we should add timestamps to the logs.
This has some cons:
- We can't just use @printf like we have been until now. We need to either include a timestamp in every @printf call (laborious and error-prone) or c
For example, given a simple pipeline such as:
Pipeline p = Pipeline.create();
p.readFrom(TestSources.items("the", "quick", "brown", "fox"))
.aggregate(aggregator)
.writeTo(Sinks.logger());
I'd like aggregator to be something requiring a non-serialisable dependency to do its work.
I know I can do this:
Pipeline p = Pipeline.create();
p.readFrom(TestSource
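The snippet above is cut off, but for context: one mechanism Jet provides for non-serializable dependencies is a ServiceFactory, which creates the dependency on each member at runtime instead of serializing it with the job. It plugs into stages like mapUsingService rather than into aggregate directly, so treat this only as a sketch of the general pattern (the Dictionary class is a made-up stand-in for the non-serializable dependency):

import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.ServiceFactories;
import com.hazelcast.jet.pipeline.ServiceFactory;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.test.TestSources;

public class NonSerializableDependencyExample {
    // Made-up non-serializable dependency, standing in for the real one.
    static class Dictionary {
        boolean contains(String word) { return word.length() > 3; }
    }

    public static void main(String[] args) {
        // Created lazily on each member, so it never needs to be serialized.
        ServiceFactory<?, Dictionary> dictionary =
                ServiceFactories.sharedService(ctx -> new Dictionary());

        Pipeline p = Pipeline.create();
        p.readFrom(TestSources.items("the", "quick", "brown", "fox"))
         .mapUsingService(dictionary, (dict, word) -> word + ": " + dict.contains(word))
         .writeTo(Sinks.logger());
    }
}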
I previously figured out a way to get the (x, y, z) data points for each frame from one hand, but I'm not sure how to do that for the new holistic model that they released. I am trying to get all the landmark data points for both hands as well as parts of the chest and face. Does anyone know how to extract the holistic landmark data/print it to a text file? Or at least give me some directions as to h