kafka-client
Here are 140 public repositories matching this topic...
SQL Insert Statement
Current behavior:
All the SQL activities either don't support Insert or are specific to a single use case.
Expected behavior:
To be able to insert into a SQL database from an activity.
What is the motivation / use case for changing the behavior?
Many workflows/pipelines require logging to a database.
Additional information you deem important (e.g. I need this tomorrow):
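As a sketch of what such an insert activity could do internally (the `workflow_log` table and `log_to_db` helper are hypothetical names, not from any existing activity), here is a parameterized insert using Python's stdlib sqlite3 as a stand-in backend:

```python
import sqlite3

# An in-memory database stands in for whatever SQL backend the activity targets.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE workflow_log (run_id TEXT, message TEXT)")

def log_to_db(conn, run_id, message):
    # Parameterized insert: placeholders keep workflow data out of the SQL text.
    conn.execute(
        "INSERT INTO workflow_log (run_id, message) VALUES (?, ?)",
        (run_id, message),
    )
    conn.commit()

log_to_db(conn, "run-42", "pipeline step finished")
rows = conn.execute("SELECT run_id, message FROM workflow_log").fetchall()
```

A real activity would take the connection string and table name as activity inputs rather than hard-coding them.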
While the correlation id used to match network requests with responses is supposed to be unique at any given time, some tests were misbehaving because that wasn't the case.
Since this is one of those things that should never happen, we could add an invariant that throws a non-retriable error if it happens anyway: something has gone terribly wrong at that point and we won't get valid results.
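A minimal sketch of such an invariant (class and method names here are hypothetical, not the client's actual API): track in-flight correlation ids and fail fast, non-retriably, on a duplicate.

```python
class DuplicateCorrelationIdError(Exception):
    """Non-retriable: a duplicate id means request/response matching is broken."""

class InFlightRequests:
    def __init__(self):
        self._pending = {}

    def register(self, correlation_id, request):
        # Invariant: a correlation id must be unique among in-flight requests.
        if correlation_id in self._pending:
            raise DuplicateCorrelationIdError(
                f"correlation id {correlation_id} is already in flight")
        self._pending[correlation_id] = request

    def complete(self, correlation_id):
        # Match a response to its request and free the id for reuse.
        return self._pending.pop(correlation_id)

reqs = InFlightRequests()
reqs.register(7, "metadata-request")
```

Raising a dedicated exception type (rather than a generic error) makes it easy for the retry machinery to recognize the failure as non-retriable.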
Typo
The error message "Hostname could not be found in context. HostNamePartitioningStrategy will not work." and the variable name "hostname" are misleading: the class is ContextNameKeyingStrategy, and the value it reads is the context name, not a hostname.
ContextNameKeyingStrategy: <-- problem code
@Override
public void setContext(Context context) {
    super.setContext(context);
    final String hostname = context.getProperty(CoreConstants.CONTEXT_NAME_KEY);
    if (hostname == null) {
        addError("Hostname could not be found in context. HostNamePartitioningStrategy will not work.");
    }
}

Based on: https://kafka.js.org/docs/configuration and tulios/kafkajs#298
We may not have the correct settings for the JSConsumer and JSProducer. This issue is to ensure they are up to date after nodefluent/node-sinek#154 has been merged.
-
Updated
Aug 18, 2020 - Clojure
-
Updated
Apr 22, 2020 - Java
-
Updated
May 11, 2020 - Ruby
Similarly to #234, it would be useful to provide functions for creating test KafkaProducers.
A good first function would be one which yields somewhat sensible default RecordMetadata.
object KafkaProducer {
  def unit[F[_], K, V](implicit F: Sync[F]): F[KafkaProducer[F, K, V]] = ???
}
Likely, this would require some internal state, hence F[KafkaProducer[F, K, V]].
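The fs2-kafka implementation would of course live in Scala, but the idea can be sketched language-neutrally (every name below is hypothetical, not fs2-kafka's API): a test producer keeps internal state of produced records and fabricates default metadata, which is exactly why constructing one is itself effectful, hence F[KafkaProducer[F, K, V]].

```python
from dataclasses import dataclass, field

@dataclass
class RecordMetadata:
    # "Somewhat sensible" defaults, analogous to what the proposed helper yields.
    topic: str
    partition: int = 0
    offset: int = 0

@dataclass
class TestProducer:
    # Internal mutable state: the reason construction is wrapped in F in fs2-kafka.
    sent: list = field(default_factory=list)
    _offsets: dict = field(default_factory=dict)

    def produce(self, topic, key, value):
        # Assign monotonically increasing per-topic offsets, like a real broker.
        offset = self._offsets.get(topic, 0)
        self._offsets[topic] = offset + 1
        self.sent.append((topic, key, value))
        return RecordMetadata(topic=topic, offset=offset)

producer = TestProducer()
meta = producer.produce("events", "k1", "v1")
```

Tests can then assert against `producer.sent` instead of standing up a broker.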
There's a delete-topic example in the README that relies on automatic api_version discovery. However, the code marks api_version as a required field: https://github.com/StephenSorriaux/ansible-kafka-admin/blob/master/library/kafka_lib.py#L1603
It'd be great if it weren't required and auto-discovery worked!
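One way the module could treat the field as optional (a hypothetical helper, not the module's actual code): accept a version string when the user supplies one, and map an empty value to None so the underlying kafka-python client falls back to broker version auto-discovery.

```python
def resolve_api_version(raw):
    # None/empty/"auto" -> let the underlying client auto-discover the version.
    if raw in (None, "", "auto"):
        return None
    # "0.11.0" -> (0, 11, 0), the tuple form kafka-python expects.
    return tuple(int(part) for part in raw.split("."))
```

With this, api_version could default to "auto" in the Ansible argument spec instead of being required.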
https://github.com/collectd/collectd/blob/0b2796dfa3b763ed10194ccd66b39b1e056da9b9/src/mysql.c#L772
Hi,
As I saw in the source for the mysql plugin, the collector specifically ignores the Prepared_stmt_count variable.
I would like to have that in the collectd output as well.
Is it possible to enable this key in the collectd mysql collector?
Unfortunately my C skills are pretty near zero.
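For anyone needing the value before the plugin supports it, it can be read from SHOW GLOBAL STATUS without touching C. A sketch of the parsing step (the sample rows below are fabricated, mimicking the (Variable_name, Value) pairs MySQL returns):

```python
def pick_status(rows, wanted):
    # rows: (Variable_name, Value) pairs as returned by SHOW GLOBAL STATUS.
    status = {name: int(value) for name, value in rows}
    return status.get(wanted)

sample = [("Threads_connected", "3"), ("Prepared_stmt_count", "17")]
count = pick_status(sample, "Prepared_stmt_count")
```

In practice the rows would come from a MySQL client executing `SHOW GLOBAL STATUS LIKE 'Prepared_stmt_count'`, and the result could be fed to collectd via the exec plugin.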