Focusing
- Hangzhou, China
Popular repositories
flink-sql-benchmark Public
TPC-DS benchmark tools for Flink batch SQL. Version 1.10 or above.
incubator-paimon Public
Forked from apache/incubator-paimon
An Apache Flink subproject to provide storage for dynamic tables.
1,358 contributions in the last year
Contribution activity
March 2023
Created 129 commits in 4 repositories
Created a pull request in apache/incubator-paimon that received 1 comment
[FLINK-31343] Remove JMH dependency in flink-table-store-micro-benchmark
JMH open source license is incompatible
+498 −869
1 comment
Opened 21 other pull requests in 3 repositories
apache/incubator-paimon: 1 open, 17 merged, 1 closed
- [flink] Limit should not rely on assign all splits in StaticFileStoreSplitEnumerator
- [bug] Bounded Stream should end directly if the full scan snapshot should be end
- [flink] Introduce watermark alignment options
- [doc] Document roadmap
- [code] Clean Flink related comment and naming
- [test] Split pre commit tests
- [github] Update README and create issue/pr template
- [rename] Rename package to apache paimon
- [core] Introduce RowCompactedSerializer to compact bytes in LookupLevels
- [core] Introduce StreamTableScan to support checkpoint and restore
- [core] Introduce LookupMergeFunction and ForceUpLevel0Compaction
- [FLINK-31397] Introduce write-once hash lookup store
- [core] Introduce lookup changelog producer
- [FLINK-31392] Refactor classes code of full-compaction
- [FLINK-31331] Flink 1.16 should implement new LookupFunction
- [FLINK-31329] Fix Parquet stats extractor
- [FLINK-31311] Supports Bounded Watermark streaming read
- [hotfix] Remove ignoreEmptyCommit in StreamTableCommit
- [hotfix] Add DelegateCatalog
JingsongLi/paimon-trino: 1 closed
ververica/flink-cdc-connectors: 1 merged
Reviewed 64 pull requests in 2 repositories
apache/incubator-paimon: 25 pull requests
- [spark] Support time travel for Spark 3.3 (VERSION AS OF and TIMESTAMP AS OF)
- [FLINK-31434] Introduce CDC sink
- [spark] Upgrade spark version to 3.3.2 of spark-common
- [flink] Introduce watermark alignment options
- [refactor] simplify String operations and add JDK version into README
- [FLINK-31433] Make SchemaChange serializable
- [doc] Document roadmap
- [core] add read batch size option
- [FLINK-31432] Introduce a special StoreWriteOperator to deal with schema changes
- [engine] Introduce Presto Reader for Paimon
- [license] Add license header to ci file
- [core] Replace table_store with paimon in docs and classes
- [test-utils] Introduce AssertionUtils
- [FLINK-31462] Supports full calculation from the specified snapshot in streaming mode
- [docs] Decoupling documentation from Flink
- [docs] Fix stable version link
- [FLINK-31451] Introduce Presto Reader for table store
- [FLINK-31338] support infer parallelism for flink table store
- minor: correct README.md
- [hotfix] change name in CODE_OF_CONDUCT file
- [PAIMON-643] Change the import package to start with org.apache.paimon
- Add column position for paimon
- [doc] fix typo
- [docs] Move document pages from content/docs to content/
- [doc] Disable document build in incubator-paimon and change document base url
- Some pull request reviews not shown.
apache/incubator-paimon-shade: 4 pull requests
Created an issue in apache/incubator-paimon that received 5 comments
[Document] Provide jar download url in documentation
We have already published the snapshot jar to the Nexus repository. We can just provide a download link in the documentation; users don't need to build it themselves.
5 comments
Opened 17 other issues in 2 repositories
apache/incubator-paimon: 12 open, 2 closed
- [Bug] Assign all splits in one time will produce exceed akka.framesize exceptions in StaticFileStoreSplitEnumerator
- [Feature] Optimize serialization of TableSchema
- [Feature] Introduce metrics about the busyness of compaction thread
- [Document] Add document for hdfs and recommend to use the computing engine's own filesystem
- [Bug] Hive catalog should not shade hive dependencies
- [feature] Format Read Batch size should be configurable
- [feature] Integrate to Flink 1.17
- [improvement] Use new ReadBuilder and WriteBuilder API in tests
- [bug] TableScan.plan should never return null
- [feature] Use caffeine to replace with guava cache
- [bug][spark] Spark write should work with kryo serializer
- [feature] Provide changelog-producer.row-deduplicate to deduplicate same change
- [Test] Test compatibility to table store 0.3 hive metastore
- Disable document build in incubator-paimon.