Comparing changes
base repository: neondatabase/neon
base: release-proxy-8538
head repository: neondatabase/neon
compare: main
- 15 commits
- 188 files changed
- 9 contributors
Commits on Jun 24, 2025
Create proxy-bench periodic run in CI (#12242)
Currently run for testing only, via pushing to the test-proxy-bench branch. Relates to #22681.
Commit: a29772b
apply clippy fixes for 1.88.0 beta (#12331)
The 1.88.0 stable release is near (this Thursday). We'd like to fix most warnings beforehand so that the compiler upgrade doesn't require approval from too many teams. This is therefore a preparation PR (like similar PRs before it).

There are a lot of changes for this release, mostly because the `uninlined_format_args` lint has been added to the `style` lint group. One can read more about the lint [here](https://rust-lang.github.io/rust-clippy/master/#/uninlined_format_args). The PR is the result of `cargo +beta clippy --fix` and `cargo fmt`. One remaining warning is left for the proxy team.

Co-authored-by: Conrad Ludgate <[email protected]>
Commit: 5522496
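The `uninlined_format_args` lint asks for variables to be captured directly inside format strings rather than passed as positional arguments. A minimal sketch of the mechanical change `cargo +beta clippy --fix` applies (the variable names here are made up for illustration):

```rust
fn main() {
    let tenant = "acme";
    let shard = 3;

    // Before: positional arguments, flagged by `uninlined_format_args`.
    let old = format!("tenant {} shard {}", tenant, shard);

    // After: identifiers inlined into the format string itself.
    let new = format!("tenant {tenant} shard {shard}");

    // Both produce identical output; only the source form changes.
    assert_eq!(old, new);
    println!("{new}");
}
```

The inlined form is purely syntactic, which is why the whole migration could be done with `--fix` plus `cargo fmt`.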
[proxy]: authenticate to compute after connect_to_compute (#12335)
## Problem
PGLB will do the connect_to_compute logic, neonkeeper will do the session establishment logic. We should split them.

## Summary of changes
Moves postgres authentication to compute into a separate routine that happens after connect_to_compute.
Commit: 4dd9ca7
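A rough sketch of the split described above, with hypothetical stand-in types (the real proxy internals differ; `ComputeStream`, `AuthenticatedSession`, and both function names are illustrative only):

```rust
// Hypothetical stand-ins for the real proxy types.
struct ComputeStream {
    addr: String,
}

struct AuthenticatedSession {
    addr: String,
    user: String,
}

// Step 1: only establishes the connection (the connect_to_compute part,
// which PGLB owns).
fn connect_to_compute(addr: &str) -> ComputeStream {
    ComputeStream { addr: addr.to_string() }
}

// Step 2: postgres authentication as a separate routine that runs on the
// already-established stream (the session-establishment part).
fn authenticate(stream: ComputeStream, user: &str) -> AuthenticatedSession {
    AuthenticatedSession { addr: stream.addr, user: user.to_string() }
}

fn main() {
    let stream = connect_to_compute("compute-1:5432");
    let session = authenticate(stream, "cloud_admin");
    println!("{}@{}", session.user, session.addr);
}
```

Keeping the two steps as separate functions lets each component own exactly one of them.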
Switch the billing metrics storage format to ndjson. (#12338)
## Problem
The billing team wants to change the billing events pipeline and use a common events format in S3 buckets across different event producers.

## Summary of changes
Change the events storage format for billing events from JSON to NDJSON.

Resolves: neondatabase/cloud#29994
Commit: 158d84e
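NDJSON stores one JSON object per line instead of a single JSON array, so consumers can stream, split, and append records without parsing the whole file. A minimal sketch of the difference using hand-built JSON strings (the real event schema and serializer are of course different):

```rust
fn main() {
    let events = [("e1", 10u64), ("e2", 20u64)];

    // Old format: one JSON array holding all events.
    let json = format!(
        "[{}]",
        events
            .iter()
            .map(|(id, v)| format!(r#"{{"id":"{id}","value":{v}}}"#))
            .collect::<Vec<_>>()
            .join(",")
    );

    // NDJSON: one JSON object per line, no surrounding array or commas.
    let ndjson = events
        .iter()
        .map(|(id, v)| format!(r#"{{"id":"{id}","value":{v}}}"#))
        .collect::<Vec<_>>()
        .join("\n");

    println!("{json}");
    println!("{ndjson}");
}
```

Because each line is independent, different producers can append to the same bucket layout without coordinating on array boundaries.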
Use enum-typed PG versions (#12317)
This makes it possible for the compiler to validate that a match block covers all PostgreSQL versions we support.

## Problem
We did not have a complete picture of which places we had to test against PG versions, and what format those versions were in: the full PG version ID format (major/minor/bugfix `MMmmbb`) as transferred in protocol messages, or only the major release version (`MM`). This meant type confusion was rampant. With this change, it becomes easier to develop new version-dependent features, by making type and niche confusion impossible.

## Summary of changes
Every use of `pg_version` is now typed as either `PgVersionId` (a u32, valued in decimal `MMmmbb`) or `PgMajorVersion` (an enum, with a variant for every major version we support, serialized and stored like a u32 with the value of that major version).

Co-authored-by: Arpad Müller <[email protected]>
Commit: 6c6de63
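A simplified sketch of the two types (the real definitions in the PR carry more variants, impls, and serialization; the variant set and conversion below just follow the `MMmmbb` description above):

```rust
/// Full version ID in decimal MMmmbb form, e.g. 170004,
/// as transferred in protocol messages.
#[derive(Clone, Copy, Debug, PartialEq)]
struct PgVersionId(u32);

/// Major version only. As an enum, every `match` over it must be
/// exhaustive, so the compiler flags unhandled versions.
#[derive(Clone, Copy, Debug, PartialEq)]
enum PgMajorVersion {
    Pg14 = 14,
    Pg15 = 15,
    Pg16 = 16,
    Pg17 = 17,
}

impl PgVersionId {
    /// MMmmbb -> MM; unknown majors are rejected instead of silently kept.
    fn major(self) -> Option<PgMajorVersion> {
        match self.0 / 10000 {
            14 => Some(PgMajorVersion::Pg14),
            15 => Some(PgMajorVersion::Pg15),
            16 => Some(PgMajorVersion::Pg16),
            17 => Some(PgMajorVersion::Pg17),
            _ => None,
        }
    }
}

fn main() {
    let v = PgVersionId(170004);
    assert_eq!(v.major(), Some(PgMajorVersion::Pg17));
    println!("{:?}", v.major());
}
```

The niche confusion mentioned above (a `u32` that might hold either `170004` or `17`) disappears because the two meanings now have distinct types.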
Set pgaudit.log=none for monitoring connections (#12137)
pgaudit can spam logs due to all the monitoring that we do. Logs from these connections are not necessary for HIPAA compliance, so we can stop logging from those connections.

Part-of: neondatabase/cloud#29574
Signed-off-by: Tristan Partin <[email protected]>
Commit: aa75722
Commits on Jun 25, 2025
-
Update pgaudit to latest versions (#12328)
These updates contain some bug fixes and are completely backwards compatible with what we currently support in Neon.

Link: pgaudit/pgaudit@1.6.2...1.6.3
Link: pgaudit/pgaudit@1.7.0...1.7.1
Link: pgaudit/pgaudit@16.0...16.1
Link: pgaudit/pgaudit@17.0...17.1

Signed-off-by: Tristan Partin <[email protected]>
Commit: a2d6236
Remove unnecessary separate installation of libpq (#12287)
`make install` compiles and installs libpq. Remove redundant separate step to compile and install it.
Commit: 7c4c36f
Support cancellations of timelines with hanging ondemand downloads (#12330)
In `test_layer_download_cancelled_by_config_location`, we simulate hung downloads via the `before-downloading-layer-stream-pausable` failpoint. Then, we cancel a timeline via the `location_config` endpoint. With the new default as of #11712, we would be creating the timeline on safekeepers regardless of whether there have been writes or not, and it turns out the test relied on the timeline not existing on safekeepers, due to a cancellation bug:

- as established before, the test makes the read path hang
- the timeline cancellation function first cancels the walreceiver, and only then cancels the timeline's token
- `WalIngest::new` requests a checkpoint, which hits the read path
- at cancellation time, we'd be hanging inside the read, not seeing the cancellation of the walreceiver
- the test would time out due to the hang

This is probably also reproducible in the wild when there are S3 unavailabilities or bottlenecks, so we thought it was worthwhile to fix the hang. The approach chosen in the end involves the `tokio::select` macro. In PR 11712, we originally punted on the test due to the hang and opted it out of the new default, but now we can use the new default.

Part of #12299
Commit: 1dc01c9
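The actual fix races the blocking read against the cancellation token with `tokio::select!`, which can't be reproduced here without the tokio crate. The underlying idea, though (never block without also observing cancellation), can be sketched with std threads and a timeout-based wait; every name below is illustrative, not the pageserver's real API:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::mpsc;
use std::sync::Arc;
use std::thread;
use std::time::Duration;

/// Wait for a download result, but return as soon as `cancel` is set,
/// instead of blocking indefinitely inside the read path.
fn wait_cancellable<T>(rx: &mpsc::Receiver<T>, cancel: &AtomicBool) -> Option<T> {
    loop {
        if cancel.load(Ordering::Relaxed) {
            return None; // cancelled: stop waiting on the hung download
        }
        match rx.recv_timeout(Duration::from_millis(10)) {
            Ok(v) => return Some(v),
            Err(mpsc::RecvTimeoutError::Timeout) => continue,
            Err(mpsc::RecvTimeoutError::Disconnected) => return None,
        }
    }
}

fn main() {
    // Simulate a download that never completes (sender kept alive, no send).
    let (_tx, rx) = mpsc::channel::<u32>();
    let cancel = Arc::new(AtomicBool::new(false));

    let c = cancel.clone();
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(50));
        c.store(true, Ordering::Relaxed); // timeline cancellation fires
    });

    // Without cancellation awareness this read would hang forever,
    // exactly like the test timeout described above.
    assert_eq!(wait_cancellable(&rx, &cancel), None);
    println!("read path observed cancellation");
}
```

`tokio::select!` achieves the same effect without polling, by completing whichever future finishes first.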
[console_redirect_proxy]: fix channel binding (#12238)
## Problem
While working more on TLS to compute, I realised that Console Redirect -> pg-sni-router -> compute would break if channel binding was set to prefer. This is because the channel binding data would differ between Console Redirect -> pg-sni-router vs pg-sni-router -> compute. I also noticed that I actually disabled channel binding in #12145, since `connect_raw` would think that the connection didn't support TLS.

## Summary of changes
Make sure we specify the channel binding. Make sure that `connect_raw` can see if we have TLS support.
Commit: 27ca1e2
[proxy]: BatchQueue::call is not cancel safe - make it directly cancellation aware (#12345)
## Problem
neondatabase/cloud#30539

If the current leader cancels the `call` function, then it has removed the jobs from the queue, but will never finish sending the responses. Because of this, it is not cancellation safe.

## Summary of changes
Document these functions as not cancellation safe. Move cancellation of the queued jobs into the queue itself.

## Alternatives considered
1. We could spawn the task that runs the batch, since that won't get cancelled.
   - This requires `fn call(self: Arc<Self>)` or `fn call(&'static self)`.
2. We could add another scopeguard and return the requests back to the queue.
   - This requires that requests are always retry safe, and also requires requests to be `Clone`.
Commit: 517a3d0
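A hedged sketch of the failure mode and of "move cancellation of the queued jobs into the queue itself": each queued job carries the channel its response goes to, so if the leader's batch is dropped mid-flight (e.g. the caller was cancelled), dropping the senders fails the pending receivers instead of leaving them waiting forever. All names here are illustrative, not the real `BatchQueue` API:

```rust
use std::sync::mpsc;

/// Illustrative queued job: a request plus its response channel.
struct Job {
    input: u32,
    respond: mpsc::Sender<u32>,
}

fn main() {
    let (tx, rx) = mpsc::channel::<u32>();
    let job = Job { input: 21, respond: tx };

    // A leader takes the job out of the queue...
    let batch = vec![job];

    // ...and is cancelled before running it. Because the response
    // Sender lives inside the job, dropping the batch drops the Sender,
    // and waiters get a disconnect error instead of hanging forever.
    drop(batch);

    assert!(rx.recv().is_err());
    println!("waiter observed the dropped batch instead of hanging");
}
```

Tying the response channel's lifetime to the queued job is what makes the queue itself cancellation aware, rather than relying on every caller of `call` to clean up.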
feat(storcon): retrieve feature flag and pass to pageservers (#12324)
## Problem
Part of #11813

## Summary of changes
It costs $$$ to have every pageserver retrieve the feature flags directly. Therefore, this patch adds new APIs so that the storcon retrieves the feature flags and sends them to the pageservers. If the feature flags get updated outside of the normal refresh loop of the pageserver, the pageserver won't fetch the flags on its own as long as the last update time is within `refresh_period`.

Signed-off-by: Alex Chi Z <[email protected]>
Commit: 6c77638
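The refresh rule described above (don't self-fetch while the last push is still within the refresh period) can be sketched like this; the function and parameter names are illustrative, not the actual pageserver code:

```rust
use std::time::{Duration, Instant};

/// Decide whether the pageserver should fetch flags on its own,
/// given when the storcon last pushed an update.
fn should_self_fetch(last_updated: Instant, now: Instant, refresh_period: Duration) -> bool {
    now.duration_since(last_updated) > refresh_period
}

fn main() {
    let period = Duration::from_secs(60);
    let pushed = Instant::now();

    // Right after a push from the storcon: no redundant self-fetch.
    assert!(!should_self_fetch(pushed, pushed + Duration::from_secs(10), period));

    // Stale beyond the refresh period: fall back to fetching ourselves.
    assert!(should_self_fetch(pushed, pushed + Duration::from_secs(120), period));

    println!("refresh throttle behaves as described");
}
```

Centralizing the fetch in the storcon and treating a recent push as satisfying the refresh loop is what avoids the per-pageserver cost mentioned above.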
RFC: Endpoint Persistent Unlogged Files Storage (#9661)
## Summary
A design for a storage system that allows storage of files required to make Neon's Endpoints have a better experience at or after a reboot.

## Motivation
Several systems inside PostgreSQL (and Neon) need some persistent storage for optimal workings across reboots and restarts, but still work without it. Examples are the cumulative statistics file in `pg_stat/global.stat`, `pg_stat_statements`' `pg_stat/pg_stat_statements.stat`, and `pg_prewarm`'s `autoprewarm.blocks`. We need a storage system that can store and manage these files for each Endpoint.

[GH rendered file](https://github.com/neondatabase/neon/blob/MMeent/rfc-unlogged-file/docs/rfcs/040-Endpoint-Persistent-Unlogged-Files-Storage.md)

Part of neondatabase/cloud#24225
Commit: 1d49eef
pageserver: payload compression for gRPC base backups (#12346)
## Problem
gRPC base backups use gRPC compression. However, this has two problems:

- Base backup caching will cache compressed base backups (making gRPC compression pointless).
- Tonic does not support varying the compression level, and zstd default level is 10% slower than gzip fastest level.

Touches #11728. Touches neondatabase/cloud#29353.

## Summary of changes
This patch adds a gRPC parameter `BaseBackupRequest::compression` specifying the compression algorithm. It also moves compression into `send_basebackup_tarball` to reduce code duplication. A follow-up PR will integrate the base backup cache with gRPC.
Commit: f755979
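A sketch of what a per-request compression parameter enables, with an illustrative enum (the real `BaseBackupRequest::compression` field is defined in the gRPC proto and differs in shape):

```rust
/// Illustrative compression choices for a base backup request. Carrying
/// the level in the request works around Tonic's fixed-level transport
/// compression.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Compression {
    None,
    Gzip { level: u32 },
    Zstd { level: i32 },
}

fn describe(c: Compression) -> String {
    match c {
        Compression::None => "no compression".to_string(),
        Compression::Gzip { level } => format!("gzip level {level}"),
        Compression::Zstd { level } => format!("zstd level {level}"),
    }
}

fn main() {
    // Unlike transport-level gRPC compression, the caller can pick a fast
    // level, and a cache can store the already-compressed payload bytes.
    let choice = Compression::Gzip { level: 1 };
    println!("{}", describe(choice));
    assert_eq!(describe(choice), "gzip level 1");
}
```

Compressing the payload itself (inside `send_basebackup_tarball`) rather than the transport is also what lets the base backup cache store compressed bytes without double work.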
fix(pageserver): allow refresh_interval to be empty (#12349)
## Problem
Fix for #12324

## Summary of changes
The field needs `serde(default)` so that it may be absent from the config; otherwise deserialization fails with an error.

Signed-off-by: Alex Chi Z <[email protected]>
Commit: 6f70885
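`#[serde(default)]` makes a missing field deserialize to its `Default` value instead of producing an error. Without the serde crate available here, the same fall-back-to-default principle can be shown with a hand-rolled parser over a toy key=value config; the field name and default below are purely illustrative:

```rust
use std::time::Duration;

#[derive(Debug, PartialEq)]
struct Config {
    refresh_interval: Duration,
}

impl Default for Config {
    fn default() -> Self {
        Config { refresh_interval: Duration::from_secs(60) }
    }
}

/// Toy parser: use the value when the key is present, otherwise fall
/// back to the default instead of failing — the behavior that
/// `serde(default)` provides for a missing struct field.
fn parse(input: &str) -> Config {
    let mut cfg = Config::default();
    for line in input.lines() {
        if let Some(v) = line.strip_prefix("refresh_interval_secs=") {
            if let Ok(secs) = v.trim().parse::<u64>() {
                cfg.refresh_interval = Duration::from_secs(secs);
            }
        }
    }
    cfg
}

fn main() {
    // Field present: the provided value wins.
    assert_eq!(parse("refresh_interval_secs=30").refresh_interval, Duration::from_secs(30));
    // Field absent: no error, just the default.
    assert_eq!(parse("").refresh_interval, Duration::from_secs(60));
    println!("missing field falls back to the default");
}
```

Without the default, an older config file written before the field existed would fail to load, which is exactly the deserialization error the commit fixes.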
To see the full comparison locally:

git diff release-proxy-8538...main