We are converting a database from SQL Server 2022 to PostgreSQL 18. Our system has a lot of code in the database, and many of our queries use local temp tables far more often than CTEs. This is because the queries are extremely large and complex, and breaking them down into steps gives the SQL Server optimizer a better chance of choosing a good execution plan. I have read that in Postgres, temp tables can cause performance problems under heavy load due to design limitations that can bloat the system catalogs and eventually require frequent VACUUM FULL runs. I have no idea how heavy the load has to be before this becomes an issue. Has anyone experienced this in a heavily used Postgres system, and if so, was the solution to replace the temp tables with CTEs?
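To make the pattern concrete (table and column names below are invented, not our real schema), this is roughly what our SQL Server code does today, and the two Postgres shapes we are weighing up:

    -- SQL Server today: break a big query into steps via a local temp table
    SELECT o.customer_id, SUM(o.amount) AS total_amount
    INTO #customer_totals
    FROM orders o
    WHERE o.order_date >= '2024-01-01'
    GROUP BY o.customer_id;

    SELECT c.customer_id, c.region, t.total_amount
    FROM #customer_totals AS t
    JOIN customers AS c ON c.customer_id = t.customer_id;

    -- Postgres option A: a MATERIALIZED CTE (PostgreSQL 12+) forces the
    -- intermediate result to be computed once, acting as an optimizer fence
    -- much like the temp-table breakdown, with no catalog churn
    WITH customer_totals AS MATERIALIZED (
        SELECT o.customer_id, SUM(o.amount) AS total_amount
        FROM orders o
        WHERE o.order_date >= '2024-01-01'
        GROUP BY o.customer_id
    )
    SELECT c.customer_id, c.region, t.total_amount
    FROM customer_totals AS t
    JOIN customers AS c ON c.customer_id = t.customer_id;

    -- Postgres option B: a temp table still works, but every CREATE TEMP TABLE
    -- adds and later removes rows in pg_class/pg_attribute, which is where the
    -- catalog-bloat concern comes from under heavy load
    BEGIN;
    CREATE TEMP TABLE customer_totals ON COMMIT DROP AS
    SELECT o.customer_id, SUM(o.amount) AS total_amount
    FROM orders o
    WHERE o.order_date >= '2024-01-01'
    GROUP BY o.customer_id;

    ANALYZE customer_totals;  -- temp tables are not analyzed by autovacuum

    SELECT c.customer_id, c.region, t.total_amount
    FROM customer_totals AS t
    JOIN customers AS c ON c.customer_id = t.customer_id;
    COMMIT;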
We are at the start of the migration process, which may take another year, but we want to know whether our current reliance on temp tables over CTEs for complex queries means we have problems ahead.
There's also the pgtt extension, or emulating them using unlogged tables. Working with fairly large and temp-heavy OLAP workloads on Postgres, you just tune your setup to minimise bloat and refresh more often (examples from @Laurenz Albe), and it won't clog up. Depending on how you use the temp tables, you might also want to run vacuum+analyze+reindex+cluster cycles directly from the app/operator.
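A rough sketch of what the unlogged-table emulation and the maintenance side can look like (object names are made up; for pgtt, see that extension's own documentation, since it has its own workflow):

    -- One shared unlogged table, created once, instead of per-session temp tables.
    -- Repeated use adds no pg_class/pg_attribute rows, so the catalogs stay small.
    CREATE UNLOGGED TABLE IF NOT EXISTS scratch_customer_totals (
        session_pid  int     NOT NULL DEFAULT pg_backend_pid(),
        customer_id  bigint  NOT NULL,
        total_amount numeric
    );
    CREATE INDEX IF NOT EXISTS scratch_customer_totals_pid_idx
        ON scratch_customer_totals (session_pid);

    -- Each session touches only its own rows (pg_backend_pid() is used as a
    -- simple key here; a real implementation might use its own session
    -- identifier, since PIDs are eventually reused)
    INSERT INTO scratch_customer_totals (customer_id, total_amount)
    SELECT o.customer_id, SUM(o.amount)
    FROM orders o
    GROUP BY o.customer_id;

    SELECT t.customer_id, t.total_amount
    FROM scratch_customer_totals t
    WHERE t.session_pid = pg_backend_pid();

    DELETE FROM scratch_customer_totals
    WHERE session_pid = pg_backend_pid();

    -- If you stay with plain temp tables, keep an eye on the catalogs; a
    -- superuser can vacuum them directly between or during batch runs:
    VACUUM (ANALYZE) pg_catalog.pg_class;
    VACUUM (ANALYZE) pg_catalog.pg_attribute;
    VACUUM (ANALYZE) pg_catalog.pg_depend;
    -- VACUUM FULL on the catalogs reclaims the space but takes exclusive locks,
    -- so treat it as a maintenance-window job, not something to run under load.

The trade-off with the shared unlogged table is that you lose per-session isolation and have to filter and clean up explicitly, but you avoid the per-execution catalog churn entirely.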