
I created a temp table in my PostgreSQL DB using the following query

SELECT * INTO TEMP TABLE tempdata FROM data WHERE id=2004;

Now I want to create a backup of this temp table tempdata,
so I use the following command line:

"C:\Program Files\PostgreSQL\9.0\bin\pg_dump.exe" -F t -a -U my_admin -t tempdata myDB >"e:\mydump.backup"  

I get a message saying

pg_dump: No matching tables were found  

Is it possible to create a dump of temp tables?
Am I doing it correctly?

P.S.: I would also want to restore the same. I don't want to use any extra components.

TIA.

  • It would help if you could give some background on what you are trying to achieve. What are you loading into these temp tables? Why pg_dump them? Also, how do you expect to restore a temp table - what result would you expect, given that temp tables are temporary and go away at the end of the session? Restoring a temp table would have no effect even if you could do it. Commented Dec 14, 2011 at 11:27
  • @CraigRinger I am trying to do something like this: since my data is scattered over multiple tables, I want to take a backup of only some specific data from all the tables and dump it in a backup file. Later on I want to restore this data on some other system which may or may not have this data. Commented Dec 14, 2011 at 11:38
  • @CraigRinger I have implemented this using non-temp tables but the solution is not that effective. Commented Dec 14, 2011 at 11:43
  • 1
    The linked problem sounds like it was basically custom designed for COPY (SELECT ....) TO 'filename' and COPY tablename FROM 'filename', except for the single file bit. For that: dump to multiple files, include a psql script that runs COPY for each of them for restore, and bundle it in a zip file. You now have a single file. Commented Dec 14, 2011 at 12:11
  • @CraigRinger I am implementing all this through an application, so zipping and unzipping the file is a problem. Commented Dec 14, 2011 at 12:24
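A sketch of the per-table COPY approach described in the comments, assuming server-side file access (which requires superuser rights) and illustrative table and file names (`details` and the paths are hypothetical, not from the question):

```sql
-- Dump each selection to its own server-side file (superuser only;
-- paths are as seen by the server, not the client):
COPY (SELECT * FROM data WHERE id = 2004) TO '/tmp/data_2004.csv' CSV;
COPY (SELECT * FROM details WHERE data_id = 2004) TO '/tmp/details_2004.csv' CSV;

-- restore.sql, run with psql on the target system:
COPY data FROM '/tmp/data_2004.csv' CSV;
COPY details FROM '/tmp/details_2004.csv' CSV;
```

The dump files plus the restore script can then be bundled into a single archive, as the comment suggests.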

1 Answer


I don't think you'll be able to use pg_dump for that temporary table. The problem is that temporary tables only exist within the session where they were created:

PostgreSQL instead requires each session to issue its own CREATE TEMPORARY TABLE command for each temporary table to be used. This allows different sessions to use the same temporary table name for different purposes, whereas the standard's approach constrains all instances of a given temporary table name to have the same table structure.

So you'd create the temporary table in one session but pg_dump would be using a different session that doesn't have your temporary table.

However, COPY should work:

COPY moves data between PostgreSQL tables and standard file-system files.

but you'll either be copying the data to the standard output or a file on the database server (which requires superuser access):

COPY with a file name instructs the PostgreSQL server to directly read from or write to a file. The file must be accessible to the server and the name must be specified from the viewpoint of the server.
[...]
COPY naming a file is only allowed to database superusers, since it allows reading or writing any file that the server has privileges to access.

So using COPY to dump the temporary table straight to a file might not be an option. You can COPY to the standard output though but how well that will work depends on how you're accessing the database.
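One way around the superuser restriction, if you are working through psql, is its `\copy` meta-command: it is a client-side wrapper around `COPY ... TO STDOUT`, so it writes to a file on the client machine and runs inside your current session, where the temp table is still visible. A sketch, reusing the question's query (the output path is illustrative):

```sql
-- All in one psql session, so the temp table still exists;
-- \copy writes a client-side file and needs no superuser rights.
SELECT * INTO TEMP TABLE tempdata FROM data WHERE id = 2004;
\copy tempdata TO 'e:/mydump.csv' CSV
```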

You might have better luck if you didn't use temporary tables. You would, of course, have to manage unique table names to avoid conflicts with other sessions and you'd have to take care to ensure that your non-temporary temporary tables were dropped when you were done with them.
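A minimal sketch of that non-temp-table workflow, with an illustrative unique suffix on the table name:

```sql
-- 1. Ordinary (non-temp) table with a session-unique name (name is illustrative):
CREATE TABLE tempdata_20111214 AS
SELECT * FROM data WHERE id = 2004;

-- 2. Because it is a regular table, pg_dump can see it from another session, e.g.:
--    pg_dump -F t -a -U my_admin -t tempdata_20111214 myDB > mydump.backup

-- 3. Clean up once the dump is done:
DROP TABLE tempdata_20111214;
```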


6 Comments

I can't use COPY since there is more than one table to create a backup of. Any other suggestions?
@Shirish11: COPY them one by one, or don't use temp tables. If you don't need to worry about uniqueness (i.e. you can guarantee that only one session will need to write to your "temp" tables at a time), then you can use non-temp tables and pg_dump.
Have been using this but it's not a very effective one. Any errors during execution will cause me to lose all my data. (Is it that if I use the same session for creating my temp tables and use pg_dump it will work out?)
I don't think you can attach pg_dump to an existing session. I don't think temp tables are the right tool in this case.
@shirish11 I think @muistooshort is quite right - temp tables are NOT the right tool for this job. Either COPY to multiple different files (possibly from within a PL/PgSQL function if you want to encapsulate the work) or use non-temp tables. If you want to isolate concurrent runs from each other, try creating your non-temp tables in different schema (see CREATE SCHEMA) and telling pg_dump to only dump the particular schema you're interested in.
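A sketch of the schema-per-run approach from the last comment, with illustrative names:

```sql
-- Isolate one run's tables in their own schema (names are illustrative):
CREATE SCHEMA run_2004;
CREATE TABLE run_2004.tempdata AS
SELECT * FROM data WHERE id = 2004;

-- Dump just that schema from any session, e.g.:
--   pg_dump -F t -a -U my_admin -n run_2004 myDB > mydump.backup

-- Drop the whole run's tables in one statement when finished:
DROP SCHEMA run_2004 CASCADE;
```

This keeps concurrent runs from colliding without needing temp tables, and `DROP SCHEMA ... CASCADE` gives cheap cleanup.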