
After many deletions/insertions, it seems that the key sequence of a table has reached its limit (2147483647) and Postgres' nextval() does not return a new index value. The good thing is that this table's primary key (id) is not used as a foreign key, so I can mess with it.

There are 3741003 records in that table, but the last given id is 2146658815.
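For reference, the current state of the sequence can be inspected directly (a sketch, using the sequence name from the error message below):

```sql
-- Check how far the sequence has advanced
SELECT last_value, is_called FROM data_observeddata_id_seq;
```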

How can I fix this? (Apart from writing a Python script, which is my plan B at the moment.)

Is there a way to "recreate" all ids into a uniform sequence starting from 1 and ending at 3741003 for that table through postgres?

UPDATE: This is the error I am getting:

*** SQL ERROR ***
DataError('nextval: reached maximum value of sequence "data_observeddata_id_seq" (2147483647)\n')
  • Time to change your column type to bigint. Commented Jan 10, 2024 at 12:54
  • Even if there are no foreign keys, something may refer to that primary key. Make 100% sure that nothing references the value of the primary key before you start messing with it. Commented Jan 10, 2024 at 12:57
  • Before going bigint, what strikes me is that you are using only 0.17% of the values your sequence has generated. While some values from a sequence are bound not to be used, I have never seen such an extreme example. Care to explain some of the context? Was it really a matter of records being deleted, or rather of values being discarded? Commented Jan 10, 2024 at 12:58
  • Emptied and refilled? What is the smallest value currently in the table then? Something over 2 billion? Commented Jan 10, 2024 at 14:01
  • I see. By "emptied", I had assumed all the records had been removed, but it seems the record with value 1 has always remained. Maybe next time such a cleaning operation is executed, make sure the table really gets emptied (i.e. truncated). Commented Jan 10, 2024 at 15:22

1 Answer

For your question:

Is there a way to "recreate" all ids into a uniform sequence starting from 1 and ending at 3741003 for that table through postgres?

The answer is yes, but I cannot stress enough how dangerous these operations can be, despite how mundane you might think they are. Underestimating the impact of a change like this is the best way to experience the "ohnosecond"; it may not happen this time, but unearned confidence (going for an update of data when you can avoid it) will come back to bite you one day.

Before I explain the method: you need to try everything you can to avoid doing that. Here are two alternative solutions that could work just as well, without the risk.

These alternatives are what you should use whenever possible. They are applicable as long as the values currently held by the table are all above some threshold (sequences usually start by generating positive values; as per your comment, this is the case for you).

Before starting

The next time your table is emptied (assuming it is entirely emptied), avoid using DELETE. Instead, use TRUNCATE and let it reset the sequence at the same time (documentation here):

TRUNCATE MyTable RESTART IDENTITY

Solution 1: Alter the sequence

What we want is to make the sequence cycle, as per the documentation.

ALTER SEQUENCE IF EXISTS data_observeddata_id_seq MINVALUE -2147483647 CYCLE

This will let the next calls to nextval succeed, and you will get 2 billion+ extra available values. However, if an application loads the content of the table, you must make sure it can handle negative values of the PK (and the same goes for the next solution).
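Once altered, you can verify the new bounds through the pg_sequence catalog (a sketch; pg_sequence is available since PostgreSQL 10):

```sql
-- Check that the sequence now allows negative values and cycles
SELECT seqmin, seqmax, seqcycle
FROM pg_sequence
WHERE seqrelid = 'data_observeddata_id_seq'::regclass;
-- seqmin should be -2147483647 and seqcycle should be true
```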

Solution 2: Reset the sequence

SELECT setval('data_observeddata_id_seq', -2147483647)

Had the minimum value in your PK column been close to 2 billion, you could have reset the sequence to 0. As per your comment, you are forced to let the sequence generate negative values, which carries the same constraint as above.

The downside of this solution compared to the previous one is that, after your next TRUNCATE on the table, the sequence will again only have 2 billion or so values to use, whereas solution 1 doubles the range by cycling through negative values too.
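To put numbers on that difference, a quick back-of-the-envelope query (pure arithmetic, safe to run anywhere):

```sql
-- int4 key space available to the sequence after the next reset
SELECT 2147483647::bigint                    AS solution_2_positive_only,
       2147483647::bigint + 2147483647 + 1   AS solution_1_with_negatives;
-- 2147483647 vs 4294967295: cycling through negatives roughly doubles the range
```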

Solution 3: Backup, clear and reinsert

Before you decide to apply this solution, let me repeat that THIS SHOULD BE YOUR LAST RESORT. This is especially true if the DB/table was created by someone else and you are not completely sure of how it was intended to be used.

First make sure:

  • No column from any table references the values in your primary key (this can happen even without a foreign key between the tables).
    Warning to future readers: OP here has checked there is no foreign key involved. If you do have a foreign key in the mix, that is all the more reason not to use the queries below. Additional work is required to manage foreign keys, which I have completely skipped.
  • All your queries are run inside a transaction. Do NOT let your SQL client autocommit your queries.
  • The queries I present below consist of copying the data rather than just updating the column, because that introduces redundancy of controls: if something fails, even after you have committed a query, you may not be out of luck (emphasis on "may").
    Since you will have the data and a copy, run some control queries at each step (at the very least, make sure the row counts match expectations). I am not going to write the control queries myself for lack of information about your schema.
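As an illustration of the kind of control query meant here (only a sketch; your schema will dictate the real checks):

```sql
-- Row counts in the table and its copy must match after each copy step
SELECT (SELECT count(*) FROM MyTable)        AS table_rows,
       (SELECT count(*) FROM MyTable_backup) AS backup_rows;
```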

Make a backup of your table with psql, as explained in this answer.

Then, make a copy of your table into a new table.

BEGIN;
/* Create a backup */
CREATE TABLE MyTable_backup AS SELECT * FROM MyTable;

Control, then if OK, commit and go to next step.

BEGIN;
/* Truncate table (in Postgres, TRUNCATE can be rolled back) */
TRUNCATE MyTable RESTART IDENTITY;

Control, then if OK, commit and go to next step.

BEGIN;
/* Insert records back into the table, with ids being generated by the sequence */
INSERT INTO MyTable(<all columns except your primary key>)
SELECT <same list of fields as above> FROM MyTable_backup;
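For instance, assuming a hypothetical MyTable(id, payload, created_at) (column names made up for illustration), the insert would look like this:

```sql
-- Hypothetical columns; adapt to your actual schema
INSERT INTO MyTable(payload, created_at)
SELECT payload, created_at
FROM MyTable_backup
ORDER BY id;  -- so the newly generated ids follow the original order
```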

Control, then if OK, commit and go to next step.

After some observation time, you can delete the backup:

DROP TABLE MyTable_backup;


7 Comments

Why do all of your snippets start with BEGIN? And where's the COMMIT?
Because, like I said, you need to check the data after every step, in case something goes wrong on the server side.
@AdrianKlaver: The reason I have not included it is because it is the same, but worse, as what I was warning about with the sentence: "However, if an application loads the content of the table, you must make sure it can do so with negative values of the PK". IMO, expanding the type of a column is more likely to have side effects compared to resetting the sequence to negative value and both are just as easy as each other, hence only the latter remained in my answer.
@Atmo Yes, but the formatting does not make it clear that all the statements (including the "data checks" you omitted) should run in a single transaction, not in several transactions.
OK I can see that.