For your question:
Is there a way to "recreate" all ids into a uniform sequence starting from 1 and ending at 3741003 for that table through postgres?
The answer is yes, but I cannot stress enough how dangerous these operations can be, however mundane they might seem. Underestimating the impact of a change like this one is the best way to experience the ohnosecond; it may not happen this time, but undeserved confidence, going for an update of data when you can avoid it, will come back to bite you one day.
Before I explain the method, you need to try everything you can to avoid it. Here are two alternative solutions that could work just as well, without the risk; if they fit your case, they are what you should use.
They are applicable if the values currently held by the table are all above some threshold (usually, sequences start by generating positive values; as per your comment, this is the case for you).
Before starting
The next time your table is emptied (assuming it is emptied entirely), avoid using DELETE. Instead, use TRUNCATE and make it reset the sequence at the same time (documentation here):
TRUNCATE MyTable RESTART IDENTITY
Solution 1: Alter the sequence
What we want is to make the sequence cycle, as per the documentation.
ALTER SEQUENCE IF EXISTS data_observeddata_id_seq MINVALUE -2147483647 CYCLE
This will let the next calls to nextval succeed, and you will get 2 billion+ available values. However, if an application loads the content of the table, you must make sure it can handle negative values in the PK (and the same goes for the next solution).
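If you want to confirm the sequence was altered as intended, you can inspect it; this is only a quick check, using the pg_sequences view available from PostgreSQL 10 onwards:
/* Verify the new lower bound and the cycle flag */
SELECT sequencename, min_value, max_value, cycle
FROM pg_sequences
WHERE sequencename = 'data_observeddata_id_seq';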
Solution 2: Reset the sequence
ALTER SEQUENCE IF EXISTS data_observeddata_id_seq MINVALUE -2147483647;
SELECT setval('data_observeddata_id_seq', -2147483647)
(The ALTER is needed because setval rejects any value outside the sequence's declared range.) Had the minimal value in your PK column been close to 2 billion, you could have reset the sequence to 0 instead. As per your comment, you are forced to let the sequence generate negative values though, which carries the same constraint as above.
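As a sanity check before letting the sequence go negative, you can verify that no existing row already sits in that range; the table and column names below are guesses based on the sequence name, so adapt them to your schema:
/* Hypothetical check, assuming the sequence feeds data_observeddata.id: the smallest existing id should still be positive */
SELECT min(id) FROM data_observeddata;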
The downside of this solution compared to the previous one is that, after your next TRUNCATE ... RESTART IDENTITY, the sequence will restart at its start value and only have 2 billion or so positive values to hand out before running out again, whereas solution 1 doubles that range by cycling into the negative values too.
Solution 3: Backup, clear and reinsert
Before you decide to apply this solution, let me repeat that THIS SHOULD BE YOUR LAST RESORT. This is especially true if the DB/table was created by someone else and you are not completely sure of how it was intended to be used.
First make sure:
- No column from any table references the values in your primary key (it could happen even without a foreign key between both tables).
Warning to future readers: OP here has checked there is no foreign key involved. If you do have a foreign key in the mix, that is all the more reason not to use the below queries. Additional work is required to manage foreign keys, which I completely skipped.
- All your queries are run inside a transaction. Do NOT let your SQL client autocommit your queries.
- The queries I will present below consist of copying the data rather than just updating the column, because that introduces a redundancy of controls: if something fails, even after you have committed a query, you may not be out of luck (emphasis on "may").
Since you will have the data and a copy, run some control queries at each step (at the very least, make sure the number of rows matches the expectation). I am not going to write precise control queries for lack of information about your schema, but a minimal row-count check is sketched after the first step below.
Make a backup of your table with psql, as explained in this answer.
Then, make a copy of your table into a new table.
BEGIN;
/* Create a backup */
CREATE TABLE MyTable_backup AS SELECT * FROM MyTable;
Control, then if OK, commit and go to the next step.
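As an illustration of the kind of control meant here, a minimal row-count comparison that assumes nothing about your schema beyond the two table names (add more specific checks if you can):
/* The two counts must be identical before moving on */
SELECT (SELECT count(*) FROM MyTable)        AS original_rows,
       (SELECT count(*) FROM MyTable_backup) AS backup_rows;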
BEGIN;
/* Truncate table (in Postgres, TRUNCATE can be rolled back) */
TRUNCATE MyTable RESTART IDENTITY;
Control, then if OK, commit and go to the next step.
BEGIN;
/* Insert records back into the table, with ids being generated by the sequence */
INSERT INTO MyTable(<all columns except your primary key>)
SELECT <same list of fields as above> FROM MyTable_backup;
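For instance, with a purely hypothetical schema where MyTable has the columns id (the PK), observed_at and value, the statement would look like this:
/* Hypothetical example: list every column except the PK so that the sequence generates the new ids */
INSERT INTO MyTable(observed_at, value)
SELECT observed_at, value FROM MyTable_backup;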
Control, then if OK, commit and go to the next step.
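For this last step, a stronger control than a row count is worth the effort. With the same hypothetical columns as above, an EXCEPT query can confirm no row was lost or altered on the way back:
/* Hypothetical cross-check: rows present in the backup but missing from the reloaded table (expect zero rows) */
SELECT observed_at, value FROM MyTable_backup
EXCEPT
SELECT observed_at, value FROM MyTable;
Since EXCEPT ignores duplicates, keep the row-count comparison alongside it.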
After some observation time, you can delete the backup:
DROP TABLE MyTable_backup;
bigint…bigint, what strikes me is that you are using only 0.17% of the values your sequence has generated. While some values from a sequence are bound not to be used, I have never seen such an extreme example. Care to explain some of the context? Was it really a matter of records being deleted, or rather of values being discarded?