# PostgreSQL Tutorial: What Really Happens When You Add and Drop Columns 2000 Times

PostgreSQL is famous for its robust architecture and SQL standards compliance.

But what happens if you keep adding and dropping columns — again and again — 2000 times?

Spoiler alert: you hit a hard wall. And the first thing to give way isn't performance; it's PostgreSQL's internal 1600-column limit.

Let's explore what happens under the hood when you repeatedly modify a table's schema!


## How PostgreSQL Handles ADD and DROP COLUMN

  • Adding a column without a default is instantaneous.
    • It only updates system catalogs like pg_attribute and pg_class.
    • No physical rewrite of the table data.
  • Dropping a column is lazy.
    • PostgreSQL marks the column as dropped in pg_attribute.
    • The physical data still exists inside table tuples.

🔗 Dropped columns remain "invisible" but are still internally counted until a full table rewrite happens!
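
You can see this for yourself with a tiny demo (the table name demo_drop is made up for illustration): drop a column, then look it up in pg_attribute.

```sql
-- Hypothetical demo: drop a column, then inspect the catalog.
CREATE TABLE demo_drop (id int, payload text);
ALTER TABLE demo_drop DROP COLUMN payload;

-- The dropped column is still listed: it keeps its attribute number,
-- is renamed to an internal placeholder like "........pg.dropped.2........",
-- and is flagged with attisdropped = true.
SELECT attname, attnum, attisdropped
FROM pg_attribute
WHERE attrelid = 'demo_drop'::regclass
  AND attnum > 0;
```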


## The Experiment: 2000 Schema Changes

```sql
CREATE TABLE crazy_table (id SERIAL PRIMARY KEY);

DO $$
BEGIN
  FOR i IN 1..2000 LOOP
    EXECUTE 'ALTER TABLE crazy_table ADD COLUMN temp_col' || i || ' TEXT';
    EXECUTE 'ALTER TABLE crazy_table DROP COLUMN temp_col' || i;
  END LOOP;
END
$$;
```

The idea:

  • Add a temp column
  • Drop it
  • Repeat 2000 times!

## Results: What Actually Happened?

| Metric | Result |
| --- | --- |
| Table still accessible? | Yes |
| Physical table size? | Slightly bigger |
| pg_attribute catalog size? | Significantly bigger |
| Performance? | Slightly slower queries |
| Error? | Yes, at ~1598 iterations |

After about 1598 add/drop cycles, PostgreSQL threw an error:

```
ERROR: tables can have at most 1600 columns
```
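
If you want to double-check the size observations in the table above, a query along these lines (assuming the crazy_table from the experiment) compares the table itself against the pg_attribute catalog:

```sql
-- Compare the experiment table's size with the size of pg_attribute.
SELECT pg_size_pretty(pg_total_relation_size('crazy_table'))  AS crazy_table_size,
       pg_size_pretty(pg_total_relation_size('pg_attribute')) AS pg_attribute_size;
```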

## Why 1600 Columns?

PostgreSQL enforces a hard internal limit:

  • Each row's header includes a null bitmap, and the heap tuple format caps the number of attributes at 1600 (MaxHeapAttributeNumber).
  • Every column ever added gets a permanent attribute number; attribute numbers are never reused.
  • Even dropped columns keep their slots, and those slots are only freed when the data is rebuilt into a brand-new table!

In simple words:

Dropped columns are "ghosts" — they aren't truly gone until the table is rebuilt.

Thus, even invisible dropped columns block new schema changes after 1600 slots are filled.
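
A quick way to see how close a table is to the cap is to count how many attribute slots have ever been allocated, ghosts included. A sketch, again assuming the crazy_table from the experiment:

```sql
-- Count every user attribute slot ever allocated for the table,
-- and how many of them are dropped "ghosts".
SELECT count(*)                             AS total_slots,
       count(*) FILTER (WHERE attisdropped) AS dropped_ghosts
FROM pg_attribute
WHERE attrelid = 'crazy_table'::regclass
  AND attnum > 0;
```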


## How to Fix After Hitting the Limit

If you hit the 1600-column error, you need to reclaim space properly.

| Problem | Solution |
| --- | --- |
| Dropped-column data lingering in tuples | `VACUUM FULL table_name;` (reclaims space, but the ghost slots still count toward the 1600 limit) |
| Catalog bloat (pg_attribute) | `REINDEX SYSTEM database_name;` to rebuild bloated catalog indexes |
| Exhausted column slots / persistent tuple bloat | Create a fresh table |
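
The first two fixes are single commands. A minimal sketch, assuming the crazy_table from the experiment and a database named mydb (swap in your own names):

```sql
-- Rewrite the table's heap to reclaim space left behind by dropped-column data.
-- Note: the dropped attribute slots in pg_attribute remain, so this alone
-- does not lift the 1600-column error.
VACUUM FULL crazy_table;

-- Rebuild the indexes on the system catalogs (requires superuser, and you
-- must be connected to the database you name here).
REINDEX SYSTEM mydb;
```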

Example (rebuilding into a fresh table):

```sql
-- Rebuild into a brand-new table; only the live column comes along.
CREATE TABLE crazy_table_new AS
SELECT id FROM crazy_table;

DROP TABLE crazy_table;

ALTER TABLE crazy_table_new RENAME TO crazy_table;

-- Note: CREATE TABLE AS does not copy constraints, defaults, or indexes,
-- so recreate the primary key (and the SERIAL sequence default) afterwards.
```

This forces a physical rebuild, clearing all dropped column artifacts!


## Best Practices for Evolving Schemas

  • Avoid constantly altering table structure.
  • Prefer JSONB columns for dynamic attributes (see the sketch after this list).
  • Monitor catalog bloat using:

```sql
SELECT COUNT(*)
FROM pg_attribute
WHERE attrelid = 'your_table_name'::regclass
  AND attisdropped;
```

  • Regularly VACUUM FULL large, frequently modified tables.
  • Use pg_repack if downtime must be minimized.
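
As promised above, here is a minimal sketch of the JSONB approach (flexible_table and its columns are hypothetical names): volatile attributes live inside one JSONB column, so "adding" or "removing" an attribute is a data change, never an ALTER TABLE.

```sql
-- One stable schema; dynamic attributes go into a single JSONB column.
CREATE TABLE flexible_table (
    id    BIGSERIAL PRIMARY KEY,
    attrs JSONB NOT NULL DEFAULT '{}'::jsonb
);

-- "Adding" an attribute is an UPDATE, not a schema change:
UPDATE flexible_table
SET attrs = attrs || '{"color": "red"}'::jsonb
WHERE id = 1;

-- "Removing" it is just as cheap, and it leaves no ghost column behind:
UPDATE flexible_table
SET attrs = attrs - 'color'
WHERE id = 1;

-- Optional: a GIN index speeds up containment queries such as
--   SELECT * FROM flexible_table WHERE attrs @> '{"color": "red"}';
CREATE INDEX flexible_table_attrs_idx ON flexible_table USING GIN (attrs);
```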

## Final Takeaway

PostgreSQL is resilient, but system catalog bloat is real.

Adding and dropping columns 2000 times won't immediately crash your database, but:

  • You'll slow down planning and querying.
  • You'll hit hard limits you can't ignore.
  • You'll eventually need to rewrite tables manually.

Moral of the story:

Treat your schema with respect! Think carefully before designing dynamic column systems.

Happy PostgreSQL!



## Top comments (2)

**Nevo David**

Gotta say, I crashed into weird limits like this before and it seriously threw me off. Kinda reminds me why I gotta stay on top of cleanup and not mess with schemas too much lmao.

**MangeshJadhav**

😄 Totally feel you!
These weird PostgreSQL limits — like the 1600 column cap — sneak up fast if you're doing lots of schema churn.
It's a big reminder that cleanup, planning, and keeping schemas tidy are just as important as writing good queries!
