An overview of BLOBs in PostgreSQL
Published on June 16, 2025 by Stefanie Janine Stölting
PostgreSQL BLOB BYTEA Lo Large Objects
2 min read
I have already published a blog post about PostgreSQL BLOBs.
But after someone asked for help on the PostgreSQL Chat Telegram group about a very unusual method to store BLOBs, I thought that it should now also be covered.
I had not covered that method before because it is one of the worst ways to handle BLOBs inside PostgreSQL.
Someone was migrating data from Oracle to PostgreSQL and had BLOBs exceeding the BYTEA limit of 1 GB per value.
They used a method that they described as writing BLOBs to OIDs. That is obviously not what they really did, as OID is a numeric data type.
In fact they used lo. With that method the binary data is stored, TOAST-like, in chunks inside the PostgreSQL data directory, and an OID in the user table points to the stored large object.
With this method it is possible to store binary files bigger than the 1 GB limit of BYTEA; a single large object can grow to up to 4 TB.
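To make the method concrete, here is a minimal sketch based on the lo contrib module, using a hypothetical image table; the lo_manage trigger at least cleans up the large object when a row is updated or deleted:

```sql
-- Enable the lo contrib module (ships with PostgreSQL)
CREATE EXTENSION lo;

-- Hypothetical table: "raster" is an OID pointing to a large object
CREATE TABLE image (
    title  text PRIMARY KEY,
    raster lo
);

-- lo_manage unlinks the old large object on UPDATE and DELETE
CREATE TRIGGER t_raster
    BEFORE UPDATE OR DELETE ON image
    FOR EACH ROW
    EXECUTE FUNCTION lo_manage(raster);

-- Server-side import: reads the file from the database server's
-- file system (requires appropriate privileges)
INSERT INTO image (title, raster)
VALUES ('logo', lo_import('/tmp/logo.png'));
```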
But that also adds a lot of overhead when handling the BLOB data. The client can only handle the complete BLOB, and every layer in between has to handle it as well: the PostgreSQL instance has to follow the pointers to fetch the data from disk and load it into memory, and the client side, for example ODBC or JDBC, has to buffer it, too.
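For example, a sketch of reading the BLOB back with lo_get, using the hypothetical image table from above; the whole large object is materialized as a single BYTEA value:

```sql
-- The server assembles the complete large object in memory and the
-- client driver has to buffer the whole result as well
SELECT lo_get(raster) FROM image WHERE title = 'logo';
```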
Deleting these binary objects has downsides, too. Deleting rows or truncating a table does not delete the BLOB data; it leaves orphaned large objects behind.
Orphaned large objects, which are still listed in pg_largeobject_metadata but no longer referenced by any table, have to be removed by a separate job, vacuumlo.
That causes additional disk traffic, of course.
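What vacuumlo does can be approximated in plain SQL; a sketch, assuming the hypothetical image table from above is the only place referencing large objects:

```sql
-- Unlink every large object in pg_largeobject_metadata that no
-- image row references anymore; vacuumlo automates this check
-- across all OID and lo columns of a database
SELECT lo_unlink(m.oid)
FROM pg_largeobject_metadata m
WHERE NOT EXISTS (
    SELECT 1 FROM image WHERE raster = m.oid
);
```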
Compare that to the ease of getting a file location from the database and the file content from disk. Deleting files is also much easier and does not impact database performance.
Large objects are also part of the database backups, blowing up their size.
A deleted or accidentally changed file can be restored simply from a file backup.
The database, on the other hand, would have to be restored to the point in time before the change or deletion happened.
That causes database downtime and even data loss, as everything written after that point is not restored; all data ends up at the point in time of the restore.
The PostgreSQL documentation has more on how to handle large objects.
My advice is still the same as in the previous blog post about BLOBs: do not store them inside a database. Write them to a file server or even to S3, and only store links in a TEXT column inside the database.
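A sketch of that recommended pattern, with hypothetical table and column names:

```sql
-- The file itself lives on a file server or in S3;
-- the database only keeps the link
CREATE TABLE document (
    id          bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    file_url    text NOT NULL,
    uploaded_at timestamptz NOT NULL DEFAULT now()
);

INSERT INTO document (file_url)
VALUES ('s3://my-bucket/invoices/2025/0001.pdf');
```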