
I am using Oracle in production and need to migrate to Postgres with no downtime. I figured out that I can follow a four-step approach for the migration.

First, write changes to both databases, but read only from Oracle, until Postgres has caught up with all the existing data from Oracle.
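In rough terms, the dual-write step I have in mind looks like the following minimal Python sketch. The `DualWriter` class and its connection objects are hypothetical stand-ins for the real driver API; the point is only that Oracle stays the source of truth and a Postgres failure is logged rather than propagated:

```python
# Hypothetical dual-write wrapper: `oracle_conn` and `postgres_conn` are
# placeholders for real driver connections, not a specific library API.
class DualWriter:
    def __init__(self, oracle_conn, postgres_conn, failure_log):
        self.oracle = oracle_conn        # primary: failures must propagate
        self.postgres = postgres_conn    # secondary: failures are logged only
        self.failure_log = failure_log

    def write(self, sql, params):
        # Oracle is still the source of truth, so its write happens first
        # and any exception aborts the whole operation.
        self.oracle.execute(sql, params)
        try:
            self.postgres.execute(sql, params)
        except Exception as exc:
            # A Postgres failure must not break the main flow; record it
            # so the row can be backfilled later.
            self.failure_log.append((sql, params, repr(exc)))
```

The asymmetry is deliberate: during this phase a Postgres outage only costs a backfill, while an Oracle failure is still a real failure.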

When the data migration is complete, start reading from both Postgres and Oracle and compare the results. We still rely on Oracle, but check the Postgres transactions and log any errors/inconsistencies with Oracle.
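The shadow-read comparison could be sketched like this. The two fetch callables are hypothetical placeholders for the real queries; the caller always gets Oracle's answer, and Postgres is queried only to detect divergence:

```python
# Hypothetical shadow read: Oracle's result is returned to the caller,
# Postgres is queried alongside it purely to detect divergence.
def shadow_read(oracle_fetch, postgres_fetch, mismatch_log):
    primary = oracle_fetch()
    try:
        shadow = postgres_fetch()
        if shadow != primary:
            mismatch_log.append((primary, shadow))
    except Exception as exc:
        # A broken shadow read is itself a data point, but never an error
        # for the caller.
        mismatch_log.append(("postgres-error", repr(exc)))
    return primary
```

Since the return value never depends on Postgres, this phase adds latency and load but cannot change application behavior.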

When Postgres is trusted, meaning it produces no errors, replace reading from Oracle with reading from Postgres, but keep writing changes to both databases. If something goes wrong with Postgres, we can switch back to Oracle with no downtime.
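The cutover step could then be a simple fallback wrapper (a hypothetical helper, assuming both reads take no arguments): Postgres is now primary, and a Postgres outage degrades to the old Oracle path instead of an error.

```python
# Hypothetical cutover read: Postgres is primary, Oracle is the fallback,
# so a Postgres failure degrades to the old path instead of breaking.
def read_with_fallback(postgres_fetch, oracle_fetch, failure_log):
    try:
        return postgres_fetch()
    except Exception as exc:
        failure_log.append(repr(exc))
        return oracle_fetch()
```

Because writes are still going to both databases, flipping back to step 3 is just swapping the two arguments.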

When we finally trust Postgres and it's error-free, we can drop Oracle and continue with Postgres only.

Now the question: how do I efficiently write/read to Postgres, without disrupting the main flow of the application? Do I just open two connections? I have a feeling that this would impact the flow of the application.

Also, what about the performance loss when copying data from Oracle to Postgres? I came across ora2pg, but I am still not sure about it.

So far I have tried opening connections to both Oracle and Postgres, with a timeout of N ms on the Postgres write inside a try-catch block. I am worried that this would impact the normal performance of the application. The same goes for writing changes to both databases.
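What I have so far is roughly the following: the Postgres write runs on a worker thread and the caller waits at most a bounded time for it, so a slow Postgres cannot stall the main request path. The names are mine and the write callable stands in for the real driver call:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

# Hypothetical bounded secondary write: the Postgres write runs on a
# worker thread and the caller waits at most `timeout_s` seconds.
_pool = ThreadPoolExecutor(max_workers=4)

def bounded_pg_write(pg_write, timeout_s, failure_log):
    future = _pool.submit(pg_write)
    try:
        future.result(timeout=timeout_s)
    except TimeoutError:
        # The write may still complete in the background; we only stop
        # waiting for it on the request path.
        failure_log.append("timeout")
    except Exception as exc:
        failure_log.append(repr(exc))
```

Note that on timeout the write is abandoned but not cancelled, so anything logged here still needs a reconciliation pass later.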

Comments:
  • I suggest that you give up on the "no downtime" idea. Commented Oct 10 at 13:29
  • ... and give up on the "writing to two databases" and checking idea. Look for a replication tool that can handle Oracle -> Postgres (and that your budget can handle). Commented Oct 10 at 13:39
  • The only way you will become more sure about the performance impact of the dual writes is to try it. No one reading your question has more insight into this than you, because we don't have access to test your application. You haven't described anything about your application: its query traffic, its users, its tolerance to temporary performance loss, etc. My advice: do some testing and decide for yourself if it's within the tolerance of your application. Commented Oct 10 at 13:57
  • The steps you describe can achieve a zero-downtime migration. But there is no solution that achieves zero downtime AND zero performance cost AND zero testing! Commented Oct 10 at 13:58
  • There should be an "application layer" and a "database" (access) layer, and you should be looking first at where they interface. When you start talking about multiple connections, the app layer should not be impacted. The "interface" now needs to make some "connection" decisions. Commented Oct 10 at 15:26

