I am designing a CI system that needs a database as part of the software build process for integration tests. Different developers on different branches of the same repo will want to run builds of their code, and I am struggling a bit with how to design the database integration. During a build, the database schema or the data itself might be changed, and the software uses many different schemas depending on which code base is being built.
The problem I have is how to handle a set of database backups. These are pretty complex, with views, stored procedures, tables, etc. I could simply restore the database programmatically for each build, but that could cause conflicts between concurrent builds from different developers if they try to use the same database at the same time.
Ideally I could take a set of .bak files, restore them to a database, and programmatically create unique schemas to avoid any conflicts between builds.
However, that is a huge lift at this point, because I would need to track these dynamically generated schemas and point the software at them with some kind of find/replace.
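To make the idea concrete, here is roughly the kind of shared-library step I am picturing (this is only a sketch: the step name, the `inttest_` naming, the paths, and the `'Data'`/`'Log'` logical file names are made up, and it assumes SQL Server, since the backups are .bak files, with sqlcmd available on the build agent). It restores the backup into a database named uniquely per build instead of per-build schemas, which is the simpler variant of the same isolation idea:

```groovy
// vars/withTestDatabase.groovy  (hypothetical shared-library step)
// Restores a .bak into a database named uniquely for this build, hands that
// name to the enclosed build/test steps, and always drops the database after.
def call(String bakFile, Closure body) {
    // Unique per job and build number, sanitised for use as a database name
    def rawName = "inttest_${env.JOB_BASE_NAME}_${env.BUILD_NUMBER}"
    def dbName  = rawName.replaceAll(/[^A-Za-z0-9_]/, '_')
    def server  = 'localhost'   // assumed SQL Server instance reachable from the agent

    // The logical file names ('Data', 'Log') are placeholders; the real ones
    // would come from RESTORE FILELISTONLY against the actual .bak.
    writeFile file: 'restore.sql', text: """
        RESTORE DATABASE [${dbName}]
        FROM DISK = N'${bakFile}'
        WITH MOVE 'Data' TO N'C:\\SqlData\\${dbName}.mdf',
             MOVE 'Log'  TO N'C:\\SqlData\\${dbName}_log.ldf',
             REPLACE;
    """
    bat "sqlcmd -S ${server} -b -i restore.sql"

    try {
        body(dbName)   // build/test steps receive the database name
    } finally {
        // Always clean up so agents do not accumulate per-build databases
        writeFile file: 'drop.sql', text: """
            ALTER DATABASE [${dbName}] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
            DROP DATABASE [${dbName}];
        """
        bat "sqlcmd -S ${server} -b -i drop.sql"
    }
}
```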
I can't help but believe that this is a pretty common problem and that a similar solution has already been engineered. Is there a standard way of dealing with this?
The system builds Windows software and I am using Jenkins with a shared library, so I can do almost anything I need to programmatically; I am just looking for the best approach.
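For completeness, this is roughly how I imagine a branch's Jenkinsfile consuming a step like the one above, including the find/replace that points the software at the per-build database. The library name, config template, placeholder token, and test script are all made up for illustration:

```groovy
// Jenkinsfile on a developer branch (usage sketch only)
@Library('ci-shared-lib') _

node('windows') {
    checkout scm

    withTestDatabase('C:\\Backups\\baseline.bak') { dbName ->
        // Point the software under test at the per-build database by doing the
        // find/replace on a connection-string template checked into the repo.
        def template = readFile 'App.config.template'
        writeFile file: 'App.config', text: template.replace('{{DB_NAME}}', dbName)

        bat 'run-integration-tests.bat'   // stand-in for the real build/test steps
    }
}
```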