We store our configuration files in S3. Each file is a JSON file. When a config changes, the old file is backed up and a new file replaces it. All of this happens behind a service, which also maintains optimistic locks to handle concurrent writes. We record every change in an audit log, along with the author and a timestamp.
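For concreteness, an entry in the audit log looks roughly like this (the field names here are made up for this post; the real schema differs in the details):

```python
# Illustrative shape of one audit-log entry; every name and value here
# is hypothetical, but it captures what we record per change.
audit_entry = {
    "file": "s3://configs/payments/service.json",
    "author": "jane.doe",
    "timestamp": "2024-03-18T09:41:07Z",
    "backup_key": "s3://configs-backups/payments/service.json.v41",
    "lock_version": 41,  # optimistic-lock version the write was based on
}
```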
The current setup has worked reasonably well for us, but as we add more engineers and make more changes, the audit logs are showing their limits. It's hard to find out who last edited a given field, and when.
As engineers we use Git and its blame feature all the time, so I feel it fits this use case nicely. Every change to a config can be modelled as a commit. We could then easily see a file's history, who changed which lines, and more; a rough sketch of what I mean follows.
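To make the idea concrete, here is roughly what I imagine the service doing. Everything here is illustrative and untested; the repo path, file names, and helper names are made up:

```python
import subprocess

# Hypothetical local clone that the service would own; path is made up.
REPO = "config-repo"

def write_and_commit(path, content, author_name, author_email, message):
    """Write the new config version and record it as a commit."""
    with open(f"{REPO}/{path}", "w") as f:
        f.write(content)
    subprocess.run(["git", "-C", REPO, "add", path], check=True)
    subprocess.run(
        ["git", "-C", REPO,
         "-c", f"user.name={author_name}",
         "-c", f"user.email={author_email}",
         "commit", "-m", message],
        check=True,
    )

def who_last_touched(path, line):
    """Use blame to find the author and time of a line's last change."""
    out = subprocess.run(
        ["git", "-C", REPO, "blame", "--line-porcelain",
         "-L", f"{line},{line}", path],
        check=True, capture_output=True, text=True,
    ).stdout
    # --line-porcelain emits "author <name>" and "author-time <epoch>"
    # header lines for the blamed line.
    headers = dict(l.split(" ", 1) for l in out.splitlines() if " " in l)
    return headers.get("author"), headers.get("author-time")
```

That last function is exactly the "who changed this line, and when" query that our audit logs make hard today.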
The main problem I see is that we would have to drive Git from the CLI, possibly with plumbing commands, which can be difficult to work with. I also don't think merge conflicts are likely, but I admit I don't know Git well enough to say they definitely won't happen when it's used this way.
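I assume we could also drive Git from a library rather than the CLI; something like dulwich (a pure-Python Git implementation) seems to expose commit-level porcelain. This is an untested sketch, not something we've run:

```python
from dulwich import porcelain

# Untested sketch using dulwich's porcelain API; the repo path,
# file name, and commit details are made up for this post.
repo = porcelain.init("config-repo")
porcelain.add(repo, paths=["config-repo/services.json"])
porcelain.commit(
    repo,
    message="Raise payments timeout to 30s",
    author="Jane Doe <jane@example.com>",
)
```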
Are there alternatives that achieve this subset of Git blame's functionality with less complexity? Would you recommend this approach, and why or why not? What are the best practices?
Edit: adding file and usage metrics.
We have roughly 50 files, and that count stays stable unless we spin up new services, which is not often.
Each file runs from 10 lines to a few thousand, depending on the complexity of the config. Files are sorted by keys (we have schemas for the configs, and the serialization library always produces consistent ordering and formatting; a sketch of what I mean is below).
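By "consistent ordering and formatting" I mean canonical serialization along these lines (the real library differs, but the effect is the same):

```python
import json

# Canonical dump: sorted keys and fixed indentation mean two versions
# of the same config diff cleanly line by line, which is what would
# make blame-style attribution meaningful in the first place.
def canonical_dump(config: dict) -> str:
    return json.dumps(config, sort_keys=True, indent=2) + "\n"
```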
Both engineers and automated systems make changes to the configs, amounting to a few tens of edits per day.