
I've recently been involved with a project to migrate a customer's SQL Server environment to a new server. Since the migration, the .ldf file for a particular database on the new server has been growing every night to at least the same size as, if not larger than, the .mdf file. Either it was not doing this on the old server, or it was growing and then automatically shrinking back down so it wasn't noticed.

Some background information:

  • The database in question is massive (over 300 GB)
  • There are a number of scripts (PowerShell, batch, SQL Server Agent jobs, .dtsx packages) which run on this database, including one which does a huge number of deletions/inserts (we know this is not ideal and will need addressing)
  • The customer assures me that no significant changes have been made to these scripts since the migration (this is hard to verify given the number of scripts involved and the number of people the customer has making changes on the server)
  • The database is set to Simple recovery model
  • The database is set to auto-grow but not auto shrink
  • Manually shrinking the .ldf file brings it back down to just under half the size of the .mdf file
  • The old SQL Server was running SQL Server version 2019 (v15.0.2140.1), the new is running version 2022 (v16.0.4210.1)
  • The database is not regularly backed up in any way

From what I can tell, there are no differences between how the SQL Servers are configured, or how the database itself is configured. The new server is higher spec than the old in terms of processor and RAM.
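To illustrate the kind of comparison I mean, something like the following run on both instances (a rough sketch only; N'YourDb' is a placeholder for the real database name) covers the instance options and database-level settings, though not the Agent jobs or the external scripts:

    -- Instance-level options
    SELECT name, value_in_use
    FROM sys.configurations
    ORDER BY name;

    -- Database-level settings relevant to the log
    SELECT name, recovery_model_desc, compatibility_level,
           is_auto_shrink_on, snapshot_isolation_state_desc, is_read_committed_snapshot_on
    FROM sys.databases
    WHERE name = N'YourDb';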

The only difference I've found between the scripts on the old and new servers is a move from using SQLNCLI11 as the provider in the connection string to MSOLEDBSQL. The database on the new server currently contains fewer records than the old database.

I'm at a dead-end trying to figure out how to explain why we are seeing this issue on the new server when we weren't on the old - can anyone suggest something I've overlooked?

Edited for additional info in response to comments:

  • Autogrowth is already set at 64 MB
  • There was nothing in the old environment doing backups or trimming files
  • Unfortunately there was no benchmarking for the scripts on the old server so unknown whether they have regressed or not
  • 300 GB isn't massive by any standard, and the growth of the transaction log depends on the data changes, not on the use of a discontinued client driver. "The customer assures me": they always do. But somehow, somewhere, they're running a job that generates a ton of log entries. Rebuilding indexes, perhaps? Or the one which "does a huge number of deletions/inserts"? Commented Oct 14 at 9:52
  • "we weren't on the old": are you sure? You said it may have been "automatically shrinking back down". We can't guess what's going on, and this question says there are a lot of data modifications (which always result in large transaction logs), unknown scripts, differences between the servers, differences between the clients, and people making changes without anyone noticing. Perhaps someone runs an UPDATE that touches half the database? Commented Oct 14 at 9:57
  • Did the old server have a maintenance plan that was trimming the logs? The fact that the database isn't backed up at all, though, is a super scary position to be in. Commented Oct 14 at 12:24
  • Have you tested and verified that none of the "number of scripts" have regressed and are running slower on the new server now? That could cause the log space they use to be held longer, making the log file grow while other transactions are simultaneously occurring. Commented Oct 14 at 12:43

1 Answer


[...] can anyone suggest something I've overlooked?

If just the single database (or set of user databases) was moved then, assuming all else stayed the same (the amount of data ingested, the time it takes for all of those scripts to run, and so on), it's most likely there was a job on the old server which was shrinking the log.

The simple recovery model will attempt to truncate the log at various points, assuming it can, so the fact that the log is getting this large means there is some open transaction doing enough work with the data to generate that much log. Once the log grows, it will stay that size unless auto shrink is enabled (which, as stated, it isn't).
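To confirm that, a quick sketch along these lines (N'YourDb' is a placeholder for the affected database) shows what the log is currently waiting on and which open transactions have generated the most log:

    -- Why can't the log be truncated right now?
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'YourDb';

    -- Which open transactions have written the most log?
    SELECT st.session_id,
           es.login_name,
           es.program_name,
           dt.database_transaction_begin_time,
           dt.database_transaction_log_bytes_used
    FROM sys.dm_tran_database_transactions AS dt
    JOIN sys.dm_tran_session_transactions AS st
        ON st.transaction_id = dt.transaction_id
    JOIN sys.dm_exec_sessions AS es
        ON es.session_id = st.session_id
    WHERE dt.database_id = DB_ID(N'YourDb')
    ORDER BY dt.database_transaction_log_bytes_used DESC;

If log_reuse_wait_desc comes back as ACTIVE_TRANSACTION while the log is growing, the second query points at the culprit session.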

I'd just let the log be that size. If the customer doesn't like it, put in some monitoring to see which operation runs for so long that the log grows to that size, then have them break it up so that it doesn't grow that large in the first place.
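One lightweight way to do that monitoring (a sketch only; the table and database names are placeholders) is to have an Agent job poll sys.dm_db_log_space_usage into a table overnight, then correlate the spikes with the job schedules:

    USE YourDb;

    IF OBJECT_ID(N'dbo.LogUsageHistory') IS NULL
        CREATE TABLE dbo.LogUsageHistory
        (
            captured_at            datetime2(0)  NOT NULL DEFAULT SYSDATETIME(),
            total_log_size_mb      decimal(18,2) NOT NULL,
            used_log_space_mb      decimal(18,2) NOT NULL,
            used_log_space_percent decimal(5,2)  NOT NULL
        );

    INSERT dbo.LogUsageHistory (total_log_size_mb, used_log_space_mb, used_log_space_percent)
    SELECT total_log_size_in_bytes / 1048576.0,
           used_log_space_in_bytes / 1048576.0,
           used_log_space_in_percent
    FROM sys.dm_db_log_space_usage;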

The worst thing would be to shrink it back on some schedule, as that just forces a bunch of pointless re-growths during the next cycle and only adds time to the processing.
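If you want to sanity-check the current state before deciding, something like this (again a sketch; sys.dm_db_log_info requires SQL Server 2016 SP2 or later) shows the VLF count and the log file's autogrowth setting:

    -- How many VLFs does the log currently have?
    SELECT COUNT(*) AS vlf_count
    FROM sys.dm_db_log_info(DB_ID(N'YourDb'));

    -- Current size and autogrowth increment of the log file
    SELECT name,
           size * 8 / 1024 AS size_mb,
           CASE WHEN is_percent_growth = 1
                THEN growth                -- growth is a percentage
                ELSE growth * 8 / 1024     -- growth is in 8 KB pages, shown here as MB
           END AS growth_setting,
           is_percent_growth
    FROM sys.master_files
    WHERE database_id = DB_ID(N'YourDb')
      AND type_desc = N'LOG';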

  • May be worth noting that on SQL Server 2022 (OP's version), setting log growth to 64 MB will make those growth events behave like IFI. Commented Oct 14 at 13:26
  • True, @AaronBertrand, great call out, but then they'll be getting thousands of VLFs since it's clear they exclusively rely on autogrow. That'll put them in an even worse spot. Commented Oct 14 at 13:44
  • Well, they should fix that too, but changing the setting can be great as a fallback. Commented Oct 14 at 14:05
  • Yep, like I said great call out :) Commented Oct 14 at 15:19
