I am running a job that deletes data from 7 tables, 2 of which contain 1-2 million records each. The job gets stuck when deleting data from the web_activity table, which holds only 42,000 records. Most of the time it takes 4 hours, but sometimes it takes only 7 minutes. If this were an index issue, why would it finish in 7 minutes on some days?
There are 4 other jobs that run in parallel every day and sometimes cause blocking, but that blocking is due to resources being used by the other jobs.
What I am concerned about is: how can I reduce the four hours it takes to delete only 42,000 records from web_activity?
One more point: there is a huge number of logical reads (2,066,225,339). I am not sure whether that is the cause or not.
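To confirm whether those logical reads come from this statement's filter, it may help to run the delete's predicate as a SELECT with I/O statistics on. A minimal sketch (the variable values here are placeholders, not taken from the original job):

SET STATISTICS IO ON;

DECLARE @min_month_to_delete INT = 1,    -- placeholder value
        @year_to_delete      INT = 2015; -- placeholder value

SELECT COUNT(*)
FROM Web_Activity
WHERE MONTH_NUMBER >= @min_month_to_delete
  AND year >= @year_to_delete;

If the Messages tab reports logical reads in the millions for a table of 42,000 rows, the statement is scanning the table repeatedly (for example, once per row of a nested-loops join) instead of seeking on an index.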
While running DBCC SHOWCONTIG for that table it shows the data below:
I am using an alternative approach:
SELECT Web_Activity_id
INTO #Temp_web_activity
FROM Web_Activity
WHERE MONTH_NUMBER >= @min_month_to_delete
  AND year >= @year_to_delete;

DELETE FROM Web_Activity
WHERE Web_Activity_id IN (SELECT Web_Activity_id FROM #Temp_web_activity);
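One possible refinement of this approach, assuming Web_Activity_id is the table's primary key: index the temp table and rewrite the IN as a join, so the optimizer can match the ids in one pass instead of probing an unindexed heap. A sketch, building on the temp table above:

-- Index the temp table so the join below can seek instead of scan
CREATE CLUSTERED INDEX IX_Temp ON #Temp_web_activity (Web_Activity_id);

DELETE wa
FROM Web_Activity AS wa
JOIN #Temp_web_activity AS t
    ON t.Web_Activity_id = wa.Web_Activity_id;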
Will this be helpful? I also tried deleting in batches in a lower environment, but it was not much help.
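For reference, batching only helps if each batch's predicate can seek on an index; without an index covering the filter columns, every batch may still scan the whole table, which could explain why batching was not much help. A sketch (the index name, batch size, and INCLUDE column are assumptions):

-- Hypothetical supporting index; adjust to the real column names and types
CREATE INDEX IX_Web_Activity_year_month
    ON Web_Activity (year, MONTH_NUMBER) INCLUDE (Web_Activity_id);

DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (5000)        -- batch size is a tuning knob, not a fixed rule
    FROM Web_Activity
    WHERE MONTH_NUMBER >= @min_month_to_delete
      AND year >= @year_to_delete;

    SET @rows = @@ROWCOUNT;  -- loop until no more rows match
END;

Keeping each batch small also limits how long locks are held, which matters given the 4 parallel jobs that sometimes cause blocking.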