As per the title, at work we have a SQL Server 2005 node that is part of a cluster.
This particular node hosts a few databases, but mainly our biggest one, which is around 900GB in size. We have been getting complaints that in the evening this database becomes unusable for a couple of hours. Our initial thought was that backups were causing this, but after a little more monitoring we can see they are not the cause.
Basically, our backups run an incremental each evening with a full on Fridays. The incremental backup finishes between 8 and 9pm, and we have even suspended the backup on this server for an evening to fully rule it out for our DB team.
I have looked at the server reports using various monitoring tools and can clearly see that just after 9pm something kicks off on the server and it begins constantly reading data from the SAN at around 200MB/s. This continues until around midnight.
Looking into it a little further, I can see that the reads are being done purely by the SQL Server process, and at file-monitoring level I can see it's the massive DB that is being read in, i.e. gigs and gigs of it. No other process on the server is causing it, and it's that single file on the server's data drive.
There are no scheduled maintenance plans at that time; the earliest one is at around 10:45pm, but this starts at 9pm. It happens every night at the same time, so it must be something that's scheduled. The problem is that whilst this DB is being read so intensively, end users are suffering, and this is a DB which must have 24/7 access.
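Is there a way to list everything the SQL Agent has scheduled around 9pm? I found the msdb tables mentioned online and pieced the query below together (I'm assuming sysjobs/sysschedules are the right places to look, so apologies if it's off), but I'm not sure I'm reading the results correctly:

SELECT  j.name AS job_name,
        s.name AS schedule_name,
        s.active_start_time,   -- integer HHMMSS, e.g. 210000 = 9pm
        js.next_run_date,      -- integer YYYYMMDD
        js.next_run_time       -- integer HHMMSS
FROM    msdb.dbo.sysjobs j
JOIN    msdb.dbo.sysjobschedules js ON js.job_id = j.job_id
JOIN    msdb.dbo.sysschedules s ON s.schedule_id = js.schedule_id
WHERE   j.enabled = 1          -- only jobs that are actually enabled
ORDER BY s.active_start_time;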
I have looked through the SQL Server logs, but nothing shows as being run at that time. I'm not a DBA and know only a little SQL, so I've been unable to track down the cause, and to be honest our DB team are a waste of space at anything other than setting up the odd maintenance plan.
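One suggestion I found was that the error logs won't show queries at all, and that the Agent job history in msdb is a better place to see what actually ran last night (again, cobbled together, so it may not be quite right):

SELECT TOP 100
        j.name AS job_name,
        h.step_name,
        h.run_date,      -- integer YYYYMMDD
        h.run_time,      -- integer HHMMSS
        h.run_duration,  -- integer HHMMSS
        h.run_status     -- 1 = succeeded, 0 = failed
FROM    msdb.dbo.sysjobhistory h
JOIN    msdb.dbo.sysjobs j ON j.job_id = h.job_id
ORDER BY h.run_date DESC, h.run_time DESC;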
Does anyone have any ideas as to what could be causing this? If someone had created a very badly written scheduled report etc., would that cause it? If so, how could I track it down? Could it be some sort of expensive query being executed remotely? Is there any circumstance in which SQL Server would be doing something with the DB outside of a maintenance plan?
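Would it help to run something against the DMVs at 9pm while the reads are happening? I put the query below together from the documentation (so the column choices may well be wrong), hoping that host_name and program_name would show whether it's a report server, a remote connection or something else:

SELECT  r.session_id,
        s.login_name,
        s.host_name,       -- which machine the connection came from
        s.program_name,    -- e.g. a reporting tool or application name
        DB_NAME(r.database_id) AS database_name,
        r.command,
        r.reads,           -- physical reads issued by this request
        r.logical_reads,
        t.text AS running_sql
FROM    sys.dm_exec_requests r
JOIN    sys.dm_exec_sessions s ON s.session_id = r.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE   r.session_id > 50  -- skip system sessions
ORDER BY r.reads DESC;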
Any help would be really appreciated, as I'm really struggling to pinpoint the issue. Perfmon is showing large read disk queue depths, as is to be expected with the I/O we are seeing. I've tried looking in Activity Monitor within Management Studio but can't seem to see anything; I'm guessing there may be something I'm missing.
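The only other thing I've tried is snapshotting the file-level I/O stats with the query below (these counters seem to be cumulative since the instance started, so I ran it before 9pm and again afterwards and compared), which does seem to confirm it's that one data file taking all the reads, but not what's driving them:

SELECT  DB_NAME(vfs.database_id) AS database_name,
        mf.name                  AS logical_file_name,
        mf.physical_name,
        vfs.num_of_reads,        -- cumulative reads against this file
        vfs.num_of_bytes_read,
        vfs.io_stall_read_ms     -- cumulative time spent waiting on reads
FROM    sys.dm_io_virtual_file_stats(NULL, NULL) vfs
JOIN    sys.master_files mf
          ON mf.database_id = vfs.database_id
         AND mf.file_id = vfs.file_id
ORDER BY vfs.num_of_bytes_read DESC;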