SQL Server - Indexes

Can someone confirm a few things for me:

1) Each index always contains the primary key information (so if the table is a linker table, I don't need to 'include' the primary key column)?

2) If my where clause includes three columns and I have three indexes (one per column), will SQL Server (where it thinks appropriate) use those three indexes to find the primary keys that appear in all three, then do lookups to fetch the full rows? Or will it just pick the one index it thinks most helpful and then do a lookup?

We've got a large table (ledger) which has a number of foreign keys. The data in this table, even though its primary concern is quantity in a location, is viewed (and filtered) in many ways. At the moment we have 10 indexes: some on individual columns, some composite, and a mixture with and without included columns. Just wondering if the individual indexes are pointless (99% of queries will have 2 or more columns in the where clause).
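To make the two options concrete, here is a hedged sketch (table and column names are made up for illustration; the real ledger schema will differ). Every nonclustered index in SQL Server carries the clustered index key at the leaf level, which is the primary key only when the PK is the clustered index, so single-column indexes can in principle be intersected on that key, but a composite index matching the common WHERE clauses usually beats both approaches:

```sql
-- Hypothetical schema for illustration only.
CREATE TABLE dbo.Ledger (
    LedgerID   int IDENTITY PRIMARY KEY,  -- clustered by default
    ProductID  int NOT NULL,
    LocationID int NOT NULL,
    ClientID   int NOT NULL,
    Quantity   decimal(18, 2) NOT NULL
);

-- Option A: three single-column indexes. The optimiser *can* intersect
-- these (joining on the clustered key stored in each), but it often
-- prefers a single index plus key lookups instead.
CREATE INDEX IX_Ledger_Product  ON dbo.Ledger (ProductID);
CREATE INDEX IX_Ledger_Location ON dbo.Ledger (LocationID);
CREATE INDEX IX_Ledger_Client   ON dbo.Ledger (ClientID);

-- Option B: one composite index matching the multi-column WHERE clause;
-- INCLUDE makes it covering, so no lookup is needed at all.
CREATE INDEX IX_Ledger_Prod_Loc_Client
    ON dbo.Ledger (ProductID, LocationID, ClientID)
    INCLUDE (Quantity);
```

Whether option A's individual indexes are "pointless" depends on whether any queries filter on just one of those columns; if 99% of queries filter on two or more, a small set of composite indexes ordered by the most selective leading column is the usual recommendation.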

Cheers!
 
What if you don't include ID? Is it still just an index seek (assuming ID is primary key)?

We use stored procs for complex reports such as the one I'm having issues with. The problem is that the execution plan doesn't make sense: it says one part of the stored proc is taking 5% of the time, but when I comment that part out the query runs in 1 second instead of 7. Also, when I run that query by itself it's fast, but it uses different indexes than when it's run as part of the larger stored proc (as I reference the data in several different ways, I'm loading it into a table variable).

We are running 2005, moving to 2008 shortly.
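The symptoms above are consistent with a known table-variable behaviour on 2005/2008: table variables carry no statistics, so the optimiser typically assumes they hold very few rows, which can yield a very different plan (and misleading cost percentages) from the same query run standalone against the base table. A minimal sketch of the pattern, with assumed table and column names:

```sql
-- Hypothetical sketch; dbo.Ledger, the columns, and the filter are assumptions.
DECLARE @Filtered TABLE (ClientID int, ProductID int);

INSERT INTO @Filtered (ClientID, ProductID)
SELECT ClientID, ProductID
FROM dbo.Ledger
WHERE LocationID = 42;

-- Joins against @Filtered are costed with a tiny row estimate because the
-- table variable has no statistics, so the "5% of the batch" figure the
-- plan reports for this branch can be wildly off at runtime.
SELECT l.*
FROM dbo.Ledger AS l
JOIN @Filtered AS f
  ON  f.ClientID  = l.ClientID
  AND f.ProductID = l.ProductID;
```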
 
Can't really do all that I'm afraid - company data (and a lot of it - 1m+ rows). What you've done (thank you!) is answered the first of my questions...

What I can do (tomorrow) is show you the query in question and the two different execution plans... the goal is to work out whether SQL Server can use multiple indexes to identify the target rows, or whether it just has to pick the best one for the job and then work through all the data (forcing me to make the 'one' index a composite of multiple columns to ensure a small result set).

Thanks again.
 
All sorted! And it's very bizarre...

Basically I had a table which I was referring to a lot (to cut down the result sets of other queries). To make this faster I thought 'let's load it into a table variable and then just do a DISTINCT on the various columns when I need it'. The table is only 1,800 rows, so what harm can it do? Well, apparently, a lot.

The query was taking 20 seconds to run. By taking the table variable and creating three more table variables (containing the distinct values), the query now takes 3 seconds. And now the execution plan looks a lot more sensible (cost high where I expect it to be).

The only clue that led me to the change was one icon on the execution plan: a distinct on the client id that I expected to return 3 rows was actually returning 1.3m rows. Its reported cost was only 1%; the only reason I looked at it was the arrow coming off it being large (indicating a large result set).

Strange. Very strange! Will be thinking twice when setting up table variables in future.
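For anyone hitting the same thing, the change described above can be sketched roughly as follows (all names are made up; the real proc will differ). Instead of running SELECT DISTINCT against the one wide table variable every time a filter was needed, the distinct values are materialised once per column up front:

```sql
-- Hypothetical sketch of the refactor described above.
DECLARE @Source TABLE (ClientID int, ProductID int, LocationID int);
-- ... @Source loaded here with the ~1,800 rows ...

DECLARE @Clients   TABLE (ClientID   int PRIMARY KEY);
DECLARE @Products  TABLE (ProductID  int PRIMARY KEY);
DECLARE @Locations TABLE (LocationID int PRIMARY KEY);

-- Do each DISTINCT exactly once, up front.
INSERT INTO @Clients   SELECT DISTINCT ClientID   FROM @Source;
INSERT INTO @Products  SELECT DISTINCT ProductID  FROM @Source;
INSERT INTO @Locations SELECT DISTINCT LocationID FROM @Source;

-- Later joins then use the small pre-distinct'd sets, so the plan no
-- longer repeats the distinct work inside the larger query (the step
-- that was producing 1.3m rows).
```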
 
I did have primary keys, but when removing them everything was the same (the cost to insert with a primary key was the same as the cost of a plain insert, etc.). Also no cursors... just three table variables (making them inline slowed the queries down again?) that are then all joined together to produce the result set. I really thought that doing a distinct would help reduce the workload... learn something new every day :)
 