I’m doing some testing of columnstore indexing on a single table that has about 500 million rows.
The performance gains on aggregate queries have been awesome (a query that previously took about 2 minutes to aggregate the entire table now completes in under a second).
But I also noticed another test query, one that leverages seeking on an existing rowstore index on the same table, now runs 4x slower than it did before I created the columnstore index. I can reproduce this consistently: when I drop the columnstore index, the rowstore query runs in 5 seconds; when I add the columnstore index back, the same rowstore query runs in 20 seconds.
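Roughly, my repro looks like this (index and table definitions are the ones shown further down; timings were measured with SET STATISTICS TIME ON):

```sql
-- Baseline: columnstore index dropped
DROP INDEX IF EXISTS IX_Table1_2 ON Table1;

SET STATISTICS TIME ON;
SELECT * INTO #TEMP1 FROM Table1 WITH (FORCESEEK)
WHERE IntField1 = 571 AND DateField1 >= '6/01/2020';  -- ~5 seconds
DROP TABLE #TEMP1;

-- Re-create the columnstore index and run the identical query again
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_Table1_2
    ON Table1 (IntField2, IntField1, DateField1);

SELECT * INTO #TEMP2 FROM Table1 WITH (FORCESEEK)
WHERE IntField1 = 571 AND DateField1 >= '6/01/2020';  -- ~20 seconds
DROP TABLE #TEMP2;
```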
I’m keeping an eye on the actual execution plan for the rowstore index query, and it’s almost exactly the same in both cases, regardless of whether the columnstore index exists. (It uses the rowstore index in both cases.)
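In case it helps, one thing I haven’t fully dug into is the I/O side; comparing the logical/physical read counts between the two runs might show where the extra time goes. Something along these lines:

```sql
SET STATISTICS IO, TIME ON;

SELECT * INTO #TEMP FROM Table1 WITH (FORCESEEK)
WHERE IntField1 = 571 AND DateField1 >= '6/01/2020';

SET STATISTICS IO, TIME OFF;
-- Compare the "logical reads" / "physical reads" numbers reported
-- for Table1 with and without the columnstore index in place.
```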
The rowstore test query is:
SELECT *
INTO #TEMP
FROM Table1 WITH (FORCESEEK)
WHERE IntField1 = 571
AND DateField1 >= '6/01/2020'
The rowstore index used in this query is:
CREATE NONCLUSTERED INDEX IX_Table1_1 ON Table1 (IntField1, DateField1) INCLUDE (IntField2)
The columnstore test query is:
SELECT COUNT(DISTINCT IntField2) AS IntField2_UniqueCount, COUNT(1) AS RowCount
FROM Table1
WHERE IntField1 = 571 -- Some other test columnstore queries also don't use any WHERE predicates on this table
AND DateField1 >= '1/1/2019'
The columnstore index is:
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_Table1_2 ON Table1 (IntField2, IntField1, DateField1)
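If it’s relevant, the state of the columnstore’s rowgroups (open/closed delta rowgroups vs. compressed ones) can be inspected like this:

```sql
-- Inspect rowgroup state for the columnstore index on Table1
SELECT i.name, rg.state_desc, rg.total_rows, rg.deleted_rows
FROM sys.dm_db_column_store_row_group_physical_stats AS rg
JOIN sys.indexes AS i
  ON i.object_id = rg.object_id AND i.index_id = rg.index_id
WHERE rg.object_id = OBJECT_ID('Table1');
```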
The only differences I notice between the two plans are that the Sort operator’s warning goes away after creating the columnstore index, and that the Key Lookup and Table Insert (#TEMP) operators take significantly longer.
I would’ve thought a read query that leverages the same rowstore index and execution plan in both cases would have roughly the same performance on every run, regardless of what other indexes exist on the table. What gives here?