

Today I am explaining the use of the RECOMPILE clause in SQL Server stored procedures. We use stored procedures in SQL Server to get the benefit of reusability, and sometimes we also use the WITH RECOMPILE option in them. Here I am focusing on why we use that option.

When we execute a stored procedure, SQL Server creates an execution plan for it and stores that plan in the procedure cache. If we execute the same procedure again, SQL Server searches the procedure cache before creating a new execution plan, and if a plan is found there it is reused, which saves the CPU cycles needed to compile a new one. But sometimes the best plan depends on the parameter values passed to the stored procedure, and in that case reusing the same plan for different parameter values may degrade performance.

For example, create a table xtdetails, create indexes on it, and insert some data as shown below.
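The original CREATE TABLE listing did not survive; only the table name and the two index definitions did. The sketch below therefore assumes the column types (an int id plus varchar(50) name and address columns, matching the varchar(50) parameter fragment quoted later); the two index statements are taken from the original post.

    CREATE TABLE xtdetails
    (
        id      INT NOT NULL,      -- assumed definition; the original column list was lost
        name    VARCHAR(50),       -- assumed length
        address VARCHAR(50)        -- assumed length
    )

    CREATE CLUSTERED INDEX IX_xtdetails_id ON xtdetails(id)
    CREATE NONCLUSTERED INDEX IX_xtdetails_address ON xtdetails(address)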

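Now I am inserting the data into this table. Of the original population script only a "declare @i as int" fragment survives, so the loop below is a sketch that merely reproduces the distribution described next: 10,000 rows in total, of which only 10 have name = 'asheesh' and address = 'Moradabad'. The filler values ('Delhi' and the generated names) are my assumption.

    DECLARE @i INT
    SET @i = 1
    WHILE @i <= 10000
    BEGIN
        IF @i <= 10
            INSERT INTO xtdetails (id, name, address)
            VALUES (@i, 'asheesh', 'Moradabad')
        ELSE
            INSERT INTO xtdetails (id, name, address)
            VALUES (@i, 'name' + CAST(@i AS VARCHAR(10)), 'Delhi')  -- filler values assumed
        SET @i = @i + 1
    END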
xtdetails now contains 10,000 rows, of which only 10 have name = 'asheesh' and address = 'Moradabad'. Now create a stored procedure that takes a varchar(50) parameter and selects address and name from xtdetails, as shown below.
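Only fragments of the original procedure survive ("create procedure ... as varchar(50))", a stray "Set", and "Select address,name from xtdetails where"), so the version below is a reconstruction. The procedure name usp_GetxtdetailsByAddress, the SET NOCOUNT ON, and the filter on the address column are assumptions.

    CREATE PROCEDURE usp_GetxtdetailsByAddress
        @address VARCHAR(50)        -- parameter type taken from the surviving fragment
    AS
    BEGIN
        SET NOCOUNT ON              -- assumed from the stray 'Set' fragment
        SELECT address, name
        FROM xtdetails
        WHERE address = @address    -- assumed predicate; the original WHERE clause was cut off
    END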

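Now execute this stored procedure with statistics IO enabled. The parameter value of this first run did not survive; what matters for the demo is that the first call uses a non-selective value, so that the plan compiled and cached here is built for a large row count. The value below is assumed.

    SET STATISTICS IO ON
    EXEC usp_GetxtdetailsByAddress 'Delhi'   -- assumed non-selective value; ~9,990 rows match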
The output of this execution produces the statistics and an execution plan that uses the clustered index, and that plan is stored in the procedure cache.

Now execute the same procedure with a different, highly selective parameter value (the first statement in the sketch below). When we execute the stored procedure again, it reuses the cached execution plan with the clustered index, while we know that if it used the nonclustered index on address to retrieve the data here, performance would be faster, because only 10 of the 10,000 rows match. To fix this, recreate the stored procedure with the RECOMPILE option, as the rest of the sketch shows.
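Assuming the names introduced above, the second execution and the RECOMPILE version of the procedure would look like this. WITH RECOMPILE is the documented procedure-level option: the procedure's plan is never cached, and a fresh plan is compiled on every execution.

    SET STATISTICS IO ON
    EXEC usp_GetxtdetailsByAddress 'Moradabad'  -- only 10 rows match, but the cached scan plan is reused

    -- Recreate the procedure so that every execution compiles a fresh plan
    DROP PROCEDURE usp_GetxtdetailsByAddress
    GO
    CREATE PROCEDURE usp_GetxtdetailsByAddress
        @address VARCHAR(50)
    WITH RECOMPILE
    AS
    BEGIN
        SET NOCOUNT ON
        SELECT address, name
        FROM xtdetails
        WHERE address = @address
    END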

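Now execute this stored procedure again, with statistics IO on and the same selective parameter value:

    SET STATISTICS IO ON
    EXEC usp_GetxtdetailsByAddress 'Moradabad'  -- with RECOMPILE, a new plan is compiled for this value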
Here you see the better execution plan and a great improvement in the STATISTICS IO output. This is because of the WITH RECOMPILE option: each execution of the stored procedure now generates a new execution plan, suited to the parameter value it was actually called with.

The same trade-off exists at statement level with the OPTION (RECOMPILE) hint, and it does not always fall in favour of recompiling. The following answer, given to a question about a query that ran in 42 minutes without OPTION (RECOMPILE) but 22 hours with it, makes that point:

I think the first point I would make is that OPTION (RECOMPILE) is not necessarily an optimisation; I'd view it as a 'change of behaviour', which can affect performance either way.

Looking at the plans (and yes, they are fairly complex!) we can see that the '42 min' plan (without OPTION (RECOMPILE)) has an estimated output row count of 2 rows, whereas the actual output is 21k. Conversely, the '22 hour' plan (with OPTION (RECOMPILE)) estimates a higher row count of 8,659. So the faster plan wrongly assumes it is going to be doing less work and chooses its plan and operators based on that assumption. However, in your case the actual performance is improved by this 'wrong' plan when compared to the 'better' plan. Unfortunately the plan and query are so complex that it is difficult to pinpoint exactly why.

We also see that the faster plan has a Reason for Early Termination of Time Out, which means the optimiser reached a threshold and had to abandon looking for any further possible plans and go with the best it had so far (see for more info). It's not unusual for this seemingly negative position to actually deliver a plan that is good: just because the optimiser didn't evaluate every single possibility doesn't mean it hasn't already got the best plan. The slower plan does not have the early termination, which means it thinks it has arrived at the best plan for the estimates and statistics it has; clearly this is not the case, as borne out by your experience.

To answer why this suddenly changed one night: there could have been updated statistics, or a 'better' plan got pushed out of the cache. I can envisage lots of scenarios where 'nothing' has changed and yet the performance suddenly tanks. What you often want to do in these cases is work towards plan stability, so that no matter what happens (outside of changing the schema), a good enough plan is always generated. Unfortunately that often goes hand-in-hand with making the query simpler, and I don't know how much control you have over that.
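For completeness, OPTION (RECOMPILE), as discussed in the answer above, is a statement-level hint rather than the procedure-level WITH RECOMPILE used earlier in this post. Against the demo table it would look as follows; this statement is my sketch, not from the original, and the early-termination reason mentioned above can be read from the plan XML.

    -- Statement-level recompile: a fresh plan is compiled for this statement on every execution
    SELECT address, name
    FROM xtdetails
    WHERE address = 'Moradabad'
    OPTION (RECOMPILE)

    -- The optimiser's Reason for Early Termination is visible in the showplan XML,
    -- e.g. StatementOptmEarlyAbortReason="TimeOut" on the StmtSimple element.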
