Field = Parameter OR Parameter IS NULL Pattern





I am aware of the parameter sniffing issues associated with stored procedures written with a predicate like the following:



CREATE PROCEDURE [dbo].[Get] @Parameter INT = NULL AS BEGIN;
    SELECT [Field]
    FROM [dbo].[Table]
    WHERE [Field] = @Parameter
       OR @Parameter IS NULL;
END;


Depending on the value of the parameter on first execution (a scalar value or NULL), a plan is cached that will likely be sub-optimal for the opposite case.
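
One way to see which shape was actually cached is to inspect the plan cache directly. This is an illustrative query against the standard DMVs, not part of the original setup:

--Find the cached plan for the procedure; the query_plan XML records the sniffed parameter value.
SELECT st.text, qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE st.text LIKE N'%CREATE PROCEDURE [[]dbo].[[]Get]%';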



Assuming [Field] is a scalar column and the clustering key of the table, what are the pros and cons of the following approaches to writing a stored procedure (or procedures) to support the query?



Conditioned selects in same stored procedure



CREATE PROCEDURE [dbo].[Get] @Parameter INT = NULL AS BEGIN;
    IF (@Parameter IS NOT NULL) BEGIN;
        SELECT [Field]
        FROM [dbo].[Table]
        WHERE [Field] = @Parameter;
    END;
    ELSE BEGIN;
        SELECT [Field]
        FROM [dbo].[Table];
    END;
END;


Dynamic SQL within stored procedure



CREATE PROCEDURE [dbo].[Get] @Parameter INT = NULL AS BEGIN;
    DECLARE @sql NVARCHAR(MAX) = N'';
    SET @sql += N'SELECT [Field] ';
    SET @sql += N'FROM [dbo].[Table]';

    IF (@Parameter IS NOT NULL) BEGIN;
        SET @sql += N' WHERE [Field] = @Parameter';
    END;

    SET @sql += N';';

    EXEC sp_executesql @sql, N'@Parameter INT', @Parameter;
END;
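
Note that the two parameter states produce different statement text, so each gets its own plan cache entry. For example (illustrative calls):

EXEC [dbo].[Get];                 --compiles and reuses: SELECT [Field] FROM [dbo].[Table];
EXEC [dbo].[Get] @Parameter = 1;  --compiles and reuses the variant with the WHERE clause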


Separate stored procedures



CREATE PROCEDURE [dbo].[Get] @Parameter INT = NULL AS BEGIN;
SELECT [Field]
FROM [dbo].[Table]
WHERE [Field] = @Parameter;
END;

CREATE PROCEDURE [dbo].[GetAll] AS BEGIN;
SELECT [Field]
FROM [dbo].[Table];
END;
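
With this option the NULL check moves out to the caller; a minimal sketch of a call site (the surrounding @Parameter variable is assumed):

IF (@Parameter IS NOT NULL)
    EXEC [dbo].[Get] @Parameter;
ELSE
    EXEC [dbo].[GetAll];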

sql-server execution-plan plan-cache

asked Dec 24 '18 at 19:29 by M. Jacobson

  • 5 votes: I'm a big fan of the dynamic SQL approach because you can very easily fine-tune every single parameter combination if you have to. See blogs.sentryone.com/aaronbertrand/…
    – Aaron Bertrand, Dec 24 '18 at 22:39

  • 2 votes: The first one seems like you may get parameter sniffing issues if compiled for NULL. I imagine the NOT NULL branch will be stuck with a 1-row estimate.
    – Martin Smith, Dec 25 '18 at 1:26

  • @Martin Smith - I thought the same thing, but that isn't the case; this question goes into detail as to what will actually occur: dba.stackexchange.com/questions/185252/…
    – M. Jacobson, Dec 26 '18 at 21:41

  • @M.Jacobson - That answer backs up what I thought would happen. All statements will be compiled according to the first parameter value passed. When that is NULL, then =NULL will return 0 rows, though zero-row estimates often get rounded up to 1.
    – Martin Smith, Dec 26 '18 at 22:07

  • @Martin Smith - Apologies, I misinterpreted your statement.
    – M. Jacobson, Dec 29 '18 at 10:23

3 Answers

They’re all excellent. Really. They all have the same impact of having two plans in the cache, which is what you want.



As you get more and more parameters, you will find the Dynamic SQL option is clearest, even though it looks scarier to beginners.
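
A minimal sketch of what that can look like with two optional parameters (the column [Field2] and the procedure name [GetFiltered] are hypothetical, added for illustration):

CREATE OR ALTER PROCEDURE [dbo].[GetFiltered] @P1 INT = NULL, @P2 INT = NULL AS BEGIN;
    DECLARE @sql NVARCHAR(MAX) = N'SELECT [Field] FROM [dbo].[Table] WHERE 1 = 1';

    --Each optional predicate stays one self-contained line, however many parameters accrue.
    IF (@P1 IS NOT NULL) SET @sql += N' AND [Field]  = @P1';
    IF (@P2 IS NOT NULL) SET @sql += N' AND [Field2] = @P2';

    EXEC sp_executesql @sql, N'@P1 INT, @P2 INT', @P1, @P2;
END;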



If this were a function I’d suggest avoiding multi-statement options, so that the QO (query optimizer) could do its stuff more nicely.

– Rob Farley, answered Dec 24 '18 at 21:05

    If you are on a reasonably recent build of SQL Server, one other option that could be considered is



    SELECT [Field]
    FROM [dbo].[Table]
    WHERE [Field] = @Parameter
    OR @Parameter IS NULL
    OPTION (RECOMPILE);


    This gets an optimal plan for the runtime value each time, at the expense of a compilation each time.



    Your "Conditioned select" option is still vulnerable to parameter sniffing. If the procedure is first executed when @Parameter is null then the branch with the [Field] = @Parameter predicate will estimate 1 rows (rounded up from the 0 expected for a =NULL predicate).



    In the specific example in your question, where you are selecting a single column and that is the same column you are filtering by, this is unlikely to present a problem, but it can in other cases.



    For example, with the following repro the first call to [dbo].[Get] 1 takes 333,731 logical reads because it reuses an inappropriate plan with key lookups. When the plan is removed from the cache and the procedure is recompiled with 1 passed first, the logical reads fall to 4,330.



    DROP TABLE IF EXISTS [Table];

    GO

    CREATE TABLE [Table]
    (
        [Field1] INT INDEX IX,
        [Field2] INT,
        [Field3] INT
    );

    INSERT INTO [Table]
    SELECT TOP 1000000 CRYPT_GEN_RANDOM(1)%3, CRYPT_GEN_RANDOM(4), CRYPT_GEN_RANDOM(4)
    FROM sys.all_objects o1, sys.all_objects o2;

    GO

    CREATE OR ALTER PROCEDURE [dbo].[Get] @Parameter INT = NULL AS BEGIN;
        IF (@Parameter IS NOT NULL) BEGIN;
            SELECT *
            FROM [dbo].[Table]
            WHERE [Field1] = @Parameter;
        END;
        ELSE BEGIN;
            SELECT *
            FROM [dbo].[Table];
        END;
    END;

    GO

    SET STATISTICS TIME ON;
    SET STATISTICS IO ON;

    --First execution with NULL compiles every statement for @Parameter = NULL
    EXEC [dbo].[Get];

    --Reuses the lookup plan estimated for ~1 row: 333,731 logical reads
    EXEC [dbo].[Get] 1;

    DECLARE @plan_handle VARBINARY(64) = (SELECT plan_handle FROM sys.dm_exec_procedure_stats WHERE object_id = OBJECT_ID('[dbo].[Get]'));

    --Remove the plan from the cache
    DBCC FREEPROCCACHE (@plan_handle);

    --Re-execute it with NOT NULL passed first: 4,330 logical reads
    EXEC [dbo].[Get] 1;



– Martin Smith, answered Dec 27 '18 at 10:30

      Based on the previous answers and comments from Aaron Bertrand, Martin Smith, and Rob Farley, I wanted to put together a pro/con list for each approach, including the additional approach OPTION (RECOMPILE):





      Conditioned selects in same stored procedure



      From Martin Smith's response:




      Your "Conditioned select" option is still vulnerable to parameter sniffing. If the procedure is first executed when @Parameter is null then the branch with the [Field] = @Parameter predicate will estimate 1 rows (rounded up from the 0 expected for a =NULL predicate).





      • No recompile cost.

      • Plan cache reuse for every statement and the stored procedure.

      • Cached plans are vulnerable to parameter sniffing even when there is no significant variance in the result set when @Parameter is NOT NULL.

      • Does not scale well administratively as the number of parameters increases.

      • Intellisense on all T-SQL.




      Dynamic SQL within stored procedure



      From Rob Farley:




      As you get more and more parameters, you will find the Dynamic SQL option is clearest, even though it looks scarier to beginners.





      • No recompile cost.

      • Plan cache reuse for every statement and the stored procedure.

      • Cached plans are vulnerable to parameter sniffing only when there is significant variance in the result set when @Parameter is NOT NULL.

      • Scales well administratively as the number of parameters increases.

      • Does not provide Intellisense on all T-SQL.




      Separate stored procedures




      • No recompile cost.

      • Plan cache reuse for every statement and the stored procedure.

      • Cached plans are vulnerable to parameter sniffing only when there is significant variance in the result set when @Parameter is NOT NULL.

      • Does not scale well administratively as the number of parameters increases.

      • Intellisense on all T-SQL.




      OPTION(RECOMPILE)



      From Martin Smith:




      This gets an optimal plan for the runtime value each time, at the expense of a compilation each time.





      • CPU cost for each recompile.

      • No plan cache reuse for statements ending in OPTION (RECOMPILE); only the stored procedure and statements without OPTION (RECOMPILE) are reused from the cache.

      • Scales well administratively as the number of parameters increases.

      • Not vulnerable to parameter sniffing.

      • Intellisense on all T-SQL.




      My Personal Takeaway



      If there is no significant variance in the result set across different scalar values of @Parameter, Dynamic SQL is a top performer, has the least system overhead, and is only marginally worse with regard to administrative overhead compared to OPTION (RECOMPILE). In more complex scenarios, where variance in the parameter value can cause significant changes in result sets, Dynamic SQL with conditional inclusion or exclusion of OPTION (RECOMPILE) will be the best overall performer. Here is a link to Aaron Bertrand's article detailing the approach: https://blogs.sentryone.com/aaronbertrand/backtobasics-updated-kitchen-sink-example/
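
      A minimal sketch of that conditional pattern (the skew test on @Parameter is a hypothetical placeholder; the linked article covers more realistic variants):

      CREATE OR ALTER PROCEDURE [dbo].[Get] @Parameter INT = NULL AS BEGIN;
          DECLARE @sql NVARCHAR(MAX) = N'SELECT [Field] FROM [dbo].[Table]';

          IF (@Parameter IS NOT NULL) BEGIN;
              SET @sql += N' WHERE [Field] = @Parameter';

              --Hypothetical skew test: recompile only for values known to vary wildly in row count.
              IF (@Parameter = 1) BEGIN;
                  SET @sql += N' OPTION (RECOMPILE)';
              END;
          END;

          EXEC sp_executesql @sql, N'@Parameter INT', @Parameter;
      END;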

– M. Jacobson, answered Dec 29 '18 at 10:21
