
Different environments have different amounts of data and usually different data distributions. You can't use SQLCMD variables in table declarations in SSDT, so you can't use them to set different bucket sizes per environment.
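
For illustration, this is the kind of declaration that fails to build in an SSDT database project (the table and the $(BucketCount) variable are made up for the example):

-- Hypothetical illustration: a SQLCMD variable inside an object definition.
-- An SSDT database project will not build this; variables like $(BucketCount)
-- are only resolved in pre/post-deployment scripts, not in table DDL.
CREATE TABLE [dbo].[Example]
(
    [Id] INT NOT NULL
  , CONSTRAINT [PK_Example]
        PRIMARY KEY NONCLUSTERED HASH ([Id])
        WITH (BUCKET_COUNT = $(BucketCount))
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);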

Is there a reasonably easy way to configure different bucket counts per environment?

An example of such a table:

CREATE TABLE [dbo].[UserPermissions]
(
    [PrincipalId] [NVARCHAR](35) NOT NULL
  , [UserId] [NVARCHAR](35) NOT NULL
  , [OrderId] [NVARCHAR](36) NOT NULL
  , [OrderType] [VARCHAR](3) NOT NULL
  , [ViewPermission] [BIT] NOT NULL
  , [SignPermission] [BIT] NOT NULL
  , CONSTRAINT [PKm_UserPermissions]
        PRIMARY KEY NONCLUSTERED HASH (
                                          [OrderId]
                                        , [UserId]
                                        , [PrincipalId]
                                      )
        WITH (BUCKET_COUNT = 500000)
  , INDEX [ixm_UserPermissions_OrderId] NONCLUSTERED HASH ([OrderId]) WITH (BUCKET_COUNT = 50000)
  , INDEX [ixm_UserPermissions_UserId_PrincipalId] NONCLUSTERED ([UserId], [PrincipalId])
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO
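
One workaround I'm considering (just a sketch; the $(...) variables are hypothetical ones I'd define per publish profile, not anything SSDT provides out of the box): keep a fixed BUCKET_COUNT in the table definition so the project builds, then resize the hash indexes from the post-deployment script, where SQLCMD variables are resolved. ALTER TABLE ... ALTER INDEX ... REBUILD can change the bucket count of a hash index on a memory-optimized table (SQL Server 2016 and later):

-- Post-deployment script sketch. $(UserPermissionsPkBucketCount) and
-- $(UserPermissionsOrderIdBucketCount) are hypothetical SQLCMD variables
-- set to different values in each environment's publish profile.
ALTER TABLE [dbo].[UserPermissions]
    ALTER INDEX [PKm_UserPermissions]
    REBUILD WITH (BUCKET_COUNT = $(UserPermissionsPkBucketCount));

ALTER TABLE [dbo].[UserPermissions]
    ALTER INDEX [ixm_UserPermissions_OrderId]
    REBUILD WITH (BUCKET_COUNT = $(UserPermissionsOrderIdBucketCount));

The rebuild copies the index, so it adds some deployment time on larger environments; that's the main trade-off I can see. Is this reasonable, or is there a cleaner way?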

  • What are you trying to do? Permissions change so rarely that there's no reason to put them into an in-memory table. All databases (including SQL Server) cache data aggressively. In-memory tables are useful in high-contention, i.e. write-heavy, scenarios. Creating one bucket per row isn't going to make queries faster; you're just stealing RAM from queries that could benefit from it. BTW, you can use script variables in a SQLCMD script. SSDT is the product family name, not the tool itself. Commented 2 days ago
  • Permissions do change a lot in the current implementation, which I can't change. It wipes out everything for a specific user and then re-inserts it. As for SQLCMD, sure, you can use variables, but not in the table DDL definition, at least in SSDT projects: it won't build. I have avg_chain_length=67, max_chain_length=1,578 for a specific index, which seems to indicate the bucket count is way too small (see the stats query sketch after these comments). Commented yesterday
  • It's separate from the question, but I'd love to see how you determined that memory-optimized was the way to go here in the first place. What actual benefits (not marketing-slide bullet points) did you see over a regular table? Commented yesterday
  • @DmitrijKultasev "do change a lot": how many hundreds of times per second? And if locking is a problem, are you sure the culprit isn't incorrect indexing? "It won't build": scripts don't get built, they run. Are you saying you tried to use a DB project variable in the table definition? Commented yesterday
  • "clustered" with what fill factor? Are you sure you weren't creating a lot of page splits? "bunch of deadlocks": that's a query problem, often caused by inefficient indexes. "much more with indexes": did you try filtered indexes? "small enough": then why BUCKET_COUNT = 500000? Commented yesterday
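
Since a couple of comments asked about the chain-length figures: here is a minimal sketch of how to read them, using the sys.dm_db_xtp_hash_index_stats DMV (nothing in it is specific to this table):

-- One row per hash index on memory-optimized tables in the current database.
SELECT
    OBJECT_NAME(hs.object_id) AS table_name
  , i.name                    AS index_name
  , hs.total_bucket_count
  , hs.empty_bucket_count
  , hs.avg_chain_length       -- ideally close to 1
  , hs.max_chain_length       -- large values suggest BUCKET_COUNT is too low
FROM sys.dm_db_xtp_hash_index_stats AS hs
JOIN sys.indexes AS i
    ON i.object_id = hs.object_id
   AND i.index_id = hs.index_id;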
