Different environments have different amounts of data and usually different data distributions. SQLCMD variables can't be used in table declarations in SSDT, so they can't be used to control bucket sizes per environment.
Is there a reasonably easy way to configure different bucket counts per environment?
Here is an example table:
CREATE TABLE [dbo].[UserPermissions]
(
    [PrincipalId] [NVARCHAR](35) NOT NULL
    , [UserId] [NVARCHAR](35) NOT NULL
    , [OrderId] [NVARCHAR](36) NOT NULL
    , [OrderType] [VARCHAR](3) NOT NULL
    , [ViewPermission] [BIT] NOT NULL
    , [SignPermission] [BIT] NOT NULL
    , CONSTRAINT [PKm_UserPermissions]
        PRIMARY KEY NONCLUSTERED HASH (
            [OrderId]
            , [UserId]
            , [PrincipalId]
        )
        WITH (BUCKET_COUNT = 500000)
    , INDEX [ixm_UserPermissions_OrderId] NONCLUSTERED HASH ([OrderId]) WITH (BUCKET_COUNT = 50000)
    , INDEX [ixm_UserPermissions_UserId_PrincipalId] NONCLUSTERED ([UserId], [PrincipalId])
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO
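
One workaround that might fit: SSDT post-deployment scripts do support SQLCMD variables, and on SQL Server 2016 or later a hash index can be rebuilt with a new bucket count. A minimal sketch of such a script (the $(PermissionsBucketCount) variable is hypothetical and would be set per publish profile):

-- Post-deployment script: table DDL can't take SQLCMD variables, but this can.
-- Rebuild the hash index with the per-environment bucket count.
ALTER TABLE [dbo].[UserPermissions]
    ALTER INDEX [PKm_UserPermissions]
    REBUILD WITH (BUCKET_COUNT = $(PermissionsBucketCount));
GO

The catch is that the deployed bucket count then differs from the one in the dacpac definition, so a later publish may try to reset it. Is there a cleaner approach?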
Comments:

Re "do change a lot": how many hundreds of times per second? And if locking is a problem, are you sure the culprit isn't incorrect indexing?

Re "It won't build": scripts don't get built, they run. Are you saying you tried to use a DB project variable in the table definition?

Re "clustered": with what fill factor? Are you sure you weren't creating a lot of page splits?

Re "bunch of deadlocks": that's a query problem, often caused by inefficient indexes.

Re "much more with indexes": did you try filtered indexes?

Re "small enough": then why BUCKET_COUNT = 500000?
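
On that last point, whether 500000 buckets is justified can be checked against actual index usage rather than guessed. A sketch using the sys.dm_db_xtp_hash_index_stats DMV: many empty buckets with short chains suggest the count is too high, while long average chains suggest it is too low.

-- Inspect bucket usage for the hash indexes on dbo.UserPermissions.
SELECT OBJECT_NAME(hs.object_id) AS table_name
     , i.name AS index_name
     , hs.total_bucket_count
     , hs.empty_bucket_count
     , hs.avg_chain_length
     , hs.max_chain_length
FROM sys.dm_db_xtp_hash_index_stats AS hs
JOIN sys.indexes AS i
    ON i.object_id = hs.object_id
   AND i.index_id = hs.index_id
WHERE hs.object_id = OBJECT_ID(N'dbo.UserPermissions');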