I'm obviously having trouble creating a question that fits the StackExchange guidelines with regard to opinions vs. metrics. Any help improving this question is highly appreciated.
I'm searching for a suitable approach to implement tests for access permission checks.
I'm writing an API backend (should be irrelevant). The system has 150 actions (= API endpoints) (= a lot) - mostly CRUD for various resource types (example: groups, users, documents, ...).
The system has concepts like tenants, groups, users, services (groups are affiliated with services) and permissions (e.g. permission.documents.list, permission.documents.read).
A naive approach for implementing tests would be to implement every permission use case for every action (= API endpoint) by hand. With roughly 16 use cases per action, that would leave me with 16 * 150 = 2400 tests to manually write and maintain. For each new API endpoint, 16 new tests would have to be written.
That leaves plenty of room for forgotten or misimplemented test cases.
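For illustration, one of those hand-written cases might look roughly like this (Jest-style TypeScript; `createTenant`, `createUser`, `createDocument` and `callAPI` are placeholders for whatever test helpers the project provides, not real code):

```typescript
// One of the ~2400 hand-written cases:
// "a caller from another tenant must not see this owner's documents via documents.list"
test("documents.list: caller from another tenant sees no foreign documents", async () => {
  const tenantA = await createTenant();
  const tenantB = await createTenant();
  const owner = await createUser({ tenant: tenantA, permissions: ["permission.documents.read"] });
  const caller = await createUser({ tenant: tenantB, permissions: ["permission.documents.list"] });
  const doc = await createDocument({ owner });

  const result = await callAPI("/documents.list", { as: caller });

  // The foreign document must not leak across the tenant boundary.
  expect(result.items).not.toContainEqual(expect.objectContaining({ id: doc.id }));
});
```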
Question
What is the common approach for implementing tests for access permission checks in a system with many actions and multiple affiliation combinations (tenant, group), while keeping them maintainable**?
**maintainable :=
- Don't implement every use case for every action by hand.
- When the permission management policy changes, all 2400 cases should be updatable by one person within 50 hours.
- Bugs in the test code should be easily findable (i.e. the test itself fails, or the bug is obvious from reading the code). (I guess "obvious" is a non-metric; I don't see how I can get rid of it at the moment.)
 
+α constraint: The "2400 tests" should be resilient to changes in the underlying affiliation structure. I.e. if the concept of groups disappears from the system, I want to rewrite only a small part of the tests.
If you don't have an answer, but a good reference on the topic, I'd appreciate mentioning it in the comments.
My own thoughts
My current testing pattern looks like this:
- create user/group/... data
- create resource for user2
- create resource for user1
- result = callAPI(action, as=user1)
- expect(result).toNotContain(resources of user2)
 
I understand that I need to centralize the part of the test that does all the affiliation setup (tenant/group/user/service). I also think I should centralize the "16" test cases, so that each action only has to provide its action-specific test data. Perhaps something like testAll16PermissionTests(api="/documents.list", resources=...).
However, I'm not sure what that would look like specifically, or whether that's even the common approach for this kind of situation. A rough sketch of what I have in mind is below.
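To make the idea concrete (Jest-style TypeScript; the case table, `Actor`, `Resource`, `createActor`, `createRelatedActor` and `callAPI` are made-up names, and the real table would contain all ~16 combinations):

```typescript
// Hypothetical shapes and helpers; the real ones would come from the project's test utilities.
type Actor = { id: string };
type Resource = { id: string };
declare function createActor(): Promise<Actor>;
declare function createRelatedActor(owner: Actor, relation: string, opts: { permissions: string[] }): Promise<Actor>;
declare function createDocument(opts: { owner: Actor }): Promise<Resource>;
declare function callAPI(endpoint: string, opts: { as: Actor }): Promise<{ items: Resource[] }>;

// Action-specific part: each endpoint only declares what it needs.
interface ActionSpec {
  endpoint: string;                                    // e.g. "/documents.list"
  requiredPermission: string;                          // e.g. "permission.documents.list"
  createResource: (owner: Actor) => Promise<Resource>; // action-specific test data
}

// The "16" cases live in one central table: who calls the API relative to the
// resource owner, which permission they hold, and whether access is expected.
const permissionCases = [
  { name: "owner with permission",        relation: "self",        hasPermission: true,  allowed: true  },
  { name: "owner without permission",     relation: "self",        hasPermission: false, allowed: false },
  { name: "same group with permission",   relation: "sameGroup",   hasPermission: true,  allowed: true  },
  { name: "same tenant, other group",     relation: "otherGroup",  hasPermission: true,  allowed: false },
  { name: "other tenant with permission", relation: "otherTenant", hasPermission: true,  allowed: false },
  // ... remaining tenant/group/service combinations
];

// Central part: affiliation setup and assertions, written exactly once.
export function testPermissions(spec: ActionSpec): void {
  describe(`${spec.endpoint} permission checks`, () => {
    test.each(permissionCases)("$name", async ({ relation, hasPermission, allowed }) => {
      const owner = await createActor();
      const caller = await createRelatedActor(owner, relation, {
        permissions: hasPermission ? [spec.requiredPermission] : [],
      });
      const resource = await spec.createResource(owner);

      const result = await callAPI(spec.endpoint, { as: caller });

      // For non-list endpoints "denied" might mean a 403 instead of an absent
      // item; the real harness would have to branch on that.
      if (allowed) {
        expect(result.items).toContainEqual(expect.objectContaining({ id: resource.id }));
      } else {
        expect(result.items).not.toContainEqual(expect.objectContaining({ id: resource.id }));
      }
    });
  });
}

// Per action, only one call with the action-specific data remains:
testPermissions({
  endpoint: "/documents.list",
  requiredPermission: "permission.documents.list",
  createResource: (owner) => createDocument({ owner }),
});
```

The hope is that a change in the permission policy or in the affiliation structure would only touch the central table and setup helpers, not the 150 per-action registrations.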