Final words
 Mucking about with this concept and code has been a lot of fun. There's nothing intrinsically difficult, once one 'groks' what it is trying to achieve. I do hope it finds use by one or two readers, even if just to enumerate Happy, Bashful, Doc and the others...
 Reporting an update to this answer: for the first time I've tried a naive, partial implementation of the suggested 'cascade' of "well, try looking in this subsection of a large list". My goal was enumerating the names of the 50 US states...
 The first hurdles were the two "North ???", two "South ???" and four "New ???" states, plus two "Miss????" states. PLUS, a theoretical 'full house' of unique hashes (in this version) would max out at only 32.
 Being 'clever', I divided the list into two groups, putting one "North ???" and one "South ???" state into each, and doing likewise for the two "Miss????" states. To account for the four "New ???" states, I added a 'settable' offset (either 0 or 4) so that the attempts to determine non-colliding hash values could use word[w][ 0, 1, 2 ] or word[w][ 4, 5, 6 ]. (Obviously, "Iowa" and "Ohio" were not amongst the second group of names, being too short for the larger offset.)
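 For concreteness, the kind of per-word 'hash' being attempted might look something like the sketch below. The particular mixing (XOR and small shifts) and the 5-bit mask (giving the 32-slot ceiling mentioned above) are illustrative assumptions, not the exact operations used:

    #include <stdint.h>

    /* Hash 3 bytes of 'word', starting at 'offset' (0 or 4), into a 5-bit
     * value.  The caller must guarantee the word is long enough for the
     * chosen offset.  The mixing below is a hypothetical example of the
     * "light arithmetic or bitwise operations" described in the text. */
    static unsigned hash3(const char *word, int offset)
    {
        unsigned h = (uint8_t)word[offset]
                   ^ ((uint8_t)word[offset + 1] << 1)
                   ^ ((uint8_t)word[offset + 2] << 2);
        return h & 0x1F;   /* 5-bit result: at most 32 distinct slots */
    }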
Result: NO attempted permutation of these operations avoided collisions; tough luck!
 Undaunted, I tried using subsection populations of 17 + 17 + 16. Each of these three DID succeed, offering up several 'workable' hashing possibilities. Just to note, the original sorted list of 50 state names had, by this time, become quite a jumbled collection. A further translation LUT, not implemented, would be needed if it were important that "Alabama" be indexed as 1 and "Wyoming" as 50.
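 A hedged sketch of how one might brute-force 'workable' parameters for one such subsection follows. The parameter space (two small shift amounts) and the few sample names are hypothetical stand-ins for whatever combinations one actually experiments with:

    #include <stdio.h>
    #include <stdint.h>

    /* Same 3-byte mixing as before, but with the shifts as tunable parameters. */
    static unsigned hash3p(const char *w, int off, int s1, int s2)
    {
        unsigned h = (uint8_t)w[off]
                   ^ ((uint8_t)w[off + 1] << s1)
                   ^ ((uint8_t)w[off + 2] << s2);
        return h & 0x1F;                       /* at most 32 distinct slots */
    }

    /* Return 1 if (off, s1, s2) maps every name in 'names' to a unique slot. */
    static int collision_free(const char **names, int n, int off, int s1, int s2)
    {
        int used[32] = { 0 };
        for (int i = 0; i < n; i++) {
            unsigned h = hash3p(names[i], off, s1, s2);
            if (used[h]++)
                return 0;                      /* collision: these parameters fail */
        }
        return 1;
    }

    int main(void)
    {
        /* One hypothetical subsection; the real experiment used ~17 names each. */
        const char *subsection[] = { "Alabama", "Colorado", "Georgia", "Vermont" };
        int n = (int)(sizeof subsection / sizeof subsection[0]);

        for (int s1 = 0; s1 < 6; s1++)
            for (int s2 = 0; s2 < 6; s2++)
                if (collision_free(subsection, n, 0, s1, s2))
                    printf("workable: shifts %d and %d\n", s1, s2);
        return 0;
    }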
 Having wrestled that bear into submission, and in view of both "big O" considerations and the desire for the best (fastest) return from this function, the clouds parted to reveal a further potential improvement that I will leave alone...
 Based on the lexicon of 50 state names, a binary search may involve up to 5 iterations (strcmp() calls) to arrive at roughly half of its matches (or, worse, at the rejection of a candidate string). Using these chunks of 17 + 17 + 16 would, in the worst case, reach its conclusion in three steps. Note: if the candidate string is definitely a correctly spelled state name, no verifying strcmp() would be required; just some "light arithmetic or bitwise operations" on the values of 3 bytes of the candidate. And there's a 1-in-3 chance that the candidate is found in the first subsection tried.
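 To make that cascade concrete, a lookup along these lines might be shaped as below. The chunk/table layout, the hash function pointer, and the always-on strcmp() verification are illustrative choices of mine; only the 17 + 17 + 16 split and the three-probe worst case come from the text above:

    #include <string.h>

    #define NCHUNKS 3

    struct chunk {
        const char *slot[32];                /* slot[h] = the one name hashing to h, or NULL */
        unsigned (*hash)(const char *word);  /* the collision-free hash found for this chunk */
    };

    /* Filled in elsewhere from the results of the parameter search. */
    extern struct chunk chunks[NCHUNKS];

    /* Return the table entry matching 'word', or NULL if it is not one of
     * the 50 names.  Worst case: three hash probes (the strcmp() checks can
     * be dropped when the input is known to be a valid state name). */
    const char *lookup(const char *word)
    {
        for (int c = 0; c < NCHUNKS; c++) {
            unsigned h = chunks[c].hash(word);
            const char *candidate = chunks[c].slot[h];
            if (candidate && strcmp(candidate, word) == 0)
                return candidate;
        }
        return NULL;
    }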
 This last idea led to another variation that could be interesting to explore. Instead of my simple-minded guesswork assigning states to groups, given enough resources one could seek to stuff as many "compatible" states into the first and second groups as possible (a 'knapsack'-style problem). Supposing, once properly tested, the lexicon were such that some combination of ca. 25 of the 50 'keywords' could be uniquely "hashed" into the same (first) subsection, then ca. 50% (not "big O"; not "worst case") of 'in use' searches would find the target with just a few "light bitwise" operations and, only if required, one strcmp().
 This has all been a bit of fun. The origin of it was a novice C programmer who wanted to find a way to speed up a rather long-running program that 'felt as if' it was clogging up an already strained multi-user minicomputer ('suboptimal' meaning additional cycles lost to unproductive process 'swapping'). I hope this has been in some way interesting to the reader.