Asymptotically speaking, it doesn't matter. For example, binary search makes approximately log_2 n comparisons, and ternary search makes approximately log_3 n comparisons. If you know your logarithms, you know that log_a x = log_b x / log_b a, so binary search only makes about 1 / log_3 2 ≈ 1.5 times as many comparisons as ternary search. This is also the reason nobody ever specifies the base of the logarithm in big-O notation: it's always a constant factor away from the logarithm in any given base, no matter what the base actually is. So splitting the problem into more subsets doesn't improve time complexity, and in practice the constant-factor savings isn't enough to outweigh the more complex logic. In fact, that extra complexity may hurt practical performance, increasing cache pressure or making micro-optimizations less feasible.
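You can check the constant-factor claim numerically; here's a quick sketch (n = 1,000,000 is just an arbitrary example size):

```python
import math

n = 1_000_000  # arbitrary example input size

# Number of "split" steps each search needs to narrow n items down to 1.
binary_steps = math.log2(n)     # log_2 n halving steps
ternary_steps = math.log(n, 3)  # log_3 n thirding steps

# By the change-of-base identity, the ratio is the constant 1 / log_3 2,
# independent of n.
ratio = binary_steps / ternary_steps
constant = 1 / math.log(2, 3)

print(f"binary: {binary_steps:.2f} steps, ternary: {ternary_steps:.2f} steps")
print(f"ratio: {ratio:.3f}, 1 / log_3 2: {constant:.3f}")
```

Running this prints a ratio of about 1.585 for any n, which is the "about 1.5×" constant above.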
On the other hand, some tree-like data structures do use a high branching factor (much bigger than 3, often 32 or more), though usually for other reasons: it improves utilization of the memory hierarchy. Data structures stored in RAM make better use of the cache, and data structures stored on disk require fewer reads from HDD to RAM.
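To see why fewer reads matter, note that the height of a b-ary tree over n keys is about log_b n, and each level is roughly one node fetch (one cache miss or disk read). A rough back-of-the-envelope sketch, with made-up key counts and branching factors:

```python
import math

n = 10**9  # hypothetical number of keys

# Height of a complete b-ary tree over n keys is ceil(log_b n): roughly
# the number of node fetches (cache misses / disk reads) per lookup.
for b in (2, 32, 256):
    height = math.ceil(math.log(n, b))
    print(f"branching factor {b:3d}: ~{height} node fetches per lookup")
```

Asymptotically the heights differ only by a constant factor, but going from ~30 disk reads to ~4 per lookup is a big practical win, which is why on-disk trees like B-trees pick a branching factor so each node fills a whole disk page.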