One approach is to start at n / 2 and work downwards via (i - 1), instead of starting at 2 and working upwards via (i + 1).
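As a sketch (assuming n ≥ 2, and returning 1 for primes, since 1 is the biggest proper divisor of a prime):

```ocaml
(* Biggest divisor by iterating downward from n / 2.
   Assumes n >= 2; returns 1 when n is prime. *)
let biggest_divisor n =
  let rec aux i =
    if i < 2 then 1                  (* no divisor found: n is prime *)
    else if n mod i = 0 then i       (* first hit from above is the biggest *)
    else aux (i - 1)
  in
  aux (n / 2)
```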
Another approach is to write biggest_divisor in terms of smallest_divisor:
let biggest_divisor n = n / smallest_divisor n ;;
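Spelled out, with one plausible implementation of smallest_divisor (a sketch, assuming n ≥ 2; the convention here is that smallest_divisor returns n itself when n is prime, so biggest_divisor returns 1 in that case):

```ocaml
(* Smallest divisor >= 2, iterating upward.
   Assumes n >= 2; returns n itself when n is prime. *)
let smallest_divisor n =
  let rec aux i =
    if i > n / 2 then n              (* no proper divisor: n is prime *)
    else if n mod i = 0 then i
    else aux (i + 1)
  in
  aux 2

(* The biggest divisor is n divided by the smallest one. *)
let biggest_divisor n = n / smallest_divisor n
```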
Edited to add: The second approach is actually more efficient in the average case, since the smallest divisor tends to be much closer to 2 than the biggest divisor is to n/2. (If 2 is not a divisor, then the next-smallest possible divisor is 3, which is the very next one you try; but if n/2 is not a divisor, then the next-biggest possible divisor is n/3, and counting down from n/2 to n/3 means iterating over n/6 possibilities.)
In a comment, you write that you don't want biggest_divisor to depend on smallest_divisor, for some reason. Personally, I think that's a mistake; but if you feel strongly about it, then your best option is probably to mimic the n / smallest_divisor n approach, by iterating up from 2 and then returning n / i when you find a divisor.
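That self-contained version might look like this (again a sketch, assuming n ≥ 2 and returning 1 for primes):

```ocaml
(* Biggest divisor without a separate smallest_divisor:
   iterate upward from 2 and return n / i at the first divisor.
   Assumes n >= 2; returns 1 when n is prime. *)
let biggest_divisor n =
  let rec aux i =
    if i > n / 2 then 1              (* no divisor found: n is prime *)
    else if n mod i = 0 then n / i   (* smallest i found, so n / i is biggest *)
    else aux (i + 1)
  in
  aux 2
```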
Incidentally, you can improve the performance of both approaches by aborting as soon as i * i > n, rather than waiting until i > n / 2. That way you only need to try √n possible values of i, rather than n/2 possible values of i, before detecting that n is prime.
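Applied to the upward-iterating version, that cutoff looks like this (a sketch under the same assumptions as above — n ≥ 2, returning 1 for primes):

```ocaml
(* Biggest divisor with the early abort: once i * i > n, no divisor can
   exist below sqrt n, so n must be prime.
   Assumes n >= 2; returns 1 when n is prime. *)
let biggest_divisor n =
  let rec aux i =
    if i * i > n then 1              (* no divisor <= sqrt n: n is prime *)
    else if n mod i = 0 then n / i
    else aux (i + 1)
  in
  aux 2
```

The cutoff works because any divisor pair (i, n / i) has one member at most √n, so it suffices to search up to there.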