In general, transmitting a data block with a CRC tells you only whether the whole thing arrived correctly or incorrectly. There's no easy-to-use relationship between the difference between the receiver's calculated CRC and the received (potentially corrupted) CRC and the position or number of bit errors.
CRCs are especially good for certain kinds of errors: that's why we use them. For example, we might have a 1,024-bit message which includes a 32-bit CRC. There are very many (2^1024) different messages, which means that very many must share the same CRC, as there are only 2^32 different CRC values. (In a perfect CRC scheme, there would be 2^992 different messages with each CRC value.) The trick is this: certain errors are a lot more common than others (bursts, for example) and good CRCs pick them up with a very low overhead. (I.e., the number of bits in the CRC is small compared to the number of payload bits. "Small" would be perhaps 4 bytes on a 512-byte data block, about 128:1.)
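You can see the burst-detection property empirically. This sketch uses Python's `zlib.crc32` (CRC-32 is guaranteed to catch any burst of 32 bits or fewer); the helper name is mine, not a standard API:

```python
import zlib

def burst_detected(message: bytes, byte_pos: int, mask: int) -> bool:
    """Flip the masked bits of one byte (a short burst) and report
    whether the CRC-32 changes, i.e. the error would be detected."""
    corrupted = bytearray(message)
    corrupted[byte_pos] ^= mask
    return zlib.crc32(bytes(corrupted)) != zlib.crc32(message)
```

For example, `burst_detected(bytes(range(64)), 10, 0b00111100)` flips a 4-bit burst inside byte 10 of a 512-bit payload, and the CRC changes.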
So, no, you can't find or even count the bit errors with CRCs.
BUT
Technique 1: multi-dimensional CRCs. If you arrange your bits in a rectangular grid, you can send CRCs with each row and each column. Now you can easily identify (and thus correct) any single bit error: exactly one row check and one column check fail, and their intersection is the flipped bit. Under many conditions you can correct some multi-bit errors too.
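Here is a minimal sketch of that grid scheme (the layout and function names are my own, assuming `zlib.crc32` as the per-row/per-column check):

```python
import zlib

def crc_bits(bits):
    """CRC-32 of a list of 0/1 bit values (one bit per byte, for clarity)."""
    return zlib.crc32(bytes(bits))

def make_checks(grid):
    """Compute a CRC for every row and every column of a bit grid."""
    row_crcs = [crc_bits(row) for row in grid]
    col_crcs = [crc_bits([row[c] for row in grid]) for c in range(len(grid[0]))]
    return row_crcs, col_crcs

def correct_single_bit(grid, row_crcs, col_crcs):
    """If exactly one row CRC and one column CRC fail, flip the bit at
    their intersection in place and return its coordinates."""
    bad_rows = [r for r, row in enumerate(grid) if crc_bits(row) != row_crcs[r]]
    bad_cols = [c for c in range(len(grid[0]))
                if crc_bits([row[c] for row in grid]) != col_crcs[c]]
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        r, c = bad_rows[0], bad_cols[0]
        grid[r][c] ^= 1          # flip the located bit back
        return (r, c)
    return None                  # clean, or more errors than we can place
```

Usage: compute the checks, transmit grid plus checks, and on a mismatch call `correct_single_bit` to locate and repair the flipped bit.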
Technique 2: hunt the error. If you have a block of n bits with a CRC error, perhaps it was only a single bit, and there are only n different positions it could be. Try flipping each bit and see if the CRC computes correctly. If it was 2 bits, try the n(n-1)/2 possible 2-bit errors. If errors are rare and retransmission is expensive/impossible, and you have a lot of CPU available, this is a great technique.
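A brute-force sketch of that hunt, again assuming CRC-32 via `zlib.crc32` (the function name is mine):

```python
import zlib
from itertools import combinations

def hunt_error(data: bytearray, expected_crc: int, max_flips: int = 2):
    """Try every 1-bit, then every 2-bit, ... flip pattern until the CRC
    matches. On success the data is corrected in place and the flipped
    bit positions are returned; otherwise returns None."""
    n = len(data) * 8

    def flip(positions):
        for p in positions:
            data[p // 8] ^= 1 << (p % 8)

    for k in range(1, max_flips + 1):
        for positions in combinations(range(n), k):
            flip(positions)
            if zlib.crc32(bytes(data)) == expected_crc:
                return positions     # found it; data now corrected
            flip(positions)          # undo and keep hunting
    return None
```

Note the cost: single-bit hunting is n CRC computations, two-bit hunting is n(n-1)/2, and it grows combinatorially from there, which is why this only pays off when errors are rare and CPU is cheap.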
AND
If you want proper error correction, you'll need a lot more bits in your checksum. Have a look at the Hamming(7,4) code and the binary Golay codes. But these codes use vastly more checking bits than CRCs: the extended Golay code (24,12), for example, is 1:1, i.e. half of the transmitted bits are check bits.
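To make the contrast concrete, here is a minimal Hamming(7,4) sketch: 3 check bits protect only 4 data bits, but in exchange the syndrome directly names the position of a single flipped bit, so it can be corrected rather than just detected:

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword with
    parity bits at positions 1, 2 and 4 (1-based)."""
    p1 = d[0] ^ d[1] ^ d[3]            # covers positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]            # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]            # covers positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3    # 0 = clean, else 1-based error position
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1           # flip the offending bit back
    return [c[2], c[4], c[5], c[6]]
```

Note the overhead: 3 check bits per 4 data bits, versus 32 check bits per ~4,000 data bits for the CRC scheme above. That's the price of correction over detection.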