Hi all! I'm writing an image rotation algorithm. I have the code that actually rotates the image, and it works fine;
but the rotated image has 'holes' (missing pixels). I understand why this happens. Here's the problem: for each missing pixel coordinate, I need to run the calculation backwards to find where that pixel would land in the original, non-rotated image, so I can bilinearly interpolate to determine what its color should be. But my maths is really failing me on this part. How do I write this algorithm backwards?
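In case a concrete sketch of what I mean helps: the idea is to iterate over the *destination* pixels, rotate each coordinate by the negative angle around the image centre to find the matching source position, and bilinearly interpolate there. This is just my own rough attempt in Python (the grayscale 2D-list image format and the function name are assumptions, not from any real code I have working):

```python
import math

def rotate_inverse(src, width, height, angle):
    """Rotate `src` (a height x width 2D list of grayscale values)
    by `angle` radians using inverse mapping, so no holes appear."""
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    # Inverse mapping: rotate destination coords by -angle
    cos_a, sin_a = math.cos(-angle), math.sin(-angle)
    dst = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # Inverse-rotate the destination coordinate about the centre
            sx = cos_a * (x - cx) - sin_a * (y - cy) + cx
            sy = sin_a * (x - cx) + cos_a * (y - cy) + cy
            x0, y0 = int(math.floor(sx)), int(math.floor(sy))
            # Skip source positions that fall outside the image
            if 0 <= x0 < width - 1 and 0 <= y0 < height - 1:
                fx, fy = sx - x0, sy - y0
                # Bilinear interpolation between the four neighbours
                top = src[y0][x0] * (1 - fx) + src[y0][x0 + 1] * fx
                bot = src[y0 + 1][x0] * (1 - fx) + src[y0 + 1][x0 + 1] * fx
                dst[y][x] = top * (1 - fy) + bot * fy
    return dst
```

The key difference from my current (forward) code is that the loop runs over the output image, so every destination pixel gets exactly one value and the holes disappear.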