I have a working script that converts polar coordinates to Cartesian coordinates and then matches each coordinate to a value in a separate array. It works, but it takes a long time to run because of the size of the matrices involved. Each file has four columns and 2,880,000 rows, which means 11,520,000 total values are processed. The data look like this:
     [,1] [,2] [,3]
[1,]    1    1    1
[2,]    2    2    2
[3,]    3    3    3
The array rf.190301 is a three-dimensional array that looks like this:
, , 1
     [,1] [,2] [,3]
[1,]    1    1    1
[2,]    2    2    2
[3,]    3    3    3
, , 2
     [,1] [,2] [,3]
[1,]    1    1    1
[2,]    2    2    2
[3,]    3    3    3
, , 3
     [,1] [,2] [,3]
[1,]    1    1    1
[2,]    2    2    2
[3,]    3    3    3
I'm fairly new to R and am just looking for a way to optimize what I'm doing so that it runs a bit faster. Here is my current function:
Polar2Cart <- function(x) {
  Cart.x <- matrix(NA_real_, nrow = nrow(x), ncol = 4)
  for (i in 1:nrow(x)) {
    z <- x[i, 1]                      # index into the first dimension of rf.190301
    t <- x[i, 2]                      # azimuth in degrees (also the second index)
    r <- x[i, 3]                      # range bin (also the third index)
    theta.polar <- t * (pi / 180)     # convert degrees to radians
    r.polar <- r * 0.075              # scale range bin to distance
    x.cart <- r.polar * cos(theta.polar)
    y.cart <- r.polar * sin(theta.polar)
    value <- rf.190301[z, t, r]       # look up the matching array value
    Cart.x[i, ] <- c(z, y.cart, x.cart, value)
  }
  Cart.x                              # return the filled matrix instead of discarding it
}
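Since every row is computed independently, the whole loop can be replaced with vectorized operations, which in R is typically orders of magnitude faster than filling a matrix row by row. Below is a minimal sketch of that approach; `Polar2CartVec` is a hypothetical name, and it assumes (as the loop above implies) that columns 1 to 3 of x hold integer indices that are valid for the three dimensions of rf.190301:

Polar2CartVec <- function(x) {
  # Assumes columns 1-3 of x are the integer indices (z, t, r)
  # used to index rf.190301, exactly as in the loop version.
  z <- x[, 1]
  t <- x[, 2]
  r <- x[, 3]
  theta.polar <- t * (pi / 180)   # degrees -> radians, all rows at once
  r.polar <- r * 0.075            # scale all range bins at once
  # Indexing a 3-D array with a matrix of (z, t, r) triples pulls
  # every matching value in a single call, with no loop.
  value <- rf.190301[cbind(z, t, r)]
  cbind(z,
        y = r.polar * sin(theta.polar),
        x = r.polar * cos(theta.polar),
        value)
}

The key change is `rf.190301[cbind(z, t, r)]`: indexing the array with a three-column matrix of index triples turns the lookup that dominated the loop into one operation, and the trigonometry then runs on full-length vectors rather than one scalar at a time.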