  • Kudos for figuring out the correct algorithm.

    However, you can streamline it by not using the v vector:

    You correctly treated an operation a, b, k as a pair of operations: add k from a to the end, and subtract k from b+1 to the end. Now, instead of storing them in v, collect the decoupled operations in a vector of their own. Sort it by index, std::partial_sum it, and find the maximum of the resulting array (see the sketch after this list).

    This will drive the space complexity down from \$O(n)\$ to \$O(m)\$, and change the time complexity from \$O(n+m)\$ to \$O(m\log m)\$. Given the problem's constraints, the \$O(m\log m)\$ bound appears to be the better one: it wins whenever \$m\log m\$ is small compared to \$n\$. One should also keep in mind that the accesses to v could be all over the place with no particular order, and a well-crafted sequence of operations may incur too many cache misses. I didn't profile it, though.

  • It is possible that spelling the loop out (rather than using for_each with a lambda) would improve readability.

  • The algorithm would fail if k were allowed to be negative. Even though that is not the case here, it is still a good habit to initialize max and x to v[0] and to start the loop at v.begin() + 1 (see the second sketch below).
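
Here is a minimal sketch of the event-based scan from the first point. The input format (n and m on the first line, then m operations a b k) and all of the names are assumptions on my part, not taken from the original code:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <numeric>
#include <utility>
#include <vector>

int main() {
    std::size_t n, m;
    std::cin >> n >> m;
    static_cast<void>(n);  // n is only needed to consume the input

    // Decouple each operation (a, b, k) into two events:
    // +k taking effect at index a, and -k taking effect at index b + 1.
    std::vector<std::pair<std::uint64_t, std::int64_t>> events;
    events.reserve(2 * m);
    for (std::size_t i = 0; i < m; ++i) {
        std::uint64_t a, b;
        std::int64_t k;
        std::cin >> a >> b >> k;
        events.emplace_back(a, k);
        events.emplace_back(b + 1, -k);
    }

    // Sort by index. Lexicographic pair comparison breaks index ties by
    // value, placing -k events before +k events at equal indices, so no
    // running sum ever overshoots the true maximum.
    std::sort(events.begin(), events.end());

    // The running sums of the event values are exactly the array values
    // at the event indices; their maximum is the answer.
    std::vector<std::int64_t> sums(events.size());
    std::transform(events.begin(), events.end(), sums.begin(),
                   [](const auto& e) { return e.second; });
    std::partial_sum(sums.begin(), sums.end(), sums.begin());

    std::cout << (sums.empty()
                      ? std::int64_t{0}
                      : *std::max_element(sums.begin(), sums.end()))
              << '\n';
}
```

One subtlety the sketch relies on: if a +k event sorted before a -k event at the same index, an intermediate running sum could exceed the true maximum; the tie-breaking of pair comparison avoids that for free.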
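
For the last two points together, a hedged sketch of the spelled-out loop; the element type, and the assumption that v is the difference array from the original code, are mine:

```cpp
#include <vector>

// Precondition: v is non-empty. x is the running prefix sum over the
// difference array, so it reconstructs the array values one at a time.
long long max_value(const std::vector<long long>& v) {
    auto x = v[0];   // seed from v[0] instead of 0 ...
    auto max = x;    // ... so the result stays correct for negative values
    for (auto it = v.begin() + 1; it != v.end(); ++it) {
        x += *it;
        if (x > max) max = x;
    }
    return max;
}
```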
