I fully endorse the approach in this answer: avoiding the duplicate ID and potentially using a loop rather than a `reduce`. The `Map` suggested there is a perfectly reasonable alternative to plain objects or `groupBy` as well.
Speaking of which, `groupBy` will incur some allocation overhead for those unnecessary arrays. I'd expect `reduce` to be fastest, but that's just a hunch; I haven't benchmarked anything. It probably doesn't matter much either way, since each option is back in linear time complexity territory and `departments` is probably on the small side, but it's worth keeping in mind.
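To make the trade-off concrete, here's a minimal sketch; the `departments` shape is assumed, since the original data isn't shown:

```javascript
// Hypothetical data shaped like the question's `departments`.
const departments = [
  { id: 1, name: "Engineering" },
  { id: 2, name: "Sales" },
];

// Plain loop building a Map keyed by id — one pass, no throwaway arrays.
const byId = new Map();
for (const dept of departments) {
  byId.set(dept.id, dept);
}

// reduce variant — same result, same single pass
// (Map.prototype.set returns the map, so it chains as the accumulator).
const byIdReduce = departments.reduce(
  (map, dept) => map.set(dept.id, dept),
  new Map()
);

// Map.groupBy, by contrast, wraps each value in an array even when
// the key is unique — those are the "unnecessary arrays":
//   Map.groupBy(departments, d => d.id)
//   → Map { 1 => [{ id: 1, … }], 2 => [{ id: 2, … }] }
```

Since the IDs are unique here, `groupBy` buys nothing over the direct `Map` construction, and each of its one-element arrays is an extra allocation.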