4

Let's say we have the following:

node[1].name = "apple";
node[1].color = "red";
node[2].name = "cherry";
node[2].color = "red";
node[3].name = "apple";
node[3].color = "green";
node[4].name = "orange";
node[4].color = "orange";

If I use jQuery.unique(node) I get all of the original nodes back, because every node differs in either name or color. What I want is only the nodes with a unique name, which should return

node[1] (apple)
node[2] (cherry)
node[4] (orange)

It should not return node[3] because it is the same fruit, even though we have red and green apples.
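For reference, the same data written as a plain 0-indexed array literal (a sketch; the question's snippet uses 1-based indices, but a JavaScript array would normally start at 0):

```javascript
// The sample data as an array of plain objects (0-indexed).
var node = [
  { name: "apple",  color: "red"    },
  { name: "cherry", color: "red"    },
  { name: "apple",  color: "green"  },
  { name: "orange", color: "orange" }
];
// Desired result: one entry per unique name -> apple, cherry, orange
```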

4
  • You'll have to iterate over your array, creating a new array with the unique matches. Commented Dec 9, 2013 at 20:54
  • I would instead use a different data structure where node is an object, and each key of the object is the fruit, each containing an array of colors. Commented Dec 9, 2013 at 20:55
  • @KevinB Maybe the OP wants a certain ordering.... Commented Dec 9, 2013 at 21:45
  • Well they can each be objects, as a matter of fact that would be better, however the colors cannot be an array and there will be duplicates. Commented Dec 9, 2013 at 21:54

3 Answers

7

Use Array.filter with a temporary array to track the names you've already seen:

function filterByName(arr) {
  var f = []
  return arr.filter(function(n) {
    return f.indexOf(n.name) == -1 && f.push(n.name)
  })
}

Demo: http://jsfiddle.net/mbest/D6aLV/6/
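For example, applied to the question's data (a sketch, assuming the nodes live in a plain 0-indexed array):

```javascript
var node = [
  { name: "apple",  color: "red"    },
  { name: "cherry", color: "red"    },
  { name: "apple",  color: "green"  },
  { name: "orange", color: "orange" }
];

function filterByName(arr) {
  var f = []; // names seen so far
  return arr.filter(function (n) {
    // Keep n only the first time its name appears.
    // push() returns the new array length (always truthy), so the && works.
    return f.indexOf(n.name) == -1 && f.push(n.name);
  });
}

var unique = filterByName(node);
// unique holds the red apple, the cherry, and the orange
```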


4 Comments

It would probably be more efficient to cache unique results in an object rather than an array.
@joews Maybe, maybe not. But that could potentially destroy the ordering since objects are unordered. IMO this is more elegant and most likely efficient enough.
I don't see why it affects the ordering - see my answer.
Your code doesn't work. It needs to be f.indexOf(n.name) == -1. jsfiddle.net/mbest/D6aLV/6
2

This approach (forked from @David's) should have better performance for large inputs, because object key lookups are O(1) while Array#indexOf is O(n).

function filter(arr, attribute) {
  var seen = {};

  return arr.filter(function(n) {
      return (seen[n[attribute]] === undefined)
              && (seen[n[attribute]] = 1);
  });
}

console.log(filter(node, 'name'));

http://jsfiddle.net/LEBBB/1/

3 Comments

I made a jsPerf, simply because I'm sceptical of forking other answers just to make a micro-performance point: jsperf.com/uniquearr-obj — and this method is actually slower.
Yes, but as I said, this is optimised for larger input sets (and in particular a large number of unique values of the filter attribute). Array is much (~80%) slower over 500 input objects with 100 distinct attributes: jsperf.com/unique-big-array-object-large
Having said that, I would use your answer for smaller inputs; it is more elegant and performs well. Here's a jsPerf where performance is similar for 50 input objects with 10 unique values: jsperf.com/unique-big-array-object
0

What about doing it like this?

var fruitNames = [];
$.each($.unique(fruits), function(i, fruit) {
    if (fruitNames.indexOf(fruit.name) == -1) {
        fruitNames.push(fruit.name);
        $('#output').append('<div>' + fruit.name + '</div>');
    }
});

Here is a working fiddle.

Obviously, instead of output.append I could just add the current fruit to uniqueFruit[] or something.
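Collecting into an array instead of appending to the DOM could look like this (a sketch; `uniqueFruits` is a made-up name, and jQuery's $.each is swapped for plain Array.prototype.forEach so the snippet runs standalone):

```javascript
var fruits = [
  { name: "apple",  color: "red"    },
  { name: "cherry", color: "red"    },
  { name: "apple",  color: "green"  },
  { name: "orange", color: "orange" }
];

var fruitNames = [];   // names seen so far
var uniqueFruits = []; // first node for each distinct name
fruits.forEach(function (fruit) {
  if (fruitNames.indexOf(fruit.name) == -1) {
    fruitNames.push(fruit.name);
    uniqueFruits.push(fruit); // keep the whole object, not just the name
  }
});
// uniqueFruits: apple (red), cherry, orange
```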
