
I have a data.frame like this:

set.seed(123)
df = data.frame(x=sample(0:1,10,replace=T),y=sample(0:1,10,replace=T),z=1:10)
> df
   x y  z
1  0 1  1
2  1 0  2
3  0 1  3
4  1 1  4
5  1 0  5
6  0 1  6
7  1 0  7
8  1 0  8
9  1 0  9
10 0 1 10

I would like to remove duplicate rows based on the first two columns. Expected output:

df[!duplicated(df[,1:2]),]
  x y z
1 0 1 1
2 1 0 2
4 1 1 4

I am specifically looking for a solution using the dplyr package.


6 Answers


Here is a solution using dplyr >= 0.5.

library(dplyr)
set.seed(123)
df <- data.frame(
  x = sample(0:1, 10, replace = T),
  y = sample(0:1, 10, replace = T),
  z = 1:10
)

> df %>% distinct(x, y, .keep_all = TRUE)
  x y z
1 0 1 1
2 1 0 2
3 1 1 4

Comments

This solution appears to be much faster (10 times in my case) than the one provided by Hadley.
Technically this too is a solution provided by Hadley :-)
You can control which rows are removed by arranging first: distinct() keeps the first row it encounters in each group.
Remember .keep_all = TRUE, or distinct() will drop every column you don't name (see the sketch below).
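To illustrate that last comment, a minimal sketch using the same df as above: without .keep_all = TRUE, distinct() returns only the columns you name.

df %>% distinct(x, y)

##   x y
## 1 0 1
## 2 1 0
## 3 1 1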

Note: dplyr now contains the distinct() function for this purpose.

Original answer below:


library(dplyr)
set.seed(123)
df <- data.frame(
  x = sample(0:1, 10, replace = T),
  y = sample(0:1, 10, replace = T),
  z = 1:10
)

One approach would be to group, and then only keep the first row:

df %>% group_by(x, y) %>% filter(row_number(z) == 1)

## Source: local data frame [3 x 3]
## Groups: x, y
## 
##   x y z
## 1 0 1 1
## 2 1 0 2
## 3 1 1 4

(In dplyr 0.2 you won't need the dummy z variable and will just be able to write row_number() == 1)
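That is how it works in released versions; a sketch of the same approach without the dummy variable:

df %>%
  group_by(x, y) %>%
  filter(row_number() == 1) %>%
  ungroup()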

I've also been thinking about adding a slice() function that would work like:

df %>% group_by(x, y) %>% slice(from = 1, to = 1)

Or maybe a variation of unique() that would let you select which variables to use:

df %>% unique(x, y)
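For the record, what eventually shipped was distinct(), as the note above says, plus a slice() that takes row indices, so the grouped equivalent today is:

df %>% group_by(x, y) %>% slice(1)

(The slice() answer further down shows the same idiom.)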

Comments

@dotcomken Until then you could also just use df %>% group_by(x, y) %>% do(head(., 1))
@MahbubulMajumder that will work, but is quite slow. dplyr 0.3 will have distinct()
@hadley I like the unique() and distinct() functions; however, they all remove the 2nd duplicate from the data frame. What if I want the 1st encounters of the duplicated values removed instead? How could this be done? Thanks for any help!
@MvZB - wouldn't you just arrange(desc()) and then use distinct()? (See the first sketch below.)
I'm sure there is a simple solution, but what if I want to get rid of both duplicate rows? I often work with metadata associated with biological samples, and if I have duplicate sample IDs I often can't be sure which row has the correct data. The safest bet is to dump both to avoid erroneous metadata associations. Any easy solution besides making a list of duplicate sample IDs and filtering out rows with those IDs? (See the second sketch below.)
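Two sketches addressing those last two comments, using the example df from the question (standard dplyr idioms, not taken from the answers above):

# Keep the last occurrence instead of the first: sort descending
# before distinct(), which always keeps the first row it sees
df %>%
  arrange(desc(z)) %>%
  distinct(x, y, .keep_all = TRUE)

# Drop every row whose (x, y) combination is duplicated,
# first occurrences included
df %>%
  group_by(x, y) %>%
  filter(n() == 1) %>%
  ungroup()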

For completeness’ sake, the following also works:

df %>% group_by(x) %>% filter(!duplicated(y))

However, I prefer the solution using distinct, and I suspect it’s faster, too.


Most of the time, the best solution is using distinct() from dplyr, as has already been suggested.

However, here's another approach that uses the slice() function from dplyr.

# Generate fake data for the example
library(dplyr)
set.seed(123)
df <- data.frame(
  x = sample(0:1, 10, replace = T),
  y = sample(0:1, 10, replace = T),
  z = 1:10
)

# In each group of rows formed by a combination of x and y,
# retain only the first row
df %>%
  group_by(x, y) %>%
  slice(1)

Difference from using the distinct() function

The advantage of this solution is that it makes explicit which rows are retained from the original data frame, and it pairs nicely with the arrange() function.

Let's say you have customer sales data and you want to retain one record per customer, and you want that record to be the one from their latest purchase. Then you could write:

customer_purchase_data %>%
   arrange(desc(Purchase_Date)) %>%
   group_by(Customer_ID) %>%
   slice(1)
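As a side note beyond the original answer: in dplyr >= 1.0.0 the same "latest record per customer" idea can be written with slice_max(), using the hypothetical column names from the example above:

customer_purchase_data %>%
  group_by(Customer_ID) %>%
  slice_max(Purchase_Date, n = 1, with_ties = FALSE) %>%
  ungroup()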


If you want to find the rows that are duplicated, you can use find_duplicates from hablar:

library(dplyr)
library(hablar)

df <- tibble(a = c(1, 2, 2, 4),
             b = c(5, 2, 2, 8))

df %>% find_duplicates()
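If you'd rather not add a dependency, here is a plain-dplyr sketch of the same idea: keep every row whose combination of values occurs more than once, first occurrences included.

df %>%
  group_by(a, b) %>%
  filter(n() > 1) %>%
  ungroup()

## # A tibble: 2 x 2
##       a     b
##   <dbl> <dbl>
## 1     2     2
## 2     2     2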


When selecting columns in R for a reduced data set, you can often end up with duplicates.

These two lines give the same set of rows (the row order can differ, since summarise() sorts by the grouping variables). Each returns a de-duplicated data set with only the two selected columns:

distinct(mtcars, cyl, hp)

summarise(group_by(mtcars, cyl, hp))
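A quick check (a sketch, not part of the original answer): dplyr's setequal() compares two data frames while ignoring row order.

library(dplyr)

a <- distinct(mtcars, cyl, hp)
b <- summarise(group_by(mtcars, cyl, hp))

# setequal() ignores row order
setequal(a, b)
## [1] TRUE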
