
I have a dataframe like this

df.show(5)
kv      |list1      |list2                |p
[k1,v2] |[1,2,5,9]  |[5,1,7,9,6,3,1,4,9]  |0.5
[k1,v3] |[1,2,5,8,9]|[5,1,7,9,6,3,1,4,15] |0.9
[k2,v2] |[77,2,5,9] |[0,1,8,9,7,3,1,4,100]|0.01
[k5,v5] |[1,0,5,9]  |[5,1,7,9,6,3,1,4,3]  |0.3
[k9,v2] |[1,2,5,9]  |[5,1,7,9,6,3,1,4,200]|2.5

df.count()
5200158

I want to get the row that has the maximum p. The code below works for me, but I don't know whether there is a cleaner way:

import org.apache.spark.sql.functions.{col, max, struct}

// p goes first in the struct so that max() effectively orders by p
val f = df.select(max(struct(
    col("p") +: df.columns.collect { case x if x != "p" => col(x) }: _*
))).first()
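
For context, this works because Spark compares structs field by field, so listing p first makes max(struct(...)) behave like a max-by-p. Below is a minimal sketch of the same idea that also unwraps the returned struct; the names cols and best are just illustrative, and getDouble(0) assumes p is a DoubleType:

import org.apache.spark.sql.functions.{col, max, struct}

// p goes first so the struct comparison is driven by p
val cols = col("p") +: df.columns.filter(_ != "p").map(col)
val best = df.select(max(struct(cols: _*)).as("best")).first().getStruct(0)
// best.getDouble(0) is the maximum p; the remaining fields follow in column order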
1 Comment
  • What happens when there is a tie for max ... two+ rows should be returned. Commented Oct 9, 2023 at 21:36

2 Answers


Just order by and then take:

import org.apache.spark.sql.functions.desc

df.orderBy(desc("pp")).take(1)

or

df.orderBy(desc("pp")).limit(1).first

1 Comment

This is a simple solution, but a sort is typically O(n log n), whereas a rolling max over a window can be computed in a single pass. I'm not sure which sort algorithm is used under the hood, but a single pass is far faster. Also missing is what happens when there is a tie for max ... two or more rows should be returned.

You can also use window functions; this is especially useful if the logic for selecting the row becomes more complex than a global min/max:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.max
import spark.implicits._  // for the $ column syntax

df
  .withColumn("max_p", max($"p").over(Window.partitionBy()))  // empty partitionBy = one window over the whole dataset
  .where($"p" === $"max_p")
  .drop($"max_p")
  .first()
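
Regarding the tie case raised in the comments: the window approach above keeps every row with the maximum p until the final first(), so collecting the filtered rows instead of taking the first one returns all ties. Here is a minimal equivalent sketch that avoids the single-partition window entirely; it assumes a SparkSession named spark is in scope for the $ syntax and that p is a DoubleType:

import org.apache.spark.sql.functions.max
import spark.implicits._  // assumption: the SparkSession is in scope as `spark`

// compute the global max of p once, then keep every row that matches it,
// so rows tied for the maximum are all returned
val maxP = df.agg(max($"p")).first().getDouble(0)  // assumes p is a DoubleType
val topRows = df.where($"p" === maxP)
topRows.show()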

