I have this PySpark df:

+---------+----+----+----+----+----+----+----+----+----+                        
|partition|   1|   2|   3|   4|   5|   6|   7|   8|   9|
+---------+----+----+----+----+----+----+----+----+----+
|        7|null|null|null|null|null|null| 0.7|null|null|
|        1| 0.2| 0.1| 0.3|null|null|null|null|null|null|
|        8|null|null|null|null|null|null|null| 0.8|null|
|        4|null|null|null| 0.4| 0.5| 0.6|null|null| 0.9|
+---------+----+----+----+----+----+----+----+----+----+

from which I have combined the 9 right-hand columns into a single array column:

+---------+--------------------+                                                
|partition|            vec_comb|
+---------+--------------------+
|        7|      [,,,,,,,, 0.7]|
|        1|[,,,,,, 0.1, 0.2,...|
|        8|      [,,,,,,,, 0.8]|
|        4|[,,,,, 0.4, 0.5, ...|
+---------+--------------------+
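
(For reference, such a column can be produced with F.array over the nine value columns; a minimal sketch, assuming the columns are named '1' through '9' as shown above:)

from pyspark.sql import functions as F

# Collect the nine value columns into one array column (nulls are kept)
df = df.withColumn('vec_comb', F.array(*[F.col(str(i)) for i in range(1, 10)]))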

How can I remove the nulls from the arrays in the vec_comb column?

Expected output:

+---------+--------------------+                                                
|partition|            vec_comb|
+---------+--------------------+
|        7|               [0.7]|
|        1|     [0.1, 0.2, 0.3]|
|        8|               [0.8]|
|        4|[0.4, 0.5, 0.6, 0.9]|
+---------+--------------------+

I've tried (obviously wrong, but I can't wrap my head around this):

from pyspark.sql import functions as F
from pyspark.sql.types import ArrayType, FloatType

def clean_vec(array):
    new_array = []
    for element in array:
        # this check never matches: type() returns a Python type,
        # while FloatType() is a Spark schema object
        if type(element) == FloatType():
            new_array.append(element)
    return new_array

udf_clean_vec = F.udf(f=clean_vec, returnType=ArrayType(FloatType()))
df = df.withColumn('vec_comb_cleaned', udf_clean_vec('vec_comb'))

3 Answers

You can use the higher-order function filter to remove null elements:

import pyspark.sql.functions as F

df2 = df.withColumn('vec_comb_cleaned', F.expr('filter(vec_comb, x -> x is not null)'))

df2.show()
+---------+--------------------+--------------------+
|partition|            vec_comb|    vec_comb_cleaned|
+---------+--------------------+--------------------+
|        7|      [,,,,,, 0.7,,]|               [0.7]|
|        1|[0.2, 0.1, 0.3,,,...|     [0.2, 0.1, 0.3]|
|        8|      [,,,,,,, 0.8,]|               [0.8]|
|        4|[,,, 0.4, 0.5, 0....|[0.4, 0.5, 0.6, 0.9]|
+---------+--------------------+--------------------+
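
Since Spark 3.1 the same higher-order function is also exposed directly in the Python API, so you can avoid the SQL string; a minimal sketch:

from pyspark.sql import functions as F

# F.filter takes an array column and a lambda over its elements (Spark 3.1+)
df2 = df.withColumn('vec_comb_cleaned', F.filter('vec_comb', lambda x: x.isNotNull()))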

You can use a UDF instead, but it will be slower, since the array has to be serialized to Python and back, e.g.:

udf_clean_vec = F.udf(lambda x: [i for i in x if i is not None], 'array<float>')
df2 = df.withColumn('vec_comb_cleaned', udf_clean_vec('vec_comb'))

If you are working with the data as a pandas DataFrame instead (no PySpark-specific features needed), you could also build the list by filtering out the NaNs:

import pandas as pd

# Keep only the non-NaN values from columns 1-9 as a Python list per row
df['vec_comb'] = df.iloc[:, 1:10].apply(lambda r: list(filter(pd.notna, r)), axis=1)
df

# Output:
   partition     1     2     3     4     5     6     7     8     9              vec_comb
0          7   NaN   NaN   NaN   NaN   NaN   NaN   0.7   NaN   NaN                 [0.7]
1          1   0.2   0.1   0.3   NaN   NaN   NaN   NaN   NaN   NaN       [0.2, 0.1, 0.3]
2          8   NaN   NaN   NaN   NaN   NaN   NaN   NaN   0.8   NaN                 [0.8]
3          4   NaN   NaN   NaN   0.4   0.5   0.6   NaN   NaN   0.9  [0.4, 0.5, 0.6, 0.9]

And remove the old columns by selecting just the two you want:

df = df[['partition', 'vec_comb']]
df

# Output:
   partition              vec_comb
0          7                 [0.7]
1          1       [0.2, 0.1, 0.3]
2          8                 [0.8]
3          4  [0.4, 0.5, 0.6, 0.9]
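
Note that this is a pandas approach: if the data actually lives in a Spark DataFrame, you would first have to collect it to the driver, which is only viable for small data (spark_df below is an assumed name for the original PySpark DataFrame):

# toPandas() collects every row to the driver -- small data only
df = spark_df.toPandas()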


Spark 3.4+

F.array_compact('vec_comb')

Full example:

from pyspark.sql import functions as F
df = spark.createDataFrame(
    [([None, None, None, 0.7],),
     ([None, None, 0.1, 0.2, 0.3, None],),
     ([None, None, None, 0.8, None],),
     ([None, 0.4, 0.5, 0.6, None, 0.9],)],
    ['vec_comb'])
df.show(truncate=0)
# +---------------------------------+
# |vec_comb                         |
# +---------------------------------+
# |[null, null, null, 0.7]          |
# |[null, null, 0.1, 0.2, 0.3, null]|
# |[null, null, null, 0.8, null]    |
# |[null, 0.4, 0.5, 0.6, null, 0.9] |
# +---------------------------------+

df = df.withColumn('vec_comb', F.array_compact('vec_comb'))

df.show()
# +--------------------+
# |            vec_comb|
# +--------------------+
# |               [0.7]|
# |     [0.1, 0.2, 0.3]|
# |               [0.8]|
# |[0.4, 0.5, 0.6, 0.9]|
# +--------------------+
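
If you prefer the SQL expression form, the same function is available through F.expr (still Spark 3.4+); note that array_compact removes nulls but leaves NaN values in place:

# SQL-expression equivalent of F.array_compact (Spark 3.4+)
df = df.withColumn('vec_comb', F.expr('array_compact(vec_comb)'))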
