I have created a PySpark DataFrame as below:
from pyspark.sql.functions import udf, struct
from pyspark.sql.types import ArrayType, FloatType

df = spark.createDataFrame([([0.1, 0.2], 2), ([0.1], 3), ([0.3, 0.3, 0.4], 2)], ("a", "b"))
df.show()
+---------------+---+
|              a|  b|
+---------------+---+
|     [0.1, 0.2]|  2|
|          [0.1]|  3|
|[0.3, 0.3, 0.4]|  2|
+---------------+---+
Now, I am trying to parse column 'a' one row at a time, as below:
parse_col = udf(lambda row: [x for x in row.a], ArrayType(FloatType()))
new_df = df.withColumn("a_new", parse_col(struct([df[x] for x in df.columns if x == 'a'])))
new_df.show()
This works fine.
+---------------+---+---------------+
|              a|  b|          a_new|
+---------------+---+---------------+
|     [0.1, 0.2]|  2|     [0.1, 0.2]|
|          [0.1]|  3|          [0.1]|
|[0.3, 0.3, 0.4]|  2|[0.3, 0.3, 0.4]|
+---------------+---+---------------+
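For what it's worth, the schema also looks right to me here: a_new comes back as an array of floats, matching the ArrayType(FloatType()) I declared (output roughly as below):
new_df.printSchema()
root
 |-- a: array (nullable = true)
 |    |-- element: double (containsNull = true)
 |-- b: long (nullable = true)
 |-- a_new: array (nullable = true)
 |    |-- element: float (containsNull = true)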
But when I try to format the values, as below:
format_col = udf(lambda row: ["{:.2f}".format(x) for x in row.a], ArrayType(FloatType()))
new_df = df.withColumn("a_new", format_col(struct([df[x] for x in df.columns if x == 'a'])))
new_df.show()
it's not working: the values come out missing.
+---------------+---+-----+
|              a|  b|a_new|
+---------------+---+-----+
|     [0.1, 0.2]|  2|  [,]|
|          [0.1]|  3|   []|
|[0.3, 0.3, 0.4]|  2| [,,]|
+---------------+---+-----+
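The same comprehension works fine in plain Python, so the formatting expression itself does not seem to be the problem:
>>> ["{:.2f}".format(x) for x in [0.1, 0.2]]
['0.10', '0.20']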
I am using Spark v2.3.1.
Any idea what I am doing wrong here?
Thanks