I have a PySpark script that runs three queries like this:

spark.sql("select count(*) from student_table where student_id is NULL").show()
spark.sql("select count(*) from student_table where student_scores is NULL").show()
spark.sql("select count(*) from student_table where student_health is NULL").show()
I get results that look like this:

+--------+
|count(1)|
+--------+
|       0|
+--------+

+--------+
|count(1)|
+--------+
|     100|
+--------+

+--------+
|count(1)|
+--------+
|   24145|
+--------+
What I want is to combine these per-column null counts into a single dataframe, using pandas or PySpark. For example:

+----------+--------------+--------------+
|student_id|student_scores|student_health|
+----------+--------------+--------------+
|         0|           100|         24145|
+----------+--------------+--------------+
Thanks in advance if someone can help me out.
