In PySpark, suppose I have a DataFrame with columns named 'a1', 'a2', 'a3', ..., 'a99'. How do I apply an operation to each of them to create new columns with new names dynamically?
For example, to get new columns such as sum('a1') as 'total_a1', ..., sum('a99') as 'total_a99'.