I have the following two PySpark DataFrames, both with the same number of rows (say 100):
df1:
|_ Column_a
|_ Column_b
df2:
|_ Column_c
|_ Column_d
How do I create df_final, which still has 100 rows and contains the columns of both DataFrames, side by side?
df_final:
|_ Column_a
|_ Column_b
|_ Column_c
|_ Column_d
I looked at concat(), join(), and union(), but none of them seems right: union() stacks the rows of one DataFrame below the other, and join() needs a key column that these DataFrames don't share.
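For comparison, what I want is the pandas behavior of `pd.concat` with `axis=1` (a minimal 3-row illustration of the desired result, with made-up sample values):

```python
import pandas as pd

# Two frames with the same number of rows and disjoint columns
df1 = pd.DataFrame({"Column_a": [1, 2, 3], "Column_b": ["x", "y", "z"]})
df2 = pd.DataFrame({"Column_c": [10, 20, 30], "Column_d": ["p", "q", "r"]})

# Side-by-side concatenation: row count is unchanged,
# and the result has the columns of both frames
df_final = pd.concat([df1, df2], axis=1)

print(df_final.columns.tolist())  # ['Column_a', 'Column_b', 'Column_c', 'Column_d']
print(len(df_final))              # 3
```

I'm looking for the idiomatic way to get this same result in PySpark.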