I have a PySpark DataFrame, df1:
data1 = [("u1", 'w1', 20), ("u2", 'w1', 30), ("u3", 'w2', 40)]
df1 = spark.createDataFrame(data1, ["ID", "week", "var"])
df1.show()
+---+----+---+
| ID|week|var|
+---+----+---+
| u1| w1| 20|
| u2| w1| 30|
| u3| w2| 40|
+---+----+---+
I have another PySpark DataFrame, df2:
data2 = [("u1", 'w1', 20), ("u1", 'w2', 10), ("u2", 'w1', 30), ("u3", 'w2', 40), ("u3", 'w2', 50), ("u4", 'w1', 100), ("u4", 'w2', 0)]
df2 = spark.createDataFrame(data2, ["ID", "week", "var"])
df2.show()
+---+----+---+
| ID|week|var|
+---+----+---+
| u1| w1| 20|
| u1| w2| 10|
| u2| w1| 30|
| u3| w2| 40|
| u3| w2| 50|
| u4| w1|100|
| u4| w2| 0|
+---+----+---+
I want to keep only the rows of df2 whose ID is present in df1.ID.
The desired output is:
+---+----+---+
| ID|week|var|
+---+----+---+
| u1| w1| 20|
| u1| w2| 10|
| u2| w1| 30|
| u3| w2| 40|
| u3| w2| 50|
+---+----+---+
How can I get this done?