I'm loading a Parquet file as a Spark Dataset. I can query it and create new Datasets from the query results. Now I would like to add a new column ("hashkey") to the Dataset and generate its values from an existing column (e.g. md5sum(nameValue)). How can I achieve this?
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
            .appName("Java Spark SQL basic example")
            .config("spark.master", "local")
            .config("spark.sql.warehouse.dir", "file:///C:\\spark_warehouse")
            .getOrCreate();

    Dataset<Row> df = spark.read().parquet("meetup.parquet");
    df.show();

    df.createOrReplaceTempView("tmpview");
    Dataset<Row> namesDF = spark.sql("SELECT * FROM tmpview WHERE name LIKE 'Spark-%'");
    namesDF.show();
}
The output looks like this:
+-------------+-----------+-----+---------+--------------------+
| name|meetup_date|going|organizer| topics|
+-------------+-----------+-----+---------+--------------------+
| Spark-H20| 2016-01-01| 50|airisdata|[h2o, repeated sh...|
| Spark-Avro| 2016-01-02| 60|airisdata| [avro, usecases]|
|Spark-Parquet| 2016-01-03| 70|airisdata| [parquet, usecases]|
+-------------+-----------+-----+---------+--------------------+
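To be concrete about the kind of value I mean: a standard 32-character MD5 hex digest of the name column. Spark ships such a function as org.apache.spark.sql.functions.md5, so I assume the answer looks something like df.withColumn("hashkey", md5(col("name"))) or SELECT *, md5(name) AS hashkey FROM tmpview, but I haven't gotten it working. The plain-JDK sketch below (class and method names are made up for illustration) shows the hash values I expect for the sample rows:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class HashKeyDemo {
    // Mirrors Spark SQL's md5(): MD5 digest of the UTF-8 bytes,
    // rendered as a zero-padded 32-character lowercase hex string.
    static String md5Hex(String value) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(value.getBytes(StandardCharsets.UTF_8));
            return String.format("%032x", new BigInteger(1, digest));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // The values the intended "hashkey" column would hold per row.
        for (String name : new String[] {"Spark-H20", "Spark-Avro", "Spark-Parquet"}) {
            System.out.println(name + " -> " + md5Hex(name));
        }
    }
}
```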