scala-spark-example

Example Spark application that counts word occurrences in a file located on HDFS

Application structure

Building

In this version the application is packaged into a jar using the standard sbt package command, which produces a jar containing only application-specific classes. This is fine when the application uses only libraries provided by either Spark or Hadoop.
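A minimal build.sbt for such a setup might look like the sketch below; the Scala and Spark versions are illustrative assumptions, not taken from this repository.

```scala
// build.sbt -- minimal sketch; the versions shown are assumptions
name := "scala-spark-example"

scalaVersion := "2.12.18"

// Spark is supplied by the cluster at runtime, so it is marked "provided"
// and does not end up in the packaged jar
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "3.3.2" % "provided",
  "org.apache.spark" %% "spark-sql"  % "3.3.2" % "provided"
)
```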

When other libraries are needed, the sbt-assembly plugin is included. In that case the application needs to be packaged into a fat jar using the plugin's assembly command, and this jar then needs to be put on HDFS and provided to Spark to be run. Care needs to be taken in the sbt dependency configuration to mark all libraries already available within Spark or Hadoop as provided, so they are not unnecessarily included within the fat jar.
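A sketch of the plugin and merge-strategy configuration, assuming sbt-assembly 1.x (the plugin version and the merge strategy below are illustrative, not taken from this repository):

```scala
// project/plugins.sbt -- makes the assembly command available;
// the plugin version is an assumption
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "1.2.0")
```

```scala
// build.sbt additions -- with Spark and Hadoop dependencies marked "provided"
// (as above), only application code and genuinely extra libraries end up in
// the fat jar; duplicate files from those extras are resolved here
assembly / assemblyMergeStrategy := {
  case PathList("META-INF", _*) => MergeStrategy.discard
  case _                        => MergeStrategy.first
}
```

Running sbt assembly then produces the fat jar, which can be copied to HDFS and referenced when submitting the application to Spark.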

Functionality

The application reads the file provided as a parameter and located on HDFS, counts the words in it, and outputs the words and their counts, sorted alphabetically, to the output file.
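A sketch of that flow using the RDD API is shown below; the object name, the argument layout (args(0) as input path, args(1) as output path), and the whitespace tokenization are assumptions rather than details taken from this repository.

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical entry point; the argument layout is an assumption:
// args(0) = input file on HDFS, args(1) = output directory on HDFS
object WordCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("scala-spark-example")
      .getOrCreate()

    spark.sparkContext
      .textFile(args(0))                             // read the input file from HDFS
      .flatMap(_.split("\\s+"))                      // split lines into words on whitespace
      .filter(_.nonEmpty)                            // drop empty tokens
      .map(word => (word, 1))                        // pair each word with a count of 1
      .reduceByKey(_ + _)                            // sum the counts per word
      .sortByKey()                                   // sort alphabetically by word
      .map { case (word, count) => s"$word $count" } // format one line per word
      .saveAsTextFile(args(1))                       // write the results back to HDFS

    spark.stop()
  }
}
```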

Note

  • 0.0.0.0 has to be passed as the value of the spark.driver.bindAddress configuration parameter from within the application, as shown in the sketch below
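A minimal sketch of applying that setting when building the session; the builder chain mirrors the word count sketch above and is not taken verbatim from this repository.

```scala
import org.apache.spark.sql.SparkSession

// the bind address is set from the application, as required by the note above
val spark = SparkSession.builder()
  .appName("scala-spark-example")
  .config("spark.driver.bindAddress", "0.0.0.0")
  .getOrCreate()
```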

Links