Example Spark application that counts word occurrences in a provided file located on HDFS
In this version the application is packaged into a jar using the standard sbt package command, which produces a jar containing only the application-specific classes. This is fine when the application uses only libraries provided by either Spark or Hadoop.
When other libraries are used, the sbt-assembly plugin is included instead; in that case the application needs to be packaged into a fat jar using the plugin's assembly command, and this jar then needs to be put on HDFS and provided to Spark to be run. Care needs to be taken in the sbt dependency configuration to mark all libraries already available within Spark or Hadoop as provided, so they are not unnecessarily included within the fat jar.
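As a sketch, a build.sbt for the fat-jar variant might mark the Spark artifacts as provided while leaving extra libraries in the default scope; the module list and the extra dependency below are illustrative assumptions, with versions matching the environment described later in this document:

```scala
// build.sbt -- illustrative sketch; the module list and the extra dependency
// are assumptions (versions match Spark 2.4.3 / Scala 2.11.12 noted below)
name := "spark-wordcount"
scalaVersion := "2.11.12"

libraryDependencies ++= Seq(
  // available on the cluster at runtime, so excluded from the fat jar
  "org.apache.spark" %% "spark-core" % "2.4.3" % "provided",
  // a hypothetical extra library: stays in Compile scope and is bundled
  // into the fat jar by sbt-assembly's `assembly` task
  "com.typesafe" % "config" % "1.3.4"
)
```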
The application reads the file provided as a parameter and located on HDFS, counts the words in it, and writes the words and their counts, sorted alphabetically, to the output file.
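The transformation itself is the classic word-count chain. A minimal sketch of the core logic on a plain Scala collection (in the actual application the same chain runs on an RDD, with `reduceByKey` and `sortByKey` in place of `groupBy` and `sortBy`; the function name is illustrative):

```scala
// Core word-count transformation, shown on a plain Scala collection.
def countWords(lines: Seq[String]): Seq[(String, Int)] =
  lines
    .flatMap(_.split("\\s+"))                 // tokenize on whitespace
    .filter(_.nonEmpty)                       // drop empty tokens
    .groupBy(identity)                        // RDD: map(w => (w, 1)).reduceByKey(_ + _)
    .map { case (w, ws) => (w, ws.length) }   // word -> occurrence count
    .toSeq
    .sortBy(_._1)                             // alphabetical order, as with sortByKey
```

For example, `countWords(Seq("to be or", "not to be"))` yields the pairs for be, not, or, to in alphabetical order.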
- 0.0.0.0 has to be passed as the value of the spark.driver.bindAddress configuration parameter from the application
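A sketch of setting this from the application code (the app name is an assumption; binding to 0.0.0.0 lets executors on other nodes reach the driver):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Illustrative configuration fragment; app name is an assumption.
val conf = new SparkConf()
  .setAppName("WordCount")
  .set("spark.driver.bindAddress", "0.0.0.0")
val sc = new SparkContext(conf)
```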
- Hadoop/HDFS: 3.2.0
- HDFS FS shell reference
- for the latest versions it is recommended to use the hdfs command instead of the hadoop command
- Spark: 2.4.3
- this install included Scala 2.11.12
- this install did not include Hadoop and instead used jars from the above separate Hadoop installation
- Submitting applications in Spark
- Running Spark on K8s
- Spark configuration parameters
- Docker
- Kubernetes
- sbt-assembly: 0.14.9
- Scala: 2.11.12 (this exact version as it was the version included in Spark)
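Putting the input file (and, for the fat-jar variant, the assembly jar itself) on HDFS can look like this sketch; all paths and file names are illustrative assumptions:

```shell
# Illustrative HDFS FS shell commands; all paths are assumptions.
hdfs dfs -mkdir -p /data
hdfs dfs -put input.txt /data/input.txt
# fat-jar variant only: the assembly jar also goes to HDFS
hdfs dfs -put target/scala-2.11/wordcount-assembly.jar /apps/
# inspect the result after the run
hdfs dfs -cat /data/output/part-00000
```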
- Infrastructure
- AWS, 3 nodes t3a.xlarge (4 processors, 16GB memory)
- For simplicity, all network traffic on all TCP and UDP ports is enabled between the nodes
- ip-172-31-36-93 : k8s Master (also serves as Worker node), Spark standalone Master and Slave node
- ip-172-31-38-214 : k8s Worker node, Spark standalone Slave node
- ip-172-31-45-170 : k8s Worker node, Spark standalone Slave node