Introduction to Spark-Submit: A Comprehensive Guide to Submitting Spark Applications

Naveen Nelamali
Published in SparkByExamples
Jan 11, 2024 · 5 min read

What is Spark Submit?

Spark Submit is a command-line tool that comes with Apache Spark, a powerful open-source distributed computing system designed for large-scale data processing.

Why do you need the spark-submit Command?

Spark Submit allows users to submit their Spark applications to a cluster for execution. It is a crucial component of the Spark ecosystem and plays a vital role in developing, testing, and deploying Spark applications.

Apache Spark applications are written in various programming languages such as Scala, Java, Python, and R. Once the application is developed, it needs to be submitted to a cluster for execution. This is where Spark Submit comes into play. By using Spark Submit, users can submit their applications to the cluster in a few simple steps.

Spark Submit Configurations

One of the significant advantages of using Spark Submit is that it allows users to specify various configuration parameters while submitting the applications. These configuration parameters include cluster URL, application name, number of executors, executor memory, and many others.

Configure Driver Memory

Spark Driver Memory refers to the amount of memory allocated to the driver process. The driver is the process responsible for coordinating the tasks and executing the application on the cluster.

The amount of memory allocated to the driver is important because it determines the maximum amount of data that can be processed by the driver. If the driver memory is too low, it may cause out-of-memory errors or slow down the application. On the other hand, if the driver memory is too high, it may lead to unnecessary memory allocation and overhead, which can reduce the performance of the application.

The default value of driver memory in Spark is 1g, but it can be increased or decreased using the `--driver-memory` command-line option or the `spark.driver.memory` configuration property.
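
For example, either of the following forms raises the driver memory to 4g (an illustrative value, not a recommendation); the command-line flag and the configuration property are interchangeable:

./bin/spark-submit --driver-memory 4g --class <main-class> <application-jar>
./bin/spark-submit --conf spark.driver.memory=4g --class <main-class> <application-jar>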

Configure Executor Memory

The amount of memory allocated to the executor is important because it determines the maximum amount of data that can be processed by each task. If the executor memory is too low, it may cause out-of-memory errors or slow down the application. On the other hand, if the executor memory is too high, it may lead to unnecessary memory allocation and overhead, which can reduce the performance of the application.

The default value of executor memory in Spark is 1g, but it can be increased or decreased using the `--executor-memory` command-line option or the `spark.executor.memory` configuration property.

Configure Executor Cores

Executor Cores refers to the number of parallel processing units available for a single executor process on a worker node in the cluster. Each executor can have one or more cores assigned to it, depending on the available resources and the requirements of the Spark application.

The optimal number of executor cores can be determined based on the specific requirements of the application, the available resources in the cluster, and the size of the data being processed. As a general rule, it is recommended to allocate 5 cores per executor for CPU-bound tasks and 1 core per executor for I/O-bound tasks.
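
As a hedged sketch (the values are illustrative, not tuning advice), executor memory and cores are usually set together; here each executor gets 8g of memory and 4 cores, so it can run up to 4 tasks in parallel:

./bin/spark-submit \
--executor-memory 8g \
--executor-cores 4 \
--class <main-class> <application-jar>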

What are the modes of Submitting Spark Applications?

There are two modes in which Spark Submit can be run: client mode and cluster mode. Use `--deploy-mode` to specify the mode; a short example follows the two descriptions below.

Client mode: The Spark driver runs on the client machine, and the application is submitted to the cluster for execution.

Cluster mode: The Spark driver runs on one of the nodes in the cluster, and the application is also executed on the cluster.
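
For example (the class and jar here are placeholders), the same application can be submitted in either mode:

./bin/spark-submit --master yarn --deploy-mode client --class <main-class> <application-jar>
./bin/spark-submit --master yarn --deploy-mode cluster --class <main-class> <application-jar>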

What Cluster Managers Does Spark Support?

Apache Spark supports submitting applications to different cluster managers. Use `--master` to specify the cluster manager; a few example values are sketched after the list below.

YARN, Mesos, Standalone, Kubernetes, and local.

YARN: Use `yarn` if your cluster resources are managed by Hadoop YARN.

Mesos: Use `mesos://HOST:PORT` for the Mesos cluster manager, replacing HOST and PORT with those of your Mesos master.

Standalone: Use `spark://HOST:PORT` for the Standalone cluster, replacing HOST and PORT with those of your standalone master.

Kubernetes: Use `k8s://HOST:PORT` for Kubernetes, replacing HOST and PORT with those of the Kubernetes API server. This connects over HTTPS by default; to connect to an unsecured API server, use `k8s://http://HOST:PORT`.

local: Use `local` to run locally with a single worker thread.
Use `local[K]`, replacing K with the number of cores to use; this runs the application with K worker threads.
Use `local[K,F]`, replacing F with the maximum number of failures allowed per task before the job gives up.
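
Here are a few illustrative `--master` values; the hosts and ports are placeholders to replace with your own:

--master yarn                                          # Hadoop YARN
--master spark://node1.example.com:7077                # Standalone cluster
--master mesos://mesos-master.example.com:5050         # Mesos
--master k8s://https://k8s-apiserver.example.com:6443  # Kubernetes API server
--master local[4]                                      # 4 local worker threads
--master local[4,2]                                    # 4 threads, up to 2 task failures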

How do you Run a Spark Application with the spark-submit command?

The first step in submitting a Spark application is to create a Spark application package. A Spark application package is a bundle of all the files required to run the application. This includes the code files, configuration files, and any dependent libraries. Once the application package is created, it needs to be submitted to the cluster using Spark Submit.

./bin/spark-submit \
--master <master-url> \
--deploy-mode <deploy-mode> \
--conf <key>=<value> \
--driver-memory <value>g \
--executor-memory <value>g \
--executor-cores <number of cores> \
--jars <comma separated dependencies> \
--class <main-class> \
<application-jar> \
[application-arguments]

Here is an example of submitting a Spark application:

./bin/spark-submit \
--master yarn \
--deploy-mode cluster \
--driver-memory 8g \
--executor-memory 16g \
--executor-cores 2 \
--files /path/log4j.properties,/path/file2.conf,/path/file3.json \
--class org.apache.spark.examples.SparkPi \
/spark-home/examples/jars/spark-examples_replace-spark-version.jar 80

These parameters can significantly impact the performance and resource utilization of the application. By being able to specify these parameters, users can optimize their application’s performance and resource utilization.

How to Submit a PySpark Application?

Similarly, you can also submit a PySpark (Spark with Python) application using the same spark-submit command.

./bin/spark-submit \
--master yarn \
--deploy-mode cluster \
--driver-memory 8g \
--executor-memory 16g \
--executor-cores 2 \
--py-files file1.py,file2.py,file3.zip \
wordByExample.py

Another significant advantage of using Spark Submit is that it handles the complexities of the distributed environment. When an application is submitted to a cluster, Spark Submit takes care of distributing the application files, setting up the environment, launching the driver program, and managing the execution of the application. This makes it much easier for users to deploy and manage their Spark applications.

Conclusion

In conclusion, Spark Submit is a command-line tool that is an integral part of the Spark ecosystem. It allows users to submit Spark applications to a cluster for execution and provides various functionalities such as running examples, testing applications, and performing administrative tasks. By being able to specify configuration parameters and handling the complexities of the distributed environment, Spark Submit makes it easier for users to develop, test, and deploy their Spark applications.

Happy Learning !!
