Apache-Spark

How do I run Spark on a cluster with Slurm?

  • February 15, 2019

I have written a program, example.jar, that uses a Spark context. How can I run it on a cluster that uses Slurm? This is related to https://stackoverflow.com/questions/29308202/running-spark-on-top-of-slurm, but the answers there are not very detailed and not on serverfault.

To run an application with a Spark context, you first need to run a Slurm job which starts a master and some workers. There are some things to watch out for with Slurm:

  • Don't start Spark as a daemon
  • Make the Spark workers use only as many cores and as much memory as requested for the Slurm job
  • In order to run the master and a worker inside the same job, you have to branch somewhere in the script (a stripped-down sketch follows this list; the full batch script is further down)
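
To make the last point concrete before the full script: sbatch runs the script once, the script then re-launches itself on every node via srun, and $SLURM_PROCID decides the role. A minimal illustration of that idea only, not the script you would actually use:

#!/bin/bash
# Minimal branching sketch; the real batch script below also copies itself
# to a shared folder because $0 may not be visible on all nodes.
if [ "$1" != 'srunning' ]; then
   # started by sbatch: re-launch this very script once per Slurm task
   srun "$0" 'srunning'
elif [ "$SLURM_PROCID" -eq 0 ]; then
   echo "task 0: start the Spark master here"
else
   echo "task $SLURM_PROCID: start a Spark worker here"
fi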

I am using a Spark installation in $HOME/spark-1.5.2-bin-hadoop2.6/. Remember to replace <username> and <shared folder> with valid values in the script.

#!/bin/bash
#start_spark_slurm.sh

#SBATCH --nodes=3
#  ntasks per node MUST be one, because multiple workers per node don't
#  work well with Slurm + Spark in this script (they would need increasing
#  ports among other things)
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=500
#  Beware! $HOME will not be expanded here; invalid paths make Slurm jobs
#  hang indefinitely in state CG (completing) when scancel is called!
#SBATCH --output="/home/<username>/spark/logs/%j.out"
#SBATCH --error="/home/<username>/spark/logs/%j.err"
#SBATCH --time=01:00:00

# This section will be run when started by sbatch
if [ "$1" != 'srunning' ]; then
   this=$0
   # I experienced problems with some nodes not finding the script:
   #   slurmstepd: execve(): /var/spool/slurm/job123/slurm_script:
   #   No such file or directory
   # that's why this script is copied to a shared location that all
   # nodes can access:
   script=/<shared folder>/${SLURM_JOBID}_$( basename -- "$0" )
   cp "$this" "$script"

   # This might not be necessary on all clusters
   module load scala/2.10.4 java/jdk1.7.0_25 cuda/7.0.28

   export sparkLogs=$HOME/spark/logs
   export sparkTmp=$HOME/spark/tmp
   mkdir -p -- "$sparkLogs" "$sparkTmp"

   export SPARK_ROOT=$HOME/spark-1.5.2-bin-hadoop2.6/
   export SPARK_WORKER_DIR=$sparkLogs
   export SPARK_LOCAL_DIRS=$sparkLogs
   export SPARK_MASTER_PORT=7077
   export SPARK_MASTER_WEBUI_PORT=8080
   export SPARK_WORKER_CORES=$SLURM_CPUS_PER_TASK
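   # Give each Spark daemon (master or worker) half of the job's per-node
   # memory, e.g. (500 MB/CPU * 4 CPUs / 2) = 1000m with the values above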
   export SPARK_DAEMON_MEMORY=$(( $SLURM_MEM_PER_CPU * $SLURM_CPUS_PER_TASK / 2 ))m
   export SPARK_MEM=$SPARK_DAEMON_MEMORY

   srun "$script" 'srunning'
# If run by srun, then decide by $SLURM_PROCID whether we are master or worker
else
   source "$SPARK_ROOT/sbin/spark-config.sh"
   source "$SPARK_PREFIX/bin/load-spark-env.sh"
   if [ "$SLURM_PROCID" -eq 0 ]; then
       export SPARK_MASTER_IP=$( hostname )
       MASTER_NODE=$( scontrol show hostname $SLURM_NODELIST | head -n 1 )

       # The saved IP address + port is needed later for submitting jobs
       echo "spark://$SPARK_MASTER_IP:$SPARK_MASTER_PORT" > "$sparkLogs/${SLURM_JOBID}_spark_master"

       "$SPARK_ROOT/bin/spark-class" org.apache.spark.deploy.master.Master \
           --ip "$SPARK_MASTER_IP"                                         \
           --port "$SPARK_MASTER_PORT "                                    \
           --webui-port "$SPARK_MASTER_WEBUI_PORT"
   else
       # $(scontrol show hostname) expands e.g. host20[39-40] into
       # host2039 and host2040, and head -n 1 picks the first one.
       # This assumes that SLURM_PROCID=0 corresponds to the first
       # node in SLURM_NODELIST!
       MASTER_NODE=spark://$( scontrol show hostname $SLURM_NODELIST | head -n 1 ):7077
       "$SPARK_ROOT/bin/spark-class" org.apache.spark.deploy.worker.Worker $MASTER_NODE
   fi
fi

Now start the sbatch job and then submit example.jar to it:

mkdir -p -- "$HOME/spark/logs"
jobid=$( sbatch ./start_spark_slurm.sh )
jobid=${jobid##Submitted batch job }
MASTER_WEB_UI=''
while [ -z "$MASTER_WEB_UI" ]; do 
   sleep 1s
   if [ -f "$HOME/spark/logs/$jobid.err" ]; then
       MASTER_WEB_UI=$( sed -n -r 's|.*Started MasterWebUI at (http://[0-9.:]*)|\1|p' "$HOME/spark/logs/$jobid.err" )
   fi
done
MASTER_ADDRESS=$( cat -- "$HOME/spark/logs/${jobid}_spark_master" ) 
"$HOME/spark-1.5.2-bin-hadoop2.6/bin/spark-submit" --master "$MASTER_ADDRESS" example.jar
firefox "$MASTER_WEB_UI"
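
Once spark-submit has returned, the master and workers keep their allocation until the --time limit from the batch script is reached, so it makes sense to cancel the job explicitly (a small addition of mine, using the jobid captured above):

# Release the allocation after the application has finished; otherwise the
# master and workers keep running until --time expires.
scancel "$jobid"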

Source: https://serverfault.com/questions/776687