Civil-Comp Proceedings
ISSN 1759-3433
CCP: 112
PROCEEDINGS OF THE SIXTH INTERNATIONAL CONFERENCE ON PARALLEL, DISTRIBUTED, GPU AND CLOUD COMPUTING FOR ENGINEERING
Paper 26

Automatic, cloud-independent, scalable Spark cluster deployment in cloud

E. Nagy, Á. Hajnal, I. Pintye and P. Kacsuk

Institute for Computer Science and Control, Hungarian Academy of Sciences (MTA SZTAKI), Budapest, Hungary

Full Bibliographic Reference for this paper
E. Nagy, Á. Hajnal, I. Pintye, P. Kacsuk, "Automatic, cloud-independent, scalable Spark cluster deployment in cloud", in "Proceedings of the Sixth International Conference on Parallel, Distributed, GPU and Cloud Computing for Engineering", Civil-Comp Press, Stirlingshire, UK, Paper 26, 2019. doi:10.4203/ccp.112.26
Keywords: artificial intelligence, big data, cloud computing, distributed computing, orchestration, Spark.

Summary
Cloud-based Big Data and Machine Learning applications are becoming increasingly popular in industry, as well as in the academic and education sectors. In many cases, clouds are used to support the computation and storage needs of such applications by building and managing multi-VM virtual infrastructures (clusters) on an Infrastructure-as-a-Service (IaaS) cloud system, either temporarily or for a longer period of time. Recently, a popular choice for carrying out big data analytics or machine learning is Apache Spark, an open-source distributed cluster computing system. For best performance, it is also recommended to use the Hadoop Distributed File System (HDFS) alongside Spark, with which Spark can process data in a data-locality-aware way, i.e. move the computation to the site of the data and thus avoid data movement overhead. Manually deploying and configuring such a cluster in a cloud is a non-trivial, error-prone and tedious task, requiring considerable expertise in both cloud and Spark-Hadoop technologies. It might take days, which may even exceed the time required to perform the actual data processing. After the infrastructure is discarded when the current computation is done, the same infrastructure might have to be rebuilt for a subsequent computation, or when another cloud provider is chosen.
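
To illustrate the data-locality-aware processing mentioned above, the following minimal PySpark sketch reads a dataset from HDFS and runs a trivial aggregation on the cluster. The master URL, NameNode address and file path are illustrative placeholders, not values taken from the paper.

    # Minimal PySpark sketch: process a file stored in HDFS on a standalone Spark cluster.
    # Host names, ports and the input path below are illustrative placeholders.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .master("spark://spark-master:7077")   # standalone Spark master deployed on the cluster
        .appName("hdfs-locality-example")
        .getOrCreate()
    )

    # Reading directly from HDFS lets Spark schedule tasks on the DataNodes
    # that hold the file blocks, so computation moves to the data.
    lines = spark.read.text("hdfs://spark-master:9000/user/data/input.txt")

    # A trivial aggregation executed across the worker nodes.
    word_count = (
        lines.rdd
        .flatMap(lambda row: row.value.split())
        .count()
    )
    print("number of words:", word_count)

    spark.stop()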

This paper proposes a solution for managing (creating and discarding) such infrastructures rapidly, easily and efficiently in clouds, with the guarantee that they are consistent, properly configured, well integrated, controllable, scalable and fault-tolerant. An important advantage of the proposed solution is that the main parameters of the Apache Spark architecture (such as the size of the cluster, or the number of CPU cores and the memory configuration per worker) can be customized, and the computing capacity required for processing can be scaled in a cloud-independent way. To fulfil these goals we used a hybrid-cloud orchestration tool called Occopus, developed by MTA SZTAKI. Occopus uses so-called descriptors to define the required infrastructure; these contain the definitions of the node types (Spark Master and Worker). The Spark Master also acts as the HDFS NameNode, and the Worker nodes are at the same time DataNodes of the HDFS system, enabling data-locality-aware processing. The number of worker nodes is configurable before deployment and can be scaled even at runtime. Occopus uses these descriptors (the infrastructure descriptor, the node definition descriptors and optionally cloud-init files for contextualization) to deploy the required infrastructure in the target cloud. Occopus supports interfaces to various clouds, including EC2, OpenStack Nova, Docker, CloudBroker and CloudSigma, which allows the very same infrastructure to be deployed easily in almost any commercial or private cloud providing such an interface (Amazon, OpenStack, OpenNebula, etc.). As a result of this work, the MTA Cloud research community is now able to create a scalable Spark-HDFS cluster in the cloud of the Hungarian Academy of Sciences (MTA Cloud) in a user-friendly way, using only a few commands. Researchers can thus focus on their own work in their specific domain of knowledge, without having to know technical details about cloud computing, Spark deployment or networking at all.
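
As a rough illustration of how such an infrastructure might be described, the following Python sketch assembles a simplified, hypothetical infrastructure descriptor with one Spark Master node and a scalable pool of Spark Worker nodes, and serializes it to YAML. The field names follow the general structure of Occopus infrastructure descriptors but are simplified and are not guaranteed to match the exact schema used by the authors.

    # Hypothetical sketch of an Occopus-style infrastructure descriptor for a
    # Spark/HDFS cluster; field names are illustrative, not the exact Occopus schema.
    import yaml

    infra_description = {
        "user_id": "researcher@example.org",        # placeholder user identifier
        "infra_name": "spark-hdfs-cluster",
        "nodes": [
            {
                "name": "spark-master",             # Spark Master, also HDFS NameNode
                "type": "spark-master-node",
                "scaling": {"min": 1, "max": 1},    # exactly one master
            },
            {
                "name": "spark-worker",             # Spark Workers, also HDFS DataNodes
                "type": "spark-worker-node",
                "scaling": {"min": 3, "max": 10},   # worker pool scalable at runtime
            },
        ],
        # Workers are started only after the master is up.
        "dependencies": [["spark-worker", "spark-master"]],
    }

    with open("infra-spark-cluster.yaml", "w") as f:
        yaml.safe_dump(infra_description, f, sort_keys=False)

In the workflow described in the paper, such a descriptor (together with the node definitions and optional cloud-init files) is passed to the Occopus tooling, which deploys the infrastructure in the target cloud and can later scale the worker pool within the configured bounds.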
