In this post I will show you how to create a fully operational environment in 5 minutes, which will include:
- Apache Airflow WebServer
- Apache Airflow Worker
- Apache Airflow Scheduler
- Flower - is a…
In today's world, we often encounter requirements for real-time data processing. There are quite a few tools on the market that allow us to achieve this. At the forefront we…
In this short post I will show you how you can change the name of a file (or files) created by Apache Spark on HDFS, or simply rename or delete any file.
In this post I will show you how to run the shell command by programming in Scala and how you can use it in Apache Spark.
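The standard way to run shell commands from Scala is the `scala.sys.process` package, which also works from Spark driver code. A minimal sketch (the commands shown are just examples):

```scala
import scala.sys.process._

object ShellFromScala {
  def main(args: Array[String]): Unit = {
    // `!!` runs the command and returns its standard output as a String
    val listing: String = "ls -l".!!
    println(listing)

    // `!` runs the command and returns only its exit code
    val exitCode: Int = "ls -l".!
    println(s"exit code: $exitCode")

    // Passing a Seq avoids shell-quoting issues for arguments with spaces
    val out: String = Seq("echo", "hello from Scala").!!
    println(out.trim)
  }
}
```

Note that `!!` throws an exception when the command exits with a non-zero status, so use `!` when you want to inspect the exit code yourself.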
If you want to save a DataFrame as a file on HDFS, you may find that it is written as many part files. This is the correct behavior and results from the parallel processing in Apache Spark.
We will use the FileSystem and Path classes from the org.apache.hadoop.fs library to achieve it.
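A minimal sketch of renaming and deleting files on HDFS with the `FileSystem` and `Path` classes; the paths used here are hypothetical examples:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Inside a Spark job you would typically obtain the configuration from
// spark.sparkContext.hadoopConfiguration instead of creating a new one
val fs: FileSystem = FileSystem.get(new Configuration())

// Rename (move) a file on HDFS; returns true on success
val renamed: Boolean = fs.rename(
  new Path("/tmp/output/part-00000"),
  new Path("/tmp/output/report.csv")
)

// Delete a file; the second argument enables recursive delete for directories
val deleted: Boolean = fs.delete(new Path("/tmp/output/_SUCCESS"), false)
```

Both `rename` and `delete` return a `Boolean` rather than throwing on failure, so it is worth checking the result in production code.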
Cloudera is one of the three major players on the market, alongside Hortonworks and MapR, that distribute Hadoop for general use.
Today I will show you how you can use the machine learning (ML) libraries that are available in Spark under the name Spark MLlib.
In this tutorial I will show you how you can easily install Apache Spark on CentOS.
A simple, short tip on how to check whether a table exists in Hive using Spark.
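One way to do this, assuming Spark 2.x with Hive support enabled (the database and table names below are placeholders), is the `Catalog` API:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("TableExistsCheck")  // hypothetical application name
  .enableHiveSupport()
  .getOrCreate()

// Catalog.tableExists is available since Spark 2.x
val exists: Boolean = spark.catalog.tableExists("my_db", "my_table")

// An alternative using plain SQL, which also works against Hive
val existsViaSql: Boolean =
  spark.sql("SHOW TABLES IN my_db")
    .where("tableName = 'my_table'")
    .count() > 0
```

The `Catalog` variant is the more direct option; the `SHOW TABLES` query is a fallback when you prefer to stay in SQL.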