In this short post I will show you how to rename or delete files written to HDFS by Apache Spark, or any other file on HDFS.
In this post I will show you how to run shell commands from Scala code and how you can use this technique in Apache Spark.
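As a quick illustration, Scala's standard library ships `scala.sys.process`, which makes launching a shell command a one-liner. The commands below (`echo`, `ls`) are only placeholders for whatever command you need to run:

```scala
import scala.sys.process._

// `!!` runs the command and returns its standard output as a String;
// it throws an exception if the command exits with a non-zero code.
val output: String = Seq("echo", "hello from the shell").!!
println(output.trim)

// `!` runs the command and returns only its exit code.
val exitCode: Int = Seq("ls", "/tmp").!
println(s"exit code: $exitCode")
```

Passing the command as a `Seq` (program plus arguments) avoids shell quoting issues compared with building one command string.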
If you save a DataFrame to HDFS, you may be surprised that it is written as many files. This is correct behavior and a consequence of Spark's parallelism: each partition is written as a separate part file.
We will use the FileSystem and Path classes from the org.apache.hadoop.fs package to rename or delete these files.
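A minimal sketch of that API, assuming hadoop-common is on the classpath. On a cluster, `FileSystem.get` resolves to HDFS via core-site.xml; with an empty `Configuration` it falls back to the local filesystem, which lets you try the calls without a cluster. The directory and file names below are placeholders:

```scala
import java.nio.file.Files
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Obtain a filesystem handle (HDFS on a cluster, local FS here).
val fs: FileSystem = FileSystem.get(new Configuration())

// Create a scratch directory with one "part" file, mimicking Spark output.
val dir = Files.createTempDirectory("spark-out")
val part = new Path(dir.toString, "part-00000")
fs.create(part).close()

// Rename (move) the part file to a friendlier name; returns true on success.
val renamed: Boolean = fs.rename(part, new Path(dir.toString, "result.csv"))

// Delete a path; the second argument enables recursive deletion.
val deleted: Boolean = fs.delete(new Path(dir.toString), true)
```

Both `rename` and `delete` return a `Boolean` rather than throwing on failure, so check the result in production code.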
Cloudera is one of the three major vendors on the market, alongside Hortonworks and MapR, that provide a general-purpose Hadoop distribution.
A short tutorial on how to edit the hosts file on Windows.
A common requirement is to build an application that collects data from an external system through its API. It may turn out that the "bottleneck" of such an application is its execution time.
Today I will show you how to use the Machine Learning (ML) libraries that are available in Spark under the name Spark MLlib.
You would like to run several commands at the same time, with each of them executing in a separate thread.
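One way to sketch this in Scala is to wrap each command in a `Future`, so each runs on a thread from the global execution context. The `echo` commands and the 10-second timeout are arbitrary placeholders:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import scala.sys.process._

// Placeholder commands; substitute the real commands you need to run.
val commands = Seq(
  Seq("echo", "first"),
  Seq("echo", "second"),
  Seq("echo", "third")
)

// Start each command in its own thread and capture its trimmed output.
val futures: Seq[Future[String]] = commands.map(cmd => Future(cmd.!!.trim))

// Block until all commands finish (or the timeout expires).
val results: Seq[String] = Await.result(Future.sequence(futures), 10.seconds)
results.foreach(println)
```

`Future.sequence` turns the list of futures into a single future of the list, which keeps the results in the original command order regardless of which command finishes first.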
To add an environment variable, open the Start Menu and, depending on whether Windows is set to Polish or English, type the appropriate phrase: