- Hands-On Big Data Analytics with PySpark
- Rudy Lai, Bartłomiej Potaczek
Getting data into Spark
- Next, load the KDD Cup data into PySpark using sc (the SparkContext), as shown in the following command:
raw_data = sc.textFile("./kddcup.data.gz")
- In the following command, we can see that the raw data is now in the raw_data variable:
raw_data
The output is shown in the following code snippet:
./kddcup.data.gz MapPartitionsRDD[3] at textFile at NativeMethodAccessorImpl.java:0
If we enter the raw_data variable, it gives us details regarding kddcup.data.gz, the file that backs the raw data, and tells us that raw_data is a MapPartitionsRDD. Note that textFile is lazy: at this point Spark has only recorded where the file is, not read its contents.
Now that we know how to load the data into Spark, let's learn about parallelization with Spark RDDs.