How to use spark.read.format() in Python
spark.read.format() is used to load text files into a DataFrame. The .format() call specifies the input data source format (here, "text"), and .load() reads the data from that source and returns a DataFrame.
Syntax: spark.read.format("text").load(path=None, format=None, schema=None, **options)
Parameters: This method accepts the following parameters, described below.
- path : a string, or a list of strings, for the input path(s).
- format : an optional string for the format of the data source. Defaults to 'parquet'.
- schema : an optional pyspark.sql.types.StructType for the input schema.
- options : all other string options.
Returns: DataFrame
Example: Read a text file using spark.read.format().
First, import the modules and create a Spark session. Then read the file with spark.read.format().load(), split the data from the text file into columns, and show the result as a DataFrame.
Python3
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.read.format("text").load("output.txt")
df.selectExpr("split(value, ' ') as Text_Data_In_Rows_Using_format_load").show(4, False)
Output:
Read Text File into PySpark DataFrame
In this article, we are going to see how to read text files into a PySpark DataFrame.
There are three ways to read text files into PySpark DataFrame.
- Using spark.read.text()
- Using spark.read.csv()
- Using spark.read.format().load()
Using these, we can read a single text file, multiple files, or all the files in a directory into a Spark DataFrame.
Text file used: