Parquet Files
Parquet is a columnar format that is supported by many other data processing systems. Spark SQL provides support for both reading and writing Parquet files that automatically preserves the schema of the original data. When reading Parquet files, all columns are automatically converted to be nullable for compatibility reasons.
Loading Data Programmatically
Using the data from the above example:
```scala
// Encoders for most common types are automatically provided by importing spark.implicits._
import spark.implicits._

val peopleDF = spark.read.json("examples/src/main/resources/people.json")

// DataFrames can be saved as Parquet files, maintaining the schema information
peopleDF.write.parquet("people.parquet")

// Read in the parquet file created above
// Parquet files are self-describing so the schema is preserved
// The result of loading a Parquet file is also a DataFrame
val parquetFileDF = spark.read.parquet("people.parquet")

// Parquet files can also be used to create a temporary view and then used in SQL statements
parquetFileDF.createOrReplaceTempView("parquetFile")
val namesDF = spark.sql("SELECT name FROM parquetFile WHERE age BETWEEN 13 AND 19")
namesDF.map(attributes => "Name: " + attributes(0)).show()
// +------------+
// |       value|
// +------------+
// |Name: Justin|
// +------------+
```
```java
import org.apache.spark.api.java.function.MapFunction;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

Dataset<Row> peopleDF = spark.read().json("examples/src/main/resources/people.json");

// DataFrames can be saved as Parquet files, maintaining the schema information
peopleDF.write().parquet("people.parquet");

// Read in the Parquet file created above.
// Parquet files are self-describing so the schema is preserved
// The result of loading a parquet file is also a DataFrame
Dataset<Row> parquetFileDF = spark.read().parquet("people.parquet");

// Parquet files can also be used to create a temporary view and then used in SQL statements
parquetFileDF.createOrReplaceTempView("parquetFile");
Dataset<Row> namesDF = spark.sql("SELECT name FROM parquetFile WHERE age BETWEEN 13 AND 19");
Dataset<String> namesDS = namesDF.map(
    (MapFunction<Row, String>) row -> "Name: " + row.getString(0),
    Encoders.STRING());
namesDS.show();
// +------------+
// |       value|
// +------------+
// |Name: Justin|
// +------------+
```
```python
peopleDF = spark.read.json("examples/src/main/resources/people.json")

# DataFrames can be saved as Parquet files, maintaining the schema information.
peopleDF.write.parquet("people.parquet")

# Read in the Parquet file created above.
# Parquet files are self-describing so the schema is preserved.
# The result of loading a parquet file is also a DataFrame.
parquetFile = spark.read.parquet("people.parquet")

# Parquet files can also be used to create a temporary view and then used in SQL statements.
parquetFile.createOrReplaceTempView("parquetFile")
teenagers = spark.sql("SELECT name FROM parquetFile WHERE age >= 13 AND age <= 19")
teenagers.show()
# +------+
# |  name|
# +------+
# |Justin|
# +------+
```
```r
df <- read.df("examples/src/main/resources/people.json", "json")

# SparkDataFrame can be saved as Parquet files, maintaining the schema information.
write.parquet(df, "people.parquet")

# Read in the Parquet file created above. Parquet files are self-describing so the schema is preserved.
# The result of loading a parquet file is also a DataFrame.
parquetFile <- read.parquet("people.parquet")

# Parquet files can also be used to create a temporary view and then used in SQL statements.
createOrReplaceTempView(parquetFile, "parquetFile")
teenagers <- sql("SELECT name FROM parquetFile WHERE age >= 13 AND age <= 19")
head(teenagers)
##     name
## 1 Justin

# We can also run custom R-UDFs on Spark DataFrames. Here we prefix all the names with "Name:"
schema <- structType(structField("name", "string"))
teenNames <- dapply(df, function(p) { cbind(paste("Name:", p$name)) }, schema)
for (teenName in collect(teenNames)$name) {
  cat(teenName, "\n")
}
## Name: Michael
## Name: Andy
## Name: Justin
```

Find full example code at "examples/src/main/r/RSparkSQLExample.R" in the Spark repo.
Partition Discovery
Table partitioning is a common optimization approach used in systems like Hive. In a partitioned
table, data are usually stored in different directories, with partitioning column values encoded in
the path of each partition directory. All built-in file sources (including Text/CSV/JSON/ORC/Parquet)
are able to discover and infer partitioning information automatically.
For example, we can store all our previously used population data into a partitioned table using the following directory structure, with two extra columns, gender and country, as partitioning columns:
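(The layout below is illustrative; the actual file names will differ.)

```
path
└── to
    └── table
        ├── gender=male
        │   ├── country=US
        │   │   └── data.parquet
        │   ├── country=CN
        │   │   └── data.parquet
        │   └── ...
        └── gender=female
            ├── country=US
            │   └── data.parquet
            ├── country=CN
            │   └── data.parquet
            └── ...
```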
By passing path/to/table to either SparkSession.read.parquet or SparkSession.read.load, Spark SQL will automatically extract the partitioning information from the paths.
Now the schema of the returned DataFrame becomes:
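(Shown here for the people data used above; the exact data columns depend on the source files.)

```
root
 |-- name: string (nullable = true)
 |-- age: long (nullable = true)
 |-- gender: string (nullable = true)
 |-- country: string (nullable = true)
```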
Notice that the data types of the partitioning columns are automatically inferred. Currently, numeric data types, date, timestamp and string type are supported. Sometimes users may not want to automatically infer the data types of the partitioning columns. For these use cases, the automatic type inference can be configured by spark.sql.sources.partitionColumnTypeInference.enabled, which defaults to true. When type inference is disabled, string type will be used for the partitioning columns.
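A minimal sketch of disabling the inference (assuming an existing SparkSession named spark and the illustrative path above):

```scala
// Treat all partitioning columns as strings instead of inferring their types
spark.conf.set("spark.sql.sources.partitionColumnTypeInference.enabled", "false")

val df = spark.read.parquet("path/to/table")
df.printSchema()  // gender and country now appear as string columns
```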
Starting from Spark 1.6.0, partition discovery only finds partitions under the given paths by default. For the above example, if users pass path/to/table/gender=male to either SparkSession.read.parquet or SparkSession.read.load, gender will not be considered as a partitioning column. If users need to specify the base path that partition discovery should start with, they can set basePath in the data source options. For example, when path/to/table/gender=male is the path of the data and users set basePath to path/to/table/, gender will be a partitioning column.
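A short sketch of that option in code (paths are the illustrative ones above):

```scala
// Only the gender=male subtree is read, but because basePath points at the table
// root, `gender` is still recovered as a partitioning column.
val maleDF = spark.read
  .option("basePath", "path/to/table/")
  .parquet("path/to/table/gender=male")
maleDF.printSchema()
```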
Schema Merging
Like Protocol Buffer, Avro, and Thrift, Parquet also supports schema evolution. Users can start with a simple schema, and gradually add more columns to the schema as needed. In this way, users may end up with multiple Parquet files with different but mutually compatible schemas. The Parquet data source is now able to automatically detect this case and merge schemas of all these files.
Since schema merging is a relatively expensive operation, and is not a necessity in most cases, we turned it off by default starting from 1.5.0. You may enable it by
- setting the data source option mergeSchema to true when reading Parquet files (as shown in the examples below), or
- setting the global SQL option spark.sql.parquet.mergeSchema to true.
```scala
// This is used to implicitly convert an RDD to a DataFrame.
import spark.implicits._

// Create a simple DataFrame, store into a partition directory
val squaresDF = spark.sparkContext.makeRDD(1 to 5).map(i => (i, i * i)).toDF("value", "square")
squaresDF.write.parquet("data/test_table/key=1")

// Create another DataFrame in a new partition directory,
// adding a new column and dropping an existing column
val cubesDF = spark.sparkContext.makeRDD(6 to 10).map(i => (i, i * i * i)).toDF("value", "cube")
cubesDF.write.parquet("data/test_table/key=2")

// Read the partitioned table
val mergedDF = spark.read.option("mergeSchema", "true").parquet("data/test_table")
mergedDF.printSchema()

// The final schema consists of all 3 columns in the Parquet files together
// with the partitioning column appeared in the partition directory paths
// root
//  |-- value: int (nullable = true)
//  |-- square: int (nullable = true)
//  |-- cube: int (nullable = true)
//  |-- key: int (nullable = true)
```
```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

public static class Square implements Serializable {
  private int value;
  private int square;

  // Getters and setters...
}

public static class Cube implements Serializable {
  private int value;
  private int cube;

  // Getters and setters...
}

List<Square> squares = new ArrayList<>();
for (int value = 1; value <= 5; value++) {
  Square square = new Square();
  square.setValue(value);
  square.setSquare(value * value);
  squares.add(square);
}

// Create a simple DataFrame, store into a partition directory
Dataset<Row> squaresDF = spark.createDataFrame(squares, Square.class);
squaresDF.write().parquet("data/test_table/key=1");

List<Cube> cubes = new ArrayList<>();
for (int value = 6; value <= 10; value++) {
  Cube cube = new Cube();
  cube.setValue(value);
  cube.setCube(value * value * value);
  cubes.add(cube);
}

// Create another DataFrame in a new partition directory,
// adding a new column and dropping an existing column
Dataset<Row> cubesDF = spark.createDataFrame(cubes, Cube.class);
cubesDF.write().parquet("data/test_table/key=2");

// Read the partitioned table
Dataset<Row> mergedDF = spark.read().option("mergeSchema", true).parquet("data/test_table");
mergedDF.printSchema();

// The final schema consists of all 3 columns in the Parquet files together
// with the partitioning column appeared in the partition directory paths
// root
//  |-- value: int (nullable = true)
//  |-- square: int (nullable = true)
//  |-- cube: int (nullable = true)
//  |-- key: int (nullable = true)
```
```python
from pyspark.sql import Row

# spark is from the previous example.
# Create a simple DataFrame, stored into a partition directory
sc = spark.sparkContext

squaresDF = spark.createDataFrame(sc.parallelize(range(1, 6))
                                  .map(lambda i: Row(single=i, double=i ** 2)))
squaresDF.write.parquet("data/test_table/key=1")

# Create another DataFrame in a new partition directory,
# adding a new column and dropping an existing column
cubesDF = spark.createDataFrame(sc.parallelize(range(6, 11))
                                .map(lambda i: Row(single=i, triple=i ** 3)))
cubesDF.write.parquet("data/test_table/key=2")

# Read the partitioned table
mergedDF = spark.read.option("mergeSchema", "true").parquet("data/test_table")
mergedDF.printSchema()

# The final schema consists of all 3 columns in the Parquet files together
# with the partitioning column appeared in the partition directory paths.
# root
#  |-- double: long (nullable = true)
#  |-- single: long (nullable = true)
#  |-- triple: long (nullable = true)
#  |-- key: integer (nullable = true)
```
```r
df1 <- createDataFrame(data.frame(single=c(12, 29), double=c(19, 23)))
df2 <- createDataFrame(data.frame(double=c(19, 23), triple=c(23, 18)))

# Create a simple DataFrame, stored into a partition directory
write.df(df1, "data/test_table/key=1", "parquet", "overwrite")

# Create another DataFrame in a new partition directory,
# adding a new column and dropping an existing column
write.df(df2, "data/test_table/key=2", "parquet", "overwrite")

# Read the partitioned table
df3 <- read.df("data/test_table", "parquet", mergeSchema = "true")
printSchema(df3)
# The final schema consists of all 3 columns in the Parquet files together
# with the partitioning column appeared in the partition directory paths
## root
##  |-- single: double (nullable = true)
##  |-- double: double (nullable = true)
##  |-- triple: double (nullable = true)
##  |-- key: integer (nullable = true)
```

Find full example code at "examples/src/main/r/RSparkSQLExample.R" in the Spark repo.
Hive metastore Parquet table conversion
When reading from Hive metastore Parquet tables and writing to non-partitioned Hive metastore
Parquet tables, Spark SQL will try to use its own Parquet support instead of Hive SerDe for
better performance. This behavior is controlled by the spark.sql.hive.convertMetastoreParquet
configuration, and is turned on by default.
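As a minimal sketch (assuming an existing SparkSession with Hive support named spark), the conversion can be switched off at runtime:

```scala
// Fall back to the Hive SerDe for Parquet tables instead of Spark's built-in support
spark.conf.set("spark.sql.hive.convertMetastoreParquet", "false")
```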
Hive/Parquet Schema Reconciliation
There are two key differences between Hive and Parquet from the perspective of table schema processing.
- Hive is case insensitive, while Parquet is not
- Hive considers all columns nullable, while nullability in Parquet is significant
Due to this reason, we must reconcile the Hive metastore schema with the Parquet schema when converting a Hive metastore Parquet table to a Spark SQL Parquet table. The reconciliation rules are (a short illustrative example follows the list):
- Fields that have the same name in both schemas must have the same data type regardless of nullability. The reconciled field should have the data type of the Parquet side, so that nullability is respected.
- The reconciled schema contains exactly those fields defined in the Hive metastore schema.
  - Any fields that only appear in the Parquet schema are dropped in the reconciled schema.
  - Any fields that only appear in the Hive metastore schema are added as nullable fields in the reconciled schema.
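As a purely illustrative example (the column names below are invented):

```
Hive metastore schema : col1 int, col2 string, col3 string                      (Hive treats every column as nullable)
Parquet file schema   : col1 int32 (required), col2 binary/UTF8 (optional), col4 int64 (optional)
Reconciled schema     : col1 int (not nullable), col2 string (nullable), col3 string (nullable)
                        col4 appears only in the Parquet schema and is dropped;
                        col3 appears only in the Hive schema and is added as a nullable field.
```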
Metadata Refreshing
Spark SQL caches Parquet metadata for better performance. When Hive metastore Parquet table conversion is enabled, metadata of those converted tables are also cached. If these tables are updated by Hive or other external tools, you need to refresh them manually to ensure consistent metadata.
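For example (using a placeholder table name), a refresh can be issued from code or from SQL:

```scala
// spark is an existing SparkSession; "my_table" is a placeholder table name
spark.catalog.refreshTable("my_table")

// Equivalent SQL form
spark.sql("REFRESH TABLE my_table")
```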
Configuration
Configuration of Parquet can be done using the setConf method on SparkSession or by running SET key=value commands using SQL.
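A short sketch of both styles (the property names and values are chosen only for illustration; spark.conf.set is the runtime-configuration entry point on SparkSession):

```scala
// Programmatic configuration on an existing SparkSession
spark.conf.set("spark.sql.parquet.compression.codec", "gzip")

// Equivalent SQL form
spark.sql("SET spark.sql.parquet.filterPushdown=true")
```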
| Property Name | Default | Meaning |
|---|---|---|
| spark.sql.parquet.binaryAsString | false | Some other Parquet-producing systems, in particular Impala, Hive, and older versions of Spark SQL, do not differentiate between binary data and strings when writing out the Parquet schema. This flag tells Spark SQL to interpret binary data as a string to provide compatibility with these systems. |
| spark.sql.parquet.int96AsTimestamp | true | Some Parquet-producing systems, in particular Impala and Hive, store Timestamp into INT96. This flag tells Spark SQL to interpret INT96 data as a timestamp to provide compatibility with these systems. |
| spark.sql.parquet.compression.codec | snappy | Sets the compression codec used when writing Parquet files. If either compression or parquet.compression is specified in the table-specific options/properties, the precedence is compression, parquet.compression, spark.sql.parquet.compression.codec. Acceptable values include: none, uncompressed, snappy, gzip, lzo, brotli, lz4, zstd. Note that zstd requires ZStandardCodec to be installed before Hadoop 2.9.0, and brotli requires BrotliCodec to be installed. |
| spark.sql.parquet.filterPushdown | true | Enables Parquet filter push-down optimization when set to true. |
| spark.sql.hive.convertMetastoreParquet | true | When set to false, Spark SQL will use the Hive SerDe for Parquet tables instead of the built-in support. |
| spark.sql.parquet.mergeSchema | false | When true, the Parquet data source merges schemas collected from all data files; otherwise the schema is picked from the summary file or a random data file if no summary file is available. |
| spark.sql.parquet.writeLegacyFormat | false | If true, data will be written in the way of Spark 1.4 and earlier. For example, decimal values will be written in Apache Parquet's fixed-length byte array format, which other systems such as Apache Hive and Apache Impala use. If false, the newer format in Parquet will be used. For example, decimals will be written in int-based format. If Parquet output is intended for use with systems that do not support this newer format, set to true. |