DataStreamReader.json(path, schema=None, primitivesAsString=None, prefersDecimal=None, allowComments=None, allowUnquotedFieldNames=None, allowSingleQuotes=None, allowNumericLeadingZero=None, allowBackslashEscapingAnyCharacter=None, mode=None, columnNameOfCorruptRecord=None, dateFormat=None, timestampFormat=None, multiLine=None, allowUnquotedControlChars=None, lineSep=None, locale=None, dropFieldIfAllNull=None, encoding=None, pathGlobFilter=None, recursiveFileLookup=None, allowNonNumericNumbers=None)
Loads a JSON file stream and returns the results as a DataFrame.
JSON Lines (newline-delimited JSON) is supported by default. For JSON (one record per file), set the multiLine parameter to true.
If the schema parameter is not specified, this function goes through the input once to determine the input schema.
New in version 2.0.0.
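As a quick orientation before the parameter reference, here is a minimal sketch that reads a stream of JSON Lines files with an explicit DDL schema, which avoids the extra pass over the input that schema inference would need. The directory path and column names are illustrative placeholders, not part of the API.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical input directory; each file holds newline-delimited JSON.
events = spark.readStream.json(
    "/tmp/json-events",
    schema="ts TIMESTAMP, user STRING, amount DOUBLE",
)

events.isStreaming  # True: the returned DataFrame is a streaming source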
Parameters
path : str
    string representing the path to the JSON dataset, or an RDD of strings storing JSON objects.
schema : pyspark.sql.types.StructType or str, optional
    an optional pyspark.sql.types.StructType for the input schema or a DDL-formatted string (for example, col0 INT, col1 DOUBLE).
primitivesAsString : str or bool, optional
    infers all primitive values as a string type. If None is set, it uses the default value, false.
prefersDecimal : str or bool, optional
    infers all floating-point values as a decimal type. If the values do not fit in decimal, then it infers them as doubles. If None is set, it uses the default value, false.
allowComments : str or bool, optional
    ignores Java/C++ style comments in JSON records. If None is set, it uses the default value, false.
allowUnquotedFieldNames : str or bool, optional
    allows unquoted JSON field names. If None is set, it uses the default value, false.
allowSingleQuotes : str or bool, optional
    allows single quotes in addition to double quotes. If None is set, it uses the default value, true.
allowNumericLeadingZero : str or bool, optional
    allows leading zeros in numbers (e.g. 00012). If None is set, it uses the default value, false.
allowBackslashEscapingAnyCharacter : str or bool, optional
    allows quoting of all characters using the backslash quoting mechanism. If None is set, it uses the default value, false.
mode : str, optional
    allows a mode for dealing with corrupt records during parsing. If None is set, it uses the default value, PERMISSIVE. (See the sketch after this parameter list for a usage example.)

    PERMISSIVE: when it meets a corrupted record, puts the malformed string into a field configured by columnNameOfCorruptRecord, and sets malformed fields to null. To keep corrupt records, a user can set a string type field named columnNameOfCorruptRecord in a user-defined schema. If a schema does not have the field, it drops corrupt records during parsing. When inferring a schema, it implicitly adds a columnNameOfCorruptRecord field in the output schema.

    DROPMALFORMED: ignores whole corrupted records.

    FAILFAST: throws an exception when it meets corrupted records.
columnNameOfCorruptRecord : str, optional
    allows renaming the new field that holds the malformed string created by PERMISSIVE mode. This overrides spark.sql.columnNameOfCorruptRecord. If None is set, it uses the value specified in spark.sql.columnNameOfCorruptRecord.
dateFormat : str, optional
    sets the string that indicates a date format. Custom date formats follow the formats at datetime pattern. This applies to date type. If None is set, it uses the default value, yyyy-MM-dd.
timestampFormat : str, optional
    sets the string that indicates a timestamp format. Custom date formats follow the formats at datetime pattern. This applies to timestamp type. If None is set, it uses the default value, yyyy-MM-dd'T'HH:mm:ss[.SSS][XXX].
multiLine : str or bool, optional
    parse one record, which may span multiple lines, per file. If None is set, it uses the default value, false.
allowUnquotedControlChars : str or bool, optional
    allows JSON strings to contain unquoted control characters (ASCII characters with value less than 32, including tab and line feed characters) or not.
lineSep : str, optional
    defines the line separator that should be used for parsing. If None is set, it covers all of \r, \r\n and \n.
locale : str, optional
    sets a locale as a language tag in IETF BCP 47 format. If None is set, it uses the default value, en-US. For instance, locale is used while parsing dates and timestamps.
dropFieldIfAllNull : str or bool, optional
    whether to ignore columns of all null values or empty arrays/structs during schema inference. If None is set, it uses the default value, false.
encoding : str, optional
    allows forcibly setting one of the standard basic or extended encodings for the JSON files. For example UTF-16BE, UTF-32LE. If None is set, the encoding of the input JSON will be detected automatically when the multiLine option is set to true.
pathGlobFilter : str or bool, optional
    an optional glob pattern to only include files with paths matching the pattern. The syntax follows org.apache.hadoop.fs.GlobFilter. It does not change the behavior of partition discovery.
recursiveFileLookup : str or bool, optional
    recursively scan a directory for files. Using this option disables partition discovery.
allowNonNumericNumbers : str or bool, optional
    allows the JSON parser to recognize a set of “Not-a-Number” (NaN) tokens as legal floating-point number values. If None is set, it uses the default value, true.
    +INF: for positive infinity, as well as the aliases +Infinity and Infinity.

    -INF: for negative infinity, alias -Infinity.

    NaN: for other not-a-numbers, like the result of division by zero.
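The sketch below (referenced from the mode entry above) combines corrupt-record handling with a custom timestamp format. The path, schema, and option values are illustrative assumptions; note that _corrupt_record happens to be the default of spark.sql.columnNameOfCorruptRecord.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

parsed = spark.readStream.json(
    "/tmp/json-events",  # hypothetical input directory
    # Declare a string column to keep malformed records: PERMISSIVE
    # puts the raw malformed text there and nulls the other fields.
    schema="ts TIMESTAMP, user STRING, amount DOUBLE, _corrupt_record STRING",
    mode="PERMISSIVE",
    columnNameOfCorruptRecord="_corrupt_record",
    # Accept timestamps like 2024-01-31 23:59:59 instead of the
    # default ISO-8601 pattern.
    timestampFormat="yyyy-MM-dd HH:mm:ss",
)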
Notes
This API is evolving.
Examples
>>> import tempfile
>>> json_sdf = spark.readStream.json(tempfile.mkdtemp(), schema=sdf_schema)
>>> json_sdf.isStreaming
True
>>> json_sdf.schema == sdf_schema
True
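A further hedged variant of the example above, assuming each input file contains a single JSON record that may span multiple lines (the multiLine option):

>>> multiline_sdf = spark.readStream.json(tempfile.mkdtemp(), schema=sdf_schema, multiLine=True)
>>> multiline_sdf.isStreaming
True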