read.df {SparkR} | R Documentation
Returns the dataset in a data source as a SparkDataFrame
read.df(path = NULL, source = NULL, schema = NULL, na.strings = "NA", ...)

loadDF(path = NULL, source = NULL, schema = NULL, ...)
path: The path of the files to load.

source: The name of the external data source.

schema: The data schema, defined in structType or as a DDL-formatted string.

na.strings: The default string value interpreted as NA when source is "csv".

...: additional named properties specific to the external data source.
The data source is specified by source together with a set of options (...). If source is not specified, the default data source configured by "spark.sql.sources.default" will be used. Similar to R's read.csv, when source is "csv", a value of "NA" is by default interpreted as NA.
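As a hedged sketch of the na.strings behavior described above (the file path "data/people.csv" and the marker "n/a" are hypothetical; header and inferSchema are standard csv data-source options passed through ...):

```r
# Assumes an active SparkR session and a hypothetical CSV file "data/people.csv"
# in which missing values are written as "n/a" rather than the default "NA".
sparkR.session()

# Cells containing "n/a" are read as NA; header/inferSchema are forwarded
# to the csv data source as named options via "...".
df <- read.df("data/people.csv", source = "csv",
              na.strings = "n/a", header = "true", inferSchema = "true")
```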
Value: a SparkDataFrame.
read.df since 1.4.0
loadDF since 1.6.0
## Not run:
sparkR.session()
df1 <- read.df("path/to/file.json", source = "json")
schema <- structType(structField("name", "string"),
                     structField("info", "map<string,double>"))
df2 <- read.df(mapTypeJsonPath, "json", schema, multiLine = TRUE)
df3 <- loadDF("data/test_table", "parquet", mergeSchema = "true")
stringSchema <- "name STRING, info MAP<STRING, DOUBLE>"
df4 <- read.df(mapTypeJsonPath, "json", stringSchema, multiLine = TRUE)
## End(Not run)