unionAll {SparkR}	R Documentation
Description:

Return a new SparkDataFrame containing the union of rows in this SparkDataFrame and another SparkDataFrame. This is equivalent to 'UNION ALL' in SQL. Note that this does not remove duplicate rows across the two SparkDataFrames.

rbind returns a new SparkDataFrame containing the rows of all of its SparkDataFrame arguments.
Usage:

unionAll(x, y)

rbind(..., deparse.level = 1)

## S4 method for signature 'SparkDataFrame,SparkDataFrame'
unionAll(x, y)

## S4 method for signature 'SparkDataFrame'
rbind(x, ..., deparse.level = 1)
Arguments:

x: A SparkDataFrame

y: A SparkDataFrame
Value:

A SparkDataFrame containing the result of the union.
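A minimal sketch of the duplicate-keeping behaviour described above (assumes SparkR is attached and a Spark session is running; `createDataFrame` and `count` are standard SparkR functions):

```r
# Sketch, assuming an active SparkR session
df1 <- createDataFrame(data.frame(id = c(1, 2)))
df2 <- createDataFrame(data.frame(id = c(2, 3)))
u <- unionAll(df1, df2)
count(u)  # 4: the row with id = 2 appears twice, because unionAll keeps duplicates
```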
See Also:

Other SparkDataFrame functions: SparkDataFrame-class, [[, agg, arrange, as.data.frame, attach, cache, collect, colnames, coltypes, columns, count, dapply, describe, dim, distinct, dropDuplicates, dropna, drop, dtypes, except, explain, filter, first, group_by, head, histogram, insertInto, intersect, isLocal, join, limit, merge, mutate, ncol, persist, printSchema, registerTempTable, rename, repartition, sample, saveAsTable, selectExpr, select, showDF, show, str, take, unpersist, withColumn, write.df, write.jdbc, write.json, write.parquet, write.text
Examples:

## Not run:
##D sc <- sparkR.init()
##D sqlContext <- sparkRSQL.init(sc)
##D df1 <- read.json(sqlContext, path)
##D df2 <- read.json(sqlContext, path2)
##D unioned <- unionAll(df1, df2)
## End(Not run)
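Because unionAll keeps duplicates, following it with distinct (or dropDuplicates, both listed under See Also) yields plain SQL UNION semantics. A hedged sketch, reusing the df1 and df2 from the example above:

```r
# Sketch: deduplicate after the union to emulate SQL's UNION (not UNION ALL)
unioned <- unionAll(df1, df2)
deduped <- distinct(unioned)  # drops rows duplicated across df1 and df2
```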