histogram {SparkR} | R Documentation
This function computes a histogram for a given SparkR Column.
## S4 method for signature 'SparkDataFrame,characterOrColumn'
histogram(df, col, nbins = 10)
df: the SparkDataFrame containing the Column to build the histogram from.
col: the column, given either as a character string or as a Column, to build the histogram from.
nbins: the number of bins (optional). Default value is 10.
a data.frame with the histogram statistics, i.e., counts and centroids.
histogram since 2.0.0
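Because `col` accepts either a character string or a Column, the two call forms below are interchangeable. This is a minimal sketch, assuming an active SparkR session (`library(SparkR); sparkR.session()`) and using the base R `mtcars` dataset for illustration:

```r
# Assumes an active SparkR session: library(SparkR); sparkR.session()
df <- createDataFrame(mtcars)

# col given as a character string
h1 <- histogram(df, "mpg", nbins = 8)

# col given as a Column object; equivalent result
h2 <- histogram(df, df$mpg, nbins = 8)

# The returned data.frame has one row per bin, with the bin
# centroids and the observation counts per bin
head(h1)
```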
Other SparkDataFrame functions: SparkDataFrame-class, agg, alias, arrange, as.data.frame, attach,SparkDataFrame-method, broadcast, cache, checkpoint, coalesce, collect, colnames, coltypes, createOrReplaceTempView, crossJoin, cube, dapplyCollect, dapply, describe, dim, distinct, dropDuplicates, dropna, drop, dtypes, exceptAll, except, explain, filter, first, gapplyCollect, gapply, getNumPartitions, group_by, head, hint, insertInto, intersectAll, intersect, isLocal, isStreaming, join, limit, localCheckpoint, merge, mutate, ncol, nrow, persist, printSchema, randomSplit, rbind, rename, repartitionByRange, repartition, rollup, sample, saveAsTable, schema, selectExpr, select, showDF, show, storageLevel, str, subset, summary, take, toJSON, unionByName, union, unpersist, withColumn, withWatermark, with, write.df, write.jdbc, write.json, write.orc, write.parquet, write.stream, write.text
## Not run:

# Create a SparkDataFrame from the iris dataset
irisDF <- createDataFrame(iris)

# Compute histogram statistics
histStats <- histogram(irisDF, irisDF$Sepal_Length, nbins = 12)

# Once SparkR has computed the histogram statistics, the histogram can be
# rendered using the ggplot2 library:
require(ggplot2)
plot <- ggplot(histStats, aes(x = centroids, y = counts)) +
  geom_bar(stat = "identity") +
  xlab("Sepal_Length") + ylab("Frequency")

## End(Not run)