abs | abs |
abs-method | abs |
acos | acos |
acos-method | acos |
add_months | add_months |
add_months-method | add_months |
AFTSurvivalRegressionModel-class | S4 class that represents an AFTSurvivalRegressionModel |
agg | Summarize data across columns |
agg-method | Summarize data across columns |
alias | alias |
alias-method | alias |
ALSModel-class | S4 class that represents an ALSModel |
approxCountDistinct | Returns the approximate number of distinct items in a group |
approxCountDistinct-method | Returns the approximate number of distinct items in a group |
approxQuantile | Calculates the approximate quantiles of numerical columns of a SparkDataFrame |
approxQuantile-method | Calculates the approximate quantiles of numerical columns of a SparkDataFrame |
arrange | Arrange Rows by Variables |
arrange-method | Arrange Rows by Variables |
array_contains | array_contains |
array_contains-method | array_contains |
as.data.frame | Download data from a SparkDataFrame into an R data.frame |
as.data.frame-method | Download data from a SparkDataFrame into an R data.frame |
as.DataFrame | Create a SparkDataFrame |
as.DataFrame.default | Create a SparkDataFrame |
asc | A set of operations working with SparkDataFrame columns |
ascii | ascii |
ascii-method | ascii |
asin | asin |
asin-method | asin |
associationRules-method | FP-growth |
atan | atan |
atan-method | atan |
atan2 | atan2 |
atan2-method | atan2 |
attach | Attach SparkDataFrame to R search path |
attach-method | Attach SparkDataFrame to R search path |
avg | avg |
avg-method | avg |
awaitTermination | awaitTermination |
awaitTermination-method | awaitTermination |
base64 | base64 |
base64-method | base64 |
between | between |
between-method | between |
bin | bin |
bin-method | bin |
BisectingKMeansModel-class | S4 class that represents a BisectingKMeansModel |
bitwiseNOT | bitwiseNOT |
bitwiseNOT-method | bitwiseNOT |
bround | bround |
bround-method | bround |
cache | Cache |
cache-method | Cache |
cacheTable | Cache Table |
cacheTable.default | Cache Table |
cancelJobGroup | Cancel active jobs for the specified group |
cancelJobGroup.default | Cancel active jobs for the specified group |
cast | Casts the column to a different data type. |
cast-method | Casts the column to a different data type. |
cbrt | cbrt |
cbrt-method | cbrt |
ceil | Computes the ceiling of the given value |
ceil-method | Computes the ceiling of the given value |
ceiling | Computes the ceiling of the given value |
ceiling-method | Computes the ceiling of the given value |
checkpoint | checkpoint |
checkpoint-method | checkpoint |
clearCache | Clear Cache |
clearCache.default | Clear Cache |
clearJobGroup | Clear current job group ID and its description |
clearJobGroup.default | Clear current job group ID and its description |
coalesce | Coalesce |
coalesce-method | Coalesce |
collect | Collects all the elements of a SparkDataFrame and coerces them into an R data.frame. |
collect-method | Collects all the elements of a SparkDataFrame and coerces them into an R data.frame. |
colnames | Column Names of SparkDataFrame |
colnames-method | Column Names of SparkDataFrame |
colnames<- | Column Names of SparkDataFrame |
colnames<--method | Column Names of SparkDataFrame |
coltypes | coltypes |
coltypes-method | coltypes |
coltypes<- | coltypes |
coltypes<--method | coltypes |
column | S4 class that represents a SparkDataFrame column |
Column-class | S4 class that represents a SparkDataFrame column |
column-method | S4 class that represents a SparkDataFrame column |
columnfunctions | A set of operations working with SparkDataFrame columns |
columns | Column Names of SparkDataFrame |
columns-method | Column Names of SparkDataFrame |
concat | concat |
concat-method | concat |
concat_ws | concat_ws |
concat_ws-method | concat_ws |
contains | A set of operations working with SparkDataFrame columns |
conv | conv |
conv-method | conv |
corr | corr |
corr-method | corr |
cos | cos |
cos-method | cos |
cosh | cosh |
cosh-method | cosh |
count | Returns the number of items in a group |
count-method | Returns the number of items in a group |
count-method | Returns the number of rows in a SparkDataFrame |
countDistinct | Count Distinct Values |
countDistinct-method | Count Distinct Values |
cov | cov |
cov-method | cov |
covar_pop | covar_pop |
covar_pop-method | covar_pop |
covar_samp | cov |
covar_samp-method | cov |
crc32 | crc32 |
crc32-method | crc32 |
createDataFrame | Create a SparkDataFrame |
createDataFrame.default | Create a SparkDataFrame |
createExternalTable | (Deprecated) Create an external table |
createExternalTable.default | (Deprecated) Create an external table |
createOrReplaceTempView | Creates a temporary view using the given name. |
createOrReplaceTempView-method | Creates a temporary view using the given name. |
createTable | Creates a table based on the dataset in a data source |
crossJoin | CrossJoin |
crossJoin-method | CrossJoin |
crosstab | Computes a pair-wise frequency table of the given columns |
crosstab-method | Computes a pair-wise frequency table of the given columns |
cume_dist | cume_dist |
cume_dist-method | cume_dist |
currentDatabase | Returns the current default database |
dapply | dapply |
dapply-method | dapply |
dapplyCollect | dapplyCollect |
dapplyCollect-method | dapplyCollect |
datediff | datediff |
datediff-method | datediff |
date_add | date_add |
date_add-method | date_add |
date_format | date_format |
date_format-method | date_format |
date_sub | date_sub |
date_sub-method | date_sub |
dayofmonth | dayofmonth |
dayofmonth-method | dayofmonth |
dayofyear | dayofyear |
dayofyear-method | dayofyear |
decode | decode |
decode-method | decode |
dense_rank | dense_rank |
dense_rank-method | dense_rank |
desc | A set of operations working with SparkDataFrame columns |
describe | summary |
describe-method | summary |
dim | Returns the dimensions of SparkDataFrame |
dim-method | Returns the dimensions of SparkDataFrame |
distinct | Distinct |
distinct-method | Distinct |
drop | drop |
drop-method | drop |
dropDuplicates | dropDuplicates |
dropDuplicates-method | dropDuplicates |
dropna | A set of SparkDataFrame functions working with NA values |
dropna-method | A set of SparkDataFrame functions working with NA values |
dropTempTable | (Deprecated) Drop Temporary Table |
dropTempTable.default | (Deprecated) Drop Temporary Table |
dropTempView | Drops the temporary view with the given view name in the catalog. |
dtypes | DataTypes |
dtypes-method | DataTypes |
encode | encode |
encode-method | encode |
endsWith | endsWith |
endsWith-method | endsWith |
except | except |
except-method | except |
exp | exp |
exp-method | exp |
explain | Explain |
explain-method | Explain |
explode | explode |
explode-method | explode |
expm1 | expm1 |
expm1-method | expm1 |
expr | expr |
expr-method | expr |
factorial | factorial |
factorial-method | factorial |
fillna | A set of SparkDataFrame functions working with NA values |
fillna-method | A set of SparkDataFrame functions working with NA values |
filter | Filter |
filter-method | Filter |
first | Return the first row of a SparkDataFrame |
first-method | Return the first row of a SparkDataFrame |
fitted | Get fitted result from a k-means model |
fitted-method | Get fitted result from a k-means model |
fitted-method | Bisecting K-Means Clustering Model |
floor | floor |
floor-method | floor |
format_number | format_number |
format_number-method | format_number |
format_string | format_string |
format_string-method | format_string |
FPGrowthModel-class | S4 class that represents a FPGrowthModel |
freqItems | Finding frequent items for columns, possibly with false positives |
freqItems-method | Finding frequent items for columns, possibly with false positives |
freqItemsets-method | FP-growth |
from_json | from_json |
from_json-method | from_json |
from_unixtime | from_unixtime |
from_unixtime-method | from_unixtime |
from_utc_timestamp | from_utc_timestamp |
from_utc_timestamp-method | from_utc_timestamp |
gapply | gapply |
gapply-method | gapply |
gapplyCollect | gapplyCollect |
gapplyCollect-method | gapplyCollect |
GaussianMixtureModel-class | S4 class that represents a GaussianMixtureModel |
GBTClassificationModel-class | S4 class that represents a GBTClassificationModel |
GBTRegressionModel-class | S4 class that represents a GBTRegressionModel |
GeneralizedLinearRegressionModel-class | S4 class that represents a generalized linear model |
generateAliasesForIntersectedCols | Creates a list of columns by replacing the intersected ones with aliases |
getField | A set of operations working with SparkDataFrame columns |
getItem | A set of operations working with SparkDataFrame columns |
getNumPartitions | getNumPartitions |
getNumPartitions-method | getNumPartitions |
glm | Generalized Linear Models (R-compliant) |
glm-method | Generalized Linear Models (R-compliant) |
greatest | greatest |
greatest-method | greatest |
groupBy | GroupBy |
groupBy-method | GroupBy |
groupedData | S4 class that represents a GroupedData |
GroupedData-class | S4 class that represents a GroupedData |
group_by | GroupBy |
group_by-method | GroupBy |
hash | hash |
hash-method | hash |
hashCode | Compute the hashCode of an object |
head | Head |
head-method | Head |
hex | hex |
hex-method | hex |
hint | hint |
hint-method | hint |
histogram | Compute histogram statistics for given column |
histogram-method | Compute histogram statistics for given column |
hour | hour |
hour-method | hour |
hypot | hypot |
hypot-method | hypot |
ifelse | ifelse |
ifelse-method | ifelse |
initcap | initcap |
initcap-method | initcap |
insertInto | insertInto |
insertInto-method | insertInto |
install.spark | Download and Install Apache Spark to a Local Directory |
instr | instr |
instr-method | instr |
intersect | Intersect |
intersect-method | Intersect |
is.nan | is.nan |
is.nan-method | is.nan |
isActive | isActive |
isActive-method | isActive |
isLocal | isLocal |
isLocal-method | isLocal |
isNaN | A set of operations working with SparkDataFrame columns |
isnan | is.nan |
isnan-method | is.nan |
isNotNull | A set of operations working with SparkDataFrame columns |
isNull | A set of operations working with SparkDataFrame columns |
IsotonicRegressionModel-class | S4 class that represents an IsotonicRegressionModel |
isStreaming | isStreaming |
isStreaming-method | isStreaming |
join | Join |
join-method | Join |
jsonFile | Create a SparkDataFrame from a JSON file. |
jsonFile.default | Create a SparkDataFrame from a JSON file. |
KMeansModel-class | S4 class that represents a KMeansModel |
KSTest-class | S4 class that represents a KSTest |
kurtosis | kurtosis |
kurtosis-method | kurtosis |
lag | lag |
lag-method | lag |
last | last |
last-method | last |
lastProgress | lastProgress |
lastProgress-method | lastProgress |
last_day | last_day |
last_day-method | last_day |
LDAModel-class | S4 class that represents an LDAModel |
lead | lead |
lead-method | lead |
least | least |
least-method | least |
length | length |
length-method | length |
levenshtein | levenshtein |
levenshtein-method | levenshtein |
like | A set of operations working with SparkDataFrame columns |
limit | Limit |
limit-method | Limit |
LinearSVCModel-class | S4 class that represents a LinearSVCModel |
listColumns | Returns a list of columns for the given table/view in the specified database |
listDatabases | Returns a list of databases available |
listFunctions | Returns a list of functions registered in the specified database |
listTables | Returns a list of tables or views in the specified database |
lit | lit |
lit-method | lit |
loadDF | Load a SparkDataFrame |
loadDF.default | Load a SparkDataFrame |
locate | locate |
locate-method | locate |
log | log |
log-method | log |
log10 | log10 |
log10-method | log10 |
log1p | log1p |
log1p-method | log1p |
log2 | log2 |
log2-method | log2 |
LogisticRegressionModel-class | S4 class that represents a LogisticRegressionModel |
lower | lower |
lower-method | lower |
lpad | lpad |
lpad-method | lpad |
ltrim | ltrim |
ltrim-method | ltrim |
max | max |
max-method | max |
md5 | md5 |
md5-method | md5 |
mean | mean |
mean-method | mean |
merge | Merges two data frames |
merge-method | Merges two data frames |
min | min |
min-method | min |
minute | minute |
minute-method | minute |
monotonically_increasing_id | monotonically_increasing_id |
monotonically_increasing_id-method | monotonically_increasing_id |
month | month |
month-method | month |
months_between | months_between |
months_between-method | months_between |
MultilayerPerceptronClassificationModel-class | S4 class that represents a MultilayerPerceptronClassificationModel |
mutate | Mutate |
mutate-method | Mutate |
n | Returns the number of items in a group |
n-method | Returns the number of items in a group |
na.omit | A set of SparkDataFrame functions working with NA values |
na.omit-method | A set of SparkDataFrame functions working with NA values |
NaiveBayesModel-class | S4 class that represents a NaiveBayesModel |
names | Column Names of SparkDataFrame |
names-method | Column Names of SparkDataFrame |
names<- | Column Names of SparkDataFrame |
names<--method | Column Names of SparkDataFrame |
nanvl | nanvl |
nanvl-method | nanvl |
ncol | Returns the number of columns in a SparkDataFrame |
ncol-method | Returns the number of columns in a SparkDataFrame |
negate | negate |
negate-method | negate |
next_day | next_day |
next_day-method | next_day |
nrow | Returns the number of rows in a SparkDataFrame |
nrow-method | Returns the number of rows in a SparkDataFrame |
ntile | ntile |
ntile-method | ntile |
n_distinct | Count Distinct Values |
n_distinct-method | Count Distinct Values |
orderBy | Ordering Columns in a WindowSpec |
orderBy-method | Arrange Rows by Variables |
orderBy-method | Ordering Columns in a WindowSpec |
otherwise | otherwise |
otherwise-method | otherwise |
over | over |
over-method | over |
parquetFile | Create a SparkDataFrame from a Parquet file. |
parquetFile.default | Create a SparkDataFrame from a Parquet file. |
partitionBy | partitionBy |
partitionBy-method | partitionBy |
percent_rank | percent_rank |
percent_rank-method | percent_rank |
persist | Persist |
persist-method | Persist |
pivot | Pivot a column of the GroupedData and perform the specified aggregation. |
pivot-method | Pivot a column of the GroupedData and perform the specified aggregation. |
pmod | pmod |
pmod-method | pmod |
posexplode | posexplode |
posexplode-method | posexplode |
predict | Makes predictions from a MLlib model |
predict-method | Alternating Least Squares (ALS) for Collaborative Filtering |
predict-method | Bisecting K-Means Clustering Model |
predict-method | FP-growth |
predict-method | Multivariate Gaussian Mixture Model (GMM) |
predict-method | Gradient Boosted Tree Model for Regression and Classification |
predict-method | Generalized Linear Models |
predict-method | Isotonic Regression Model |
predict-method | K-Means Clustering Model |
predict-method | Logistic Regression Model |
predict-method | Multilayer Perceptron Classification Model |
predict-method | Naive Bayes Models |
predict-method | Random Forest Model for Regression and Classification |
predict-method | Accelerated Failure Time (AFT) Survival Regression Model |
predict-method | Linear SVM Model |
print.jobj | Print a JVM object reference. |
print.structField | Print a Spark StructField. |
print.structType | Print a Spark StructType. |
print.summary.GBTClassificationModel | Gradient Boosted Tree Model for Regression and Classification |
print.summary.GBTRegressionModel | Gradient Boosted Tree Model for Regression and Classification |
print.summary.GeneralizedLinearRegressionModel | Generalized Linear Models |
print.summary.KSTest | (One-Sample) Kolmogorov-Smirnov Test |
print.summary.RandomForestClassificationModel | Random Forest Model for Regression and Classification |
print.summary.RandomForestRegressionModel | Random Forest Model for Regression and Classification |
printSchema | Print Schema of a SparkDataFrame |
printSchema-method | Print Schema of a SparkDataFrame |
quarter | quarter |
quarter-method | quarter |
queryName | queryName |
queryName-method | queryName |
rand | rand |
rand-method | rand |
randn | randn |
randn-method | randn |
RandomForestClassificationModel-class | S4 class that represents a RandomForestClassificationModel |
RandomForestRegressionModel-class | S4 class that represents a RandomForestRegressionModel |
randomSplit | randomSplit |
randomSplit-method | randomSplit |
rangeBetween | rangeBetween |
rangeBetween-method | rangeBetween |
rank | rank |
rank-method | rank |
rbind | Union two or more SparkDataFrames |
rbind-method | Union two or more SparkDataFrames |
read.df | Load a SparkDataFrame |
read.df.default | Load a SparkDataFrame |
read.jdbc | Create a SparkDataFrame representing the database table accessible via JDBC URL |
read.json | Create a SparkDataFrame from a JSON file. |
read.json.default | Create a SparkDataFrame from a JSON file. |
read.ml | Load a fitted MLlib model from the input path. |
read.orc | Create a SparkDataFrame from an ORC file. |
read.parquet | Create a SparkDataFrame from a Parquet file. |
read.parquet.default | Create a SparkDataFrame from a Parquet file. |
read.stream | Load a streaming SparkDataFrame |
read.text | Create a SparkDataFrame from a text file. |
read.text.default | Create a SparkDataFrame from a text file. |
recoverPartitions | Recovers all the partitions in the directory of a table and updates the catalog |
refreshByPath | Invalidates and refreshes all the cached data and metadata for any SparkDataFrame that contains the given path |
refreshTable | Invalidates and refreshes all the cached data and metadata of the given table |
regexp_extract | regexp_extract |
regexp_extract-method | regexp_extract |
regexp_replace | regexp_replace |
regexp_replace-method | regexp_replace |
registerTempTable | (Deprecated) Register Temporary Table |
registerTempTable-method | (Deprecated) Register Temporary Table |
rename | rename |
rename-method | rename |
repartition | Repartition |
repartition-method | Repartition |
reverse | reverse |
reverse-method | reverse |
rint | rint |
rint-method | rint |
rlike | A set of operations working with SparkDataFrame columns |
round | round |
round-method | round |
rowsBetween | rowsBetween |
rowsBetween-method | rowsBetween |
row_number | row_number |
row_number-method | row_number |
rpad | rpad |
rpad-method | rpad |
rtrim | rtrim |
rtrim-method | rtrim |
sample | Sample |
sample-method | Sample |
sampleBy | Returns a stratified sample without replacement |
sampleBy-method | Returns a stratified sample without replacement |
sample_frac | Sample |
sample_frac-method | Sample |
saveAsParquetFile | Save the contents of SparkDataFrame as a Parquet file, preserving the schema. |
saveAsParquetFile-method | Save the contents of SparkDataFrame as a Parquet file, preserving the schema. |
saveAsTable | Save the contents of the SparkDataFrame to a data source as a table |
saveAsTable-method | Save the contents of the SparkDataFrame to a data source as a table |
saveDF | Save the contents of SparkDataFrame to a data source. |
saveDF-method | Save the contents of SparkDataFrame to a data source. |
schema | Get schema object |
schema-method | Get schema object |
sd | sd |
sd-method | sd |
second | second |
second-method | second |
select | Select |
select-method | Select |
selectExpr | SelectExpr |
selectExpr-method | SelectExpr |
setCheckpointDir | Set checkpoint directory |
setCurrentDatabase | Sets the current default database |
setJobGroup | Assigns a group ID to all the jobs started by this thread until the group ID is set to a different value or cleared. |
setJobGroup.default | Assigns a group ID to all the jobs started by this thread until the group ID is set to a different value or cleared. |
setLogLevel | Set new log level |
sha1 | sha1 |
sha1-method | sha1 |
sha2 | sha2 |
sha2-method | sha2 |
shiftLeft | shiftLeft |
shiftLeft-method | shiftLeft |
shiftRight | shiftRight |
shiftRight-method | shiftRight |
shiftRightUnsigned | shiftRightUnsigned |
shiftRightUnsigned-method | shiftRightUnsigned |
show | show |
show-method | show |
showDF | showDF |
showDF-method | showDF |
sign | signum |
sign-method | signum |
signum | signum |
signum-method | signum |
sin | sin |
sin-method | sin |
sinh | sinh |
sinh-method | sinh |
size | size |
size-method | size |
skewness | skewness |
skewness-method | skewness |
sort_array | sort_array |
sort_array-method | sort_array |
soundex | soundex |
soundex-method | soundex |
spark.addFile | Add a file or directory to be downloaded with this Spark job on every node. |
spark.als | Alternating Least Squares (ALS) for Collaborative Filtering |
spark.als-method | Alternating Least Squares (ALS) for Collaborative Filtering |
spark.associationRules | FP-growth |
spark.associationRules-method | FP-growth |
spark.bisectingKmeans | Bisecting K-Means Clustering Model |
spark.bisectingKmeans-method | Bisecting K-Means Clustering Model |
spark.fpGrowth | FP-growth |
spark.fpGrowth-method | FP-growth |
spark.freqItemsets | FP-growth |
spark.freqItemsets-method | FP-growth |
spark.gaussianMixture | Multivariate Gaussian Mixture Model (GMM) |
spark.gaussianMixture-method | Multivariate Gaussian Mixture Model (GMM) |
spark.gbt | Gradient Boosted Tree Model for Regression and Classification |
spark.gbt-method | Gradient Boosted Tree Model for Regression and Classification |
spark.getSparkFiles | Get the absolute path of a file added through spark.addFile. |
spark.getSparkFilesRootDirectory | Get the root directory that contains files added through spark.addFile. |
spark.glm | Generalized Linear Models |
spark.glm-method | Generalized Linear Models |
spark.isoreg | Isotonic Regression Model |
spark.isoreg-method | Isotonic Regression Model |
spark.kmeans | K-Means Clustering Model |
spark.kmeans-method | K-Means Clustering Model |
spark.kstest | (One-Sample) Kolmogorov-Smirnov Test |
spark.kstest-method | (One-Sample) Kolmogorov-Smirnov Test |
spark.lapply | Run a function over a list of elements, distributing the computations with Spark |
spark.lda | Latent Dirichlet Allocation |
spark.lda-method | Latent Dirichlet Allocation |
spark.logit | Logistic Regression Model |
spark.logit-method | Logistic Regression Model |
spark.mlp | Multilayer Perceptron Classification Model |
spark.mlp-method | Multilayer Perceptron Classification Model |
spark.naiveBayes | Naive Bayes Models |
spark.naiveBayes-method | Naive Bayes Models |
spark.perplexity | Latent Dirichlet Allocation |
spark.perplexity-method | Latent Dirichlet Allocation |
spark.posterior | Latent Dirichlet Allocation |
spark.posterior-method | Latent Dirichlet Allocation |
spark.randomForest | Random Forest Model for Regression and Classification |
spark.randomForest-method | Random Forest Model for Regression and Classification |
spark.survreg | Accelerated Failure Time (AFT) Survival Regression Model |
spark.survreg-method | Accelerated Failure Time (AFT) Survival Regression Model |
spark.svmLinear | Linear SVM Model |
spark.svmLinear-method | Linear SVM Model |
SparkDataFrame-class | S4 class that represents a SparkDataFrame |
sparkR.callJMethod | Call Java Methods |
sparkR.callJStatic | Call Static Java Methods |
sparkR.conf | Get Runtime Config from the current active SparkSession |
sparkR.init | (Deprecated) Initialize a new Spark Context |
sparkR.newJObject | Create Java Objects |
sparkR.session | Get the existing SparkSession or initialize a new SparkSession. |
sparkR.session.stop | Stop the Spark Session and Spark Context |
sparkR.stop | Stop the Spark Session and Spark Context |
sparkR.uiWebUrl | Get the URL of the SparkUI instance for the current active SparkSession |
sparkR.version | Get version of Spark on which this application is running |
sparkRHive.init | (Deprecated) Initialize a new HiveContext |
sparkRSQL.init | (Deprecated) Initialize a new SQLContext |
spark_partition_id | Return the partition ID as a column |
spark_partition_id-method | Return the partition ID as a column |
sql | SQL Query |
sql.default | SQL Query |
sqrt | sqrt |
sqrt-method | sqrt |
startsWith | startsWith |
startsWith-method | startsWith |
status | status |
status-method | status |
stddev | sd |
stddev-method | sd |
stddev_pop | stddev_pop |
stddev_pop-method | stddev_pop |
stddev_samp | stddev_samp |
stddev_samp-method | stddev_samp |
stopQuery | stopQuery |
stopQuery-method | stopQuery |
storageLevel | StorageLevel |
storageLevel-method | StorageLevel |
str | Compactly display the structure of a dataset |
str-method | Compactly display the structure of a dataset |
StreamingQuery-class | S4 class that represents a StreamingQuery |
struct | struct |
struct-method | struct |
structField | structField |
structField.character | structField |
structField.jobj | structField |
structType | structType |
structType.jobj | structType |
structType.structField | structType |
subset | Subset |
subset-method | Subset |
substr | substr |
substr-method | substr |
substring_index | substring_index |
substring_index-method | substring_index |
sum | sum |
sum-method | sum |
sumDistinct | sumDistinct |
sumDistinct-method | sumDistinct |
summarize | Summarize data across columns |
summarize-method | Summarize data across columns |
summary | summary |
summary-method | Alternating Least Squares (ALS) for Collaborative Filtering |
summary-method | Bisecting K-Means Clustering Model |
summary-method | Multivariate Gaussian Mixture Model (GMM) |
summary-method | Gradient Boosted Tree Model for Regression and Classification |
summary-method | Generalized Linear Models |
summary-method | Isotonic Regression Model |
summary-method | K-Means Clustering Model |
summary-method | (One-Sample) Kolmogorov-Smirnov Test |
summary-method | Latent Dirichlet Allocation |
summary-method | Logistic Regression Model |
summary-method | Multilayer Perceptron Classification Model |
summary-method | Naive Bayes Models |
summary-method | Random Forest Model for Regression and Classification |
summary-method | Accelerated Failure Time (AFT) Survival Regression Model |
summary-method | Linear SVM Model |
summary-method | summary |
tableNames | Table Names |
tableNames.default | Table Names |
tables | Tables |
tables.default | Tables |
tableToDF | Create a SparkDataFrame from a SparkSQL table or view |
take | Take the first NUM rows of a SparkDataFrame and return the results as an R data.frame |
take-method | Take the first NUM rows of a SparkDataFrame and return the results as an R data.frame |
tan | tan |
tan-method | tan |
tanh | tanh |
tanh-method | tanh |
toDegrees | toDegrees |
toDegrees-method | toDegrees |
toJSON | toJSON |
toJSON-method | toJSON |
toRadians | toRadians |
toRadians-method | toRadians |
to_date | to_date |
to_date-method | to_date |
to_json | to_json |
to_json-method | to_json |
to_timestamp | to_timestamp |
to_timestamp-method | to_timestamp |
to_utc_timestamp | to_utc_timestamp |
to_utc_timestamp-method | to_utc_timestamp |
transform | Mutate |
transform-method | Mutate |
translate | translate |
translate-method | translate |
trim | trim |
trim-method | trim |
unbase64 | unbase64 |
unbase64-method | unbase64 |
uncacheTable | Uncache Table |
uncacheTable.default | Uncache Table |
unhex | unhex |
unhex-method | unhex |
union | Return a new SparkDataFrame containing the union of rows |
union-method | Return a new SparkDataFrame containing the union of rows |
unionAll | Return a new SparkDataFrame containing the union of rows |
unionAll-method | Return a new SparkDataFrame containing the union of rows |
unique | Distinct |
unique-method | Distinct |
unix_timestamp | unix_timestamp |
unix_timestamp-method | unix_timestamp |
unpersist | Unpersist |
unpersist-method | Unpersist |
upper | upper |
upper-method | upper |
var | var |
var-method | var |
variance | var |
variance-method | var |
var_pop | var_pop |
var_pop-method | var_pop |
var_samp | var_samp |
var_samp-method | var_samp |
weekofyear | weekofyear |
weekofyear-method | weekofyear |
when | when |
when-method | when |
where | Filter |
where-method | Filter |
window | window |
window-method | window |
windowOrderBy | windowOrderBy |
windowOrderBy-method | windowOrderBy |
windowPartitionBy | windowPartitionBy |
windowPartitionBy-method | windowPartitionBy |
WindowSpec-class | S4 class that represents a WindowSpec |
with | Evaluate an R expression in an environment constructed from a SparkDataFrame |
with-method | Evaluate an R expression in an environment constructed from a SparkDataFrame |
withColumn | WithColumn |
withColumn-method | WithColumn |
withColumnRenamed | rename |
withColumnRenamed-method | rename |
write.df | Save the contents of SparkDataFrame to a data source. |
write.df-method | Save the contents of SparkDataFrame to a data source. |
write.jdbc | Save the content of SparkDataFrame to an external database table via JDBC. |
write.jdbc-method | Save the content of SparkDataFrame to an external database table via JDBC. |
write.json | Save the contents of SparkDataFrame as a JSON file |
write.json-method | Save the contents of SparkDataFrame as a JSON file |
write.ml | Saves the MLlib model to the input path |
write.ml-method | Alternating Least Squares (ALS) for Collaborative Filtering |
write.ml-method | Bisecting K-Means Clustering Model |
write.ml-method | FP-growth |
write.ml-method | Multivariate Gaussian Mixture Model (GMM) |
write.ml-method | Gradient Boosted Tree Model for Regression and Classification |
write.ml-method | Generalized Linear Models |
write.ml-method | Isotonic Regression Model |
write.ml-method | K-Means Clustering Model |
write.ml-method | Latent Dirichlet Allocation |
write.ml-method | Logistic Regression Model |
write.ml-method | Multilayer Perceptron Classification Model |
write.ml-method | Naive Bayes Models |
write.ml-method | Random Forest Model for Regression and Classification |
write.ml-method | Accelerated Failure Time (AFT) Survival Regression Model |
write.ml-method | Linear SVM Model |
write.orc | Save the contents of SparkDataFrame as an ORC file, preserving the schema. |
write.orc-method | Save the contents of SparkDataFrame as an ORC file, preserving the schema. |
write.parquet | Save the contents of SparkDataFrame as a Parquet file, preserving the schema. |
write.parquet-method | Save the contents of SparkDataFrame as a Parquet file, preserving the schema. |
write.stream | Write the streaming SparkDataFrame to a data source. |
write.stream-method | Write the streaming SparkDataFrame to a data source. |
write.text | Save the content of SparkDataFrame in a text file at the specified path. |
write.text-method | Save the content of SparkDataFrame in a text file at the specified path. |
year | year |
year-method | year |
$ | Select |
$-method | Select |
$<- | Select |
$<--method | Select |
%in% | Match a column with given values. |
%in%-method | Match a column with given values. |
[ | Subset |
[-method | Subset |
[[ | Subset |
[[-method | Subset |
[[<- | Subset |
[[<--method | Subset |