public class FeatureHasher extends Transformer implements HasInputCols, HasOutputCol, DefaultParamsWritable
The FeatureHasher transformer operates on multiple columns. Each column may contain either numeric or categorical features. Behavior and handling of column data types is as follows:
- Numeric columns: For numeric features, the hash value of the column name is used to map the feature value to its index in the feature vector. By default, numeric features are not treated as categorical (even when they are integers). To treat them as categorical, specify the relevant columns in categoricalCols.
- String columns: For categorical features, the hash value of the string "column_name=value" is used to map to the vector index, with an indicator value of 1.0. Thus, categorical features are "one-hot" encoded (similarly to using OneHotEncoder with dropLast=false).
- Boolean columns: Boolean values are treated in the same way as string columns. That is, boolean features are represented as "column_name=true" or "column_name=false", with an indicator value of 1.0.

Null (missing) values are ignored (implicitly zero in the resulting feature vector).

The hash function used here is also the MurmurHash 3 used in HashingTF. Since a simple modulo on the hashed value is used to determine the vector index, it is advisable to use a power of two as the numFeatures parameter; otherwise the features will not be mapped evenly to the vector indices.
import org.apache.spark.ml.feature.FeatureHasher
import spark.implicits._  // assumes an active SparkSession named `spark`

val df = Seq(
  (2.0, true, "1", "foo"),
  (3.0, false, "2", "bar")
).toDF("real", "bool", "stringNum", "string")

val hasher = new FeatureHasher()
  .setInputCols("real", "bool", "stringNum", "string")
  .setOutputCol("features")

hasher.transform(df).show(false)
+----+-----+---------+------+------------------------------------------------------+
|real|bool |stringNum|string|features |
+----+-----+---------+------+------------------------------------------------------+
|2.0 |true |1 |foo |(262144,[51871,63643,174475,253195],[1.0,1.0,2.0,1.0])|
|3.0 |false|2 |bar |(262144,[6031,80619,140467,174475],[1.0,1.0,1.0,3.0]) |
+----+-----+---------+------+------------------------------------------------------+
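To treat a numeric column as categorical, set it in categoricalCols (it must also be listed in inputCols); numFeatures controls the vector size and should be a power of two. A minimal sketch reusing the df above, with 1024 features chosen purely for illustration:

// Hash "real" as a categorical feature and use a smaller power-of-two feature space.
val categoricalHasher = new FeatureHasher()
  .setInputCols("real", "bool", "stringNum", "string")
  .setCategoricalCols(Array("real"))
  .setNumFeatures(1024)
  .setOutputCol("features")

categoricalHasher.transform(df).show(false)
// "real" now contributes an indicator value of 1.0 at the hash of "column_name=value",
// rather than its numeric value at the hash of the column name.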
| Constructor and Description |
| --- |
| FeatureHasher() |
| FeatureHasher(String uid) |
| Modifier and Type | Method and Description |
| --- | --- |
| StringArrayParam | categoricalCols() Numeric columns to treat as categorical features. |
| FeatureHasher | copy(ParamMap extra) Creates a copy of this instance with the same UID and some extra params. |
| String[] | getCategoricalCols() |
| int | getNumFeatures() |
| static FeatureHasher | load(String path) |
| IntParam | numFeatures() Number of features. |
| static MLReader<T> | read() |
| FeatureHasher | setCategoricalCols(String[] value) |
| FeatureHasher | setInputCols(scala.collection.Seq<String> values) |
| FeatureHasher | setInputCols(String[] value) |
| FeatureHasher | setNumFeatures(int value) |
| FeatureHasher | setOutputCol(String value) |
| Dataset<Row> | transform(Dataset<?> dataset) Transforms the input dataset. |
| StructType | transformSchema(StructType schema) :: DeveloperApi :: |
| String | uid() An immutable unique ID for the object and its derivatives. |
Methods inherited from class Transformer: transform, transform, transform
Methods inherited from class java.lang.Object: equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface HasInputCols: getInputCols, inputCols
Methods inherited from interface HasOutputCol: getOutputCol, outputCol
Methods inherited from interface Params: clear, copyValues, defaultCopy, defaultParamMap, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, paramMap, params, set, set, set, setDefault, setDefault, shouldOwn
Methods inherited from interface Identifiable: toString
Methods inherited from interface DefaultParamsWritable: write
Methods inherited from interface MLWritable: save
Methods inherited from interface Logging: initializeLogging, initializeLogIfNecessary, initializeLogIfNecessary, isTraceEnabled, log_, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning
public FeatureHasher(String uid)
public FeatureHasher()
public static FeatureHasher load(String path)
public static MLReader<T> read()
public String uid()
Description copied from interface: Identifiable
An immutable unique ID for the object and its derivatives.
Specified by:
uid in interface Identifiable
public StringArrayParam categoricalCols()
Numeric columns to treat as categorical features. Note, the relevant columns must also be set in inputCols.

public IntParam numFeatures()
Number of features. (default = 2^18)
public int getNumFeatures()
public FeatureHasher setNumFeatures(int value)
public FeatureHasher setInputCols(scala.collection.Seq<String> values)
public FeatureHasher setInputCols(String[] value)
public FeatureHasher setOutputCol(String value)
public String[] getCategoricalCols()
public FeatureHasher setCategoricalCols(String[] value)
public Dataset<Row> transform(Dataset<?> dataset)
Description copied from class: Transformer
Transforms the input dataset.
Specified by:
transform in class Transformer
Parameters:
dataset - (undocumented)

public FeatureHasher copy(ParamMap extra)
Description copied from interface: Params
Creates a copy of this instance with the same UID and some extra params. See defaultCopy().
Specified by:
copy in interface Params
Specified by:
copy in class Transformer
Parameters:
extra - (undocumented)

public StructType transformSchema(StructType schema)
:: DeveloperApi ::
Description copied from class: PipelineStage
Check transform validity and derive the output schema from the input schema. We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by Param.validate().
A typical implementation should first conduct verification of schema change and parameter validity, including complex parameter interaction checks.
Specified by:
transformSchema in class PipelineStage
Parameters:
schema - (undocumented)
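Because FeatureHasher is a Transformer with no fitted state, a configured instance can be saved and reloaded through the DefaultParamsWritable/MLReader support listed above. A minimal sketch, assuming the output path /tmp/feature-hasher is chosen purely for illustration:

import org.apache.spark.ml.feature.FeatureHasher

// Save the configured transformer (parameters and UID only).
val hasher = new FeatureHasher()
  .setInputCols("real", "bool", "stringNum", "string")
  .setOutputCol("features")
hasher.write.overwrite().save("/tmp/feature-hasher")

// Restore it later and reuse it on new data with the same column names.
val restored = FeatureHasher.load("/tmp/feature-hasher")
// restored.transform(df).show(false)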