:: DeveloperApi :: Identifies a particular Block of data, usually associated with a single file. A Block can be uniquely identified by its filename, but each type of Block has a different set of keys which produce its unique name.
If your BlockId should be serializable, be sure to add it to the BlockId.apply() method.
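A minimal sketch of how the name round-trip works; the RDDBlockId subtype and its "rdd_<rddId>_<splitIndex>" name format are taken from the Spark 2.x sources, so verify them against your version:

```scala
import org.apache.spark.storage.{BlockId, RDDBlockId}

// Each BlockId subtype derives its unique string name from its own keys,
// e.g. an RDD block is identified by the RDD id and the partition index.
val rddBlock = RDDBlockId(rddId = 1, splitIndex = 4)
println(rddBlock.name) // "rdd_1_4"

// BlockId.apply parses a name back into the matching BlockId subtype,
// which is why new serializable BlockId types must be added to apply().
val parsed: BlockId = BlockId("rdd_1_4")
assert(parsed == rddBlock)
```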
:: DeveloperApi :: This class represents a unique identifier for a BlockManager.
The first two constructors of this class are private, so that BlockManagerId objects can be created only through the apply method in the companion object; this allows de-duplication of ID objects. The constructor parameters are also private, so they cannot be modified from outside the class.
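For illustration, a minimal sketch of creating an ID through the companion object; the three-argument apply(execId, host, port) form is assumed from the Spark 2.x API, and the host/port values below are purely hypothetical:

```scala
import org.apache.spark.storage.BlockManagerId

// Constructors are private, so IDs are obtained from the companion object's
// apply method, which allows identical IDs to be de-duplicated.
val id = BlockManagerId("exec-1", "worker-host.example.com", 46322)
println(s"${id.executorId} @ ${id.host}:${id.port}")
```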
:: DeveloperApi :: BlockReplicationPrioritization provides logic for prioritizing a sequence of peers for replicating blocks. BlockManager will replicate to each peer returned, in order, until the desired replication factor is reached. If a replication fails, prioritize() will be called again to get a fresh prioritization.
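A minimal sketch of a custom prioritization that simply shuffles the candidate peers. It assumes the trait is exposed as BlockReplicationPolicy with the prioritize signature used in the Spark 2.x sources; check both the trait name and the parameter list against your version:

```scala
import scala.collection.mutable
import scala.util.Random

import org.apache.spark.storage.{BlockId, BlockManagerId, BlockReplicationPolicy}

// Hypothetical policy: return up to numReplicas peers in random order.
// BlockManager calls prioritize again if a replication attempt fails.
class ShufflingReplicationPolicy extends BlockReplicationPolicy {
  override def prioritize(
      blockManagerId: BlockManagerId,
      peers: Seq[BlockManagerId],
      peersReplicatedTo: mutable.HashSet[BlockManagerId],
      blockId: BlockId,
      numReplicas: Int): List[BlockManagerId] = {
    Random.shuffle(peers.toList).take(numReplicas)
  }
}
```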
:: DeveloperApi :: Stores information about a block status in a block manager.
A TopologyMapper that assumes all nodes are in the same rack.
A simple file-based topology mapper. It expects topology information provided as a java.util.Properties file, whose name is obtained from the SparkConf property spark.storage.replication.topologyFile. To use this topology mapper, set the spark.storage.replication.topologyMapper property to org.apache.spark.storage.FileBasedTopologyMapper.
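A minimal configuration sketch tying the two properties together; the file path used here is a hypothetical example:

```scala
import org.apache.spark.SparkConf

// The topology file is a java.util.Properties file mapping host names to
// topology (e.g. rack) information; its location below is purely illustrative.
val conf = new SparkConf()
  .set("spark.storage.replication.topologyMapper",
    "org.apache.spark.storage.FileBasedTopologyMapper")
  .set("spark.storage.replication.topologyFile", "/etc/spark/topology.properties")
```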
:: DeveloperApi :: Flags for controlling the storage of an RDD. Each StorageLevel records whether to use memory or ExternalBlockStore, whether to drop the RDD to disk if it falls out of memory or ExternalBlockStore, whether to keep the data in memory in a serialized format, and whether to replicate the RDD partitions on multiple nodes.
The org.apache.spark.storage.StorageLevel singleton object contains some static constants for commonly useful storage levels. To create your own storage level object, use the factory method of the singleton object (StorageLevel(...)).
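For example, a sketch of picking a predefined constant and of building a custom level with the factory method; the five-flag apply overload shown here follows the Spark 2.x API, so verify it against your version:

```scala
import org.apache.spark.storage.StorageLevel

// A predefined constant: serialized in memory, spilling to disk when needed.
val predefined = StorageLevel.MEMORY_AND_DISK_SER

// A custom level built with the singleton's factory method:
// serialized, kept on disk and in memory, replicated to 2 nodes.
val custom = StorageLevel(
  useDisk = true,
  useMemory = true,
  useOffHeap = false,
  deserialized = false,
  replication = 2)

// Either value can then be passed to rdd.persist(...).
```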
:: DeveloperApi :: TopologyMapper provides topology information for a given host.
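A minimal sketch of a custom mapper. It assumes TopologyMapper is an abstract class taking a SparkConf whose single abstract method is getTopologyForHost(hostname: String): Option[String], as in the Spark 2.x sources; the prefix-based rack scheme is purely hypothetical:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.storage.TopologyMapper

// Hypothetical mapper that treats the host name prefix before the first '-'
// as the rack, e.g. "rack1-node07" -> Some("rack1").
class PrefixTopologyMapper(conf: SparkConf) extends TopologyMapper(conf) {
  override def getTopologyForHost(hostname: String): Option[String] =
    hostname.split("-").headOption.filter(_.nonEmpty)
}
```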
:: DeveloperApi :: Storage information for each BlockManager.
This class assumes BlockId and BlockStatus are immutable, such that the consumers of this class cannot mutate the source of the information. Accesses are not thread-safe.
(Since version 2.2.0) This class may be removed or made private in a future release.
:: DeveloperApi :: A SparkListener that maintains executor storage status.
This class is thread-safe (unlike JobProgressListener).
(Since version 2.2.0) This class will be removed in a future release.
Various org.apache.spark.storage.StorageLevel constants and utility functions for creating new storage levels.