RDD.saveAsNewAPIHadoopFile
Output a Python RDD of key-value pairs (of form RDD[(K, V)]) to any Hadoop file system, using the new Hadoop OutputFormat API (mapreduce package). Key and value types will be inferred if not specified. Keys and values are converted for output using either user-specified converters or “org.apache.spark.api.python.JavaToWritableConverter”. The conf is applied on top of the base Hadoop conf associated with the SparkContext of this RDD to create a merged Hadoop MapReduce job configuration for saving the data.
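The merge behavior can be pictured with a plain-dict sketch. This is illustrative only, not Spark's actual implementation, and the Hadoop property values below are hypothetical: entries in the per-call conf override matching keys in the base configuration.

```python
# Illustrative sketch of the conf merge: the per-call conf dict is applied on
# top of the SparkContext's base Hadoop conf, with the per-call entries
# winning on key collisions. (Values here are made up for illustration.)
base_conf = {
    "fs.defaultFS": "hdfs://namenode:8020",
    "mapreduce.output.fileoutputformat.compress": "false",
}
job_conf = {
    "mapreduce.output.fileoutputformat.compress": "true",
}
merged_conf = {**base_conf, **job_conf}  # per-call conf overrides the base
print(merged_conf["mapreduce.output.fileoutputformat.compress"])  # -> true
```

Keys not mentioned in the passed conf (here "fs.defaultFS") keep their values from the base configuration.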
New in version 1.1.0.
Parameters
path : str
    path to Hadoop file
outputFormatClass : str
    fully qualified classname of Hadoop OutputFormat (e.g. “org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat”)
keyClass : str, optional
    fully qualified classname of key Writable class (e.g. “org.apache.hadoop.io.IntWritable”, None by default)
valueClass : str, optional
    fully qualified classname of value Writable class (e.g. “org.apache.hadoop.io.Text”, None by default)
keyConverter : str, optional
    fully qualified classname of key converter (None by default)
valueConverter : str, optional
    fully qualified classname of value converter (None by default)
conf : dict, optional
    Hadoop job configuration (None by default)
See also
SparkContext.newAPIHadoopFile()
RDD.saveAsHadoopDataset()
RDD.saveAsNewAPIHadoopDataset()
RDD.saveAsHadoopFile()
RDD.saveAsSequenceFile()
Examples
>>> import os
>>> import tempfile
Set the class of output format
>>> output_format_class = "org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat"
>>> with tempfile.TemporaryDirectory() as d:
...     path = os.path.join(d, "hadoop_file")
...
...     # Write a temporary Hadoop file
...     rdd = sc.parallelize([(1, {3.0: "bb"}), (2, {1.0: "aa"}), (3, {2.0: "dd"})])
...     rdd.saveAsNewAPIHadoopFile(path, output_format_class)
...
...     # Load this Hadoop file as an RDD
...     sorted(sc.sequenceFile(path).collect())
[(1, {3.0: 'bb'}), (2, {1.0: 'aa'}), (3, {2.0: 'dd'})]