Package org.apache.hadoop.hbase.mapred
Class TableOutputFormat
java.lang.Object
org.apache.hadoop.mapred.FileOutputFormat<ImmutableBytesWritable,Put>
org.apache.hadoop.hbase.mapred.TableOutputFormat
All Implemented Interfaces:
org.apache.hadoop.mapred.OutputFormat<ImmutableBytesWritable,Put>
@Public
public class TableOutputFormat
extends org.apache.hadoop.mapred.FileOutputFormat<ImmutableBytesWritable,Put>
Convert Map/Reduce output and write it to an HBase table.
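As a sketch of how this output format is typically wired into an old-API (org.apache.hadoop.mapred) job: the table name "mytable", the job name, and the commented-out reducer class are placeholder assumptions, not part of this API. The OUTPUT_TABLE constant documented below supplies the JobConf key that names the target table.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapred.TableOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class ExportToHBase {
  public static void main(String[] args) throws Exception {
    // Start from an HBase-aware Configuration so the cluster can be located.
    JobConf job = new JobConf(HBaseConfiguration.create(), ExportToHBase.class);
    job.setJobName("write-to-hbase"); // placeholder job name

    // Tell TableOutputFormat which table to write to, via the OUTPUT_TABLE parameter.
    job.set(TableOutputFormat.OUTPUT_TABLE, "mytable"); // placeholder table name
    job.setOutputFormat(TableOutputFormat.class);

    // The key/value types this format consumes, per the class declaration above.
    job.setOutputKeyClass(ImmutableBytesWritable.class);
    job.setOutputValueClass(Put.class);

    // job.setReducerClass(MyReducer.class); // your reducer, emitting Put values

    JobClient.runJob(job);
  }
}
```

Running this requires a reachable HBase cluster and an existing target table; the sketch only shows the wiring.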
Nested Class Summary
Modifier and Type: protected static class
Class: TableOutputFormat.TableRecordWriter
Description: Convert Reduce output (key, value) to (HStoreKey, KeyedDataArrayWritable) and write to an HBase table.

Nested classes/interfaces inherited from class org.apache.hadoop.mapred.FileOutputFormat:
org.apache.hadoop.mapred.FileOutputFormat.Counter
Field Summary
Modifier and Type: static final String
Field: OUTPUT_TABLE
Description: JobConf parameter that specifies the output table
Constructor Summary
TableOutputFormat()
Method Summary
Modifier and Type: void
Method: checkOutputSpecs(org.apache.hadoop.fs.FileSystem ignored, org.apache.hadoop.mapred.JobConf job)

Modifier and Type: org.apache.hadoop.mapred.RecordWriter
Method: getRecordWriter(org.apache.hadoop.fs.FileSystem ignored, org.apache.hadoop.mapred.JobConf job, String name, org.apache.hadoop.util.Progressable progress)
Description: Creates a new record writer.

Methods inherited from class org.apache.hadoop.mapred.FileOutputFormat:
getCompressOutput, getOutputCompressorClass, getOutputPath, getPathForCustomFile, getTaskOutputPath, getUniqueName, getWorkOutputPath, setCompressOutput, setOutputCompressorClass, setOutputPath, setWorkOutputPath
Field Details
OUTPUT_TABLE
public static final String OUTPUT_TABLE
JobConf parameter that specifies the output table
Constructor Details

TableOutputFormat
public TableOutputFormat()

Method Details

getRecordWriter
public org.apache.hadoop.mapred.RecordWriter getRecordWriter(org.apache.hadoop.fs.FileSystem ignored, org.apache.hadoop.mapred.JobConf job, String name, org.apache.hadoop.util.Progressable progress) throws IOException
Creates a new record writer. Be aware that the baseline javadoc gives the impression that there is a single RecordWriter per job, but in HBase it is more natural to hand out a new RecordWriter per call of this method. You must close the returned RecordWriter when done; failure to do so will drop writes.
Specified by:
getRecordWriter in interface org.apache.hadoop.mapred.OutputFormat<ImmutableBytesWritable,Put>
Specified by:
getRecordWriter in class org.apache.hadoop.mapred.FileOutputFormat<ImmutableBytesWritable,Put>
Parameters:
ignored - Ignored filesystem
job - Current JobConf
name - Name of the job
Returns:
The newly created writer instance.
Throws:
IOException - When creating the writer fails.
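For code that obtains the writer directly rather than through the MapReduce framework, the close call is the important part: as noted above, an unclosed writer drops buffered writes. The following sketch assumes a reachable HBase cluster; the table name "mytable", row, and column values are hypothetical.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapred.TableOutputFormat;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordWriter;
import org.apache.hadoop.mapred.Reporter;

public class DirectWrite {
  @SuppressWarnings("unchecked")
  public static void main(String[] args) throws Exception {
    JobConf job = new JobConf(HBaseConfiguration.create());
    job.set(TableOutputFormat.OUTPUT_TABLE, "mytable"); // hypothetical table

    TableOutputFormat format = new TableOutputFormat();
    // Each call returns a fresh writer; the filesystem argument is ignored.
    RecordWriter<ImmutableBytesWritable, Put> writer =
        format.getRecordWriter(null, job, "attempt_0", Reporter.NULL);
    try {
      Put put = new Put(Bytes.toBytes("row1"));
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
      writer.write(new ImmutableBytesWritable(put.getRow()), put);
    } finally {
      // Required: closing flushes buffered mutations; skipping it drops writes.
      writer.close(Reporter.NULL);
    }
  }
}
```

In a normal job the framework calls getRecordWriter and close for you; direct use like this is mainly for testing or embedding.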
checkOutputSpecs
public void checkOutputSpecs(org.apache.hadoop.fs.FileSystem ignored, org.apache.hadoop.mapred.JobConf job) throws org.apache.hadoop.fs.FileAlreadyExistsException, org.apache.hadoop.mapred.InvalidJobConfException, IOException
Specified by:
checkOutputSpecs in interface org.apache.hadoop.mapred.OutputFormat<ImmutableBytesWritable,Put>
Overrides:
checkOutputSpecs in class org.apache.hadoop.mapred.FileOutputFormat<ImmutableBytesWritable,Put>
Throws:
org.apache.hadoop.fs.FileAlreadyExistsException
org.apache.hadoop.mapred.InvalidJobConfException
IOException