@InterfaceAudience.Public public class TableOutputFormat<KEY> extends org.apache.hadoop.mapreduce.OutputFormat<KEY,Mutation> implements org.apache.hadoop.conf.Configurable
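A minimal driver sketch showing how this output format is typically wired into a job using the constants documented below; the table name "exampleTable" and the class name ExportToHBaseDriver are placeholders, and the mapper/reducer wiring is omitted. In practice, TableMapReduceUtil.initTableReducerJob can perform this setup (plus serialization and dependency-jar registration) for you.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat;
import org.apache.hadoop.mapreduce.Job;

public class ExportToHBaseDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Name of the destination table ("exampleTable" is a placeholder).
    conf.set(TableOutputFormat.OUTPUT_TABLE, "exampleTable");

    Job job = Job.getInstance(conf, "write-to-hbase");
    job.setJarByClass(ExportToHBaseDriver.class);
    // Mapper and reducer wiring omitted; the reduce side must emit Put or Delete values.
    job.setOutputFormatClass(TableOutputFormat.class);
    job.setOutputKeyClass(ImmutableBytesWritable.class);
    job.setOutputValueClass(Mutation.class);

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```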
Modifier and Type | Class and Description |
---|---|
protected class | TableOutputFormat.TableRecordWriter: Writes the reducer output to an HBase table (see the reducer sketch below). |
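As a companion to the TableRecordWriter summary above, a minimal reducer sketch whose context.write output ends up in the configured table; MyTableReducer, the column family "cf", and the qualifier "count" are placeholder names.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;

// Placeholder reducer: every value passed to context.write must be a Put or a Delete.
public class MyTableReducer extends TableReducer<Text, LongWritable, ImmutableBytesWritable> {
  @Override
  protected void reduce(Text key, Iterable<LongWritable> values, Context context)
      throws IOException, InterruptedException {
    long sum = 0;
    for (LongWritable value : values) {
      sum += value.get();
    }
    // Row key is the reduce key; "cf"/"count" are placeholder column family and qualifier.
    Put put = new Put(Bytes.toBytes(key.toString()));
    put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("count"), Bytes.toBytes(sum));
    // TableOutputFormat.TableRecordWriter applies this mutation to the configured table.
    context.write(new ImmutableBytesWritable(put.getRow()), put);
  }
}
```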
Modifier and Type | Field and Description |
---|---|
private org.apache.hadoop.conf.Configuration | conf: The configuration. |
private static org.slf4j.Logger | LOG |
static String | OUTPUT_CONF_PREFIX: Prefix for configuration property overrides to apply in setConf(Configuration). |
static String | OUTPUT_TABLE: Job parameter that specifies the output table. |
static String | QUORUM_ADDRESS: Optional job parameter to specify a peer cluster. |
static String | QUORUM_PORT: Optional job parameter to specify the peer cluster's ZooKeeper client port. |
static String | REGION_SERVER_CLASS: Optional specification of the region server class name of the peer cluster. |
static String | REGION_SERVER_IMPL: Optional specification of the region server implementation name of the peer cluster. |
Constructor and Description |
---|
TableOutputFormat() |
Modifier and Type | Method and Description |
---|---|
void | checkOutputSpecs(org.apache.hadoop.mapreduce.JobContext context): Checks if the output table exists and is enabled. |
org.apache.hadoop.conf.Configuration | getConf() |
org.apache.hadoop.mapreduce.OutputCommitter | getOutputCommitter(org.apache.hadoop.mapreduce.TaskAttemptContext context): Returns the output committer. |
org.apache.hadoop.mapreduce.RecordWriter<KEY,Mutation> | getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext context): Creates a new record writer. |
void | setConf(org.apache.hadoop.conf.Configuration otherConf) |
private static final org.slf4j.Logger LOG
public static final String OUTPUT_TABLE
Job parameter that specifies the output table.
public static final String OUTPUT_CONF_PREFIX
Prefix for configuration property overrides to apply in setConf(Configuration). For keys matching this prefix, the prefix is stripped and the value is set in the configuration with the resulting key, i.e. the entry "hbase.mapred.output.key1 = value1" would be set in the configuration as "key1 = value1". Use this to set properties which should only be applied to the TableOutputFormat configuration and not to the input configuration.
public static final String QUORUM_ADDRESS
Optional job parameter to specify a peer cluster to write to; the source cluster configuration is picked up from hbase-site.xml.
public static final String QUORUM_PORT
Optional job parameter to specify the peer cluster's ZooKeeper client port.
public static final String REGION_SERVER_CLASS
Optional specification of the region server class name of the peer cluster.
public static final String REGION_SERVER_IMPL
Optional specification of the region server implementation name of the peer cluster.
private org.apache.hadoop.conf.Configuration conf
The configuration.
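As a sketch of how these settings combine: the snippet below uses a placeholder table name, placeholder peer-cluster quorum hosts, and "hbase.client.write.buffer" as one example of a property routed through OUTPUT_CONF_PREFIX so that it only affects the output-side configuration built in setConf(Configuration). See TableMapReduceUtil.initTableReducerJob for the expected format of the peer-cluster address.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat;

public class PeerClusterOutputConfig {
  // Returns a job configuration that directs TableOutputFormat at a peer cluster.
  static Configuration buildOutputConf() {
    Configuration conf = HBaseConfiguration.create();
    // Destination table (placeholder name).
    conf.set(TableOutputFormat.OUTPUT_TABLE, "exampleTable");
    // Peer cluster ZooKeeper quorum and client port (placeholder values).
    conf.set(TableOutputFormat.QUORUM_ADDRESS, "peer-zk-1,peer-zk-2,peer-zk-3");
    conf.set(TableOutputFormat.QUORUM_PORT, "2181");
    // Keys carrying OUTPUT_CONF_PREFIX have the prefix stripped in setConf(Configuration)
    // and are applied only to the output-side configuration; this one ends up as
    // "hbase.client.write.buffer" for the output connection (placeholder value).
    conf.set(TableOutputFormat.OUTPUT_CONF_PREFIX + "hbase.client.write.buffer", "8388608");
    return conf;
  }
}
```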
public TableOutputFormat()
public org.apache.hadoop.mapreduce.RecordWriter<KEY,Mutation> getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext context) throws IOException, InterruptedException
Creates a new record writer. Note that the baseline javadoc gives the impression that there is a single RecordWriter per job, but in HBase it is more natural to hand out a new RecordWriter per call of this method. You must close the returned RecordWriter when done. Failure to do so will drop writes.
Specified by: getRecordWriter in class org.apache.hadoop.mapreduce.OutputFormat<KEY,Mutation>
Parameters: context - The current task context.
Throws: IOException - When creating the writer fails.
InterruptedException - When the job is cancelled.
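A minimal sketch of calling the writer directly, mainly to illustrate the close requirement; in a normal MapReduce task the framework obtains and closes the writer for you. The Configuration and TaskAttemptContext are assumed to be supplied by the caller, and the row, column family, and qualifier are placeholders.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

public class DirectWriterExample {
  // conf must already name the destination table via TableOutputFormat.OUTPUT_TABLE;
  // taskContext is supplied by the caller (hypothetical usage outside a normal task).
  static void writeOneRow(Configuration conf, TaskAttemptContext taskContext)
      throws IOException, InterruptedException {
    TableOutputFormat<ImmutableBytesWritable> outputFormat = new TableOutputFormat<>();
    outputFormat.setConf(conf);
    RecordWriter<ImmutableBytesWritable, Mutation> writer =
        outputFormat.getRecordWriter(taskContext);
    try {
      Put put = new Put(Bytes.toBytes("row1"));
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
      writer.write(new ImmutableBytesWritable(put.getRow()), put);
    } finally {
      // Always close the writer; unclosed writers drop buffered writes.
      writer.close(taskContext);
    }
  }
}
```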
public void checkOutputSpecs(org.apache.hadoop.mapreduce.JobContext context) throws IOException, InterruptedException
Checks if the output table exists and is enabled.
Specified by: checkOutputSpecs in class org.apache.hadoop.mapreduce.OutputFormat<KEY,Mutation>
Parameters: context - The current context.
Throws: IOException - When the check fails.
InterruptedException - When the job is aborted.
See Also: OutputFormat.checkOutputSpecs(JobContext)
public org.apache.hadoop.mapreduce.OutputCommitter getOutputCommitter(org.apache.hadoop.mapreduce.TaskAttemptContext context) throws IOException, InterruptedException
Returns the output committer.
Specified by: getOutputCommitter in class org.apache.hadoop.mapreduce.OutputFormat<KEY,Mutation>
Parameters: context - The current context.
Throws: IOException - When creating the committer fails.
InterruptedException - When the job is aborted.
See Also: OutputFormat.getOutputCommitter(TaskAttemptContext)
public org.apache.hadoop.conf.Configuration getConf()
Specified by: getConf in interface org.apache.hadoop.conf.Configurable
public void setConf(org.apache.hadoop.conf.Configuration otherConf)
Specified by: setConf in interface org.apache.hadoop.conf.Configurable