@InterfaceAudience.Public
@InterfaceStability.Stable
public class IdentityTableReducer
extends TableReducer<org.apache.hadoop.io.Writable,Mutation,org.apache.hadoop.io.Writable>

Convenience class that simply writes all values (which must be Put or Delete instances) passed to it out to the configured HBase table. This works in combination with TableOutputFormat, which actually does the writing to HBase.

Keys are passed along but ignored in TableOutputFormat. However, they can be used to control how your values will be divided up amongst the specified number of reducers.
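
For context, here is a minimal sketch of a mapper that could feed this reducer. The PassThroughMapper class name, the column family d, and the qualifier count are illustrative assumptions, not part of the HBase API:

```java
import java.io.IOException;

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;

public class PassThroughMapper
    extends TableMapper<ImmutableBytesWritable, Put> {

  @Override
  protected void map(ImmutableBytesWritable rowKey, Result columns, Context context)
      throws IOException, InterruptedException {
    // Build a Put for the same row; the Put itself carries the row and
    // columns, so IdentityTableReducer can write it out unchanged.
    Put put = new Put(rowKey.get());
    put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("count"), Bytes.toBytes(1L));
    // The key is ignored by TableOutputFormat but still controls which
    // reducer receives this Put.
    context.write(rowKey, put);
  }
}
```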
You can also use the TableMapReduceUtil class to set up the two classes in one step:

TableMapReduceUtil.initTableReducerJob("table", IdentityTableReducer.class, job);

This will also set the proper TableOutputFormat, which is given the table parameter. The Put or Delete define the row and columns implicitly.
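
A driver sketch under the same assumptions; only the initTableReducerJob call shown above comes from this class's documentation, while the source table name "source", the IdentityCopyJob class, and the PassThroughMapper are hypothetical:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.IdentityTableReducer;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class IdentityCopyJob {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "identity-copy");
    job.setJarByClass(IdentityCopyJob.class);

    // Reads rows from "source" and emits (row key, Put) pairs.
    TableMapReduceUtil.initTableMapperJob("source", new Scan(),
        PassThroughMapper.class, ImmutableBytesWritable.class, Put.class, job);

    // Sets IdentityTableReducer and TableOutputFormat for "table" in one step.
    TableMapReduceUtil.initTableReducerJob("table", IdentityTableReducer.class, job);

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```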
Field Summary

| Modifier and Type | Field and Description |
|---|---|
| private static org.apache.commons.logging.Log | LOG |
Constructor Summary

| Constructor and Description |
|---|
| IdentityTableReducer() |

Method Detail
public void reduce(org.apache.hadoop.io.Writable key, Iterable<Mutation> values, org.apache.hadoop.mapreduce.Reducer.Context context) throws IOException, InterruptedException
Writes each given record, consisting of the row key and the given values, to the configured OutputFormat. It is emitting the row key and each Put or Delete as separate pairs.

Overrides:
reduce in class org.apache.hadoop.mapreduce.Reducer<org.apache.hadoop.io.Writable,Mutation,org.apache.hadoop.io.Writable,Mutation>
Parameters:
key - The current row key.
values - The Put or Delete list for the given row.
context - The context of the reduce.
Throws:
IOException - When writing the record fails.
InterruptedException - When the job gets interrupted.
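
The behavior described above amounts to a pass-through loop over the mutations; the following is a minimal sketch (PassThroughReducer is a hypothetical re-implementation, not the shipped class):

```java
import java.io.IOException;

import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.io.Writable;

public class PassThroughReducer
    extends TableReducer<Writable, Mutation, Writable> {

  @Override
  public void reduce(Writable key, Iterable<Mutation> values, Context context)
      throws IOException, InterruptedException {
    for (Mutation putOrDelete : values) {
      // Emit one (key, mutation) pair per Put or Delete; TableOutputFormat
      // ignores the key and applies the mutation to the configured table.
      context.write(key, putOrDelete);
    }
  }
}
```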