@InterfaceAudience.Public public class IdentityTableReducer extends TableReducer<org.apache.hadoop.io.Writable,Mutation,org.apache.hadoop.io.Writable>
Convenience class that simply writes all values (which must be Put or Delete instances) passed to it out to the configured HBase table. This works in combination with TableOutputFormat, which actually does the writing to HBase.

Keys are passed along but ignored in TableOutputFormat. However, they can be used to control how your values will be divided up amongst the specified number of reducers.
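For illustration, here is a minimal sketch of wiring the reducer and output format by hand; the class name ManualSetup and the table name "table" are placeholders, and a real job would also need the HBase mutation serializations registered, which the one-step helper shown below handles automatically:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.mapreduce.IdentityTableReducer;
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Job;

public class ManualSetup {
  public static Job createJob() throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // TableOutputFormat reads its target table name from this property.
    conf.set(TableOutputFormat.OUTPUT_TABLE, "table");

    Job job = Job.getInstance(conf, "identity-write");
    // The reducer passes every Put/Delete through unchanged...
    job.setReducerClass(IdentityTableReducer.class);
    // ...and TableOutputFormat performs the actual writes to HBase.
    job.setOutputFormatClass(TableOutputFormat.class);
    job.setOutputKeyClass(Writable.class);
    job.setOutputValueClass(Mutation.class);
    return job;
  }
}
```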
 You can also use the TableMapReduceUtil class to set up the two
 classes in one step:
 
 TableMapReduceUtil.initTableReducerJob("table", IdentityTableReducer.class, job);
The call will have set up a TableOutputFormat which is given the table parameter. The Put or Delete define the row and columns implicitly.
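A fuller driver sketch, under stated assumptions: the mapper LineToPutMapper, the column family cf, the qualifier q, and the table name "table" are all illustrative, and the map output classes are set explicitly so the shuffle knows how to serialize the Put values:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.IdentityTableReducer;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class Driver {

  // Illustrative mapper: turns each input line into a Put keyed by that line.
  static class LineToPutMapper
      extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
    @Override
    protected void map(LongWritable offset, Text line, Context context)
        throws IOException, InterruptedException {
      byte[] row = Bytes.toBytes(line.toString());
      Put put = new Put(row);
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), row);
      context.write(new ImmutableBytesWritable(row), put);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "identity-table-write");
    job.setJarByClass(Driver.class);
    job.setMapperClass(LineToPutMapper.class);
    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
    job.setMapOutputValueClass(Put.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    // One call wires up both IdentityTableReducer and TableOutputFormat.
    TableMapReduceUtil.initTableReducerJob("table", IdentityTableReducer.class, job);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```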
| Constructor and Description |
|---|
| IdentityTableReducer() |
| Modifier and Type | Method and Description |
|---|---|
| void | reduce(org.apache.hadoop.io.Writable key, Iterable<Mutation> values, org.apache.hadoop.mapreduce.Reducer.Context context) Writes each given record, consisting of the row key and the given values, to the configured OutputFormat. |
public IdentityTableReducer()
public void reduce(org.apache.hadoop.io.Writable key, Iterable<Mutation> values, org.apache.hadoop.mapreduce.Reducer.Context context) throws IOException, InterruptedException
Writes each given record, consisting of the row key and the given values, to the configured OutputFormat. It emits the row key and each Put or Delete as a separate pair.

Overrides:
reduce in class org.apache.hadoop.mapreduce.Reducer<org.apache.hadoop.io.Writable,Mutation,org.apache.hadoop.io.Writable,Mutation>

Parameters:
key - The current row key.
values - The Put or Delete list for the given row.
context - The context of the reduce.

Throws:
IOException - When writing the record fails.
InterruptedException - When the job gets interrupted.
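Behaviorally, the pass-through amounts to the following sketch (equivalent logic under the class's documented contract, not necessarily its verbatim source):

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.io.Writable;

// Each Put or Delete in the value list is re-emitted under the same
// row key as its own (key, mutation) output pair.
public class PassThroughReducer extends TableReducer<Writable, Mutation, Writable> {
  @Override
  public void reduce(Writable key, Iterable<Mutation> values, Context context)
      throws IOException, InterruptedException {
    for (Mutation putOrDelete : values) {
      context.write(key, putOrDelete);
    }
  }
}
```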