| Package | Description | 
|---|---|
| org.apache.hadoop.hbase.io | |
| org.apache.hadoop.hbase.mapred | Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods. |
| org.apache.hadoop.hbase.mapreduce | Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods. |
| Modifier and Type | Method and Description | 
|---|---|
| int | ImmutableBytesWritable.compareTo(ImmutableBytesWritable that) Define the sort order of the BytesWritable. |
| Constructor and Description | 
|---|
| ImmutableBytesWritable(ImmutableBytesWritable ibw) Set the new ImmutableBytesWritable to the contents of the passed ibw. |
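As a quick, hedged illustration of the two members above (the class and row values below are made up, not part of HBase), the copy constructor takes the contents of another ImmutableBytesWritable, and compareTo defines a byte-wise lexicographic ordering:

```java
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;

public class ImmutableBytesWritableDemo {
  public static void main(String[] args) {
    ImmutableBytesWritable a = new ImmutableBytesWritable(Bytes.toBytes("row-001"));
    ImmutableBytesWritable b = new ImmutableBytesWritable(a); // copy of a's contents
    ImmutableBytesWritable c = new ImmutableBytesWritable(Bytes.toBytes("row-002"));

    System.out.println(a.compareTo(b)); // 0: identical byte contents
    System.out.println(a.compareTo(c)); // negative: "row-001" sorts before "row-002" byte-wise
  }
}
```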
| Modifier and Type | Method and Description | 
|---|---|
| protected ImmutableBytesWritable | GroupingTableMap.createGroupKey(byte[][] vals) Create a key by concatenating multiple column values. |
| ImmutableBytesWritable | TableRecordReader.createKey() |
| ImmutableBytesWritable | TableRecordReaderImpl.createKey() |
| Modifier and Type | Method and Description | 
|---|---|
| org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable,Result> | MultiTableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter) |
| org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable,Result> | TableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter) |
| org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable,Result> | TableInputFormatBase.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter) Builds a TableRecordReader. |
| Modifier and Type | Method and Description | 
|---|---|
| int | HRegionPartitioner.getPartition(ImmutableBytesWritable key, V2 value, int numPartitions) |
| void | IdentityTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter) Pass the key, value to reduce. |
| void | GroupingTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter) Extract the grouping columns from value to construct a new key. |
| boolean | TableRecordReader.next(ImmutableBytesWritable key, Result value) |
| boolean | TableRecordReaderImpl.next(ImmutableBytesWritable key, Result value) |
| void | IdentityTableReduce.reduce(ImmutableBytesWritable key, Iterator<Put> values, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Put> output, org.apache.hadoop.mapred.Reporter reporter) No aggregation; output pairs of (key, record). |
| Modifier and Type | Method and Description | 
|---|---|
| void | IdentityTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter) Pass the key, value to reduce. |
| void | GroupingTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter) Extract the grouping columns from value to construct a new key. |
| void | IdentityTableReduce.reduce(ImmutableBytesWritable key, Iterator<Put> values, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Put> output, org.apache.hadoop.mapred.Reporter reporter) No aggregation; output pairs of (key, record). |
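All of the old-API map and reduce signatures above receive the row key as an ImmutableBytesWritable. As a hedged sketch of a custom org.apache.hadoop.hbase.mapred mapper (the class name is illustrative; IdentityTableMap already ships essentially this behavior), a pass-through TableMap looks like:

```java
import java.io.IOException;

import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapred.TableMap;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class PassThroughTableMap extends MapReduceBase
    implements TableMap<ImmutableBytesWritable, Result> {

  @Override
  public void map(ImmutableBytesWritable key, Result value,
      OutputCollector<ImmutableBytesWritable, Result> output, Reporter reporter)
      throws IOException {
    // Emit the row key and the full Result unchanged, mirroring IdentityTableMap.
    output.collect(key, value);
  }
}
```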
| Modifier and Type | Method and Description | 
|---|---|
| protected ImmutableBytesWritable | GroupingTableMapper.createGroupKey(byte[][] vals) Create a key by concatenating multiple column values. |
| ImmutableBytesWritable | TableRecordReader.getCurrentKey() Returns the current key. |
| ImmutableBytesWritable | TableRecordReaderImpl.getCurrentKey() Returns the current key. |
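In the new org.apache.hadoop.hbase.mapreduce API, the row key returned by TableRecordReader.getCurrentKey() is what the framework hands to each mapper. A minimal sketch of a TableMapper that consumes it (the class name and output types are illustrative, not part of HBase):

```java
import java.io.IOException;

import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

public class RowKeyCountMapper extends TableMapper<Text, IntWritable> {

  private static final IntWritable ONE = new IntWritable(1);
  private final Text outKey = new Text();

  @Override
  protected void map(ImmutableBytesWritable key, Result value, Context context)
      throws IOException, InterruptedException {
    // The row key arrives as an ImmutableBytesWritable (the value returned by
    // TableRecordReader.getCurrentKey()); convert it to a String for output.
    outKey.set(Bytes.toString(key.get(), key.getOffset(), key.getLength()));
    context.write(outKey, ONE);
  }
}
```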
| Modifier and Type | Method and Description | 
|---|---|
| org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> | TableSnapshotInputFormat.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context) |
| org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> | MultiTableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context) Builds a TableRecordReader. |
| org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> | TableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context) Builds a TableRecordReader. |
| org.apache.hadoop.mapreduce.RecordWriter<ImmutableBytesWritable,Cell> | HFileOutputFormat2.getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext context) |
| org.apache.hadoop.mapreduce.RecordWriter<ImmutableBytesWritable,Mutation> | MultiTableOutputFormat.getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext context) |
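Because HFileOutputFormat2.getRecordWriter is keyed on ImmutableBytesWritable, any bulk-load job must emit it as the map output key. The driver-side sketch below shows one way to wire this up; the class names, table names, and the BulkLoadMapper are illustrative assumptions, not HBase-provided code:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class BulkLoadJobSetup {

  /** Hypothetical mapper: copies each source row into a Put keyed by its row key. */
  public static class BulkLoadMapper extends TableMapper<ImmutableBytesWritable, Put> {
    @Override
    protected void map(ImmutableBytesWritable row, Result value, Context context)
        throws IOException, InterruptedException {
      Put put = new Put(row.copyBytes());
      for (Cell cell : value.rawCells()) {
        put.add(cell);
      }
      context.write(row, put);
    }
  }

  public static Job createJob(String sourceTable, String targetTable, String hfileDir)
      throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "hfile-bulk-load");
    job.setJarByClass(BulkLoadJobSetup.class);

    // The map output key must be ImmutableBytesWritable because HFileOutputFormat2's
    // RecordWriter is keyed on it (see getRecordWriter above).
    TableMapReduceUtil.initTableMapperJob(sourceTable, new Scan(), BulkLoadMapper.class,
        ImmutableBytesWritable.class, Put.class, job);

    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf(targetTable));
         RegionLocator locator = conn.getRegionLocator(TableName.valueOf(targetTable))) {
      // Wires up HFileOutputFormat2, a total-order partitioner over the target table's
      // region boundaries, and the sort reducer matching the map output value class
      // (PutSortReducer for Put).
      HFileOutputFormat2.configureIncrementalLoad(job, table, locator);
    }
    FileOutputFormat.setOutputPath(job, new Path(hfileDir));
    return job;
  }
}
```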
| Modifier and Type | Method and Description | 
|---|---|
| static byte[] | MultiTableHFileOutputFormat.createCompositeKey(byte[] tableName, ImmutableBytesWritable suffix) Alternate API which accepts an ImmutableBytesWritable for the suffix. |
| static byte[] | MultiTableHFileOutputFormat.createCompositeKey(String tableName, ImmutableBytesWritable suffix) Alternate API which accepts a String for the tableName and an ImmutableBytesWritable for the suffix. |
| int | HRegionPartitioner.getPartition(ImmutableBytesWritable key, VALUE value, int numPartitions) Gets the partition number for a given key (hence record) given the total number of partitions, i.e. the number of reduce tasks for the job. |
| int | SimpleTotalOrderPartitioner.getPartition(ImmutableBytesWritable key, VALUE value, int reduces) |
| void | GroupingTableMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) Extract the grouping columns from value to construct a new key. |
| void | IdentityTableMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) Pass the key, value to reduce. |
| protected void | CellSortReducer.reduce(ImmutableBytesWritable row, Iterable<Cell> kvs, org.apache.hadoop.mapreduce.Reducer.Context context) |
| protected void | KeyValueSortReducer.reduce(ImmutableBytesWritable row, Iterable<org.apache.hadoop.hbase.KeyValue> kvs, org.apache.hadoop.mapreduce.Reducer.Context context) Deprecated. |
| protected void | PutSortReducer.reduce(ImmutableBytesWritable row, Iterable<Put> puts, org.apache.hadoop.mapreduce.Reducer.Context context) |
| protected void | TextSortReducer.reduce(ImmutableBytesWritable rowKey, Iterable<org.apache.hadoop.io.Text> lines, org.apache.hadoop.mapreduce.Reducer.Context context) |
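The createCompositeKey helpers above let a single job feed MultiTableHFileOutputFormat by prefixing each row key with its destination table name. A hedged mapper sketch follows; the class, table names, and routing rule are illustrative, and the driver-side configuration is omitted:

```java
import java.io.IOException;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.MultiTableHFileOutputFormat;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;

public class MultiTableRoutingMapper extends TableMapper<ImmutableBytesWritable, Cell> {

  private static final byte[] HOT_TABLE = Bytes.toBytes("traffic_hot");   // illustrative name
  private static final byte[] COLD_TABLE = Bytes.toBytes("traffic_cold"); // illustrative name

  @Override
  protected void map(ImmutableBytesWritable row, Result value, Context context)
      throws IOException, InterruptedException {
    // Pick a destination table per row (the rule here is arbitrary), then prefix the
    // row key with that table name so MultiTableHFileOutputFormat can route the cells
    // to per-table HFiles.
    byte[] target = value.size() > 100 ? HOT_TABLE : COLD_TABLE;
    ImmutableBytesWritable outKey =
        new ImmutableBytesWritable(MultiTableHFileOutputFormat.createCompositeKey(target, row));
    for (Cell cell : value.rawCells()) {
      context.write(outKey, cell);
    }
  }
}
```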