**Packages that use ImmutableBytesWritable**

Package | Description |
---|---|
org.apache.hadoop.hbase.io | |
org.apache.hadoop.hbase.mapred | Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods. |
org.apache.hadoop.hbase.mapreduce | Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods. |
org.apache.hadoop.hbase.mapreduce.replication | |
org.apache.hadoop.hbase.mob.mapreduce | |
**Methods in org.apache.hadoop.hbase.io with parameters of type ImmutableBytesWritable**

Modifier and Type | Method and Description |
---|---|
int | ImmutableBytesWritable.compareTo(ImmutableBytesWritable that) - Define the sort order of the BytesWritable. |
**Constructors in org.apache.hadoop.hbase.io with parameters of type ImmutableBytesWritable**

Constructor and Description |
---|
ImmutableBytesWritable(ImmutableBytesWritable ibw) - Set the new ImmutableBytesWritable to the contents of the passed ibw. |
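A minimal sketch of the two members above: the copy constructor and the byte-lexicographic ordering that compareTo defines. The class name and row keys are illustrative only:

```java
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;

public class IbwCompareExample {
  public static void main(String[] args) {
    ImmutableBytesWritable a = new ImmutableBytesWritable(Bytes.toBytes("row-a"));
    // Copy constructor: the new instance takes the contents of the passed ibw.
    ImmutableBytesWritable copyOfA = new ImmutableBytesWritable(a);
    ImmutableBytesWritable b = new ImmutableBytesWritable(Bytes.toBytes("row-b"));

    // compareTo imposes lexicographic byte order, so "row-a" sorts before "row-b".
    System.out.println(a.compareTo(b) < 0);   // true
    System.out.println(a.compareTo(copyOfA)); // 0: identical contents
  }
}
```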
**Methods in org.apache.hadoop.hbase.mapred that return ImmutableBytesWritable**

Modifier and Type | Method and Description |
---|---|
protected ImmutableBytesWritable | GroupingTableMap.createGroupKey(byte[][] vals) - Create a key by concatenating multiple column values. |
ImmutableBytesWritable | TableSnapshotInputFormat.TableSnapshotRecordReader.createKey() |
ImmutableBytesWritable | TableRecordReader.createKey() |
ImmutableBytesWritable | TableRecordReaderImpl.createKey() |
**Methods in org.apache.hadoop.hbase.mapred that return types with arguments of type ImmutableBytesWritable**

Modifier and Type | Method and Description |
---|---|
org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable,Result> | TableInputFormatBase.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter) - Builds a TableRecordReader. |
org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable,Result> | MultiTableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter) |
org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable,Result> | TableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter) |
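These old-API readers are normally wired up through org.apache.hadoop.hbase.mapred.TableMapReduceUtil rather than constructed directly. A minimal sketch, assuming "mytable" and the scanned column "cf:qual" as placeholder names:

```java
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapred.IdentityTableMap;
import org.apache.hadoop.hbase.mapred.TableMapReduceUtil;
import org.apache.hadoop.mapred.JobConf;

public class OldApiScanJob {
  public static JobConf configure(JobConf job) {
    // initTableMapJob points the job at the HBase TableInputFormat, whose
    // getRecordReader() feeds each mapper (ImmutableBytesWritable, Result) pairs.
    TableMapReduceUtil.initTableMapJob(
        "mytable", "cf:qual",              // placeholder table and column
        IdentityTableMap.class,
        ImmutableBytesWritable.class, Result.class,
        job);
    return job;
  }
}
```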
**Methods in org.apache.hadoop.hbase.mapred with parameters of type ImmutableBytesWritable**

Modifier and Type | Method and Description |
---|---|
int | HRegionPartitioner.getPartition(ImmutableBytesWritable key, V2 value, int numPartitions) |
void | RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, Result values, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter) |
void | GroupingTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter) - Extract the grouping columns from value to construct a new key. |
void | IdentityTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter) - Pass the key and value to reduce. |
boolean | TableSnapshotInputFormat.TableSnapshotRecordReader.next(ImmutableBytesWritable key, Result value) |
boolean | TableRecordReader.next(ImmutableBytesWritable key, Result value) |
boolean | TableRecordReaderImpl.next(ImmutableBytesWritable key, Result value) |
void | IdentityTableReduce.reduce(ImmutableBytesWritable key, Iterator<Put> values, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Put> output, org.apache.hadoop.mapred.Reporter reporter) - No aggregation; output pairs of (key, record). |
void | TableOutputFormat.TableRecordWriter.write(ImmutableBytesWritable key, Put value) |
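A minimal old-API mapper matching the map() signature above; it forwards each (key, value) pair unchanged, as IdentityTableMap does. The class name is invented for the sketch:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapred.TableMap;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Emits each (row key, Result) pair unchanged, so reducers see the raw scan rows.
public class PassThroughMap extends MapReduceBase
    implements TableMap<ImmutableBytesWritable, Result> {

  @Override
  public void map(ImmutableBytesWritable key, Result value,
      OutputCollector<ImmutableBytesWritable, Result> output,
      Reporter reporter) throws IOException {
    output.collect(key, value); // no transformation applied
  }
}
```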
**Method parameters in org.apache.hadoop.hbase.mapred with type arguments of type ImmutableBytesWritable**

Modifier and Type | Method and Description |
---|---|
void | RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, Result values, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter) |
void | GroupingTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter) - Extract the grouping columns from value to construct a new key. |
void | IdentityTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter) - Pass the key and value to reduce. |
void | IdentityTableReduce.reduce(ImmutableBytesWritable key, Iterator<Put> values, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Put> output, org.apache.hadoop.mapred.Reporter reporter) - No aggregation; output pairs of (key, record). |
**Fields in org.apache.hadoop.hbase.mapreduce declared as ImmutableBytesWritable**

Modifier and Type | Field and Description |
---|---|
private ImmutableBytesWritable | HashTable.ResultHasher.batchHash |
private ImmutableBytesWritable | HashTable.ResultHasher.batchStartKey |
private ImmutableBytesWritable | HashTable.HashMapper.currentRow |
(package private) ImmutableBytesWritable | SyncTable.SyncMapper.currentSourceHash |
private ImmutableBytesWritable | HashTable.TableHash.Reader.hash |
private ImmutableBytesWritable | MultithreadedTableMapper.SubMapRecordReader.key |
private ImmutableBytesWritable | TableRecordReaderImpl.key |
private ImmutableBytesWritable | HashTable.TableHash.Reader.key |
(package private) ImmutableBytesWritable | SyncTable.SyncMapper.nextSourceKey |
private ImmutableBytesWritable | TableSnapshotInputFormatImpl.RecordReader.row |
**Fields in org.apache.hadoop.hbase.mapreduce with type parameters of type ImmutableBytesWritable**

Modifier and Type | Field and Description |
---|---|
private TreeMap<byte[],ImmutableBytesWritable> | IndexBuilder.Map.indexes |
private Class<? extends org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable,Result,K2,V2>> | MultithreadedTableMapper.mapClass |
private org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable,Result,K2,V2> | MultithreadedTableMapper.MapRunner.mapper |
(package private) Map<ImmutableBytesWritable,BufferedMutator> | MultiTableOutputFormat.MultiTableRecordWriter.mutatorMap |
(package private) List<ImmutableBytesWritable> | HashTable.TableHash.partitions |
**Methods in org.apache.hadoop.hbase.mapreduce that return ImmutableBytesWritable**

Modifier and Type | Method and Description |
---|---|
protected ImmutableBytesWritable | GroupingTableMapper.createGroupKey(byte[][] vals) - Create a key by concatenating multiple column values. |
ImmutableBytesWritable | HashTable.ResultHasher.getBatchHash() |
ImmutableBytesWritable | HashTable.ResultHasher.getBatchStartKey() |
ImmutableBytesWritable | HashTable.TableHash.Reader.getCurrentHash() - Get the current hash. |
ImmutableBytesWritable | TableSnapshotInputFormat.TableSnapshotRegionRecordReader.getCurrentKey() |
ImmutableBytesWritable | TableRecordReader.getCurrentKey() - Returns the current key. |
ImmutableBytesWritable | MultithreadedTableMapper.SubMapRecordReader.getCurrentKey() |
ImmutableBytesWritable | TableSnapshotInputFormatImpl.RecordReader.getCurrentKey() |
ImmutableBytesWritable | TableRecordReaderImpl.getCurrentKey() - Returns the current key. |
ImmutableBytesWritable | HashTable.TableHash.Reader.getCurrentKey() - Get the current key. |
**Methods in org.apache.hadoop.hbase.mapreduce that return types with arguments of type ImmutableBytesWritable**

Modifier and Type | Method and Description |
---|---|
org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> | TableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context) - Builds a TableRecordReader. |
org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> | TableSnapshotInputFormat.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context) |
org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> | MultiTableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context) - Builds a TableRecordReader. |
(package private) static <V extends Cell> org.apache.hadoop.mapreduce.RecordWriter<ImmutableBytesWritable,V> | HFileOutputFormat2.createRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext context, org.apache.hadoop.mapreduce.OutputCommitter committer) |
static <K2,V2> Class<org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable,Result,K2,V2>> | MultithreadedTableMapper.getMapperClass(org.apache.hadoop.mapreduce.JobContext job) - Get the application's mapper class. |
org.apache.hadoop.mapreduce.RecordWriter<ImmutableBytesWritable,Cell> | HFileOutputFormat2.getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext context) |
org.apache.hadoop.mapreduce.RecordWriter<ImmutableBytesWritable,Mutation> | MultiTableOutputFormat.getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext context) |
private static List<ImmutableBytesWritable> | HFileOutputFormat2.getRegionStartKeys(List<RegionLocator> regionLocators, boolean writeMultipleTables) - Return the start keys of all of the regions in this table, as a list of ImmutableBytesWritable. |
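The HFileOutputFormat2 entries above come together during bulk-load job setup: configureIncrementalLoad() uses the private getRegionStartKeys() and configurePartitioner() helpers (see the later table) so that output HFiles align with region boundaries. A sketch, with "mytable" and the output path as placeholders:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class BulkLoadSetup {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(HBaseConfiguration.create(), "bulk-load");
    try (Connection conn = ConnectionFactory.createConnection(job.getConfiguration());
         Table table = conn.getTable(TableName.valueOf("mytable"));
         RegionLocator locator = conn.getRegionLocator(table.getName())) {
      // Partitions reducer output on region start keys so each reducer
      // writes HFiles for exactly one region of the target table.
      HFileOutputFormat2.configureIncrementalLoad(job, table, locator);
      FileOutputFormat.setOutputPath(job, new Path("/tmp/hfiles"));
    }
  }
}
```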
**Methods in org.apache.hadoop.hbase.mapreduce with parameters of type ImmutableBytesWritable**

Modifier and Type | Method and Description |
---|---|
static byte[] | MultiTableHFileOutputFormat.createCompositeKey(byte[] tableName, ImmutableBytesWritable suffix) - Alternate API that accepts an ImmutableBytesWritable for the suffix. |
static byte[] | MultiTableHFileOutputFormat.createCompositeKey(String tableName, ImmutableBytesWritable suffix) - Alternate API that accepts a String for the tableName and an ImmutableBytesWritable for the suffix. |
(package private) BufferedMutator | MultiTableOutputFormat.MultiTableRecordWriter.getBufferedMutator(ImmutableBytesWritable tableName) |
int | HRegionPartitioner.getPartition(ImmutableBytesWritable key, VALUE value, int numPartitions) - Gets the partition number for a given key (hence record) given the total number of partitions, i.e. the number of reduce tasks for the job. |
int | SimpleTotalOrderPartitioner.getPartition(ImmutableBytesWritable key, VALUE value, int reduces) |
void | Import.CellImporter.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) |
protected void | HashTable.HashMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) |
void | Import.KeyValueImporter.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) - Deprecated. |
protected void | SyncTable.SyncMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) |
void | Import.Importer.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) |
protected void | IndexBuilder.Map.map(ImmutableBytesWritable rowKey, Result result, org.apache.hadoop.mapreduce.Mapper.Context context) |
void | IdentityTableMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) - Pass the key and value to reduce. |
void | RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, Result values, org.apache.hadoop.mapreduce.Mapper.Context context) - Maps the data. |
void | GroupingTableMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) - Extract the grouping columns from value to construct a new key. |
void | Import.CellSortImporter.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) |
void | Import.KeyValueSortImporter.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) - Deprecated. |
void | CellCounter.CellCounterMapper.map(ImmutableBytesWritable row, Result values, org.apache.hadoop.mapreduce.Mapper.Context context) - Maps the data. |
HashTable.TableHash.Reader | HashTable.TableHash.newReader(org.apache.hadoop.conf.Configuration conf, ImmutableBytesWritable startKey) - Open a TableHash.Reader starting at the first hash at or after the given key. |
protected void | Import.Importer.processKV(ImmutableBytesWritable key, Result result, org.apache.hadoop.mapreduce.Mapper.Context context, Put put, Delete delete) |
protected void | CellSortReducer.reduce(ImmutableBytesWritable row, Iterable<Cell> kvs, org.apache.hadoop.mapreduce.Reducer.Context context) |
protected void | KeyValueSortReducer.reduce(ImmutableBytesWritable row, Iterable<KeyValue> kvs, org.apache.hadoop.mapreduce.Reducer.Context context) - Deprecated. |
protected void | PutSortReducer.reduce(ImmutableBytesWritable row, Iterable<Put> puts, org.apache.hadoop.mapreduce.Reducer.Context context) |
protected void | TextSortReducer.reduce(ImmutableBytesWritable rowKey, Iterable<org.apache.hadoop.io.Text> lines, org.apache.hadoop.mapreduce.Reducer.Context context) |
void | HashTable.ResultHasher.startBatch(ImmutableBytesWritable row) |
private void | SyncTable.SyncMapper.syncRange(org.apache.hadoop.mapreduce.Mapper.Context context, ImmutableBytesWritable startRow, ImmutableBytesWritable stopRow) - Rescan the given range directly from the source and target tables. |
private static String | SyncTable.SyncMapper.toHex(ImmutableBytesWritable bytes) |
void | MultiTableOutputFormat.MultiTableRecordWriter.write(ImmutableBytesWritable tableName, Mutation action) - Writes an action (Put or Delete) to the specified table. |
private void | Import.Importer.writeResult(ImmutableBytesWritable key, Result result, org.apache.hadoop.mapreduce.Mapper.Context context) |
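Most of the map() overloads above follow the TableMapper pattern: the framework hands each call one row key (ImmutableBytesWritable) plus its scanned Result. A minimal row-counting sketch in the style of RowCounter.RowCounterMapper; the class name and "mytable" are placeholders:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.mapreduce.Job;

// Counts scanned rows; map() is invoked once per row of the table.
public class CountingMapper extends TableMapper<ImmutableBytesWritable, Result> {
  enum Counters { ROWS }

  @Override
  protected void map(ImmutableBytesWritable row, Result value, Context context)
      throws IOException, InterruptedException {
    context.getCounter(Counters.ROWS).increment(1); // one increment per scanned row
  }

  // Driver wiring for the sketch; "mytable" is a placeholder table name.
  public static void configure(Job job) throws IOException {
    TableMapReduceUtil.initTableMapperJob(
        "mytable", new Scan(), CountingMapper.class,
        ImmutableBytesWritable.class, Result.class, job);
  }
}
```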
**Method parameters in org.apache.hadoop.hbase.mapreduce with type arguments of type ImmutableBytesWritable**

Modifier and Type | Method and Description |
---|---|
(package private) static void | HFileOutputFormat2.configurePartitioner(org.apache.hadoop.mapreduce.Job job, List<ImmutableBytesWritable> splitPoints, boolean writeMultipleTables) - Configure job with a TotalOrderPartitioner, partitioning against splitPoints. |
static <K2,V2> void | MultithreadedTableMapper.setMapperClass(org.apache.hadoop.mapreduce.Job job, Class<? extends org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable,Result,K2,V2>> cls) - Set the application's mapper class. |
private static void | HFileOutputFormat2.writePartitions(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.Path partitionsPath, List<ImmutableBytesWritable> startKeys, boolean writeMultipleTables) - Write out a SequenceFile, readable by TotalOrderPartitioner, that contains the split points in startKeys. |
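A sketch of how setMapperClass() above is used in practice: MultithreadedTableMapper runs a delegate mapper on several threads within one map task. CountingMapper refers to the earlier sketch, and "mytable" remains a placeholder:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class MultithreadedSetup {
  public static void configure(Job job) throws IOException {
    // The job runs MultithreadedTableMapper, which fans records out to
    // the delegate mapper class registered via setMapperClass().
    TableMapReduceUtil.initTableMapperJob(
        "mytable", new Scan(), MultithreadedTableMapper.class,
        ImmutableBytesWritable.class, Result.class, job);
    MultithreadedTableMapper.setMapperClass(job, CountingMapper.class);
    MultithreadedTableMapper.setNumberOfThreads(job, 8); // threads per map task
  }
}
```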
**Constructors in org.apache.hadoop.hbase.mapreduce with parameters of type ImmutableBytesWritable**

Constructor and Description |
---|
Reader(org.apache.hadoop.conf.Configuration conf, ImmutableBytesWritable startKey) |
**Methods in org.apache.hadoop.hbase.mapreduce.replication with parameters of type ImmutableBytesWritable**

Modifier and Type | Method and Description |
---|---|
void | VerifyReplication.Verifier.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) - Map method that compares every scanned row with the equivalent from a distant cluster. |
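A hedged launch sketch for the Verifier mapper above: assuming VerifyReplication's Tool interface, the job can be submitted through ToolRunner. The peer id "1" and table name "mytable" are placeholders:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication;
import org.apache.hadoop.util.ToolRunner;

public class VerifyReplicationLauncher {
  public static void main(String[] args) throws Exception {
    // Scans "mytable" locally and compares each row against replication
    // peer "1"; both values are placeholders for this sketch.
    int exit = ToolRunner.run(HBaseConfiguration.create(),
        new VerifyReplication(), new String[] { "1", "mytable" });
    System.exit(exit);
  }
}
```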
**Methods in org.apache.hadoop.hbase.mob.mapreduce with parameters of type ImmutableBytesWritable**

Modifier and Type | Method and Description |
---|---|
void | MobRefReporter.MobRefMapper.map(ImmutableBytesWritable r, Result columns, org.apache.hadoop.mapreduce.Mapper.Context context) |
**Method parameters in org.apache.hadoop.hbase.mob.mapreduce with type arguments of type ImmutableBytesWritable**

Modifier and Type | Method and Description |
---|---|
private org.apache.hadoop.io.Text | MobRefReporter.MobRefReducer.encodeRows(org.apache.hadoop.mapreduce.Reducer.Context context, org.apache.hadoop.io.Text key, Iterable<ImmutableBytesWritable> rows) - Reuses the passed Text key. |
void | MobRefReporter.MobRefReducer.reduce(org.apache.hadoop.io.Text key, Iterable<ImmutableBytesWritable> rows, org.apache.hadoop.mapreduce.Reducer.Context context) |