Package | Description |
---|---|
org.apache.hadoop.hbase | |
org.apache.hadoop.hbase.io | |
org.apache.hadoop.hbase.mapred | Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods. |
org.apache.hadoop.hbase.mapreduce | Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods. |
org.apache.hadoop.hbase.mapreduce.replication | |
Modifier and Type | Field and Description |
---|---|
private static Set<ImmutableBytesWritable> | HTableDescriptor.RESERVED_KEYWORDS |
private static Set<ImmutableBytesWritable> | HColumnDescriptor.RESERVED_KEYWORDS |
private Map<ImmutableBytesWritable,ImmutableBytesWritable> | HTableDescriptor.values: A map which holds the metadata information of the table. |
private Map<ImmutableBytesWritable,ImmutableBytesWritable> | HColumnDescriptor.values |
Modifier and Type | Method and Description |
---|---|
Map<ImmutableBytesWritable,ImmutableBytesWritable> | HTableDescriptor.getValues(): Getter for fetching an unmodifiable HTableDescriptor.values map. |
Map<ImmutableBytesWritable,ImmutableBytesWritable> | HColumnDescriptor.getValues() |
Modifier and Type | Method and Description |
---|---|
private byte[] | HTableDescriptor.getValue(ImmutableBytesWritable key) |
private boolean | HTableDescriptor.isSomething(ImmutableBytesWritable key, boolean valueIfNull) |
void | HTableDescriptor.remove(ImmutableBytesWritable key): Remove metadata represented by the key from the HTableDescriptor.values map. |
HTableDescriptor | HTableDescriptor.setValue(ImmutableBytesWritable key, ImmutableBytesWritable value) |
private HTableDescriptor | HTableDescriptor.setValue(ImmutableBytesWritable key, String value) |
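
The values map listed above backs HTableDescriptor's table-level metadata. A minimal sketch of working with it through the public setValue/getValues/remove API (the table name and metadata key are illustrative):

```java
import java.util.Map;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;

public class TableMetadataExample {
  public static void main(String[] args) {
    HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("example_table"));

    // Store an arbitrary key/value pair in the descriptor's metadata map.
    htd.setValue(Bytes.toBytes("OWNER"), Bytes.toBytes("alice"));

    // getValues() exposes the backing map as an unmodifiable view keyed by
    // ImmutableBytesWritable.
    for (Map.Entry<ImmutableBytesWritable, ImmutableBytesWritable> e
        : htd.getValues().entrySet()) {
      System.out.println(Bytes.toString(e.getKey().get()) + " = "
          + Bytes.toString(e.getValue().get()));
    }

    // Metadata can be removed by key.
    htd.remove(new ImmutableBytesWritable(Bytes.toBytes("OWNER")));
  }
}
```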
Modifier and Type | Method and Description |
---|---|
CompoundConfiguration | CompoundConfiguration.addWritableMap(Map<ImmutableBytesWritable,ImmutableBytesWritable> map): Add ImmutableBytesWritable map to config list. |
Constructor and Description |
---|
HTableDescriptor(TableName name, HColumnDescriptor[] families, Map<ImmutableBytesWritable,ImmutableBytesWritable> values): INTERNAL private constructor used internally to create table descriptors for the catalog tables, hbase:meta and -ROOT-. |
Modifier and Type | Method and Description |
---|---|
int | ImmutableBytesWritable.compareTo(ImmutableBytesWritable that): Define the sort order of the BytesWritable. |
Constructor and Description |
---|
ImmutableBytesWritable(ImmutableBytesWritable ibw): Set the new ImmutableBytesWritable to the contents of the passed ibw. |
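
compareTo is what gives MapReduce output over HBase its byte-lexicographic row-key ordering. A small sketch of the sort order and the copy constructor (row keys are illustrative):

```java
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;

public class IbwExample {
  public static void main(String[] args) {
    ImmutableBytesWritable a = new ImmutableBytesWritable(Bytes.toBytes("row-a"));
    ImmutableBytesWritable b = new ImmutableBytesWritable(Bytes.toBytes("row-b"));

    // compareTo defines a byte-lexicographic sort order, so "row-a" < "row-b".
    System.out.println(a.compareTo(b) < 0);  // true

    // The copy constructor sets the new instance to the contents of the
    // passed ibw, so the two compare as equal.
    ImmutableBytesWritable copy = new ImmutableBytesWritable(a);
    System.out.println(copy.compareTo(a) == 0);  // true
  }
}
```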
Modifier and Type | Method and Description |
---|---|
protected ImmutableBytesWritable | GroupingTableMap.createGroupKey(byte[][] vals): Create a key by concatenating multiple column values. |
ImmutableBytesWritable | TableRecordReader.createKey() |
ImmutableBytesWritable | TableSnapshotInputFormat.TableSnapshotRecordReader.createKey() |
ImmutableBytesWritable | TableRecordReaderImpl.createKey() |
Modifier and Type | Method and Description |
---|---|
org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable,Result> | TableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter) |
org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable,Result> | MultiTableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter) |
org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable,Result> | TableInputFormatBase.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter): Builds a TableRecordReader. |
Modifier and Type | Method and Description |
---|---|
int | HRegionPartitioner.getPartition(ImmutableBytesWritable key, V2 value, int numPartitions) |
void | IdentityTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter): Pass the key, value to reduce. |
void | GroupingTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter): Extract the grouping columns from value to construct a new key. |
void | RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, Result values, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter) |
boolean | TableRecordReader.next(ImmutableBytesWritable key, Result value) |
boolean | TableSnapshotInputFormat.TableSnapshotRecordReader.next(ImmutableBytesWritable key, Result value) |
boolean | TableRecordReaderImpl.next(ImmutableBytesWritable key, Result value) |
void | IdentityTableReduce.reduce(ImmutableBytesWritable key, Iterator<Put> values, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Put> output, org.apache.hadoop.mapred.Reporter reporter): No aggregation, output pairs of (key, record). |
void | TableOutputFormat.TableRecordWriter.write(ImmutableBytesWritable key, Put value) |
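
In the old org.apache.hadoop.hbase.mapred API listed above, a table mapper receives the row key as an ImmutableBytesWritable and the row as a Result, and emits through an OutputCollector. A minimal sketch modeled on IdentityTableMap (the class name is hypothetical):

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapred.TableMap;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Hypothetical pass-through mapper in the style of IdentityTableMap.
public class PassThroughTableMap extends MapReduceBase
    implements TableMap<ImmutableBytesWritable, Result> {

  @Override
  public void map(ImmutableBytesWritable key, Result value,
      OutputCollector<ImmutableBytesWritable, Result> output,
      Reporter reporter) throws IOException {
    // Emit the row unchanged: key is the row key, value the scanned Result.
    output.collect(key, value);
  }
}
```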
Modifier and Type | Field and Description |
---|---|
private ImmutableBytesWritable | HashTable.ResultHasher.batchHash |
private ImmutableBytesWritable | HashTable.ResultHasher.batchStartKey |
private ImmutableBytesWritable | HashTable.HashMapper.currentRow |
(package private) ImmutableBytesWritable | SyncTable.SyncMapper.currentSourceHash |
private ImmutableBytesWritable | HashTable.TableHash.Reader.hash |
private ImmutableBytesWritable | TableRecordReaderImpl.key |
private ImmutableBytesWritable | MultithreadedTableMapper.SubMapRecordReader.key |
private ImmutableBytesWritable | HashTable.TableHash.Reader.key |
(package private) ImmutableBytesWritable | SyncTable.SyncMapper.nextSourceKey |
private ImmutableBytesWritable | TableSnapshotInputFormatImpl.RecordReader.row |
Modifier and Type | Field and Description |
---|---|
private TreeMap<byte[],ImmutableBytesWritable> | IndexBuilder.Map.indexes |
private Class<? extends org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable,Result,K2,V2>> | MultithreadedTableMapper.mapClass |
private org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable,Result,K2,V2> | MultithreadedTableMapper.MapRunner.mapper |
(package private) Map<ImmutableBytesWritable,BufferedMutator> | MultiTableOutputFormat.MultiTableRecordWriter.mutatorMap |
(package private) List<ImmutableBytesWritable> | HashTable.TableHash.partitions |
Modifier and Type | Method and Description |
---|---|
protected ImmutableBytesWritable | GroupingTableMapper.createGroupKey(byte[][] vals): Create a key by concatenating multiple column values. |
ImmutableBytesWritable | HashTable.ResultHasher.getBatchHash() |
ImmutableBytesWritable | HashTable.ResultHasher.getBatchStartKey() |
ImmutableBytesWritable | HashTable.TableHash.Reader.getCurrentHash(): Get the current hash. |
ImmutableBytesWritable | TableRecordReader.getCurrentKey(): Returns the current key. |
ImmutableBytesWritable | TableSnapshotInputFormat.TableSnapshotRegionRecordReader.getCurrentKey() |
ImmutableBytesWritable | TableRecordReaderImpl.getCurrentKey(): Returns the current key. |
ImmutableBytesWritable | MultithreadedTableMapper.SubMapRecordReader.getCurrentKey() |
ImmutableBytesWritable | TableSnapshotInputFormatImpl.RecordReader.getCurrentKey() |
ImmutableBytesWritable | HashTable.TableHash.Reader.getCurrentKey(): Get the current key. |
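
The getCurrentKey() implementations above follow the standard new-API RecordReader contract: advance with nextKeyValue(), then read the current ImmutableBytesWritable key and Result value. A sketch of the consuming loop the framework runs (the helper class and method are illustrative):

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.mapreduce.RecordReader;

public class ReaderLoop {
  // Drains a record reader the way the MapReduce framework does: advance,
  // then read the current row key and Result. The caller supplies an
  // already-initialized reader.
  static long countRows(RecordReader<ImmutableBytesWritable, Result> reader)
      throws IOException, InterruptedException {
    long rows = 0;
    while (reader.nextKeyValue()) {
      ImmutableBytesWritable rowKey = reader.getCurrentKey();
      Result row = reader.getCurrentValue();
      System.out.println(Bytes.toStringBinary(rowKey.get()) + ": "
          + row.size() + " cells");
      rows++;
    }
    return rows;
  }
}
```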
Modifier and Type | Method and Description |
---|---|
org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> | TableSnapshotInputFormat.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context) |
org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> | TableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context): Builds a TableRecordReader. |
org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> | MultiTableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context): Builds a TableRecordReader. |
(package private) static <V extends Cell> org.apache.hadoop.mapreduce.RecordWriter<ImmutableBytesWritable,V> | HFileOutputFormat2.createRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext context, org.apache.hadoop.mapreduce.OutputCommitter committer) |
static <K2,V2> Class<org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable,Result,K2,V2>> | MultithreadedTableMapper.getMapperClass(org.apache.hadoop.mapreduce.JobContext job): Get the application's mapper class. |
org.apache.hadoop.mapreduce.RecordWriter<ImmutableBytesWritable,Mutation> | MultiTableOutputFormat.getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext context) |
org.apache.hadoop.mapreduce.RecordWriter<ImmutableBytesWritable,Cell> | HFileOutputFormat2.getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext context) |
org.apache.hadoop.mapreduce.RecordWriter<ImmutableBytesWritable,KeyValue> | HFileOutputFormat.getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext context): Deprecated. |
private static List<ImmutableBytesWritable> | HFileOutputFormat2.getRegionStartKeys(RegionLocator table): Return the start keys of all of the regions in this table, as a list of ImmutableBytesWritable. |
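
Because these input formats produce RecordReader<ImmutableBytesWritable,Result>, scan-based mapreduce jobs are normally wired up through TableMapReduceUtil rather than by calling createRecordReader directly. A minimal job-setup sketch (the table name and scan settings are illustrative):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.IdentityTableMapper;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class ScanJobSetup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "scan-example");
    job.setJarByClass(ScanJobSetup.class);

    Scan scan = new Scan();
    scan.setCaching(500);        // larger scanner caching suits MR scans
    scan.setCacheBlocks(false);  // avoid polluting the block cache

    // Wires up TableInputFormat underneath: keys are ImmutableBytesWritable
    // row keys, values are Results.
    TableMapReduceUtil.initTableMapperJob(
        "example_table", scan, IdentityTableMapper.class,
        ImmutableBytesWritable.class, Result.class, job);

    job.setNumReduceTasks(0);
    job.setOutputFormatClass(NullOutputFormat.class);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```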
Modifier and Type | Method and Description |
---|---|
(package private) BufferedMutator | MultiTableOutputFormat.MultiTableRecordWriter.getBufferedMutator(ImmutableBytesWritable tableName) |
int | SimpleTotalOrderPartitioner.getPartition(ImmutableBytesWritable key, VALUE value, int reduces) |
int | HRegionPartitioner.getPartition(ImmutableBytesWritable key, VALUE value, int numPartitions): Gets the partition number for a given key (hence record) given the total number of partitions, i.e. the number of reduce tasks for the job. |
protected void | HashTable.HashMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) |
void | Import.KeyValueImporter.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) |
void | Import.Importer.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) |
protected void | SyncTable.SyncMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) |
protected void | IndexBuilder.Map.map(ImmutableBytesWritable rowKey, Result result, org.apache.hadoop.mapreduce.Mapper.Context context) |
void | IdentityTableMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper.Context context): Pass the key, value to reduce. |
void | GroupingTableMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper.Context context): Extract the grouping columns from value to construct a new key. |
void | RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, Result values, org.apache.hadoop.mapreduce.Mapper.Context context): Maps the data. |
void | CellCounter.CellCounterMapper.map(ImmutableBytesWritable row, Result values, org.apache.hadoop.mapreduce.Mapper.Context context): Maps the data. |
HashTable.TableHash.Reader | HashTable.TableHash.newReader(org.apache.hadoop.conf.Configuration conf, ImmutableBytesWritable startKey): Open a TableHash.Reader starting at the first hash at or after the given key. |
protected void | Import.Importer.processKV(ImmutableBytesWritable key, Result result, org.apache.hadoop.mapreduce.Mapper.Context context, Put put, Delete delete) |
protected void | KeyValueSortReducer.reduce(ImmutableBytesWritable row, Iterable<KeyValue> kvs, org.apache.hadoop.mapreduce.Reducer.Context context) |
protected void | PutSortReducer.reduce(ImmutableBytesWritable row, Iterable<Put> puts, org.apache.hadoop.mapreduce.Reducer.Context context) |
protected void | TextSortReducer.reduce(ImmutableBytesWritable rowKey, Iterable<org.apache.hadoop.io.Text> lines, org.apache.hadoop.mapreduce.Reducer.Context context) |
void | HashTable.ResultHasher.startBatch(ImmutableBytesWritable row) |
private void | SyncTable.SyncMapper.syncRange(org.apache.hadoop.mapreduce.Mapper.Context context, ImmutableBytesWritable startRow, ImmutableBytesWritable stopRow): Rescan the given range directly from the source and target tables. |
private static String | SyncTable.SyncMapper.toHex(ImmutableBytesWritable bytes) |
void | MultiTableOutputFormat.MultiTableRecordWriter.write(ImmutableBytesWritable tableName, Mutation action): Writes an action (Put or Delete) to the specified table. |
private void | Import.Importer.writeResult(ImmutableBytesWritable key, Result result, org.apache.hadoop.mapreduce.Mapper.Context context) |
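
The map methods above all share one shape: the row key arrives as an ImmutableBytesWritable and the row as a Result. A minimal RowCounter-style mapper in that shape (the class and counter names are hypothetical):

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.NullWritable;

// Hypothetical counting mapper: counts non-empty rows, emits nothing,
// so it can run with zero reduce tasks.
public class CountingMapper extends TableMapper<NullWritable, NullWritable> {

  public enum Counters { ROWS }

  @Override
  protected void map(ImmutableBytesWritable row, Result value, Context context)
      throws IOException, InterruptedException {
    if (!value.isEmpty()) {
      context.getCounter(Counters.ROWS).increment(1);
    }
  }
}
```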
Modifier and Type | Method and Description |
---|---|
(package private) static void | HFileOutputFormat2.configurePartitioner(org.apache.hadoop.mapreduce.Job job, List<ImmutableBytesWritable> splitPoints): Configure job with a TotalOrderPartitioner, partitioning against splitPoints. |
(package private) static void | HFileOutputFormat.configurePartitioner(org.apache.hadoop.mapreduce.Job job, List<ImmutableBytesWritable> splitPoints): Deprecated. Configure job with a TotalOrderPartitioner, partitioning against splitPoints. |
static <K2,V2> void | MultithreadedTableMapper.setMapperClass(org.apache.hadoop.mapreduce.Job job, Class<? extends org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable,Result,K2,V2>> cls): Set the application's mapper class. |
private static void | HFileOutputFormat2.writePartitions(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.Path partitionsPath, List<ImmutableBytesWritable> startKeys): Write out a SequenceFile that can be read by TotalOrderPartitioner that contains the split points in startKeys. |
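
configurePartitioner and writePartitions are the internals behind HFileOutputFormat2.configureIncrementalLoad, which partitions reducer output against the table's region start keys (the ImmutableBytesWritable list returned by getRegionStartKeys). A sketch of the public entry point, assuming the table name and output path are illustrative and the mapper is elided:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class BulkLoadSetup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "bulk-load-prepare");
    job.setJarByClass(BulkLoadSetup.class);
    // A mapper emitting (ImmutableBytesWritable, Put) pairs would be set here.

    TableName name = TableName.valueOf("example_table");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(name);
         RegionLocator locator = conn.getRegionLocator(name)) {
      // Sets HFileOutputFormat2 as the output format and configures the
      // TotalOrderPartitioner from the region start keys (internally via
      // configurePartitioner/writePartitions).
      HFileOutputFormat2.configureIncrementalLoad(job, table, locator);
    }
    FileOutputFormat.setOutputPath(job, new Path("/tmp/bulkload-out"));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```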
Constructor and Description |
---|
HashTable.TableHash.Reader(org.apache.hadoop.conf.Configuration conf, ImmutableBytesWritable startKey) |
Modifier and Type | Method and Description |
---|---|
void | VerifyReplication.Verifier.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper.Context context): Map method that compares every scanned row with the equivalent from a distant cluster. |