Uses of Class
org.apache.hadoop.hbase.io.ImmutableBytesWritable
Packages that use ImmutableBytesWritable:
- org.apache.hadoop.hbase
- org.apache.hadoop.hbase.io
- org.apache.hadoop.hbase.mapred: Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods.
- org.apache.hadoop.hbase.mapreduce: Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods.
- org.apache.hadoop.hbase.mapreduce.replication
- org.apache.hadoop.hbase.mob.mapreduce
Uses of ImmutableBytesWritable in org.apache.hadoop.hbase

Methods in org.apache.hadoop.hbase that return ImmutableBytesWritable:
- (package private) static ImmutableBytesWritable HFilePerformanceEvaluation.format(int i, ImmutableBytesWritable w)

Methods in org.apache.hadoop.hbase with parameters of type ImmutableBytesWritable:
- (package private) static ImmutableBytesWritable HFilePerformanceEvaluation.format(int i, ImmutableBytesWritable w)
- protected void ScanPerformanceEvaluation.MyMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, KEYOUT, VALUEOUT>.Context context)
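Both entries are package-private performance-evaluation helpers. Purely as an illustration of the pattern the listing suggests (not the actual implementation), a format-style helper fills the passed writable with a fixed-width key so that byte order matches numeric order:

import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical sketch in the spirit of HFilePerformanceEvaluation.format:
// turn a row number into a zero-padded key and store it in the passed writable.
final class KeyFormat {
  static ImmutableBytesWritable format(int i, ImmutableBytesWritable w) {
    w.set(Bytes.toBytes(String.format("%010d", i))); // zero-padded: lexicographic == numeric order
    return w;
  }
}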
Uses of ImmutableBytesWritable in org.apache.hadoop.hbase.io

Methods in org.apache.hadoop.hbase.io with parameters of type ImmutableBytesWritable:
- int ImmutableBytesWritable.compareTo(ImmutableBytesWritable that): Define the sort order of the BytesWritable.

Constructors in org.apache.hadoop.hbase.io with parameters of type ImmutableBytesWritable:
- ImmutableBytesWritable(ImmutableBytesWritable ibw): Set the new ImmutableBytesWritable to the contents of the passed ibw.
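Since every entry on this page keys off this one class, a short sketch of its observable behavior may help: it wraps a byte[] (optionally a sub-range of one), and compareTo orders instances lexicographically by unsigned byte value. A minimal, self-contained example, assuming the standard org.apache.hadoop.hbase.util.Bytes utility:

import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;

public class IbwExample {
  public static void main(String[] args) {
    // Wrap existing byte arrays; the wrapper stores a reference, it does not copy.
    ImmutableBytesWritable a = new ImmutableBytesWritable(Bytes.toBytes("row-0001"));
    ImmutableBytesWritable b = new ImmutableBytesWritable(Bytes.toBytes("row-0002"));

    // The copy constructor listed above duplicates the contents of the passed ibw.
    ImmutableBytesWritable copyOfA = new ImmutableBytesWritable(a);

    // compareTo defines a lexicographic byte-order sort, which is what makes this
    // class usable as a MapReduce key type.
    System.out.println(a.compareTo(b) < 0);   // true: "row-0001" sorts before "row-0002"
    System.out.println(a.compareTo(copyOfA)); // 0: equal contents
  }
}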
Uses of ImmutableBytesWritable in org.apache.hadoop.hbase.mapred

Methods in org.apache.hadoop.hbase.mapred that return ImmutableBytesWritable:
- protected ImmutableBytesWritable GroupingTableMap.createGroupKey(byte[][] vals): Create a key by concatenating multiple column values.
- ImmutableBytesWritable TableRecordReader.createKey()
- ImmutableBytesWritable TableRecordReaderImpl.createKey()
- ImmutableBytesWritable TableSnapshotInputFormat.TableSnapshotRecordReader.createKey()

Methods in org.apache.hadoop.hbase.mapred that return types with arguments of type ImmutableBytesWritable:
- org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable, Result> MultiTableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter)
- org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable, Result> TableInputFormatBase.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter): Builds a TableRecordReader.
- org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable, Result> TableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter)

Methods in org.apache.hadoop.hbase.mapred with parameters of type ImmutableBytesWritable:
- int HRegionPartitioner.getPartition(ImmutableBytesWritable key, V2 value, int numPartitions)
- void GroupingTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable, Result> output, org.apache.hadoop.mapred.Reporter reporter): Extract the grouping columns from value to construct a new key.
- void IdentityTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable, Result> output, org.apache.hadoop.mapred.Reporter reporter): Pass the key, value to reduce.
- void RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, Result values, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable, Result> output, org.apache.hadoop.mapred.Reporter reporter)
- boolean TableRecordReader.next(ImmutableBytesWritable key, Result value)
- boolean TableRecordReaderImpl.next(ImmutableBytesWritable key, Result value)
- boolean TableSnapshotInputFormat.TableSnapshotRecordReader.next(ImmutableBytesWritable key, Result value)
- void IdentityTableReduce.reduce(ImmutableBytesWritable key, Iterator<Put> values, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable, Put> output, org.apache.hadoop.mapred.Reporter reporter): No aggregation; output pairs of (key, record).
- void TableOutputFormat.TableRecordWriter.write(ImmutableBytesWritable key, Put value)

Method parameters in org.apache.hadoop.hbase.mapred with type arguments of type ImmutableBytesWritable:
- void GroupingTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable, Result> output, org.apache.hadoop.mapred.Reporter reporter): Extract the grouping columns from value to construct a new key.
- void IdentityTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable, Result> output, org.apache.hadoop.mapred.Reporter reporter): Pass the key, value to reduce.
- void RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, Result values, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable, Result> output, org.apache.hadoop.mapred.Reporter reporter)
- void IdentityTableReduce.reduce(ImmutableBytesWritable key, Iterator<Put> values, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable, Put> output, org.apache.hadoop.mapred.Reporter reporter): No aggregation; output pairs of (key, record).
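These are the deprecated org.apache.hadoop.mapred bindings. For orientation, a minimal sketch of a pass-through mapper against this old API, modeled on IdentityTableMap (the class name PassThroughTableMap is hypothetical):

import java.io.IOException;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapred.TableMap;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// The old-API TableMap interface fixes the input pair to
// <ImmutableBytesWritable, Result>; MapReduceBase supplies configure()/close().
public class PassThroughTableMap extends MapReduceBase
    implements TableMap<ImmutableBytesWritable, Result> {
  @Override
  public void map(ImmutableBytesWritable key, Result value,
      OutputCollector<ImmutableBytesWritable, Result> output, Reporter reporter)
      throws IOException {
    output.collect(key, value); // no transformation, mirroring IdentityTableMap.map
  }
}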
Uses of ImmutableBytesWritable in org.apache.hadoop.hbase.mapreduce

Fields in org.apache.hadoop.hbase.mapreduce declared as ImmutableBytesWritable:
- private ImmutableBytesWritable HashTable.ResultHasher.batchHash
- private ImmutableBytesWritable HashTable.ResultHasher.batchStartKey
- private ImmutableBytesWritable HashTable.HashMapper.currentRow
- (package private) ImmutableBytesWritable SyncTable.SyncMapper.currentSourceHash
- private ImmutableBytesWritable HashTable.TableHash.Reader.hash
- private ImmutableBytesWritable HashTable.TableHash.Reader.key
- private ImmutableBytesWritable MultithreadedTableMapper.SubMapRecordReader.key
- private ImmutableBytesWritable TableRecordReaderImpl.key
- (package private) ImmutableBytesWritable SyncTable.SyncMapper.nextSourceKey
- private ImmutableBytesWritable TableSnapshotInputFormatImpl.RecordReader.row

Fields in org.apache.hadoop.hbase.mapreduce with type parameters of type ImmutableBytesWritable:
- private TreeMap<byte[], ImmutableBytesWritable> IndexBuilder.Map.indexes
- private Class<? extends org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, K2, V2>> MultithreadedTableMapper.mapClass
- private org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, K2, V2> MultithreadedTableMapper.MapRunner.mapper
- (package private) Map<ImmutableBytesWritable, BufferedMutator> MultiTableOutputFormat.MultiTableRecordWriter.mutatorMap
- (package private) List<ImmutableBytesWritable> HashTable.TableHash.partitions

Methods in org.apache.hadoop.hbase.mapreduce that return ImmutableBytesWritable:
- protected ImmutableBytesWritable GroupingTableMapper.createGroupKey(byte[][] vals): Create a key by concatenating multiple column values.
- ImmutableBytesWritable HashTable.ResultHasher.getBatchHash()
- ImmutableBytesWritable HashTable.ResultHasher.getBatchStartKey()
- ImmutableBytesWritable HashTable.TableHash.Reader.getCurrentHash(): Get the current hash.
- ImmutableBytesWritable HashTable.TableHash.Reader.getCurrentKey(): Get the current key.
- ImmutableBytesWritable MultithreadedTableMapper.SubMapRecordReader.getCurrentKey()
- ImmutableBytesWritable TableRecordReader.getCurrentKey(): Returns the current key.
- ImmutableBytesWritable TableRecordReaderImpl.getCurrentKey(): Returns the current key.
- ImmutableBytesWritable TableSnapshotInputFormat.TableSnapshotRegionRecordReader.getCurrentKey()
- ImmutableBytesWritable TableSnapshotInputFormatImpl.RecordReader.getCurrentKey()

Methods in org.apache.hadoop.hbase.mapreduce that return types with arguments of type ImmutableBytesWritable:
- org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable, Result> MultiTableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context): Builds a TableRecordReader.
- org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable, Result> TableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context): Builds a TableRecordReader.
- org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable, Result> TableSnapshotInputFormat.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context)
- (package private) static <V extends Cell> org.apache.hadoop.mapreduce.RecordWriter<ImmutableBytesWritable, V> HFileOutputFormat2.createRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext context, org.apache.hadoop.mapreduce.OutputCommitter committer)
- static <K2, V2> Class<org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, K2, V2>> MultithreadedTableMapper.getMapperClass(org.apache.hadoop.mapreduce.JobContext job): Get the application's mapper class.
- org.apache.hadoop.mapreduce.RecordWriter<ImmutableBytesWritable, Cell> HFileOutputFormat2.getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext context)
- org.apache.hadoop.mapreduce.RecordWriter<ImmutableBytesWritable, Mutation> MultiTableOutputFormat.getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext context)
- private static List<ImmutableBytesWritable> HFileOutputFormat2.getRegionStartKeys(List<RegionLocator> regionLocators, boolean writeMultipleTables): Return the start keys of all of the regions in this table, as a list of ImmutableBytesWritable.

Methods in org.apache.hadoop.hbase.mapreduce with parameters of type ImmutableBytesWritable:
- static byte[] MultiTableHFileOutputFormat.createCompositeKey(byte[] tableName, ImmutableBytesWritable suffix): Alternate API which accepts an ImmutableBytesWritable for the suffix.
- static byte[] MultiTableHFileOutputFormat.createCompositeKey(String tableName, ImmutableBytesWritable suffix): Alternate API which accepts a String for the tableName and an ImmutableBytesWritable for the suffix.
- (package private) BufferedMutator MultiTableOutputFormat.MultiTableRecordWriter.getBufferedMutator(ImmutableBytesWritable tableName): tableName is the name of the table, as a string.
- int HRegionPartitioner.getPartition(ImmutableBytesWritable key, VALUE value, int numPartitions): Gets the partition number for a given key (hence record) given the total number of partitions.
- int SimpleTotalOrderPartitioner.getPartition(ImmutableBytesWritable key, VALUE value, int reduces)
- void CellCounter.CellCounterMapper.map(ImmutableBytesWritable row, Result values, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, org.apache.hadoop.io.Text, org.apache.hadoop.io.LongWritable>.Context context): Maps the data.
- void GroupingTableMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, ImmutableBytesWritable, Result>.Context context): Extract the grouping columns from value to construct a new key.
- protected void HashTable.HashMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, ImmutableBytesWritable, ImmutableBytesWritable>.Context context)
- void IdentityTableMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, ImmutableBytesWritable, Result>.Context context): Pass the key, value to reduce.
- void Import.CellImporter.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, ImmutableBytesWritable, Cell>.Context context)
- void Import.CellSortImporter.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, Import.CellWritableComparable, Cell>.Context context)
- void Import.Importer.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, ImmutableBytesWritable, Mutation>.Context context)
- protected void IndexBuilder.Map.map(ImmutableBytesWritable rowKey, Result result, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, ImmutableBytesWritable, Put>.Context context)
- void RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, Result values, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, ImmutableBytesWritable, Result>.Context context): Maps the data.
- protected void SyncTable.SyncMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, ImmutableBytesWritable, Mutation>.Context context)
- HashTable.TableHash.Reader HashTable.TableHash.newReader(org.apache.hadoop.conf.Configuration conf, ImmutableBytesWritable startKey): Open a TableHash.Reader starting at the first hash at or after the given key.
- protected void Import.Importer.processKV(ImmutableBytesWritable key, Result result, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, ImmutableBytesWritable, Mutation>.Context context, Put put, Delete delete)
- protected void CellSortReducer.reduce(ImmutableBytesWritable row, Iterable<Cell> kvs, org.apache.hadoop.mapreduce.Reducer<ImmutableBytesWritable, Cell, ImmutableBytesWritable, Cell>.Context context)
- protected void PutSortReducer.reduce(ImmutableBytesWritable row, Iterable<Put> puts, org.apache.hadoop.mapreduce.Reducer<ImmutableBytesWritable, Put, ImmutableBytesWritable, KeyValue>.Context context)
- protected void TextSortReducer.reduce(ImmutableBytesWritable rowKey, Iterable<org.apache.hadoop.io.Text> lines, org.apache.hadoop.mapreduce.Reducer<ImmutableBytesWritable, org.apache.hadoop.io.Text, ImmutableBytesWritable, KeyValue>.Context context)
- void HashTable.ResultHasher.startBatch(ImmutableBytesWritable row)
- private void SyncTable.SyncMapper.syncRange(org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, ImmutableBytesWritable, Mutation>.Context context, ImmutableBytesWritable startRow, ImmutableBytesWritable stopRow): Rescan the given range directly from the source and target tables.
- private static String SyncTable.SyncMapper.toHex(ImmutableBytesWritable bytes)
- void MultiTableOutputFormat.MultiTableRecordWriter.write(ImmutableBytesWritable tableName, Mutation action): Writes an action (Put or Delete) to the specified table.
- private void Import.Importer.writeResult(ImmutableBytesWritable key, Result result, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, ImmutableBytesWritable, Mutation>.Context context)

Method parameters in org.apache.hadoop.hbase.mapreduce with type arguments of type ImmutableBytesWritable:
- (package private) static void HFileOutputFormat2.configurePartitioner(org.apache.hadoop.mapreduce.Job job, List<ImmutableBytesWritable> splitPoints, boolean writeMultipleTables): Configure job with a TotalOrderPartitioner, partitioning against splitPoints.
- static <K2, V2> void MultithreadedTableMapper.setMapperClass(org.apache.hadoop.mapreduce.Job job, Class<? extends org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, K2, V2>> cls): Set the application's mapper class.
- private static void HFileOutputFormat2.writePartitions(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.Path partitionsPath, List<ImmutableBytesWritable> startKeys, boolean writeMultipleTables): Write out a SequenceFile that can be read by TotalOrderPartitioner and that contains the split points in startKeys.

Constructors in org.apache.hadoop.hbase.mapreduce with parameters of type ImmutableBytesWritable:
- (package private) Reader(org.apache.hadoop.conf.Configuration conf, ImmutableBytesWritable startKey)
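Nearly every map signature in this section takes an ImmutableBytesWritable row key because TableMapper fixes the mapper's input types to <ImmutableBytesWritable, Result>. A minimal sketch of a custom table mapper against the modern API (the class name RowLengthMapper is hypothetical):

import java.io.IOException;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;

// Emits (row key as text, number of cells in the row) for each scanned row.
public class RowLengthMapper extends TableMapper<Text, LongWritable> {
  private final Text outKey = new Text();
  private final LongWritable outValue = new LongWritable();

  @Override
  protected void map(ImmutableBytesWritable row, Result value, Context context)
      throws IOException, InterruptedException {
    // An ImmutableBytesWritable may wrap a sub-range of a larger array, so
    // honor getOffset()/getLength() instead of reading the whole backing array.
    outKey.set(row.get(), row.getOffset(), row.getLength());
    outValue.set(value.size());
    context.write(outKey, outValue);
  }
}

Such a mapper is typically wired into a job with TableMapReduceUtil.initTableMapperJob, which also configures the input format that produces these keys.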
Uses of ImmutableBytesWritable in org.apache.hadoop.hbase.mapreduce.replication

Methods in org.apache.hadoop.hbase.mapreduce.replication with parameters of type ImmutableBytesWritable:
- void VerifyReplication.Verifier.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, ImmutableBytesWritable, Put>.Context context): Map method that compares every scanned row with the equivalent from a distant cluster.
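As a hedged illustration of the comparison at the heart of such a verifier: Result.compareResults in the HBase client API throws when two results differ, which a mapper can translate into good/bad row counters (the Counters enum and helper below are illustrative, not the tool's exact bookkeeping):

import org.apache.hadoop.hbase.client.Result;

// Compare a locally scanned row against the matching row fetched from the peer
// cluster, reporting the outcome as a counter value.
final class RowCompare {
  enum Counters { GOODROWS, BADROWS }

  static Counters compare(Result local, Result remote) {
    try {
      Result.compareResults(local, remote); // throws if the cells differ
      return Counters.GOODROWS;
    } catch (Exception mismatch) {
      return Counters.BADROWS;
    }
  }
}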
Uses of ImmutableBytesWritable in org.apache.hadoop.hbase.mob.mapreduce

Methods in org.apache.hadoop.hbase.mob.mapreduce with parameters of type ImmutableBytesWritable:
- void MobRefReporter.MobRefMapper.map(ImmutableBytesWritable r, Result columns, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, org.apache.hadoop.io.Text, ImmutableBytesWritable>.Context context)

Method parameters in org.apache.hadoop.hbase.mob.mapreduce with type arguments of type ImmutableBytesWritable:
- private org.apache.hadoop.io.Text MobRefReporter.MobRefReducer.encodeRows(org.apache.hadoop.mapreduce.Reducer<org.apache.hadoop.io.Text, ImmutableBytesWritable, org.apache.hadoop.io.Text, org.apache.hadoop.io.Text>.Context context, org.apache.hadoop.io.Text key, Iterable<ImmutableBytesWritable> rows): Reuses the passed Text key.
- void MobRefReporter.MobRefReducer.reduce(org.apache.hadoop.io.Text key, Iterable<ImmutableBytesWritable> rows, org.apache.hadoop.mapreduce.Reducer<org.apache.hadoop.io.Text, ImmutableBytesWritable, org.apache.hadoop.io.Text, org.apache.hadoop.io.Text>.Context context)
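A sketch of a reducer that consumes Iterable<ImmutableBytesWritable> values the way MobRefReporter's reducer does, hex-encoding each wrapped row key while honoring the wrapped offset/length (the class name RowListReducer is hypothetical):

import java.io.IOException;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Each incoming value wraps one referencing row key; emit them as one
// comma-separated hex string per reduce key.
public class RowListReducer
    extends Reducer<Text, ImmutableBytesWritable, Text, Text> {
  private final Text out = new Text();

  @Override
  protected void reduce(Text key, Iterable<ImmutableBytesWritable> rows, Context context)
      throws IOException, InterruptedException {
    StringBuilder sb = new StringBuilder();
    for (ImmutableBytesWritable row : rows) {
      if (sb.length() > 0) sb.append(',');
      // Encode only the wrapped range, as SyncTable.SyncMapper.toHex does.
      sb.append(Bytes.toHex(row.get(), row.getOffset(), row.getLength()));
    }
    out.set(sb.toString());
    context.write(key, out);
  }
}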