Uses of Class
org.apache.hadoop.hbase.client.Result
Packages that use Result
  org.apache.hadoop.hbase.client - Provides HBase Client
  org.apache.hadoop.hbase.mapred - Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods.
  org.apache.hadoop.hbase.mapreduce - Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods.
  org.apache.hadoop.hbase.replication - Multi Cluster Replication
  org.apache.hadoop.hbase.rest - HBase REST
  org.apache.hadoop.hbase.thrift - Provides an HBase Thrift service.
  org.apache.hadoop.hbase.thrift2 - Provides an HBase Thrift service.
Uses of Result in org.apache.hadoop.hbase
Methods in org.apache.hadoop.hbase that return Result
  static Result MetaTableAccessor.getCatalogFamilyRow(Connection connection, RegionInfo ri) - Returns the HConstants.CATALOG_FAMILY row from hbase:meta.
  Result HBaseTestingUtility.getClosestRowBefore(Region r, byte[] row, byte[] family) - Deprecated.
  static Result MetaTableAccessor.getRegionResult(Connection connection, RegionInfo regionInfo) - Gets the result in hbase:meta for the specified region.
  static Result MetaTableAccessor.scanByRegionEncodedName(Connection connection, String regionEncodedName) - Scans META table for a row whose key contains the specified regionEncodedName, returning a single related Result instance if any row is found, null otherwise.
Methods in org.apache.hadoop.hbase that return types with arguments of type Result
  MetaTableAccessor.fullScan(Connection connection, ClientMetaTableAccessor.QueryType type) - Performs a full scan of hbase:meta.
  MetaTableAccessor.fullScanRegions(Connection connection) - Performs a full scan of hbase:meta for regions.
Methods in org.apache.hadoop.hbase with parameters of type Result
  static PairOfSameType<RegionInfo> MetaTableAccessor.getDaughterRegions(Result data) - Returns the daughter regions by reading the corresponding columns of the catalog table Result.
  static RegionInfo CatalogFamilyFormat.getRegionInfo(Result data) - Returns the RegionInfo object from the column HConstants.CATALOG_FAMILY:HConstants.REGIONINFO_QUALIFIER of the catalog table Result.
  static RegionInfo CatalogFamilyFormat.getRegionInfo(Result r, byte[] qualifier) - Returns the RegionInfo object from the column HConstants.CATALOG_FAMILY and qualifier of the catalog table result.
  static HRegionLocation CatalogFamilyFormat.getRegionLocation(Result r, RegionInfo regionInfo, int replicaId) - Returns the HRegionLocation parsed from the given meta row Result for the given regionInfo and replicaId.
  static RegionLocations CatalogFamilyFormat.getRegionLocations(Result r) - Returns an HRegionLocationList extracted from the result.
  private static Optional<RegionLocations> ClientMetaTableAccessor.getRegionLocations(Result r) - Returns an HRegionLocationList extracted from the result.
  private static long CatalogFamilyFormat.getSeqNumDuringOpen(Result r, int replicaId) - The latest seqnum that the server writing to meta observed when opening the region.
  static ServerName CatalogFamilyFormat.getServerName(Result r, int replicaId) - Returns a ServerName from a catalog table Result.
  static TableState CatalogFamilyFormat.getTableState(Result r) - Decode table state from META Result.
  private static Optional<TableState> ClientMetaTableAccessor.getTableState(Result r)
  static ServerName MetaTableAccessor.getTargetServerName(Result r, int replicaId) - Returns the ServerName from the catalog table Result where the region is transitioning on.
  protected void ScanPerformanceEvaluation.MyMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, KEYOUT, VALUEOUT>.Context context)
  void ClientMetaTableAccessor.MetaTableScanResultConsumer.onNext(Result[] results, AdvancedScanResultConsumer.ScanController controller)
  (package private) void PerformanceEvaluation.TestBase.updateValueSize(Result r)
  (package private) void PerformanceEvaluation.TestBase.updateValueSize(Result[] rs)
  (package private) void PerformanceEvaluation.TestBase.updateValueSize(Result[] rs, long latency)
  (package private) void PerformanceEvaluation.TestBase.updateValueSize(Result r, long latency)
  boolean - Visit the catalog table row.
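CatalogFamilyFormat and MetaTableAccessor are HBase-internal (Private) classes, but the listing above shows how they fit together: one fetches the hbase:meta row for a region as a Result, the other decodes it. A minimal sketch, assuming an open Connection; the class and method names MetaRowSketch and reReadRegionInfo are illustrative, not HBase APIs.

import java.io.IOException;
import org.apache.hadoop.hbase.CatalogFamilyFormat;
import org.apache.hadoop.hbase.MetaTableAccessor;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.Result;

public final class MetaRowSketch {
  /** Re-reads the hbase:meta row of a region and decodes its RegionInfo. */
  static RegionInfo reReadRegionInfo(Connection connection, RegionInfo region) throws IOException {
    // getRegionResult(Connection, RegionInfo) returns the catalog row as a Result.
    Result metaRow = MetaTableAccessor.getRegionResult(connection, region);
    // Guard against a missing or empty row, then decode the info:regioninfo cell.
    return (metaRow == null || metaRow.isEmpty()) ? null : CatalogFamilyFormat.getRegionInfo(metaRow);
  }
}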
Uses of Result in org.apache.hadoop.hbase.backup.impl
Methods in org.apache.hadoop.hbase.backup.impl with parameters of type Result
  private BackupInfo BackupSystemTable.resultToBackupInfo(Result res) - Converts a Result to a BackupInfo.
Uses of Result in org.apache.hadoop.hbase.client
Fields in org.apache.hadoop.hbase.client declared as Result
  static final Result Result.EMPTY_RESULT
  static final Result[] ScanResultCache.EMPTY_RESULT_ARRAY
  private final Result CheckAndMutateResult.result
  private Result SingleResponse.Entry.result
Fields in org.apache.hadoop.hbase.client with type parameters of type Result
  BatchScanResultCache.partialResults
  CompleteScanResultCache.partialResults
  AsyncTableResultScanner.queue
Methods in org.apache.hadoop.hbase.client that return Result
  Result[] - Add the given results to cache and get valid results back.
  default Result - Appends values to one or more columns within a single row.
  private Result CompleteScanResultCache.combine()
  static Result - Instantiate a Result with the specified List of KeyValues.
  static Result Result.create(List<? extends Cell> cells, Boolean exists, boolean stale, boolean mayHaveMoreCellsInRow)
  static Result - Instantiate a Result with the specified array of KeyValues.
  (package private) static Result Result.create(ExtendedCell[] cells)
  (package private) static Result Result.create(ExtendedCell[] cells, Boolean exists, boolean stale)
  (package private) static Result Result.create(ExtendedCell[] cells, Boolean exists, boolean stale, boolean mayHaveMoreCellsInRow)
  private Result BatchScanResultCache.createCompletedResult()
  static Result Result.createCompleteResult(Iterable<Result> partialResults) - Forms a single result from the partial results in the partialResults list.
  static Result Result.createCursorResult(Cursor cursor)
  static Result ClientInternalHelper.createResult(ExtendedCell[] cells)
  static Result ClientInternalHelper.createResult(ExtendedCell[] cells, Boolean exists, boolean stale, boolean mayHaveMoreCellsInRow)
  (package private) static Result ConnectionUtils.filterCells(Result result, ExtendedCell keepCellsAfter)
  default Result[] - Extracts specified cells from the given rows, as a batch.
  default Result - Extracts certain cells from a given row.
  CheckAndMutateResult.getResult() - Used only for CheckAndMutate operations with Increment/Append.
  SingleResponse.Entry.getResult()
  default Result - Increments one or more columns within a single row.
  default Result Table.mutateRow(RowMutations rm) - Performs multiple mutations atomically on a single row.
  TableOverAsyncTable.mutateRow(RowMutations rm)
  AsyncTableResultScanner.next()
  ClientSideRegionScanner.next()
  ResultScanner.next() - Grab the next row's worth of values.
  default Result[] ResultScanner.next(int nbRows) - Get nbRows rows.
  TableSnapshotScanner.next()
  private Result[] CompleteScanResultCache.prependCombined(Result[] results, int length)
  private Result BatchScanResultCache.regroupResults(Result result)
  private static Result RawAsyncTableImpl.toResult(HBaseRpcController controller, org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.MutateResponse resp)
  private Result[] CompleteScanResultCache.updateNumberOfCompleteResultsAndReturn(Result... results)
Methods in org.apache.hadoop.hbase.client that return types with arguments of type Result
  Appends values to one or more columns within a single row.
  Extracts certain cells from the given rows, in batch.
  Extracts certain cells from a given row.
  private CompletableFuture<Result>
  default CompletableFuture<List<Result>> - A simple version for batch get.
  Increments one or more columns within a single row.
  ResultScanner.iterator()
  AsyncTable.mutateRow(RowMutations mutation) - Performs multiple mutations atomically on a single row.
  AsyncTableImpl.mutateRow(RowMutations mutation)
  RawAsyncTableImpl.mutateRow(RowMutations mutations)
  Return all the results that match the given scan object.
Methods in org.apache.hadoop.hbase.client with parameters of type Result
  Result[] - Add the given results to cache and get valid results back.
  private void AsyncTableResultScanner.addToCache(Result result)
  (package private) static long ConnectionUtils.calcEstimatedSize(Result rs)
  static void Result.compareResults(Result res1, Result res2) - Does a deep comparison of two Results, down to the byte arrays.
  static void Result.compareResults(Result res1, Result res2, boolean verbose) - Does a deep comparison of two Results, down to the byte arrays.
  void - Copy another Result into this one.
  (package private) static Result ConnectionUtils.filterCells(Result result, ExtendedCell keepCellsAfter)
  static ExtendedCell[] ClientInternalHelper.getExtendedRawCells(Result result)
  static long Result.getTotalSizeOfCells(Result result) - Get total size of raw cells.
  void AdvancedScanResultConsumer.onNext(Result[] results, AdvancedScanResultConsumer.ScanController controller) - Indicate that we have received some data.
  void AsyncTableResultScanner.onNext(Result[] results, AdvancedScanResultConsumer.ScanController controller)
  boolean - Return false if you want to terminate the scan process.
  private boolean AsyncNonMetaRegionLocator.onScanNext(TableName tableName, AsyncNonMetaRegionLocator.LocateRequest req, Result result)
  private Result[] CompleteScanResultCache.prependCombined(Result[] results, int length)
  private void AllowPartialScanResultCache.recordLastResult(Result result)
  private void BatchScanResultCache.recordLastResult(Result result)
  private Result BatchScanResultCache.regroupResults(Result result)
  private void AsyncScanSingleRegionRpcRetryingCaller.updateNextStartRowWhenError(Result result)
  private Result[] CompleteScanResultCache.updateNumberOfCompleteResultsAndReturn(Result... results)
  (package private) static void ConnectionUtils.updateResultsMetrics(ScanMetrics scanMetrics, Result[] rrs, boolean isRegionServerRemote)
Method parameters in org.apache.hadoop.hbase.client with type arguments of type Result
  static Result Result.createCompleteResult(Iterable<Result> partialResults) - Forms a single result from the partial results in the partialResults list.
Constructors in org.apache.hadoop.hbase.client with parameters of type Result
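In the public client API above, a Result is the unit returned by Table.get and handed out row by row by ResultScanner.next(). A minimal sketch, assuming an hbase-site.xml on the classpath and an existing table t1 with column family cf and qualifier q (all illustrative names, not taken from this page):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class ResultClientSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("t1"))) {
      // Single-row read: Table.get returns a Result (possibly empty, never null).
      Result r = table.get(new Get(Bytes.toBytes("row1")));
      byte[] value = r.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q"));
      System.out.println(value == null ? "<no cell>" : Bytes.toString(value));

      // Scan: each Result from the ResultScanner holds one row's worth of cells.
      try (ResultScanner scanner = table.getScanner(new Scan())) {
        for (Result row : scanner) {
          System.out.println(Bytes.toStringBinary(row.getRow()) + " has " + row.size() + " cells");
        }
      }
    }
  }
}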
Uses of Result in org.apache.hadoop.hbase.coprocessor
Methods in org.apache.hadoop.hbase.coprocessor that return Result
  default Result RegionObserver.postAppend(ObserverContext<? extends RegionCoprocessorEnvironment> c, Append append, Result result) - Deprecated since 3.0.0 and will be removed in 4.0.0.
  default Result RegionObserver.postAppend(ObserverContext<? extends RegionCoprocessorEnvironment> c, Append append, Result result, WALEdit edit) - Called after Append.
  default Result RegionObserver.postIncrement(ObserverContext<? extends RegionCoprocessorEnvironment> c, Increment increment, Result result) - Deprecated since 3.0.0 and will be removed in 4.0.0.
  default Result RegionObserver.postIncrement(ObserverContext<? extends RegionCoprocessorEnvironment> c, Increment increment, Result result, WALEdit edit) - Called after increment.
  default Result RegionObserver.preAppend(ObserverContext<? extends RegionCoprocessorEnvironment> c, Append append) - Deprecated since 3.0.0 and will be removed in 4.0.0.
  default Result RegionObserver.preAppend(ObserverContext<? extends RegionCoprocessorEnvironment> c, Append append, WALEdit edit) - Called before Append.
  default Result RegionObserver.preAppendAfterRowLock(ObserverContext<? extends RegionCoprocessorEnvironment> c, Append append) - Deprecated since 3.0.0 and will be removed in 4.0.0.
  default Result RegionObserver.preIncrement(ObserverContext<? extends RegionCoprocessorEnvironment> c, Increment increment) - Deprecated since 3.0.0 and will be removed in 4.0.0.
  default Result RegionObserver.preIncrement(ObserverContext<? extends RegionCoprocessorEnvironment> c, Increment increment, WALEdit edit) - Called before Increment.
  default Result RegionObserver.preIncrementAfterRowLock(ObserverContext<? extends RegionCoprocessorEnvironment> c, Increment increment) - Deprecated since 3.0.0 and will be removed in 4.0.0.
Methods in org.apache.hadoop.hbase.coprocessor with parameters of type Result
  default Result RegionObserver.postAppend(ObserverContext<? extends RegionCoprocessorEnvironment> c, Append append, Result result) - Deprecated since 3.0.0 and will be removed in 4.0.0.
  default Result RegionObserver.postAppend(ObserverContext<? extends RegionCoprocessorEnvironment> c, Append append, Result result, WALEdit edit) - Called after Append.
  default Result RegionObserver.postIncrement(ObserverContext<? extends RegionCoprocessorEnvironment> c, Increment increment, Result result) - Deprecated since 3.0.0 and will be removed in 4.0.0.
  default Result RegionObserver.postIncrement(ObserverContext<? extends RegionCoprocessorEnvironment> c, Increment increment, Result result, WALEdit edit) - Called after increment.
Method parameters in org.apache.hadoop.hbase.coprocessor with type arguments of type Result
  default boolean RegionObserver.postScannerNext(ObserverContext<? extends RegionCoprocessorEnvironment> c, InternalScanner s, List<Result> result, int limit, boolean hasNext) - Called after the client asks for the next row on a scanner.
  default boolean RegionObserver.preScannerNext(ObserverContext<? extends RegionCoprocessorEnvironment> c, InternalScanner s, List<Result> result, int limit, boolean hasNext) - Called before the client asks for the next row on a scanner.
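The RegionObserver hooks above receive or return the Result produced by an Append or Increment. A minimal sketch of an observer overriding the post-increment hook listed above; the class name is illustrative and the hook simply passes the Result through unchanged.

import java.io.IOException;
import java.util.Optional;
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;
import org.apache.hadoop.hbase.wal.WALEdit;

public class IncrementAuditObserver implements RegionCoprocessor, RegionObserver {
  @Override
  public Optional<RegionObserver> getRegionObserver() {
    return Optional.of(this);
  }

  @Override
  public Result postIncrement(ObserverContext<? extends RegionCoprocessorEnvironment> c,
      Increment increment, Result result, WALEdit edit) throws IOException {
    // Returning the Result unchanged keeps normal behaviour; a coprocessor may
    // substitute a different Result here before it is sent back to the client.
    return result;
  }
}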
Uses of Result in org.apache.hadoop.hbase.coprocessor.example
Methods in org.apache.hadoop.hbase.coprocessor.example that return Result
  WriteHeavyIncrementObserver.preIncrement(ObserverContext<? extends RegionCoprocessorEnvironment> c, Increment increment)
Uses of Result in org.apache.hadoop.hbase.mapred
Methods in org.apache.hadoop.hbase.mapred that return Result
  TableRecordReader.createValue()
  TableRecordReaderImpl.createValue()
  TableSnapshotInputFormat.TableSnapshotRecordReader.createValue()
Methods in org.apache.hadoop.hbase.mapred that return types with arguments of type Result
  org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable, Result> MultiTableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter)
  org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable, Result> TableInputFormatBase.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter) - Builds a TableRecordReader.
  org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable, Result> TableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter)
Methods in org.apache.hadoop.hbase.mapred with parameters of type Result
  protected byte[][] GroupingTableMap.extractKeyValues(Result r) - Extract column values from the current record.
  void GroupingTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable, Result> output, org.apache.hadoop.mapred.Reporter reporter) - Extract the grouping columns from value to construct a new key.
  void IdentityTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable, Result> output, org.apache.hadoop.mapred.Reporter reporter) - Pass the key, value to reduce.
  void RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, Result values, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable, Result> output, org.apache.hadoop.mapred.Reporter reporter)
  boolean TableRecordReader.next(ImmutableBytesWritable key, Result value)
  boolean TableRecordReaderImpl.next(ImmutableBytesWritable key, Result value)
  boolean TableSnapshotInputFormat.TableSnapshotRecordReader.next(ImmutableBytesWritable key, Result value)
Method parameters in org.apache.hadoop.hbase.mapred with type arguments of type Result
  void GroupingTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable, Result> output, org.apache.hadoop.mapred.Reporter reporter) - Extract the grouping columns from value to construct a new key.
  void IdentityTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable, Result> output, org.apache.hadoop.mapred.Reporter reporter) - Pass the key, value to reduce.
  void RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, Result values, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable, Result> output, org.apache.hadoop.mapred.Reporter reporter)
Uses of Result in org.apache.hadoop.hbase.mapreduce
Fields in org.apache.hadoop.hbase.mapreduce declared as Result
  private Result SyncTable.SyncMapper.CellScanner.currentRowResult
  private Result SyncTable.SyncMapper.CellScanner.nextRowResult
  private Result TableSnapshotInputFormatImpl.RecordReader.result
  private Result MultithreadedTableMapper.SubMapRecordReader.value
  private Result TableRecordReaderImpl.value
Fields in org.apache.hadoop.hbase.mapreduce with type parameters of type Result
  private Class<? extends org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, K2, V2>> MultithreadedTableMapper.mapClass
  private org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, K2, V2> MultithreadedTableMapper.MapRunner.mapper
  SyncTable.SyncMapper.CellScanner.results
Methods in org.apache.hadoop.hbase.mapreduce that return Result
  ResultSerialization.Result94Deserializer.deserialize(Result mutation)
  ResultSerialization.ResultDeserializer.deserialize(Result mutation)
  MultithreadedTableMapper.SubMapRecordReader.getCurrentValue()
  TableRecordReader.getCurrentValue() - Returns the current value.
  TableRecordReaderImpl.getCurrentValue() - Returns the current value.
  TableSnapshotInputFormat.TableSnapshotRegionRecordReader.getCurrentValue()
  TableSnapshotInputFormatImpl.RecordReader.getCurrentValue()
Methods in org.apache.hadoop.hbase.mapreduce that return types with arguments of type Result
  org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable, Result> MultiTableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context) - Builds a TableRecordReader.
  org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable, Result> TableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context) - Builds a TableRecordReader.
  org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable, Result> TableSnapshotInputFormat.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context)
  org.apache.hadoop.io.serializer.Deserializer<Result> ResultSerialization.getDeserializer(Class<Result> c)
  static <K2, V2> Class<org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, K2, V2>> MultithreadedTableMapper.getMapperClass(org.apache.hadoop.mapreduce.JobContext job) - Get the application's mapper class.
  org.apache.hadoop.io.serializer.Serializer<Result> ResultSerialization.getSerializer(Class<Result> c)
Methods in org.apache.hadoop.hbase.mapreduce with parameters of type Result
  ResultSerialization.Result94Deserializer.deserialize(Result mutation)
  ResultSerialization.ResultDeserializer.deserialize(Result mutation)
  protected byte[][] GroupingTableMapper.extractKeyValues(Result r) - Extract column values from the current record.
  void HashTable.ResultHasher.hashResult(Result result)
  void CellCounter.CellCounterMapper.map(ImmutableBytesWritable row, Result values, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, org.apache.hadoop.io.Text, org.apache.hadoop.io.LongWritable>.Context context) - Maps the data.
  void GroupingTableMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, ImmutableBytesWritable, Result>.Context context) - Extract the grouping columns from value to construct a new key.
  protected void HashTable.HashMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, ImmutableBytesWritable, ImmutableBytesWritable>.Context context)
  void IdentityTableMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, ImmutableBytesWritable, Result>.Context context) - Pass the key, value to reduce.
  void Import.CellImporter.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, ImmutableBytesWritable, Cell>.Context context)
  void Import.CellSortImporter.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, Import.CellWritableComparable, Cell>.Context context)
  void Import.Importer.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, ImmutableBytesWritable, Mutation>.Context context)
  protected void IndexBuilder.Map.map(ImmutableBytesWritable rowKey, Result result, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, ImmutableBytesWritable, Put>.Context context)
  void RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, Result values, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, ImmutableBytesWritable, Result>.Context context) - Maps the data.
  protected void SyncTable.SyncMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, ImmutableBytesWritable, Mutation>.Context context)
  protected void Import.Importer.processKV(ImmutableBytesWritable key, Result result, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, ImmutableBytesWritable, Mutation>.Context context, Put put, Delete delete)
  private void Import.Importer.writeResult(ImmutableBytesWritable key, Result result, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, ImmutableBytesWritable, Mutation>.Context context)
Method parameters in org.apache.hadoop.hbase.mapreduce with type arguments of type Result
  org.apache.hadoop.io.serializer.Deserializer<Result> ResultSerialization.getDeserializer(Class<Result> c)
  org.apache.hadoop.io.serializer.Serializer<Result> ResultSerialization.getSerializer(Class<Result> c)
  static <K2, V2> void MultithreadedTableMapper.setMapperClass(org.apache.hadoop.mapreduce.Job job, Class<? extends org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, K2, V2>> cls) - Set the application's mapper class.
Constructor parameters in org.apache.hadoop.hbase.mapreduce with type arguments of type Result
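In the org.apache.hadoop.hbase.mapreduce package, a TableMapper receives each scanned row as a Result keyed by its row key. A minimal sketch of such a mapper counting cells per row; job wiring (for example via TableMapReduceUtil.initTableMapperJob) is omitted and the class name is illustrative.

import java.io.IOException;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;

public class CellsPerRowMapper extends TableMapper<Text, LongWritable> {
  @Override
  protected void map(ImmutableBytesWritable rowKey, Result value, Context context)
      throws IOException, InterruptedException {
    // Each call hands the mapper one row of the scanned table as a Result.
    context.write(new Text(Bytes.toStringBinary(value.getRow())), new LongWritable(value.size()));
  }
}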
Uses of Result in org.apache.hadoop.hbase.mapreduce.replication
Fields in org.apache.hadoop.hbase.mapreduce.replication declared as Result
  private Result VerifyReplication.Verifier.currentCompareRowInPeerTable
  private Result VerifyReplicationRecompareRunnable.replicatedResult
  private Result VerifyReplicationRecompareRunnable.sourceResult
Methods in org.apache.hadoop.hbase.mapreduce.replication with parameters of type Result
  private void VerifyReplication.Verifier.logFailRowAndIncreaseCounter(org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, ImmutableBytesWritable, Put>.Context context, VerifyReplication.Verifier.Counters counter, Result row, Result replicatedRow)
  void VerifyReplication.Verifier.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, ImmutableBytesWritable, Put>.Context context) - Map method that compares every scanned row with the equivalent from a distant cluster.
  private boolean VerifyReplicationRecompareRunnable.matches(Result original, Result updated, VerifyReplication.Verifier.Counters failCounter)
Constructors in org.apache.hadoop.hbase.mapreduce.replication with parameters of type Result
  VerifyReplicationRecompareRunnable(org.apache.hadoop.mapreduce.Mapper.Context context, Result sourceResult, Result replicatedResult, VerifyReplication.Verifier.Counters originalCounter, String delimiter, Scan tableScan, Table sourceTable, Table replicatedTable, int reCompareTries, int sleepMsBeforeReCompare, int reCompareBackoffExponent, boolean verbose)
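VerifyReplication ultimately compares a source-cluster Result against the peer-cluster Result for the same row; the deep comparison itself is available publicly as Result.compareResults, listed in the client section above. A minimal sketch of that comparison; the wrapper class and method names are illustrative.

import org.apache.hadoop.hbase.client.Result;

public final class ResultCompareSketch {
  /** Returns true when both Results carry identical cells for the row. */
  static boolean sameRow(Result source, Result replicated) {
    try {
      // compareResults throws an exception describing the first mismatch when the
      // two Results differ, down to the byte arrays of their cells.
      Result.compareResults(source, replicated);
      return true;
    } catch (Exception mismatch) {
      return false;
    }
  }
}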
Uses of Result in org.apache.hadoop.hbase.master
Methods in org.apache.hadoop.hbase.master that return Result
Methods in org.apache.hadoop.hbase.master with parameters of type Result
  private void TableNamespaceManager.addToCache(Result result, byte[] family, byte[] qualifier)
  private void SnapshotOfRegionAssignmentFromMeta.processMetaRecord(Result result)
Uses of Result in org.apache.hadoop.hbase.master.assignment
Methods in org.apache.hadoop.hbase.master.assignment that return Result
  private Result RegionStateStore.getRegionCatalogResult(RegionInfo region)
Methods in org.apache.hadoop.hbase.master.assignment with parameters of type Result
  static RegionState.State RegionStateStore.getRegionState(Result r, RegionInfo regionInfo) - Pull the region state from a catalog table Result.
  static void RegionStateStore.visitMetaEntry(RegionStateStore.RegionStateVisitor visitor, Result result)
  void AssignmentManager.RegionMetaLoadingVisitor.visitRegionState(Result result, RegionInfo regionInfo, RegionState.State state, ServerName regionLocation, ServerName lastHost, long openSeqNum)
  void RegionStateStore.RegionStateVisitor.visitRegionState(Result result, RegionInfo regionInfo, RegionState.State state, ServerName regionLocation, ServerName lastHost, long openSeqNum)
Uses of Result in org.apache.hadoop.hbase.master.http
Methods in org.apache.hadoop.hbase.master.http with parameters of type Result
Constructors in org.apache.hadoop.hbase.master.http with parameters of type Result
Uses of Result in org.apache.hadoop.hbase.master.janitor
Fields in org.apache.hadoop.hbase.master.janitor with type parameters of type Result
  (package private) final Map<RegionInfo, Result> CatalogJanitorReport.mergedRegions
  (package private) final Map<RegionInfo, Result> CatalogJanitorReport.splitParents
Methods in org.apache.hadoop.hbase.master.janitor that return types with arguments of type Result
Methods in org.apache.hadoop.hbase.master.janitor with parameters of type Result
  private boolean CatalogJanitor.cleanParent(RegionInfo parent, Result rowContent) - If daughters no longer hold reference to the parents, delete the parent.
  (package private) static boolean CatalogJanitor.cleanParent(MasterServices services, RegionInfo parent, Result rowContent)
  private RegionInfo ReportMakingVisitor.metaTableConsistencyCheck(Result metaTableRow) - Check row.
Uses of Result in org.apache.hadoop.hbase.master.procedure
Methods in org.apache.hadoop.hbase.master.procedure with parameters of type Result
Uses of Result in org.apache.hadoop.hbase.master.region
Methods in org.apache.hadoop.hbase.master.region that return Result
Uses of Result in org.apache.hadoop.hbase.master.replication
Methods in org.apache.hadoop.hbase.master.replication with parameters of type Result
  private void OfflineTableReplicationQueueStorage.loadHFileRefs(Result result)
  private void OfflineTableReplicationQueueStorage.loadLastSequenceIds(Result result)
  private void OfflineTableReplicationQueueStorage.loadOffsets(Result result)
Uses of Result in org.apache.hadoop.hbase.mob.mapreduce
Methods in org.apache.hadoop.hbase.mob.mapreduce with parameters of type Result
  void MobRefReporter.MobRefMapper.map(ImmutableBytesWritable r, Result columns, org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable, Result, org.apache.hadoop.io.Text, ImmutableBytesWritable>.Context context)
Uses of Result in org.apache.hadoop.hbase.quotas
Methods in org.apache.hadoop.hbase.quotas that return Result
  protected static Result[] QuotaTableUtil.doGet(Connection connection, List<Get> gets)
  protected static Result QuotaTableUtil.doGet(Connection connection, Get get)
Methods in org.apache.hadoop.hbase.quotas with parameters of type Result
  void DefaultOperationQuota.addGetResult(Result result)
  void NoopOperationQuota.addGetResult(Result result)
  void OperationQuota.addGetResult(Result result) - Add a get result.
  static long QuotaUtil.calculateResultSize(Result result)
  static void QuotaTableUtil.extractQuotaSnapshot(Result result, Map<TableName, SpaceQuotaSnapshot> snapshots) - Extracts the SpaceViolationPolicy and TableName from the provided Result and adds them to the given Map.
  (package private) void SpaceQuotaRefresherChore.extractQuotaSnapshot(Result result, Map<TableName, SpaceQuotaSnapshot> snapshots) - Wrapper around QuotaTableUtil.extractQuotaSnapshot(Result, Map) for testing.
  (package private) long FileArchiverNotifierImpl.getSnapshotSizeFromResult(Result r) - Extracts the size component from a serialized SpaceQuotaSnapshot protobuf.
  protected static void QuotaTableUtil.parseNamespaceResult(String namespace, Result result, QuotaTableUtil.NamespaceQuotasVisitor visitor)
  static void QuotaTableUtil.parseNamespaceResult(Result result, QuotaTableUtil.NamespaceQuotasVisitor visitor)
  private static void QuotaTableUtil.parseRegionServerResult(String regionServer, Result result, QuotaTableUtil.RegionServerQuotasVisitor visitor)
  private static void QuotaTableUtil.parseRegionServerResult(Result result, QuotaTableUtil.RegionServerQuotasVisitor visitor)
  static void QuotaTableUtil.parseResult(Result result, QuotaTableUtil.QuotasVisitor visitor)
  static void QuotaTableUtil.parseResultToCollection(Result result, Collection<QuotaSettings> quotaSettings)
  static void QuotaTableUtil.parseTableResult(Result result, QuotaTableUtil.TableQuotasVisitor visitor)
  protected static void QuotaTableUtil.parseTableResult(TableName table, Result result, QuotaTableUtil.TableQuotasVisitor visitor)
  protected static void QuotaTableUtil.parseUserResult(String userName, Result result, QuotaTableUtil.UserQuotasVisitor visitor)
  static void QuotaTableUtil.parseUserResult(Result result, QuotaTableUtil.UserQuotasVisitor visitor)
Method parameters in org.apache.hadoop.hbase.quotas with type arguments of type Result
  void DefaultOperationQuota.addScanResult(List<Result> results)
  void NoopOperationQuota.addScanResult(List<Result> results)
  void OperationQuota.addScanResult(List<Result> results) - Add a scan result.
  static long QuotaUtil.calculateResultSize(List<Result> results)
Uses of Result in org.apache.hadoop.hbase.regionserver
Fields in org.apache.hadoop.hbase.regionserver declared as Result
  private final Result OperationStatus.result
  protected final Result[] HRegion.BatchOperation.results
Methods in org.apache.hadoop.hbase.regionserver that return Result
  Perform one or more append operations on a row.
  private Result RSRpcServices.append(HRegion region, OperationQuota quota, org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.MutationProto mutation, CellScanner cellScanner, long nonceGroup, ActivePolicyEnforcement spaceQuota, RpcCallContext context) - Execute an append mutation.
  Do a get based on the get parameter.
  private Result RSRpcServices.get(Get get, HRegion region, RSRpcServices.RegionScannersCloseCallBack closeCallBack, RpcCallContext context)
  OperationStatus.getResult()
  Perform one or more increment operations on a row.
  private Result RSRpcServices.increment(HRegion region, OperationQuota quota, org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.MutationProto mutation, CellScanner cells, long nonceGroup, ActivePolicyEnforcement spaceQuota, RpcCallContext context) - Execute an increment mutation.
  HRegion.mutateRow(RowMutations rm)
  HRegion.mutateRow(RowMutations rm, long nonceGroup, long nonce)
  Region.mutateRow(RowMutations mutations) - Performs multiple mutations atomically on a single row.
  RegionCoprocessorHost.postAppend(Append append, Result result, WALEdit edit)
  RegionCoprocessorHost.postIncrement(Increment increment, Result result, WALEdit edit) - Supports Coprocessor 'bypass'.
  RegionCoprocessorHost.preAppendAfterRowLock(Append append) - Supports Coprocessor 'bypass'.
  RegionCoprocessorHost.preIncrement(Increment increment, WALEdit edit) - Supports Coprocessor 'bypass'.
  RegionCoprocessorHost.preIncrementAfterRowLock(Increment increment) - Supports Coprocessor 'bypass'.
Methods in org.apache.hadoop.hbase.regionserver with parameters of type Result
  private void RSRpcServices.addResult(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.MutateResponse.Builder builder, Result result, HBaseRpcController rpcc, boolean clientCellBlockSupported)
  (package private) void RSRpcServices.addSize(RpcCallContext context, Result r) - Method to account for the size of retained cells.
  RegionCoprocessorHost.postAppend(Append append, Result result, WALEdit edit)
  RegionCoprocessorHost.postIncrement(Increment increment, Result result, WALEdit edit)
Method parameters in org.apache.hadoop.hbase.regionserver with type arguments of type Result
  private void RSRpcServices.addResults(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ScanResponse.Builder builder, List<Result> results, HBaseRpcController controller, boolean isDefaultRegion, boolean clientCellBlockSupported)
  boolean RegionCoprocessorHost.postScannerNext(InternalScanner s, List<Result> results, int limit, boolean hasMore)
  RegionCoprocessorHost.preScannerNext(InternalScanner s, List<Result> results, int limit)
  private void RSRpcServices.scan(HBaseRpcController controller, org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ScanRequest request, RSRpcServices.RegionScannerHolder rsh, long maxQuotaResultSize, int maxResults, int limitOfRows, List<Result> results, org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ScanResponse.Builder builder, RpcCall rpcCall, ServerSideScanMetrics scanMetrics)
Constructors in org.apache.hadoop.hbase.regionserver with parameters of type Result
  OperationStatus(HConstants.OperationStatusCode code, Result result)
  private OperationStatus(HConstants.OperationStatusCode code, Result result, String exceptionMsg)
Uses of Result in org.apache.hadoop.hbase.replication
Methods in org.apache.hadoop.hbase.replication with parameters of type Result
  private static ReplicationBarrierFamilyFormat.ReplicationBarrierResult ReplicationBarrierFamilyFormat.getReplicationBarrierResult(Result result)
  static long[] ReplicationBarrierFamilyFormat.getReplicationBarriers(Result result)
  private org.apache.hbase.thirdparty.com.google.common.collect.ImmutableMap<String, ReplicationGroupOffset> TableReplicationQueueStorage.parseOffsets(Result result)
Uses of Result in org.apache.hadoop.hbase.rest
Fields in org.apache.hadoop.hbase.rest declared as Result
  private Result ScannerResultGenerator.cached
  private Result[] MultiRowResultReader.results
Methods in org.apache.hadoop.hbase.rest that return Result
Methods in org.apache.hadoop.hbase.rest with parameters of type Result
  private CellSetModel ProtobufStreamingOutput.createModelFromResults(Result[] results)
  static RowModel RestUtil.createRowModelFromResult(Result r) - Speed-optimized method to convert an HBase result to a RowModel.
Uses of Result in org.apache.hadoop.hbase.security.access
Methods in org.apache.hadoop.hbase.security.access that return Result
  AccessController.preAppend(ObserverContext<? extends RegionCoprocessorEnvironment> c, Append append)
  AccessController.preIncrement(ObserverContext<? extends RegionCoprocessorEnvironment> c, Increment increment)
Methods in org.apache.hadoop.hbase.security.access with parameters of type Result
  private static org.apache.hbase.thirdparty.com.google.common.collect.ListMultimap<String, UserPermission> PermissionStorage.parsePermissions(byte[] entryName, Result result, byte[] cf, byte[] cq, String user, boolean hasFilterUser) - Parse and filter permission based on the specified column family, column qualifier and user name.
Method parameters in org.apache.hadoop.hbase.security.access with type arguments of type Result
  boolean AccessController.preScannerNext(ObserverContext<? extends RegionCoprocessorEnvironment> c, InternalScanner s, List<Result> result, int limit, boolean hasNext)
Uses of Result in org.apache.hadoop.hbase.security.visibility
Method parameters in org.apache.hadoop.hbase.security.visibility with type arguments of type Result
  boolean VisibilityController.preScannerNext(ObserverContext<? extends RegionCoprocessorEnvironment> c, InternalScanner s, List<Result> result, int limit, boolean hasNext)
Uses of Result in org.apache.hadoop.hbase.thrift
Methods in org.apache.hadoop.hbase.thrift that return Result
  private Result ThriftHBaseServiceHandler.getReverseScanResult(byte[] tableName, byte[] row, byte[] family)
Methods in org.apache.hadoop.hbase.thrift with parameters of type Result
  static List<org.apache.hadoop.hbase.thrift.generated.TRowResult> ThriftUtilities.rowResultFromHBase(Result in)
  static List<org.apache.hadoop.hbase.thrift.generated.TRowResult> ThriftUtilities.rowResultFromHBase(Result[] in) - This utility method creates a list of Thrift TRowResult "struct" based on an array of HBase RowResult objects.
  static List<org.apache.hadoop.hbase.thrift.generated.TRowResult> ThriftUtilities.rowResultFromHBase(Result[] in, boolean sortColumns) - This utility method creates a list of Thrift TRowResult "struct" based on an HBase RowResult object.
Uses of Result in org.apache.hadoop.hbase.thrift2
Fields in org.apache.hadoop.hbase.thrift2 declared as Result
  private static final Result ThriftUtilities.EMPTY_RESULT
  private static final Result ThriftUtilities.EMPTY_RESULT_STALE
Methods in org.apache.hadoop.hbase.thrift2 that return Result
  static Result ThriftUtilities.resultFromThrift(org.apache.hadoop.hbase.thrift2.generated.TResult in)
  static Result[] ThriftUtilities.resultsFromThrift(List<org.apache.hadoop.hbase.thrift2.generated.TResult> in)
Methods in org.apache.hadoop.hbase.thrift2 with parameters of type Result
  static org.apache.hadoop.hbase.thrift2.generated.TResult ThriftUtilities.resultFromHBase(Result in) - Creates a TResult (Thrift) from a Result (HBase).
  static List<org.apache.hadoop.hbase.thrift2.generated.TResult> ThriftUtilities.resultsFromHBase(Result[] in) - Converts multiple Results (HBase) into a list of TResults (Thrift).
Uses of Result in org.apache.hadoop.hbase.thrift2.client
Fields in org.apache.hadoop.hbase.thrift2.client declared as Result
Fields in org.apache.hadoop.hbase.thrift2.client with type parameters of type Result
Methods in org.apache.hadoop.hbase.thrift2.client that return Result
Uses of Result in org.apache.hadoop.hbase.util
Fields in org.apache.hadoop.hbase.util with type parameters of type Result
Methods in org.apache.hadoop.hbase.util that return Result
Methods in org.apache.hadoop.hbase.util with parameters of type Result
  private void MultiThreadedAction.printLocations(Result r)
  private String MultiThreadedAction.resultToString(Result result)
  boolean MultiThreadedAction.verifyResultAgainstDataGenerator(Result result, boolean verifyValues)
  boolean MultiThreadedAction.verifyResultAgainstDataGenerator(Result result, boolean verifyValues, boolean verifyCfAndColumnIntegrity) - Verifies the result from get or scan using the dataGenerator (that was presumably also used to generate said result).
  protected void MultiThreadedReader.HBaseReaderThread.verifyResultsAndUpdateMetrics(boolean verify, Get[] gets, long elapsedNano, Result[] results, Table table, boolean isNullExpected)
  protected void MultiThreadedReader.HBaseReaderThread.verifyResultsAndUpdateMetrics(boolean verify, Get get, long elapsedNano, Result result, Table table, boolean isNullExpected)
  private void MultiThreadedReader.HBaseReaderThread.verifyResultsAndUpdateMetricsOnAPerGetBasis(boolean verify, Get get, Result result, Table table, boolean isNullExpected)