Uses of Class org.apache.hadoop.hbase.client.Scan

Packages that use Scan
  org.apache.hadoop.hbase
  org.apache.hadoop.hbase.backup.impl
  org.apache.hadoop.hbase.client
      Provides HBase Client
  org.apache.hadoop.hbase.client.trace
  org.apache.hadoop.hbase.coprocessor
  org.apache.hadoop.hbase.coprocessor.example
  org.apache.hadoop.hbase.io
  org.apache.hadoop.hbase.mapred
      Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods.
  org.apache.hadoop.hbase.mapreduce
      Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods.
  org.apache.hadoop.hbase.mapreduce.replication
  org.apache.hadoop.hbase.master.assignment
  org.apache.hadoop.hbase.master.http
  org.apache.hadoop.hbase.master.region
  org.apache.hadoop.hbase.mob
  org.apache.hadoop.hbase.quotas
  org.apache.hadoop.hbase.regionserver
  org.apache.hadoop.hbase.regionserver.querymatcher
  org.apache.hadoop.hbase.replication
      Multi Cluster Replication
  org.apache.hadoop.hbase.rest.model
  org.apache.hadoop.hbase.security.access
  org.apache.hadoop.hbase.security.visibility
  org.apache.hadoop.hbase.thrift2
      Provides an HBase Thrift service.
  org.apache.hadoop.hbase.thrift2.client

Uses of Scan in org.apache.hadoop.hbase

Methods that return Scan
  protected Scan  PerformanceEvaluation.FilteredScanTest.constructScan(byte[] valuePrefix)
  private static Scan  ClientMetaTableAccessor.getMetaScan(AsyncTable<?> metaTable, int rowUpperLimit)
  private static Scan  MetaTableAccessor.getMetaScan(org.apache.hadoop.conf.Configuration conf, int rowUpperLimit)
  private Scan  ScanPerformanceEvaluation.getScan()
  static Scan  MetaTableAccessor.getScanForTableName(org.apache.hadoop.conf.Configuration conf, TableName tableName)
      This method creates a Scan object that will only scan catalog rows that belong to the specified table.

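MetaTableAccessor.getScanForTableName is the only public entry point above; the rest are internal helpers. A minimal sketch of running such a Scan against the hbase:meta table, assuming a reachable cluster and a table named "my_table" (both illustrative; MetaTableAccessor is audience-private, so application code would normally go through Admin or RegionLocator instead):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.MetaTableAccessor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaScanSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Build a Scan restricted to the catalog rows of one (illustrative) table.
        Scan scan = MetaTableAccessor.getScanForTableName(conf, TableName.valueOf("my_table"));
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table meta = conn.getTable(TableName.META_TABLE_NAME);
             ResultScanner scanner = meta.getScanner(scan)) {
          for (Result r : scanner) {
            System.out.println(Bytes.toStringBinary(r.getRow()));
          }
        }
      }
    }
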
Uses of Scan in org.apache.hadoop.hbase.backup.impl

Methods that return Scan
  private Scan  BackupSystemTable.createScanForBackupHistory()
      Creates Scan operation to load backup history.
  private Scan  BackupSystemTable.createScanForBackupSetList()
      Creates Scan operation to load backup set list.
  (package private) static Scan  BackupSystemTable.createScanForBulkLoadedFiles(String backupId)
  (package private) static Scan  BackupSystemTable.createScanForOrigBulkLoadedFiles(TableName table)
  private Scan  BackupSystemTable.createScanForReadLogTimestampMap(String backupRoot)
      Creates Scan to load the table -> { RS -> ts } map of maps.
  private Scan  BackupSystemTable.createScanForReadRegionServerLastLogRollResult(String backupRoot)
      Creates Scan operation to load the last RS log roll results.

Uses of Scan in org.apache.hadoop.hbase.client

Fields declared as Scan
  private final Scan  ImmutableScan.delegateScan
  private final Scan  AsyncClientScanner.scan
  private Scan  AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder.scan
  private final Scan  AsyncScanSingleRegionRpcRetryingCaller.scan
  private final Scan  AsyncTableResultScanner.scan
  private Scan  OnlineLogRecord.OnlineLogRecordBuilder.scan
  private Scan  TableSnapshotScanner.scan

Methods that return Scan
  Scan  ImmutableScan.addColumn(byte[] family, byte[] qualifier)
  Scan  Scan.addColumn(byte[] family, byte[] qualifier)
      Get the column from the specified family with the specified qualifier.
  Scan  ImmutableScan.addFamily(byte[] family)
  Scan  Scan.addFamily(byte[] family)
      Get all columns from the specified family.
  static Scan  Scan.createScanFromCursor(Cursor cursor)
      Create a new Scan with a cursor.
  Scan  ImmutableScan.readAllVersions()
  Scan  Scan.readAllVersions()
      Get all available versions.
  Scan  ImmutableScan.readVersions(int versions)
  Scan  Scan.readVersions(int versions)
      Get up to the specified number of versions of each column.
  (package private) Scan  ImmutableScan.resetMvccReadPoint()
  (package private) Scan  Scan.resetMvccReadPoint()
      Set the mvcc read point to -1, which means do not use it.
  Scan  ImmutableScan.setACL(String user, Permission perms)
  Scan  ImmutableScan.setACL(Map<String, Permission> perms)
  Scan  Scan.setACL(String user, Permission perms)
  Scan  Scan.setACL(Map<String, Permission> perms)
  Scan  ImmutableScan.setAllowPartialResults(boolean allowPartialResults)
  Scan  Scan.setAllowPartialResults(boolean allowPartialResults)
      Set whether the caller wants to see the partial results when the server returns fewer cells than expected.
  Scan  ImmutableScan.setAsyncPrefetch(boolean asyncPrefetch)
      Deprecated.
  Scan  Scan.setAsyncPrefetch(boolean asyncPrefetch)
      Deprecated. Since 3.0.0, will be removed in 4.0.0.
  Scan  ImmutableScan.setAttribute(String name, byte[] value)
  Scan  Scan.setAttribute(String name, byte[] value)
  Scan  ImmutableScan.setAuthorizations(Authorizations authorizations)
  Scan  Scan.setAuthorizations(Authorizations authorizations)
  Scan  ImmutableScan.setBatch(int batch)
  Scan  Scan.setBatch(int batch)
      Set the maximum number of cells to return for each call to next().
  Scan  ImmutableScan.setCacheBlocks(boolean cacheBlocks)
  Scan  Scan.setCacheBlocks(boolean cacheBlocks)
      Set whether blocks should be cached for this Scan.
  Scan  ImmutableScan.setCaching(int caching)
  Scan  Scan.setCaching(int caching)
      Set the number of rows for caching that will be passed to scanners.
  Scan  ImmutableScan.setColumnFamilyTimeRange(byte[] cf, long minStamp, long maxStamp)
  Scan  Scan.setColumnFamilyTimeRange(byte[] cf, long minStamp, long maxStamp)
  Scan  ImmutableScan.setConsistency(Consistency consistency)
  Scan  Scan.setConsistency(Consistency consistency)
  private Scan  RawAsyncTableImpl.setDefaultScanConfig(Scan scan)
  Scan  ImmutableScan.setFamilyMap(Map<byte[], NavigableSet<byte[]>> familyMap)
  Scan  Scan.setFamilyMap(Map<byte[], NavigableSet<byte[]>> familyMap)
      Set the familyMap.
  Scan  ImmutableScan.setIsolationLevel(IsolationLevel level)
  Scan  Scan.setIsolationLevel(IsolationLevel level)
  Scan  ImmutableScan.setLimit(int limit)
  Scan  Scan.setLimit(int limit)
      Set the limit of rows for this scan.
  Scan  ImmutableScan.setLoadColumnFamiliesOnDemand(boolean value)
  Scan  Scan.setLoadColumnFamiliesOnDemand(boolean value)
  Scan  ImmutableScan.setMaxResultSize(long maxResultSize)
  Scan  Scan.setMaxResultSize(long maxResultSize)
      Set the maximum result size.
  Scan  ImmutableScan.setMaxResultsPerColumnFamily(int limit)
  Scan  Scan.setMaxResultsPerColumnFamily(int limit)
      Set the maximum number of values to return per row per column family.
  (package private) Scan  ImmutableScan.setMvccReadPoint(long mvccReadPoint)
  (package private) Scan  Scan.setMvccReadPoint(long mvccReadPoint)
      Set the mvcc read point used to open a scanner.
  Scan  ImmutableScan.setNeedCursorResult(boolean needCursorResult)
  Scan  Scan.setNeedCursorResult(boolean needCursorResult)
      When the server is slow, or we scan a table with much deleted data, or we use a sparse filter, the server will respond with heartbeats to prevent timeouts.
  Scan  ImmutableScan.setOneRowLimit()
  Scan  Scan.setOneRowLimit()
      Call this when you only want to get one row.
  Scan  ImmutableScan.setPriority(int priority)
  Scan  Scan.setPriority(int priority)
  Scan  ImmutableScan.setRaw(boolean raw)
  Scan  Scan.setRaw(boolean raw)
      Enable/disable "raw" mode for this scan.
  Scan  ImmutableScan.setReadType(Scan.ReadType readType)
  Scan  Scan.setReadType(Scan.ReadType readType)
      Set the read type for this scan.
  Scan  ImmutableScan.setReplicaId(int id)
  Scan  Scan.setReplicaId(int Id)
  Scan  ImmutableScan.setReversed(boolean reversed)
  Scan  Scan.setReversed(boolean reversed)
      Set whether this scan is a reversed one.
  Scan  ImmutableScan.setRowOffsetPerColumnFamily(int offset)
  Scan  Scan.setRowOffsetPerColumnFamily(int offset)
      Set offset for the row per column family.
  Scan  Scan.setRowPrefixFilter(byte[] rowPrefix)
      Deprecated. Since 2.5.0, will be removed in 4.0.0.
  Scan  ImmutableScan.setScanMetricsEnabled(boolean enabled)
  Scan  Scan.setScanMetricsEnabled(boolean enabled)
      Enable collection of ScanMetrics.
  Scan  ImmutableScan.setStartStopRowForPrefixScan(byte[] rowPrefix)
  Scan  Scan.setStartStopRowForPrefixScan(byte[] rowPrefix)
      Set a filter (using stopRow and startRow) so the result set only contains rows where the rowKey starts with the specified prefix.
  Scan  ImmutableScan.setTimeRange(long minStamp, long maxStamp)
  Scan  Scan.setTimeRange(long minStamp, long maxStamp)
      Get versions of columns only within the specified timestamp range, [minStamp, maxStamp).
  Scan  ImmutableScan.setTimestamp(long timestamp)
  Scan  Scan.setTimestamp(long timestamp)
      Get versions of columns with the specified timestamp.
  Scan  ImmutableScan.withStartRow(byte[] startRow)
  Scan  ImmutableScan.withStartRow(byte[] startRow, boolean inclusive)
  Scan  Scan.withStartRow(byte[] startRow)
      Set the start row of the scan.
  Scan  Scan.withStartRow(byte[] startRow, boolean inclusive)
      Set the start row of the scan.
  Scan  ImmutableScan.withStopRow(byte[] stopRow)
  Scan  ImmutableScan.withStopRow(byte[] stopRow, boolean inclusive)
  Scan  Scan.withStopRow(byte[] stopRow)
      Set the stop row of the scan.
  Scan  Scan.withStopRow(byte[] stopRow, boolean inclusive)
      Set the stop row of the scan.

Methods that return types with arguments of type Scan
  OnlineLogRecord.getScan()
      If "hbase.slowlog.scan.payload.enabled" is enabled, then this value may be present and should represent the Scan that produced the given OnlineLogRecord.

Methods with parameters of type Scan
  static ScanResultCache  ConnectionUtils.createScanResultCache(Scan scan)
  (package private) static RegionLocateType  ConnectionUtils.getLocateType(Scan scan)
  static long  ClientInternalHelper.getMvccReadPoint(Scan scan)
  AsyncTable.getScanner(Scan scan)
      Returns a scanner on the current table as specified by the Scan object.
  AsyncTableImpl.getScanner(Scan scan)
  RawAsyncTableImpl.getScanner(Scan scan)
  default ResultScanner  Table.getScanner(Scan scan)
      Returns a scanner on the current table as specified by the Scan object.
  TableOverAsyncTable.getScanner(Scan scan)
  protected void  AbstractClientScanner.initScanMetrics(Scan scan)
      Check and initialize if the application wants to collect scan metrics.
  (package private) static boolean  ConnectionUtils.noMoreResultsForReverseScan(Scan scan, RegionInfo info)
  (package private) static boolean  ConnectionUtils.noMoreResultsForScan(Scan scan, RegionInfo info)
  void  AsyncTable.scan(Scan scan, C consumer)
      The scan API uses the observer pattern.
  void  AsyncTableImpl.scan(Scan scan, ScanResultConsumer consumer)
  void  RawAsyncTableImpl.scan(Scan scan, AdvancedScanResultConsumer consumer)
  private void  AsyncTableImpl.scan0(Scan scan, ScanResultConsumer consumer)
      Return all the results that match the given scan object.
  private Scan  RawAsyncTableImpl.setDefaultScanConfig(Scan scan)
  static void  ClientInternalHelper.setMvccReadPoint(Scan scan, long mvccReadPoint)

Constructors with parameters of type Scan
  AsyncClientScanner(Scan scan, AdvancedScanResultConsumer consumer, TableName tableName, AsyncConnectionImpl conn, org.apache.hbase.thirdparty.io.netty.util.Timer retryTimer, long pauseNs, long pauseNsForServerOverloaded, int maxAttempts, long scanTimeoutNs, long rpcTimeoutNs, int startLogErrorsCnt, Map<String, byte[]> requestAttributes)
  AsyncScanSingleRegionRpcRetryingCaller(org.apache.hbase.thirdparty.io.netty.util.Timer retryTimer, AsyncConnectionImpl conn, Scan scan, ScanMetrics scanMetrics, long scannerId, ScanResultCache resultCache, AdvancedScanResultConsumer consumer, org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ClientService.Interface stub, HRegionLocation loc, boolean isRegionServerRemote, int priority, long scannerLeaseTimeoutPeriodNs, long pauseNs, long pauseNsForServerOverloaded, int maxAttempts, long scanTimeoutNs, long rpcTimeoutNs, int startLogErrorsCnt, Map<String, byte[]> requestAttributes)
  AsyncTableResultScanner(TableName tableName, Scan scan, long maxCacheSize)
  ClientSideRegionScanner(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path rootDir, TableDescriptor htd, RegionInfo hri, Scan scan, ScanMetrics scanMetrics)
  ImmutableScan(Scan scan)
      Create an immutable instance of Scan from the given Scan object.
  (package private) OnlineLogRecord(long startTime, int processingTime, int queueTime, long responseSize, long blockBytesScanned, long fsReadTime, String clientAddress, String serverClass, String methodName, String callDetails, String param, String regionName, String userName, int multiGetsCount, int multiMutationsCount, int multiServiceCalls, Scan scan, Map<String, byte[]> requestAttributes, Map<String, byte[]> connectionAttributes)
      Creates a new instance of this class while copying all values.
  TableSnapshotScanner(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.Path restoreDir, String snapshotName, Scan scan)
      Creates a TableSnapshotScanner.
  TableSnapshotScanner(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.Path rootDir, org.apache.hadoop.fs.Path restoreDir, String snapshotName, Scan scan)
  TableSnapshotScanner(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.Path rootDir, org.apache.hadoop.fs.Path restoreDir, String snapshotName, Scan scan, boolean snapshotAlreadyRestored)
      Creates a TableSnapshotScanner.

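Most of the Scan setters listed above return the Scan itself, so they can be chained when building a query; the resulting Scan is then handed to Table.getScanner(Scan) (or to AsyncTable.getScanner / AsyncTable.scan). A minimal synchronous sketch, assuming an already-open Connection and a table "t1" with a family "cf" (names are illustrative, not from the source):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ScanSketch {
      // Scans rows in ["row-0100", "row-0200"), one column, a single version per
      // cell, at most 50 rows in total, fetching 20 rows per RPC.
      static void printRange(Connection conn) throws IOException {
        Scan scan = new Scan()
            .withStartRow(Bytes.toBytes("row-0100"))
            .withStopRow(Bytes.toBytes("row-0200"))
            .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"))
            .readVersions(1)
            .setCaching(20)
            .setLimit(50);
        try (Table table = conn.getTable(TableName.valueOf("t1"));
             ResultScanner scanner = table.getScanner(scan)) {
          for (Result result : scanner) {
            System.out.println(Bytes.toStringBinary(result.getRow()));
          }
        }
      }
    }
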
Uses of Scan in org.apache.hadoop.hbase.client.trace

Methods with parameters of type Scan
  TableOperationSpanBuilder.setOperation(Scan scan)
  private static HBaseSemanticAttributes.Operation

Uses of Scan in org.apache.hadoop.hbase.coprocessor

Methods with parameters of type Scan
  RegionCoprocessorEnvironment.checkScanQuota(Scan scan, long maxBlockBytesScanned, long prevBlockBytesScannedDifference)
      Check the quota for the current (rpc-context) user.
  default RegionScanner  RegionObserver.postScannerOpen(ObserverContext<? extends RegionCoprocessorEnvironment> c, Scan scan, RegionScanner s)
      Called after the client opens a new scanner.
  default void  RegionObserver.preScannerOpen(ObserverContext<? extends RegionCoprocessorEnvironment> c, Scan scan)
      Called before the client opens a new scanner.

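RegionObserver.preScannerOpen gives a coprocessor a chance to inspect or adjust the client's Scan before the scanner is created. A hedged sketch of an observer using that hook (the class name and the caching cap are illustrative, not part of HBase):

    import java.util.Optional;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.coprocessor.ObserverContext;
    import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
    import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
    import org.apache.hadoop.hbase.coprocessor.RegionObserver;

    // Illustrative coprocessor that caps the caching of every client scan
    // opened against the regions it is loaded on.
    public class ScanCappingObserver implements RegionCoprocessor, RegionObserver {

      @Override
      public Optional<RegionObserver> getRegionObserver() {
        return Optional.of(this);
      }

      @Override
      public void preScannerOpen(ObserverContext<? extends RegionCoprocessorEnvironment> c,
          Scan scan) {
        // Called before the client opens a new scanner; the Scan can still be mutated here.
        if (scan.getCaching() > 100) {
          scan.setCaching(100);
        }
      }
    }
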
Uses of Scan in org.apache.hadoop.hbase.coprocessor.example

Methods with parameters of type Scan
  void  ScanModifyingObserver.preScannerOpen(ObserverContext<? extends RegionCoprocessorEnvironment> c, Scan scan)

Uses of Scan in org.apache.hadoop.hbase.io
Uses of Scan in org.apache.hadoop.hbase.mapred

Method parameters with type arguments of type Scan
  static void  TableMapReduceUtil.initMultiTableSnapshotMapperJob(Map<String, Collection<Scan>> snapshotScans, Class<? extends TableMap> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapred.JobConf job, boolean addDependencyJars, org.apache.hadoop.fs.Path tmpRestoreDir)
      Sets up the job for reading from one or more table snapshots, with one or more scans per snapshot.
  static void  MultiTableSnapshotInputFormat.setInput(org.apache.hadoop.conf.Configuration conf, Map<String, Collection<Scan>> snapshotScans, org.apache.hadoop.fs.Path restoreDir)

Constructors with parameters of type Scan
  TableSnapshotRegionSplit(TableDescriptor htd, RegionInfo regionInfo, List<String> locations, Scan scan, org.apache.hadoop.fs.Path restoreDir)

Uses of Scan in org.apache.hadoop.hbase.mapreduce

Fields declared as Scan
  private Scan  TableRecordReaderImpl.currentScan
  private Scan  TableInputFormatBase.scan
      Holds the details for the internal scanner.
  private Scan  TableRecordReaderImpl.scan
  private Scan  TableSnapshotInputFormatImpl.RecordReader.scan

Fields with type parameters of type Scan
  MultiTableInputFormatBase.scans
      Holds the set of scans used to define the input.

Methods that return Scan
  static Scan  TableMapReduceUtil.convertStringToScan(String base64)
      Converts the given Base64 string back into a Scan instance.
  static Scan  TableInputFormat.createScanFromConfiguration(org.apache.hadoop.conf.Configuration conf)
      Sets up a Scan instance, applying settings from the configuration property constants defined in TableInputFormat.
  static Scan  TableSnapshotInputFormatImpl.extractScanFromConf(org.apache.hadoop.conf.Configuration conf)
  private static Scan  CellCounter.getConfiguredScanForJob(org.apache.hadoop.conf.Configuration conf, String[] args)
  Scan  TableInputFormatBase.getScan()
      Gets the scan defining the actual details like columns etc.
  Scan  TableSplit.getScan()
      Returns a Scan object from the stored string representation.
  (package private) static Scan  ExportUtils.getScanFromCommandLine(org.apache.hadoop.conf.Configuration conf, String[] args)
  (package private) Scan  HashTable.TableHash.initScan()

Methods that return types with arguments of type Scan
  ExportUtils.getArgumentsFromCommandLine(org.apache.hadoop.conf.Configuration conf, String[] args)
  MultiTableInputFormatBase.getScans()
      Allows subclasses to get the list of Scan objects.
  MultiTableSnapshotInputFormatImpl.getSnapshotsToScans(org.apache.hadoop.conf.Configuration conf)
      Retrieve the snapshot name -> list<scan> mapping pushed to configuration by MultiTableSnapshotInputFormatImpl.setSnapshotToScans(Configuration, Map).

Methods with parameters of type Scan
  private static void
      Parses a combined family and qualifier and adds either both or just the family in case there is no qualifier.
  static void  TableInputFormat.addColumns(Scan scan, byte[][] columns)
      Adds an array of columns specified using the old format, family:qualifier.
  private static void  TableInputFormat.addColumns(Scan scan, String columns)
      Convenience method to parse a string representation of an array of column specifiers.
  static String  TableMapReduceUtil.convertScanToString(Scan scan)
      Writes the given scan into a Base64 encoded string.
  TableSnapshotInputFormatImpl.getSplits(Scan scan, SnapshotManifest manifest, List<RegionInfo> regionManifests, org.apache.hadoop.fs.Path restoreDir, org.apache.hadoop.conf.Configuration conf)
  TableSnapshotInputFormatImpl.getSplits(Scan scan, SnapshotManifest manifest, List<RegionInfo> regionManifests, org.apache.hadoop.fs.Path restoreDir, org.apache.hadoop.conf.Configuration conf, RegionSplitter.SplitAlgorithm sa, int numSplits)
  private void  CopyTable.initCopyTableMapperJob(org.apache.hadoop.mapreduce.Job job, Scan scan)
  static void  GroupingTableMapper.initJob(String table, Scan scan, String groupColumns, Class<? extends TableMapper> mapper, org.apache.hadoop.mapreduce.Job job)
      Use this before submitting a TableMap job.
  static void  IdentityTableMapper.initJob(String table, Scan scan, Class<? extends TableMapper> mapper, org.apache.hadoop.mapreduce.Job job)
      Use this before submitting a TableMap job.
  static void  TableMapReduceUtil.initTableMapperJob(byte[] table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job)
      Use this before submitting a TableMap job.
  static void  TableMapReduceUtil.initTableMapperJob(byte[] table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars)
      Use this before submitting a TableMap job.
  static void  TableMapReduceUtil.initTableMapperJob(byte[] table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)
      Use this before submitting a TableMap job.
  static void  TableMapReduceUtil.initTableMapperJob(String table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job)
      Use this before submitting a TableMap job.
  static void  TableMapReduceUtil.initTableMapperJob(String table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars)
      Use this before submitting a TableMap job.
  static void  TableMapReduceUtil.initTableMapperJob(String table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, boolean initCredentials, Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)
      Use this before submitting a TableMap job.
  static void  TableMapReduceUtil.initTableMapperJob(String table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)
      Use this before submitting a TableMap job.
  static void  TableMapReduceUtil.initTableMapperJob(TableName table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job)
      Use this before submitting a TableMap job.
  static void  TableMapReduceUtil.initTableSnapshotMapperJob(String snapshotName, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, org.apache.hadoop.fs.Path tmpRestoreDir)
      Sets up the job for reading from a table snapshot.
  static void  TableMapReduceUtil.initTableSnapshotMapperJob(String snapshotName, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, org.apache.hadoop.fs.Path tmpRestoreDir, RegionSplitter.SplitAlgorithm splitAlgo, int numSplitsPerRegion)
      Sets up the job for reading from a table snapshot.
  void
      Sets the scan defining the actual details like columns etc.
  void
      Sets the scan defining the actual details like columns etc.
  void
      Sets the scan defining the actual details like columns etc.
  private static void  RowCounter.setScanFilter(Scan scan, List<MultiRowRangeFilter.RowRange> rowRangeList)
      Sets filter FilterBase to the Scan instance.

Method parameters with type arguments of type Scan
  static void  TableMapReduceUtil.initMultiTableSnapshotMapperJob(Map<String, Collection<Scan>> snapshotScans, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, org.apache.hadoop.fs.Path tmpRestoreDir)
      Sets up the job for reading from one or more table snapshots, with one or more scans per snapshot.
  static void  TableMapReduceUtil.initTableMapperJob(List<Scan> scans, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job)
      Use this before submitting a Multi TableMap job.
  static void  TableMapReduceUtil.initTableMapperJob(List<Scan> scans, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars)
      Use this before submitting a Multi TableMap job.
  static void  TableMapReduceUtil.initTableMapperJob(List<Scan> scans, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, boolean initCredentials)
      Use this before submitting a Multi TableMap job.
  static void  MultiTableSnapshotInputFormat.setInput(org.apache.hadoop.conf.Configuration configuration, Map<String, Collection<Scan>> snapshotScans, org.apache.hadoop.fs.Path tmpRestoreDir)
  void  MultiTableSnapshotInputFormatImpl.setInput(org.apache.hadoop.conf.Configuration conf, Map<String, Collection<Scan>> snapshotScans, org.apache.hadoop.fs.Path restoreDir)
      Configure conf to read from snapshotScans, with snapshots restored to a subdirectory of restoreDir.
  protected void
      Allows subclasses to set the list of Scan objects.
  void  MultiTableSnapshotInputFormatImpl.setSnapshotToScans(org.apache.hadoop.conf.Configuration conf, Map<String, Collection<Scan>> snapshotScans)
      Push snapshotScans to conf (under the key MultiTableSnapshotInputFormatImpl.SNAPSHOT_TO_SCANS_KEY).

Constructors with parameters of type Scan
  InputSplit(TableDescriptor htd, RegionInfo regionInfo, List<String> locations, Scan scan, org.apache.hadoop.fs.Path restoreDir)
  TableSnapshotRegionSplit(TableDescriptor htd, RegionInfo regionInfo, List<String> locations, Scan scan, org.apache.hadoop.fs.Path restoreDir)
  TableSplit(TableName tableName, Scan scan, byte[] startRow, byte[] endRow, String location)
      Creates a new instance while assigning all variables.
  TableSplit(TableName tableName, Scan scan, byte[] startRow, byte[] endRow, String location, long length)
      Creates a new instance while assigning all variables.
  TableSplit(TableName tableName, Scan scan, byte[] startRow, byte[] endRow, String location, String encodedRegionName, long length)
      Creates a new instance while assigning all variables.

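The initTableMapperJob overloads above wire a Scan into a MapReduce job; the Scan is serialized with convertScanToString and read back by the input format. A minimal driver sketch, assuming a table "my_table" with a family "cf" and a trivial mapper (names are illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.hbase.mapreduce.TableMapper;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;

    public class ScanMapReduceSketch {
      // Hypothetical mapper; a real job would override map().
      static class MyMapper extends TableMapper<Text, IntWritable> { }

      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "scan-my-table");
        job.setJarByClass(ScanMapReduceSketch.class);

        // The Scan carries the read configuration (columns, caching, row range)
        // into the job.
        Scan scan = new Scan()
            .addFamily(Bytes.toBytes("cf"))
            .setCaching(500)
            .setCacheBlocks(false); // commonly disabled for full-table MapReduce scans

        TableMapReduceUtil.initTableMapperJob(
            "my_table", scan, MyMapper.class,
            Text.class, IntWritable.class, job);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }
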
Uses of Scan in org.apache.hadoop.hbase.mapreduce.replication

Fields declared as Scan
  private Scan  VerifyReplication.Verifier.tableScan
  private final Scan  VerifyReplicationRecompareRunnable.tableScan

Methods with parameters of type Scan
  private static void  VerifyReplication.setRowPrefixFilter(Scan scan, String rowPrefixes)
  private static void  VerifyReplication.setStartAndStopRows(Scan scan, byte[] startPrefixRow, byte[] lastPrefixRow)

Constructors with parameters of type Scan
  VerifyReplicationRecompareRunnable(org.apache.hadoop.mapreduce.Mapper.Context context, Result sourceResult, Result replicatedResult, VerifyReplication.Verifier.Counters originalCounter, String delimiter, Scan tableScan, Table sourceTable, Table replicatedTable, int reCompareTries, int sleepMsBeforeReCompare, int reCompareBackoffExponent, boolean verbose)

Uses of Scan in org.apache.hadoop.hbase.master.assignment

Methods that return Scan
  private Scan  RegionStateStore.getScanForUpdateRegionReplicas(TableName tableName)

Uses of Scan in org.apache.hadoop.hbase.master.http
Uses of Scan in org.apache.hadoop.hbase.master.region

Methods with parameters of type Scan
  MasterRegion.getRegionScanner(Scan scan)
  MasterRegion.getScanner(Scan scan)

Uses of Scan in org.apache.hadoop.hbase.mob

Methods with parameters of type Scan
  static boolean  MobUtils.isCacheMobBlocks(Scan scan)
      Indicates whether the scan contains the information of caching blocks.
  static boolean  MobUtils.isRawMobScan(Scan scan)
      Indicates whether it's a raw scan.
  static boolean  MobUtils.isReadEmptyValueOnMobCellMiss(Scan scan)
      Indicates whether to return a null value when the mob file is missing or corrupt.
  static boolean  MobUtils.isRefOnlyScan(Scan scan)
      Indicates whether it's a reference-only scan.
  static void  MobUtils.setCacheMobBlocks(Scan scan, boolean cacheBlocks)
      Sets the attribute of caching blocks in the scan.

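These MobUtils helpers read and write MOB-related attributes on a Scan; they are mostly consulted server-side, but the attribute round-trip can be exercised from any code that holds a Scan. A small sketch (family name illustrative; MobUtils is an internal utility class):

    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.mob.MobUtils;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MobScanAttributesSketch {
      public static void main(String[] args) {
        Scan scan = new Scan().addFamily(Bytes.toBytes("cf"));

        // Disable block caching for MOB cells on this scan, then read the
        // attribute back the way the server side does.
        MobUtils.setCacheMobBlocks(scan, false);
        System.out.println("cache mob blocks: " + MobUtils.isCacheMobBlocks(scan));

        // isRawMobScan checks the scan attribute that marks a raw MOB scan.
        System.out.println("raw mob scan: " + MobUtils.isRawMobScan(scan));
      }
    }
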
Uses of Scan in org.apache.hadoop.hbase.quotas

Methods that return Scan
  (package private) static Scan  QuotaTableUtil.createScanForNamespaceSnapshotSizes()
      Returns a scanner for all existing namespace snapshot entries.
  (package private) static Scan  QuotaTableUtil.createScanForNamespaceSnapshotSizes(String namespace)
      Returns a scanner for all namespace snapshot entries of the given namespace.
  (package private) static Scan  QuotaTableUtil.createScanForSpaceSnapshotSizes()
  (package private) static Scan  QuotaTableUtil.createScanForSpaceSnapshotSizes(TableName table)
  static Scan  QuotaTableUtil.makeQuotaSnapshotScan()
      Creates a Scan which returns only quota snapshots from the quota table.
  static Scan  QuotaTableUtil.makeQuotaSnapshotScanForTable(TableName tn)
      Creates a Scan which returns only SpaceQuotaSnapshot from the quota table for a specific table.
  static Scan  QuotaTableUtil.makeScan(QuotaFilter filter)

Methods with parameters of type Scan
  QuotaTableUtil.createDeletesForExistingSnapshotsFromScan(Connection connection, Scan scan)
      Returns a list of Delete to remove all entries returned by the passed scanner.
  private void  QuotaRetriever.init(Connection conn, Scan scan)

Constructors with parameters of type Scan
  (package private) QuotaRetriever(org.apache.hadoop.conf.Configuration conf, Scan scan)
  QuotaRetriever(Connection conn, Scan scan)

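QuotaTableUtil builds Scans over the hbase:quota system table. A hedged sketch that runs makeQuotaSnapshotScan() through an ordinary Table, assuming a cluster with quotas enabled and assuming the QuotaTableUtil.QUOTA_TABLE_NAME constant for the quota table (QuotaTableUtil is largely an internal utility, so this approach is not an API guarantee):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.quotas.QuotaTableUtil;
    import org.apache.hadoop.hbase.util.Bytes;

    public class QuotaSnapshotScanSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Scan restricted to space-quota snapshot entries in the quota table.
        Scan scan = QuotaTableUtil.makeQuotaSnapshotScan();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table quotaTable = conn.getTable(QuotaTableUtil.QUOTA_TABLE_NAME);
             ResultScanner scanner = quotaTable.getScanner(scan)) {
          for (Result r : scanner) {
            System.out.println(Bytes.toStringBinary(r.getRow()));
          }
        }
      }
    }
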
Uses of Scan in org.apache.hadoop.hbase.regionserver

Subclasses of Scan
  class InternalScan
      Special scanner, currently used for increment operations to allow additional server-side arguments for Scan operations.

Fields declared as Scan
  private final Scan  CustomizedScanInfoBuilder.scan
  private final Scan  StoreScanner.scan
  private static final Scan  StoreScanner.SCAN_FOR_COMPACTION

Methods that return Scan
  Scan  CustomizedScanInfoBuilder.getScan()
  Scan  ScanOptions.getScan()
      Returns a copy of the Scan object.

Methods with parameters of type Scan
  RegionCoprocessorHost.RegionEnvironment.checkScanQuota(Scan scan, long maxBlockBytesScanned, long prevBlockBytesScannedDifference)
  protected KeyValueScanner  HMobStore.createScanner(Scan scan, ScanInfo scanInfo, NavigableSet<byte[]> targetCols, long readPt)
      Gets the MobStoreScanner or MobReversedStoreScanner.
  protected KeyValueScanner  HStore.createScanner(Scan scan, ScanInfo scanInfo, NavigableSet<byte[]> targetCols, long readPt)
  HRegion.getScanner(Scan scan)
  HRegion.getScanner(Scan scan, List<KeyValueScanner> additionalScanners)
  private RegionScannerImpl  HRegion.getScanner(Scan scan, List<KeyValueScanner> additionalScanners, long nonceGroup, long nonce)
  HStore.getScanner(Scan scan, NavigableSet<byte[]> targetCols, long readPt)
      Return a scanner for both the memstore and the HStore files.
  Region.getScanner(Scan scan)
      Return an iterator that scans over the HRegion, returning the indicated columns and rows specified by the Scan.
  Region.getScanner(Scan scan, List<KeyValueScanner> additionalScanners)
      Return an iterator that scans over the HRegion, returning the indicated columns and rows specified by the Scan.
  private void  RegionScannerImpl.initializeScanners(Scan scan, List<KeyValueScanner> additionalScanners)
  protected RegionScannerImpl  HRegion.instantiateRegionScanner(Scan scan, List<KeyValueScanner> additionalScanners, long nonceGroup, long nonce)
  private boolean  RSRpcServices.isFullRegionScan(Scan scan, HRegion region)
  private static boolean  StoreScanner.isOnlyLatestVersionScan(Scan scan)
  (package private) boolean  StoreFileReader.passesBloomFilter(Scan scan, SortedSet<byte[]> columns)
      Checks whether the given scan passes the Bloom filter (if present).
  private boolean  StoreFileReader.passesGeneralRowPrefixBloomFilter(Scan scan)
      A method for checking Bloom filters.
  boolean  StoreFileReader.passesKeyRangeFilter(Scan scan)
      Checks whether the given scan rowkey range overlaps with the current storefile's.
  RegionCoprocessorHost.postScannerOpen(Scan scan, RegionScanner s)
  void  RegionCoprocessorHost.preScannerOpen(Scan scan)
  RegionCoprocessorHost.preStoreScannerOpen(HStore store, Scan scan)
      Called before opening a store scanner for a user scan.
  boolean  KeyValueScanner.shouldUseScanner(Scan scan, HStore store, long oldestUnexpiredTS)
      Allows to filter out scanners (both StoreFile and memstore) that we don't want to use based on criteria such as Bloom filters and timestamp ranges.
  boolean  NonLazyKeyValueScanner.shouldUseScanner(Scan scan, HStore store, long oldestUnexpiredTS)
  boolean  SegmentScanner.shouldUseScanner(Scan scan, HStore store, long oldestUnexpiredTS)
      This functionality should be resolved in the higher level, which is MemStoreScanner; currently returns true by default.
  boolean  StoreFileScanner.shouldUseScanner(Scan scan, HStore store, long oldestUnexpiredTS)

Constructors with parameters of type Scan
  CustomizedScanInfoBuilder(ScanInfo scanInfo, Scan scan)
  InternalScan(Scan scan)
  MobStoreScanner(HStore store, ScanInfo scanInfo, Scan scan, NavigableSet<byte[]> columns, long readPt)
  (package private) RegionScannerImpl(Scan scan, List<KeyValueScanner> additionalScanners, HRegion region, long nonceGroup, long nonce)
  (package private) ReversedMobStoreScanner(HStore store, ScanInfo scanInfo, Scan scan, NavigableSet<byte[]> columns, long readPt)
  (package private) ReversedRegionScannerImpl(Scan scan, List<KeyValueScanner> additionalScanners, HRegion region, long nonceGroup, long nonce)
  ReversedStoreScanner(Scan scan, ScanInfo scanInfo, NavigableSet<byte[]> columns, List<? extends KeyValueScanner> scanners)
      Constructor for testing.
  ReversedStoreScanner(HStore store, ScanInfo scanInfo, Scan scan, NavigableSet<byte[]> columns, long readPt)
      Opens a scanner across memstore, snapshot, and all StoreFiles.
  (package private) StoreScanner(Scan scan, ScanInfo scanInfo, NavigableSet<byte[]> columns, List<? extends KeyValueScanner> scanners)
  (package private) StoreScanner(Scan scan, ScanInfo scanInfo, NavigableSet<byte[]> columns, List<? extends KeyValueScanner> scanners, ScanType scanType)
  private StoreScanner(HStore store, Scan scan, ScanInfo scanInfo, int numColumns, long readPt, boolean cacheBlocks, ScanType scanType)
      An internal constructor.
  StoreScanner(HStore store, ScanInfo scanInfo, Scan scan, NavigableSet<byte[]> columns, long readPt)
      Opens a scanner across memstore, snapshot, and all StoreFiles.

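The getScanner overloads in this package are the server-side counterparts of the client API: Region.getScanner(Scan) returns a RegionScanner that iterates region-local data without an RPC. A hedged sketch of how coprocessor code might use it, assuming a RegionCoprocessorEnvironment is in hand and a family "cf" exists (both illustrative):

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.hbase.Cell;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
    import org.apache.hadoop.hbase.regionserver.Region;
    import org.apache.hadoop.hbase.regionserver.RegionScanner;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RegionLocalScanSketch {
      // Opens a scanner directly against the hosting region (no RPC) and counts
      // the rows it returns. Intended to be called from coprocessor code.
      static long countRows(RegionCoprocessorEnvironment env) throws IOException {
        Scan scan = new Scan().addFamily(Bytes.toBytes("cf"));
        Region region = env.getRegion();
        long rows = 0;
        try (RegionScanner scanner = region.getScanner(scan)) {
          List<Cell> cells = new ArrayList<>();
          boolean more;
          do {
            cells.clear();
            more = scanner.next(cells); // fills cells for the next row, if any
            if (!cells.isEmpty()) {
              rows++;
            }
          } while (more);
        }
        return rows;
      }
    }
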
Uses of Scan in org.apache.hadoop.hbase.regionserver.querymatcher

Methods with parameters of type Scan
  static NormalUserScanQueryMatcher  NormalUserScanQueryMatcher.create(Scan scan, ScanInfo scanInfo, ColumnTracker columns, DeleteTracker deletes, boolean hasNullColumn, long oldestUnexpiredTS, long now)
  static RawScanQueryMatcher  RawScanQueryMatcher.create(Scan scan, ScanInfo scanInfo, ColumnTracker columns, boolean hasNullColumn, long oldestUnexpiredTS, long now)
  static UserScanQueryMatcher  UserScanQueryMatcher.create(Scan scan, ScanInfo scanInfo, NavigableSet<byte[]> columns, long oldestUnexpiredTS, long now, RegionCoprocessorHost regionCoprocessorHost)
  private static ExtendedCell  UserScanQueryMatcher.createStartKey(Scan scan, ScanInfo scanInfo)
  protected static Pair<DeleteTracker, ColumnTracker>  ScanQueryMatcher.getTrackers(RegionCoprocessorHost host, NavigableSet<byte[]> columns, ScanInfo scanInfo, long oldestUnexpiredTS, Scan userScan)

Constructors with parameters of type Scan
  protected NormalUserScanQueryMatcher(Scan scan, ScanInfo scanInfo, ColumnTracker columns, boolean hasNullColumn, DeleteTracker deletes, long oldestUnexpiredTS, long now)
  protected RawScanQueryMatcher(Scan scan, ScanInfo scanInfo, ColumnTracker columns, boolean hasNullColumn, long oldestUnexpiredTS, long now)
  protected UserScanQueryMatcher(Scan scan, ScanInfo scanInfo, ColumnTracker columns, boolean hasNullColumn, long oldestUnexpiredTS, long now)

Uses of Scan in org.apache.hadoop.hbase.replication

Methods with parameters of type Scan
  private void  TableReplicationQueueStorage.listAllQueueIds(Table table, Scan scan, List<ReplicationQueueId> queueIds)
  private <T extends Collection<String>> T  TableReplicationQueueStorage.scanHFiles(Scan scan, Supplier<T> creator)

Uses of Scan in org.apache.hadoop.hbase.rest.model
Uses of Scan in org.apache.hadoop.hbase.security.access

Methods with parameters of type Scan
  AccessController.postScannerOpen(ObserverContext<? extends RegionCoprocessorEnvironment> c, Scan scan, RegionScanner s)
  void  AccessController.preScannerOpen(ObserverContext<? extends RegionCoprocessorEnvironment> c, Scan scan)

Uses of Scan in org.apache.hadoop.hbase.security.visibility

Methods with parameters of type Scan
  VisibilityController.postScannerOpen(ObserverContext<? extends RegionCoprocessorEnvironment> c, Scan scan, RegionScanner s)
  void  VisibilityController.preScannerOpen(ObserverContext<? extends RegionCoprocessorEnvironment> e, Scan scan)

Uses of Scan in org.apache.hadoop.hbase.thrift2

Methods that return Scan
  static Scan  ThriftUtilities.scanFromThrift(org.apache.hadoop.hbase.thrift2.generated.TScan in)

Methods with parameters of type Scan
  static org.apache.hadoop.hbase.thrift2.generated.TScan  ThriftUtilities.scanFromHBase(Scan in)

Uses of Scan in org.apache.hadoop.hbase.thrift2.client