Uses of Class
org.apache.hadoop.hbase.client.Scan
Packages that use Scan

- org.apache.hadoop.hbase
- org.apache.hadoop.hbase.backup.impl
- org.apache.hadoop.hbase.client
  Provides HBase Client.
- org.apache.hadoop.hbase.client.trace
- org.apache.hadoop.hbase.coprocessor
- org.apache.hadoop.hbase.coprocessor.example
- org.apache.hadoop.hbase.io
- org.apache.hadoop.hbase.mapred
  Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods.
- org.apache.hadoop.hbase.mapreduce
  Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods.
- org.apache.hadoop.hbase.mapreduce.replication
- org.apache.hadoop.hbase.master.assignment
- org.apache.hadoop.hbase.master.http
- org.apache.hadoop.hbase.master.region
- org.apache.hadoop.hbase.mob
- org.apache.hadoop.hbase.quotas
- org.apache.hadoop.hbase.regionserver
- org.apache.hadoop.hbase.regionserver.querymatcher
- org.apache.hadoop.hbase.replication
  Multi Cluster Replication.
- org.apache.hadoop.hbase.rest.model
- org.apache.hadoop.hbase.security.access
- org.apache.hadoop.hbase.security.visibility
- org.apache.hadoop.hbase.thrift2
  Provides an HBase Thrift service.
- org.apache.hadoop.hbase.thrift2.client
Uses of Scan in org.apache.hadoop.hbase
Methods in org.apache.hadoop.hbase that return Scan:

- protected Scan PerformanceEvaluation.FilteredScanTest.constructScan(byte[] valuePrefix)
- private static Scan ClientMetaTableAccessor.getMetaScan(AsyncTable<?> metaTable, int rowUpperLimit)
- private static Scan MetaTableAccessor.getMetaScan(org.apache.hadoop.conf.Configuration conf, int rowUpperLimit)
- private Scan ScanPerformanceEvaluation.getScan()
- static Scan MetaTableAccessor.getScanForTableName(org.apache.hadoop.conf.Configuration conf, TableName tableName)
  Creates a Scan object that will only scan catalog rows that belong to the specified table.

Methods in org.apache.hadoop.hbase with parameters of type Scan
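For illustration, a minimal sketch of how the catalog-scan helper above might be used. MetaTableAccessor is HBase-internal API and "my_table" is a placeholder name:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.MetaTableAccessor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Scan;

    public class MetaScanSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Build a Scan limited to the hbase:meta catalog rows of one table.
        Scan metaScan =
          MetaTableAccessor.getScanForTableName(conf, TableName.valueOf("my_table"));
        System.out.println(metaScan);
      }
    }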
Uses of Scan in org.apache.hadoop.hbase.backup.impl
Methods in org.apache.hadoop.hbase.backup.impl that return Scan:

- private Scan BackupSystemTable.createScanForBackupHistory()
  Creates Scan operation to load backup history.
- private Scan BackupSystemTable.createScanForBackupSetList()
  Creates Scan operation to load backup set list.
- (package private) static Scan BackupSystemTable.createScanForBulkLoadedFiles(String backupId)
- (package private) static Scan BackupSystemTable.createScanForOrigBulkLoadedFiles(TableName table)
  Creates a scan to read all registered bulk loads for the given table, or for all tables if table is null.
- private Scan BackupSystemTable.createScanForReadLogTimestampMap(String backupRoot)
  Creates Scan to load the table -> {RS -> ts} map of maps.
- private Scan BackupSystemTable.createScanForReadRegionServerLastLogRollResult(String backupRoot)
  Creates Scan operation to load last RS log roll results.

Methods in org.apache.hadoop.hbase.backup.impl with parameters of type Scan
Uses of Scan in org.apache.hadoop.hbase.client
Subclasses of Scan in org.apache.hadoop.hbase.client:

- class ImmutableScan

Fields in org.apache.hadoop.hbase.client declared as Scan:

- private final Scan ImmutableScan.delegateScan
- private final Scan AsyncClientScanner.scan
- private Scan AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder.scan
- private final Scan AsyncScanSingleRegionRpcRetryingCaller.scan
- private final Scan AsyncTableResultScanner.scan
- private Scan OnlineLogRecord.OnlineLogRecordBuilder.scan
- private Scan TableSnapshotScanner.scan

Fields in org.apache.hadoop.hbase.client with type parameters of type Scan

Methods in org.apache.hadoop.hbase.client that return Scan:

- ImmutableScan.addColumn(byte[] family, byte[] qualifier)
- Scan.addColumn(byte[] family, byte[] qualifier)
  Get the column from the specified family with the specified qualifier.
- ImmutableScan.addFamily(byte[] family)
- Scan.addFamily(byte[] family)
  Get all columns from the specified family.
- static Scan Scan.createScanFromCursor(Cursor cursor)
  Create a new Scan with a cursor.
- ImmutableScan.readAllVersions()
- Scan.readAllVersions()
  Get all available versions.
- ImmutableScan.readVersions(int versions)
- Scan.readVersions(int versions)
  Get up to the specified number of versions of each column.
- (package private) Scan ImmutableScan.resetMvccReadPoint()
- (package private) Scan Scan.resetMvccReadPoint()
  Set the mvcc read point to -1, which means do not use it.
- ImmutableScan.setACL(String user, Permission perms)
- ImmutableScan.setACL(Map<String, Permission> perms)
- Scan.setACL(String user, Permission perms)
- Scan.setACL(Map<String, Permission> perms)
- ImmutableScan.setAllowPartialResults(boolean allowPartialResults)
- Scan.setAllowPartialResults(boolean allowPartialResults)
  Set whether the caller wants to see partial results when the server returns fewer cells than expected.
- ImmutableScan.setAsyncPrefetch(boolean asyncPrefetch)
  Deprecated.
- Scan.setAsyncPrefetch(boolean asyncPrefetch)
  Deprecated. Since 3.0.0, will be removed in 4.0.0.
- ImmutableScan.setAttribute(String name, byte[] value)
- Scan.setAttribute(String name, byte[] value)
- ImmutableScan.setAuthorizations(Authorizations authorizations)
- Scan.setAuthorizations(Authorizations authorizations)
- ImmutableScan.setBatch(int batch)
- Scan.setBatch(int batch)
  Set the maximum number of cells to return for each call to next().
- ImmutableScan.setCacheBlocks(boolean cacheBlocks)
- Scan.setCacheBlocks(boolean cacheBlocks)
  Set whether blocks should be cached for this Scan.
- ImmutableScan.setCaching(int caching)
- Scan.setCaching(int caching)
  Set the number of rows for caching that will be passed to scanners.
- ImmutableScan.setColumnFamilyTimeRange(byte[] cf, long minStamp, long maxStamp)
- Scan.setColumnFamilyTimeRange(byte[] cf, long minStamp, long maxStamp)
- ImmutableScan.setConsistency(Consistency consistency)
- Scan.setConsistency(Consistency consistency)
- private Scan RawAsyncTableImpl.setDefaultScanConfig(Scan scan)
- ImmutableScan.setEnableScanMetricsByRegion(boolean enable)
- Scan.setEnableScanMetricsByRegion(boolean enable)
  Enables region level scan metrics.
- ImmutableScan.setFamilyMap(Map<byte[], NavigableSet<byte[]>> familyMap)
- Scan.setFamilyMap(Map<byte[], NavigableSet<byte[]>> familyMap)
  Set the familyMap.
- ImmutableScan.setIsolationLevel(IsolationLevel level)
- Scan.setIsolationLevel(IsolationLevel level)
- ImmutableScan.setLimit(int limit)
- Scan.setLimit(int limit)
  Set the limit of rows for this scan.
- ImmutableScan.setLoadColumnFamiliesOnDemand(boolean value)
- Scan.setLoadColumnFamiliesOnDemand(boolean value)
- ImmutableScan.setMaxResultSize(long maxResultSize)
- Scan.setMaxResultSize(long maxResultSize)
  Set the maximum result size.
- ImmutableScan.setMaxResultsPerColumnFamily(int limit)
- Scan.setMaxResultsPerColumnFamily(int limit)
  Set the maximum number of values to return per row per column family.
- (package private) Scan ImmutableScan.setMvccReadPoint(long mvccReadPoint)
- (package private) Scan Scan.setMvccReadPoint(long mvccReadPoint)
  Set the mvcc read point used to open a scanner.
- ImmutableScan.setNeedCursorResult(boolean needCursorResult)
- Scan.setNeedCursorResult(boolean needCursorResult)
  When the server is slow, or we scan a table with much deleted data, or we use a sparse filter, the server will respond with heartbeats to prevent timeout.
- ImmutableScan.setOneRowLimit()
- Scan.setOneRowLimit()
  Call this when you only want to get one row.
- ImmutableScan.setPriority(int priority)
- Scan.setPriority(int priority)
- ImmutableScan.setRaw(boolean raw)
- Scan.setRaw(boolean raw)
  Enable/disable "raw" mode for this scan.
- ImmutableScan.setReadType(Scan.ReadType readType)
- Scan.setReadType(Scan.ReadType readType)
  Set the read type for this scan.
- ImmutableScan.setReplicaId(int id)
- Scan.setReplicaId(int id)
- ImmutableScan.setReversed(boolean reversed)
- Scan.setReversed(boolean reversed)
  Set whether this scan is a reversed one.
- ImmutableScan.setRowOffsetPerColumnFamily(int offset)
- Scan.setRowOffsetPerColumnFamily(int offset)
  Set offset for the row per column family.
- Scan.setRowPrefixFilter(byte[] rowPrefix)
  Deprecated. Since 2.5.0, will be removed in 4.0.0.
- ImmutableScan.setScanMetricsEnabled(boolean enabled)
- Scan.setScanMetricsEnabled(boolean enabled)
  Enable collection of ScanMetrics.
- ImmutableScan.setStartStopRowForPrefixScan(byte[] rowPrefix)
- Scan.setStartStopRowForPrefixScan(byte[] rowPrefix)
  Set a filter (using stopRow and startRow) so the result set only contains rows where the rowKey starts with the specified prefix.
- ImmutableScan.setTimeRange(long minStamp, long maxStamp)
- Scan.setTimeRange(long minStamp, long maxStamp)
  Get versions of columns only within the specified timestamp range, [minStamp, maxStamp).
- ImmutableScan.setTimestamp(long timestamp)
- Scan.setTimestamp(long timestamp)
  Get versions of columns with the specified timestamp.
- ImmutableScan.withStartRow(byte[] startRow)
- ImmutableScan.withStartRow(byte[] startRow, boolean inclusive)
- Scan.withStartRow(byte[] startRow)
  Set the start row of the scan.
- Scan.withStartRow(byte[] startRow, boolean inclusive)
  Set the start row of the scan.
- ImmutableScan.withStopRow(byte[] stopRow)
- ImmutableScan.withStopRow(byte[] stopRow, boolean inclusive)
- Scan.withStopRow(byte[] stopRow)
  Set the stop row of the scan.
- Scan.withStopRow(byte[] stopRow, boolean inclusive)
  Set the stop row of the scan.

Methods in org.apache.hadoop.hbase.client that return types with arguments of type Scan:

- OnlineLogRecord.getScan()
  If "hbase.slowlog.scan.payload.enabled" is enabled then this value may be present and should represent the Scan that produced the given OnlineLogRecord.

Methods in org.apache.hadoop.hbase.client with parameters of type Scan:

- static ScanResultCache ConnectionUtils.createScanResultCache(Scan scan)
- (package private) static RegionLocateType ConnectionUtils.getLocateType(Scan scan)
- static long ClientInternalHelper.getMvccReadPoint(Scan scan)
- AsyncTable.getScanner(Scan scan)
  Returns a scanner on the current table as specified by the Scan object.
- AsyncTableImpl.getScanner(Scan scan)
- RawAsyncTableImpl.getScanner(Scan scan)
- default ResultScanner Table.getScanner(Scan scan)
  Returns a scanner on the current table as specified by the Scan object.
- TableOverAsyncTable.getScanner(Scan scan)
- protected void AbstractClientScanner.initScanMetrics(Scan scan)
  Check and initialize if the application wants to collect scan metrics.
- (package private) static boolean ConnectionUtils.noMoreResultsForReverseScan(Scan scan, RegionInfo info)
- (package private) static boolean ConnectionUtils.noMoreResultsForScan(Scan scan, RegionInfo info)
- void AsyncTable.scan(Scan scan, C consumer)
  The scan API uses the observer pattern.
- void AsyncTableImpl.scan(Scan scan, ScanResultConsumer consumer)
- void RawAsyncTableImpl.scan(Scan scan, AdvancedScanResultConsumer consumer)
- private void AsyncTableImpl.scan0(Scan scan, ScanResultConsumer consumer)
- AsyncTable.scanAll(Scan scan)
  Return all the results that match the given scan object.
- private Scan RawAsyncTableImpl.setDefaultScanConfig(Scan scan)
- static void ClientInternalHelper.setMvccReadPoint(Scan scan, long mvccReadPoint)

Constructors in org.apache.hadoop.hbase.client with parameters of type Scan:

- AsyncClientScanner(Scan scan, AdvancedScanResultConsumer consumer, TableName tableName, AsyncConnectionImpl conn, org.apache.hbase.thirdparty.io.netty.util.Timer retryTimer, long pauseNs, long pauseNsForServerOverloaded, int maxAttempts, long scanTimeoutNs, long rpcTimeoutNs, int startLogErrorsCnt, Map<String, byte[]> requestAttributes)
- AsyncScanSingleRegionRpcRetryingCaller(org.apache.hbase.thirdparty.io.netty.util.Timer retryTimer, AsyncConnectionImpl conn, Scan scan, ScanMetrics scanMetrics, long scannerId, ScanResultCache resultCache, AdvancedScanResultConsumer consumer, org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ClientService.Interface stub, HRegionLocation loc, boolean isRegionServerRemote, int priority, long scannerLeaseTimeoutPeriodNs, long pauseNs, long pauseNsForServerOverloaded, int maxAttempts, long scanTimeoutNs, long rpcTimeoutNs, int startLogErrorsCnt, Map<String, byte[]> requestAttributes)
- AsyncTableResultScanner(TableName tableName, Scan scan, long maxCacheSize)
- ClientSideRegionScanner(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path rootDir, TableDescriptor htd, RegionInfo hri, Scan scan, ScanMetrics scanMetrics)
- ImmutableScan(Scan scan)
  Create an immutable instance of Scan from the given Scan object.
- (package private) OnlineLogRecord(long startTime, int processingTime, int queueTime, long responseSize, long blockBytesScanned, long fsReadTime, String clientAddress, String serverClass, String methodName, String callDetails, String param, String regionName, String userName, int multiGetsCount, int multiMutationsCount, int multiServiceCalls, Scan scan, Map<String, byte[]> requestAttributes, Map<String, byte[]> connectionAttributes)
  Creates a new instance of this class while copying all values.
- TableSnapshotScanner(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.Path restoreDir, String snapshotName, Scan scan)
  Creates a TableSnapshotScanner.
- TableSnapshotScanner(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.Path rootDir, org.apache.hadoop.fs.Path restoreDir, String snapshotName, Scan scan)
- TableSnapshotScanner(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.Path rootDir, org.apache.hadoop.fs.Path restoreDir, String snapshotName, Scan scan, boolean snapshotAlreadyRestored)
  Creates a TableSnapshotScanner.
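Taken together, the builder-style methods above compose in the usual fluent way. A minimal sketch of a bounded client-side scan (table name, family, and qualifier are placeholders):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class BoundedScanSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("my_table"))) {
          Scan scan = new Scan()
            .withStartRow(Bytes.toBytes("row-0000"))            // start row, inclusive by default
            .withStopRow(Bytes.toBytes("row-1000"))             // stop row, exclusive by default
            .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q")) // one column only
            .setCaching(100)                                    // rows fetched per RPC
            .readVersions(1);                                   // latest version only
          try (ResultScanner scanner = table.getScanner(scan)) {
            for (Result r : scanner) {
              System.out.println(Bytes.toString(r.getRow()));
            }
          }
        }
      }
    }

ImmutableScan, by contrast, wraps an existing Scan as a read-only view, which is why it overrides each mutator listed above.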
Uses of Scan in org.apache.hadoop.hbase.client.trace
Methods in org.apache.hadoop.hbase.client.trace with parameters of type Scan:

- TableOperationSpanBuilder.setOperation(Scan scan)
- private static HBaseSemanticAttributes.Operation
Uses of Scan in org.apache.hadoop.hbase.coprocessor
Methods in org.apache.hadoop.hbase.coprocessor with parameters of type Scan:

- RegionCoprocessorEnvironment.checkScanQuota(Scan scan, long maxBlockBytesScanned, long prevBlockBytesScannedDifference)
  Check the quota for the current (rpc-context) user.
- default RegionScanner RegionObserver.postScannerOpen(ObserverContext<? extends RegionCoprocessorEnvironment> c, Scan scan, RegionScanner s)
  Called after the client opens a new scanner.
- default void RegionObserver.preScannerOpen(ObserverContext<? extends RegionCoprocessorEnvironment> c, Scan scan)
  Called before the client opens a new scanner.
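As a minimal sketch of the RegionObserver hooks above (the observer class and its row-limit policy are invented for illustration):

    import java.io.IOException;
    import java.util.Optional;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.coprocessor.ObserverContext;
    import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
    import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
    import org.apache.hadoop.hbase.coprocessor.RegionObserver;

    public class ScanLimitingObserver implements RegionCoprocessor, RegionObserver {
      @Override
      public Optional<RegionObserver> getRegionObserver() {
        return Optional.of(this);
      }

      @Override
      public void preScannerOpen(ObserverContext<? extends RegionCoprocessorEnvironment> c,
          Scan scan) throws IOException {
        // Rewrite the client's Scan before the scanner is opened:
        // cap every scan at 10,000 rows (an illustrative policy).
        scan.setLimit(10_000);
      }
    }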
Uses of Scan in org.apache.hadoop.hbase.coprocessor.example
Methods in org.apache.hadoop.hbase.coprocessor.example with parameters of type Scan:

- void ScanModifyingObserver.preScannerOpen(ObserverContext<? extends RegionCoprocessorEnvironment> c, Scan scan)
Uses of Scan in org.apache.hadoop.hbase.io
Methods in org.apache.hadoop.hbase.io with parameters of type Scan
Uses of Scan in org.apache.hadoop.hbase.mapred
Method parameters in org.apache.hadoop.hbase.mapred with type arguments of type Scan:

- static void TableMapReduceUtil.initMultiTableSnapshotMapperJob(Map<String, Collection<Scan>> snapshotScans, Class<? extends TableMap> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapred.JobConf job, boolean addDependencyJars, org.apache.hadoop.fs.Path tmpRestoreDir)
  Sets up the job for reading from one or more table snapshots, with one or more scans per snapshot.
- static void MultiTableSnapshotInputFormat.setInput(org.apache.hadoop.conf.Configuration conf, Map<String, Collection<Scan>> snapshotScans, org.apache.hadoop.fs.Path restoreDir)

Constructors in org.apache.hadoop.hbase.mapred with parameters of type Scan:

- TableSnapshotRegionSplit(TableDescriptor htd, RegionInfo regionInfo, List<String> locations, Scan scan, org.apache.hadoop.fs.Path restoreDir)
Uses of Scan in org.apache.hadoop.hbase.mapreduce
Fields in org.apache.hadoop.hbase.mapreduce declared as Scan:

- private Scan TableRecordReaderImpl.currentScan
- private Scan TableInputFormatBase.scan
  Holds the details for the internal scanner.
- private Scan TableRecordReaderImpl.scan
- private Scan TableSnapshotInputFormatImpl.RecordReader.scan

Fields in org.apache.hadoop.hbase.mapreduce with type parameters of type Scan:

- MultiTableInputFormatBase.scans
  Holds the set of scans used to define the input.

Methods in org.apache.hadoop.hbase.mapreduce that return Scan:

- static Scan TableMapReduceUtil.convertStringToScan(String base64)
  Converts the given Base64 string back into a Scan instance.
- static Scan TableInputFormat.createScanFromConfiguration(org.apache.hadoop.conf.Configuration conf)
  Sets up a Scan instance, applying settings from the configuration property constants defined in TableInputFormat.
- static Scan TableSnapshotInputFormatImpl.extractScanFromConf(org.apache.hadoop.conf.Configuration conf)
- private static Scan CellCounter.getConfiguredScanForJob(org.apache.hadoop.conf.Configuration conf, String[] args)
- TableInputFormatBase.getScan()
  Gets the scan defining the actual details like columns etc.
- TableSplit.getScan()
  Returns a Scan object from the stored string representation.
- (package private) static Scan ExportUtils.getScanFromCommandLine(org.apache.hadoop.conf.Configuration conf, String[] args)
- (package private) Scan HashTable.TableHash.initScan()

Methods in org.apache.hadoop.hbase.mapreduce that return types with arguments of type Scan:

- ExportUtils.getArgumentsFromCommandLine(org.apache.hadoop.conf.Configuration conf, String[] args)
- MultiTableInputFormatBase.getScans()
  Allows subclasses to get the list of Scan objects.
- MultiTableSnapshotInputFormatImpl.getSnapshotsToScans(org.apache.hadoop.conf.Configuration conf)
  Retrieve the snapshot name -> list<scan> mapping pushed to configuration by MultiTableSnapshotInputFormatImpl.setSnapshotToScans(Configuration, Map).

Methods in org.apache.hadoop.hbase.mapreduce with parameters of type Scan:

- private static void TableInputFormat.addColumn(Scan scan, byte[] familyAndQualifier)
  Parses a combined family and qualifier and adds either both or just the family in case there is no qualifier.
- static void TableInputFormat.addColumns(Scan scan, byte[][] columns)
  Adds an array of columns specified using old format, family:qualifier.
- private static void TableInputFormat.addColumns(Scan scan, String columns)
  Convenience method to parse a string representation of an array of column specifiers.
- static String TableMapReduceUtil.convertScanToString(Scan scan)
  Writes the given scan into a Base64 encoded string.
- TableSnapshotInputFormatImpl.getSplits(Scan scan, SnapshotManifest manifest, List<RegionInfo> regionManifests, org.apache.hadoop.fs.Path restoreDir, org.apache.hadoop.conf.Configuration conf)
- TableSnapshotInputFormatImpl.getSplits(Scan scan, SnapshotManifest manifest, List<RegionInfo> regionManifests, org.apache.hadoop.fs.Path restoreDir, org.apache.hadoop.conf.Configuration conf, RegionSplitter.SplitAlgorithm sa, int numSplits)
- private void CopyTable.initCopyTableMapperJob(org.apache.hadoop.mapreduce.Job job, Scan scan)
- static void GroupingTableMapper.initJob(String table, Scan scan, String groupColumns, Class<? extends TableMapper> mapper, org.apache.hadoop.mapreduce.Job job)
  Use this before submitting a TableMap job.
- static void IdentityTableMapper.initJob(String table, Scan scan, Class<? extends TableMapper> mapper, org.apache.hadoop.mapreduce.Job job)
  Use this before submitting a TableMap job.
- static void TableMapReduceUtil.initTableMapperJob(byte[] table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job)
  Use this before submitting a TableMap job.
- static void TableMapReduceUtil.initTableMapperJob(byte[] table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars)
  Use this before submitting a TableMap job.
- static void TableMapReduceUtil.initTableMapperJob(byte[] table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)
  Use this before submitting a TableMap job.
- static void TableMapReduceUtil.initTableMapperJob(String table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job)
  Use this before submitting a TableMap job.
- static void TableMapReduceUtil.initTableMapperJob(String table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars)
  Use this before submitting a TableMap job.
- static void TableMapReduceUtil.initTableMapperJob(String table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, boolean initCredentials, Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)
  Use this before submitting a TableMap job.
- static void TableMapReduceUtil.initTableMapperJob(String table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)
  Use this before submitting a TableMap job.
- static void TableMapReduceUtil.initTableMapperJob(TableName table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job)
  Use this before submitting a TableMap job.
- static void TableMapReduceUtil.initTableSnapshotMapperJob(String snapshotName, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, org.apache.hadoop.fs.Path tmpRestoreDir)
  Sets up the job for reading from a table snapshot.
- static void TableMapReduceUtil.initTableSnapshotMapperJob(String snapshotName, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, org.apache.hadoop.fs.Path tmpRestoreDir, RegionSplitter.SplitAlgorithm splitAlgo, int numSplitsPerRegion)
  Sets up the job for reading from a table snapshot.
- void TableInputFormatBase.setScan(Scan scan)
  Sets the scan defining the actual details like columns etc.
- void TableRecordReader.setScan(Scan scan)
  Sets the scan defining the actual details like columns etc.
- void TableRecordReaderImpl.setScan(Scan scan)
  Sets the scan defining the actual details like columns etc.
- private static void RowCounter.setScanFilter(Scan scan, List<MultiRowRangeFilter.RowRange> rowRangeList, boolean countDeleteMarkers)
  Sets a FilterBase filter on the Scan instance.

Method parameters in org.apache.hadoop.hbase.mapreduce with type arguments of type Scan:

- static void TableMapReduceUtil.initMultiTableSnapshotMapperJob(Map<String, Collection<Scan>> snapshotScans, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, org.apache.hadoop.fs.Path tmpRestoreDir)
  Sets up the job for reading from one or more table snapshots, with one or more scans per snapshot.
- static void TableMapReduceUtil.initTableMapperJob(List<Scan> scans, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job)
  Use this before submitting a Multi TableMap job.
- static void TableMapReduceUtil.initTableMapperJob(List<Scan> scans, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars)
  Use this before submitting a Multi TableMap job.
- static void TableMapReduceUtil.initTableMapperJob(List<Scan> scans, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, boolean initCredentials)
  Use this before submitting a Multi TableMap job.
- static void MultiTableSnapshotInputFormat.setInput(org.apache.hadoop.conf.Configuration configuration, Map<String, Collection<Scan>> snapshotScans, org.apache.hadoop.fs.Path tmpRestoreDir)
- void MultiTableSnapshotInputFormatImpl.setInput(org.apache.hadoop.conf.Configuration conf, Map<String, Collection<Scan>> snapshotScans, org.apache.hadoop.fs.Path restoreDir)
  Configure conf to read from snapshotScans, with snapshots restored to a subdirectory of restoreDir.
- protected void MultiTableInputFormatBase.setScans(List<Scan> scans)
  Allows subclasses to set the list of Scan objects.
- void MultiTableSnapshotInputFormatImpl.setSnapshotToScans(org.apache.hadoop.conf.Configuration conf, Map<String, Collection<Scan>> snapshotScans)
  Push snapshotScans to conf (under the key MultiTableSnapshotInputFormatImpl.SNAPSHOT_TO_SCANS_KEY).

Constructors in org.apache.hadoop.hbase.mapreduce with parameters of type Scan:

- InputSplit(TableDescriptor htd, RegionInfo regionInfo, List<String> locations, Scan scan, org.apache.hadoop.fs.Path restoreDir)
- TableSnapshotRegionSplit(TableDescriptor htd, RegionInfo regionInfo, List<String> locations, Scan scan, org.apache.hadoop.fs.Path restoreDir)
- TableSplit(TableName tableName, Scan scan, byte[] startRow, byte[] endRow, String location)
  Creates a new instance while assigning all variables.
- TableSplit(TableName tableName, Scan scan, byte[] startRow, byte[] endRow, String location, long length)
  Creates a new instance while assigning all variables.
- TableSplit(TableName tableName, Scan scan, byte[] startRow, byte[] endRow, String location, String encodedRegionName, long length)
  Creates a new instance while assigning all variables.
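A minimal sketch of wiring a Scan into a map-only job with TableMapReduceUtil.initTableMapperJob; the table name, family, and no-op mapper are placeholders:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.hbase.mapreduce.TableMapper;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

    public class ScanJobSketch {
      public static class NoOpMapper extends TableMapper<ImmutableBytesWritable, Result> {
        @Override
        protected void map(ImmutableBytesWritable key, Result value, Context context) {
          // inspect each row here
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "scan my_table");
        Scan scan = new Scan();
        scan.addFamily(Bytes.toBytes("cf")); // restrict the input to one family
        scan.setCaching(500);                // larger per-RPC batches suit MR scans
        scan.setCacheBlocks(false);          // avoid churning the region server block cache
        TableMapReduceUtil.initTableMapperJob("my_table", scan, NoOpMapper.class,
          ImmutableBytesWritable.class, Result.class, job);
        job.setNumReduceTasks(0);
        job.setOutputFormatClass(NullOutputFormat.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

Under the hood, initTableMapperJob serializes the Scan into the job configuration with convertScanToString, and the input format restores it with convertStringToScan, which is why both helpers appear in the list above.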
Uses of Scan in org.apache.hadoop.hbase.mapreduce.replication
Fields in org.apache.hadoop.hbase.mapreduce.replication declared as Scan:

- private Scan VerifyReplication.Verifier.tableScan
- private final Scan VerifyReplicationRecompareRunnable.tableScan

Methods in org.apache.hadoop.hbase.mapreduce.replication with parameters of type Scan:

- private static void VerifyReplication.setRowPrefixFilter(Scan scan, String rowPrefixes)
- private static void VerifyReplication.setStartAndStopRows(Scan scan, byte[] startPrefixRow, byte[] lastPrefixRow)

Constructors in org.apache.hadoop.hbase.mapreduce.replication with parameters of type Scan:

- VerifyReplicationRecompareRunnable(org.apache.hadoop.mapreduce.Mapper.Context context, Result sourceResult, Result replicatedResult, VerifyReplication.Verifier.Counters originalCounter, String delimiter, Scan tableScan, Table sourceTable, Table replicatedTable, int reCompareTries, int sleepMsBeforeReCompare, int reCompareBackoffExponent, boolean verbose)
Uses of Scan in org.apache.hadoop.hbase.master.assignment
Methods in org.apache.hadoop.hbase.master.assignment that return Scan:

- private Scan RegionStateStore.getScanForUpdateRegionReplicas(TableName tableName)
Uses of Scan in org.apache.hadoop.hbase.master.http
Methods in org.apache.hadoop.hbase.master.http that return Scan
Uses of Scan in org.apache.hadoop.hbase.master.region
Methods in org.apache.hadoop.hbase.master.region with parameters of type Scan:

- MasterRegion.getRegionScanner(Scan scan)
- MasterRegion.getScanner(Scan scan)
Uses of Scan in org.apache.hadoop.hbase.mob
Methods in org.apache.hadoop.hbase.mob with parameters of type Scan:

- static boolean MobUtils.isCacheMobBlocks(Scan scan)
  Indicates whether the scan contains the information of caching blocks.
- static boolean MobUtils.isRawMobScan(Scan scan)
  Indicates whether it's a raw scan.
- static boolean MobUtils.isReadEmptyValueOnMobCellMiss(Scan scan)
  Indicates whether to return a null value when the mob file is missing or corrupt.
- static boolean MobUtils.isRefOnlyScan(Scan scan)
  Indicates whether it's a reference-only scan.
- static void MobUtils.setCacheMobBlocks(Scan scan, boolean cacheBlocks)
  Sets the attribute of caching blocks in the scan.
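The MobUtils predicates above read attributes carried on the Scan itself; a small sketch:

    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.mob.MobUtils;

    public class MobScanFlagsSketch {
      public static void main(String[] args) {
        Scan scan = new Scan();
        // Ask the server not to cache MOB blocks read on behalf of this scan.
        MobUtils.setCacheMobBlocks(scan, false);
        System.out.println(MobUtils.isCacheMobBlocks(scan)); // false, as set above
        System.out.println(MobUtils.isRawMobScan(scan));     // false for a plain scan
      }
    }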
Uses of Scan in org.apache.hadoop.hbase.quotas
Methods in org.apache.hadoop.hbase.quotas that return Scan:

- (package private) static Scan QuotaTableUtil.createScanForNamespaceSnapshotSizes()
  Returns a scanner for all existing namespace snapshot entries.
- (package private) static Scan QuotaTableUtil.createScanForNamespaceSnapshotSizes(String namespace)
  Returns a scanner for all namespace snapshot entries of the given namespace.
- (package private) static Scan QuotaTableUtil.createScanForSpaceSnapshotSizes()
- (package private) static Scan QuotaTableUtil.createScanForSpaceSnapshotSizes(TableName table)
- static Scan QuotaTableUtil.makeQuotaSnapshotScan()
  Creates a Scan which returns only quota snapshots from the quota table.
- static Scan QuotaTableUtil.makeQuotaSnapshotScanForTable(TableName tn)
  Creates a Scan which returns only SpaceQuotaSnapshot from the quota table for a specific table.
- static Scan QuotaTableUtil.makeScan(QuotaFilter filter)

Methods in org.apache.hadoop.hbase.quotas with parameters of type Scan:

- QuotaTableUtil.createDeletesForExistingSnapshotsFromScan(Connection connection, Scan scan)
  Returns a list of Delete to remove all entries returned by the passed scanner.
- static <K> Map<K, QuotaState> QuotaUtil.fetchGlobalQuotas(org.apache.hadoop.conf.Configuration conf, String type, Scan scan, Connection connection, QuotaUtil.KeyFromRow<K> kfr)
- private void QuotaRetriever.init(Connection conn, Scan scan)

Constructors in org.apache.hadoop.hbase.quotas with parameters of type Scan:

- (package private) QuotaRetriever(org.apache.hadoop.conf.Configuration conf, Scan scan)
- QuotaRetriever(Connection conn, Scan scan)
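A minimal sketch of building quota scans with the public factory methods above; the table-name filter "my_table" is a placeholder, and the fluent QuotaFilter setter is assumed:

    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.quotas.QuotaFilter;
    import org.apache.hadoop.hbase.quotas.QuotaTableUtil;

    public class QuotaScanSketch {
      public static void main(String[] args) {
        // Scan returning only quota snapshots from the quota table.
        Scan snapshots = QuotaTableUtil.makeQuotaSnapshotScan();
        // Scan over quota settings that match a table-name filter.
        Scan settings = QuotaTableUtil.makeScan(new QuotaFilter().setTableFilter("my_table"));
        System.out.println(snapshots);
        System.out.println(settings);
      }
    }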
Uses of Scan in org.apache.hadoop.hbase.regionserver
Subclasses of Scan in org.apache.hadoop.hbase.regionserver:

- class InternalScan
  Special scanner, currently used for increment operations to allow additional server-side arguments for Scan operations.

Fields in org.apache.hadoop.hbase.regionserver declared as Scan:

- private final Scan CustomizedScanInfoBuilder.scan
- private final Scan StoreScanner.scan
- private static final Scan StoreScanner.SCAN_FOR_COMPACTION

Methods in org.apache.hadoop.hbase.regionserver that return Scan:

- CustomizedScanInfoBuilder.getScan()
- ScanOptions.getScan()
  Returns a copy of the Scan object.

Methods in org.apache.hadoop.hbase.regionserver with parameters of type Scan:

- RegionCoprocessorHost.RegionEnvironment.checkScanQuota(Scan scan, long maxBlockBytesScanned, long prevBlockBytesScannedDifference)
- protected KeyValueScanner HMobStore.createScanner(Scan scan, ScanInfo scanInfo, NavigableSet<byte[]> targetCols, long readPt)
  Gets the MobStoreScanner or MobReversedStoreScanner.
- protected KeyValueScanner HStore.createScanner(Scan scan, ScanInfo scanInfo, NavigableSet<byte[]> targetCols, long readPt)
- HRegion.getScanner(Scan scan)
- HRegion.getScanner(Scan scan, List<KeyValueScanner> additionalScanners)
- private RegionScannerImpl HRegion.getScanner(Scan scan, List<KeyValueScanner> additionalScanners, long nonceGroup, long nonce)
- HStore.getScanner(Scan scan, NavigableSet<byte[]> targetCols, long readPt)
  Return a scanner for both the memstore and the HStore files.
- Region.getScanner(Scan scan)
  Return an iterator that scans over the HRegion, returning the indicated columns and rows specified by the Scan.
- Region.getScanner(Scan scan, List<KeyValueScanner> additionalScanners)
  Return an iterator that scans over the HRegion, returning the indicated columns and rows specified by the Scan.
- private void RegionScannerImpl.initializeScanners(Scan scan, List<KeyValueScanner> additionalScanners)
- protected RegionScannerImpl HRegion.instantiateRegionScanner(Scan scan, List<KeyValueScanner> additionalScanners, long nonceGroup, long nonce)
- private boolean RSRpcServices.isFullRegionScan(Scan scan, HRegion region)
- private static boolean StoreScanner.isOnlyLatestVersionScan(Scan scan)
- (package private) boolean StoreFileReader.passesBloomFilter(Scan scan, SortedSet<byte[]> columns)
  Checks whether the given scan passes the Bloom filter (if present).
- private boolean StoreFileReader.passesGeneralRowPrefixBloomFilter(Scan scan)
  A method for checking Bloom filters.
- boolean StoreFileReader.passesKeyRangeFilter(Scan scan)
  Checks whether the given scan rowkey range overlaps with the current storefile's key range.
- RegionCoprocessorHost.postScannerOpen(Scan scan, RegionScanner s)
- void RegionCoprocessorHost.preScannerOpen(Scan scan)
- RegionCoprocessorHost.preStoreScannerOpen(HStore store, Scan scan)
  Called before opening a store scanner for a user scan.
- boolean KeyValueScanner.shouldUseScanner(Scan scan, HStore store, long oldestUnexpiredTS)
  Allows filtering out scanners (both StoreFile and memstore) that we don't want to use based on criteria such as Bloom filters and timestamp ranges.
- boolean NonLazyKeyValueScanner.shouldUseScanner(Scan scan, HStore store, long oldestUnexpiredTS)
- boolean SegmentScanner.shouldUseScanner(Scan scan, HStore store, long oldestUnexpiredTS)
  This functionality should be resolved at the higher level (MemStoreScanner); currently returns true by default.
- boolean StoreFileScanner.shouldUseScanner(Scan scan, HStore store, long oldestUnexpiredTS)

Constructors in org.apache.hadoop.hbase.regionserver with parameters of type Scan:

- CustomizedScanInfoBuilder(ScanInfo scanInfo, Scan scan)
- InternalScan(Scan scan)
- MobStoreScanner(HStore store, ScanInfo scanInfo, Scan scan, NavigableSet<byte[]> columns, long readPt)
- (package private) RegionScannerImpl(Scan scan, List<KeyValueScanner> additionalScanners, HRegion region, long nonceGroup, long nonce)
- (package private) ReversedMobStoreScanner(HStore store, ScanInfo scanInfo, Scan scan, NavigableSet<byte[]> columns, long readPt)
- (package private) ReversedRegionScannerImpl(Scan scan, List<KeyValueScanner> additionalScanners, HRegion region, long nonceGroup, long nonce)
- ReversedStoreScanner(Scan scan, ScanInfo scanInfo, NavigableSet<byte[]> columns, List<? extends KeyValueScanner> scanners)
  Constructor for testing.
- ReversedStoreScanner(HStore store, ScanInfo scanInfo, Scan scan, NavigableSet<byte[]> columns, long readPt)
  Opens a scanner across memstore, snapshot, and all StoreFiles.
- (package private) StoreScanner(Scan scan, ScanInfo scanInfo, NavigableSet<byte[]> columns, List<? extends KeyValueScanner> scanners)
- (package private) StoreScanner(Scan scan, ScanInfo scanInfo, NavigableSet<byte[]> columns, List<? extends KeyValueScanner> scanners, ScanType scanType)
- private StoreScanner(HStore store, Scan scan, ScanInfo scanInfo, int numColumns, long readPt, boolean cacheBlocks, ScanType scanType)
  An internal constructor.
- StoreScanner(HStore store, ScanInfo scanInfo, Scan scan, NavigableSet<byte[]> columns, long readPt)
  Opens a scanner across memstore, snapshot, and all StoreFiles.
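These are server-internal APIs, but a coprocessor can obtain a region-local scanner through Region.getScanner. A minimal sketch, assuming it runs inside a region coprocessor with access to a RegionCoprocessorEnvironment:

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.hbase.Cell;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
    import org.apache.hadoop.hbase.regionserver.Region;
    import org.apache.hadoop.hbase.regionserver.RegionScanner;

    final class RegionScanSketch {
      /** Count the rows of the local region, entirely server-side. */
      static long countRows(RegionCoprocessorEnvironment env) throws IOException {
        Region region = env.getRegion();
        long rows = 0;
        try (RegionScanner scanner = region.getScanner(new Scan())) {
          List<Cell> cells = new ArrayList<>();
          boolean more;
          do {
            cells.clear();
            more = scanner.next(cells); // fills one row's cells per call
            if (!cells.isEmpty()) {
              rows++;
            }
          } while (more);
        }
        return rows;
      }
    }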
Uses of Scan in org.apache.hadoop.hbase.regionserver.querymatcher
Methods in org.apache.hadoop.hbase.regionserver.querymatcher with parameters of type Scan:

- static NormalUserScanQueryMatcher NormalUserScanQueryMatcher.create(Scan scan, ScanInfo scanInfo, ColumnTracker columns, DeleteTracker deletes, boolean hasNullColumn, long oldestUnexpiredTS, long now)
- static RawScanQueryMatcher RawScanQueryMatcher.create(Scan scan, ScanInfo scanInfo, ColumnTracker columns, boolean hasNullColumn, long oldestUnexpiredTS, long now)
- static UserScanQueryMatcher UserScanQueryMatcher.create(Scan scan, ScanInfo scanInfo, NavigableSet<byte[]> columns, long oldestUnexpiredTS, long now, RegionCoprocessorHost regionCoprocessorHost)
- private static ExtendedCell UserScanQueryMatcher.createStartKey(Scan scan, ScanInfo scanInfo)
- protected static Pair<DeleteTracker, ColumnTracker> ScanQueryMatcher.getTrackers(RegionCoprocessorHost host, NavigableSet<byte[]> columns, ScanInfo scanInfo, long oldestUnexpiredTS, Scan userScan)

Constructors in org.apache.hadoop.hbase.regionserver.querymatcher with parameters of type Scan:

- protected NormalUserScanQueryMatcher(Scan scan, ScanInfo scanInfo, ColumnTracker columns, boolean hasNullColumn, DeleteTracker deletes, long oldestUnexpiredTS, long now)
- protected RawScanQueryMatcher(Scan scan, ScanInfo scanInfo, ColumnTracker columns, boolean hasNullColumn, long oldestUnexpiredTS, long now)
- protected UserScanQueryMatcher(Scan scan, ScanInfo scanInfo, ColumnTracker columns, boolean hasNullColumn, long oldestUnexpiredTS, long now)
Uses of Scan in org.apache.hadoop.hbase.replication
Methods in org.apache.hadoop.hbase.replication with parameters of type Scan:

- private void TableReplicationQueueStorage.listAllQueueIds(Table table, Scan scan, List<ReplicationQueueId> queueIds)
- private <T extends Collection<String>> T TableReplicationQueueStorage.scanHFiles(Scan scan, Supplier<T> creator)
Uses of Scan in org.apache.hadoop.hbase.rest.model
Methods in org.apache.hadoop.hbase.rest.model with parameters of type Scan
Uses of Scan in org.apache.hadoop.hbase.security.access
Methods in org.apache.hadoop.hbase.security.access with parameters of type Scan:

- AccessController.postScannerOpen(ObserverContext<? extends RegionCoprocessorEnvironment> c, Scan scan, RegionScanner s)
- void AccessController.preScannerOpen(ObserverContext<? extends RegionCoprocessorEnvironment> c, Scan scan)
Uses of Scan in org.apache.hadoop.hbase.security.visibility
Methods in org.apache.hadoop.hbase.security.visibility with parameters of type Scan:

- VisibilityController.postScannerOpen(ObserverContext<? extends RegionCoprocessorEnvironment> c, Scan scan, RegionScanner s)
- void VisibilityController.preScannerOpen(ObserverContext<? extends RegionCoprocessorEnvironment> e, Scan scan)
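On the client side, the checks enforced by VisibilityController pair with Scan.setAuthorizations from the client package above. A small sketch, with placeholder labels:

    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.security.visibility.Authorizations;

    public class VisibilityScanSketch {
      public static void main(String[] args) {
        // Only cells whose visibility expressions are satisfied by these
        // labels will be returned to this scan.
        Scan scan = new Scan().setAuthorizations(new Authorizations("secret", "topsecret"));
        System.out.println(scan);
      }
    }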
Uses of Scan in org.apache.hadoop.hbase.thrift2
Methods in org.apache.hadoop.hbase.thrift2 that return Scan:

- static Scan ThriftUtilities.scanFromThrift(org.apache.hadoop.hbase.thrift2.generated.TScan in)

Methods in org.apache.hadoop.hbase.thrift2 with parameters of type Scan:

- static org.apache.hadoop.hbase.thrift2.generated.TScan ThriftUtilities.scanFromHBase(Scan in)
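A minimal sketch of the Thrift round trip above; ThriftUtilities is internal to the Thrift2 service, and the generated TScan accessors are assumed, so this is illustrative only:

    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.thrift2.ThriftUtilities;
    import org.apache.hadoop.hbase.thrift2.generated.TScan;

    public class ThriftScanSketch {
      public static void main(String[] args) throws Exception {
        TScan tScan = new TScan();
        tScan.setCaching(100);                             // Thrift-side scan setting
        Scan scan = ThriftUtilities.scanFromThrift(tScan); // TScan -> Scan
        TScan back = ThriftUtilities.scanFromHBase(scan);  // Scan -> TScan
        System.out.println(back.getCaching());
      }
    }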
Uses of Scan in org.apache.hadoop.hbase.thrift2.client
Methods in org.apache.hadoop.hbase.thrift2.client with parameters of type Scan

Constructors in org.apache.hadoop.hbase.thrift2.client with parameters of type Scan