Modifier and Type | Method and Description |
---|---|
private static Scan | AsyncMetaTableAccessor.getMetaScan(AsyncTable<?> metaTable, int rowUpperLimit) |
private static Scan | MetaTableAccessor.getMetaScan(org.apache.hadoop.conf.Configuration conf, int rowUpperLimit) |
static Scan | MetaTableAccessor.getScanForTableName(org.apache.hadoop.conf.Configuration conf, TableName tableName) - This method creates a Scan object that will only scan catalog rows that belong to the specified table. |
Modifier and Type | Class and Description |
---|---|
class | ImmutableScan - Immutable version of Scan. |
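ImmutableScan wraps an existing Scan so a configured scan can be shared without risk of mutation. A minimal sketch, assuming a hypothetical column family and caching value:

```java
import org.apache.hadoop.hbase.client.ImmutableScan;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class ImmutableScanExample {
  public static void main(String[] args) {
    Scan template = new Scan()
        .addFamily(Bytes.toBytes("cf")) // hypothetical family
        .setCaching(100);
    // Freeze the configuration: getters delegate to the wrapped Scan,
    // while mutators are rejected at runtime.
    Scan frozen = new ImmutableScan(template);
    frozen.setCaching(200); // throws UnsupportedOperationException
  }
}
```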
Modifier and Type | Field and Description |
---|---|
private Scan | ImmutableScan.delegateScan |
private Scan | TableSnapshotScanner.scan |
private Scan | AsyncClientScanner.scan |
private Scan | ScannerCallableWithReplicas.scan |
private Scan | AsyncScanSingleRegionRpcRetryingCaller.scan |
private Scan | AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder.scan |
protected Scan | ClientScanner.scan |
protected Scan | ScannerCallable.scan |
private Scan | AsyncTableResultScanner.scan |
Modifier and Type | Method and Description |
---|---|
Scan | Scan.addColumn(byte[] family, byte[] qualifier) - Get the column from the specified family with the specified qualifier. |
Scan | ImmutableScan.addColumn(byte[] family, byte[] qualifier) |
Scan | Scan.addFamily(byte[] family) - Get all columns from the specified family. |
Scan | ImmutableScan.addFamily(byte[] family) |
static Scan | Scan.createScanFromCursor(Cursor cursor) - Create a new Scan with a cursor. |
protected Scan | ClientScanner.getScan() |
protected Scan | ScannerCallable.getScan() |
Scan | Scan.readAllVersions() - Get all available versions. |
Scan | ImmutableScan.readAllVersions() |
Scan | Scan.readVersions(int versions) - Get up to the specified number of versions of each column. |
Scan | ImmutableScan.readVersions(int versions) |
(package private) Scan | Scan.resetMvccReadPoint() - Set the mvcc read point to -1, which means do not use it. |
(package private) Scan | ImmutableScan.resetMvccReadPoint() |
Scan | Scan.setACL(Map<String,Permission> perms) |
Scan | ImmutableScan.setACL(Map<String,Permission> perms) |
Scan | Scan.setACL(String user, Permission perms) |
Scan | ImmutableScan.setACL(String user, Permission perms) |
Scan | Scan.setAllowPartialResults(boolean allowPartialResults) - Set whether the caller wants to see partial results when the server returns fewer cells than expected. |
Scan | ImmutableScan.setAllowPartialResults(boolean allowPartialResults) |
Scan | Scan.setAsyncPrefetch(boolean asyncPrefetch) |
Scan | ImmutableScan.setAsyncPrefetch(boolean asyncPrefetch) - Deprecated. |
Scan | Scan.setAttribute(String name, byte[] value) |
Scan | ImmutableScan.setAttribute(String name, byte[] value) |
Scan | Scan.setAuthorizations(Authorizations authorizations) |
Scan | ImmutableScan.setAuthorizations(Authorizations authorizations) |
Scan | Scan.setBatch(int batch) - Set the maximum number of cells to return for each call to next(). |
Scan | ImmutableScan.setBatch(int batch) |
Scan | Scan.setCacheBlocks(boolean cacheBlocks) - Set whether blocks should be cached for this Scan. |
Scan | ImmutableScan.setCacheBlocks(boolean cacheBlocks) |
Scan | Scan.setCaching(int caching) - Set the number of rows for caching that will be passed to scanners. |
Scan | ImmutableScan.setCaching(int caching) |
Scan | Scan.setColumnFamilyTimeRange(byte[] cf, long minStamp, long maxStamp) |
Scan | ImmutableScan.setColumnFamilyTimeRange(byte[] cf, long minStamp, long maxStamp) |
Scan | Scan.setConsistency(Consistency consistency) |
Scan | ImmutableScan.setConsistency(Consistency consistency) |
private Scan | RawAsyncTableImpl.setDefaultScanConfig(Scan scan) |
Scan | Scan.setFamilyMap(Map<byte[],NavigableSet<byte[]>> familyMap) - Set the familyMap. |
Scan | ImmutableScan.setFamilyMap(Map<byte[],NavigableSet<byte[]>> familyMap) |
Scan | Scan.setFilter(Filter filter) |
Scan | ImmutableScan.setFilter(Filter filter) |
Scan | Scan.setId(String id) |
Scan | ImmutableScan.setId(String id) |
Scan | Scan.setIsolationLevel(IsolationLevel level) |
Scan | ImmutableScan.setIsolationLevel(IsolationLevel level) |
Scan | Scan.setLimit(int limit) - Set the limit of rows for this scan. |
Scan | ImmutableScan.setLimit(int limit) |
Scan | Scan.setLoadColumnFamiliesOnDemand(boolean value) |
Scan | ImmutableScan.setLoadColumnFamiliesOnDemand(boolean value) |
Scan | Scan.setMaxResultSize(long maxResultSize) - Set the maximum result size. |
Scan | ImmutableScan.setMaxResultSize(long maxResultSize) |
Scan | Scan.setMaxResultsPerColumnFamily(int limit) - Set the maximum number of values to return per row per column family. |
Scan | ImmutableScan.setMaxResultsPerColumnFamily(int limit) |
Scan | Scan.setMaxVersions() - Deprecated. |
Scan | Scan.setMaxVersions(int maxVersions) - Deprecated. |
(package private) Scan | Scan.setMvccReadPoint(long mvccReadPoint) - Set the mvcc read point used to open a scanner. |
(package private) Scan | ImmutableScan.setMvccReadPoint(long mvccReadPoint) |
Scan | Scan.setNeedCursorResult(boolean needCursorResult) - When the server is slow, or we scan a table with much deleted data, or we use a sparse filter, the server will respond with heartbeats to prevent timeouts. |
Scan | ImmutableScan.setNeedCursorResult(boolean needCursorResult) |
Scan | Scan.setOneRowLimit() - Call this when you only want to get one row. |
Scan | ImmutableScan.setOneRowLimit() |
Scan | Scan.setPriority(int priority) |
Scan | ImmutableScan.setPriority(int priority) |
Scan | Scan.setRaw(boolean raw) - Enable/disable "raw" mode for this scan. |
Scan | ImmutableScan.setRaw(boolean raw) |
Scan | Scan.setReadType(Scan.ReadType readType) - Set the read type for this scan. |
Scan | ImmutableScan.setReadType(Scan.ReadType readType) |
Scan | Scan.setReplicaId(int Id) |
Scan | ImmutableScan.setReplicaId(int id) |
Scan | Scan.setReversed(boolean reversed) - Set whether this scan is a reversed one. |
Scan | ImmutableScan.setReversed(boolean reversed) |
Scan | Scan.setRowOffsetPerColumnFamily(int offset) - Set offset for the row per column family. |
Scan | ImmutableScan.setRowOffsetPerColumnFamily(int offset) |
Scan | Scan.setRowPrefixFilter(byte[] rowPrefix) - Set a filter (using stopRow and startRow) so the result set only contains rows where the rowKey starts with the specified prefix. |
Scan | Scan.setScanMetricsEnabled(boolean enabled) - Enable collection of ScanMetrics. |
Scan | ImmutableScan.setScanMetricsEnabled(boolean enabled) |
Scan | Scan.setSmall(boolean small) - Deprecated since 2.0.0; will be removed in 3.0.0. Use setLimit(int) and setReadType(ReadType) instead. For the one-RPC optimization, data is now also fetched when the scanner is opened, and if the number of rows reaches the limit the scanner is closed automatically, falling back to a single RPC. |
Scan | ImmutableScan.setSmall(boolean small) - Deprecated. |
Scan | Scan.setStartRow(byte[] startRow) - Deprecated since 2.0.0; will be removed in 3.0.0. Use withStartRow(byte[]) instead. This method may change the inclusiveness of the stop row to stay compatible with the old behavior. |
Scan | Scan.setStartStopRowForPrefixScan(byte[] rowPrefix) - Set a filter (using stopRow and startRow) so the result set only contains rows where the rowKey starts with the specified prefix. |
Scan | ImmutableScan.setStartStopRowForPrefixScan(byte[] rowPrefix) |
Scan | Scan.setStopRow(byte[] stopRow) - Deprecated since 2.0.0; will be removed in 3.0.0. Use withStopRow(byte[]) instead. This method may change the inclusiveness of the stop row to stay compatible with the old behavior. |
Scan | Scan.setTimeRange(long minStamp, long maxStamp) - Get versions of columns only within the specified timestamp range, [minStamp, maxStamp). |
Scan | ImmutableScan.setTimeRange(long minStamp, long maxStamp) |
Scan | Scan.setTimestamp(long timestamp) - Get versions of columns with the specified timestamp. |
Scan | ImmutableScan.setTimestamp(long timestamp) |
Scan | Scan.setTimeStamp(long timestamp) - Deprecated. |
Scan | ImmutableScan.setTimeStamp(long timestamp) - Deprecated. |
Scan | Scan.withStartRow(byte[] startRow) - Set the start row of the scan. |
Scan | ImmutableScan.withStartRow(byte[] startRow) |
Scan | Scan.withStartRow(byte[] startRow, boolean inclusive) - Set the start row of the scan. |
Scan | ImmutableScan.withStartRow(byte[] startRow, boolean inclusive) |
Scan | Scan.withStopRow(byte[] stopRow) - Set the stop row of the scan. |
Scan | ImmutableScan.withStopRow(byte[] stopRow) |
Scan | Scan.withStopRow(byte[] stopRow, boolean inclusive) - Set the stop row of the scan. |
Scan | ImmutableScan.withStopRow(byte[] stopRow, boolean inclusive) |
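Nearly all of the setters above return the Scan itself, so a scan is usually configured fluently before being handed to a table. A minimal sketch; the row keys, family, and qualifier are chosen purely for illustration:

```java
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanConfigExample {
  public static void main(String[] args) {
    Scan scan = new Scan()
        .withStartRow(Bytes.toBytes("row-000"), true)   // inclusive start
        .withStopRow(Bytes.toBytes("row-999"), false)   // exclusive stop
        .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"))
        .readVersions(3)          // up to 3 versions per column
        .setCaching(500)          // rows fetched per RPC
        .setCacheBlocks(false)    // skip the block cache for a one-off scan
        .setLimit(1000);          // stop after 1000 rows
  }
}
```

Note that withStartRow/withStopRow are the non-deprecated replacements for the setStartRow/setStopRow methods listed above.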
Modifier and Type | Method and Description |
---|---|
static ScanResultCache | ConnectionUtils.createScanResultCache(Scan scan) |
(package private) static RegionLocateType | ConnectionUtils.getLocateType(Scan scan) |
static long | PackagePrivateFieldAccessor.getMvccReadPoint(Scan scan) |
ResultScanner | HTable.getScanner(Scan scan) - The underlying HTable must not be closed. |
ResultScanner | AsyncTable.getScanner(Scan scan) - Returns a scanner on the current table as specified by the Scan object. |
default ResultScanner | Table.getScanner(Scan scan) - Returns a scanner on the current table as specified by the Scan object. |
AsyncTableResultScanner | RawAsyncTableImpl.getScanner(Scan scan) |
ResultScanner | AsyncTableImpl.getScanner(Scan scan) |
protected void | AbstractClientScanner.initScanMetrics(Scan scan) - Check and initialize if the application wants to collect scan metrics. |
(package private) static boolean | ConnectionUtils.noMoreResultsForReverseScan(Scan scan, RegionInfo info) |
(package private) static boolean | ConnectionUtils.noMoreResultsForScan(Scan scan, RegionInfo info) |
void | RawAsyncTableImpl.scan(Scan scan, AdvancedScanResultConsumer consumer) |
void | AsyncTable.scan(Scan scan, C consumer) - The scan API uses the observer pattern. |
void | AsyncTableImpl.scan(Scan scan, ScanResultConsumer consumer) |
private void | AsyncTableImpl.scan0(Scan scan, ScanResultConsumer consumer) |
CompletableFuture<List<Result>> | AsyncTable.scanAll(Scan scan) - Return all the results that match the given scan object. |
CompletableFuture<List<Result>> | RawAsyncTableImpl.scanAll(Scan scan) |
CompletableFuture<List<Result>> | AsyncTableImpl.scanAll(Scan scan) |
private Scan | RawAsyncTableImpl.setDefaultScanConfig(Scan scan) |
static void | PackagePrivateFieldAccessor.setMvccReadPoint(Scan scan, long mvccReadPoint) |
AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder | AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder.setScan(Scan scan) |
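The getScanner and scanAll entries above are the usual entry points for running a Scan from client code. A minimal synchronous sketch, assuming a hypothetical table my_table with a family cf; the asynchronous AsyncTable.scanAll variant returns a CompletableFuture<List<Result>> and buffers every result in memory, so it only suits small scans:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class GetScannerExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    Scan scan = new Scan().addFamily(Bytes.toBytes("cf"));
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("my_table"));
         ResultScanner scanner = table.getScanner(scan)) {
      for (Result result : scanner) { // ResultScanner is Iterable<Result>
        System.out.println(Bytes.toString(result.getRow()));
      }
    }
  }
}
```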
Constructor and Description |
---|
AsyncClientScanner(Scan scan, AdvancedScanResultConsumer consumer, TableName tableName, AsyncConnectionImpl conn, org.apache.hbase.thirdparty.io.netty.util.Timer retryTimer, long pauseNs, long pauseNsForServerOverloaded, int maxAttempts, long scanTimeoutNs, long rpcTimeoutNs, int startLogErrorsCnt) |
AsyncScanSingleRegionRpcRetryingCaller(org.apache.hbase.thirdparty.io.netty.util.Timer retryTimer, AsyncConnectionImpl conn, Scan scan, ScanMetrics scanMetrics, long scannerId, ScanResultCache resultCache, AdvancedScanResultConsumer consumer, org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ClientService.Interface stub, HRegionLocation loc, boolean isRegionServerRemote, int priority, long scannerLeaseTimeoutPeriodNs, long pauseNs, long pauseNsForServerOverloaded, int maxAttempts, long scanTimeoutNs, long rpcTimeoutNs, int startLogErrorsCnt) |
AsyncTableResultScanner(TableName tableName, Scan scan, long maxCacheSize) |
ClientAsyncPrefetchScanner(org.apache.hadoop.conf.Configuration configuration, Scan scan, TableName name, ClusterConnection connection, RpcRetryingCallerFactory rpcCallerFactory, RpcControllerFactory rpcControllerFactory, ExecutorService pool, int scanReadRpcTimeout, int scannerTimeout, int replicaCallTimeoutMicroSecondScan) |
ClientScanner(org.apache.hadoop.conf.Configuration conf, Scan scan, TableName tableName, ClusterConnection connection, RpcRetryingCallerFactory rpcFactory, RpcControllerFactory controllerFactory, ExecutorService pool, int scanReadRpcTimeout, int scannerTimeout, int primaryOperationTimeout) - Create a new ClientScanner for the specified table. Note that the passed Scan's start row may be changed. |
ClientSideRegionScanner(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path rootDir, TableDescriptor htd, RegionInfo hri, Scan scan, ScanMetrics scanMetrics) |
ClientSimpleScanner(org.apache.hadoop.conf.Configuration configuration, Scan scan, TableName name, ClusterConnection connection, RpcRetryingCallerFactory rpcCallerFactory, RpcControllerFactory rpcControllerFactory, ExecutorService pool, int scanReadRpcTimeout, int scannerTimeout, int replicaCallTimeoutMicroSecondScan) |
ImmutableScan(Scan scan) - Create an immutable instance of Scan from the given Scan object. |
ReversedClientScanner(org.apache.hadoop.conf.Configuration conf, Scan scan, TableName tableName, ClusterConnection connection, RpcRetryingCallerFactory rpcFactory, RpcControllerFactory controllerFactory, ExecutorService pool, int scanReadRpcTimeout, int scannerTimeout, int primaryOperationTimeout) - Create a new ReversedClientScanner for the specified table. Note that the passed Scan's start row may be changed. |
ReversedScannerCallable(ClusterConnection connection, TableName tableName, Scan scan, ScanMetrics scanMetrics, RpcControllerFactory rpcFactory, int replicaId) |
Scan(Scan scan) - Creates a new instance of this class while copying all values. |
ScannerCallable(ClusterConnection connection, TableName tableName, Scan scan, ScanMetrics scanMetrics, RpcControllerFactory rpcControllerFactory, int id) |
ScannerCallableWithReplicas(TableName tableName, ClusterConnection cConnection, ScannerCallable baseCallable, ExecutorService pool, int timeBeforeReplicas, Scan scan, int retries, int readRpcTimeout, int scannerTimeout, int caching, org.apache.hadoop.conf.Configuration conf, RpcRetryingCaller<Result[]> caller) |
TableSnapshotScanner(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.Path rootDir, org.apache.hadoop.fs.Path restoreDir, String snapshotName, Scan scan) |
TableSnapshotScanner(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.Path rootDir, org.apache.hadoop.fs.Path restoreDir, String snapshotName, Scan scan, boolean snapshotAlreadyRestored) - Creates a TableSnapshotScanner. |
TableSnapshotScanner(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.Path restoreDir, String snapshotName, Scan scan) - Creates a TableSnapshotScanner. |
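Most of the constructors above are internal, but TableSnapshotScanner can be used directly to read a snapshot without going through the region servers. A hedged sketch, with the snapshot name and restore directory invented for illustration:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.TableSnapshotScanner;

public class SnapshotScanExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Hypothetical writable directory; the snapshot is restored under it.
    Path restoreDir = new Path("/tmp/snapshot-restore");
    Scan scan = new Scan();
    try (TableSnapshotScanner scanner =
        new TableSnapshotScanner(conf, restoreDir, "my_snapshot", scan)) {
      for (Result r = scanner.next(); r != null; r = scanner.next()) {
        // process r
      }
    }
  }
}
```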
Modifier and Type | Method and Description |
---|---|
TableOperationSpanBuilder | TableOperationSpanBuilder.setOperation(Scan scan) |
private static HBaseSemanticAttributes.Operation | TableOperationSpanBuilder.valueFrom(Scan scan) |
Modifier and Type | Method and Description |
---|---|
default RegionScanner | RegionObserver.postScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Scan scan, RegionScanner s) - Called after the client opens a new scanner. |
default void | RegionObserver.preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Scan scan) - Called before the client opens a new scanner. |
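A minimal sketch of a coprocessor hooking preScannerOpen to adjust every client Scan before the region opens a scanner for it; in HBase 2.x the class must also implement RegionCoprocessor so the framework can discover the observer:

```java
import java.io.IOException;
import java.util.Optional;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;

public class CachingDisablingObserver implements RegionCoprocessor, RegionObserver {
  @Override
  public Optional<RegionObserver> getRegionObserver() {
    return Optional.of(this);
  }

  @Override
  public void preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Scan scan)
      throws IOException {
    // Adjust the client's Scan before the region opens a scanner for it.
    scan.setCacheBlocks(false);
  }
}
```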
Modifier and Type | Method and Description |
---|---|
void | ScanModifyingObserver.preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Scan scan) |
Modifier and Type | Method and Description |
---|---|
boolean | HalfStoreFileReader.passesKeyRangeFilter(Scan scan) |
Modifier and Type | Method and Description |
---|---|
static void | TableMapReduceUtil.initMultiTableSnapshotMapperJob(Map<String,Collection<Scan>> snapshotScans, Class<? extends TableMap> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapred.JobConf job, boolean addDependencyJars, org.apache.hadoop.fs.Path tmpRestoreDir) - Sets up the job for reading from one or more table snapshots, with one or more scans per snapshot. |
static void | MultiTableSnapshotInputFormat.setInput(org.apache.hadoop.conf.Configuration conf, Map<String,Collection<Scan>> snapshotScans, org.apache.hadoop.fs.Path restoreDir) |
Constructor and Description |
---|
TableSnapshotRegionSplit(HTableDescriptor htd, HRegionInfo regionInfo, List<String> locations, Scan scan, org.apache.hadoop.fs.Path restoreDir) |
Modifier and Type | Field and Description |
---|---|
private Scan | TableRecordReaderImpl.currentScan |
private Scan | TableInputFormatBase.scan - Holds the details for the internal scanner. |
private Scan | TableSnapshotInputFormatImpl.RecordReader.scan |
private Scan | TableRecordReaderImpl.scan |
Modifier and Type | Field and Description |
---|---|
private List<Scan> | MultiTableInputFormatBase.scans - Holds the set of scans used to define the input. |
Modifier and Type | Method and Description |
---|---|
static Scan | TableMapReduceUtil.convertStringToScan(String base64) - Converts the given Base64 string back into a Scan instance. |
static Scan | TableInputFormat.createScanFromConfiguration(org.apache.hadoop.conf.Configuration conf) - Sets up a Scan instance, applying settings from the configuration property constants defined in TableInputFormat. |
static Scan | TableSnapshotInputFormatImpl.extractScanFromConf(org.apache.hadoop.conf.Configuration conf) |
private static Scan | CellCounter.getConfiguredScanForJob(org.apache.hadoop.conf.Configuration conf, String[] args) |
Scan | TableInputFormatBase.getScan() - Gets the scan defining the actual details like columns etc. |
Scan | TableSplit.getScan() - Returns a Scan object from the stored string representation. |
(package private) static Scan | ExportUtils.getScanFromCommandLine(org.apache.hadoop.conf.Configuration conf, String[] args) |
(package private) Scan | HashTable.TableHash.initScan() |
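convertStringToScan is the inverse of TableMapReduceUtil.convertScanToString (listed further below); together they are how a Scan travels through a job configuration to the map tasks. A small round-trip sketch with an arbitrary example scan:

```java
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanSerializationExample {
  public static void main(String[] args) throws Exception {
    Scan scan = new Scan().addFamily(Bytes.toBytes("cf")).setCaching(500);
    // Scans are shipped to MapReduce tasks as Base64 strings in the job conf.
    String encoded = TableMapReduceUtil.convertScanToString(scan);
    Scan decoded = TableMapReduceUtil.convertStringToScan(encoded);
  }
}
```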
Modifier and Type | Method and Description |
---|---|
static Triple<TableName,Scan,org.apache.hadoop.fs.Path> | ExportUtils.getArgumentsFromCommandLine(org.apache.hadoop.conf.Configuration conf, String[] args) |
protected List<Scan> | MultiTableInputFormatBase.getScans() - Allows subclasses to get the list of Scan objects. |
Map<String,Collection<Scan>> | MultiTableSnapshotInputFormatImpl.getSnapshotsToScans(org.apache.hadoop.conf.Configuration conf) - Retrieves the snapshot name -> list<scan> mapping pushed to the configuration by MultiTableSnapshotInputFormatImpl.setSnapshotToScans(Configuration, Map). |
Modifier and Type | Method and Description |
---|---|
private static void | TableInputFormat.addColumn(Scan scan, byte[] familyAndQualifier) - Parses a combined family and qualifier and adds either both or just the family in case there is no qualifier. |
static void | TableInputFormat.addColumns(Scan scan, byte[][] columns) - Adds an array of columns specified using the old format, family:qualifier. |
private static void | TableInputFormat.addColumns(Scan scan, String columns) - Convenience method to parse a string representation of an array of column specifiers. |
static String | TableMapReduceUtil.convertScanToString(Scan scan) - Writes the given scan into a Base64 encoded string. |
static List<TableSnapshotInputFormatImpl.InputSplit> | TableSnapshotInputFormatImpl.getSplits(Scan scan, SnapshotManifest manifest, List<HRegionInfo> regionManifests, org.apache.hadoop.fs.Path restoreDir, org.apache.hadoop.conf.Configuration conf) |
static List<TableSnapshotInputFormatImpl.InputSplit> | TableSnapshotInputFormatImpl.getSplits(Scan scan, SnapshotManifest manifest, List<HRegionInfo> regionManifests, org.apache.hadoop.fs.Path restoreDir, org.apache.hadoop.conf.Configuration conf, RegionSplitter.SplitAlgorithm sa, int numSplits) |
private void | CopyTable.initCopyTableMapperReducerJob(org.apache.hadoop.mapreduce.Job job, Scan scan) |
static void | IdentityTableMapper.initJob(String table, Scan scan, Class<? extends TableMapper> mapper, org.apache.hadoop.mapreduce.Job job) - Use this before submitting a TableMap job. |
static void | GroupingTableMapper.initJob(String table, Scan scan, String groupColumns, Class<? extends TableMapper> mapper, org.apache.hadoop.mapreduce.Job job) - Use this before submitting a TableMap job. |
static void | TableMapReduceUtil.initTableMapperJob(byte[] table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job) - Use this before submitting a TableMap job. |
static void | TableMapReduceUtil.initTableMapperJob(byte[] table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars) - Use this before submitting a TableMap job. |
static void | TableMapReduceUtil.initTableMapperJob(byte[] table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass) - Use this before submitting a TableMap job. |
static void | TableMapReduceUtil.initTableMapperJob(String table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job) - Use this before submitting a TableMap job. |
static void | TableMapReduceUtil.initTableMapperJob(String table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars) - Use this before submitting a TableMap job. |
static void | TableMapReduceUtil.initTableMapperJob(String table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, boolean initCredentials, Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass) - Use this before submitting a TableMap job. |
static void | TableMapReduceUtil.initTableMapperJob(String table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass) - Use this before submitting a TableMap job. |
static void | TableMapReduceUtil.initTableMapperJob(TableName table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job) - Use this before submitting a TableMap job. |
static void | TableMapReduceUtil.initTableSnapshotMapperJob(String snapshotName, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, org.apache.hadoop.fs.Path tmpRestoreDir) - Sets up the job for reading from a table snapshot. |
static void | TableMapReduceUtil.initTableSnapshotMapperJob(String snapshotName, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, org.apache.hadoop.fs.Path tmpRestoreDir, RegionSplitter.SplitAlgorithm splitAlgo, int numSplitsPerRegion) - Sets up the job for reading from a table snapshot. |
void | TableInputFormatBase.setScan(Scan scan) - Sets the scan defining the actual details like columns etc. |
void | TableRecordReader.setScan(Scan scan) - Sets the scan defining the actual details like columns etc. |
void | TableRecordReaderImpl.setScan(Scan scan) - Sets the scan defining the actual details like columns etc. |
private static void | RowCounter.setScanFilter(Scan scan, List<MultiRowRangeFilter.RowRange> rowRangeList) - Sets a filter (FilterBase) on the Scan instance. |
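The initTableMapperJob overloads above wire a Scan, a mapper, and its output types into a Job. A minimal sketch, assuming a hypothetical table my_table and a trivial mapper; the caching and block-cache settings follow the usual advice for full-table MapReduce scans:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Job;

public class MapperJobSetup {
  // Hypothetical mapper: does nothing, just illustrates the generic signature.
  public static class MyMapper extends TableMapper<ImmutableBytesWritable, IntWritable> {
    @Override
    protected void map(ImmutableBytesWritable rowKey, Result columns, Context context) {
      // per-row logic would go here
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "scan-my-table");
    Scan scan = new Scan();
    scan.setCaching(500);       // more rows per RPC
    scan.setCacheBlocks(false); // do not pollute the block cache from MR
    TableMapReduceUtil.initTableMapperJob(
        "my_table", scan, MyMapper.class,
        ImmutableBytesWritable.class, IntWritable.class, job);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```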
Modifier and Type | Method and Description |
---|---|
static void | TableMapReduceUtil.initMultiTableSnapshotMapperJob(Map<String,Collection<Scan>> snapshotScans, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, org.apache.hadoop.fs.Path tmpRestoreDir) - Sets up the job for reading from one or more table snapshots, with one or more scans per snapshot. |
static void | TableMapReduceUtil.initTableMapperJob(List<Scan> scans, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job) - Use this before submitting a Multi TableMap job. |
static void | TableMapReduceUtil.initTableMapperJob(List<Scan> scans, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars) - Use this before submitting a Multi TableMap job. |
static void | TableMapReduceUtil.initTableMapperJob(List<Scan> scans, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, boolean initCredentials) - Use this before submitting a Multi TableMap job. |
static void | MultiTableSnapshotInputFormat.setInput(org.apache.hadoop.conf.Configuration configuration, Map<String,Collection<Scan>> snapshotScans, org.apache.hadoop.fs.Path tmpRestoreDir) |
void | MultiTableSnapshotInputFormatImpl.setInput(org.apache.hadoop.conf.Configuration conf, Map<String,Collection<Scan>> snapshotScans, org.apache.hadoop.fs.Path restoreDir) - Configure conf to read from snapshotScans, with snapshots restored to a subdirectory of restoreDir. |
protected void | MultiTableInputFormatBase.setScans(List<Scan> scans) - Allows subclasses to set the list of Scan objects. |
void | MultiTableSnapshotInputFormatImpl.setSnapshotToScans(org.apache.hadoop.conf.Configuration conf, Map<String,Collection<Scan>> snapshotScans) - Pushes snapshotScans to conf (under the key MultiTableSnapshotInputFormatImpl.SNAPSHOT_TO_SCANS_KEY). |
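For the multi-snapshot variant, the job is driven by a snapshot-name to scans map. A hedged sketch with invented snapshot names and restore path:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;

public class MultiSnapshotJobSetup {
  // Hypothetical mapper over snapshot rows.
  public static class SnapshotMapper extends TableMapper<ImmutableBytesWritable, NullWritable> {
    @Override
    protected void map(ImmutableBytesWritable rowKey, Result row, Context ctx) {
      // per-row logic
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "multi-snapshot-scan");
    // One entry per snapshot; each snapshot may carry several scans.
    Map<String, Collection<Scan>> snapshotScans = new HashMap<>();
    snapshotScans.put("snapshot_a",
        Arrays.asList(new Scan().addFamily(Bytes.toBytes("cf"))));
    snapshotScans.put("snapshot_b", Arrays.asList(new Scan()));
    TableMapReduceUtil.initMultiTableSnapshotMapperJob(
        snapshotScans, SnapshotMapper.class,
        ImmutableBytesWritable.class, NullWritable.class, job,
        true,                                // addDependencyJars
        new Path("/tmp/snapshot-restore"));  // tmpRestoreDir (hypothetical)
  }
}
```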
Constructor and Description |
---|
InputSplit(TableDescriptor htd, HRegionInfo regionInfo, List<String> locations, Scan scan, org.apache.hadoop.fs.Path restoreDir) |
TableSnapshotRegionSplit(HTableDescriptor htd, HRegionInfo regionInfo, List<String> locations, Scan scan, org.apache.hadoop.fs.Path restoreDir) |
TableSplit(TableName tableName, Scan scan, byte[] startRow, byte[] endRow, String location) - Creates a new instance while assigning all variables. |
TableSplit(TableName tableName, Scan scan, byte[] startRow, byte[] endRow, String location, long length) - Creates a new instance while assigning all variables. |
TableSplit(TableName tableName, Scan scan, byte[] startRow, byte[] endRow, String location, String encodedRegionName, long length) - Creates a new instance while assigning all variables. |
Modifier and Type | Method and Description |
---|---|
private static void | VerifyReplication.setRowPrefixFilter(Scan scan, String rowPrefixes) |
private static void | VerifyReplication.setStartAndStopRows(Scan scan, byte[] startPrefixRow, byte[] lastPrefixRow) |
Modifier and Type | Method and Description |
---|---|
private Scan | RegionStateStore.getScanForUpdateRegionReplicas(TableName tableName) |
Modifier and Type | Method and Description |
---|---|
private Scan | MetaBrowser.buildScan() |
Modifier and Type | Method and Description |
---|---|
RegionScanner | MasterRegion.getRegionScanner(Scan scan) |
ResultScanner | MasterRegion.getScanner(Scan scan) |
Modifier and Type | Method and Description |
---|---|
static boolean | MobUtils.isCacheMobBlocks(Scan scan) - Indicates whether the scan contains the caching-blocks setting. |
static boolean | MobUtils.isRawMobScan(Scan scan) - Indicates whether it's a raw scan. |
static boolean | MobUtils.isReadEmptyValueOnMobCellMiss(Scan scan) - Indicates whether to return a null value when the mob file is missing or corrupt. |
static boolean | MobUtils.isRefOnlyScan(Scan scan) - Indicates whether it's a reference-only scan. |
static void | MobUtils.setCacheMobBlocks(Scan scan, boolean cacheBlocks) - Sets the attribute of caching blocks in the scan. |
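The MobUtils helpers above read and write MOB-related settings carried as attributes on the Scan itself. A small sketch:

```java
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mob.MobUtils;

public class MobScanAttrs {
  public static void main(String[] args) {
    Scan scan = new Scan();
    // MOB settings ride along as Scan attributes rather than dedicated fields.
    MobUtils.setCacheMobBlocks(scan, true);
    System.out.println(MobUtils.isCacheMobBlocks(scan)); // true
    System.out.println(MobUtils.isRawMobScan(scan));     // false unless the raw attribute is set
  }
}
```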
Modifier and Type | Method and Description |
---|---|
(package private) static Scan | QuotaTableUtil.createScanForNamespaceSnapshotSizes() - Returns a scanner for all existing namespace snapshot entries. |
(package private) static Scan | QuotaTableUtil.createScanForNamespaceSnapshotSizes(String namespace) - Returns a scanner for all namespace snapshot entries of the given namespace. |
(package private) static Scan | QuotaTableUtil.createScanForSpaceSnapshotSizes() |
(package private) static Scan | QuotaTableUtil.createScanForSpaceSnapshotSizes(TableName table) |
static Scan | QuotaTableUtil.makeQuotaSnapshotScan() - Creates a Scan which returns only quota snapshots from the quota table. |
static Scan | QuotaTableUtil.makeQuotaSnapshotScanForTable(TableName tn) - Creates a Scan which returns only SpaceQuotaSnapshot from the quota table for a specific table. |
static Scan | QuotaTableUtil.makeScan(QuotaFilter filter) |
Modifier and Type | Method and Description |
---|---|
(package private) static List<Delete> | QuotaTableUtil.createDeletesForExistingSnapshotsFromScan(Connection connection, Scan scan) - Returns a list of Delete objects to remove all entries returned by the passed scanner. |
(package private) void | QuotaRetriever.init(org.apache.hadoop.conf.Configuration conf, Scan scan) |
(package private) void | QuotaRetriever.init(Connection conn, Scan scan) |
Modifier and Type | Class and Description |
---|---|
class | InternalScan - Special scanner, currently used for increment operations to allow additional server-side arguments for Scan operations. |
Modifier and Type | Field and Description |
---|---|
private Scan | CustomizedScanInfoBuilder.scan |
private Scan | StoreScanner.scan |
private static Scan | StoreScanner.SCAN_FOR_COMPACTION |
Modifier and Type | Method and Description |
---|---|
Scan | CustomizedScanInfoBuilder.getScan() |
Scan | ScanOptions.getScan() - Returns a copy of the Scan object. |
Modifier and Type | Method and Description |
---|---|
protected KeyValueScanner | HMobStore.createScanner(Scan scan, ScanInfo scanInfo, NavigableSet<byte[]> targetCols, long readPt) - Gets the MobStoreScanner or MobReversedStoreScanner. |
protected KeyValueScanner | HStore.createScanner(Scan scan, ScanInfo scanInfo, NavigableSet<byte[]> targetCols, long readPt) |
RegionScanner | Region.getScanner(Scan scan) - Return an iterator that scans over the HRegion, returning the indicated columns and rows specified by the Scan. |
RegionScannerImpl | HRegion.getScanner(Scan scan) |
RegionScanner | Region.getScanner(Scan scan, List<KeyValueScanner> additionalScanners) - Return an iterator that scans over the HRegion, returning the indicated columns and rows specified by the Scan. |
RegionScannerImpl | HRegion.getScanner(Scan scan, List<KeyValueScanner> additionalScanners) |
private RegionScannerImpl | HRegion.getScanner(Scan scan, List<KeyValueScanner> additionalScanners, long nonceGroup, long nonce) |
KeyValueScanner | HStore.getScanner(Scan scan, NavigableSet<byte[]> targetCols, long readPt) - Return a scanner for both the memstore and the HStore files. |
private void | RegionScannerImpl.initializeScanners(Scan scan, List<KeyValueScanner> additionalScanners) |
protected RegionScannerImpl | HRegion.instantiateRegionScanner(Scan scan, List<KeyValueScanner> additionalScanners, long nonceGroup, long nonce) |
private boolean | RSRpcServices.isFullRegionScan(Scan scan, HRegion region) |
(package private) boolean | StoreFileReader.passesBloomFilter(Scan scan, SortedSet<byte[]> columns) - Checks whether the given scan passes the Bloom filter (if present). |
private boolean | StoreFileReader.passesGeneralRowPrefixBloomFilter(Scan scan) - A method for checking Bloom filters. |
boolean | StoreFileReader.passesKeyRangeFilter(Scan scan) - Checks whether the given scan's rowkey range overlaps with the current storefile's key range. |
RegionScanner | RegionCoprocessorHost.postScannerOpen(Scan scan, RegionScanner s) |
void | RegionCoprocessorHost.preScannerOpen(Scan scan) |
ScanInfo | RegionCoprocessorHost.preStoreScannerOpen(HStore store, Scan scan) - Called before opening a store scanner for a user scan. |
boolean | StoreFileScanner.shouldUseScanner(Scan scan, HStore store, long oldestUnexpiredTS) |
boolean | NonLazyKeyValueScanner.shouldUseScanner(Scan scan, HStore store, long oldestUnexpiredTS) |
boolean | KeyValueScanner.shouldUseScanner(Scan scan, HStore store, long oldestUnexpiredTS) - Allows filtering out scanners (both StoreFile and memstore) that we don't want to use based on criteria such as Bloom filters and timestamp ranges. |
boolean | SegmentScanner.shouldUseScanner(Scan scan, HStore store, long oldestUnexpiredTS) - This functionality should be resolved at the higher level (MemStoreScanner); currently returns true by default. |
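Region.getScanner is the server-side analogue of Table.getScanner and is typically invoked from coprocessor code that already holds a Region. A hedged sketch of draining a RegionScanner row by row; the helper and its caller are hypothetical:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.regionserver.Region;
import org.apache.hadoop.hbase.regionserver.RegionScanner;

public class RegionScanHelper {
  // Drains a RegionScanner for the given Scan; the caller supplies the Region.
  static void scanRegion(Region region, Scan scan) throws IOException {
    try (RegionScanner scanner = region.getScanner(scan)) {
      List<Cell> cells = new ArrayList<>();
      boolean more;
      do {
        cells.clear();
        more = scanner.next(cells); // fills one row's cells per call
        // process cells ...
      } while (more);
    }
  }
}
```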
Constructor and Description |
---|
CustomizedScanInfoBuilder(ScanInfo scanInfo, Scan scan) |
InternalScan(Scan scan) |
MobStoreScanner(HStore store, ScanInfo scanInfo, Scan scan, NavigableSet<byte[]> columns, long readPt) |
RegionScannerImpl(Scan scan, List<KeyValueScanner> additionalScanners, HRegion region, long nonceGroup, long nonce) |
ReversedMobStoreScanner(HStore store, ScanInfo scanInfo, Scan scan, NavigableSet<byte[]> columns, long readPt) |
ReversedRegionScannerImpl(Scan scan, List<KeyValueScanner> additionalScanners, HRegion region, long nonceGroup, long nonce) |
ReversedStoreScanner(HStore store, ScanInfo scanInfo, Scan scan, NavigableSet<byte[]> columns, long readPt) - Opens a scanner across memstore, snapshot, and all StoreFiles. |
ReversedStoreScanner(Scan scan, ScanInfo scanInfo, NavigableSet<byte[]> columns, List<? extends KeyValueScanner> scanners) - Constructor for testing. |
StoreScanner(HStore store, ScanInfo scanInfo, Scan scan, NavigableSet<byte[]> columns, long readPt) - Opens a scanner across memstore, snapshot, and all StoreFiles. |
StoreScanner(HStore store, Scan scan, ScanInfo scanInfo, int numColumns, long readPt, boolean cacheBlocks, ScanType scanType) - An internal constructor. |
StoreScanner(Scan scan, ScanInfo scanInfo, NavigableSet<byte[]> columns, List<? extends KeyValueScanner> scanners) |
StoreScanner(Scan scan, ScanInfo scanInfo, NavigableSet<byte[]> columns, List<? extends KeyValueScanner> scanners, ScanType scanType) |
Modifier and Type | Method and Description |
---|---|
static RawScanQueryMatcher | RawScanQueryMatcher.create(Scan scan, ScanInfo scanInfo, ColumnTracker columns, boolean hasNullColumn, long oldestUnexpiredTS, long now) |
static NormalUserScanQueryMatcher | NormalUserScanQueryMatcher.create(Scan scan, ScanInfo scanInfo, ColumnTracker columns, DeleteTracker deletes, boolean hasNullColumn, long oldestUnexpiredTS, long now) |
static UserScanQueryMatcher | UserScanQueryMatcher.create(Scan scan, ScanInfo scanInfo, NavigableSet<byte[]> columns, long oldestUnexpiredTS, long now, RegionCoprocessorHost regionCoprocessorHost) |
private static Cell | UserScanQueryMatcher.createStartKey(Scan scan, ScanInfo scanInfo) |
protected static Pair<DeleteTracker,ColumnTracker> | ScanQueryMatcher.getTrackers(RegionCoprocessorHost host, NavigableSet<byte[]> columns, ScanInfo scanInfo, long oldestUnexpiredTS, Scan userScan) |
Constructor and Description |
---|
NormalUserScanQueryMatcher(Scan scan, ScanInfo scanInfo, ColumnTracker columns, boolean hasNullColumn, DeleteTracker deletes, long oldestUnexpiredTS, long now) |
RawScanQueryMatcher(Scan scan, ScanInfo scanInfo, ColumnTracker columns, boolean hasNullColumn, long oldestUnexpiredTS, long now) |
UserScanQueryMatcher(Scan scan, ScanInfo scanInfo, ColumnTracker columns, boolean hasNullColumn, long oldestUnexpiredTS, long now) |
Modifier and Type | Method and Description |
---|---|
static ScannerModel | ScannerModel.fromScan(Scan scan) |
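ScannerModel is the REST gateway's representation of a scanner definition. A minimal sketch converting a client Scan into a model that can then be serialized for the REST scanner endpoint; the family name is arbitrary, and fromScan is declared to throw a checked Exception:

```java
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.rest.model.ScannerModel;
import org.apache.hadoop.hbase.util.Bytes;

public class RestScannerModelExample {
  public static void main(String[] args) throws Exception {
    Scan scan = new Scan().addFamily(Bytes.toBytes("cf")).setCaching(100);
    // Translate the client Scan into the REST gateway's scanner definition.
    ScannerModel model = ScannerModel.fromScan(scan);
  }
}
```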
Modifier and Type | Method and Description |
---|---|
RegionScanner | AccessController.postScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Scan scan, RegionScanner s) |
void | AccessController.preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Scan scan) |
Modifier and Type | Method and Description |
---|---|
RegionScanner | VisibilityController.postScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Scan scan, RegionScanner s) |
void | VisibilityController.preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> e, Scan scan) |
Modifier and Type | Method and Description |
---|---|
static Scan | ThriftUtilities.scanFromThrift(org.apache.hadoop.hbase.thrift2.generated.TScan in) |
Modifier and Type | Method and Description |
---|---|
static org.apache.hadoop.hbase.thrift2.generated.TScan | ThriftUtilities.scanFromHBase(Scan in) |
Modifier and Type | Method and Description |
---|---|
ResultScanner | ThriftTable.getScanner(Scan scan) |
Constructor and Description |
---|
Scanner(Scan scan) |
Copyright © 2007–2020 The Apache Software Foundation. All rights reserved.