Modifier and Type | Method and Description |
---|---|
static Scan |
MetaTableAccessor.getScanForTableName(TableName tableName)
Creates a Scan that reads only the catalog rows belonging to the specified table.
|
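For orientation, here is a minimal sketch of using getScanForTableName to read one table's catalog rows out of hbase:meta; the table name my_table and a reachable cluster configuration are assumptions:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.MetaTableAccessor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaScanSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
      // Scan restricted to catalog rows of one table ("my_table" is hypothetical).
      Scan scan = MetaTableAccessor.getScanForTableName(TableName.valueOf("my_table"));
      try (ResultScanner results = meta.getScanner(scan)) {
        for (Result r : results) {
          System.out.println(Bytes.toString(r.getRow()));
        }
      }
    }
  }
}
```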
Modifier and Type | Field and Description |
---|---|
protected Scan |
ClientScanner.scan |
protected Scan |
ScannerCallable.scan |
Modifier and Type | Method and Description |
---|---|
Scan |
Scan.addColumn(byte[] family,
byte[] qualifier)
Request the column with the specified qualifier from the specified family.
|
Scan |
Scan.addFamily(byte[] family)
Request all columns from the specified family.
|
static Scan |
Scan.createScanFromCursor(Cursor cursor)
Create a new Scan with a cursor.
|
protected Scan |
ClientScanner.getScan() |
protected Scan |
ScannerCallable.getScan() |
Scan |
Scan.setACL(Map<String,Permission> perms) |
Scan |
Scan.setACL(String user,
Permission perms) |
Scan |
Scan.setAllowPartialResults(boolean allowPartialResults)
Set whether the caller wants to see partial results when the server returns
fewer cells than expected.
|
Scan |
Scan.setAttribute(String name,
byte[] value) |
Scan |
Scan.setAuthorizations(Authorizations authorizations) |
Scan |
Scan.setBatch(int batch)
Set the maximum number of cells to return for each call to next().
|
Scan |
Scan.setCacheBlocks(boolean cacheBlocks)
Set whether blocks should be cached for this Scan.
|
Scan |
Scan.setCaching(int caching)
Set the number of rows for caching that will be passed to scanners.
|
Scan |
Scan.setColumnFamilyTimeRange(byte[] cf,
long minStamp,
long maxStamp) |
Scan |
Scan.setConsistency(Consistency consistency) |
Scan |
Scan.setFamilyMap(Map<byte[],NavigableSet<byte[]>> familyMap)
Set the familyMap.
|
Scan |
Scan.setFilter(Filter filter) |
Scan |
Scan.setId(String id) |
Scan |
Scan.setIsolationLevel(IsolationLevel level) |
Scan |
Scan.setLimit(int limit)
Set the limit of rows for this scan.
|
Scan |
Scan.setLoadColumnFamiliesOnDemand(boolean value) |
Scan |
Scan.setMaxResultSize(long maxResultSize)
Set the maximum result size.
|
Scan |
Scan.setMaxResultsPerColumnFamily(int limit)
Set the maximum number of values to return per row per Column Family.
|
Scan |
Scan.setMaxVersions()
Request all available versions of each column.
|
Scan |
Scan.setMaxVersions(int maxVersions)
Request up to the specified number of versions of each column.
|
Scan |
Scan.setNeedCursorResult(boolean needCursorResult)
When the server is slow, or we scan a table with much deleted data, or we use a sparse filter,
the server will respond with heartbeat messages to prevent the client from timing out.
|
Scan |
Scan.setOneRowLimit()
Call this when you only want to get one row.
|
Scan |
Scan.setPriority(int priority) |
Scan |
Scan.setRaw(boolean raw)
Enable/disable "raw" mode for this scan.
|
Scan |
Scan.setReadType(Scan.ReadType readType)
Set the read type for this scan.
|
Scan |
Scan.setReplicaId(int Id) |
Scan |
Scan.setReversed(boolean reversed)
Set whether this scan is a reversed one.
|
Scan |
Scan.setRowOffsetPerColumnFamily(int offset)
Set offset for the row per Column Family.
|
Scan |
Scan.setRowPrefixFilter(byte[] rowPrefix)
Set a filter (using stopRow and startRow) so the result set only contains rows where the
rowKey starts with the specified prefix.
|
Scan |
Scan.setScanMetricsEnabled(boolean enabled)
Enable collection of ScanMetrics. |
Scan |
Scan.setSmall(boolean small)
Set whether this scan is a small scan.
|
Scan |
Scan.setStartRow(byte[] startRow)
Deprecated.
Use withStartRow(byte[]) instead. This method may change the inclusiveness of the
stop row to stay compatible with the old behavior. |
Scan |
Scan.setStopRow(byte[] stopRow)
Deprecated.
Use withStopRow(byte[]) instead (see the example following this table). This method
may change the inclusiveness of the stop row to stay compatible with the old behavior. |
Scan |
Scan.setTimeRange(long minStamp,
long maxStamp)
Get versions of columns only within the specified timestamp range,
[minStamp, maxStamp).
|
Scan |
Scan.setTimeStamp(long timestamp)
Get versions of columns with the specified timestamp.
|
Scan |
Scan.withStartRow(byte[] startRow)
Set the start row of the scan.
|
Scan |
Scan.withStartRow(byte[] startRow,
boolean inclusive)
Set the start row of the scan.
|
Scan |
Scan.withStopRow(byte[] stopRow)
Set the stop row of the scan.
|
Scan |
Scan.withStopRow(byte[] stopRow,
boolean inclusive)
Set the stop row of the scan.
|
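The setters above return the Scan itself, so scans are usually configured fluently. A minimal sketch combining the start/stop-row, column, caching, and time-range methods from this table; the family cf, qualifier q, row-key bounds, and tuning values are illustrative assumptions. Disabling block caching for a one-off scan avoids evicting hot data from the region server block cache:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanConfigSketch {
  public static Scan buildScan() throws IOException {
    return new Scan()
        .withStartRow(Bytes.toBytes("row-0000"), true)   // inclusive start row
        .withStopRow(Bytes.toBytes("row-1000"), false)   // exclusive stop row
        .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"))
        .setCaching(500)                     // rows fetched per RPC round trip
        .setMaxResultSize(2L * 1024 * 1024)  // cap bytes buffered client-side
        .setCacheBlocks(false)               // don't churn the block cache
        .setTimeRange(0L, Long.MAX_VALUE);   // [minStamp, maxStamp)
  }
}
```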
Modifier and Type | Method and Description |
---|---|
static ScanResultCache |
ConnectionUtils.createScanResultCache(Scan scan,
List<Result> cache) |
static long |
PackagePrivateFieldAccessor.getMvccReadPoint(Scan scan) |
ResultScanner |
HTableWrapper.getScanner(Scan scan) |
ResultScanner |
Table.getScanner(Scan scan)
Returns a scanner on the current table as specified by the Scan object. |
ResultScanner |
HTable.getScanner(Scan scan)
The underlying HTable must not be closed. |
protected void |
AbstractClientScanner.initScanMetrics(Scan scan)
Check whether the application wants to collect scan metrics and, if so, initialize them.
|
static void |
PackagePrivateFieldAccessor.setMvccReadPoint(Scan scan,
long mvccReadPoint) |
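As a usage sketch for Table.getScanner(Scan): the returned ResultScanner is both Iterable and Closeable, so try-with-resources is the idiomatic pattern (the table, family, and qualifier names below are assumptions):

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ScannerSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("my_table"));
         ResultScanner scanner = table.getScanner(new Scan().addFamily(Bytes.toBytes("cf")))) {
      for (Result result : scanner) {
        byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q"));
        System.out.println(Bytes.toString(result.getRow()) + " -> "
            + (value == null ? "<no value>" : Bytes.toString(value)));
      }
    }
  }
}
```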
Constructor and Description |
---|
ClientScanner(org.apache.hadoop.conf.Configuration conf,
Scan scan,
TableName tableName,
ClusterConnection connection,
RpcRetryingCallerFactory rpcFactory,
RpcControllerFactory controllerFactory,
ExecutorService pool,
int primaryOperationTimeout)
Create a new ClientScanner for the specified table. Note that the passed Scan's
start row may be changed. |
ClientSideRegionScanner(org.apache.hadoop.conf.Configuration conf,
org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path rootDir,
HTableDescriptor htd,
HRegionInfo hri,
Scan scan,
ScanMetrics scanMetrics) |
ClientSimpleScanner(org.apache.hadoop.conf.Configuration configuration,
Scan scan,
TableName name,
ClusterConnection connection,
RpcRetryingCallerFactory rpcCallerFactory,
RpcControllerFactory rpcControllerFactory,
ExecutorService pool,
int replicaCallTimeoutMicroSecondScan) |
ReversedClientScanner(org.apache.hadoop.conf.Configuration conf,
Scan scan,
TableName tableName,
ClusterConnection connection,
RpcRetryingCallerFactory rpcFactory,
RpcControllerFactory controllerFactory,
ExecutorService pool,
int primaryOperationTimeout)
Create a new ReversedClientScanner for the specified table. Note that the passed
Scan's start row may be changed. |
ReversedScannerCallable(ClusterConnection connection,
TableName tableName,
Scan scan,
ScanMetrics scanMetrics,
RpcControllerFactory rpcFactory) |
ReversedScannerCallable(ClusterConnection connection,
TableName tableName,
Scan scan,
ScanMetrics scanMetrics,
RpcControllerFactory rpcFactory,
int replicaId) |
Scan(Scan scan)
Creates a new instance of this class while copying all values.
|
ScannerCallable(ClusterConnection connection,
TableName tableName,
Scan scan,
ScanMetrics scanMetrics,
RpcControllerFactory rpcControllerFactory) |
ScannerCallable(ClusterConnection connection,
TableName tableName,
Scan scan,
ScanMetrics scanMetrics,
RpcControllerFactory rpcControllerFactory,
int id) |
TableSnapshotScanner(org.apache.hadoop.conf.Configuration conf,
org.apache.hadoop.fs.Path rootDir,
org.apache.hadoop.fs.Path restoreDir,
String snapshotName,
Scan scan)
Creates a TableSnapshotScanner.
|
TableSnapshotScanner(org.apache.hadoop.conf.Configuration conf,
org.apache.hadoop.fs.Path restoreDir,
String snapshotName,
Scan scan)
Creates a TableSnapshotScanner.
|
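A sketch of the TableSnapshotScanner variant that takes a restore directory: the snapshot is materialized under the scratch directory and read directly from the filesystem, without going through the region servers. The snapshot name and scratch path are hypothetical:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.TableSnapshotScanner;

public class SnapshotScanSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Path restoreDir = new Path("/tmp/snapshot-restore");  // scratch dir (hypothetical)
    Scan scan = new Scan();  // full scan over the snapshot's rows
    TableSnapshotScanner scanner =
        new TableSnapshotScanner(conf, restoreDir, "my_snapshot", scan);
    try {
      for (Result r : scanner) {
        // process each row of the snapshot
      }
    } finally {
      scanner.close();
    }
  }
}
```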
Modifier and Type | Method and Description |
---|---|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.avg(Table table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
This is the client side interface/handle for calling the average method for
a given cf-cq combination.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.avg(TableName tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
This is the client side interface/handle for calling the average method for
a given cf-cq combination.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.max(Table table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It gives the maximum value of a column for a given column family for the
given range.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.max(TableName tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It gives the maximum value of a column for a given column family for the
given range.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.median(Table table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
This is the client side interface/handler for calling the median method for a
given cf-cq combination.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.median(TableName tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
This is the client side interface/handler for calling the median method for a
given cf-cq combination.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.min(Table table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It gives the minimum value of a column for a given column family for the
given range.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.min(TableName tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It gives the minimum value of a column for a given column family for the
given range.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.rowCount(Table table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It gives the row count, by summing up the individual results obtained from
regions.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.rowCount(TableName tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It gives the row count, by summing up the individual results obtained from
regions.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.std(Table table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
This is the client side interface/handle for calling the std method for a
given cf-cq combination.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.std(TableName tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
This is the client side interface/handle for calling the std method for a
given cf-cq combination.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.sum(Table table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It sums up the values returned from the various regions.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.sum(TableName tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It sums up the values returned from the various regions.
|
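As a usage sketch for these aggregation endpoints: the Scan must name the cf-cq combination the ColumnInterpreter reads, and the table must have the AggregateImplementation coprocessor loaded (an assumption here, along with the table, family, and qualifier names):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
import org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter;
import org.apache.hadoop.hbase.util.Bytes;

public class AggregationSketch {
  public static void main(String[] args) throws Throwable {  // rowCount declares Throwable
    Configuration conf = HBaseConfiguration.create();
    AggregationClient aggClient = new AggregationClient(conf);
    Scan scan = new Scan();
    // The cf-cq combination the interpreter reads (hypothetical names).
    scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("counter"));
    long rows = aggClient.rowCount(TableName.valueOf("my_table"),
        new LongColumnInterpreter(), scan);
    System.out.println("row count: " + rows);
  }
}
```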
Modifier and Type | Method and Description |
---|---|
KeyValueScanner |
ZooKeeperScanPolicyObserver.preStoreScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
Store store,
Scan scan,
NavigableSet<byte[]> targetCols,
KeyValueScanner s) |
Modifier and Type | Method and Description |
---|---|
boolean |
HalfStoreFileReader.passesKeyRangeFilter(Scan scan) |
Modifier and Type | Method and Description |
---|---|
static void |
TableMapReduceUtil.initMultiTableSnapshotMapperJob(Map<String,Collection<Scan>> snapshotScans,
Class<? extends TableMap> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapred.JobConf job,
boolean addDependencyJars,
org.apache.hadoop.fs.Path tmpRestoreDir)
Sets up the job for reading from one or more table snapshots, with one or more scans
per snapshot.
|
static void |
MultiTableSnapshotInputFormat.setInput(org.apache.hadoop.conf.Configuration conf,
Map<String,Collection<Scan>> snapshotScans,
org.apache.hadoop.fs.Path restoreDir)
Configure conf to read from snapshotScans, with snapshots restored to a subdirectory of
restoreDir.
|
Constructor and Description |
---|
TableSnapshotInputFormat.TableSnapshotRegionSplit(HTableDescriptor htd,
HRegionInfo regionInfo,
List<String> locations,
Scan scan,
org.apache.hadoop.fs.Path restoreDir) |
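A sketch of feeding two snapshots, each with its own scan collection, to the mapred MultiTableSnapshotInputFormat.setInput shown above; the snapshot names and restore path are hypothetical:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapred.MultiTableSnapshotInputFormat;
import org.apache.hadoop.hbase.util.Bytes;

public class MultiSnapshotInputSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Map<String, Collection<Scan>> snapshotScans = new HashMap<>();
    snapshotScans.put("snapshot_a",
        Arrays.asList(new Scan().addFamily(Bytes.toBytes("cf"))));
    snapshotScans.put("snapshot_b", Arrays.asList(new Scan()));
    // Snapshots are restored under this scratch directory before being read.
    MultiTableSnapshotInputFormat.setInput(conf, snapshotScans,
        new Path("/tmp/snapshot-restore"));
  }
}
```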
Modifier and Type | Method and Description |
---|---|
static Scan |
TableMapReduceUtil.convertStringToScan(String base64)
Converts the given Base64 string back into a Scan instance.
|
static Scan |
TableInputFormat.createScanFromConfiguration(org.apache.hadoop.conf.Configuration conf)
Sets up a Scan instance, applying settings from the configuration property
constants defined in TableInputFormat. |
static Scan |
TableSnapshotInputFormatImpl.extractScanFromConf(org.apache.hadoop.conf.Configuration conf) |
Scan |
TableSplit.getScan()
Returns a Scan object from the stored string representation.
|
Scan |
TableInputFormatBase.getScan()
Gets the scan that defines the actual details, such as which columns to fetch.
|
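A short sketch of the Base64 round trip these helpers provide; this is how a Scan travels through job configuration, with TableInputFormat.SCAN as the standard property key (the family name is an assumption):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanStringSketch {
  public static void main(String[] args) throws Exception {
    Scan scan = new Scan().addFamily(Bytes.toBytes("cf"));        // "cf" is hypothetical
    String encoded = TableMapReduceUtil.convertScanToString(scan); // Base64 string
    Configuration conf = HBaseConfiguration.create();
    conf.set(TableInputFormat.SCAN, encoded);                      // consumed by TableInputFormat
    Scan decoded = TableMapReduceUtil.convertStringToScan(encoded);
    System.out.println(decoded.getFamilyMap().size() + " family restored");
  }
}
```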
Modifier and Type | Method and Description |
---|---|
protected List<Scan> |
MultiTableInputFormatBase.getScans()
Allows subclasses to get the list of Scan objects. |
Map<String,Collection<Scan>> |
MultiTableSnapshotInputFormatImpl.getSnapshotsToScans(org.apache.hadoop.conf.Configuration conf)
Retrieve the snapshot name -> list<scan> mapping pushed to configuration by
MultiTableSnapshotInputFormatImpl.setSnapshotToScans(org.apache.hadoop.conf.Configuration, java.util.Map) |
Modifier and Type | Method and Description |
---|---|
static void |
TableInputFormat.addColumns(Scan scan,
byte[][] columns)
Adds an array of columns specified in the old format, family:qualifier.
|
static String |
TableMapReduceUtil.convertScanToString(Scan scan)
Writes the given scan into a Base64 encoded string.
|
static List<TableSnapshotInputFormatImpl.InputSplit> |
TableSnapshotInputFormatImpl.getSplits(Scan scan,
SnapshotManifest manifest,
List<HRegionInfo> regionManifests,
org.apache.hadoop.fs.Path restoreDir,
org.apache.hadoop.conf.Configuration conf) |
static List<TableSnapshotInputFormatImpl.InputSplit> |
TableSnapshotInputFormatImpl.getSplits(Scan scan,
SnapshotManifest manifest,
List<HRegionInfo> regionManifests,
org.apache.hadoop.fs.Path restoreDir,
org.apache.hadoop.conf.Configuration conf,
RegionSplitter.SplitAlgorithm sa,
int numSplits) |
static void |
IdentityTableMapper.initJob(String table,
Scan scan,
Class<? extends TableMapper> mapper,
org.apache.hadoop.mapreduce.Job job)
Use this before submitting a TableMap job.
|
static void |
GroupingTableMapper.initJob(String table,
Scan scan,
String groupColumns,
Class<? extends TableMapper> mapper,
org.apache.hadoop.mapreduce.Job job)
Use this before submitting a TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(byte[] table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job)
Use this before submitting a TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(byte[] table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars)
Use this before submitting a TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(byte[] table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars,
Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)
Use this before submitting a TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(String table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job)
Use this before submitting a TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(String table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars)
Use this before submitting a TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(String table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars,
boolean initCredentials,
Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)
Use this before submitting a TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(String table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars,
Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)
Use this before submitting a TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(TableName table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job)
Use this before submitting a TableMap job.
|
static void |
TableMapReduceUtil.initTableSnapshotMapperJob(String snapshotName,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars,
org.apache.hadoop.fs.Path tmpRestoreDir)
Sets up the job for reading from a table snapshot.
|
static void |
TableMapReduceUtil.initTableSnapshotMapperJob(String snapshotName,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars,
org.apache.hadoop.fs.Path tmpRestoreDir,
RegionSplitter.SplitAlgorithm splitAlgo,
int numSplitsPerRegion)
Sets up the job for reading from a table snapshot.
|
void |
TableRecordReader.setScan(Scan scan)
Sets the scan that defines the actual details, such as which columns to fetch.
|
void |
TableInputFormatBase.setScan(Scan scan)
Sets the scan that defines the actual details, such as which columns to fetch.
|
void |
TableRecordReaderImpl.setScan(Scan scan)
Sets the scan that defines the actual details, such as which columns to fetch.
|
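A sketch of the usual driver pattern for the single-table initTableMapperJob overloads. The mapper, table name, and tuning values are assumptions; enlarging scan caching and disabling block caching are the commonly recommended settings for MapReduce scans:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class MapperJobSketch {
  // Hypothetical mapper: emits (row key, cell count) for every row scanned.
  static class MyRowCounter extends TableMapper<Text, IntWritable> {
    @Override
    protected void map(ImmutableBytesWritable row, Result value, Context context)
        throws IOException, InterruptedException {
      context.write(new Text(Bytes.toString(row.get())), new IntWritable(value.size()));
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "scan my_table");
    job.setJarByClass(MapperJobSketch.class);
    Scan scan = new Scan();
    scan.setCaching(500);       // larger batches per RPC for the MR scan
    scan.setCacheBlocks(false); // MR scans shouldn't churn the block cache
    TableMapReduceUtil.initTableMapperJob("my_table", scan, MyRowCounter.class,
        Text.class, IntWritable.class, job);
    job.setNumReduceTasks(0);   // map-only
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```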
Modifier and Type | Method and Description |
---|---|
static void |
TableMapReduceUtil.initMultiTableSnapshotMapperJob(Map<String,Collection<Scan>> snapshotScans,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars,
org.apache.hadoop.fs.Path tmpRestoreDir)
Sets up the job for reading from one or more table snapshots, with one or more scans
per snapshot.
|
static void |
TableMapReduceUtil.initTableMapperJob(List<Scan> scans,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job)
Use this before submitting a Multi TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(List<Scan> scans,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars)
Use this before submitting a Multi TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(List<Scan> scans,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars,
boolean initCredentials)
Use this before submitting a Multi TableMap job.
|
static void |
MultiTableSnapshotInputFormat.setInput(org.apache.hadoop.conf.Configuration configuration,
Map<String,Collection<Scan>> snapshotScans,
org.apache.hadoop.fs.Path tmpRestoreDir) |
void |
MultiTableSnapshotInputFormatImpl.setInput(org.apache.hadoop.conf.Configuration conf,
Map<String,Collection<Scan>> snapshotScans,
org.apache.hadoop.fs.Path restoreDir)
Configure conf to read from snapshotScans, with snapshots restored to a subdirectory of
restoreDir.
|
protected void |
MultiTableInputFormatBase.setScans(List<Scan> scans)
Allows subclasses to set the list of Scan objects. |
void |
MultiTableSnapshotInputFormatImpl.setSnapshotToScans(org.apache.hadoop.conf.Configuration conf,
Map<String,Collection<Scan>> snapshotScans)
Push snapshotScans to conf (under the key
MultiTableSnapshotInputFormatImpl.SNAPSHOT_TO_SCANS_KEY). |
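For the List&lt;Scan&gt; overloads above, each Scan must be tagged with its target table through the Scan.SCAN_ATTRIBUTES_TABLE_NAME attribute. A sketch, with hypothetical table names and mapper:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class MultiScanJobSketch {
  // Hypothetical mapper: emits (row key, cell count) for every row it sees.
  static class CellCountMapper extends TableMapper<Text, IntWritable> {
    @Override
    protected void map(ImmutableBytesWritable row, Result value, Context context)
        throws IOException, InterruptedException {
      context.write(new Text(Bytes.toString(row.get())), new IntWritable(value.size()));
    }
  }

  public static void configure(Job job) throws IOException {
    List<Scan> scans = new ArrayList<>();
    for (String table : new String[] { "table_a", "table_b" }) {  // hypothetical tables
      Scan scan = new Scan();
      // Each scan carries the table it targets; the multi-table input format requires it.
      scan.setAttribute(Scan.SCAN_ATTRIBUTES_TABLE_NAME, Bytes.toBytes(table));
      scans.add(scan);
    }
    TableMapReduceUtil.initTableMapperJob(scans, CellCountMapper.class,
        Text.class, IntWritable.class, job);
  }
}
```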
Constructor and Description |
---|
TableSnapshotInputFormat.TableSnapshotRegionSplit(HTableDescriptor htd,
HRegionInfo regionInfo,
List<String> locations,
Scan scan,
org.apache.hadoop.fs.Path restoreDir) |
TableSnapshotInputFormatImpl.InputSplit(HTableDescriptor htd,
HRegionInfo regionInfo,
List<String> locations,
Scan scan,
org.apache.hadoop.fs.Path restoreDir) |
TableSplit(byte[] tableName,
Scan scan,
byte[] startRow,
byte[] endRow,
String location)
Deprecated.
As of release 0.96 (HBASE-9508). This will be removed in HBase 2.0.0.
Use TableSplit.TableSplit(TableName, byte[], byte[], String). |
TableSplit(TableName tableName,
Scan scan,
byte[] startRow,
byte[] endRow,
String location)
Creates a new instance while assigning all variables.
|
TableSplit(TableName tableName,
Scan scan,
byte[] startRow,
byte[] endRow,
String location,
long length)
Creates a new instance while assigning all variables.
|
TableSplit(TableName tableName,
Scan scan,
byte[] startRow,
byte[] endRow,
String location,
String encodedRegionName,
long length)
Creates a new instance while assigning all variables.
|
Modifier and Type | Method and Description |
---|---|
static Scan |
ProtobufUtil.toScan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos.Scan proto)
Convert a protocol buffer Scan to a client Scan.
|
Modifier and Type | Method and Description |
---|---|
static org.apache.hadoop.hbase.protobuf.generated.ClientProtos.ScanRequest |
RequestConverter.buildScanRequest(byte[] regionName,
Scan scan,
int numberOfRows,
boolean closeScanner)
Create a protocol buffer ScanRequest for a client Scan.
|
static org.apache.hadoop.hbase.protobuf.generated.ClientProtos.Scan |
ProtobufUtil.toScan(Scan scan)
Convert a client Scan to a protocol buffer Scan.
|
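A short sketch of the protobuf round trip, which is essentially what happens when a Scan is shipped inside a ScanRequest:

```java
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
import org.apache.hadoop.hbase.protobuf.generated.ClientProtos;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanProtoSketch {
  public static void main(String[] args) throws Exception {
    Scan scan = new Scan()
        .withStartRow(Bytes.toBytes("a"))
        .withStopRow(Bytes.toBytes("z"));
    ClientProtos.Scan proto = ProtobufUtil.toScan(scan);  // client Scan -> protobuf
    Scan roundTripped = ProtobufUtil.toScan(proto);       // protobuf -> client Scan
    System.out.println(roundTripped);
  }
}
```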
Modifier and Type | Method and Description |
---|---|
static Scan |
QuotaTableUtil.makeScan(QuotaFilter filter) |
Modifier and Type | Class and Description |
---|---|
class |
InternalScan
Special scanner, currently used for increment operations to
allow additional server-side arguments for Scan operations.
|
Modifier and Type | Field and Description |
---|---|
protected Scan |
StoreScanner.scan |
Modifier and Type | Method and Description |
---|---|
RegionScanner |
HRegion.getScanner(Scan scan) |
RegionScanner |
Region.getScanner(Scan scan)
Return an iterator that scans over the HRegion, returning the indicated
columns and rows specified by the Scan. |
RegionScanner |
HRegion.getScanner(Scan scan,
List<KeyValueScanner> additionalScanners) |
RegionScanner |
Region.getScanner(Scan scan,
List<KeyValueScanner> additionalScanners)
Return an iterator that scans over the HRegion, returning the indicated columns and rows
specified by the Scan. |
KeyValueScanner |
Store.getScanner(Scan scan,
NavigableSet<byte[]> targetCols,
long readPt)
Return a scanner for both the memstore and the HStore files.
|
KeyValueScanner |
HStore.getScanner(Scan scan,
NavigableSet<byte[]> targetCols,
long readPt) |
protected RegionScanner |
HRegion.instantiateRegionScanner(Scan scan,
List<KeyValueScanner> additionalScanners) |
protected RegionScanner |
HRegion.instantiateRegionScanner(Scan scan,
List<KeyValueScanner> additionalScanners,
long nonceGroup,
long nonce) |
boolean |
StoreFile.Reader.passesKeyRangeFilter(Scan scan)
Checks whether the given scan's rowkey range overlaps with the current storefile's key range.
|
RegionScanner |
RegionCoprocessorHost.postScannerOpen(Scan scan,
RegionScanner s) |
RegionScanner |
RegionCoprocessorHost.preScannerOpen(Scan scan) |
KeyValueScanner |
RegionCoprocessorHost.preStoreScannerOpen(Store store,
Scan scan,
NavigableSet<byte[]> targetCols) |
boolean |
DefaultMemStore.shouldSeek(Scan scan,
Store store,
long oldestUnexpiredTS)
Check if this memstore may contain the required keys.
|
boolean |
StoreFileScanner.shouldUseScanner(Scan scan,
Store store,
long oldestUnexpiredTS) |
boolean |
KeyValueScanner.shouldUseScanner(Scan scan,
Store store,
long oldestUnexpiredTS)
Allows filtering out scanners (both StoreFile and memstore) that we don't
want to use, based on criteria such as Bloom filters and timestamp ranges.
|
boolean |
DefaultMemStore.MemStoreScanner.shouldUseScanner(Scan scan,
Store store,
long oldestUnexpiredTS) |
boolean |
NonLazyKeyValueScanner.shouldUseScanner(Scan scan,
Store store,
long oldestUnexpiredTS) |
Constructor and Description |
---|
InternalScan(Scan scan) |
StoreScanner(Scan scan,
ScanInfo scanInfo,
ScanType scanType,
NavigableSet<byte[]> columns,
List<KeyValueScanner> scanners,
long earliestPutTs,
long readPt) |
StoreScanner(Store store,
ScanInfo scanInfo,
Scan scan,
List<? extends KeyValueScanner> scanners,
long smallestReadPoint,
long earliestPutTs,
byte[] dropDeletesFromRow,
byte[] dropDeletesToRow)
Used for compactions that drop deletes from a limited range of rows.
|
StoreScanner(Store store,
ScanInfo scanInfo,
Scan scan,
List<? extends KeyValueScanner> scanners,
ScanType scanType,
long smallestReadPoint,
long earliestPutTs)
Used for compactions.
|
StoreScanner(Store store,
ScanInfo scanInfo,
Scan scan,
NavigableSet<byte[]> columns,
long readPt)
Opens a scanner across memstore, snapshot, and all StoreFiles.
|
StoreScanner(Store store,
Scan scan,
ScanInfo scanInfo,
NavigableSet<byte[]> columns,
long readPt,
boolean cacheBlocks)
An internal constructor.
|
Modifier and Type | Method and Description |
---|---|
static RawScanQueryMatcher |
RawScanQueryMatcher.create(Scan scan,
ScanInfo scanInfo,
ColumnTracker columns,
boolean hasNullColumn,
long oldestUnexpiredTS,
long now) |
static NormalUserScanQueryMatcher |
NormalUserScanQueryMatcher.create(Scan scan,
ScanInfo scanInfo,
ColumnTracker columns,
boolean hasNullColumn,
long oldestUnexpiredTS,
long now,
RegionCoprocessorHost regionCoprocessorHost) |
static UserScanQueryMatcher |
UserScanQueryMatcher.create(Scan scan,
ScanInfo scanInfo,
NavigableSet<byte[]> columns,
long oldestUnexpiredTS,
long now,
RegionCoprocessorHost regionCoprocessorHost) |
static LegacyScanQueryMatcher |
LegacyScanQueryMatcher.create(Scan scan,
ScanInfo scanInfo,
NavigableSet<byte[]> columns,
ScanType scanType,
long readPointToUse,
long earliestPutTs,
long oldestUnexpiredTS,
long now,
byte[] dropDeletesFromRow,
byte[] dropDeletesToRow,
RegionCoprocessorHost regionCoprocessorHost)
Deprecated.
|
Constructor and Description |
---|
NormalUserScanQueryMatcher(Scan scan,
ScanInfo scanInfo,
ColumnTracker columns,
boolean hasNullColumn,
DeleteTracker deletes,
long oldestUnexpiredTS,
long now) |
RawScanQueryMatcher(Scan scan,
ScanInfo scanInfo,
ColumnTracker columns,
boolean hasNullColumn,
long oldestUnexpiredTS,
long now) |
UserScanQueryMatcher(Scan scan,
ScanInfo scanInfo,
ColumnTracker columns,
boolean hasNullColumn,
long oldestUnexpiredTS,
long now) |
Modifier and Type | Method and Description |
---|---|
ResultScanner |
RemoteHTable.getScanner(Scan scan) |
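A sketch of scanning through the REST gateway with RemoteHTable; the gateway host, port, and table name are assumptions:

```java
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.rest.client.Client;
import org.apache.hadoop.hbase.rest.client.Cluster;
import org.apache.hadoop.hbase.rest.client.RemoteHTable;
import org.apache.hadoop.hbase.util.Bytes;

public class RestScanSketch {
  public static void main(String[] args) throws Exception {
    Cluster cluster = new Cluster();
    cluster.add("rest-gateway-host", 8080);  // hypothetical REST endpoint
    RemoteHTable table = new RemoteHTable(new Client(cluster), "my_table");
    try (ResultScanner scanner = table.getScanner(new Scan())) {
      for (Result r : scanner) {
        System.out.println(Bytes.toString(r.getRow()));
      }
    } finally {
      table.close();
    }
  }
}
```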
Modifier and Type | Method and Description |
---|---|
static ScannerModel |
ScannerModel.fromScan(Scan scan) |
Modifier and Type | Method and Description |
---|---|
RegionScanner |
AccessController.postScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
Scan scan,
RegionScanner s) |
RegionScanner |
AccessController.preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
Scan scan,
RegionScanner s) |
Modifier and Type | Method and Description |
---|---|
RegionScanner |
VisibilityController.postScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
Scan scan,
RegionScanner s) |
RegionScanner |
VisibilityController.preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> e,
Scan scan,
RegionScanner s) |
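As a sketch of how such preScannerOpen hooks are written, here is a hypothetical RegionObserver that adjusts every incoming Scan; the policy itself is illustrative only:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.RegionScanner;

public class ScanPolicyObserverSketch extends BaseRegionObserver {
  @Override
  public RegionScanner preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> e,
      Scan scan, RegionScanner s) throws IOException {
    // Illustrative policy: never let user scans fill the block cache.
    scan.setCacheBlocks(false);
    return s;  // returning the passed-in scanner leaves creation to the default path
  }
}
```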
Modifier and Type | Method and Description |
---|---|
static Scan |
ThriftUtilities.scanFromThrift(TScan in) |