Package | Description |
---|---|
`org.apache.hadoop.hbase` | |
`org.apache.hadoop.hbase.client` | Provides HBase Client |
`org.apache.hadoop.hbase.client.coprocessor` | Provides client classes for invoking Coprocessor RPC protocols |
`org.apache.hadoop.hbase.coprocessor` | Table of Contents |
`org.apache.hadoop.hbase.coprocessor.example` | |
`org.apache.hadoop.hbase.io` | |
`org.apache.hadoop.hbase.mapreduce` | Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods. |
`org.apache.hadoop.hbase.protobuf` | Holds classes generated from protobuf src/main/protobuf definition files. |
`org.apache.hadoop.hbase.quotas` | |
`org.apache.hadoop.hbase.regionserver` | |
`org.apache.hadoop.hbase.rest.client` | |
`org.apache.hadoop.hbase.rest.model` | |
`org.apache.hadoop.hbase.security.access` | |
`org.apache.hadoop.hbase.security.visibility` | |
`org.apache.hadoop.hbase.thrift2` | Provides an HBase Thrift service. |
Modifier and Type | Method and Description |
---|---|
`static Scan` | `MetaTableAccessor.getScanForTableName(TableName tableName)`: This method creates a Scan object that will only scan catalog rows that belong to the specified table. |
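
For orientation, `MetaTableAccessor` is an HBase-internal helper, but the `Scan` it builds can be handed to an ordinary scanner over `hbase:meta`. A minimal sketch, assuming a running cluster and a table named `t1` (both placeholders):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.MetaTableAccessor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaScanExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
      // Restrict the scan to the catalog rows of table "t1" (hypothetical name).
      Scan scan = MetaTableAccessor.getScanForTableName(TableName.valueOf("t1"));
      try (ResultScanner rs = meta.getScanner(scan)) {
        for (Result r : rs) {
          System.out.println(Bytes.toStringBinary(r.getRow()));
        }
      }
    }
  }
}
```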
Modifier and Type | Field and Description |
---|---|
`protected Scan` | `ClientScanner.scan` |
Modifier and Type | Method and Description |
---|---|
`Scan` | `Scan.addColumn(byte[] family, byte[] qualifier)`: Get the column from the specified family with the specified qualifier. |
`Scan` | `Scan.addFamily(byte[] family)`: Get all columns from the specified family. |
`protected Scan` | `ScannerCallable.getScan()` |
`protected Scan` | `ClientScanner.getScan()` |
`Scan` | `Scan.setACL(Map<String,Permission> perms)` |
`Scan` | `Scan.setACL(String user, Permission perms)` |
`Scan` | `Scan.setAllowPartialResults(boolean allowPartialResults)`: Set whether the caller wants to see the partial results that may be returned from the server. |
`Scan` | `Scan.setAttribute(String name, byte[] value)` |
`Scan` | `Scan.setAuthorizations(Authorizations authorizations)` |
`Scan` | `Scan.setBatch(int batch)`: Set the maximum number of values to return for each call to next(). |
`Scan` | `Scan.setCacheBlocks(boolean cacheBlocks)`: Set whether blocks should be cached for this Scan. |
`Scan` | `Scan.setCaching(int caching)`: Set the number of rows for caching that will be passed to scanners. |
`Scan` | `Scan.setConsistency(Consistency consistency)` |
`Scan` | `Scan.setFamilyMap(Map<byte[],NavigableSet<byte[]>> familyMap)`: Set the familyMap. |
`Scan` | `Scan.setFilter(Filter filter)` |
`Scan` | `Scan.setId(String id)` |
`Scan` | `Scan.setIsolationLevel(IsolationLevel level)` |
`Scan` | `Scan.setLoadColumnFamiliesOnDemand(boolean value)`: Set whether loading CFs on demand should be allowed (cluster default is false). |
`Scan` | `Scan.setMaxResultSize(long maxResultSize)`: Set the maximum result size. |
`Scan` | `Scan.setMaxResultsPerColumnFamily(int limit)`: Set the maximum number of values to return per row per column family. |
`Scan` | `Scan.setMaxVersions()`: Get all available versions. |
`Scan` | `Scan.setMaxVersions(int maxVersions)`: Get up to the specified number of versions of each column. |
`Scan` | `Scan.setRaw(boolean raw)`: Enable/disable "raw" mode for this scan. |
`Scan` | `Scan.setReplicaId(int Id)` |
`Scan` | `Scan.setReversed(boolean reversed)`: Set whether this scan is a reversed one. |
`Scan` | `Scan.setRowOffsetPerColumnFamily(int offset)`: Set the offset for rows per column family. |
`Scan` | `Scan.setRowPrefixFilter(byte[] rowPrefix)`: Set a filter (using stopRow and startRow) so the result set only contains rows where the rowKey starts with the specified prefix. |
`Scan` | `Scan.setScanMetricsEnabled(boolean enabled)`: Enable collection of ScanMetrics. |
`Scan` | `Scan.setSmall(boolean small)`: Set whether this scan is a small scan. |
`Scan` | `Scan.setStartRow(byte[] startRow)`: Set the start row of the scan. |
`Scan` | `Scan.setStopRow(byte[] stopRow)`: Set the stop row. |
`Scan` | `Scan.setTimeRange(long minStamp, long maxStamp)`: Get versions of columns only within the specified timestamp range, [minStamp, maxStamp). |
`Scan` | `Scan.setTimeStamp(long timestamp)`: Get versions of columns with the specified timestamp. |
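
Since every setter above returns the `Scan` itself, configuration chains fluently. A minimal sketch under assumed names (the family `cf`, qualifier `q1`, and row range are placeholders):

```java
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanSetupExample {
  public static void main(String[] args) {
    // Hypothetical scan: rows in ["row-000", "row-100"), family "cf",
    // qualifier "q1", up to two versions per column.
    Scan scan = new Scan()
        .setStartRow(Bytes.toBytes("row-000"))
        .setStopRow(Bytes.toBytes("row-100"))
        .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q1"))
        .setMaxVersions(2)
        .setCaching(100)          // rows buffered per scanner RPC
        .setCacheBlocks(false);   // skip the block cache for a one-off scan
    System.out.println(scan);     // Scan has a readable toString()
  }
}
```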
Modifier and Type | Method and Description |
---|---|
`org.apache.hadoop.hbase.client.ScannerCallableWithReplicas` | `ClientSmallScanner.SmallScannerCallableFactory.getCallable(ClusterConnection connection, TableName table, Scan scan, ScanMetrics scanMetrics, byte[] localStartKey, int cacheNum, RpcControllerFactory controllerFactory, ExecutorService pool, int primaryOperationTimeout, int retries, int scannerTimeout, org.apache.hadoop.conf.Configuration conf, RpcRetryingCaller<Result[]> caller)` |
`ResultScanner` | `HTableWrapper.getScanner(Scan scan)` |
`ResultScanner` | `Table.getScanner(Scan scan)`: Returns a scanner on the current table as specified by the Scan object. |
`ResultScanner` | `HTable.getScanner(Scan scan)`: The underlying HTable must not be closed. |
`protected void` | `AbstractClientScanner.initScanMetrics(Scan scan)`: Check and initialize if the application wants to collect scan metrics. |
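
All of the `getScanner` variants return a `ResultScanner`, which must be closed when finished. A minimal sketch of the usual client-side loop, assuming a table `t1` with a family `cf` (both placeholders):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ScannerExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("t1"));
         ResultScanner scanner =
             table.getScanner(new Scan().addFamily(Bytes.toBytes("cf")))) {
      for (Result result : scanner) {   // each Result is one row
        System.out.println(Bytes.toStringBinary(result.getRow()));
      }
    }
  }
}
```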
Constructor and Description |
---|
`ClientScanner(org.apache.hadoop.conf.Configuration conf, Scan scan, TableName tableName, ClusterConnection connection, RpcRetryingCallerFactory rpcFactory, RpcControllerFactory controllerFactory, ExecutorService pool, int primaryOperationTimeout)`: Create a new ClientScanner for the specified table. Note that the passed Scan's start row may be changed. |
`ClientSideRegionScanner(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path rootDir, HTableDescriptor htd, HRegionInfo hri, Scan scan, ScanMetrics scanMetrics)` |
`ClientSmallReversedScanner(org.apache.hadoop.conf.Configuration conf, Scan scan, TableName tableName, ClusterConnection connection, RpcRetryingCallerFactory rpcFactory, RpcControllerFactory controllerFactory, ExecutorService pool, int primaryOperationTimeout)`: Create a new ReversibleClientScanner for the specified table. |
`ClientSmallScanner(org.apache.hadoop.conf.Configuration conf, Scan scan, TableName tableName, ClusterConnection connection, RpcRetryingCallerFactory rpcFactory, RpcControllerFactory controllerFactory, ExecutorService pool, int primaryOperationTimeout)`: Create a new ShortClientScanner for the specified table. |
`ReversedClientScanner(org.apache.hadoop.conf.Configuration conf, Scan scan, TableName tableName, ClusterConnection connection, RpcRetryingCallerFactory rpcFactory, RpcControllerFactory controllerFactory, ExecutorService pool, int primaryOperationTimeout)`: Create a new ReversibleClientScanner for the specified table. Note that the passed Scan's start row may be changed. |
`ReversedScannerCallable(ClusterConnection connection, TableName tableName, Scan scan, ScanMetrics scanMetrics, byte[] locateStartRow)` |
`ReversedScannerCallable(ClusterConnection connection, TableName tableName, Scan scan, ScanMetrics scanMetrics, byte[] locateStartRow, RpcControllerFactory rpcFactory)` |
`ReversedScannerCallable(ClusterConnection connection, TableName tableName, Scan scan, ScanMetrics scanMetrics, byte[] locateStartRow, RpcControllerFactory rpcFactory, int replicaId)` |
`Scan(Scan scan)`: Creates a new instance of this class while copying all values. |
`ScannerCallable(ClusterConnection connection, TableName tableName, Scan scan, ScanMetrics scanMetrics, RpcControllerFactory rpcControllerFactory)` |
`ScannerCallable(ClusterConnection connection, TableName tableName, Scan scan, ScanMetrics scanMetrics, RpcControllerFactory rpcControllerFactory, int id)` |
`TableSnapshotScanner(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.Path rootDir, org.apache.hadoop.fs.Path restoreDir, String snapshotName, Scan scan)`: Creates a TableSnapshotScanner. |
`TableSnapshotScanner(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.Path restoreDir, String snapshotName, Scan scan)`: Creates a TableSnapshotScanner. |
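
Among these constructors, `TableSnapshotScanner` is the one intended for direct use: it reads a snapshot's files straight from the filesystem, bypassing the region servers. A hedged sketch; the snapshot name and restore directory are placeholders, and the restore directory must live on the same filesystem as the HBase root directory:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.TableSnapshotScanner;

public class SnapshotScanExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Path restoreDir = new Path("/tmp/snapshot-restore");  // placeholder
    Scan scan = new Scan();
    // Scans snapshot "my_snapshot" (hypothetical) directly from HDFS;
    // TableSnapshotScanner is a ResultScanner, so it is Closeable.
    try (TableSnapshotScanner scanner =
             new TableSnapshotScanner(conf, restoreDir, "my_snapshot", scan)) {
      for (Result r : scanner) {
        // process each row
      }
    }
  }
}
```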
Modifier and Type | Method and Description |
---|---|
`<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message>` | `AggregationClient.avg(Table table, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan)`: This is the client side interface/handle for calling the average method for a given cf-cq combination. |
`<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message>` | `AggregationClient.avg(TableName tableName, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan)`: This is the client side interface/handle for calling the average method for a given cf-cq combination. |
`<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message>` | `AggregationClient.max(Table table, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan)`: It gives the maximum value of a column for a given column family for the given range. |
`<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message>` | `AggregationClient.max(TableName tableName, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan)`: It gives the maximum value of a column for a given column family for the given range. |
`<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message>` | `AggregationClient.median(Table table, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan)`: This is the client side interface/handler for calling the median method for a given cf-cq combination. |
`<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message>` | `AggregationClient.median(TableName tableName, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan)`: This is the client side interface/handler for calling the median method for a given cf-cq combination. |
`<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message>` | `AggregationClient.min(Table table, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan)`: It gives the minimum value of a column for a given column family for the given range. |
`<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message>` | `AggregationClient.min(TableName tableName, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan)`: It gives the minimum value of a column for a given column family for the given range. |
`<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message>` | `AggregationClient.rowCount(Table table, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan)`: It gives the row count, by summing up the individual results obtained from regions. |
`<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message>` | `AggregationClient.rowCount(TableName tableName, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan)`: It gives the row count, by summing up the individual results obtained from regions. |
`<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message>` | `AggregationClient.std(Table table, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan)`: This is the client side interface/handle for calling the std method for a given cf-cq combination. |
`<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message>` | `AggregationClient.std(TableName tableName, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan)`: This is the client side interface/handle for calling the std method for a given cf-cq combination. |
`<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message>` | `AggregationClient.sum(Table table, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan)`: It sums up the value returned from various regions. |
`<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message>` | `AggregationClient.sum(TableName tableName, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan)`: It sums up the value returned from various regions. |
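
All of the aggregate calls require the AggregateImplementation coprocessor to be loaded on the target table, plus a `ColumnInterpreter` matching how the cells are encoded. A hedged sketch using the shipped `LongColumnInterpreter`; the table and column names are placeholders:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
import org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter;
import org.apache.hadoop.hbase.util.Bytes;

public class AggregationExample {
  public static void main(String[] args) throws Throwable {
    Configuration conf = HBaseConfiguration.create();
    AggregationClient aggClient = new AggregationClient(conf);
    Scan scan = new Scan();
    scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q1"));  // cf-cq pair to aggregate
    // Row count and max over cf:q1, fanned out to each region and combined.
    long rows = aggClient.rowCount(TableName.valueOf("t1"),
        new LongColumnInterpreter(), scan);
    Long max = aggClient.max(TableName.valueOf("t1"),
        new LongColumnInterpreter(), scan);
    System.out.println("rows=" + rows + " max=" + max);
    aggClient.close();
  }
}
```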
Modifier and Type | Method and Description |
---|---|
`KeyValueScanner` | `ZooKeeperScanPolicyObserver.preStoreScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Store store, Scan scan, NavigableSet<byte[]> targetCols, KeyValueScanner s)` |
Modifier and Type | Method and Description |
---|---|
`boolean` | `HalfStoreFileReader.passesKeyRangeFilter(Scan scan)` |
Modifier and Type | Method and Description |
---|---|
`Scan` | `TableSplit.getScan()`: Returns a Scan object from the stored string representation. |
`Scan` | `TableInputFormatBase.getScan()`: Gets the scan defining the actual details like columns etc. |
Modifier and Type | Method and Description |
---|---|
`protected List<Scan>` | `MultiTableInputFormatBase.getScans()`: Allows subclasses to get the list of Scan objects. |
Modifier and Type | Method and Description |
---|---|
`static void` | `TableInputFormat.addColumns(Scan scan, byte[][] columns)`: Adds an array of columns specified using old format, family:qualifier. |
`static void` | `IdentityTableMapper.initJob(String table, Scan scan, Class<? extends TableMapper> mapper, org.apache.hadoop.mapreduce.Job job)`: Use this before submitting a TableMap job. |
`static void` | `GroupingTableMapper.initJob(String table, Scan scan, String groupColumns, Class<? extends TableMapper> mapper, org.apache.hadoop.mapreduce.Job job)`: Use this before submitting a TableMap job. |
`static void` | `TableMapReduceUtil.initTableMapperJob(byte[] table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job)`: Use this before submitting a TableMap job. |
`static void` | `TableMapReduceUtil.initTableMapperJob(byte[] table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars)`: Use this before submitting a TableMap job. |
`static void` | `TableMapReduceUtil.initTableMapperJob(byte[] table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)`: Use this before submitting a TableMap job. |
`static void` | `TableMapReduceUtil.initTableMapperJob(String table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job)`: Use this before submitting a TableMap job. |
`static void` | `TableMapReduceUtil.initTableMapperJob(String table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars)`: Use this before submitting a TableMap job. |
`static void` | `TableMapReduceUtil.initTableMapperJob(String table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, boolean initCredentials, Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)`: Use this before submitting a TableMap job. |
`static void` | `TableMapReduceUtil.initTableMapperJob(String table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)`: Use this before submitting a TableMap job. |
`static void` | `TableMapReduceUtil.initTableMapperJob(TableName table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job)`: Use this before submitting a TableMap job. |
`static void` | `TableMapReduceUtil.initTableSnapshotMapperJob(String snapshotName, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, org.apache.hadoop.fs.Path tmpRestoreDir)`: Sets up the job for reading from a table snapshot. |
`void` | `TableRecordReaderImpl.setScan(Scan scan)`: Sets the scan defining the actual details like columns etc. |
`void` | `TableRecordReader.setScan(Scan scan)`: Sets the scan defining the actual details like columns etc. |
`void` | `TableInputFormatBase.setScan(Scan scan)`: Sets the scan defining the actual details like columns etc. |
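
A hedged sketch of the common pattern with the `String`-table variant; the table name, family, and mapper are placeholders:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.mapreduce.Job;

public class ScanJobExample {
  // Hypothetical mapper that just counts rows via a counter.
  static class RowCountMapper extends TableMapper<ImmutableBytesWritable, Result> {
    @Override
    protected void map(ImmutableBytesWritable key, Result value, Context ctx) {
      ctx.getCounter("scan", "rows").increment(1);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "scan-example");
    job.setJarByClass(ScanJobExample.class);
    Scan scan = new Scan();
    scan.addFamily(Bytes.toBytes("cf"));
    scan.setCaching(500);        // larger caching is typical for MR scans
    scan.setCacheBlocks(false);  // recommended for MR jobs
    TableMapReduceUtil.initTableMapperJob("t1", scan, RowCountMapper.class,
        ImmutableBytesWritable.class, Result.class, job);
    job.setNumReduceTasks(0);    // map-only job
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```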
Modifier and Type | Method and Description |
---|---|
`static void` | `TableMapReduceUtil.initTableMapperJob(List<Scan> scans, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job)`: Use this before submitting a Multi TableMap job. |
`static void` | `TableMapReduceUtil.initTableMapperJob(List<Scan> scans, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars)`: Use this before submitting a Multi TableMap job. |
`static void` | `TableMapReduceUtil.initTableMapperJob(List<Scan> scans, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, boolean initCredentials)`: Use this before submitting a Multi TableMap job. |
`protected void` | `MultiTableInputFormatBase.setScans(List<Scan> scans)`: Allows subclasses to set the list of Scan objects. |
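
For the `List<Scan>` variants, each `Scan` must carry the name of the table it targets via the `Scan.SCAN_ATTRIBUTES_TABLE_NAME` attribute, since a multi-table job has no single table argument. A hedged sketch with placeholder table names:

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Scan;

public class MultiScanSetup {
  // Build one Scan per source table; table names are hypothetical.
  public static List<Scan> buildScans(String... tables) {
    List<Scan> scans = new ArrayList<Scan>();
    for (String name : tables) {
      Scan scan = new Scan();
      // Each Scan names its target table for MultiTableInputFormat.
      scan.setAttribute(Scan.SCAN_ATTRIBUTES_TABLE_NAME,
          TableName.valueOf(name).getName());
      scans.add(scan);
    }
    return scans;
  }
}
```

The resulting list is what the `initTableMapperJob(List<Scan>, ...)` overloads above expect.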
Constructor and Description |
---|
`TableSplit(byte[] tableName, Scan scan, byte[] startRow, byte[] endRow, String location)`: Deprecated. As of release 0.96 (HBASE-9508); this will be removed in HBase 2.0.0. Use `TableSplit.TableSplit(TableName, byte[], byte[], String)`. |
`TableSplit(TableName tableName, Scan scan, byte[] startRow, byte[] endRow, String location)`: Creates a new instance while assigning all variables. |
`TableSplit(TableName tableName, Scan scan, byte[] startRow, byte[] endRow, String location, long length)`: Creates a new instance while assigning all variables. |
Modifier and Type | Method and Description |
---|---|
`static Scan` | `ProtobufUtil.toScan(ClientProtos.Scan proto)`: Convert a protocol buffer Scan to a client Scan. |
Modifier and Type | Method and Description |
---|---|
`static ClientProtos.ScanRequest` | `RequestConverter.buildScanRequest(byte[] regionName, Scan scan, int numberOfRows, boolean closeScanner)`: Create a protocol buffer ScanRequest for a client Scan. |
`static ClientProtos.Scan` | `ProtobufUtil.toScan(Scan scan)`: Convert a client Scan to a protocol buffer Scan. |
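
The two `toScan` overloads convert in opposite directions, which is how a client `Scan` travels inside a ScanRequest; a round trip should preserve the scan's settings. A hedged sketch:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
import org.apache.hadoop.hbase.protobuf.generated.ClientProtos;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanProtoRoundTrip {
  public static void main(String[] args) throws IOException {
    Scan scan = new Scan(Bytes.toBytes("a"), Bytes.toBytes("z"));
    scan.setCaching(50);
    // Client Scan -> protobuf Scan (what a ScanRequest carries) and back.
    ClientProtos.Scan proto = ProtobufUtil.toScan(scan);
    Scan copy = ProtobufUtil.toScan(proto);
    System.out.println(copy.getCaching());  // 50
  }
}
```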
Modifier and Type | Method and Description |
---|---|
`static Scan` | `QuotaTableUtil.makeScan(QuotaFilter filter)` |
Modifier and Type | Class and Description |
---|---|
`class` | `InternalScan`: Special scanner, currently used for increment operations to allow additional server-side arguments for Scan operations. |
Modifier and Type | Field and Description |
---|---|
`protected Scan` | `StoreScanner.scan` |
Modifier and Type | Method and Description |
---|---|
`RegionScanner` | `Region.getScanner(Scan scan)`: Return an iterator that scans over the HRegion, returning the indicated columns and rows specified by the Scan. |
`RegionScanner` | `HRegion.getScanner(Scan scan)` |
`protected RegionScanner` | `HRegion.getScanner(Scan scan, List<KeyValueScanner> additionalScanners)` |
`KeyValueScanner` | `Store.getScanner(Scan scan, NavigableSet<byte[]> targetCols, long readPt)`: Return a scanner for both the memstore and the HStore files. |
`KeyValueScanner` | `HStore.getScanner(Scan scan, NavigableSet<byte[]> targetCols, long readPt)` |
`protected RegionScanner` | `HRegion.instantiateRegionScanner(Scan scan, List<KeyValueScanner> additionalScanners)` |
`boolean` | `StoreFile.Reader.passesKeyRangeFilter(Scan scan)`: Checks whether the given scan's rowkey range overlaps with the current storefile's key range. |
`RegionScanner` | `RegionCoprocessorHost.postScannerOpen(Scan scan, RegionScanner s)` |
`RegionScanner` | `RegionCoprocessorHost.preScannerOpen(Scan scan)` |
`KeyValueScanner` | `RegionCoprocessorHost.preStoreScannerOpen(Store store, Scan scan, NavigableSet<byte[]> targetCols)` |
`boolean` | `DefaultMemStore.shouldSeek(Scan scan, long oldestUnexpiredTS)`: Check if this memstore may contain the required keys. |
`boolean` | `StoreFileScanner.shouldUseScanner(Scan scan, SortedSet<byte[]> columns, long oldestUnexpiredTS)` |
`boolean` | `NonLazyKeyValueScanner.shouldUseScanner(Scan scan, SortedSet<byte[]> columns, long oldestUnexpiredTS)` |
`boolean` | `KeyValueScanner.shouldUseScanner(Scan scan, SortedSet<byte[]> columns, long oldestUnexpiredTS)`: Allows filtering out scanners (both StoreFile and memstore) that we don't want to use based on criteria such as Bloom filters and timestamp ranges. |
`boolean` | `DefaultMemStore.MemStoreScanner.shouldUseScanner(Scan scan, SortedSet<byte[]> columns, long oldestUnexpiredTS)` |
Constructor and Description |
---|
`InternalScan(Scan scan)` |
`ScanQueryMatcher(Scan scan, ScanInfo scanInfo, NavigableSet<byte[]> columns, long readPointToUse, long earliestPutTs, long oldestUnexpiredTS, long now, byte[] dropDeletesFromRow, byte[] dropDeletesToRow, RegionCoprocessorHost regionCoprocessorHost)`: Construct a QueryMatcher for a scan that drops deletes from a limited range of rows. |
`ScanQueryMatcher(Scan scan, ScanInfo scanInfo, NavigableSet<byte[]> columns, ScanType scanType, long readPointToUse, long earliestPutTs, long oldestUnexpiredTS, long now, RegionCoprocessorHost regionCoprocessorHost)`: Construct a QueryMatcher for a scan. |
`StoreScanner(Store store, boolean cacheBlocks, Scan scan, NavigableSet<byte[]> columns, long ttl, int minVersions, long readPt)`: An internal constructor. |
`StoreScanner(Store store, ScanInfo scanInfo, Scan scan, List<? extends KeyValueScanner> scanners, long smallestReadPoint, long earliestPutTs, byte[] dropDeletesFromRow, byte[] dropDeletesToRow)`: Used for compactions that drop deletes from a limited range of rows. |
`StoreScanner(Store store, ScanInfo scanInfo, Scan scan, List<? extends KeyValueScanner> scanners, ScanType scanType, long smallestReadPoint, long earliestPutTs)`: Used for compactions. |
`StoreScanner(Store store, ScanInfo scanInfo, Scan scan, NavigableSet<byte[]> columns, long readPt)`: Opens a scanner across memstore, snapshot, and all StoreFiles. |
Modifier and Type | Method and Description |
---|---|
`ResultScanner` | `RemoteHTable.getScanner(Scan scan)` |
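
`RemoteHTable` speaks to the REST gateway but implements the same `Table` interface, so the `Scan`/`ResultScanner` pattern is unchanged. A hedged sketch assuming a REST server on localhost:8080 and a table `t1` (both placeholders):

```java
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.rest.client.Client;
import org.apache.hadoop.hbase.rest.client.Cluster;
import org.apache.hadoop.hbase.rest.client.RemoteHTable;
import org.apache.hadoop.hbase.util.Bytes;

public class RestScanExample {
  public static void main(String[] args) throws Exception {
    // Assumed endpoint: an HBase REST server on localhost:8080.
    Client client = new Client(new Cluster().add("localhost", 8080));
    RemoteHTable table = new RemoteHTable(client, "t1");
    try (ResultScanner scanner =
             table.getScanner(new Scan().addFamily(Bytes.toBytes("cf")))) {
      for (Result r : scanner) {
        System.out.println(Bytes.toStringBinary(r.getRow()));
      }
    } finally {
      table.close();
    }
  }
}
```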
Modifier and Type | Method and Description |
---|---|
`static ScannerModel` | `ScannerModel.fromScan(Scan scan)` |
Modifier and Type | Method and Description |
---|---|
`RegionScanner` | `AccessController.postScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Scan scan, RegionScanner s)` |
`RegionScanner` | `AccessController.preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Scan scan, RegionScanner s)` |
Modifier and Type | Method and Description |
---|---|
`RegionScanner` | `VisibilityController.postScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Scan scan, RegionScanner s)` |
`RegionScanner` | `VisibilityController.preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> e, Scan scan, RegionScanner s)` |
Modifier and Type | Method and Description |
---|---|
`static Scan` | `ThriftUtilities.scanFromThrift(TScan in)` |
Copyright © 2007–2016 The Apache Software Foundation. All rights reserved.