Package | Description
---|---
org.apache.hadoop.hbase |
org.apache.hadoop.hbase.client | Provides HBase Client.
org.apache.hadoop.hbase.client.coprocessor | Provides client classes for invoking Coprocessor RPC protocols.
org.apache.hadoop.hbase.coprocessor |
org.apache.hadoop.hbase.coprocessor.example |
org.apache.hadoop.hbase.io |
org.apache.hadoop.hbase.mapred | Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods.
org.apache.hadoop.hbase.mapreduce | Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods.
org.apache.hadoop.hbase.quotas |
org.apache.hadoop.hbase.regionserver |
org.apache.hadoop.hbase.rest.client |
org.apache.hadoop.hbase.rest.model |
org.apache.hadoop.hbase.security.access |
org.apache.hadoop.hbase.security.visibility |
org.apache.hadoop.hbase.thrift2 | Provides an HBase Thrift service.
Modifier and Type | Method and Description
---|---
static Scan | MetaTableAccessor.getScanForTableName(TableName tableName): This method creates a Scan object that will only scan catalog rows that belong to the specified table.
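As a point of reference, here is a minimal sketch of driving this helper against the catalog table; the connection setup and the table name "mytable" are assumptions for illustration:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.MetaTableAccessor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaScanSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Build a Scan restricted to the catalog rows of one (assumed) table.
    Scan scan = MetaTableAccessor.getScanForTableName(TableName.valueOf("mytable"));
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table meta = connection.getTable(TableName.META_TABLE_NAME);
         ResultScanner scanner = meta.getScanner(scan)) {
      for (Result r : scanner) {
        System.out.println(Bytes.toString(r.getRow())); // region rows for "mytable"
      }
    }
  }
}
```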
Modifier and Type | Field and Description
---|---
private Scan | ScannerCallable.scan
protected Scan | ClientScanner.scan
private Scan | ScannerCallableWithReplicas.scan
private Scan | TableSnapshotScanner.scan
Modifier and Type | Method and Description
---|---
Scan | Scan.addColumn(byte[] family, byte[] qualifier): Get the column from the specified family with the specified qualifier.
Scan | Scan.addFamily(byte[] family): Get all columns from the specified family.
(package private) static Scan | Scan.createGetClosestRowOrBeforeReverseScan(byte[] row): Utility that creates a Scan that will do a small scan in reverse from the passed row, looking for the next closest row.
protected Scan | ScannerCallable.getScan()
protected Scan | ClientScanner.getScan()
Scan | Scan.setACL(Map<String,Permission> perms)
Scan | Scan.setACL(String user, Permission perms)
Scan | Scan.setAllowPartialResults(boolean allowPartialResults): Set whether the caller wants to see the partial results that may be returned from the server.
Scan | Scan.setAttribute(String name, byte[] value)
Scan | Scan.setAuthorizations(Authorizations authorizations)
Scan | Scan.setBatch(int batch): Set the maximum number of values to return for each call to next().
Scan | Scan.setCacheBlocks(boolean cacheBlocks): Set whether blocks should be cached for this Scan.
Scan | Scan.setCaching(int caching): Set the number of rows for caching that will be passed to scanners.
Scan | Scan.setColumnFamilyTimeRange(byte[] cf, long minStamp, long maxStamp)
Scan | Scan.setConsistency(Consistency consistency)
Scan | Scan.setFamilyMap(Map<byte[],NavigableSet<byte[]>> familyMap): Set the familyMap.
Scan | Scan.setFilter(Filter filter)
Scan | Scan.setId(String id)
Scan | Scan.setIsolationLevel(IsolationLevel level)
Scan | Scan.setLoadColumnFamiliesOnDemand(boolean value)
Scan | Scan.setMaxResultSize(long maxResultSize): Set the maximum result size.
Scan | Scan.setMaxResultsPerColumnFamily(int limit): Set the maximum number of values to return per row per column family.
Scan | Scan.setMaxVersions(): Get all available versions.
Scan | Scan.setMaxVersions(int maxVersions): Get up to the specified number of versions of each column.
Scan | Scan.setRaw(boolean raw): Enable/disable "raw" mode for this scan.
Scan | Scan.setReplicaId(int Id)
Scan | Scan.setReversed(boolean reversed): Set whether this scan is a reversed one.
Scan | Scan.setRowOffsetPerColumnFamily(int offset): Set offset for the row per column family.
Scan | Scan.setRowPrefixFilter(byte[] rowPrefix): Set a filter (using stopRow and startRow) so the result set only contains rows where the rowKey starts with the specified prefix.
Scan | Scan.setScanMetricsEnabled(boolean enabled): Enable collection of ScanMetrics.
Scan | Scan.setSmall(boolean small): Set whether this scan is a small scan.
Scan | Scan.setStartRow(byte[] startRow): Set the start row of the scan.
Scan | Scan.setStopRow(byte[] stopRow): Set the stop row.
Scan | Scan.setTimeRange(long minStamp, long maxStamp): Get versions of columns only within the specified timestamp range, [minStamp, maxStamp).
Scan | Scan.setTimeStamp(long timestamp): Get versions of columns with the specified timestamp.
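Because each setter returns the Scan itself, configurations read naturally as a chain. A minimal sketch follows; the row keys, family, and qualifier are illustrative assumptions, and note that setTimeRange declares IOException:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanBuilderSketch {
  static Scan buildScan() throws IOException {
    Scan scan = new Scan()
        .setStartRow(Bytes.toBytes("row-0000"))  // inclusive start row
        .setStopRow(Bytes.toBytes("row-9999"));  // exclusive stop row
    scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"));
    scan.setCaching(500);        // rows fetched per RPC round trip
    scan.setBatch(100);          // max cells returned per call to next()
    scan.setCacheBlocks(false);  // avoid polluting the block cache on a long scan
    scan.setTimeRange(0L, Long.MAX_VALUE);  // [minStamp, maxStamp)
    return scan;
  }
}
```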
Modifier and Type | Method and Description |
---|---|
ScannerCallableWithReplicas |
ClientSmallScanner.SmallScannerCallableFactory.getCallable(ClusterConnection connection,
TableName table,
Scan scan,
ScanMetrics scanMetrics,
byte[] localStartKey,
int cacheNum,
RpcControllerFactory controllerFactory,
ExecutorService pool,
int primaryOperationTimeout,
int retries,
int scannerTimeout,
org.apache.hadoop.conf.Configuration conf,
RpcRetryingCaller<Result[]> caller) |
ScannerCallableWithReplicas |
ClientSmallReversedScanner.SmallReversedScannerCallableFactory.getCallable(ClusterConnection connection,
TableName table,
Scan scan,
ScanMetrics scanMetrics,
byte[] localStartKey,
int cacheNum,
RpcControllerFactory controllerFactory,
ExecutorService pool,
int primaryOperationTimeout,
int retries,
int scannerTimeout,
org.apache.hadoop.conf.Configuration conf,
RpcRetryingCaller<Result[]> caller,
boolean isFirstRegionToLocate) |
ResultScanner |
HTable.getScanner(Scan scan)
The underlying
HTable must not be closed. |
ResultScanner |
Table.getScanner(Scan scan)
Returns a scanner on the current table as specified by the
Scan
object. |
ResultScanner |
HTablePool.PooledHTable.getScanner(Scan scan) |
ResultScanner |
HTableWrapper.getScanner(Scan scan) |
protected void |
AbstractClientScanner.initScanMetrics(Scan scan)
Check and initialize scan metrics if the application wants to collect them.
|
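Table.getScanner(Scan) is the usual entry point for the scanners listed above; ResultScanner is both Iterable<Result> and Closeable. A minimal sketch, where the table name is an assumption:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;

public class ScannerSketch {
  public static void main(String[] args) throws Exception {
    try (Connection connection =
             ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = connection.getTable(TableName.valueOf("mytable"));
         ResultScanner scanner = table.getScanner(new Scan())) {
      for (Result result : scanner) {
        // process one row per Result
      }
    }
  }
}
```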
Constructor and Description |
---|
ClientScanner(org.apache.hadoop.conf.Configuration conf,
Scan scan,
TableName tableName,
ClusterConnection connection,
RpcRetryingCallerFactory rpcFactory,
RpcControllerFactory controllerFactory,
ExecutorService pool,
int primaryOperationTimeout)
Create a new ClientScanner for the specified table. Note that the passed
Scan's start row may be changed. |
ClientSideRegionScanner(org.apache.hadoop.conf.Configuration conf,
org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path rootDir,
HTableDescriptor htd,
HRegionInfo hri,
Scan scan,
ScanMetrics scanMetrics) |
ClientSmallReversedScanner.SmallReversedScannerCallable(ClusterConnection connection,
TableName table,
Scan scan,
ScanMetrics scanMetrics,
byte[] locateStartRow,
RpcControllerFactory controllerFactory,
int caching,
int replicaId) |
ClientSmallReversedScanner(org.apache.hadoop.conf.Configuration conf,
Scan scan,
TableName tableName,
ClusterConnection connection,
RpcRetryingCallerFactory rpcFactory,
RpcControllerFactory controllerFactory,
ExecutorService pool,
int primaryOperationTimeout)
Create a new ClientSmallReversedScanner for the specified table.
|
ClientSmallReversedScanner(org.apache.hadoop.conf.Configuration conf,
Scan scan,
TableName tableName,
ClusterConnection connection,
RpcRetryingCallerFactory rpcFactory,
RpcControllerFactory controllerFactory,
ExecutorService pool,
int primaryOperationTimeout,
ClientSmallReversedScanner.SmallReversedScannerCallableFactory callableFactory)
Create a new ClientSmallReversedScanner for the specified table.
|
ClientSmallScanner.SmallScannerCallable(ClusterConnection connection,
TableName table,
Scan scan,
ScanMetrics scanMetrics,
RpcControllerFactory controllerFactory,
int caching,
int id) |
ClientSmallScanner(org.apache.hadoop.conf.Configuration conf,
Scan scan,
TableName tableName,
ClusterConnection connection,
RpcRetryingCallerFactory rpcFactory,
RpcControllerFactory controllerFactory,
ExecutorService pool,
int primaryOperationTimeout)
Create a new ClientSmallScanner for the specified table.
|
ClientSmallScanner(org.apache.hadoop.conf.Configuration conf,
Scan scan,
TableName tableName,
ClusterConnection connection,
RpcRetryingCallerFactory rpcFactory,
RpcControllerFactory controllerFactory,
ExecutorService pool,
int primaryOperationTimeout,
ClientSmallScanner.SmallScannerCallableFactory callableFactory)
Create a new ClientSmallScanner for the specified table.
|
ReversedClientScanner(org.apache.hadoop.conf.Configuration conf,
Scan scan,
TableName tableName,
ClusterConnection connection,
RpcRetryingCallerFactory rpcFactory,
RpcControllerFactory controllerFactory,
ExecutorService pool,
int primaryOperationTimeout)
Create a new ReversedClientScanner for the specified table. Note that the
passed Scan's start row may be changed. |
ReversedScannerCallable(ClusterConnection connection,
TableName tableName,
Scan scan,
ScanMetrics scanMetrics,
byte[] locateStartRow)
|
ReversedScannerCallable(ClusterConnection connection,
TableName tableName,
Scan scan,
ScanMetrics scanMetrics,
byte[] locateStartRow,
RpcControllerFactory rpcFactory) |
ReversedScannerCallable(ClusterConnection connection,
TableName tableName,
Scan scan,
ScanMetrics scanMetrics,
byte[] locateStartRow,
RpcControllerFactory rpcFactory,
int replicaId) |
Scan(Scan scan)
Creates a new instance of this class while copying all values.
|
ScannerCallable(ClusterConnection connection,
TableName tableName,
Scan scan,
ScanMetrics scanMetrics,
RpcControllerFactory rpcControllerFactory) |
ScannerCallable(ClusterConnection connection,
TableName tableName,
Scan scan,
ScanMetrics scanMetrics,
RpcControllerFactory rpcControllerFactory,
int id) |
ScannerCallableWithReplicas(TableName tableName,
ClusterConnection cConnection,
ScannerCallable baseCallable,
ExecutorService pool,
int timeBeforeReplicas,
Scan scan,
int retries,
int scannerTimeout,
int caching,
org.apache.hadoop.conf.Configuration conf,
RpcRetryingCaller<Result[]> caller) |
TableSnapshotScanner(org.apache.hadoop.conf.Configuration conf,
org.apache.hadoop.fs.Path rootDir,
org.apache.hadoop.fs.Path restoreDir,
String snapshotName,
Scan scan)
Creates a TableSnapshotScanner.
|
TableSnapshotScanner(org.apache.hadoop.conf.Configuration conf,
org.apache.hadoop.fs.Path restoreDir,
String snapshotName,
Scan scan)
Creates a TableSnapshotScanner.
|
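TableSnapshotScanner reads snapshot files from the filesystem directly, bypassing the region servers. A minimal sketch; the snapshot name and restore directory are assumptions, and the restore directory is a scratch location on the same filesystem as the HBase root:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.TableSnapshotScanner;

public class SnapshotScanSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Path restoreDir = new Path("/hbase-restore/my_snapshot");  // assumed scratch dir
    TableSnapshotScanner scanner =
        new TableSnapshotScanner(conf, restoreDir, "my_snapshot", new Scan());
    try {
      for (Result result : scanner) {
        // process one row per Result, read straight from the snapshot's HFiles
      }
    } finally {
      scanner.close();
    }
  }
}
```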
Modifier and Type | Method and Description |
---|---|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.avg(Table table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
This is the client side interface/handle for calling the average method for
a given cf-cq combination.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.avg(TableName tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
This is the client side interface/handle for calling the average method for
a given cf-cq combination.
|
private <R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.getAvgArgs(Table table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It computes average while fetching sum and row count from all the
corresponding regions.
|
private <R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.getAvgArgs(TableName tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It computes average while fetching sum and row count from all the
corresponding regions.
|
private <R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.getMedianArgs(Table table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It helps locate the region with median for a given column whose weight
is specified in an optional column.
|
private <R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.getStdArgs(Table table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It computes a global standard deviation for a given column and its value.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.max(Table table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It gives the maximum value of a column for a given column family for the
given range.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.max(TableName tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It gives the maximum value of a column for a given column family for the
given range.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.median(Table table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
This is the client side interface/handler for calling the median method for a
given cf-cq combination.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.median(TableName tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
This is the client side interface/handler for calling the median method for a
given cf-cq combination.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.min(Table table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It gives the minimum value of a column for a given column family for the
given range.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.min(TableName tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It gives the minimum value of a column for a given column family for the
given range.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.rowCount(Table table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It gives the row count, by summing up the individual results obtained from
regions.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.rowCount(TableName tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It gives the row count, by summing up the individual results obtained from
regions.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.std(Table table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
This is the client side interface/handle for calling the std method for a
given cf-cq combination.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.std(TableName tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
This is the client side interface/handle for calling the std method for a
given cf-cq combination.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.sum(Table table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It sums up the value returned from various regions.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.sum(TableName tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It sums up the value returned from various regions.
|
(package private) <R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.validateArgAndGetPB(Scan scan,
ColumnInterpreter<R,S,P,Q,T> ci,
boolean canFamilyBeAbsent) |
private void |
AggregationClient.validateParameters(Scan scan,
boolean canFamilyBeAbsent) |
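These calls fan out to the aggregation coprocessor on each region and merge the partial results client-side. A minimal sketch of a server-side row count; it assumes the AggregateImplementation coprocessor is loaded on the (assumed) table "mytable", and note the aggregation methods declare Throwable:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
import org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter;
import org.apache.hadoop.hbase.util.Bytes;

public class RowCountSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    AggregationClient aggregationClient = new AggregationClient(conf);
    Scan scan = new Scan();
    scan.addFamily(Bytes.toBytes("cf"));  // the scan must name a column family
    try {
      long rows = aggregationClient.rowCount(
          TableName.valueOf("mytable"), new LongColumnInterpreter(), scan);
      System.out.println("rows: " + rows);
    } catch (Throwable t) {  // the aggregation methods declare Throwable
      throw new RuntimeException(t);
    }
  }
}
```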
Modifier and Type | Method and Description
---|---
KeyValueScanner | ZooKeeperScanPolicyObserver.preStoreScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Store store, Scan scan, NavigableSet<byte[]> targetCols, KeyValueScanner s)
Modifier and Type | Method and Description
---|---
boolean | HalfStoreFileReader.passesKeyRangeFilter(Scan scan)
Modifier and Type | Method and Description |
---|---|
static void |
TableMapReduceUtil.initMultiTableSnapshotMapperJob(Map<String,Collection<Scan>> snapshotScans,
Class<? extends TableMap> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapred.JobConf job,
boolean addDependencyJars,
org.apache.hadoop.fs.Path tmpRestoreDir)
Sets up the job for reading from one or more table snapshots, with one or more scans
per snapshot.
|
static void |
MultiTableSnapshotInputFormat.setInput(org.apache.hadoop.conf.Configuration conf,
Map<String,Collection<Scan>> snapshotScans,
org.apache.hadoop.fs.Path restoreDir)
Configure conf to read from snapshotScans, with snapshots restored to a subdirectory of
restoreDir.
|
Constructor and Description |
---|
TableSnapshotInputFormat.TableSnapshotRegionSplit(HTableDescriptor htd,
HRegionInfo regionInfo,
List<String> locations,
Scan scan,
org.apache.hadoop.fs.Path restoreDir) |
Modifier and Type | Field and Description
---|---
private Scan | TableRecordReaderImpl.currentScan
private Scan | TableRecordReaderImpl.scan
private Scan | TableInputFormatBase.scan: Holds the details for the internal scanner.
private Scan | TableSnapshotInputFormatImpl.RecordReader.scan
Modifier and Type | Field and Description
---|---
private List<Scan> | MultiTableInputFormatBase.scans: Holds the set of scans used to define the input.
Modifier and Type | Method and Description
---|---
(package private) static Scan | TableMapReduceUtil.convertStringToScan(String base64): Converts the given Base64 string back into a Scan instance.
static Scan | TableSnapshotInputFormatImpl.extractScanFromConf(org.apache.hadoop.conf.Configuration conf)
private static Scan | CellCounter.getConfiguredScanForJob(org.apache.hadoop.conf.Configuration conf, String[] args)
private static Scan | Export.getConfiguredScanForJob(org.apache.hadoop.conf.Configuration conf, String[] args)
Scan | TableSplit.getScan(): Returns a Scan object from the stored string representation.
Scan | TableInputFormatBase.getScan(): Gets the scan defining the actual details like columns etc.
(package private) Scan | HashTable.TableHash.initScan()
Modifier and Type | Method and Description
---|---
protected List<Scan> | MultiTableInputFormatBase.getScans(): Allows subclasses to get the list of Scan objects.
Map<String,Collection<Scan>> | MultiTableSnapshotInputFormatImpl.getSnapshotsToScans(org.apache.hadoop.conf.Configuration conf): Retrieve the snapshot name -> list<scan> mapping pushed to configuration by MultiTableSnapshotInputFormatImpl.setSnapshotToScans(org.apache.hadoop.conf.Configuration, java.util.Map).
Modifier and Type | Method and Description |
---|---|
private static void |
TableInputFormat.addColumn(Scan scan,
byte[] familyAndQualifier)
Parses a combined family and qualifier and adds either both or just the
family in case there is no qualifier.
|
static void |
TableInputFormat.addColumns(Scan scan,
byte[][] columns)
Adds an array of columns specified using old format, family:qualifier.
|
private static void |
TableInputFormat.addColumns(Scan scan,
String columns)
Convenience method to parse a string representation of an array of column specifiers.
|
(package private) static String |
TableMapReduceUtil.convertScanToString(Scan scan)
Writes the given scan into a Base64 encoded string.
|
static List<TableSnapshotInputFormatImpl.InputSplit> |
TableSnapshotInputFormatImpl.getSplits(Scan scan,
SnapshotManifest manifest,
List<HRegionInfo> regionManifests,
org.apache.hadoop.fs.Path restoreDir,
org.apache.hadoop.conf.Configuration conf) |
static void |
IdentityTableMapper.initJob(String table,
Scan scan,
Class<? extends TableMapper> mapper,
org.apache.hadoop.mapreduce.Job job)
Use this before submitting a TableMap job.
|
static void |
GroupingTableMapper.initJob(String table,
Scan scan,
String groupColumns,
Class<? extends TableMapper> mapper,
org.apache.hadoop.mapreduce.Job job)
Use this before submitting a TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(byte[] table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job)
Use this before submitting a TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(byte[] table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars)
Use this before submitting a TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(byte[] table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars,
Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)
Use this before submitting a TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(String table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job)
Use this before submitting a TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(String table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars)
Use this before submitting a TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(String table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars,
boolean initCredentials,
Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)
Use this before submitting a TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(String table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars,
Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)
Use this before submitting a TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(TableName table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job)
Use this before submitting a TableMap job.
|
static void |
TableMapReduceUtil.initTableSnapshotMapperJob(String snapshotName,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars,
org.apache.hadoop.fs.Path tmpRestoreDir)
Sets up the job for reading from a table snapshot.
|
void |
TableRecordReader.setScan(Scan scan)
Sets the scan defining the actual details like columns etc.
|
void |
TableRecordReaderImpl.setScan(Scan scan)
Sets the scan defining the actual details like columns etc.
|
void |
TableInputFormatBase.setScan(Scan scan)
Sets the scan defining the actual details like columns etc.
|
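The initTableMapperJob overloads serialize the Scan into the job configuration (via convertScanToString above) and wire up the table input format. A minimal sketch; the table name and the identity MyMapper are assumptions for illustration:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class ScanJobSketch {

  // Hypothetical identity mapper: passes each row straight through.
  static class MyMapper extends TableMapper<ImmutableBytesWritable, Result> {
    @Override
    protected void map(ImmutableBytesWritable key, Result value, Context context)
        throws IOException, InterruptedException {
      context.write(key, value);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "scan-mytable");
    job.setJarByClass(ScanJobSketch.class);

    Scan scan = new Scan();
    scan.setCaching(500);        // larger caching suits sequential MapReduce reads
    scan.setCacheBlocks(false);  // recommended for full-table MapReduce scans

    TableMapReduceUtil.initTableMapperJob(
        "mytable",                     // input table (assumed)
        scan,                          // serialized into the job configuration
        MyMapper.class,
        ImmutableBytesWritable.class,  // mapper output key class
        Result.class,                  // mapper output value class
        job);
    job.setOutputFormatClass(NullOutputFormat.class);  // no reducer output here
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```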
Modifier and Type | Method and Description |
---|---|
static void |
TableMapReduceUtil.initMultiTableSnapshotMapperJob(Map<String,Collection<Scan>> snapshotScans,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars,
org.apache.hadoop.fs.Path tmpRestoreDir)
Sets up the job for reading from one or more table snapshots, with one or more scans
per snapshot.
|
static void |
TableMapReduceUtil.initTableMapperJob(List<Scan> scans,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job)
Use this before submitting a Multi TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(List<Scan> scans,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars)
Use this before submitting a Multi TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(List<Scan> scans,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars,
boolean initCredentials)
Use this before submitting a Multi TableMap job.
|
void |
MultiTableSnapshotInputFormatImpl.setInput(org.apache.hadoop.conf.Configuration conf,
Map<String,Collection<Scan>> snapshotScans,
org.apache.hadoop.fs.Path restoreDir)
Configure conf to read from snapshotScans, with snapshots restored to a subdirectory of
restoreDir.
|
static void |
MultiTableSnapshotInputFormat.setInput(org.apache.hadoop.conf.Configuration configuration,
Map<String,Collection<Scan>> snapshotScans,
org.apache.hadoop.fs.Path tmpRestoreDir) |
protected void |
MultiTableInputFormatBase.setScans(List<Scan> scans)
Allows subclasses to set the list of
Scan objects. |
void |
MultiTableSnapshotInputFormatImpl.setSnapshotToScans(org.apache.hadoop.conf.Configuration conf,
Map<String,Collection<Scan>> snapshotScans)
Push snapshotScans to conf (under the key
MultiTableSnapshotInputFormatImpl.SNAPSHOT_TO_SCANS_KEY ) |
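The List<Scan> overloads run one job across several tables; each Scan must carry its table name in the Scan.SCAN_ATTRIBUTES_TABLE_NAME attribute so the multi-table input format can route it. A minimal sketch, with the two table names assumed and MyMapper standing in for any TableMapper (such as the one sketched earlier):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.mapreduce.Job;

public class MultiTableScanSketch {
  // Configures the passed job to read two (assumed) tables in a single mapper pass.
  static void configure(Job job) throws IOException {
    List<Scan> scans = new ArrayList<Scan>();
    for (String tableName : new String[] { "table1", "table2" }) {
      Scan scan = new Scan();
      // Each Scan names its table so the input format knows where to send it.
      scan.setAttribute(Scan.SCAN_ATTRIBUTES_TABLE_NAME, Bytes.toBytes(tableName));
      scans.add(scan);
    }
    TableMapReduceUtil.initTableMapperJob(
        scans, ScanJobSketch.MyMapper.class,  // hypothetical mapper from the earlier sketch
        ImmutableBytesWritable.class, Result.class, job);
  }
}
```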
Constructor and Description |
---|
TableSnapshotInputFormat.TableSnapshotRegionSplit(HTableDescriptor htd,
HRegionInfo regionInfo,
List<String> locations,
Scan scan,
org.apache.hadoop.fs.Path restoreDir) |
TableSnapshotInputFormatImpl.InputSplit(HTableDescriptor htd,
HRegionInfo regionInfo,
List<String> locations,
Scan scan,
org.apache.hadoop.fs.Path restoreDir) |
TableSplit(byte[] tableName,
Scan scan,
byte[] startRow,
byte[] endRow,
String location)
Deprecated.
As of release 0.96
(HBASE-9508).
This will be removed in HBase 2.0.0.
Use
TableSplit.TableSplit(TableName, byte[], byte[], String) . |
TableSplit(TableName tableName,
Scan scan,
byte[] startRow,
byte[] endRow,
String location)
Creates a new instance while assigning all variables.
|
TableSplit(TableName tableName,
Scan scan,
byte[] startRow,
byte[] endRow,
String location,
long length)
Creates a new instance while assigning all variables.
|
Modifier and Type | Method and Description
---|---
static Scan | QuotaTableUtil.makeScan(QuotaFilter filter)

Modifier and Type | Method and Description
---|---
(package private) void | QuotaRetriever.init(org.apache.hadoop.conf.Configuration conf, Scan scan)
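QuotaTableUtil.makeScan(filter) builds the Scan that QuotaRetriever runs over the hbase:quota table; the public entry point is QuotaRetriever.open(...). A minimal sketch, assuming quotas have been configured on the cluster:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.quotas.QuotaFilter;
import org.apache.hadoop.hbase.quotas.QuotaRetriever;
import org.apache.hadoop.hbase.quotas.QuotaSettings;

public class QuotaListSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // An empty QuotaFilter lists everything stored in hbase:quota.
    try (QuotaRetriever scanner = QuotaRetriever.open(conf, new QuotaFilter())) {
      for (QuotaSettings settings : scanner) {
        System.out.println(settings);  // one entry per stored quota
      }
    }
  }
}
```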
Modifier and Type | Class and Description
---|---
class | InternalScan: Special scanner, currently used for increment operations to allow additional server-side arguments for Scan operations.
Modifier and Type | Field and Description
---|---
protected Scan | StoreScanner.scan
Modifier and Type | Method and Description
---|---
private Scan | HRegion.buildScanForGetWithClosestRowBefore(Get get)
Modifier and Type | Method and Description |
---|---|
RegionScanner |
Region.getScanner(Scan scan)
Return an iterator that scans over the HRegion, returning the indicated
columns and rows specified by the
Scan . |
RegionScanner |
HRegion.getScanner(Scan scan) |
RegionScanner |
Region.getScanner(Scan scan,
List<KeyValueScanner> additionalScanners)
Return an iterator that scans over the HRegion, returning the indicated columns and rows
specified by the
Scan . |
RegionScanner |
HRegion.getScanner(Scan scan,
List<KeyValueScanner> additionalScanners) |
KeyValueScanner |
Store.getScanner(Scan scan,
NavigableSet<byte[]> targetCols,
long readPt)
Return a scanner for both the memstore and the HStore files.
|
KeyValueScanner |
HStore.getScanner(Scan scan,
NavigableSet<byte[]> targetCols,
long readPt) |
protected RegionScanner |
HRegion.instantiateRegionScanner(Scan scan,
List<KeyValueScanner> additionalScanners) |
(package private) boolean |
StoreFile.Reader.passesBloomFilter(Scan scan,
SortedSet<byte[]> columns)
Checks whether the given scan passes the Bloom filter (if present).
|
boolean |
StoreFile.Reader.passesKeyRangeFilter(Scan scan)
Checks whether the given scan rowkey range overlaps with the current storefile's key range.
|
RegionScanner |
RegionCoprocessorHost.postScannerOpen(Scan scan,
RegionScanner s) |
RegionScanner |
RegionCoprocessorHost.preScannerOpen(Scan scan) |
KeyValueScanner |
RegionCoprocessorHost.preStoreScannerOpen(Store store,
Scan scan,
NavigableSet<byte[]> targetCols)
|
boolean |
DefaultMemStore.shouldSeek(Scan scan,
Store store,
long oldestUnexpiredTS)
Check if this memstore may contain the required keys.
|
boolean |
DefaultMemStore.MemStoreScanner.shouldUseScanner(Scan scan,
Store store,
long oldestUnexpiredTS) |
boolean |
KeyValueScanner.shouldUseScanner(Scan scan,
Store store,
long oldestUnexpiredTS)
Allows to filter out scanners (both StoreFile and memstore) that we don't
want to use based on criteria such as Bloom filters and timestamp ranges.
|
boolean |
NonLazyKeyValueScanner.shouldUseScanner(Scan scan,
Store store,
long oldestUnexpiredTS) |
boolean |
StoreFileScanner.shouldUseScanner(Scan scan,
Store store,
long oldestUnexpiredTS) |
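RegionCoprocessorHost.preScannerOpen/postScannerOpen above are the host-side dispatch points for RegionObserver coprocessors; this is how AccessController and VisibilityController (listed further down) intercept scans. A minimal observer sketch, with a purely illustrative version-capping policy:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.RegionScanner;

public class ScanCappingObserver extends BaseRegionObserver {
  @Override
  public RegionScanner preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> e,
      Scan scan, RegionScanner s) throws IOException {
    scan.setMaxVersions(1);  // illustrative policy: force single-version reads
    return s;                // returning s unchanged lets the core create the scanner
  }
}
```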
Constructor and Description |
---|
HRegion.RegionScannerImpl(Scan scan,
List<KeyValueScanner> additionalScanners,
HRegion region) |
InternalScan(Scan scan) |
ReversedRegionScannerImpl(Scan scan,
List<KeyValueScanner> additionalScanners,
HRegion region) |
ReversedStoreScanner(Scan scan,
ScanInfo scanInfo,
ScanType scanType,
NavigableSet<byte[]> columns,
List<KeyValueScanner> scanners)
Constructor for testing.
|
ReversedStoreScanner(Store store,
ScanInfo scanInfo,
Scan scan,
NavigableSet<byte[]> columns,
long readPt)
Opens a scanner across memstore, snapshot, and all StoreFiles.
|
ScanQueryMatcher(Scan scan,
ScanInfo scanInfo,
NavigableSet<byte[]> columns,
long oldestUnexpiredTS,
long now) |
ScanQueryMatcher(Scan scan,
ScanInfo scanInfo,
NavigableSet<byte[]> columns,
long readPointToUse,
long earliestPutTs,
long oldestUnexpiredTS,
long now,
byte[] dropDeletesFromRow,
byte[] dropDeletesToRow,
RegionCoprocessorHost regionCoprocessorHost)
Construct a QueryMatcher for a scan that drops deletes from a limited range of rows.
|
ScanQueryMatcher(Scan scan,
ScanInfo scanInfo,
NavigableSet<byte[]> columns,
ScanType scanType,
long readPointToUse,
long earliestPutTs,
long oldestUnexpiredTS,
long now,
RegionCoprocessorHost regionCoprocessorHost)
Construct a QueryMatcher for a scan.
|
StoreScanner(Scan scan,
ScanInfo scanInfo,
ScanType scanType,
NavigableSet<byte[]> columns,
List<KeyValueScanner> scanners) |
StoreScanner(Scan scan,
ScanInfo scanInfo,
ScanType scanType,
NavigableSet<byte[]> columns,
List<KeyValueScanner> scanners,
long earliestPutTs) |
StoreScanner(Scan scan,
ScanInfo scanInfo,
ScanType scanType,
NavigableSet<byte[]> columns,
List<KeyValueScanner> scanners,
long earliestPutTs,
long readPt) |
StoreScanner(Store store,
ScanInfo scanInfo,
Scan scan,
List<? extends KeyValueScanner> scanners,
long smallestReadPoint,
long earliestPutTs,
byte[] dropDeletesFromRow,
byte[] dropDeletesToRow)
Used for compactions that drop deletes from a limited range of rows.
|
StoreScanner(Store store,
ScanInfo scanInfo,
Scan scan,
List<? extends KeyValueScanner> scanners,
ScanType scanType,
long smallestReadPoint,
long earliestPutTs)
Used for compactions.
|
StoreScanner(Store store,
ScanInfo scanInfo,
Scan scan,
List<? extends KeyValueScanner> scanners,
ScanType scanType,
long smallestReadPoint,
long earliestPutTs,
byte[] dropDeletesFromRow,
byte[] dropDeletesToRow) |
StoreScanner(Store store,
ScanInfo scanInfo,
Scan scan,
NavigableSet<byte[]> columns,
long readPt)
Opens a scanner across memstore, snapshot, and all StoreFiles.
|
StoreScanner(Store store,
Scan scan,
ScanInfo scanInfo,
NavigableSet<byte[]> columns,
long readPt,
boolean cacheBlocks)
An internal constructor.
|
Modifier and Type | Method and Description
---|---
ResultScanner | RemoteHTable.getScanner(Scan scan)
Constructor and Description
---|
RemoteHTable.Scanner(Scan scan)
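RemoteHTable speaks to the REST gateway instead of the region servers, but exposes the same getScanner(Scan) contract. A minimal sketch; the gateway host is an assumption, and 8080 is the REST server's default port:

```java
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.rest.client.Client;
import org.apache.hadoop.hbase.rest.client.Cluster;
import org.apache.hadoop.hbase.rest.client.RemoteHTable;

public class RestScanSketch {
  public static void main(String[] args) throws Exception {
    Cluster cluster = new Cluster();
    cluster.add("rest-gateway.example.com", 8080);  // assumed gateway endpoint
    RemoteHTable table = new RemoteHTable(new Client(cluster), "mytable");
    try (ResultScanner scanner = table.getScanner(new Scan())) {
      for (Result result : scanner) {
        // each Result is fetched over HTTP from the REST gateway
      }
    } finally {
      table.close();
    }
  }
}
```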
Modifier and Type | Method and Description
---|---
static ScannerModel | ScannerModel.fromScan(Scan scan)
Modifier and Type | Method and Description
---|---
RegionScanner | AccessController.postScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Scan scan, RegionScanner s)
RegionScanner | AccessController.preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Scan scan, RegionScanner s)
Modifier and Type | Method and Description
---|---
RegionScanner | VisibilityController.postScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Scan scan, RegionScanner s)
RegionScanner | VisibilityController.preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> e, Scan scan, RegionScanner s)
Modifier and Type | Method and Description
---|---
static Scan | ThriftUtilities.scanFromThrift(org.apache.hadoop.hbase.thrift2.generated.TScan in)