Packages that use Scan:

Package | Description |
---|---|
org.apache.hadoop.hbase.client | Provides HBase Client |
org.apache.hadoop.hbase.mapred | Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods. |
org.apache.hadoop.hbase.mapreduce | Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods. |
org.apache.hadoop.hbase.rest.client | |
Methods in org.apache.hadoop.hbase.client that return Scan:

Modifier and Type | Method and Description |
---|---|
Scan | Scan.addColumn(byte[] family, byte[] qualifier): Get the column from the specified family with the specified qualifier. |
Scan | Scan.addFamily(byte[] family): Get all columns from the specified family. |
static Scan | Scan.createScanFromCursor(Cursor cursor): Create a new Scan with a cursor. |
Scan | Scan.setACL(Map<String,org.apache.hadoop.hbase.security.access.Permission> perms) |
Scan | Scan.setACL(String user, org.apache.hadoop.hbase.security.access.Permission perms) |
Scan | Scan.setAllowPartialResults(boolean allowPartialResults): Set whether the caller wants to see partial results when the server returns fewer cells than expected. |
Scan | Scan.setAttribute(String name, byte[] value) |
Scan | Scan.setAuthorizations(org.apache.hadoop.hbase.security.visibility.Authorizations authorizations) |
Scan | Scan.setBatch(int batch): Set the maximum number of cells to return for each call to next(). |
Scan | Scan.setCacheBlocks(boolean cacheBlocks): Set whether blocks should be cached for this Scan. |
Scan | Scan.setCaching(int caching): Set the number of rows for caching that will be passed to scanners. |
Scan | Scan.setColumnFamilyTimeRange(byte[] cf, long minStamp, long maxStamp) |
Scan | Scan.setConsistency(Consistency consistency) |
Scan | Scan.setFamilyMap(Map<byte[],NavigableSet<byte[]>> familyMap): Set the familyMap. |
Scan | Scan.setFilter(Filter filter) |
Scan | Scan.setId(String id) |
Scan | Scan.setIsolationLevel(IsolationLevel level) |
Scan | Scan.setLimit(int limit): Set the limit of rows for this scan. |
Scan | Scan.setLoadColumnFamiliesOnDemand(boolean value) |
Scan | Scan.setMaxResultSize(long maxResultSize): Set the maximum result size. |
Scan | Scan.setMaxResultsPerColumnFamily(int limit): Set the maximum number of values to return per row per column family. |
Scan | Scan.setMaxVersions(): Get all available versions. |
Scan | Scan.setMaxVersions(int maxVersions): Get up to the specified number of versions of each column. |
Scan | Scan.setNeedCursorResult(boolean needCursorResult): When the server is slow, the table contains a lot of deleted data, or a sparse filter is used, the server responds with heartbeats to prevent the client from timing out. |
Scan | Scan.setOneRowLimit(): Call this when you only want to get one row. |
Scan | Scan.setPriority(int priority) |
Scan | Scan.setRaw(boolean raw): Enable/disable "raw" mode for this scan. |
Scan | Scan.setReadType(Scan.ReadType readType): Set the read type for this scan. |
Scan | Scan.setReplicaId(int Id) |
Scan | Scan.setReversed(boolean reversed): Set whether this scan is a reversed one. |
Scan | Scan.setRowOffsetPerColumnFamily(int offset): Set offset for the row per column family. |
Scan | Scan.setRowPrefixFilter(byte[] rowPrefix): Set a filter (using stopRow and startRow) so the result set only contains rows where the rowKey starts with the specified prefix. |
Scan | Scan.setScanMetricsEnabled(boolean enabled): Enable collection of ScanMetrics. |
Scan | Scan.setSmall(boolean small): Set whether this scan is a small scan. |
Scan | Scan.setStartRow(byte[] startRow): Deprecated. Use withStartRow(byte[]) instead. This method may change the inclusiveness of the stop row to stay compatible with the old behavior. |
Scan | Scan.setStopRow(byte[] stopRow): Deprecated. Use withStopRow(byte[]) instead. This method may change the inclusiveness of the stop row to stay compatible with the old behavior. |
Scan | Scan.setTimeRange(long minStamp, long maxStamp): Get versions of columns only within the specified timestamp range, [minStamp, maxStamp). |
Scan | Scan.setTimeStamp(long timestamp): Get versions of columns with the specified timestamp. |
Scan | Scan.withStartRow(byte[] startRow): Set the start row of the scan. |
Scan | Scan.withStartRow(byte[] startRow, boolean inclusive): Set the start row of the scan. |
Scan | Scan.withStopRow(byte[] stopRow): Set the stop row of the scan. |
Scan | Scan.withStopRow(byte[] stopRow, boolean inclusive): Set the stop row of the scan. |
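Every setter in this table returns the Scan itself, so a scan is normally built as one fluent chain. A minimal sketch of that pattern; the row keys, column family, and tuning values are illustrative:

```java
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanSetup {
  static Scan buildScan() {
    // Forward scan over a bounded row-key range, restricted to one column
    // family, with client-side buffering tuned. All names are illustrative.
    return new Scan()
        .withStartRow(Bytes.toBytes("row-0001"), true)   // inclusive start row
        .withStopRow(Bytes.toBytes("row-1000"), false)   // exclusive stop row
        .addFamily(Bytes.toBytes("cf"))                  // only this family
        .setCaching(100)        // rows fetched per RPC round trip
        .setCacheBlocks(false)  // skip the block cache for a one-off scan
        .setLimit(500);         // cap total rows returned
  }
}
```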
Methods in org.apache.hadoop.hbase.client with parameters of type Scan:

Modifier and Type | Method and Description |
---|---|
ResultScanner | Table.getScanner(Scan scan): Returns a scanner on the current table as specified by the Scan object. |
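A configured Scan is executed by handing it to Table.getScanner(Scan); the returned ResultScanner is iterable and must be closed. A minimal sketch, assuming a reachable cluster and a hypothetical table name:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanRunner {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("my_table"));  // hypothetical table
         ResultScanner scanner = table.getScanner(new Scan().addFamily(Bytes.toBytes("cf")))) {
      for (Result result : scanner) {  // each Result is one row
        System.out.println(Bytes.toString(result.getRow()));
      }
    }
  }
}
```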
Constructors in org.apache.hadoop.hbase.client with parameters of type Scan:

Constructor and Description |
---|
Scan(Scan scan): Creates a new instance of this class while copying all values. |
TableSnapshotScanner(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.Path rootDir, org.apache.hadoop.fs.Path restoreDir, String snapshotName, Scan scan): Creates a TableSnapshotScanner. |
TableSnapshotScanner(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.Path restoreDir, String snapshotName, Scan scan): Creates a TableSnapshotScanner. |
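TableSnapshotScanner reads snapshot files directly from the filesystem instead of going through region servers, which is why it takes a restore directory rather than a connection. A minimal sketch; the snapshot name and restore path are illustrative:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.TableSnapshotScanner;

public class SnapshotScanExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Scratch directory where the snapshot is restored; illustrative path.
    Path restoreDir = new Path("/tmp/restore-my-snapshot");
    try (TableSnapshotScanner scanner =
             new TableSnapshotScanner(conf, restoreDir, "my_snapshot", new Scan())) {
      for (Result r : scanner) {
        // each Result is one row read straight from the snapshot files
      }
    }
  }
}
```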
Method parameters in org.apache.hadoop.hbase.mapred with type arguments of type Scan:

Modifier and Type | Method and Description |
---|---|
static void | TableMapReduceUtil.initMultiTableSnapshotMapperJob(Map<String,Collection<Scan>> snapshotScans, Class<? extends TableMap> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapred.JobConf job, boolean addDependencyJars, org.apache.hadoop.fs.Path tmpRestoreDir): Sets up the job for reading from one or more table snapshots, with one or more scans per snapshot. |
static void | MultiTableSnapshotInputFormat.setInput(org.apache.hadoop.conf.Configuration conf, Map<String,Collection<Scan>> snapshotScans, org.apache.hadoop.fs.Path restoreDir): Configure conf to read from snapshotScans, with snapshots restored to a subdirectory of restoreDir. |
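Both this mapred variant and the mapreduce variant further below take the same Map<String,Collection<Scan>> argument, mapping each snapshot name to the scans to run against it. A minimal sketch of building that map; snapshot names and key ranges are illustrative:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class SnapshotScans {
  static Map<String, Collection<Scan>> build() {
    Map<String, Collection<Scan>> snapshotScans = new HashMap<>();
    // One full scan over snapshot_a, two disjoint range scans over snapshot_b.
    snapshotScans.put("snapshot_a", Arrays.asList(new Scan()));
    snapshotScans.put("snapshot_b", Arrays.asList(
        new Scan().withStartRow(Bytes.toBytes("a")).withStopRow(Bytes.toBytes("m")),
        new Scan().withStartRow(Bytes.toBytes("m")).withStopRow(Bytes.toBytes("z"))));
    return snapshotScans;
  }
}
```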
Methods in org.apache.hadoop.hbase.mapreduce that return Scan:

Modifier and Type | Method and Description |
---|---|
static Scan | TableMapReduceUtil.convertStringToScan(String base64): Converts the given Base64 string back into a Scan instance. |
static Scan | TableInputFormat.createScanFromConfiguration(org.apache.hadoop.conf.Configuration conf): Sets up a Scan instance, applying settings from the configuration property constants defined in TableInputFormat. |
Scan | TableSplit.getScan(): Returns a Scan object from the stored string representation. |
Scan | TableInputFormatBase.getScan(): Gets the scan defining the actual details like columns etc. |
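Scans travel into MapReduce jobs as Base64 strings stored in the job configuration; convertStringToScan above and convertScanToString (listed in a later table) form the round trip. A minimal sketch; the column family is illustrative:

```java
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanRoundTrip {
  public static void main(String[] args) throws Exception {
    Scan scan = new Scan().addFamily(Bytes.toBytes("cf"));  // illustrative family
    // Serialize for storage in a Configuration property...
    String encoded = TableMapReduceUtil.convertScanToString(scan);
    // ...and rebuild an equivalent Scan on the task side.
    Scan decoded = TableMapReduceUtil.convertStringToScan(encoded);
    System.out.println(decoded.getFamilyMap().size());  // prints 1
  }
}
```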
Methods in org.apache.hadoop.hbase.mapreduce that return types with arguments of type Scan:

Modifier and Type | Method and Description |
---|---|
protected List<Scan> | MultiTableInputFormatBase.getScans(): Allows subclasses to get the list of Scan objects. |
Methods in org.apache.hadoop.hbase.mapreduce with parameters of type Scan:

Modifier and Type | Method and Description |
---|---|
static void | TableInputFormat.addColumns(Scan scan, byte[][] columns): Adds an array of columns specified using old format, family:qualifier. |
static String | TableMapReduceUtil.convertScanToString(Scan scan): Writes the given scan into a Base64 encoded string. |
static void | IdentityTableMapper.initJob(String table, Scan scan, Class<? extends TableMapper> mapper, org.apache.hadoop.mapreduce.Job job): Use this before submitting a TableMap job. |
static void | GroupingTableMapper.initJob(String table, Scan scan, String groupColumns, Class<? extends TableMapper> mapper, org.apache.hadoop.mapreduce.Job job): Use this before submitting a TableMap job. |
static void | TableMapReduceUtil.initTableMapperJob(byte[] table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job): Use this before submitting a TableMap job. |
static void | TableMapReduceUtil.initTableMapperJob(byte[] table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars): Use this before submitting a TableMap job. |
static void | TableMapReduceUtil.initTableMapperJob(byte[] table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass): Use this before submitting a TableMap job. |
static void | TableMapReduceUtil.initTableMapperJob(String table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job): Use this before submitting a TableMap job. |
static void | TableMapReduceUtil.initTableMapperJob(String table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars): Use this before submitting a TableMap job. |
static void | TableMapReduceUtil.initTableMapperJob(String table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, boolean initCredentials, Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass): Use this before submitting a TableMap job. |
static void | TableMapReduceUtil.initTableMapperJob(String table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass): Use this before submitting a TableMap job. |
static void | TableMapReduceUtil.initTableMapperJob(TableName table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job): Use this before submitting a TableMap job. |
static void | TableMapReduceUtil.initTableSnapshotMapperJob(String snapshotName, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, org.apache.hadoop.fs.Path tmpRestoreDir): Sets up the job for reading from a table snapshot. |
static void | TableMapReduceUtil.initTableSnapshotMapperJob(String snapshotName, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, org.apache.hadoop.fs.Path tmpRestoreDir, org.apache.hadoop.hbase.util.RegionSplitter.SplitAlgorithm splitAlgo, int numSplitsPerRegion): Sets up the job for reading from a table snapshot. |
void | TableRecordReader.setScan(Scan scan): Sets the scan defining the actual details like columns etc. |
void | TableInputFormatBase.setScan(Scan scan): Sets the scan defining the actual details like columns etc. |
void | TableRecordReaderImpl.setScan(Scan scan): Sets the scan defining the actual details like columns etc. |
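Each initTableMapperJob overload wires a table, a Scan, and a TableMapper into a Hadoop Job in one call. A minimal sketch of the single-table String variant; the table name, job name, and mapper are illustrative:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class RowCountJob {
  // Hypothetical mapper: emits (rowKey, 1) for every row the Scan returns.
  public static class RowCountMapper extends TableMapper<Text, IntWritable> {
    @Override
    protected void map(ImmutableBytesWritable rowKey, Result columns, Context context)
        throws java.io.IOException, InterruptedException {
      context.write(new Text(rowKey.copyBytes()), new IntWritable(1));
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "row-count");  // illustrative job name
    job.setJarByClass(RowCountJob.class);
    Scan scan = new Scan();
    scan.setCaching(500);        // bigger RPC batches suit MapReduce throughput
    scan.setCacheBlocks(false);  // generally recommended off for full-table MR scans
    TableMapReduceUtil.initTableMapperJob(
        "my_table",              // illustrative table name
        scan, RowCountMapper.class, Text.class, IntWritable.class, job);
    job.setNumReduceTasks(0);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```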
Method parameters in org.apache.hadoop.hbase.mapreduce with type arguments of type Scan:

Modifier and Type | Method and Description |
---|---|
static void | TableMapReduceUtil.initMultiTableSnapshotMapperJob(Map<String,Collection<Scan>> snapshotScans, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, org.apache.hadoop.fs.Path tmpRestoreDir): Sets up the job for reading from one or more table snapshots, with one or more scans per snapshot. |
static void | TableMapReduceUtil.initTableMapperJob(List<Scan> scans, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job): Use this before submitting a Multi TableMap job. |
static void | TableMapReduceUtil.initTableMapperJob(List<Scan> scans, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars): Use this before submitting a Multi TableMap job. |
static void | TableMapReduceUtil.initTableMapperJob(List<Scan> scans, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, boolean initCredentials): Use this before submitting a Multi TableMap job. |
static void | MultiTableSnapshotInputFormat.setInput(org.apache.hadoop.conf.Configuration configuration, Map<String,Collection<Scan>> snapshotScans, org.apache.hadoop.fs.Path tmpRestoreDir) |
protected void | MultiTableInputFormatBase.setScans(List<Scan> scans): Allows subclasses to set the list of Scan objects. |
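The List<Scan> overloads of initTableMapperJob run one mapper over several tables; with this setup each Scan must carry the name of the table it targets as the Scan.SCAN_ATTRIBUTES_TABLE_NAME attribute. A minimal sketch; the table names and mapper types are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class MultiTableJobSetup {
  static void configure(Job job, Class<? extends TableMapper<Text, IntWritable>> mapper)
      throws java.io.IOException {
    List<Scan> scans = new ArrayList<>();
    for (String tableName : new String[] {"table_a", "table_b"}) {  // illustrative tables
      Scan scan = new Scan();
      // Tell the multi-table input format which table this Scan targets.
      scan.setAttribute(Scan.SCAN_ATTRIBUTES_TABLE_NAME, Bytes.toBytes(tableName));
      scans.add(scan);
    }
    TableMapReduceUtil.initTableMapperJob(scans, mapper, Text.class, IntWritable.class, job);
  }
}
```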
Constructors in org.apache.hadoop.hbase.mapreduce with parameters of type Scan:

Constructor and Description |
---|
TableSplit(byte[] tableName, Scan scan, byte[] startRow, byte[] endRow, String location): Deprecated. As of release 0.96 (HBASE-9508). This will be removed in HBase 2.0.0. Use TableSplit.TableSplit(TableName, byte[], byte[], String). |
TableSplit(TableName tableName, Scan scan, byte[] startRow, byte[] endRow, String location): Creates a new instance while assigning all variables. |
TableSplit(TableName tableName, Scan scan, byte[] startRow, byte[] endRow, String location, long length): Creates a new instance while assigning all variables. |
TableSplit(TableName tableName, Scan scan, byte[] startRow, byte[] endRow, String location, String encodedRegionName, long length): Creates a new instance while assigning all variables. |
Methods in org.apache.hadoop.hbase.rest.client with parameters of type Scan:

Modifier and Type | Method and Description |
---|---|
ResultScanner | RemoteHTable.getScanner(Scan scan) |
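RemoteHTable implements the Table interface over HBase's REST gateway, so the same Scan object drives a remote scan. A minimal sketch, assuming a REST server at a hypothetical host and port and a hypothetical table name:

```java
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.rest.client.Client;
import org.apache.hadoop.hbase.rest.client.Cluster;
import org.apache.hadoop.hbase.rest.client.RemoteHTable;
import org.apache.hadoop.hbase.util.Bytes;

public class RestScanExample {
  public static void main(String[] args) throws Exception {
    Cluster cluster = new Cluster();
    cluster.add("rest-host.example.com", 8080);  // hypothetical REST endpoint
    RemoteHTable table = new RemoteHTable(new Client(cluster), "my_table");  // hypothetical table
    try (ResultScanner scanner = table.getScanner(new Scan().addFamily(Bytes.toBytes("cf")))) {
      for (Result r : scanner) {
        System.out.println(Bytes.toString(r.getRow()));
      }
    } finally {
      table.close();
    }
  }
}
```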
Copyright © 2007–2019 The Apache Software Foundation. All rights reserved.