Modifier and Type | Method and Description |
---|---|
static Table | MetaTableAccessor.getMetaHTable(Connection connection) Callers should call close on the returned Table instance. |
Modifier and Type | Method and Description |
---|---|
private static Result | MetaTableAccessor.get(Table t, Get g) |
static void | MetaTableAccessor.multiMutate(Connection connection, Table table, byte[] row, List<Mutation> mutations) Performs an atomic multi-mutate operation against the given table. |
private static void | MetaTableAccessor.multiMutate(Connection connection, Table table, byte[] row, Mutation... mutations) |
private static void | MetaTableAccessor.put(Table t, Put p) |
Modifier and Type | Class and Description |
---|---|
class | HTable An implementation of Table. |
Modifier and Type | Field and Description |
---|---|
private Table | SecureBulkLoadClient.table |
Modifier and Type | Method and Description |
---|---|
Table | TableBuilder.build() Create the Table instance. |
Table | ConnectionImplementation.getTable(TableName tableName) |
default Table | Connection.getTable(TableName tableName) Retrieve a Table implementation for accessing a table (see the sketch below). |
default Table | Connection.getTable(TableName tableName, ExecutorService pool) Retrieve a Table implementation for accessing a table. |
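Connection.getTable is the usual entry point for obtaining a Table. Since Table extends Closeable and the Connection does not close tables on your behalf, try-with-resources is the idiomatic pattern. A minimal sketch, assuming a hypothetical table "mytable":

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class GetTableExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Connection is heavyweight and thread-safe; Table is a lightweight,
    // non-thread-safe handle that the caller must close.
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("mytable"))) {
      Result result = table.get(new Get(Bytes.toBytes("row1")));
      System.out.println(result); // prints the cells of row1, if any
    }
  }
}
```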
Constructor and Description |
---|
SecureBulkLoadClient(org.apache.hadoop.conf.Configuration conf, Table table) |
Modifier and Type | Method and Description |
---|---|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> | AggregationClient.avg(Table table, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan) This is the client side interface/handle for calling the average method for a given cf-cq combination. |
private <R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> | AggregationClient.getAvgArgs(Table table, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan) Computes the average while fetching the sum and row count from all the corresponding regions. |
private <R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> | AggregationClient.getMedianArgs(Table table, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan) Helps locate the region holding the median for a given column, whose weight may be specified in an optional column. |
private <R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> | AggregationClient.getStdArgs(Table table, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan) Computes a global standard deviation for a given column and its values. |
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> | AggregationClient.max(Table table, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan) Returns the maximum value of a column for a given column family over the given range. |
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> | AggregationClient.median(Table table, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan) This is the client side interface/handle for calling the median method for a given cf-cq combination. |
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> | AggregationClient.min(Table table, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan) Returns the minimum value of a column for a given column family over the given range. |
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> | AggregationClient.rowCount(Table table, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan) Returns the row count by summing the individual results obtained from the regions (see the sketch below). |
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> | AggregationClient.std(Table table, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan) This is the client side interface/handle for calling the std method for a given cf-cq combination. |
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> | AggregationClient.sum(Table table, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan) Sums the values returned from the various regions. |
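The aggregation methods above require the AggregateImplementation coprocessor to be loaded on the target table. A minimal sketch of the rowCount overload that takes a Table; the table name "mytable" and column "cf:qual" are hypothetical:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
import org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter;
import org.apache.hadoop.hbase.util.Bytes;

public class RowCountExample {
  public static void main(String[] args) throws Throwable {
    Configuration conf = HBaseConfiguration.create();
    AggregationClient aggregationClient = new AggregationClient(conf);
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("mytable"))) {
      Scan scan = new Scan();
      scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("qual"));
      // Fans the request out to every region of the table and sums the
      // per-region counts on the client.
      long rows = aggregationClient.rowCount(table,
          new LongColumnInterpreter(), scan);
      System.out.println("row count = " + rows);
    }
  }
}
```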
Modifier and Type | Method and Description |
---|---|
void | RefreshHFilesClient.refreshHFiles(Table table) |
Modifier and Type | Field and Description |
---|---|
private Table | TableRecordReaderImpl.htable |
private Table | TableInputFormatBase.table |
Modifier and Type | Method and Description |
---|---|
protected Table | TableInputFormatBase.getTable() Allows subclasses to get the Table. |
Modifier and Type | Method and Description |
---|---|
void | TableRecordReader.setHTable(Table htable) |
void | TableRecordReaderImpl.setHTable(Table htable) |
Modifier and Type | Field and Description |
---|---|
private Table | TableRecordReaderImpl.htable |
(package private) Table | SyncTable.SyncMapper.sourceTable |
private Table | TableInputFormatBase.table The Table to scan. |
(package private) Table | SyncTable.SyncMapper.targetTable |
Modifier and Type | Method and Description |
---|---|
protected Table | TableInputFormatBase.getTable() Allows subclasses to get the Table. |
private static Table | SyncTable.SyncMapper.openTable(Connection connection, org.apache.hadoop.conf.Configuration conf, String tableNameConfKey) |
Modifier and Type | Method and Description |
---|---|
static void | HFileOutputFormat2.configureIncrementalLoad(org.apache.hadoop.mapreduce.Job job, Table table, RegionLocator regionLocator) Configure a MapReduce Job to perform an incremental load into the given table (see the sketch below). |
void | TableRecordReaderImpl.setHTable(Table htable) Sets the HBase table. |
void | TableRecordReader.setTable(Table table) |
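A minimal job-setup sketch for HFileOutputFormat2.configureIncrementalLoad; the output path and table name are hypothetical, and the mapper (which must emit row keys plus Puts or KeyValues) is elided:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class IncrementalLoadSetup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "prepare-hfiles");
    // job.setMapperClass(...): hypothetical mapper emitting Puts, elided here.
    FileOutputFormat.setOutputPath(job, new Path("/tmp/hfiles"));
    TableName tableName = TableName.valueOf("mytable");
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(tableName);
         RegionLocator locator = connection.getRegionLocator(tableName)) {
      // Wires in the reducer, output format, and a total-order partitioner
      // aligned with the table's current region boundaries.
      HFileOutputFormat2.configureIncrementalLoad(job, table, locator);
    }
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```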
Modifier and Type | Field and Description |
---|---|
private Table | VerifyReplication.Verifier.replicatedTable |
private Table | VerifyReplication.Verifier.sourceTable |
Modifier and Type | Field and Description |
---|---|
private Table | TableNamespaceManager.nsTable |
Modifier and Type | Method and Description |
---|---|
private Table | TableNamespaceManager.getNamespaceTable() |
Modifier and Type | Method and Description |
---|---|
private NamespaceDescriptor | TableNamespaceManager.get(Table table, String name) |
Modifier and Type | Method and Description |
---|---|
private void | PartitionedMobCompactor.bulkloadRefFile(Connection connection, Table table, org.apache.hadoop.fs.Path bulkloadDirectory, String fileName) Bulkloads the current file. |
private List<org.apache.hadoop.fs.Path> | PartitionedMobCompactor.compactMobFilePartition(PartitionedMobCompactionRequest request, PartitionedMobCompactionRequest.CompactionPartition partition, List<HStoreFile> delFiles, Connection connection, Table table) Compacts a partition of selected small mob files and all the del files. |
private void | PartitionedMobCompactor.compactMobFilesInBatch(PartitionedMobCompactionRequest request, PartitionedMobCompactionRequest.CompactionPartition partition, Connection connection, Table table, List<HStoreFile> filesToCompact, int batch, org.apache.hadoop.fs.Path bulkloadPathOfPartition, org.apache.hadoop.fs.Path bulkloadColumnPath, List<org.apache.hadoop.fs.Path> newFiles) Compacts a partition of selected small mob files and all the del files in a batch. |
Modifier and Type | Field and Description |
---|---|
private Table | QuotaRetriever.table |
Modifier and Type | Method and Description |
---|---|
(package private) void | SnapshotQuotaObserverChore.persistSnapshotSizes(Table table, org.apache.hbase.thirdparty.com.google.common.collect.Multimap<TableName,SnapshotQuotaObserverChore.SnapshotWithSize> snapshotsWithSize) Writes the snapshot sizes to the provided table. |
(package private) void | SnapshotQuotaObserverChore.persistSnapshotSizesByNS(Table quotaTable, org.apache.hbase.thirdparty.com.google.common.collect.Multimap<TableName,SnapshotQuotaObserverChore.SnapshotWithSize> snapshotsWithSize) Rolls up the snapshot sizes by namespace and writes a single record for each namespace, which is the size of all snapshots in that namespace. |
Modifier and Type | Method and Description |
---|---|
private Table | ReplicationTableBase.getAndSetUpReplicationTable() Creates a new copy of the Replication Table and sets up the proper Table timeouts for it. |
protected Table | ReplicationTableBase.getOrBlockOnReplicationTable() Attempts to acquire the Replication Table. |
private Table | ReplicationTableBase.setReplicationTableTimeOuts(Table replicationTable) Increases the RPC and operation timeouts for the Replication Table. |
Modifier and Type | Method and Description |
---|---|
private Table | ReplicationTableBase.setReplicationTableTimeOuts(Table replicationTable) Increases the RPC and operation timeouts for the Replication Table. |
Modifier and Type | Method and Description |
---|---|
private void | HFileReplicator.cleanup(String stagingDir, Table table) |
private void | HFileReplicator.doBulkLoad(LoadIncrementalHFiles loadHFiles, Table table, Deque<LoadIncrementalHFiles.LoadQueueItem> queue, RegionLocator locator, int maxRetries) |
Modifier and Type | Method and Description |
---|---|
(package private) Table | RESTServlet.getTable(String tableName) Caller closes the table afterwards. |
Modifier and Type | Class and Description |
---|---|
class | RemoteHTable HTable interface to remote tables accessed via the REST gateway (see the sketch below). |
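A minimal sketch of RemoteHTable talking to a running REST gateway, using the Client and Cluster classes from the hbase-rest client package; the gateway host, port, and table name are hypothetical:

```java
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.rest.client.Client;
import org.apache.hadoop.hbase.rest.client.Cluster;
import org.apache.hadoop.hbase.rest.client.RemoteHTable;
import org.apache.hadoop.hbase.util.Bytes;

public class RemoteTableExample {
  public static void main(String[] args) throws Exception {
    Cluster cluster = new Cluster();
    cluster.add("rest-gateway-host", 8080); // hypothetical gateway endpoint
    Client client = new Client(cluster);
    // RemoteHTable exposes the Table API but routes each call over REST.
    try (Table table = new RemoteHTable(client, "mytable")) {
      Result result = table.get(new Get(Bytes.toBytes("row1")));
      System.out.println(result);
    }
  }
}
```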
Modifier and Type | Field and Description |
---|---|
private Table | RSGroupInfoManagerImpl.rsGroupTable |
Modifier and Type | Method and Description |
---|---|
(package private) static void | AccessControlLists.addUserPermission(org.apache.hadoop.conf.Configuration conf, UserPermission userPerm, Table t) |
(package private) static void | AccessControlLists.addUserPermission(org.apache.hadoop.conf.Configuration conf, UserPermission userPerm, Table t, boolean mergeExistingPermissions) Stores a new user permission grant in the access control lists table. |
private static org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.AccessControlService.BlockingInterface | AccessControlClient.getAccessControlServiceStub(Table ht) |
(package private) static org.apache.hbase.thirdparty.com.google.common.collect.ListMultimap<String,TablePermission> | AccessControlLists.getPermissions(org.apache.hadoop.conf.Configuration conf, byte[] entryName, Table t) Reads user permission assignments stored in the l: column family of the first table row in _acl_. |
(package private) static void | AccessControlLists.removeNamespacePermissions(org.apache.hadoop.conf.Configuration conf, String namespace, Table t) Removes the specified namespace from the acl table. |
private static void | AccessControlLists.removePermissionRecord(org.apache.hadoop.conf.Configuration conf, UserPermission userPerm, Table t) |
(package private) static void | AccessControlLists.removeTablePermissions(org.apache.hadoop.conf.Configuration conf, TableName tableName, byte[] column, Table t) Removes the specified table column from the acl table. |
(package private) static void | AccessControlLists.removeTablePermissions(org.apache.hadoop.conf.Configuration conf, TableName tableName, Table t) Removes the specified table from the _acl_ table. |
private static void | AccessControlLists.removeTablePermissions(TableName tableName, byte[] column, Table table, boolean closeTable) |
(package private) static void | AccessControlLists.removeUserPermission(org.apache.hadoop.conf.Configuration conf, UserPermission userPerm, Table t) Removes a previously granted permission from the stored access control lists. |
Modifier and Type | Method and Description |
---|---|
Table | ThriftServerRunner.HBaseHandler.getTable(byte[] tableName) Creates and returns a Table instance from a given table name. |
Table | ThriftServerRunner.HBaseHandler.getTable(ByteBuffer tableName) |
Modifier and Type | Method and Description |
---|---|
private void | ThriftServerRunner.HBaseHandler.closeTable(Table table) |
(package private) byte[][] | ThriftServerRunner.HBaseHandler.getAllColumns(Table table) Returns a list of all the column families for a given Table. |
Modifier and Type | Method and Description |
---|---|
private Table | ThriftHBaseServiceHandler.getTable(ByteBuffer tableName) |
Modifier and Type | Method and Description |
---|---|
private void | ThriftHBaseServiceHandler.closeTable(Table table) |
Modifier and Type | Method and Description |
---|---|
protected void | LoadIncrementalHFiles.bulkLoadPhase(Table table, Connection conn, ExecutorService pool, Deque<LoadIncrementalHFiles.LoadQueueItem> queue, org.apache.hbase.thirdparty.com.google.common.collect.Multimap<ByteBuffer,LoadIncrementalHFiles.LoadQueueItem> regionGroups, boolean copyFile, Map<LoadIncrementalHFiles.LoadQueueItem,ByteBuffer> item2RegionMap) Takes the LoadQueueItems (LQIs) grouped by likely regions and attempts to bulk load them. |
Map<LoadIncrementalHFiles.LoadQueueItem,ByteBuffer> | LoadIncrementalHFiles.doBulkLoad(Map<byte[],List<org.apache.hadoop.fs.Path>> map, Admin admin, Table table, RegionLocator regionLocator, boolean silence, boolean copyFile) Perform a bulk load of the given directory into the given pre-existing table. |
Map<LoadIncrementalHFiles.LoadQueueItem,ByteBuffer> | LoadIncrementalHFiles.doBulkLoad(org.apache.hadoop.fs.Path hfofDir, Admin admin, Table table, RegionLocator regionLocator) Perform a bulk load of the given directory into the given pre-existing table (see the sketch below). |
Map<LoadIncrementalHFiles.LoadQueueItem,ByteBuffer> | LoadIncrementalHFiles.doBulkLoad(org.apache.hadoop.fs.Path hfofDir, Admin admin, Table table, RegionLocator regionLocator, boolean silence, boolean copyFile) Perform a bulk load of the given directory into the given pre-existing table. |
protected Pair<List<LoadIncrementalHFiles.LoadQueueItem>,String> | LoadIncrementalHFiles.groupOrSplit(org.apache.hbase.thirdparty.com.google.common.collect.Multimap<ByteBuffer,LoadIncrementalHFiles.LoadQueueItem> regionGroups, LoadIncrementalHFiles.LoadQueueItem item, Table table, Pair<byte[][],byte[][]> startEndKeys) Attempts to assign the given load queue item to its target region group. |
private Pair<org.apache.hbase.thirdparty.com.google.common.collect.Multimap<ByteBuffer,LoadIncrementalHFiles.LoadQueueItem>,Set<String>> | LoadIncrementalHFiles.groupOrSplitPhase(Table table, ExecutorService pool, Deque<LoadIncrementalHFiles.LoadQueueItem> queue, Pair<byte[][],byte[][]> startEndKeys) |
void | LoadIncrementalHFiles.loadHFileQueue(Table table, Connection conn, Deque<LoadIncrementalHFiles.LoadQueueItem> queue, Pair<byte[][],byte[][]> startEndKeys) Used by the replication sink to load the hfiles from the source cluster. |
void | LoadIncrementalHFiles.loadHFileQueue(Table table, Connection conn, Deque<LoadIncrementalHFiles.LoadQueueItem> queue, Pair<byte[][],byte[][]> startEndKeys, boolean copyFile) Used by the replication sink to load the hfiles from the source cluster. |
private Map<LoadIncrementalHFiles.LoadQueueItem,ByteBuffer> | LoadIncrementalHFiles.performBulkLoad(Admin admin, Table table, RegionLocator regionLocator, Deque<LoadIncrementalHFiles.LoadQueueItem> queue, ExecutorService pool, SecureBulkLoadClient secureClient, boolean copyFile) |
void | LoadIncrementalHFiles.prepareHFileQueue(Map<byte[],List<org.apache.hadoop.fs.Path>> map, Table table, Deque<LoadIncrementalHFiles.LoadQueueItem> queue, boolean silence) Prepares a collection of LoadIncrementalHFiles.LoadQueueItem from the list of source HFiles contained in the passed directory and validates whether the prepared queue has all the valid table column families in it. |
void | LoadIncrementalHFiles.prepareHFileQueue(org.apache.hadoop.fs.Path hfilesDir, Table table, Deque<LoadIncrementalHFiles.LoadQueueItem> queue, boolean validateHFile) Prepares a collection of LoadIncrementalHFiles.LoadQueueItem from the list of source HFiles contained in the passed directory and validates whether the prepared queue has all the valid table column families in it. |
void | LoadIncrementalHFiles.prepareHFileQueue(org.apache.hadoop.fs.Path hfilesDir, Table table, Deque<LoadIncrementalHFiles.LoadQueueItem> queue, boolean validateHFile, boolean silence) Prepares a collection of LoadIncrementalHFiles.LoadQueueItem from the list of source HFiles contained in the passed directory and validates whether the prepared queue has all the valid table column families in it. |
private List<LoadIncrementalHFiles.LoadQueueItem> | LoadIncrementalHFiles.splitStoreFile(LoadIncrementalHFiles.LoadQueueItem item, Table table, byte[] startKey, byte[] splitKey) |
private void | LoadIncrementalHFiles.validateFamiliesInHFiles(Table table, Deque<LoadIncrementalHFiles.LoadQueueItem> queue, boolean silence) Checks whether there is any invalid family name in HFiles to be bulk loaded. |
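A minimal driver sketch for doBulkLoad, assuming the HFiles under /tmp/hfiles were produced by HFileOutputFormat2 for a hypothetical table "mytable". (In 2.x, LoadIncrementalHFiles lives in org.apache.hadoop.hbase.tool; earlier releases place it under org.apache.hadoop.hbase.mapreduce.)

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.tool.LoadIncrementalHFiles;

public class BulkLoadExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName tableName = TableName.valueOf("mytable");
    LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin();
         Table table = connection.getTable(tableName);
         RegionLocator locator = connection.getRegionLocator(tableName)) {
      // Groups HFiles by region and hands each group to its RegionServer;
      // files that span region boundaries are split first.
      loader.doBulkLoad(new Path("/tmp/hfiles"), admin, table, locator);
    }
  }
}
```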
Modifier and Type | Field and Description |
---|---|
private Table | HBaseFsck.meta |
Modifier and Type | Method and Description |
---|---|
Table | ConnectionCache.getTable(String tableName) Caller closes the table afterwards. |
Modifier and Type | Method and Description |
---|---|
(package private) static void | HelloHBase.deleteRow(Table table) Invokes Table#delete to delete test data. |
(package private) static void | HelloHBase.getAndPrintRowContents(Table table) Invokes Table#get and prints out the contents of the retrieved row. |
(package private) static void | HelloHBase.putRowToTable(Table table) Invokes Table#put to store a row (with two new columns created 'on the fly') into the table. |
Modifier and Type | Method and Description |
---|---|
(package private) static void | HelloHBase.deleteRow(Table table) Invokes Table#delete to delete test data. |
(package private) static void | HelloHBase.getAndPrintRowContents(Table table) Invokes Table#get and prints out the contents of the retrieved row. |
(package private) static void | HelloHBase.putRowToTable(Table table) Invokes Table#put to store a row (with two new columns created 'on the fly') into the table (see the sketch below). |
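In the spirit of the HelloHBase helpers above, a minimal sketch of a Table#put / Table#get / Table#delete round trip; the table, family, and qualifier names are hypothetical:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HelloTableExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    byte[] row = Bytes.toBytes("row1");
    byte[] family = Bytes.toBytes("cf"); // hypothetical column family
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("test"))) {
      // Put: create two columns "on the fly" in a single row.
      Put put = new Put(row);
      put.addColumn(family, Bytes.toBytes("a"), Bytes.toBytes("value-a"));
      put.addColumn(family, Bytes.toBytes("b"), Bytes.toBytes("value-b"));
      table.put(put);
      // Get: read the row back and print its contents.
      Result result = table.get(new Get(row));
      System.out.println(result);
      // Delete: remove the test row.
      table.delete(new Delete(row));
    }
  }
}
```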
Copyright © 2007–2019 The Apache Software Foundation. All rights reserved.