Uses of Class
org.apache.hadoop.hbase.TableName
Packages that use TableName
Package
Description
Provides HBase Client
Provides implementations of HFile and HFile BlockCache.
Tools to help define network clients and servers.
Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods.
Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods.
The Region Normalizer subsystem is responsible for coaxing all the regions in a table toward a "normal" size, according to their storefile size.
Multi Cluster Replication
HBase REST
Provides an HBase Thrift service.
Provides an HBase Thrift service.
This package provides fully-functional exemplar Java code demonstrating simple usage of the hbase-client API, for incorporation into a Maven archetype with hbase-client dependency.
This package provides fully-functional exemplar Java code demonstrating simple usage of the hbase-client API, for incorporation into a Maven archetype with hbase-shaded-client dependency.
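The sections that follow index TableName usage package by package. As a quick orientation, here is a minimal, hedged sketch of obtaining TableName instances through the static valueOf factory methods and constants indexed under org.apache.hadoop.hbase below; the namespace and table names are purely illustrative.

    import org.apache.hadoop.hbase.TableName;

    public class TableNameSketch {
      public static void main(String[] args) {
        // Fully qualified "namespace:qualifier" form (hypothetical table).
        TableName orders = TableName.valueOf("sales:orders");
        // Namespace and qualifier supplied separately (hypothetical table).
        TableName users = TableName.valueOf("default", "users");
        // byte[] overload, as indexed in the org.apache.hadoop.hbase section below.
        TableName fromBytes = TableName.valueOf("default:users".getBytes());
        // Well-known constant for the catalog table.
        TableName meta = TableName.META_TABLE_NAME;

        System.out.println(orders.getNameAsString());
        System.out.println(users.getNamespaceAsString() + " / " + users.getQualifierAsString());
        System.out.println(fromBytes + " , " + TableName.isMetaTableName(meta));
      }
    }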
Uses of TableName in org.apache.hadoop.hbase
Fields in org.apache.hadoop.hbase declared as TableNameModifier and TypeFieldDescriptionstatic final TableNameHConstants.ENSEMBLE_TABLE_NAMEThe name of the ensemble tablestatic final TableNameTableName.META_TABLE_NAMEThe hbase:meta table's name.static final TableNameTableName.NAMESPACE_TABLE_NAMEDeprecated.since 3.0.0 and will be removed in 4.0.0.static final TableNameTableName.OLD_META_TABLE_NAMETableName for old .META.static final TableNameTableName.OLD_ROOT_TABLE_NAMETableName for old -ROOT- table.Fields in org.apache.hadoop.hbase with type parameters of type TableNameModifier and TypeFieldDescriptionTableName.tableCacheprivate final Map<TableName,RegionStatesCount> ClusterMetricsBuilder.ClusterMetricsImpl.tableRegionStatesCountprivate Map<TableName,RegionStatesCount> ClusterMetricsBuilder.tableRegionStatesCountMethods in org.apache.hadoop.hbase that return TableNameModifier and TypeMethodDescriptionprivate static TableNameTableName.createTableNameIfNecessary(ByteBuffer bns, ByteBuffer qns) Check that the object does not exist already.private static TableNameTableName.getADummyTableName(String qualifier) It is used to create table names for old META, and ROOT table.static TableNameTableName.valueOf(byte[] fullName) Construct a TableNamestatic TableNameTableName.valueOf(byte[] namespace, byte[] qualifier) static TableNameTableName.valueOf(byte[] fullName, int offset, int length) Construct a TableNamestatic TableNameConstruct a TableNamestatic TableNamestatic TableNameTableName.valueOf(ByteBuffer fullname) Construct a TableNamestatic TableNameTableName.valueOf(ByteBuffer namespace, ByteBuffer qualifier) Methods in org.apache.hadoop.hbase that return types with arguments of type TableNameModifier and TypeMethodDescriptionClusterMetrics.getTableRegionStatesCount()Provide region states count for given table.ClusterMetricsBuilder.ClusterMetricsImpl.getTableRegionStatesCount()static Map<TableName,TableState> MetaTableAccessor.getTableStates(Connection conn) Fetch table states from META tableMethods in org.apache.hadoop.hbase with parameters of type TableNameModifier and TypeMethodDescriptionvoidDeprecated.Compact all of a table's reagion in the mini hbase clustervoidDeprecated.Call flushCache on all regions of the specified table.intintDeprecated.Return the number of rows in the given table.HBaseTestingUtility.createLocalHRegion(TableName tableName, byte[] startKey, byte[] stopKey, org.apache.hadoop.conf.Configuration conf, boolean isReadOnly, Durability durability, WAL wal, byte[]... families) Deprecated.Return a region on which you must callHBaseTestingUtility.closeRegionAndWAL(HRegion)when done.HBaseTestingUtility.createLocalHRegionWithInMemoryFlags(TableName tableName, byte[] startKey, byte[] stopKey, org.apache.hadoop.conf.Configuration conf, boolean isReadOnly, Durability durability, WAL wal, boolean[] compactedMemStore, byte[]... 
families) Deprecated.HBaseTestingUtility.createModifyableTableDescriptor(TableName name, int minVersions, int versions, int ttl, KeepDeletedCells keepDeleted) Deprecated.HBaseTestingUtility.createMultiRegionTable(TableName tableName, byte[] family) Deprecated.Create a table with multiple regions.HBaseTestingUtility.createMultiRegionTable(TableName tableName, byte[][] families) Deprecated.Create a table with multiple regions.HBaseTestingUtility.createMultiRegionTable(TableName tableName, byte[][] families, int numVersions) Deprecated.Create a table with multiple regions.HBaseTestingUtility.createMultiRegionTable(TableName tableName, byte[] family, int numRegions) Deprecated.Create a table with multiple regions.HBaseTestingUtility.createMultiRegionTable(TableName tableName, int replicaCount, byte[][] families) Deprecated.Create a table with multiple regions.static intHBaseTestingUtility.createPreSplitLoadTestTable(org.apache.hadoop.conf.Configuration conf, TableName tableName, byte[][] columnFamilies, Compression.Algorithm compression, DataBlockEncoding dataBlockEncoding, int numRegionsPerServer, int regionReplication, Durability durability) Deprecated.Creates a pre-split table for load testing.static intHBaseTestingUtility.createPreSplitLoadTestTable(org.apache.hadoop.conf.Configuration conf, TableName tableName, byte[] columnFamily, Compression.Algorithm compression, DataBlockEncoding dataBlockEncoding) Deprecated.Creates a pre-split table for load testing.static intHBaseTestingUtility.createPreSplitLoadTestTable(org.apache.hadoop.conf.Configuration conf, TableName tableName, byte[] columnFamily, Compression.Algorithm compression, DataBlockEncoding dataBlockEncoding, int numRegionsPerServer, int regionReplication, Durability durability) Deprecated.Creates a pre-split table for load testing.HBaseTestingUtility.createRandomTable(TableName tableName, Collection<String> families, int maxVersions, int numColsPerRow, int numFlushes, int numRegions, int numRowsPerFlush) Deprecated.Creates a random table with the given parametersHBaseTestingUtility.createTable(TableName tableName, byte[] family) Deprecated.Create a table.HBaseTestingUtility.createTable(TableName tableName, byte[][] families) Deprecated.Create a table.HBaseTestingUtility.createTable(TableName tableName, byte[][] families, byte[][] splitKeys) Deprecated.Create a table.HBaseTestingUtility.createTable(TableName tableName, byte[][] families, byte[][] splitKeys, int replicaCount) Deprecated.Create a table.HBaseTestingUtility.createTable(TableName tableName, byte[][] families, byte[][] splitKeys, int replicaCount, org.apache.hadoop.conf.Configuration c) Deprecated.Create a table.HBaseTestingUtility.createTable(TableName tableName, byte[][] families, int numVersions) Deprecated.Create a table.HBaseTestingUtility.createTable(TableName tableName, byte[][] families, int[] numVersions) Deprecated.Create a table.HBaseTestingUtility.createTable(TableName tableName, byte[][] families, int numVersions, byte[][] splitKeys) Deprecated.Create a table.HBaseTestingUtility.createTable(TableName tableName, byte[][] families, int numVersions, byte[] startKey, byte[] endKey, int numRegions) Deprecated.HBaseTestingUtility.createTable(TableName tableName, byte[][] families, int numVersions, int blockSize) Deprecated.Create a table.HBaseTestingUtility.createTable(TableName tableName, byte[][] families, int numVersions, int blockSize, String cpName) Deprecated.HBaseTestingUtility.createTable(TableName tableName, byte[] family, byte[][] splitRows) 
Deprecated.Create a table.HBaseTestingUtility.createTable(TableName tableName, byte[] family, int numVersions) Deprecated.Create a table.HBaseTestingUtility.createTable(TableName tableName, String family) Deprecated.Create a table.HBaseTestingUtility.createTable(TableName tableName, String[] families) Deprecated.Create a table.HBaseTestingUtility.createTableDescriptor(TableName name) Deprecated.Create a table of namename.HBaseTestingUtility.createTableDescriptor(TableName tableName, byte[] family) Deprecated.HBaseTestingUtility.createTableDescriptor(TableName tableName, byte[][] families, int maxVersions) Deprecated.HBaseTestingUtility.createTableDescriptor(TableName name, int minVersions, int versions, int ttl, KeepDeletedCells keepDeleted) Deprecated.voidHBaseTestingUtility.deleteTable(TableName tableName) Deprecated.Drop an existing tableHBaseTestingUtility.deleteTableData(TableName tableName) Deprecated.Provide an existing table name to truncate.voidHBaseTestingUtility.deleteTableIfAny(TableName tableName) Deprecated.Drop an existing tablestatic voidMetaTableAccessor.deleteTableState(Connection connection, TableName table) Remove state for table from metadefault booleanTest whether a given table exists, i.e, has a table descriptor.HBaseTestingUtility.explainTableAvailability(TableName tableName) Deprecated.HBaseTestingUtility.explainTableState(TableName table, TableState.State state) Deprecated.HBaseTestingUtility.findLastTableState(TableName table) Deprecated.MiniHBaseCluster.findRegionsForTable(TableName tableName) Deprecated.voidDeprecated.Flushes all caches in the mini hbase clustervoidMiniHBaseCluster.flushcache(TableName tableName) Deprecated.Call flushCache on all regions of the specified table.Returns TableDescriptor for tablenamestatic CellComparatorCellComparatorImpl.getCellComparator(TableName tableName) Utility method that makes a guess at comparator to use based off passed tableName.private static RegionInfoMetaTableAccessor.getClosestRegionInfo(Connection connection, TableName tableName, byte[] row) Returns Get closest metatable region row to passedrowstatic CellComparatorInnerStoreCellComparator.getInnerStoreCellComparator(TableName tableName) Utility method that makes a guess at comparator to use based off passed tableName.default longClusterMetrics.getLastMajorCompactionTimestamp(TableName table) List<byte[]>HBaseTestingUtility.getMetaTableRows(TableName tableName) Deprecated.Returns all rows from the hbase:meta table for a given user tableintHBaseTestingUtility.getNumHFiles(TableName tableName, byte[] family) Deprecated.intHBaseTestingUtility.getNumHFilesForRS(HRegionServer rs, TableName tableName, byte[] family) Deprecated.private List<RegionInfo>HBaseTestingUtility.getRegions(TableName tableName) Deprecated.Returns all regions of the specified tableMiniHBaseCluster.getRegions(TableName tableName) Deprecated.MockRegionServerServices.getRegions(TableName tableName) HBaseTestingUtility.getRSForFirstRegionInTable(TableName tableName) Deprecated.Tool to get the reference to the region server object that holds the region of the specified user table.static ScanMetaTableAccessor.getScanForTableName(org.apache.hadoop.conf.Configuration conf, TableName tableName) This method creates a Scan object that will only scan catalog rows that belong to the specified table.abstract ServerNameHBaseCluster.getServerHoldingRegion(TableName tn, byte[] regionName) Deprecated.Get the ServerName of region server serving the specified regionMiniHBaseCluster.getServerHoldingRegion(TableName tn, 
byte[] regionName) Deprecated.HBaseTestingUtility.getSplittableRegion(TableName tableName, int maxAttempts) Deprecated.Retrieves a splittable region randomly from tableNamestatic CompletableFuture<List<HRegionLocation>>ClientMetaTableAccessor.getTableHRegionLocations(AsyncTable<AdvancedScanResultConsumer> metaTable, TableName tableName) Used to get all region locations for the specific tablestatic List<RegionInfo>MetaTableAccessor.getTableRegions(Connection connection, TableName tableName) Gets all of the regions of the specified table.static List<RegionInfo>MetaTableAccessor.getTableRegions(Connection connection, TableName tableName, boolean excludeOfflinedSplitParents) Gets all of the regions of the specified table.private static CompletableFuture<List<Pair<RegionInfo,ServerName>>> ClientMetaTableAccessor.getTableRegionsAndLocations(AsyncTable<AdvancedScanResultConsumer> metaTable, TableName tableName, boolean excludeOfflinedSplitParents) Used to get table regions' info and server.static List<Pair<RegionInfo,ServerName>> MetaTableAccessor.getTableRegionsAndLocations(Connection connection, TableName tableName) Do not use this method to get meta table regions, use methods in MetaTableLocator instead.static List<Pair<RegionInfo,ServerName>> MetaTableAccessor.getTableRegionsAndLocations(Connection connection, TableName tableName, boolean excludeOfflinedSplitParents) Do not use this method to get meta table regions, use methods in MetaTableLocator instead.static byte[]ClientMetaTableAccessor.getTableStartRowForMeta(TableName tableName, ClientMetaTableAccessor.QueryType type) Returns start row for scanning META according to query typestatic CompletableFuture<Optional<TableState>>ClientMetaTableAccessor.getTableState(AsyncTable<?> metaTable, TableName tableName) static TableStateMetaTableAccessor.getTableState(Connection conn, TableName tableName) Fetch table state for given table from META tablestatic byte[]ClientMetaTableAccessor.getTableStopRowForMeta(TableName tableName, ClientMetaTableAccessor.QueryType type) Returns stop row for scanning META according to query typestatic booleanTableName.isMetaTableName(TableName tn) Returns True iftnis the hbase:meta table name.<any>HBaseTestingUtility.predicateTableAvailable(TableName tableName) Deprecated.Returns aPredicatefor checking that table is enabled<any>HBaseTestingUtility.predicateTableDisabled(TableName tableName) Deprecated.Returns aPredicatefor checking that table is enabled<any>HBaseTestingUtility.predicateTableEnabled(TableName tableName) Deprecated.Returns aPredicatefor checking that table is enabledReturns Instance of table descriptor or null if none found.booleanMockRegionServerServices.reportFileArchivalForQuotas(TableName tableName, Collection<Map.Entry<String, Long>> archivedFiles) private static CompletableFuture<Void>ClientMetaTableAccessor.scanMeta(AsyncTable<AdvancedScanResultConsumer> metaTable, TableName tableName, ClientMetaTableAccessor.QueryType type, ClientMetaTableAccessor.Visitor visitor) Performs a scan of META table for given table.static voidMetaTableAccessor.scanMeta(Connection connection, ClientMetaTableAccessor.Visitor visitor, TableName tableName, byte[] row, int rowLimit) Performs a scan of META table for given table starting from given row.private static voidMetaTableAccessor.scanMeta(Connection connection, TableName table, ClientMetaTableAccessor.QueryType type, int maxRows, ClientMetaTableAccessor.Visitor visitor) static voidMetaTableAccessor.scanMetaForTableRegions(Connection connection, 
ClientMetaTableAccessor.Visitor visitor, TableName tableName) static voidHBaseTestingUtility.setReplicas(Admin admin, TableName table, int replicaCount) Deprecated.Set the number of Region replicas.static CompletableFuture<Boolean>ClientMetaTableAccessor.tableExists(AsyncTable<?> metaTable, TableName tableName) HBaseTestingUtility.truncateTable(TableName tableName) Deprecated.Truncate a table using the admin command.HBaseTestingUtility.truncateTable(TableName tableName, boolean preserveRegions) Deprecated.Truncate a table using the admin command.static voidMetaTableAccessor.updateTableState(Connection conn, TableName tableName, TableState.State actual) Updates state in META Do not use.voidHBaseTestingUtility.waitTableAvailable(TableName table) Deprecated.Wait until all regions in a table have been assigned.voidHBaseTestingUtility.waitTableAvailable(TableName table, long timeoutMillis) Deprecated.voidHBaseTestingUtility.waitTableDisabled(TableName table, long millisTimeout) Deprecated.voidHBaseTestingUtility.waitTableEnabled(TableName table) Deprecated.Waits for a table to be 'enabled'.voidHBaseTestingUtility.waitTableEnabled(TableName table, long timeoutMillis) Deprecated.voidHBaseTestingUtility.waitUntilAllRegionsAssigned(TableName tableName) Deprecated.Wait until all regions for a table in hbase:meta have a non-empty info:server, up to a configuable timeout value (default is 60 seconds) This means all regions have been deployed, master has been informed and updated hbase:meta with the regions deployed server.voidHBaseTestingUtility.waitUntilAllRegionsAssigned(TableName tableName, long timeout) Deprecated.Wait until all regions for a table in hbase:meta have a non-empty info:server, or until timeout.Method parameters in org.apache.hadoop.hbase with type arguments of type TableNameModifier and TypeMethodDescriptionClusterMetricsBuilder.setTableRegionStatesCount(Map<TableName, RegionStatesCount> tableRegionStatesCount) Constructors in org.apache.hadoop.hbase with parameters of type TableNameModifierConstructorDescriptionConcurrentTableModificationException(TableName tableName) TableExistsException(TableName tableName) TableNotDisabledException(TableName tableName) TableNotEnabledException(TableName tableName) TableNotFoundException(TableName tableName) -
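Several of the static helpers indexed above take a Connection together with a TableName. As a rough, hedged sketch (the connection setup and table name are illustrative, and MetaTableAccessor is an internal helper, so treat this as orientation rather than a supported recipe), fetching a table's regions and state from hbase:meta might look like the following.

    import java.util.List;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.MetaTableAccessor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionInfo;
    import org.apache.hadoop.hbase.client.TableState;

    public class MetaLookupSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        TableName table = TableName.valueOf("default", "users"); // hypothetical table
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          // Both calls appear above as static members of MetaTableAccessor.
          List<RegionInfo> regions = MetaTableAccessor.getTableRegions(conn, table);
          TableState state = MetaTableAccessor.getTableState(conn, table);
          System.out.println(table + " has " + regions.size() + " region(s), state=" + state);
        }
      }
    }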
Uses of TableName in org.apache.hadoop.hbase.backup
Fields in org.apache.hadoop.hbase.backup declared as TableNameModifier and TypeFieldDescriptionprivate TableName[]RestoreRequest.fromTablesprivate TableNameBackupTableInfo.tableprivate TableName[]RestoreRequest.toTablesFields in org.apache.hadoop.hbase.backup with type parameters of type TableNameModifier and TypeFieldDescriptionprivate Map<TableName,BackupTableInfo> BackupInfo.backupTableInfoMapBackup status map for all tablesBackupInfo.incrTimestampMapPrevious Region server log timestamps for table set after distributed log roll key - table name, value - map of RegionServer hostname -> last log rolled timestampBackupRequest.tableListBackupInfo.tableSetTimestampMapNew region server log timestamps for table set after distributed log roll key - table name, value - map of RegionServer hostname -> last log rolled timestampMethods in org.apache.hadoop.hbase.backup that return TableNameModifier and TypeMethodDescriptionRestoreRequest.getFromTables()BackupTableInfo.getTable()BackupInfo.getTableBySnapshot(String snapshotName) RestoreRequest.getToTables()Methods in org.apache.hadoop.hbase.backup that return types with arguments of type TableNameModifier and TypeMethodDescriptionBackupHFileCleaner.fetchFullyBackedUpTables(BackupSystemTable tbl) BackupInfo.getIncrTimestampMap()Get new region server log timestamps after distributed log rollBackupRequest.getTableList()BackupInfo.getTableNames()BackupInfo.getTables()BackupInfo.getTableSetTimestampMap()BackupInfo.getTableSetTimestampMap(Map<String, org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupInfo.RSTimestampMap> map) private static Map<TableName,BackupTableInfo> BackupInfo.toMap(List<org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupTableInfo> list) Methods in org.apache.hadoop.hbase.backup with parameters of type TableNameModifier and TypeMethodDescriptionvoidvoidBackupAdmin.addToBackupSet(String name, TableName[] tables) Add tables to backup set commandprivate voidBackupMasterObserver.deleteBulkLoads(org.apache.hadoop.conf.Configuration config, TableName tableName, Predicate<BulkLoad> filter) Deletes all bulk load entries for the given table, matching the provided predicate.BackupInfo.getBackupTableInfo(TableName table) BackupInfo.getSnapshotName(TableName table) BackupInfo.getTableBackupDir(TableName tableName) static StringHBackupFileSystem.getTableBackupDir(String backupRootDir, String backupId, TableName tableName) Given the backup root dir, backup id and the table name, return the backup image location.static org.apache.hadoop.fs.PathHBackupFileSystem.getTableBackupPath(TableName tableName, org.apache.hadoop.fs.Path backupRootPath, String backupId) Given the backup root dir, backup id and the table name, return the backup image location, which is also where the backup manifest file is.voidBackupMasterObserver.postDeleteTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) voidBackupMasterObserver.postModifyTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, TableDescriptor oldDescriptor, TableDescriptor currentDescriptor) voidBackupMasterObserver.postTruncateTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) voidBackupAdmin.removeFromBackupSet(String name, TableName[] tables) Remove tables from backup setvoidRestoreJob.run(org.apache.hadoop.fs.Path[] dirPaths, TableName[] fromTables, org.apache.hadoop.fs.Path restoreRootDir, TableName[] toTables, boolean fullBackupRestore) Run restore operationprivate 
RestoreRequestRestoreRequest.setFromTables(TableName[] fromTables) voidBackupInfo.setSnapshotName(TableName table, String snapshotName) private RestoreRequestRestoreRequest.setToTables(TableName[] toTables) RestoreRequest.Builder.withFromTables(TableName[] fromTables) RestoreRequest.Builder.withToTables(TableName[] toTables) Method parameters in org.apache.hadoop.hbase.backup with type arguments of type TableNameModifier and TypeMethodDescriptionvoidBackupInfo.setBackupTableInfoMap(Map<TableName, BackupTableInfo> backupTableInfoMap) voidSet the new region server log timestamps after distributed log rollprivate BackupRequestBackupRequest.setTableList(List<TableName> tableList) voidvoidBackupRequest.Builder.withTableList(List<TableName> tables) Constructors in org.apache.hadoop.hbase.backup with parameters of type TableNameModifierConstructorDescriptionBackupInfo(String backupId, BackupType type, TableName[] tables, String targetRootDir) BackupTableInfo(TableName table, String targetRootDir, String backupId) -
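The BackupAdmin methods indexed above manage backup sets using TableName arrays. A small, hedged sketch follows; how the BackupAdmin instance is obtained is outside this index and therefore assumed, and the set and table names are illustrative.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.backup.BackupAdmin;

    public class BackupSetSketch {
      // Adds the given user tables to a named backup set, per the listing above.
      static void addTables(BackupAdmin backupAdmin, String setName) throws IOException {
        TableName[] tables = new TableName[] {
          TableName.valueOf("default", "users"),   // hypothetical tables
          TableName.valueOf("default", "orders")
        };
        backupAdmin.addToBackupSet(setName, tables);
      }

      // Removes a table again via the companion method from the same listing.
      static void removeTable(BackupAdmin backupAdmin, String setName) throws IOException {
        backupAdmin.removeFromBackupSet(setName,
            new TableName[] { TableName.valueOf("default", "users") });
      }
    }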
Uses of TableName in org.apache.hadoop.hbase.backup.impl
Fields in org.apache.hadoop.hbase.backup.impl declared as TableNameModifier and TypeFieldDescriptionprivate TableNameBackupSystemTable.bulkLoadTableNameBackup System table name for bulk loaded files.private final TableNameMergeSplitBulkloadInfo.srcTableprivate TableName[]RestoreTablesClient.sTableArrayprivate TableNameBackupSystemTable.tableNameBackup system table (main) nameprivate final TableNameBulkLoad.tableNameprivate TableName[]RestoreTablesClient.tTableArrayFields in org.apache.hadoop.hbase.backup.impl with type parameters of type TableNameModifier and TypeFieldDescriptionBackupManifest.BackupImage.incrTimeRangesColumnFamilyMismatchException.ColumnFamilyMismatchExceptionBuilder.mismatchedTablesColumnFamilyMismatchException.mismatchedTablesBackupManifest.BackupImage.tableListTableBackupClient.tableListMethods in org.apache.hadoop.hbase.backup.impl that return TableNameModifier and TypeMethodDescriptionMergeSplitBulkloadInfo.getSrcTable()private TableNameBackupCommands.HistoryCommand.getTableName()static TableNameBackupSystemTable.getTableName(org.apache.hadoop.conf.Configuration conf) BulkLoad.getTableName()static TableNameBackupSystemTable.getTableNameForBulkLoadedData(org.apache.hadoop.conf.Configuration conf) private TableName[]BackupCommands.BackupSetCommand.toTableNames(String[] tables) Methods in org.apache.hadoop.hbase.backup.impl that return types with arguments of type TableNameModifier and TypeMethodDescriptionBackupSystemTable.describeBackupSet(String name) Get backup set description (list of tables)BackupAdminImpl.excludeNonExistingTables(List<TableName> tableList, List<TableName> nonExistingTableList) BackupSystemTable.getBackupHistoryForTableSet(Set<TableName> set, String backupRoot) Goes through all backup history corresponding to the provided root folder, and collects all backup info mentioning each of the provided tables.IncrementalTableBackupClient.getFullBackupIds()BackupManager.getIncrementalBackupTableSet()Return the current tables covered by incremental backup.BackupSystemTable.getIncrementalBackupTableSet(String backupRoot) Return the current tables covered by incremental backup.BackupManifest.BackupImage.getIncrTimeRanges()BackupManifest.getIncrTimestampMap()ColumnFamilyMismatchException.getMismatchedTables()BackupManifest.getTableList()Get the table set of this image.BackupManifest.BackupImage.getTableNames()BackupSystemTable.getTablesIncludedInBackups()Retrieve all table names that are part of any known backupBackupManifest.BackupImage.loadIncrementalTimestampMap(org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupImage proto) BackupManager.readLogTimestampMap()Read the timestamp for each region server log after the last successful backup.BackupSystemTable.readLogTimestampMap(String backupRoot) Read the timestamp for each region server log after the last successful backup.Methods in org.apache.hadoop.hbase.backup.impl with parameters of type TableNameModifier and TypeMethodDescriptionColumnFamilyMismatchException.ColumnFamilyMismatchExceptionBuilder.addMismatchedTable(TableName tableName, ColumnFamilyDescriptor[] currentCfs, ColumnFamilyDescriptor[] backupCfs) voidBackupAdminImpl.addToBackupSet(String name, TableName[] tables) private voidRestoreTablesClient.checkTargetTables(TableName[] tTableArray, boolean isOverwrite) Validate target tables.private voidBackupAdminImpl.cleanupBackupDir(BackupInfo backupInfo, TableName table, org.apache.hadoop.conf.Configuration conf) Clean up the data at target 
directoryBackupSystemTable.createPutForBulkLoad(TableName table, byte[] region, Map<byte[], List<org.apache.hadoop.fs.Path>> columnFamilyToHFilePaths) Creates Put's for bulk loads.private PutBackupSystemTable.createPutForWriteRegionServerLogTimestamp(TableName table, byte[] smap, String backupRoot) Creates Put to write RS last roll log timestamp map(package private) static ScanBackupSystemTable.createScanForOrigBulkLoadedFiles(TableName table) Creates a scan to read all registered bulk loads for the given table, or for all tables iftableisnull.private static voidBackupSystemTable.ensureTableEnabled(Admin admin, TableName tableName) private List<BackupInfo>BackupAdminImpl.getAffectedBackupSessions(BackupInfo backupInfo, TableName tn, BackupSystemTable table) BackupSystemTable.getBackupHistoryForTable(TableName name) Get history for a tableprotected org.apache.hadoop.fs.PathIncrementalTableBackupClient.getBulkOutputDirForTable(TableName table) BackupManifest.getDependentListByTable(TableName table) Get the dependent image list for a specific table of this backup in time order from old to new if want to restore to this backup image level.protected static intprivate org.apache.hadoop.fs.PathIncrementalTableBackupClient.getTargetDirForTable(TableName table) booleanprivate voidIncrementalTableBackupClient.incrementalCopyBulkloadHFiles(org.apache.hadoop.fs.FileSystem tgtFs, TableName tn) private booleanBackupAdminImpl.isLastBackupSession(BackupSystemTable table, TableName tn, long startTime) private voidIncrementalTableBackupClient.mergeSplitAndCopyBulkloadedHFiles(List<String> activeFiles, List<String> archiveFiles, TableName tn, org.apache.hadoop.fs.FileSystem tgtFs) private voidIncrementalTableBackupClient.mergeSplitAndCopyBulkloadedHFiles(List<String> files, TableName tn, org.apache.hadoop.fs.FileSystem tgtFs) voidBackupSystemTable.registerBulkLoad(TableName tableName, byte[] region, Map<byte[], List<org.apache.hadoop.fs.Path>> cfToHfilePath) Registers a bulk load.voidBackupAdminImpl.removeFromBackupSet(String name, TableName[] tables) private voidBackupAdminImpl.removeTableFromBackupImage(BackupInfo info, TableName tn, BackupSystemTable sysTable) private voidRestoreTablesClient.restore(BackupManifest manifest, TableName[] sTableArray, TableName[] tTableArray, boolean isOverwrite, boolean isKeepOriginalSplits) Restore operation.private voidRestoreTablesClient.restoreImages(BackupManifest.BackupImage[] images, TableName sTable, TableName tTable, boolean truncateIfExists, boolean isKeepOriginalSplits) Restore operation handle each backupImage in array.protected voidFullTableBackupClient.snapshotTable(Admin admin, TableName tableName, String snapshotName) protected booleanIncrementalTableBackupClient.tableExists(TableName table, Connection conn) private String[]BackupAdminImpl.toStringArray(TableName[] list) private org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.TableServerTimestampBackupSystemTable.toTableServerTimestampProto(TableName table, Map<String, Long> map) private voidBackupSystemTable.waitForSystemTable(Admin admin, TableName tableName) Method parameters in org.apache.hadoop.hbase.backup.impl with type arguments of type TableNameModifier and TypeMethodDescriptionvoidBackupManager.addIncrementalBackupTableSet(Set<TableName> tables) Adds set of tables to overall incremental backup table setvoidBackupSystemTable.addIncrementalBackupTableSet(Set<TableName> tables, String backupRoot) Add tables to global incremental backup setBackupManager.createBackupInfo(String backupId, 
BackupType type, List<TableName> tableList, String targetRootDir, int workers, long bandwidth, boolean noChecksumVerify) Creates a backup info based on input backup request.private PutBackupSystemTable.createPutForIncrBackupTableSet(Set<TableName> tables, String backupRoot) Creates Put to store incremental backup table setprivate PutBackupSystemTable.createPutForUpdateTablesForMerge(List<TableName> tables) BackupAdminImpl.excludeNonExistingTables(List<TableName> tableList, List<TableName> nonExistingTableList) BackupSystemTable.getBackupHistoryForTableSet(Set<TableName> set, String backupRoot) Goes through all backup history corresponding to the provided root folder, and collects all backup info mentioning each of the provided tables.protected static intIncrementalTableBackupClient.handleBulkLoad(List<TableName> tablesToBackup) Reads bulk load records from backup table, iterates through the records and forms the paths for bulk loaded hfiles.BackupManager.readBulkloadRows(List<TableName> tableList) BackupSystemTable.readBulkloadRows(Collection<TableName> tableList) Reads the registered bulk loads for the given tables.private voidvoidSet the incremental timestamp map directly.private voidBackupManifest.BackupImage.setTableList(List<TableName> tableList) voidBackupSystemTable.updateProcessedTablesForMerge(List<TableName> tables) private voidIncrementalTableBackupClient.verifyCfCompatibility(Set<TableName> tables, Map<TableName, String> tablesToFullBackupId) Verifies that the current table descriptor CFs matches the descriptor CFs of the last full backup for the tables.private voidIncrementalTableBackupClient.verifyCfCompatibility(Set<TableName> tables, Map<TableName, String> tablesToFullBackupId) Verifies that the current table descriptor CFs matches the descriptor CFs of the last full backup for the tables.(package private) BackupManifest.BackupImage.BuilderBackupManifest.BackupImage.Builder.withTableList(List<TableName> tableList) voidWrite the current timestamps for each regionserver to backup system table after a successful full or incremental backup.voidBackupSystemTable.writeRegionServerLogTimestamp(Set<TableName> tables, Map<String, Long> newTimestamps, String backupRoot) Write the current timestamps for each regionserver to backup system table after a successful full or incremental backup.Constructors in org.apache.hadoop.hbase.backup.impl with parameters of type TableNameModifierConstructorDescriptionBackupManifest(BackupInfo backup, TableName table) Construct a table level manifest for a backup of the named table.MergeSplitBulkloadInfo(TableName srcTable) Constructor parameters in org.apache.hadoop.hbase.backup.impl with type arguments of type TableNameModifierConstructorDescriptionprivateBackupImage(String backupId, BackupType type, String rootDir, List<TableName> tableList, long startTs, long completeTs) privateColumnFamilyMismatchException(String msg, List<TableName> mismatchedTables) -
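Most entries above are internal, but the two static BackupSystemTable.getTableName* accessors indexed here are straightforward to illustrate. A minimal sketch, assuming only a Hadoop Configuration:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.backup.impl.BackupSystemTable;

    public class BackupSystemTableNames {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Name of the backup system table, as configured.
        TableName systemTable = BackupSystemTable.getTableName(conf);
        // Name of the companion table tracking bulk loaded files.
        TableName bulkLoadTable = BackupSystemTable.getTableNameForBulkLoadedData(conf);
        System.out.println(systemTable + " / " + bulkLoadTable);
      }
    }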
Uses of TableName in org.apache.hadoop.hbase.backup.mapreduce
Fields in org.apache.hadoop.hbase.backup.mapreduce declared as TableNameMethods in org.apache.hadoop.hbase.backup.mapreduce that return TableNameModifier and TypeMethodDescriptionMapReduceBackupCopyJob.SnapshotCopy.getTable()protected TableName[]MapReduceBackupMergeJob.getTableNamesInBackupImages(String[] backupIds) Methods in org.apache.hadoop.hbase.backup.mapreduce that return types with arguments of type TableNameModifier and TypeMethodDescriptionMapReduceBackupMergeJob.toTableNameList(List<Pair<TableName, org.apache.hadoop.fs.Path>> processedTableList) Methods in org.apache.hadoop.hbase.backup.mapreduce with parameters of type TableNameModifier and TypeMethodDescriptionprotected org.apache.hadoop.fs.Path[]MapReduceBackupMergeJob.findInputDirectories(org.apache.hadoop.fs.FileSystem fs, String backupRoot, TableName tableName, String[] backupIds) private static RegionLocatorMapReduceHFileSplitterJob.getRegionLocator(org.apache.hadoop.conf.Configuration conf, Connection conn, TableName table) protected voidMapReduceBackupMergeJob.moveData(org.apache.hadoop.fs.FileSystem fs, String backupRoot, org.apache.hadoop.fs.Path bulkOutputPath, TableName tableName, String mergedBackupId) voidMapReduceRestoreJob.run(org.apache.hadoop.fs.Path[] dirPaths, TableName[] tableNames, org.apache.hadoop.fs.Path restoreRootDir, TableName[] newTableNames, boolean fullBackupRestore) voidMapReduceRestoreToOriginalSplitsJob.run(org.apache.hadoop.fs.Path[] dirPaths, TableName[] fromTables, org.apache.hadoop.fs.Path restoreRootDir, TableName[] toTables, boolean fullBackupRestore) Method parameters in org.apache.hadoop.hbase.backup.mapreduce with type arguments of type TableNameModifier and TypeMethodDescriptionprotected List<org.apache.hadoop.fs.Path>MapReduceBackupMergeJob.toPathList(List<Pair<TableName, org.apache.hadoop.fs.Path>> processedTableList) MapReduceBackupMergeJob.toTableNameList(List<Pair<TableName, org.apache.hadoop.fs.Path>> processedTableList) Constructors in org.apache.hadoop.hbase.backup.mapreduce with parameters of type TableName -
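The main TableName-typed entry point here is the run method implementing the RestoreJob contract indexed in the org.apache.hadoop.hbase.backup section. A hedged sketch of invoking it, assuming a RestoreJob instance has already been obtained elsewhere (acquisition is not covered by this index) and using illustrative paths and table names:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.backup.RestoreJob;

    public class RestoreRunSketch {
      // Restores each backup image directory into the corresponding target table.
      static void restore(RestoreJob job, Path restoreRootDir) throws Exception {
        Path[] backupDirs = new Path[] { new Path("/backups/backup_1/default/users") }; // hypothetical path
        TableName[] fromTables = new TableName[] { TableName.valueOf("default", "users") };
        TableName[] toTables = new TableName[] { TableName.valueOf("default", "users_restored") };
        boolean fullBackupRestore = true;
        job.run(backupDirs, fromTables, restoreRootDir, toTables, fullBackupRestore);
      }
    }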
Uses of TableName in org.apache.hadoop.hbase.backup.util
Fields in org.apache.hadoop.hbase.backup.util with type parameters of type TableNameMethods in org.apache.hadoop.hbase.backup.util that return TableNameMethods in org.apache.hadoop.hbase.backup.util that return types with arguments of type TableNameMethods in org.apache.hadoop.hbase.backup.util with parameters of type TableNameModifier and TypeMethodDescriptionprivate voidRestoreTool.checkAndCreateTable(Connection conn, TableName targetTableName, ArrayList<org.apache.hadoop.fs.Path> regionDirList, TableDescriptor htd, boolean truncateIfExists) Prepare the table for bulkload, most codes copied fromcreateTablemethod inBulkLoadHFilesTool.private voidRestoreTool.createAndRestoreTable(Connection conn, TableName tableName, TableName newTableName, org.apache.hadoop.fs.Path tableBackupPath, boolean truncateIfExists, boolean isKeepOriginalSplits, String lastIncrBackupId) static RestoreRequestBackupUtils.createRestoreRequest(String backupRootDir, String backupId, boolean check, TableName[] fromTables, TableName[] toTables, boolean isOverwrite) Create restore request.static RestoreRequestBackupUtils.createRestoreRequest(String backupRootDir, String backupId, boolean check, TableName[] fromTables, TableName[] toTables, boolean isOverwrite, boolean isKeepOriginalSplits) voidRestoreTool.fullRestoreTable(Connection conn, org.apache.hadoop.fs.Path tableBackupPath, TableName tableName, TableName newTableName, boolean truncateIfExists, boolean isKeepOriginalSplits, String lastIncrBackupId) static StringBackupUtils.getFileNameCompatibleString(TableName table) (package private) ArrayList<org.apache.hadoop.fs.Path>RestoreTool.getRegionList(TableName tableName) Gets region list(package private) org.apache.hadoop.fs.PathRestoreTool.getTableArchivePath(TableName tableName) return value represent path for: ".../user/biadmin/backup1/default/t1_dn/backup_1396650096738/archive/data/default/t1_dn"static StringBackupUtils.getTableBackupDir(String backupRootDir, String backupId, TableName tableName) Given the backup root dir, backup id and the table name, return the backup image location, which is also where the backup manifest file is.(package private) TableDescriptorRestoreTool.getTableDesc(TableName tableName) Get table descriptorprivate TableDescriptorRestoreTool.getTableDescriptor(org.apache.hadoop.fs.FileSystem fileSys, TableName tableName, String lastIncrBackupId) (package private) org.apache.hadoop.fs.PathRestoreTool.getTableInfoPath(TableName tableName) Returns value represent path for: ""/$USER/SBACKUP_ROOT/backup_id/namespace/table/.hbase-snapshot/ snapshot_1396650097621_namespace_table" this path contains .snapshotinfo, .tabledesc (0.96 and 0.98) this path contains .snapshotinfo, .data.manifest (trunk)(package private) org.apache.hadoop.fs.PathRestoreTool.getTableSnapshotPath(org.apache.hadoop.fs.Path backupRootPath, TableName tableName, String backupId) Returns value represent path for path to backup table snapshot directory: "/$USER/SBACKUP_ROOT/backup_id/namespace/table/.hbase-snapshot"voidRestoreTool.incrementalRestoreTable(Connection conn, org.apache.hadoop.fs.Path tableBackupPath, org.apache.hadoop.fs.Path[] logDirs, TableName[] tableNames, TableName[] newTableNames, String incrBackupId, boolean keepOriginalSplits) During incremental backup operation.Method parameters in org.apache.hadoop.hbase.backup.util with type arguments of type TableNameModifier and TypeMethodDescriptionLoop through the RS log timestamp map for the tables, for each RS, find the min timestamp value for the RS among the 
tables.static booleanBackupUtils.validate(List<TableName> tables, BackupManifest backupManifest, org.apache.hadoop.conf.Configuration conf) Constructor parameters in org.apache.hadoop.hbase.backup.util with type arguments of type TableName -
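BackupUtils exposes a few public static helpers indexed above. A brief, hedged sketch of building a RestoreRequest and computing a table's backup directory; the root directory, backup id, and table names are illustrative.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.backup.RestoreRequest;
    import org.apache.hadoop.hbase.backup.util.BackupUtils;

    public class BackupUtilsSketch {
      public static void main(String[] args) {
        String backupRootDir = "hdfs:///backups";   // hypothetical location
        String backupId = "backup_1700000000000";   // hypothetical backup id
        TableName[] fromTables = { TableName.valueOf("default", "users") };
        TableName[] toTables = { TableName.valueOf("default", "users_restored") };

        // Where the backup image (and manifest) for a table lives, per the listing above.
        String tableBackupDir =
            BackupUtils.getTableBackupDir(backupRootDir, backupId, fromTables[0]);

        // Restore request assembled by the static factory indexed above.
        RestoreRequest request = BackupUtils.createRestoreRequest(
            backupRootDir, backupId, false /* check */, fromTables, toTables, false /* isOverwrite */);

        System.out.println(tableBackupDir + " -> " + request);
      }
    }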
Uses of TableName in org.apache.hadoop.hbase.client
Fields in org.apache.hadoop.hbase.client declared as TableNameModifier and TypeFieldDescriptionprivate final TableNameTableDescriptorBuilder.ModifyableTableDescriptor.nameprivate final TableNameSnapshotDescription.tableprivate final TableNameAsyncBatchRpcRetryingCaller.tableNameprivate final TableNameAsyncClientScanner.tableNameprivate final TableNameAsyncRegionLocationCache.tableNameprivate TableNameAsyncRpcRetryingCallerFactory.BatchCallerBuilder.tableNameprivate TableNameAsyncRpcRetryingCallerFactory.SingleRequestCallerBuilder.tableNameprivate final TableNameAsyncSingleRequestRpcRetryingCaller.tableNameprotected TableNameAsyncTableBuilderBase.tableNameprivate final TableNameAsyncTableRegionLocatorImpl.tableNameprivate final TableNameAsyncTableResultScanner.tableNameprivate final TableNameBufferedMutatorParams.tableNameprivate final TableNameCatalogReplicaLoadBalanceSimpleSelector.tableNameprivate final TableNameMutableRegionInfo.tableNameprotected final TableNameRawAsyncHBaseAdmin.TableProcedureBiConsumer.tableNameprivate final TableNameRawAsyncTableImpl.tableNameprivate final TableNameRegionCoprocessorRpcChannelImpl.tableNameprivate final TableNameRegionInfoBuilder.tableNameprotected TableNameTableBuilderBase.tableNameprivate final TableNameTableState.tableNameFields in org.apache.hadoop.hbase.client with type parameters of type TableNameModifier and TypeFieldDescriptionprivate final ConcurrentMap<TableName,AsyncNonMetaRegionLocator.TableCache> AsyncNonMetaRegionLocator.cacheprivate final ConcurrentMap<TableName,ConcurrentNavigableMap<byte[], CatalogReplicaLoadBalanceSimpleSelector.StaleLocationCacheEntry>> CatalogReplicaLoadBalanceSimpleSelector.staleCacheNormalizeTableFilterParams.Builder.tableNamesNormalizeTableFilterParams.tableNamesMethods in org.apache.hadoop.hbase.client that return TableNameModifier and TypeMethodDescriptionprivate static TableNameMutableRegionInfo.checkTableName(TableName tableName) AsyncBufferedMutator.getName()Gets the fully qualified table name instance of the table that thisAsyncBufferedMutatorwrites to.AsyncBufferedMutatorImpl.getName()AsyncTable.getName()Gets the fully qualified table name instance of this table.AsyncTableImpl.getName()AsyncTableRegionLocator.getName()Gets the fully qualified table name instance of the table whose region we want to locate.AsyncTableRegionLocatorImpl.getName()BufferedMutator.getName()Gets the fully qualified table name instance of the table that this BufferedMutator writes to.BufferedMutatorOverAsyncBufferedMutator.getName()RawAsyncTableImpl.getName()RegionLocator.getName()Gets the fully qualified table name instance of this table.RegionLocatorOverAsyncTableRegionLocator.getName()Table.getName()Gets the fully qualified table name instance of this table.TableOverAsyncTable.getName()MutableRegionInfo.getTable()Get current table name of the regionRegionInfo.getTable()Returns current table name of the regionstatic TableNameRegionInfo.getTable(byte[] regionName) Gets the table name from the specified region name.BufferedMutatorParams.getTableName()SnapshotDescription.getTableName()TableDescriptor.getTableName()Get the name of the tableTableDescriptorBuilder.ModifyableTableDescriptor.getTableName()Get the name of the tableTableState.getTableName()Table name for stateAdmin.listTableNames()List all of the names of userspace tables.default TableName[]Admin.listTableNames(Pattern pattern) List all of the names of userspace tables.Admin.listTableNames(Pattern pattern, boolean includeSysTables) List all of the names of userspace 
tables.AdminOverAsyncAdmin.listTableNames()AdminOverAsyncAdmin.listTableNames(Pattern pattern, boolean includeSysTables) Admin.listTableNamesByNamespace(String name) Get list of table names by namespace.AdminOverAsyncAdmin.listTableNamesByNamespace(String name) Methods in org.apache.hadoop.hbase.client that return types with arguments of type TableNameModifier and TypeMethodDescriptionprivate CompletableFuture<TableName>RawAsyncHBaseAdmin.checkRegionsAndGetTableName(byte[][] encodedRegionNames) Admin.getConfiguredNamespacesAndTablesInRSGroup(String groupName) Get the namespaces and tables which have this RegionServer group in descriptor.AdminOverAsyncAdmin.getConfiguredNamespacesAndTablesInRSGroup(String groupName) AsyncAdmin.getConfiguredNamespacesAndTablesInRSGroup(String groupName) Get the namespaces and tables which have this RegionServer group in descriptor.AsyncHBaseAdmin.getConfiguredNamespacesAndTablesInRSGroup(String groupName) RawAsyncHBaseAdmin.getConfiguredNamespacesAndTablesInRSGroup(String groupName) Map<TableName,? extends SpaceQuotaSnapshotView> Admin.getRegionServerSpaceQuotaSnapshots(ServerName serverName) Fetches the observedSpaceQuotaSnapshotViews observed by a RegionServer.Map<TableName,? extends SpaceQuotaSnapshotView> AdminOverAsyncAdmin.getRegionServerSpaceQuotaSnapshots(ServerName serverName) CompletableFuture<? extends Map<TableName,? extends SpaceQuotaSnapshotView>> AsyncAdmin.getRegionServerSpaceQuotaSnapshots(ServerName serverName) Fetches the observedSpaceQuotaSnapshotViews observed by a RegionServer.AsyncHBaseAdmin.getRegionServerSpaceQuotaSnapshots(ServerName serverName) RawAsyncHBaseAdmin.getRegionServerSpaceQuotaSnapshots(ServerName serverName) Admin.getSpaceQuotaTableSizes()Fetches the table sizes on the filesystem as tracked by the HBase Master.AdminOverAsyncAdmin.getSpaceQuotaTableSizes()AsyncAdmin.getSpaceQuotaTableSizes()Fetches the table sizes on the filesystem as tracked by the HBase Master.AsyncHBaseAdmin.getSpaceQuotaTableSizes()RawAsyncHBaseAdmin.getSpaceQuotaTableSizes()AsyncRpcRetryingCaller.getTableName()AsyncSingleRequestRpcRetryingCaller.getTableName()NormalizeTableFilterParams.getTableNames()private CompletableFuture<List<TableName>>RawAsyncHBaseAdmin.getTableNames(org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.GetTableNamesRequest request) default CompletableFuture<List<TableName>>AsyncAdmin.listTableNames()List all of the names of userspace tables.AsyncAdmin.listTableNames(boolean includeSysTables) List all of the names of tables.AsyncAdmin.listTableNames(Pattern pattern, boolean includeSysTables) List all of the names of userspace tables.AsyncHBaseAdmin.listTableNames(boolean includeSysTables) AsyncHBaseAdmin.listTableNames(Pattern pattern, boolean includeSysTables) RawAsyncHBaseAdmin.listTableNames(boolean includeSysTables) RawAsyncHBaseAdmin.listTableNames(Pattern pattern, boolean includeSysTables) AsyncAdmin.listTableNamesByNamespace(String name) Get list of table names by namespace.AsyncHBaseAdmin.listTableNamesByNamespace(String name) RawAsyncHBaseAdmin.listTableNamesByNamespace(String name) Admin.listTableNamesByState(boolean isEnabled) List all enabled or disabled table namesAdminOverAsyncAdmin.listTableNamesByState(boolean isEnabled) AsyncAdmin.listTableNamesByState(boolean isEnabled) List all enabled or disabled table namesAsyncHBaseAdmin.listTableNamesByState(boolean isEnabled) RawAsyncHBaseAdmin.listTableNamesByState(boolean isEnabled) Admin.listTablesInRSGroup(String groupName) Get all tables in this 
RegionServer group.AdminOverAsyncAdmin.listTablesInRSGroup(String groupName) AsyncAdmin.listTablesInRSGroup(String groupName) Get all tables in this RegionServer group.AsyncHBaseAdmin.listTablesInRSGroup(String groupName) RawAsyncHBaseAdmin.listTablesInRSGroup(String groupName) Methods in org.apache.hadoop.hbase.client with parameters of type TableNameModifier and TypeMethodDescriptiondefault voidAdmin.addColumnFamily(TableName tableName, ColumnFamilyDescriptor columnFamily) Add a column family to an existing table.AsyncAdmin.addColumnFamily(TableName tableName, ColumnFamilyDescriptor columnFamily) Add a column family to an existing table.AsyncHBaseAdmin.addColumnFamily(TableName tableName, ColumnFamilyDescriptor columnFamily) RawAsyncHBaseAdmin.addColumnFamily(TableName tableName, ColumnFamilyDescriptor columnFamily) Admin.addColumnFamilyAsync(TableName tableName, ColumnFamilyDescriptor columnFamily) Add a column family to an existing table.AdminOverAsyncAdmin.addColumnFamilyAsync(TableName tableName, ColumnFamilyDescriptor columnFamily) AsyncClusterConnection.bulkLoad(TableName tableName, List<Pair<byte[], String>> familyPaths, byte[] row, boolean assignSeqNum, org.apache.hadoop.security.token.Token<?> userToken, String bulkToken, boolean copyFiles, List<String> clusterIds, boolean replicate) Securely bulk load a list of HFiles, passing additional list of clusters ids tracking clusters where the given bulk load has already been processed (important for bulk loading replication).AsyncClusterConnectionImpl.bulkLoad(TableName tableName, List<Pair<byte[], String>> familyPaths, byte[] row, boolean assignSeqNum, org.apache.hadoop.security.token.Token<?> userToken, String bulkToken, boolean copyFiles, List<String> clusterIds, boolean replicate) (package private) static intConnectionUtils.calcPriority(int priority, TableName tableName) Select the priority for the rpc call.private CompletableFuture<Void>RawAsyncHBaseAdmin.checkAndSyncTableToPeerClusters(TableName tableName, byte[][] splits) Connect to peer and check the table descriptor on peer: Create the same table on peer when not exist. Throw an exception if the table already has replication enabled on any of the column families. 
Throw an exception if the table exists on peer cluster but descriptors are not same.private static TableNameMutableRegionInfo.checkTableName(TableName tableName) AsyncClusterConnection.cleanupBulkLoad(TableName tableName, String bulkToken) Clean up after finishing bulk load, no matter success or not.AsyncClusterConnectionImpl.cleanupBulkLoad(TableName tableName, String bulkToken) Admin.clearBlockCache(TableName tableName) Clear all the blocks corresponding to this table from BlockCache.AdminOverAsyncAdmin.clearBlockCache(TableName tableName) AsyncAdmin.clearBlockCache(TableName tableName) Clear all the blocks corresponding to this table from BlockCache.AsyncHBaseAdmin.clearBlockCache(TableName tableName) RawAsyncHBaseAdmin.clearBlockCache(TableName tableName) (package private) voidAsyncNonMetaRegionLocator.clearCache(TableName tableName) (package private) voidAsyncRegionLocator.clearCache(TableName tableName) default voidAdmin.cloneSnapshot(String snapshotName, TableName tableName) Create a new table by cloning the snapshot content.default voidAdmin.cloneSnapshot(String snapshotName, TableName tableName, boolean restoreAcl) Create a new table by cloning the snapshot content.default voidAdmin.cloneSnapshot(String snapshotName, TableName tableName, boolean restoreAcl, String customSFT) Create a new table by cloning the snapshot content.default CompletableFuture<Void>AsyncAdmin.cloneSnapshot(String snapshotName, TableName tableName) Create a new table by cloning the snapshot content.default CompletableFuture<Void>AsyncAdmin.cloneSnapshot(String snapshotName, TableName tableName, boolean restoreAcl) Create a new table by cloning the snapshot content.AsyncAdmin.cloneSnapshot(String snapshotName, TableName tableName, boolean restoreAcl, String customSFT) Create a new table by cloning the snapshot content.AsyncHBaseAdmin.cloneSnapshot(String snapshotName, TableName tableName, boolean restoreAcl, String customSFT) RawAsyncHBaseAdmin.cloneSnapshot(String snapshotName, TableName tableName, boolean restoreAcl, String customSFT) Admin.cloneSnapshotAsync(String snapshotName, TableName tableName) Create a new table by cloning the snapshot content, but does not block and wait for it to be completely cloned.Admin.cloneSnapshotAsync(String snapshotName, TableName tableName, boolean restoreAcl) Create a new table by cloning the snapshot content.Admin.cloneSnapshotAsync(String snapshotName, TableName tableName, boolean restoreAcl, String customSFT) Create a new table by cloning the snapshot content.AdminOverAsyncAdmin.cloneSnapshotAsync(String snapshotName, TableName tableName, boolean restoreAcl, String customSFT) voidAdmin.cloneTableSchema(TableName tableName, TableName newTableName, boolean preserveSplits) Create a new table by cloning the existent table schema.voidAdminOverAsyncAdmin.cloneTableSchema(TableName tableName, TableName newTableName, boolean preserveSplits) AsyncAdmin.cloneTableSchema(TableName tableName, TableName newTableName, boolean preserveSplits) Create a new table by cloning the existent table schema.AsyncHBaseAdmin.cloneTableSchema(TableName tableName, TableName newTableName, boolean preserveSplits) RawAsyncHBaseAdmin.cloneTableSchema(TableName tableName, TableName newTableName, boolean preserveSplits) voidCompact a table.voidCompact a column family within a table.voidAdmin.compact(TableName tableName, byte[] columnFamily, CompactType compactType) Compact a column family within a table.voidAdmin.compact(TableName tableName, CompactType compactType) Compact a 
table.voidvoidvoidAdminOverAsyncAdmin.compact(TableName tableName, byte[] columnFamily, CompactType compactType) voidAdminOverAsyncAdmin.compact(TableName tableName, CompactType compactType) default CompletableFuture<Void>Compact a table.default CompletableFuture<Void>Compact a column family within a table.AsyncAdmin.compact(TableName tableName, byte[] columnFamily, CompactType compactType) Compact a column family within a table.AsyncAdmin.compact(TableName tableName, CompactType compactType) Compact a table.AsyncHBaseAdmin.compact(TableName tableName, byte[] columnFamily, CompactType compactType) AsyncHBaseAdmin.compact(TableName tableName, CompactType compactType) private CompletableFuture<Void>RawAsyncHBaseAdmin.compact(TableName tableName, byte[] columnFamily, boolean major, CompactType compactType) Compact column family of a table, Asynchronous operation even if CompletableFuture.get()RawAsyncHBaseAdmin.compact(TableName tableName, byte[] columnFamily, CompactType compactType) RawAsyncHBaseAdmin.compact(TableName tableName, CompactType compactType) private CompletableFuture<Void>RawAsyncHBaseAdmin.compareTableWithPeerCluster(TableName tableName, TableDescriptor tableDesc, ReplicationPeerDescription peer, AsyncAdmin peerAdmin) private voidAsyncNonMetaRegionLocator.complete(TableName tableName, AsyncNonMetaRegionLocator.LocateRequest req, RegionLocations locs, Throwable error) private static CompletableFuture<Boolean>RawAsyncHBaseAdmin.completeCheckTableState(CompletableFuture<Boolean> future, Optional<TableState> tableState, Throwable error, TableState.State targetState, TableName tableName) Utility for completing passed TableStateCompletableFuturefutureusing passed parameters.static TableStateTableState.convert(TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.TableState tableState) Covert from PB version of TableStatestatic TableDescriptorTableDescriptorBuilder.copy(TableName name, TableDescriptor desc) static RegionInfoRegionInfo.createMobRegionInfo(TableName tableName) Creates a RegionInfo object for MOB data.static byte[]RegionInfo.createRegionName(TableName tableName, byte[] startKey, byte[] id, boolean newFormat) Make a region name of passed parameters.static byte[]RegionInfo.createRegionName(TableName tableName, byte[] startKey, byte[] id, int replicaId, boolean newFormat) Make a region name of passed parameters.static byte[]RegionInfo.createRegionName(TableName tableName, byte[] startKey, long regionid, boolean newFormat) Make a region name of passed parameters.static byte[]RegionInfo.createRegionName(TableName tableName, byte[] startKey, long regionid, int replicaId, boolean newFormat) Make a region name of passed parameters.static byte[]RegionInfo.createRegionName(TableName tableName, byte[] startKey, String id, boolean newFormat) Make a region name of passed parameters.CatalogReplicaLoadBalanceSelectorFactory.createSelector(String replicaSelectorClass, TableName tableName, AsyncConnectionImpl conn, IntSupplier getReplicaCount) Create a CatalogReplicaLoadBalanceReplicaSelector based on input config.private CompletableFuture<Void>RawAsyncHBaseAdmin.createTable(TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateTableRequest request) default voidAdmin.deleteColumnFamily(TableName tableName, byte[] columnFamily) Delete a column family from a table.AsyncAdmin.deleteColumnFamily(TableName tableName, byte[] columnFamily) Delete a column family from a table.AsyncHBaseAdmin.deleteColumnFamily(TableName tableName, 
byte[] columnFamily) RawAsyncHBaseAdmin.deleteColumnFamily(TableName tableName, byte[] columnFamily) Admin.deleteColumnFamilyAsync(TableName tableName, byte[] columnFamily) Delete a column family from a table.AdminOverAsyncAdmin.deleteColumnFamilyAsync(TableName tableName, byte[] columnFamily) default voidAdmin.deleteTable(TableName tableName) Deletes a table.AsyncAdmin.deleteTable(TableName tableName) Deletes a table.AsyncHBaseAdmin.deleteTable(TableName tableName) RawAsyncHBaseAdmin.deleteTable(TableName tableName) Admin.deleteTableAsync(TableName tableName) Deletes the table but does not block and wait for it to be completely removed.AdminOverAsyncAdmin.deleteTableAsync(TableName tableName) default voidAdmin.disableTable(TableName tableName) Disable table and wait on completion.AsyncAdmin.disableTable(TableName tableName) Disable a table.AsyncHBaseAdmin.disableTable(TableName tableName) RawAsyncHBaseAdmin.disableTable(TableName tableName) Admin.disableTableAsync(TableName tableName) Disable the table but does not block and wait for it to be completely disabled.AdminOverAsyncAdmin.disableTableAsync(TableName tableName) voidAdmin.disableTableReplication(TableName tableName) Disable a table's replication switch.voidAdminOverAsyncAdmin.disableTableReplication(TableName tableName) AsyncAdmin.disableTableReplication(TableName tableName) Disable a table's replication switch.AsyncHBaseAdmin.disableTableReplication(TableName tableName) RawAsyncHBaseAdmin.disableTableReplication(TableName tableName) default voidAdmin.enableTable(TableName tableName) Enable a table.AsyncAdmin.enableTable(TableName tableName) Enable a table.AsyncHBaseAdmin.enableTable(TableName tableName) RawAsyncHBaseAdmin.enableTable(TableName tableName) Admin.enableTableAsync(TableName tableName) Enable the table but does not block and wait for it to be completely enabled.AdminOverAsyncAdmin.enableTableAsync(TableName tableName) voidAdmin.enableTableReplication(TableName tableName) Enable a table's replication switch.voidAdminOverAsyncAdmin.enableTableReplication(TableName tableName) AsyncAdmin.enableTableReplication(TableName tableName) Enable a table's replication switch.AsyncHBaseAdmin.enableTableReplication(TableName tableName) RawAsyncHBaseAdmin.enableTableReplication(TableName tableName) voidFlush a table.voidFlush the specified column family stores on all regions of the passed table.voidFlush the specified column family stores on all regions of the passed table.voidvoidvoidFlush a table.Flush the specified column family stores on all regions of the passed table.Flush the specified column family stores on all regions of the passed table.private static intMutableRegionInfo.generateHashCode(TableName tableName, byte[] startKey, byte[] endKey, long regionId, int replicaId, boolean offLine, byte[] regionName) default AsyncBufferedMutatorAsyncConnection.getBufferedMutator(TableName tableName) Retrieve anAsyncBufferedMutatorfor performing client-side buffering of writes.default AsyncBufferedMutatorAsyncConnection.getBufferedMutator(TableName tableName, ExecutorService pool) Retrieve anAsyncBufferedMutatorfor performing client-side buffering of writes.default BufferedMutatorConnection.getBufferedMutator(TableName tableName) Retrieve aBufferedMutatorfor performing client-side buffering of writes.SharedConnection.getBufferedMutator(TableName tableName) AsyncConnection.getBufferedMutatorBuilder(TableName tableName) Returns anAsyncBufferedMutatorBuilderfor 
creatingAsyncBufferedMutator.AsyncConnection.getBufferedMutatorBuilder(TableName tableName, ExecutorService pool) Returns anAsyncBufferedMutatorBuilderfor creatingAsyncBufferedMutator.AsyncConnectionImpl.getBufferedMutatorBuilder(TableName tableName) AsyncConnectionImpl.getBufferedMutatorBuilder(TableName tableName, ExecutorService pool) SharedAsyncConnection.getBufferedMutatorBuilder(TableName tableName) SharedAsyncConnection.getBufferedMutatorBuilder(TableName tableName, ExecutorService pool) Admin.getCompactionState(TableName tableName) Get the current compaction state of a table.Admin.getCompactionState(TableName tableName, CompactType compactType) Get the current compaction state of a table.AdminOverAsyncAdmin.getCompactionState(TableName tableName) AdminOverAsyncAdmin.getCompactionState(TableName tableName, CompactType compactType) default CompletableFuture<CompactionState>AsyncAdmin.getCompactionState(TableName tableName) Get the current compaction state of a table.AsyncAdmin.getCompactionState(TableName tableName, CompactType compactType) Get the current compaction state of a table.AsyncHBaseAdmin.getCompactionState(TableName tableName, CompactType compactType) RawAsyncHBaseAdmin.getCompactionState(TableName tableName, CompactType compactType) Admin.getCurrentSpaceQuotaSnapshot(TableName tableName) Returns the Master's view of a quota on the giventableNameor null if the Master has no quota information on that table.AdminOverAsyncAdmin.getCurrentSpaceQuotaSnapshot(TableName tableName) CompletableFuture<? extends SpaceQuotaSnapshotView>AsyncAdmin.getCurrentSpaceQuotaSnapshot(TableName tableName) Returns the Master's view of a quota on the giventableNameor null if the Master has no quota information on that table.AsyncHBaseAdmin.getCurrentSpaceQuotaSnapshot(TableName tableName) RawAsyncHBaseAdmin.getCurrentSpaceQuotaSnapshot(TableName tableName) Admin.getDescriptor(TableName tableName) Get a table descriptor.AdminOverAsyncAdmin.getDescriptor(TableName tableName) AsyncAdmin.getDescriptor(TableName tableName) Method for getting the tableDescriptorAsyncHBaseAdmin.getDescriptor(TableName tableName) RawAsyncHBaseAdmin.getDescriptor(TableName tableName) longAdmin.getLastMajorCompactionTimestamp(TableName tableName) Get the timestamp of the last major compaction for the passed table The timestamp of the oldest HFile resulting from a major compaction of that table, or 0 if no such HFile could be found.longAdminOverAsyncAdmin.getLastMajorCompactionTimestamp(TableName tableName) AsyncAdmin.getLastMajorCompactionTimestamp(TableName tableName) Get the timestamp of the last major compaction for the passed table.AsyncHBaseAdmin.getLastMajorCompactionTimestamp(TableName tableName) RawAsyncHBaseAdmin.getLastMajorCompactionTimestamp(TableName tableName) (package private) intAsyncNonMetaRegionLocator.getNumberOfCachedRegionLocations(TableName tableName) (package private) intAsyncRegionLocator.getNumberOfCachedRegionLocations(TableName tableName) (package private) static intConnectionUtils.getPriority(TableName tableName) (package private) CompletableFuture<HRegionLocation>AsyncRegionLocator.getRegionLocation(TableName tableName, byte[] row, int replicaId, RegionLocateType type, boolean reload, long timeoutNs) (package private) CompletableFuture<HRegionLocation>AsyncRegionLocator.getRegionLocation(TableName tableName, byte[] row, int replicaId, RegionLocateType type, long timeoutNs) (package private) CompletableFuture<HRegionLocation>AsyncRegionLocator.getRegionLocation(TableName tableName, byte[] row, 
RegionLocateType type, boolean reload, long timeoutNs) (package private) CompletableFuture<HRegionLocation>AsyncRegionLocator.getRegionLocation(TableName tableName, byte[] row, RegionLocateType type, long timeoutNs) (package private) RegionLocationsAsyncNonMetaRegionLocator.getRegionLocationInCache(TableName tableName, byte[] row) (package private) RegionLocationsAsyncRegionLocator.getRegionLocationInCache(TableName tableName, byte[] row) AsyncClusterConnection.getRegionLocations(TableName tableName, byte[] row, boolean reload) Return all the replicas for a region.AsyncClusterConnectionImpl.getRegionLocations(TableName tableName, byte[] row, boolean reload) (package private) CompletableFuture<RegionLocations>AsyncNonMetaRegionLocator.getRegionLocations(TableName tableName, byte[] row, int replicaId, RegionLocateType locateType, boolean reload) (package private) CompletableFuture<RegionLocations>AsyncRegionLocator.getRegionLocations(TableName tableName, byte[] row, RegionLocateType type, boolean reload, long timeoutNs) private CompletableFuture<RegionLocations>AsyncNonMetaRegionLocator.getRegionLocationsInternal(TableName tableName, byte[] row, int replicaId, RegionLocateType locateType, boolean reload) AsyncConnection.getRegionLocator(TableName tableName) Retrieve a AsyncRegionLocator implementation to inspect region information on a table.AsyncConnectionImpl.getRegionLocator(TableName tableName) Connection.getRegionLocator(TableName tableName) Retrieve a RegionLocator implementation to inspect region information on a table.ConnectionOverAsyncConnection.getRegionLocator(TableName tableName) SharedAsyncConnection.getRegionLocator(TableName tableName) SharedConnection.getRegionLocator(TableName tableName) Admin.getRegionMetrics(ServerName serverName, TableName tableName) GetRegionMetricsof all regions hosted on a regionserver for a table.AdminOverAsyncAdmin.getRegionMetrics(ServerName serverName, TableName tableName) AsyncAdmin.getRegionMetrics(ServerName serverName, TableName tableName) Get a list ofRegionMetricsof all regions hosted on a region server for a table.AsyncHBaseAdmin.getRegionMetrics(ServerName serverName, TableName tableName) RawAsyncHBaseAdmin.getRegionMetrics(ServerName serverName, TableName tableName) Admin.getRegions(TableName tableName) Get the regions of a given table.AdminOverAsyncAdmin.getRegions(TableName tableName) AsyncAdmin.getRegions(TableName tableName) Get the regions of a given table.AsyncHBaseAdmin.getRegions(TableName tableName) RawAsyncHBaseAdmin.getRegions(TableName tableName) Admin.getRSGroup(TableName tableName) Get group info for the given tableAdminOverAsyncAdmin.getRSGroup(TableName tableName) AsyncAdmin.getRSGroup(TableName tableName) Get group info for the given tableAsyncHBaseAdmin.getRSGroup(TableName tableName) RawAsyncHBaseAdmin.getRSGroup(TableName table) default AsyncTable<AdvancedScanResultConsumer>Retrieve anAsyncTableimplementation for accessing a table.default AsyncTable<ScanResultConsumer>AsyncConnection.getTable(TableName tableName, ExecutorService pool) Retrieve anAsyncTableimplementation for accessing a table.default TableRetrieve a Table implementation for accessing a table.default TableConnection.getTable(TableName tableName, ExecutorService pool) Retrieve a Table implementation for accessing a table.AsyncConnection.getTableBuilder(TableName tableName) Returns anAsyncTableBuilderfor creatingAsyncTable.AsyncConnection.getTableBuilder(TableName tableName, ExecutorService pool) Returns anAsyncTableBuilderfor 
creatingAsyncTable.AsyncConnectionImpl.getTableBuilder(TableName tableName) AsyncConnectionImpl.getTableBuilder(TableName tableName, ExecutorService pool) Connection.getTableBuilder(TableName tableName, ExecutorService pool) Returns anTableBuilderfor creatingTable.ConnectionOverAsyncConnection.getTableBuilder(TableName tableName, ExecutorService pool) SharedAsyncConnection.getTableBuilder(TableName tableName) SharedAsyncConnection.getTableBuilder(TableName tableName, ExecutorService pool) SharedConnection.getTableBuilder(TableName tableName, ExecutorService pool) AsyncNonMetaRegionLocator.getTableCache(TableName tableName) private CompletableFuture<List<HRegionLocation>>RawAsyncHBaseAdmin.getTableHRegionLocations(TableName tableName) List all region locations for the specific table.private CompletableFuture<byte[][]>RawAsyncHBaseAdmin.getTableSplits(TableName tableName) private CompletableFuture<Void>RawAsyncHBaseAdmin.internalRestoreSnapshot(String snapshotName, TableName tableName, boolean restoreAcl, String customSFT) private booleanbooleanAdmin.isTableAvailable(TableName tableName) Check if a table is available.booleanAdminOverAsyncAdmin.isTableAvailable(TableName tableName) AsyncAdmin.isTableAvailable(TableName tableName) Check if a table is available.AsyncHBaseAdmin.isTableAvailable(TableName tableName) RawAsyncHBaseAdmin.isTableAvailable(TableName tableName) booleanAdmin.isTableDisabled(TableName tableName) Check if a table is disabled.booleanAdminOverAsyncAdmin.isTableDisabled(TableName tableName) AsyncAdmin.isTableDisabled(TableName tableName) Check if a table is disabled.AsyncHBaseAdmin.isTableDisabled(TableName tableName) RawAsyncHBaseAdmin.isTableDisabled(TableName tableName) booleanAdmin.isTableEnabled(TableName tableName) Check if a table is enabled.booleanAdminOverAsyncAdmin.isTableEnabled(TableName tableName) AsyncAdmin.isTableEnabled(TableName tableName) Check if a table is enabled.AsyncHBaseAdmin.isTableEnabled(TableName tableName) RawAsyncHBaseAdmin.isTableEnabled(TableName tableName) private voidRawAsyncHBaseAdmin.legacyFlush(CompletableFuture<Void> future, TableName tableName, List<byte[]> columnFamilies) private voidAsyncNonMetaRegionLocator.locateInMeta(TableName tableName, AsyncNonMetaRegionLocator.LocateRequest req) voidAdmin.majorCompact(TableName tableName) Major compact a table.voidAdmin.majorCompact(TableName tableName, byte[] columnFamily) Major compact a column family within a table.voidAdmin.majorCompact(TableName tableName, byte[] columnFamily, CompactType compactType) Major compact a column family within a table.voidAdmin.majorCompact(TableName tableName, CompactType compactType) Major compact a table.voidAdminOverAsyncAdmin.majorCompact(TableName tableName) voidAdminOverAsyncAdmin.majorCompact(TableName tableName, byte[] columnFamily) voidAdminOverAsyncAdmin.majorCompact(TableName tableName, byte[] columnFamily, CompactType compactType) voidAdminOverAsyncAdmin.majorCompact(TableName tableName, CompactType compactType) default CompletableFuture<Void>AsyncAdmin.majorCompact(TableName tableName) Major compact a table.default CompletableFuture<Void>AsyncAdmin.majorCompact(TableName tableName, byte[] columnFamily) Major compact a column family within a table.AsyncAdmin.majorCompact(TableName tableName, byte[] columnFamily, CompactType compactType) Major compact a column family within a table.AsyncAdmin.majorCompact(TableName tableName, CompactType compactType) Major compact a table.AsyncHBaseAdmin.majorCompact(TableName tableName, byte[] columnFamily, 
CompactType compactType) AsyncHBaseAdmin.majorCompact(TableName tableName, CompactType compactType) RawAsyncHBaseAdmin.majorCompact(TableName tableName, byte[] columnFamily, CompactType compactType) RawAsyncHBaseAdmin.majorCompact(TableName tableName, CompactType compactType) default voidAdmin.modifyColumnFamily(TableName tableName, ColumnFamilyDescriptor columnFamily) Modify an existing column family on a table.AsyncAdmin.modifyColumnFamily(TableName tableName, ColumnFamilyDescriptor columnFamily) Modify an existing column family on a table.AsyncHBaseAdmin.modifyColumnFamily(TableName tableName, ColumnFamilyDescriptor columnFamily) RawAsyncHBaseAdmin.modifyColumnFamily(TableName tableName, ColumnFamilyDescriptor columnFamily) Admin.modifyColumnFamilyAsync(TableName tableName, ColumnFamilyDescriptor columnFamily) Modify an existing column family on a table.AdminOverAsyncAdmin.modifyColumnFamilyAsync(TableName tableName, ColumnFamilyDescriptor columnFamily) default voidAdmin.modifyColumnFamilyStoreFileTracker(TableName tableName, byte[] family, String dstSFT) Change the store file tracker of the given table's given family.AsyncAdmin.modifyColumnFamilyStoreFileTracker(TableName tableName, byte[] family, String dstSFT) Change the store file tracker of the given table's given family.AsyncHBaseAdmin.modifyColumnFamilyStoreFileTracker(TableName tableName, byte[] family, String dstSFT) RawAsyncHBaseAdmin.modifyColumnFamilyStoreFileTracker(TableName tableName, byte[] family, String dstSFT) Admin.modifyColumnFamilyStoreFileTrackerAsync(TableName tableName, byte[] family, String dstSFT) Change the store file tracker of the given table's given family.AdminOverAsyncAdmin.modifyColumnFamilyStoreFileTrackerAsync(TableName tableName, byte[] family, String dstSFT) default voidAdmin.modifyTableStoreFileTracker(TableName tableName, String dstSFT) Change the store file tracker of the given table.AsyncAdmin.modifyTableStoreFileTracker(TableName tableName, String dstSFT) Change the store file tracker of the given table.AsyncHBaseAdmin.modifyTableStoreFileTracker(TableName tableName, String dstSFT) RawAsyncHBaseAdmin.modifyTableStoreFileTracker(TableName tableName, String dstSFT) Admin.modifyTableStoreFileTrackerAsync(TableName tableName, String dstSFT) Change the store file tracker of the given table.AdminOverAsyncAdmin.modifyTableStoreFileTrackerAsync(TableName tableName, String dstSFT) static RegionInfoBuilderRegionInfoBuilder.newBuilder(TableName tableName) static TableDescriptorBuilderTableDescriptorBuilder.newBuilder(TableName name) private booleanAsyncNonMetaRegionLocator.onScanNext(TableName tableName, AsyncNonMetaRegionLocator.LocateRequest req, Result result) static TableStateAsyncClusterConnection.prepareBulkLoad(TableName tableName) Return the token for this bulk load.AsyncClusterConnectionImpl.prepareBulkLoad(TableName tableName) private <PREQ,PRESP, PRES>
CompletableFuture<PRES>RawAsyncHBaseAdmin.procedureCall(TableName tableName, PREQ preq, RawAsyncHBaseAdmin.MasterRpcCall<PRESP, PREQ> rpcCall, RawAsyncHBaseAdmin.Converter<Long, PRESP> respConverter, RawAsyncHBaseAdmin.Converter<PRES, org.apache.hbase.thirdparty.com.google.protobuf.ByteString> resultConverter, RawAsyncHBaseAdmin.ProcedureBiConsumer<PRES> consumer) short-circuit call for procedureCall(Consumer, Object, MasterRpcCall, Converter, Converter, ProcedureBiConsumer) by skip setting priority for requestprivate <PREQ,PRESP>
CompletableFuture<Void>RawAsyncHBaseAdmin.procedureCall(TableName tableName, PREQ preq, RawAsyncHBaseAdmin.MasterRpcCall<PRESP, PREQ> rpcCall, RawAsyncHBaseAdmin.Converter<Long, PRESP> respConverter, RawAsyncHBaseAdmin.ProcedureBiConsumer<Void> consumer) short-circuit call for procedureCall(TableName, Object, MasterRpcCall, Converter, Converter, ProcedureBiConsumer) by ignoring procedure result(package private) static voidConnectionUtils.resetController(HBaseRpcController controller, long timeoutNs, int priority, TableName tableName) private CompletableFuture<Void>RawAsyncHBaseAdmin.restoreSnapshot(String snapshotName, TableName tableName, boolean takeFailSafeSnapshot, boolean restoreAcl) intCatalogReplicaLoadBalanceSelector.select(TableName tablename, byte[] row, RegionLocateType locateType) Select a catalog replica region where client go to loop up the input row key.intCatalogReplicaLoadBalanceSimpleSelector.select(TableName tableName, byte[] row, RegionLocateType locateType) When it looks up a location, it will call this method to find a replica region to go.private CompletableFuture<Void>RawAsyncHBaseAdmin.setTableReplication(TableName tableName, boolean enableRep) Set the table's replication switch if the table's replication switch is already not set.default voidTake a snapshot for the given table.default voidCreate typed snapshot of the table.default voidAdmin.snapshot(String snapshotName, TableName tableName, SnapshotType type) Create typed snapshot of the table.default voidAdmin.snapshot(String snapshotName, TableName tableName, SnapshotType type, Map<String, Object> snapshotProps) Create typed snapshot of the table.default CompletableFuture<Void>Take a snapshot for the given table.default CompletableFuture<Void>AsyncAdmin.snapshot(String snapshotName, TableName tableName, SnapshotType type) Create typed snapshot of the table.voidSplit a table.voidSplit a table.voidvoidSplit a table.Split a table.booleanAdmin.tableExists(TableName tableName) Check if a table exists.booleanAdminOverAsyncAdmin.tableExists(TableName tableName) AsyncAdmin.tableExists(TableName tableName) Check if a table exists.AsyncHBaseAdmin.tableExists(TableName tableName) RawAsyncHBaseAdmin.tableExists(TableName tableName) (package private) static <T> CompletableFuture<T>ConnectionUtils.timelineConsistentRead(AsyncRegionLocator locator, TableName tableName, Query query, byte[] row, RegionLocateType locateType, Function<Integer, CompletableFuture<T>> requestReplica, long rpcTimeoutNs, long primaryCallTimeoutNs, org.apache.hbase.thirdparty.io.netty.util.Timer retryTimer, Optional<MetricsConnection> metrics) default voidAdmin.truncateTable(TableName tableName, boolean preserveSplits) Truncate a table.AsyncAdmin.truncateTable(TableName tableName, boolean preserveSplits) Truncate a table.AsyncHBaseAdmin.truncateTable(TableName tableName, boolean preserveSplits) RawAsyncHBaseAdmin.truncateTable(TableName tableName, boolean preserveSplits) Admin.truncateTableAsync(TableName tableName, boolean preserveSplits) Truncate the table but does not block and wait for it to be completely enabled.AdminOverAsyncAdmin.truncateTableAsync(TableName tableName, boolean preserveSplits) private CompletableFuture<Void>RawAsyncHBaseAdmin.trySyncTableToPeerCluster(TableName tableName, byte[][] splits, ReplicationPeerDescription peer) voidMetricsConnection.updateRpc(org.apache.hbase.thirdparty.com.google.protobuf.Descriptors.MethodDescriptor method, TableName tableName, org.apache.hbase.thirdparty.com.google.protobuf.Message param, 
MetricsConnection.CallStats stats, Throwable e) Report RPC context to metrics system.private voidMetricsConnection.updateTableMetric(String methodName, TableName tableName, MetricsConnection.CallStats stats, Throwable e) Report table rpc context to metrics system.Method parameters in org.apache.hadoop.hbase.client with type arguments of type TableNameModifier and TypeMethodDescriptiondefault voidAppend the replicable table column family config from the specified peer.Append the replicable table-cf config of the specified peerprivate voidRawAsyncHBaseAdmin.checkAndGetTableName(byte[] encodeRegionName, AtomicReference<TableName> tableName, CompletableFuture<TableName> result) private voidRawAsyncHBaseAdmin.checkAndGetTableName(byte[] encodeRegionName, AtomicReference<TableName> tableName, CompletableFuture<TableName> result) private voidAsyncNonMetaRegionLocator.invalidateCache(CompletableFuture<Void> future, Iterator<TableName> tbnIter, AsyncAdmin admin) Admin.listTableDescriptors(List<TableName> tableNames) Get tableDescriptors.AdminOverAsyncAdmin.listTableDescriptors(List<TableName> tableNames) AsyncAdmin.listTableDescriptors(List<TableName> tableNames) List specific tables including system tables.AsyncHBaseAdmin.listTableDescriptors(List<TableName> tableNames) RawAsyncHBaseAdmin.listTableDescriptors(List<TableName> tableNames) default voidRemove some table-cfs from config of the specified peer.Remove some table-cfs from config of the specified peervoidAdmin.setRSGroup(Set<TableName> tables, String groupName) Set the RegionServer group for tablesvoidAdminOverAsyncAdmin.setRSGroup(Set<TableName> tables, String groupName) AsyncAdmin.setRSGroup(Set<TableName> tables, String groupName) Set the RegionServer group for tablesAsyncHBaseAdmin.setRSGroup(Set<TableName> tables, String groupName) RawAsyncHBaseAdmin.setRSGroup(Set<TableName> tables, String groupName) NormalizeTableFilterParams.Builder.tableNames(List<TableName> tableNames) Constructors in org.apache.hadoop.hbase.client with parameters of type TableNameModifierConstructorDescription(package private)AddColumnFamilyProcedureBiConsumer(TableName tableName) AsyncBatchRpcRetryingCaller(org.apache.hbase.thirdparty.io.netty.util.Timer retryTimer, AsyncConnectionImpl conn, TableName tableName, List<? 
extends Row> actions, long pauseNs, long pauseNsForServerOverloaded, int maxAttempts, long operationTimeoutNs, long rpcTimeoutNs, int startLogErrorsCnt, Map<String, byte[]> requestAttributes) AsyncClientScanner(Scan scan, AdvancedScanResultConsumer consumer, TableName tableName, AsyncConnectionImpl conn, org.apache.hbase.thirdparty.io.netty.util.Timer retryTimer, long pauseNs, long pauseNsForServerOverloaded, int maxAttempts, long scanTimeoutNs, long rpcTimeoutNs, int startLogErrorsCnt, Map<String, byte[]> requestAttributes) AsyncRegionLocationCache(TableName tableName) AsyncSingleRequestRpcRetryingCaller(org.apache.hbase.thirdparty.io.netty.util.Timer retryTimer, AsyncConnectionImpl conn, TableName tableName, byte[] row, int replicaId, RegionLocateType locateType, AsyncSingleRequestRpcRetryingCaller.Callable<T> callable, int priority, long pauseNs, long pauseNsForServerOverloaded, int maxAttempts, long operationTimeoutNs, long rpcTimeoutNs, int startLogErrorsCnt, Map<String, byte[]> requestAttributes) (package private)AsyncTableBuilderBase(TableName tableName, AsyncConnectionConfiguration connConf) AsyncTableRegionLocatorImpl(TableName tableName, AsyncConnectionImpl conn) AsyncTableResultScanner(TableName tableName, Scan scan, long maxCacheSize) BufferedMutatorParams(TableName tableName) (package private)CatalogReplicaLoadBalanceSimpleSelector(TableName tableName, AsyncConnectionImpl conn, IntSupplier getNumOfReplicas) (package private)CreateTableProcedureBiConsumer(TableName tableName) (package private)DeleteColumnFamilyProcedureBiConsumer(TableName tableName) (package private)DeleteTableProcedureBiConsumer(TableName tableName) (package private)DisableTableProcedureBiConsumer(TableName tableName) (package private)EnableTableProcedureBiConsumer(TableName tableName) (package private)FlushTableProcedureBiConsumer(TableName tableName) (package private)MergeTableRegionProcedureBiConsumer(TableName tableName) privateConstruct a table descriptor specifying a TableName objectprivateModifyableTableDescriptor(TableName name, Collection<ColumnFamilyDescriptor> families, Map<Bytes, Bytes> values) privateModifyableTableDescriptor(TableName name, TableDescriptor desc) Construct a table descriptor by cloning the descriptor passed as a parameter.(package private)ModifyColumnFamilyProcedureBiConsumer(TableName tableName) (package private)(package private)ModifyTableProcedureBiConsumer(AsyncAdmin admin, TableName tableName) (package private)ModifyTableStoreFileTrackerProcedureBiConsumer(AsyncAdmin admin, TableName tableName) (package private)MutableRegionInfo(long regionId, TableName tableName, int replicaId) Package private constructor used constructing MutableRegionInfo for the first meta regions(package private)MutableRegionInfo(TableName tableName, byte[] startKey, byte[] endKey, boolean split, long regionId, int replicaId, boolean offLine) (package private)RegionCoprocessorRpcChannel(AsyncConnectionImpl conn, TableName tableName, RegionInfo region, byte[] row, long rpcTimeoutNs, long operationTimeoutNs) (package private)RegionCoprocessorRpcChannelImpl(AsyncConnectionImpl conn, TableName tableName, RegionInfo region, byte[] row, long rpcTimeoutNs, long operationTimeoutNs) privateRegionInfoBuilder(TableName tableName) SnapshotDescription(String name, TableName table) SnapshotDescription(String name, TableName table, SnapshotType type) SnapshotDescription(String name, TableName table, SnapshotType type, String owner) SnapshotDescription(String name, TableName table, SnapshotType type, String owner, long 
creationTime, int version) Deprecated.since 2.3.0 and will be removed in 4.0.0.SnapshotDescription(String name, TableName table, SnapshotType type, String owner, long creationTime, int version, Map<String, Object> snapshotProps) SnapshotDescription Parameterized ConstructorSnapshotDescription(String snapshotName, TableName tableName, SnapshotType type, Map<String, Object> snapshotProps) SnapshotDescription Parameterized Constructor(package private)SnapshotProcedureBiConsumer(TableName tableName) (package private)SplitTableRegionProcedureBiConsumer(TableName tableName) (package private)TableBuilderBase(TableName tableName, ConnectionConfiguration connConf) TableCache(TableName tableName) private(package private)TableProcedureBiConsumer(TableName tableName) TableState(TableName tableName, TableState.State state) Create instance of TableState.(package private)TruncateRegionProcedureBiConsumer(TableName tableName) (package private)TruncateTableProcedureBiConsumer(TableName tableName) Constructor parameters in org.apache.hadoop.hbase.client with type arguments of type TableNameModifierConstructorDescriptionprivateNormalizeTableFilterParams(List<TableName> tableNames, String regex, String namespace) -
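Most of the methods listed above take a TableName as the handle for a table-level operation. Below is a minimal sketch of the synchronous Admin flavor, assuming a reachable cluster and a hypothetical table named "demo"; the class name, table name and the disable-then-delete sequence are illustrative only, not part of this listing.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;

  public final class TableNameAdminSketch {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      // "demo" is a made-up table name; substitute a real one.
      TableName table = TableName.valueOf("demo");
      try (Connection conn = ConnectionFactory.createConnection(conf);
           Admin admin = conn.getAdmin()) {
        if (admin.tableExists(table)) {
          // Inspect the table before touching it.
          System.out.println("descriptor: " + admin.getDescriptor(table));
          System.out.println("regions: " + admin.getRegions(table).size());
          // Classic synchronous pattern: disable first, then delete.
          if (admin.isTableEnabled(table)) {
            admin.disableTable(table);
          }
          admin.deleteTable(table);
        }
      }
    }
  }

The AsyncAdmin variants listed above mirror these calls but return CompletableFuture results instead of blocking.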
Uses of TableName in org.apache.hadoop.hbase.client.example
Fields in org.apache.hadoop.hbase.client.example declared as TableName
private static final TableName  BufferedMutatorExample.TABLE
private final TableName  MultiThreadedClientExample.ReadExampleCallable.tableName
private final TableName  MultiThreadedClientExample.SingleWriteExampleCallable.tableName
private final TableName  MultiThreadedClientExample.WriteExampleCallable.tableName
Methods in org.apache.hadoop.hbase.client.example with parameters of type TableName
void  RefreshHFilesClient.refreshHFiles(TableName tableName)
private void  MultiThreadedClientExample.warmUpConnectionCache(Connection connection, TableName tn)
Constructors in org.apache.hadoop.hbase.client.example with parameters of type TableName
ReadExampleCallable(Connection connection, TableName tableName)
SingleWriteExampleCallable(Connection connection, TableName tableName)
WriteExampleCallable(Connection connection, TableName tableName)
-
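The example classes above buffer writes against a TableName through a BufferedMutator. Below is a minimal sketch along the lines of BufferedMutatorExample, assuming a hypothetical table "demo" with a column family "cf"; both names are placeholders.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.BufferedMutator;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.util.Bytes;

  public final class BufferedMutatorSketch {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      TableName table = TableName.valueOf("demo");   // hypothetical table
      byte[] family = Bytes.toBytes("cf");           // hypothetical column family
      try (Connection conn = ConnectionFactory.createConnection(conf);
           BufferedMutator mutator = conn.getBufferedMutator(table)) {
        for (int i = 0; i < 100; i++) {
          Put put = new Put(Bytes.toBytes("row-" + i));
          put.addColumn(family, Bytes.toBytes("q"), Bytes.toBytes("value-" + i));
          mutator.mutate(put);   // buffered on the client side
        }
        mutator.flush();         // push any remaining buffered writes
      }
    }
  }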
Uses of TableName in org.apache.hadoop.hbase.client.locking
Methods in org.apache.hadoop.hbase.client.locking with parameters of type TableName
static org.apache.hadoop.hbase.shaded.protobuf.generated.LockServiceProtos.LockRequest  LockServiceClient.buildLockRequest(org.apache.hadoop.hbase.shaded.protobuf.generated.LockServiceProtos.LockType type, String namespace, TableName tableName, List<RegionInfo> regionInfos, String description, long nonceGroup, long nonce)
LockServiceClient.tableLock(TableName tableName, boolean exclusive, String description, Abortable abort)
    Create a new EntityLock object to acquire an exclusive or shared lock on a table.
-
Uses of TableName in org.apache.hadoop.hbase.client.replication
Fields in org.apache.hadoop.hbase.client.replication declared as TableNameMethods in org.apache.hadoop.hbase.client.replication that return TableNameMethods in org.apache.hadoop.hbase.client.replication that return types with arguments of type TableNameModifier and TypeMethodDescriptionReplicationPeerConfigUtil.convert2Map(org.apache.hadoop.hbase.shaded.protobuf.generated.ReplicationProtos.TableCF[] tableCFs) Convert tableCFs Object to Map.ReplicationPeerConfigUtil.copyTableCFsMap(Map<TableName, List<String>> preTableCfs) ReplicationPeerConfigUtil.mergeTableCFs(Map<TableName, List<String>> preTableCfs, Map<TableName, List<String>> tableCfs) ReplicationPeerConfigUtil.parseTableCFsFromConfig(String tableCFsConfig) Convert tableCFs string into Map.Method parameters in org.apache.hadoop.hbase.client.replication with type arguments of type TableNameModifier and TypeMethodDescriptionstatic ReplicationPeerConfigReplicationPeerConfigUtil.appendExcludeTableCFsToReplicationPeerConfig(Map<TableName, List<String>> excludeTableCfs, ReplicationPeerConfig peerConfig) static ReplicationPeerConfigReplicationPeerConfigUtil.appendTableCFsToReplicationPeerConfig(Map<TableName, List<String>> tableCfs, ReplicationPeerConfig peerConfig) static org.apache.hadoop.hbase.shaded.protobuf.generated.ReplicationProtos.TableCF[]ReplicationPeerConfigUtil.convert(Map<TableName, ? extends Collection<String>> tableCfs) convert map to TableCFs Objectstatic StringReplicationPeerConfigUtil.convertToString(Map<TableName, ? extends Collection<String>> tableCfs) ReplicationPeerConfigUtil.copyTableCFsMap(Map<TableName, List<String>> preTableCfs) ReplicationPeerConfigUtil.mergeTableCFs(Map<TableName, List<String>> preTableCfs, Map<TableName, List<String>> tableCfs) static ReplicationPeerConfigReplicationPeerConfigUtil.removeExcludeTableCFsFromReplicationPeerConfig(Map<TableName, List<String>> excludeTableCfs, ReplicationPeerConfig peerConfig, String id) static ReplicationPeerConfigReplicationPeerConfigUtil.removeTableCFsFromReplicationPeerConfig(Map<TableName, List<String>> tableCfs, ReplicationPeerConfig peerConfig, String id) Constructors in org.apache.hadoop.hbase.client.replication with parameters of type TableName -
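These utilities treat per-peer replication scope as a Map keyed by TableName, with the column families to replicate as the values. Below is a small sketch of the string round trip, assuming hypothetical table and family names; it only exercises the conversion helpers, not an actual peer.

  import java.util.Arrays;
  import java.util.HashMap;
  import java.util.List;
  import java.util.Map;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.replication.ReplicationPeerConfigUtil;

  public final class TableCfsSketch {
    public static void main(String[] args) throws Exception {
      // Hypothetical replication scope: two tables with explicit column families.
      Map<TableName, List<String>> tableCfs = new HashMap<>();
      tableCfs.put(TableName.valueOf("demo"), Arrays.asList("cf1", "cf2"));
      tableCfs.put(TableName.valueOf("other"), Arrays.asList("cf"));

      // Serialize to the string form used for table-cfs peer configuration ...
      String asString = ReplicationPeerConfigUtil.convertToString(tableCfs);
      // ... and parse it back into a TableName-keyed map.
      Map<TableName, List<String>> parsed =
          ReplicationPeerConfigUtil.parseTableCFsFromConfig(asString);
      System.out.println(asString + " -> " + parsed);
    }
  }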
Uses of TableName in org.apache.hadoop.hbase.client.trace
Fields in org.apache.hadoop.hbase.client.trace declared as TableName
Methods in org.apache.hadoop.hbase.client.trace with parameters of type TableName
(package private) static void  TableSpanBuilder.populateTableNameAttributes(Map<io.opentelemetry.api.common.AttributeKey<?>, Object> attributes, TableName tableName)
    Static utility method that performs the primary logic of this builder.
TableOperationSpanBuilder.setTableName(TableName tableName)
TableSpanBuilder.setTableName(TableName tableName)
-
Uses of TableName in org.apache.hadoop.hbase.coprocessor
Methods in org.apache.hadoop.hbase.coprocessor with parameters of type TableNameModifier and TypeMethodDescriptiondefault voidMasterObserver.postCompletedDeleteTableAction(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called afterHMasterdeletes a table.default voidMasterObserver.postCompletedDisableTableAction(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called after the disableTable operation has been requested.default voidMasterObserver.postCompletedEnableTableAction(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called after the enableTable operation has been requested.default voidMasterObserver.postCompletedModifyTableAction(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, TableDescriptor oldDescriptor, TableDescriptor currentDescriptor) Called after to modifying a table's properties.default voidMasterObserver.postCompletedTruncateTableAction(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called afterHMastertruncates a table.default voidMasterObserver.postDeleteTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called after the deleteTable operation has been requested.default voidMasterObserver.postDisableTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called after the disableTable operation has been requested.default voidMasterObserver.postEnableTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called after the enableTable operation has been requested.default voidMasterObserver.postGetRSGroupInfoOfTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called after getting region server group info of the passed tableName.default voidMasterObserver.postGetUserPermissions(ObserverContext<MasterCoprocessorEnvironment> ctx, String userName, String namespace, TableName tableName, byte[] family, byte[] qualifier) Called after getting user permissions.default voidMasterObserver.postModifyColumnFamilyStoreFileTracker(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, byte[] family, String dstSFT) Called after modifying a family store file tracker.default voidMasterObserver.postModifyTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, TableDescriptor oldDescriptor, TableDescriptor currentDescriptor) Called after the modifyTable operation has been requested.default voidMasterObserver.postModifyTableStoreFileTracker(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, String dstSFT) Called after modifying a table's store file tracker.default voidMasterObserver.postRequestLock(ObserverContext<MasterCoprocessorEnvironment> ctx, String namespace, TableName tableName, RegionInfo[] regionInfos, String description) Called after new LockProcedure is queued.default voidMasterObserver.postSetTableQuota(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, GlobalQuotaSettings quotas) Called after the quota for the table is stored.default voidMasterObserver.postSetUserQuota(ObserverContext<MasterCoprocessorEnvironment> ctx, String userName, TableName tableName, GlobalQuotaSettings quotas) Called after the quota for the user on the specified table is stored.default voidMasterObserver.postTableFlush(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called after the table memstore is flushed to disk.default 
voidMasterObserver.postTruncateTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called after the truncateTable operation has been requested.default voidMasterObserver.preDeleteTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called beforeHMasterdeletes a table.default voidMasterObserver.preDeleteTableAction(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called beforeHMasterdeletes a table.default voidMasterObserver.preDisableTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called prior to disabling a table.default voidMasterObserver.preDisableTableAction(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called prior to disabling a table.default voidMasterObserver.preEnableTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called prior to enabling a table.default voidMasterObserver.preEnableTableAction(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called prior to enabling a table.default voidMasterObserver.preGetRSGroupInfoOfTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called before getting region server group info of the passed tableName.default voidMasterObserver.preGetUserPermissions(ObserverContext<MasterCoprocessorEnvironment> ctx, String userName, String namespace, TableName tableName, byte[] family, byte[] qualifier) Called before getting user permissions.default voidMasterObserver.preLockHeartbeat(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tn, String description) Called before heartbeat to a lock.default StringMasterObserver.preModifyColumnFamilyStoreFileTracker(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, byte[] family, String dstSFT) Called prior to modifying a family's store file tracker.default TableDescriptorMasterObserver.preModifyTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, TableDescriptor currentDescriptor, TableDescriptor newDescriptor) Called prior to modifying a table's properties.default voidMasterObserver.preModifyTableAction(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, TableDescriptor currentDescriptor, TableDescriptor newDescriptor) Called prior to modifying a table's properties.default StringMasterObserver.preModifyTableStoreFileTracker(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, String dstSFT) Called prior to modifying a table's store file tracker.default voidMasterObserver.preRequestLock(ObserverContext<MasterCoprocessorEnvironment> ctx, String namespace, TableName tableName, RegionInfo[] regionInfos, String description) Called before new LockProcedure is queued.default voidMasterObserver.preSetTableQuota(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, GlobalQuotaSettings quotas) Called before the quota for the table is stored.default voidMasterObserver.preSetUserQuota(ObserverContext<MasterCoprocessorEnvironment> ctx, String userName, TableName tableName, GlobalQuotaSettings quotas) Called before the quota for the user on the specified table is stored.default voidMasterObserver.preSplitRegion(ObserverContext<MasterCoprocessorEnvironment> c, TableName tableName, byte[] splitRow) Called before the split region procedure is called.default voidMasterObserver.preSplitRegionAction(ObserverContext<MasterCoprocessorEnvironment> c, TableName tableName, byte[] splitRow) Called before the region is 
split.default voidMasterObserver.preTableFlush(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called before the table memstore is flushed to disk.default voidMasterObserver.preTruncateTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called beforeHMastertruncates a table.default voidMasterObserver.preTruncateTableAction(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called beforeHMastertruncates a table.Method parameters in org.apache.hadoop.hbase.coprocessor with type arguments of type TableNameModifier and TypeMethodDescriptiondefault voidMasterObserver.postGetTableDescriptors(ObserverContext<MasterCoprocessorEnvironment> ctx, List<TableName> tableNamesList, List<TableDescriptor> descriptors, String regex) Called after a getTableDescriptors request has been processed.default voidMasterObserver.postMoveTables(ObserverContext<MasterCoprocessorEnvironment> ctx, Set<TableName> tables, String targetGroup) Called after servers are moved to target region server groupdefault voidMasterObserver.preGetTableDescriptors(ObserverContext<MasterCoprocessorEnvironment> ctx, List<TableName> tableNamesList, List<TableDescriptor> descriptors, String regex) Called before a getTableDescriptors request has been processed.default voidMasterObserver.preMoveTables(ObserverContext<MasterCoprocessorEnvironment> ctx, Set<TableName> tables, String targetGroup) Called before tables are moved to target region server group -
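Each MasterObserver hook above receives the affected TableName, so a coprocessor can react per table. Below is a minimal sketch of an observer that blocks deletion of one hypothetical table; the class name and the protected table are made up for illustration.

  import java.io.IOException;
  import java.util.Optional;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.coprocessor.MasterCoprocessor;
  import org.apache.hadoop.hbase.coprocessor.MasterCoprocessorEnvironment;
  import org.apache.hadoop.hbase.coprocessor.MasterObserver;
  import org.apache.hadoop.hbase.coprocessor.ObserverContext;

  /** Hypothetical observer that vetoes deletion of one protected table. */
  public class ProtectTableObserver implements MasterCoprocessor, MasterObserver {

    private static final TableName PROTECTED = TableName.valueOf("demo"); // assumed table

    @Override
    public Optional<MasterObserver> getMasterObserver() {
      return Optional.of(this);
    }

    @Override
    public void preDeleteTable(ObserverContext<MasterCoprocessorEnvironment> ctx,
        TableName tableName) throws IOException {
      if (PROTECTED.equals(tableName)) {
        throw new IOException("Deletion of " + tableName + " is blocked by coprocessor");
      }
    }

    @Override
    public void postDeleteTable(ObserverContext<MasterCoprocessorEnvironment> ctx,
        TableName tableName) throws IOException {
      // Runs after the deleteTable request has been processed.
    }
  }

Such an observer would typically be registered on the master, for example through the hbase.coprocessor.master.classes configuration property.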
Uses of TableName in org.apache.hadoop.hbase.coprocessor.example
Methods in org.apache.hadoop.hbase.coprocessor.example with parameters of type TableName
void  ExampleMasterObserverWithMetrics.preDisableTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName)
-
Uses of TableName in org.apache.hadoop.hbase.coprocessor.example.row.stats
Fields in org.apache.hadoop.hbase.coprocessor.example.row.stats with type parameters of type TableName
private static final ConcurrentMap<TableName,Long>  RowStatisticsCompactionObserver.TABLE_COUNTERS
-
Uses of TableName in org.apache.hadoop.hbase.coprocessor.example.row.stats.utils
Fields in org.apache.hadoop.hbase.coprocessor.example.row.stats.utils declared as TableName
static final TableName  RowStatisticsTableUtil.NAMESPACED_TABLE_NAME
-
Uses of TableName in org.apache.hadoop.hbase.favored
Methods in org.apache.hadoop.hbase.favored with parameters of type TableName
protected List<RegionPlan>  FavoredNodeLoadBalancer.balanceTable(TableName tableName, Map<ServerName, List<RegionInfo>> loadOfOneTable)
-
Uses of TableName in org.apache.hadoop.hbase.fs
Methods in org.apache.hadoop.hbase.fs with parameters of type TableName
static void  ErasureCodingUtils.setPolicy(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path rootDir, TableName tableName, String policy)
    Sets the EC policy on the table directory for the specified table.
static void  ErasureCodingUtils.unsetPolicy(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path rootDir, TableName tableName)
    Unsets any EC policy specified on the path.
-
Uses of TableName in org.apache.hadoop.hbase.io
Methods in org.apache.hadoop.hbase.io that return TableName
static TableName  HFileLink.getReferencedTableName(String fileName)
    Get the Table name of the referenced link.
Methods in org.apache.hadoop.hbase.io that return types with arguments of type TableName
Methods in org.apache.hadoop.hbase.io with parameters of type TableName
static HFileLink  HFileLink.build(org.apache.hadoop.conf.Configuration conf, TableName table, String region, String family, String hfile)
    Create an HFileLink instance from table/region/family/hfile location.
static String  HFileLink.createHFileLinkName(TableName tableName, String regionName, String hfileName)
    Create a new HFileLink name.
static org.apache.hadoop.fs.Path  HFileLink.createPath(TableName table, String region, String family, String hfile)
    Create an HFileLink relative path for the table/region/family/hfile location.
-
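HFileLink names encode the referenced table, region and store file, so a TableName can be recovered from the link name alone. Below is a small sketch of that round trip, assuming made-up region and file identifiers; whether a particular string is accepted depends on the link-name format, so treat this strictly as an illustration.

  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.io.HFileLink;

  public final class HFileLinkNameSketch {
    public static void main(String[] args) throws Exception {
      // All identifiers below are made up for illustration.
      TableName table = TableName.valueOf("demo");
      String encodedRegion = "0123456789abcdef0123456789abcdef";
      String hfile = "abcdef0123456789abcdef0123456789";

      // Derive the link file name used inside another table's family directory.
      String linkName = HFileLink.createHFileLinkName(table, encodedRegion, hfile);

      // The referenced table can be recovered from the link name itself.
      TableName referenced = HFileLink.getReferencedTableName(linkName);
      System.out.println(linkName + " points back to " + referenced);
    }
  }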
Uses of TableName in org.apache.hadoop.hbase.io.hfile
Methods in org.apache.hadoop.hbase.io.hfile with parameters of type TableName -
Uses of TableName in org.apache.hadoop.hbase.ipc
Fields in org.apache.hadoop.hbase.ipc declared as TableName
Methods in org.apache.hadoop.hbase.ipc that return TableName
DelegatingHBaseRpcController.getTableName()
default TableName  HBaseRpcController.getTableName()
    Returns Region's table name or null if not available or pertinent.
HBaseRpcControllerImpl.getTableName()
Methods in org.apache.hadoop.hbase.ipc with parameters of type TableName
void  DelegatingHBaseRpcController.setPriority(TableName tn)
void  HBaseRpcController.setPriority(TableName tn)
    Set the priority for this operation.
void  HBaseRpcControllerImpl.setPriority(TableName tn)
void  DelegatingHBaseRpcController.setTableName(TableName tableName)
default void  HBaseRpcController.setTableName(TableName tableName)
    Sets Region's table name.
void  HBaseRpcControllerImpl.setTableName(TableName tableName)
-
Uses of TableName in org.apache.hadoop.hbase.mapred
Fields in org.apache.hadoop.hbase.mapred declared as TableName
Methods in org.apache.hadoop.hbase.mapred that return TableName
Methods in org.apache.hadoop.hbase.mapred with parameters of type TableName
private static int  TableMapReduceUtil.getRegionCount(org.apache.hadoop.conf.Configuration conf, TableName tableName)
protected void  TableInputFormatBase.initializeTable(Connection connection, TableName tableName)
    Allows subclasses to initialize the table information.
Constructors in org.apache.hadoop.hbase.mapred with parameters of type TableName
TableSplit(TableName tableName, byte[] startRow, byte[] endRow, String location)
    Constructor
-
Uses of TableName in org.apache.hadoop.hbase.mapreduce
Fields in org.apache.hadoop.hbase.mapreduce declared as TableNameFields in org.apache.hadoop.hbase.mapreduce with type parameters of type TableNameMethods in org.apache.hadoop.hbase.mapreduce that return TableNameMethods in org.apache.hadoop.hbase.mapreduce that return types with arguments of type TableNameModifier and TypeMethodDescriptionExportUtils.getArgumentsFromCommandLine(org.apache.hadoop.conf.Configuration conf, String[] args) WALPlayer.getTableNameList(String[] tables) Methods in org.apache.hadoop.hbase.mapreduce with parameters of type TableNameModifier and TypeMethodDescriptionstatic voidTableInputFormat.configureSplitTable(org.apache.hadoop.mapreduce.Job job, TableName tableName) Sets split table in map-reduce job.private static voidImportTsv.createTable(Admin admin, TableName tableName, String[] columns) private static intTableMapReduceUtil.getRegionCount(org.apache.hadoop.conf.Configuration conf, TableName tableName) private static RegionLocatorWALPlayer.getRegionLocator(TableName tableName, org.apache.hadoop.conf.Configuration conf, Connection conn) protected voidTableInputFormatBase.initializeTable(Connection connection, TableName tableName) Allows subclasses to initialize the table information.static voidTableMapReduceUtil.initTableMapperJob(TableName table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job) Use this before submitting a TableMap job.Constructors in org.apache.hadoop.hbase.mapreduce with parameters of type TableNameModifierConstructorDescriptionTableSplit(TableName tableName, byte[] startRow, byte[] endRow, String location) Creates a new instance without a scanner.TableSplit(TableName tableName, byte[] startRow, byte[] endRow, String location, long length) Creates a new instance without a scanner.TableSplit(TableName tableName, Scan scan, byte[] startRow, byte[] endRow, String location) Creates a new instance while assigning all variables.TableSplit(TableName tableName, Scan scan, byte[] startRow, byte[] endRow, String location, long length) Creates a new instance while assigning all variables.TableSplit(TableName tableName, Scan scan, byte[] startRow, byte[] endRow, String location, String encodedRegionName, long length) Creates a new instance while assigning all variables. -
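TableMapReduceUtil.initTableMapperJob is the usual entry point for wiring a Scan over a TableName into a MapReduce job. Below is a sketch of a trivial row-counting job, assuming a hypothetical table "demo"; the mapper, job name and output format choice are illustrative, not prescribed by this listing.

  import java.io.IOException;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
  import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
  import org.apache.hadoop.hbase.mapreduce.TableMapper;
  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

  public final class RowCountJobSketch {

    /** Hypothetical mapper that emits one count per scanned row. */
    static class RowCounterMapper extends TableMapper<Text, IntWritable> {
      @Override
      protected void map(ImmutableBytesWritable rowKey, Result columns, Context context)
          throws IOException, InterruptedException {
        context.write(new Text(rowKey.toString()), new IntWritable(1));
      }
    }

    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      Job job = Job.getInstance(conf, "rowcount-sketch");
      job.setJarByClass(RowCountJobSketch.class);

      Scan scan = new Scan();
      scan.setCaching(500);        // larger scanner caching for batch scans
      scan.setCacheBlocks(false);  // avoid polluting the block cache from MR

      // The TableName overload wires the scan over the "demo" table into the mapper.
      TableMapReduceUtil.initTableMapperJob(TableName.valueOf("demo"), scan,
          RowCounterMapper.class, Text.class, IntWritable.class, job);

      job.setOutputFormatClass(NullOutputFormat.class);
      System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
  }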
Uses of TableName in org.apache.hadoop.hbase.master
Fields in org.apache.hadoop.hbase.master declared as TableNameFields in org.apache.hadoop.hbase.master with type parameters of type TableNameModifier and TypeFieldDescriptionSnapshotOfRegionAssignmentFromMeta.disabledTablesprivate Map<TableName,AtomicInteger> HMaster.mobCompactionStatesprivate final ConcurrentMap<TableName,TableState.State> TableStateManager.tableName2Stateprivate final Map<TableName,List<RegionInfo>> SnapshotOfRegionAssignmentFromMeta.tableToRegionMapthe table name to region mapRegionPlacementMaintainer.targetTableSetprivate final IdReadWriteLock<TableName>TableStateManager.tnLockMethods in org.apache.hadoop.hbase.master that return types with arguments of type TableNameModifier and TypeMethodDescriptionRegionPlacementMaintainer.getRegionsMovement(FavoredNodesPlan newPlan) Return how many regions will move per table since their primary RS will changeSnapshotOfRegionAssignmentFromMeta.getTableSet()Get the table setTableStateManager.getTablesInStates(TableState.State... states) Return all tables in given states.SnapshotOfRegionAssignmentFromMeta.getTableToRegionMap()Get regions for tablesRegionsRecoveryChore.getTableToRegionsByRefCount(Map<ServerName, ServerMetrics> serverMetricsMap) HMaster.listTableNames(String namespace, String regex, boolean includeSysTables) Returns the list of table names that match the specified requestHMaster.listTableNamesByNamespace(String name) MasterServices.listTableNamesByNamespace(String name) Get list of table names by namespaceMethods in org.apache.hadoop.hbase.master with parameters of type TableNameModifier and TypeMethodDescriptionlongHMaster.addColumn(TableName tableName, ColumnFamilyDescriptor column, long nonceGroup, long nonce) longMasterServices.addColumn(TableName tableName, ColumnFamilyDescriptor column, long nonceGroup, long nonce) Add a new column to an existing tableprivate voidHMaster.checkTableExists(TableName tableName) voidHMaster.checkTableModifiable(TableName tableName) voidMasterServices.checkTableModifiable(TableName tableName) Check table is modifiable; i.e.longHMaster.deleteColumn(TableName tableName, byte[] columnName, long nonceGroup, long nonce) longMasterServices.deleteColumn(TableName tableName, byte[] columnName, long nonceGroup, long nonce) Delete a column from an existing tablelongHMaster.deleteTable(TableName tableName, long nonceGroup, long nonce) longMasterServices.deleteTable(TableName tableName, long nonceGroup, long nonce) Delete a tablelongHMaster.disableTable(TableName tableName, long nonceGroup, long nonce) longMasterServices.disableTable(TableName tableName, long nonceGroup, long nonce) Disable an existing tablelongHMaster.enableTable(TableName tableName, long nonceGroup, long nonce) longMasterServices.enableTable(TableName tableName, long nonceGroup, long nonce) Enable an existing tablevoidAssignmentVerificationReport.fillUp(TableName tableName, SnapshotOfRegionAssignmentFromMeta snapshot, Map<String, Map<String, Float>> regionLocalityMap) voidAssignmentVerificationReport.fillUpDispersion(TableName tableName, SnapshotOfRegionAssignmentFromMeta snapshot, FavoredNodesPlan newPlan) Use this to project the dispersion scoreslongHMaster.flushTable(TableName tableName, List<byte[]> columnFamilies, long nonceGroup, long nonce) longMasterServices.flushTable(TableName tableName, List<byte[]> columnFamilies, long nonceGroup, long nonce) Flush an existing tableprivate voidRegionPlacementMaintainer.genAssignmentPlan(TableName tableName, SnapshotOfRegionAssignmentFromMeta assignmentSnapshot, Map<String, 
Map<String, Float>> regionLocalityMap, FavoredNodesPlan plan, boolean munkresForSecondaryAndTertiary) Generate the assignment plan for the existing tableHMaster.getCompactionState(TableName tableName) Get the compaction state of the tablelongHMaster.getLastMajorCompactionTimestamp(TableName table) longMasterServices.getLastMajorCompactionTimestamp(TableName table) org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.GetRegionInfoResponse.CompactionStateHMaster.getMobCompactionState(TableName tableName) Gets the mob file compaction state for a specific table.TableStateManager.getTableState(TableName tableName) private static booleanHMaster.isCatalogTable(TableName tableName) booleanTableStateManager.isTablePresent(TableName tableName) booleanTableStateManager.isTableState(TableName tableName, TableState.State... states) longHMaster.modifyColumn(TableName tableName, ColumnFamilyDescriptor descriptor, long nonceGroup, long nonce) longMasterServices.modifyColumn(TableName tableName, ColumnFamilyDescriptor descriptor, long nonceGroup, long nonce) Modify the column descriptor of an existing column in an existing tablelongHMaster.modifyColumnStoreFileTracker(TableName tableName, byte[] family, String dstSFT, long nonceGroup, long nonce) longMasterServices.modifyColumnStoreFileTracker(TableName tableName, byte[] family, String dstSFT, long nonceGroup, long nonce) Modify the store file tracker of an existing column in an existing tablelongHMaster.modifyTable(TableName tableName, TableDescriptor newDescriptor, long nonceGroup, long nonce, boolean reopenRegions) private longHMaster.modifyTable(TableName tableName, HMaster.TableDescriptorGetter newDescriptorGetter, long nonceGroup, long nonce, boolean shouldCheckDescriptor) private longHMaster.modifyTable(TableName tableName, HMaster.TableDescriptorGetter newDescriptorGetter, long nonceGroup, long nonce, boolean shouldCheckDescriptor, boolean reopenRegions) default longMasterServices.modifyTable(TableName tableName, TableDescriptor descriptor, long nonceGroup, long nonce) Modify the descriptor of an existing tablelongMasterServices.modifyTable(TableName tableName, TableDescriptor descriptor, long nonceGroup, long nonce, boolean reopenRegions) Modify the descriptor of an existing tablelongHMaster.modifyTableStoreFileTracker(TableName tableName, String dstSFT, long nonceGroup, long nonce) longMasterServices.modifyTableStoreFileTracker(TableName tableName, String dstSFT, long nonceGroup, long nonce) Modify the store file tracker of an existing tablevoidMasterCoprocessorHost.postCompletedDeleteTableAction(TableName tableName, User user) voidMasterCoprocessorHost.postCompletedDisableTableAction(TableName tableName, User user) voidMasterCoprocessorHost.postCompletedEnableTableAction(TableName tableName, User user) voidMasterCoprocessorHost.postCompletedModifyTableAction(TableName tableName, TableDescriptor oldDescriptor, TableDescriptor currentDescriptor, User user) voidMasterCoprocessorHost.postCompletedTruncateTableAction(TableName tableName, User user) voidMasterCoprocessorHost.postDeleteTable(TableName tableName) voidMasterCoprocessorHost.postDisableTable(TableName tableName) voidMasterCoprocessorHost.postEnableTable(TableName tableName) voidMasterCoprocessorHost.postGetRSGroupInfoOfTable(TableName tableName) voidMasterCoprocessorHost.postGetUserPermissions(String userName, String namespace, TableName tableName, byte[] family, byte[] qualifier) voidMasterCoprocessorHost.postModifyColumnFamilyStoreFileTracker(TableName tableName, byte[] family, 
String dstSFT) voidMasterCoprocessorHost.postModifyTable(TableName tableName, TableDescriptor oldDescriptor, TableDescriptor currentDescriptor) voidMasterCoprocessorHost.postModifyTableStoreFileTracker(TableName tableName, String dstSFT) voidMasterCoprocessorHost.postRequestLock(String namespace, TableName tableName, RegionInfo[] regionInfos, LockType type, String description) voidMasterCoprocessorHost.postSetTableQuota(TableName table, GlobalQuotaSettings quotas) voidMasterCoprocessorHost.postSetUserQuota(String user, TableName table, GlobalQuotaSettings quotas) voidMasterCoprocessorHost.postTableFlush(TableName tableName) voidMasterCoprocessorHost.postTruncateTable(TableName tableName) voidMasterCoprocessorHost.preDeleteTable(TableName tableName) voidMasterCoprocessorHost.preDeleteTableAction(TableName tableName, User user) voidMasterCoprocessorHost.preDisableTable(TableName tableName) voidMasterCoprocessorHost.preDisableTableAction(TableName tableName, User user) voidMasterCoprocessorHost.preEnableTable(TableName tableName) voidMasterCoprocessorHost.preEnableTableAction(TableName tableName, User user) voidMasterCoprocessorHost.preGetRSGroupInfoOfTable(TableName tableName) voidMasterCoprocessorHost.preGetUserPermissions(String userName, String namespace, TableName tableName, byte[] family, byte[] qualifier) MasterCoprocessorHost.preModifyColumnFamilyStoreFileTracker(TableName tableName, byte[] family, String dstSFT) MasterCoprocessorHost.preModifyTable(TableName tableName, TableDescriptor currentDescriptor, TableDescriptor newDescriptor) voidMasterCoprocessorHost.preModifyTableAction(TableName tableName, TableDescriptor currentDescriptor, TableDescriptor newDescriptor, User user) MasterCoprocessorHost.preModifyTableStoreFileTracker(TableName tableName, String dstSFT) voidMasterCoprocessorHost.preRequestLock(String namespace, TableName tableName, RegionInfo[] regionInfos, LockType type, String description) voidMasterCoprocessorHost.preSetTableQuota(TableName table, GlobalQuotaSettings quotas) voidMasterCoprocessorHost.preSetUserQuota(String user, TableName table, GlobalQuotaSettings quotas) voidMasterCoprocessorHost.preSplitRegion(TableName tableName, byte[] splitRow) Invoked just before calling the split region procedurevoidMasterCoprocessorHost.preSplitRegionAction(TableName tableName, byte[] splitRow, User user) Invoked just before a splitvoidMasterCoprocessorHost.preTableFlush(TableName tableName) voidMasterCoprocessorHost.preTruncateTable(TableName tableName) voidMasterCoprocessorHost.preTruncateTableAction(TableName tableName, User user) voidRegionPlacementMaintainer.printDispersionScores(TableName table, SnapshotOfRegionAssignmentFromMeta snapshot, int numRegions, FavoredNodesPlan newPlan, boolean simplePrint) private TableStateTableStateManager.readMetaState(TableName tableName) (package private) longHMaster.reopenRegions(TableName tableName, List<byte[]> regionNames, long nonceGroup, long nonce) Reopen regions provided in the argumentvoidHMaster.reportMobCompactionEnd(TableName tableName) voidHMaster.reportMobCompactionStart(TableName tableName) voidTableStateManager.setDeletedTable(TableName tableName) voidTableStateManager.setTableState(TableName tableName, TableState.State newState) Set table state to provided.longHMaster.truncateTable(TableName tableName, boolean preserveSplits, long nonceGroup, long nonce) longMasterServices.truncateTable(TableName tableName, boolean preserveSplits, long nonceGroup, long nonce) Truncate a tableprivate 
voidTableStateManager.updateMetaState(TableName tableName, TableState.State newState) Method parameters in org.apache.hadoop.hbase.master with type arguments of type TableNameModifier and TypeMethodDescriptionLoadBalancer.balanceCluster(Map<TableName, Map<ServerName, List<RegionInfo>>> loadOfAllTable) Perform the major balance operation for cluster.voidRegionPlacementMaintainer.checkDifferencesWithOldPlan(Map<TableName, Integer> movesPerTable, Map<String, Map<String, Float>> regionLocalityMap, FavoredNodesPlan newPlan) Compares two plans and check whether the locality dropped or increased (prints the information as a string) also prints the baseline localityHMaster.listTableDescriptors(String namespace, String regex, List<TableName> tableNameList, boolean includeSysTables) Returns the list of table descriptors that match the specified requestvoidMasterCoprocessorHost.postGetTableDescriptors(List<TableName> tableNamesList, List<TableDescriptor> descriptors, String regex) voidMasterCoprocessorHost.postMoveTables(Set<TableName> tables, String targetGroup) voidMasterCoprocessorHost.preGetTableDescriptors(List<TableName> tableNamesList, List<TableDescriptor> descriptors, String regex) voidMasterCoprocessorHost.preMoveTables(Set<TableName> tables, String targetGroup) private voidRegionsRecoveryChore.prepareTableToReopenRegionsMap(Map<TableName, List<byte[]>> tableToReopenRegionsMap, byte[] regionName, int regionStoreRefCount) default voidLoadBalancer.updateBalancerLoadInfo(Map<TableName, Map<ServerName, List<RegionInfo>>> loadOfAllTable) In some scenarios, Balancer needs to update internal status or information according to the current tables loadConstructor parameters in org.apache.hadoop.hbase.master with type arguments of type TableNameModifierConstructorDescriptionSnapshotOfRegionAssignmentFromMeta(Connection connection, Set<TableName> disabledTables, boolean excludeOfflinedSplitParents) -
Uses of TableName in org.apache.hadoop.hbase.master.assignment
Methods in org.apache.hadoop.hbase.master.assignment that return TableNameModifier and TypeMethodDescriptionRegionStateNode.getTable()GCMergedRegionsProcedure.getTableName()Deprecated.GCMultipleMergedRegionsProcedure.getTableName()MergeTableRegionsProcedure.getTableName()MoveRegionProcedure.getTableName()Deprecated.RegionRemoteProcedureBase.getTableName()RegionTransitionProcedure.getTableName()Deprecated.Methods in org.apache.hadoop.hbase.master.assignment that return types with arguments of type TableNameModifier and TypeMethodDescriptionRegionStates.getAssignmentsForBalancer(TableStateManager tableStateManager, List<ServerName> onlineServers) This is an EXPENSIVE clone.Methods in org.apache.hadoop.hbase.master.assignment with parameters of type TableNameModifier and TypeMethodDescriptionAssignmentManager.createUnassignProceduresForDisabling(TableName tableName) Called by DisableTableProcedure to unassign all the regions for a table.voidAssignmentManager.deleteTable(TableName tableName) Delete the region states.private TableDescriptorRegionStateStore.getDescriptor(TableName tableName) RegionStates.getRegionByStateOfTable(TableName tableName) RegionStates.getRegionsOfTable(TableName table) Returns Return online regions of table; does not include OFFLINE or SPLITTING regions.private List<RegionInfo>RegionStates.getRegionsOfTable(TableName table, Predicate<RegionStateNode> filter) Returns Return the regions of the table and filter them.RegionStates.getRegionsOfTableForDeleting(TableName table) Get the regions for deleting a table.RegionStates.getRegionsOfTableForEnabling(TableName table) Get the regions for enabling a table.RegionStates.getRegionsOfTableForReopen(TableName tableName) Get the regions to be reopened when modifying a table.private Stream<RegionStateNode>AssignmentManager.getRegionStateNodes(TableName tableName, boolean excludeOfflinedSplitParents) AssignmentManager.getRegionStatesCount(TableName tableName) Provide regions state count for given table.AssignmentManager.getReopenStatus(TableName tableName) Used by the client (via master) to identify if all regions have the schema updatesprivate ScanRegionStateStore.getScanForUpdateRegionReplicas(TableName tableName) AssignmentManager.getTableRegions(TableName tableName, boolean excludeOfflinedSplitParents) AssignmentManager.getTableRegionsAndLocations(TableName tableName, boolean excludeOfflinedSplitParents) (package private) ArrayList<RegionInfo>RegionStates.getTableRegionsInfo(TableName tableName) (package private) List<RegionStateNode>RegionStates.getTableRegionStateNodes(TableName tableName) (package private) ArrayList<RegionState>RegionStates.getTableRegionStates(TableName tableName) private booleanRegionStateStore.hasGlobalReplicationScope(TableName tableName) booleanRegionStates.hasTableRegionStates(TableName tableName) private booleanAssignmentManager.isTableDisabled(TableName tableName) private booleanRegionStates.isTableDisabled(TableStateManager tableStateManager, TableName tableName) private booleanAssignmentManager.isTableEnabled(TableName tableName) intAssignmentManager.numberOfUnclosedExcessRegionReplicas(TableName tableName, int newReplicaCount) private intAssignmentManager.numberOfUnclosedRegions(TableName tableName, Function<RegionStateNode, Boolean> shouldSubmit) intAssignmentManager.numberOfUnclosedRegionsForDisabling(TableName tableName) voidRegionStateStore.removeRegionReplicas(TableName tableName, int oldReplicaCount, int newReplicaCount) private intAssignmentManager.submitUnassignProcedure(TableName 
tableName, Function<RegionStateNode, Boolean> shouldSubmit, Consumer<RegionStateNode> logRIT, Consumer<TransitRegionStateProcedure> submit) intAssignmentManager.submitUnassignProcedureForClosingExcessRegionReplicas(TableName tableName, int newReplicaCount, Consumer<TransitRegionStateProcedure> submit) Called by ModifyTableProcedure to unassign all the excess region replicas for a table.intAssignmentManager.submitUnassignProcedureForDisablingTable(TableName tableName, Consumer<TransitRegionStateProcedure> submit) Called by DisableTableProcedure to unassign all regions for a table. -
Uses of TableName in org.apache.hadoop.hbase.master.balancer
Fields in org.apache.hadoop.hbase.master.balancer with type parameters of type TableName
  private Map<TableName,Map<ServerName, List<RegionInfo>>> LoadBalancerPerformanceEvaluation.tableServerRegionMap
Methods in org.apache.hadoop.hbase.master.balancer with parameters of type TableName
  protected abstract List<RegionPlan> BaseLoadBalancer.balanceTable(TableName tableName, Map<ServerName, List<RegionInfo>> loadOfOneTable): Perform the major balance operation for a table; all subclasses should override this method.
  protected List<RegionPlan> CacheAwareLoadBalancer.balanceTable(TableName tableName, Map<ServerName, List<RegionInfo>> loadOfOneTable)
  protected List<RegionPlan> FavoredStochasticBalancer.balanceTable(TableName tableName, Map<ServerName, List<RegionInfo>> loadOfOneTable): For all regions correctly assigned to favored nodes, we just use the stochastic balancer implementation.
  protected List<RegionPlan> SimpleLoadBalancer.balanceTable(TableName tableName, Map<ServerName, List<RegionInfo>> loadOfOneTable): Generate a global load balancing plan according to the specified map of server information to the most loaded regions of each server.
  protected List<RegionPlan> StochasticLoadBalancer.balanceTable(TableName tableName, Map<ServerName, List<RegionInfo>> loadOfOneTable): Given the cluster state, this will try to approach an optimal balance.
  private TableDescriptor RegionHDFSBlockLocationFinder.getDescriptor(TableName tableName): Return the TableDescriptor for a given tableName.
  ClusterInfoProvider.getTableDescriptor(TableName tableName): Get the table descriptor for the given table.
  MasterClusterInfoProvider.getTableDescriptor(TableName tableName)
  (package private) boolean StochasticLoadBalancer.needsBalance(TableName tableName, BalancerClusterState cluster)
  private void StochasticLoadBalancer.updateBalancerTableLoadInfo(TableName tableName, Map<ServerName, List<RegionInfo>> loadOfOneTable)
  private void StochasticLoadBalancer.updateStochasticCosts(TableName tableName, double overall, double[] subCosts): Update costs to JMX.
Method parameters in org.apache.hadoop.hbase.master.balancer with type arguments of type TableName
  final List<RegionPlan> BaseLoadBalancer.balanceCluster(Map<TableName, Map<ServerName, List<RegionInfo>>> loadOfAllTable): Perform the major balance operation for the cluster; invokes BaseLoadBalancer.balanceTable(TableName, Map) to do the actual balancing.
  MaintenanceLoadBalancer.balanceCluster(Map<TableName, Map<ServerName, List<RegionInfo>>> loadOfAllTable)
  protected void BaseLoadBalancer.preBalanceCluster(Map<TableName, Map<ServerName, List<RegionInfo>>> loadOfAllTable): Called before actually executing balanceCluster.
  protected void SimpleLoadBalancer.preBalanceCluster(Map<TableName, Map<ServerName, List<RegionInfo>>> loadOfAllTable)
  (package private) void SimpleLoadBalancer.setClusterLoad(Map<TableName, Map<ServerName, List<RegionInfo>>> clusterLoad): Pass RegionStates and allow the balancer to set the current cluster load.
  protected final Map<ServerName,List<RegionInfo>> BaseLoadBalancer.toEnsumbleTableLoad(Map<TableName, Map<ServerName, List<RegionInfo>>> LoadOfAllTable)
  void StochasticLoadBalancer.updateBalancerLoadInfo(Map<TableName, Map<ServerName, List<RegionInfo>>> loadOfAllTable)
-
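Usage note: balanceTable and balanceCluster above are the extension points for custom balancers. The sketch below shows a hypothetical balancer that never produces any moves, assuming balanceTable is the only method a BaseLoadBalancer subclass must supply.

  import java.util.Collections;
  import java.util.List;
  import java.util.Map;
  import org.apache.hadoop.hbase.ServerName;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.RegionInfo;
  import org.apache.hadoop.hbase.master.RegionPlan;
  import org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer;

  // Hypothetical no-op balancer: returns an empty plan for every table.
  public class NoopLoadBalancer extends BaseLoadBalancer {
    @Override
    protected List<RegionPlan> balanceTable(TableName tableName,
        Map<ServerName, List<RegionInfo>> loadOfOneTable) {
      // A real implementation would compare per-server load and emit RegionPlan moves.
      return Collections.emptyList();
    }
  }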
Uses of TableName in org.apache.hadoop.hbase.master.balancer.replicas
Fields in org.apache.hadoop.hbase.master.balancer.replicas declared as TableName -
Uses of TableName in org.apache.hadoop.hbase.master.http
Fields in org.apache.hadoop.hbase.master.http declared as TableName
  private final TableName MetaBrowser.scanTable
  private final TableName RegionVisualizer.RegionDetails.tableName
Methods in org.apache.hadoop.hbase.master.http that return TableName
  TableName MetaBrowser.getScanTable()
  TableName RegionVisualizer.RegionDetails.getTableName()
  private static TableName MetaBrowser.resolveScanTable(javax.servlet.http.HttpServletRequest request)
Methods in org.apache.hadoop.hbase.master.http with parameters of type TableName
  private static Filter MetaBrowser.buildTableFilter(TableName tableName)
Constructors in org.apache.hadoop.hbase.master.http with parameters of type TableName
  (package private) RegionDetails(ServerName serverName, TableName tableName, RegionMetrics regionMetrics)
-
Uses of TableName in org.apache.hadoop.hbase.master.janitor
Methods in org.apache.hadoop.hbase.master.janitor with parameters of type TableName
  private static RegionInfo MetaFixer.buildRegionInfo(TableName tn, byte[] start, byte[] end)
  CatalogJanitor.checkRegionReferences(MasterServices services, TableName tableName, RegionInfo region): Checks if a region still holds references to parent.
-
Uses of TableName in org.apache.hadoop.hbase.master.locking
Fields in org.apache.hadoop.hbase.master.locking declared as TableName
  private final TableName LockManager.MasterLock.tableName
  private TableName LockProcedure.tableName
Methods in org.apache.hadoop.hbase.master.locking that return TableName
Methods in org.apache.hadoop.hbase.master.locking with parameters of type TableName
  LockManager.createMasterLock(TableName tableName, LockType type, String description)
  long LockManager.RemoteLocks.requestTableLock(TableName tableName, LockType type, String description, NonceKey nonceKey)
Constructors in org.apache.hadoop.hbase.master.locking with parameters of type TableName
  LockProcedure(org.apache.hadoop.conf.Configuration conf, TableName tableName, LockType type, String description, CountDownLatch lockAcquireLatch): Constructor for table lock.
  MasterLock(TableName tableName, LockType type, String description)
-
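Usage note: a sketch of taking an exclusive master-side lock on a table through LockManager. The createMasterLock signature comes from the listing above; obtaining the manager via master.getLockManager() and the acquire()/release() calls on MasterLock are assumptions about the surrounding API rather than part of this listing.

  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.master.HMaster;
  import org.apache.hadoop.hbase.master.locking.LockManager;
  import org.apache.hadoop.hbase.procedure2.LockType;

  public final class TableLockExample {
    // Sketch: hold an exclusive table lock while doing master-side maintenance.
    static void withTableLock(HMaster master, TableName tableName) throws InterruptedException {
      LockManager.MasterLock lock = master.getLockManager()
          .createMasterLock(tableName, LockType.EXCLUSIVE, "maintenance on " + tableName);
      lock.acquire();   // assumed blocking acquire
      try {
        // ... work that must not race with other table-level operations ...
      } finally {
        lock.release(); // assumed release
      }
    }
  }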
Uses of TableName in org.apache.hadoop.hbase.master.normalizer
Fields in org.apache.hadoop.hbase.master.normalizer declared as TableName
  private final TableName SimpleRegionNormalizer.NormalizeContext.tableName
Fields in org.apache.hadoop.hbase.master.normalizer with type parameters of type TableName
  private final RegionNormalizerWorkQueue<TableName> RegionNormalizerManager.workQueue
  private final RegionNormalizerWorkQueue<TableName> RegionNormalizerWorker.workQueue
Methods in org.apache.hadoop.hbase.master.normalizer that return TableName
Methods in org.apache.hadoop.hbase.master.normalizer with parameters of type TableName
  private List<NormalizationPlan> RegionNormalizerWorker.calculatePlans(TableName tableName)
Method parameters in org.apache.hadoop.hbase.master.normalizer with type arguments of type TableName
  boolean RegionNormalizerManager.normalizeRegions(List<TableName> tables, boolean isHighPriority): Submit tables for normalization.
Constructor parameters in org.apache.hadoop.hbase.master.normalizer with type arguments of type TableName
  (package private) RegionNormalizerManager(RegionNormalizerStateStore regionNormalizerStateStore, RegionNormalizerChore regionNormalizerChore, RegionNormalizerWorkQueue<TableName> workQueue, RegionNormalizerWorker worker)
  (package private) RegionNormalizerWorker(org.apache.hadoop.conf.Configuration configuration, MasterServices masterServices, RegionNormalizer regionNormalizer, RegionNormalizerWorkQueue<TableName> workQueue)
-
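Usage note: normalizeRegions is how callers hand a batch of tables to the normalizer work queue. A minimal sketch, assuming the manager is reachable from the running master via getRegionNormalizerManager(); the table names are illustrative.

  import java.util.Arrays;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.master.HMaster;

  public final class NormalizeSubmitExample {
    // Sketch: queue two tables for normalization ahead of the regular chore.
    static void requestNormalization(HMaster master) {
      master.getRegionNormalizerManager().normalizeRegions(
          Arrays.asList(
              TableName.valueOf("ns1", "big_table"),
              TableName.valueOf("ns1", "other_table")),
          true /* isHighPriority */);
    }
  }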
Uses of TableName in org.apache.hadoop.hbase.master.procedure
Fields in org.apache.hadoop.hbase.master.procedure declared as TableNameModifier and TypeFieldDescriptionstatic final TableNameTableProcedureInterface.DUMMY_NAMESPACE_TABLE_NAMEUsed for acquire/release lock for namespace related operations, just a place holder as we do not have namespace table any more.private TableNameSnapshotProcedure.snapshotTableprotected TableNameAbstractCloseTableRegionsProcedure.tableNameprivate TableNameDeleteTableProcedure.tableNameprivate TableNameDisableTableProcedure.tableNameprivate TableNameEnableTableProcedure.tableNameprivate TableNameFlushTableProcedure.tableNameprivate TableNameModifyTableDescriptorProcedure.tableNameprivate TableNameReopenTableRegionsProcedure.tableNameprivate TableNameTruncateTableProcedure.tableNameFields in org.apache.hadoop.hbase.master.procedure with type parameters of type TableNameModifier and TypeFieldDescriptionMasterProcedureScheduler.metaRunQueueprivate final Map<TableName,LockAndQueue> SchemaLocking.tableLocksprivate final Map<TableName,TableProcedureWaitingQueue> MasterProcedureScheduler.tableProcsWaitingEnqueueMasterProcedureScheduler.tableRunQueueMethods in org.apache.hadoop.hbase.master.procedure that return TableNameModifier and TypeMethodDescriptionAbstractCloseTableRegionsProcedure.getTableName()AbstractStateMachineNamespaceProcedure.getTableName()AbstractStateMachineRegionProcedure.getTableName()abstract TableNameAbstractStateMachineTableProcedure.getTableName()CloneSnapshotProcedure.getTableName()CreateTableProcedure.getTableName()DeleteTableProcedure.getTableName()DisableTableProcedure.getTableName()EnableTableProcedure.getTableName()FlushRegionProcedure.getTableName()FlushTableProcedure.getTableName()InitMetaProcedure.getTableName()private static TableNameMasterProcedureScheduler.getTableName(Procedure<?> proc) ModifyTableDescriptorProcedure.getTableName()ModifyTableProcedure.getTableName()ReopenTableRegionsProcedure.getTableName()RestoreBackupSystemTableProcedure.getTableName()RestoreSnapshotProcedure.getTableName()SnapshotProcedure.getTableName()SnapshotRegionProcedure.getTableName()SnapshotVerifyProcedure.getTableName()TableProcedureInterface.getTableName()Returns the name of the table the procedure is operating onTruncateTableProcedure.getTableName()Methods in org.apache.hadoop.hbase.master.procedure with parameters of type TableNameModifier and TypeMethodDescriptionstatic org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotDescriptionRecoverySnapshotUtils.buildSnapshotDescription(TableName tableName, String snapshotName) Creates a SnapshotDescription for the recovery snapshot for a given operation.static org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotDescriptionRecoverySnapshotUtils.buildSnapshotDescription(TableName tableName, String snapshotName, long ttl, org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotDescription.Type type) Creates a SnapshotDescription for the recovery snapshot for a given operation.private static voidDeleteTableProcedure.cleanRegionsInMeta(MasterProcedureEnv env, TableName tableName) There may be items for this table still up in hbase:meta in the case where the info:regioninfo column was empty because of some write error.CreateTableProcedure.CreateHdfsRegions.createHdfsRegions(MasterProcedureEnv env, org.apache.hadoop.fs.Path tableRootDir, TableName tableName, List<RegionInfo> newRegions) static SnapshotProcedureRecoverySnapshotUtils.createSnapshotProcedure(MasterProcedureEnv env, TableName tableName, String 
snapshotName, TableDescriptor tableDescriptor) Creates a SnapshotProcedure for soft drop functionality.protected static voidDeleteTableProcedure.deleteAssignmentState(MasterProcedureEnv env, TableName tableName) static voidMasterDDLOperationHelper.deleteColumnFamilyFromFileSystem(MasterProcedureEnv env, TableName tableName, List<RegionInfo> regionInfoList, byte[] familyName, boolean hasMob) Remove the column family from the file systemprotected static voidDeleteTableProcedure.deleteFromFs(MasterProcedureEnv env, TableName tableName, List<RegionInfo> regions, boolean archive) protected static voidDeleteTableProcedure.deleteFromMeta(MasterProcedureEnv env, TableName tableName, List<RegionInfo> regions) static voidRecoverySnapshotUtils.deleteRecoverySnapshot(MasterProcedureEnv env, String snapshotName, TableName tableName) Deletes a recovery snapshot during rollback scenarios.protected static voidDeleteTableProcedure.deleteTableDescriptorCache(MasterProcedureEnv env, TableName tableName) protected static voidDeleteTableProcedure.deleteTableStates(MasterProcedureEnv env, TableName tableName) static StringRecoverySnapshotUtils.generateSnapshotName(TableName tableName) Generates a recovery snapshot name.static StringRecoverySnapshotUtils.generateSnapshotName(TableName tableName, long timestamp) Generates a recovery snapshot name.(package private) LockAndQueueSchemaLocking.getTableLock(TableName tableName) static intMasterProcedureUtil.getTablePriority(TableName tableName) Return the priority for the given table.private TableQueueMasterProcedureScheduler.getTableQueue(TableName tableName) (package private) booleanMasterProcedureScheduler.markTableAsDeleted(TableName table, Procedure<?> procedure) Tries to remove the queue and the table-lock of the specified table.(package private) LockAndQueueSchemaLocking.removeTableLock(TableName tableName) private voidMasterProcedureScheduler.removeTableQueue(TableName tableName) protected static voidCreateTableProcedure.setEnabledState(MasterProcedureEnv env, TableName tableName) protected static voidCreateTableProcedure.setEnablingState(MasterProcedureEnv env, TableName tableName) protected static voidDisableTableProcedure.setTableStateToDisabled(MasterProcedureEnv env, TableName tableName) Mark table state to Disabledprivate static voidDisableTableProcedure.setTableStateToDisabling(MasterProcedureEnv env, TableName tableName) Mark table state to Disablingprotected static voidEnableTableProcedure.setTableStateToEnabled(MasterProcedureEnv env, TableName tableName) Mark table state to Enabledprotected static voidEnableTableProcedure.setTableStateToEnabling(MasterProcedureEnv env, TableName tableName) Mark table state to EnablingbooleanMasterProcedureScheduler.waitRegions(Procedure<?> procedure, TableName table, RegionInfo... regionInfos) Suspend the procedure if the specified set of regions are already locked.booleanMasterProcedureScheduler.waitTableExclusiveLock(Procedure<?> procedure, TableName table) Suspend the procedure if the specified table is already locked.private TableQueueMasterProcedureScheduler.waitTableQueueSharedLock(Procedure<?> procedure, TableName table) booleanMasterProcedureScheduler.waitTableSharedLock(Procedure<?> procedure, TableName table) Suspend the procedure if the specified table is already locked.voidMasterProcedureScheduler.wakeRegions(Procedure<?> procedure, TableName table, RegionInfo... 
regionInfos) Wake the procedures waiting for the specified regionsvoidMasterProcedureScheduler.wakeTableExclusiveLock(Procedure<?> procedure, TableName table) Wake the procedures waiting for the specified tablevoidMasterProcedureScheduler.wakeTableSharedLock(Procedure<?> procedure, TableName table) Wake the procedures waiting for the specified tableConstructors in org.apache.hadoop.hbase.master.procedure with parameters of type TableNameModifierConstructorDescriptionprotectedAbstractCloseTableRegionsProcedure(TableName tableName) CloseExcessRegionReplicasProcedure(TableName tableName, int newReplicaCount) CloseTableRegionsProcedure(TableName tableName) DeleteTableProcedure(MasterProcedureEnv env, TableName tableName) DeleteTableProcedure(MasterProcedureEnv env, TableName tableName, ProcedurePrepareLatch syncLatch) DisableTableProcedure(MasterProcedureEnv env, TableName tableName, boolean skipTableStateCheck) ConstructorDisableTableProcedure(MasterProcedureEnv env, TableName tableName, boolean skipTableStateCheck, ProcedurePrepareLatch syncLatch) ConstructorEnableTableProcedure(MasterProcedureEnv env, TableName tableName) ConstructorEnableTableProcedure(MasterProcedureEnv env, TableName tableName, ProcedurePrepareLatch syncLatch) ConstructorFlushTableProcedure(MasterProcedureEnv env, TableName tableName) FlushTableProcedure(MasterProcedureEnv env, TableName tableName, List<byte[]> columnFamilies) protectedModifyTableDescriptorProcedure(MasterProcedureEnv env, TableName tableName) ReopenTableRegionsProcedure(TableName tableName) (package private)ReopenTableRegionsProcedure(TableName tableName, long reopenBatchBackoffMillis, int reopenBatchSizeMax) ReopenTableRegionsProcedure(TableName tableName, List<byte[]> regionNames) privateReopenTableRegionsProcedure(TableName tableName, List<byte[]> regionNames, long reopenBatchBackoffMillis, int reopenBatchSizeMax) TableQueue(TableName tableName, int priority, LockStatus tableLock, LockStatus namespaceLockStatus) TruncateTableProcedure(MasterProcedureEnv env, TableName tableName, boolean preserveSplits) TruncateTableProcedure(MasterProcedureEnv env, TableName tableName, boolean preserveSplits, ProcedurePrepareLatch latch) -
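Usage note: the scheduler methods above queue and lock procedures by the TableName returned from TableProcedureInterface.getTableName(). The sketch below shows only that table-identifying part of a custom procedure; a real procedure would also extend Procedure&lt;MasterProcedureEnv&gt;, and the EDIT operation type is an assumption about what such a procedure would declare.

  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.master.procedure.TableProcedureInterface;

  // Sketch: the table-identifying part of a hypothetical master procedure.
  public abstract class MyTableMaintenanceProcedure implements TableProcedureInterface {
    private final TableName tableName;

    protected MyTableMaintenanceProcedure(TableName tableName) {
      this.tableName = tableName;
    }

    @Override
    public TableName getTableName() {
      // The MasterProcedureScheduler groups and locks procedures by this value.
      return tableName;
    }

    @Override
    public TableOperationType getTableOperationType() {
      // Assumed choice; pick the constant matching what the procedure actually does.
      return TableOperationType.EDIT;
    }
  }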
Uses of TableName in org.apache.hadoop.hbase.master.region
Fields in org.apache.hadoop.hbase.master.region declared as TableName -
Uses of TableName in org.apache.hadoop.hbase.master.replication
Methods in org.apache.hadoop.hbase.master.replication with parameters of type TableName
  ReplicationPeerManager.getSerialPeerIdsBelongsTo(TableName tableName)
  private void OfflineTableReplicationQueueStorage.loadReplicationQueueData(org.apache.hadoop.conf.Configuration conf, TableName tableName)
  private boolean ModifyPeerProcedure.needReopen(TableStateManager tsm, TableName tn)
  private boolean AbstractPeerProcedure.needSetLastPushedSequenceId(TableStateManager tsm, TableName tn)
  protected final void AbstractPeerProcedure.setLastPushedSequenceIdForTable(MasterProcedureEnv env, TableName tableName, Map<String, Long> lastSeqIds)
Method parameters in org.apache.hadoop.hbase.master.replication with type arguments of type TableName
  private void ReplicationPeerManager.checkNamespacesAndTableCfsConfigConflict(Set<String> namespaces, Map<TableName, ? extends Collection<String>> tableCfs): Setting a namespace in the peer config means that all tables in that namespace will be replicated to the peer cluster.
Constructors in org.apache.hadoop.hbase.master.replication with parameters of type TableName
  OfflineTableReplicationQueueStorage(org.apache.hadoop.conf.Configuration conf, TableName tableName)
-
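Usage note: the TableName-keyed table/column-family map that ReplicationPeerManager validates is normally assembled on the client. A sketch using the standard ReplicationPeerConfig builder and Admin API; the peer id, cluster key, table, and family names are illustrative.

  import java.util.Arrays;
  import java.util.HashMap;
  import java.util.List;
  import java.util.Map;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;

  public final class AddPeerExample {
    // Sketch: replicate only family "cf1" of ns1:orders to peer "1".
    static void addPeer(Admin admin) throws Exception {
      Map<TableName, List<String>> tableCfs = new HashMap<>();
      tableCfs.put(TableName.valueOf("ns1", "orders"), Arrays.asList("cf1"));
      ReplicationPeerConfig peerConfig = ReplicationPeerConfig.newBuilder()
          .setClusterKey("zk1:2181:/hbase")     // peer cluster key, illustrative
          .setReplicateAllUserTables(false)
          .setTableCFsMap(tableCfs)
          .build();
      admin.addReplicationPeer("1", peerConfig);
    }
  }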
Uses of TableName in org.apache.hadoop.hbase.master.snapshot
Fields in org.apache.hadoop.hbase.master.snapshot declared as TableName
  protected final TableName TakeSnapshotHandler.snapshotTable
  private TableName MasterSnapshotVerifier.tableName
Fields in org.apache.hadoop.hbase.master.snapshot with type parameters of type TableName
  SnapshotManager.restoreTableToProcIdMap
  private final Map<TableName,SnapshotSentinel> SnapshotManager.snapshotHandlers
Methods in org.apache.hadoop.hbase.master.snapshot with parameters of type TableName
  private long SnapshotManager.cloneSnapshot(org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotDescription reqSnapshot, TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotDescription snapshot, TableDescriptor snapshotTableDesc, NonceKey nonceKey, boolean restoreAcl, String customSFT): Clone the specified snapshot.
  private boolean SnapshotManager.isRestoringTable(TableName tableName): Verify if the restore of the specified table is in progress.
  boolean SnapshotManager.isTableTakingAnySnapshot(TableName tableName)
  boolean SnapshotManager.isTakingSnapshot(TableName tableName): Check to see if the specified table has a snapshot in progress.
  private boolean SnapshotManager.isTakingSnapshot(TableName tableName, boolean checkProcedure): Check to see if the specified table has a snapshot in progress.
  private long SnapshotManager.restoreSnapshot(org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotDescription reqSnapshot, TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotDescription snapshot, TableDescriptor snapshotTableDesc, NonceKey nonceKey, boolean restoreAcl): Restore the specified snapshot.
  void SnapshotManager.setSnapshotHandlerForTesting(TableName tableName, SnapshotSentinel handler): Set the handler for the current snapshot.
Method parameters in org.apache.hadoop.hbase.master.snapshot with type arguments of type TableName
  private void SnapshotManager.cleanupSentinels(Map<TableName, SnapshotSentinel> sentinels): Remove the sentinels that are marked as finished and whose completion time has exceeded the removal timeout.
  private SnapshotSentinel SnapshotManager.removeSentinelIfFinished(Map<TableName, SnapshotSentinel> sentinels, org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotDescription snapshot): Return the handler if it is currently live and has the same snapshot target name.
-
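Usage note: from the client side, the table to snapshot is likewise identified by a TableName, via the Admin API rather than the master-internal SnapshotManager shown above. A minimal sketch with illustrative names:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;

  public final class SnapshotExample {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      try (Connection conn = ConnectionFactory.createConnection(conf);
           Admin admin = conn.getAdmin()) {
        TableName table = TableName.valueOf("ns1", "orders");
        admin.snapshot("orders_snap_1", table);   // snapshot name, then table
      }
    }
  }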
Uses of TableName in org.apache.hadoop.hbase.mob
Fields in org.apache.hadoop.hbase.mob with type parameters of type TableNameModifier and TypeFieldDescriptionprivate static final ConcurrentMap<TableName,String> ManualMobMaintHFileCleaner.MOB_REGIONS(package private) static ThreadLocal<org.apache.hbase.thirdparty.com.google.common.collect.SetMultimap<TableName,String>> DefaultMobStoreCompactor.mobRefSetMethods in org.apache.hadoop.hbase.mob that return types with arguments of type TableNameModifier and TypeMethodDescriptionstatic org.apache.hbase.thirdparty.com.google.common.collect.ImmutableSetMultimap.Builder<TableName,String> MobUtils.deserializeMobFileRefs(byte[] bytes) Deserialize the set of referenced mob hfiles from store file metadata.MobUtils.getTableName(ExtendedCell cell) Get the table name from when this cell was written into a mob hfile as a TableName.Methods in org.apache.hadoop.hbase.mob with parameters of type TableNameModifier and TypeMethodDescriptionprivate static voidMobFileCleanupUtil.archiveMobFiles(org.apache.hadoop.conf.Configuration conf, TableName tableName, Admin admin, byte[] family, List<org.apache.hadoop.fs.Path> storeFiles) Archives the mob files.voidRSMobFileCleanerChore.archiveMobFiles(org.apache.hadoop.conf.Configuration conf, TableName tableName, byte[] family, List<org.apache.hadoop.fs.Path> storeFiles) Archives the mob files.private static voidMobFileCleanupUtil.checkColumnFamilyDescriptor(org.apache.hadoop.conf.Configuration conf, TableName table, org.apache.hadoop.fs.FileSystem fs, Admin admin, ColumnFamilyDescriptor hcd, Set<String> regionNames, long maxCreationTimeToArchive) static voidMobFileCleanupUtil.cleanupObsoleteMobFiles(org.apache.hadoop.conf.Configuration conf, TableName table, Admin admin) Performs housekeeping file cleaning (called by MOB Cleaner chore)static org.apache.hadoop.fs.PathMobUtils.getMobFamilyPath(org.apache.hadoop.conf.Configuration conf, TableName tableName, String familyName) Gets the family dir of the mob files.static RegionInfoMobUtils.getMobRegionInfo(TableName tableName) Gets the RegionInfo of the mob files.static org.apache.hadoop.fs.PathMobUtils.getMobRegionPath(org.apache.hadoop.conf.Configuration conf, TableName tableName) Gets the region dir of the mob files.static org.apache.hadoop.fs.PathMobUtils.getMobRegionPath(org.apache.hadoop.fs.Path rootDir, TableName tableName) Gets the region dir of the mob files under the specified root dir.static org.apache.hadoop.fs.PathMobUtils.getMobTableDir(org.apache.hadoop.conf.Configuration conf, TableName tableName) static org.apache.hadoop.fs.PathMobUtils.getMobTableDir(org.apache.hadoop.fs.Path rootDir, TableName tableName) Gets the table dir of the mob files under the qualified HBase root dir.static booleanMobUtils.isMobRegionName(TableName tableName, byte[] regionName) Gets whether the current region name follows the pattern of a mob region name.static booleanMobUtils.removeMobFiles(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, TableName tableName, org.apache.hadoop.fs.Path tableDir, byte[] family, Collection<HStoreFile> storeFiles) Archives the mob files.private voidMobFileCompactionChore.startCompaction(Admin admin, TableName table, RegionInfo region, byte[] cf) Method parameters in org.apache.hadoop.hbase.mob with type arguments of type TableNameModifier and TypeMethodDescriptionprivate voidDefaultMobStoreCompactor.calculateMobLengthMap(org.apache.hbase.thirdparty.com.google.common.collect.SetMultimap<TableName, String> mobRefs) static 
byte[]MobUtils.serializeMobFileRefs(org.apache.hbase.thirdparty.com.google.common.collect.SetMultimap<TableName, String> mobRefSet) Serialize a set of referenced mob hfiles -
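Usage note: the MobUtils path helpers above derive MOB file locations directly from a TableName and the cluster Configuration. A small sketch with illustrative table and family names:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.mob.MobUtils;

  public final class MobPathsExample {
    public static void main(String[] args) {
      Configuration conf = HBaseConfiguration.create();
      TableName table = TableName.valueOf("ns1", "docs");
      // Region-level and family-level directories that hold this table's MOB files.
      Path mobRegionDir = MobUtils.getMobRegionPath(conf, table);
      Path mobFamilyDir = MobUtils.getMobFamilyPath(conf, table, "cf1");
      System.out.println(mobRegionDir + " / " + mobFamilyDir);
    }
  }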
Uses of TableName in org.apache.hadoop.hbase.mob.mapreduce
Fields in org.apache.hadoop.hbase.mob.mapreduce declared as TableName -
Uses of TableName in org.apache.hadoop.hbase.namequeues
Fields in org.apache.hadoop.hbase.namequeues declared as TableName
  static final TableName WALEventTrackerTableAccessor.WAL_EVENT_TRACKER_TABLE_NAME: The WAL_EVENT_TRACKER_TABLE_NAME_STR table name; can be enabled with the config hbase.regionserver.wal.event.tracker.enabled.
-
Uses of TableName in org.apache.hadoop.hbase.namespace
Fields in org.apache.hadoop.hbase.namespace with type parameters of type TableNameModifier and TypeFieldDescriptionprivate Map<TableName,AtomicInteger> NamespaceTableAndRegionInfo.tableAndRegionInfoMethods in org.apache.hadoop.hbase.namespace that return types with arguments of type TableNameModifier and TypeMethodDescriptionNamespaceTableAndRegionInfo.getTables()Gets the set of table names belonging to namespace.Methods in org.apache.hadoop.hbase.namespace with parameters of type TableNameModifier and TypeMethodDescriptionprivate void(package private) void(package private) booleanNamespaceStateManager.checkAndUpdateNamespaceRegionCount(TableName name, byte[] regionName, int incr) Check if adding a region violates namespace quota, if not update namespace cache.(package private) voidNamespaceStateManager.checkAndUpdateNamespaceRegionCount(TableName name, int incr) Check and update region count for an existing table.(package private) voidNamespaceStateManager.checkAndUpdateNamespaceTableCount(TableName table, int numRegions) voidNamespaceAuditor.checkQuotaToCreateTable(TableName tName, int regions) Check quota to create table.voidNamespaceAuditor.checkQuotaToUpdateRegion(TableName tName, int regions) Check and update region count quota for an existing table.private voidNamespaceAuditor.checkTableTypeAndThrowException(TableName name) (package private) booleanNamespaceTableAndRegionInfo.containsTable(TableName tableName) (package private) intNamespaceTableAndRegionInfo.decrementRegionCountForTable(TableName tableName, int count) intNamespaceAuditor.getRegionCountOfTable(TableName tName) Get region count for table(package private) intNamespaceTableAndRegionInfo.getRegionCountOfTable(TableName tableName) (package private) intNamespaceTableAndRegionInfo.incRegionCountForTable(TableName tableName, int count) voidNamespaceAuditor.removeFromNamespaceUsage(TableName tableName) (package private) voidNamespaceStateManager.removeTable(TableName tableName) (package private) voidNamespaceTableAndRegionInfo.removeTable(TableName tableName) -
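Usage note: the namespace accounting above keys everything off the namespace embedded in the TableName. The snippet below illustrates how a fully qualified name splits into namespace and qualifier; the table names are illustrative.

  import org.apache.hadoop.hbase.TableName;

  public final class NamespaceNameExample {
    public static void main(String[] args) {
      // "ns1:orders" parses into namespace "ns1" and qualifier "orders";
      // a bare "orders" falls into the "default" namespace.
      TableName qualified = TableName.valueOf("ns1:orders");
      System.out.println(qualified.getNamespaceAsString());  // ns1
      System.out.println(qualified.getQualifierAsString());  // orders

      TableName bare = TableName.valueOf("orders");
      System.out.println(bare.getNamespaceAsString());       // default
    }
  }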
Uses of TableName in org.apache.hadoop.hbase.procedure.flush
Fields in org.apache.hadoop.hbase.procedure.flush with type parameters of type TableName -
Uses of TableName in org.apache.hadoop.hbase.quotas
Fields in org.apache.hadoop.hbase.quotas declared as TableNameModifier and TypeFieldDescriptionstatic final TableNameQuotaTableUtil.QUOTA_TABLE_NAMESystem table for quotasprivate final TableNameQuotaSettings.tableName(package private) final TableNameFileArchiverNotifierFactoryImpl.CacheKey.tnprivate final TableNameFileArchiverNotifierImpl.tnFields in org.apache.hadoop.hbase.quotas with type parameters of type TableNameModifier and TypeFieldDescriptionprivate final Map<TableName,SpaceViolationPolicyEnforcement> ActivePolicyEnforcement.activePoliciesprivate final ConcurrentMap<TableName,FileArchiverNotifier> FileArchiverNotifierFactoryImpl.CACHEprivate AtomicReference<Map<TableName,SpaceQuotaSnapshot>> RegionServerSpaceQuotaManager.currentQuotaSnapshotsprivate final ConcurrentHashMap<TableName,SpaceViolationPolicyEnforcement> RegionServerSpaceQuotaManager.enforcedPoliciesprivate final Map<TableName,SpaceViolationPolicyEnforcement> ActivePolicyEnforcement.locallyCachedPoliciesprivate final Map<TableName,SpaceQuotaSnapshot> QuotaObserverChore.readOnlyTableQuotaSnapshotsprivate final Map<TableName,SpaceQuotaSnapshot> ActivePolicyEnforcement.snapshotsprivate Map<TableName,QuotaLimiter> UserQuotaState.tableLimitersprivate MasterQuotaManager.NamedLock<TableName>MasterQuotaManager.tableLocksprivate final ConcurrentHashMap<TableName,Double> QuotaCache.tableMachineQuotaFactorsprivate Map<TableName,QuotaState> QuotaCache.tableQuotaCacheprivate final Map<TableName,SpaceQuotaSnapshot> QuotaObserverChore.tableQuotaSnapshotsprivate QuotaSnapshotStore<TableName>QuotaObserverChore.tableSnapshotStoreQuotaObserverChore.TablesWithQuotas.tablesWithNamespaceQuotasQuotaObserverChore.TablesWithQuotas.tablesWithTableQuotasMethods in org.apache.hadoop.hbase.quotas that return TableNameModifier and TypeMethodDescriptionprotected static TableNameQuotaTableUtil.getTableFromRowKey(byte[] key) QuotaSettings.getTableName()Methods in org.apache.hadoop.hbase.quotas that return types with arguments of type TableNameModifier and TypeMethodDescription(package private) Map<TableName,SpaceViolationPolicyEnforcement> RegionServerSpaceQuotaManager.copyActiveEnforcements()Returns the collection of tables which have quota violation policies enforced on this RegionServer.RegionServerSpaceQuotaManager.copyQuotaSnapshots()Copies the lastSpaceQuotaSnapshots that were recorded.SpaceQuotaRefresherChore.fetchSnapshotsFromQuotaTable()Reads all quota snapshots from the quota table.static Map<TableName,QuotaState> QuotaUtil.fetchTableQuotas(org.apache.hadoop.conf.Configuration conf, Connection connection, Map<TableName, Double> tableMachineFactors) private Map<TableName,QuotaState> QuotaCache.fetchTableQuotaStateEntries()QuotaObserverChore.TablesWithQuotas.filterInsufficientlyReportedTables(QuotaSnapshotStore<TableName> tableStore) Filters out all tables for which the Master currently doesn't have enough region space reports received from RegionServers yet.RegionServerSpaceQuotaManager.getActivePoliciesAsMap()Converts a map of table toSpaceViolationPolicyEnforcements intoSpaceViolationPolicys.(package private) Map<TableName,SpaceViolationPolicyEnforcement> ActivePolicyEnforcement.getLocallyCachedPolicies()Returns an unmodifiable version of the policy enforcements that were cached because they are not in violation of their quota.QuotaObserverChore.TablesWithQuotas.getNamespaceQuotaTables()Returns an unmodifiable view of all tables in namespaces that have namespace quotas.ActivePolicyEnforcement.getPolicies()Returns an unmodifiable version of 
the activeSpaceViolationPolicyEnforcements.static Map<TableName,SpaceQuotaSnapshot> QuotaTableUtil.getSnapshots(Connection conn) Fetches allSpaceQuotaSnapshotobjects from thehbase:quotatable.SnapshotQuotaObserverChore.getSnapshotsFromTables(Admin admin, Set<TableName> tablesToFetchSnapshotsFrom) Computes a mapping of originatingTableNameto snapshots, when theTableNameexists in the providedSet.SnapshotQuotaObserverChore.getSnapshotsToComputeSize()Fetches each table with a quota (table or namespace quota), and then fetch the name of each snapshot which was created from that table.(package private) Map<TableName,QuotaState> QuotaCache.getTableQuotaCache()visible for testingQuotaObserverChore.getTableQuotaSnapshots()Returns an unmodifiable view over the currentSpaceQuotaSnapshotobjects for each HBase table with a quota defined.QuotaObserverChore.TablesWithQuotas.getTableQuotaTables()Returns an unmodifiable view of all tables with table quotas.QuotaObserverChore.TablesWithQuotas.getTablesByNamespace()Returns a view of all tables that reside in a namespace with a namespace quota, grouped by the namespace itself.QuotaTableUtil.getTableSnapshots(Connection conn) Returns a multimap for all existing table snapshot entries.(package private) QuotaSnapshotStore<TableName>QuotaObserverChore.getTableSnapshotStore()Methods in org.apache.hadoop.hbase.quotas with parameters of type TableNameModifier and TypeMethodDescriptionvoidQuotaObserverChore.TablesWithQuotas.addNamespaceQuotaTable(TableName tn) Adds a table with a namespace quota.static voidQuotaUtil.addTableQuota(Connection connection, TableName table, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Quotas data) voidQuotaObserverChore.TablesWithQuotas.addTableQuotaTable(TableName tn) Adds a table with a table quota.static voidQuotaUtil.addUserQuota(Connection connection, String user, TableName table, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Quotas data) booleanRegionServerSpaceQuotaManager.areCompactionsDisabled(TableName tableName) Returns whether or not compactions should be disabled for the giventableNameper a space quota violation policy.org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos.FileArchiveNotificationRequestRegionServerSpaceQuotaManager.buildFileArchiveRequest(TableName tn, Collection<Map.Entry<String, Long>> archivedFiles) Builds the protobuf message to inform the Master of files being archived.voidMasterQuotaManager.checkAndUpdateNamespaceRegionQuota(TableName tName, int regions) voidMasterQuotaManager.checkNamespaceTableAndRegionQuota(TableName tName, int regions) SpaceViolationPolicyEnforcementFactory.create(RegionServerServices rss, TableName tableName, SpaceQuotaSnapshot snapshot) Constructs the appropriateSpaceViolationPolicyEnforcementfor tables that are in violation of their space quota.(package private) static PutQuotaTableUtil.createPutForSnapshotSize(TableName tableName, String snapshot, long size) (package private) static PutQuotaTableUtil.createPutForSpaceSnapshot(TableName tableName, SpaceQuotaSnapshot snapshot) (package private) static ScanQuotaTableUtil.createScanForSpaceSnapshotSizes(TableName table) SpaceViolationPolicyEnforcementFactory.createWithoutViolation(RegionServerServices rss, TableName tableName, SpaceQuotaSnapshot snapshot) Creates the "default"SpaceViolationPolicyEnforcementfor a table that isn't in violation.static voidQuotaUtil.deleteTableQuota(Connection connection, TableName table) static voidQuotaUtil.deleteUserQuota(Connection connection, 
String user, TableName table) static voidQuotaUtil.disableTableIfNotDisabled(Connection conn, TableName tableName) Method to disable a table, if not already disabled.voidRegionServerSpaceQuotaManager.disableViolationPolicyEnforcement(TableName tableName) Disables enforcement on any violation policy on the giventableName.static voidQuotaUtil.enableTableIfNotEnabled(Connection conn, TableName tableName) Method to enable a table, if not already enabled.voidRegionServerSpaceQuotaManager.enforceViolationPolicy(TableName tableName, SpaceQuotaSnapshot snapshot) Enforces the given violationPolicy on the given table in this RegionServer.TableQuotaSnapshotStore.filterBySubject(TableName table) private static List<QuotaSettings>QuotaSettingsFactory.fromQuotas(String userName, TableName tableName, String namespace, String regionServer, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Quotas quotas) (package private) static QuotaSettingsQuotaSettingsFactory.fromSpace(TableName table, String namespace, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota protoQuota) (package private) static SpaceLimitSettingsSpaceLimitSettings.fromSpaceQuota(TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota proto) Constructs aSpaceLimitSettingsfrom the provided protobuf message and tablename.(package private) static List<QuotaSettings>QuotaSettingsFactory.fromTableQuotas(TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Quotas quotas) static List<ThrottleSettings>QuotaSettingsFactory.fromTableThrottles(TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Throttle throttle) protected static List<ThrottleSettings>QuotaSettingsFactory.fromThrottle(String userName, TableName tableName, String namespace, String regionServer, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Throttle throttle) (package private) static ThrottleSettingsThrottleSettings.fromTimedQuota(String userName, TableName tableName, String namespace, String regionServer, ThrottleType type, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.TimedQuota timedQuota) (package private) static List<QuotaSettings>QuotaSettingsFactory.fromUserQuotas(String userName, TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Quotas quotas) FileArchiverNotifierFactory.get(Connection conn, org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, TableName tn) Creates or obtains aFileArchiverNotifierinstance for the given args.FileArchiverNotifierFactoryImpl.get(Connection conn, org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, TableName tn) Returns theFileArchiverNotifierinstance for the givenTableName.static SpaceQuotaSnapshotQuotaTableUtil.getCurrentSnapshotFromQuotaTable(Connection conn, TableName tableName) Returns the current space quota snapshot of the giventableNamefromQuotaTableUtil.QUOTA_TABLE_NAMEor null if the no quota information is available for that tableName.TableQuotaSnapshotStore.getCurrentState(TableName table) (package private) FileArchiverNotifierSnapshotQuotaObserverChore.getNotifierForTable(TableName tn) Returns the correct instance ofFileArchiverNotifierfor the given table name.(package private) intQuotaObserverChore.TablesWithQuotas.getNumRegions(TableName table) Computes the total number of regions in a table.(package private) intQuotaObserverChore.TablesWithQuotas.getNumReportedRegions(TableName table, 
QuotaSnapshotStore<TableName> tableStore) Computes the number of regions reported for a table.ActivePolicyEnforcement.getPolicyEnforcement(TableName tableName) Returns the properSpaceViolationPolicyEnforcementimplementation for the given table.RegionServerRpcQuotaManager.getQuota(org.apache.hadoop.security.UserGroupInformation ugi, TableName table, int blockSizeBytes) Returns the quota for an operation.(package private) org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.QuotasTableQuotaSnapshotStore.getQuotaForTable(TableName table) Fetches the table quota.intMasterQuotaManager.getRegionCountOfTable(TableName tName) Returns cached region count, or -1 if quota manager is disabled or table status not foundprotected static byte[]QuotaTableUtil.getSettingsQualifierForUserTable(TableName tableName) (package private) longFileArchiverNotifierImpl.getSizeOfStoreFile(TableName tn, String regionName, String family, String storeFile) Computes the size of the store file given its name, region and family name in the archive directory.(package private) longFileArchiverNotifierImpl.getSizeOfStoreFile(TableName tn, FileArchiverNotifierImpl.StoreFileReference storeFileName) Computes the size of the store files for a single region.(package private) longFileArchiverNotifierImpl.getSizeOfStoreFiles(TableName tn, Set<FileArchiverNotifierImpl.StoreFileReference> storeFileNames) Computes the size of each store file instoreFileNames(package private) longTableQuotaSnapshotStore.getSnapshotSizesForTable(TableName tn) Fetches any serialized snapshot sizes from the quota table for thetnprovided.org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuotaTableQuotaSnapshotStore.getSpaceQuota(TableName subject) QuotaCache.getTableLimiter(TableName table) Returns the limiter associated to the specified table.UserQuotaState.getTableLimiter(TableName table) Return the limiter for the specified table associated with this quota.static org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.QuotasQuotaTableUtil.getTableQuota(Connection connection, TableName table) (package private) SpaceQuotaSnapshotQuotaObserverChore.getTableQuotaSnapshot(TableName table) Fetches theSpaceQuotaSnapshotfor the given table.protected static byte[]QuotaTableUtil.getTableRowKey(TableName table) TableQuotaSnapshotStore.getTargetState(TableName table, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota spaceQuota) QuotaCache.getUserLimiter(org.apache.hadoop.security.UserGroupInformation ugi, TableName table) Returns the limiter associated to the specified user/table.static org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.QuotasQuotaTableUtil.getUserQuota(Connection connection, String user, TableName table) booleanQuotaObserverChore.TablesWithQuotas.hasNamespaceQuota(TableName tn) Returns true if the table exists in a namespace with a namespace quota.booleanQuotaObserverChore.TablesWithQuotas.hasTableQuota(TableName tn) Returns true if the given table has a table quota.voidSpaceViolationPolicyEnforcement.initialize(RegionServerServices rss, TableName tableName, SpaceQuotaSnapshot snapshot) Initializes this policy instance.private booleanMasterQuotaManager.isInViolationAndPolicyDisable(TableName tableName, QuotaObserverChore quotaObserverChore) Method to check if a table is in violation and policy set on table is DISABLE.static QuotaSettingsQuotaSettingsFactory.limitTableSpace(TableName tableName, long sizeLimit, SpaceViolationPolicy violationPolicy) Creates aQuotaSettingsobject to limit 
the FileSystem space usage for the given table to the given size in bytes.(package private) static GetQuotaTableUtil.makeGetForSnapshotSize(TableName tn, String snapshot) Creates aGetfor the HBase snapshot's size against the given table.static GetQuotaTableUtil.makeQuotaSnapshotGetForTable(TableName tn) Creates aGetwhich returns onlySpaceQuotaSnapshotfrom the quota table for a specific table.static ScanQuotaTableUtil.makeQuotaSnapshotScanForTable(TableName tn) Creates aScanwhich returns onlySpaceQuotaSnapshotfrom the quota table for a specific table.protected static voidQuotaTableUtil.parseTableResult(TableName table, Result result, QuotaTableUtil.TableQuotasVisitor visitor) voidMasterQuotasObserver.postDeleteTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) voidMasterQuotaManager.removeRegionSizesForTable(TableName tableName) Removes each region size entry where the RegionInfo references the provided TableName.voidMasterQuotaManager.removeTableFromNamespaceQuota(TableName tName) Remove table from namespace quota.static QuotaSettingsQuotaSettingsFactory.removeTableSpaceLimit(TableName tableName) Creates aQuotaSettingsobject to remove the FileSystem space quota for the given table.voidTableQuotaSnapshotStore.setCurrentState(TableName table, SpaceQuotaSnapshot snapshot) voidUserQuotaState.setQuotas(org.apache.hadoop.conf.Configuration conf, TableName table, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Quotas quotas) Add the quota information of the specified table.voidMasterQuotaManager.setTableQuota(TableName table, org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.SetQuotaRequest req) (package private) voidQuotaObserverChore.setTableQuotaSnapshot(TableName table, SpaceQuotaSnapshot snapshot) Stores the quota state for the given table.voidMasterQuotaManager.setUserQuota(String userName, TableName table, org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.SetQuotaRequest req) private static QuotaSettingsQuotaSettingsFactory.throttle(String userName, TableName tableName, String namespace, String regionServer, ThrottleType type, long limit, TimeUnit timeUnit, QuotaScope scope) static QuotaSettingsQuotaSettingsFactory.throttleTable(TableName tableName, ThrottleType type, long limit, TimeUnit timeUnit) Throttle the specified table.static QuotaSettingsQuotaSettingsFactory.throttleTable(TableName tableName, ThrottleType type, long limit, TimeUnit timeUnit, QuotaScope scope) Throttle the specified table.static QuotaSettingsQuotaSettingsFactory.throttleUser(String userName, TableName tableName, ThrottleType type, long limit, TimeUnit timeUnit) Throttle the specified user on the specified table.static QuotaSettingsQuotaSettingsFactory.throttleUser(String userName, TableName tableName, ThrottleType type, long limit, TimeUnit timeUnit, QuotaScope scope) Throttle the specified user on the specified table.voidSpaceQuotaSnapshotNotifier.transitionTable(TableName tableName, SpaceQuotaSnapshot snapshot) Informs the cluster of the current state of a space quota for a table.voidTableSpaceQuotaSnapshotNotifier.transitionTable(TableName tableName, SpaceQuotaSnapshot snapshot) static QuotaSettingsQuotaSettingsFactory.unthrottleTable(TableName tableName) Remove the throttling for the specified table.static QuotaSettingsQuotaSettingsFactory.unthrottleTableByThrottleType(TableName tableName, ThrottleType type) Remove the throttling for the specified table.static QuotaSettingsQuotaSettingsFactory.unthrottleUser(String userName, TableName tableName) 
Remove the throttling for the specified user on the specified table.static QuotaSettingsQuotaSettingsFactory.unthrottleUserByThrottleType(String userName, TableName tableName, ThrottleType type) Remove the throttling for the specified user on the specified table.(package private) voidQuotaObserverChore.updateTableQuota(TableName table, SpaceQuotaSnapshot currentSnapshot, SpaceQuotaSnapshot targetSnapshot) Updates the hbase:quota table with the new quota policy for thistableif necessary.voidQuotaTableUtil.TableQuotasVisitor.visitTableQuotas(TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Quotas quotas) voidQuotaTableUtil.UserQuotasVisitor.visitUserQuotas(String userName, TableName table, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Quotas quotas) Method parameters in org.apache.hadoop.hbase.quotas with type arguments of type TableNameModifier and TypeMethodDescriptionSnapshotQuotaObserverChore.computeSnapshotSizes(org.apache.hbase.thirdparty.com.google.common.collect.Multimap<TableName, String> snapshotsToComputeSize) Computes the size of each snapshot provided given the current files referenced by the table.QuotaTableUtil.createDeletesForExistingTableSnapshotSizes(org.apache.hbase.thirdparty.com.google.common.collect.Multimap<TableName, String> snapshotEntriesToRemove) Returns a list ofDeleteto remove given table snapshot entries to remove from quota tablestatic voidQuotaTableUtil.extractQuotaSnapshot(Result result, Map<TableName, SpaceQuotaSnapshot> snapshots) Extracts theSpaceViolationPolicyandTableNamefrom the providedResultand adds them to the givenMap.(package private) voidSpaceQuotaRefresherChore.extractQuotaSnapshot(Result result, Map<TableName, SpaceQuotaSnapshot> snapshots) Wrapper aroundQuotaTableUtil.extractQuotaSnapshot(Result, Map)for testing.static Map<TableName,QuotaState> QuotaUtil.fetchTableQuotas(org.apache.hadoop.conf.Configuration conf, Connection connection, Map<TableName, Double> tableMachineFactors) static Map<String,UserQuotaState> QuotaUtil.fetchUserQuotas(org.apache.hadoop.conf.Configuration conf, Connection connection, Map<TableName, Double> tableMachineQuotaFactors, double factor) QuotaObserverChore.TablesWithQuotas.filterInsufficientlyReportedTables(QuotaSnapshotStore<TableName> tableStore) Filters out all tables for which the Master currently doesn't have enough region space reports received from RegionServers yet.(package private) intQuotaObserverChore.TablesWithQuotas.getNumReportedRegions(TableName table, QuotaSnapshotStore<TableName> tableStore) Computes the number of regions reported for a table.SnapshotQuotaObserverChore.getSnapshotsFromTables(Admin admin, Set<TableName> tablesToFetchSnapshotsFrom) Computes a mapping of originatingTableNameto snapshots, when theTableNameexists in the providedSet.(package private) voidQuotaObserverChore.processNamespacesWithQuotas(Set<String> namespacesWithQuotas, org.apache.hbase.thirdparty.com.google.common.collect.Multimap<String, TableName> tablesByNamespace) Processes each namespace which has a quota defined and moves all of the tables contained in that namespace into or out of violation of the quota.(package private) voidQuotaObserverChore.processTablesWithQuotas(Set<TableName> tablesWithTableQuotas) Processes eachTableNamewhich has a quota defined and moves it in or out of violation based on the space use.(package private) voidSnapshotQuotaObserverChore.pruneNamespaceSnapshots(org.apache.hbase.thirdparty.com.google.common.collect.Multimap<TableName, String> 
snapshotsToComputeSize) Removes the snapshot entries that are present in Quota table but not in snapshotsToComputeSize(package private) voidSnapshotQuotaObserverChore.pruneTableSnapshots(org.apache.hbase.thirdparty.com.google.common.collect.Multimap<TableName, String> snapshotsToComputeSize) Removes the snapshot entries that are present in Quota table but not in snapshotsToComputeSize(package private) voidSnapshotQuotaObserverChore.removeExistingTableSnapshotSizes(org.apache.hbase.thirdparty.com.google.common.collect.Multimap<TableName, String> snapshotEntriesToRemove) (package private) voidQuotaObserverChore.updateNamespaceQuota(String namespace, SpaceQuotaSnapshot currentSnapshot, SpaceQuotaSnapshot targetSnapshot, org.apache.hbase.thirdparty.com.google.common.collect.Multimap<String, TableName> tablesByNamespace) Updates the hbase:quota table with the target quota policy for thisnamespaceif necessary.voidRegionServerSpaceQuotaManager.updateQuotaSnapshot(Map<TableName, SpaceQuotaSnapshot> newSnapshots) Updates the currentSpaceQuotaSnapshots for the RegionServer.Constructors in org.apache.hadoop.hbase.quotas with parameters of type TableNameModifierConstructorDescription(package private)CacheKey(Connection conn, org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, TableName tn) FileArchiverNotifierImpl(Connection conn, org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, TableName tn) protectedGlobalQuotaSettings(String userName, TableName tableName, String namespace, String regionServer) protectedGlobalQuotaSettingsImpl(String username, TableName tableName, String namespace, String regionServer, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Quotas quotas) protectedGlobalQuotaSettingsImpl(String userName, TableName tableName, String namespace, String regionServer, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Throttle throttleProto, Boolean bypassGlobals, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota spaceProto) (package private)QuotaGlobalsSettingsBypass(String userName, TableName tableName, String namespace, String regionServer, boolean bypassGlobals) protectedQuotaSettings(String userName, TableName tableName, String namespace, String regionServer) (package private)SpaceLimitSettings(TableName tableName) Constructs aSpaceLimitSettingsto remove a space quota on the giventableName.(package private)SpaceLimitSettings(TableName tableName, long sizeLimit, SpaceViolationPolicy violationPolicy) (package private)SpaceLimitSettings(TableName tableName, String namespace, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceLimitRequest req) (package private)ThrottleSettings(String userName, TableName tableName, String namespace, String regionServer, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.ThrottleRequest proto) Constructor parameters in org.apache.hadoop.hbase.quotas with type arguments of type TableNameModifierConstructorDescriptionActivePolicyEnforcement(Map<TableName, SpaceViolationPolicyEnforcement> activePolicies, Map<TableName, SpaceQuotaSnapshot> snapshots, RegionServerServices rss) ActivePolicyEnforcement(Map<TableName, SpaceViolationPolicyEnforcement> activePolicies, Map<TableName, SpaceQuotaSnapshot> snapshots, RegionServerServices rss, SpaceViolationPolicyEnforcementFactory factory) -
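Usage note: the per-table quota settings consumed by this package are built on the client with QuotaSettingsFactory (signatures listed above) and applied through Admin.setQuota. A sketch with illustrative table name and limits:

  import java.util.concurrent.TimeUnit;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.quotas.QuotaSettings;
  import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
  import org.apache.hadoop.hbase.quotas.SpaceViolationPolicy;
  import org.apache.hadoop.hbase.quotas.ThrottleType;

  public final class TableQuotaExample {
    // Sketch: throttle reads and cap FileSystem usage for ns1:orders.
    static void applyQuotas(Admin admin) throws Exception {
      TableName table = TableName.valueOf("ns1", "orders");
      QuotaSettings throttle = QuotaSettingsFactory.throttleTable(
          table, ThrottleType.READ_NUMBER, 1000, TimeUnit.SECONDS);
      QuotaSettings spaceLimit = QuotaSettingsFactory.limitTableSpace(
          table, 10L * 1024 * 1024 * 1024, SpaceViolationPolicy.NO_INSERTS);
      admin.setQuota(throttle);
      admin.setQuota(spaceLimit);
    }
  }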
Uses of TableName in org.apache.hadoop.hbase.quotas.policies
Fields in org.apache.hadoop.hbase.quotas.policies declared as TableName
  (package private) TableName AbstractViolationPolicyEnforcement.tableName
Methods in org.apache.hadoop.hbase.quotas.policies that return TableName
Methods in org.apache.hadoop.hbase.quotas.policies with parameters of type TableName
  void AbstractViolationPolicyEnforcement.initialize(RegionServerServices rss, TableName tableName, SpaceQuotaSnapshot snapshot)
  void AbstractViolationPolicyEnforcement.setTableName(TableName tableName)
-
Uses of TableName in org.apache.hadoop.hbase.regionserver
Fields in org.apache.hadoop.hbase.regionserver declared as TableNameFields in org.apache.hadoop.hbase.regionserver with type parameters of type TableNameMethods in org.apache.hadoop.hbase.regionserver that return TableNameModifier and TypeMethodDescriptionHStore.getTableName()Store.getTableName()StoreContext.getTableName()Methods in org.apache.hadoop.hbase.regionserver that return types with arguments of type TableNameModifier and TypeMethodDescriptionHRegionServer.getOnlineTables()Gets the online tables in this RS.Methods in org.apache.hadoop.hbase.regionserver with parameters of type TableNameModifier and TypeMethodDescriptionprivate org.apache.hadoop.fs.PathSecureBulkLoadManager.createStagingDir(org.apache.hadoop.fs.Path baseDir, User user, TableName tableName) List<org.apache.hadoop.fs.Path>HMobStore.getLocations(TableName tableName) HRegionServer.getRegions(TableName tableName) Gets the online regions of the specified table.OnlineRegions.getRegions(TableName tableName) Get all online regions of a table in this RS.RegionServerServices.getRegions(TableName tableName) booleanHRegionServer.reportFileArchivalForQuotas(TableName tableName, Collection<Map.Entry<String, Long>> archivedFiles) booleanRegionServerServices.reportFileArchivalForQuotas(TableName tableName, Collection<Map.Entry<String, Long>> archivedFiles) Reports a collection of files, and their sizes, that belonged to the given table and were just moved to the archive directory.Method parameters in org.apache.hadoop.hbase.regionserver with type arguments of type TableNameModifier and TypeMethodDescriptionvoidStoreFileWriter.appendMobMetadata(org.apache.hbase.thirdparty.com.google.common.collect.SetMultimap<TableName, String> mobRefSet) Appends MOB-specific metadata (even if it is empty)private voidStoreFileWriter.SingleStoreFileWriter.appendMobMetadata(org.apache.hbase.thirdparty.com.google.common.collect.SetMultimap<TableName, String> mobRefSet) Appends MOB-specific metadata (even if it is empty)private voidRSRpcServices.executeOpenRegionProcedures(org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.OpenRegionRequest request, Map<TableName, TableDescriptor> tdCache) -
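OnlineRegions.getRegions(TableName), listed above, is the coprocessor-facing way to enumerate a table's regions on one RegionServer. A minimal sketch; the OnlineRegions handle itself would come from a coprocessor environment, which is assumed here rather than shown:

import java.util.List;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.regionserver.OnlineRegions;
import org.apache.hadoop.hbase.regionserver.Region;

public final class OnlineRegionLister {
  private OnlineRegionLister() {}

  // Prints the names of all regions of the given table hosted on this RegionServer.
  // The OnlineRegions handle is typically obtained from a coprocessor environment (assumed).
  static void printOnlineRegions(OnlineRegions onlineRegions, TableName tableName) {
    List<? extends Region> regions = onlineRegions.getRegions(tableName);
    for (Region region : regions) {
      System.out.println(region.getRegionInfo().getRegionNameAsString());
    }
  }
}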
Uses of TableName in org.apache.hadoop.hbase.regionserver.metrics
Fields in org.apache.hadoop.hbase.regionserver.metrics declared as TableNameMethods in org.apache.hadoop.hbase.regionserver.metrics with parameters of type TableNameModifier and TypeMethodDescriptionprivate voidprivate static StringMetricsTableRequests.qualifyMetrics(String prefix, TableName tableName) Constructors in org.apache.hadoop.hbase.regionserver.metrics with parameters of type TableNameModifierConstructorDescriptionMetricsTableRequests(TableName tableName, org.apache.hadoop.conf.Configuration conf) -
Uses of TableName in org.apache.hadoop.hbase.regionserver.storefiletracker
Fields in org.apache.hadoop.hbase.regionserver.storefiletracker declared as TableNameMethods in org.apache.hadoop.hbase.regionserver.storefiletracker that return TableNameMethods in org.apache.hadoop.hbase.regionserver.storefiletracker with parameters of type TableNameModifier and TypeMethodDescriptionStoreFileTracker.createHFileLink(TableName linkedTable, String linkedRegion, String hfileName, boolean createBackRef) Create a new HFileLinkStoreFileTrackerBase.createHFileLink(TableName linkedTable, String linkedRegion, String hfileName, boolean createBackRef) Constructors in org.apache.hadoop.hbase.regionserver.storefiletracker with parameters of type TableNameModifierConstructorDescriptionInitializeStoreFileTrackerProcedure(MasterProcedureEnv env, TableName tableName) ModifyColumnFamilyStoreFileTrackerProcedure(MasterProcedureEnv env, TableName tableName, byte[] family, String dstSFT) protectedModifyStoreFileTrackerProcedure(MasterProcedureEnv env, TableName tableName, String dstSFT) ModifyTableStoreFileTrackerProcedure(MasterProcedureEnv env, TableName tableName, String dstSFT) -
Uses of TableName in org.apache.hadoop.hbase.regionserver.wal
Fields in org.apache.hadoop.hbase.regionserver.wal with type parameters of type TableNameModifier and TypeFieldDescriptionprivate final ConcurrentMap<TableName,MutableFastCounter> MetricsWALSourceImpl.perTableAppendCountprivate final ConcurrentMap<TableName,MutableFastCounter> MetricsWALSourceImpl.perTableAppendSizeMethods in org.apache.hadoop.hbase.regionserver.wal with parameters of type TableNameModifier and TypeMethodDescriptionvoidMetricsWALSource.incrementAppendCount(TableName tableName) Increment the count of wal appendsvoidMetricsWALSourceImpl.incrementAppendCount(TableName tableName) voidMetricsWALSource.incrementAppendSize(TableName tableName, long size) Add the append size.voidMetricsWALSourceImpl.incrementAppendSize(TableName tableName, long size) -
Uses of TableName in org.apache.hadoop.hbase.replication
Fields in org.apache.hadoop.hbase.replication declared as TableNameModifier and TypeFieldDescriptionstatic final TableNameReplicationStorageFactory.REPLICATION_QUEUE_TABLE_NAME_DEFAULTprivate final TableNameTableReplicationQueueStorage.tableNameFields in org.apache.hadoop.hbase.replication with type parameters of type TableNameModifier and TypeFieldDescriptionprivate Map<TableName,? extends Collection<String>> ReplicationPeerConfig.excludeTableCFsMapReplicationPeerConfig.ReplicationPeerConfigBuilderImpl.excludeTableCFsMapReplicationPeerConfig.ReplicationPeerConfigBuilderImpl.tableCFsMapprivate Map<TableName,? extends Collection<String>> ReplicationPeerConfig.tableCFsMapMethods in org.apache.hadoop.hbase.replication that return types with arguments of type TableNameModifier and TypeMethodDescriptionReplicationPeerConfig.getExcludeTableCFsMap()ReplicationPeer.getTableCFs()Get replicable (table, cf-list) map of this peerReplicationPeerImpl.getTableCFs()ReplicationPeerConfig.getTableCFsMap()ReplicationPeerConfig.unmodifiableTableCFsMap(Map<TableName, List<String>> tableCFsMap) Methods in org.apache.hadoop.hbase.replication with parameters of type TableNameModifier and TypeMethodDescriptionstatic TableDescriptorReplicationStorageFactory.createReplicationQueueTableDescriptor(TableName tableName) ReplicationBarrierFamilyFormat.getReplicationBarrierResult(Connection conn, TableName tableName, byte[] row, byte[] encodedRegionName) static ReplicationQueueStorageReplicationStorageFactory.getReplicationQueueStorage(Connection conn, org.apache.hadoop.conf.Configuration conf, TableName tableName) Create a newReplicationQueueStorage.ReplicationBarrierFamilyFormat.getTableEncodedRegionNameAndLastBarrier(Connection conn, TableName tableName) ReplicationBarrierFamilyFormat.getTableEncodedRegionNamesForSerialReplication(Connection conn, TableName tableName) static booleanReplicationStorageFactory.isReplicationQueueTable(org.apache.hadoop.conf.Configuration conf, TableName tableName) booleanReplicationPeerConfig.needToReplicate(TableName table) Decide whether the table need replicate to the peer clusterbooleanReplicationPeerConfig.needToReplicate(TableName table, byte[] family) Decide whether the passed family of the table need replicate to the peer cluster according to this peer config.Method parameters in org.apache.hadoop.hbase.replication with type arguments of type TableNameModifier and TypeMethodDescriptionprivate static booleanReplicationUtils.isTableCFsEqual(Map<TableName, List<String>> tableCFs1, Map<TableName, List<String>> tableCFs2) ReplicationPeerConfig.ReplicationPeerConfigBuilderImpl.setExcludeTableCFsMap(Map<TableName, List<String>> excludeTableCFsMap) ReplicationPeerConfigBuilder.setExcludeTableCFsMap(Map<TableName, List<String>> tableCFsMap) Sets the mapping of table name to column families which should not be replicated.ReplicationPeerConfig.ReplicationPeerConfigBuilderImpl.setTableCFsMap(Map<TableName, List<String>> tableCFsMap) ReplicationPeerConfigBuilder.setTableCFsMap(Map<TableName, List<String>> tableCFsMap) Sets an explicit map of tables and column families in those tables that should be replicated to the given peer.ReplicationPeerConfig.unmodifiableTableCFsMap(Map<TableName, List<String>> tableCFsMap) Constructors in org.apache.hadoop.hbase.replication with parameters of type TableName -
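The table-to-column-family maps above are usually supplied when building a ReplicationPeerConfig; needToReplicate(TableName) is what the replication code then consults per table. A minimal sketch with a hypothetical table and cluster key (setClusterKey and setReplicateAllUserTables belong to the same builder but are not listed in this section):

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;

public class PeerConfigExample {
  public static void main(String[] args) {
    TableName orders = TableName.valueOf("ns1:orders"); // hypothetical table
    Map<TableName, List<String>> tableCfs = new HashMap<>();
    tableCfs.put(orders, Arrays.asList("cf1", "cf2")); // only these families replicate

    ReplicationPeerConfig peerConfig = ReplicationPeerConfig.newBuilder()
      .setClusterKey("zk1,zk2,zk3:2181:/hbase") // hypothetical peer cluster key
      .setReplicateAllUserTables(false)
      .setTableCFsMap(tableCfs)
      .build();

    // needToReplicate decides whether edits for a table should go to this peer.
    System.out.println(peerConfig.needToReplicate(orders));                 // true
    System.out.println(peerConfig.needToReplicate(TableName.valueOf("ns1:other"))); // false
  }
}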
Uses of TableName in org.apache.hadoop.hbase.replication.master
Fields in org.apache.hadoop.hbase.replication.master declared as TableNameModifier and TypeFieldDescriptionstatic final TableNameReplicationSinkTrackerTableCreator.REPLICATION_SINK_TRACKER_TABLE_NAMEReplicationSinkTrackerTableCreator.REPLICATION_SINK_TRACKER_TABLE_NAME_STRtable name - can be enabled with config - hbase.regionserver.replication.sink.tracker.enabled -
Uses of TableName in org.apache.hadoop.hbase.replication.regionserver
Fields in org.apache.hadoop.hbase.replication.regionserver with type parameters of type TableNameModifier and TypeFieldDescriptionprivate final ConcurrentMap<TableName,String> SyncReplicationPeerMappingManager.table2PeerIdMethods in org.apache.hadoop.hbase.replication.regionserver with parameters of type TableNameModifier and TypeMethodDescriptionvoidReplicationSource.addHFileRefs(TableName tableName, byte[] family, List<Pair<org.apache.hadoop.fs.Path, org.apache.hadoop.fs.Path>> pairs) voidReplicationSourceInterface.addHFileRefs(TableName tableName, byte[] family, List<Pair<org.apache.hadoop.fs.Path, org.apache.hadoop.fs.Path>> pairs) Add hfile names to the queue to be replicated.voidReplicationSourceManager.addHFileRefs(TableName tableName, byte[] family, List<Pair<org.apache.hadoop.fs.Path, org.apache.hadoop.fs.Path>> pairs) (package private) voidReplication.addHFileRefsToQueue(TableName tableName, byte[] family, List<Pair<org.apache.hadoop.fs.Path, org.apache.hadoop.fs.Path>> pairs) private voidReplicationSink.batch(TableName tableName, Collection<List<Row>> allRows, int batchRowSizeThreshold) Do the changes and handle the poolprivate voidReplicationSink.buildBulkLoadHFileMap(Map<String, List<Pair<byte[], List<String>>>> bulkLoadHFileMap, TableName table, org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.BulkLoadDescriptor bld) booleanSyncReplicationPeerInfoProvider.checkState(TableName table, BiPredicate<SyncReplicationState, SyncReplicationState> checker) Check whether the given table is contained in a sync replication peer which can pass the state checker.booleanSyncReplicationPeerInfoProviderImpl.checkState(TableName table, BiPredicate<SyncReplicationState, SyncReplicationState> checker) private org.apache.hadoop.fs.PathHFileReplicator.createStagingDir(org.apache.hadoop.fs.Path baseDir, User user, TableName tableName) private voidHFileReplicator.doBulkLoad(org.apache.hadoop.conf.Configuration conf, TableName tableName, org.apache.hadoop.fs.Path stagingDir, Deque<BulkLoadHFiles.LoadQueueItem> queue, int maxRetries) booleanprivate StringReplicationSink.getHFilePath(TableName table, org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.BulkLoadDescriptor bld, String storeFile, byte[] family) (package private) StringSyncReplicationPeerInfoProvider.getPeerIdAndRemoteWALDir(TableName table) Return the peer id and remote WAL directory if the table is synchronously replicated and the state isSyncReplicationState.ACTIVE.SyncReplicationPeerInfoProviderImpl.getPeerIdAndRemoteWALDir(TableName table) -
Uses of TableName in org.apache.hadoop.hbase.rest
Methods in org.apache.hadoop.hbase.rest with parameters of type TableNameModifier and TypeMethodDescriptionprivate org.apache.hbase.thirdparty.javax.ws.rs.core.ResponseSchemaResource.replace(TableName name, TableSchemaModel model, org.apache.hbase.thirdparty.javax.ws.rs.core.UriInfo uriInfo, Admin admin) private org.apache.hbase.thirdparty.javax.ws.rs.core.ResponseSchemaResource.update(TableName name, TableSchemaModel model, org.apache.hbase.thirdparty.javax.ws.rs.core.UriInfo uriInfo, Admin admin) -
Uses of TableName in org.apache.hadoop.hbase.rsgroup
Fields in org.apache.hadoop.hbase.rsgroup declared as TableNameModifier and TypeFieldDescription(package private) static final TableNameRSGroupInfoManagerImpl.RSGROUP_TABLE_NAMEFields in org.apache.hadoop.hbase.rsgroup with type parameters of type TableNameModifier and TypeFieldDescription(package private) final org.apache.hbase.thirdparty.com.google.common.collect.ImmutableMap<TableName,RSGroupInfo> RSGroupInfoManagerImpl.RSGroupInfoHolder.tableName2GroupRSGroupInfo.tablesDeprecated.Since 3.0.0, will be removed in 4.0.0.Methods in org.apache.hadoop.hbase.rsgroup that return types with arguments of type TableNameModifier and TypeMethodDescriptionprivate Pair<Map<TableName,Map<ServerName, List<RegionInfo>>>, List<RegionPlan>> RSGroupBasedLoadBalancer.correctAssignments(Map<TableName, Map<ServerName, List<RegionInfo>>> existingAssignments) (package private) Map<TableName,Map<ServerName, List<RegionInfo>>> RSGroupInfoManagerImpl.getRSGroupAssignmentsByTable(TableStateManager tableStateManager, String groupName) This is an EXPENSIVE clone.RSGroupInfo.getTables()Deprecated.Since 3.0.0, will be removed in 4.0.0.RSGroupUtil.listTablesInRSGroup(MasterServices master, String groupName) Methods in org.apache.hadoop.hbase.rsgroup with parameters of type TableNameModifier and TypeMethodDescriptionvoidDeprecated.Since 3.0.0, will be removed in 4.0.0.booleanRSGroupInfo.containsTable(TableName table) Deprecated.Since 3.0.0, will be removed in 4.0.0.DisabledRSGroupInfoManager.determineRSGroupInfoForTable(TableName tableName) RSGroupInfoManager.determineRSGroupInfoForTable(TableName tableName) DetermineRSGroupInfofor the given table.RSGroupInfoManagerImpl.determineRSGroupInfoForTable(TableName tableName) DisabledRSGroupInfoManager.getRSGroupForTable(TableName tableName) RSGroupInfoManager.getRSGroupForTable(TableName tableName) GetRSGroupInfofor the given table.RSGroupInfoManagerImpl.getRSGroupForTable(TableName tableName) static Optional<RSGroupInfo>RSGroupUtil.getRSGroupInfo(MasterServices master, RSGroupInfoManager manager, TableName tableName) Will try to get the rsgroup fromTableDescriptorfirst, and then try to get the rsgroup from theNamespaceDescriptor.RSGroupAdminClient.getRSGroupInfoOfTable(TableName tableName) Deprecated.GetsRSGroupInfofor the given table's group.private booleanRSGroupInfoManagerImpl.isTableInGroup(TableName tableName, String groupName, Set<TableName> tablesInGroupCache) booleanRSGroupInfo.removeTable(TableName table) Deprecated.Since 3.0.0, will be removed in 4.0.0.Method parameters in org.apache.hadoop.hbase.rsgroup with type arguments of type TableNameModifier and TypeMethodDescriptionvoidRSGroupInfo.addAllTables(Collection<TableName> arg) Deprecated.Since 3.0.0, will be removed in 4.0.0.RSGroupBasedLoadBalancer.balanceCluster(Map<TableName, Map<ServerName, List<RegionInfo>>> loadOfAllTable) Balance by RSGroup.private Pair<Map<TableName,Map<ServerName, List<RegionInfo>>>, List<RegionPlan>> RSGroupBasedLoadBalancer.correctAssignments(Map<TableName, Map<ServerName, List<RegionInfo>>> existingAssignments) private booleanRSGroupInfoManagerImpl.isTableInGroup(TableName tableName, String groupName, Set<TableName> tablesInGroupCache) voidRSGroupAdminClient.moveTables(Set<TableName> tables, String targetGroup) Deprecated.Move given set of tables to the specified target RegionServer group.private voidRSGroupAdminServiceImpl.moveTablesAndWait(Set<TableName> tables, String targetGroup) Deprecated.private voidRSGroupInfoManagerImpl.moveTablesAndWait(Set<TableName> tables, String 
targetGroup) voidDisabledRSGroupInfoManager.setRSGroup(Set<TableName> tables, String groupName) voidRSGroupInfoManager.setRSGroup(Set<TableName> tables, String groupName) Set group for tables.voidRSGroupInfoManagerImpl.setRSGroup(Set<TableName> tables, String groupName) voidRSGroupBasedLoadBalancer.updateBalancerLoadInfo(Map<TableName, Map<ServerName, List<RegionInfo>>> loadOfAllTable) Constructors in org.apache.hadoop.hbase.rsgroup with parameters of type TableNameModifierConstructorDescriptionMigrateRSGroupProcedure(MasterProcedureEnv env, TableName tableName) -
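RSGroupInfoManager and RSGroupInfo above are server-side; from a client the same table-to-group assignment is normally driven through Admin. A minimal sketch, assuming an HBase version whose Admin exposes the rsgroup operations and a pre-existing group named group_analytics (hypothetical):

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableName tn = TableName.valueOf("ns1:orders"); // hypothetical table
      // Move the table to a pre-existing group; internally this ends up in
      // RSGroupInfoManager.setRSGroup(Set<TableName>, String) listed above.
      admin.setRSGroup(Collections.singleton(tn), "group_analytics");
      RSGroupInfo group = admin.getRSGroup(tn);
      System.out.println("Table " + tn + " is in group " + group.getName());
    }
  }
}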
Uses of TableName in org.apache.hadoop.hbase.security.access
Fields in org.apache.hadoop.hbase.security.access declared as TableNameModifier and TypeFieldDescriptionstatic final TableNameAccessControlClient.ACL_TABLE_NAMEstatic final TableNamePermissionStorage.ACL_TABLE_NAMEInternal storage table for access control listsprivate TableNameAccessControlFilter.tableprivate final TableNameAuthResult.tableprivate TableNameTablePermission.tableprivate TableNameAuthResult.Params.tableNameprivate TableNameGetUserPermissionsRequest.Builder.tableNameprivate TableNameGetUserPermissionsRequest.tableNameprivate TableNamePermission.Builder.tableNameFields in org.apache.hadoop.hbase.security.access with type parameters of type TableNameModifier and TypeFieldDescriptionprivate Map<TableName,List<UserPermission>> AccessController.tableAclsAuthManager.tableCacheCache for table permission.Methods in org.apache.hadoop.hbase.security.access that return TableNameModifier and TypeMethodDescriptionprivate TableNameAccessController.getTableName(RegionCoprocessorEnvironment e) private TableNameAccessController.getTableName(Region region) AuthResult.getTableName()GetUserPermissionsRequest.getTableName()TablePermission.getTableName()static TableNameShadedAccessControlUtil.toTableName(org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.TableName tableNamePB) Methods in org.apache.hadoop.hbase.security.access that return types with arguments of type TableNameModifier and TypeMethodDescriptionSnapshotScannerHDFSAclController.SnapshotScannerHDFSAclStorage.getUserNamespaceAndTable(Table aclTable, String userName) Methods in org.apache.hadoop.hbase.security.access with parameters of type TableNameModifier and TypeMethodDescriptionbooleanAuthManager.accessUserTable(User user, TableName table, Permission.Action action) Checks if the user has access to the full table or at least a family/qualifier for the specified action.booleanSnapshotScannerHDFSAclHelper.addTableAcl(TableName tableName, Set<String> users, String operation) Add table user acls(package private) static voidSnapshotScannerHDFSAclController.SnapshotScannerHDFSAclStorage.addUserTableHdfsAcl(Connection connection, String user, TableName tableName) (package private) static voidSnapshotScannerHDFSAclController.SnapshotScannerHDFSAclStorage.addUserTableHdfsAcl(Connection connection, Set<String> users, TableName tableName) (package private) static voidSnapshotScannerHDFSAclController.SnapshotScannerHDFSAclStorage.addUserTableHdfsAcl(Table aclTable, String user, TableName tableName) static AuthResultAuthResult.allow(String request, String reason, User user, Permission.Action action, TableName table, byte[] family, byte[] qualifier) static AuthResultAuthResult.allow(String request, String reason, User user, Permission.Action action, TableName table, Map<byte[], ? 
extends Collection<?>> families) booleanAuthManager.authorizeCell(User user, TableName table, Cell cell, Permission.Action action) Check if user has given action privilege in cell scope.private booleanAuthManager.authorizeFamily(Set<TablePermission> permissions, TableName table, byte[] family, Permission.Action action) private booleanAuthManager.authorizeTable(Set<TablePermission> permissions, TableName table, byte[] family, byte[] qualifier, Permission.Action action) booleanAuthManager.authorizeUserFamily(User user, TableName table, byte[] family, Permission.Action action) Check if user has given action privilege in table:family scope.booleanAuthManager.authorizeUserTable(User user, TableName table, byte[] family, byte[] qualifier, Permission.Action action) Check if user has given action privilege in table:family:qualifier scope.booleanAuthManager.authorizeUserTable(User user, TableName table, byte[] family, Permission.Action action) Check if user has given action privilege in table:family scope.booleanAuthManager.authorizeUserTable(User user, TableName table, Permission.Action action) Check if user has given action privilege in table scope.static org.apache.hadoop.hbase.shaded.protobuf.generated.AccessControlProtos.GrantRequestAccessControlUtil.buildGrantRequest(String username, TableName tableName, byte[] family, byte[] qualifier, boolean mergeExistingPermissions, org.apache.hadoop.hbase.shaded.protobuf.generated.AccessControlProtos.Permission.Action... actions) Create a request to grant user table permissions.static org.apache.hadoop.hbase.shaded.protobuf.generated.AccessControlProtos.RevokeRequestAccessControlUtil.buildRevokeRequest(String username, TableName tableName, byte[] family, byte[] qualifier, org.apache.hadoop.hbase.shaded.protobuf.generated.AccessControlProtos.Permission.Action... actions) Create a request to revoke user table permissions.voidAccessChecker.checkLockPermissions(User user, String namespace, TableName tableName, RegionInfo[] regionInfos, String reason) voidAccessController.checkLockPermissions(ObserverContext<?> ctx, String namespace, TableName tableName, RegionInfo[] regionInfos, String reason) voidNoopAccessChecker.checkLockPermissions(User user, String namespace, TableName tableName, RegionInfo[] regionInfos, String reason) (package private) voidSnapshotScannerHDFSAclHelper.createTableDirectories(TableName tableName) voidZKPermissionWatcher.deleteTableACLNode(TableName tableName) Delete the acl notify node of table(package private) static voidSnapshotScannerHDFSAclController.SnapshotScannerHDFSAclStorage.deleteTableHdfsAcl(Table aclTable, TableName tableName) (package private) static voidSnapshotScannerHDFSAclController.SnapshotScannerHDFSAclStorage.deleteUserTableHdfsAcl(Connection connection, Set<String> users, TableName tableName) (package private) static voidSnapshotScannerHDFSAclController.SnapshotScannerHDFSAclStorage.deleteUserTableHdfsAcl(Table aclTable, String user, TableName tableName) static AuthResultAuthResult.deny(String request, String reason, User user, Permission.Action action, TableName table, byte[] family, byte[] qualifier) static AuthResultAuthResult.deny(String request, String reason, User user, Permission.Action action, TableName table, Map<byte[], ?
extends Collection<?>> families) private booleanTablePermission.failCheckTable(TableName table) SnapshotScannerHDFSAclController.filterUsersToRemoveNsAccessAcl(Table aclTable, TableName tableName, Set<String> tablesUsers) Remove table user access HDFS acl from namespace directory if the user has no permissions of global, ns of the table or other tables of the ns, eg: Bob has 'ns1:t1' read permission, when delete 'ns1:t1', if Bob has global read permission, '@ns1' read permission or 'ns1:other_tables' read permission, then skip remove Bob access acl in ns1Dirs, otherwise, remove Bob access acl.(package private) org.apache.hadoop.fs.PathSnapshotScannerHDFSAclHelper.PathHelper.getArchiveTableDir(TableName tableName) (package private) org.apache.hadoop.fs.PathSnapshotScannerHDFSAclHelper.PathHelper.getDataTableDir(TableName tableName) (package private) org.apache.hadoop.fs.PathSnapshotScannerHDFSAclHelper.PathHelper.getMobTableDir(TableName tableName) static org.apache.hbase.thirdparty.com.google.common.collect.ListMultimap<String,UserPermission> PermissionStorage.getTablePermissions(org.apache.hadoop.conf.Configuration conf, TableName tableName) (package private) List<org.apache.hadoop.fs.Path>SnapshotScannerHDFSAclHelper.getTableRootPaths(TableName tableName, boolean includeSnapshotPath) return paths that user will table permission will visitprivate List<org.apache.hadoop.fs.Path>SnapshotScannerHDFSAclHelper.getTableSnapshotPaths(TableName tableName) SnapshotScannerHDFSAclController.SnapshotScannerHDFSAclStorage.getTableUsers(Table aclTable, TableName tableName) (package private) org.apache.hadoop.fs.PathSnapshotScannerHDFSAclHelper.PathHelper.getTmpTableDir(TableName tableName) static List<UserPermission>AccessControlUtil.getUserPermissions(org.apache.hbase.thirdparty.com.google.protobuf.RpcController controller, org.apache.hadoop.hbase.shaded.protobuf.generated.AccessControlProtos.AccessControlService.BlockingInterface protocol, TableName t) Deprecated.UseAdmin.getUserPermissions(GetUserPermissionsRequest)instead.static List<UserPermission>AccessControlUtil.getUserPermissions(org.apache.hbase.thirdparty.com.google.protobuf.RpcController controller, org.apache.hadoop.hbase.shaded.protobuf.generated.AccessControlProtos.AccessControlService.BlockingInterface protocol, TableName t, byte[] columnFamily, byte[] columnQualifier, String userName) Deprecated.UseAdmin.getUserPermissions(GetUserPermissionsRequest)instead.SnapshotScannerHDFSAclHelper.getUsersWithTableReadAction(TableName tableName, boolean includeNamespace, boolean includeGlobal) Return users with table read permissionprivate UserPermissionSnapshotScannerHDFSAclController.getUserTablePermission(org.apache.hadoop.conf.Configuration conf, String userName, TableName tableName) static List<UserPermission>PermissionStorage.getUserTablePermissions(org.apache.hadoop.conf.Configuration conf, TableName tableName, byte[] cf, byte[] cq, String userName, boolean hasFilterUser) Returns the currently granted permissions for a given table as the specified user plus associated permissions.private static voidAccessControlClient.grant(Connection connection, TableName tableName, String userName, byte[] family, byte[] qual, boolean mergeExistingPermissions, Permission.Action... actions) Grants permission on the specified table for the specified userstatic voidAccessControlClient.grant(Connection connection, TableName tableName, String userName, byte[] family, byte[] qual, Permission.Action... 
actions) Grants permission on the specified table for the specified user.static voidAccessControlUtil.grant(org.apache.hbase.thirdparty.com.google.protobuf.RpcController controller, org.apache.hadoop.hbase.shaded.protobuf.generated.AccessControlProtos.AccessControlService.BlockingInterface protocol, String userShortName, TableName tableName, byte[] f, byte[] q, boolean mergeExistingPermissions, Permission.Action... actions) Deprecated.UseAdmin.grant(UserPermission, boolean)instead.static booleanAccessControlUtil.hasPermission(org.apache.hbase.thirdparty.com.google.protobuf.RpcController controller, org.apache.hadoop.hbase.shaded.protobuf.generated.AccessControlProtos.AccessControlService.BlockingInterface protocol, TableName tableName, byte[] columnFamily, byte[] columnQualifier, String userName, Permission.Action[] actions) Deprecated.UseAdmin.hasUserPermissions(String, List)instead.(package private) static booleanSnapshotScannerHDFSAclController.SnapshotScannerHDFSAclStorage.hasUserTableHdfsAcl(Table aclTable, String user, TableName tableName) booleanTablePermission.implies(TableName table, byte[] family, byte[] qualifier, Permission.Action action) Check if given action can performs on given table:family:qualifier.booleanTablePermission.implies(TableName table, byte[] family, Permission.Action action) Check if given action can performs on given table:family.booleanTablePermission.implies(TableName table, KeyValue kv, Permission.Action action) Checks if this permission grants access to perform the given action on the given table and key value.private booleanSnapshotScannerHDFSAclController.isHdfsAclSet(Table aclTable, String userName, String namespace, TableName tableName) Check if user global/namespace/table HDFS acls is already setprivate booleanSnapshotScannerHDFSAclController.isHdfsAclSet(Table aclTable, String userName, TableName tableName) private booleanSnapshotScannerHDFSAclController.needHandleTableHdfsAcl(TableName tableName, String operation) GetUserPermissionsRequest.newBuilder(TableName tableName) Build a get table permission requeststatic Permission.BuilderPermission.newBuilder(TableName tableName) Build a table permissionprivate AuthResultAccessChecker.permissionGranted(String request, User user, Permission.Action permRequest, TableName tableName, byte[] family, byte[] qualifier) AccessChecker.permissionGranted(String request, User user, Permission.Action permRequest, TableName tableName, Map<byte[], ? extends Collection<?>> families) Check the current user for authorization to perform a specific action against the given set of row data.NoopAccessChecker.permissionGranted(String request, User user, Permission.Action permRequest, TableName tableName, Map<byte[], ? 
extends Collection<?>> families) voidSnapshotScannerHDFSAclController.postCompletedDeleteTableAction(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) voidSnapshotScannerHDFSAclController.postCompletedTruncateTableAction(ObserverContext<MasterCoprocessorEnvironment> c, TableName tableName) voidAccessController.postDeleteTable(ObserverContext<MasterCoprocessorEnvironment> c, TableName tableName) voidAccessController.postModifyTable(ObserverContext<MasterCoprocessorEnvironment> c, TableName tableName, TableDescriptor oldDesc, TableDescriptor currentDesc) voidSnapshotScannerHDFSAclController.postModifyTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, TableDescriptor oldDescriptor, TableDescriptor currentDescriptor) voidAccessController.postTruncateTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) voidAccessController.preDeleteTable(ObserverContext<MasterCoprocessorEnvironment> c, TableName tableName) voidAccessController.preDisableTable(ObserverContext<MasterCoprocessorEnvironment> c, TableName tableName) voidAccessController.preEnableTable(ObserverContext<MasterCoprocessorEnvironment> c, TableName tableName) voidAccessController.preGetRSGroupInfoOfTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) voidAccessController.preGetUserPermissions(ObserverContext<MasterCoprocessorEnvironment> ctx, String userName, String namespace, TableName tableName, byte[] family, byte[] qualifier) private voidAccessController.preGetUserPermissions(User caller, String userName, String namespace, TableName tableName, byte[] family, byte[] qualifier) voidAccessController.preLockHeartbeat(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, String description) AccessController.preModifyColumnFamilyStoreFileTracker(ObserverContext<MasterCoprocessorEnvironment> c, TableName tableName, byte[] family, String dstSFT) AccessController.preModifyTable(ObserverContext<MasterCoprocessorEnvironment> c, TableName tableName, TableDescriptor currentDesc, TableDescriptor newDesc) CoprocessorWhitelistMasterObserver.preModifyTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, TableDescriptor currentDesc, TableDescriptor newDesc) AccessController.preModifyTableStoreFileTracker(ObserverContext<MasterCoprocessorEnvironment> c, TableName tableName, String dstSFT) voidAccessController.preRequestLock(ObserverContext<MasterCoprocessorEnvironment> ctx, String namespace, TableName tableName, RegionInfo[] regionInfos, String description) voidAccessController.preSetTableQuota(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, GlobalQuotaSettings quotas) voidAccessController.preSetUserQuota(ObserverContext<MasterCoprocessorEnvironment> ctx, String userName, TableName tableName, GlobalQuotaSettings quotas) voidAccessController.preSplitRegion(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, byte[] splitRow) voidAccessController.preTableFlush(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) voidAccessController.preTruncateTable(ObserverContext<MasterCoprocessorEnvironment> c, TableName tableName) voidAuthManager.refreshTableCacheFromWritable(TableName table, byte[] data) Update acl info for table.booleanSnapshotScannerHDFSAclHelper.removeNamespaceAccessAcl(TableName tableName, Set<String> removeUsers, String operation) Remove table access acl from namespace dir when delete tablevoidAuthManager.removeTable(TableName table) Remove given 
table from AuthManager's table cache.booleanSnapshotScannerHDFSAclHelper.removeTableAcl(TableName tableName, Set<String> users) Remove table acls when modify tablebooleanSnapshotScannerHDFSAclHelper.removeTableDefaultAcl(TableName tableName, Set<String> removeUsers) Remove default acl from table archive dir when delete table(package private) static voidPermissionStorage.removeTablePermissions(org.apache.hadoop.conf.Configuration conf, TableName tableName, byte[] column, Table t) Remove specified table column from the acl table.(package private) static voidPermissionStorage.removeTablePermissions(org.apache.hadoop.conf.Configuration conf, TableName tableName, Table t) Remove specified table from the _acl_ table.private static voidPermissionStorage.removeTablePermissions(TableName tableName, byte[] column, Table table, boolean closeTable) private voidSnapshotScannerHDFSAclController.removeUserTableHdfsAcl(Table aclTable, String userName, TableName tableName, UserPermission userPermission) voidAccessChecker.requireAccess(User user, String request, TableName tableName, Permission.Action... permissions) Authorizes that the current user has any of the given permissions to access the table.voidAccessController.requireAccess(ObserverContext<?> ctx, String request, TableName tableName, Permission.Action... permissions) voidNoopAccessChecker.requireAccess(User user, String request, TableName tableName, Permission.Action... permissions) voidAccessChecker.requireGlobalPermission(User user, String request, Permission.Action perm, TableName tableName, Map<byte[], ? extends Collection<byte[]>> familyMap, String filterUser) Checks that the user has the given global permission.voidAccessController.requireGlobalPermission(ObserverContext<?> ctx, String request, Permission.Action perm, TableName tableName, Map<byte[], ? extends Collection<byte[]>> familyMap) voidNoopAccessChecker.requireGlobalPermission(User user, String request, Permission.Action perm, TableName tableName, Map<byte[], ? extends Collection<byte[]>> familyMap, String filterUser) voidAccessChecker.requireNamespacePermission(User user, String request, String namespace, TableName tableName, Map<byte[], ? extends Collection<byte[]>> familyMap, Permission.Action... permissions) Checks that the user has the given global or namespace permission.voidAccessController.requireNamespacePermission(ObserverContext<?> ctx, String request, String namespace, TableName tableName, Map<byte[], ? extends Collection<byte[]>> familyMap, Permission.Action... permissions) voidNoopAccessChecker.requireNamespacePermission(User user, String request, String namespace, TableName tableName, Map<byte[], ? extends Collection<byte[]>> familyMap, Permission.Action... permissions) voidAccessChecker.requirePermission(User user, String request, TableName tableName, byte[] family, byte[] qualifier, String filterUser, Permission.Action... permissions) Authorizes that the current user has any of the given permissions for the given table, column family and column qualifier.voidAccessController.requirePermission(ObserverContext<?> ctx, String request, TableName tableName, byte[] family, byte[] qualifier, Permission.Action... permissions) voidNoopAccessChecker.requirePermission(User user, String request, TableName tableName, byte[] family, byte[] qualifier, String filterUser, Permission.Action... permissions) voidAccessChecker.requireTablePermission(User user, String request, TableName tableName, byte[] family, byte[] qualifier, Permission.Action... 
permissions) Authorizes that the current user has any of the given permissions for the given table, column family and column qualifier.voidAccessController.requireTablePermission(ObserverContext<?> ctx, String request, TableName tableName, byte[] family, byte[] qualifier, Permission.Action... permissions) voidNoopAccessChecker.requireTablePermission(User user, String request, TableName tableName, byte[] family, byte[] qualifier, Permission.Action... permissions) static voidAccessControlClient.revoke(Connection connection, TableName tableName, String username, byte[] family, byte[] qualifier, Permission.Action... actions) Revokes the permission on the tablestatic voidAccessControlUtil.revoke(org.apache.hbase.thirdparty.com.google.protobuf.RpcController controller, org.apache.hadoop.hbase.shaded.protobuf.generated.AccessControlProtos.AccessControlService.BlockingInterface protocol, String userShortName, TableName tableName, byte[] f, byte[] q, Permission.Action... actions) Deprecated.UseAdmin.revoke(UserPermission)instead.AuthResult.Params.setTableName(TableName table) private booleanSnapshotScannerHDFSAclCleaner.tableExists(TableName tableName) static org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.TableNameShadedAccessControlUtil.toProtoTableName(TableName tableName) private voidAuthManager.updateTableCache(TableName table, org.apache.hbase.thirdparty.com.google.common.collect.ListMultimap<String, Permission> tablePerms) Updates the internal table permissions cache for specified table.Method parameters in org.apache.hadoop.hbase.security.access with type arguments of type TableNameModifier and TypeMethodDescriptionprivate voidSnapshotScannerHDFSAclHelper.handleTableAcl(Set<TableName> tableNames, Set<String> users, Set<String> skipNamespaces, Set<TableName> skipTables, SnapshotScannerHDFSAclHelper.HDFSAclOperation.OperationType operationType) voidAccessController.postGetTableDescriptors(ObserverContext<MasterCoprocessorEnvironment> ctx, List<TableName> tableNamesList, List<TableDescriptor> descriptors, String regex) voidAccessController.preGetTableDescriptors(ObserverContext<MasterCoprocessorEnvironment> ctx, List<TableName> tableNamesList, List<TableDescriptor> descriptors, String regex) voidAccessController.preMoveTables(ObserverContext<MasterCoprocessorEnvironment> ctx, Set<TableName> tables, String targetGroup) Constructors in org.apache.hadoop.hbase.security.access with parameters of type TableNameModifierConstructorDescription(package private)AccessControlFilter(AuthManager mgr, User ugi, TableName tableName, AccessControlFilter.Strategy strategy, Map<ByteRange, Integer> cfVsMaxVersions) AuthResult(boolean allowed, String request, String reason, User user, Permission.Action action, TableName table, byte[] family, byte[] qualifier) AuthResult(boolean allowed, String request, String reason, User user, Permission.Action action, TableName table, Map<byte[], ? extends Collection<?>> families) privateprivateprivateGetUserPermissionsRequest(String userName, String namespace, TableName tableName, byte[] family, byte[] qualifier) (package private)TablePermission(TableName table, byte[] family, byte[] qualifier, Permission.Action... assigned) Construct a table:family:qualifier permission. -
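The grant/revoke overloads above that take a Connection and TableName are the simplest entry points for table-level ACLs. A minimal sketch with a hypothetical table and user; both calls declare throws Throwable, hence the signature on main:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.security.access.AccessControlClient;
import org.apache.hadoop.hbase.security.access.Permission;
import org.apache.hadoop.hbase.util.Bytes;

public class AclExample {
  public static void main(String[] args) throws Throwable {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      TableName tn = TableName.valueOf("ns1:orders"); // hypothetical table
      byte[] family = Bytes.toBytes("cf1");

      // Grant bob READ on ns1:orders, family cf1 (all qualifiers).
      AccessControlClient.grant(conn, tn, "bob", family, null, Permission.Action.READ);

      // Later, revoke the same permission.
      AccessControlClient.revoke(conn, tn, "bob", family, null, Permission.Action.READ);
    }
  }
}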
Uses of TableName in org.apache.hadoop.hbase.security.visibility
Fields in org.apache.hadoop.hbase.security.visibility declared as TableNameModifier and TypeFieldDescriptionstatic final TableNameVisibilityConstants.LABELS_TABLE_NAMEInternal storage table for visibility labelsMethods in org.apache.hadoop.hbase.security.visibility with parameters of type TableNameModifier and TypeMethodDescriptionvoidVisibilityController.preDisableTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) VisibilityController.preModifyTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, TableDescriptor currentDescriptor, TableDescriptor newDescriptor) -
Uses of TableName in org.apache.hadoop.hbase.slowlog
Fields in org.apache.hadoop.hbase.slowlog declared as TableNameModifier and TypeFieldDescriptionstatic final TableNameSlowLogTableAccessor.SLOW_LOG_TABLE_NAMEhbase:slowlog table name - can be enabled with config - hbase.regionserver.slowlog.systable.enabled -
Uses of TableName in org.apache.hadoop.hbase.snapshot
Fields in org.apache.hadoop.hbase.snapshot declared as TableNameModifier and TypeFieldDescriptionprivate final TableNameRestoreSnapshotHelper.snapshotTableprivate final TableNameSnapshotInfo.SnapshotStats.snapshotTableprivate TableNameCreateSnapshot.tableNameprivate final TableNameSnapshotRegionLocator.tableNameMethods in org.apache.hadoop.hbase.snapshot that return TableNameMethods in org.apache.hadoop.hbase.snapshot with parameters of type TableNameModifier and TypeMethodDescriptionstatic RegionInfoRestoreSnapshotHelper.cloneRegionInfo(TableName tableName, RegionInfo snapshotRegionInfo) static SnapshotRegionLocatorprivate static Pair<org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotFileInfo,Long> ExportSnapshot.getSnapshotFileAndSize(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.conf.Configuration conf, TableName table, String region, String family, String hfile, long size) private static StringSnapshotRegionLocator.getSnapshotManifestDirKey(TableName table) static voidRestoreSnapshotHelper.restoreSnapshotAcl(org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotDescription snapshot, TableName newTableName, org.apache.hadoop.conf.Configuration conf) static voidSnapshotRegionLocator.setSnapshotManifestDir(org.apache.hadoop.conf.Configuration conf, String dir, TableName table) static booleanSnapshotRegionLocator.shouldUseSnapshotRegionLocator(org.apache.hadoop.conf.Configuration conf, TableName table) private static SnapshotRegionLocator.SnapshotHRegionLocationSnapshotRegionLocator.toLocation(org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.RegionInfo ri, TableName tableName) Constructors in org.apache.hadoop.hbase.snapshot with parameters of type TableNameModifierConstructorDescriptionprivateSnapshotRegionLocator(TableName tableName, TreeMap<byte[], SnapshotRegionLocator.HRegionReplicas> regions, List<HRegionLocation> rawLocations) TablePartiallyOpenException(TableName tableName) -
Uses of TableName in org.apache.hadoop.hbase.thrift
Methods in org.apache.hadoop.hbase.thrift that return TableNameModifier and TypeMethodDescriptionprivate static TableNameThriftHBaseServiceHandler.getTableName(ByteBuffer buffer) -
Uses of TableName in org.apache.hadoop.hbase.thrift2
Methods in org.apache.hadoop.hbase.thrift2 that return TableNameModifier and TypeMethodDescriptionstatic TableNameThriftUtilities.tableNameFromThrift(org.apache.hadoop.hbase.thrift2.generated.TTableName tableName) static TableName[]ThriftUtilities.tableNamesArrayFromThrift(List<org.apache.hadoop.hbase.thrift2.generated.TTableName> tableNames) Methods in org.apache.hadoop.hbase.thrift2 that return types with arguments of type TableNameModifier and TypeMethodDescriptionThriftUtilities.tableNamesFromThrift(List<org.apache.hadoop.hbase.thrift2.generated.TTableName> tableNames) Methods in org.apache.hadoop.hbase.thrift2 with parameters of type TableNameModifier and TypeMethodDescriptionstatic org.apache.hadoop.hbase.thrift2.generated.TTableNameThriftUtilities.tableNameFromHBase(TableName table) static List<org.apache.hadoop.hbase.thrift2.generated.TTableName>ThriftUtilities.tableNamesFromHBase(TableName[] in) Method parameters in org.apache.hadoop.hbase.thrift2 with type arguments of type TableNameModifier and TypeMethodDescriptionstatic List<org.apache.hadoop.hbase.thrift2.generated.TTableName>ThriftUtilities.tableNamesFromHBase(List<TableName> in) -
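ThriftUtilities is an internal helper of the Thrift2 server, but the two converters listed above illustrate the TableName/TTableName mapping. A minimal round-trip sketch with a hypothetical table name:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.thrift2.ThriftUtilities;
import org.apache.hadoop.hbase.thrift2.generated.TTableName;

public class ThriftNameRoundTrip {
  public static void main(String[] args) {
    TableName tn = TableName.valueOf("ns1:orders"); // hypothetical table
    // HBase -> Thrift, then back, using the converters listed above.
    TTableName thriftName = ThriftUtilities.tableNameFromHBase(tn);
    TableName roundTripped = ThriftUtilities.tableNameFromThrift(thriftName);
    System.out.println(roundTripped.getNameAsString()); // ns1:orders
  }
}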
Uses of TableName in org.apache.hadoop.hbase.thrift2.client
Fields in org.apache.hadoop.hbase.thrift2.client declared as TableNameMethods in org.apache.hadoop.hbase.thrift2.client that return TableNameModifier and TypeMethodDescriptionThriftTable.getName()ThriftAdmin.listTableNames()ThriftAdmin.listTableNames(Pattern pattern) ThriftAdmin.listTableNames(Pattern pattern, boolean includeSysTables) ThriftAdmin.listTableNamesByNamespace(String name) Methods in org.apache.hadoop.hbase.thrift2.client that return types with arguments of type TableNameModifier and TypeMethodDescriptionThriftAdmin.getConfiguredNamespacesAndTablesInRSGroup(String groupName) ThriftAdmin.getRegionServerSpaceQuotaSnapshots(ServerName serverName) ThriftAdmin.getSpaceQuotaTableSizes()ThriftAdmin.listTableNamesByState(boolean isEnabled) ThriftAdmin.listTablesInRSGroup(String groupName) Methods in org.apache.hadoop.hbase.thrift2.client with parameters of type TableNameModifier and TypeMethodDescriptionvoidThriftAdmin.addColumnFamily(TableName tableName, ColumnFamilyDescriptor columnFamily) ThriftAdmin.addColumnFamilyAsync(TableName tableName, ColumnFamilyDescriptor columnFamily) ThriftAdmin.clearBlockCache(TableName tableName) ThriftAdmin.cloneSnapshotAsync(String snapshotName, TableName tableName, boolean cloneAcl, String customSFT) voidThriftAdmin.cloneTableSchema(TableName tableName, TableName newTableName, boolean preserveSplits) voidvoidvoidThriftAdmin.compact(TableName tableName, byte[] columnFamily, CompactType compactType) voidThriftAdmin.compact(TableName tableName, CompactType compactType) voidThriftAdmin.deleteColumnFamily(TableName tableName, byte[] columnFamily) ThriftAdmin.deleteColumnFamilyAsync(TableName tableName, byte[] columnFamily) voidThriftAdmin.deleteTable(TableName tableName) ThriftAdmin.deleteTableAsync(TableName tableName) voidThriftAdmin.disableTable(TableName tableName) ThriftAdmin.disableTableAsync(TableName tableName) voidThriftAdmin.disableTableReplication(TableName tableName) voidThriftAdmin.enableTable(TableName tableName) ThriftAdmin.enableTableAsync(TableName tableName) voidThriftAdmin.enableTableReplication(TableName tableName) voidvoidvoidThriftConnection.getBufferedMutator(TableName tableName) ThriftAdmin.getCompactionState(TableName tableName) ThriftAdmin.getCompactionState(TableName tableName, CompactType compactType) ThriftAdmin.getCurrentSpaceQuotaSnapshot(TableName tableName) ThriftAdmin.getDescriptor(TableName tableName) longThriftAdmin.getLastMajorCompactionTimestamp(TableName tableName) ThriftConnection.getRegionLocator(TableName tableName) ThriftAdmin.getRegionMetrics(ServerName serverName, TableName tableName) ThriftAdmin.getRegions(TableName tableName) ThriftAdmin.getRSGroup(TableName tableName) ThriftConnection.getTableBuilder(TableName tableName, ExecutorService pool) Get a TableBuider to build ThriftTable, ThriftTable is NOT thread safebooleanThriftAdmin.isTableAvailable(TableName tableName) booleanThriftAdmin.isTableDisabled(TableName tableName) booleanThriftAdmin.isTableEnabled(TableName tableName) voidThriftAdmin.majorCompact(TableName tableName) voidThriftAdmin.majorCompact(TableName tableName, byte[] columnFamily) voidThriftAdmin.majorCompact(TableName tableName, byte[] columnFamily, CompactType compactType) voidThriftAdmin.majorCompact(TableName tableName, CompactType compactType) voidThriftAdmin.modifyColumnFamily(TableName tableName, ColumnFamilyDescriptor columnFamily) ThriftAdmin.modifyColumnFamilyAsync(TableName tableName, ColumnFamilyDescriptor columnFamily) ThriftAdmin.modifyColumnFamilyStoreFileTrackerAsync(TableName 
tableName, byte[] family, String dstSFT) ThriftAdmin.modifyTableStoreFileTrackerAsync(TableName tableName, String dstSFT) voidvoidThriftAdmin.snapshot(String snapshotName, TableName tableName, SnapshotType type) voidvoidbooleanThriftAdmin.tableExists(TableName tableName) voidThriftAdmin.truncateTable(TableName tableName, boolean preserveSplits) ThriftAdmin.truncateTableAsync(TableName tableName, boolean preserveSplits) Method parameters in org.apache.hadoop.hbase.thrift2.client with type arguments of type TableNameModifier and TypeMethodDescriptionThriftAdmin.listTableDescriptors(List<TableName> tableNames) voidThriftAdmin.setRSGroup(Set<TableName> tables, String groupName) Constructors in org.apache.hadoop.hbase.thrift2.client with parameters of type TableNameModifierConstructorDescriptionThriftTable(TableName tableName, org.apache.hadoop.hbase.thrift2.generated.THBaseService.Client client, org.apache.thrift.transport.TTransport tTransport, org.apache.hadoop.conf.Configuration conf) -
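ThriftAdmin and ThriftTable are obtained through a ThriftConnection, which plugs into the standard ConnectionFactory via hbase.client.connection.impl. A minimal sketch; the Thrift2 gateway host/port settings are deployment-specific and deliberately omitted here, and the table name is hypothetical:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ThriftAdminExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Route the client through the Thrift2 gateway instead of native RPC.
    // The Thrift server address settings are assumed to be configured elsewhere.
    conf.set("hbase.client.connection.impl",
      "org.apache.hadoop.hbase.thrift2.client.ThriftConnection");

    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) { // a ThriftAdmin under the hood
      TableName tn = TableName.valueOf("ns1:orders"); // hypothetical table
      if (admin.tableExists(tn)) {
        System.out.println(admin.getDescriptor(tn));
        admin.disableTable(tn);
        admin.enableTable(tn);
      }
    }
  }
}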
Uses of TableName in org.apache.hadoop.hbase.tool
Fields in org.apache.hadoop.hbase.tool declared as TableNameModifier and TypeFieldDescriptionstatic final TableNameCanaryTool.DEFAULT_WRITE_TABLE_NAMEprivate TableNameCanaryTool.RegionTaskResult.tableNameprivate TableNameCanaryTool.RegionMonitor.writeTableNameMethods in org.apache.hadoop.hbase.tool that return TableNameMethods in org.apache.hadoop.hbase.tool with parameters of type TableNameModifier and TypeMethodDescriptionBulkLoadHFiles.bulkLoad(TableName tableName, Map<byte[], List<org.apache.hadoop.fs.Path>> family2Files) Perform a bulk load of the given directory into the given pre-existing table.Perform a bulk load of the given directory into the given pre-existing table.BulkLoadHFilesTool.bulkLoad(TableName tableName, Map<byte[], List<org.apache.hadoop.fs.Path>> family2Files) protected voidBulkLoadHFilesTool.bulkLoadPhase(AsyncClusterConnection conn, TableName tableName, Deque<BulkLoadHFiles.LoadQueueItem> queue, org.apache.hbase.thirdparty.com.google.common.collect.Multimap<ByteBuffer, BulkLoadHFiles.LoadQueueItem> regionGroups, boolean copyFiles, Map<BulkLoadHFiles.LoadQueueItem, ByteBuffer> item2RegionMap) This takes the LQI's grouped by likely regions and attempts to bulk load them.private voidBulkLoadHFilesTool.checkRegionIndexValid(int idx, List<Pair<byte[], byte[]>> startEndKeys, TableName tableName) we can consider there is a region hole or overlap in following conditions.private voidBulkLoadHFilesTool.cleanup(AsyncClusterConnection conn, TableName tableName, Deque<BulkLoadHFiles.LoadQueueItem> queue, ExecutorService pool) private voidBulkLoadHFilesTool.createTable(TableName tableName, org.apache.hadoop.fs.Path hfofDir, AsyncAdmin admin) If the table is created for the first time, then "completebulkload" reads the files twice.private Map<BulkLoadHFiles.LoadQueueItem,ByteBuffer> BulkLoadHFilesTool.doBulkLoad(AsyncClusterConnection conn, TableName tableName, Map<byte[], List<org.apache.hadoop.fs.Path>> map, boolean silence, boolean copyFile) Perform a bulk load of the given map of families to hfiles into the given pre-existing table.private Map<BulkLoadHFiles.LoadQueueItem,ByteBuffer> BulkLoadHFilesTool.doBulkLoad(AsyncClusterConnection conn, TableName tableName, org.apache.hadoop.fs.Path hfofDir, boolean silence, boolean copyFile) Perform a bulk load of the given directory into the given pre-existing table.protected Pair<List<BulkLoadHFiles.LoadQueueItem>,String> BulkLoadHFilesTool.groupOrSplit(AsyncClusterConnection conn, TableName tableName, org.apache.hbase.thirdparty.com.google.common.collect.Multimap<ByteBuffer, BulkLoadHFiles.LoadQueueItem> regionGroups, BulkLoadHFiles.LoadQueueItem item, List<Pair<byte[], byte[]>> startEndKeys) Attempt to assign the given load queue item into its target region group.private Pair<org.apache.hbase.thirdparty.com.google.common.collect.Multimap<ByteBuffer,BulkLoadHFiles.LoadQueueItem>, Set<String>> BulkLoadHFilesTool.groupOrSplitPhase(AsyncClusterConnection conn, TableName tableName, ExecutorService pool, Deque<BulkLoadHFiles.LoadQueueItem> queue, List<Pair<byte[], byte[]>> startEndKeys) voidBulkLoadHFilesTool.loadHFileQueue(AsyncClusterConnection conn, TableName tableName, Deque<BulkLoadHFiles.LoadQueueItem> queue, boolean copyFiles) Used by the replication sink to load the hfiles from the source cluster.private Map<BulkLoadHFiles.LoadQueueItem,ByteBuffer> BulkLoadHFilesTool.performBulkLoad(AsyncClusterConnection conn, TableName tableName, Deque<BulkLoadHFiles.LoadQueueItem> queue, ExecutorService pool, boolean copyFile) static 
voidBulkLoadHFilesTool.prepareHFileQueue(org.apache.hadoop.conf.Configuration conf, AsyncClusterConnection conn, TableName tableName, org.apache.hadoop.fs.Path hfilesDir, Deque<BulkLoadHFiles.LoadQueueItem> queue, boolean validateHFile, boolean silence) Prepare a collection ofLoadQueueItemfrom list of source hfiles contained in the passed directory and validates whether the prepared queue has all the valid table column families in it.static voidBulkLoadHFilesTool.prepareHFileQueue(AsyncClusterConnection conn, TableName tableName, Map<byte[], List<org.apache.hadoop.fs.Path>> map, Deque<BulkLoadHFiles.LoadQueueItem> queue, boolean silence) Prepare a collection ofLoadQueueItemfrom list of source hfiles contained in the passed directory and validates whether the prepared queue has all the valid table column families in it.private voidBulkLoadHFilesTool.tableExists(AsyncClusterConnection conn, TableName tableName) private voidBulkLoadHFilesTool.throwAndLogTableNotFoundException(TableName tn) protected CompletableFuture<Collection<BulkLoadHFiles.LoadQueueItem>>BulkLoadHFilesTool.tryAtomicRegionLoad(AsyncClusterConnection conn, TableName tableName, boolean copyFiles, byte[] first, Collection<BulkLoadHFiles.LoadQueueItem> lqis) Attempts to do an atomic load of many hfiles into a region.Constructors in org.apache.hadoop.hbase.tool with parameters of type TableNameModifierConstructorDescriptionRegionMonitor(Connection connection, String[] monitorTargets, boolean useRegExp, CanaryTool.Sink sink, ExecutorService executor, boolean writeSniffing, TableName writeTableName, boolean treatFailureAsError, HashMap<String, Long> configuredReadTableTimeouts, long configuredWriteTableTimeout, long allowedFailures) RegionTaskResult(RegionInfo region, TableName tableName, ServerName serverName, ColumnFamilyDescriptor column) -
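BulkLoadHFiles.bulkLoad, listed above, is the programmatic equivalent of the completebulkload tool. A minimal sketch, assuming the table already exists and hfileDir points at HFileOutputFormat2 output with one subdirectory per column family (both names are hypothetical):

import java.nio.ByteBuffer;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.tool.BulkLoadHFiles;

public class BulkLoadExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName tn = TableName.valueOf("ns1:orders");        // hypothetical, must already exist
    Path hfileDir = new Path("/tmp/bulkload/ns1_orders");   // hypothetical HFileOutputFormat2 output

    // Loads every family subdirectory under hfileDir into the pre-existing table.
    BulkLoadHFiles loader = BulkLoadHFiles.create(conf);
    Map<BulkLoadHFiles.LoadQueueItem, ByteBuffer> loaded = loader.bulkLoad(tn, hfileDir);
    System.out.println("Loaded " + loaded.size() + " hfiles");
  }
}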
Uses of TableName in org.apache.hadoop.hbase.util
Fields in org.apache.hadoop.hbase.util declared as TableName

private TableName HBaseFsck.cleanReplicationBarrierTable - Deprecated.
(package private) TableName HbckTableInfo.tableName
private TableName LoadTestTool.tableName - Table name for the test
protected final TableName MultiThreadedAction.tableName

Fields in org.apache.hadoop.hbase.util with type parameters of type TableName

private final Map<TableName,TableDescriptor> FSTableDescriptors.cache
HBaseFsck.orphanTableDirs - Deprecated.
HBaseFsck.skippedRegions - Deprecated.
HBaseFsck.tablesIncluded - Deprecated.
private SortedMap<TableName,HbckTableInfo> HBaseFsck.tablesInfo - Deprecated. This map from TableName -> TableInfo contains the structures necessary to detect table consistency problems (holes, dupes, overlaps).
private Map<TableName,TableState> HBaseFsck.tableStates - Deprecated.

Methods in org.apache.hadoop.hbase.util that return TableName

HbckTableInfo.getName()
static TableName CommonFSUtils.getTableName(org.apache.hadoop.fs.Path tablePath) - Returns the TableName object representing the table directory under path rootdir
HbckRegionInfo.getTableName()
static TableName HFileArchiveUtil.getTableName(org.apache.hadoop.fs.Path archivePath)

Methods in org.apache.hadoop.hbase.util that return types with arguments of type TableName

private SortedMap<TableName,HbckTableInfo> HBaseFsck.checkHdfsIntegrity(boolean fixHoles, boolean fixOverlaps) - Deprecated.
(package private) SortedMap<TableName,HbckTableInfo> HBaseFsck.checkIntegrity() - Deprecated. Checks table integrity.
HBaseFsck.getIncludedTables() - Deprecated.
private SortedMap<TableName,HbckTableInfo> HBaseFsck.loadHdfsRegionInfos() - Deprecated. Populates hbi's from RegionInfos loaded from the file system.

Methods in org.apache.hadoop.hbase.util with parameters of type TableName

protected void LoadTestTool.applyColumnFamilyOptions(TableName tableName, byte[][] columnFamilies) - Apply column family options such as Bloom filters, compression, and data block encoding.
static int LoadTestUtil.createPreSplitLoadTestTable(org.apache.hadoop.conf.Configuration conf, TableName tableName, byte[][] columnFamilies, Compression.Algorithm compression, DataBlockEncoding dataBlockEncoding, int numRegionsPerServer, int regionReplication, Durability durability) - Creates a pre-split table for load testing.
static int LoadTestUtil.createPreSplitLoadTestTable(org.apache.hadoop.conf.Configuration conf, TableName tableName, byte[] columnFamily, Compression.Algorithm compression, DataBlockEncoding dataBlockEncoding) - Creates a pre-split table for load testing.
static int LoadTestUtil.createPreSplitLoadTestTable(org.apache.hadoop.conf.Configuration conf, TableName tableName, byte[] columnFamily, Compression.Algorithm compression, DataBlockEncoding dataBlockEncoding, int numRegionsPerServer, int regionReplication, Durability durability) - Creates a pre-split table for load testing.
(package private) static void RegionSplitter.createPresplitTable(TableName tableName, RegionSplitter.SplitAlgorithm splitAlgo, String[] columnFamilies, org.apache.hadoop.conf.Configuration conf)
private boolean HBaseFsck.fabricateTableInfo(FSTableDescriptors fstd, TableName tableName, Set<String> columns) - Deprecated. To fabricate a .tableinfo file with the following contents: 1. ...
Get the current table descriptor for the given table, or null if none exists.
static org.apache.hadoop.fs.Path HFileArchiveUtil.getRegionArchiveDir(org.apache.hadoop.fs.Path rootDir, TableName tableName, String encodedRegionName) - Get the archive directory for a given region under the specified table
static org.apache.hadoop.fs.Path HFileArchiveUtil.getRegionArchiveDir(org.apache.hadoop.fs.Path rootDir, TableName tableName, org.apache.hadoop.fs.Path regiondir) - Get the archive directory for a given region under the specified table
static org.apache.hadoop.fs.Path CommonFSUtils.getRegionDir(org.apache.hadoop.fs.Path rootdir, TableName tableName, String regionName) - Returns the Path object representing the region directory under path rootdir
(package private) static LinkedList<Pair<byte[],byte[]>> RegionSplitter.getSplits(Connection connection, TableName tableName, RegionSplitter.SplitAlgorithm splitAlgo)
static org.apache.hadoop.fs.Path HFileArchiveUtil.getStoreArchivePath(org.apache.hadoop.conf.Configuration conf, TableName tableName, String regionName, String familyName) - Get the directory to archive a store directory
static org.apache.hadoop.fs.Path HFileArchiveUtil.getTableArchivePath(org.apache.hadoop.conf.Configuration conf, TableName tableName) - Get the path to the table archive directory based on the configured archive directory.
static org.apache.hadoop.fs.Path HFileArchiveUtil.getTableArchivePath(org.apache.hadoop.fs.Path rootdir, TableName tableName) - Get the path to the table archive directory based on the configured archive directory.
static TableDescriptor FSTableDescriptors.getTableDescriptorFromFs(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path hbaseRootDir, TableName tableName) - Returns the latest table descriptor for the given table directly from the file system if it exists, bypassing the local cache.
static org.apache.hadoop.fs.Path CommonFSUtils.getTableDir(org.apache.hadoop.fs.Path rootdir, TableName tableName) - Returns the Path object representing the table directory under path rootdir
private org.apache.hadoop.fs.Path FSTableDescriptors.getTableDir(TableName tableName) - Return the table directory in HDFS
private static Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path> RegionSplitter.getTableDirAndSplitFile(org.apache.hadoop.conf.Configuration conf, TableName tableName)
FSUtils.getTableStoreFilePathMap(Map<String, org.apache.hadoop.fs.Path> map, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path hbaseRootDir, TableName tableName) - Runs through the HBase rootdir/tablename and creates a reverse lookup map for table StoreFile names to the full Path.
FSUtils.getTableStoreFilePathMap(Map<String, org.apache.hadoop.fs.Path> resultMap, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path hbaseRootDir, TableName tableName, org.apache.hadoop.fs.PathFilter sfFilter, ExecutorService executor, FSUtils.ProgressReporter progressReporter) - Runs through the HBase rootdir/tablename and creates a reverse lookup map for table StoreFile names to the full Path.
FSUtils.getTableStoreFilePathMap(Map<String, org.apache.hadoop.fs.Path> resultMap, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path hbaseRootDir, TableName tableName, org.apache.hadoop.fs.PathFilter sfFilter, ExecutorService executor, HbckErrorReporter progressReporter) - Deprecated. Since 2.3.0.
static org.apache.hadoop.fs.Path CommonFSUtils.getWALRegionDir(org.apache.hadoop.conf.Configuration conf, TableName tableName, String encodedRegionName) - Returns the WAL region directory based on the given table name and region name
static org.apache.hadoop.fs.Path CommonFSUtils.getWALTableDir(org.apache.hadoop.conf.Configuration conf, TableName tableName) - Returns the table directory under the WALRootDir for the specified table name
static org.apache.hadoop.fs.Path CommonFSUtils.getWrongWALRegionDir(org.apache.hadoop.conf.Configuration conf, TableName tableName, String encodedRegionName) - Deprecated. For compatibility, will be removed in 4.0.0.
void HBaseFsck.includeTable(TableName table) - Deprecated.
static boolean ServerRegionReplicaUtil.isMetaRegionReplicaReplicationEnabled(org.apache.hadoop.conf.Configuration conf, TableName tn) - Returns true if hbase:meta Region Read Replica is enabled.
static boolean ServerRegionReplicaUtil.isRegionReplicaReplicationEnabled(org.apache.hadoop.conf.Configuration conf, TableName tn)
(package private) boolean HBaseFsck.isTableDisabled(TableName tableName) - Deprecated. Check if the specified region's table is disabled.
(package private) boolean HBaseFsck.isTableIncluded(TableName table) - Deprecated. Only check/fix tables specified by the list; an empty list means all tables are included.
Removes the table descriptor from the local cache and returns it.
(package private) static void RegionSplitter.rollingSplit(TableName tableName, RegionSplitter.SplitAlgorithm splitAlgo, org.apache.hadoop.conf.Configuration conf)
(package private) static LinkedList<Pair<byte[],byte[]>> RegionSplitter.splitScan(LinkedList<Pair<byte[], byte[]>> regionList, Connection connection, TableName tableName, RegionSplitter.SplitAlgorithm splitAlgo)

Method parameters in org.apache.hadoop.hbase.util with type arguments of type TableName

(package private) TableDescriptor[] HBaseFsck.getTableDescriptors(List<TableName> tableNames) - Deprecated.
private void HBaseFsck.printTableSummary(SortedMap<TableName, HbckTableInfo> tablesInfo) - Deprecated. Prints a summary of all tables found on the system.

Constructors in org.apache.hadoop.hbase.util with parameters of type TableName

(package private) HbckTableInfo(TableName name, HBaseFsck hbck)
MultiThreadedAction(LoadTestDataGenerator dataGen, org.apache.hadoop.conf.Configuration conf, TableName tableName, String actionLetter)
MultiThreadedReader(LoadTestDataGenerator dataGen, org.apache.hadoop.conf.Configuration conf, TableName tableName, double verifyPercent)
MultiThreadedReaderWithACL(LoadTestDataGenerator dataGen, org.apache.hadoop.conf.Configuration conf, TableName tableName, double verifyPercent, String userNames)
MultiThreadedUpdater(LoadTestDataGenerator dataGen, org.apache.hadoop.conf.Configuration conf, TableName tableName, double updatePercent)
MultiThreadedUpdaterWithACL(LoadTestDataGenerator dataGen, org.apache.hadoop.conf.Configuration conf, TableName tableName, double updatePercent, User userOwner, String userNames)
MultiThreadedWriter(LoadTestDataGenerator dataGen, org.apache.hadoop.conf.Configuration conf, TableName tableName)
MultiThreadedWriterBase(LoadTestDataGenerator dataGen, org.apache.hadoop.conf.Configuration conf, TableName tableName, String actionLetter)
MultiThreadedWriterWithACL(LoadTestDataGenerator dataGen, org.apache.hadoop.conf.Configuration conf, TableName tableName, User userOwner)
-
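Many of the utilities above simply translate a TableName into locations under the HBase root directory. The following minimal sketch shows that flow, assuming an hbase-site.xml on the classpath, a hypothetical table named "demo_table", and CommonFSUtils.getRootDir (a helper not listed on this page) for obtaining the root directory; it is an illustration, not the canonical usage.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.TableDescriptor;
  import org.apache.hadoop.hbase.util.CommonFSUtils;
  import org.apache.hadoop.hbase.util.FSTableDescriptors;
  import org.apache.hadoop.hbase.util.HFileArchiveUtil;

  public class TableLayoutPathsSketch {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      TableName tableName = TableName.valueOf("demo_table"); // hypothetical table

      // Data directory of the table under the HBase root directory.
      Path rootDir = CommonFSUtils.getRootDir(conf);          // assumed helper, not listed above
      Path tableDir = CommonFSUtils.getTableDir(rootDir, tableName);

      // Where archived store files for the table are kept.
      Path archiveDir = HFileArchiveUtil.getTableArchivePath(conf, tableName);

      // Latest table descriptor read straight from the filesystem, bypassing the cache.
      FileSystem fs = FileSystem.get(conf);
      TableDescriptor descriptor =
          FSTableDescriptors.getTableDescriptorFromFs(fs, rootDir, tableName);

      // A table directory can be mapped back to its TableName.
      TableName roundTripped = CommonFSUtils.getTableName(tableDir);

      System.out.println("table dir:   " + tableDir);
      System.out.println("archive dir: " + archiveDir);
      System.out.println("descriptor:  " + descriptor); // may be null or throw if no .tableinfo exists on disk
      System.out.println("round trip:  " + roundTripped);
    }
  }

The same TableName value drives both the data and archive layouts, which is why these helpers take a TableName rather than a raw string.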
Uses of TableName in org.apache.hadoop.hbase.util.compaction
Fields in org.apache.hadoop.hbase.util.compaction declared as TableName

Constructors in org.apache.hadoop.hbase.util.compaction with parameters of type TableName

MajorCompactor(org.apache.hadoop.conf.Configuration conf, TableName tableName, Set<String> storesToCompact, int concurrency, long timestamp, long sleepForMs)
-
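The MajorCompactor constructor above takes the target TableName together with the stores to compact and throttling parameters. A rough sketch of wiring it up follows; the table name, column family, and parameter values are illustrative, a reachable cluster is assumed, and actually driving the compaction requires further calls on the instance that are not listed on this page.

  import java.util.Collections;
  import java.util.Set;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.util.compaction.MajorCompactor;

  public class MajorCompactorSketch {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      TableName tableName = TableName.valueOf("demo_table");     // hypothetical table
      Set<String> storesToCompact = Collections.singleton("cf"); // hypothetical column family
      int concurrency = 4;                         // regions compacted in parallel
      long timestamp = System.currentTimeMillis(); // threshold timestamp (meaning inferred; check the class Javadoc)
      long sleepForMs = 30_000L;                   // pause between progress checks (meaning inferred)

      // Constructor signature as listed above.
      MajorCompactor compactor =
          new MajorCompactor(conf, tableName, storesToCompact, concurrency, timestamp, sleepForMs);
      System.out.println("Configured major compaction for " + tableName);
    }
  }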
Uses of TableName in org.apache.hadoop.hbase.wal
Fields in org.apache.hadoop.hbase.wal declared as TableName

private TableName WALKeyImpl.tablename
(package private) final TableName EntryBuffers.RegionEntryBuffer.tableName

Methods in org.apache.hadoop.hbase.wal that return TableName

EntryBuffers.RegionEntryBuffer.getTableName()
WALKey.getTableName() - Returns table name
WALKeyImpl.getTableName() - Returns table name

Methods in org.apache.hadoop.hbase.wal with parameters of type TableName

AbstractRecoveredEditsOutputSink.createRecoveredEditsWriter(TableName tableName, byte[] region, long seqId) - Returns a writer that wraps a WALProvider.Writer and its Path.
private StoreFileWriter BoundedRecoveredHFilesOutputSink.createRecoveredHFileWriter(TableName tableName, String regionName, long seqId, String familyName, boolean isMetaTable)
RecoveredEditsOutputSink.getRecoveredEditsWriter(TableName tableName, byte[] region, long seqId) - Get a writer and path for a log starting at the given entry.
(package private) static org.apache.hadoop.fs.Path WALSplitUtil.getRegionSplitEditsPath(TableName tableName, byte[] encodedRegionName, long seqId, String fileNameBeingSplit, String tmpDirName, org.apache.hadoop.conf.Configuration conf, String workerNameComponent) - Path to a file under RECOVERED_EDITS_DIR directory of the region found in logEntry named for the sequenceid in the passed logEntry: e.g.
protected void WALKeyImpl.init(byte[] encodedRegionName, TableName tablename, long logSeqNum, long now, List<UUID> clusterIds, long nonceGroup, long nonce, MultiVersionConcurrencyControl mvcc, NavigableMap<byte[], Integer> replicationScope, Map<String, byte[]> extendedAttributes)
(package private) void WALKeyImpl.internTableName(TableName tablename) - Drop this instance's tablename byte array and instead hold a reference to the provided tablename.
private boolean WALSplitter.isRegionDirPresentUnderRoot(TableName tn, String region)
(package private) static org.apache.hadoop.fs.Path WALSplitUtil.tryCreateRecoveredHFilesDir(org.apache.hadoop.fs.FileSystem rootFS, org.apache.hadoop.conf.Configuration conf, TableName tableName, String encodedRegionName, String familyName) - Return path to recovered.hfiles directory of the region's column family: e.g.

Constructors in org.apache.hadoop.hbase.wal with parameters of type TableName

(package private) RegionEntryBuffer(TableName tableName, byte[] region)
WALKeyImpl(byte[] encodedRegionName, TableName tablename, long now)
WALKeyImpl(byte[] encodedRegionName, TableName tablename, long logSeqNum, long nonceGroup, long nonce, MultiVersionConcurrencyControl mvcc) - Create the log key for writing to somewhere.
WALKeyImpl(byte[] encodedRegionName, TableName tablename, long logSeqNum, long now, List<UUID> clusterIds, long nonceGroup, long nonce, MultiVersionConcurrencyControl mvcc) - Create the log key for writing to somewhere.
WALKeyImpl(byte[] encodedRegionName, TableName tablename, long logSeqNum, long now, List<UUID> clusterIds, long nonceGroup, long nonce, MultiVersionConcurrencyControl mvcc, NavigableMap<byte[], Integer> replicationScope) - Create the log key for writing to somewhere.
WALKeyImpl(byte[] encodedRegionName, TableName tablename, long logSeqNum, long now, UUID clusterId)
WALKeyImpl(byte[] encodedRegionName, TableName tablename, long logSeqNum, long now, UUID clusterId, MultiVersionConcurrencyControl mvcc)
WALKeyImpl(byte[] encodedRegionName, TableName tablename, long now, List<UUID> clusterIds, long nonceGroup, long nonce, MultiVersionConcurrencyControl mvcc) - Create the log key for writing to somewhere.
WALKeyImpl(byte[] encodedRegionName, TableName tablename, long now, List<UUID> clusterIds, long nonceGroup, long nonce, MultiVersionConcurrencyControl mvcc, NavigableMap<byte[], Integer> replicationScope) - Create the log key for writing to somewhere.
WALKeyImpl(byte[] encodedRegionName, TableName tablename, long now, List<UUID> clusterIds, long nonceGroup, long nonce, MultiVersionConcurrencyControl mvcc, NavigableMap<byte[], Integer> replicationScope, Map<String, byte[]> extendedAttributes)
WALKeyImpl(byte[] encodedRegionName, TableName tablename, long now, NavigableMap<byte[], Integer> replicationScope)
WALKeyImpl(byte[] encodedRegionName, TableName tablename, long now, MultiVersionConcurrencyControl mvcc)
WALKeyImpl(byte[] encodedRegionName, TableName tablename, long now, MultiVersionConcurrencyControl mvcc, NavigableMap<byte[], Integer> replicationScope)
WALKeyImpl(byte[] encodedRegionName, TableName tablename, long now, MultiVersionConcurrencyControl mvcc, NavigableMap<byte[], Integer> replicationScope, Map<String, byte[]> extendedAttributes)
-
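Every WAL entry carries its TableName in the key, which is how log splitting and replication route edits back to the right table. A minimal sketch using the simplest constructor listed above; the region and table names are placeholders.

  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.util.Bytes;
  import org.apache.hadoop.hbase.wal.WALKeyImpl;

  public class WalKeySketch {
    public static void main(String[] args) {
      TableName tableName = TableName.valueOf("demo_table");         // hypothetical table
      byte[] encodedRegionName = Bytes.toBytes("0123456789abcdef");  // placeholder encoded region name

      // Region, table, and write time: the simplest WALKeyImpl constructor listed above.
      WALKeyImpl key = new WALKeyImpl(encodedRegionName, tableName, System.currentTimeMillis());

      // getTableName() is what readers of the log use to route the entry.
      System.out.println("WAL entry belongs to " + key.getTableName());
    }
  }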
Uses of TableName in org.apache.hbase.archetypes.exemplars.client
Fields in org.apache.hbase.archetypes.exemplars.client declared as TableName

(package private) static final TableName HelloHBase.MY_TABLE_NAME
-
Uses of TableName in org.apache.hbase.archetypes.exemplars.shaded_client
Fields in org.apache.hbase.archetypes.exemplars.shaded_client declared as TableName

(package private) static final TableName HelloHBase.MY_TABLE_NAME
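Both exemplar HelloHBase classes pin the table they operate on in a package-private constant. A sketch of the same pattern; the literal table name used here is an assumption, since the value in the actual exemplars is not shown on this page.

  import org.apache.hadoop.hbase.TableName;

  public final class HelloHBaseStyleConstants {
    // Same shape as HelloHBase.MY_TABLE_NAME: a package-private, static, final TableName.
    static final TableName MY_TABLE_NAME = TableName.valueOf("myFirstTable"); // hypothetical name

    private HelloHBaseStyleConstants() {
    }
  }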