Uses of Class org.apache.hadoop.hbase.TableName
Packages that use TableName

  Provides HBase Client
  Provides implementations of HFile and HFile BlockCache.
  Tools to help define network clients and servers.
  Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods.
  Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods.
  The Region Normalizer subsystem is responsible for coaxing all the regions in a table toward a "normal" size, according to their storefile size.
  Multi Cluster Replication
  HBase REST
  Provides an HBase Thrift service.
  Provides an HBase Thrift service.
  This package provides fully-functional exemplar Java code demonstrating simple usage of the hbase-client API, for incorporation into a Maven archetype with hbase-client dependency.
  This package provides fully-functional exemplar Java code demonstrating simple usage of the hbase-client API, for incorporation into a Maven archetype with hbase-shaded-client dependency.
Uses of TableName in org.apache.hadoop.hbase
Fields in org.apache.hadoop.hbase declared as TableNameModifier and TypeFieldDescriptionstatic final TableNameHConstants.ENSEMBLE_TABLE_NAMEThe name of the ensemble tablestatic final TableNameTableName.META_TABLE_NAMEThe hbase:meta table's name.static final TableNameTableName.NAMESPACE_TABLE_NAMEThe Namespace table's name.static final TableNameTableName.OLD_META_TABLE_NAMETableName for old .META.static final TableNameTableName.OLD_ROOT_TABLE_NAMETableName for old -ROOT- table.private static final TableNameRSGroupTableAccessor.RSGROUP_TABLE_NAMEprivate TableNameHRegionInfo.tableNameDeprecated.private TableNameMetaTableAccessor.TableVisitorBase.tableNameFields in org.apache.hadoop.hbase with type parameters of type TableNameModifier and TypeFieldDescriptionTableName.tableCacheprivate final Map<TableName,RegionStatesCount> ClusterMetricsBuilder.ClusterMetricsImpl.tableRegionStatesCountprivate Map<TableName,RegionStatesCount> ClusterMetricsBuilder.tableRegionStatesCountMethods in org.apache.hadoop.hbase that return TableNameModifier and TypeMethodDescriptionprivate static TableNameTableName.createTableNameIfNecessary(ByteBuffer bns, ByteBuffer qns) Check that the object does not exist already.private static TableNameTableName.getADummyTableName(String qualifier) It is used to create table names for old META, and ROOT table.HRegionInfo.getTable()Deprecated.Get current table name of the regionstatic TableNameHRegionInfo.getTable(byte[] regionName) Deprecated.HTableDescriptor.getTableName()Deprecated.Get the name of the tablestatic TableNameTableName.valueOf(byte[] fullName) Construct a TableNamestatic TableNameTableName.valueOf(byte[] namespace, byte[] qualifier) static TableNameTableName.valueOf(byte[] fullName, int offset, int length) Construct a TableNamestatic TableNameConstruct a TableNamestatic TableNamestatic TableNameTableName.valueOf(ByteBuffer fullname) Construct a TableNamestatic TableNameTableName.valueOf(ByteBuffer namespace, ByteBuffer qualifier) Methods in org.apache.hadoop.hbase that return types with arguments of type TableNameModifier and TypeMethodDescriptionClusterMetrics.getTableRegionStatesCount()Provide region states count for given table.ClusterMetricsBuilder.ClusterMetricsImpl.getTableRegionStatesCount()ClusterStatus.getTableRegionStatesCount()Deprecated.static Map<TableName,TableState> MetaTableAccessor.getTableStates(Connection conn) Fetch table states from META tableMethods in org.apache.hadoop.hbase with parameters of type TableNameModifier and TypeMethodDescriptionintstatic byte[]HRegionInfo.createRegionName(TableName tableName, byte[] startKey, byte[] id, boolean newFormat) Deprecated.As of release 2.0.0, this will be removed in HBase 3.0.0 UseRegionInfo.createRegionName(TableName, byte[], byte[], boolean).static byte[]HRegionInfo.createRegionName(TableName tableName, byte[] startKey, byte[] id, int replicaId, boolean newFormat) Deprecated.As of release 2.0.0, this will be removed in HBase 3.0.0 UseRegionInfo.createRegionName(TableName, byte[], byte[], int, boolean).static byte[]HRegionInfo.createRegionName(TableName tableName, byte[] startKey, long regionId, boolean newFormat) Deprecated.As of release 2.0.0, this will be removed in HBase 3.0.0 UseRegionInfo.createRegionName(TableName, byte[], long, boolean).static byte[]HRegionInfo.createRegionName(TableName tableName, byte[] startKey, long regionId, int replicaId, boolean newFormat) Deprecated.As of release 2.0.0, this will be removed in HBase 3.0.0 UseRegionInfo.createRegionName(TableName, byte[], long, int, 
boolean).static byte[]HRegionInfo.createRegionName(TableName tableName, byte[] startKey, String id, boolean newFormat) Deprecated.As of release 2.0.0, this will be removed in HBase 3.0.0 UseRegionInfo.createRegionName(TableName, byte[], String, boolean).static voidMetaTableAccessor.deleteTableState(Connection connection, TableName table) Remove state for table from metadefault booleanTest whether a given table exists, i.e, has a table descriptor.Returns TableDescriptor for tablenameSharedConnection.getBufferedMutator(TableName tableName) static CellComparatorCellComparatorImpl.getCellComparator(TableName tableName) Utility method that makes a guess at comparator to use based off passed tableName.private static RegionInfoMetaTableAccessor.getClosestRegionInfo(Connection connection, TableName tableName, byte[] row) Returns Get closest metatable region row to passedrowstatic CellComparatorInnerStoreCellComparator.getInnerStoreCellComparator(TableName tableName) Utility method that makes a guess at comparator to use based off passed tableName.default longClusterMetrics.getLastMajorCompactionTimestamp(TableName table) longClusterStatus.getLastMajorCompactionTsForTable(TableName table) Deprecated.As of release 2.0.0, this will be removed in HBase 3.0.0 UseClusterMetrics.getLastMajorCompactionTimestamp(TableName)instead.SharedConnection.getRegionLocator(TableName tableName) MetaTableAccessor.getReplicationBarrierResult(Connection conn, TableName tableName, byte[] row, byte[] encodedRegionName) static ScanMetaTableAccessor.getScanForTableName(org.apache.hadoop.conf.Configuration conf, TableName tableName) This method creates a Scan object that will only scan catalog rows that belong to the specified table.SharedConnection.getTableBuilder(TableName tableName, ExecutorService pool) MetaTableAccessor.getTableEncodedRegionNameAndLastBarrier(Connection conn, TableName tableName) MetaTableAccessor.getTableEncodedRegionNamesForSerialReplication(Connection conn, TableName tableName) static CompletableFuture<List<HRegionLocation>>AsyncMetaTableAccessor.getTableHRegionLocations(AsyncTable<AdvancedScanResultConsumer> metaTable, TableName tableName) Used to get all region locations for the specific tablestatic List<RegionInfo>MetaTableAccessor.getTableRegions(Connection connection, TableName tableName) Gets all of the regions of the specified table.static List<RegionInfo>MetaTableAccessor.getTableRegions(Connection connection, TableName tableName, boolean excludeOfflinedSplitParents) Gets all of the regions of the specified table.private static CompletableFuture<List<Pair<RegionInfo,ServerName>>> AsyncMetaTableAccessor.getTableRegionsAndLocations(AsyncTable<AdvancedScanResultConsumer> metaTable, TableName tableName, boolean excludeOfflinedSplitParents) Used to get table regions' info and server.static List<Pair<RegionInfo,ServerName>> MetaTableAccessor.getTableRegionsAndLocations(Connection connection, TableName tableName) Do not use this method to get meta table regions, use methods in MetaTableLocator instead.static List<Pair<RegionInfo,ServerName>> MetaTableAccessor.getTableRegionsAndLocations(Connection connection, TableName tableName, boolean excludeOfflinedSplitParents) Do not use this method to get meta table regions, use methods in MetaTableLocator instead.private static byte[]AsyncMetaTableAccessor.getTableStartRowForMeta(TableName tableName, MetaTableAccessor.QueryType type) static byte[]MetaTableAccessor.getTableStartRowForMeta(TableName tableName, MetaTableAccessor.QueryType type) Returns start 
row for scanning META according to query typestatic CompletableFuture<Optional<TableState>>AsyncMetaTableAccessor.getTableState(AsyncTable<?> metaTable, TableName tableName) static TableStateMetaTableAccessor.getTableState(Connection conn, TableName tableName) Fetch table state for given table from META tableprivate static byte[]AsyncMetaTableAccessor.getTableStopRowForMeta(TableName tableName, MetaTableAccessor.QueryType type) static byte[]MetaTableAccessor.getTableStopRowForMeta(TableName tableName, MetaTableAccessor.QueryType type) Returns stop row for scanning META according to query typestatic booleanTableName.isMetaTableName(TableName tn) Returns True iftnis the hbase:meta table name.Returns Instance of table descriptor or null if none found.private static CompletableFuture<Void>AsyncMetaTableAccessor.scanMeta(AsyncTable<AdvancedScanResultConsumer> metaTable, TableName tableName, MetaTableAccessor.QueryType type, MetaTableAccessor.Visitor visitor) Performs a scan of META table for given table.static voidMetaTableAccessor.scanMeta(Connection connection, MetaTableAccessor.Visitor visitor, TableName tableName, byte[] row, int rowLimit) Performs a scan of META table for given table starting from given row.private static voidMetaTableAccessor.scanMeta(Connection connection, TableName table, MetaTableAccessor.QueryType type, int maxRows, MetaTableAccessor.Visitor visitor, CatalogReplicaMode metaReplicaMode) static voidMetaTableAccessor.scanMetaForTableRegions(Connection connection, MetaTableAccessor.Visitor visitor, TableName tableName) static voidMetaTableAccessor.scanMetaForTableRegions(Connection connection, MetaTableAccessor.Visitor visitor, TableName tableName, CatalogReplicaMode metaReplicaMode) static CompletableFuture<Boolean>AsyncMetaTableAccessor.tableExists(AsyncTable<?> metaTable, TableName tableName) static voidMetaTableAccessor.updateTableState(Connection conn, TableName tableName, TableState.State actual) Updates state in META Do not use.Method parameters in org.apache.hadoop.hbase with type arguments of type TableNameModifier and TypeMethodDescriptionClusterMetricsBuilder.setTableRegionStatesCount(Map<TableName, RegionStatesCount> tableRegionStatesCount) Constructors in org.apache.hadoop.hbase with parameters of type TableNameModifierConstructorDescriptionConcurrentTableModificationException(TableName tableName) privateHRegionInfo(long regionId, TableName tableName) Deprecated.Private constructor used constructing HRegionInfo for the first meta regionsHRegionInfo(long regionId, TableName tableName, int replicaId) Deprecated.HRegionInfo(TableName tableName) Deprecated.HRegionInfo(TableName tableName, byte[] startKey, byte[] endKey) Deprecated.Construct HRegionInfo with explicit parametersHRegionInfo(TableName tableName, byte[] startKey, byte[] endKey, boolean split) Deprecated.Construct HRegionInfo with explicit parametersHRegionInfo(TableName tableName, byte[] startKey, byte[] endKey, boolean split, long regionId) Deprecated.Construct HRegionInfo with explicit parametersHRegionInfo(TableName tableName, byte[] startKey, byte[] endKey, boolean split, long regionId, int replicaId) Deprecated.Construct HRegionInfo with explicit parametersHTableDescriptor(TableName name) Deprecated.Construct a table descriptor specifying a TableName objectHTableDescriptor(TableName name, HTableDescriptor desc) Deprecated.Construct a table descriptor by cloning the descriptor passed as a parameter but using a different table name.TableExistsException(TableName tableName) 
TableNotDisabledException(TableName tableName) TableNotEnabledException(TableName tableName) TableNotFoundException(TableName tableName) TableVisitorBase(TableName tableName)  - 
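Most entries above either construct a TableName or consume one. A minimal sketch of building and inspecting table names with the valueOf overloads and constants listed here; the "retail" namespace and the table qualifiers are invented for illustration:

import org.apache.hadoop.hbase.TableName;

public class TableNameExample {
  public static void main(String[] args) {
    // Fully qualified "namespace:qualifier" form.
    TableName orders = TableName.valueOf("retail:orders");       // hypothetical table
    // Namespace and qualifier passed separately.
    TableName logs = TableName.valueOf("retail", "access_logs"); // hypothetical table

    System.out.println(orders.getNamespaceAsString());  // retail
    System.out.println(orders.getQualifierAsString());  // orders

    // Constants and helpers declared in org.apache.hadoop.hbase.
    System.out.println(TableName.META_TABLE_NAME);       // hbase:meta
    System.out.println(TableName.isMetaTableName(logs)); // false
  }
}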
Uses of TableName in org.apache.hadoop.hbase.backup
Fields in org.apache.hadoop.hbase.backup declared as TableNameModifier and TypeFieldDescriptionprivate TableName[]RestoreRequest.fromTablesprivate TableNameBackupTableInfo.tableprivate TableName[]RestoreRequest.toTablesFields in org.apache.hadoop.hbase.backup with type parameters of type TableNameModifier and TypeFieldDescriptionprivate Map<TableName,BackupTableInfo> BackupInfo.backupTableInfoMapBackup status map for all tablesBackupHFileCleaner.fullyBackedUpTablesBackupInfo.incrTimestampMapPrevious Region server log timestamps for table set after distributed log roll key - table name, value - map of RegionServer hostname -> last log rolled timestampBackupRequest.tableListBackupInfo.tableSetTimestampMapNew region server log timestamps for table set after distributed log roll key - table name, value - map of RegionServer hostname -> last log rolled timestampMethods in org.apache.hadoop.hbase.backup that return TableNameModifier and TypeMethodDescriptionRestoreRequest.getFromTables()BackupTableInfo.getTable()BackupInfo.getTableBySnapshot(String snapshotName) RestoreRequest.getToTables()Methods in org.apache.hadoop.hbase.backup that return types with arguments of type TableNameModifier and TypeMethodDescriptionBackupInfo.getIncrTimestampMap()Get new region server log timestamps after distributed log rollBackupRequest.getTableList()BackupInfo.getTableNames()BackupInfo.getTables()BackupInfo.getTableSetTimestampMap()BackupInfo.getTableSetTimestampMap(Map<String, org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupInfo.RSTimestampMap> map) private static Map<TableName,BackupTableInfo> BackupInfo.toMap(List<org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupTableInfo> list) Methods in org.apache.hadoop.hbase.backup with parameters of type TableNameModifier and TypeMethodDescriptionvoidvoidBackupAdmin.addToBackupSet(String name, TableName[] tables) Add tables to backup set commandstatic voidHBackupFileSystem.checkImageManifestExist(HashMap<TableName, BackupManifest> backupManifestMap, TableName[] tableArray, org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.Path backupRootPath, String backupId) Check whether the backup image path and there is manifest file in the path.BackupInfo.getBackupTableInfo(TableName table) BackupInfo.getSnapshotName(TableName table) static StringHBackupFileSystem.getTableBackupDataDir(String backupRootDir, String backupId, TableName tableName) BackupInfo.getTableBackupDir(TableName tableName) static StringHBackupFileSystem.getTableBackupDir(String backupRootDir, String backupId, TableName tableName) Given the backup root dir, backup id and the table name, return the backup image location, which is also where the backup manifest file is.static org.apache.hadoop.fs.PathHBackupFileSystem.getTableBackupPath(TableName tableName, org.apache.hadoop.fs.Path backupRootPath, String backupId) Given the backup root dir, backup id and the table name, return the backup image location, which is also where the backup manifest file is.voidBackupAdmin.removeFromBackupSet(String name, TableName[] tables) Remove tables from backup setvoidRestoreJob.run(org.apache.hadoop.fs.Path[] dirPaths, TableName[] fromTables, org.apache.hadoop.fs.Path restoreRootDir, TableName[] toTables, boolean fullBackupRestore) Run restore operationprivate RestoreRequestRestoreRequest.setFromTables(TableName[] fromTables) voidBackupInfo.setSnapshotName(TableName table, String snapshotName) private RestoreRequestRestoreRequest.setToTables(TableName[] toTables) 
RestoreRequest.Builder.withFromTables(TableName[] fromTables) RestoreRequest.Builder.withToTables(TableName[] toTables) Method parameters in org.apache.hadoop.hbase.backup with type arguments of type TableNameModifier and TypeMethodDescriptionstatic voidHBackupFileSystem.checkImageManifestExist(HashMap<TableName, BackupManifest> backupManifestMap, TableName[] tableArray, org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.Path backupRootPath, String backupId) Check whether the backup image path and there is manifest file in the path.BackupHFileCleaner.loadHFileRefs(List<TableName> tableList) voidBackupInfo.setBackupTableInfoMap(Map<TableName, BackupTableInfo> backupTableInfoMap) voidSet the new region server log timestamps after distributed log rollprivate BackupRequestBackupRequest.setTableList(List<TableName> tableList) voidvoidBackupRequest.Builder.withTableList(List<TableName> tables) Constructors in org.apache.hadoop.hbase.backup with parameters of type TableNameModifierConstructorDescriptionBackupInfo(String backupId, BackupType type, TableName[] tables, String targetRootDir) BackupTableInfo(TableName table, String targetRootDir, String backupId)  - 
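The backup API passes tables around as TableName[] arrays. Below is a rough sketch of adding tables to a backup set with BackupAdmin.addToBackupSet, listed above; it assumes a BackupAdmin is obtained from BackupAdminImpl(Connection) as in the HBase backup documentation, and the set name and table names are placeholders:

import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.backup.BackupAdmin;
import org.apache.hadoop.hbase.backup.impl.BackupAdminImpl;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class BackupSetSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         BackupAdmin backupAdmin = new BackupAdminImpl(conn)) { // assumed construction path
      // "nightly" is an illustrative backup set name.
      TableName[] tables = { TableName.valueOf("ns1:t1"), TableName.valueOf("ns1:t2") };
      backupAdmin.addToBackupSet("nightly", tables);
      System.out.println("Added " + Arrays.toString(tables) + " to backup set nightly");
    }
  }
}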
Uses of TableName in org.apache.hadoop.hbase.backup.impl
Fields in org.apache.hadoop.hbase.backup.impl declared as TableNameModifier and TypeFieldDescriptionprivate TableNameBackupSystemTable.bulkLoadTableNameBackup System table name for bulk loaded files.private TableName[]RestoreTablesClient.sTableArrayprivate TableNameBackupSystemTable.tableNameBackup system table (main) nameprivate TableName[]RestoreTablesClient.tTableArrayFields in org.apache.hadoop.hbase.backup.impl with type parameters of type TableNameModifier and TypeFieldDescriptionBackupManifest.BackupImage.incrTimeRangesBackupManifest.BackupImage.tableListTableBackupClient.tableListMethods in org.apache.hadoop.hbase.backup.impl that return TableNameModifier and TypeMethodDescriptionprivate TableNameBackupCommands.HistoryCommand.getTableName()static TableNameBackupSystemTable.getTableName(org.apache.hadoop.conf.Configuration conf) static TableNameBackupSystemTable.getTableNameForBulkLoadedData(org.apache.hadoop.conf.Configuration conf) private TableName[]BackupCommands.BackupSetCommand.toTableNames(String[] tables) Methods in org.apache.hadoop.hbase.backup.impl that return types with arguments of type TableNameModifier and TypeMethodDescriptionBackupSystemTable.describeBackupSet(String name) Get backup set description (list of tables)BackupAdminImpl.excludeNonExistingTables(List<TableName> tableList, List<TableName> nonExistingTableList) BackupSystemTable.getBackupHistoryForTableSet(Set<TableName> set, String backupRoot) BackupManager.getIncrementalBackupTableSet()Return the current tables covered by incremental backup.BackupSystemTable.getIncrementalBackupTableSet(String backupRoot) Return the current tables covered by incremental backup.BackupManifest.BackupImage.getIncrTimeRanges()BackupManifest.getIncrTimestampMap()BackupManifest.getTableList()Get the table set of this image.BackupManifest.BackupImage.getTableNames()BackupSystemTable.getTablesForBackupType(BackupType type) BackupManifest.BackupImage.loadIncrementalTimestampMap(org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupImage proto) BackupManager.readBulkloadRows(List<TableName> tableList) BackupSystemTable.readBulkloadRows(List<TableName> tableList) BackupManager.readLogTimestampMap()Read the timestamp for each region server log after the last successful backup.BackupSystemTable.readLogTimestampMap(String backupRoot) Read the timestamp for each region server log after the last successful backup.Methods in org.apache.hadoop.hbase.backup.impl with parameters of type TableNameModifier and TypeMethodDescriptionvoidBackupAdminImpl.addToBackupSet(String name, TableName[] tables) private voidRestoreTablesClient.checkTargetTables(TableName[] tTableArray, boolean isOverwrite) Validate target tables.private voidBackupAdminImpl.cleanupBackupDir(BackupInfo backupInfo, TableName table, org.apache.hadoop.conf.Configuration conf) Clean up the data at target directory(package private) static PutBackupSystemTable.createPutForBulkLoadedFile(TableName tn, byte[] fam, String p, String backupId, long ts, int idx) BackupSystemTable.createPutForCommittedBulkload(TableName table, byte[] region, Map<byte[], List<org.apache.hadoop.fs.Path>> finalPaths) BackupSystemTable.createPutForPreparedBulkload(TableName table, byte[] region, byte[] family, List<Pair<org.apache.hadoop.fs.Path, org.apache.hadoop.fs.Path>> pairs) private PutBackupSystemTable.createPutForWriteRegionServerLogTimestamp(TableName table, byte[] smap, String backupRoot) Creates Put to write RS last roll log timestamp map(package private) static 
ScanBackupSystemTable.createScanForOrigBulkLoadedFiles(TableName table) private List<BackupInfo>BackupAdminImpl.getAffectedBackupSessions(BackupInfo backupInfo, TableName tn, BackupSystemTable table) BackupManifest.getAllDependentListByTable(TableName table) Get the full dependent image list in the whole dependency scope for a specific table of this backup in time order from old to new.BackupManager.getAncestors(BackupInfo backupInfo, TableName table) Get the direct ancestors of this backup for one table involved.BackupSystemTable.getBackupHistoryForTable(TableName name) Get history for a tableprotected org.apache.hadoop.fs.PathIncrementalTableBackupClient.getBulkOutputDirForTable(TableName table) BackupManifest.getDependentListByTable(TableName table) Get the dependent image list for a specific table of this backup in time order from old to new if want to restore to this backup image level.protected static intbooleanprivate booleanBackupAdminImpl.isLastBackupSession(BackupSystemTable table, TableName tn, long startTime) voidBackupAdminImpl.removeFromBackupSet(String name, TableName[] tables) private voidBackupAdminImpl.removeTableFromBackupImage(BackupInfo info, TableName tn, BackupSystemTable sysTable) private voidRestoreTablesClient.restore(HashMap<TableName, BackupManifest> backupManifestMap, TableName[] sTableArray, TableName[] tTableArray, boolean isOverwrite) Restore operation.private voidRestoreTablesClient.restoreImages(BackupManifest.BackupImage[] images, TableName sTable, TableName tTable, boolean truncateIfExists) Restore operation handle each backupImage in array.protected voidFullTableBackupClient.snapshotTable(Admin admin, TableName tableName, String snapshotName) protected booleanIncrementalTableBackupClient.tableExists(TableName table, Connection conn) private String[]BackupAdminImpl.toStringArray(TableName[] list) private org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.TableServerTimestampBackupSystemTable.toTableServerTimestampProto(TableName table, Map<String, Long> map) private voidBackupSystemTable.waitForSystemTable(Admin admin, TableName tableName) voidBackupSystemTable.writeFilesForBulkLoadPreCommit(TableName tabName, byte[] region, byte[] family, List<Pair<org.apache.hadoop.fs.Path, org.apache.hadoop.fs.Path>> pairs) voidBackupSystemTable.writePathsPostBulkLoad(TableName tabName, byte[] region, Map<byte[], List<org.apache.hadoop.fs.Path>> finalPaths) Method parameters in org.apache.hadoop.hbase.backup.impl with type arguments of type TableNameModifier and TypeMethodDescriptionvoidBackupManager.addIncrementalBackupTableSet(Set<TableName> tables) Adds set of tables to overall incremental backup table setvoidBackupSystemTable.addIncrementalBackupTableSet(Set<TableName> tables, String backupRoot) Add tables to global incremental backup setBackupManager.createBackupInfo(String backupId, BackupType type, List<TableName> tableList, String targetRootDir, int workers, long bandwidth) Creates a backup info based on input backup request.BackupSystemTable.createDeleteForOrigBulkLoad(List<TableName> lst) private PutBackupSystemTable.createPutForIncrBackupTableSet(Set<TableName> tables, String backupRoot) Creates Put to store incremental backup table setprivate PutBackupSystemTable.createPutForUpdateTablesForMerge(List<TableName> tables) BackupAdminImpl.excludeNonExistingTables(List<TableName> tableList, List<TableName> nonExistingTableList) private voidBackupAdminImpl.finalizeDelete(Map<String, HashSet<TableName>> tablesMap, BackupSystemTable table) Updates 
incremental backup set for every backupRootBackupSystemTable.getBackupHistoryForTableSet(Set<TableName> set, String backupRoot) protected static intIncrementalTableBackupClient.handleBulkLoad(List<TableName> sTableList) BackupSystemTable.readBulkLoadedFiles(String backupId, List<TableName> sTableList) BackupManager.readBulkloadRows(List<TableName> tableList) BackupSystemTable.readBulkloadRows(List<TableName> tableList) private voidRestoreTablesClient.restore(HashMap<TableName, BackupManifest> backupManifestMap, TableName[] sTableArray, TableName[] tTableArray, boolean isOverwrite) Restore operation.private voidvoidSet the incremental timestamp map directly.private voidBackupManifest.BackupImage.setTableList(List<TableName> tableList) voidBackupSystemTable.updateProcessedTablesForMerge(List<TableName> tables) (package private) BackupManifest.BackupImage.BuilderBackupManifest.BackupImage.Builder.withTableList(List<TableName> tableList) voidBackupSystemTable.writeBulkLoadedFiles(List<TableName> sTableList, Map<byte[], List<org.apache.hadoop.fs.Path>>[] maps, String backupId) voidWrite the current timestamps for each regionserver to backup system table after a successful full or incremental backup.voidBackupSystemTable.writeRegionServerLogTimestamp(Set<TableName> tables, Map<String, Long> newTimestamps, String backupRoot) Write the current timestamps for each regionserver to backup system table after a successful full or incremental backup.Constructors in org.apache.hadoop.hbase.backup.impl with parameters of type TableNameModifierConstructorDescriptionBackupManifest(BackupInfo backup, TableName table) Construct a table level manifest for a backup of the named table.Constructor parameters in org.apache.hadoop.hbase.backup.impl with type arguments of type TableNameModifierConstructorDescriptionprivateBackupImage(String backupId, BackupType type, String rootDir, List<TableName> tableList, long startTs, long completeTs)  - 
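The classes in this package are internal, but the static name lookups listed above can be handy when inspecting a cluster that has backups enabled. A small sketch, assuming only a client-side Configuration is available:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.backup.impl.BackupSystemTable;

public class BackupSystemTableNames {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Both static methods are listed above; they resolve the backup system table names
    // (main backup metadata table and the bulk-loaded-files table) from configuration.
    TableName backupMeta = BackupSystemTable.getTableName(conf);
    TableName bulkLoaded = BackupSystemTable.getTableNameForBulkLoadedData(conf);
    System.out.println(backupMeta + " / " + bulkLoaded);
  }
}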
Uses of TableName in org.apache.hadoop.hbase.backup.mapreduce
Fields in org.apache.hadoop.hbase.backup.mapreduce declared as TableName

Methods in org.apache.hadoop.hbase.backup.mapreduce that return TableName
  MapReduceBackupCopyJob.SnapshotCopy.getTable()
  protected TableName[] MapReduceBackupMergeJob.getTableNamesInBackupImages(String[] backupIds)

Methods in org.apache.hadoop.hbase.backup.mapreduce that return types with arguments of type TableName
  MapReduceBackupMergeJob.toTableNameList(List<Pair<TableName, org.apache.hadoop.fs.Path>> processedTableList)

Methods in org.apache.hadoop.hbase.backup.mapreduce with parameters of type TableName
  protected org.apache.hadoop.fs.Path[] MapReduceBackupMergeJob.findInputDirectories(org.apache.hadoop.fs.FileSystem fs, String backupRoot, TableName tableName, String[] backupIds)
  protected void MapReduceBackupMergeJob.moveData(org.apache.hadoop.fs.FileSystem fs, String backupRoot, org.apache.hadoop.fs.Path bulkOutputPath, TableName tableName, String mergedBackupId)
  void MapReduceRestoreJob.run(org.apache.hadoop.fs.Path[] dirPaths, TableName[] tableNames, org.apache.hadoop.fs.Path restoreRootDir, TableName[] newTableNames, boolean fullBackupRestore)

Method parameters in org.apache.hadoop.hbase.backup.mapreduce with type arguments of type TableName
  protected List<org.apache.hadoop.fs.Path> MapReduceBackupMergeJob.toPathList(List<Pair<TableName, org.apache.hadoop.fs.Path>> processedTableList)
  MapReduceBackupMergeJob.toTableNameList(List<Pair<TableName, org.apache.hadoop.fs.Path>> processedTableList)

Constructors in org.apache.hadoop.hbase.backup.mapreduce with parameters of type TableName
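MapReduceRestoreJob implements the RestoreJob.run signature listed above. A sketch of driving that interface for a full-backup restore; how the RestoreJob instance is created and configured is left to the caller, and the paths and table names are placeholders:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.backup.RestoreJob;

public class RestoreJobSketch {
  // The caller supplies a RestoreJob implementation (MapReduceRestoreJob in this package);
  // obtaining and configuring that instance is out of scope for this sketch.
  static void restoreFullBackup(RestoreJob job, Path backupDir, Path restoreRootDir) throws Exception {
    Path[] inputDirs = { backupDir };                             // illustrative input layout
    TableName[] fromTables = { TableName.valueOf("ns1:t1") };
    TableName[] toTables   = { TableName.valueOf("ns1:t1_restored") };
    // Signature as listed above: restore fromTables into toTables from a full backup.
    job.run(inputDirs, fromTables, restoreRootDir, toTables, true);
  }
}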
Uses of TableName in org.apache.hadoop.hbase.backup.util
Fields in org.apache.hadoop.hbase.backup.util with type parameters of type TableNameMethods in org.apache.hadoop.hbase.backup.util that return TableNameMethods in org.apache.hadoop.hbase.backup.util that return types with arguments of type TableNameMethods in org.apache.hadoop.hbase.backup.util with parameters of type TableNameModifier and TypeMethodDescriptionprivate voidRestoreTool.checkAndCreateTable(Connection conn, TableName targetTableName, ArrayList<org.apache.hadoop.fs.Path> regionDirList, TableDescriptor htd, boolean truncateIfExists) Prepare the table for bulkload, most codes copied fromcreateTablemethod inBulkLoadHFilesTool.private voidRestoreTool.createAndRestoreTable(Connection conn, TableName tableName, TableName newTableName, org.apache.hadoop.fs.Path tableBackupPath, boolean truncateIfExists, String lastIncrBackupId) static RestoreRequestBackupUtils.createRestoreRequest(String backupRootDir, String backupId, boolean check, TableName[] fromTables, TableName[] toTables, boolean isOverwrite) Create restore request.voidRestoreTool.fullRestoreTable(Connection conn, org.apache.hadoop.fs.Path tableBackupPath, TableName tableName, TableName newTableName, boolean truncateIfExists, String lastIncrBackupId) static StringBackupUtils.getFileNameCompatibleString(TableName table) (package private) ArrayList<org.apache.hadoop.fs.Path>RestoreTool.getRegionList(TableName tableName) Gets region list(package private) org.apache.hadoop.fs.PathRestoreTool.getTableArchivePath(TableName tableName) return value represent path for: ".../user/biadmin/backup1/default/t1_dn/backup_1396650096738/archive/data/default/t1_dn"static StringBackupUtils.getTableBackupDir(String backupRootDir, String backupId, TableName tableName) Given the backup root dir, backup id and the table name, return the backup image location, which is also where the backup manifest file is.(package private) TableDescriptorRestoreTool.getTableDesc(TableName tableName) Get table descriptorprivate TableDescriptorRestoreTool.getTableDescriptor(org.apache.hadoop.fs.FileSystem fileSys, TableName tableName, String lastIncrBackupId) (package private) org.apache.hadoop.fs.PathRestoreTool.getTableInfoPath(TableName tableName) Returns value represent path for: ""/$USER/SBACKUP_ROOT/backup_id/namespace/table/.hbase-snapshot/ snapshot_1396650097621_namespace_table" this path contains .snapshotinfo, .tabledesc (0.96 and 0.98) this path contains .snapshotinfo, .data.manifest (trunk)(package private) org.apache.hadoop.fs.PathRestoreTool.getTableSnapshotPath(org.apache.hadoop.fs.Path backupRootPath, TableName tableName, String backupId) Returns value represent path for path to backup table snapshot directory: "/$USER/SBACKUP_ROOT/backup_id/namespace/table/.hbase-snapshot"voidRestoreTool.incrementalRestoreTable(Connection conn, org.apache.hadoop.fs.Path tableBackupPath, org.apache.hadoop.fs.Path[] logDirs, TableName[] tableNames, TableName[] newTableNames, String incrBackupId) During incremental backup operation.Method parameters in org.apache.hadoop.hbase.backup.util with type arguments of type TableNameModifier and TypeMethodDescriptionLoop through the RS log timestamp map for the tables, for each RS, find the min timestamp value for the RS among the tables.static booleanBackupUtils.validate(HashMap<TableName, BackupManifest> backupManifestMap, org.apache.hadoop.conf.Configuration conf) Constructor parameters in org.apache.hadoop.hbase.backup.util with type arguments of type TableName - 
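BackupUtils exposes static helpers for building restore requests and resolving backup image locations. A sketch using the two signatures listed above; the backup root directory and backup id are placeholders:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.backup.RestoreRequest;
import org.apache.hadoop.hbase.backup.util.BackupUtils;

public class BackupUtilsSketch {
  public static void main(String[] args) {
    TableName[] fromTables = { TableName.valueOf("ns1:t1") };
    TableName[] toTables   = { TableName.valueOf("ns1:t1_restored") };

    // Resolve where the backup image (and its manifest) for a table lives.
    String imageDir = BackupUtils.getTableBackupDir(
        "hdfs://nn/backup", "backup_1396650096738", fromTables[0]);

    // Build a restore request mapping source tables to target tables.
    RestoreRequest request = BackupUtils.createRestoreRequest(
        "hdfs://nn/backup", "backup_1396650096738", false, fromTables, toTables, false);

    System.out.println("Backup image dir: " + imageDir);
  }
}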
Uses of TableName in org.apache.hadoop.hbase.client
Fields in org.apache.hadoop.hbase.client declared as TableNameModifier and TypeFieldDescriptionprivate final TableNameTableDescriptorBuilder.ModifyableTableDescriptor.nameprivate final TableNameRegionCoprocessorRpcChannel.tableprivate final TableNameSnapshotDescription.tableprivate final TableNameAsyncBatchRpcRetryingCaller.tableNameprivate final TableNameAsyncClientScanner.tableNameprivate TableNameAsyncProcessTask.Builder.tableNameprivate final TableNameAsyncProcessTask.tableNameprivate final TableNameAsyncRegionLocationCache.tableNameprivate final TableNameAsyncRequestFutureImpl.tableNameprivate TableNameAsyncRpcRetryingCallerFactory.BatchCallerBuilder.tableNameprivate TableNameAsyncRpcRetryingCallerFactory.SingleRequestCallerBuilder.tableNameprivate final TableNameAsyncSingleRequestRpcRetryingCaller.tableNameprotected TableNameAsyncTableBuilderBase.tableNameprivate final TableNameAsyncTableRegionLocatorImpl.tableNameprivate final TableNameAsyncTableResultScanner.tableNameprivate final TableNameBufferedMutatorImpl.tableNameprivate final TableNameBufferedMutatorParams.tableNameprivate final TableNameCatalogReplicaLoadBalanceSimpleSelector.tableNameprotected final TableNameClientScanner.tableNameprivate final TableNameHBaseAdmin.TableFuture.tableNameprivate final TableNameHRegionLocator.tableNameprivate final TableNameHTable.tableNameprivate final TableNameMutableRegionInfo.tableNameprotected final TableNameRawAsyncHBaseAdmin.TableProcedureBiConsumer.tableNameprivate final TableNameRawAsyncTableImpl.tableNameprotected final TableNameRegionAdminServiceCallable.tableNameprivate final TableNameRegionCoprocessorRpcChannelImpl.tableNameprivate final TableNameRegionInfoBuilder.tableNameprivate final TableNameRegionServerCallable.tableNameprotected final TableNameRpcRetryingCallerWithReadReplicas.tableNameprivate final TableNameScannerCallableWithReplicas.tableNameprotected TableNameTableBuilderBase.tableNameprivate final TableNameTableState.tableNameFields in org.apache.hadoop.hbase.client with type parameters of type TableNameModifier and TypeFieldDescriptionprivate final ConcurrentMap<TableName,AsyncNonMetaRegionLocator.TableCache> AsyncNonMetaRegionLocator.cacheprivate final ConcurrentMap<TableName,ConcurrentNavigableMap<byte[], RegionLocations>> MetaCache.cachedRegionLocationsMap of table to tableHRegionLocations.private final ConcurrentMap<TableName,ConcurrentNavigableMap<byte[], CatalogReplicaLoadBalanceSimpleSelector.StaleLocationCacheEntry>> CatalogReplicaLoadBalanceSimpleSelector.staleCacheNormalizeTableFilterParams.Builder.tableNamesNormalizeTableFilterParams.tableNamesMethods in org.apache.hadoop.hbase.client that return TableNameModifier and TypeMethodDescriptionprivate TableNameHBaseAdmin.checkTableExists(TableName tableName) Check if table exists or notprivate static TableNameMutableRegionInfo.checkTableName(TableName tableName) AsyncBufferedMutator.getName()Gets the fully qualified table name instance of the table that thisAsyncBufferedMutatorwrites to.AsyncBufferedMutatorImpl.getName()AsyncTable.getName()Gets the fully qualified table name instance of this table.AsyncTableImpl.getName()AsyncTableRegionLocator.getName()Gets the fully qualified table name instance of the table whose region we want to locate.AsyncTableRegionLocatorImpl.getName()BufferedMutator.getName()Gets the fully qualified table name instance of the table that this BufferedMutator writes 
to.BufferedMutatorImpl.getName()HRegionLocator.getName()HTable.getName()RawAsyncTableImpl.getName()RegionLocator.getName()Gets the fully qualified table name instance of this table.Table.getName()Gets the fully qualified table name instance of this table.protected TableNameClientScanner.getTable()MutableRegionInfo.getTable()Get current table name of the regionRegionInfo.getTable()Returns current table name of the regionstatic TableNameRegionInfo.getTable(byte[] regionName) Gets the table name from the specified region name.AsyncProcessTask.getTableName()BufferedMutatorParams.getTableName()protected TableNameHBaseAdmin.TableFuture.getTableName()Returns the table nameRegionServerCallable.getTableName()SnapshotDescription.getTableName()TableDescriptor.getTableName()Get the name of the tableTableDescriptorBuilder.ModifyableTableDescriptor.getTableName()Get the name of the tableTableState.getTableName()Table name for stateprivate TableNameHBaseAdmin.getTableNameBeforeRestoreSnapshot(String snapshotName) Check whether the snapshot exists and contains disabled tableAdmin.listTableNames()List all of the names of userspace tables.Admin.listTableNames(String regex) Deprecated.since 2.0 version and will be removed in 3.0 version.Admin.listTableNames(String regex, boolean includeSysTables) Deprecated.since 2.0 version and will be removed in 3.0 version.default TableName[]Admin.listTableNames(Pattern pattern) List all of the names of userspace tables.Admin.listTableNames(Pattern pattern, boolean includeSysTables) List all of the names of userspace tables.HBaseAdmin.listTableNames()HBaseAdmin.listTableNames(String regex) HBaseAdmin.listTableNames(String regex, boolean includeSysTables) HBaseAdmin.listTableNames(Pattern pattern, boolean includeSysTables) Admin.listTableNamesByNamespace(String name) Get list of table names by namespace.HBaseAdmin.listTableNamesByNamespace(String name) Methods in org.apache.hadoop.hbase.client that return types with arguments of type TableNameModifier and TypeMethodDescriptionprivate CompletableFuture<TableName>RawAsyncHBaseAdmin.checkRegionsAndGetTableName(byte[][] encodedRegionNames) Map<TableName,? extends SpaceQuotaSnapshotView> Admin.getRegionServerSpaceQuotaSnapshots(ServerName serverName) Fetches the observedSpaceQuotaSnapshotViews observed by a RegionServer.CompletableFuture<? extends Map<TableName,? 
extends SpaceQuotaSnapshotView>> AsyncAdmin.getRegionServerSpaceQuotaSnapshots(ServerName serverName) Fetches the observedSpaceQuotaSnapshotViews observed by a RegionServer.AsyncHBaseAdmin.getRegionServerSpaceQuotaSnapshots(ServerName serverName) HBaseAdmin.getRegionServerSpaceQuotaSnapshots(ServerName serverName) RawAsyncHBaseAdmin.getRegionServerSpaceQuotaSnapshots(ServerName serverName) Admin.getSpaceQuotaTableSizes()Fetches the table sizes on the filesystem as tracked by the HBase Master.AsyncAdmin.getSpaceQuotaTableSizes()Fetches the table sizes on the filesystem as tracked by the HBase Master.AsyncHBaseAdmin.getSpaceQuotaTableSizes()HBaseAdmin.getSpaceQuotaTableSizes()RawAsyncHBaseAdmin.getSpaceQuotaTableSizes()AsyncRpcRetryingCaller.getTableName()AsyncSingleRequestRpcRetryingCaller.getTableName()NormalizeTableFilterParams.getTableNames()private CompletableFuture<List<TableName>>RawAsyncHBaseAdmin.getTableNames(org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.GetTableNamesRequest request) default CompletableFuture<List<TableName>>AsyncAdmin.listTableNames()List all of the names of userspace tables.AsyncAdmin.listTableNames(boolean includeSysTables) List all of the names of tables.AsyncAdmin.listTableNames(Pattern pattern, boolean includeSysTables) List all of the names of userspace tables.AsyncHBaseAdmin.listTableNames(boolean includeSysTables) AsyncHBaseAdmin.listTableNames(Pattern pattern, boolean includeSysTables) RawAsyncHBaseAdmin.listTableNames(boolean includeSysTables) RawAsyncHBaseAdmin.listTableNames(Pattern pattern, boolean includeSysTables) AsyncAdmin.listTableNamesByNamespace(String name) Get list of table names by namespace.AsyncHBaseAdmin.listTableNamesByNamespace(String name) RawAsyncHBaseAdmin.listTableNamesByNamespace(String name) Admin.listTableNamesByState(boolean isEnabled) List all enabled or disabled table namesAsyncAdmin.listTableNamesByState(boolean isEnabled) List all enabled or disabled table namesAsyncHBaseAdmin.listTableNamesByState(boolean isEnabled) HBaseAdmin.listTableNamesByState(boolean isEnabled) RawAsyncHBaseAdmin.listTableNamesByState(boolean isEnabled) Methods in org.apache.hadoop.hbase.client with parameters of type TableNameModifier and TypeMethodDescriptiondefault voidAdmin.addColumn(TableName tableName, ColumnFamilyDescriptor columnFamily) Deprecated.As of release 2.0.0.default voidAdmin.addColumnFamily(TableName tableName, ColumnFamilyDescriptor columnFamily) Add a column family to an existing table.AsyncAdmin.addColumnFamily(TableName tableName, ColumnFamilyDescriptor columnFamily) Add a column family to an existing table.AsyncHBaseAdmin.addColumnFamily(TableName tableName, ColumnFamilyDescriptor columnFamily) RawAsyncHBaseAdmin.addColumnFamily(TableName tableName, ColumnFamilyDescriptor columnFamily) Admin.addColumnFamilyAsync(TableName tableName, ColumnFamilyDescriptor columnFamily) Add a column family to an existing table.HBaseAdmin.addColumnFamilyAsync(TableName tableName, ColumnFamilyDescriptor columnFamily) voidClusterConnection.cacheLocation(TableName tableName, RegionLocations location) voidConnectionImplementation.cacheLocation(TableName tableName, RegionLocations location) Put a newly discovered HRegionLocation into the cache.private voidConnectionImplementation.cacheLocation(TableName tableName, ServerName source, HRegionLocation location) Put a newly discovered HRegionLocation into the cache.voidMetaCache.cacheLocation(TableName tableName, RegionLocations locations) Put a newly discovered HRegionLocation into the 
cache.voidMetaCache.cacheLocation(TableName tableName, ServerName source, HRegionLocation location) Put a newly discovered HRegionLocation into the cache.(package private) static intConnectionUtils.calcPriority(int priority, TableName tableName) Select the priority for the rpc call.private voidHBaseAdmin.checkAndSyncTableDescToPeers(TableName tableName, byte[][] splits) Connect to peer and check the table descriptor on peer: Create the same table on peer when not exist. Throw an exception if the table already has replication enabled on any of the column families. Throw an exception if the table exists on peer cluster but descriptors are not same.private CompletableFuture<Void>RawAsyncHBaseAdmin.checkAndSyncTableToPeerClusters(TableName tableName, byte[][] splits) Connect to peer and check the table descriptor on peer: Create the same table on peer when not exist. Throw an exception if the table already has replication enabled on any of the column families. Throw an exception if the table exists on peer cluster but descriptors are not same.private TableNameHBaseAdmin.checkTableExists(TableName tableName) Check if table exists or notprivate static TableNameMutableRegionInfo.checkTableName(TableName tableName) Admin.clearBlockCache(TableName tableName) Clear all the blocks corresponding to this table from BlockCache.AsyncAdmin.clearBlockCache(TableName tableName) Clear all the blocks corresponding to this table from BlockCache.AsyncHBaseAdmin.clearBlockCache(TableName tableName) HBaseAdmin.clearBlockCache(TableName tableName) Clear all the blocks corresponding to this table from BlockCache.RawAsyncHBaseAdmin.clearBlockCache(TableName tableName) (package private) voidAsyncNonMetaRegionLocator.clearCache(TableName tableName) (package private) voidAsyncRegionLocator.clearCache(TableName tableName) voidMetaCache.clearCache(TableName tableName) Delete all cached entries of a table.
Synchronized because of calls in cacheLocation which need to be executed atomicallyvoidMetaCache.clearCache(TableName tableName, byte[] row) Delete a cached location, no matter what it is.voidMetaCache.clearCache(TableName tableName, byte[] row, int replicaId) Delete a cached location with specific replicaId.
Synchronized because of calls in cacheLocation which need to be executed atomicallyvoidMetaCache.clearCache(TableName tableName, byte[] row, ServerName serverName) Delete a cached location for a table, row and server.voidClusterConnection.clearRegionCache(TableName tableName) Allows flushing the region cache of all locations that pertain totableNamevoidConnectionImplementation.clearRegionCache(TableName tableName) voidConnectionImplementation.clearRegionCache(TableName tableName, byte[] row) default voidAdmin.cloneSnapshot(byte[] snapshotName, TableName tableName) Deprecated.since 2.3.0, will be removed in 3.0.0.default voidAdmin.cloneSnapshot(String snapshotName, TableName tableName) Create a new table by cloning the snapshot content.default voidAdmin.cloneSnapshot(String snapshotName, TableName tableName, boolean restoreAcl) Create a new table by cloning the snapshot content.default voidAdmin.cloneSnapshot(String snapshotName, TableName tableName, boolean restoreAcl, String customSFT) Create a new table by cloning the snapshot content.default CompletableFuture<Void>AsyncAdmin.cloneSnapshot(String snapshotName, TableName tableName) Create a new table by cloning the snapshot content.default CompletableFuture<Void>AsyncAdmin.cloneSnapshot(String snapshotName, TableName tableName, boolean restoreAcl) Create a new table by cloning the snapshot content.AsyncAdmin.cloneSnapshot(String snapshotName, TableName tableName, boolean restoreAcl, String customSFT) Create a new table by cloning the snapshot content.AsyncHBaseAdmin.cloneSnapshot(String snapshotName, TableName tableName, boolean restoreAcl, String customSFT) RawAsyncHBaseAdmin.cloneSnapshot(String snapshotName, TableName tableName, boolean restoreAcl, String customSFT) Admin.cloneSnapshotAsync(String snapshotName, TableName tableName) Create a new table by cloning the snapshot content, but does not block and wait for it to be completely cloned.Admin.cloneSnapshotAsync(String snapshotName, TableName tableName, boolean restoreAcl) Create a new table by cloning the snapshot content.Admin.cloneSnapshotAsync(String snapshotName, TableName tableName, boolean restoreAcl, String customSFT) Create a new table by cloning the snapshot content.HBaseAdmin.cloneSnapshotAsync(String snapshotName, TableName tableName, boolean restoreAcl, String customSFT) voidAdmin.cloneTableSchema(TableName tableName, TableName newTableName, boolean preserveSplits) Create a new table by cloning the existent table schema.AsyncAdmin.cloneTableSchema(TableName tableName, TableName newTableName, boolean preserveSplits) Create a new table by cloning the existent table schema.AsyncHBaseAdmin.cloneTableSchema(TableName tableName, TableName newTableName, boolean preserveSplits) voidHBaseAdmin.cloneTableSchema(TableName tableName, TableName newTableName, boolean preserveSplits) RawAsyncHBaseAdmin.cloneTableSchema(TableName tableName, TableName newTableName, boolean preserveSplits) voidCompact a table.voidCompact a column family within a table.voidAdmin.compact(TableName tableName, byte[] columnFamily, CompactType compactType) Compact a column family within a table.voidAdmin.compact(TableName tableName, CompactType compactType) Compact a table.default CompletableFuture<Void>Compact a table.default CompletableFuture<Void>Compact a column family within a table.AsyncAdmin.compact(TableName tableName, byte[] columnFamily, CompactType compactType) Compact a column family within a table.AsyncAdmin.compact(TableName tableName, CompactType compactType) Compact a 
table.AsyncHBaseAdmin.compact(TableName tableName, byte[] columnFamily, CompactType compactType) AsyncHBaseAdmin.compact(TableName tableName, CompactType compactType) voidCompact a table.voidCompact a column family within a table.private voidHBaseAdmin.compact(TableName tableName, byte[] columnFamily, boolean major, CompactType compactType) Compact a table.voidHBaseAdmin.compact(TableName tableName, byte[] columnFamily, CompactType compactType) Compact a column family within a table.voidHBaseAdmin.compact(TableName tableName, CompactType compactType) Compact a table.private CompletableFuture<Void>RawAsyncHBaseAdmin.compact(TableName tableName, byte[] columnFamily, boolean major, CompactType compactType) Compact column family of a table, Asynchronous operation even if CompletableFuture.get()RawAsyncHBaseAdmin.compact(TableName tableName, byte[] columnFamily, CompactType compactType) RawAsyncHBaseAdmin.compact(TableName tableName, CompactType compactType) private CompletableFuture<Void>RawAsyncHBaseAdmin.compareTableWithPeerCluster(TableName tableName, TableDescriptor tableDesc, ReplicationPeerDescription peer, AsyncAdmin peerAdmin) private voidAsyncNonMetaRegionLocator.complete(TableName tableName, AsyncNonMetaRegionLocator.LocateRequest req, RegionLocations locs, Throwable error) private static CompletableFuture<Boolean>RawAsyncHBaseAdmin.completeCheckTableState(CompletableFuture<Boolean> future, TableState tableState, Throwable error, TableState.State targetState, TableName tableName) Utility for completing passed TableStateCompletableFuturefutureusing passed parameters.static TableStateTableState.convert(TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.TableState tableState) Covert from PB version of TableStatestatic TableDescriptorTableDescriptorBuilder.copy(TableName name, TableDescriptor desc) private MultiServerCallableAsyncRequestFutureImpl.createCallable(ServerName server, TableName tableName, MultiAction multi) Create a callable.static RegionInfoRegionInfo.createMobRegionInfo(TableName tableName) Creates a RegionInfo object for MOB data.static byte[]RegionInfo.createRegionName(TableName tableName, byte[] startKey, byte[] id, boolean newFormat) Make a region name of passed parameters.static byte[]RegionInfo.createRegionName(TableName tableName, byte[] startKey, byte[] id, int replicaId, boolean newFormat) Make a region name of passed parameters.static byte[]RegionInfo.createRegionName(TableName tableName, byte[] startKey, long regionid, boolean newFormat) Make a region name of passed parameters.static byte[]RegionInfo.createRegionName(TableName tableName, byte[] startKey, long regionid, int replicaId, boolean newFormat) Make a region name of passed parameters.static byte[]RegionInfo.createRegionName(TableName tableName, byte[] startKey, String id, boolean newFormat) Make a region name of passed parameters.CatalogReplicaLoadBalanceSelectorFactory.createSelector(String replicaSelectorClass, TableName tableName, ChoreService choreService, IntSupplier getReplicaCount) Create a CatalogReplicaLoadBalanceReplicaSelector based on input config.private CompletableFuture<Void>RawAsyncHBaseAdmin.createTable(TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateTableRequest request) voidAdmin.deleteColumn(TableName tableName, byte[] columnFamily) Deprecated.As of release 2.0.0.voidHBaseAdmin.deleteColumn(TableName tableName, byte[] columnFamily) Deprecated.Since 2.0.default voidAdmin.deleteColumnFamily(TableName tableName, 
byte[] columnFamily) Delete a column family from a table.AsyncAdmin.deleteColumnFamily(TableName tableName, byte[] columnFamily) Delete a column family from a table.AsyncHBaseAdmin.deleteColumnFamily(TableName tableName, byte[] columnFamily) RawAsyncHBaseAdmin.deleteColumnFamily(TableName tableName, byte[] columnFamily) Admin.deleteColumnFamilyAsync(TableName tableName, byte[] columnFamily) Delete a column family from a table.HBaseAdmin.deleteColumnFamilyAsync(TableName tableName, byte[] columnFamily) default voidAdmin.deleteTable(TableName tableName) Deletes a table.AsyncAdmin.deleteTable(TableName tableName) Deletes a table.AsyncHBaseAdmin.deleteTable(TableName tableName) RawAsyncHBaseAdmin.deleteTable(TableName tableName) Admin.deleteTableAsync(TableName tableName) Deletes the table but does not block and wait for it to be completely removed.HBaseAdmin.deleteTableAsync(TableName tableName) default voidAdmin.disableTable(TableName tableName) Disable table and wait on completion.AsyncAdmin.disableTable(TableName tableName) Disable a table.AsyncHBaseAdmin.disableTable(TableName tableName) RawAsyncHBaseAdmin.disableTable(TableName tableName) Admin.disableTableAsync(TableName tableName) Disable the table but does not block and wait for it to be completely disabled.HBaseAdmin.disableTableAsync(TableName tableName) voidAdmin.disableTableReplication(TableName tableName) Disable a table's replication switch.AsyncAdmin.disableTableReplication(TableName tableName) Disable a table's replication switch.AsyncHBaseAdmin.disableTableReplication(TableName tableName) voidHBaseAdmin.disableTableReplication(TableName tableName) RawAsyncHBaseAdmin.disableTableReplication(TableName tableName) static <R> voidHTable.doBatchWithCallback(List<? extends Row> actions, Object[] results, Batch.Callback<R> callback, ClusterConnection connection, ExecutorService pool, TableName tableName, Map<String, byte[]> requestAttributes) default voidAdmin.enableTable(TableName tableName) Enable a table.AsyncAdmin.enableTable(TableName tableName) Enable a table.AsyncHBaseAdmin.enableTable(TableName tableName) RawAsyncHBaseAdmin.enableTable(TableName tableName) Admin.enableTableAsync(TableName tableName) Enable the table but does not block and wait for it to be completely enabled.HBaseAdmin.enableTableAsync(TableName tableName) voidAdmin.enableTableReplication(TableName tableName) Enable a table's replication switch.AsyncAdmin.enableTableReplication(TableName tableName) Enable a table's replication switch.AsyncHBaseAdmin.enableTableReplication(TableName tableName) voidHBaseAdmin.enableTableReplication(TableName tableName) RawAsyncHBaseAdmin.enableTableReplication(TableName tableName) voidFlush a table.voidFlush the specified column family stores on all regions of the passed table.default voidFlush the specified column family stores on all regions of the passed table.Flush a table.Flush the specified column family stores on all regions of the passed table.Flush the specified column family stores on all regions of the passed table.voidvoidAdmin.flushAsync(TableName tableName, List<byte[]> columnFamilies) Flush a table but does not block and wait for it to finish.HBaseAdmin.flushAsync(TableName tableName, List<byte[]> columnFamilies) private static intMutableRegionInfo.generateHashCode(TableName tableName, byte[] startKey, byte[] endKey, long regionId, int replicaId, boolean offLine, byte[] regionName) Admin.getAlterStatus(TableName tableName) Deprecated.Since 2.0.0.HBaseAdmin.getAlterStatus(TableName tableName) default 
AsyncBufferedMutatorAsyncConnection.getBufferedMutator(TableName tableName) Retrieve anAsyncBufferedMutatorfor performing client-side buffering of writes.default AsyncBufferedMutatorAsyncConnection.getBufferedMutator(TableName tableName, ExecutorService pool) Retrieve anAsyncBufferedMutatorfor performing client-side buffering of writes.Connection.getBufferedMutator(TableName tableName) Retrieve aBufferedMutatorfor performing client-side buffering of writes.ConnectionImplementation.getBufferedMutator(TableName tableName) AsyncConnection.getBufferedMutatorBuilder(TableName tableName) Returns anAsyncBufferedMutatorBuilderfor creatingAsyncBufferedMutator.AsyncConnection.getBufferedMutatorBuilder(TableName tableName, ExecutorService pool) Returns anAsyncBufferedMutatorBuilderfor creatingAsyncBufferedMutator.AsyncConnectionImpl.getBufferedMutatorBuilder(TableName tableName) AsyncConnectionImpl.getBufferedMutatorBuilder(TableName tableName, ExecutorService pool) (package private) RegionLocationsConnectionImplementation.getCachedLocation(TableName tableName, byte[] row) Search the cache for a location that fits our table and row key.MetaCache.getCachedLocation(TableName tableName, byte[] row) Search the cache for a location that fits our table and row key.Admin.getCompactionState(TableName tableName) Get the current compaction state of a table.Admin.getCompactionState(TableName tableName, CompactType compactType) Get the current compaction state of a table.default CompletableFuture<CompactionState>AsyncAdmin.getCompactionState(TableName tableName) Get the current compaction state of a table.AsyncAdmin.getCompactionState(TableName tableName, CompactType compactType) Get the current compaction state of a table.AsyncHBaseAdmin.getCompactionState(TableName tableName, CompactType compactType) HBaseAdmin.getCompactionState(TableName tableName) HBaseAdmin.getCompactionState(TableName tableName, CompactType compactType) Get the current compaction state of a table.RawAsyncHBaseAdmin.getCompactionState(TableName tableName, CompactType compactType) Admin.getCurrentSpaceQuotaSnapshot(TableName tableName) Returns the Master's view of a quota on the giventableNameor null if the Master has no quota information on that table.CompletableFuture<? 
extends SpaceQuotaSnapshotView>AsyncAdmin.getCurrentSpaceQuotaSnapshot(TableName tableName) Returns the Master's view of a quota on the giventableNameor null if the Master has no quota information on that table.AsyncHBaseAdmin.getCurrentSpaceQuotaSnapshot(TableName tableName) HBaseAdmin.getCurrentSpaceQuotaSnapshot(TableName tableName) RawAsyncHBaseAdmin.getCurrentSpaceQuotaSnapshot(TableName tableName) Admin.getDescriptor(TableName tableName) Get a table descriptor.AsyncAdmin.getDescriptor(TableName tableName) Method for getting the tableDescriptorAsyncHBaseAdmin.getDescriptor(TableName tableName) HBaseAdmin.getDescriptor(TableName tableName) RawAsyncHBaseAdmin.getDescriptor(TableName tableName) (package private) static HTableDescriptorHBaseAdmin.getHTableDescriptor(TableName tableName, Connection connection, RpcRetryingCallerFactory rpcCallerFactory, RpcControllerFactory rpcControllerFactory, int operationTimeout, int rpcTimeout) Deprecated.since 2.0 version and will be removed in 3.0 version.longAdmin.getLastMajorCompactionTimestamp(TableName tableName) Get the timestamp of the last major compaction for the passed table The timestamp of the oldest HFile resulting from a major compaction of that table, or 0 if no such HFile could be found.AsyncAdmin.getLastMajorCompactionTimestamp(TableName tableName) Get the timestamp of the last major compaction for the passed table.AsyncHBaseAdmin.getLastMajorCompactionTimestamp(TableName tableName) longHBaseAdmin.getLastMajorCompactionTimestamp(TableName tableName) RawAsyncHBaseAdmin.getLastMajorCompactionTimestamp(TableName tableName) (package private) intConnectionImplementation.getNumberOfCachedRegionLocations(TableName tableName) intMetaCache.getNumberOfCachedRegionLocations(TableName tableName) Return the number of cached region for a table.(package private) static intConnectionUtils.getPriority(TableName tableName) (package private) CompletableFuture<HRegionLocation>AsyncRegionLocator.getRegionLocation(TableName tableName, byte[] row, int replicaId, RegionLocateType type, boolean reload, long timeoutNs) (package private) CompletableFuture<HRegionLocation>AsyncRegionLocator.getRegionLocation(TableName tableName, byte[] row, int replicaId, RegionLocateType type, long timeoutNs) (package private) CompletableFuture<HRegionLocation>AsyncRegionLocator.getRegionLocation(TableName tableName, byte[] row, RegionLocateType type, boolean reload, long timeoutNs) (package private) CompletableFuture<HRegionLocation>AsyncRegionLocator.getRegionLocation(TableName tableName, byte[] row, RegionLocateType type, long timeoutNs) ClusterConnection.getRegionLocation(TableName tableName, byte[] row, boolean reload) Find region location hosting passed rowConnectionImplementation.getRegionLocation(TableName tableName, byte[] row, boolean reload) (package private) RegionLocationsAsyncNonMetaRegionLocator.getRegionLocationInCache(TableName tableName, byte[] row) (package private) CompletableFuture<RegionLocations>AsyncNonMetaRegionLocator.getRegionLocations(TableName tableName, byte[] row, int replicaId, RegionLocateType locateType, boolean reload) (package private) CompletableFuture<RegionLocations>AsyncRegionLocator.getRegionLocations(TableName tableName, byte[] row, RegionLocateType type, boolean reload, long timeoutNs) static RegionLocationsRegionAdminServiceCallable.getRegionLocations(ClusterConnection connection, TableName tableName, byte[] row, boolean useCache, int replicaId) (package private) static 
RegionLocationsRpcRetryingCallerWithReadReplicas.getRegionLocations(boolean useCache, int replicaId, ClusterConnection cConnection, TableName tableName, byte[] row) private CompletableFuture<RegionLocations>AsyncNonMetaRegionLocator.getRegionLocationsInternal(TableName tableName, byte[] row, int replicaId, RegionLocateType locateType, boolean reload) AsyncConnection.getRegionLocator(TableName tableName) Retrieve a AsyncRegionLocator implementation to inspect region information on a table.AsyncConnectionImpl.getRegionLocator(TableName tableName) Connection.getRegionLocator(TableName tableName) Retrieve a RegionLocator implementation to inspect region information on a table.ConnectionImplementation.getRegionLocator(TableName tableName) Admin.getRegionMetrics(ServerName serverName, TableName tableName) GetRegionMetricsof all regions hosted on a regionserver for a table.AsyncAdmin.getRegionMetrics(ServerName serverName, TableName tableName) Get a list ofRegionMetricsof all regions hosted on a region server for a table.AsyncHBaseAdmin.getRegionMetrics(ServerName serverName, TableName tableName) HBaseAdmin.getRegionMetrics(ServerName serverName, TableName tableName) RawAsyncHBaseAdmin.getRegionMetrics(ServerName serverName, TableName tableName) Admin.getRegions(TableName tableName) Get the regions of a given table.AsyncAdmin.getRegions(TableName tableName) Get the regions of a given table.AsyncHBaseAdmin.getRegions(TableName tableName) HBaseAdmin.getRegions(TableName tableName) RawAsyncHBaseAdmin.getRegions(TableName tableName) default AsyncTable<AdvancedScanResultConsumer>Retrieve anAsyncTableimplementation for accessing a table.default AsyncTable<ScanResultConsumer>AsyncConnection.getTable(TableName tableName, ExecutorService pool) Retrieve anAsyncTableimplementation for accessing a table.default TableRetrieve a Table implementation for accessing a table.default TableConnection.getTable(TableName tableName, ExecutorService pool) Retrieve a Table implementation for accessing a table.AsyncConnection.getTableBuilder(TableName tableName) Returns anAsyncTableBuilderfor creatingAsyncTable.AsyncConnection.getTableBuilder(TableName tableName, ExecutorService pool) Returns anAsyncTableBuilderfor creatingAsyncTable.AsyncConnectionImpl.getTableBuilder(TableName tableName) AsyncConnectionImpl.getTableBuilder(TableName tableName, ExecutorService pool) Connection.getTableBuilder(TableName tableName, ExecutorService pool) Returns anTableBuilderfor creatingTable.ConnectionImplementation.getTableBuilder(TableName tableName, ExecutorService pool) AsyncNonMetaRegionLocator.getTableCache(TableName tableName) Admin.getTableDescriptor(TableName tableName) Deprecated.since 2.0 version and will be removed in 3.0 version.HBaseAdmin.getTableDescriptor(TableName tableName) (package private) static TableDescriptorHBaseAdmin.getTableDescriptor(TableName tableName, Connection connection, RpcRetryingCallerFactory rpcCallerFactory, RpcControllerFactory rpcControllerFactory, int operationTimeout, int rpcTimeout) private CompletableFuture<List<HRegionLocation>>RawAsyncHBaseAdmin.getTableHRegionLocations(TableName tableName) List all region locations for the specific table.private ConcurrentNavigableMap<byte[],RegionLocations> MetaCache.getTableLocations(TableName tableName) Returns Map of cached locations for passedtableName.
Despite being Concurrent, writes to the map should be synchronized because we have cases where we need to make multiple updates atomically.Admin.getTableRegions(TableName tableName) Deprecated.As of release 2.0.0, this will be removed in HBase 3.0.0 (HBASE-17980).HBaseAdmin.getTableRegions(TableName tableName) Deprecated.As of release 2.0.0, this will be removed in HBase 3.0.0 UseHBaseAdmin.getRegions(TableName).private byte[][]HBaseAdmin.getTableSplits(TableName tableName) private CompletableFuture<byte[][]>RawAsyncHBaseAdmin.getTableSplits(TableName tableName) ClusterConnection.getTableState(TableName tableName) Retrieve TableState, represent current table state.ConnectionImplementation.getTableState(TableName tableName) private MasterCallable<org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.TruncateRegionResponse>HBaseAdmin.getTruncateRegionCallable(TableName tableName, RegionInfo hri) private CompletableFuture<Void>RawAsyncHBaseAdmin.internalRestoreSnapshot(String snapshotName, TableName tableName, boolean restoreAcl, String customSFT) HBaseAdmin.internalRestoreSnapshotAsync(String snapshotName, TableName tableName, boolean restoreAcl, String customSFT) Execute Restore/Clone snapshot and wait for the server to complete (blocking).private booleanbooleanMetaCache.isRegionCached(TableName tableName, byte[] row) Check the region cache to see whether a region is cached yet or not.booleanAdmin.isTableAvailable(TableName tableName) Check if a table is available.booleanAdmin.isTableAvailable(TableName tableName, byte[][] splitKeys) Deprecated.Since 2.0.0.AsyncAdmin.isTableAvailable(TableName tableName) Check if a table is available.AsyncAdmin.isTableAvailable(TableName tableName, byte[][] splitKeys) Deprecated.Since 2.2.0.AsyncHBaseAdmin.isTableAvailable(TableName tableName) AsyncHBaseAdmin.isTableAvailable(TableName tableName, byte[][] splitKeys) booleanClusterConnection.isTableAvailable(TableName tableName, byte[][] splitKeys) Use this api to check if the table has been created with the specified number of splitkeys which was used while creating the given table.booleanConnectionImplementation.isTableAvailable(TableName tableName, byte[][] splitKeys) booleanHBaseAdmin.isTableAvailable(TableName tableName) booleanHBaseAdmin.isTableAvailable(TableName tableName, byte[][] splitKeys) RawAsyncHBaseAdmin.isTableAvailable(TableName tableName) RawAsyncHBaseAdmin.isTableAvailable(TableName tableName, byte[][] splitKeys) private CompletableFuture<Boolean>RawAsyncHBaseAdmin.isTableAvailable(TableName tableName, Optional<byte[][]> splitKeys) booleanAdmin.isTableDisabled(TableName tableName) Check if a table is disabled.AsyncAdmin.isTableDisabled(TableName tableName) Check if a table is disabled.AsyncHBaseAdmin.isTableDisabled(TableName tableName) booleanClusterConnection.isTableDisabled(TableName tableName) Check if a table is disabled.booleanConnectionImplementation.isTableDisabled(TableName tableName) booleanConnectionUtils.MasterlessConnection.isTableDisabled(TableName tableName) booleanHBaseAdmin.isTableDisabled(TableName tableName) RawAsyncHBaseAdmin.isTableDisabled(TableName tableName) booleanAdmin.isTableEnabled(TableName tableName) Check if a table is enabled.AsyncAdmin.isTableEnabled(TableName tableName) Check if a table is enabled.AsyncHBaseAdmin.isTableEnabled(TableName tableName) booleanClusterConnection.isTableEnabled(TableName tableName) A table that isTableEnabled == false and isTableDisabled == false is possible.booleanConnectionImplementation.isTableEnabled(TableName 
tableName) booleanHBaseAdmin.isTableEnabled(TableName tableName) RawAsyncHBaseAdmin.isTableEnabled(TableName tableName) private voidRawAsyncHBaseAdmin.legacyFlush(CompletableFuture<Void> future, TableName tableName, List<byte[]> columnFamilies) private voidAsyncNonMetaRegionLocator.locateInMeta(TableName tableName, AsyncNonMetaRegionLocator.LocateRequest req) private RegionLocationsConnectionImplementation.locateMeta(TableName tableName, boolean useCache, int replicaId) ClusterConnection.locateRegion(TableName tableName, byte[] row) Find the location of the region of tableName that row lives in.ClusterConnection.locateRegion(TableName tableName, byte[] row, boolean useCache, boolean retry) Gets the locations of the region in the specified table, tableName, for a given row.ClusterConnection.locateRegion(TableName tableName, byte[] row, boolean useCache, boolean retry, int replicaId) Gets the locations of the region in the specified table, tableName, for a given row.ConnectionImplementation.locateRegion(TableName tableName, byte[] row) ConnectionImplementation.locateRegion(TableName tableName, byte[] row, boolean useCache, boolean retry) ConnectionImplementation.locateRegion(TableName tableName, byte[] row, boolean useCache, boolean retry, int replicaId) private RegionLocationsConnectionImplementation.locateRegionInMeta(TableName tableName, byte[] row, boolean useCache, boolean retry, int replicaId) Search the hbase:meta table for the HRegionLocation info that contains the table and row we're seeking.ClusterConnection.locateRegions(TableName tableName) Gets the locations of all regions in the specified table, tableName.ClusterConnection.locateRegions(TableName tableName, boolean useCache, boolean offlined) Gets the locations of all regions in the specified table, tableName.ConnectionImplementation.locateRegions(TableName tableName) ConnectionImplementation.locateRegions(TableName tableName, boolean useCache, boolean offlined) voidAdmin.majorCompact(TableName tableName) Major compact a table.voidAdmin.majorCompact(TableName tableName, byte[] columnFamily) Major compact a column family within a table.voidAdmin.majorCompact(TableName tableName, byte[] columnFamily, CompactType compactType) Major compact a column family within a table.voidAdmin.majorCompact(TableName tableName, CompactType compactType) Major compact a table.default CompletableFuture<Void>AsyncAdmin.majorCompact(TableName tableName) Major compact a table.default CompletableFuture<Void>AsyncAdmin.majorCompact(TableName tableName, byte[] columnFamily) Major compact a column family within a table.AsyncAdmin.majorCompact(TableName tableName, byte[] columnFamily, CompactType compactType) Major compact a column family within a table.AsyncAdmin.majorCompact(TableName tableName, CompactType compactType) Major compact a table.AsyncHBaseAdmin.majorCompact(TableName tableName, byte[] columnFamily, CompactType compactType) AsyncHBaseAdmin.majorCompact(TableName tableName, CompactType compactType) voidHBaseAdmin.majorCompact(TableName tableName) voidHBaseAdmin.majorCompact(TableName tableName, byte[] columnFamily) Major compact a column family within a table.voidHBaseAdmin.majorCompact(TableName tableName, byte[] columnFamily, CompactType compactType) Major compact a column family within a table.voidHBaseAdmin.majorCompact(TableName tableName, CompactType compactType) Major compact a table.RawAsyncHBaseAdmin.majorCompact(TableName tableName, byte[] columnFamily, CompactType compactType) RawAsyncHBaseAdmin.majorCompact(TableName tableName, 
CompactType compactType) default voidAdmin.modifyColumn(TableName tableName, ColumnFamilyDescriptor columnFamily) Deprecated.As of release 2.0.0.default voidAdmin.modifyColumnFamily(TableName tableName, ColumnFamilyDescriptor columnFamily) Modify an existing column family on a table.AsyncAdmin.modifyColumnFamily(TableName tableName, ColumnFamilyDescriptor columnFamily) Modify an existing column family on a table.AsyncHBaseAdmin.modifyColumnFamily(TableName tableName, ColumnFamilyDescriptor columnFamily) RawAsyncHBaseAdmin.modifyColumnFamily(TableName tableName, ColumnFamilyDescriptor columnFamily) Admin.modifyColumnFamilyAsync(TableName tableName, ColumnFamilyDescriptor columnFamily) Modify an existing column family on a table.HBaseAdmin.modifyColumnFamilyAsync(TableName tableName, ColumnFamilyDescriptor columnFamily) default voidAdmin.modifyColumnFamilyStoreFileTracker(TableName tableName, byte[] family, String dstSFT) Change the store file tracker of the given table's given family.AsyncAdmin.modifyColumnFamilyStoreFileTracker(TableName tableName, byte[] family, String dstSFT) Change the store file tracker of the given table's given family.AsyncHBaseAdmin.modifyColumnFamilyStoreFileTracker(TableName tableName, byte[] family, String dstSFT) RawAsyncHBaseAdmin.modifyColumnFamilyStoreFileTracker(TableName tableName, byte[] family, String dstSFT) Admin.modifyColumnFamilyStoreFileTrackerAsync(TableName tableName, byte[] family, String dstSFT) Change the store file tracker of the given table's given family.HBaseAdmin.modifyColumnFamilyStoreFileTrackerAsync(TableName tableName, byte[] family, String dstSFT) default voidAdmin.modifyTable(TableName tableName, TableDescriptor td) Deprecated.since 2.0 version and will be removed in 3.0 version.Admin.modifyTableAsync(TableName tableName, TableDescriptor td) Deprecated.since 2.0 version and will be removed in 3.0 version.default voidAdmin.modifyTableStoreFileTracker(TableName tableName, String dstSFT) Change the store file tracker of the given table.AsyncAdmin.modifyTableStoreFileTracker(TableName tableName, String dstSFT) Change the store file tracker of the given table.AsyncHBaseAdmin.modifyTableStoreFileTracker(TableName tableName, String dstSFT) RawAsyncHBaseAdmin.modifyTableStoreFileTracker(TableName tableName, String dstSFT) Admin.modifyTableStoreFileTrackerAsync(TableName tableName, String dstSFT) Change the store file tracker of the given table.HBaseAdmin.modifyTableStoreFileTrackerAsync(TableName tableName, String dstSFT) static RegionInfoBuilderRegionInfoBuilder.newBuilder(TableName tableName) static TableDescriptorBuilderTableDescriptorBuilder.newBuilder(TableName name) private booleanAsyncNonMetaRegionLocator.onScanNext(TableName tableName, AsyncNonMetaRegionLocator.LocateRequest req, Result result) static TableStateprivate <PREQ,PRESP> 
CompletableFuture<Void>RawAsyncHBaseAdmin.procedureCall(TableName tableName, PREQ preq, RawAsyncHBaseAdmin.MasterRpcCall<PRESP, PREQ> rpcCall, RawAsyncHBaseAdmin.Converter<Long, PRESP> respConverter, RawAsyncHBaseAdmin.ProcedureBiConsumer consumer) Deprecated.The puts request will be buffered by their corresponding buffer queue.booleanDeprecated.The put request will be buffered by its corresponding buffer queue.booleanDeprecated.The put request will be buffered by its corresponding buffer queue.ClusterConnection.relocateRegion(TableName tableName, byte[] row) Find the location of the region of tableName that row lives in, ignoring any value that might be in the cache.ClusterConnection.relocateRegion(TableName tableName, byte[] row, int replicaId) Find the location of the region of tableName that row lives in, ignoring any value that might be in the cache.ConnectionImplementation.relocateRegion(TableName tableName, byte[] row) ConnectionImplementation.relocateRegion(TableName tableName, byte[] row, int replicaId) (package private) static voidConnectionUtils.resetController(HBaseRpcController controller, long timeoutNs, int priority, TableName tableName) private CompletableFuture<Void>RawAsyncHBaseAdmin.restoreSnapshot(String snapshotName, TableName tableName, boolean takeFailSafeSnapshot, boolean restoreAcl) intCatalogReplicaLoadBalanceSelector.select(TableName tablename, byte[] row, RegionLocateType locateType) Select a catalog replica region where client go to loop up the input row key.intCatalogReplicaLoadBalanceSimpleSelector.select(TableName tableName, byte[] row, RegionLocateType locateType) When it looks up a location, it will call this method to find a replica region to go.(package private) voidMasterCallable.setPriority(TableName tableName) AsyncProcessTask.Builder.setTableName(TableName tableName) private voidHBaseAdmin.setTableRep(TableName tableName, boolean enableRep) Set the table's replication switch if the table's replication switch is already not set.private CompletableFuture<Void>RawAsyncHBaseAdmin.setTableReplication(TableName tableName, boolean enableRep) Set the table's replication switch if the table's replication switch is already not set.default voidDeprecated.since 2.3.0, will be removed in 3.0.0.default voidTake a snapshot for the given table.default voidCreate typed snapshot of the table.default voidAdmin.snapshot(String snapshotName, TableName tableName, SnapshotType type) Create typed snapshot of the table.default voidAdmin.snapshot(String snapshotName, TableName tableName, SnapshotType type, Map<String, Object> snapshotProps) Create typed snapshot of the table.default CompletableFuture<Void>Take a snapshot for the given table.default CompletableFuture<Void>AsyncAdmin.snapshot(String snapshotName, TableName tableName, SnapshotType type) Create typed snapshot of the table.voidSplit a table.voidSplit a table.Split a table.Split a table.voidvoidbooleanAdmin.tableExists(TableName tableName) Check if a table exists.AsyncAdmin.tableExists(TableName tableName) Check if a table exists.AsyncHBaseAdmin.tableExists(TableName tableName) booleanHBaseAdmin.tableExists(TableName tableName) RawAsyncHBaseAdmin.tableExists(TableName tableName) (package private) static <T> CompletableFuture<T>ConnectionUtils.timelineConsistentRead(AsyncRegionLocator locator, TableName tableName, Query query, byte[] row, RegionLocateType locateType, Function<Integer, CompletableFuture<T>> requestReplica, long rpcTimeoutNs, long primaryCallTimeoutNs, org.apache.hbase.thirdparty.io.netty.util.Timer 
retryTimer, Optional<MetricsConnection> metrics) default voidAdmin.truncateTable(TableName tableName, boolean preserveSplits) Truncate a table.AsyncAdmin.truncateTable(TableName tableName, boolean preserveSplits) Truncate a table.AsyncHBaseAdmin.truncateTable(TableName tableName, boolean preserveSplits) RawAsyncHBaseAdmin.truncateTable(TableName tableName, boolean preserveSplits) Admin.truncateTableAsync(TableName tableName, boolean preserveSplits) Truncate the table but does not block and wait for it to be completely enabled.HBaseAdmin.truncateTableAsync(TableName tableName, boolean preserveSplits) private CompletableFuture<Void>RawAsyncHBaseAdmin.trySyncTableToPeerCluster(TableName tableName, byte[][] splits, ReplicationPeerDescription peer) voidClusterConnection.updateCachedLocations(TableName tableName, byte[] regionName, byte[] rowkey, Object exception, ServerName source) Update the location cache.voidConnectionImplementation.updateCachedLocations(TableName tableName, byte[] regionName, byte[] rowkey, Object exception, ServerName source) Update the location with the new value (if the exception is a RegionMovedException) or delete it from the cache.voidMetricsConnection.updateRpc(org.apache.hbase.thirdparty.com.google.protobuf.Descriptors.MethodDescriptor method, TableName tableName, org.apache.hbase.thirdparty.com.google.protobuf.Message param, MetricsConnection.CallStats stats, Throwable e) Report RPC context to metrics system.private voidMetricsConnection.updateTableMetric(String methodName, TableName tableName, MetricsConnection.CallStats stats, Throwable e) Report table rpc context to metrics system.protected voidAsyncProcess.waitForMaximumCurrentTasks(int max, TableName tableName) Wait until the async does not have more than max tasks in progress.Method parameters in org.apache.hadoop.hbase.client with type arguments of type TableNameModifier and TypeMethodDescriptiondefault voidAppend the replicable table column family config from the specified peer.Append the replicable table-cf config of the specified peerprivate voidRawAsyncHBaseAdmin.checkAndGetTableName(byte[] encodeRegionName, AtomicReference<TableName> tableName, CompletableFuture<TableName> result) private voidRawAsyncHBaseAdmin.checkAndGetTableName(byte[] encodeRegionName, AtomicReference<TableName> tableName, CompletableFuture<TableName> result) Admin.getTableDescriptorsByTableName(List<TableName> tableNames) Deprecated.since 2.0 version and will be removed in 3.0 version.HBaseAdmin.getTableDescriptorsByTableName(List<TableName> tableNames) Admin.listTableDescriptors(List<TableName> tableNames) Get tableDescriptors.AsyncAdmin.listTableDescriptors(List<TableName> tableNames) List specific tables including system tables.AsyncHBaseAdmin.listTableDescriptors(List<TableName> tableNames) HBaseAdmin.listTableDescriptors(List<TableName> tableNames) RawAsyncHBaseAdmin.listTableDescriptors(List<TableName> tableNames) default voidRemove some table-cfs from config of the specified peer.Remove some table-cfs from config of the specified peerNormalizeTableFilterParams.Builder.tableNames(List<TableName> tableNames) Constructors in org.apache.hadoop.hbase.client with parameters of type TableNameModifierConstructorDescriptionAddColumnFamilyFuture(HBaseAdmin admin, TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AddColumnResponse response) (package private)AddColumnFamilyProcedureBiConsumer(TableName tableName) AsyncBatchRpcRetryingCaller(org.apache.hbase.thirdparty.io.netty.util.Timer retryTimer, 
AsyncConnectionImpl conn, TableName tableName, List<? extends Row> actions, long pauseNs, long pauseNsForServerOverloaded, int maxAttempts, long operationTimeoutNs, long rpcTimeoutNs, int startLogErrorsCnt, Map<String, byte[]> requestAttributes) AsyncClientScanner(Scan scan, AdvancedScanResultConsumer consumer, TableName tableName, AsyncConnectionImpl conn, org.apache.hbase.thirdparty.io.netty.util.Timer retryTimer, long pauseNs, long pauseNsForServerOverloaded, int maxAttempts, long scanTimeoutNs, long rpcTimeoutNs, int startLogErrorsCnt, Map<String, byte[]> requestAttributes) (package private)AsyncProcessTask(ExecutorService pool, TableName tableName, RowAccess<? extends Row> rows, AsyncProcessTask.SubmittedRows size, Batch.Callback<T> callback, CancellableRegionServerCallable callable, boolean needResults, int rpcTimeout, int operationTimeout, Object[] results, Map<String, byte[]> requestAttributes) AsyncRegionLocationCache(TableName tableName) AsyncSingleRequestRpcRetryingCaller(org.apache.hbase.thirdparty.io.netty.util.Timer retryTimer, AsyncConnectionImpl conn, TableName tableName, byte[] row, int replicaId, RegionLocateType locateType, AsyncSingleRequestRpcRetryingCaller.Callable<T> callable, int priority, long pauseNs, long pauseNsForServerOverloaded, int maxAttempts, long operationTimeoutNs, long rpcTimeoutNs, int startLogErrorsCnt, Map<String, byte[]> requestAttributes) (package private)AsyncTableBuilderBase(TableName tableName, AsyncConnectionConfiguration connConf) AsyncTableRegionLocatorImpl(TableName tableName, AsyncConnectionImpl conn) AsyncTableResultScanner(TableName tableName, Scan scan, long maxCacheSize) BufferedMutatorParams(TableName tableName) (package private)CancellableRegionServerCallable(Connection connection, TableName tableName, byte[] row, org.apache.hbase.thirdparty.com.google.protobuf.RpcController rpcController, int rpcTimeout, RetryingTimeTracker tracker, int priority, Map<String, byte[]> requestAttributes) (package private)CatalogReplicaLoadBalanceSimpleSelector(TableName tableName, ChoreService choreService, IntSupplier getNumOfReplicas) ClientAsyncPrefetchScanner(org.apache.hadoop.conf.Configuration configuration, Scan scan, Scan scanForMetrics, TableName name, ClusterConnection connection, RpcRetryingCallerFactory rpcCallerFactory, RpcControllerFactory rpcControllerFactory, ExecutorService pool, int scanReadRpcTimeout, int scannerTimeout, int replicaCallTimeoutMicroSecondScan, ConnectionConfiguration connectionConfiguration, Map<String, byte[]> requestAttributes) ClientScanner(org.apache.hadoop.conf.Configuration conf, Scan scan, Scan scanForMetrics, TableName tableName, ClusterConnection connection, RpcRetryingCallerFactory rpcFactory, RpcControllerFactory controllerFactory, ExecutorService pool, int scanReadRpcTimeout, int scannerTimeout, int primaryOperationTimeout, ConnectionConfiguration connectionConfiguration, Map<String, byte[]> requestAttributes) Create a new ClientScanner for the specified table Note that the passedScan's start row maybe changed changed.ClientServiceCallable(Connection connection, TableName tableName, byte[] row, org.apache.hbase.thirdparty.com.google.protobuf.RpcController rpcController, int priority, Map<String, byte[]> requestAttributes) ClientSimpleScanner(org.apache.hadoop.conf.Configuration configuration, Scan scan, Scan scanForMetrics, TableName name, ClusterConnection connection, RpcRetryingCallerFactory rpcCallerFactory, RpcControllerFactory rpcControllerFactory, ExecutorService pool, int scanReadRpcTimeout, int 
scannerTimeout, int replicaCallTimeoutMicroSecondScan, ConnectionConfiguration connectionConfiguration, Map<String, byte[]> requestAttributes) (package private)CreateTableProcedureBiConsumer(TableName tableName) DeleteColumnFamilyFuture(HBaseAdmin admin, TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteColumnResponse response) (package private)DeleteColumnFamilyProcedureBiConsumer(TableName tableName) DeleteTableFuture(HBaseAdmin admin, TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteTableResponse response) (package private)DeleteTableProcedureBiConsumer(TableName tableName) DisableTableFuture(HBaseAdmin admin, TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DisableTableResponse response) (package private)DisableTableProcedureBiConsumer(TableName tableName) EnableTableFuture(HBaseAdmin admin, TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.EnableTableResponse response) (package private)EnableTableProcedureBiConsumer(TableName tableName) FlushRegionCallable(ClusterConnection connection, RpcControllerFactory rpcControllerFactory, TableName tableName, byte[] regionName, byte[] regionStartKey, boolean writeFlushWalMarker) FlushTableFuture(HBaseAdmin admin, TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.FlushTableResponse resp) (package private)FlushTableProcedureBiConsumer(TableName tableName) HRegionLocator(TableName tableName, ConnectionImplementation connection) LegacyFlushFuture(HBaseAdmin admin, TableName tableName, Map<String, String> props) (package private)MergeTableRegionProcedureBiConsumer(TableName tableName) MergeTableRegionsFuture(HBaseAdmin admin, TableName tableName, Long procId) MergeTableRegionsFuture(HBaseAdmin admin, TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.MergeTableRegionsResponse response) Construct a table descriptor specifying a TableName objectprivateModifyableTableDescriptor(TableName name, Collection<ColumnFamilyDescriptor> families, Map<Bytes, Bytes> values) ModifyableTableDescriptor(TableName name, TableDescriptor desc) Deprecated.ModifyColumnFamilyFuture(HBaseAdmin admin, TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.ModifyColumnResponse response) (package private)ModifyColumnFamilyProcedureBiConsumer(TableName tableName) ModifyColumnFamilyStoreFileTrackerFuture(HBaseAdmin admin, TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.ModifyColumnStoreFileTrackerResponse response) (package private)ModifyTableFuture(HBaseAdmin admin, TableName tableName, Long procId) ModifyTableFuture(HBaseAdmin admin, TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.ModifyTableResponse response) (package private)ModifyTableProcedureBiConsumer(AsyncAdmin admin, TableName tableName) ModifyTablerStoreFileTrackerFuture(HBaseAdmin admin, TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.ModifyTableStoreFileTrackerResponse response) (package private)ModifyTableStoreFileTrackerProcedureBiConsumer(AsyncAdmin admin, TableName tableName) (package private)MultiServerCallable(ClusterConnection connection, TableName tableName, ServerName location, MultiAction multi, org.apache.hbase.thirdparty.com.google.protobuf.RpcController rpcController, int rpcTimeout, RetryingTimeTracker tracker, int priority, Map<String, byte[]> 
requestAttributes) (package private)MutableRegionInfo(long regionId, TableName tableName, int replicaId) Package private constructor used constructing MutableRegionInfo for the first meta regions(package private)MutableRegionInfo(TableName tableName, byte[] startKey, byte[] endKey, boolean split, long regionId, int replicaId, boolean offLine) NoncedRegionServerCallable(Connection connection, TableName tableName, byte[] row, HBaseRpcController rpcController, int priority, Map<String, byte[]> requestAttributes) RegionAdminServiceCallable(ClusterConnection connection, RpcControllerFactory rpcControllerFactory, HRegionLocation location, TableName tableName, byte[] row) RegionAdminServiceCallable(ClusterConnection connection, RpcControllerFactory rpcControllerFactory, HRegionLocation location, TableName tableName, byte[] row, int replicaId) RegionAdminServiceCallable(ClusterConnection connection, RpcControllerFactory rpcControllerFactory, TableName tableName, byte[] row) (package private)RegionCoprocessorRpcChannel(ClusterConnection conn, TableName table, byte[] row, Map<String, byte[]> requestAttributes) Constructor(package private)RegionCoprocessorRpcChannelImpl(AsyncConnectionImpl conn, TableName tableName, RegionInfo region, byte[] row, long rpcTimeoutNs, long operationTimeoutNs) privateRegionInfoBuilder(TableName tableName) RegionServerCallable(Connection connection, TableName tableName, byte[] row, org.apache.hbase.thirdparty.com.google.protobuf.RpcController rpcController, int priority, Map<String, byte[]> requestAttributes) RegionServerCallable(Connection connection, TableName tableName, byte[] row, org.apache.hbase.thirdparty.com.google.protobuf.RpcController rpcController, Map<String, byte[]> requestAttributes) RestoreSnapshotFuture(HBaseAdmin admin, org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotDescription snapshot, TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.RestoreSnapshotResponse response) RestoreSnapshotFuture(HBaseAdmin admin, TableName tableName, Long procId) ReversedClientScanner(org.apache.hadoop.conf.Configuration conf, Scan scan, Scan scanForMetrics, TableName tableName, ClusterConnection connection, RpcRetryingCallerFactory rpcFactory, RpcControllerFactory controllerFactory, ExecutorService pool, int scanReadRpcTimeout, int scannerTimeout, int primaryOperationTimeout, ConnectionConfiguration connectionConfiguration, Map<String, byte[]> requestAttributes) Create a new ReversibleClientScanner for the specified table Note that the passedScan's start row maybe changed.ReversedScannerCallable(ClusterConnection connection, TableName tableName, Scan scan, ScanMetrics scanMetrics, RpcControllerFactory rpcFactory, int replicaId, Map<String, byte[]> requestAttributes) RpcRetryingCallerWithReadReplicas(RpcControllerFactory rpcControllerFactory, TableName tableName, ClusterConnection cConnection, Get get, ExecutorService pool, int retries, int operationTimeout, int rpcTimeout, int timeBeforeReplicas, Map<String, byte[]> requestAttributes) ScannerCallable(ClusterConnection connection, TableName tableName, Scan scan, ScanMetrics scanMetrics, RpcControllerFactory rpcControllerFactory, int id, Map<String, byte[]> requestAttributes) ScannerCallableWithReplicas(TableName tableName, ClusterConnection cConnection, ScannerCallable baseCallable, ExecutorService pool, int timeBeforeReplicas, Scan scan, int retries, int readRpcTimeout, int scannerTimeout, boolean useScannerTimeoutForNextCalls, int caching, 
org.apache.hadoop.conf.Configuration conf, RpcRetryingCaller<Result[]> caller) SnapshotDescription(String name, TableName table) SnapshotDescription(String name, TableName table, SnapshotType type) SnapshotDescription(String name, TableName table, SnapshotType type, String owner) SnapshotDescription(String name, TableName table, SnapshotType type, String owner, long creationTime, int version) Deprecated.since 2.3.0 and will be removed in 4.0.0.SnapshotDescription(String name, TableName table, SnapshotType type, String owner, long creationTime, int version, Map<String, Object> snapshotProps) SnapshotDescription Parameterized ConstructorSnapshotDescription(String snapshotName, TableName tableName, SnapshotType type, Map<String, Object> snapshotProps) SnapshotDescription Parameterized Constructor(package private)SnapshotProcedureBiConsumer(TableName tableName) SplitTableRegionFuture(HBaseAdmin admin, TableName tableName, Long procId) SplitTableRegionFuture(HBaseAdmin admin, TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.SplitTableRegionResponse response) (package private)SplitTableRegionProcedureBiConsumer(TableName tableName) (package private)TableBuilderBase(TableName tableName, ConnectionConfiguration connConf) TableCache(TableName tableName) privateTableFuture(HBaseAdmin admin, TableName tableName, Long procId) (package private)TableProcedureBiConsumer(TableName tableName) TableState(TableName tableName, TableState.State state) Create instance of TableState.TruncateRegionFuture(HBaseAdmin admin, TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.TruncateRegionResponse response) (package private)TruncateRegionProcedureBiConsumer(TableName tableName) TruncateTableFuture(HBaseAdmin admin, TableName tableName, boolean preserveSplits, org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.TruncateTableResponse response) (package private)TruncateTableProcedureBiConsumer(TableName tableName) Constructor parameters in org.apache.hadoop.hbase.client with type arguments of type TableNameModifierConstructorDescriptionprivateNormalizeTableFilterParams(List<TableName> tableNames, String regex, String namespace)  - 
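The client-package listings above reduce to a handful of everyday entry points that all take a TableName: Connection.getTable, Connection.getBufferedMutator, Connection.getRegionLocator, and the Admin table-administration calls. The following is a minimal sketch of those calls using only public client API; the table "demo", its family "cf", and the row keys are placeholders, and the table is assumed to already exist.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.BufferedMutator;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.util.Bytes;

public class TableNameClientSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    TableName tn = TableName.valueOf("demo");   // placeholder table name
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // Table: single Gets/Puts against the named table.
      try (Table table = conn.getTable(tn)) {
        table.put(new Put(Bytes.toBytes("row1"))
            .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v")));
        Result r = table.get(new Get(Bytes.toBytes("row1")));
        System.out.println("cell: "
            + Bytes.toString(r.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q"))));
      }
      // BufferedMutator: client-side write buffering keyed by the same TableName.
      try (BufferedMutator mutator = conn.getBufferedMutator(tn)) {
        mutator.mutate(new Put(Bytes.toBytes("row2"))
            .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v2")));
      } // close() flushes any buffered mutations
      // RegionLocator: inspect which region and server host a row of the table.
      try (RegionLocator locator = conn.getRegionLocator(tn)) {
        System.out.println(locator.getRegionLocation(Bytes.toBytes("row1")));
      }
      // Admin: table-level metadata and state checks.
      try (Admin admin = conn.getAdmin()) {
        if (admin.tableExists(tn) && admin.isTableEnabled(tn)) {
          TableDescriptor td = admin.getDescriptor(tn);
          System.out.println("families: " + td.getColumnFamilyCount());
        }
      }
    }
  }
}

The AsyncConnection/AsyncAdmin entries listed above mirror the same shape, returning CompletableFuture instead of blocking.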
Uses of TableName in org.apache.hadoop.hbase.client.example
Fields in org.apache.hadoop.hbase.client.example declared as TableName
Modifier and Type / Field / Description
private static final TableName BufferedMutatorExample.TABLE
private final TableName MultiThreadedClientExample.ReadExampleCallable.tableName
private final TableName MultiThreadedClientExample.SingleWriteExampleCallable.tableName
private final TableName MultiThreadedClientExample.WriteExampleCallable.tableName
Methods in org.apache.hadoop.hbase.client.example with parameters of type TableName
Modifier and Type / Method / Description
void RefreshHFilesClient.refreshHFiles(TableName tableName)
private void MultiThreadedClientExample.warmUpConnectionCache(Connection connection, TableName tn)
Constructors in org.apache.hadoop.hbase.client.example with parameters of type TableName
Modifier / Constructor / Description
ReadExampleCallable(Connection connection, TableName tableName)
SingleWriteExampleCallable(Connection connection, TableName tableName)
WriteExampleCallable(Connection connection, TableName tableName)
 - 
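The bundled examples wire these same primitives into a thread pool; the Connection.getTable(TableName, ExecutorService) overload listed in the client package lets many workers share one Connection. Below is a rough sketch in that spirit, not a copy of the shipped examples; the table name, row keys, and pool size are placeholders.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ParallelReadSketch {
  public static void main(String[] args) throws Exception {
    TableName tn = TableName.valueOf("demo");   // placeholder table name
    ExecutorService pool = Executors.newFixedThreadPool(4);
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      List<Future<Boolean>> futures = new ArrayList<>();
      for (int i = 0; i < 100; i++) {
        final byte[] row = Bytes.toBytes("row-" + i);
        futures.add(pool.submit((Callable<Boolean>) () -> {
          // Each task borrows a lightweight Table backed by the shared Connection.
          try (Table table = conn.getTable(tn, pool)) {
            return table.exists(new Get(row));
          }
        }));
      }
      for (Future<Boolean> f : futures) {
        f.get(); // propagate any exception from the workers
      }
    } finally {
      pool.shutdown();
    }
  }
}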
Uses of TableName in org.apache.hadoop.hbase.client.locking
Methods in org.apache.hadoop.hbase.client.locking with parameters of type TableName
Modifier and Type / Method / Description
static org.apache.hadoop.hbase.shaded.protobuf.generated.LockServiceProtos.LockRequest LockServiceClient.buildLockRequest(org.apache.hadoop.hbase.shaded.protobuf.generated.LockServiceProtos.LockType type, String namespace, TableName tableName, List<RegionInfo> regionInfos, String description, long nonceGroup, long nonce)
LockServiceClient.tableLock(TableName tableName, boolean exclusive, String description, Abortable abort): Create a new EntityLock object to acquire an exclusive or shared lock on a table.
 - 
Uses of TableName in org.apache.hadoop.hbase.client.replication
Fields in org.apache.hadoop.hbase.client.replication declared as TableName
Methods in org.apache.hadoop.hbase.client.replication that return TableName
Methods in org.apache.hadoop.hbase.client.replication that return types with arguments of type TableName
Modifier and Type / Method / Description
ReplicationPeerConfigUtil.convert2Map(org.apache.hadoop.hbase.shaded.protobuf.generated.ReplicationProtos.TableCF[] tableCFs): Convert tableCFs Object to Map.
ReplicationAdmin.copyTableCFs(Map<TableName, ? extends Collection<String>> tableCfs): Deprecated.
ReplicationPeerConfigUtil.copyTableCFsMap(Map<TableName, List<String>> preTableCfs)
ReplicationPeerConfigUtil.mergeTableCFs(Map<TableName, List<String>> preTableCfs, Map<TableName, List<String>> tableCfs)
ReplicationAdmin.parseTableCFsFromConfig(String tableCFsConfig): Deprecated. As of release 2.0.0; will be removed in 3.0.0.
ReplicationPeerConfigUtil.parseTableCFsFromConfig(String tableCFsConfig): Convert tableCFs string into Map.
Methods in org.apache.hadoop.hbase.client.replication with parameters of type TableName
Modifier and Type / Method / Description
void ReplicationAdmin.disableTableRep(TableName tableName): Deprecated. Use Admin.disableTableReplication(TableName) instead.
void ReplicationAdmin.enableTableRep(TableName tableName): Deprecated. Use Admin.enableTableReplication(TableName) instead.
Method parameters in org.apache.hadoop.hbase.client.replication with type arguments of type TableName
Modifier and Type / Method / Description
void ReplicationAdmin.addPeer(String id, ReplicationPeerConfig peerConfig, Map<TableName, ? extends Collection<String>> tableCfs): Deprecated. As of release 2.0.0; will be removed in 3.0.0. Use ReplicationAdmin.addPeer(String, ReplicationPeerConfig) instead.
static ReplicationPeerConfig ReplicationPeerConfigUtil.appendExcludeTableCFsToReplicationPeerConfig(Map<TableName, List<String>> excludeTableCfs, ReplicationPeerConfig peerConfig)
void ReplicationAdmin.appendPeerTableCFs(String id, Map<TableName, ? extends Collection<String>> tableCfs): Deprecated.
static ReplicationPeerConfig ReplicationPeerConfigUtil.appendTableCFsToReplicationPeerConfig(Map<TableName, List<String>> tableCfs, ReplicationPeerConfig peerConfig)
static org.apache.hadoop.hbase.shaded.protobuf.generated.ReplicationProtos.TableCF[] ReplicationPeerConfigUtil.convert(Map<TableName, ? extends Collection<String>> tableCfs): Convert map to TableCFs Object.
static String ReplicationPeerConfigUtil.convertToString(Map<TableName, ? extends Collection<String>> tableCfs)
ReplicationAdmin.copyTableCFs(Map<TableName, ? extends Collection<String>> tableCfs): Deprecated.
ReplicationPeerConfigUtil.copyTableCFsMap(Map<TableName, List<String>> preTableCfs)
ReplicationPeerConfigUtil.mergeTableCFs(Map<TableName, List<String>> preTableCfs, Map<TableName, List<String>> tableCfs)
static ReplicationPeerConfig ReplicationPeerConfigUtil.removeExcludeTableCFsFromReplicationPeerConfig(Map<TableName, List<String>> excludeTableCfs, ReplicationPeerConfig peerConfig, String id)
void ReplicationAdmin.removePeerTableCFs(String id, Map<TableName, ? extends Collection<String>> tableCfs): Deprecated.
static ReplicationPeerConfig ReplicationPeerConfigUtil.removeTableCFsFromReplicationPeerConfig(Map<TableName, List<String>> tableCfs, ReplicationPeerConfig peerConfig, String id)
void ReplicationAdmin.setPeerTableCFs(String id, Map<TableName, ? extends Collection<String>> tableCfs): Deprecated.
Constructors in org.apache.hadoop.hbase.client.replication with parameters of type TableName
 - 
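ReplicationPeerConfigUtil is the non-deprecated home for the table-CFs conversions above. A small hedged sketch follows: it assumes the usual shell-style config string (entries separated by ';', optional column families after ':'), and the peer cluster key and table names are made up. The builder calls on ReplicationPeerConfig are standard 2.x API, but verify them against your release.

import java.util.List;
import java.util.Map;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.replication.ReplicationPeerConfigUtil;
import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;

public class TableCfsSketch {
  public static void main(String[] args) {
    // Parse "table[:cf1,cf2];table..." into a TableName-keyed map.
    Map<TableName, List<String>> tableCfs =
        ReplicationPeerConfigUtil.parseTableCFsFromConfig("orders:cf1,cf2;events");
    tableCfs.forEach((tn, cfs) ->
        System.out.println(tn.getNameAsString() + " -> " + cfs));

    // Attach the map to a peer config (the cluster key below is a placeholder).
    ReplicationPeerConfig base = ReplicationPeerConfig.newBuilder()
        .setClusterKey("zk1,zk2,zk3:2181:/hbase")
        .setReplicateAllUserTables(false)
        .build();
    ReplicationPeerConfig withCfs =
        ReplicationPeerConfigUtil.appendTableCFsToReplicationPeerConfig(tableCfs, base);
    System.out.println(withCfs.getTableCFsMap());
  }
}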
Uses of TableName in org.apache.hadoop.hbase.client.trace
Fields in org.apache.hadoop.hbase.client.trace declared as TableName
Methods in org.apache.hadoop.hbase.client.trace with parameters of type TableName
Modifier and Type / Method / Description
(package private) static void TableSpanBuilder.populateTableNameAttributes(Map<io.opentelemetry.api.common.AttributeKey<?>, Object> attributes, TableName tableName): Static utility method that performs the primary logic of this builder.
TableOperationSpanBuilder.setTableName(TableName tableName)
TableSpanBuilder.setTableName(TableName tableName)
 - 
Uses of TableName in org.apache.hadoop.hbase.coprocessor
Methods in org.apache.hadoop.hbase.coprocessor with parameters of type TableNameModifier and TypeMethodDescriptiondefault voidMasterObserver.postCompletedDeleteTableAction(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called afterHMasterdeletes a table.default voidMasterObserver.postCompletedDisableTableAction(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called after the disableTable operation has been requested.default voidMasterObserver.postCompletedEnableTableAction(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called after the enableTable operation has been requested.default voidMasterObserver.postCompletedModifyTableAction(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, TableDescriptor currentDescriptor) Deprecated.Since 2.1.default voidMasterObserver.postCompletedModifyTableAction(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, TableDescriptor oldDescriptor, TableDescriptor currentDescriptor) Called after to modifying a table's properties.default voidMasterObserver.postCompletedTruncateTableAction(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called afterHMastertruncates a table.default voidMasterObserver.postDeleteTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called after the deleteTable operation has been requested.default voidMasterObserver.postDisableTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called after the disableTable operation has been requested.default voidMasterObserver.postEnableTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called after the enableTable operation has been requested.default voidMasterObserver.postGetUserPermissions(ObserverContext<MasterCoprocessorEnvironment> ctx, String userName, String namespace, TableName tableName, byte[] family, byte[] qualifier) Called after getting user permissions.default voidMasterObserver.postModifyColumnFamilyStoreFileTracker(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, byte[] family, String dstSFT) Called after modifying a family store file tracker.default voidMasterObserver.postModifyTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, TableDescriptor currentDescriptor) Deprecated.Since 2.1.default voidMasterObserver.postModifyTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, TableDescriptor oldDescriptor, TableDescriptor currentDescriptor) Called after the modifyTable operation has been requested.default voidMasterObserver.postModifyTableStoreFileTracker(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, String dstSFT) Called after modifying a table's store file tracker.default voidMasterObserver.postRequestLock(ObserverContext<MasterCoprocessorEnvironment> ctx, String namespace, TableName tableName, RegionInfo[] regionInfos, String description) Called after new LockProcedure is queued.default voidMasterObserver.postSetTableQuota(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, GlobalQuotaSettings quotas) Called after the quota for the table is stored.default voidMasterObserver.postSetUserQuota(ObserverContext<MasterCoprocessorEnvironment> ctx, String userName, TableName tableName, GlobalQuotaSettings quotas) Called after the quota for the user on the specified table is stored.default 
voidMasterObserver.postTableFlush(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called after the table memstore is flushed to disk.default voidMasterObserver.postTruncateTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called after the truncateTable operation has been requested.default voidMasterObserver.preDeleteTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called beforeHMasterdeletes a table.default voidMasterObserver.preDeleteTableAction(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called beforeHMasterdeletes a table.default voidMasterObserver.preDisableTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called prior to disabling a table.default voidMasterObserver.preDisableTableAction(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called prior to disabling a table.default voidMasterObserver.preEnableTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called prior to enabling a table.default voidMasterObserver.preEnableTableAction(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called prior to enabling a table.default voidMasterObserver.preGetUserPermissions(ObserverContext<MasterCoprocessorEnvironment> ctx, String userName, String namespace, TableName tableName, byte[] family, byte[] qualifier) Called before getting user permissions.default voidMasterObserver.preLockHeartbeat(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tn, String description) Called before heartbeat to a lock.default StringMasterObserver.preModifyColumnFamilyStoreFileTracker(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, byte[] family, String dstSFT) Called prior to modifying a family's store file tracker.default voidMasterObserver.preModifyTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, TableDescriptor newDescriptor) Deprecated.Since 2.1.default TableDescriptorMasterObserver.preModifyTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, TableDescriptor currentDescriptor, TableDescriptor newDescriptor) Called prior to modifying a table's properties.default voidMasterObserver.preModifyTableAction(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, TableDescriptor newDescriptor) Deprecated.Since 2.1.default voidMasterObserver.preModifyTableAction(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, TableDescriptor currentDescriptor, TableDescriptor newDescriptor) Called prior to modifying a table's properties.default StringMasterObserver.preModifyTableStoreFileTracker(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, String dstSFT) Called prior to modifying a table's store file tracker.default voidMasterObserver.preRequestLock(ObserverContext<MasterCoprocessorEnvironment> ctx, String namespace, TableName tableName, RegionInfo[] regionInfos, String description) Called before new LockProcedure is queued.default voidMasterObserver.preSetTableQuota(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, GlobalQuotaSettings quotas) Called before the quota for the table is stored.default voidMasterObserver.preSetUserQuota(ObserverContext<MasterCoprocessorEnvironment> ctx, String userName, TableName tableName, GlobalQuotaSettings quotas) Called before the quota for the user on the specified table is stored.default 
voidMasterObserver.preSplitRegion(ObserverContext<MasterCoprocessorEnvironment> c, TableName tableName, byte[] splitRow) Called before the split region procedure is called.default voidMasterObserver.preSplitRegionAction(ObserverContext<MasterCoprocessorEnvironment> c, TableName tableName, byte[] splitRow) Called before the region is split.default voidMasterObserver.preTableFlush(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called before the table memstore is flushed to disk.default voidMasterObserver.preTruncateTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called beforeHMastertruncates a table.default voidMasterObserver.preTruncateTableAction(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) Called beforeHMastertruncates a table.Method parameters in org.apache.hadoop.hbase.coprocessor with type arguments of type TableNameModifier and TypeMethodDescriptiondefault voidMasterObserver.postGetTableDescriptors(ObserverContext<MasterCoprocessorEnvironment> ctx, List<TableName> tableNamesList, List<TableDescriptor> descriptors, String regex) Called after a getTableDescriptors request has been processed.default voidMasterObserver.postMoveTables(ObserverContext<MasterCoprocessorEnvironment> ctx, Set<TableName> tables, String targetGroup) Called after servers are moved to target region server groupdefault voidMasterObserver.preGetTableDescriptors(ObserverContext<MasterCoprocessorEnvironment> ctx, List<TableName> tableNamesList, List<TableDescriptor> descriptors, String regex) Called before a getTableDescriptors request has been processed.default voidMasterObserver.preMoveTables(ObserverContext<MasterCoprocessorEnvironment> ctx, Set<TableName> tables, String targetGroup) Called before tables are moved to target region server group - 
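Every hook above receives the affected TableName, so an observer can scope its logic to particular tables. A minimal sketch of a MasterObserver that vetoes disabling of one table and logs table deletions is shown below; the class name and the protected table "critical" are illustrative, not part of HBase.

import java.io.IOException;
import java.util.Optional;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.coprocessor.MasterCoprocessor;
import org.apache.hadoop.hbase.coprocessor.MasterCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.MasterObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;

public class ProtectedTableObserver implements MasterCoprocessor, MasterObserver {

  // Hypothetical table this observer protects.
  private static final TableName PROTECTED = TableName.valueOf("critical");

  @Override
  public Optional<MasterObserver> getMasterObserver() {
    return Optional.of(this);
  }

  @Override
  public void preDisableTable(ObserverContext<MasterCoprocessorEnvironment> ctx,
      TableName tableName) throws IOException {
    // Throwing here aborts the disableTable request before the master acts on it.
    if (PROTECTED.equals(tableName)) {
      throw new IOException("Disabling " + tableName + " is not allowed");
    }
  }

  @Override
  public void postDeleteTable(ObserverContext<MasterCoprocessorEnvironment> ctx,
      TableName tableName) throws IOException {
    // Runs after the master has processed the delete request for any table.
    System.out.println("table deleted: " + tableName.getNameAsString());
  }
}

Such a class is typically registered on the master through the hbase.coprocessor.master.classes configuration property.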
Uses of TableName in org.apache.hadoop.hbase.coprocessor.example
Methods in org.apache.hadoop.hbase.coprocessor.example with parameters of type TableName
Modifier and Type / Method / Description
void ExampleMasterObserverWithMetrics.preDisableTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName)
 - 
Uses of TableName in org.apache.hadoop.hbase.favored
Methods in org.apache.hadoop.hbase.favored with parameters of type TableName
Modifier and Type / Method / Description
protected List<RegionPlan> FavoredNodeLoadBalancer.balanceTable(TableName tableName, Map<ServerName, List<RegionInfo>> loadOfOneTable)
 - 
Uses of TableName in org.apache.hadoop.hbase.fs
Methods in org.apache.hadoop.hbase.fs with parameters of type TableName
Modifier and Type / Method / Description
static void ErasureCodingUtils.setPolicy(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path rootDir, TableName tableName, String policy): Sets the EC policy on the table directory for the specified table.
static void ErasureCodingUtils.unsetPolicy(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path rootDir, TableName tableName): Unsets any EC policy specified on the path.
 - 
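ErasureCodingUtils is an internal helper, normally driven by a table descriptor that requests an erasure coding policy rather than called directly. A hedged sketch of the two static calls, using only the signatures listed above; the policy name and the table "bulkdata" are placeholders, and hbase.rootdir is assumed to be set in the configuration.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.fs.ErasureCodingUtils;

public class EcPolicySketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Path rootDir = new Path(conf.get(HConstants.HBASE_DIR)); // hbase.rootdir
    FileSystem fs = rootDir.getFileSystem(conf);
    TableName tn = TableName.valueOf("bulkdata");            // placeholder table

    // Apply an HDFS erasure coding policy to the table's directory under the root dir.
    ErasureCodingUtils.setPolicy(fs, rootDir, tn, "RS-6-3-1024k");

    // Later, fall back to plain replication for that directory.
    ErasureCodingUtils.unsetPolicy(fs, rootDir, tn);
  }
}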
Uses of TableName in org.apache.hadoop.hbase.io
Methods in org.apache.hadoop.hbase.io that return TableName
Modifier and Type / Method / Description
static TableName HFileLink.getReferencedTableName(String fileName): Get the Table name of the referenced link.
Methods in org.apache.hadoop.hbase.io that return types with arguments of type TableName
Methods in org.apache.hadoop.hbase.io with parameters of type TableName
Modifier and Type / Method / Description
static HFileLink HFileLink.build(org.apache.hadoop.conf.Configuration conf, TableName table, String region, String family, String hfile): Create an HFileLink instance from table/region/family/hfile location.
static String HFileLink.create(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dstFamilyPath, String familyName, String dstTableName, String dstRegionName, TableName linkedTable, String linkedRegion, String hfileName, boolean createBackRef): Create a new HFileLink.
static String HFileLink.create(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dstFamilyPath, TableName linkedTable, String linkedRegion, String hfileName): Create a new HFileLink.
static String HFileLink.create(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dstFamilyPath, TableName linkedTable, String linkedRegion, String hfileName, boolean createBackRef): Create a new HFileLink.
static String HFileLink.createHFileLinkName(TableName tableName, String regionName, String hfileName): Create a new HFileLink name.
static org.apache.hadoop.fs.Path HFileLink.createPath(TableName table, String region, String family, String hfile): Create an HFileLink relative path for the table/region/family/hfile location.
 - 
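An HFileLink name encodes the owning table, region, and hfile so that snapshot and clone code can point at a store file living under another table's directory. A small sketch of the pure helpers above; it is internal API, and the encoded region name and store file name below are made up.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.io.HFileLink;

public class HFileLinkSketch {
  public static void main(String[] args) {
    TableName table = TableName.valueOf("ns1", "source");       // placeholder table
    String region = "5a3b1c2d4e5f60718293a4b5c6d7e8f9";          // made-up encoded region name
    String hfile = "abcdef0123456789abcdef0123456789";           // made-up store file name

    // Name a link file would carry inside the referring region's family directory.
    String linkName = HFileLink.createHFileLinkName(table, region, hfile);

    // Relative path of the referenced file: table/region/family/hfile.
    Path linkPath = HFileLink.createPath(table, region, "cf", hfile);

    System.out.println(linkName);
    System.out.println(linkPath);
    // And back again: recover the table a link name points at.
    System.out.println(HFileLink.getReferencedTableName(linkName));
  }
}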
Uses of TableName in org.apache.hadoop.hbase.io.hfile
Methods in org.apache.hadoop.hbase.io.hfile with parameters of type TableName - 
Uses of TableName in org.apache.hadoop.hbase.ipc
Fields in org.apache.hadoop.hbase.ipc declared as TableName
Methods in org.apache.hadoop.hbase.ipc that return TableName
Modifier and Type / Method / Description
DelegatingHBaseRpcController.getTableName()
default TableName HBaseRpcController.getTableName(): Returns Region's table name or null if not available or pertinent.
HBaseRpcControllerImpl.getTableName()
Methods in org.apache.hadoop.hbase.ipc with parameters of type TableName
Modifier and Type / Method / Description
void DelegatingHBaseRpcController.setPriority(TableName tn)
void HBaseRpcController.setPriority(TableName tn): Set the priority for this operation.
void HBaseRpcControllerImpl.setPriority(TableName tn)
void DelegatingHBaseRpcController.setTableName(TableName tableName)
default void HBaseRpcController.setTableName(TableName tableName): Sets Region's table name.
void HBaseRpcControllerImpl.setTableName(TableName tableName)
 - 
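These controller hooks are how the client tags an outgoing RPC with the table it targets, which in turn influences request priority (system tables such as hbase:meta are prioritized over user tables). A hedged sketch via the factory that sits in front of the listed implementations; all of this is internal API and the table name is a placeholder.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.ipc.HBaseRpcController;
import org.apache.hadoop.hbase.ipc.RpcControllerFactory;

public class RpcControllerSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    HBaseRpcController controller = RpcControllerFactory.instantiate(conf).newController();

    TableName tn = TableName.valueOf("demo");   // placeholder table
    controller.setTableName(tn);                // record which table the call is for
    controller.setPriority(tn);                 // derive a priority from the table

    System.out.println(controller.getTableName() + " priority=" + controller.getPriority());
  }
}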
Uses of TableName in org.apache.hadoop.hbase.mapred
Fields in org.apache.hadoop.hbase.mapred declared as TableName
Methods in org.apache.hadoop.hbase.mapred that return TableName
Methods in org.apache.hadoop.hbase.mapred with parameters of type TableName
Modifier and Type / Method / Description
private static int TableMapReduceUtil.getRegionCount(org.apache.hadoop.conf.Configuration conf, TableName tableName)
protected void TableInputFormatBase.initializeTable(Connection connection, TableName tableName): Allows subclasses to initialize the table information.
Constructors in org.apache.hadoop.hbase.mapred with parameters of type TableName
Modifier / Constructor / Description
TableSplit(TableName tableName, byte[] startRow, byte[] endRow, String location): Constructor.
 - 
Uses of TableName in org.apache.hadoop.hbase.mapreduce
Fields in org.apache.hadoop.hbase.mapreduce declared as TableName
Fields in org.apache.hadoop.hbase.mapreduce with type parameters of type TableName
Methods in org.apache.hadoop.hbase.mapreduce that return TableName
Methods in org.apache.hadoop.hbase.mapreduce that return types with arguments of type TableName
Modifier and Type / Method / Description
ExportUtils.getArgumentsFromCommandLine(org.apache.hadoop.conf.Configuration conf, String[] args)
WALPlayer.getTableNameList(String[] tables)
Methods in org.apache.hadoop.hbase.mapreduce with parameters of type TableName
Modifier and Type / Method / Description
static void TableInputFormat.configureSplitTable(org.apache.hadoop.mapreduce.Job job, TableName tableName): Sets split table in map-reduce job.
private static void ImportTsv.createTable(Admin admin, TableName tableName, String[] columns)
private static int TableMapReduceUtil.getRegionCount(org.apache.hadoop.conf.Configuration conf, TableName tableName)
protected void TableInputFormatBase.initializeTable(Connection connection, TableName tableName): Allows subclasses to initialize the table information.
static void TableMapReduceUtil.initTableMapperJob(TableName table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job): Use this before submitting a TableMap job.
LoadIncrementalHFiles.run(String dirPath, Map<byte[], List<org.apache.hadoop.fs.Path>> map, TableName tableName): Deprecated.
Constructors in org.apache.hadoop.hbase.mapreduce with parameters of type TableName
Modifier / Constructor / Description
TableSplit(TableName tableName, byte[] startRow, byte[] endRow, String location): Creates a new instance without a scanner.
TableSplit(TableName tableName, byte[] startRow, byte[] endRow, String location, long length): Creates a new instance without a scanner.
TableSplit(TableName tableName, Scan scan, byte[] startRow, byte[] endRow, String location): Creates a new instance while assigning all variables.
TableSplit(TableName tableName, Scan scan, byte[] startRow, byte[] endRow, String location, long length): Creates a new instance while assigning all variables.
TableSplit(TableName tableName, Scan scan, byte[] startRow, byte[] endRow, String location, String encodedRegionName, long length): Creates a new instance while assigning all variables.
 - 
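The TableName overload of initTableMapperJob listed above is the usual way to wire a Scan over one table into a MapReduce job. A sketch with a trivial row-counting mapper follows; the job name, the table "demo", and the counter group are placeholders.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class RowCountJobSketch {

  /** Counts rows by bumping a counter; emits no output records. */
  public static class RowCountMapper extends TableMapper<ImmutableBytesWritable, Result> {
    @Override
    protected void map(ImmutableBytesWritable rowKey, Result value, Context context)
        throws IOException, InterruptedException {
      context.getCounter("sketch", "rows").increment(1);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "rowcount-sketch");
    job.setJarByClass(RowCountJobSketch.class);

    Scan scan = new Scan();
    scan.setCaching(500);          // larger scanner caching suits full-table batch scans
    scan.setCacheBlocks(false);    // avoid polluting the block cache from a batch job

    // Configure the map phase to read the named table with the given Scan.
    TableMapReduceUtil.initTableMapperJob(
        TableName.valueOf("demo"),               // placeholder table
        scan, RowCountMapper.class,
        ImmutableBytesWritable.class, Result.class, job);

    job.setNumReduceTasks(0);
    job.setOutputFormatClass(NullOutputFormat.class);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}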
Uses of TableName in org.apache.hadoop.hbase.master
Fields in org.apache.hadoop.hbase.master declared as TableNameFields in org.apache.hadoop.hbase.master with type parameters of type TableNameModifier and TypeFieldDescriptionSnapshotOfRegionAssignmentFromMeta.disabledTablesprivate Map<TableName,AtomicInteger> HMaster.mobCompactionStatesprivate final ConcurrentMap<TableName,TableState.State> TableStateManager.tableName2Stateprivate final Map<TableName,List<RegionInfo>> SnapshotOfRegionAssignmentFromMeta.tableToRegionMapthe table name to region mapRegionPlacementMaintainer.targetTableSetprivate final IdReadWriteLock<TableName>TableStateManager.tnLockMethods in org.apache.hadoop.hbase.master that return types with arguments of type TableNameModifier and TypeMethodDescriptionRegionPlacementMaintainer.getRegionsMovement(FavoredNodesPlan newPlan) Return how many regions will move per table since their primary RS will changeSnapshotOfRegionAssignmentFromMeta.getTableSet()Get the table setTableStateManager.getTablesInStates(TableState.State... states) Return all tables in given states.SnapshotOfRegionAssignmentFromMeta.getTableToRegionMap()Get regions for tablesRegionsRecoveryChore.getTableToRegionsByRefCount(Map<ServerName, ServerMetrics> serverMetricsMap) HMaster.listTableNames(String namespace, String regex, boolean includeSysTables) Returns the list of table names that match the specified requestHMaster.listTableNamesByNamespace(String name) MasterServices.listTableNamesByNamespace(String name) Get list of table names by namespaceMethods in org.apache.hadoop.hbase.master with parameters of type TableNameModifier and TypeMethodDescriptionlongHMaster.addColumn(TableName tableName, ColumnFamilyDescriptor column, long nonceGroup, long nonce) longMasterServices.addColumn(TableName tableName, ColumnFamilyDescriptor column, long nonceGroup, long nonce) Add a new column to an existing tableprivate voidHMaster.checkTableExists(TableName tableName) voidHMaster.checkTableModifiable(TableName tableName) voidMasterServices.checkTableModifiable(TableName tableName) Check table is modifiable; i.e.longHMaster.deleteColumn(TableName tableName, byte[] columnName, long nonceGroup, long nonce) longMasterServices.deleteColumn(TableName tableName, byte[] columnName, long nonceGroup, long nonce) Delete a column from an existing tablelongHMaster.deleteTable(TableName tableName, long nonceGroup, long nonce) longMasterServices.deleteTable(TableName tableName, long nonceGroup, long nonce) Delete a tableprotected voidTableStateManager.deleteZooKeeper(TableName tableName) Deprecated.Since 2.0.0.longHMaster.disableTable(TableName tableName, long nonceGroup, long nonce) longMasterServices.disableTable(TableName tableName, long nonceGroup, long nonce) Disable an existing tablelongHMaster.enableTable(TableName tableName, long nonceGroup, long nonce) longMasterServices.enableTable(TableName tableName, long nonceGroup, long nonce) Enable an existing tablevoidAssignmentVerificationReport.fillUp(TableName tableName, SnapshotOfRegionAssignmentFromMeta snapshot, Map<String, Map<String, Float>> regionLocalityMap) voidAssignmentVerificationReport.fillUpDispersion(TableName tableName, SnapshotOfRegionAssignmentFromMeta snapshot, FavoredNodesPlan newPlan) Use this to project the dispersion scoreslongHMaster.flushTable(TableName tableName, List<byte[]> columnFamilies, long nonceGroup, long nonce) longMasterServices.flushTable(TableName tableName, List<byte[]> columnFamilies, long nonceGroup, long nonce) Flush an existing tableprivate 
voidRegionPlacementMaintainer.genAssignmentPlan(TableName tableName, SnapshotOfRegionAssignmentFromMeta assignmentSnapshot, Map<String, Map<String, Float>> regionLocalityMap, FavoredNodesPlan plan, boolean munkresForSecondaryAndTertiary) Generate the assignment plan for the existing tableHMaster.getCompactionState(TableName tableName) Get the compaction state of the tablelongHMaster.getLastMajorCompactionTimestamp(TableName table) longMasterServices.getLastMajorCompactionTimestamp(TableName table) org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.GetRegionInfoResponse.CompactionStateHMaster.getMobCompactionState(TableName tableName) Gets the mob file compaction state for a specific table.TableStateManager.getTableState(TableName tableName) private static booleanHMaster.isCatalogTable(TableName tableName) booleanTableStateManager.isTablePresent(TableName tableName) booleanTableStateManager.isTableState(TableName tableName, TableState.State... states) protected voidMirroringTableStateManager.metaStateDeleted(TableName tableName) Deprecated.protected voidTableStateManager.metaStateDeleted(TableName tableName) protected voidMirroringTableStateManager.metaStateUpdated(TableName tableName, TableState.State newState) Deprecated.protected voidTableStateManager.metaStateUpdated(TableName tableName, TableState.State newState) longHMaster.modifyColumn(TableName tableName, ColumnFamilyDescriptor descriptor, long nonceGroup, long nonce) longMasterServices.modifyColumn(TableName tableName, ColumnFamilyDescriptor descriptor, long nonceGroup, long nonce) Modify the column descriptor of an existing column in an existing tablelongHMaster.modifyColumnStoreFileTracker(TableName tableName, byte[] family, String dstSFT, long nonceGroup, long nonce) longMasterServices.modifyColumnStoreFileTracker(TableName tableName, byte[] family, String dstSFT, long nonceGroup, long nonce) Modify the store file tracker of an existing column in an existing tablelongHMaster.modifyTable(TableName tableName, TableDescriptor newDescriptor, long nonceGroup, long nonce, boolean reopenRegions) private longHMaster.modifyTable(TableName tableName, HMaster.TableDescriptorGetter newDescriptorGetter, long nonceGroup, long nonce, boolean shouldCheckDescriptor) private longHMaster.modifyTable(TableName tableName, HMaster.TableDescriptorGetter newDescriptorGetter, long nonceGroup, long nonce, boolean shouldCheckDescriptor, boolean reopenRegions) default longMasterServices.modifyTable(TableName tableName, TableDescriptor descriptor, long nonceGroup, long nonce) Modify the descriptor of an existing tablelongMasterServices.modifyTable(TableName tableName, TableDescriptor descriptor, long nonceGroup, long nonce, boolean reopenRegions) Modify the descriptor of an existing tablelongHMaster.modifyTableStoreFileTracker(TableName tableName, String dstSFT, long nonceGroup, long nonce) longMasterServices.modifyTableStoreFileTracker(TableName tableName, String dstSFT, long nonceGroup, long nonce) Modify the store file tracker of an existing tablevoidMasterCoprocessorHost.postCompletedDeleteTableAction(TableName tableName, User user) voidMasterCoprocessorHost.postCompletedDisableTableAction(TableName tableName, User user) voidMasterCoprocessorHost.postCompletedEnableTableAction(TableName tableName, User user) voidMasterCoprocessorHost.postCompletedModifyTableAction(TableName tableName, TableDescriptor oldDescriptor, TableDescriptor currentDescriptor, User user) voidMasterCoprocessorHost.postCompletedTruncateTableAction(TableName tableName, 
User user) voidMasterCoprocessorHost.postDeleteTable(TableName tableName) voidMasterCoprocessorHost.postDisableTable(TableName tableName) voidMasterCoprocessorHost.postEnableTable(TableName tableName) voidMasterCoprocessorHost.postGetUserPermissions(String userName, String namespace, TableName tableName, byte[] family, byte[] qualifier) voidMasterCoprocessorHost.postModifyColumnFamilyStoreFileTracker(TableName tableName, byte[] family, String dstSFT) voidMasterCoprocessorHost.postModifyTable(TableName tableName, TableDescriptor oldDescriptor, TableDescriptor currentDescriptor) voidMasterCoprocessorHost.postModifyTableStoreFileTracker(TableName tableName, String dstSFT) voidMasterCoprocessorHost.postRequestLock(String namespace, TableName tableName, RegionInfo[] regionInfos, LockType type, String description) voidMasterCoprocessorHost.postSetTableQuota(TableName table, GlobalQuotaSettings quotas) voidMasterCoprocessorHost.postSetUserQuota(String user, TableName table, GlobalQuotaSettings quotas) voidMasterCoprocessorHost.postTableFlush(TableName tableName) voidMasterCoprocessorHost.postTruncateTable(TableName tableName) voidMasterCoprocessorHost.preDeleteTable(TableName tableName) voidMasterCoprocessorHost.preDeleteTableAction(TableName tableName, User user) voidMasterCoprocessorHost.preDisableTable(TableName tableName) voidMasterCoprocessorHost.preDisableTableAction(TableName tableName, User user) voidMasterCoprocessorHost.preEnableTable(TableName tableName) voidMasterCoprocessorHost.preEnableTableAction(TableName tableName, User user) voidMasterCoprocessorHost.preGetUserPermissions(String userName, String namespace, TableName tableName, byte[] family, byte[] qualifier) MasterCoprocessorHost.preModifyColumnFamilyStoreFileTracker(TableName tableName, byte[] family, String dstSFT) MasterCoprocessorHost.preModifyTable(TableName tableName, TableDescriptor currentDescriptor, TableDescriptor newDescriptor) voidMasterCoprocessorHost.preModifyTableAction(TableName tableName, TableDescriptor currentDescriptor, TableDescriptor newDescriptor, User user) MasterCoprocessorHost.preModifyTableStoreFileTracker(TableName tableName, String dstSFT) voidMasterCoprocessorHost.preRequestLock(String namespace, TableName tableName, RegionInfo[] regionInfos, LockType type, String description) voidMasterCoprocessorHost.preSetTableQuota(TableName table, GlobalQuotaSettings quotas) voidMasterCoprocessorHost.preSetUserQuota(String user, TableName table, GlobalQuotaSettings quotas) voidMasterCoprocessorHost.preSplitRegion(TableName tableName, byte[] splitRow) Invoked just before calling the split region procedurevoidMasterCoprocessorHost.preSplitRegionAction(TableName tableName, byte[] splitRow, User user) Invoked just before a splitvoidMasterCoprocessorHost.preTableFlush(TableName tableName) voidMasterCoprocessorHost.preTruncateTable(TableName tableName) voidMasterCoprocessorHost.preTruncateTableAction(TableName tableName, User user) voidRegionPlacementMaintainer.printDispersionScores(TableName table, SnapshotOfRegionAssignmentFromMeta snapshot, int numRegions, FavoredNodesPlan newPlan, boolean simplePrint) private TableStateTableStateManager.readMetaState(TableName tableName) (package private) longHMaster.reopenRegions(TableName tableName, List<byte[]> regionNames, long nonceGroup, long nonce) Reopen regions provided in the argumentvoidHMaster.reportMobCompactionEnd(TableName tableName) voidHMaster.reportMobCompactionStart(TableName tableName) voidTableStateManager.setDeletedTable(TableName tableName) 
voidTableStateManager.setTableState(TableName tableName, TableState.State newState) Set table state to provided.longHMaster.truncateTable(TableName tableName, boolean preserveSplits, long nonceGroup, long nonce) longMasterServices.truncateTable(TableName tableName, boolean preserveSplits, long nonceGroup, long nonce) Truncate a tableprivate voidTableStateManager.updateMetaState(TableName tableName, TableState.State newState) Method parameters in org.apache.hadoop.hbase.master with type arguments of type TableNameModifier and TypeMethodDescriptionLoadBalancer.balanceCluster(Map<TableName, Map<ServerName, List<RegionInfo>>> loadOfAllTable) Perform the major balance operation for cluster.voidRegionPlacementMaintainer.checkDifferencesWithOldPlan(Map<TableName, Integer> movesPerTable, Map<String, Map<String, Float>> regionLocalityMap, FavoredNodesPlan newPlan) Compares two plans and check whether the locality dropped or increased (prints the information as a string) also prints the baseline localityHMaster.listTableDescriptors(String namespace, String regex, List<TableName> tableNameList, boolean includeSysTables) Returns the list of table descriptors that match the specified requestvoidMasterCoprocessorHost.postGetTableDescriptors(List<TableName> tableNamesList, List<TableDescriptor> descriptors, String regex) voidMasterCoprocessorHost.postMoveTables(Set<TableName> tables, String targetGroup) voidMasterCoprocessorHost.preGetTableDescriptors(List<TableName> tableNamesList, List<TableDescriptor> descriptors, String regex) voidMasterCoprocessorHost.preMoveTables(Set<TableName> tables, String targetGroup) private voidRegionsRecoveryChore.prepareTableToReopenRegionsMap(Map<TableName, List<byte[]>> tableToReopenRegionsMap, byte[] regionName, int regionStoreRefCount) default voidLoadBalancer.updateBalancerLoadInfo(Map<TableName, Map<ServerName, List<RegionInfo>>> loadOfAllTable) In some scenarios, Balancer needs to update internal status or information according to the current tables loadConstructor parameters in org.apache.hadoop.hbase.master with type arguments of type TableNameModifierConstructorDescriptionSnapshotOfRegionAssignmentFromMeta(Connection connection, Set<TableName> disabledTables, boolean excludeOfflinedSplitParents)  - 
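Most of the master-side operations above identify the target table purely by its TableName, plus nonce parameters. A hedged sketch, assuming code that already runs inside the master (for example a test or a master coprocessor) and therefore holds a MasterServices reference; the helper class and method names are invented for illustration, and HConstants.NO_NONCE stands in for real client-supplied nonces.

import java.io.IOException;

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.master.MasterServices;

final class MasterTableOps {

  /** Schedules a DisableTableProcedure for every table in the given namespace. */
  static void disableAllInNamespace(MasterServices master, String namespace) throws IOException {
    // listTableNamesByNamespace(String) is shown in the listing above.
    for (TableName tableName : master.listTableNamesByNamespace(namespace)) {
      // disableTable(TableName, long, long) returns the id of the procedure doing the work.
      long procId = master.disableTable(tableName, HConstants.NO_NONCE, HConstants.NO_NONCE);
      System.out.println("disabling " + tableName + " via procedure " + procId);
    }
  }
}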
Uses of TableName in org.apache.hadoop.hbase.master.assignment
Methods in org.apache.hadoop.hbase.master.assignment that return TableNameModifier and TypeMethodDescriptionRegionStateNode.getTable()GCMergedRegionsProcedure.getTableName()Deprecated.GCMultipleMergedRegionsProcedure.getTableName()MergeTableRegionsProcedure.getTableName()MoveRegionProcedure.getTableName()Deprecated.RegionRemoteProcedureBase.getTableName()RegionTransitionProcedure.getTableName()Deprecated.Methods in org.apache.hadoop.hbase.master.assignment that return types with arguments of type TableNameModifier and TypeMethodDescriptionRegionStates.getAssignmentsForBalancer(TableStateManager tableStateManager, List<ServerName> onlineServers) This is an EXPENSIVE clone.Methods in org.apache.hadoop.hbase.master.assignment with parameters of type TableNameModifier and TypeMethodDescriptionAssignmentManager.createUnassignProceduresForClosingExcessRegionReplicas(TableName tableName, int newReplicaCount) Called by ModifyTableProcedures to unassign all the excess region replicas for a table.AssignmentManager.createUnassignProceduresForDisabling(TableName tableName) Called by DisableTableProcedure to unassign all the regions for a table.voidAssignmentManager.deleteTable(TableName tableName) Delete the region states.RegionStates.getRegionByStateOfTable(TableName tableName) RegionStates.getRegionsOfTable(TableName table) Returns Return online regions of table; does not include OFFLINE or SPLITTING regions.private List<RegionInfo>RegionStates.getRegionsOfTable(TableName table, Predicate<RegionStateNode> filter) Returns Return the regions of the table and filter them.RegionStates.getRegionsOfTableForDeleting(TableName table) Get the regions for deleting a table.RegionStates.getRegionsOfTableForEnabling(TableName table) Get the regions for enabling a table.RegionStates.getRegionsOfTableForReopen(TableName tableName) Get the regions to be reopened when modifying a table.private Stream<RegionStateNode>AssignmentManager.getRegionStateNodes(TableName tableName, boolean excludeOfflinedSplitParents) AssignmentManager.getRegionStatesCount(TableName tableName) Provide regions state count for given table.AssignmentManager.getReopenStatus(TableName tableName) Used by the client (via master) to identify if all regions have the schema updatesprivate ScanRegionStateStore.getScanForUpdateRegionReplicas(TableName tableName) private TableDescriptorRegionStateStore.getTableDescriptor(TableName tableName) AssignmentManager.getTableRegions(TableName tableName, boolean excludeOfflinedSplitParents) AssignmentManager.getTableRegionsAndLocations(TableName tableName, boolean excludeOfflinedSplitParents) (package private) ArrayList<RegionInfo>RegionStates.getTableRegionsInfo(TableName tableName) (package private) List<RegionStateNode>RegionStates.getTableRegionStateNodes(TableName tableName) (package private) ArrayList<RegionState>RegionStates.getTableRegionStates(TableName tableName) private booleanRegionStateStore.hasGlobalReplicationScope(TableName tableName) booleanRegionStates.hasTableRegionStates(TableName tableName) private booleanAssignmentManager.isTableDisabled(TableName tableName) private booleanRegionStates.isTableDisabled(TableStateManager tableStateManager, TableName tableName) private booleanAssignmentManager.isTableEnabled(TableName tableName) voidRegionStateStore.removeRegionReplicas(TableName tableName, int oldReplicaCount, int newReplicaCount)  - 
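A small sketch of the per-table accessors above, assuming an AssignmentManager reference obtained from master internals; the helper class is illustrative, and the RegionStatesCount getters used here are the standard client-side ones rather than anything defined in this package.

import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.RegionStatesCount;
import org.apache.hadoop.hbase.master.assignment.AssignmentManager;

final class TableAssignmentReport {

  /** Prints how many of the table's regions are currently open. */
  static void print(AssignmentManager am, TableName tableName) throws IOException {
    // getRegionStatesCount(TableName) is listed above and feeds ClusterMetrics.
    RegionStatesCount counts = am.getRegionStatesCount(tableName);
    System.out.println(tableName + ": open=" + counts.getOpenRegions()
        + " of " + counts.getTotalRegions() + " regions");
  }
}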
Uses of TableName in org.apache.hadoop.hbase.master.balancer
Methods in org.apache.hadoop.hbase.master.balancer with parameters of type TableName:
- protected abstract List<RegionPlan> BaseLoadBalancer.balanceTable(TableName tableName, Map<ServerName, List<RegionInfo>> loadOfOneTable) -- Perform the major balance operation for a table; all subclasses should override this method.
- protected List<RegionPlan> FavoredStochasticBalancer.balanceTable(TableName tableName, Map<ServerName, List<RegionInfo>> loadOfOneTable)
- protected List<RegionPlan> SimpleLoadBalancer.balanceTable(TableName tableName, Map<ServerName, List<RegionInfo>> loadOfOneTable) -- Generate a global load balancing plan according to the specified map of server information to the most loaded regions of each server.
- protected List<RegionPlan> StochasticLoadBalancer.balanceTable(TableName tableName, Map<ServerName, List<RegionInfo>> loadOfOneTable) -- Given the cluster state, this will try to approach an optimal balance.
- protected TableDescriptor RegionLocationFinder.getTableDescriptor(TableName tableName) -- Return the TableDescriptor for a given tableName.
- (package private) boolean StochasticLoadBalancer.needsBalance(TableName tableName, BalancerClusterState cluster)
- private void StochasticLoadBalancer.updateBalancerTableLoadInfo(TableName tableName, Map<ServerName, List<RegionInfo>> loadOfOneTable)
- private void StochasticLoadBalancer.updateStochasticCosts(TableName tableName, double overall, double[] subCosts) -- Update costs to JMX.
Method parameters in org.apache.hadoop.hbase.master.balancer with type arguments of type TableName:
- final List<RegionPlan> BaseLoadBalancer.balanceCluster(Map<TableName, Map<ServerName, List<RegionInfo>>> loadOfAllTable) -- Perform the major balance operation for the cluster; invokes BaseLoadBalancer.balanceTable(TableName, Map) to do the actual balancing.
- MaintenanceLoadBalancer.balanceCluster(Map<TableName, Map<ServerName, List<RegionInfo>>> loadOfAllTable)
- protected void BaseLoadBalancer.preBalanceCluster(Map<TableName, Map<ServerName, List<RegionInfo>>> loadOfAllTable) -- Called before actually executing balanceCluster.
- protected void SimpleLoadBalancer.preBalanceCluster(Map<TableName, Map<ServerName, List<RegionInfo>>> loadOfAllTable)
- (package private) void SimpleLoadBalancer.setClusterLoad(Map<TableName, Map<ServerName, List<RegionInfo>>> clusterLoad) -- Pass RegionStates and allow the balancer to set the current cluster load.
- protected final Map<ServerName, List<RegionInfo>> BaseLoadBalancer.toEnsumbleTableLoad(Map<TableName, Map<ServerName, List<RegionInfo>>> LoadOfAllTable)
- void StochasticLoadBalancer.updateBalancerLoadInfo(Map<TableName, Map<ServerName, List<RegionInfo>>> loadOfAllTable)
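BaseLoadBalancer.balanceTable(TableName, Map) is the per-table hook that the concrete balancers above override. A minimal sketch of a custom subclass that never moves regions; the class name is illustrative, and whether additional members must also be overridden depends on the HBase version in use.

import java.util.Collections;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.master.RegionPlan;
import org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer;

/** Illustrative balancer that always reports the table as balanced. */
public class NoOpLoadBalancer extends BaseLoadBalancer {

  @Override
  protected List<RegionPlan> balanceTable(TableName tableName,
      Map<ServerName, List<RegionInfo>> loadOfOneTable) {
    // loadOfOneTable maps each RegionServer to the regions of this table it hosts.
    // A real implementation compares per-server load and emits RegionPlans describing
    // which regions to move; this sketch moves nothing.
    return Collections.emptyList();
  }
}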
Uses of TableName in org.apache.hadoop.hbase.master.http
Fields in org.apache.hadoop.hbase.master.http declared as TableName:
- private final TableName MetaBrowser.scanTable
- private final TableName RegionVisualizer.RegionDetails.tableName
Methods in org.apache.hadoop.hbase.master.http that return TableName:
- MetaBrowser.getScanTable()
- RegionVisualizer.RegionDetails.getTableName()
- private static TableName MetaBrowser.resolveScanTable(javax.servlet.http.HttpServletRequest request)
Methods in org.apache.hadoop.hbase.master.http with parameters of type TableName:
- private static Filter MetaBrowser.buildTableFilter(TableName tableName)
Constructors in org.apache.hadoop.hbase.master.http with parameters of type TableName:
- (package private) RegionDetails(ServerName serverName, TableName tableName, RegionMetrics regionMetrics)
Uses of TableName in org.apache.hadoop.hbase.master.janitor
Methods in org.apache.hadoop.hbase.master.janitor with parameters of type TableName:
- private static RegionInfo MetaFixer.buildRegionInfo(TableName tn, byte[] start, byte[] end)
- CatalogJanitor.checkRegionReferences(MasterServices services, TableName tableName, RegionInfo region) -- Checks if a region still holds references to parent.
Uses of TableName in org.apache.hadoop.hbase.master.locking
Fields in org.apache.hadoop.hbase.master.locking declared as TableName:
- private final TableName LockManager.MasterLock.tableName
- private TableName LockProcedure.tableName
Methods in org.apache.hadoop.hbase.master.locking that return TableName
Methods in org.apache.hadoop.hbase.master.locking with parameters of type TableName:
- LockManager.createMasterLock(TableName tableName, LockType type, String description)
- long LockManager.RemoteLocks.requestTableLock(TableName tableName, LockType type, String description, NonceKey nonceKey)
Constructors in org.apache.hadoop.hbase.master.locking with parameters of type TableName:
- LockProcedure(org.apache.hadoop.conf.Configuration conf, TableName tableName, LockType type, String description, CountDownLatch lockAcquireLatch) -- Constructor for table lock.
- MasterLock(TableName tableName, LockType type, String description)
Uses of TableName in org.apache.hadoop.hbase.master.normalizer
Fields in org.apache.hadoop.hbase.master.normalizer declared as TableName:
- private final TableName SimpleRegionNormalizer.NormalizeContext.tableName
Fields in org.apache.hadoop.hbase.master.normalizer with type parameters of type TableName:
- private final RegionNormalizerWorkQueue<TableName> RegionNormalizerManager.workQueue
- private final RegionNormalizerWorkQueue<TableName> RegionNormalizerWorker.workQueue
Methods in org.apache.hadoop.hbase.master.normalizer that return TableName
Methods in org.apache.hadoop.hbase.master.normalizer with parameters of type TableName:
- private List<NormalizationPlan> RegionNormalizerWorker.calculatePlans(TableName tableName)
Method parameters in org.apache.hadoop.hbase.master.normalizer with type arguments of type TableName:
- boolean RegionNormalizerManager.normalizeRegions(List<TableName> tables, boolean isHighPriority) -- Submit tables for normalization.
Constructor parameters in org.apache.hadoop.hbase.master.normalizer with type arguments of type TableName:
- (package private) RegionNormalizerManager(RegionNormalizerStateStore regionNormalizerStateStore, RegionNormalizerChore regionNormalizerChore, RegionNormalizerWorkQueue<TableName> workQueue, RegionNormalizerWorker worker)
- (package private) RegionNormalizerWorker(org.apache.hadoop.conf.Configuration configuration, MasterServices masterServices, RegionNormalizer regionNormalizer, RegionNormalizerWorkQueue<TableName> workQueue)
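A short sketch of submitting tables through the normalizeRegions method listed above, assuming access to the master's RegionNormalizerManager; the wrapper class is illustrative.

import java.util.List;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.master.normalizer.RegionNormalizerManager;

final class NormalizerSubmit {

  /** Queues the given tables for normalization; returns false if the submission was rejected. */
  static boolean submit(RegionNormalizerManager manager, List<TableName> tables) {
    // isHighPriority=true asks the work queue to service these tables ahead of routine work.
    return manager.normalizeRegions(tables, true);
  }
}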
Uses of TableName in org.apache.hadoop.hbase.master.procedure
Fields in org.apache.hadoop.hbase.master.procedure declared as TableNameModifier and TypeFieldDescriptionprivate TableNameSnapshotProcedure.snapshotTableprivate TableNameDeleteTableProcedure.tableNameprivate TableNameDisableTableProcedure.tableNameprivate TableNameEnableTableProcedure.tableNameprivate TableNameFlushTableProcedure.tableNameprivate TableNameModifyTableDescriptorProcedure.tableNameprivate TableNameReopenTableRegionsProcedure.tableNameprivate TableNameTruncateTableProcedure.tableNameFields in org.apache.hadoop.hbase.master.procedure with type parameters of type TableNameModifier and TypeFieldDescriptionMasterProcedureScheduler.metaRunQueueprivate final Map<TableName,LockAndQueue> SchemaLocking.tableLocksMasterProcedureScheduler.tableRunQueueMethods in org.apache.hadoop.hbase.master.procedure that return TableNameModifier and TypeMethodDescriptionAbstractStateMachineNamespaceProcedure.getTableName()AbstractStateMachineRegionProcedure.getTableName()abstract TableNameAbstractStateMachineTableProcedure.getTableName()CloneSnapshotProcedure.getTableName()CreateTableProcedure.getTableName()DeleteTableProcedure.getTableName()DisableTableProcedure.getTableName()EnableTableProcedure.getTableName()FlushRegionProcedure.getTableName()FlushTableProcedure.getTableName()InitMetaProcedure.getTableName()private static TableNameMasterProcedureScheduler.getTableName(Procedure<?> proc) ModifyTableDescriptorProcedure.getTableName()ModifyTableProcedure.getTableName()ReopenTableRegionsProcedure.getTableName()RestoreSnapshotProcedure.getTableName()SnapshotProcedure.getTableName()SnapshotRegionProcedure.getTableName()SnapshotVerifyProcedure.getTableName()TableProcedureInterface.getTableName()Returns the name of the table the procedure is operating onTruncateTableProcedure.getTableName()Methods in org.apache.hadoop.hbase.master.procedure with parameters of type TableNameModifier and TypeMethodDescriptionprivate static voidDeleteTableProcedure.cleanRegionsInMeta(MasterProcedureEnv env, TableName tableName) There may be items for this table still up in hbase:meta in the case where the info:regioninfo column was empty because of some write error.CreateTableProcedure.CreateHdfsRegions.createHdfsRegions(MasterProcedureEnv env, org.apache.hadoop.fs.Path tableRootDir, TableName tableName, List<RegionInfo> newRegions) protected static voidDeleteTableProcedure.deleteAssignmentState(MasterProcedureEnv env, TableName tableName) static voidMasterDDLOperationHelper.deleteColumnFamilyFromFileSystem(MasterProcedureEnv env, TableName tableName, List<RegionInfo> regionInfoList, byte[] familyName, boolean hasMob) Remove the column family from the file systemprotected static voidDeleteTableProcedure.deleteFromFs(MasterProcedureEnv env, TableName tableName, List<RegionInfo> regions, boolean archive) protected static voidDeleteTableProcedure.deleteFromMeta(MasterProcedureEnv env, TableName tableName, List<RegionInfo> regions) protected static voidDeleteTableProcedure.deleteTableDescriptorCache(MasterProcedureEnv env, TableName tableName) protected static voidDeleteTableProcedure.deleteTableStates(MasterProcedureEnv env, TableName tableName) (package private) LockAndQueueSchemaLocking.getTableLock(TableName tableName) static intMasterProcedureUtil.getTablePriority(TableName tableName) Return the priority for the given table.private TableQueueMasterProcedureScheduler.getTableQueue(TableName tableName) (package private) booleanMasterProcedureScheduler.markTableAsDeleted(TableName table, Procedure<?> procedure) Tries to remove 
the queue and the table-lock of the specified table.(package private) LockAndQueueSchemaLocking.removeTableLock(TableName tableName) private voidMasterProcedureScheduler.removeTableQueue(TableName tableName) protected static voidCreateTableProcedure.setEnabledState(MasterProcedureEnv env, TableName tableName) protected static voidCreateTableProcedure.setEnablingState(MasterProcedureEnv env, TableName tableName) protected static voidDisableTableProcedure.setTableStateToDisabled(MasterProcedureEnv env, TableName tableName) Mark table state to Disabledprivate static voidDisableTableProcedure.setTableStateToDisabling(MasterProcedureEnv env, TableName tableName) Mark table state to Disablingprotected static voidEnableTableProcedure.setTableStateToEnabled(MasterProcedureEnv env, TableName tableName) Mark table state to Enabledprotected static voidEnableTableProcedure.setTableStateToEnabling(MasterProcedureEnv env, TableName tableName) Mark table state to EnablingbooleanMasterProcedureScheduler.waitRegions(Procedure<?> procedure, TableName table, RegionInfo... regionInfos) Suspend the procedure if the specified set of regions are already locked.booleanMasterProcedureScheduler.waitTableExclusiveLock(Procedure<?> procedure, TableName table) Suspend the procedure if the specified table is already locked.private TableQueueMasterProcedureScheduler.waitTableQueueSharedLock(Procedure<?> procedure, TableName table) booleanMasterProcedureScheduler.waitTableSharedLock(Procedure<?> procedure, TableName table) Suspend the procedure if the specified table is already locked.voidMasterProcedureScheduler.wakeRegions(Procedure<?> procedure, TableName table, RegionInfo... regionInfos) Wake the procedures waiting for the specified regionsvoidMasterProcedureScheduler.wakeTableExclusiveLock(Procedure<?> procedure, TableName table) Wake the procedures waiting for the specified tablevoidMasterProcedureScheduler.wakeTableSharedLock(Procedure<?> procedure, TableName table) Wake the procedures waiting for the specified tableConstructors in org.apache.hadoop.hbase.master.procedure with parameters of type TableNameModifierConstructorDescriptionDeleteTableProcedure(MasterProcedureEnv env, TableName tableName) DeleteTableProcedure(MasterProcedureEnv env, TableName tableName, ProcedurePrepareLatch syncLatch) DisableTableProcedure(MasterProcedureEnv env, TableName tableName, boolean skipTableStateCheck) ConstructorDisableTableProcedure(MasterProcedureEnv env, TableName tableName, boolean skipTableStateCheck, ProcedurePrepareLatch syncLatch) ConstructorEnableTableProcedure(MasterProcedureEnv env, TableName tableName) ConstructorEnableTableProcedure(MasterProcedureEnv env, TableName tableName, ProcedurePrepareLatch syncLatch) ConstructorFlushTableProcedure(MasterProcedureEnv env, TableName tableName) FlushTableProcedure(MasterProcedureEnv env, TableName tableName, List<byte[]> columnFamilies) protectedModifyTableDescriptorProcedure(MasterProcedureEnv env, TableName tableName) ReopenTableRegionsProcedure(TableName tableName) ReopenTableRegionsProcedure(TableName tableName, long reopenBatchBackoffMillis, int reopenBatchSizeMax) ReopenTableRegionsProcedure(TableName tableName, List<byte[]> regionNames) ReopenTableRegionsProcedure(TableName tableName, List<byte[]> regionNames, long reopenBatchBackoffMillis, int reopenBatchSizeMax) TableQueue(TableName tableName, int priority, LockStatus tableLock, LockStatus namespaceLockStatus) TruncateTableProcedure(MasterProcedureEnv env, TableName tableName, boolean preserveSplits) 
TruncateTableProcedure(MasterProcedureEnv env, TableName tableName, boolean preserveSplits, ProcedurePrepareLatch latch)  - 
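The procedure constructors above are normally driven by the master itself. A hedged sketch of constructing and submitting a DisableTableProcedure by hand, assuming a MasterServices reference and its ProcedureExecutor are available; the helper class is illustrative and error handling is omitted.

import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.master.MasterServices;
import org.apache.hadoop.hbase.master.procedure.DisableTableProcedure;
import org.apache.hadoop.hbase.master.procedure.MasterProcedureEnv;
import org.apache.hadoop.hbase.procedure2.ProcedureExecutor;

final class DisableTableSubmit {

  /** Schedules a DisableTableProcedure for the given table and returns its procedure id. */
  static long disable(MasterServices master, TableName tableName) throws IOException {
    ProcedureExecutor<MasterProcedureEnv> executor = master.getMasterProcedureExecutor();
    // Constructor shown in the listing above; skipTableStateCheck=false verifies the
    // table is currently enabled before the disable runs.
    DisableTableProcedure proc =
        new DisableTableProcedure(executor.getEnvironment(), tableName, false);
    return executor.submitProcedure(proc);
  }
}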
Uses of TableName in org.apache.hadoop.hbase.master.region
Fields in org.apache.hadoop.hbase.master.region declared as TableName
Uses of TableName in org.apache.hadoop.hbase.master.replication
Methods in org.apache.hadoop.hbase.master.replication with parameters of type TableName:
- ReplicationPeerManager.getSerialPeerIdsBelongsTo(TableName tableName)
- private boolean ModifyPeerProcedure.needReopen(TableStateManager tsm, TableName tn)
- private boolean ModifyPeerProcedure.needSetLastPushedSequenceId(TableStateManager tsm, TableName tn)
- protected final void ModifyPeerProcedure.setLastPushedSequenceIdForTable(MasterProcedureEnv env, TableName tableName, Map<String, Long> lastSeqIds)
Method parameters in org.apache.hadoop.hbase.master.replication with type arguments of type TableName:
- private void ReplicationPeerManager.checkNamespacesAndTableCfsConfigConflict(Set<String> namespaces, Map<TableName, ? extends Collection<String>> tableCfs) -- Setting a namespace in the peer config means that all tables in that namespace will be replicated to the peer cluster.
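The per-table column-family map validated by checkNamespacesAndTableCfsConfigConflict above originates from the client-side replication peer configuration. A hedged sketch using the public ReplicationPeerConfig builder and Admin.addReplicationPeer; the peer id, cluster key, table and family names are placeholders.

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;

final class AddTableCfPeer {

  /** Adds a replication peer that replicates only selected column families of one table. */
  static void addPeer(Admin admin, String peerId, String clusterKey) throws Exception {
    Map<TableName, List<String>> tableCfs = new HashMap<>();
    // Replicate only the "cf1" family of ns:orders (placeholder names).
    tableCfs.put(TableName.valueOf("ns:orders"), Arrays.asList("cf1"));

    ReplicationPeerConfig peerConfig = ReplicationPeerConfig.newBuilder()
        .setClusterKey(clusterKey)
        .setReplicateAllUserTables(false)
        .setTableCFsMap(tableCfs)
        .build();
    admin.addReplicationPeer(peerId, peerConfig);
  }
}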
Uses of TableName in org.apache.hadoop.hbase.master.snapshot
Fields in org.apache.hadoop.hbase.master.snapshot declared as TableNameModifier and TypeFieldDescriptionprotected final TableNameTakeSnapshotHandler.snapshotTableprivate TableNameMasterSnapshotVerifier.tableNameFields in org.apache.hadoop.hbase.master.snapshot with type parameters of type TableNameModifier and TypeFieldDescriptionSnapshotManager.restoreTableToProcIdMapprivate final Map<TableName,SnapshotSentinel> SnapshotManager.snapshotHandlersMethods in org.apache.hadoop.hbase.master.snapshot with parameters of type TableNameModifier and TypeMethodDescriptionprivate longSnapshotManager.cloneSnapshot(org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotDescription reqSnapshot, TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotDescription snapshot, TableDescriptor snapshotTableDesc, NonceKey nonceKey, boolean restoreAcl, String customSFT) Clone the specified snapshot.private booleanSnapshotManager.isRestoringTable(TableName tableName) Verify if the restore of the specified table is in progress.booleanSnapshotManager.isTableTakingAnySnapshot(TableName tableName) booleanSnapshotManager.isTakingSnapshot(TableName tableName) Check to see if the specified table has a snapshot in progress.private booleanSnapshotManager.isTakingSnapshot(TableName tableName, boolean checkProcedure) Check to see if the specified table has a snapshot in progress.private longSnapshotManager.restoreSnapshot(org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotDescription reqSnapshot, TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotDescription snapshot, TableDescriptor snapshotTableDesc, NonceKey nonceKey, boolean restoreAcl) Restore the specified snapshot.voidSnapshotManager.setSnapshotHandlerForTesting(TableName tableName, SnapshotSentinel handler) Set the handler for the current snapshotMethod parameters in org.apache.hadoop.hbase.master.snapshot with type arguments of type TableNameModifier and TypeMethodDescriptionprivate voidSnapshotManager.cleanupSentinels(Map<TableName, SnapshotSentinel> sentinels) Remove the sentinels that are marked as finished and the completion time has exceeded the removal timeout.private SnapshotSentinelSnapshotManager.removeSentinelIfFinished(Map<TableName, SnapshotSentinel> sentinels, org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotDescription snapshot) Return the handler if it is currently live and has the same snapshot target name. - 
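A small sketch of using the public snapshot checks above as a guard before destructive table operations, assuming a SnapshotManager reference from the master; the helper class and method are illustrative.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.master.snapshot.SnapshotManager;

final class SnapshotGuard {

  /** Returns true when no snapshot of the table is currently in progress. */
  static boolean safeToModify(SnapshotManager snapshotManager, TableName tableName) {
    // Both checks are shown in the listing above; an in-flight snapshot procedure or
    // handler for the table should block destructive table operations.
    return !snapshotManager.isTakingSnapshot(tableName)
        && !snapshotManager.isTableTakingAnySnapshot(tableName);
  }
}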
Uses of TableName in org.apache.hadoop.hbase.mob
Fields in org.apache.hadoop.hbase.mob with type parameters of type TableNameModifier and TypeFieldDescriptionprivate static final ConcurrentMap<TableName,String> ManualMobMaintHFileCleaner.MOB_REGIONS(package private) static ThreadLocal<org.apache.hbase.thirdparty.com.google.common.collect.SetMultimap<TableName,String>> DefaultMobStoreCompactor.mobRefSetMethods in org.apache.hadoop.hbase.mob that return types with arguments of type TableNameModifier and TypeMethodDescriptionstatic org.apache.hbase.thirdparty.com.google.common.collect.ImmutableSetMultimap.Builder<TableName,String> MobUtils.deserializeMobFileRefs(byte[] bytes) Deserialize the set of referenced mob hfiles from store file metadata.MobUtils.getTableName(Cell cell) Get the table name from when this cell was written into a mob hfile as a TableName.Methods in org.apache.hadoop.hbase.mob with parameters of type TableNameModifier and TypeMethodDescriptionprivate static voidMobFileCleanupUtil.archiveMobFiles(org.apache.hadoop.conf.Configuration conf, TableName tableName, Admin admin, byte[] family, List<org.apache.hadoop.fs.Path> storeFiles) Archives the mob files.voidRSMobFileCleanerChore.archiveMobFiles(org.apache.hadoop.conf.Configuration conf, TableName tableName, byte[] family, List<org.apache.hadoop.fs.Path> storeFiles) Archives the mob files.private static voidMobFileCleanupUtil.checkColumnFamilyDescriptor(org.apache.hadoop.conf.Configuration conf, TableName table, org.apache.hadoop.fs.FileSystem fs, Admin admin, ColumnFamilyDescriptor hcd, Set<String> regionNames, long maxCreationTimeToArchive) static voidMobUtils.cleanExpiredMobFiles(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.conf.Configuration conf, TableName tableName, ColumnFamilyDescriptor columnDescriptor, CacheConfig cacheConfig, long current) Cleans the expired mob files.static voidMobFileCleanupUtil.cleanupObsoleteMobFiles(org.apache.hadoop.conf.Configuration conf, TableName table, Admin admin) Performs housekeeping file cleaning (called by MOB Cleaner chore)static org.apache.hadoop.fs.PathMobUtils.getMobFamilyPath(org.apache.hadoop.conf.Configuration conf, TableName tableName, String familyName) Gets the family dir of the mob files.static RegionInfoMobUtils.getMobRegionInfo(TableName tableName) Gets the RegionInfo of the mob files.static org.apache.hadoop.fs.PathMobUtils.getMobRegionPath(org.apache.hadoop.conf.Configuration conf, TableName tableName) Gets the region dir of the mob files.static org.apache.hadoop.fs.PathMobUtils.getMobRegionPath(org.apache.hadoop.fs.Path rootDir, TableName tableName) Gets the region dir of the mob files under the specified root dir.static org.apache.hadoop.fs.PathMobUtils.getMobTableDir(org.apache.hadoop.fs.Path rootDir, TableName tableName) Gets the table dir of the mob files under the qualified HBase root dir.static booleanMobUtils.isMobRegionName(TableName tableName, byte[] regionName) Gets whether the current region name follows the pattern of a mob region name.static booleanMobUtils.removeMobFiles(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, TableName tableName, org.apache.hadoop.fs.Path tableDir, byte[] family, Collection<HStoreFile> storeFiles) Archives the mob files.private voidMobFileCompactionChore.startCompaction(Admin admin, TableName table, RegionInfo region, byte[] cf) Method parameters in org.apache.hadoop.hbase.mob with type arguments of type TableNameModifier and TypeMethodDescriptionprivate 
voidDefaultMobStoreCompactor.calculateMobLengthMap(org.apache.hbase.thirdparty.com.google.common.collect.SetMultimap<TableName, String> mobRefs) static byte[]MobUtils.serializeMobFileRefs(org.apache.hbase.thirdparty.com.google.common.collect.SetMultimap<TableName, String> mobRefSet) Serialize a set of referenced mob hfiles - 
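A minimal sketch of the MOB directory-layout helpers listed above; the table and column-family names are placeholders, and the printed paths depend on the configured HBase root directory.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.mob.MobUtils;

final class MobLayout {

  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    TableName table = TableName.valueOf("ns:my_mob_table"); // placeholder table name

    // The dummy RegionInfo under which all MOB files of the table are stored.
    RegionInfo mobRegion = MobUtils.getMobRegionInfo(table);
    // Directory layout helpers shown in the listing above.
    Path mobRegionDir = MobUtils.getMobRegionPath(conf, table);
    Path mobFamilyDir = MobUtils.getMobFamilyPath(conf, table, "cf1"); // placeholder family

    System.out.println("mob region: " + mobRegion.getEncodedName());
    System.out.println(mobRegionDir + " / " + mobFamilyDir);
  }
}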
Uses of TableName in org.apache.hadoop.hbase.mob.mapreduce
Fields in org.apache.hadoop.hbase.mob.mapreduce declared as TableName
Uses of TableName in org.apache.hadoop.hbase.namequeues
Fields in org.apache.hadoop.hbase.namequeues declared as TableName:
- static final TableName WALEventTrackerTableAccessor.WAL_EVENT_TRACKER_TABLE_NAME -- The WAL_EVENT_TRACKER_TABLE_NAME_STR table name; can be enabled with the config hbase.regionserver.wal.event.tracker.enabled.
Uses of TableName in org.apache.hadoop.hbase.namespace
Fields in org.apache.hadoop.hbase.namespace with type parameters of type TableName:
- private Map<TableName,AtomicInteger> NamespaceTableAndRegionInfo.tableAndRegionInfo
Methods in org.apache.hadoop.hbase.namespace that return types with arguments of type TableName:
- NamespaceTableAndRegionInfo.getTables() -- Gets the set of table names belonging to namespace.
Methods in org.apache.hadoop.hbase.namespace with parameters of type TableName:
- private void
- (package private) void
- (package private) boolean NamespaceStateManager.checkAndUpdateNamespaceRegionCount(TableName name, byte[] regionName, int incr) -- Check if adding a region violates namespace quota, if not update namespace cache.
- (package private) void NamespaceStateManager.checkAndUpdateNamespaceRegionCount(TableName name, int incr) -- Check and update region count for an existing table.
- (package private) void NamespaceStateManager.checkAndUpdateNamespaceTableCount(TableName table, int numRegions)
- void NamespaceAuditor.checkQuotaToCreateTable(TableName tName, int regions) -- Check quota to create table.
- void NamespaceAuditor.checkQuotaToUpdateRegion(TableName tName, int regions) -- Check and update region count quota for an existing table.
- private void NamespaceAuditor.checkTableTypeAndThrowException(TableName name)
- (package private) boolean NamespaceTableAndRegionInfo.containsTable(TableName tableName)
- (package private) int NamespaceTableAndRegionInfo.decrementRegionCountForTable(TableName tableName, int count)
- int NamespaceAuditor.getRegionCountOfTable(TableName tName) -- Get region count for table.
- (package private) int NamespaceTableAndRegionInfo.getRegionCountOfTable(TableName tableName)
- (package private) int NamespaceTableAndRegionInfo.incRegionCountForTable(TableName tableName, int count)
- void NamespaceAuditor.removeFromNamespaceUsage(TableName tableName)
- (package private) void NamespaceStateManager.removeTable(TableName tableName)
- (package private) void NamespaceTableAndRegionInfo.removeTable(TableName tableName)
Uses of TableName in org.apache.hadoop.hbase.procedure.flush
Fields in org.apache.hadoop.hbase.procedure.flush with type parameters of type TableName
Uses of TableName in org.apache.hadoop.hbase.quotas
Fields in org.apache.hadoop.hbase.quotas declared as TableNameModifier and TypeFieldDescriptionstatic final TableNameQuotaTableUtil.QUOTA_TABLE_NAMESystem table for quotasprivate final TableNameQuotaSettings.tableName(package private) final TableNameFileArchiverNotifierFactoryImpl.CacheKey.tnprivate final TableNameFileArchiverNotifierImpl.tnFields in org.apache.hadoop.hbase.quotas with type parameters of type TableNameModifier and TypeFieldDescriptionprivate final Map<TableName,SpaceViolationPolicyEnforcement> ActivePolicyEnforcement.activePoliciesprivate final ConcurrentMap<TableName,FileArchiverNotifier> FileArchiverNotifierFactoryImpl.CACHEprivate AtomicReference<Map<TableName,SpaceQuotaSnapshot>> RegionServerSpaceQuotaManager.currentQuotaSnapshotsprivate final ConcurrentHashMap<TableName,SpaceViolationPolicyEnforcement> RegionServerSpaceQuotaManager.enforcedPoliciesprivate final Map<TableName,SpaceViolationPolicyEnforcement> ActivePolicyEnforcement.locallyCachedPoliciesprivate final Map<TableName,SpaceQuotaSnapshot> QuotaObserverChore.readOnlyTableQuotaSnapshotsprivate final Map<TableName,SpaceQuotaSnapshot> ActivePolicyEnforcement.snapshotsprivate Map<TableName,QuotaLimiter> UserQuotaState.tableLimitersprivate MasterQuotaManager.NamedLock<TableName>MasterQuotaManager.tableLocksprivate final ConcurrentHashMap<TableName,Double> QuotaCache.tableMachineQuotaFactorsprivate final ConcurrentMap<TableName,QuotaState> QuotaCache.tableQuotaCacheprivate final Map<TableName,SpaceQuotaSnapshot> QuotaObserverChore.tableQuotaSnapshotsprivate QuotaSnapshotStore<TableName>QuotaObserverChore.tableSnapshotStoreQuotaObserverChore.TablesWithQuotas.tablesWithNamespaceQuotasQuotaObserverChore.TablesWithQuotas.tablesWithTableQuotasMethods in org.apache.hadoop.hbase.quotas that return TableNameModifier and TypeMethodDescriptionprotected static TableNameQuotaTableUtil.getTableFromRowKey(byte[] key) QuotaSettings.getTableName()Methods in org.apache.hadoop.hbase.quotas that return types with arguments of type TableNameModifier and TypeMethodDescription(package private) Map<TableName,SpaceViolationPolicyEnforcement> RegionServerSpaceQuotaManager.copyActiveEnforcements()Returns the collection of tables which have quota violation policies enforced on this RegionServer.RegionServerSpaceQuotaManager.copyQuotaSnapshots()Copies the lastSpaceQuotaSnapshots that were recorded.SpaceQuotaRefresherChore.fetchSnapshotsFromQuotaTable()Reads all quota snapshots from the quota table.static Map<TableName,QuotaState> QuotaUtil.fetchTableQuotas(Connection connection, List<Get> gets, Map<TableName, Double> tableMachineFactors) QuotaObserverChore.TablesWithQuotas.filterInsufficientlyReportedTables(QuotaSnapshotStore<TableName> tableStore) Filters out all tables for which the Master currently doesn't have enough region space reports received from RegionServers yet.RegionServerSpaceQuotaManager.getActivePoliciesAsMap()Converts a map of table toSpaceViolationPolicyEnforcements intoSpaceViolationPolicys.(package private) Map<TableName,SpaceViolationPolicyEnforcement> ActivePolicyEnforcement.getLocallyCachedPolicies()Returns an unmodifiable version of the policy enforcements that were cached because they are not in violation of their quota.QuotaObserverChore.TablesWithQuotas.getNamespaceQuotaTables()Returns an unmodifiable view of all tables in namespaces that have namespace quotas.ActivePolicyEnforcement.getPolicies()Returns an unmodifiable version of the activeSpaceViolationPolicyEnforcements.static Map<TableName,SpaceQuotaSnapshot> 
QuotaTableUtil.getSnapshots(Connection conn) Fetches allSpaceQuotaSnapshotobjects from thehbase:quotatable.SnapshotQuotaObserverChore.getSnapshotsFromTables(Admin admin, Set<TableName> tablesToFetchSnapshotsFrom) Computes a mapping of originatingTableNameto snapshots, when theTableNameexists in the providedSet.SnapshotQuotaObserverChore.getSnapshotsToComputeSize()Fetches each table with a quota (table or namespace quota), and then fetch the name of each snapshot which was created from that table.(package private) Map<TableName,QuotaState> QuotaCache.getTableQuotaCache()QuotaObserverChore.getTableQuotaSnapshots()Returns an unmodifiable view over the currentSpaceQuotaSnapshotobjects for each HBase table with a quota defined.QuotaObserverChore.TablesWithQuotas.getTableQuotaTables()Returns an unmodifiable view of all tables with table quotas.QuotaObserverChore.TablesWithQuotas.getTablesByNamespace()Returns a view of all tables that reside in a namespace with a namespace quota, grouped by the namespace itself.QuotaTableUtil.getTableSnapshots(Connection conn) Returns a multimap for all existing table snapshot entries.(package private) QuotaSnapshotStore<TableName>QuotaObserverChore.getTableSnapshotStore()Methods in org.apache.hadoop.hbase.quotas with parameters of type TableNameModifier and TypeMethodDescriptionvoidQuotaObserverChore.TablesWithQuotas.addNamespaceQuotaTable(TableName tn) Adds a table with a namespace quota.static voidQuotaUtil.addTableQuota(Connection connection, TableName table, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Quotas data) voidQuotaObserverChore.TablesWithQuotas.addTableQuotaTable(TableName tn) Adds a table with a table quota.static voidQuotaUtil.addUserQuota(Connection connection, String user, TableName table, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Quotas data) booleanRegionServerSpaceQuotaManager.areCompactionsDisabled(TableName tableName) Returns whether or not compactions should be disabled for the giventableNameper a space quota violation policy.org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos.FileArchiveNotificationRequestRegionServerSpaceQuotaManager.buildFileArchiveRequest(TableName tn, Collection<Map.Entry<String, Long>> archivedFiles) Builds the protobuf message to inform the Master of files being archived.voidMasterQuotaManager.checkAndUpdateNamespaceRegionQuota(TableName tName, int regions) voidMasterQuotaManager.checkNamespaceTableAndRegionQuota(TableName tName, int regions) SpaceViolationPolicyEnforcementFactory.create(RegionServerServices rss, TableName tableName, SpaceQuotaSnapshot snapshot) Constructs the appropriateSpaceViolationPolicyEnforcementfor tables that are in violation of their space quota.(package private) static PutQuotaTableUtil.createPutForSnapshotSize(TableName tableName, String snapshot, long size) (package private) static PutQuotaTableUtil.createPutForSpaceSnapshot(TableName tableName, SpaceQuotaSnapshot snapshot) (package private) static ScanQuotaTableUtil.createScanForSpaceSnapshotSizes(TableName table) SpaceViolationPolicyEnforcementFactory.createWithoutViolation(RegionServerServices rss, TableName tableName, SpaceQuotaSnapshot snapshot) Creates the "default"SpaceViolationPolicyEnforcementfor a table that isn't in violation.static voidQuotaUtil.deleteTableQuota(Connection connection, TableName table) static voidQuotaUtil.deleteUserQuota(Connection connection, String user, TableName table) static voidQuotaUtil.disableTableIfNotDisabled(Connection conn, TableName 
tableName) Method to disable a table, if not already disabled.voidRegionServerSpaceQuotaManager.disableViolationPolicyEnforcement(TableName tableName) Disables enforcement on any violation policy on the giventableName.static voidQuotaUtil.enableTableIfNotEnabled(Connection conn, TableName tableName) Method to enable a table, if not already enabled.voidRegionServerSpaceQuotaManager.enforceViolationPolicy(TableName tableName, SpaceQuotaSnapshot snapshot) Enforces the given violationPolicy on the given table in this RegionServer.TableQuotaSnapshotStore.filterBySubject(TableName table) private static List<QuotaSettings>QuotaSettingsFactory.fromQuotas(String userName, TableName tableName, String namespace, String regionServer, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Quotas quotas) (package private) static QuotaSettingsQuotaSettingsFactory.fromSpace(TableName table, String namespace, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota protoQuota) (package private) static SpaceLimitSettingsSpaceLimitSettings.fromSpaceQuota(TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota proto) Constructs aSpaceLimitSettingsfrom the provided protobuf message and tablename.(package private) static List<QuotaSettings>QuotaSettingsFactory.fromTableQuotas(TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Quotas quotas) static List<ThrottleSettings>QuotaSettingsFactory.fromTableThrottles(TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Throttle throttle) protected static List<ThrottleSettings>QuotaSettingsFactory.fromThrottle(String userName, TableName tableName, String namespace, String regionServer, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Throttle throttle) (package private) static ThrottleSettingsThrottleSettings.fromTimedQuota(String userName, TableName tableName, String namespace, String regionServer, ThrottleType type, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.TimedQuota timedQuota) (package private) static List<QuotaSettings>QuotaSettingsFactory.fromUserQuotas(String userName, TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Quotas quotas) FileArchiverNotifierFactory.get(Connection conn, org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, TableName tn) Creates or obtains aFileArchiverNotifierinstance for the given args.FileArchiverNotifierFactoryImpl.get(Connection conn, org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, TableName tn) Returns theFileArchiverNotifierinstance for the givenTableName.static SpaceQuotaSnapshotQuotaTableUtil.getCurrentSnapshotFromQuotaTable(Connection conn, TableName tableName) Returns the current space quota snapshot of the giventableNamefromQuotaTableUtil.QUOTA_TABLE_NAMEor null if the no quota information is available for that tableName.TableQuotaSnapshotStore.getCurrentState(TableName table) (package private) FileArchiverNotifierSnapshotQuotaObserverChore.getNotifierForTable(TableName tn) Returns the correct instance ofFileArchiverNotifierfor the given table name.(package private) intQuotaObserverChore.TablesWithQuotas.getNumRegions(TableName table) Computes the total number of regions in a table.(package private) intQuotaObserverChore.TablesWithQuotas.getNumReportedRegions(TableName table, QuotaSnapshotStore<TableName> tableStore) Computes the number of regions reported for a 
table.ActivePolicyEnforcement.getPolicyEnforcement(TableName tableName) Returns the properSpaceViolationPolicyEnforcementimplementation for the given table.RegionServerRpcQuotaManager.getQuota(org.apache.hadoop.security.UserGroupInformation ugi, TableName table, int blockSizeBytes) Returns the quota for an operation.(package private) org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.QuotasTableQuotaSnapshotStore.getQuotaForTable(TableName table) Fetches the table quota.intMasterQuotaManager.getRegionCountOfTable(TableName tName) Returns cached region count, or -1 if quota manager is disabled or table status not foundprotected static byte[]QuotaTableUtil.getSettingsQualifierForUserTable(TableName tableName) (package private) longFileArchiverNotifierImpl.getSizeOfStoreFile(TableName tn, String regionName, String family, String storeFile) Computes the size of the store file given its name, region and family name in the archive directory.(package private) longFileArchiverNotifierImpl.getSizeOfStoreFile(TableName tn, FileArchiverNotifierImpl.StoreFileReference storeFileName) Computes the size of the store files for a single region.(package private) longFileArchiverNotifierImpl.getSizeOfStoreFiles(TableName tn, Set<FileArchiverNotifierImpl.StoreFileReference> storeFileNames) Computes the size of each store file instoreFileNames(package private) longTableQuotaSnapshotStore.getSnapshotSizesForTable(TableName tn) Fetches any serialized snapshot sizes from the quota table for thetnprovided.org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuotaTableQuotaSnapshotStore.getSpaceQuota(TableName subject) QuotaCache.getTableLimiter(TableName table) Returns the limiter associated to the specified table.UserQuotaState.getTableLimiter(TableName table) Return the limiter for the specified table associated with this quota.static org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.QuotasQuotaTableUtil.getTableQuota(Connection connection, TableName table) (package private) SpaceQuotaSnapshotQuotaObserverChore.getTableQuotaSnapshot(TableName table) Fetches theSpaceQuotaSnapshotfor the given table.protected static byte[]QuotaTableUtil.getTableRowKey(TableName table) TableQuotaSnapshotStore.getTargetState(TableName table, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota spaceQuota) QuotaCache.getUserLimiter(org.apache.hadoop.security.UserGroupInformation ugi, TableName table) Returns the limiter associated to the specified user/table.static org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.QuotasQuotaTableUtil.getUserQuota(Connection connection, String user, TableName table) booleanQuotaObserverChore.TablesWithQuotas.hasNamespaceQuota(TableName tn) Returns true if the table exists in a namespace with a namespace quota.booleanQuotaObserverChore.TablesWithQuotas.hasTableQuota(TableName tn) Returns true if the given table has a table quota.voidSpaceViolationPolicyEnforcement.initialize(RegionServerServices rss, TableName tableName, SpaceQuotaSnapshot snapshot) Initializes this policy instance.private booleanMasterQuotaManager.isInViolationAndPolicyDisable(TableName tableName, QuotaObserverChore quotaObserverChore) Method to check if a table is in violation and policy set on table is DISABLE.static QuotaSettingsQuotaSettingsFactory.limitTableSpace(TableName tableName, long sizeLimit, SpaceViolationPolicy violationPolicy) Creates aQuotaSettingsobject to limit the FileSystem space usage for the given table to the given size in bytes.(package 
private) static GetQuotaTableUtil.makeGetForSnapshotSize(TableName tn, String snapshot) Creates aGetfor the HBase snapshot's size against the given table.static GetQuotaTableUtil.makeGetForTableQuotas(TableName table) static GetQuotaTableUtil.makeQuotaSnapshotGetForTable(TableName tn) Creates aGetwhich returns onlySpaceQuotaSnapshotfrom the quota table for a specific table.static ScanQuotaTableUtil.makeQuotaSnapshotScanForTable(TableName tn) Creates aScanwhich returns onlySpaceQuotaSnapshotfrom the quota table for a specific table.protected static voidQuotaTableUtil.parseTableResult(TableName table, Result result, QuotaTableUtil.TableQuotasVisitor visitor) voidMasterQuotasObserver.postDeleteTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) voidMasterQuotaManager.removeRegionSizesForTable(TableName tableName) Removes each region size entry where the RegionInfo references the provided TableName.voidMasterQuotaManager.removeTableFromNamespaceQuota(TableName tName) Remove table from namespace quota.static QuotaSettingsQuotaSettingsFactory.removeTableSpaceLimit(TableName tableName) Creates aQuotaSettingsobject to remove the FileSystem space quota for the given table.voidTableQuotaSnapshotStore.setCurrentState(TableName table, SpaceQuotaSnapshot snapshot) voidUserQuotaState.setQuotas(TableName table, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Quotas quotas) Add the quota information of the specified table.voidMasterQuotaManager.setTableQuota(TableName table, org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.SetQuotaRequest req) (package private) voidQuotaObserverChore.setTableQuotaSnapshot(TableName table, SpaceQuotaSnapshot snapshot) Stores the quota state for the given table.voidMasterQuotaManager.setUserQuota(String userName, TableName table, org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.SetQuotaRequest req) private static QuotaSettingsQuotaSettingsFactory.throttle(String userName, TableName tableName, String namespace, String regionServer, ThrottleType type, long limit, TimeUnit timeUnit, QuotaScope scope) static QuotaSettingsQuotaSettingsFactory.throttleTable(TableName tableName, ThrottleType type, long limit, TimeUnit timeUnit) Throttle the specified table.static QuotaSettingsQuotaSettingsFactory.throttleTable(TableName tableName, ThrottleType type, long limit, TimeUnit timeUnit, QuotaScope scope) Throttle the specified table.static QuotaSettingsQuotaSettingsFactory.throttleUser(String userName, TableName tableName, ThrottleType type, long limit, TimeUnit timeUnit) Throttle the specified user on the specified table.static QuotaSettingsQuotaSettingsFactory.throttleUser(String userName, TableName tableName, ThrottleType type, long limit, TimeUnit timeUnit, QuotaScope scope) Throttle the specified user on the specified table.voidSpaceQuotaSnapshotNotifier.transitionTable(TableName tableName, SpaceQuotaSnapshot snapshot) Informs the cluster of the current state of a space quota for a table.voidTableSpaceQuotaSnapshotNotifier.transitionTable(TableName tableName, SpaceQuotaSnapshot snapshot) static QuotaSettingsQuotaSettingsFactory.unthrottleTable(TableName tableName) Remove the throttling for the specified table.static QuotaSettingsQuotaSettingsFactory.unthrottleTableByThrottleType(TableName tableName, ThrottleType type) Remove the throttling for the specified table.static QuotaSettingsQuotaSettingsFactory.unthrottleUser(String userName, TableName tableName) Remove the throttling for the specified user on the specified 
table.static QuotaSettingsQuotaSettingsFactory.unthrottleUserByThrottleType(String userName, TableName tableName, ThrottleType type) Remove the throttling for the specified user on the specified table.(package private) voidQuotaObserverChore.updateTableQuota(TableName table, SpaceQuotaSnapshot currentSnapshot, SpaceQuotaSnapshot targetSnapshot) Updates the hbase:quota table with the new quota policy for thistableif necessary.voidQuotaTableUtil.TableQuotasVisitor.visitTableQuotas(TableName tableName, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Quotas quotas) voidQuotaTableUtil.UserQuotasVisitor.visitUserQuotas(String userName, TableName table, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Quotas quotas) Method parameters in org.apache.hadoop.hbase.quotas with type arguments of type TableNameModifier and TypeMethodDescriptionSnapshotQuotaObserverChore.computeSnapshotSizes(org.apache.hbase.thirdparty.com.google.common.collect.Multimap<TableName, String> snapshotsToComputeSize) Computes the size of each snapshot provided given the current files referenced by the table.QuotaTableUtil.createDeletesForExistingTableSnapshotSizes(org.apache.hbase.thirdparty.com.google.common.collect.Multimap<TableName, String> snapshotEntriesToRemove) Returns a list ofDeleteto remove given table snapshot entries to remove from quota tablestatic voidQuotaTableUtil.extractQuotaSnapshot(Result result, Map<TableName, SpaceQuotaSnapshot> snapshots) Extracts theSpaceViolationPolicyandTableNamefrom the providedResultand adds them to the givenMap.(package private) voidSpaceQuotaRefresherChore.extractQuotaSnapshot(Result result, Map<TableName, SpaceQuotaSnapshot> snapshots) Wrapper aroundQuotaTableUtil.extractQuotaSnapshot(Result, Map)for testing.static Map<TableName,QuotaState> QuotaUtil.fetchTableQuotas(Connection connection, List<Get> gets, Map<TableName, Double> tableMachineFactors) static Map<String,UserQuotaState> QuotaUtil.fetchUserQuotas(Connection connection, List<Get> gets, Map<TableName, Double> tableMachineQuotaFactors, double factor) QuotaObserverChore.TablesWithQuotas.filterInsufficientlyReportedTables(QuotaSnapshotStore<TableName> tableStore) Filters out all tables for which the Master currently doesn't have enough region space reports received from RegionServers yet.(package private) intQuotaObserverChore.TablesWithQuotas.getNumReportedRegions(TableName table, QuotaSnapshotStore<TableName> tableStore) Computes the number of regions reported for a table.SnapshotQuotaObserverChore.getSnapshotsFromTables(Admin admin, Set<TableName> tablesToFetchSnapshotsFrom) Computes a mapping of originatingTableNameto snapshots, when theTableNameexists in the providedSet.static GetQuotaTableUtil.makeGetForUserQuotas(String user, Iterable<TableName> tables, Iterable<String> namespaces) (package private) voidQuotaObserverChore.processNamespacesWithQuotas(Set<String> namespacesWithQuotas, org.apache.hbase.thirdparty.com.google.common.collect.Multimap<String, TableName> tablesByNamespace) Processes each namespace which has a quota defined and moves all of the tables contained in that namespace into or out of violation of the quota.(package private) voidQuotaObserverChore.processTablesWithQuotas(Set<TableName> tablesWithTableQuotas) Processes eachTableNamewhich has a quota defined and moves it in or out of violation based on the space use.(package private) voidSnapshotQuotaObserverChore.pruneNamespaceSnapshots(org.apache.hbase.thirdparty.com.google.common.collect.Multimap<TableName, String> 
snapshotsToComputeSize) Removes the snapshot entries that are present in Quota table but not in snapshotsToComputeSize(package private) voidSnapshotQuotaObserverChore.pruneTableSnapshots(org.apache.hbase.thirdparty.com.google.common.collect.Multimap<TableName, String> snapshotsToComputeSize) Removes the snapshot entries that are present in Quota table but not in snapshotsToComputeSize(package private) voidSnapshotQuotaObserverChore.removeExistingTableSnapshotSizes(org.apache.hbase.thirdparty.com.google.common.collect.Multimap<TableName, String> snapshotEntriesToRemove) (package private) voidQuotaObserverChore.updateNamespaceQuota(String namespace, SpaceQuotaSnapshot currentSnapshot, SpaceQuotaSnapshot targetSnapshot, org.apache.hbase.thirdparty.com.google.common.collect.Multimap<String, TableName> tablesByNamespace) Updates the hbase:quota table with the target quota policy for thisnamespaceif necessary.voidRegionServerSpaceQuotaManager.updateQuotaSnapshot(Map<TableName, SpaceQuotaSnapshot> newSnapshots) Updates the currentSpaceQuotaSnapshots for the RegionServer.Constructors in org.apache.hadoop.hbase.quotas with parameters of type TableNameModifierConstructorDescription(package private)CacheKey(Connection conn, org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, TableName tn) FileArchiverNotifierImpl(Connection conn, org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, TableName tn) protectedGlobalQuotaSettings(String userName, TableName tableName, String namespace, String regionServer) protectedGlobalQuotaSettingsImpl(String username, TableName tableName, String namespace, String regionServer, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Quotas quotas) protectedGlobalQuotaSettingsImpl(String userName, TableName tableName, String namespace, String regionServer, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Throttle throttleProto, Boolean bypassGlobals, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota spaceProto) (package private)QuotaGlobalsSettingsBypass(String userName, TableName tableName, String namespace, String regionServer, boolean bypassGlobals) protectedQuotaSettings(String userName, TableName tableName, String namespace, String regionServer) (package private)SpaceLimitSettings(TableName tableName) Constructs aSpaceLimitSettingsto remove a space quota on the giventableName.(package private)SpaceLimitSettings(TableName tableName, long sizeLimit, SpaceViolationPolicy violationPolicy) (package private)SpaceLimitSettings(TableName tableName, String namespace, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceLimitRequest req) (package private)ThrottleSettings(String userName, TableName tableName, String namespace, String regionServer, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.ThrottleRequest proto) Constructor parameters in org.apache.hadoop.hbase.quotas with type arguments of type TableNameModifierConstructorDescriptionActivePolicyEnforcement(Map<TableName, SpaceViolationPolicyEnforcement> activePolicies, Map<TableName, SpaceQuotaSnapshot> snapshots, RegionServerServices rss) ActivePolicyEnforcement(Map<TableName, SpaceViolationPolicyEnforcement> activePolicies, Map<TableName, SpaceQuotaSnapshot> snapshots, RegionServerServices rss, SpaceViolationPolicyEnforcementFactory factory)  - 
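The quota factory methods above pair with Admin.setQuota on the client side. Below is a minimal sketch, assuming an open cluster connection; the table name "ns:t1" is a placeholder. It throttles a table with QuotaSettingsFactory.throttleTable and later removes both the throttle and any FileSystem space limit.

  import java.util.concurrent.TimeUnit;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
  import org.apache.hadoop.hbase.quotas.ThrottleType;

  public class TableQuotaExample {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      TableName table = TableName.valueOf("ns:t1"); // placeholder table
      try (Connection conn = ConnectionFactory.createConnection(conf);
           Admin admin = conn.getAdmin()) {
        // Throttle the table to 1000 requests per second.
        admin.setQuota(QuotaSettingsFactory.throttleTable(
            table, ThrottleType.REQUEST_NUMBER, 1000, TimeUnit.SECONDS));
        // Later: drop the throttle and any space limit for the table.
        admin.setQuota(QuotaSettingsFactory.unthrottleTable(table));
        admin.setQuota(QuotaSettingsFactory.removeTableSpaceLimit(table));
      }
    }
  }

Each call builds an immutable QuotaSettings object; applying it is a separate step through the Admin, which is why the factory and the setQuota methods appear in different packages in this listing.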
Uses of TableName in org.apache.hadoop.hbase.quotas.policies
Fields in org.apache.hadoop.hbase.quotas.policies declared as TableName
Modifier and Type | Field | Description
(package private) TableName | AbstractViolationPolicyEnforcement.tableName
Methods in org.apache.hadoop.hbase.quotas.policies that return TableName
Methods in org.apache.hadoop.hbase.quotas.policies with parameters of type TableName
Modifier and Type | Method | Description
void | AbstractViolationPolicyEnforcement.initialize(RegionServerServices rss, TableName tableName, SpaceQuotaSnapshot snapshot)
void | AbstractViolationPolicyEnforcement.setTableName(TableName tableName)
 - 
Uses of TableName in org.apache.hadoop.hbase.regionserver
Fields in org.apache.hadoop.hbase.regionserver declared as TableNameFields in org.apache.hadoop.hbase.regionserver with type parameters of type TableNameMethods in org.apache.hadoop.hbase.regionserver that return TableNameModifier and TypeMethodDescriptionHStore.getTableName()Store.getTableName()StoreContext.getTableName()Methods in org.apache.hadoop.hbase.regionserver that return types with arguments of type TableNameModifier and TypeMethodDescriptionHRegionServer.getOnlineTables()Gets the online tables in this RS.Methods in org.apache.hadoop.hbase.regionserver with parameters of type TableNameModifier and TypeMethodDescriptionprivate org.apache.hadoop.fs.PathSecureBulkLoadManager.createStagingDir(org.apache.hadoop.fs.Path baseDir, User user, TableName tableName) List<org.apache.hadoop.fs.Path>HMobStore.getLocations(TableName tableName) HRegionServer.getRegions(TableName tableName) Gets the online regions of the specified table.OnlineRegions.getRegions(TableName tableName) Get all online regions of a table in this RS.booleanHRegionServer.reportFileArchivalForQuotas(TableName tableName, Collection<Map.Entry<String, Long>> archivedFiles) booleanRegionServerServices.reportFileArchivalForQuotas(TableName tableName, Collection<Map.Entry<String, Long>> archivedFiles) Reports a collection of files, and their sizes, that belonged to the giventablewere just moved to the archive directory.Method parameters in org.apache.hadoop.hbase.regionserver with type arguments of type TableNameModifier and TypeMethodDescriptionvoidStoreFileWriter.appendMobMetadata(org.apache.hbase.thirdparty.com.google.common.collect.SetMultimap<TableName, String> mobRefSet) Appends MOB - specific metadata (even if it is empty)private voidRSRpcServices.executeOpenRegionProcedures(org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.OpenRegionRequest request, Map<TableName, TableDescriptor> tdCache)  - 
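Server-side code usually reaches OnlineRegions.getRegions(TableName) through the coprocessor environment. The sketch below is illustrative only: the class name and the console output are placeholders, and it assumes the RegionCoprocessorEnvironment methods shown are available in the running HBase 2.x version. It lists the regions of the same table that are online on this RegionServer when one of them opens.

  import java.util.List;
  import java.util.Optional;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.coprocessor.ObserverContext;
  import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
  import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
  import org.apache.hadoop.hbase.coprocessor.RegionObserver;
  import org.apache.hadoop.hbase.regionserver.Region;

  public class TableRegionLogger implements RegionCoprocessor, RegionObserver {
    @Override
    public Optional<RegionObserver> getRegionObserver() {
      return Optional.of(this);
    }

    @Override
    public void postOpen(ObserverContext<RegionCoprocessorEnvironment> ctx) {
      // Table the just-opened region belongs to.
      TableName table = ctx.getEnvironment().getRegionInfo().getTable();
      // All regions of that table currently online on this RegionServer.
      List<? extends Region> siblings = ctx.getEnvironment().getOnlineRegions().getRegions(table);
      System.out.println(table + " has " + siblings.size() + " region(s) on this server");
    }
  }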
Uses of TableName in org.apache.hadoop.hbase.regionserver.metrics
Fields in org.apache.hadoop.hbase.regionserver.metrics declared as TableName
Methods in org.apache.hadoop.hbase.regionserver.metrics with parameters of type TableName
Modifier and Type | Method | Description
private void |
private static String | MetricsTableRequests.qualifyMetrics(String prefix, TableName tableName)
Constructors in org.apache.hadoop.hbase.regionserver.metrics with parameters of type TableName
Modifier | Constructor | Description
MetricsTableRequests(TableName tableName, org.apache.hadoop.conf.Configuration conf)
 - 
Uses of TableName in org.apache.hadoop.hbase.regionserver.storefiletracker
Fields in org.apache.hadoop.hbase.regionserver.storefiletracker declared as TableName
Methods in org.apache.hadoop.hbase.regionserver.storefiletracker that return TableName
Constructors in org.apache.hadoop.hbase.regionserver.storefiletracker with parameters of type TableName
Modifier | Constructor | Description
InitializeStoreFileTrackerProcedure(MasterProcedureEnv env, TableName tableName)
ModifyColumnFamilyStoreFileTrackerProcedure(MasterProcedureEnv env, TableName tableName, byte[] family, String dstSFT)
protected | ModifyStoreFileTrackerProcedure(MasterProcedureEnv env, TableName tableName, String dstSFT)
ModifyTableStoreFileTrackerProcedure(MasterProcedureEnv env, TableName tableName, String dstSFT)
 - 
Uses of TableName in org.apache.hadoop.hbase.regionserver.wal
Fields in org.apache.hadoop.hbase.regionserver.wal with type parameters of type TableName
Modifier and Type | Field | Description
private final ConcurrentMap<TableName,MutableFastCounter> | MetricsWALSourceImpl.perTableAppendCount
private final ConcurrentMap<TableName,MutableFastCounter> | MetricsWALSourceImpl.perTableAppendSize
Methods in org.apache.hadoop.hbase.regionserver.wal with parameters of type TableName
Modifier and Type | Method | Description
void | MetricsWALSource.incrementAppendCount(TableName tableName) | Increment the count of wal appends
void | MetricsWALSourceImpl.incrementAppendCount(TableName tableName)
void | MetricsWALSource.incrementAppendSize(TableName tableName, long size) | Add the append size.
void | MetricsWALSourceImpl.incrementAppendSize(TableName tableName, long size)
 - 
Uses of TableName in org.apache.hadoop.hbase.replication
Fields in org.apache.hadoop.hbase.replication with type parameters of type TableNameModifier and TypeFieldDescriptionprivate Map<TableName,? extends Collection<String>> ReplicationPeerConfig.excludeTableCFsMapReplicationPeerConfig.ReplicationPeerConfigBuilderImpl.excludeTableCFsMapReplicationPeerConfig.ReplicationPeerConfigBuilderImpl.tableCFsMapprivate Map<TableName,? extends Collection<String>> ReplicationPeerConfig.tableCFsMapMethods in org.apache.hadoop.hbase.replication that return types with arguments of type TableNameModifier and TypeMethodDescriptionReplicationPeerConfig.getExcludeTableCFsMap()ReplicationPeer.getTableCFs()Get replicable (table, cf-list) map of this peerReplicationPeerImpl.getTableCFs()Get replicable (table, cf-list) map of this peerReplicationPeerConfig.getTableCFsMap()ReplicationPeerConfig.unmodifiableTableCFsMap(Map<TableName, List<String>> tableCFsMap) Methods in org.apache.hadoop.hbase.replication with parameters of type TableNameModifier and TypeMethodDescriptionstatic booleanReplicationUtils.contains(ReplicationPeerConfig peerConfig, TableName tableName) Deprecated.Will be removed in HBase 3.booleanReplicationPeerConfig.needToReplicate(TableName table) Decide whether the table need replicate to the peer clusterbooleanReplicationPeerConfig.needToReplicate(TableName table, byte[] family) Decide whether the passed family of the table need replicate to the peer cluster according to this peer config.Method parameters in org.apache.hadoop.hbase.replication with type arguments of type TableNameModifier and TypeMethodDescriptionprivate static booleanReplicationUtils.isTableCFsEqual(Map<TableName, List<String>> tableCFs1, Map<TableName, List<String>> tableCFs2) ReplicationPeerConfig.ReplicationPeerConfigBuilderImpl.setExcludeTableCFsMap(Map<TableName, List<String>> excludeTableCFsMap) ReplicationPeerConfig.setExcludeTableCFsMap(Map<TableName, ? extends Collection<String>> tableCFsMap) Deprecated.as release of 2.0.0, and it will be removed in 3.0.0.ReplicationPeerConfigBuilder.setExcludeTableCFsMap(Map<TableName, List<String>> tableCFsMap) Sets the mapping of table name to column families which should not be replicated.ReplicationPeerConfig.ReplicationPeerConfigBuilderImpl.setTableCFsMap(Map<TableName, List<String>> tableCFsMap) ReplicationPeerConfig.setTableCFsMap(Map<TableName, ? extends Collection<String>> tableCFsMap) Deprecated.as release of 2.0.0, and it will be removed in 3.0.0.ReplicationPeerConfigBuilder.setTableCFsMap(Map<TableName, List<String>> tableCFsMap) Sets an explicit map of tables and column families in those tables that should be replicated to the given peer.ReplicationPeerConfig.unmodifiableTableCFsMap(Map<TableName, List<String>> tableCFsMap)  - 
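ReplicationPeerConfig keys its per-table column-family map by TableName. A short sketch follows, assuming a reachable peer cluster; the peer id, cluster key, table, and family names are placeholders. It replicates only cf1 of ns:t1 to a newly added peer.

  import java.util.Collections;
  import java.util.HashMap;
  import java.util.List;
  import java.util.Map;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;

  public class AddTableCfPeer {
    public static void main(String[] args) throws Exception {
      Map<TableName, List<String>> tableCfs = new HashMap<>();
      tableCfs.put(TableName.valueOf("ns:t1"), Collections.singletonList("cf1")); // placeholders
      ReplicationPeerConfig peerConfig = ReplicationPeerConfig.newBuilder()
          .setClusterKey("zk1:2181:/hbase")      // placeholder peer cluster key
          .setReplicateAllUserTables(false)      // replicate only the mapped tables
          .setTableCFsMap(tableCfs)
          .build();
      try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
           Admin admin = conn.getAdmin()) {
        admin.addReplicationPeer("peer1", peerConfig); // peer id is a placeholder
        // peerConfig.needToReplicate(TableName.valueOf("ns:t1")) now reports true for this table.
      }
    }
  }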
Uses of TableName in org.apache.hadoop.hbase.replication.master
Fields in org.apache.hadoop.hbase.replication.master declared as TableName
Modifier and Type | Field | Description
static final TableName | ReplicationSinkTrackerTableCreator.REPLICATION_SINK_TRACKER_TABLE_NAME | ReplicationSinkTrackerTableCreator.REPLICATION_SINK_TRACKER_TABLE_NAME_STR table name - can be enabled with config - hbase.regionserver.replication.sink.tracker.enabled
 - 
Uses of TableName in org.apache.hadoop.hbase.replication.regionserver
Fields in org.apache.hadoop.hbase.replication.regionserver with type parameters of type TableNameModifier and TypeFieldDescriptionRegionReplicaReplicationEndpoint.RegionReplicaSinkWriter.disabledAndDroppedTablesRegionReplicaReplicationEndpoint.RegionReplicaOutputSink.memstoreReplicationEnabledMethods in org.apache.hadoop.hbase.replication.regionserver with parameters of type TableNameModifier and TypeMethodDescriptionvoidReplicationSource.addHFileRefs(TableName tableName, byte[] family, List<Pair<org.apache.hadoop.fs.Path, org.apache.hadoop.fs.Path>> pairs) voidReplicationSourceInterface.addHFileRefs(TableName tableName, byte[] family, List<Pair<org.apache.hadoop.fs.Path, org.apache.hadoop.fs.Path>> pairs) Add hfile names to the queue to be replicated.voidReplicationSourceManager.addHFileRefs(TableName tableName, byte[] family, List<Pair<org.apache.hadoop.fs.Path, org.apache.hadoop.fs.Path>> pairs) (package private) voidReplication.addHFileRefsToQueue(TableName tableName, byte[] family, List<Pair<org.apache.hadoop.fs.Path, org.apache.hadoop.fs.Path>> pairs) voidRegionReplicaReplicationEndpoint.RegionReplicaSinkWriter.append(TableName tableName, byte[] encodedRegionName, byte[] row, List<WAL.Entry> entries) private voidReplicationSink.batch(TableName tableName, Collection<List<Row>> allRows, int batchRowSizeThreshold) Do the changes and handle the poolprivate voidReplicationSink.buildBulkLoadHFileMap(Map<String, List<Pair<byte[], List<String>>>> bulkLoadHFileMap, TableName table, org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.BulkLoadDescriptor bld) private org.apache.hadoop.fs.PathHFileReplicator.createStagingDir(org.apache.hadoop.fs.Path baseDir, User user, TableName tableName) booleanprivate StringReplicationSink.getHFilePath(TableName table, org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.BulkLoadDescriptor bld, String storeFile, byte[] family) private booleanRegionReplicaReplicationEndpoint.RegionReplicaOutputSink.requiresReplication(TableName tableName, List<WAL.Entry> entries) returns true if the specified entry must be replicated.Constructors in org.apache.hadoop.hbase.replication.regionserver with parameters of type TableNameModifierConstructorDescriptionRegionReplicaReplayCallable(ClusterConnection connection, RpcControllerFactory rpcControllerFactory, TableName tableName, HRegionLocation location, RegionInfo regionInfo, byte[] row, List<WAL.Entry> entries, AtomicLong skippedEntries)  - 
Uses of TableName in org.apache.hadoop.hbase.rest
Methods in org.apache.hadoop.hbase.rest with parameters of type TableName
Modifier and Type | Method | Description
private org.apache.hbase.thirdparty.javax.ws.rs.core.Response | SchemaResource.replace(TableName name, TableSchemaModel model, org.apache.hbase.thirdparty.javax.ws.rs.core.UriInfo uriInfo, Admin admin)
private org.apache.hbase.thirdparty.javax.ws.rs.core.Response | SchemaResource.update(TableName name, TableSchemaModel model, org.apache.hbase.thirdparty.javax.ws.rs.core.UriInfo uriInfo, Admin admin)
 - 
Uses of TableName in org.apache.hadoop.hbase.rsgroup
Fields in org.apache.hadoop.hbase.rsgroup declared as TableNameFields in org.apache.hadoop.hbase.rsgroup with type parameters of type TableNameMethods in org.apache.hadoop.hbase.rsgroup that return types with arguments of type TableNameModifier and TypeMethodDescriptionprivate Pair<Map<TableName,Map<ServerName, List<RegionInfo>>>, List<RegionPlan>> RSGroupBasedLoadBalancer.correctAssignments(Map<TableName, Map<ServerName, List<RegionInfo>>> existingAssignments) RSGroupInfoManagerImpl.flushConfigTable(Map<String, RSGroupInfo> groupMap) (package private) Map<TableName,Map<ServerName, List<RegionInfo>>> RSGroupAdminServer.getRSGroupAssignmentsByTable(TableStateManager tableStateManager, String groupName) This is an EXPENSIVE clone.RSGroupInfo.getTables()Get set of tables that are members of the group.Methods in org.apache.hadoop.hbase.rsgroup with parameters of type TableNameModifier and TypeMethodDescriptionvoidbooleanRSGroupInfo.containsTable(TableName table) RSGroupInfoManager.determineRSGroupInfoForTable(TableName tableName) DetermineRSGroupInfofor the given table.RSGroupInfoManagerImpl.determineRSGroupInfoForTable(TableName tableName) Will try to get the rsgroup fromtableMapfirst then try to get the rsgroup fromscripttry to get the rsgroup from theNamespaceDescriptorlastly.RSGroupAdmin.getRSGroupInfoOfTable(TableName tableName) GetsRSGroupInfofor the given table's group.RSGroupAdminClient.getRSGroupInfoOfTable(TableName tableName) RSGroupAdminServer.getRSGroupInfoOfTable(TableName tableName) RSGroupInfoManager.getRSGroupOfTable(TableName tableName) Get the group membership of a tableRSGroupInfoManagerImpl.getRSGroupOfTable(TableName tableName) voidRSGroupAdminEndpoint.postCompletedModifyTableAction(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, TableDescriptor oldDescriptor, TableDescriptor currentDescriptor) voidRSGroupAdminEndpoint.postDeleteTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) voidRSGroupAdminEndpoint.preModifyTableAction(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, TableDescriptor currentDescriptor, TableDescriptor newDescriptor) booleanRSGroupInfo.removeTable(TableName table) Method parameters in org.apache.hadoop.hbase.rsgroup with type arguments of type TableNameModifier and TypeMethodDescriptionvoidRSGroupInfo.addAllTables(Collection<TableName> arg) RSGroupBasedLoadBalancer.balanceCluster(Map<TableName, Map<ServerName, List<RegionInfo>>> loadOfAllTable) Balance by RSGroup.private Pair<Map<TableName,Map<ServerName, List<RegionInfo>>>, List<RegionPlan>> RSGroupBasedLoadBalancer.correctAssignments(Map<TableName, Map<ServerName, List<RegionInfo>>> existingAssignments) (package private) voidRSGroupAdminServer.modifyOrMoveTables(Set<TableName> tables, RSGroupInfo targetGroup) private voidRSGroupAdminServer.moveTableRegionsToGroup(Set<TableName> tables, RSGroupInfo targetGrp) Moves regions of tables which are not on target group servers.voidRSGroupAdmin.moveTables(Set<TableName> tables, String targetGroup) Move given set of tables to the specified target RegionServer group.voidRSGroupAdminClient.moveTables(Set<TableName> tables, String targetGroup) voidRSGroupAdminServer.moveTables(Set<TableName> tables, String targetGroup) voidRSGroupInfoManager.moveTables(Set<TableName> tableNames, String groupName) Set the group membership of a set of tablesvoidRSGroupInfoManagerImpl.moveTables(Set<TableName> tableNames, String groupName) voidRSGroupBasedLoadBalancer.updateBalancerLoadInfo(Map<TableName, 
Map<ServerName, List<RegionInfo>>> loadOfAllTable)  - 
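RegionServer group membership is tracked per TableName, and the RSGroupAdminClient methods listed above take table names directly. A hedged sketch, assuming the rsgroup coprocessors are installed on the cluster; the group name and table are placeholders. It moves a table into a group and reads the assignment back.

  import java.util.Collections;
  import java.util.Set;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
  import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

  public class MoveTableToGroup {
    public static void main(String[] args) throws Exception {
      try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        Set<TableName> tables = Collections.singleton(TableName.valueOf("ns:t1")); // placeholder
        rsGroupAdmin.moveTables(tables, "group_a");                                 // placeholder group
        RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("ns:t1"));
        System.out.println(info.getName() + " now contains " + info.getTables());
      }
    }
  }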
Uses of TableName in org.apache.hadoop.hbase.security.access
Fields in org.apache.hadoop.hbase.security.access declared as TableNameModifier and TypeFieldDescriptionstatic final TableNameAccessControlClient.ACL_TABLE_NAMEstatic final TableNamePermissionStorage.ACL_TABLE_NAMEInternal storage table for access control listsprivate TableNameAccessControlFilter.tableprivate final TableNameAuthResult.tableprivate TableNameTablePermission.tableprivate TableNameAuthResult.Params.tableNameprivate TableNameGetUserPermissionsRequest.Builder.tableNameprivate TableNameGetUserPermissionsRequest.tableNameprivate TableNamePermission.Builder.tableNameFields in org.apache.hadoop.hbase.security.access with type parameters of type TableNameModifier and TypeFieldDescriptionprivate Map<TableName,List<UserPermission>> AccessController.tableAclsAuthManager.tableCacheCache for table permission.Methods in org.apache.hadoop.hbase.security.access that return TableNameModifier and TypeMethodDescriptionprivate TableNameAccessController.getTableName(RegionCoprocessorEnvironment e) private TableNameAccessController.getTableName(Region region) AuthResult.getTableName()GetUserPermissionsRequest.getTableName()TablePermission.getTableName()static TableNameShadedAccessControlUtil.toTableName(org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.TableName tableNamePB) Methods in org.apache.hadoop.hbase.security.access that return types with arguments of type TableNameModifier and TypeMethodDescriptionSnapshotScannerHDFSAclController.SnapshotScannerHDFSAclStorage.getUserNamespaceAndTable(Table aclTable, String userName) Methods in org.apache.hadoop.hbase.security.access with parameters of type TableNameModifier and TypeMethodDescriptionbooleanAuthManager.accessUserTable(User user, TableName table, Permission.Action action) Checks if the user has access to the full table or at least a family/qualifier for the specified action.booleanSnapshotScannerHDFSAclHelper.addTableAcl(TableName tableName, Set<String> users, String operation) Add table user acls(package private) static voidSnapshotScannerHDFSAclController.SnapshotScannerHDFSAclStorage.addUserTableHdfsAcl(Connection connection, String user, TableName tableName) (package private) static voidSnapshotScannerHDFSAclController.SnapshotScannerHDFSAclStorage.addUserTableHdfsAcl(Connection connection, Set<String> users, TableName tableName) (package private) static voidSnapshotScannerHDFSAclController.SnapshotScannerHDFSAclStorage.addUserTableHdfsAcl(Table aclTable, String user, TableName tableName) static AuthResultAuthResult.allow(String request, String reason, User user, Permission.Action action, TableName table, byte[] family, byte[] qualifier) static AuthResultAuthResult.allow(String request, String reason, User user, Permission.Action action, TableName table, Map<byte[], ? 
extends Collection<?>> families) booleanAuthManager.authorizeCell(User user, TableName table, Cell cell, Permission.Action action) Check if user has given action privilige in cell scope.private booleanAuthManager.authorizeFamily(Set<TablePermission> permissions, TableName table, byte[] family, Permission.Action action) private booleanAuthManager.authorizeTable(Set<TablePermission> permissions, TableName table, byte[] family, byte[] qualifier, Permission.Action action) booleanAuthManager.authorizeUserFamily(User user, TableName table, byte[] family, Permission.Action action) Check if user has given action privilige in table:family scope.booleanAuthManager.authorizeUserTable(User user, TableName table, byte[] family, byte[] qualifier, Permission.Action action) Check if user has given action privilige in table:family:qualifier scope.booleanAuthManager.authorizeUserTable(User user, TableName table, byte[] family, Permission.Action action) Check if user has given action privilige in table:family scope.booleanAuthManager.authorizeUserTable(User user, TableName table, Permission.Action action) Check if user has given action privilige in table scope.static org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.GrantRequestAccessControlUtil.buildGrantRequest(String username, TableName tableName, byte[] family, byte[] qualifier, boolean mergeExistingPermissions, org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.Permission.Action... actions) Create a request to grant user table permissions.static org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.RevokeRequestAccessControlUtil.buildRevokeRequest(String username, TableName tableName, byte[] family, byte[] qualifier, org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.Permission.Action... actions) Create a request to revoke user table permissions.voidAccessChecker.checkLockPermissions(User user, String namespace, TableName tableName, RegionInfo[] regionInfos, String reason) voidAccessController.checkLockPermissions(ObserverContext<?> ctx, String namespace, TableName tableName, RegionInfo[] regionInfos, String reason) voidNoopAccessChecker.checkLockPermissions(User user, String namespace, TableName tableName, RegionInfo[] regionInfos, String reason) (package private) voidSnapshotScannerHDFSAclHelper.createTableDirectories(TableName tableName) voidZKPermissionWatcher.deleteTableACLNode(TableName tableName) Delete the acl notify node of table(package private) static voidSnapshotScannerHDFSAclController.SnapshotScannerHDFSAclStorage.deleteTableHdfsAcl(Table aclTable, TableName tableName) (package private) static voidSnapshotScannerHDFSAclController.SnapshotScannerHDFSAclStorage.deleteUserTableHdfsAcl(Connection connection, Set<String> users, TableName tableName) (package private) static voidSnapshotScannerHDFSAclController.SnapshotScannerHDFSAclStorage.deleteUserTableHdfsAcl(Table aclTable, String user, TableName tableName) static AuthResultAuthResult.deny(String request, String reason, User user, Permission.Action action, TableName table, byte[] family, byte[] qualifier) static AuthResultAuthResult.deny(String request, String reason, User user, Permission.Action action, TableName table, Map<byte[], ? 
extends Collection<?>> families) private booleanTablePermission.failCheckTable(TableName table) SnapshotScannerHDFSAclController.filterUsersToRemoveNsAccessAcl(Table aclTable, TableName tableName, Set<String> tablesUsers) Remove table user access HDFS acl from namespace directory if the user has no permissions of global, ns of the table or other tables of the ns, eg: Bob has 'ns1:t1' read permission, when delete 'ns1:t1', if Bob has global read permission, '@ns1' read permission or 'ns1:other_tables' read permission, then skip remove Bob access acl in ns1Dirs, otherwise, remove Bob access acl.(package private) org.apache.hadoop.fs.PathSnapshotScannerHDFSAclHelper.PathHelper.getArchiveTableDir(TableName tableName) (package private) org.apache.hadoop.fs.PathSnapshotScannerHDFSAclHelper.PathHelper.getDataTableDir(TableName tableName) (package private) org.apache.hadoop.fs.PathSnapshotScannerHDFSAclHelper.PathHelper.getMobTableDir(TableName tableName) static org.apache.hbase.thirdparty.com.google.common.collect.ListMultimap<String,UserPermission> PermissionStorage.getTablePermissions(org.apache.hadoop.conf.Configuration conf, TableName tableName) (package private) List<org.apache.hadoop.fs.Path>SnapshotScannerHDFSAclHelper.getTableRootPaths(TableName tableName, boolean includeSnapshotPath) return paths that user will table permission will visitprivate List<org.apache.hadoop.fs.Path>SnapshotScannerHDFSAclHelper.getTableSnapshotPaths(TableName tableName) SnapshotScannerHDFSAclController.SnapshotScannerHDFSAclStorage.getTableUsers(Table aclTable, TableName tableName) (package private) org.apache.hadoop.fs.PathSnapshotScannerHDFSAclHelper.PathHelper.getTmpTableDir(TableName tableName) static List<UserPermission>AccessControlUtil.getUserPermissions(com.google.protobuf.RpcController controller, org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.AccessControlService.BlockingInterface protocol, TableName t) Deprecated.UseAdmin.getUserPermissions(GetUserPermissionsRequest)instead.static List<UserPermission>AccessControlUtil.getUserPermissions(com.google.protobuf.RpcController controller, org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.AccessControlService.BlockingInterface protocol, TableName t, byte[] columnFamily, byte[] columnQualifier, String userName) Deprecated.UseAdmin.getUserPermissions(GetUserPermissionsRequest)instead.SnapshotScannerHDFSAclHelper.getUsersWithTableReadAction(TableName tableName, boolean includeNamespace, boolean includeGlobal) Return users with table read permissionprivate UserPermissionSnapshotScannerHDFSAclController.getUserTablePermission(org.apache.hadoop.conf.Configuration conf, String userName, TableName tableName) static List<UserPermission>PermissionStorage.getUserTablePermissions(org.apache.hadoop.conf.Configuration conf, TableName tableName, byte[] cf, byte[] cq, String userName, boolean hasFilterUser) Returns the currently granted permissions for a given table as the specified user plus associated permissions.private static voidAccessControlClient.grant(Connection connection, TableName tableName, String userName, byte[] family, byte[] qual, boolean mergeExistingPermissions, Permission.Action... actions) Grants permission on the specified table for the specified userstatic voidAccessControlClient.grant(Connection connection, TableName tableName, String userName, byte[] family, byte[] qual, Permission.Action... 
actions) Grants permission on the specified table for the specified user.static voidAccessControlUtil.grant(com.google.protobuf.RpcController controller, org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.AccessControlService.BlockingInterface protocol, String userShortName, TableName tableName, byte[] f, byte[] q, boolean mergeExistingPermissions, Permission.Action... actions) Deprecated.UseAdmin.grant(UserPermission, boolean)instead.static booleanAccessControlUtil.hasPermission(com.google.protobuf.RpcController controller, org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.AccessControlService.BlockingInterface protocol, TableName tableName, byte[] columnFamily, byte[] columnQualifier, String userName, Permission.Action[] actions) Deprecated.UseAdmin.hasUserPermissions(String, List)instead.(package private) static booleanSnapshotScannerHDFSAclController.SnapshotScannerHDFSAclStorage.hasUserTableHdfsAcl(Table aclTable, String user, TableName tableName) booleanTablePermission.implies(TableName table, byte[] family, byte[] qualifier, Permission.Action action) Check if given action can performs on given table:family:qualifier.booleanTablePermission.implies(TableName table, byte[] family, Permission.Action action) Check if given action can performs on given table:family.booleanTablePermission.implies(TableName table, KeyValue kv, Permission.Action action) Checks if this permission grants access to perform the given action on the given table and key value.private booleanSnapshotScannerHDFSAclController.isHdfsAclSet(Table aclTable, String userName, String namespace, TableName tableName) Check if user global/namespace/table HDFS acls is already setprivate booleanSnapshotScannerHDFSAclController.isHdfsAclSet(Table aclTable, String userName, TableName tableName) private booleanSnapshotScannerHDFSAclController.needHandleTableHdfsAcl(TableName tableName, String operation) GetUserPermissionsRequest.newBuilder(TableName tableName) Build a get table permission requeststatic Permission.BuilderPermission.newBuilder(TableName tableName) Build a table permissionprivate AuthResultAccessChecker.permissionGranted(String request, User user, Permission.Action permRequest, TableName tableName, byte[] family, byte[] qualifier) AccessChecker.permissionGranted(String request, User user, Permission.Action permRequest, TableName tableName, Map<byte[], ? extends Collection<?>> families) Check the current user for authorization to perform a specific action against the given set of row data.NoopAccessChecker.permissionGranted(String request, User user, Permission.Action permRequest, TableName tableName, Map<byte[], ? 
extends Collection<?>> families) voidSnapshotScannerHDFSAclController.postCompletedDeleteTableAction(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) voidSnapshotScannerHDFSAclController.postCompletedTruncateTableAction(ObserverContext<MasterCoprocessorEnvironment> c, TableName tableName) voidAccessController.postDeleteTable(ObserverContext<MasterCoprocessorEnvironment> c, TableName tableName) voidAccessController.postModifyTable(ObserverContext<MasterCoprocessorEnvironment> c, TableName tableName, TableDescriptor htd) voidSnapshotScannerHDFSAclController.postModifyTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, TableDescriptor oldDescriptor, TableDescriptor currentDescriptor) voidAccessController.postTruncateTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) voidAccessController.preDeleteTable(ObserverContext<MasterCoprocessorEnvironment> c, TableName tableName) voidAccessController.preDisableTable(ObserverContext<MasterCoprocessorEnvironment> c, TableName tableName) voidAccessController.preEnableTable(ObserverContext<MasterCoprocessorEnvironment> c, TableName tableName) voidAccessController.preGetUserPermissions(ObserverContext<MasterCoprocessorEnvironment> ctx, String userName, String namespace, TableName tableName, byte[] family, byte[] qualifier) private voidAccessController.preGetUserPermissions(User caller, String userName, String namespace, TableName tableName, byte[] family, byte[] qualifier) voidAccessController.preLockHeartbeat(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, String description) AccessController.preModifyColumnFamilyStoreFileTracker(ObserverContext<MasterCoprocessorEnvironment> c, TableName tableName, byte[] family, String dstSFT) AccessController.preModifyTable(ObserverContext<MasterCoprocessorEnvironment> c, TableName tableName, TableDescriptor currentDesc, TableDescriptor newDesc) CoprocessorWhitelistMasterObserver.preModifyTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, TableDescriptor currentDesc, TableDescriptor newDesc) AccessController.preModifyTableStoreFileTracker(ObserverContext<MasterCoprocessorEnvironment> c, TableName tableName, String dstSFT) voidAccessController.preRequestLock(ObserverContext<MasterCoprocessorEnvironment> ctx, String namespace, TableName tableName, RegionInfo[] regionInfos, String description) voidAccessController.preSetTableQuota(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, GlobalQuotaSettings quotas) voidAccessController.preSetUserQuota(ObserverContext<MasterCoprocessorEnvironment> ctx, String userName, TableName tableName, GlobalQuotaSettings quotas) voidAccessController.preSplitRegion(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, byte[] splitRow) voidAccessController.preTableFlush(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) voidAccessController.preTruncateTable(ObserverContext<MasterCoprocessorEnvironment> c, TableName tableName) voidAuthManager.refreshTableCacheFromWritable(TableName table, byte[] data) Update acl info for table.booleanSnapshotScannerHDFSAclHelper.removeNamespaceAccessAcl(TableName tableName, Set<String> removeUsers, String operation) Remove table access acl from namespace dir when delete tablevoidAuthManager.removeTable(TableName table) Remove given table from AuthManager's table cache.booleanSnapshotScannerHDFSAclHelper.removeTableAcl(TableName tableName, Set<String> users) Remove table acls when 
modify tablebooleanSnapshotScannerHDFSAclHelper.removeTableDefaultAcl(TableName tableName, Set<String> removeUsers) Remove default acl from table archive dir when delete table(package private) static voidPermissionStorage.removeTablePermissions(org.apache.hadoop.conf.Configuration conf, TableName tableName, byte[] column, Table t) Remove specified table column from the acl table.(package private) static voidPermissionStorage.removeTablePermissions(org.apache.hadoop.conf.Configuration conf, TableName tableName, Table t) Remove specified table from the _acl_ table.private static voidPermissionStorage.removeTablePermissions(TableName tableName, byte[] column, Table table, boolean closeTable) private voidSnapshotScannerHDFSAclController.removeUserTableHdfsAcl(Table aclTable, String userName, TableName tableName, UserPermission userPermission) voidAccessChecker.requireAccess(User user, String request, TableName tableName, Permission.Action... permissions) Authorizes that the current user has any of the given permissions to access the table.voidAccessController.requireAccess(ObserverContext<?> ctx, String request, TableName tableName, Permission.Action... permissions) voidNoopAccessChecker.requireAccess(User user, String request, TableName tableName, Permission.Action... permissions) voidAccessChecker.requireGlobalPermission(User user, String request, Permission.Action perm, TableName tableName, Map<byte[], ? extends Collection<byte[]>> familyMap, String filterUser) Checks that the user has the given global permission.voidAccessController.requireGlobalPermission(ObserverContext<?> ctx, String request, Permission.Action perm, TableName tableName, Map<byte[], ? extends Collection<byte[]>> familyMap) voidNoopAccessChecker.requireGlobalPermission(User user, String request, Permission.Action perm, TableName tableName, Map<byte[], ? extends Collection<byte[]>> familyMap, String filterUser) voidAccessChecker.requireNamespacePermission(User user, String request, String namespace, TableName tableName, Map<byte[], ? extends Collection<byte[]>> familyMap, Permission.Action... permissions) Checks that the user has the given global or namespace permission.voidAccessController.requireNamespacePermission(ObserverContext<?> ctx, String request, String namespace, TableName tableName, Map<byte[], ? extends Collection<byte[]>> familyMap, Permission.Action... permissions) voidNoopAccessChecker.requireNamespacePermission(User user, String request, String namespace, TableName tableName, Map<byte[], ? extends Collection<byte[]>> familyMap, Permission.Action... permissions) voidAccessChecker.requirePermission(User user, String request, TableName tableName, byte[] family, byte[] qualifier, String filterUser, Permission.Action... permissions) Authorizes that the current user has any of the given permissions for the given table, column family and column qualifier.voidAccessController.requirePermission(ObserverContext<?> ctx, String request, TableName tableName, byte[] family, byte[] qualifier, Permission.Action... permissions) voidNoopAccessChecker.requirePermission(User user, String request, TableName tableName, byte[] family, byte[] qualifier, String filterUser, Permission.Action... permissions) voidAccessChecker.requireTablePermission(User user, String request, TableName tableName, byte[] family, byte[] qualifier, Permission.Action... 
permissions) Authorizes that the current user has any of the given permissions for the given table, column family and column qualifier.voidAccessController.requireTablePermission(ObserverContext<?> ctx, String request, TableName tableName, byte[] family, byte[] qualifier, Permission.Action... permissions) voidNoopAccessChecker.requireTablePermission(User user, String request, TableName tableName, byte[] family, byte[] qualifier, Permission.Action... permissions) static voidAccessControlClient.revoke(Connection connection, TableName tableName, String username, byte[] family, byte[] qualifier, Permission.Action... actions) Revokes the permission on the tablestatic voidAccessControlUtil.revoke(com.google.protobuf.RpcController controller, org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.AccessControlService.BlockingInterface protocol, String userShortName, TableName tableName, byte[] f, byte[] q, Permission.Action... actions) Deprecated.UseAdmin.revoke(UserPermission)instead.AuthResult.Params.setTableName(TableName table) private booleanSnapshotScannerHDFSAclCleaner.tableExists(TableName tableName) static org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.TableNameShadedAccessControlUtil.toProtoTableName(TableName tableName) private voidAuthManager.updateTableCache(TableName table, org.apache.hbase.thirdparty.com.google.common.collect.ListMultimap<String, Permission> tablePerms) Updates the internal table permissions cache for specified table.Method parameters in org.apache.hadoop.hbase.security.access with type arguments of type TableNameModifier and TypeMethodDescriptionprivate voidSnapshotScannerHDFSAclHelper.handleTableAcl(Set<TableName> tableNames, Set<String> users, Set<String> skipNamespaces, Set<TableName> skipTables, SnapshotScannerHDFSAclHelper.HDFSAclOperation.OperationType operationType) voidAccessController.postGetTableDescriptors(ObserverContext<MasterCoprocessorEnvironment> ctx, List<TableName> tableNamesList, List<TableDescriptor> descriptors, String regex) voidAccessController.preGetTableDescriptors(ObserverContext<MasterCoprocessorEnvironment> ctx, List<TableName> tableNamesList, List<TableDescriptor> descriptors, String regex) Constructors in org.apache.hadoop.hbase.security.access with parameters of type TableNameModifierConstructorDescription(package private)AccessControlFilter(AuthManager mgr, User ugi, TableName tableName, AccessControlFilter.Strategy strategy, Map<ByteRange, Integer> cfVsMaxVersions) AuthResult(boolean allowed, String request, String reason, User user, Permission.Action action, TableName table, byte[] family, byte[] qualifier) AuthResult(boolean allowed, String request, String reason, User user, Permission.Action action, TableName table, Map<byte[], ? extends Collection<?>> families) privateprivateprivateGetUserPermissionsRequest(String userName, String namespace, TableName tableName, byte[] family, byte[] qualifier) (package private)TablePermission(TableName table, byte[] family, byte[] qualifier, Permission.Action... assigned) Construct a table:family:qualifier permission. - 
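The AccessControlClient grant/revoke helpers listed above act at table scope. A small sketch, assuming the AccessController coprocessor is enabled; the user name, table, and column family are placeholders. It grants read access on a single family and later revokes it.

  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.security.access.AccessControlClient;
  import org.apache.hadoop.hbase.security.access.Permission;
  import org.apache.hadoop.hbase.util.Bytes;

  public class GrantTableRead {
    public static void main(String[] args) throws Throwable {
      TableName table = TableName.valueOf("ns:t1"); // placeholder
      byte[] family = Bytes.toBytes("cf1");         // placeholder
      try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
        // Grant READ on ns:t1, family cf1 (all qualifiers) to user "bob".
        AccessControlClient.grant(conn, table, "bob", family, null, Permission.Action.READ);
        // ... later, take it back.
        AccessControlClient.revoke(conn, table, "bob", family, null, Permission.Action.READ);
      }
    }
  }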
Uses of TableName in org.apache.hadoop.hbase.security.visibility
Fields in org.apache.hadoop.hbase.security.visibility declared as TableName
Modifier and Type | Field | Description
static final TableName | VisibilityConstants.LABELS_TABLE_NAME | Internal storage table for visibility labels
Methods in org.apache.hadoop.hbase.security.visibility with parameters of type TableName
Modifier and Type | Method | Description
void | VisibilityController.preDisableTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName)
VisibilityController.preModifyTable(ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName, TableDescriptor currentDescriptor, TableDescriptor newDescriptor)
 - 
Uses of TableName in org.apache.hadoop.hbase.slowlog
Fields in org.apache.hadoop.hbase.slowlog declared as TableName
Modifier and Type | Field | Description
static final TableName | SlowLogTableAccessor.SLOW_LOG_TABLE_NAME | hbase:slowlog table name - can be enabled with config - hbase.regionserver.slowlog.systable.enabled
 - 
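Once hbase.regionserver.slowlog.systable.enabled is set, the hbase:slowlog system table can be read like any other table using the SLOW_LOG_TABLE_NAME constant above. A minimal sketch:

  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.client.ResultScanner;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.client.Table;
  import org.apache.hadoop.hbase.slowlog.SlowLogTableAccessor;

  public class ScanSlowLogTable {
    public static void main(String[] args) throws Exception {
      try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
           Table slowLog = conn.getTable(SlowLogTableAccessor.SLOW_LOG_TABLE_NAME);
           ResultScanner scanner = slowLog.getScanner(new Scan())) {
        for (Result row : scanner) {
          System.out.println(row); // each row is one persisted slow/large RPC record
        }
      }
    }
  }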
Uses of TableName in org.apache.hadoop.hbase.snapshot
Fields in org.apache.hadoop.hbase.snapshot declared as TableNameModifier and TypeFieldDescriptionprivate final TableNameRestoreSnapshotHelper.snapshotTableprivate final TableNameSnapshotInfo.SnapshotStats.snapshotTableprivate TableNameCreateSnapshot.tableNameMethods in org.apache.hadoop.hbase.snapshot with parameters of type TableNameModifier and TypeMethodDescriptionstatic RegionInfoRestoreSnapshotHelper.cloneRegionInfo(TableName tableName, RegionInfo snapshotRegionInfo) private static Pair<org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotFileInfo,Long> ExportSnapshot.getSnapshotFileAndSize(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.conf.Configuration conf, TableName table, String region, String family, String hfile, long size) static voidRestoreSnapshotHelper.restoreSnapshotAcl(org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotDescription snapshot, TableName newTableName, org.apache.hadoop.conf.Configuration conf) Constructors in org.apache.hadoop.hbase.snapshot with parameters of type TableName - 
Uses of TableName in org.apache.hadoop.hbase.thrift
Methods in org.apache.hadoop.hbase.thrift that return TableName
Modifier and Type | Method | Description
private static TableName | ThriftHBaseServiceHandler.getTableName(ByteBuffer buffer)
 - 
Uses of TableName in org.apache.hadoop.hbase.thrift2
Methods in org.apache.hadoop.hbase.thrift2 that return TableName
Modifier and Type | Method | Description
static TableName | ThriftUtilities.tableNameFromThrift(org.apache.hadoop.hbase.thrift2.generated.TTableName tableName)
static TableName[] | ThriftUtilities.tableNamesArrayFromThrift(List<org.apache.hadoop.hbase.thrift2.generated.TTableName> tableNames)
Methods in org.apache.hadoop.hbase.thrift2 that return types with arguments of type TableName
Modifier and Type | Method | Description
ThriftUtilities.tableNamesFromThrift(List<org.apache.hadoop.hbase.thrift2.generated.TTableName> tableNames)
Methods in org.apache.hadoop.hbase.thrift2 with parameters of type TableName
Modifier and Type | Method | Description
static org.apache.hadoop.hbase.thrift2.generated.TTableName | ThriftUtilities.tableNameFromHBase(TableName table)
static List<org.apache.hadoop.hbase.thrift2.generated.TTableName> | ThriftUtilities.tableNamesFromHBase(TableName[] in)
Method parameters in org.apache.hadoop.hbase.thrift2 with type arguments of type TableName
Modifier and Type | Method | Description
static List<org.apache.hadoop.hbase.thrift2.generated.TTableName> | ThriftUtilities.tableNamesFromHBase(List<TableName> in)
 - 
Uses of TableName in org.apache.hadoop.hbase.thrift2.client
Fields in org.apache.hadoop.hbase.thrift2.client declared as TableNameMethods in org.apache.hadoop.hbase.thrift2.client that return TableNameModifier and TypeMethodDescriptionThriftTable.getName()ThriftAdmin.listTableNames()ThriftAdmin.listTableNames(String regex) ThriftAdmin.listTableNames(String regex, boolean includeSysTables) ThriftAdmin.listTableNames(Pattern pattern) ThriftAdmin.listTableNames(Pattern pattern, boolean includeSysTables) ThriftAdmin.listTableNamesByNamespace(String name) Methods in org.apache.hadoop.hbase.thrift2.client that return types with arguments of type TableNameModifier and TypeMethodDescriptionThriftAdmin.getRegionServerSpaceQuotaSnapshots(ServerName serverName) ThriftAdmin.getSpaceQuotaTableSizes()ThriftAdmin.listTableNamesByState(boolean isEnabled) Methods in org.apache.hadoop.hbase.thrift2.client with parameters of type TableNameModifier and TypeMethodDescriptionvoidThriftAdmin.addColumnFamily(TableName tableName, ColumnFamilyDescriptor columnFamily) ThriftAdmin.addColumnFamilyAsync(TableName tableName, ColumnFamilyDescriptor columnFamily) ThriftAdmin.clearBlockCache(TableName tableName) ThriftAdmin.cloneSnapshotAsync(String snapshotName, TableName tableName, boolean cloneAcl, String customSFT) voidThriftAdmin.cloneTableSchema(TableName tableName, TableName newTableName, boolean preserveSplits) voidvoidvoidThriftAdmin.compact(TableName tableName, byte[] columnFamily, CompactType compactType) voidThriftAdmin.compact(TableName tableName, CompactType compactType) voidThriftAdmin.deleteColumn(TableName tableName, byte[] columnFamily) voidThriftAdmin.deleteColumnFamily(TableName tableName, byte[] columnFamily) ThriftAdmin.deleteColumnFamilyAsync(TableName tableName, byte[] columnFamily) voidThriftAdmin.deleteTable(TableName tableName) ThriftAdmin.deleteTableAsync(TableName tableName) voidThriftAdmin.disableTable(TableName tableName) ThriftAdmin.disableTableAsync(TableName tableName) voidThriftAdmin.disableTableReplication(TableName tableName) voidThriftAdmin.enableTable(TableName tableName) ThriftAdmin.enableTableAsync(TableName tableName) voidThriftAdmin.enableTableReplication(TableName tableName) voidvoidvoidThriftAdmin.flushAsync(TableName tableName, List<byte[]> columnFamilies) ThriftAdmin.getAlterStatus(TableName tableName) ThriftConnection.getBufferedMutator(TableName tableName) ThriftAdmin.getCompactionState(TableName tableName) ThriftAdmin.getCompactionState(TableName tableName, CompactType compactType) ThriftAdmin.getCurrentSpaceQuotaSnapshot(TableName tableName) ThriftAdmin.getDescriptor(TableName tableName) longThriftAdmin.getLastMajorCompactionTimestamp(TableName tableName) ThriftConnection.getRegionLocator(TableName tableName) ThriftAdmin.getRegionMetrics(ServerName serverName, TableName tableName) ThriftAdmin.getRegions(TableName tableName) ThriftConnection.getTableBuilder(TableName tableName, ExecutorService pool) Get a TableBuider to build ThriftTable, ThriftTable is NOT thread safeThriftAdmin.getTableDescriptor(TableName tableName) ThriftAdmin.getTableRegions(TableName tableName) booleanThriftAdmin.isTableAvailable(TableName tableName) booleanThriftAdmin.isTableAvailable(TableName tableName, byte[][] splitKeys) booleanThriftAdmin.isTableDisabled(TableName tableName) booleanThriftAdmin.isTableEnabled(TableName tableName) voidThriftAdmin.majorCompact(TableName tableName) voidThriftAdmin.majorCompact(TableName tableName, byte[] columnFamily) voidThriftAdmin.majorCompact(TableName tableName, byte[] columnFamily, CompactType compactType) 
voidThriftAdmin.majorCompact(TableName tableName, CompactType compactType) voidThriftAdmin.modifyColumnFamily(TableName tableName, ColumnFamilyDescriptor columnFamily) ThriftAdmin.modifyColumnFamilyAsync(TableName tableName, ColumnFamilyDescriptor columnFamily) ThriftAdmin.modifyColumnFamilyStoreFileTrackerAsync(TableName tableName, byte[] family, String dstSFT) voidThriftAdmin.modifyTable(TableName tableName, TableDescriptor td) ThriftAdmin.modifyTableAsync(TableName tableName, TableDescriptor td) ThriftAdmin.modifyTableStoreFileTrackerAsync(TableName tableName, String dstSFT) voidvoidvoidThriftAdmin.snapshot(String snapshotName, TableName tableName, SnapshotType type) voidvoidbooleanThriftAdmin.tableExists(TableName tableName) voidThriftAdmin.truncateTable(TableName tableName, boolean preserveSplits) ThriftAdmin.truncateTableAsync(TableName tableName, boolean preserveSplits) Method parameters in org.apache.hadoop.hbase.thrift2.client with type arguments of type TableNameModifier and TypeMethodDescriptionThriftAdmin.getTableDescriptorsByTableName(List<TableName> tableNames) ThriftAdmin.listTableDescriptors(List<TableName> tableNames) Constructors in org.apache.hadoop.hbase.thrift2.client with parameters of type TableNameModifierConstructorDescriptionThriftTable(TableName tableName, org.apache.hadoop.hbase.thrift2.generated.THBaseService.Client client, org.apache.thrift.transport.TTransport tTransport, org.apache.hadoop.conf.Configuration conf)  - 
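ThriftAdmin implements the regular Admin surface, so the table-lifecycle calls above take TableName exactly as the RPC client does. A hedged sketch follows; it assumes the client is routed to the thrift2 implementation via configuration, and the host, port, and table name are placeholders (the two hbase.thrift.server.* keys are assumptions; verify them against your HBase version).

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;

  public class ThriftAdminExample {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      // Route the client through the thrift2 implementation instead of the default RPC client.
      conf.set("hbase.client.connection.impl",
          "org.apache.hadoop.hbase.thrift2.client.ThriftConnection");
      conf.set("hbase.thrift.server.name", "thrift-host.example.com"); // assumed key, placeholder host
      conf.setInt("hbase.thrift.server.port", 9090);                   // assumed key, placeholder port

      TableName table = TableName.valueOf("ns:t1"); // placeholder
      try (Connection conn = ConnectionFactory.createConnection(conf);
           Admin admin = conn.getAdmin()) {          // backed by ThriftAdmin here
        if (admin.tableExists(table) && admin.isTableEnabled(table)) {
          admin.disableTable(table);
          admin.deleteTable(table);
        }
      }
    }
  }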
Uses of TableName in org.apache.hadoop.hbase.tool
Fields in org.apache.hadoop.hbase.tool declared as TableNameModifier and TypeFieldDescriptionstatic final TableNameCanaryTool.DEFAULT_WRITE_TABLE_NAMEprivate TableNameCanaryTool.RegionTaskResult.tableNameprivate TableNameCanaryTool.RegionMonitor.writeTableNameMethods in org.apache.hadoop.hbase.tool that return TableNameMethods in org.apache.hadoop.hbase.tool with parameters of type TableNameModifier and TypeMethodDescriptionprotected ClientServiceCallable<byte[]>LoadIncrementalHFiles.buildClientServiceCallable(Connection conn, TableName tableName, byte[] first, Collection<LoadIncrementalHFiles.LoadQueueItem> lqis, boolean copyFile) Deprecated.BulkLoadHFiles.bulkLoad(TableName tableName, Map<byte[], List<org.apache.hadoop.fs.Path>> family2Files) Perform a bulk load of the given directory into the given pre-existing table.Perform a bulk load of the given directory into the given pre-existing table.BulkLoadHFilesTool.bulkLoad(TableName tableName, Map<byte[], List<org.apache.hadoop.fs.Path>> family2Files) private voidLoadIncrementalHFiles.checkRegionIndexValid(int idx, Pair<byte[][], byte[][]> startEndKeys, TableName tableName) Deprecated.we can consider there is a region hole in following conditions.private voidLoadIncrementalHFiles.createTable(TableName tableName, org.apache.hadoop.fs.Path hfofDir, Admin admin) Deprecated.If the table is created for the first time, then "completebulkload" reads the files twice.Deprecated.Perform bulk load on the given table.LoadIncrementalHFiles.run(Map<byte[], List<org.apache.hadoop.fs.Path>> family2Files, TableName tableName) Deprecated.Perform bulk load on the given table.protected final Map<LoadIncrementalHFiles.LoadQueueItem,ByteBuffer> Deprecated.protected List<LoadIncrementalHFiles.LoadQueueItem>LoadIncrementalHFiles.tryAtomicRegionLoad(ClientServiceCallable<byte[]> serviceCallable, TableName tableName, byte[] first, Collection<LoadIncrementalHFiles.LoadQueueItem> lqis) Deprecated.as of release 2.3.0.protected List<LoadIncrementalHFiles.LoadQueueItem>LoadIncrementalHFiles.tryAtomicRegionLoad(Connection conn, TableName tableName, byte[] first, Collection<LoadIncrementalHFiles.LoadQueueItem> lqis, boolean copyFile) Deprecated.as of release 2.3.0.Constructors in org.apache.hadoop.hbase.tool with parameters of type TableNameModifierConstructorDescriptionRegionMonitor(Connection connection, String[] monitorTargets, boolean useRegExp, CanaryTool.Sink sink, ExecutorService executor, boolean writeSniffing, TableName writeTableName, boolean treatFailureAsError, HashMap<String, Long> configuredReadTableTimeouts, long configuredWriteTableTimeout, long allowedFailures) RegionTaskResult(RegionInfo region, TableName tableName, ServerName serverName, ColumnFamilyDescriptor column)  - 
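For bulk loading, the non-deprecated path is BulkLoadHFiles rather than LoadIncrementalHFiles. A minimal sketch, assuming the target table already exists and HFiles have been written beforehand (for example by HFileOutputFormat2); the table name and staging path are placeholders.

  import java.util.Collections;
  import java.util.List;
  import java.util.Map;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.tool.BulkLoadHFiles;
  import org.apache.hadoop.hbase.util.Bytes;

  public class BulkLoadExample {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      TableName table = TableName.valueOf("ns:t1"); // placeholder, must already exist
      // Pre-built HFiles for column family cf1.
      Map<byte[], List<Path>> family2Files = Collections.singletonMap(
          Bytes.toBytes("cf1"),
          Collections.singletonList(new Path("/staging/ns_t1/cf1/hfile-0"))); // placeholder path
      BulkLoadHFiles.create(conf).bulkLoad(table, family2Files);
    }
  }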
Uses of TableName in org.apache.hadoop.hbase.util
Fields in org.apache.hadoop.hbase.util declared as TableName
Modifier and Type  Field  Description
private TableName  HBaseFsck.cleanReplicationBarrierTable  Deprecated.
(package private) TableName  HbckTableInfo.tableName

Fields in org.apache.hadoop.hbase.util with type parameters of type TableName
Modifier and Type  Field  Description
private final Map<TableName,TableDescriptor>  FSTableDescriptors.cache
HBaseFsck.orphanTableDirs  Deprecated.
HBaseFsck.skippedRegions  Deprecated.
HBaseFsck.tablesIncluded  Deprecated.
private SortedMap<TableName,HbckTableInfo>  HBaseFsck.tablesInfo  Deprecated. This map from TableName to TableInfo contains the structures necessary to detect table consistency problems (holes, dupes, overlaps).
private Map<TableName,TableState>  HBaseFsck.tableStates  Deprecated.

Methods in org.apache.hadoop.hbase.util that return TableName
Modifier and Type  Method  Description
HbckTableInfo.getName()
static TableName  CommonFSUtils.getTableName(org.apache.hadoop.fs.Path tablePath)  Returns the TableName object representing the table directory under path rootdir.
HbckRegionInfo.getTableName()
static TableName  HFileArchiveUtil.getTableName(org.apache.hadoop.fs.Path archivePath)

Methods in org.apache.hadoop.hbase.util that return types with arguments of type TableName
Modifier and Type  Method  Description
private SortedMap<TableName,HbckTableInfo>  HBaseFsck.checkHdfsIntegrity(boolean fixHoles, boolean fixOverlaps)  Deprecated.
(package private) SortedMap<TableName,HbckTableInfo>  HBaseFsck.checkIntegrity()  Deprecated. Checks tables integrity.
HBaseFsck.getIncludedTables()  Deprecated.
private SortedMap<TableName,HbckTableInfo>  HBaseFsck.loadHdfsRegionInfos()  Deprecated. Populate hbi's from regionInfos loaded from the file system.
static Map<TableName,TableState.State>  ZKDataMigrator.queryForTableStates(ZKWatcher zkw)  Deprecated. Since 2.0.0.

Methods in org.apache.hadoop.hbase.util with parameters of type TableName
Modifier and Type  Method  Description
(package private) static void  RegionSplitter.createPresplitTable(TableName tableName, RegionSplitter.SplitAlgorithm splitAlgo, String[] columnFamilies, org.apache.hadoop.conf.Configuration conf)
private boolean  HBaseFsck.fabricateTableInfo(FSTableDescriptors fstd, TableName tableName, Set<String> columns)  Deprecated. Fabricates a .tableinfo file with the given columns.
TableDescriptor  FSTableDescriptors.get(TableName tableName)  Get the current table descriptor for the given table, or null if none exists.
static org.apache.hadoop.fs.Path  HFileArchiveUtil.getRegionArchiveDir(org.apache.hadoop.fs.Path rootDir, TableName tableName, String encodedRegionName)  Get the archive directory for a given region under the specified table.
static org.apache.hadoop.fs.Path  HFileArchiveUtil.getRegionArchiveDir(org.apache.hadoop.fs.Path rootDir, TableName tableName, org.apache.hadoop.fs.Path regiondir)  Get the archive directory for a given region under the specified table.
static org.apache.hadoop.fs.Path  CommonFSUtils.getRegionDir(org.apache.hadoop.fs.Path rootdir, TableName tableName, String regionName)  Returns the Path object representing the region directory under path rootdir.
(package private) static LinkedList<Pair<byte[],byte[]>>  RegionSplitter.getSplits(Connection connection, TableName tableName, RegionSplitter.SplitAlgorithm splitAlgo)
static org.apache.hadoop.fs.Path  HFileArchiveUtil.getStoreArchivePath(org.apache.hadoop.conf.Configuration conf, TableName tableName, String regionName, String familyName)  Get the directory to archive a store directory.
static org.apache.hadoop.fs.Path  HFileArchiveUtil.getTableArchivePath(org.apache.hadoop.conf.Configuration conf, TableName tableName)  Get the path to the table archive directory based on the configured archive directory.
static org.apache.hadoop.fs.Path  HFileArchiveUtil.getTableArchivePath(org.apache.hadoop.fs.Path rootdir, TableName tableName)  Get the path to the table archive directory based on the configured archive directory.
static TableDescriptor  FSTableDescriptors.getTableDescriptorFromFs(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path hbaseRootDir, TableName tableName)  Returns the latest table descriptor for the given table directly from the file system if it exists, bypassing the local cache.
static org.apache.hadoop.fs.Path  CommonFSUtils.getTableDir(org.apache.hadoop.fs.Path rootdir, TableName tableName)  Returns the Path object representing the table directory under path rootdir.
private org.apache.hadoop.fs.Path  FSTableDescriptors.getTableDir(TableName tableName)  Return the table directory in HDFS.
private static Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>  RegionSplitter.getTableDirAndSplitFile(org.apache.hadoop.conf.Configuration conf, TableName tableName)
private static org.apache.hadoop.hbase.shaded.protobuf.generated.ZooKeeperProtos.DeprecatedTableState.State  ZKDataMigrator.getTableState(ZKWatcher zkw, TableName tableName)  Deprecated. Since 2.0.0.
FSUtils.getTableStoreFilePathMap(Map<String,org.apache.hadoop.fs.Path> map, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path hbaseRootDir, TableName tableName)  Runs through the HBase rootdir/tablename and creates a reverse lookup map for table StoreFile names to the full Path.
FSUtils.getTableStoreFilePathMap(Map<String,org.apache.hadoop.fs.Path> resultMap, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path hbaseRootDir, TableName tableName, org.apache.hadoop.fs.PathFilter sfFilter, ExecutorService executor, FSUtils.ProgressReporter progressReporter)  Runs through the HBase rootdir/tablename and creates a reverse lookup map for table StoreFile names to the full Path.
FSUtils.getTableStoreFilePathMap(Map<String,org.apache.hadoop.fs.Path> resultMap, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path hbaseRootDir, TableName tableName, org.apache.hadoop.fs.PathFilter sfFilter, ExecutorService executor, HbckErrorReporter progressReporter)  Deprecated. Since 2.3.0.
static org.apache.hadoop.fs.Path  CommonFSUtils.getWALRegionDir(org.apache.hadoop.conf.Configuration conf, TableName tableName, String encodedRegionName)  Returns the WAL region directory based on the given table name and region name.
static org.apache.hadoop.fs.Path  CommonFSUtils.getWALTableDir(org.apache.hadoop.conf.Configuration conf, TableName tableName)  Returns the Table directory under the WALRootDir for the specified table name.
static org.apache.hadoop.fs.Path  CommonFSUtils.getWrongWALRegionDir(org.apache.hadoop.conf.Configuration conf, TableName tableName, String encodedRegionName)  Deprecated. For compatibility, will be removed in 4.0.0.
void  HBaseFsck.includeTable(TableName table)  Deprecated.
static boolean  ServerRegionReplicaUtil.isMetaRegionReplicaReplicationEnabled(org.apache.hadoop.conf.Configuration conf, TableName tn)  Returns true if hbase:meta Region Read Replica is enabled.
static boolean  ServerRegionReplicaUtil.isRegionReplicaReplicationEnabled(org.apache.hadoop.conf.Configuration conf, TableName tn)
(package private) boolean  HBaseFsck.isTableDisabled(TableName tableName)  Deprecated. Check if the specified region's table is disabled.
(package private) boolean  HBaseFsck.isTableIncluded(TableName table)  Deprecated. Only check/fix tables specified by the list; an empty list means all tables are included.
<R> void  MultiHConnection.processBatchCallback(List<? extends Row> actions, TableName tableName, Object[] results, Batch.Callback<R> callback)  Randomly pick a connection and process the batch of actions for a given table.
TableDescriptor  FSTableDescriptors.remove(TableName tableName)  Removes the table descriptor from the local cache and returns it.
(package private) static void  RegionSplitter.rollingSplit(TableName tableName, RegionSplitter.SplitAlgorithm splitAlgo, org.apache.hadoop.conf.Configuration conf)
(package private) static LinkedList<Pair<byte[],byte[]>>  RegionSplitter.splitScan(LinkedList<Pair<byte[],byte[]>> regionList, Connection connection, TableName tableName, RegionSplitter.SplitAlgorithm splitAlgo)

Method parameters in org.apache.hadoop.hbase.util with type arguments of type TableName
Modifier and Type  Method  Description
(package private) TableDescriptor[]  HBaseFsck.getTableDescriptors(List<TableName> tableNames)  Deprecated.
private void  HBaseFsck.printTableSummary(SortedMap<TableName,HbckTableInfo> tablesInfo)  Deprecated. Prints summary of all tables found on the system.

Constructors in org.apache.hadoop.hbase.util with parameters of type TableName
-
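Several of the static helpers above resolve filesystem paths from a TableName. The following is only a rough sketch of how they fit together; the table name is made up, and the HBaseConfiguration.create() bootstrap and CommonFSUtils.getRootDir call are assumed helpers that are not part of this page. These are server-side (internal) utilities, so this is illustrative rather than a recommended client API.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.util.CommonFSUtils;
import org.apache.hadoop.hbase.util.HFileArchiveUtil;

public class TableNamePathsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName tn = TableName.valueOf("default", "demo_table"); // hypothetical table

    // Root of the HBase filesystem layout (hbase.rootdir); assumed helper, not listed above.
    Path rootDir = CommonFSUtils.getRootDir(conf);

    // Data directory for the table under the root dir.
    Path tableDir = CommonFSUtils.getTableDir(rootDir, tn);

    // Table directory under the WAL root dir.
    Path walTableDir = CommonFSUtils.getWALTableDir(conf, tn);

    // Archive directory where the table's HFiles land after compaction or deletion.
    Path archiveDir = HFileArchiveUtil.getTableArchivePath(conf, tn);

    System.out.println(tableDir + "\n" + walTableDir + "\n" + archiveDir);
  }
}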
Uses of TableName in org.apache.hadoop.hbase.util.compaction
Fields in org.apache.hadoop.hbase.util.compaction declared as TableName
Constructors in org.apache.hadoop.hbase.util.compaction with parameters of type TableName
Modifier  Constructor  Description
MajorCompactor(org.apache.hadoop.conf.Configuration conf, TableName tableName, Set<String> storesToCompact, int concurrency, long timestamp, long sleepForMs)
-
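The only compaction-package member listed here is the MajorCompactor constructor. A minimal construction sketch follows, assuming an existing table named "demo_table" and the usual HBaseConfiguration.create() bootstrap; the calls that actually start the compaction are not listed on this page and are omitted.

import java.util.Collections;
import java.util.Set;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.util.compaction.MajorCompactor;

public class MajorCompactorSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName table = TableName.valueOf("demo_table");      // hypothetical table
    Set<String> stores = Collections.singleton("cf1");      // compact only this column family
    int concurrency = 4;                                    // parallel region compactions
    long olderThanTs = System.currentTimeMillis();          // compact store files older than now
    long sleepMs = 30_000L;                                 // pause between status checks

    // Matches the constructor listed above:
    // MajorCompactor(conf, tableName, storesToCompact, concurrency, timestamp, sleepForMs)
    MajorCompactor compactor =
        new MajorCompactor(conf, table, stores, concurrency, olderThanTs, sleepMs);
    // Kicking off the compaction requires further MajorCompactor calls not covered here.
    System.out.println("Constructed compactor for " + table);
  }
}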
Uses of TableName in org.apache.hadoop.hbase.wal
Fields in org.apache.hadoop.hbase.wal declared as TableName
Modifier and Type  Field  Description
private TableName  WALKeyImpl.tablename
(package private) final TableName  EntryBuffers.RegionEntryBuffer.tableName

Methods in org.apache.hadoop.hbase.wal that return TableName
Modifier and Type  Method  Description
EntryBuffers.RegionEntryBuffer.getTableName()
WALKey.getTableName()  Returns table name.
WALKeyImpl.getTableName()  Returns table name.

Methods in org.apache.hadoop.hbase.wal with parameters of type TableName
Modifier and Type  Method  Description
AbstractRecoveredEditsOutputSink.createRecoveredEditsWriter(TableName tableName, byte[] region, long seqId)  Returns a writer that wraps a WALProvider.Writer and its Path.
private StoreFileWriter  BoundedRecoveredHFilesOutputSink.createRecoveredHFileWriter(TableName tableName, String regionName, long seqId, String familyName, boolean isMetaTable)
RecoveredEditsOutputSink.getRecoveredEditsWriter(TableName tableName, byte[] region, long seqId)  Get a writer and path for a log starting at the given entry.
(package private) static org.apache.hadoop.fs.Path  WALSplitUtil.getRegionSplitEditsPath(TableName tableName, byte[] encodedRegionName, long seqId, String fileNameBeingSplit, String tmpDirName, org.apache.hadoop.conf.Configuration conf)  Path to a file under the RECOVERED_EDITS_DIR directory of the region found in logEntry, named for the sequenceid in the passed logEntry.
protected void  WALKeyImpl.init(byte[] encodedRegionName, TableName tablename, long logSeqNum, long now, List<UUID> clusterIds, long nonceGroup, long nonce, MultiVersionConcurrencyControl mvcc, NavigableMap<byte[],Integer> replicationScope, Map<String,byte[]> extendedAttributes)
(package private) void  WALKeyImpl.internTableName(TableName tablename)  Drop this instance's tablename byte array and instead hold a reference to the provided tablename.
private boolean  WALSplitter.isRegionDirPresentUnderRoot(TableName tn, String region)
(package private) static org.apache.hadoop.fs.Path  WALSplitUtil.tryCreateRecoveredHFilesDir(org.apache.hadoop.fs.FileSystem rootFS, org.apache.hadoop.conf.Configuration conf, TableName tableName, String encodedRegionName, String familyName)  Return path to the recovered.hfiles directory of the region's column family.

Constructors in org.apache.hadoop.hbase.wal with parameters of type TableName
Modifier  Constructor  Description
(package private)  RegionEntryBuffer(TableName tableName, byte[] region)
WALKeyImpl(byte[] encodedRegionName, TableName tablename, long now)
WALKeyImpl(byte[] encodedRegionName, TableName tablename, long logSeqNum, long nonceGroup, long nonce, MultiVersionConcurrencyControl mvcc)  Create the log key for writing to somewhere.
WALKeyImpl(byte[] encodedRegionName, TableName tablename, long logSeqNum, long now, List<UUID> clusterIds, long nonceGroup, long nonce, MultiVersionConcurrencyControl mvcc)  Create the log key for writing to somewhere.
WALKeyImpl(byte[] encodedRegionName, TableName tablename, long logSeqNum, long now, List<UUID> clusterIds, long nonceGroup, long nonce, MultiVersionConcurrencyControl mvcc, NavigableMap<byte[],Integer> replicationScope)  Create the log key for writing to somewhere.
WALKeyImpl(byte[] encodedRegionName, TableName tablename, long logSeqNum, long now, UUID clusterId)
WALKeyImpl(byte[] encodedRegionName, TableName tablename, long now, List<UUID> clusterIds, long nonceGroup, long nonce, MultiVersionConcurrencyControl mvcc)  Create the log key for writing to somewhere.
WALKeyImpl(byte[] encodedRegionName, TableName tablename, long now, List<UUID> clusterIds, long nonceGroup, long nonce, MultiVersionConcurrencyControl mvcc, NavigableMap<byte[],Integer> replicationScope)  Create the log key for writing to somewhere.
WALKeyImpl(byte[] encodedRegionName, TableName tablename, long now, List<UUID> clusterIds, long nonceGroup, long nonce, MultiVersionConcurrencyControl mvcc, NavigableMap<byte[],Integer> replicationScope, Map<String,byte[]> extendedAttributes)
WALKeyImpl(byte[] encodedRegionName, TableName tablename, long now, NavigableMap<byte[],Integer> replicationScope)
WALKeyImpl(byte[] encodedRegionName, TableName tablename, long now, MultiVersionConcurrencyControl mvcc)
WALKeyImpl(byte[] encodedRegionName, TableName tablename, long now, MultiVersionConcurrencyControl mvcc, NavigableMap<byte[],Integer> replicationScope)
WALKeyImpl(byte[] encodedRegionName, TableName tablename, long now, MultiVersionConcurrencyControl mvcc, NavigableMap<byte[],Integer> replicationScope, Map<String,byte[]> extendedAttributes)
-
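Most of the WAL usages above funnel a TableName into a WALKeyImpl. Below is a rough sketch of the simplest constructor listed; the encoded region name bytes are made up for illustration, since real keys are built by the region server during WAL append, and the getWriteTime() accessor is assumed from the WALKey interface rather than taken from this page.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.wal.WALKeyImpl;

public class WalKeySketch {
  public static void main(String[] args) {
    TableName table = TableName.valueOf("demo_table");            // hypothetical table
    byte[] encodedRegionName = Bytes.toBytes("0123456789abcdef"); // made-up encoded region name
    long writeTime = System.currentTimeMillis();

    // Simplest constructor listed above: WALKeyImpl(byte[], TableName, long)
    WALKeyImpl key = new WALKeyImpl(encodedRegionName, table, writeTime);

    // WALKey.getTableName() / WALKeyImpl.getTableName() return the table the edit belongs to.
    System.out.println(key.getTableName() + " @ " + key.getWriteTime());
  }
}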
Uses of TableName in org.apache.hbase.archetypes.exemplars.client
Fields in org.apache.hbase.archetypes.exemplars.client declared as TableName
Modifier and Type  Field  Description
(package private) static final TableName  HelloHBase.MY_TABLE_NAME
-
Uses of TableName in org.apache.hbase.archetypes.exemplars.shaded_client
Fields in org.apache.hbase.archetypes.exemplars.shaded_client declared as TableName
Modifier and Type  Field  Description
(package private) static final TableName  HelloHBase.MY_TABLE_NAME
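Both exemplar packages expose the demo's target table as a single static TableName constant. A comparable declaration, using an illustrative namespace and qualifier rather than whatever HelloHBase actually defines, would look like:

import org.apache.hadoop.hbase.TableName;

public class HelloHBaseStyleExample {
  // Illustrative constant; the HelloHBase exemplars declare MY_TABLE_NAME in a similar way.
  static final TableName MY_TABLE_NAME = TableName.valueOf("myNamespace", "myTable");
}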