Class IncrementalTableBackupClient
java.lang.Object
org.apache.hadoop.hbase.backup.impl.TableBackupClient
org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient
Incremental backup implementation. See the execute() method.
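For orientation, the snippet below sketches how a caller typically reaches this client through the public backup API; BackupAdminImpl dispatches INCREMENTAL requests to this class's execute() method. The table name and target path are illustrative, not part of this documentation.

  import java.util.Collections;
  import java.util.List;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.backup.BackupAdmin;
  import org.apache.hadoop.hbase.backup.BackupRequest;
  import org.apache.hadoop.hbase.backup.BackupType;
  import org.apache.hadoop.hbase.backup.impl.BackupAdminImpl;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;

  public class IncrementalBackupExample {
    public static void main(String[] args) throws Exception {
      List<TableName> tables = Collections.singletonList(TableName.valueOf("demo_table")); // illustrative
      try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
          BackupAdmin admin = new BackupAdminImpl(conn)) {
        // An incremental backup presumes an earlier full backup of the same tables.
        BackupRequest request = new BackupRequest.Builder()
            .withBackupType(BackupType.INCREMENTAL)
            .withTableList(tables)
            .withTargetRootDir("hdfs:///backup/root") // illustrative path
            .build();
        // backupTables() dispatches to this client's execute() for INCREMENTAL requests.
        String backupId = admin.backupTables(request);
        System.out.println("Created backup " + backupId);
      }
    }
  }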
Nested Class Summary
Nested classes/interfaces inherited from class org.apache.hadoop.hbase.backup.impl.TableBackupClient
TableBackupClient.Stage
Field Summary
Fields inherited from class org.apache.hadoop.hbase.backup.impl.TableBackupClient
BACKUP_CLIENT_IMPL_CLASS, BACKUP_TEST_MODE_STAGE, backupId, backupInfo, backupManager, conf, conn, fs, newTimestamps, tableList
Constructor Summary
Constructors
protected IncrementalTableBackupClient()
public IncrementalTableBackupClient(Connection conn, String backupId, BackupRequest request)
Method Summary
private static boolean areCfsCompatible(ColumnFamilyDescriptor[] currentCfs, ColumnFamilyDescriptor[] backupCfs)
protected void convertWALsToHFiles()
protected void deleteBulkLoadDirectory()
void execute() - Backup request execution.
protected List<String> filterMissingFiles(List<String> incrBackupFileList)
protected org.apache.hadoop.fs.Path getBulkOutputDir()
protected org.apache.hadoop.fs.Path getBulkOutputDirForTable(TableName table)
private Map<TableName, String> getFullBackupIds()
protected static int getIndex(TableName tbl, List<TableName> sTableList)
private org.apache.hadoop.fs.Path getTargetDirForTable(TableName table)
private void handleBulkLoad(List<TableName> tablesToBackup) - Reads bulk load records from backup table, iterates through the records and forms the paths for bulk loaded hfiles.
private void incrementalCopyBulkloadHFiles(org.apache.hadoop.fs.FileSystem tgtFs, TableName tn)
protected void incrementalCopyHFiles(String[] files, String backupDest)
protected boolean isActiveWalPath(org.apache.hadoop.fs.Path p) - Check if a given path belongs to the active WAL directory
private void mergeSplitAndCopyBulkloadedHFiles(List<String> activeFiles, List<String> archiveFiles, TableName tn, org.apache.hadoop.fs.FileSystem tgtFs)
private void mergeSplitAndCopyBulkloadedHFiles(List<String> files, TableName tn, org.apache.hadoop.fs.FileSystem tgtFs)
private void setupRegionLocator()
protected boolean tableExists(TableName table, Connection conn)
void updateFileLists(List<String> activeFiles, List<String> archiveFiles)
private void verifyCfCompatibility(Set<TableName> tables, Map<TableName, String> tablesToFullBackupId) - Verifies that the current table descriptor CFs match the descriptor CFs of the last full backup for the tables.
protected void walToHFiles(List<String> dirPaths, List<String> tableList)
Methods inherited from class org.apache.hadoop.hbase.backup.impl.TableBackupClient
addManifest, beginBackup, cleanupAndRestoreBackupSystem, cleanupExportSnapshotLog, cleanupTargetDir, completeBackup, deleteSnapshots, failBackup, failStageIf, getAncestors, getMessage, getTestStage, init, obtainBackupMetaDataStr
Field Details
LOG
Constructor Details
IncrementalTableBackupClient
protected IncrementalTableBackupClient()
IncrementalTableBackupClient
public IncrementalTableBackupClient(Connection conn, String backupId, BackupRequest request) throws IOException
Throws:
IOException
Method Details
filterMissingFiles
Throws:
IOException
isActiveWalPath
Check if a given path belongs to the active WAL directory.
Parameters:
p - path
Returns:
true, if yes
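As an illustration of the distinction this check draws, here is a hypothetical re-implementation; the actual method may rely on internal WAL-provider utilities. It treats a path as active when it lies under a "WALs" directory rather than the "oldWALs" archive:

  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.hbase.HConstants;

  // Hypothetical sketch of the check, for illustration only.
  static boolean looksLikeActiveWalPath(Path p) {
    for (Path cur = p; cur != null; cur = cur.getParent()) {
      if (HConstants.HREGION_LOGDIR_NAME.equals(cur.getName())) {
        return true;  // under ".../WALs": still an active WAL
      }
      if (HConstants.HREGION_OLDLOGDIR_NAME.equals(cur.getName())) {
        return false; // under ".../oldWALs": already archived
      }
    }
    return false;
  }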
getIndex
handleBulkLoad
Reads bulk load records from the backup system table, iterates through the records, and forms the paths for the bulk loaded HFiles. Copies the bulk loaded HFiles to the backup destination. This method does NOT clean up the entries in the bulk load system table; those entries should not be cleaned until the backup is marked as complete.
Parameters:
tablesToBackup - list of tables to be backed up
Throws:
IOException
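For illustration, a sketch of how such an HFile path can be assembled from the pieces a bulk load record provides, following the standard HBase data layout; the helper name and its parameters are hypothetical, not part of this class:

  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.util.CommonFSUtils;

  // Hypothetical sketch: a bulk-loaded HFile lives at
  // <rootdir>/data/<namespace>/<table>/<region>/<cf>/<file>.
  static Path bulkLoadedHFilePath(Path rootDir, TableName table, String regionName,
      String family, String fileName) {
    Path tableDir = CommonFSUtils.getTableDir(rootDir, table);
    return new Path(tableDir, new Path(regionName, new Path(family, fileName)));
  }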
mergeSplitAndCopyBulkloadedHFiles
private void mergeSplitAndCopyBulkloadedHFiles(List<String> activeFiles, List<String> archiveFiles, TableName tn, org.apache.hadoop.fs.FileSystem tgtFs) throws IOException
Throws:
IOException
mergeSplitAndCopyBulkloadedHFiles
private void mergeSplitAndCopyBulkloadedHFiles(List<String> files, TableName tn, org.apache.hadoop.fs.FileSystem tgtFs) throws IOException
Throws:
IOException
updateFileLists
Throws:
IOException
execute
Description copied from class: TableBackupClient
Backup request execution.
Specified by:
execute in class TableBackupClient
Throws:
IOException - If the execution of the backup fails
ColumnFamilyMismatchException - If the column families of the current table do not match the column families for the last full backup, in which case a full backup should be taken
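A caller can react to ColumnFamilyMismatchException by falling back to a full backup. A minimal sketch, assuming the exception propagates through BackupAdmin.backupTables(); the helper name is hypothetical:

  import java.io.IOException;
  import java.util.List;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.backup.BackupAdmin;
  import org.apache.hadoop.hbase.backup.BackupRequest;
  import org.apache.hadoop.hbase.backup.BackupType;
  import org.apache.hadoop.hbase.backup.impl.ColumnFamilyMismatchException;

  // Sketch: try an incremental backup, fall back to a full one on CF mismatch.
  static String backupWithFallback(BackupAdmin admin, List<TableName> tables,
      String targetRootDir) throws IOException {
    try {
      return admin.backupTables(new BackupRequest.Builder()
          .withBackupType(BackupType.INCREMENTAL)
          .withTableList(tables)
          .withTargetRootDir(targetRootDir)
          .build());
    } catch (ColumnFamilyMismatchException cfme) {
      // A CF mismatch breaks the incremental chain; start a new full backup.
      return admin.backupTables(new BackupRequest.Builder()
          .withBackupType(BackupType.FULL)
          .withTableList(tables)
          .withTargetRootDir(targetRootDir)
          .build());
    }
  }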
incrementalCopyHFiles
Throws:
IOException
deleteBulkLoadDirectory
Throws:
IOException
convertWALsToHFiles
Throws:
IOException
tableExists
Throws:
IOException
walToHFiles
Throws:
IOException
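The conversion mechanics are not described here; as an assumption, this step is comparable to running the standalone WALPlayer MapReduce tool with a bulk-load output directory, sketched below with illustrative paths and table names:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.mapreduce.WALPlayer;
  import org.apache.hadoop.util.ToolRunner;

  public class WalToHFilesSketch {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      // Write HFiles instead of replaying edits into a live table.
      conf.set(WALPlayer.BULK_OUTPUT_CONF_KEY, "hdfs:///backup/staging/hfiles"); // illustrative
      // WALPlayer args: <WAL input dir> <comma-separated tables>
      int rc = ToolRunner.run(conf, new WALPlayer(),
          new String[] { "hdfs:///hbase/oldWALs", "demo_table" }); // illustrative
      System.exit(rc);
    }
  }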
incrementalCopyBulkloadHFiles
private void incrementalCopyBulkloadHFiles(org.apache.hadoop.fs.FileSystem tgtFs, TableName tn) throws IOException
Throws:
IOException
getBulkOutputDirForTable
getBulkOutputDir
getTargetDirForTable
setupRegionLocator
Throws:
IOException
getFullBackupIds
Throws:
IOException
verifyCfCompatibility
private void verifyCfCompatibility(Set<TableName> tables, Map<TableName, String> tablesToFullBackupId) throws IOException, ColumnFamilyMismatchException
Verifies that the current table descriptor CFs match the descriptor CFs of the last full backup for the tables. This ensures CF compatibility across incremental backups. If a mismatch is detected, a full table backup should be taken rather than an incremental one.
areCfsCompatible
private static boolean areCfsCompatible(ColumnFamilyDescriptor[] currentCfs, ColumnFamilyDescriptor[] backupCfs)
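A minimal sketch of the comparison such a check implies, assuming compatibility is judged by the set of column family names; the real check may compare more attributes:

  import java.util.Arrays;
  import java.util.Set;
  import java.util.stream.Collectors;
  import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;

  // Minimal sketch: two CF sets are "compatible" when their family names match.
  static boolean cfNamesMatch(ColumnFamilyDescriptor[] currentCfs,
      ColumnFamilyDescriptor[] backupCfs) {
    Set<String> current = Arrays.stream(currentCfs)
        .map(ColumnFamilyDescriptor::getNameAsString).collect(Collectors.toSet());
    Set<String> backedUp = Arrays.stream(backupCfs)
        .map(ColumnFamilyDescriptor::getNameAsString).collect(Collectors.toSet());
    return current.equals(backedUp);
  }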