Package org.apache.hadoop.hbase.backup
Class BackupInfo
java.lang.Object
org.apache.hadoop.hbase.backup.BackupInfo
- All Implemented Interfaces:
Comparable<BackupInfo>
An object that encapsulates the information for a backup session.
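For orientation, a minimal usage sketch: it constructs a BackupInfo with the constructor documented below and reads a few properties. The backup id, table name and root directory values are illustrative only; in practice these objects are normally created and populated by the HBase backup framework itself.

  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.backup.BackupInfo;
  import org.apache.hadoop.hbase.backup.BackupType;

  public class BackupInfoExample {
    public static void main(String[] args) {
      // Tables covered by this (illustrative) backup session.
      TableName[] tables = { TableName.valueOf("default:usertable") };

      // backupId, type, tables and targetRootDir are the four constructor
      // arguments listed in the Constructor Summary.
      BackupInfo info =
          new BackupInfo("backup_1700000000000", BackupType.FULL, tables, "hdfs://nn/backup");

      System.out.println(info.getBackupId());      // backup_1700000000000
      System.out.println(info.getType());          // FULL
      System.out.println(info.getBackupRootDir()); // hdfs://nn/backup
    }
  }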
-
Nested Class Summary
Nested Classes:
- static enum BackupInfo.BackupPhase - Phases of an ACTIVE backup session (running), i.e. when the state of a backup session is BackupState.RUNNING
- static enum BackupInfo.BackupState - Backup session states
- static interface BackupInfo.Filter
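A short sketch of how the nested enums are typically consulted, assuming a BackupInfo instance obtained from the backup system:

  import org.apache.hadoop.hbase.backup.BackupInfo;

  public class BackupStateCheck {
    /** Reports the phase of a session, which is only meaningful while it is RUNNING. */
    static void reportPhase(BackupInfo info) {
      if (info.getState() == BackupInfo.BackupState.RUNNING) {
        System.out.println("Backup " + info.getBackupId() + " is in phase " + info.getPhase());
      } else {
        System.out.println("Backup " + info.getBackupId() + " is not running: " + info.getState());
      }
    }
  }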
Field Summary
Fields:
- private String backupId - Backup id
- private String backupRootDir - Target root directory for storing the backup files
- private Map<TableName,BackupTableInfo> backupTableInfoMap - Backup status map for all tables
- private long bandwidth - Bandwidth per worker in MB per second. -1 means unlimited
- private long completeTs - Actual end timestamp of the backup process
- private String failedMsg - Backup failure message
- private String hlogTargetDir - For incremental backup, the location of the backed-up hlogs
- private List<String> incrBackupFileList - Incremental backup file list
- private Map<TableName,Map<String,Long>> incrTimestampMap - Previous region server log timestamps for the table set after distributed log roll. Key: table name; value: map of RegionServer hostname -> last log rolled timestamp
- private static final org.slf4j.Logger LOG
- private static final int MAX_FAILED_MESSAGE_LENGTH
- private boolean noChecksumVerify - Do not verify checksum between source snapshot and exported snapshot
- private BackupInfo.BackupPhase phase - Backup phase
- private int progress - Backup progress in % (0-100)
- private long startTs - Actual start timestamp of a backup process
- private BackupInfo.BackupState state - Backup state
- private Map<TableName,Map<String,Long>> tableSetTimestampMap - New region server log timestamps for the table set after distributed log roll. Key: table name; value: map of RegionServer hostname -> last log rolled timestamp
- private long totalBytesCopied - Total bytes of incremental logs copied
- private BackupType type - Backup type, full or incremental
- private int workers - Number of parallel workers. -1 means system defined
Constructor Summary
Constructors:
- BackupInfo()
- BackupInfo(String backupId, BackupType type, TableName[] tables, String targetRootDir)
Method Summary
Methods:
- void addTables(TableName[] tables)
- int compareTo(BackupInfo o) - We use only time stamps to compare objects during sort operation
- boolean equals(Object obj)
- static BackupInfo fromByteArray(byte[] data)
- static BackupInfo fromProto(org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupInfo proto)
- static BackupInfo fromStream(InputStream stream)
- String getBackupId()
- String getBackupRootDir()
- BackupTableInfo getBackupTableInfo(TableName table)
- long getBandwidth()
- long getCompleteTs()
- String getFailedMsg()
- String getHLogTargetDir()
- List<String> getIncrBackupFileList()
- Map<TableName,Map<String,Long>> getIncrTimestampMap() - Get new region server log timestamps after distributed log roll
- boolean getNoChecksumVerify()
- BackupInfo.BackupPhase getPhase()
- int getProgress() - Get current progress
- String getShortDescription()
- String getSnapshotName(TableName table)
- List<String> getSnapshotNames()
- long getStartTs()
- BackupInfo.BackupState getState()
- String getStatusAndProgressAsString()
- String getTableBackupDir(TableName tableName)
- TableName getTableBySnapshot(String snapshotName)
- String getTableListAsString()
- List<TableName> getTableNames()
- Set<TableName> getTables()
- Map<TableName,Map<String,Long>> getTableSetTimestampMap()
- static Map<TableName,Map<String,Long>> getTableSetTimestampMap(Map<String,org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupInfo.RSTimestampMap> map)
- long getTotalBytesCopied()
- BackupType getType()
- int getWorkers()
- int hashCode()
- void setBackupId(String backupId)
- void setBackupRootDir(String targetRootDir)
- void setBackupTableInfoMap(Map<TableName,BackupTableInfo> backupTableInfoMap)
- private void setBackupTableInfoMap(org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupInfo.Builder builder)
- void setBandwidth(long bandwidth)
- void setCompleteTs(long endTs)
- void setFailedMsg(String failedMsg)
- void setHLogTargetDir(String hlogTagetDir)
- void setIncrBackupFileList(List<String> incrBackupFileList)
- void setIncrTimestampMap(Map<TableName,Map<String,Long>> prevTableSetTimestampMap) - Set the new region server log timestamps after distributed log roll
- void setNoChecksumVerify(boolean noChecksumVerify)
- void setPhase(BackupInfo.BackupPhase phase)
- void setProgress(int p) - Set progress (0-100%)
- void setSnapshotName(TableName table, String snapshotName)
- void setStartTs(long startTs)
- void setState(BackupInfo.BackupState state)
- void setTables(List<TableName> tables)
- void setTableSetTimestampMap(Map<TableName,Map<String,Long>> tableSetTimestampMap)
- private void setTableSetTimestampMap(org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupInfo.Builder builder)
- void setTotalBytesCopied(long totalBytesCopied)
- void setType(BackupType type)
- void setWorkers(int workers)
- byte[] toByteArray()
- private static Map<TableName,BackupTableInfo> toMap(List<org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupTableInfo> list)
- org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupInfo toProtosBackupInfo()
- String toString()
-
Field Details
-
LOG
-
MAX_FAILED_MESSAGE_LENGTH
- See Also:
Constant Field Values
-
backupId
Backup id -
type
Backup type, full or incremental -
backupRootDir
Target root directory for storing the backup files -
state
Backup state -
phase
Backup phase -
failedMsg
Backup failure message -
backupTableInfoMap
Backup status map for all tables -
startTs
Actual start timestamp of a backup process -
completeTs
Actual end timestamp of the backup process -
totalBytesCopied
Total bytes of incremental logs copied -
hlogTargetDir
For incremental backup, the location of the backed-up hlogs -
incrBackupFileList
Incremental backup file list -
tableSetTimestampMap
New region server log timestamps for the table set after distributed log roll. Key: table name; value: map of RegionServer hostname -> last log rolled timestamp -
incrTimestampMap
Previous region server log timestamps for the table set after distributed log roll. Key: table name; value: map of RegionServer hostname -> last log rolled timestamp -
progress
Backup progress in % (0-100) -
workers
Number of parallel workers. -1 means system defined -
bandwidth
Bandwidth per worker in MB per second. -1 means unlimited -
noChecksumVerify
Do not verify checksum between source snapshot and exported snapshot
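A small sketch of setting these tuning fields through their public setters (setWorkers, setBandwidth, setNoChecksumVerify); the concrete values are illustrative only:

  import org.apache.hadoop.hbase.backup.BackupInfo;

  public class BackupTuningExample {
    /** Applies illustrative resource limits to a backup session. */
    static void applyLimits(BackupInfo info) {
      info.setWorkers(4);              // number of parallel workers; -1 means system defined
      info.setBandwidth(100);          // MB per second per worker; -1 means unlimited
      info.setNoChecksumVerify(false); // keep checksum verification of exported snapshots
    }
  }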
-
-
Constructor Details
-
BackupInfo
public BackupInfo() -
BackupInfo
public BackupInfo(String backupId, BackupType type, TableName[] tables, String targetRootDir) -
-
Method Details
-
getWorkers
-
setWorkers
-
getBandwidth
-
setBandwidth
-
setNoChecksumVerify
-
getNoChecksumVerify
-
setBackupTableInfoMap
-
getTableSetTimestampMap
-
setTableSetTimestampMap
-
setType
-
setBackupRootDir
-
setTotalBytesCopied
-
setProgress
Set progress (0-100%)
Parameters:
p - progress value
-
getProgress
Get current progress -
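For example, a caller tracking completion might update and read the percentage as follows (a sketch; the real backup client updates progress internally):

  import org.apache.hadoop.hbase.backup.BackupInfo;

  public class ProgressExample {
    /** Updates and reads the completion percentage of a session. */
    static void advance(BackupInfo info, int percentDone) {
      info.setProgress(percentDone); // expected range is 0-100
      System.out.println("progress=" + info.getProgress() + "%");
    }
  }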
getBackupId
-
setBackupId
-
getBackupTableInfo
-
getFailedMsg
-
setFailedMsg
-
getStartTs
-
setStartTs
-
getCompleteTs
-
setCompleteTs
-
getTotalBytesCopied
-
getState
-
setState
-
getPhase
-
setPhase
-
getType
-
setSnapshotName
-
getSnapshotName
-
getSnapshotNames
-
getTables
-
getTableNames
-
addTables
-
setTables
-
getBackupRootDir
-
getTableBackupDir
-
setHLogTargetDir
-
getHLogTargetDir
-
getIncrBackupFileList
-
setIncrBackupFileList
-
setIncrTimestampMap
Set the new region server log timestamps after distributed log roll
Parameters:
prevTableSetTimestampMap - table timestamp map
-
getIncrTimestampMap
Get new region server log timestamps after distributed log roll
Returns:
new region server log timestamps
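A sketch of populating and reading the incremental timestamp map, assuming the map shape described in the field summary above, table name -> (RegionServer hostname -> last rolled log timestamp), i.e. Map<TableName, Map<String, Long>>:

  import java.util.HashMap;
  import java.util.Map;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.backup.BackupInfo;

  public class TimestampMapExample {
    /** Records, per table and per RegionServer, the last rolled log timestamp. */
    static void recordLogRoll(BackupInfo info, TableName table, String rsHostname, long rolledTs) {
      Map<String, Long> byServer = new HashMap<>();
      byServer.put(rsHostname, rolledTs);

      Map<TableName, Map<String, Long>> byTable = new HashMap<>();
      byTable.put(table, byServer);

      info.setIncrTimestampMap(byTable);
      // Read it back: table name -> (RegionServer hostname -> last log rolled timestamp)
      System.out.println(info.getIncrTimestampMap().get(table));
    }
  }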
-
getTableBySnapshot
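A sketch of the snapshot-name bookkeeping around setSnapshotName, getSnapshotName and getTableBySnapshot, assuming the table is part of the session's table set and that the two lookups return String and TableName respectively:

  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.backup.BackupInfo;

  public class SnapshotLookupExample {
    /** Associates a snapshot name with a table and resolves it in both directions. */
    static void mapSnapshot(BackupInfo info, TableName table) {
      info.setSnapshotName(table, "snapshot_" + table.getQualifierAsString());

      String snapshot = info.getSnapshotName(table);        // snapshot registered for the table
      TableName owner = info.getTableBySnapshot(snapshot);  // reverse lookup by snapshot name
      System.out.println(snapshot + " -> " + owner);
    }
  }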
-
toProtosBackupInfo
public org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupInfo toProtosBackupInfo() -
hashCode
-
equals
-
toString
-
toByteArray
- Throws:
IOException
-
setBackupTableInfoMap
private void setBackupTableInfoMap(org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupInfo.Builder builder) -
setTableSetTimestampMap
private void setTableSetTimestampMap(org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupInfo.Builder builder) -
fromByteArray
- Throws:
IOException
-
fromStream
- Throws:
IOException
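A sketch of round-tripping a BackupInfo through its protobuf-backed byte form with toByteArray and fromByteArray; fromStream provides the same deserialization from an InputStream:

  import java.io.IOException;
  import org.apache.hadoop.hbase.backup.BackupInfo;

  public class SerializationExample {
    /** Serializes a BackupInfo to bytes and reconstructs an equivalent instance. */
    static BackupInfo roundTrip(BackupInfo info) throws IOException {
      byte[] bytes = info.toByteArray();      // protobuf-backed byte representation
      return BackupInfo.fromByteArray(bytes); // rebuild the session descriptor from bytes
    }
  }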
-
fromProto
public static BackupInfo fromProto(org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupInfo proto) -
toMap
private static Map<TableName,BackupTableInfo> toMap(List<org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupTableInfo> list) -
getTableSetTimestampMap
-
getShortDescription
-
getStatusAndProgressAsString
-
getTableListAsString
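A sketch of using these accessors for logging, assuming getShortDescription, getStatusAndProgressAsString and getTableListAsString return plain strings as their names suggest:

  import org.apache.hadoop.hbase.backup.BackupInfo;

  public class DescribeExample {
    /** Prints the human-readable summaries of a backup session. */
    static void describe(BackupInfo info) {
      System.out.println(info.getShortDescription());
      System.out.println(info.getStatusAndProgressAsString());
      System.out.println("tables: " + info.getTableListAsString());
    }
  }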
-
compareTo
We use only time stamps to compare objects during sort operation
Specified by:
compareTo in interface Comparable<BackupInfo>
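Because compareTo orders sessions by their time stamps, a list of BackupInfo objects can be sorted directly; a minimal sketch:

  import java.util.ArrayList;
  import java.util.Collections;
  import java.util.List;
  import org.apache.hadoop.hbase.backup.BackupInfo;

  public class SortExample {
    /** Orders backup sessions using BackupInfo's timestamp-based natural ordering. */
    static List<BackupInfo> sortByTimestamp(List<BackupInfo> sessions) {
      List<BackupInfo> sorted = new ArrayList<>(sessions);
      Collections.sort(sorted); // uses compareTo, which compares only time stamps
      return sorted;
    }
  }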
-