Class StoreFileTrackerBase
java.lang.Object
org.apache.hadoop.hbase.regionserver.storefiletracker.StoreFileTrackerBase
- All Implemented Interfaces:
StoreFileTracker
- Direct Known Subclasses:
DefaultStoreFileTracker, FileBasedStoreFileTracker, MigrationStoreFileTracker
Base class for all store file trackers.
Mainly used to hold the common logic for skipping persistence for secondary replicas.
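To make the extension points concrete, here is a rough, hypothetical sketch of a subclass built on this base. The class name, its in-memory bookkeeping, and the requireWritingToTmpDirFirst override (assumed here to be left abstract by the base class) are illustrative assumptions, not HBase code; the real subclasses are listed above.

// Hypothetical subclass sketch (not part of HBase). It is placed in the same
// package as the real trackers in case StoreFileTrackerBase is not public.
package org.apache.hadoop.hbase.regionserver.storefiletracker;

import java.io.IOException;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.regionserver.StoreContext;
import org.apache.hadoop.hbase.regionserver.StoreFileInfo;

class InMemoryStoreFileTracker extends StoreFileTrackerBase {

  // Illustrative in-memory list standing in for a real persisted store file list.
  private final List<StoreFileInfo> trackedFiles = new ArrayList<>();

  InMemoryStoreFileTracker(Configuration conf, boolean isPrimaryReplica, StoreContext ctx) {
    super(conf, isPrimaryReplica, ctx);
  }

  @Override
  protected List<StoreFileInfo> doLoadStoreFiles(boolean readOnly) throws IOException {
    // readOnly is true for secondary replicas; only primaries may do cleanup here.
    return new ArrayList<>(trackedFiles);
  }

  @Override
  protected void doAddNewStoreFiles(Collection<StoreFileInfo> newFiles) throws IOException {
    trackedFiles.addAll(newFiles);
  }

  @Override
  protected void doAddCompactionResults(Collection<StoreFileInfo> compactedFiles,
      Collection<StoreFileInfo> newFiles) throws IOException {
    trackedFiles.removeAll(compactedFiles);
    trackedFiles.addAll(newFiles);
  }

  @Override
  protected void doSetStoreFiles(Collection<StoreFileInfo> files) throws IOException {
    trackedFiles.clear();
    trackedFiles.addAll(files);
  }

  @Override
  public boolean requireWritingToTmpDirFirst() {
    // Assumption: this interface method is listed above as inherited from
    // StoreFileTracker, so a concrete subclass provides it. Returning true
    // is a conservative, illustrative default.
    return true;
  }
}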
-
Field Summary
Modifier and Type                                       Field
private boolean                                         cacheOnWriteLogged
protected final org.apache.hadoop.conf.Configuration   conf
protected final StoreContext                            ctx
protected final boolean                                 isPrimaryReplica
private static final org.slf4j.Logger                   LOG
-
Constructor Summary
Modifier     Constructor
protected    StoreFileTrackerBase(org.apache.hadoop.conf.Configuration conf, boolean isPrimaryReplica, StoreContext ctx)
-
Method Summary
Modifier and Type    Method    Description

final void    add(Collection<StoreFileInfo> newFiles)
    Add new store files.
private HFileContext    createFileContext(Compression.Algorithm compression, boolean includeMVCCReadpoint, boolean includesTag, Encryption.Context encryptionContext)
Reference    createReference(Reference reference, org.apache.hadoop.fs.Path path)
final StoreFileWriter    createWriter(...)
    Create a writer for writing new store files.
protected abstract void    doAddCompactionResults(Collection<StoreFileInfo> compactedFiles, Collection<StoreFileInfo> newFiles)
protected abstract void    doAddNewStoreFiles(Collection<StoreFileInfo> newFiles)
protected abstract List<StoreFileInfo>    doLoadStoreFiles(boolean readOnly)
    For primary replica, we will call load once when opening a region, and the implementation could choose to do some cleanup work.
protected abstract void    doSetStoreFiles(Collection<StoreFileInfo> files)
StoreFileInfo    getStoreFileInfo(org.apache.hadoop.fs.FileStatus fileStatus, org.apache.hadoop.fs.Path initialPath, boolean primaryReplica)
StoreFileInfo    getStoreFileInfo(org.apache.hadoop.fs.Path initialPath, boolean primaryReplica)
protected final String    getTrackerName()
boolean    hasReferences(...)
    Returns true if the specified family has reference files.
final List<StoreFileInfo>    load()
    Load the store files list when opening a region.
Reference    readReference(org.apache.hadoop.fs.Path p)
    Reads the reference file from the given path.
final void    replace(Collection<StoreFileInfo> compactedFiles, Collection<StoreFileInfo> newFiles)
    Add new store files and remove compacted store files after compaction.
final void    set(List<StoreFileInfo> files)
    Set the store files.
updateWithTrackerConfigs(...)
    Adds StoreFileTracker implementation-specific configurations into the table descriptor.
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface org.apache.hadoop.hbase.regionserver.storefiletracker.StoreFileTracker
requireWritingToTmpDirFirst
-
Field Details
-
LOG
-
conf
-
isPrimaryReplica
-
ctx
-
cacheOnWriteLogged
-
-
Constructor Details
-
StoreFileTrackerBase
protected StoreFileTrackerBase(org.apache.hadoop.conf.Configuration conf, boolean isPrimaryReplica, StoreContext ctx)
-
-
Method Details
-
load
Description copied from interface: StoreFileTracker
Load the store files list when opening a region.
- Specified by:
load in interface StoreFileTracker
- Throws:
IOException
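As a caller-side illustration only (a hedged sketch, not HBase internals; the helper class and method names are made up), opening a store simply asks the tracker for its current file list:

// Hypothetical caller-side sketch: load() is invoked when a store is opened
// and its result decides which store files get opened.
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.hbase.regionserver.StoreFileInfo;
import org.apache.hadoop.hbase.regionserver.storefiletracker.StoreFileTracker;

class TrackerLoadExample {
  static List<StoreFileInfo> filesToOpen(StoreFileTracker tracker) throws IOException {
    // StoreFileTrackerBase passes readOnly=true to doLoadStoreFiles for
    // secondary replicas, so implementations do no cleanup work there.
    return tracker.load();
  }
}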
-
add
Description copied from interface: StoreFileTracker
Add new store files. Used for flush and bulk load.
- Specified by:
add in interface StoreFileTracker
- Throws:
IOException
-
replace
public final void replace(Collection<StoreFileInfo> compactedFiles, Collection<StoreFileInfo> newFiles) throws IOException
Description copied from interface: StoreFileTracker
Add new store files and remove compacted store files after compaction.
- Specified by:
replace in interface StoreFileTracker
- Throws:
IOException
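A hedged sketch contrasting replace() with the add() call documented above; the helper class and method names are hypothetical, and this is not the real HBase flush or compaction code path:

// Hypothetical helper contrasting add() (flush / bulk load) with replace()
// (compaction commit).
import java.io.IOException;
import java.util.Collection;

import org.apache.hadoop.hbase.regionserver.StoreFileInfo;
import org.apache.hadoop.hbase.regionserver.storefiletracker.StoreFileTracker;

class TrackerUpdateExamples {
  // Flush or bulk load: only new files are appended to the tracked list.
  static void commitFlush(StoreFileTracker tracker, Collection<StoreFileInfo> flushedFiles)
      throws IOException {
    tracker.add(flushedFiles);
  }

  // Compaction: the compacted inputs are removed and the outputs added in one call.
  static void commitCompaction(StoreFileTracker tracker,
      Collection<StoreFileInfo> compactedFiles, Collection<StoreFileInfo> newFiles)
      throws IOException {
    tracker.replace(compactedFiles, newFiles);
  }
}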
-
set
Description copied from interface: StoreFileTracker
Set the store files.
- Specified by:
set in interface StoreFileTracker
- Throws:
IOException
-
updateWithTrackerConfigs
Description copied from interface: StoreFileTracker
Adds StoreFileTracker implementation-specific configurations into the table descriptor. This is used to avoid accidental data loss when changing the cluster-level store file tracker implementation, and also possible misconfiguration between master and region servers. See HBASE-26246 for more details.
- Specified by:
updateWithTrackerConfigs in interface StoreFileTracker
- Parameters:
builder - The table descriptor builder for the given table.
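The exact signature is not visible in this extract. As a rough illustration of the idea only, assuming the builder parameter is an org.apache.hadoop.hbase.client.TableDescriptorBuilder and using a made-up configuration key, recording the tracker implementation in the descriptor could look like:

// Hypothetical illustration of the idea behind updateWithTrackerConfigs:
// record which tracker implementation a table uses in its descriptor so a
// later change of the cluster-wide default cannot silently switch schemes.
// The key name below is illustrative; HBase defines its own (see HBASE-26246).
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

class TrackerConfigExample {
  // Illustrative key only, not the key HBase actually uses.
  static final String EXAMPLE_TRACKER_KEY = "example.store.file-tracker.impl";

  static TableDescriptorBuilder recordTrackerImpl(TableDescriptorBuilder builder,
      String trackerClassName) {
    return builder.setValue(EXAMPLE_TRACKER_KEY, trackerClassName);
  }
}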
-
getTrackerName
-
createFileContext
private HFileContext createFileContext(Compression.Algorithm compression, boolean includeMVCCReadpoint, boolean includesTag, Encryption.Context encryptionContext)
-
createWriter
Description copied from interface: StoreFileTracker
Create a writer for writing new store files.
- Specified by:
createWriter in interface StoreFileTracker
- Returns:
- Writer for a new StoreFile
- Throws:
IOException
-
createReference
public Reference createReference(Reference reference, org.apache.hadoop.fs.Path path) throws IOException
- Specified by:
createReference in interface StoreFileTracker
- Throws:
IOException
-
hasReferences
Returns true if the specified family has reference files.
- Specified by:
hasReferences in interface StoreFileTracker
- Parameters:
familyName - Column Family Name
- Returns:
true if family contains reference files
- Throws:
IOException
-
readReference
Description copied from interface: StoreFileTracker
Reads the reference file from the given path.
- Specified by:
readReference in interface StoreFileTracker
- Parameters:
p - the Path to the reference file in the file system.
- Returns:
a Reference that points at the top/bottom half of an hfile
- Throws:
IOException
-
getStoreFileInfo
public StoreFileInfo getStoreFileInfo(org.apache.hadoop.fs.Path initialPath, boolean primaryReplica) throws IOException
- Specified by:
getStoreFileInfo in interface StoreFileTracker
- Throws:
IOException
-
getStoreFileInfo
public StoreFileInfo getStoreFileInfo(org.apache.hadoop.fs.FileStatus fileStatus, org.apache.hadoop.fs.Path initialPath, boolean primaryReplica) throws IOException
- Specified by:
getStoreFileInfo in interface StoreFileTracker
- Throws:
IOException
-
doLoadStoreFiles
For the primary replica, we will call load once when opening a region, and the implementation could choose to do some cleanup work. So here we use readOnly to indicate whether you are allowed to do the cleanup work. For secondary replicas, we will set readOnly to true.
- Throws:
IOException
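A hedged illustration of the readOnly contract (hypothetical helper, not code from any concrete tracker): missing files are only eligible for cleanup when readOnly is false, i.e. when loading on the primary replica.

// Hypothetical sketch: stale entries may only be cleaned up when readOnly is false.
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class ReadOnlyLoadExample {
  static List<Path> loadTrackedPaths(FileSystem fs, List<Path> persistedPaths,
      boolean readOnly) throws IOException {
    List<Path> live = new ArrayList<>();
    for (Path p : persistedPaths) {
      if (fs.exists(p)) {
        live.add(p);
      } else if (!readOnly) {
        // Primary replica: allowed to do cleanup work, e.g. rewrite the
        // persisted list without this stale entry (omitted in this sketch).
      }
      // Secondary replicas (readOnly == true) simply skip missing entries.
    }
    return live;
  }
}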
-
doAddNewStoreFiles
- Throws:
IOException
-
doAddCompactionResults
protected abstract void doAddCompactionResults(Collection<StoreFileInfo> compactedFiles, Collection<StoreFileInfo> newFiles) throws IOException
- Throws:
IOException
-
doSetStoreFiles
- Throws:
IOException
-