@InterfaceAudience.Private public class StoreFileWriter extends Object implements CellSink, ShipperListener
Modifier and Type | Class and Description |
---|---|
static class | StoreFileWriter.Builder |
Modifier and Type | Field and Description |
---|---|
private BloomContext | bloomContext |
private byte[] | bloomParam |
private BloomType | bloomType |
private Supplier<Collection<HStoreFile>> | compactedFilesSupplier |
private static Pattern | dash |
private BloomContext | deleteFamilyBloomContext |
private BloomFilterWriter | deleteFamilyBloomFilterWriter |
private long | deleteFamilyCnt |
private long | earliestPutTs |
private BloomFilterWriter | generalBloomFilterWriter |
private static org.slf4j.Logger | LOG |
private TimeRangeTracker | timeRangeTracker |
protected HFile.Writer | writer |
Modifier | Constructor and Description |
---|---|
private | StoreFileWriter(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, org.apache.hadoop.conf.Configuration conf, CacheConfig cacheConf, CellComparator comparator, BloomType bloomType, long maxKeys, InetSocketAddress[] favoredNodes, HFileContext fileContext, boolean shouldDropCacheBehind, Supplier<Collection<HStoreFile>> compactedFilesSupplier)<br>Creates an HFile.Writer that also writes helpful meta data. |
Modifier and Type | Method and Description |
---|---|
void | append(Cell cell)<br>Append the given cell. |
private void | appendDeleteFamilyBloomFilter(Cell cell) |
void | appendFileInfo(byte[] key, byte[] value) |
private void | appendGeneralBloomfilter(Cell cell) |
void | appendMetadata(long maxSequenceId, boolean majorCompaction)<br>Writes meta data. |
void | appendMetadata(long maxSequenceId, boolean majorCompaction, Collection<HStoreFile> storeFiles)<br>Writes meta data. |
void | appendMetadata(long maxSequenceId, boolean majorCompaction, long mobCellsCount)<br>Writes meta data. |
void | appendTrackedTimestampsToMetadata()<br>Add TimestampRange and earliest put timestamp to Metadata. |
void | beforeShipped()<br>The action that needs to be performed before Shipper.shipped() is performed. |
void | close() |
private boolean | closeBloomFilter(BloomFilterWriter bfw) |
private boolean | closeDeleteFamilyBloomFilter() |
private boolean | closeGeneralBloomFilter() |
(package private) BloomFilterWriter | getGeneralBloomWriter()<br>For unit testing only. |
(package private) HFile.Writer | getHFileWriter()<br>For use in testing. |
org.apache.hadoop.fs.Path | getPath() |
(package private) static org.apache.hadoop.fs.Path | getUniqueFile(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir) |
boolean | hasGeneralBloom() |
private byte[] | toCompactionEventTrackerBytes(Collection<HStoreFile> storeFiles)<br>Used when writing HStoreFile.COMPACTION_EVENT_KEY to the new file's file info. |
void | trackTimestamps(Cell cell)<br>Record the earliest Put timestamp. |
private static final org.slf4j.Logger LOG
private final BloomFilterWriter generalBloomFilterWriter
private final BloomFilterWriter deleteFamilyBloomFilterWriter
private byte[] bloomParam
private long earliestPutTs
private long deleteFamilyCnt
private BloomContext bloomContext
private BloomContext deleteFamilyBloomContext
private final TimeRangeTracker timeRangeTracker
private final Supplier<Collection<HStoreFile>> compactedFilesSupplier
protected HFile.Writer writer
private StoreFileWriter(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, org.apache.hadoop.conf.Configuration conf, CacheConfig cacheConf, CellComparator comparator, BloomType bloomType, long maxKeys, InetSocketAddress[] favoredNodes, HFileContext fileContext, boolean shouldDropCacheBehind, Supplier<Collection<HStoreFile>> compactedFilesSupplier) throws IOException
Parameters:
fs - file system to write to
path - file name to create
conf - user configuration
comparator - key comparator
bloomType - bloom filter setting
maxKeys - the expected maximum number of keys to be added. Was used for Bloom filter size in HFile format version 1.
favoredNodes - an array of favored nodes or possibly null
fileContext - The HFile context
shouldDropCacheBehind - Drop pages written to page cache after writing the store file.
compactedFilesSupplier - Returns the HStore compacted files which have not yet been archived
Throws:
IOException - problem writing to FS
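Because the constructor is private, HBase internals obtain a writer through StoreFileWriter.Builder. A minimal sketch is shown below, assuming the Builder's usual with* setters (withOutputDir, withBloomType, withMaxKeyCount, withFileContext, withShouldDropCacheBehind); since the class is @InterfaceAudience.Private this is illustrative rather than a supported client API, and the exact Builder methods should be checked against the StoreFileWriter.Builder javadoc for your release.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;
import org.apache.hadoop.hbase.io.hfile.HFileContext;
import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.regionserver.StoreFileWriter;

public class StoreFileWriterSketch {
  // Sketch only: builds a StoreFileWriter via its Builder; values below are illustrative.
  public static StoreFileWriter open(Path outputDir, long expectedKeys) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    FileSystem fs = FileSystem.get(conf);
    CacheConfig cacheConf = new CacheConfig(conf);
    HFileContext fileContext = new HFileContextBuilder()
        .withBlockSize(64 * 1024)        // illustrative block size
        .build();
    return new StoreFileWriter.Builder(conf, cacheConf, fs)
        .withOutputDir(outputDir)        // or withFilePath(...) for an explicit file name
        .withBloomType(BloomType.ROW)
        .withMaxKeyCount(expectedKeys)   // sizing hint for the Bloom filter
        .withFileContext(fileContext)
        .withShouldDropCacheBehind(true)
        .build();
  }
}
```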
public void appendMetadata(long maxSequenceId, boolean majorCompaction) throws IOException
Writes meta data. Call before close() since it is written as meta data to this file.
Parameters:
maxSequenceId - Maximum sequence id.
majorCompaction - True if this file is the product of a major compaction
Throws:
IOException - problem writing to FS
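The expected call order is the one used by the flush and compaction code paths: append cells in comparator order, write the metadata, then close. The sketch below illustrates that order; the wrapper class, method name and variables (writeCells, sortedCells, maxSequenceId) are illustrative and not HBase API.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.regionserver.StoreFileWriter;

final class StoreFileWriteSequence {
  // Illustrative sketch: cells must arrive in comparator order.
  static void writeCells(StoreFileWriter writer, Iterable<Cell> sortedCells,
      long maxSequenceId) throws IOException {
    for (Cell cell : sortedCells) {
      writer.append(cell);                        // also updates the Bloom filters and the TimeRangeTracker
    }
    writer.appendMetadata(maxSequenceId, false);  // false: not the product of a major compaction
    writer.appendTrackedTimestampsToMetadata();   // persists the time range and earliest put timestamp
    writer.close();
  }
}
```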
public void appendMetadata(long maxSequenceId, boolean majorCompaction, Collection<HStoreFile> storeFiles) throws IOException
Writes meta data. Call before close() since it is written as meta data to this file.
Parameters:
maxSequenceId - Maximum sequence id.
majorCompaction - True if this file is the product of a major compaction
storeFiles - The compacted store files that produced this new file
Throws:
IOException - problem writing to FS
private byte[] toCompactionEventTrackerBytes(Collection<HStoreFile> storeFiles)
Used when writing HStoreFile.COMPACTION_EVENT_KEY to the new file's file info. The names of the compacted store files are needed. If a compacted store file is itself the result of a compaction, its own compacted files that have not yet been archived are needed too, but there is no need to add compacted files recursively. For example, if files A, B and C are compacted into a new file D, and D is later compacted into a new file E, then A, B, C and D are written to E's compacted files. If E is then compacted into a new file F, E is added to F's compacted files first, followed by E's compacted files A, B, C and D. D's own compacted files do not need to be added, because they are already contained in E's compacted files. See HBASE-20724 for more details.
Parameters:
storeFiles - The compacted store files that produced this new file
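A minimal sketch of the one-level expansion described above; the private method additionally serializes the resulting set into the bytes stored under HStoreFile.COMPACTION_EVENT_KEY. The HStoreFile#getPath() and HStoreFile#getCompactedStoreFiles() accessors used here are assumptions for the sketch.

```java
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.regionserver.HStoreFile;

final class CompactionEventSketch {
  // One-level expansion: record each input file plus the names already listed in its own
  // compaction-event set; no deeper recursion is needed because each set already carries
  // its ancestors (A, B, C, D are contained in E's set).
  static Set<String> compactionEventFileNames(Collection<HStoreFile> storeFiles) {
    Set<String> names = new HashSet<>();
    for (HStoreFile inputFile : storeFiles) {
      names.add(inputFile.getPath().getName());          // e.g. E itself
      names.addAll(inputFile.getCompactedStoreFiles());  // e.g. A, B, C, D recorded in E
    }
    return names;
  }
}
```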
public void appendMetadata(long maxSequenceId, boolean majorCompaction, long mobCellsCount) throws IOException
Writes meta data. Call before close() since it is written as meta data to this file.
Parameters:
maxSequenceId - Maximum sequence id.
majorCompaction - True if this file is the product of a major compaction
mobCellsCount - The number of mob cells.
Throws:
IOException - problem writing to FS
public void appendTrackedTimestampsToMetadata() throws IOException
Add TimestampRange and earliest put timestamp to Metadata.
Throws:
IOException

public void trackTimestamps(Cell cell)
Record the earliest Put timestamp.

private void appendGeneralBloomfilter(Cell cell) throws IOException
Throws:
IOException

private void appendDeleteFamilyBloomFilter(Cell cell) throws IOException
Throws:
IOException
public void append(Cell cell) throws IOException
Description copied from interface: CellSink
Append the given cell.
Specified by:
append in interface CellSink
Parameters:
cell - the cell to be added
Throws:
IOException
public void beforeShipped() throws IOException
Description copied from interface: ShipperListener
The action that needs to be performed before Shipper.shipped() is performed.
Specified by:
beforeShipped in interface ShipperListener
Throws:
IOException
public org.apache.hadoop.fs.Path getPath()
public boolean hasGeneralBloom()
BloomFilterWriter getGeneralBloomWriter()
private boolean closeBloomFilter(BloomFilterWriter bfw) throws IOException
Throws:
IOException

private boolean closeGeneralBloomFilter() throws IOException
Throws:
IOException

private boolean closeDeleteFamilyBloomFilter() throws IOException
Throws:
IOException

public void close() throws IOException
Throws:
IOException

public void appendFileInfo(byte[] key, byte[] value) throws IOException
Throws:
IOException
HFile.Writer getHFileWriter()
static org.apache.hadoop.fs.Path getUniqueFile(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir) throws IOException
Parameters:
fs -
dir - Directory to create file in.
Returns:
random filename inside passed dir
Throws:
IOException
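The private dash Pattern field hints at how the unique name is formed: a random UUID with its dashes stripped, placed inside the passed directory. A rough sketch of that idea, not the exact implementation:

```java
import java.util.UUID;
import org.apache.hadoop.fs.Path;

final class UniqueFileNameSketch {
  // Sketch of the naming idea only; the actual method may perform additional checks on dir.
  static Path uniqueFile(Path dir) {
    return new Path(dir, UUID.randomUUID().toString().replaceAll("-", ""));
  }
}
```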
Copyright © 2007–2020 The Apache Software Foundation. All rights reserved.