@InterfaceAudience.LimitedPrivate(value="Coprocesssor") @InterfaceStability.Evolving public interface Store
Modifier and Type | Field and Description
---|---
static int | NO_PRIORITY
static int | PRIORITY_USER: The default priority for user-specified compaction requests.
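As a usage sketch (not part of this Javadoc), the hypothetical helper below shows how a request's priority is typically compared against these constants:

```java
import org.apache.hadoop.hbase.regionserver.Store;

final class PriorityCheck {
  // Hypothetical helper: user-submitted compaction requests carry
  // Store.PRIORITY_USER, while NO_PRIORITY marks requests without an
  // explicit priority.
  static boolean isUserRequested(int requestPriority) {
    return requestPriority == Store.PRIORITY_USER;
  }
}
```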
Modifier and Type | Method and Description
---|---
boolean | areWritesEnabled()
boolean | canSplit(): Returns whether this store is splittable, i.e., there is no reference file in this store.
OptionalDouble | getAvgStoreFileAge(): Returns the average age of store files in this store.
long | getBloomFilterEligibleRequestsCount(): Returns the count of requests which could have used Bloom filters, but they weren't configured or loaded.
long | getBloomFilterNegativeResultsCount(): Returns the count of negative results for Bloom filter requests for this store.
long | getBloomFilterRequestsCount(): Returns the count of Bloom filter results for this store.
ColumnFamilyDescriptor | getColumnFamilyDescriptor()
String | getColumnFamilyName()
long | getCompactedCellsCount(): Returns the number of cells processed during minor compactions.
long | getCompactedCellsSize(): Returns the total amount of data processed during minor compactions, in bytes.
Collection<? extends StoreFile> | getCompactedFiles()
int | getCompactedFilesCount(): Returns the count of compacted store files.
double | getCompactionPressure(): This value can represent the degree of emergency of compaction for this store.
int | getCompactPriority()
CellComparator | getComparator()
int | getCurrentParallelPutCount()
org.apache.hadoop.fs.FileSystem | getFileSystem()
MemStoreSize | getFlushableSize()
long | getFlushedCellsCount(): Returns the number of cells flushed to disk.
long | getFlushedCellsSize(): Returns the total size of data flushed to disk, in bytes.
long | getFlushedOutputFileSize(): Returns the total size of output files on disk, in bytes.
long | getHFilesSize(): Returns the size of only the store files which are HFiles, in bytes.
long | getLastCompactSize(): Returns the aggregate size of all HStores used in the last compaction.
long | getMajorCompactedCellsCount(): Returns the number of cells processed during major compactions.
long | getMajorCompactedCellsSize(): Returns the total amount of data processed during major compactions, in bytes.
OptionalLong | getMaxMemStoreTS(): Returns the maximum memstoreTS in all store files.
OptionalLong | getMaxSequenceId(): Returns the maximum sequence id in all store files.
OptionalLong | getMaxStoreFileAge(): Returns the maximum age of store files in this store.
long | getMemstoreOnlyRowReadsCount(): Returns the number of read requests served purely from the memstore.
MemStoreSize | getMemStoreSize(): Returns the size of this store's memstore.
OptionalLong | getMinStoreFileAge(): Returns the minimum age of store files in this store.
long | getMixedRowReadsCount(): Returns the number of read requests served from the files under this store.
long | getNumHFiles(): Returns the number of HFiles in this store.
long | getNumReferenceFiles(): Returns the number of reference files in this store.
org.apache.hadoop.conf.Configuration | getReadOnlyConfiguration()
RegionInfo | getRegionInfo(): Returns the parent region info hosting this store.
long | getSize(): Returns the aggregate size of the HStore.
long | getSmallestReadPoint()
MemStoreSize | getSnapshotSize(): Returns the size of the memstore snapshot.
Collection<? extends StoreFile> | getStorefiles()
int | getStorefilesCount(): Returns the count of store files.
long | getStorefilesRootLevelIndexSize(): Returns the size of the store file root-level indexes, in bytes.
long | getStorefilesSize(): Returns the size of the store files, in bytes.
long | getStoreSizeUncompressed(): Returns the size of the store files, in bytes, uncompressed.
TableName | getTableName()
long | getTotalStaticBloomSize(): Returns the total byte size of all Bloom filter bit arrays.
long | getTotalStaticIndexSize(): Returns the total size of all index blocks in the data block indexes, including the root level, intermediate levels, and the leaf level for multi-level indexes, or just the root level for single-level indexes.
boolean | hasReferences(): Returns true if the store has any underlying reference files to older HFiles.
boolean | hasTooManyStoreFiles(): Returns whether this store has too many store files.
boolean | isPrimaryReplicaStore()
boolean | isSloppyMemStore(): Returns true if the memstore may need some extra memory space.
boolean | needsCompaction(): Checks whether there are too many store files in this store.
void | refreshStoreFiles(): Checks the underlying store files, opens any that have not been opened, and removes the store file readers for store files that are no longer available.
boolean | shouldPerformMajorCompaction(): Tests whether we should run a major compaction.
long | timeOfOldestEdit(): Returns the time of the oldest edit in the memstore.
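As a usage sketch (illustrative, not part of this interface's Javadoc), the snippet below obtains the stores of a region and summarizes the read-only metrics listed above. Region.getStores() and MemStoreSize.getDataSize() are assumed from the surrounding HBase 2.x regionserver API:

```java
import org.apache.hadoop.hbase.regionserver.Region;
import org.apache.hadoop.hbase.regionserver.Store;

/** Illustrative read-only summary of each store in a region. */
final class StoreSummary {
  static void print(Region region) {
    for (Store store : region.getStores()) {
      StringBuilder sb = new StringBuilder(store.getColumnFamilyName());
      sb.append(": files=").append(store.getStorefilesCount())
        .append(", bytes=").append(store.getStorefilesSize())
        .append(", memstoreBytes=").append(store.getMemStoreSize().getDataSize())
        .append(", refFiles=").append(store.getNumReferenceFiles());
      store.getMaxStoreFileAge().ifPresent(age -> sb.append(", maxFileAge=").append(age));
      System.out.println(sb);
    }
  }
}
```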
static final int PRIORITY_USER
static final int NO_PRIORITY
CellComparator getComparator()
Collection<? extends StoreFile> getStorefiles()
Collection<? extends StoreFile> getCompactedFiles()
long timeOfOldestEdit()
org.apache.hadoop.fs.FileSystem getFileSystem()
boolean shouldPerformMajorCompaction() throws IOException
Throws: IOException
boolean needsCompaction()
Returns: true if the number of store files is greater than the number defined in minFilesToCompact (see the sketch below)
int getCompactPriority()
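A minimal monitoring sketch of the two methods above, assuming a Store obtained elsewhere (e.g., via Region.getStores() as shown earlier); the CompactionCheck class name is hypothetical:

```java
import org.apache.hadoop.hbase.regionserver.Store;

final class CompactionCheck {
  // Read-only sketch: report stores that have accumulated too many store
  // files, along with the priority value the compaction scheduler would see.
  static void report(Store store) {
    if (store.needsCompaction()) {
      System.out.println(store.getColumnFamilyName()
          + " needs compaction, priority=" + store.getCompactPriority());
    }
  }
}
```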
boolean canSplit()
boolean hasReferences()
Returns: true if the store has any underlying reference files to older HFiles
MemStoreSize getMemStoreSize()
MemStoreSize getFlushableSize()
Returns: getMemStoreSize(), unless we are carrying snapshots, and then it will be the size of outstanding snapshots (see the sketch below)
MemStoreSize getSnapshotSize()
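A small illustrative sketch of the split-readiness and flush-accounting contracts above; StoreInspection is a hypothetical class name:

```java
import org.apache.hadoop.hbase.regionserver.MemStoreSize;
import org.apache.hadoop.hbase.regionserver.Store;

final class StoreInspection {
  static void inspect(Store store) {
    // canSplit() is false while the store still carries reference files
    // to older HFiles, i.e. while hasReferences() is true.
    boolean splittable = store.canSplit();

    // getFlushableSize() equals getMemStoreSize() unless snapshots are
    // outstanding, in which case it is the size of those snapshots.
    MemStoreSize flushable = store.getFlushableSize();
    System.out.println("splittable=" + splittable
        + ", flushableDataBytes=" + flushable.getDataSize());
  }
}
```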
ColumnFamilyDescriptor getColumnFamilyDescriptor()
OptionalLong getMaxSequenceId()
OptionalLong getMaxMemStoreTS()
long getLastCompactSize()
long getSize()
int getStorefilesCount()
int getCompactedFilesCount()
OptionalLong getMaxStoreFileAge()
OptionalLong getMinStoreFileAge()
OptionalDouble getAvgStoreFileAge()
long getNumReferenceFiles()
long getNumHFiles()
long getStoreSizeUncompressed()
long getStorefilesSize()
long getHFilesSize()
long getStorefilesRootLevelIndexSize()
long getTotalStaticIndexSize()
long getTotalStaticBloomSize()
RegionInfo getRegionInfo()
boolean areWritesEnabled()
long getSmallestReadPoint()
String getColumnFamilyName()
TableName getTableName()
long getFlushedCellsCount()
long getFlushedCellsSize()
long getFlushedOutputFileSize()
long getCompactedCellsCount()
long getCompactedCellsSize()
long getMajorCompactedCellsCount()
long getMajorCompactedCellsSize()
boolean hasTooManyStoreFiles()
void refreshStoreFiles() throws IOException
Throws: IOException
double getCompactionPressure()
This value can represent the degree of emergency of compaction for this store. For striped stores, this value should be calculated from the files in each stripe separately, returning the maximum. It is similar to getCompactPriority(), except that it is more suitable for use in a linear formula (see the sketch below).
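A sketch of such a linear formula, in the spirit of HBase's pressure-aware throughput controllers; the 50 and 100 MB/s bounds are assumptions for illustration, not values defined by this interface:

```java
import org.apache.hadoop.hbase.regionserver.Store;

final class PressureScaling {
  // Scale a compaction throughput limit linearly with compaction pressure.
  static double maxThroughputBytesPerSec(Store store) {
    double pressure = Math.min(store.getCompactionPressure(), 1.0);
    double lower = 50.0 * 1024 * 1024;   // assumed floor when idle
    double upper = 100.0 * 1024 * 1024;  // assumed ceiling at pressure >= 1.0
    return lower + (upper - lower) * pressure;
  }
}
```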
boolean isPrimaryReplicaStore()
boolean isSloppyMemStore()
int getCurrentParallelPutCount()
long getMemstoreOnlyRowReadsCount()
long getMixedRowReadsCount()
org.apache.hadoop.conf.Configuration getReadOnlyConfiguration()
Throws: UnsupportedOperationException if you try to set a configuration (see the sketch at the end of this section)
long getBloomFilterRequestsCount()
long getBloomFilterNegativeResultsCount()
long getBloomFilterEligibleRequestsCount()
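Finally, an illustrative sketch of the read-only configuration contract noted above; hbase.hstore.compaction.ratio is a standard HBase key used here purely as an example:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.regionserver.Store;

final class ReadOnlyConfigExample {
  // The returned view supports get(); calling set() on it throws
  // UnsupportedOperationException.
  static String compactionRatio(Store store) {
    Configuration conf = store.getReadOnlyConfiguration();
    return conf.get("hbase.hstore.compaction.ratio");
  }
}
```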