Packages that use ThroughputController:

Package | Description
---|---
org.apache.hadoop.hbase.regionserver |
org.apache.hadoop.hbase.regionserver.compactions |
org.apache.hadoop.hbase.regionserver.throttle |
Methods in org.apache.hadoop.hbase.regionserver that return ThroughputController:

Modifier and Type | Method and Description
---|---
ThroughputController | CompactSplitThread.getCompactionThroughputController()
ThroughputController | RegionServerServices.getFlushThroughputController()
ThroughputController | HRegionServer.getFlushThroughputController()
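
As a rough orientation, here is a minimal sketch of looking up the flush controller through these accessors. The helper class and method name are illustrative, not part of HBase; the RegionServerServices instance is assumed to come from the caller's server context (HRegionServer implements RegionServerServices, so either accessor above resolves to the same call):

```java
import org.apache.hadoop.hbase.regionserver.RegionServerServices;
import org.apache.hadoop.hbase.regionserver.throttle.ThroughputController;

// Illustrative helper, not part of HBase: fetch the flush controller from
// whatever RegionServerServices instance the caller already holds.
final class FlushControllerLookup {
  static ThroughputController flushControllerOf(RegionServerServices services) {
    return services.getFlushThroughputController();
  }
}
```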
Methods in org.apache.hadoop.hbase.regionserver with parameters of type ThroughputController:

Modifier and Type | Method and Description
---|---
boolean | HRegion.compact(CompactionContext compaction, Store store, ThroughputController throughputController)
boolean | HRegion.compact(CompactionContext compaction, Store store, ThroughputController throughputController, User user)
List<StoreFile> | Store.compact(CompactionContext compaction, ThroughputController throughputController) Deprecated; see compact(CompactionContext, ThroughputController, User).
List<StoreFile> | HStore.compact(CompactionContext compaction, ThroughputController throughputController) Compact the StoreFiles.
List<StoreFile> | Store.compact(CompactionContext compaction, ThroughputController throughputController, User user)
List<StoreFile> | HStore.compact(CompactionContext compaction, ThroughputController throughputController, User user)
protected List<org.apache.hadoop.fs.Path> | HStore.flushCache(long logCacheFlushId, MemStoreSnapshot snapshot, MonitoredTask status, ThroughputController throughputController) Write out the current snapshot.
List<org.apache.hadoop.fs.Path> | DefaultStoreFlusher.flushSnapshot(MemStoreSnapshot snapshot, long cacheFlushId, MonitoredTask status, ThroughputController throughputController)
List<org.apache.hadoop.fs.Path> | StripeStoreFlusher.flushSnapshot(MemStoreSnapshot snapshot, long cacheFlushSeqNum, MonitoredTask status, ThroughputController throughputController)
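
To show how these parameters fit together, here is a hedged sketch of driving one store compaction with an explicit controller. The wrapper class is illustrative; `region`, `store`, and `compaction` are assumed to be obtained from region-server internals, and NoLimitThroughputController (see the throttle package below) is used to opt out of throttling:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.regionserver.HRegion;
import org.apache.hadoop.hbase.regionserver.Store;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionContext;
import org.apache.hadoop.hbase.regionserver.throttle.NoLimitThroughputController;

// Illustrative only: region, store, and compaction are assumed to come from
// region-server internals (e.g. a CompactionContext selected for the store).
final class UnthrottledCompaction {
  static boolean run(HRegion region, Store store, CompactionContext compaction)
      throws IOException {
    // NoLimitThroughputController opts this compaction out of throttling.
    return region.compact(compaction, store, new NoLimitThroughputController());
  }
}
```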
Methods in org.apache.hadoop.hbase.regionserver.compactions with parameters of type ThroughputController:

Modifier and Type | Method and Description
---|---
protected List<org.apache.hadoop.fs.Path> | Compactor.compact(CompactionRequest request, Compactor.InternalScannerFactory scannerFactory, Compactor.CellSinkFactory<T> sinkFactory, ThroughputController throughputController, User user)
List<org.apache.hadoop.fs.Path> | StripeCompactor.compact(CompactionRequest request, int targetCount, long targetSize, byte[] left, byte[] right, byte[] majorRangeFromRow, byte[] majorRangeToRow, ThroughputController throughputController, User user)
List<org.apache.hadoop.fs.Path> | StripeCompactor.compact(CompactionRequest request, List<byte[]> targetBoundaries, byte[] majorRangeFromRow, byte[] majorRangeToRow, ThroughputController throughputController, User user)
List<org.apache.hadoop.fs.Path> | DateTieredCompactor.compact(CompactionRequest request, List<Long> lowerBoundaries, ThroughputController throughputController, User user)
List<org.apache.hadoop.fs.Path> | DefaultCompactor.compact(CompactionRequest request, ThroughputController throughputController, User user) Do a minor/major compaction on an explicit set of storefiles from a Store.
abstract List<org.apache.hadoop.fs.Path> | CompactionContext.compact(ThroughputController throughputController, User user)
List<org.apache.hadoop.fs.Path> | StripeCompactionPolicy.StripeCompactionRequest.execute(StripeCompactor compactor, ThroughputController throughputController)
abstract List<org.apache.hadoop.fs.Path> | StripeCompactionPolicy.StripeCompactionRequest.execute(StripeCompactor compactor, ThroughputController throughputController, User user) Executes the request against the compactor (essentially, just calls the correct overload of the compact method) to simulate more dynamic dispatch.
protected boolean | Compactor.performCompaction(InternalScanner scanner, Compactor.CellSink writer, long smallestReadPoint, boolean cleanSeqId, ThroughputController throughputController) Performs the compaction.
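
The common entry point across these overloads is CompactionContext.compact(ThroughputController, User). A hedged sketch, assuming the context was previously requested from the store and that `user` may be null for system-initiated compactions; the wrapper class is illustrative:

```java
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionContext;
import org.apache.hadoop.hbase.regionserver.throttle.ThroughputController;
import org.apache.hadoop.hbase.security.User;

// Illustrative only: `context` is assumed to have been selected by the
// store's compaction policy beforehand.
final class CompactionRunner {
  static List<Path> run(CompactionContext context,
      ThroughputController controller, User user) throws IOException {
    // Returns the paths of the newly written store files.
    return context.compact(controller, user);
  }
}
```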
Subclasses of ThroughputController in org.apache.hadoop.hbase.regionserver.throttle:

Modifier and Type | Class and Description
---|---
class | NoLimitThroughputController
class | PressureAwareCompactionThroughputController A throughput controller that uses the following scheme to limit compaction throughput: if compaction pressure is greater than 1.0, no limit is applied; in off-peak hours, a fixed limit from "hbase.hstore.compaction.throughput.offpeak" is used; in normal hours, the maximum throughput is tuned between "hbase.hstore.compaction.throughput.lower.bound" and "hbase.hstore.compaction.throughput.higher.bound" using the formula "lower + (higher - lower) * compactionPressure", where compactionPressure is in the range [0.0, 1.0].
class | PressureAwareFlushThroughputController A throughput controller that uses the following scheme to limit flush throughput: if flush pressure is greater than or equal to 1.0, no limit is applied; otherwise, the maximum throughput is tuned between "hbase.hstore.flush.throughput.lower.bound" and "hbase.hstore.flush.throughput.upper.bound" using the formula "lower + (upper - lower) * flushPressure", where flushPressure is in the range [0.0, 1.0).
class | PressureAwareThroughputController
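
The pressure-aware tuning described above reduces to a simple linear interpolation between the configured bounds. A minimal sketch of that formula (plain Java for illustration, not HBase API; names are made up):

```java
// Illustrative only, not HBase API: the linear interpolation both
// pressure-aware controllers describe for tuning the max throughput.
final class PressureTuning {
  static double tunedLimit(double lowerBound, double upperBound, double pressure) {
    // pressure 0.0 yields the lower bound; as pressure approaches 1.0 the
    // limit approaches the upper bound (the compaction controller drops the
    // limit entirely once pressure exceeds 1.0).
    return lowerBound + (upperBound - lowerBound) * pressure;
  }
}
```

For example, with a lower bound of 50 MB/s, an upper bound of 100 MB/s, and a pressure of 0.5, the tuned limit comes out to 75 MB/s.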
Methods in org.apache.hadoop.hbase.regionserver.throttle that return ThroughputController:

Modifier and Type | Method and Description
---|---
static ThroughputController | FlushThroughputControllerFactory.create(RegionServerServices server, org.apache.hadoop.conf.Configuration conf)
static ThroughputController | CompactionThroughputControllerFactory.create(RegionServerServices server, org.apache.hadoop.conf.Configuration conf)
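
A hedged sketch of these factory entry points, presumably how a region server would wire up both controllers; the wrapper class is illustrative, and `services` and `conf` are assumed to be supplied by the running server:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.regionserver.RegionServerServices;
import org.apache.hadoop.hbase.regionserver.throttle.CompactionThroughputControllerFactory;
import org.apache.hadoop.hbase.regionserver.throttle.FlushThroughputControllerFactory;
import org.apache.hadoop.hbase.regionserver.throttle.ThroughputController;

// Illustrative only: builds both controllers from the server's configuration.
final class ControllerBootstrap {
  static ThroughputController[] build(RegionServerServices services,
      Configuration conf) {
    ThroughputController compaction =
        CompactionThroughputControllerFactory.create(services, conf);
    ThroughputController flush =
        FlushThroughputControllerFactory.create(services, conf);
    return new ThroughputController[] { compaction, flush };
  }
}
```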
Methods in org.apache.hadoop.hbase.regionserver.throttle that return types with arguments of type ThroughputController:

Modifier and Type | Method and Description
---|---
static Class<? extends ThroughputController> | FlushThroughputControllerFactory.getThroughputControllerClass(org.apache.hadoop.conf.Configuration conf)
static Class<? extends ThroughputController> | CompactionThroughputControllerFactory.getThroughputControllerClass(org.apache.hadoop.conf.Configuration conf)
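
These lookups read the controller class from the Configuration, so the implementation can be swapped without code changes. A sketch under the assumption that the flush factory reads the key "hbase.regionserver.flush.throughput.controller" (verify against the factory's constants for the authoritative name):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.regionserver.throttle.FlushThroughputControllerFactory;
import org.apache.hadoop.hbase.regionserver.throttle.NoLimitThroughputController;
import org.apache.hadoop.hbase.regionserver.throttle.ThroughputController;

final class ControllerConfigDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Assumed key name; check FlushThroughputControllerFactory for the
    // actual configuration constant.
    conf.set("hbase.regionserver.flush.throughput.controller",
        NoLimitThroughputController.class.getName());
    Class<? extends ThroughputController> cls =
        FlushThroughputControllerFactory.getThroughputControllerClass(conf);
    System.out.println("Flush controller class: " + cls.getName());
  }
}
```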
Copyright © 2007–2019 The Apache Software Foundation. All rights reserved.