**Packages that use Store**

| Package | Description |
|---|---|
| `org.apache.hadoop.hbase.coprocessor` | |
| `org.apache.hadoop.hbase.coprocessor.example` | |
| `org.apache.hadoop.hbase.regionserver` | |
| `org.apache.hadoop.hbase.regionserver.compactions` | |
| `org.apache.hadoop.hbase.security.access` | |
**Methods in org.apache.hadoop.hbase.coprocessor.example with parameters of type Store**

| Modifier and Type | Method and Description |
|---|---|
| `protected ScanInfo` | `ZooKeeperScanPolicyObserver.getScanInfo(Store store, RegionCoprocessorEnvironment e)` |
| `InternalScanner` | `ZooKeeperScanPolicyObserver.preCompactScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Store store, List<? extends KeyValueScanner> scanners, ScanType scanType, long earliestPutTs, InternalScanner s)` |
| `InternalScanner` | `ZooKeeperScanPolicyObserver.preFlushScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Store store, KeyValueScanner memstoreScanner, InternalScanner s)` |
| `KeyValueScanner` | `ZooKeeperScanPolicyObserver.preStoreScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Store store, Scan scan, NavigableSet<byte[]> targetCols, KeyValueScanner s)` |
**Classes in org.apache.hadoop.hbase.regionserver that implement Store**

| Modifier and Type | Class and Description |
|---|---|
| `class` | `HStore`<br>A Store holds a column family in a Region. |
**Fields in org.apache.hadoop.hbase.regionserver declared as Store**

| Modifier and Type | Field and Description |
|---|---|
| `protected Store` | `StoreScanner.store` |
**Fields in org.apache.hadoop.hbase.regionserver with type parameters of type Store**

| Modifier and Type | Field and Description |
|---|---|
| `protected Map<byte[],Store>` | `HRegion.stores` |
**Methods in org.apache.hadoop.hbase.regionserver that return Store**

| Modifier and Type | Method and Description |
|---|---|
| `Store` | `Region.getStore(byte[] family)`<br>Return the Store for the given family. |
| `Store` | `HRegion.getStore(byte[] column)` |
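`Region.getStore(byte[])` resolves a column family name to its `Store`, which exposes per-family state such as memstore and store file sizes. A hedged sketch, assuming a `Region` reference obtained server-side (for example from `RegionCoprocessorEnvironment.getRegion()`); the accessors are from the 1.x `Store` interface:

```java
import org.apache.hadoop.hbase.regionserver.Region;
import org.apache.hadoop.hbase.regionserver.Store;

// Print basic per-family state for one column family of a region.
static void logStoreStats(Region region, byte[] family) {
  Store store = region.getStore(family);
  System.out.println(store.getColumnFamilyName()
      + ": memstore=" + store.getMemStoreSize() + " bytes"
      + ", storefiles=" + store.getStorefilesCount()
      + " (" + store.getStorefilesSize() + " bytes)");
}
```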
**Methods in org.apache.hadoop.hbase.regionserver that return types with arguments of type Store**

| Modifier and Type | Method and Description |
|---|---|
| `List<Store>` | `Region.getStores()`<br>Return the list of Stores managed by this region. |
| `List<Store>` | `HRegion.getStores()` |
| `abstract Collection<Store>` | `FlushPolicy.selectStoresToFlush()` |
| `Collection<Store>` | `FlushLargeStoresPolicy.selectStoresToFlush()` |
| `Collection<Store>` | `FlushAllStoresPolicy.selectStoresToFlush()` |
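`FlushPolicy.selectStoresToFlush()` decides which of a region's stores take part in the next flush: `FlushAllStoresPolicy` returns every store, while `FlushLargeStoresPolicy` skips small memstores. A sketch of a custom policy, assuming `FlushPolicy` exposes the owning region through a protected `region` field (an assumption based on how its subclasses behave); the threshold and class name are illustrative:

```java
import java.util.ArrayList;
import java.util.Collection;

import org.apache.hadoop.hbase.regionserver.FlushPolicy;
import org.apache.hadoop.hbase.regionserver.Store;

/**
 * Hypothetical policy: flush only stores holding at least ~1 MB of
 * memstore data, falling back to all stores if none qualifies.
 */
public class FlushBigStoresPolicy extends FlushPolicy {
  private static final long THRESHOLD = 1024L * 1024; // illustrative

  @Override
  public Collection<Store> selectStoresToFlush() {
    Collection<Store> toFlush = new ArrayList<Store>();
    for (Store store : region.getStores()) {
      if (store.getMemStoreSize() >= THRESHOLD) {
        toFlush.add(store);
      }
    }
    // Flushing nothing would leave the region over its memstore limit,
    // so fall back to flushing everything.
    return toFlush.isEmpty() ? region.getStores() : toFlush;
  }
}
```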
**Methods in org.apache.hadoop.hbase.regionserver with parameters of type Store**

| Modifier and Type | Method and Description |
|---|---|
| `boolean` | `HRegion.compact(CompactionContext compaction, Store store, CompactionThroughputController throughputController)` |
| `boolean` | `HRegion.compact(CompactionContext compaction, Store store, CompactionThroughputController throughputController, User user)` |
| `static StoreEngine<?,?,?,?>` | `StoreEngine.create(Store store, org.apache.hadoop.conf.Configuration conf, KeyValue.KVComparator kvComparator)`<br>Create the StoreEngine configured for the given Store. |
| `protected void` | `StripeStoreEngine.createComponents(org.apache.hadoop.conf.Configuration conf, Store store, KeyValue.KVComparator comparator)` |
| `protected abstract void` | `StoreEngine.createComponents(org.apache.hadoop.conf.Configuration conf, Store store, KeyValue.KVComparator kvComparator)`<br>Create the StoreEngine's components. |
| `protected void` | `DefaultStoreEngine.createComponents(org.apache.hadoop.conf.Configuration conf, Store store, KeyValue.KVComparator kvComparator)` |
| `void` | `RegionCoprocessorHost.postCompact(Store store, StoreFile resultFile, CompactionRequest request)`<br>Called after the store compaction has completed. |
| `void` | `RegionCoprocessorHost.postCompactSelection(Store store, com.google.common.collect.ImmutableList<StoreFile> selected, CompactionRequest request)`<br>Called after the StoreFiles to be compacted have been selected from the available candidates. |
| `void` | `RegionCoprocessorHost.postFlush(Store store, StoreFile storeFile)`<br>Invoked after a memstore flush. |
| `InternalScanner` | `RegionCoprocessorHost.preCompact(Store store, InternalScanner scanner, ScanType scanType, CompactionRequest request)`<br>Called prior to rewriting the store files selected for compaction. |
| `InternalScanner` | `RegionCoprocessorHost.preCompactScannerOpen(Store store, List<StoreFileScanner> scanners, ScanType scanType, long earliestPutTs, CompactionRequest request)` |
| `boolean` | `RegionCoprocessorHost.preCompactSelection(Store store, List<StoreFile> candidates, CompactionRequest request)`<br>Called prior to selecting the StoreFiles for compaction from the list of currently available candidates. |
| `InternalScanner` | `RegionCoprocessorHost.preFlush(Store store, InternalScanner scanner)`<br>Invoked before a memstore flush. |
| `InternalScanner` | `RegionCoprocessorHost.preFlushScannerOpen(Store store, KeyValueScanner memstoreScanner)` |
| `KeyValueScanner` | `RegionCoprocessorHost.preStoreScannerOpen(Store store, Scan scan, NavigableSet<byte[]> targetCols)` |
| `CompactionRequest` | `CompactionRequestor.requestCompaction(Region r, Store s, String why, CompactionRequest request)` |
| `CompactionRequest` | `CompactSplitThread.requestCompaction(Region r, Store s, String why, CompactionRequest request)` |
| `CompactionRequest` | `CompactionRequestor.requestCompaction(Region r, Store s, String why, int pri, CompactionRequest request, User user)` |
| `CompactionRequest` | `CompactSplitThread.requestCompaction(Region r, Store s, String why, int priority, CompactionRequest request, User user)` |
| `void` | `CompactSplitThread.requestSystemCompaction(Region r, Store s, String why)` |
| `protected boolean` | `HRegion.restoreEdit(Store s, Cell cell)`<br>Used by tests. |
**Method parameters in org.apache.hadoop.hbase.regionserver with type arguments of type Store**

| Modifier and Type | Method and Description |
|---|---|
| `protected Region.FlushResult` | `HRegion.internalFlushcache(WAL wal, long myseqid, Collection<Store> storesToFlush, MonitoredTask status, boolean writeFlushWalMarker)`<br>Flush the memstore. |
| `protected Region.FlushResult` | `HRegion.internalFlushCacheAndCommit(WAL wal, MonitoredTask status, org.apache.hadoop.hbase.regionserver.HRegion.PrepareFlushResult prepareResult, Collection<Store> storesToFlush)` |
| `protected org.apache.hadoop.hbase.regionserver.HRegion.PrepareFlushResult` | `HRegion.internalPrepareFlushCache(WAL wal, long myseqid, Collection<Store> storesToFlush, MonitoredTask status, boolean writeFlushWalMarker)` |
| `List<CompactionRequest>` | `CompactionRequestor.requestCompaction(Region r, String why, int pri, List<Pair<CompactionRequest,Store>> requests, User user)` |
| `List<CompactionRequest>` | `CompactSplitThread.requestCompaction(Region r, String why, int p, List<Pair<CompactionRequest,Store>> requests, User user)` |
| `List<CompactionRequest>` | `CompactionRequestor.requestCompaction(Region r, String why, List<Pair<CompactionRequest,Store>> requests)` |
| `List<CompactionRequest>` | `CompactSplitThread.requestCompaction(Region r, String why, List<Pair<CompactionRequest,Store>> requests)` |
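The list-based `requestCompaction` overloads take explicit (request, store) pairs, which allows queueing several stores of one region in a single call. A speculative sketch, assuming a `CompactionRequestor` obtained server-side (for example via `RegionServerServices.getCompactionRequester()`) and assuming a `null` `CompactionRequest` asks the requestor to build its own; neither assumption is confirmed by this page:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.regionserver.CompactionRequestor;
import org.apache.hadoop.hbase.regionserver.Region;
import org.apache.hadoop.hbase.regionserver.Store;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest;
import org.apache.hadoop.hbase.util.Pair;

// Sketch: ask the region server to compact every store in a region.
static void compactAllStores(CompactionRequestor requestor, Region region,
    String why) throws IOException {
  List<Pair<CompactionRequest, Store>> requests =
      new ArrayList<Pair<CompactionRequest, Store>>();
  for (Store store : region.getStores()) {
    // null request: let the requestor construct one per store (assumption).
    requests.add(new Pair<CompactionRequest, Store>(null, store));
  }
  requestor.requestCompaction(region, why, requests);
}
```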
**Constructors in org.apache.hadoop.hbase.regionserver with parameters of type Store**

| Constructor and Description |
|---|
| `DefaultStoreFlusher(org.apache.hadoop.conf.Configuration conf, Store store)` |
| `StoreScanner(Store store, boolean cacheBlocks, Scan scan, NavigableSet<byte[]> columns, long ttl, int minVersions, long readPt)`<br>An internal constructor. |
| `StoreScanner(Store store, ScanInfo scanInfo, Scan scan, List<? extends KeyValueScanner> scanners, long smallestReadPoint, long earliestPutTs, byte[] dropDeletesFromRow, byte[] dropDeletesToRow)`<br>Used for compactions that drop deletes from a limited range of rows. |
| `StoreScanner(Store store, ScanInfo scanInfo, Scan scan, List<? extends KeyValueScanner> scanners, ScanType scanType, long smallestReadPoint, long earliestPutTs)`<br>Used for compactions. |
| `StoreScanner(Store store, ScanInfo scanInfo, Scan scan, NavigableSet<byte[]> columns, long readPt)`<br>Opens a scanner across memstore, snapshot, and all StoreFiles. |
| `StripeStoreFlusher(org.apache.hadoop.conf.Configuration conf, Store store, StripeCompactionPolicy policy, StripeStoreFileManager stripes)` |
**Fields in org.apache.hadoop.hbase.regionserver.compactions declared as Store**

| Modifier and Type | Field and Description |
|---|---|
| `protected Store` | `Compactor.store` |
**Methods in org.apache.hadoop.hbase.regionserver.compactions with parameters of type Store**

| Modifier and Type | Method and Description |
|---|---|
| `protected InternalScanner` | `Compactor.createScanner(Store store, List<StoreFileScanner> scanners, long smallestReadPoint, long earliestPutTs, byte[] dropDeletesFromRow, byte[] dropDeletesToRow)` |
| `protected InternalScanner` | `Compactor.createScanner(Store store, List<StoreFileScanner> scanners, ScanType scanType, long smallestReadPoint, long earliestPutTs)` |
**Constructors in org.apache.hadoop.hbase.regionserver.compactions with parameters of type Store**

| Constructor and Description |
|---|
| `DefaultCompactor(org.apache.hadoop.conf.Configuration conf, Store store)` |
| `StripeCompactor(org.apache.hadoop.conf.Configuration conf, Store store)` |
**Methods in org.apache.hadoop.hbase.security.access with parameters of type Store**

| Modifier and Type | Method and Description |
|---|---|
| `InternalScanner` | `AccessController.preCompact(ObserverContext<RegionCoprocessorEnvironment> e, Store store, InternalScanner scanner, ScanType scanType)` |
Copyright © 2007-2016 The Apache Software Foundation. All Rights Reserved.