@InterfaceAudience.Private public class HRegion extends Object implements HeapSize, PropagatingConfigurationObserver, Region
A Region is defined by its table and its key extent.
Locking at the Region level serves only one purpose: preventing the region from being closed (and consequently split) while other operations are ongoing. Each row level operation obtains both a row lock and a region read lock for the duration of the operation. While a scanner is being constructed, getScanner holds a read lock. If the scanner is successfully constructed, it holds a read lock until it is closed. A close takes out a write lock and consequently will block for ongoing operations and will block new operations from starting while the close is in progress.
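The read-lock/write-lock protocol described above can be sketched with a plain `ReentrantReadWriteLock`. This is a minimal analog of the close protocol, not HBase's actual implementation; the method names mirror `startRegionOperation()`, `closeRegionOperation()`, and `close()` only for illustration.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Minimal analog of the region close protocol: row-level operations take the
// read lock, close takes the write lock and therefore waits for in-flight
// operations while blocking new ones from starting.
class RegionLockDemo {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private volatile boolean closed = false;

    /** Analog of startRegionOperation(): fails once close has completed. */
    boolean startOperation() {
        lock.readLock().lock();
        if (closed) {
            lock.readLock().unlock();  // region closed; give the lock back
            return false;
        }
        return true;
    }

    /** Analog of closeRegionOperation(). */
    void endOperation() {
        lock.readLock().unlock();
    }

    /** Analog of close(): blocks until all read locks are released. */
    void close() {
        lock.writeLock().lock();
        try {
            closed = true;
        } finally {
            lock.writeLock().unlock();
        }
    }

    boolean isClosed() {
        return closed;
    }
}
```

Callers pair `startOperation()`/`endOperation()` in a try/finally, which is the same discipline `startRegionOperation()`/`closeRegionOperation()` require.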
Modifier and Type | Class and Description |
---|---|
private static class |
HRegion.BatchOperation<T>
Class that tracks the progress of a batch of operations, accumulating status codes and tracking
the index at which processing is proceeding.
|
static interface |
HRegion.BulkLoadListener
Listener class to enable callers of bulkLoadHFile() to perform any necessary pre/post
processing of a given bulkload call
|
static interface |
HRegion.FlushResult |
static class |
HRegion.FlushResultImpl
Objects from this class are created when flushing to describe all the different states that
method ends up in.
|
private static class |
HRegion.MutationBatchOperation
Batch of mutation operations.
|
(package private) static class |
HRegion.ObservedExceptionsInBatch
A class that tracks exceptions that have been observed in one batch.
|
(package private) static class |
HRegion.PrepareFlushResult
A result object from the prepare-flush-cache stage.
|
private static class |
HRegion.ReplayBatchOperation
Batch of mutations for replay.
|
(package private) class |
HRegion.RowLockContext |
static class |
HRegion.RowLockImpl
Class used to represent a lock on a row.
|
(package private) static class |
HRegion.WriteState |
Nested classes/interfaces inherited from interface Region: Region.Operation, Region.RowLock
Constructor and Description |
---|
HRegion(HRegionFileSystem fs,
WAL wal,
org.apache.hadoop.conf.Configuration confParam,
TableDescriptor htd,
RegionServerServices rsServices)
HRegion constructor.
|
HRegion(org.apache.hadoop.fs.Path tableDir,
WAL wal,
org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.conf.Configuration confParam,
RegionInfo regionInfo,
TableDescriptor htd,
RegionServerServices rsServices)
Deprecated.
Use other constructors.
|
Modifier and Type | Method and Description |
---|---|
void |
addReadRequestsCount(long readRequestsCount) |
void |
addRegionToSnapshot(org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotDescription desc,
ForeignExceptionSnare exnSnare)
Complete taking the snapshot on the region.
|
void |
addWriteRequestsCount(long writeRequestsCount) |
Result |
append(Append append)
Perform one or more append operations on a row.
|
Result |
append(Append append,
long nonceGroup,
long nonce) |
private void |
applyToMemStore(HStore store,
Cell cell,
MemStoreSizing memstoreAccounting) |
private void |
applyToMemStore(HStore store,
List<Cell> cells,
boolean delta,
MemStoreSizing memstoreAccounting) |
boolean |
areWritesEnabled() |
private OperationStatus[] |
batchMutate(HRegion.BatchOperation<?> batchOp)
Perform a batch of mutations.
|
OperationStatus[] |
batchMutate(Mutation[] mutations)
Perform a batch of mutations.
|
(package private) OperationStatus[] |
batchMutate(Mutation[] mutations,
boolean atomic) |
OperationStatus[] |
batchMutate(Mutation[] mutations,
boolean atomic,
long nonceGroup,
long nonce) |
OperationStatus[] |
batchReplay(WALSplitUtil.MutationReplay[] mutations,
long replaySeqId) |
void |
blockUpdates() |
Map<byte[],List<org.apache.hadoop.fs.Path>> |
bulkLoadHFiles(Collection<Pair<byte[],String>> familyPaths,
boolean assignSeqId,
HRegion.BulkLoadListener bulkLoadListener)
Attempts to atomically load a group of hfiles.
|
Map<byte[],List<org.apache.hadoop.fs.Path>> |
bulkLoadHFiles(Collection<Pair<byte[],String>> familyPaths,
boolean assignSeqId,
HRegion.BulkLoadListener bulkLoadListener,
boolean copyFile,
List<String> clusterIds,
boolean replicate)
Attempts to atomically load a group of hfiles.
|
boolean |
checkAndMutate(byte[] row,
byte[] family,
byte[] qualifier,
CompareOperator op,
ByteArrayComparable comparator,
TimeRange timeRange,
Mutation mutation)
Deprecated.
|
boolean |
checkAndMutate(byte[] row,
Filter filter,
TimeRange timeRange,
Mutation mutation)
Deprecated.
|
CheckAndMutateResult |
checkAndMutate(CheckAndMutate checkAndMutate)
Atomically checks if a row matches the conditions and if it does, it performs the actions.
|
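The contract of `checkAndMutate` is that the check and the mutation happen atomically: no other writer can change the row between the guard passing and the mutation applying. The sketch below illustrates that check-then-act semantics with a single lock over an in-memory row; it is an illustrative analog, not the HBase code, and the `checkAndPut` name and map-based row are assumptions of this example.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative analog of checkAndMutate's check-then-act contract: the check
// and the mutation happen under the same lock, so no other writer can slip
// in between them.
class CheckAndMutateDemo {
    private final Map<String, String> row = new HashMap<>();

    /** Mutate only if the current value for the qualifier equals 'expected'
     *  (null meaning "cell absent"). Returns true iff the guard passed. */
    synchronized boolean checkAndPut(String qualifier, String expected,
                                     String newValue) {
        String current = row.get(qualifier);
        boolean matches = (current == null) ? expected == null
                                            : current.equals(expected);
        if (matches) {
            row.put(qualifier, newValue);  // the "mutate" half, same lock
        }
        return matches;
    }

    synchronized String get(String qualifier) {
        return row.get(qualifier);
    }
}
```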
CheckAndMutateResult |
checkAndMutate(CheckAndMutate checkAndMutate,
long nonceGroup,
long nonce) |
private CheckAndMutateResult |
checkAndMutateInternal(CheckAndMutate checkAndMutate,
long nonceGroup,
long nonce) |
boolean |
checkAndRowMutate(byte[] row,
byte[] family,
byte[] qualifier,
CompareOperator op,
ByteArrayComparable comparator,
TimeRange timeRange,
RowMutations rm)
Deprecated.
|
boolean |
checkAndRowMutate(byte[] row,
Filter filter,
TimeRange timeRange,
RowMutations rm)
Deprecated.
|
void |
checkFamilies(Collection<byte[]> families)
Check the collection of families for validity.
|
(package private) void |
checkFamily(byte[] family) |
(package private) void |
checkInterrupt()
Check thread interrupt status and throw an exception if interrupted.
|
private void |
checkMutationType(Mutation mutation) |
private void |
checkNegativeMemStoreDataSize(long memStoreDataSize,
long delta) |
private void |
checkReadOnly() |
private void |
checkReadsEnabled() |
private void |
checkResources()
Check whether resources are available to support an update.
|
(package private) void |
checkRow(byte[] row,
String op)
Make sure this is a valid row for the HRegion
|
private void |
checkRow(Row action,
byte[] row) |
Optional<byte[]> |
checkSplit() |
Optional<byte[]> |
checkSplit(boolean force)
Return the split point.
|
private void |
checkTargetRegion(byte[] encodedRegionName,
String exceptionMsg,
Object payload)
Checks whether the given regionName is either equal to our region, or that the regionName is
the primary region to our corresponding range for the secondary replica.
|
void |
checkTimestamps(Map<byte[],List<Cell>> familyMap,
long now)
Check the collection of families for valid timestamps against the given current timestamp.
|
Map<byte[],List<HStoreFile>> |
close()
Close down this HRegion.
|
Map<byte[],List<HStoreFile>> |
close(boolean abort)
Close down this HRegion.
|
private void |
closeBulkRegionOperation()
Closes the lock taken by startBulkRegionOperation(boolean).
|
void |
closeRegionOperation()
Closes the region operation lock.
|
void |
closeRegionOperation(Region.Operation operation)
Closes the region operation lock.
|
void |
compact(boolean majorCompaction)
Synchronously compact all stores in the region.
|
boolean |
compact(CompactionContext compaction,
HStore store,
ThroughputController throughputController)
Called by compaction thread and after region is opened to compact the HStores if necessary.
|
boolean |
compact(CompactionContext compaction,
HStore store,
ThroughputController throughputController,
User user) |
(package private) void |
compactStore(byte[] family,
ThroughputController throughputController)
This is a helper function that compacts the given store.
|
void |
compactStores()
This is a helper function that compacts all the stores synchronously.
|
static HDFSBlocksDistribution |
computeHDFSBlocksDistribution(org.apache.hadoop.conf.Configuration conf,
TableDescriptor tableDescriptor,
RegionInfo regionInfo)
This is a helper function to compute HDFS block distribution on demand
|
static HDFSBlocksDistribution |
computeHDFSBlocksDistribution(org.apache.hadoop.conf.Configuration conf,
TableDescriptor tableDescriptor,
RegionInfo regionInfo,
org.apache.hadoop.fs.Path tablePath)
This is a helper function to compute HDFS block distribution on demand
|
static HRegion |
createHRegion(org.apache.hadoop.conf.Configuration conf,
RegionInfo regionInfo,
org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path tableDir,
TableDescriptor tableDesc)
Create a region under the given table directory.
|
static HRegion |
createHRegion(RegionInfo info,
org.apache.hadoop.fs.Path rootDir,
org.apache.hadoop.conf.Configuration conf,
TableDescriptor hTableDescriptor,
WAL wal) |
static HRegion |
createHRegion(RegionInfo info,
org.apache.hadoop.fs.Path rootDir,
org.apache.hadoop.conf.Configuration conf,
TableDescriptor hTableDescriptor,
WAL wal,
boolean initialize)
Convenience method creating new HRegions.
|
static HRegion |
createHRegion(RegionInfo info,
org.apache.hadoop.fs.Path rootDir,
org.apache.hadoop.conf.Configuration conf,
TableDescriptor hTableDescriptor,
WAL wal,
boolean initialize,
RegionServerServices rsRpcServices)
Convenience method creating new HRegions.
|
static HRegionFileSystem |
createRegionDir(org.apache.hadoop.conf.Configuration configuration,
RegionInfo ri,
org.apache.hadoop.fs.Path rootDir)
Create the region directory in the filesystem.
|
(package private) io.opentelemetry.api.trace.Span |
createRegionSpan(String name) |
private static void |
decorateRegionConfiguration(org.apache.hadoop.conf.Configuration conf)
This method modifies the region's configuration in order to inject replication-related features
|
void |
decrementCompactionsQueuedCount() |
protected void |
decrementFlushesQueuedCount() |
private void |
decrMemStoreSize(long dataSizeDelta,
long heapSizeDelta,
long offHeapSizeDelta,
int cellsCountDelta) |
(package private) void |
decrMemStoreSize(MemStoreSize mss) |
void |
delete(Delete delete)
Deletes the specified cells/row.
|
private void |
deleteRecoveredEdits(org.apache.hadoop.fs.FileSystem fs,
Iterable<org.apache.hadoop.fs.Path> files) |
void |
deregisterChildren(ConfigurationManager manager)
Needs to be called to deregister the children from the manager.
|
(package private) void |
disableInterrupts()
If a handler thread is eligible for interrupt, make it ineligible.
|
private void |
doAbortFlushToWAL(WAL wal,
long flushOpSeqId,
Map<byte[],List<org.apache.hadoop.fs.Path>> committedFiles) |
private Map<byte[],List<HStoreFile>> |
doClose(boolean abort,
MonitoredTask status) |
private MemStoreSize |
doDropStoreMemStoreContentsForSeqId(HStore s,
long currentSeqId) |
private void |
doMiniBatchMutate(HRegion.BatchOperation<?> batchOp)
Called to do a piece of the batch that came in to
batchMutate(Mutation[]). Here we
also handle replay of edits on region recovery. |
private void |
doProcessRowWithTimeout(RowProcessor<?,?> processor,
long now,
HRegion region,
List<Mutation> mutations,
WALEdit walEdit,
long timeout) |
protected void |
doRegionCompactionPrep()
Do preparation for pending compaction.
|
private static void |
doSyncOfUnflushedWALChanges(WAL wal,
RegionInfo hri)
Sync unflushed WAL changes.
|
private MultiVersionConcurrencyControl.WriteEntry |
doWALAppend(WALEdit walEdit,
Durability durability,
List<UUID> clusterIds,
long now,
long nonceGroup,
long nonce) |
private MultiVersionConcurrencyControl.WriteEntry |
doWALAppend(WALEdit walEdit,
Durability durability,
List<UUID> clusterIds,
long now,
long nonceGroup,
long nonce,
long origLogSeqNum)
Returns writeEntry associated with this append
|
private MemStoreSize |
dropMemStoreContents()
Be careful, this method will drop all data in the memstore of this region.
|
private MemStoreSize |
dropMemStoreContentsForSeqId(long seqId,
HStore store)
Drops the memstore contents after replaying a flush descriptor or region open event, if
the memstore edits have seqNums smaller than the given seq id.
|
private void |
dropPrepareFlushIfPossible()
If all stores ended up dropping their snapshots, we can safely drop the prepareFlushResult
|
(package private) void |
enableInterrupts()
If a handler thread was made ineligible for interrupt via
disableInterrupts(), make
it eligible again. |
boolean |
equals(Object o) |
com.google.protobuf.Message |
execService(com.google.protobuf.RpcController controller,
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.CoprocessorServiceCall call)
Executes a single protocol buffer coprocessor endpoint
Service method using the
registered protocol handlers. |
HRegion.FlushResult |
flush(boolean flushAllStores)
Flush the cache.
|
(package private) HRegion.FlushResultImpl |
flushcache(boolean flushAllStores,
boolean writeFlushRequestWalMarker,
FlushLifeCycleTracker tracker) |
HRegion.FlushResultImpl |
flushcache(List<byte[]> families,
boolean writeFlushRequestWalMarker,
FlushLifeCycleTracker tracker)
Flush the cache.
|
Result |
get(Get get)
Do a get based on the get parameter.
|
List<Cell> |
get(Get get,
boolean withCoprocessor)
Do a get based on the get parameter.
|
private List<Cell> |
get(Get get,
boolean withCoprocessor,
long nonceGroup,
long nonce) |
(package private) org.apache.hadoop.conf.Configuration |
getBaseConf()
A split takes the config from the parent region & passes it to the daughter region's
constructor.
|
BlockCache |
getBlockCache() |
long |
getBlockedRequestsCount()
Returns the number of blocked requests
|
CellComparator |
getCellComparator()
The comparator to be used with the region
|
long |
getCheckAndMutateChecksFailed()
Returns the number of failed checkAndMutate guards
|
long |
getCheckAndMutateChecksPassed()
Returns the number of checkAndMutate guards that passed
|
CompactionState |
getCompactionState()
Returns whether this region is currently under compaction.
|
int |
getCompactPriority()
Returns The priority that this region should have in the compaction queue
|
RegionCoprocessorHost |
getCoprocessorHost()
Returns the coprocessor host
|
long |
getDataInMemoryWithoutWAL()
Returns the size of data processed bypassing the WAL, in bytes
|
long |
getEarliestFlushTimeForAllStores() |
private Durability |
getEffectiveDurability(Durability d)
Returns effective durability from the passed durability and the table descriptor.
|
org.apache.hadoop.fs.FileSystem |
getFilesystem()
Returns
FileSystem being used by this region |
long |
getFilteredReadRequestsCount()
Returns filtered read requests count for this region
|
HDFSBlocksDistribution |
getHDFSBlocksDistribution() |
private List<Cell> |
getInternal(Get get,
boolean withCoprocessor,
long nonceGroup,
long nonce) |
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.RegionLoadStats |
getLoadStatistics()
Returns statistics about the current load of the region
|
ConcurrentHashMap<HashedBytes,HRegion.RowLockContext> |
getLockedRows() |
long |
getMaxFlushedSeqId() |
Map<byte[],Long> |
getMaxStoreSeqId() |
long |
getMemStoreDataSize() |
long |
getMemStoreFlushSize() |
long |
getMemStoreHeapSize() |
long |
getMemStoreOffHeapSize() |
MetricsRegion |
getMetrics() |
MobFileCache |
getMobFileCache() |
MultiVersionConcurrencyControl |
getMVCC() |
protected long |
getNextSequenceId(WAL wal)
Method to safely get the next sequence number.
|
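`getNextSequenceId(WAL)` exists because multiple threads need monotonically increasing, never-duplicated sequence numbers; HRegion obtains the value from the WAL, while the stand-in below only demonstrates the thread-safety property itself (the `SequenceIdDemo` name and `AtomicLong` scheme are this example's assumptions, not HBase's mechanism).

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a thread-safe monotonic sequence source: incrementAndGet is a
// single atomic step, so two threads can never observe the same value.
class SequenceIdDemo {
    private final AtomicLong seq = new AtomicLong(0);

    long next() {
        return seq.incrementAndGet();
    }
}
```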
long |
getNumMutationsWithoutWAL()
Returns the number of mutations processed bypassing the WAL
|
long |
getOldestHfileTs(boolean majorCompactionOnly)
This can be used to determine the last time all files of this region were major compacted.
|
long |
getOldestSeqIdOfStore(byte[] familyName) |
private static ThreadPoolExecutor |
getOpenAndCloseThreadPool(int maxThreads,
String threadNamePrefix) |
long |
getOpenSeqNum()
Returns the latest sequence number that was read from storage when this region was opened
|
(package private) HRegion.PrepareFlushResult |
getPrepareFlushResult() |
int |
getReadLockCount() |
org.apache.hadoop.conf.Configuration |
getReadOnlyConfiguration() |
long |
getReadPoint(IsolationLevel isolationLevel)
Returns readpoint considering given IsolationLevel.
|
long |
getReadRequestsCount()
Returns read requests count for this region
|
static org.apache.hadoop.fs.Path |
getRegionDir(org.apache.hadoop.fs.Path tabledir,
String name)
Deprecated.
For tests only; to be removed.
|
HRegionFileSystem |
getRegionFileSystem()
Returns the
HRegionFileSystem used by this region |
RegionInfo |
getRegionInfo()
Returns region information for this region
|
(package private) RegionServerServices |
getRegionServerServices()
Returns Instance of
RegionServerServices used by this HRegion. |
RegionServicesForStores |
getRegionServicesForStores()
Returns store services for this region, to access services required by store level needs
|
(package private) HRegionWALFileSystem |
getRegionWALFileSystem()
Returns the WAL
HRegionFileSystem used by this region |
NavigableMap<byte[],Integer> |
getReplicationScope() |
Region.RowLock |
getRowLock(byte[] row)
Get an exclusive (write) lock on a given row.
|
Region.RowLock |
getRowLock(byte[] row,
boolean readLock)
Get a row lock for the specified row.
|
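Row locks come in shared (read) and exclusive (write) flavors, so concurrent readers of one row can proceed together while a writer excludes everyone else. The sketch below shows one way to keep per-row read-write locks; the map-based scheme is illustrative, not HRegion's actual RowLockContext bookkeeping.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of per-row lock bookkeeping in the spirit of getRowLock(byte[], boolean):
// each row key maps to its own read-write lock, created on first use.
class RowLockDemo {
    private final ConcurrentHashMap<String, ReentrantReadWriteLock> locks =
        new ConcurrentHashMap<>();

    /** Acquire a lock for the row; readLock=true gives a shared lock.
     *  The caller must unlock() the returned lock in a finally block. */
    Lock lock(String row, boolean readLock) {
        ReentrantReadWriteLock rw =
            locks.computeIfAbsent(row, k -> new ReentrantReadWriteLock());
        Lock l = readLock ? rw.readLock() : rw.writeLock();
        l.lock();
        return l;
    }
}
```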
private Region.RowLock |
getRowLock(byte[] row,
boolean readLock,
Region.RowLock prevRowLock) |
protected Region.RowLock |
getRowLockInternal(byte[] row,
boolean readLock,
Region.RowLock prevRowLock) |
RegionScannerImpl |
getScanner(Scan scan)
Return an iterator that scans over the HRegion, returning the indicated columns and rows
specified by the
Scan. |
RegionScannerImpl |
getScanner(Scan scan,
List<KeyValueScanner> additionalScanners)
Return an iterator that scans over the HRegion, returning the indicated columns and rows
specified by the
Scan. |
private RegionScannerImpl |
getScanner(Scan scan,
List<KeyValueScanner> additionalScanners,
long nonceGroup,
long nonce) |
long |
getSmallestReadPoint() |
private Collection<HStore> |
getSpecificStores(List<byte[]> families)
Get the stores that match the specified families.
|
(package private) RegionSplitPolicy |
getSplitPolicy()
Returns split policy for this region.
|
HStore |
getStore(byte[] column)
Return the Store for the given family
|
private HStore |
getStore(Cell cell)
Return HStore instance.
|
List<String> |
getStoreFileList(byte[][] columns)
Returns list of store file names for the given families
|
(package private) ThreadPoolExecutor |
getStoreFileOpenAndCloseThreadPool(String threadNamePrefix) |
private NavigableMap<byte[],List<org.apache.hadoop.fs.Path>> |
getStoreFiles()
Returns Map of StoreFiles by column family
|
private ThreadPoolExecutor |
getStoreOpenAndCloseThreadPool(String threadNamePrefix) |
List<HStore> |
getStores()
Return the list of Stores managed by this region
|
TableDescriptor |
getTableDescriptor()
Returns table descriptor for this region
|
WAL |
getWAL()
Returns WAL in use for this region
|
(package private) org.apache.hadoop.fs.FileSystem |
getWalFileSystem()
Returns the WAL
FileSystem being used by this region |
org.apache.hadoop.fs.Path |
getWALRegionDir() |
long |
getWriteRequestsCount()
Returns write request count for this region
|
private void |
handleException(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path edits,
IOException e) |
int |
hashCode() |
private static boolean |
hasMultipleColumnFamilies(Collection<Pair<byte[],String>> familyPaths)
Determines whether multiple column families are present. Precondition: familyPaths is not null.
|
boolean |
hasReferences()
Returns True if this region has references.
|
long |
heapSize()
Return the approximate 'exclusive deep size' of implementing object.
|
(package private) void |
incMemStoreSize(long dataSizeDelta,
long heapSizeDelta,
long offHeapSizeDelta,
int cellsCountDelta) |
private void |
incMemStoreSize(MemStoreSize mss)
Increase the size of mem store in this region and the size of global mem store
|
Result |
increment(Increment increment)
Perform one or more increment operations on a row.
|
Result |
increment(Increment increment,
long nonceGroup,
long nonce) |
void |
incrementCompactionsQueuedCount() |
void |
incrementFlushesQueuedCount() |
long |
initialize()
Deprecated.
use HRegion.createHRegion() or HRegion.openHRegion()
|
(package private) long |
initialize(CancelableProgressable reporter)
Initialize this region.
|
private long |
initializeRegionInternals(CancelableProgressable reporter,
MonitoredTask status) |
private long |
initializeStores(CancelableProgressable reporter,
MonitoredTask status)
Open all Stores.
|
private long |
initializeStores(CancelableProgressable reporter,
MonitoredTask status,
boolean warmup) |
private void |
initializeWarmup(CancelableProgressable reporter) |
protected HStore |
instantiateHStore(ColumnFamilyDescriptor family,
boolean warmup) |
protected RegionScannerImpl |
instantiateRegionScanner(Scan scan,
List<KeyValueScanner> additionalScanners,
long nonceGroup,
long nonce) |
private HRegion.FlushResultImpl |
internalFlushcache(Collection<HStore> storesToFlush,
MonitoredTask status,
boolean writeFlushWalMarker,
FlushLifeCycleTracker tracker)
Flushing given stores.
|
private HRegion.FlushResult |
internalFlushcache(MonitoredTask status)
Flushing all stores.
|
protected HRegion.FlushResultImpl |
internalFlushcache(WAL wal,
long myseqid,
Collection<HStore> storesToFlush,
MonitoredTask status,
boolean writeFlushWalMarker,
FlushLifeCycleTracker tracker)
Flush the memstore.
|
(package private) HRegion.FlushResultImpl |
internalFlushCacheAndCommit(WAL wal,
MonitoredTask status,
HRegion.PrepareFlushResult prepareResult,
Collection<HStore> storesToFlush) |
protected HRegion.PrepareFlushResult |
internalPrepareFlushCache(WAL wal,
long myseqid,
Collection<HStore> storesToFlush,
MonitoredTask status,
boolean writeFlushWalMarker,
FlushLifeCycleTracker tracker) |
private void |
interruptRegionOperations()
Interrupt any region operations that have acquired the region lock via
startRegionOperation(org.apache.hadoop.hbase.regionserver.Region.Operation) , or
startBulkRegionOperation(boolean) . |
private boolean |
isAllFamilies(Collection<HStore> families)
Returns True if passed Set is all families in the region.
|
boolean |
isAvailable()
Returns true if region is available (not closed and not closing)
|
boolean |
isClosed()
Returns true if region is closed
|
boolean |
isClosing()
Returns True if closing process has started
|
private boolean |
isFlushSize(MemStoreSize size) |
boolean |
isLoadingCfsOnDemandDefault() |
boolean |
isMergeable()
Returns true if region is mergeable
|
boolean |
isReadOnly()
Returns True if region is read only
|
(package private) boolean |
isReadsEnabled() |
boolean |
isSplittable()
Returns true if region is splittable
|
private static boolean |
isZeroLengthThenDelete(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.FileStatus stat,
org.apache.hadoop.fs.Path p)
Make sure the file has been through lease recovery before getting its file status, so the file
length can be trusted.
|
private long |
loadRecoveredHFilesIfAny(Collection<HStore> stores) |
private void |
lock(Lock lock) |
private void |
lock(Lock lock,
int multiplier)
Try to acquire a lock.
|
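The `lock(Lock, int)` helper bounds how long a lock acquisition may wait rather than blocking forever. That pattern can be sketched with `tryLock` and a timeout; the `acquire` helper and the treatment of interrupts below are this example's choices, not HRegion's exact behavior.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of bounded lock acquisition: wait a limited time and report failure
// instead of blocking indefinitely.
class TimedLockDemo {
    /** Try to take the lock within waitMillis; false means timed out or
     *  interrupted (interrupt status is preserved for the caller). */
    static boolean acquire(Lock lock, long waitMillis) {
        try {
            return lock.tryLock(waitMillis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }
}
```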
private void |
logFatLineOnFlush(Collection<HStore> storesToFlush,
long sequenceId)
Utility method broken out of internalPrepareFlushCache so that method is smaller.
|
private void |
logRegionFiles() |
private boolean |
matches(CompareOperator op,
int compareResult) |
(package private) void |
metricsUpdateForGet(List<Cell> results,
long before) |
private OperationStatus |
mutate(Mutation mutation) |
private OperationStatus |
mutate(Mutation mutation,
boolean atomic) |
private OperationStatus |
mutate(Mutation mutation,
boolean atomic,
long nonceGroup,
long nonce) |
Result |
mutateRow(RowMutations rm)
Performs multiple mutations atomically on a single row.
|
Result |
mutateRow(RowMutations rm,
long nonceGroup,
long nonce) |
void |
mutateRowsWithLocks(Collection<Mutation> mutations,
Collection<byte[]> rowsToLock,
long nonceGroup,
long nonce)
Perform atomic (all or none) mutations within the region.
|
static HRegion |
newHRegion(org.apache.hadoop.fs.Path tableDir,
WAL wal,
org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.conf.Configuration conf,
RegionInfo regionInfo,
TableDescriptor htd,
RegionServerServices rsServices)
A utility method to create new instances of HRegion based on the
HConstants.REGION_IMPL
configuration property. |
void |
onConfigurationChange(org.apache.hadoop.conf.Configuration conf)
This method is called by the
ConfigurationManager object when the
Configuration object is reloaded from disk. |
private HRegion |
openHRegion(CancelableProgressable reporter)
Open HRegion.
|
static HRegion |
openHRegion(org.apache.hadoop.conf.Configuration conf,
org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path rootDir,
RegionInfo info,
TableDescriptor htd,
WAL wal)
Open a Region.
|
static HRegion |
openHRegion(org.apache.hadoop.conf.Configuration conf,
org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path rootDir,
RegionInfo info,
TableDescriptor htd,
WAL wal,
RegionServerServices rsServices,
CancelableProgressable reporter)
Open a Region.
|
static HRegion |
openHRegion(HRegion other,
CancelableProgressable reporter)
Useful when reopening a closed region (normally for unit tests)
|
static HRegion |
openHRegion(org.apache.hadoop.fs.Path rootDir,
RegionInfo info,
TableDescriptor htd,
WAL wal,
org.apache.hadoop.conf.Configuration conf)
Open a Region.
|
static HRegion |
openHRegion(org.apache.hadoop.fs.Path rootDir,
RegionInfo info,
TableDescriptor htd,
WAL wal,
org.apache.hadoop.conf.Configuration conf,
RegionServerServices rsServices,
CancelableProgressable reporter)
Open a Region.
|
static Region |
openHRegion(Region other,
CancelableProgressable reporter) |
static HRegion |
openHRegion(RegionInfo info,
TableDescriptor htd,
WAL wal,
org.apache.hadoop.conf.Configuration conf)
Open a Region.
|
static HRegion |
openHRegion(RegionInfo info,
TableDescriptor htd,
WAL wal,
org.apache.hadoop.conf.Configuration conf,
RegionServerServices rsServices,
CancelableProgressable reporter)
Open a Region.
|
static HRegion |
openHRegionFromTableDir(org.apache.hadoop.conf.Configuration conf,
org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path tableDir,
RegionInfo info,
TableDescriptor htd,
WAL wal,
RegionServerServices rsServices,
CancelableProgressable reporter)
Open a Region.
|
static HRegion |
openReadOnlyFileSystemHRegion(org.apache.hadoop.conf.Configuration conf,
org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path tableDir,
RegionInfo info,
TableDescriptor htd)
Open a Region on a read-only file-system (like hdfs snapshots)
|
private void |
prepareDelete(Delete delete)
Prepare a delete for a row mutation processor
|
private void |
prepareDeleteTimestamps(Mutation mutation,
Map<byte[],List<Cell>> familyMap,
byte[] byteNow)
Set up correct timestamps in the KVs in Delete object.
|
(package private) void |
prepareGet(Get get) |
private void |
preProcess(RowProcessor<?,?> processor,
WALEdit walEdit) |
void |
processRowsWithLocks(RowProcessor<?,?> processor)
Performs atomic multiple reads and writes on a given row.
|
void |
processRowsWithLocks(RowProcessor<?,?> processor,
long nonceGroup,
long nonce)
Performs atomic multiple reads and writes on a given row.
|
void |
processRowsWithLocks(RowProcessor<?,?> processor,
long timeout,
long nonceGroup,
long nonce)
Performs atomic multiple reads and writes on a given row.
|
void |
put(Put put)
Puts some data in the table.
|
private void |
recordMutationWithoutWal(Map<byte[],List<Cell>> familyMap)
Update LongAdders for number of puts without wal and the size of possible data loss.
|
boolean |
refreshStoreFiles()
Check the region's underlying store files, open the files that have not been opened yet, and
remove the store file readers for store files no longer available.
|
protected boolean |
refreshStoreFiles(boolean force) |
void |
registerChildren(ConfigurationManager manager)
Needs to be called to register the children to the manager.
|
boolean |
registerService(com.google.protobuf.Service instance)
Registers a new protocol buffer
Service subclass as a coprocessor endpoint to be
available for handling Region#execService(com.google.protobuf.RpcController,
org.apache.hadoop.hbase.protobuf.generated.ClientProtos.CoprocessorServiceCall) calls. |
private void |
releaseRowLocks(List<Region.RowLock> rowLocks) |
private void |
replayFlushInStores(org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.FlushDescriptor flush,
HRegion.PrepareFlushResult prepareFlushResult,
boolean dropMemstoreSnapshot)
Replays the given flush descriptor by opening the flush files in stores and dropping the
memstore snapshots if requested.
|
private long |
replayRecoveredEdits(org.apache.hadoop.fs.Path edits,
Map<byte[],Long> maxSeqIdInStores,
CancelableProgressable reporter,
org.apache.hadoop.fs.FileSystem fs) |
private long |
replayRecoveredEditsForPaths(long minSeqIdForTheRegion,
org.apache.hadoop.fs.FileSystem fs,
NavigableSet<org.apache.hadoop.fs.Path> files,
CancelableProgressable reporter,
org.apache.hadoop.fs.Path regionDir) |
(package private) long |
replayRecoveredEditsIfAny(Map<byte[],Long> maxSeqIdInStores,
CancelableProgressable reporter,
MonitoredTask status)
Read the edits put under this region by wal splitting process.
|
(package private) void |
replayWALBulkLoadEventMarker(org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.BulkLoadDescriptor bulkLoadEvent) |
(package private) void |
replayWALCompactionMarker(org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.CompactionDescriptor compaction,
boolean pickCompactionFiles,
boolean removeFiles,
long replaySeqId)
Call to complete a compaction.
|
private void |
replayWALFlushAbortMarker(org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.FlushDescriptor flush) |
private void |
replayWALFlushCannotFlushMarker(org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.FlushDescriptor flush,
long replaySeqId) |
(package private) void |
replayWALFlushCommitMarker(org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.FlushDescriptor flush) |
(package private) void |
replayWALFlushMarker(org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.FlushDescriptor flush,
long replaySeqId) |
(package private) HRegion.PrepareFlushResult |
replayWALFlushStartMarker(org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.FlushDescriptor flush)
Replay the flush marker from primary region by creating a corresponding snapshot of the store
memstores, only if the memstores do not have a higher seqId from an earlier wal edit (because
the events may be coming out of order).
|
(package private) void |
replayWALRegionEventMarker(org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.RegionEventDescriptor regionEvent) |
void |
reportCompactionRequestEnd(boolean isMajor,
int numFiles,
long filesSizeCompacted) |
void |
reportCompactionRequestFailure() |
void |
reportCompactionRequestStart(boolean isMajor) |
void |
requestCompaction(byte[] family,
String why,
int priority,
boolean major,
CompactionLifeCycleTracker tracker)
Request compaction for the given family
|
void |
requestCompaction(String why,
int priority,
boolean major,
CompactionLifeCycleTracker tracker)
Request compaction on this region.
|
private void |
requestFlush() |
void |
requestFlush(FlushLifeCycleTracker tracker)
Request flush on this region.
|
private void |
requestFlush0(FlushLifeCycleTracker tracker) |
private void |
requestFlushIfNeeded() |
protected void |
restoreEdit(HStore s,
Cell cell,
MemStoreSizing memstoreAccounting)
Used by tests
|
private void |
rewriteCellTags(Map<byte[],List<Cell>> familyMap,
Mutation m)
Possibly rewrite incoming cell tags.
|
static boolean |
rowIsInRange(RegionInfo info,
byte[] row)
Determines if the specified row is within the row range specified by the specified RegionInfo
|
static boolean |
rowIsInRange(RegionInfo info,
byte[] row,
int offset,
short length) |
void |
setBlockCache(BlockCache blockCache)
Only used for unit test which doesn't start region server.
|
void |
setClosing(boolean closing)
Exposed for some very specific unit tests.
|
(package private) org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterStatusProtos.RegionLoad.Builder |
setCompleteSequenceId(org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterStatusProtos.RegionLoad.Builder regionLoadBldr) |
void |
setCoprocessorHost(RegionCoprocessorHost coprocessorHost) |
private void |
setHTableSpecificConf() |
void |
setMobFileCache(MobFileCache mobFileCache)
Only used for unit tests that don't start a region server.
|
void |
setReadsEnabled(boolean readsEnabled) |
void |
setRestoredRegion(boolean restoredRegion) |
void |
setTableDescriptor(TableDescriptor desc) |
void |
setTimeoutForWriteLock(long timeoutForWriteLock)
The
doClose(boolean, org.apache.hadoop.hbase.monitoring.MonitoredTask) method will block forever if a unit test tries to
provoke a deadlock; setting a timeout makes it throw instead. |
(package private) boolean |
shouldFlush(StringBuilder whyFlush)
Should the memstore be flushed now
|
(package private) boolean |
shouldFlushStore(HStore store)
Should the store be flushed because it is old enough.
|
private boolean |
shouldSyncWAL()
Check whether we should sync the wal from the table's durability settings
|
private void |
startBulkRegionOperation(boolean writeLockNeeded)
This method needs to be called before any public call that reads or modifies stores in bulk.
|
void |
startRegionOperation()
This method needs to be called before any public call that reads or modifies data.
|
void |
startRegionOperation(Region.Operation op)
This method needs to be called before any public call that reads or modifies data.
|
private void |
sync(long txid,
Durability durability)
Calls sync with the given transaction ID
|
(package private) void |
throwException(String title,
String regionName) |
(package private) IOException |
throwOnInterrupt(Throwable t)
Throw the correct exception upon interrupt
|
String |
toString() |
void |
unblockUpdates() |
private static void |
updateCellTimestamps(Iterable<List<Cell>> cellItr,
byte[] now)
Replace any cell timestamps set to
HConstants.LATEST_TIMESTAMP
with the provided current timestamp. |
private void |
updateDeleteLatestVersionTimestamp(Cell cell,
Get get,
int count,
byte[] byteNow) |
private void |
updateSequenceId(Iterable<List<Cell>> cellItr,
long sequenceId) |
void |
waitForFlushes()
Wait for all current flushes of the region to complete
|
boolean |
waitForFlushes(long timeout)
Wait for all current flushes of the region to complete
|
void |
waitForFlushesAndCompactions()
Wait for all current flushes and compactions of the region to complete
|
static void |
warmupHRegion(RegionInfo info,
TableDescriptor htd,
WAL wal,
org.apache.hadoop.conf.Configuration conf,
RegionServerServices rsServices,
CancelableProgressable reporter) |
private boolean |
worthPreFlushing()
Returns true if it's worth doing a flush before we put up the close flag.
|
private boolean |
writeFlushRequestMarkerToWAL(WAL wal,
boolean writeFlushWalMarker)
Writes a marker to the WAL indicating that a flush is requested but cannot be completed
for various reasons.
|
private void |
writeRegionCloseMarker(WAL wal) |
protected void |
writeRegionOpenMarker(WAL wal,
long openSeqId) |
clone, finalize, getClass, notify, notifyAll, wait, wait, wait
checkAndMutate, checkAndMutate, checkAndRowMutate, checkAndRowMutate
private static final org.slf4j.Logger LOG
public static final String LOAD_CFS_ON_DEMAND_CONFIG_KEY
public static final String HBASE_MAX_CELL_SIZE_KEY
public static final int DEFAULT_MAX_CELL_SIZE
public static final String HBASE_REGIONSERVER_MINIBATCH_SIZE
public static final int DEFAULT_HBASE_REGIONSERVER_MINIBATCH_SIZE
public static final String WAL_HSYNC_CONF_KEY
public static final boolean DEFAULT_WAL_HSYNC
public static final String COMPACTION_AFTER_BULKLOAD_ENABLE
public static final String SPLIT_IGNORE_BLOCKING_ENABLED_KEY
public static final String REGION_STORAGE_POLICY_KEY
public static final String DEFAULT_REGION_STORAGE_POLICY
public static final String SPECIAL_RECOVERED_EDITS_DIR
public static final String USE_META_CELL_COMPARATOR
Whether to use
MetaCellComparator
even if we are not the meta region. Used when creating the master local region.
public static final boolean DEFAULT_USE_META_CELL_COMPARATOR
final AtomicBoolean closed
final AtomicBoolean closing
private volatile long maxFlushedSeqId
private volatile long lastFlushOpSeqId
Record the sequence id of the last flush operation. Can be in advance of
maxFlushedSeqId
when flushing a single column family; in this case, maxFlushedSeqId will be older than
the oldest edit in memory.
protected volatile long lastReplayedOpenRegionSeqId
protected volatile long lastReplayedCompactionSeqId
private final ConcurrentHashMap<HashedBytes,HRegion.RowLockContext> lockedRows
private Map<String,com.google.protobuf.Service> coprocessorServiceHandlers
private final MemStoreSizing memStoreSizing
RegionServicesForStores regionServicesForStores
final LongAdder numMutationsWithoutWAL
final LongAdder dataInMemoryWithoutWAL
final LongAdder checkAndMutateChecksPassed
final LongAdder checkAndMutateChecksFailed
final LongAdder readRequestsCount
final LongAdder filteredReadRequestsCount
final LongAdder writeRequestsCount
private final LongAdder blockedRequestsCount
final LongAdder compactionsFinished
final LongAdder compactionsFailed
final LongAdder compactionNumFilesCompacted
final LongAdder compactionNumBytesCompacted
final LongAdder compactionsQueued
final LongAdder flushesQueued
private BlockCache blockCache
private MobFileCache mobFileCache
private final HRegionFileSystem fs
protected final org.apache.hadoop.conf.Configuration conf
private final org.apache.hadoop.conf.Configuration baseConf
private final int rowLockWaitDuration
static final int DEFAULT_ROWLOCK_WAIT_DURATION
private org.apache.hadoop.fs.Path regionWalDir
private org.apache.hadoop.fs.FileSystem walFS
private boolean isRestoredRegion
final long busyWaitDuration
static final long DEFAULT_BUSY_WAIT_DURATION
final int maxBusyWaitMultiplier
final long maxBusyWaitDuration
final long maxCellSize
private final int miniBatchSize
static final long DEFAULT_ROW_PROCESSOR_TIMEOUT
final ExecutorService rowProcessorExecutor
final ConcurrentHashMap<RegionScanner,Long> scannerReadPoints
private long openSeqNum
private boolean isLoadingCfsOnDemandDefault
private final AtomicInteger majorInProgress
private final AtomicInteger minorInProgress
Map<byte[],Long> maxSeqIdInStores
private HRegion.PrepareFlushResult prepareFlushResult
private volatile ConfigurationManager configurationManager
private volatile Long timeoutForWriteLock
private final CellComparator cellComparator
final HRegion.WriteState writestate
long memstoreFlushSize
final long timestampSlop
final long rowProcessorTimeout
private final ConcurrentMap<HStore,Long> lastStoreFlushTimeMap
protected RegionServerServices rsServices
private RegionServerAccounting rsAccounting
private long flushCheckInterval
private long flushPerChanges
private long blockingMemStoreSize
final ReentrantReadWriteLock lock
final ConcurrentHashMap<Thread,Boolean> regionLockHolders
private final ReentrantReadWriteLock updatesLock
private final MultiVersionConcurrencyControl mvcc
private volatile RegionCoprocessorHost coprocessorHost
private TableDescriptor htableDescriptor
private RegionSplitPolicy splitPolicy
private RegionSplitRestriction splitRestriction
private FlushPolicy flushPolicy
private final MetricsRegion metricsRegion
private final MetricsRegionWrapperImpl metricsRegionWrapper
private final Durability regionDurability
private final boolean regionStatsEnabled
private final NavigableMap<byte[],Integer> replicationScope
private final StoreHotnessProtector storeHotnessProtector
public static final String FAIR_REENTRANT_CLOSE_LOCK
public static final boolean DEFAULT_FAIR_REENTRANT_CLOSE_LOCK
public static final String MEMSTORE_PERIODIC_FLUSH_INTERVAL
public static final int DEFAULT_CACHE_FLUSH_INTERVAL
public static final int SYSTEM_CACHE_FLUSH_INTERVAL
public static final String MEMSTORE_FLUSH_PER_CHANGES
public static final long DEFAULT_FLUSH_PER_CHANGES
public static final long MAX_FLUSH_PER_CHANGES
public static final String CLOSE_WAIT_ABORT
public static final boolean DEFAULT_CLOSE_WAIT_ABORT
public static final String CLOSE_WAIT_TIME
public static final long DEFAULT_CLOSE_WAIT_TIME
public static final String CLOSE_WAIT_INTERVAL
public static final long DEFAULT_CLOSE_WAIT_INTERVAL
public static final long FIXED_OVERHEAD
public static final long DEEP_OVERHEAD
@Deprecated public HRegion(org.apache.hadoop.fs.Path tableDir, WAL wal, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.conf.Configuration confParam, RegionInfo regionInfo, TableDescriptor htd, RegionServerServices rsServices)
Use the
createHRegion(org.apache.hadoop.hbase.client.RegionInfo, org.apache.hadoop.fs.Path, org.apache.hadoop.conf.Configuration, org.apache.hadoop.hbase.client.TableDescriptor, org.apache.hadoop.hbase.wal.WAL, boolean)
or
openHRegion(org.apache.hadoop.hbase.client.RegionInfo, org.apache.hadoop.hbase.client.TableDescriptor, org.apache.hadoop.hbase.wal.WAL, org.apache.hadoop.conf.Configuration)
method instead.
tableDir - qualified path of the directory where the region should be located, usually
the table directory.
wal - The WAL is the outbound log for any updates to the HRegion. The wal file is a
logfile from the previous execution that's custom-computed for this HRegion. The
HRegionServer computes and sorts the appropriate wal info for this HRegion. If there is
a previous wal file (implying that the HRegion has been written to before), then read
it from the supplied path.
fs - the filesystem.
confParam - global configuration settings.
regionInfo - RegionInfo that describes the region.
htd - the table descriptor.
rsServices - reference to RegionServerServices or null.
public HRegion(HRegionFileSystem fs, WAL wal, org.apache.hadoop.conf.Configuration confParam, TableDescriptor htd, RegionServerServices rsServices)
Use the
createHRegion(org.apache.hadoop.hbase.client.RegionInfo, org.apache.hadoop.fs.Path, org.apache.hadoop.conf.Configuration, org.apache.hadoop.hbase.client.TableDescriptor, org.apache.hadoop.hbase.wal.WAL, boolean)
or
openHRegion(org.apache.hadoop.hbase.client.RegionInfo, org.apache.hadoop.hbase.client.TableDescriptor, org.apache.hadoop.hbase.wal.WAL, org.apache.hadoop.conf.Configuration)
method instead.
fs - the filesystem.
wal - The WAL is the outbound log for any updates to the HRegion. The wal file is a
logfile from the previous execution that's custom-computed for this HRegion. The
HRegionServer computes and sorts the appropriate wal info for this HRegion. If there is
a previous wal file (implying that the HRegion has been written to before), then read
it from the supplied path.
confParam - global configuration settings.
htd - the table descriptor.
rsServices - reference to RegionServerServices or null.
public void setRestoredRegion(boolean restoredRegion)
public long getSmallestReadPoint()
private void setHTableSpecificConf()
@Deprecated public long initialize() throws IOException
IOException - e
long initialize(CancelableProgressable reporter) throws IOException
reporter - Tickle every so often if initialize is taking a while.
IOException
private long initializeRegionInternals(CancelableProgressable reporter, MonitoredTask status) throws IOException
IOException
private long initializeStores(CancelableProgressable reporter, MonitoredTask status) throws IOException
IOException
private long initializeStores(CancelableProgressable reporter, MonitoredTask status, boolean warmup) throws IOException
IOException
private void initializeWarmup(CancelableProgressable reporter) throws IOException
IOException
private NavigableMap<byte[],List<org.apache.hadoop.fs.Path>> getStoreFiles()
protected void writeRegionOpenMarker(WAL wal, long openSeqId) throws IOException
IOException
private void writeRegionCloseMarker(WAL wal) throws IOException
IOException
public boolean hasReferences()
public void blockUpdates()
public void unblockUpdates()
public HDFSBlocksDistribution getHDFSBlocksDistribution()
public static HDFSBlocksDistribution computeHDFSBlocksDistribution(org.apache.hadoop.conf.Configuration conf, TableDescriptor tableDescriptor, RegionInfo regionInfo) throws IOException
conf - configuration
tableDescriptor - TableDescriptor of the table
regionInfo - encoded name of the region
IOException
public static HDFSBlocksDistribution computeHDFSBlocksDistribution(org.apache.hadoop.conf.Configuration conf, TableDescriptor tableDescriptor, RegionInfo regionInfo, org.apache.hadoop.fs.Path tablePath) throws IOException
conf - configuration
tableDescriptor - TableDescriptor of the table
regionInfo - encoded name of the region
tablePath - the table directory
IOException
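At its core, the distribution these methods compute is a per-host aggregation of the bytes of the region's store-file blocks, from which a locality index can be derived. The sketch below is a toy model with illustrative names (`aggregate`, `localityIndex`, host strings), not the real HDFSBlocksDistribution API:

```java
import java.util.*;

// Toy model of an HDFS blocks distribution: per-host byte weights
// aggregated over a region's store-file blocks (illustrative sketch,
// not the real HDFSBlocksDistribution class).
public class BlocksDistributionSketch {
    // Each entry: block size in bytes -> hosts holding a replica of that block.
    static Map<String, Long> aggregate(List<Map.Entry<Long, List<String>>> blocks) {
        Map<String, Long> hostWeights = new HashMap<>();
        for (Map.Entry<Long, List<String>> block : blocks) {
            for (String host : block.getValue()) {
                // Every replica adds the block's size to that host's weight.
                hostWeights.merge(host, block.getKey(), Long::sum);
            }
        }
        return hostWeights;
    }

    // Locality index for a host: bytes local to that host / total bytes.
    static float localityIndex(Map<String, Long> weights, long totalBytes, String host) {
        return totalBytes == 0 ? 0f : (float) weights.getOrDefault(host, 0L) / totalBytes;
    }

    public static void main(String[] args) {
        List<Map.Entry<Long, List<String>>> blocks = List.of(
            Map.entry(128L, List.of("rs1", "rs2")),
            Map.entry(64L, List.of("rs1", "rs3")));
        Map<String, Long> w = aggregate(blocks);
        // rs1 holds a replica of every block, so its locality index is 1.0.
        System.out.println(w.get("rs1") + " " + localityIndex(w, 192L, "rs1"));
    }
}
```

A balancer can use such per-host weights to prefer assigning the region to the host with the highest locality index.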
private void incMemStoreSize(MemStoreSize mss)
void incMemStoreSize(long dataSizeDelta, long heapSizeDelta, long offHeapSizeDelta, int cellsCountDelta)
void decrMemStoreSize(MemStoreSize mss)
private void decrMemStoreSize(long dataSizeDelta, long heapSizeDelta, long offHeapSizeDelta, int cellsCountDelta)
private void checkNegativeMemStoreDataSize(long memStoreDataSize, long delta)
public RegionInfo getRegionInfo()
Region
getRegionInfo
in interface Region
RegionServerServices getRegionServerServices()
Returns the RegionServerServices used by this HRegion; can be null.
public long getReadRequestsCount()
Region
getReadRequestsCount
in interface Region
public long getFilteredReadRequestsCount()
Region
getFilteredReadRequestsCount
in interface Region
public long getWriteRequestsCount()
Region
getWriteRequestsCount
in interface Region
public long getMemStoreDataSize()
getMemStoreDataSize
in interface Region
public long getMemStoreHeapSize()
getMemStoreHeapSize
in interface Region
public long getMemStoreOffHeapSize()
getMemStoreOffHeapSize
in interface Region
public RegionServicesForStores getRegionServicesForStores()
public long getNumMutationsWithoutWAL()
Region
getNumMutationsWithoutWAL
in interface Region
public long getDataInMemoryWithoutWAL()
Region
getDataInMemoryWithoutWAL
in interface Region
public long getBlockedRequestsCount()
Region
getBlockedRequestsCount
in interface Region
public long getCheckAndMutateChecksPassed()
Region
getCheckAndMutateChecksPassed
in interface Region
public long getCheckAndMutateChecksFailed()
Region
getCheckAndMutateChecksFailed
in interface Region
public MetricsRegion getMetrics()
public boolean isClosed()
Region
public boolean isClosing()
Region
public boolean isReadOnly()
Region
isReadOnly
in interface Region
public boolean isAvailable()
Region
isAvailable
in interface Region
public boolean isSplittable()
Region
isSplittable
in interface Region
public boolean isMergeable()
Region
isMergeable
in interface Region
public boolean areWritesEnabled()
public MultiVersionConcurrencyControl getMVCC()
public long getMaxFlushedSeqId()
getMaxFlushedSeqId
in interface Region
public long getReadPoint(IsolationLevel isolationLevel)
isolationLevel - null for default
public boolean isLoadingCfsOnDemandDefault()
public Map<byte[],List<HStoreFile>> close() throws IOException
This method could take some time to execute, so don't call it from a time-sensitive thread.
IOException - e
DroppedSnapshotException - Thrown when replay of wal is required because a Snapshot was
not properly persisted. The region is put in closing mode, and the caller MUST abort
after this.
public Map<byte[],List<HStoreFile>> close(boolean abort) throws IOException
abort - true if server is aborting (only during testing)
IOException - e
DroppedSnapshotException - Thrown when replay of wal is required because a Snapshot was
not properly persisted. The region is put in closing mode, and the caller MUST abort
after this.
public void setClosing(boolean closing)
public void setTimeoutForWriteLock(long timeoutForWriteLock)
Without a timeout,
doClose(boolean, org.apache.hadoop.hbase.monitoring.MonitoredTask)
will block forever if a unit test tries to provoke a deadlock. With a timeout set,
doClose(boolean, org.apache.hadoop.hbase.monitoring.MonitoredTask)
will throw an exception instead of blocking.
timeoutForWriteLock - the time, in seconds, to wait for the write lock in
doClose(boolean, org.apache.hadoop.hbase.monitoring.MonitoredTask)
private Map<byte[],List<HStoreFile>> doClose(boolean abort, MonitoredTask status) throws IOException
IOException
public void waitForFlushesAndCompactions()
public void waitForFlushes()
public boolean waitForFlushes(long timeout)
Region
waitForFlushes
in interface Region
timeout - The maximum time to wait in milliseconds.
public org.apache.hadoop.conf.Configuration getReadOnlyConfiguration()
getReadOnlyConfiguration
in interface Region
UnsupportedOperationException - if you try to set a configuration.
private ThreadPoolExecutor getStoreOpenAndCloseThreadPool(String threadNamePrefix)
ThreadPoolExecutor getStoreFileOpenAndCloseThreadPool(String threadNamePrefix)
private static ThreadPoolExecutor getOpenAndCloseThreadPool(int maxThreads, String threadNamePrefix)
private boolean worthPreFlushing()
public TableDescriptor getTableDescriptor()
Region
getTableDescriptor
in interface Region
public void setTableDescriptor(TableDescriptor desc)
public BlockCache getBlockCache()
public void setBlockCache(BlockCache blockCache)
public MobFileCache getMobFileCache()
public void setMobFileCache(MobFileCache mobFileCache)
RegionSplitPolicy getSplitPolicy()
org.apache.hadoop.conf.Configuration getBaseConf()
public org.apache.hadoop.fs.FileSystem getFilesystem()
Returns the FileSystem being used by this region.
public HRegionFileSystem getRegionFileSystem()
Returns the HRegionFileSystem used by this region.
HRegionWALFileSystem getRegionWALFileSystem() throws IOException
Returns the HRegionFileSystem used by this region for the WAL.
IOException
org.apache.hadoop.fs.FileSystem getWalFileSystem() throws IOException
Returns the FileSystem being used by this region for the WAL.
IOException
public org.apache.hadoop.fs.Path getWALRegionDir() throws IOException
IOException - if there is an error getting WALRootDir
public long getEarliestFlushTimeForAllStores()
getEarliestFlushTimeForAllStores
in interface Region
public long getOldestHfileTs(boolean majorCompactionOnly) throws IOException
Region
getOldestHfileTs
in interface Region
majorCompactionOnly - Only consider HFiles that are the result of a major compaction
IOException
org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterStatusProtos.RegionLoad.Builder setCompleteSequenceId(org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterStatusProtos.RegionLoad.Builder regionLoadBldr)
protected void doRegionCompactionPrep() throws IOException
IOException
public void compact(boolean majorCompaction) throws IOException
This operation could block for a long time, so don't call it from a time-sensitive thread.
Note that no locks are taken to prevent possible conflicts between compaction and splitting activities. The regionserver does not normally compact and split in parallel. However by calling this method you may introduce unexpected and unhandled concurrency. Don't do this unless you know what you are doing.
majorCompaction - True to force a major compaction regardless of thresholds
IOException
public void compactStores() throws IOException
It is used by utilities and testing
IOException
void compactStore(byte[] family, ThroughputController throughputController) throws IOException
It is used by utilities and testing
IOException
public boolean compact(CompactionContext compaction, HStore store, ThroughputController throughputController) throws IOException
This operation could block for a long time, so don't call it from a time-sensitive thread. Note that no locking is necessary at this level because compaction only conflicts with a region split, and that cannot happen because the region server does them sequentially and not in parallel.
compaction - Compaction details, obtained by requestCompaction()
Returns whether the compaction completed.
IOException
public boolean compact(CompactionContext compaction, HStore store, ThroughputController throughputController, User user) throws IOException
IOException
public HRegion.FlushResult flush(boolean flushAllStores) throws IOException
When this method is called the cache will be flushed unless: the cache is empty, the region is closed, a flush is already in progress, or writes are disabled.
This method may block for some time, so it should not be called from a time-sensitive thread.
flushAllStores - whether we want to force a flush of all stores
IOException - general io exceptions because a snapshot was not properly persisted.
HRegion.FlushResultImpl flushcache(boolean flushAllStores, boolean writeFlushRequestWalMarker, FlushLifeCycleTracker tracker) throws IOException
IOException
public HRegion.FlushResultImpl flushcache(List<byte[]> families, boolean writeFlushRequestWalMarker, FlushLifeCycleTracker tracker) throws IOException
This method may block for some time, so it should not be called from a time-sensitive thread.
families - stores of region to flush.
writeFlushRequestWalMarker - whether to write the flush request marker to WAL
tracker - used to track the life cycle of this flush
IOException - general io exceptions
DroppedSnapshotException - Thrown when replay of wal is required because a Snapshot was
not properly persisted. The region is put in closing mode, and the caller MUST abort
after this.
private Collection<HStore> getSpecificStores(List<byte[]> families)
boolean shouldFlushStore(HStore store)
Every FlushPolicy should call this to determine whether a store is old enough to flush (unless the policy always flushes all stores); otherwise it will always return true and generate a lot of flush requests.
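The age check can be sketched as follows. This is an illustrative model with hypothetical parameter names (the real check consults the store's tracked flush time and the configured periodic flush interval), not the actual implementation:

```java
// Hypothetical sketch of an age-based flush check: a store should be
// flushed when it holds unflushed edits and its last flush is older
// than the configured periodic flush interval.
public class FlushAgeCheck {
    static boolean shouldFlushStore(long lastFlushTimeMs, long unflushedEntries,
                                    long nowMs, long flushIntervalMs) {
        if (unflushedEntries == 0) {
            return false; // nothing in the memstore, never worth a flush
        }
        return nowMs - lastFlushTimeMs > flushIntervalMs;
    }

    public static void main(String[] args) {
        long now = 10_000_000L;
        // Old enough: last flushed well beyond the interval.
        System.out.println(shouldFlushStore(now - 4_000_000L, 42, now, 3_600_000L));
        // Too young: flushed a second ago.
        System.out.println(shouldFlushStore(now - 1_000L, 42, now, 3_600_000L));
    }
}
```

A FlushPolicy that skipped this check would request a flush on every evaluation, which is exactly the "lot of flush requests" the description warns about.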
boolean shouldFlush(StringBuilder whyFlush)
private HRegion.FlushResult internalFlushcache(MonitoredTask status) throws IOException
private HRegion.FlushResultImpl internalFlushcache(Collection<HStore> storesToFlush, MonitoredTask status, boolean writeFlushWalMarker, FlushLifeCycleTracker tracker) throws IOException
protected HRegion.FlushResultImpl internalFlushcache(WAL wal, long myseqid, Collection<HStore> storesToFlush, MonitoredTask status, boolean writeFlushWalMarker, FlushLifeCycleTracker tracker) throws IOException
This method may block for some time. Every time you call it, we up the regions sequence id even if we don't flush; i.e. the returned region id will be at least one larger than the last edit applied to this region. The returned id does not refer to an actual edit. The returned id can be used for say installing a bulk loaded file just ahead of the last hfile that was the result of this flush, etc.
wal - Null if we're NOT to go via wal.
myseqid - The seqid to use if wal is null when writing out the flush file.
storesToFlush - The list of stores to flush.
IOException - general io exceptions
DroppedSnapshotException - Thrown when replay of WAL is required.
protected HRegion.PrepareFlushResult internalPrepareFlushCache(WAL wal, long myseqid, Collection<HStore> storesToFlush, MonitoredTask status, boolean writeFlushWalMarker, FlushLifeCycleTracker tracker) throws IOException
IOException
private void logFatLineOnFlush(Collection<HStore> storesToFlush, long sequenceId)
private void doAbortFlushToWAL(WAL wal, long flushOpSeqId, Map<byte[],List<org.apache.hadoop.fs.Path>> committedFiles)
private static void doSyncOfUnflushedWALChanges(WAL wal, RegionInfo hri) throws IOException
IOException
private boolean isAllFamilies(Collection<HStore> families)
private boolean writeFlushRequestMarkerToWAL(WAL wal, boolean writeFlushWalMarker)
HRegion.FlushResultImpl internalFlushCacheAndCommit(WAL wal, MonitoredTask status, HRegion.PrepareFlushResult prepareResult, Collection<HStore> storesToFlush) throws IOException
IOException
protected long getNextSequenceId(WAL wal) throws IOException
IOException
public RegionScannerImpl getScanner(Scan scan) throws IOException
Region
Return an iterator that scans over the HRegion, returning the indicated columns and
rows specified by the
Scan
.
This Iterator must be closed by the caller.
getScanner
in interface Region
scan - configured Scan
IOException - read exceptions
public RegionScannerImpl getScanner(Scan scan, List<KeyValueScanner> additionalScanners) throws IOException
Region
Return an iterator that scans over the HRegion, returning the indicated columns and
rows specified by the
Scan
. The scanner will also include the additional scanners passed along with the scanners
for the specified Scan instance. Should be careful with the usage to pass additional
scanners only within this Region.
This Iterator must be closed by the caller.
getScanner
in interface Region
scan - configured Scan
additionalScanners - Any additional scanners to be used
IOException - read exceptions
private RegionScannerImpl getScanner(Scan scan, List<KeyValueScanner> additionalScanners, long nonceGroup, long nonce) throws IOException
IOException
protected RegionScannerImpl instantiateRegionScanner(Scan scan, List<KeyValueScanner> additionalScanners, long nonceGroup, long nonce) throws IOException
IOException
private void prepareDelete(Delete delete) throws IOException
delete - The passed delete is modified by this method. WARNING!
IOException
public void delete(Delete delete) throws IOException
Region
delete
in interface Region
IOException
private void prepareDeleteTimestamps(Mutation mutation, Map<byte[],List<Cell>> familyMap, byte[] byteNow) throws IOException
IOException
private void updateDeleteLatestVersionTimestamp(Cell cell, Get get, int count, byte[] byteNow) throws IOException
IOException
public void put(Put put) throws IOException
Region
put
in interface Region
IOException
public OperationStatus[] batchMutate(Mutation[] mutations, boolean atomic, long nonceGroup, long nonce) throws IOException
IOException
public OperationStatus[] batchMutate(Mutation[] mutations) throws IOException
Region
Please do not operate on the same column of a single row in a batch; we will not consider the previous operation in the same batch when performing the operations in the batch.
batchMutate
in interface Region
mutations - the list of mutations
IOException
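The same-column caveat can be illustrated with a toy model in plain Java (not the HBase API): each operation in the batch reads the state as it was before the batch started, so two "increments" of the same cell in one batch collapse into one.

```java
import java.util.*;

// Toy model of the batchMutate caveat: operations in one batch do not
// observe each other's writes, so repeated increments of the same cell
// in a single batch collapse (illustrative sketch, not HBase code).
public class BatchSameCellDemo {
    static long runBatchOfIncrements(long initialValue, int increments) {
        Map<String, Long> store = new HashMap<>();
        store.put("row1:cf:q", initialValue);

        // Snapshot taken once, before the batch: no op sees a sibling's write.
        Map<String, Long> snapshot = new HashMap<>(store);
        Map<String, Long> pending = new HashMap<>();
        for (int i = 0; i < increments; i++) {
            long base = snapshot.getOrDefault("row1:cf:q", 0L);
            pending.put("row1:cf:q", base + 1); // later writes overwrite earlier ones
        }
        store.putAll(pending); // the batch commits as a whole
        return store.get("row1:cf:q");
    }

    public static void main(String[] args) {
        // Two increments in one batch yield 2, not 3.
        System.out.println(runBatchOfIncrements(1L, 2));
    }
}
```

To make dependent updates to one cell, issue them as separate batches (or use an atomic operation such as Increment) so each sees the previous result.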
OperationStatus[] batchMutate(Mutation[] mutations, boolean atomic) throws IOException
IOException
public OperationStatus[] batchReplay(WALSplitUtil.MutationReplay[] mutations, long replaySeqId) throws IOException
IOException
private OperationStatus[] batchMutate(HRegion.BatchOperation<?> batchOp) throws IOException
Perform a batch of mutations. Operations in the batch are stored with the highest
durability specified for any operation in the batch, except for
Durability.SKIP_WAL
.
This function is called from
batchReplay(WALSplitUtil.MutationReplay[], long)
with an
HRegion.ReplayBatchOperation
instance and from
batchMutate(Mutation[])
with an
HRegion.MutationBatchOperation
instance as an argument. As the processing of a replay batch and a mutation batch is
very similar, a lot of code is shared by providing generic methods in the base class
HRegion.BatchOperation
. The logic for this method and
doMiniBatchMutate(BatchOperation)
is implemented using methods in the base class which are overridden by derived classes
to implement special behavior.
batchOp - contains the list of mutations
IOException - if an IO problem is encountered
private void doMiniBatchMutate(HRegion.BatchOperation<?> batchOp) throws IOException
Called to do a piece of the batch that came in to
batchMutate(Mutation[])
. In here we also handle replay of edits on region recover. Also gets the change in
size brought about by applying batchOp.
IOException
private Durability getEffectiveDurability(Durability d)
@Deprecated public boolean checkAndMutate(byte[] row, byte[] family, byte[] qualifier, CompareOperator op, ByteArrayComparable comparator, TimeRange timeRange, Mutation mutation) throws IOException
Region
checkAndMutate
in interface Region
row - to check
family - column family to check
qualifier - column qualifier to check
op - the comparison operator
comparator - the expected value
timeRange - time range to check
mutation - data to put if check succeeds
IOException
@Deprecated public boolean checkAndMutate(byte[] row, Filter filter, TimeRange timeRange, Mutation mutation) throws IOException
Region
checkAndMutate
in interface Region
row - to check
filter - the filter
timeRange - time range to check
mutation - data to put if check succeeds
IOException
@Deprecated public boolean checkAndRowMutate(byte[] row, byte[] family, byte[] qualifier, CompareOperator op, ByteArrayComparable comparator, TimeRange timeRange, RowMutations rm) throws IOException
Region
checkAndRowMutate
in interface Region
row - to check
family - column family to check
qualifier - column qualifier to check
op - the comparison operator
comparator - the expected value
timeRange - time range to check
rm - data to put if check succeeds
IOException
@Deprecated public boolean checkAndRowMutate(byte[] row, Filter filter, TimeRange timeRange, RowMutations rm) throws IOException
Region
checkAndRowMutate
in interface Region
row - to check
filter - the filter
timeRange - time range to check
rm - data to put if check succeeds
IOException
public CheckAndMutateResult checkAndMutate(CheckAndMutate checkAndMutate) throws IOException
Region
checkAndMutate
in interface Region
checkAndMutate - the CheckAndMutate object
IOException - if an error occurred in this method
public CheckAndMutateResult checkAndMutate(CheckAndMutate checkAndMutate, long nonceGroup, long nonce) throws IOException
IOException
private CheckAndMutateResult checkAndMutateInternal(CheckAndMutate checkAndMutate, long nonceGroup, long nonce) throws IOException
IOException
private void checkMutationType(Mutation mutation) throws DoNotRetryIOException
DoNotRetryIOException
private void checkRow(Row action, byte[] row) throws DoNotRetryIOException
DoNotRetryIOException
private boolean matches(CompareOperator op, int compareResult)
private OperationStatus mutate(Mutation mutation) throws IOException
IOException
private OperationStatus mutate(Mutation mutation, boolean atomic) throws IOException
IOException
private OperationStatus mutate(Mutation mutation, boolean atomic, long nonceGroup, long nonce) throws IOException
IOException
public void addRegionToSnapshot(org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotDescription desc, ForeignExceptionSnare exnSnare) throws IOException
TODO for api consistency, consider adding another version with no
ForeignExceptionSnare
arg. (In the future other cancellable HRegion methods could eventually add a
ForeignExceptionSnare
, or we could do something fancier).
desc - snapshot description object
exnSnare - ForeignExceptionSnare that captures external exceptions in case we need to
bail out. This is allowed to be null and will just be ignored in that case.
IOException - if there is an external or internal error causing the snapshot to fail
private void updateSequenceId(Iterable<List<Cell>> cellItr, long sequenceId) throws IOException
IOException
private static void updateCellTimestamps(Iterable<List<Cell>> cellItr, byte[] now) throws IOException
Replace any cell timestamps set to
HConstants.LATEST_TIMESTAMP
with the provided current timestamp.
IOException
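The substitution can be sketched with a toy cell model. The real method rewrites the timestamp bytes of Cell objects in place; here cells are reduced to bare timestamps, and the sentinel is Long.MAX_VALUE, the value HConstants.LATEST_TIMESTAMP uses:

```java
import java.util.*;

// Sketch of updateCellTimestamps over a toy cell model: only timestamps
// the client left at the LATEST_TIMESTAMP sentinel are replaced with the
// server's current time (the real code rewrites Cell byte buffers).
public class TimestampRewrite {
    static final long LATEST_TIMESTAMP = Long.MAX_VALUE; // same sentinel as HConstants

    static List<Long> rewrite(List<Long> cellTimestamps, long now) {
        List<Long> out = new ArrayList<>();
        for (long ts : cellTimestamps) {
            // Explicit client-supplied timestamps survive untouched.
            out.add(ts == LATEST_TIMESTAMP ? now : ts);
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(rewrite(List.of(Long.MAX_VALUE, 12345L), 99999L));
    }
}
```

This is why a Put without an explicit timestamp gets the region server's clock value, while a Put with one keeps it.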
private void rewriteCellTags(Map<byte[],List<Cell>> familyMap, Mutation m)
private void checkResources() throws RegionTooBusyException
RegionTooBusyException
private void checkReadOnly() throws IOException
IOException - Throws exception if region is in read-only mode.
private void checkReadsEnabled() throws IOException
IOException
public void setReadsEnabled(boolean readsEnabled)
private void applyToMemStore(HStore store, List<Cell> cells, boolean delta, MemStoreSizing memstoreAccounting) throws IOException
delta - If we are doing delta changes -- e.g. increment/append -- then this flag will
be set; when set we will run operations that make sense in the increment/append
scenario but that do not make sense otherwise.
IOException
applyToMemStore(HStore, Cell, MemStoreSizing)
private void applyToMemStore(HStore store, Cell cell, MemStoreSizing memstoreAccounting) throws IOException
IOException
applyToMemStore(HStore, List, boolean, MemStoreSizing)
public void checkFamilies(Collection<byte[]> families) throws NoSuchColumnFamilyException
NoSuchColumnFamilyException
public void checkTimestamps(Map<byte[],List<Cell>> familyMap, long now) throws FailedSanityCheckException
FailedSanityCheckException
private boolean isFlushSize(MemStoreSize size)
private void deleteRecoveredEdits(org.apache.hadoop.fs.FileSystem fs, Iterable<org.apache.hadoop.fs.Path> files) throws IOException
IOException
long replayRecoveredEditsIfAny(Map<byte[],Long> maxSeqIdInStores, CancelableProgressable reporter, MonitoredTask status) throws IOException
We can ignore any wal message that has a sequence ID that's equal to or lower than minSeqId. (Because we know such messages are already reflected in the HFiles.)
While this is running we are putting pressure on memory, yet we are outside our usual accounting, because this region is not yet online (this code runs as part of region initialization). This means that if we are up against global memory limits, we will not be flagged to flush, because we are not online. We cannot be flushed by the usual mechanisms anyway: our relative sequence ids are not yet aligned with WAL sequence ids, and will not be until we come online, after processing of split edits.
To help relieve memory pressure, we at least manage our own heap, flushing if we exceed per-region limits. When flushing, though, we must be careful to avoid using the regionserver/WAL sequence id: it advances on a different track from what is happening in this region context, so if we crashed while replaying these edits but had meanwhile flushed using a regionserver WAL sequence id ahead of this region's split edit logs, we could miss edits on the next recovery. So we must flush inline, using sequence ids that make sense only in this single-region context, until we come online.
maxSeqIdInStores
- Any edit found in the split edit logs must be in excess of the maxSeqId for its
store to be applied; otherwise it is skipped.
Returns minSeqId if nothing was added from the edit logs.
IOException
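The per-store filtering described above can be sketched standalone: an edit is replayed only if its sequence id exceeds the maximum already persisted for its store. This is a minimal illustration with hypothetical names, not the HRegion implementation.

```java
import java.util.HashMap;
import java.util.Map;

public class EditFilterSketch {
    // Returns true if an edit with the given sequence id should be replayed
    // into the named store, i.e. it is newer than what the store's HFiles
    // already contain.
    static boolean shouldReplay(Map<String, Long> maxSeqIdInStores,
                                String store, long editSeqId) {
        Long maxSeqId = maxSeqIdInStores.get(store);
        // Unknown store: nothing persisted yet, so replay the edit.
        if (maxSeqId == null) {
            return true;
        }
        // Edits at or below the store's max are already in the HFiles; skip.
        return editSeqId > maxSeqId;
    }

    public static void main(String[] args) {
        Map<String, Long> maxSeqIds = new HashMap<>();
        maxSeqIds.put("cf1", 100L);
        System.out.println(shouldReplay(maxSeqIds, "cf1", 100L)); // false
        System.out.println(shouldReplay(maxSeqIds, "cf1", 101L)); // true
    }
}
```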
private long replayRecoveredEditsForPaths(long minSeqIdForTheRegion, org.apache.hadoop.fs.FileSystem fs, NavigableSet<org.apache.hadoop.fs.Path> files, CancelableProgressable reporter, org.apache.hadoop.fs.Path regionDir) throws IOException
IOException
private void handleException(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path edits, IOException e) throws IOException
IOException
private long replayRecoveredEdits(org.apache.hadoop.fs.Path edits, Map<byte[],Long> maxSeqIdInStores, CancelableProgressable reporter, org.apache.hadoop.fs.FileSystem fs) throws IOException
edits
- File of recovered edits.
maxSeqIdInStores
- Maximum sequenceid found in each store. Edits in the wal must be larger
than this to be replayed for each store.
Returns minSeqId if nothing was added from the edit logs.
IOException
void replayWALCompactionMarker(org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.CompactionDescriptor compaction, boolean pickCompactionFiles, boolean removeFiles, long replaySeqId) throws IOException
IOException
void replayWALFlushMarker(org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.FlushDescriptor flush, long replaySeqId) throws IOException
IOException
HRegion.PrepareFlushResult replayWALFlushStartMarker(org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.FlushDescriptor flush) throws IOException
IOException
void replayWALFlushCommitMarker(org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.FlushDescriptor flush) throws IOException
IOException
private void replayFlushInStores(org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.FlushDescriptor flush, HRegion.PrepareFlushResult prepareFlushResult, boolean dropMemstoreSnapshot) throws IOException
IOException
private long loadRecoveredHFilesIfAny(Collection<HStore> stores) throws IOException
IOException
private MemStoreSize dropMemStoreContents() throws IOException
IOException
private MemStoreSize dropMemStoreContentsForSeqId(long seqId, HStore store) throws IOException
IOException
private MemStoreSize doDropStoreMemStoreContentsForSeqId(HStore s, long currentSeqId) throws IOException
IOException
private void replayWALFlushAbortMarker(org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.FlushDescriptor flush)
private void replayWALFlushCannotFlushMarker(org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.FlushDescriptor flush, long replaySeqId)
HRegion.PrepareFlushResult getPrepareFlushResult()
void replayWALRegionEventMarker(org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.RegionEventDescriptor regionEvent) throws IOException
IOException
void replayWALBulkLoadEventMarker(org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.BulkLoadDescriptor bulkLoadEvent) throws IOException
IOException
private void dropPrepareFlushIfPossible()
public boolean refreshStoreFiles() throws IOException
Region
refreshStoreFiles
in interface Region
IOException
protected boolean refreshStoreFiles(boolean force) throws IOException
IOException
private void logRegionFiles()
private void checkTargetRegion(byte[] encodedRegionName, String exceptionMsg, Object payload) throws WrongRegionException
WrongRegionException
protected void restoreEdit(HStore s, Cell cell, MemStoreSizing memstoreAccounting)
s
- Store to add edit to.
cell
- Cell to add.
private static boolean isZeroLengthThenDelete(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.FileStatus stat, org.apache.hadoop.fs.Path p) throws IOException
p
- File to check.
IOException
protected HStore instantiateHStore(ColumnFamilyDescriptor family, boolean warmup) throws IOException
IOException
public HStore getStore(byte[] column)
Region
Use with caution. Exposed for use of fixup utilities.
private HStore getStore(Cell cell)
public List<HStore> getStores()
Region
Use with caution. Exposed for use of fixup utilities.
public List<String> getStoreFileList(byte[][] columns) throws IllegalArgumentException
Region
getStoreFileList
in interface Region
IllegalArgumentException
void checkRow(byte[] row, String op) throws IOException
IOException
public Region.RowLock getRowLock(byte[] row) throws IOException
row
- Which row to lock.
IOException
public Region.RowLock getRowLock(byte[] row, boolean readLock) throws IOException
Region
The obtained locks should be released after use by Region.RowLock.release()
NOTE: the meaning of the boolean passed here has changed. It used to state whether or not to wait on the lock; now it states whether an exclusive lock is requested.
getRowLock
in interface Region
row
- The row actions will be performed against
readLock
- whether the lock is for reading or writing. True indicates that a non-exclusive
lock is requested.
IOException
Region.startRegionOperation()
,
Region.startRegionOperation(Operation)
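The shared-versus-exclusive semantics above can be illustrated with a plain ReentrantReadWriteLock. This is a standalone sketch, not the HRegion implementation (which maintains a per-row lock table): multiple readers may hold a row lock concurrently, while an exclusive request fails until they release.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RowLockSketch {
    final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // readLock == true asks for a shared (non-exclusive) lock, mirroring
    // getRowLock(row, true); false asks for an exclusive lock.
    boolean tryRowLock(boolean readLock) {
        return readLock ? lock.readLock().tryLock()
                        : lock.writeLock().tryLock();
    }

    public static void main(String[] args) {
        RowLockSketch r = new RowLockSketch();
        // Two shared holders coexist...
        System.out.println(r.tryRowLock(true) && r.tryRowLock(true)); // true
        // ...but an exclusive request fails while readers hold the lock.
        System.out.println(r.tryRowLock(false)); // false
    }
}
```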
io.opentelemetry.api.trace.Span createRegionSpan(String name)
protected Region.RowLock getRowLockInternal(byte[] row, boolean readLock, Region.RowLock prevRowLock) throws IOException
IOException
private Region.RowLock getRowLock(byte[] row, boolean readLock, Region.RowLock prevRowLock) throws IOException
IOException
private void releaseRowLocks(List<Region.RowLock> rowLocks)
public int getReadLockCount()
public ConcurrentHashMap<HashedBytes,HRegion.RowLockContext> getLockedRows()
private static boolean hasMultipleColumnFamilies(Collection<Pair<byte[],String>> familyPaths)
familyPaths
- List of (column family, hfilePath)
public Map<byte[],List<org.apache.hadoop.fs.Path>> bulkLoadHFiles(Collection<Pair<byte[],String>> familyPaths, boolean assignSeqId, HRegion.BulkLoadListener bulkLoadListener) throws IOException
familyPaths
- List of Pair<byte[] column family, String hfilePath>
bulkLoadListener
- Internal hooks enabling massaging/preparation of a file about to be bulk loaded.
Returns a map from family to list of store file paths if successful, null if failed recoverably.
IOException
- if failed unrecoverably.
public Map<byte[],List<org.apache.hadoop.fs.Path>> bulkLoadHFiles(Collection<Pair<byte[],String>> familyPaths, boolean assignSeqId, HRegion.BulkLoadListener bulkLoadListener, boolean copyFile, List<String> clusterIds, boolean replicate) throws IOException
familyPaths
- List of Pair<byte[] column family, String hfilePath>
bulkLoadListener
- Internal hooks enabling massaging/preparation of a file about to be bulk loaded.
copyFile
- always copy hfiles if true
clusterIds
- ids from clusters that have already handled the given bulkload event.
IOException
- if failed unrecoverably.
public static HRegion newHRegion(org.apache.hadoop.fs.Path tableDir, WAL wal, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.conf.Configuration conf, RegionInfo regionInfo, TableDescriptor htd, RegionServerServices rsServices)
HConstants.REGION_IMPL
configuration property.
tableDir
- qualified path of directory where region should be located, usually the table
directory.
wal
- The WAL is the outbound log for any updates to the HRegion. The wal file is a
logfile from the previous execution that's custom-computed for this HRegion.
The HRegionServer computes and sorts the appropriate wal info for this
HRegion. If there is a previous file (implying that the HRegion has been
written to before), then read it from the supplied path.
fs
- is the filesystem.
conf
- is global configuration settings.
regionInfo
- RegionInfo that describes the region. If new, then read it from the supplied path.
htd
- the table descriptor
public static HRegion createHRegion(RegionInfo info, org.apache.hadoop.fs.Path rootDir, org.apache.hadoop.conf.Configuration conf, TableDescriptor hTableDescriptor, WAL wal, boolean initialize) throws IOException
info
- Info for region to create.
rootDir
- Root directory for HBase instance
wal
- shared WAL
initialize
- true to initialize the region
IOException
public static HRegion createHRegion(RegionInfo info, org.apache.hadoop.fs.Path rootDir, org.apache.hadoop.conf.Configuration conf, TableDescriptor hTableDescriptor, WAL wal, boolean initialize, RegionServerServices rsRpcServices) throws IOException
info
- Info for region to create.
rootDir
- Root directory for HBase instance
wal
- shared WAL
initialize
- true to initialize the region
rsRpcServices
- An interface we can request flushes against.
IOException
public static HRegion createHRegion(org.apache.hadoop.conf.Configuration conf, RegionInfo regionInfo, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path tableDir, TableDescriptor tableDesc) throws IOException
IOException
public static HRegionFileSystem createRegionDir(org.apache.hadoop.conf.Configuration configuration, RegionInfo ri, org.apache.hadoop.fs.Path rootDir) throws IOException
IOException
public static HRegion createHRegion(RegionInfo info, org.apache.hadoop.fs.Path rootDir, org.apache.hadoop.conf.Configuration conf, TableDescriptor hTableDescriptor, WAL wal) throws IOException
IOException
public static HRegion openHRegion(RegionInfo info, TableDescriptor htd, WAL wal, org.apache.hadoop.conf.Configuration conf) throws IOException
info
- Info for region to be opened.
wal
- WAL for region to use. This method will call WAL#setSequenceNumber(long) passing
the result of the call to HRegion#getMinSequenceId() to ensure the wal id is
properly kept up. HRegionStore does this every time it opens a new region.
IOException
public static HRegion openHRegion(RegionInfo info, TableDescriptor htd, WAL wal, org.apache.hadoop.conf.Configuration conf, RegionServerServices rsServices, CancelableProgressable reporter) throws IOException
info
- Info for region to be opened
htd
- the table descriptor
wal
- WAL for region to use. This method will call WAL#setSequenceNumber(long)
passing the result of the call to HRegion#getMinSequenceId() to ensure the
wal id is properly kept up. HRegionStore does this every time it opens a new
region.
conf
- The Configuration object to use.
rsServices
- An interface we can request flushes against.
reporter
- An interface we can report progress against.
IOException
public static HRegion openHRegion(org.apache.hadoop.fs.Path rootDir, RegionInfo info, TableDescriptor htd, WAL wal, org.apache.hadoop.conf.Configuration conf) throws IOException
rootDir
- Root directory for HBase instance
info
- Info for region to be opened.
htd
- the table descriptor
wal
- WAL for region to use. This method will call WAL#setSequenceNumber(long) passing
the result of the call to HRegion#getMinSequenceId() to ensure the wal id is
properly kept up. HRegionStore does this every time it opens a new region.
conf
- The Configuration object to use.
IOException
public static HRegion openHRegion(org.apache.hadoop.fs.Path rootDir, RegionInfo info, TableDescriptor htd, WAL wal, org.apache.hadoop.conf.Configuration conf, RegionServerServices rsServices, CancelableProgressable reporter) throws IOException
rootDir
- Root directory for HBase instance
info
- Info for region to be opened.
htd
- the table descriptor
wal
- WAL for region to use. This method will call WAL#setSequenceNumber(long)
passing the result of the call to HRegion#getMinSequenceId() to ensure the
wal id is properly kept up. HRegionStore does this every time it opens a new
region.
conf
- The Configuration object to use.
rsServices
- An interface we can request flushes against.
reporter
- An interface we can report progress against.
IOException
public static HRegion openHRegion(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path rootDir, RegionInfo info, TableDescriptor htd, WAL wal) throws IOException
conf
- The Configuration object to use.
fs
- Filesystem to use
rootDir
- Root directory for HBase instance
info
- Info for region to be opened.
htd
- the table descriptor
wal
- WAL for region to use. This method will call WAL#setSequenceNumber(long) passing
the result of the call to HRegion#getMinSequenceId() to ensure the wal id is
properly kept up. HRegionStore does this every time it opens a new region.
IOException
public static HRegion openHRegion(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path rootDir, RegionInfo info, TableDescriptor htd, WAL wal, RegionServerServices rsServices, CancelableProgressable reporter) throws IOException
conf
- The Configuration object to use.
fs
- Filesystem to use
rootDir
- Root directory for HBase instance
info
- Info for region to be opened.
htd
- the table descriptor
wal
- WAL for region to use. This method will call WAL#setSequenceNumber(long)
passing the result of the call to HRegion#getMinSequenceId() to ensure the
wal id is properly kept up. HRegionStore does this every time it opens a new
region.
rsServices
- An interface we can request flushes against.
reporter
- An interface we can report progress against.
IOException
public static HRegion openHRegionFromTableDir(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path tableDir, RegionInfo info, TableDescriptor htd, WAL wal, RegionServerServices rsServices, CancelableProgressable reporter) throws IOException
conf
- The Configuration object to use.
fs
- Filesystem to use
info
- Info for region to be opened.
htd
- the table descriptor
wal
- WAL for region to use. This method will call WAL#setSequenceNumber(long)
passing the result of the call to HRegion#getMinSequenceId() to ensure the
wal id is properly kept up. HRegionStore does this every time it opens a new
region.
rsServices
- An interface we can request flushes against.
reporter
- An interface we can report progress against.
IOException
public NavigableMap<byte[],Integer> getReplicationScope()
public static HRegion openHRegion(HRegion other, CancelableProgressable reporter) throws IOException
other
- original object
reporter
- An interface we can report progress against.
IOException
public static Region openHRegion(Region other, CancelableProgressable reporter) throws IOException
IOException
private HRegion openHRegion(CancelableProgressable reporter) throws IOException
this
IOException
public static HRegion openReadOnlyFileSystemHRegion(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path tableDir, RegionInfo info, TableDescriptor htd) throws IOException
conf
- The Configuration object to use.
fs
- Filesystem to use
info
- Info for region to be opened.
htd
- the table descriptor
IOException
public static void warmupHRegion(RegionInfo info, TableDescriptor htd, WAL wal, org.apache.hadoop.conf.Configuration conf, RegionServerServices rsServices, CancelableProgressable reporter) throws IOException
IOException
@Deprecated public static org.apache.hadoop.fs.Path getRegionDir(org.apache.hadoop.fs.Path tabledir, String name)
tabledir
- qualified path for table
name
- ENCODED region name
public static boolean rowIsInRange(RegionInfo info, byte[] row)
info
- RegionInfo that specifies the row range
row
- row to be checked
public static boolean rowIsInRange(RegionInfo info, byte[] row, int offset, short length)
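The range check can be sketched with plain unsigned byte comparison. This is a hypothetical standalone helper, not HBase's version (which takes a RegionInfo); it follows the convention that a region's row range is half-open and an empty end key means unbounded.

```java
public class RowRangeSketch {
    // True if row falls within [startKey, endKey), using unsigned
    // lexicographic order; an empty endKey means "no upper bound",
    // matching how the last region of a table behaves.
    static boolean rowIsInRange(byte[] startKey, byte[] endKey, byte[] row) {
        boolean afterStart = compareUnsigned(row, startKey) >= 0;
        boolean beforeEnd = endKey.length == 0 || compareUnsigned(row, endKey) < 0;
        return afterStart && beforeEnd;
    }

    // Unsigned lexicographic byte[] comparison, shorter prefix sorts first.
    static int compareUnsigned(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int cmp = (a[i] & 0xff) - (b[i] & 0xff);
            if (cmp != 0) return cmp;
        }
        return a.length - b.length;
    }

    public static void main(String[] args) {
        byte[] start = "bbb".getBytes();
        byte[] end = "ddd".getBytes();
        System.out.println(rowIsInRange(start, end, "ccc".getBytes())); // true
        System.out.println(rowIsInRange(start, end, "ddd".getBytes())); // false: end is exclusive
    }
}
```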
public Result get(Get get) throws IOException
Region
get
in interface Region
get
- query parameters
IOException
void prepareGet(Get get) throws IOException
IOException
public List<Cell> get(Get get, boolean withCoprocessor) throws IOException
Region
get
in interface Region
get
- query parameters
withCoprocessor
- invoke coprocessor or not. We don't want to always invoke cp.
IOException
private List<Cell> get(Get get, boolean withCoprocessor, long nonceGroup, long nonce) throws IOException
IOException
private List<Cell> getInternal(Get get, boolean withCoprocessor, long nonceGroup, long nonce) throws IOException
IOException
void metricsUpdateForGet(List<Cell> results, long before)
public Result mutateRow(RowMutations rm) throws IOException
Region
mutateRow
in interface Region
rm
- object that specifies the set of mutations to perform atomically
IOException
public Result mutateRow(RowMutations rm, long nonceGroup, long nonce) throws IOException
IOException
public void mutateRowsWithLocks(Collection<Mutation> mutations, Collection<byte[]> rowsToLock, long nonceGroup, long nonce) throws IOException
mutateRowsWithLocks
in interface Region
mutations
- The list of mutations to perform. mutations can contain operations for multiple
rows. Caller has to ensure that all rows are contained in this region.
rowsToLock
- Rows to lock
nonceGroup
- Optional nonce group of the operation (client Id)
nonce
- Optional nonce of the operation (unique random id to ensure "more
idempotence"). If multiple rows are locked, care should be taken that
rowsToLock is sorted in order to avoid deadlocks.
IOException
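The sorting caveat above is the classic lock-ordering rule: if every caller acquires row locks in the same canonical order, two batches can never wait on each other cyclically. A minimal standalone sketch (the string keys and lock table are illustrative, not HBase internals):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class LockOrderSketch {
    static final Map<String, ReentrantLock> LOCKS = new ConcurrentHashMap<>();

    // Acquire locks for all rows in a canonical (sorted) order so that
    // concurrent batches cannot deadlock; returns locks in acquisition order.
    static List<ReentrantLock> lockRows(List<String> rows) {
        List<String> sorted = new ArrayList<>(rows);
        sorted.sort(Comparator.naturalOrder());
        List<ReentrantLock> held = new ArrayList<>();
        for (String row : sorted) {
            ReentrantLock l = LOCKS.computeIfAbsent(row, k -> new ReentrantLock());
            l.lock();
            held.add(l);
        }
        return held;
    }

    static void unlockAll(List<ReentrantLock> held) {
        // Release in reverse acquisition order.
        for (int i = held.size() - 1; i >= 0; i--) {
            held.get(i).unlock();
        }
    }

    public static void main(String[] args) {
        // Both of these batches lock row1 before row2, regardless of input order.
        List<ReentrantLock> held = lockRows(List.of("row2", "row1"));
        System.out.println(held.size()); // 2
        unlockAll(held);
    }
}
```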
public org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.RegionLoadStats getLoadStatistics()
public void processRowsWithLocks(RowProcessor<?,?> processor) throws IOException
Region
processRowsWithLocks
in interface Region
processor
- The object that defines the reads and writes to a row.
IOException
public void processRowsWithLocks(RowProcessor<?,?> processor, long nonceGroup, long nonce) throws IOException
Region
processRowsWithLocks
in interface Region
processor
- The object that defines the reads and writes to a row.
nonceGroup
- Optional nonce group of the operation (client Id)
nonce
- Optional nonce of the operation (unique random id to ensure "more
idempotence")
IOException
public void processRowsWithLocks(RowProcessor<?,?> processor, long timeout, long nonceGroup, long nonce) throws IOException
Region
processRowsWithLocks
in interface Region
processor
- The object that defines the reads and writes to a row.
timeout
- The timeout of the processor.process() execution. Use a negative number to
switch off the time bound.
nonceGroup
- Optional nonce group of the operation (client Id)
nonce
- Optional nonce of the operation (unique random id to ensure "more
idempotence")
IOException
private void preProcess(RowProcessor<?,?> processor, WALEdit walEdit) throws IOException
IOException
private void doProcessRowWithTimeout(RowProcessor<?,?> processor, long now, HRegion region, List<Mutation> mutations, WALEdit walEdit, long timeout) throws IOException
IOException
public Result append(Append append) throws IOException
Region
append
in interface Region
IOException
public Result append(Append append, long nonceGroup, long nonce) throws IOException
IOException
public Result increment(Increment increment) throws IOException
Region
increment
in interface Region
IOException
public Result increment(Increment increment, long nonceGroup, long nonce) throws IOException
IOException
private MultiVersionConcurrencyControl.WriteEntry doWALAppend(WALEdit walEdit, Durability durability, List<UUID> clusterIds, long now, long nonceGroup, long nonce) throws IOException
IOException
private MultiVersionConcurrencyControl.WriteEntry doWALAppend(WALEdit walEdit, Durability durability, List<UUID> clusterIds, long now, long nonceGroup, long nonce, long origLogSeqNum) throws IOException
IOException
void checkFamily(byte[] family) throws NoSuchColumnFamilyException
NoSuchColumnFamilyException
public long heapSize()
HeapSize
public boolean registerService(com.google.protobuf.Service instance)
Registers a new protocol buffer Service subclass as a coprocessor endpoint to be
available for handling Region#execService(com.google.protobuf.RpcController,
org.apache.hadoop.hbase.protobuf.generated.ClientProtos.CoprocessorServiceCall) calls.
Only a single instance may be registered per region for a given Service subclass (the
instances are keyed on Descriptors.ServiceDescriptor.getFullName()).
After the first registration, subsequent calls with the same service name will fail with a
return value of false.
instance
- the Service subclass instance to expose as a coprocessor endpoint
Returns true if the registration was successful, false otherwise.
public com.google.protobuf.Message execService(com.google.protobuf.RpcController controller, org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.CoprocessorServiceCall call) throws IOException
Executes a single protocol buffer coprocessor endpoint Service method using the
registered protocol handlers. Service implementations must be registered via the
registerService(com.google.protobuf.Service) method before they are available.
controller
- an RpcController implementation to pass to the invoked service
call
- a CoprocessorServiceCall instance identifying the service, method,
and parameters for the method invocation
Returns a protobuf Message instance containing the method's result.
IOException
- if no registered service handler is found or an error occurs during the invocation
registerService(com.google.protobuf.Service)
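The single-registration-per-name rule described above amounts to a putIfAbsent on a map keyed by the service's full name. A standalone sketch with a hypothetical registry (not the coprocessor machinery; the real key is the protobuf ServiceDescriptor full name):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ServiceRegistrySketch {
    private final Map<String, Object> registry = new ConcurrentHashMap<>();

    // Registers an endpoint under its service name. Returns true on the
    // first registration; later calls with the same name fail with false,
    // leaving the first instance in place.
    public boolean registerService(String fullName, Object instance) {
        return registry.putIfAbsent(fullName, instance) == null;
    }

    public static void main(String[] args) {
        ServiceRegistrySketch r = new ServiceRegistrySketch();
        System.out.println(r.registerService("MyService", new Object())); // true
        System.out.println(r.registerService("MyService", new Object())); // false
    }
}
```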
public Optional<byte[]> checkSplit()
public Optional<byte[]> checkSplit(boolean force)
public int getCompactPriority()
public RegionCoprocessorHost getCoprocessorHost()
public void setCoprocessorHost(RegionCoprocessorHost coprocessorHost)
coprocessorHost
- the new coprocessor host
public void startRegionOperation() throws IOException
Region
Region.closeRegionOperation()
MUST then always be called after the operation has completed,
whether it succeeded or failed.
startRegionOperation
in interface Region
IOException
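The MUST-pair rule above is the standard try/finally discipline. A caller-side sketch, using a read-write lock to stand in for the region lock (hypothetical method bodies, not HRegion's): taking the read lock prevents a close (which needs the write lock) for the operation's duration.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RegionOperationSketch {
    final ReentrantReadWriteLock regionLock = new ReentrantReadWriteLock();

    // Stands in for startRegionOperation(): blocks a close while held.
    void startRegionOperation() {
        regionLock.readLock().lock();
    }

    void closeRegionOperation() {
        regionLock.readLock().unlock();
    }

    void doSomeRowOperation() {
        startRegionOperation();
        try {
            // ... row-level work happens here ...
        } finally {
            // Always release, whether the operation succeeded or failed;
            // otherwise the region could never be closed.
            closeRegionOperation();
        }
    }

    public static void main(String[] args) {
        RegionOperationSketch r = new RegionOperationSketch();
        r.doSomeRowOperation();
        // After the paired release, an exclusive "close" can proceed.
        System.out.println(r.regionLock.writeLock().tryLock()); // true
    }
}
```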
public void startRegionOperation(Region.Operation op) throws IOException
Region
Region.closeRegionOperation()
MUST then always be called after the operation has completed,
whether it succeeded or failed.
startRegionOperation
in interface Region
op
- The operation about to be taken on the region
IOException
public void closeRegionOperation() throws IOException
Region
closeRegionOperation
in interface Region
IOException
public void closeRegionOperation(Region.Operation operation) throws IOException
Region
Region.startRegionOperation(Operation)
closeRegionOperation
in interface Region
IOException
private void startBulkRegionOperation(boolean writeLockNeeded) throws IOException
NotServingRegionException
- when the region is closing or closed
RegionTooBusyException
- if failed to get the lock in time
InterruptedIOException
- if interrupted while waiting for a lock
IOException
private void closeBulkRegionOperation()
private void recordMutationWithoutWal(Map<byte[],List<Cell>> familyMap)
private void lock(Lock lock) throws IOException
IOException
private void lock(Lock lock, int multiplier) throws IOException
IOException
private void sync(long txid, Durability durability) throws IOException
txid
- the transaction id up to which to sync
IOException
- if anything goes wrong with DFS
private boolean shouldSyncWAL()
public long getOpenSeqNum()
public Map<byte[],Long> getMaxStoreSeqId()
getMaxStoreSeqId
in interface Region
public long getOldestSeqIdOfStore(byte[] familyName)
public CompactionState getCompactionState()
Region
getCompactionState
in interface Region
public void reportCompactionRequestStart(boolean isMajor)
public void reportCompactionRequestEnd(boolean isMajor, int numFiles, long filesSizeCompacted)
public void reportCompactionRequestFailure()
public void incrementCompactionsQueuedCount()
public void decrementCompactionsQueuedCount()
public void incrementFlushesQueuedCount()
protected void decrementFlushesQueuedCount()
void disableInterrupts()
Make this region's handler threads ineligible for interrupt until enableInterrupts().
void enableInterrupts()
If a handler thread was made ineligible for interrupt via disableInterrupts(), make
it eligible again. No-op if interrupts are already enabled.
private void interruptRegionOperations()
Interrupt any region operations that have acquired the region lock via
startRegionOperation(org.apache.hadoop.hbase.regionserver.Region.Operation), or
startBulkRegionOperation(boolean).
void checkInterrupt() throws NotServingRegionException, InterruptedIOException
NotServingRegionException
- if region is closing
InterruptedIOException
- if interrupted but region is not closing
IOException throwOnInterrupt(Throwable t)
t
- cause
public void onConfigurationChange(org.apache.hadoop.conf.Configuration conf)
This method would be called by the ConfigurationManager object when the
Configuration object is reloaded from disk.
onConfigurationChange
in interface ConfigurationObserver
public void registerChildren(ConfigurationManager manager)
registerChildren
in interface PropagatingConfigurationObserver
manager
- the manager to register to
public void deregisterChildren(ConfigurationManager manager)
deregisterChildren
in interface PropagatingConfigurationObserver
manager
- the manager to deregister from
public CellComparator getCellComparator()
Region
getCellComparator
in interface Region
public long getMemStoreFlushSize()
void throwException(String title, String regionName)
public void requestCompaction(String why, int priority, boolean major, CompactionLifeCycleTracker tracker) throws IOException
Region
requestCompaction
in interface Region
IOException
public void requestCompaction(byte[] family, String why, int priority, boolean major, CompactionLifeCycleTracker tracker) throws IOException
Region
requestCompaction
in interface Region
IOException
private void requestFlushIfNeeded() throws RegionTooBusyException
RegionTooBusyException
private void requestFlush()
private void requestFlush0(FlushLifeCycleTracker tracker)
public void requestFlush(FlushLifeCycleTracker tracker) throws IOException
Region
requestFlush
in interface Region
IOException
private static void decorateRegionConfiguration(org.apache.hadoop.conf.Configuration conf)
conf
- region configurations
public void addReadRequestsCount(long readRequestsCount)
public void addWriteRequestsCount(long writeRequestsCount)
boolean isReadsEnabled()
Copyright © 2007–2020 The Apache Software Foundation. All rights reserved.