Class ZkSplitLogWorkerCoordination
java.lang.Object
org.apache.hadoop.hbase.zookeeper.ZKListener
org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination
- All Implemented Interfaces:
SplitLogWorkerCoordination
@Private
public class ZkSplitLogWorkerCoordination
extends ZKListener
implements SplitLogWorkerCoordination
ZooKeeper-based implementation of
SplitLogWorkerCoordination. It listens for changes to
the split-log task znodes in ZooKeeper.
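For orientation, the listen-for-changes mechanism can be sketched with the plain ZooKeeper client: watch the children of the split-log directory znode and re-scan whenever they change. The quorum string, znode path, and class name below are illustrative placeholders, not HBase constants.

import java.util.List;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

// Minimal sketch of the watch-and-rescan pattern this class implements.
public class SplitLogWatchSketch implements Watcher {
  private static final String SPLITLOG_ZNODE = "/hbase/splitlog"; // assumed default path
  private final ZooKeeper zk;

  public SplitLogWatchSketch(String quorum) throws Exception {
    this.zk = new ZooKeeper(quorum, 30_000, this);
  }

  // List the current task znodes and leave a watch so process() fires on changes.
  public List<String> scanTasks() throws KeeperException, InterruptedException {
    return zk.getChildren(SPLITLOG_ZNODE, true);
  }

  @Override
  public void process(WatchedEvent event) {
    if (event.getType() == Event.EventType.NodeChildrenChanged) {
      try {
        scanTasks(); // a task znode appeared or disappeared; re-scan and try to grab work
      } catch (Exception e) {
        // a real implementation would log and retry
      }
    }
  }
}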
Nested Class Summary
Nested Classes
Modifier and Type    Class    Description
(package private) class    ZkSplitLogWorkerCoordination.GetDataAsyncCallback    Asynchronous handler for zk get-data-set-watch on node results.
static class    ZkSplitLogWorkerCoordination.ZkSplitTaskDetails    When ZK-based implementation wants to complete the task, it needs to know task znode and current znode cversion (needed for subsequent update operation).
Nested classes/interfaces inherited from interface org.apache.hadoop.hbase.coordination.SplitLogWorkerCoordination
SplitLogWorkerCoordination.SplitTaskDetails -
Field Summary
Fields
Modifier and Type    Field
private static final int    checkInterval
private String    currentTask
private int    currentVersion
private static final int    FAILED_TO_OWN_TASK
private final Object    grabTaskLock
private static final org.slf4j.Logger    LOG
private int    maxConcurrentTasks
private int    reportPeriod
private RegionServerServices    server
private final ServerName    serverName
private boolean    shouldStop
private SplitLogWorker.TaskExecutor    splitTaskExecutor
private final AtomicInteger    taskReadySeq
protected final AtomicInteger    tasksInProgress
private SplitLogWorker    worker
private boolean    workerInGrabTask
Fields inherited from class org.apache.hadoop.hbase.zookeeper.ZKListener
watcher -
Constructor Summary
Constructors -
Method Summary
Modifier and Type    Method    Description
private boolean    areSplittersAvailable()    Returns true if more splitters are available, otherwise false.
protected static int    attemptToOwnTask(boolean isFirstTime, ZKWatcher zkw, ServerName server, String task, int taskZKVersion)    Try to own the task by transitioning the zk node data from UNASSIGNED to OWNED.
void    endTask(SplitLogTask slt, LongAdder ctr, SplitLogWorkerCoordination.SplitTaskDetails details)    endTask() can fail and the only way to recover out of it is for the SplitLogManager to time out the task node.
void    getDataSetWatchAsync()
(package private) void    getDataSetWatchFailure(String path)
(package private) void    getDataSetWatchSuccess(String path, byte[] data)
int    getTaskReadySeq()    Used by unit tests to check how many tasks were processed.
private boolean    grabTask(String path)    Try to grab a 'lock' on the task zk node to own and execute the task.
void    init(RegionServerServices server, org.apache.hadoop.conf.Configuration conf, SplitLogWorker.TaskExecutor splitExecutor, SplitLogWorker worker)    Override setter from SplitLogWorkerCoordination.
boolean    isReady()    Check whether the log splitter is ready to supply tasks.
boolean    isStop()    Returns the current value of exitWorker.
void    markCorrupted(org.apache.hadoop.fs.Path rootDir, String name, org.apache.hadoop.fs.FileSystem fs)    Marks log file as corrupted.
void    nodeChildrenChanged(String path)    Override handler from ZKListener.
void    nodeDataChanged(String path)    Override handler from ZKListener.
void    registerListener()    Set the listener for task changes.
void    removeListener()    Remove the listener for task changes.
void    stopProcessingTasks()    Called when Coordination should stop processing tasks and exit.
(package private) void    submitTask(String curTask, int curTaskZKVersion, int reportPeriod)    Submit a log split task to executor service.
void    taskLoop()    Wait for tasks to become available at /hbase/splitlog zknode.
Methods inherited from class org.apache.hadoop.hbase.zookeeper.ZKListener
getWatcher, nodeCreated, nodeDeleted
-
Field Details
-
LOG
-
checkInterval
-
FAILED_TO_OWN_TASK
-
worker
-
splitTaskExecutor
-
taskReadySeq
-
currentTask
-
currentVersion
-
shouldStop
-
grabTaskLock
-
workerInGrabTask
-
reportPeriod
-
server
-
tasksInProgress
-
maxConcurrentTasks
-
serverName
-
-
Constructor Details
-
ZkSplitLogWorkerCoordination
-
-
Method Details
-
nodeChildrenChanged
Override handler from ZKListener
- Overrides:
nodeChildrenChanged in class ZKListener
- Parameters:
path - full path of the node whose children have changed
-
nodeDataChanged
Override handler from ZKListener
- Overrides:
nodeDataChanged in class ZKListener
- Parameters:
path - full path of the updated node
-
init
public void init(RegionServerServices server, org.apache.hadoop.conf.Configuration conf, SplitLogWorker.TaskExecutor splitExecutor, SplitLogWorker worker)
Override setter from SplitLogWorkerCoordination
- Specified by:
init in interface SplitLogWorkerCoordination
- Parameters:
server - instance of RegionServerServices to work with
conf - the current configuration
splitExecutor - split executor from SplitLogWorker
worker - instance of SplitLogWorker
-
getDataSetWatchFailure
-
getDataSetWatchAsync
-
getDataSetWatchSuccess
-
grabTask
Try to grab a 'lock' on the task zk node to own and execute the task.
- Parameters:
path - zk node for the task
- Returns:
true if the task was grabbed successfully, otherwise false
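In essence the grab is a versioned compare-and-set on the task znode: read the current state, and only if it is still UNASSIGNED attempt to write OWNED with the version just read. A hedged sketch with the plain ZooKeeper client follows; the string payloads stand in for HBase's serialized SplitLogTask data.

import java.nio.charset.StandardCharsets;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

// Sketch: grab a task by flipping its znode from UNASSIGNED to OWNED with a versioned write.
final class GrabTaskSketch {
  static boolean grabTask(ZooKeeper zk, String taskPath, String myServerName)
      throws KeeperException, InterruptedException {
    Stat stat = new Stat();
    byte[] data = zk.getData(taskPath, false, stat);       // current state plus znode version
    String state = new String(data, StandardCharsets.UTF_8);
    if (!state.startsWith("UNASSIGNED")) {
      return false;                                         // already owned or finished
    }
    byte[] owned = ("OWNED " + myServerName).getBytes(StandardCharsets.UTF_8);
    try {
      zk.setData(taskPath, owned, stat.getVersion());       // succeeds only if nobody raced us
      return true;
    } catch (KeeperException.BadVersionException e) {
      return false;                                         // another worker grabbed it first
    }
  }
}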
-
submitTask
Submit a log split task to executor service.
- Parameters:
curTask - task to submit
curTaskZKVersion - current version of task
-
areSplittersAvailable
Returns true if more splitters are available, otherwise false. -
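The check itself amounts to comparing the number of splits in flight against the configured ceiling, tracked by the tasksInProgress and maxConcurrentTasks fields above. A trivial sketch (the field names mirror this class; the rest is illustrative):

import java.util.concurrent.atomic.AtomicInteger;

// Sketch: grab another task only while capacity remains.
final class SplitterCapacitySketch {
  private final AtomicInteger tasksInProgress = new AtomicInteger(0);
  private final int maxConcurrentTasks;

  SplitterCapacitySketch(int maxConcurrentTasks) {
    this.maxConcurrentTasks = maxConcurrentTasks;
  }

  boolean areSplittersAvailable() {
    return maxConcurrentTasks - tasksInProgress.get() > 0;
  }
}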
attemptToOwnTask
protected static int attemptToOwnTask(boolean isFirstTime, ZKWatcher zkw, ServerName server, String task, int taskZKVersion)
Try to own the task by transitioning the zk node data from UNASSIGNED to OWNED. This method is also used to periodically heartbeat the task progress by transitioning the node from OWNED to OWNED (re-writing the same OWNED state bumps the znode version, which is what signals progress).
- Parameters:
isFirstTime - shows whether it's the first attempt
zkw - zk watcher
server - name
task - to own
taskZKVersion - version of the task in zk
- Returns:
non-negative integer value when task can be owned by current region server otherwise -1
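The ownership/heartbeat step can be sketched as a conditional write against the last known znode version: success yields the new version to remember for the next heartbeat, while a version mismatch means ownership was lost. The signature and payload below are illustrative, not this method's actual internals.

import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

// Sketch: own (or heartbeat) a task with a versioned setData; -1 signals failure to own.
final class OwnTaskSketch {
  static final int FAILED_TO_OWN_TASK = -1;

  static int attemptToOwnTask(ZooKeeper zk, String taskPath, byte[] ownedData,
      int lastKnownVersion) throws InterruptedException {
    try {
      Stat stat = zk.setData(taskPath, ownedData, lastKnownVersion);
      return stat.getVersion();         // keep this for the next heartbeat
    } catch (KeeperException.BadVersionException e) {
      return FAILED_TO_OWN_TASK;        // someone else changed the node; stop working on it
    } catch (KeeperException e) {
      return FAILED_TO_OWN_TASK;        // e.g. the node was deleted out from under us
    }
  }
}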
-
taskLoop
Wait for tasks to become available at /hbase/splitlog zknode. Grab a task one at a time. This policy puts an upper limit on the number of simultaneous log-splitting tasks that could be happening in a cluster. Synchronization using taskReadySeq ensures that it will try to grab every task that has been put up.
- Specified by:
taskLoop in interface SplitLogWorkerCoordination
- Throws:
InterruptedException - if the SplitLogWorker was stopped
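A rough sketch of the loop and its taskReadySeq handshake, using the plain ZooKeeper client; the znode path, the stubbed grab call, and the wake-up wiring are illustrative rather than the actual implementation.

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

// Sketch: scan task znodes, try to grab each, then sleep until the ready sequence moves.
final class TaskLoopSketch {
  private final AtomicInteger taskReadySeq = new AtomicInteger(0);
  private volatile boolean shouldStop = false;

  void taskLoop(ZooKeeper zk) throws InterruptedException, KeeperException {
    int seenSeq = taskReadySeq.get();
    while (!shouldStop) {
      List<String> tasks = zk.getChildren("/hbase/splitlog", true); // re-arm the watch
      for (String task : tasks) {
        tryGrab("/hbase/splitlog/" + task);                         // placeholder for grabTask()
      }
      synchronized (taskReadySeq) {
        while (seenSeq == taskReadySeq.get() && !shouldStop) {
          taskReadySeq.wait(1000);                                  // woken by nodeChildrenChanged()
        }
        seenSeq = taskReadySeq.get();
      }
    }
  }

  // Called from the ZooKeeper watcher when the children of /hbase/splitlog change.
  void nodeChildrenChanged() {
    synchronized (taskReadySeq) {
      taskReadySeq.incrementAndGet();
      taskReadySeq.notifyAll();
    }
  }

  void stopProcessingTasks() {
    shouldStop = true;
    nodeChildrenChanged(); // wake the loop so it can exit
  }

  private void tryGrab(String taskPath) {
    // see the grabTask sketch above
  }
}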
-
getTaskList
- Throws:
InterruptedException
-
markCorrupted
public void markCorrupted(org.apache.hadoop.fs.Path rootDir, String name, org.apache.hadoop.fs.FileSystem fs)
Description copied from interface: SplitLogWorkerCoordination
Marks log file as corrupted.
- Specified by:
markCorrupted in interface SplitLogWorkerCoordination
- Parameters:
rootDir - where to find the log
name - of the log
fs - file system
-
isReady
Description copied from interface: SplitLogWorkerCoordination
Check whether the log splitter is ready to supply tasks.
- Specified by:
isReady in interface SplitLogWorkerCoordination
- Returns:
false if there are no tasks
- Throws:
InterruptedException - if the SplitLogWorker was stopped
-
getTaskReadySeq
Description copied from interface: SplitLogWorkerCoordination
Used by unit tests to check how many tasks were processed.
- Specified by:
getTaskReadySeq in interface SplitLogWorkerCoordination
- Returns:
number of tasks
-
registerListener
Description copied from interface: SplitLogWorkerCoordination
Set the listener for task changes. Implementation specific.
- Specified by:
registerListener in interface SplitLogWorkerCoordination
-
removeListener
Description copied from interface: SplitLogWorkerCoordination
Remove the listener for task changes. Implementation specific.
- Specified by:
removeListener in interface SplitLogWorkerCoordination
-
stopProcessingTasks
Description copied from interface: SplitLogWorkerCoordination
Called when Coordination should stop processing tasks and exit.
- Specified by:
stopProcessingTasks in interface SplitLogWorkerCoordination
-
isStop
Description copied from interface: SplitLogWorkerCoordination
Returns the current value of exitWorker.
- Specified by:
isStop in interface SplitLogWorkerCoordination
-
endTask
public void endTask(SplitLogTask slt, LongAdder ctr, SplitLogWorkerCoordination.SplitTaskDetails details)
endTask() can fail and the only way to recover out of it is for the SplitLogManager to time out the task node.
- Specified by:
endTask in interface SplitLogWorkerCoordination
- Parameters:
slt - See SplitLogTask
ctr - counter to be updated
details - details about log split task (specific to coordination engine being used)
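Conceptually this is a best-effort write of the task's terminal state to its znode plus a counter bump; if the write fails, nothing is retried here and the SplitLogManager's timeout eventually reclaims the task. A hedged sketch, with the terminal-state encoding and version handling left as illustrative placeholders:

import java.util.concurrent.atomic.LongAdder;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

// Sketch: record completion on the task znode; on failure, rely on the manager's timeout.
final class EndTaskSketch {
  static void endTask(ZooKeeper zk, String taskPath, byte[] terminalState,
      int lastKnownVersion, LongAdder successCounter) {
    try {
      zk.setData(taskPath, terminalState, lastKnownVersion);
      successCounter.increment();
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();   // give up; the manager will time the task out
    } catch (KeeperException e) {
      // give up; the manager will time the task out and resubmit if needed
    }
  }
}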
-