Class SimpleLoadBalancer

java.lang.Object
org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer
org.apache.hadoop.hbase.master.balancer.SimpleLoadBalancer
All Implemented Interfaces:
ConfigurationObserver, LoadBalancer, Stoppable

@LimitedPrivate("Configuration") public class SimpleLoadBalancer extends BaseLoadBalancer
Makes decisions about the placement and movement of Regions across RegionServers.

Cluster-wide load balancing occurs only when there are no regions in transition, and runs on a fixed period via BaseLoadBalancer.balanceCluster(Map).

On cluster startup, bulk assignment can be used to determine locations for all Regions in a cluster.

This class produces plans for the AssignmentManager to execute.

  • Method Details

    • setClusterLoad

      Pass RegionStates and allow balancer to set the current cluster load.
    • preBalanceCluster

      protected void preBalanceCluster(Map<TableName,Map<ServerName,List<RegionInfo>>> loadOfAllTable)
      Description copied from class: BaseLoadBalancer
Called before actually executing balanceCluster. Subclasses may override this method to do initialization work.
      Overrides:
      preBalanceCluster in class BaseLoadBalancer
    • loadConf

      protected void loadConf(org.apache.hadoop.conf.Configuration conf)
      Overrides:
      loadConf in class BaseLoadBalancer
    • onConfigurationChange

      public void onConfigurationChange(org.apache.hadoop.conf.Configuration conf)
      Description copied from interface: ConfigurationObserver
This method is called by the ConfigurationManager object when the Configuration object is reloaded from disk.
      Specified by:
      onConfigurationChange in interface ConfigurationObserver
      Specified by:
      onConfigurationChange in interface LoadBalancer
      Overrides:
      onConfigurationChange in class BaseLoadBalancer
    • setLoad

      private void setLoad(List<ServerAndLoad> slList, int i, int loadChange)
    • overallNeedsBalance

      private boolean overallNeedsBalance()
A checker function that decides whether, when we want overall balance and a certain table has already been balanced, we still need to redistribute regions of that table to achieve the state of overall balance.
      Returns:
      true if this table should be balanced.
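The intent of this check can be sketched in isolation. The following is a hypothetical simplification (plain region counts standing in for ServerAndLoad, method and class names invented for illustration): overall rebalancing is needed when some server holds more than ceiling(average) or fewer than floor(average) regions.

```java
import java.util.List;

public class OverallBalanceCheck {
    // Hypothetical sketch of an overall-balance test over plain region counts:
    // balancing is needed when any server is above ceil(avg) or below floor(avg).
    static boolean overallNeedsBalance(List<Integer> regionCounts) {
        int total = 0;
        for (int c : regionCounts) {
            total += c;
        }
        int n = regionCounts.size();
        int min = total / n;                        // floor(average)
        int max = (total % n == 0) ? min : min + 1; // ceiling(average)
        for (int c : regionCounts) {
            if (c > max || c < min) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(overallNeedsBalance(List.of(3, 3, 4))); // within floor/ceil: false
        System.out.println(overallNeedsBalance(List.of(1, 6, 3))); // 6 > ceil(10/3): true
    }
}
```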
    • needsBalance

      private boolean needsBalance(BalancerClusterState c)
    • balanceTable

      protected List<RegionPlan> balanceTable(TableName tableName, Map<ServerName,List<RegionInfo>> loadOfOneTable)
Generate a global load balancing plan according to the specified map of server information to the most loaded regions of each server.
The load balancing invariant is that all servers are within 1 region of the average number of regions per server. If the average is an integer, all servers will be balanced to the average; otherwise, all servers will have either floor(average) or ceiling(average) regions.
HBASE-3609 modeled regionsToMove using Guava's MinMaxPriorityQueue so that we can fetch from both ends of the queue. At the beginning, we check whether an empty region server was just discovered by the Master. If so, we alternately choose new/old regions from the head/tail of regionsToMove, respectively. This alternation avoids clustering young regions on the newly discovered region server. Otherwise, we choose new regions from the head of regionsToMove.
Another improvement from HBASE-3609 is that we assign regions from regionsToMove to underloaded servers in round-robin fashion. Previously, one underloaded server would be filled before we moved on to the next underloaded server, leading to clustering of young regions.
Finally, we randomly shuffle underloaded servers so that they receive offloaded regions relatively evenly across calls to balanceCluster().
The algorithm is currently implemented as follows:
      1. Determine the two valid numbers of regions each server should have, MIN=floor(average) and MAX=ceiling(average).
      2. Iterate down the most loaded servers, shedding regions from each so each server hosts exactly MAX regions. Stop once you reach a server that already has <= MAX regions.

        Order the regions to move from most recent to least.

3. Iterate down the least loaded servers, assigning regions so each server has exactly MIN regions. Stop once you reach a server that already has >= MIN regions. Regions assigned to underloaded servers are those that were shed in the previous step. It is possible that there were not enough regions shed to fill each underloaded server to MIN; if so, we end up with the number of regions required to do so, neededRegions. It is also possible that we were able to fill each underloaded server but ended up with regions that were unassigned from overloaded servers and still have no assignment. If neither of these conditions holds (no regions needed to fill the underloaded servers, no regions leftover from overloaded servers), we are done and return. Otherwise we handle these cases below.
4. If neededRegions is non-zero (we still have underloaded servers), we iterate the most loaded servers again, shedding a single region from each (this brings them from having MAX regions to having MIN regions).
      5. We now definitely have more regions that need assignment, either from the previous step or from the original shedding from overloaded servers. Iterate the least loaded servers filling each to MIN.
      6. If we still have more regions that need assignment, again iterate the least loaded servers, this time giving each one (filling them to MAX) until we run out.
      7. All servers will now either host MIN or MAX regions. In addition, any server hosting >= MAX regions is guaranteed to end up with MAX regions at the end of the balancing. This ensures the minimal number of regions possible are moved.
      TODO: We can at-most reassign the number of regions away from a particular server to be how many they report as most loaded. Should we just keep all assignment in memory? Any objections? Does this mean we need HeapSize on HMaster? Or just careful monitor? (current thinking is we will hold all assignments in memory)
      Specified by:
      balanceTable in class BaseLoadBalancer
Parameters:
tableName - the table to be balanced
loadOfOneTable - Map of regionservers and their load/region information to a list of their most loaded regions
      Returns:
a list of regions to be moved, including source and destination, or null if the cluster is already balanced
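The floor/ceiling invariant described above can be illustrated with a stripped-down sketch. This is not the HBase implementation: plain strings stand in for ServerName and RegionInfo, the class and method names are invented, and the round-robin and head/tail refinements from HBASE-3609 are omitted. It sheds regions from servers above MAX, then fills servers first to MIN and then to MAX.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class MinMaxBalanceSketch {
    // Toy illustration of the floor/ceiling invariant. Servers and regions are
    // plain strings; returns a map of region -> destination server.
    static Map<String, String> balance(Map<String, List<String>> load) {
        int total = load.values().stream().mapToInt(List::size).sum();
        int n = load.size();
        int min = total / n;                        // MIN = floor(average)
        int max = (total % n == 0) ? min : min + 1; // MAX = ceiling(average)

        // Shed regions from overloaded servers down to MAX (step 2).
        Deque<String> toMove = new ArrayDeque<>();
        for (List<String> regions : load.values()) {
            while (regions.size() > max) {
                toMove.add(regions.remove(regions.size() - 1));
            }
        }

        // Fill underloaded servers: first to MIN, then leftovers to MAX
        // (a collapsed stand-in for steps 3-6).
        Map<String, String> plans = new LinkedHashMap<>();
        for (int target : new int[] { min, max }) {
            for (Map.Entry<String, List<String>> e : load.entrySet()) {
                List<String> regions = e.getValue();
                while (!toMove.isEmpty() && regions.size() < target) {
                    String region = toMove.poll();
                    regions.add(region);
                    plans.put(region, e.getKey());
                }
            }
        }
        return plans;
    }

    public static void main(String[] args) {
        Map<String, List<String>> load = new LinkedHashMap<>();
        load.put("rs1", new ArrayList<>(List.of("r1", "r2", "r3", "r4", "r5")));
        load.put("rs2", new ArrayList<>(List.of("r6")));
        load.put("rs3", new ArrayList<>());
        balance(load);
        // 6 regions / 3 servers: every server ends with exactly 2 regions.
        load.values().forEach(r -> System.out.println(r.size()));
    }
}
```

Note that the real balanceTable additionally guarantees that shedding stops as soon as a server at or below MAX is reached, which keeps the number of moves minimal.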
    • balanceOverall

      private void balanceOverall(List<RegionPlan> regionsToReturn, Map<ServerName,SimpleLoadBalancer.BalanceInfo> serverBalanceInfo, boolean fetchFromTail, org.apache.hbase.thirdparty.com.google.common.collect.MinMaxPriorityQueue<RegionPlan> regionsToMove, int max, int min)
If we need to balance overall, we add one more round to peel off one region from each server at MAX. Together with the other regions left to be assigned, we distribute all of regionsToMove to the RegionServers that have fewer regions in the whole cluster scope.
    • addRegionPlan

      private void addRegionPlan(org.apache.hbase.thirdparty.com.google.common.collect.MinMaxPriorityQueue<RegionPlan> regionsToMove, boolean fetchFromTail, ServerName sn, List<RegionPlan> regionsToReturn)
      Add a region from the head or tail to the List of regions to return.
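The head-or-tail fetch controlled by fetchFromTail can be mimicked with a plain java.util.Deque in place of Guava's MinMaxPriorityQueue. This is a hypothetical sketch with invented names, not the HBase code; it only shows how alternating between the two ends interleaves old and young regions rather than clustering the young ones.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class HeadTailFetch {
    // Drains the queue alternating head/tail, the way addRegionPlan's
    // fetchFromTail flag alternates when a newly discovered server is present.
    static List<String> drainAlternating(Deque<String> regionsToMove) {
        List<String> order = new ArrayList<>();
        boolean fetchFromTail = false;
        while (!regionsToMove.isEmpty()) {
            order.add(fetchFromTail ? regionsToMove.pollLast()
                                    : regionsToMove.pollFirst());
            fetchFromTail = !fetchFromTail; // alternate to avoid clustering young regions
        }
        return order;
    }

    public static void main(String[] args) {
        Deque<String> q =
            new ArrayDeque<>(List.of("old1", "old2", "mid", "young2", "young1"));
        System.out.println(drainAlternating(q)); // [old1, young1, old2, young2, mid]
    }
}
```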