This chapter expands upon Chapter 1, Getting Started, to further explain configuration of Apache HBase. Please read this chapter carefully, especially Section 2.1, “Basic Prerequisites”, to ensure that your HBase testing and deployment goes smoothly and to prevent data loss.
Apache HBase uses the same configuration system as Apache Hadoop. All configuration files
are located in the
conf/ directory, which needs to be kept in sync for each
node on your cluster.
HBase Configuration Files
backup-masters
Not present by default. A plain-text file which lists hosts on which the Master should start a backup Master process, one host per line.

hadoop-metrics2-hbase.properties
Used to connect HBase to Hadoop's Metrics2 framework. See the Hadoop Wiki entry for more information on Metrics2. Contains only commented-out examples by default.

hbase-env.sh (and hbase-env.cmd)
Script for Windows and Linux / Unix environments to set up the working environment for HBase, including the location of Java, Java options, and other environment variables. The file contains many commented-out examples to provide guidance.

hbase-policy.xml
The default policy configuration file used by RPC servers to make authorization decisions on client requests. Only used if HBase security (Chapter 8, Secure Apache HBase) is enabled.

hbase-site.xml
The main HBase configuration file. This file specifies configuration options which override HBase's default configuration. You can view (but do not edit) the default configuration file at docs/hbase-default.xml. You can also view the entire effective configuration for your cluster (defaults and overrides) in the HBase Configuration tab of the HBase Web UI. A short example override appears after this list.

log4j.properties
Configuration file for HBase logging via log4j.

regionservers
A plain-text file containing a list of hosts which should run a RegionServer in your HBase cluster. By default this file contains the single entry localhost. It should contain a list of hostnames or IP addresses, one per line, and should only contain localhost if each node in your cluster will run a RegionServer on its localhost interface.
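As an illustration, a minimal hbase-site.xml override might look like the following sketch. The hbase.rootdir and hbase.zookeeper.quorum property names are standard, but the host names and path shown are placeholder values for your own cluster:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://namenode.example.org:8020/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1.example.org,zk2.example.org,zk3.example.org</value>
  </property>
</configuration>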
When you edit XML, it is a good idea to use an XML-aware editor to be sure that your
syntax is correct and your XML is well-formed. You can also use the xmllint
utility to check that your XML is well-formed. By default, xmllint re-flows
and prints the XML to standard output. To check for well-formedness and only print output if
errors exist, use the command xmllint -noout filename.xml.
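For example, to validate the main configuration file from the HBase installation directory:

$ xmllint -noout conf/hbase-site.xml

The command prints nothing if the file is well-formed.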
When running in distributed mode, after you make an edit to an HBase configuration file, make sure you copy the content of the conf/ directory to all nodes of the cluster. HBase will not do this for you. Use rsync, scp, or another secure mechanism for copying the configuration files to your nodes. For most configurations, a server restart is needed for changes to take effect. Dynamic configuration is an exception to this, and is described later below.
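As a sketch, pushing the configuration to one node with rsync might look like the following; the host name and installation path are placeholders for your own environment:

$ rsync -az conf/ node1.example.org:/usr/local/hbase/conf/

Repeat for each node, or wrap the command in a loop over the contents of your regionservers file.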
This section lists required services and some required system configuration.
Table 2.1. Java
|HBase Version|JDK 6|JDK 7|JDK 8|
|1.0|Not Supported|yes|Running with JDK 8 will work but is not well tested.|
|0.98|yes|yes|Running with JDK 8 works but is not well tested. Building with JDK 8 would require removal of the deprecated remove() method of the PoolMap class and is under consideration. See HBASE-7608 for more information about JDK 8 support.|
Operating System Utilities
HBase uses the Secure Shell (ssh) command and utilities extensively to communicate between cluster nodes. Each server in the cluster must be running ssh so that the Hadoop and HBase daemons can be managed. You must be able to connect to all nodes via SSH, including the local node, from the Master as well as any backup Master, using a shared key rather than a password. You can see the basic methodology for such a set-up in Linux or Unix systems at Procedure 1.4, “Configure Password-Less SSH Access”. If your cluster nodes use OS X, see the section, SSH: Setting up Remote Desktop and Enabling Self-Login on the Hadoop wiki.
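As a minimal sketch of such a set-up (run as the user that will manage the cluster; the host name is a placeholder):

$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$ ssh-copy-id hadoop@node1.example.org
$ ssh hadoop@node1.example.org    # should log in without prompting for a password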
HBase uses the local hostname to self-report its IP address. Both forward and reverse DNS resolution must work in versions of HBase prior to 0.92.0.
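You can sanity-check both directions with standard tools; the host name and address below are placeholders:

$ host node1.example.org    # forward lookup should return the node's IP address
$ host 192.168.0.10         # reverse lookup should return the node's hostname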
If your server has multiple network interfaces, HBase defaults to using the interface that the primary hostname resolves to. To override this behavior, set the hbase.regionserver.dns.interface property to a different interface. This will only work if each server in your cluster uses the same network interface configuration.
To choose a different DNS nameserver than the system default, set the hbase.regionserver.dns.nameserver property to the IP address of that nameserver.
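For example, a sketch of both properties in hbase-site.xml; the interface name and nameserver address are placeholder values:

<property>
  <name>hbase.regionserver.dns.interface</name>
  <value>eth1</value>
</property>
<property>
  <name>hbase.regionserver.dns.nameserver</name>
  <value>192.168.0.2</value>
</property>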
Prior to hbase-0.96.0, HBase only used the IP address
127.0.0.1 to refer to
localhost, and this could
not be configured. See Loopback IP.
The clocks on cluster nodes should be synchronized. A small amount of variation is acceptable, but larger amounts of skew can cause erratic and unexpected behavior. Time synchronization is one of the first things to check if you see unexplained problems in your cluster. It is recommended that you run a Network Time Protocol (NTP) service, or another time-synchronization mechanism, on your cluster, and that all nodes look to the same service for time synchronization. See the Basic NTP Configuration at The Linux Documentation Project (TLDP) to set up NTP.
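For example, on a node running the classic ntpd, you can confirm that it is synchronized to a peer:

$ ntpq -p

A peer marked with * in the first column is the one currently selected for synchronization.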
Apache HBase is a database. It requires the ability to open a large number of files at once. Many Linux distributions limit the number of files a single user is allowed to open to 1024 (or 256 on older versions of OS X).
You can check this limit on your servers by running the command ulimit -n when logged in as the user which runs HBase. See the Troubleshooting section “java.io.IOException...(Too many open files)” for some of the problems you may experience if the limit is too low. You may also notice errors such as the following:
2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.io.EOFException
2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-6935524980745310745_1391901
It is recommended to raise the ulimit to at least 10,000, but more likely 10,240, because the value is usually expressed in multiples of 1024. Each ColumnFamily has at least one StoreFile, and possibly more than 6 StoreFiles if the region is under load. The number of open files required depends upon the number of ColumnFamilies and the number of regions. The following is a rough formula for calculating the potential number of open files on a RegionServer.
Example 2.1. Calculate the Potential Number of Open Files
(StoreFiles per ColumnFamily) x (ColumnFamilies per region) x (regions per RegionServer)
For example, assuming that a schema had 3 ColumnFamilies per region with an average of 3 StoreFiles per ColumnFamily, and there are 100 regions per RegionServer, the JVM will open 3 * 3 * 100 = 900 file descriptors, not counting open JAR files, configuration files, and others. Opening a file does not take many resources, and the risk of allowing a user to open too many files is minimal.
Another related setting is the number of processes a user is allowed to run at once. In Linux and Unix, the number of processes is set using the ulimit -u command. This should not be confused with the nproc command, which reports the number of processing units available to a given user. Under load, a ulimit -u value that is too low can cause OutOfMemoryError exceptions. See Jack Levin's major hdfs issues thread on the hbase-users mailing list, from 2011.
Configuring the maximum number of file descriptors and processes for the user who is running the HBase process is an operating system configuration, rather than an HBase configuration. It is also important to be sure that the settings are changed for the user that actually runs HBase. To see which user started HBase, and that user's ulimit configuration, look at the first line of the HBase log for that instance.
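For example, something like the following shows the first line of a RegionServer log; the log path is a placeholder for your own installation:

$ head -n 1 /var/log/hbase/hbase-hadoop-regionserver-node1.log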
ulimit Settings on Ubuntu. To configure ulimit settings on Ubuntu, edit /etc/security/limits.conf, which is a space-delimited file with four columns. Refer to the man page for limits.conf for details about the format of this file. In the following example, the first line sets both soft and hard limits for the number of open files (nofile) to 32768 for the operating system user with the username hadoop. The second line sets the number of processes (nproc) to 32000 for the same user.
hadoop  -       nofile  32768
hadoop  -       nproc   32000
The settings are only applied if the Pluggable Authentication Module (PAM)
environment is directed to use them. To configure PAM to use these limits, be sure that the /etc/pam.d/common-session file contains the following line:
session required pam_limits.so
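After logging out and back in as the hadoop user, you can confirm that the new limits took effect:

$ ulimit -n
$ ulimit -u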
Prior to HBase 0.96, testing for running HBase on Microsoft Windows was limited. Running on Windows nodes is not recommended for production systems.
To run versions of HBase prior to 0.96 on Microsoft Windows, you must install Cygwin and run HBase within the Cygwin environment. This provides support for Linux/Unix commands and scripts. The full details are explained in the Windows Installation guide. Also search our user mailing list to pick up the latest fixes figured out by Windows users.
Post-hbase-0.96.0, HBase runs natively on Windows, with supporting *.cmd scripts bundled.
The following table summarizes the versions of Hadoop supported with each version of HBase. Based on the version of HBase, you should select the most appropriate version of Hadoop. You can use Apache Hadoop, or a vendor's distribution of Hadoop. No distinction is made here. See http://wiki.apache.org/hadoop/Distributions%20and%20Commercial%20Support for information about vendors of Hadoop.
Hadoop 2.x is faster and includes features, such as short-circuit reads, which will help improve your HBase random read profile. Hadoop 2.x also includes important bug fixes that will improve your overall HBase experience. HBase 0.98 deprecates use of Hadoop 1.x, and HBase 1.0 will not support Hadoop 1.x.
Use the following legend to interpret this table:
S = supported and tested
X = not supported
NT = should run, but is not tested enough
Table 2.2. Hadoop version support matrix
[a] Support for Hadoop 1.x is deprecated.
[b] Hadoop 1.x is NOT supported
[c] HBase requires hadoop 1.0.3 at a minimum; there is an issue where we cannot find KerberosUtil compiling against earlier versions of Hadoop.
[d] To get 0.94.x to run on hadoop 2.2.0, you need to change the hadoop 2 and protobuf versions in the pom.xml. Here is a diff with the pom.xml changes:
$ svn diff pom.xml
Index: pom.xml
===================================================================
--- pom.xml     (revision 1545157)
+++ pom.xml     (working copy)
@@ -1034,7 +1034,7 @@
         <slf4j.version>1.4.3</slf4j.version>
         <log4j.version>1.2.16</log4j.version>
         <mockito-all.version>1.8.5</mockito-all.version>
-        <protobuf.version>2.4.0a</protobuf.version>
+        <protobuf.version>2.5.0</protobuf.version>
         <stax-api.version>1.0.1</stax-api.version>
         <thrift.version>0.8.0</thrift.version>
         <zookeeper.version>3.4.5</zookeeper.version>
@@ -2241,7 +2241,7 @@
         </property>
       </activation>
       <properties>
-        <hadoop.version>2.0.0-alpha</hadoop.version>
+        <hadoop.version>2.2.0</hadoop.version>
         <slf4j.version>1.6.1</slf4j.version>
       </properties>
       <dependencies>
The next step is to regenerate the Protobuf files, assuming that Protobuf has been installed:
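As a sketch, regenerating from the HBase source root looks something like the following; the .proto file names are illustrative and the full set depends on your checkout:

$ protoc -Isrc/main/protobuf --java_out=src/main/java src/main/protobuf/hbase.proto
$ protoc -Isrc/main/protobuf --java_out=src/main/java src/main/protobuf/ErrorHandling.proto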
Then build against the hadoop 2 profile by running something like the following command:
$ mvn clean install assembly:single -Dhadoop.profile=2.0 -DskipTests
Because HBase depends on Hadoop, it bundles an instance of the Hadoop jar under its
lib directory. The bundled jar is ONLY for use in standalone mode.
In distributed mode, it is critical that the version of Hadoop that
is out on your cluster match what is under HBase. Replace the hadoop jar found in the
HBase lib directory with the hadoop jar you are running on your cluster to avoid version
mismatch issues. Make sure you replace the jar in HBase everywhere on your cluster. Hadoop
version mismatch issues have various manifestations, but most often everything simply looks hung up.
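As a sketch, the swap might look like the following; the jar version and paths are placeholders for whatever your cluster actually runs:

$ rm /usr/local/hbase/lib/hadoop-core-*.jar
$ cp /usr/local/hadoop/hadoop-core-1.2.1.jar /usr/local/hbase/lib/

Remember to do this on every node, then restart HBase.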
HBase 0.92 and 0.94 versions can work with Hadoop versions 0.20.205, 0.22.x, 1.0.x, and 1.1.x. HBase 0.94 can additionally work with Hadoop 0.23.x and 2.x, but you may have to recompile the code using the specific maven profile (see the top level pom.xml).
As of Apache HBase 0.96.x, Apache Hadoop 1.0.x at least is required. Hadoop 2 is strongly encouraged (faster but also has fixes that help MTTR). We will no longer run properly on older Hadoops such as 0.20.205 or branch-0.20-append. Do not move to Apache HBase 0.96.x if you cannot upgrade your Hadoop.
HBase will lose data unless it is running on an HDFS that has a durable sync implementation. DO NOT use Hadoop 0.20.2, Hadoop 0.20.203.0, or Hadoop 0.20.204.0, which DO NOT have this attribute. Currently only Hadoop versions 0.20.205.x or any release in excess of this version -- this includes hadoop-1.0.0 -- have a working, durable sync. Sync has to be explicitly enabled by setting dfs.support.append equal to true on both the client side -- in hbase-site.xml -- and on the server side in hdfs-site.xml (the sync facility HBase needs is a subset of the append code path).
<property>
  <name>dfs.support.append</name>
  <value>true</value>
</property>
You will have to restart your cluster after making this edit. Ignore the chicken-little comment you'll find in hdfs-default.xml in the description for the dfs.support.append configuration.
Apache HBase will run on any Hadoop 0.20.x that incorporates Hadoop security features as long as you do as suggested above and replace the Hadoop jar that ships with HBase with the secure version. If you want to read more about how to setup Secure HBase, see Section 8.1, “Secure Client Access to Apache HBase”.
An HDFS datanode has an upper bound on the number of files that it will serve
at any one time. Before doing any loading, make sure you have configured
conf/hdfs-site.xml, setting the
dfs.datanode.max.transfer.threads value to at least the following:
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>4096</value>
</property>
Be sure to restart your HDFS after making the above configuration.
Not having this configuration in place makes for strange-looking failures. One manifestation is a complaint about missing blocks. For example:
10/12/08 20:10:31 INFO hdfs.DFSClient: Could not obtain block blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry...
See also Section 16.3.4, “Case Study #4 (max.transfer.threads Config)” and note that this property was previously known as dfs.datanode.max.xcievers (see Hadoop HDFS: Deceived by Xciever).
A useful read for setting config on your Hadoop cluster is Aaron Kimball's Configuration Parameters: What can you just ignore?