This chapter is the Not-So-Quick start guide to Apache HBase configuration. It goes over system requirements, Hadoop setup, the different Apache HBase run modes, and the various configurations in HBase. Please read this chapter carefully. At a minimum, ensure that all requirements in Section 2.1, "Basic Prerequisites" have been satisfied. Failure to do so will cause you (and us) grief debugging strange errors and/or data loss.
Apache HBase uses the same configuration system as Apache Hadoop. To configure a deploy, edit a file of environment variables in conf/hbase-env.sh -- this configuration is used mostly by the launcher shell scripts getting the cluster off the ground -- and then add configuration to an XML file, conf/hbase-site.xml, to do things like override HBase defaults, tell HBase what Filesystem to use, and the location of the ZooKeeper ensemble.
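For example, a minimal conf/hbase-site.xml for a distributed deploy might look like the following sketch; the namenode host, port, and ZooKeeper hostnames are placeholders, so substitute your own:

<configuration>
  <!-- Where HBase writes its data; point this at your HDFS. -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://namenode.example.org:8020/hbase</value>
  </property>
  <!-- Run in distributed mode rather than standalone. -->
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <!-- The ZooKeeper ensemble HBase should use. -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1.example.org,zk2.example.org,zk3.example.org</value>
  </property>
</configuration>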
When running in distributed mode, after you make an edit to an HBase configuration file, make sure you copy the content of the conf directory to all nodes of the cluster. HBase will not do this for you. Use rsync. For most configurations, a restart is needed for servers to pick up changes (caveat: dynamic configuration, to be described later below).
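A minimal rsync sketch follows; the hosts file and remote HBase path are assumptions, so adjust them to your layout:

# Push the local conf/ directory to every node in the cluster.
# cluster-hosts.txt (one hostname per line) and /opt/hbase are placeholders.
for host in $(cat cluster-hosts.txt); do
  rsync -az conf/ ${host}:/opt/hbase/conf/
done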
This section lists required services and some required system configuration.
Just like Hadoop, HBase requires at least Java 6 from Oracle.
ssh must be installed and sshd must be running to use Hadoop's scripts to manage remote Hadoop and HBase daemons. You must be able to ssh to all nodes, including your local node, using passwordless login (Google "ssh passwordless login"). If on Mac OS X, see the section SSH: Setting up Remote Desktop and Enabling Self-Login on the Hadoop wiki.
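If you have never set up passwordless login, the usual recipe looks something like the following; the user and host names are examples only:

# Generate a key pair with an empty passphrase (skip if you have one already).
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
# Copy the public key to each node, including the local one.
ssh-copy-id hadoop@node1.example.org
# Verify: this should log in without prompting for a password.
ssh node1.example.org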
HBase uses the local hostname to self-report its IP address. In versions of HBase previous to 0.92.0, both forward and reverse DNS resolution must work.
If your machine has multiple interfaces, HBase will use the interface that the primary hostname resolves to.
If this is insufficient, you can set hbase.regionserver.dns.interface to indicate the primary interface. This only works if your cluster configuration is consistent and every host has the same network interface configuration. Another alternative is setting hbase.regionserver.dns.nameserver to choose a different nameserver than the system-wide default.
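For a multi-homed host, the relevant hbase-site.xml entries might look like this sketch; the interface and nameserver values are examples only:

<!-- Pin DNS self-lookup to one interface and an explicit nameserver. -->
<property>
  <name>hbase.regionserver.dns.interface</name>
  <value>eth0</value>
</property>
<property>
  <name>hbase.regionserver.dns.nameserver</name>
  <value>192.168.0.1</value>
</property>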
Previous to hbase-0.96.0, HBase expects the loopback IP address to be 127.0.0.1. See the section "Loopback IP".
The clocks on cluster members should be in basic alignment. Some skew is tolerable, but wild skew could generate odd behaviors. Run NTP on your cluster, or an equivalent.
If you are having problems querying data, or "weird" cluster operations, check system time!
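A quick way to eyeball skew across the cluster, assuming passwordless ssh is in place (the host names and NTP pool server are examples):

# Query an NTP server for this node's offset without setting the clock.
ntpdate -q pool.ntp.org
# Print each node's idea of the current time side by side.
for host in node1 node2 node3; do echo -n "${host}: "; ssh ${host} date; done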
Apache HBase is a database. It uses a lot of files all at the same time. The default ulimit -n -- i.e. user file limit -- of 1024 on most *nix systems is insufficient (on Mac OS X it is 256). Any significant amount of loading will lead you to the section "java.io.IOException...(Too many open files)". You may also notice errors such as...
2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.io.EOFException
2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-6935524980745310745_1391901
Do yourself a favor and change the upper bound on the number of file descriptors. Set it to north of 10k. The math runs roughly as follows: per ColumnFamily there is at least one StoreFile and possibly up to 5 or 6 if the region is under load. Multiply the average number of StoreFiles per ColumnFamily times the number of regions per RegionServer. For example, assuming that a schema had 3 ColumnFamilies per region with an average of 3 StoreFiles per ColumnFamily, and there are 100 regions per RegionServer, the JVM will open 3 * 3 * 100 = 900 file descriptors (not counting open jar files, config files, etc.)
To be clear, upping the file descriptors and nproc for the user who is running the HBase process is an operating system configuration, not an HBase configuration. Also, a common mistake is that administrators will up the file descriptors for a particular user but for whatever reason, HBase will be running as someone else. HBase prints the ulimit it is seeing as the first line in its logs. Ensure it is correct.
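To see what a shell on the box will hand to a newly launched daemon:

ulimit -n   # open file descriptors
ulimit -u   # max user processes (nproc)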
If you are on Ubuntu you will need to make the following changes:
In the file /etc/security/limits.conf add a line like:

hadoop  -  nofile  32768

Replace hadoop with whatever user is running Hadoop and HBase. If you have separate users, you will need 2 entries, one for each user. In the same file set nproc hard and soft limits. For example:

hadoop soft/hard nproc 32000
In the file /etc/pam.d/common-session add the following as the last line in the file:

session required pam_limits.so

Otherwise the changes in /etc/security/limits.conf won't be applied.
Don't forget to log out and back in again for the changes to take effect!
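After logging back in, a quick check that the new limits took effect (run as the user that launches HBase):

ulimit -Sn   # soft open-files limit
ulimit -Hn   # hard open-files limit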
Previous to hbase-0.96.0, Apache HBase was little tested running on Windows. Running a production install of HBase on top of Windows is not recommended.
If you are running HBase on Windows pre-hbase-0.96.0, you must install Cygwin to have a *nix-like environment for the shell scripts. The full details are explained in the Windows Installation guide. Also search our user mailing list to pick up the latest fixes figured out by Windows users.
Post-hbase-0.96.0, HBase runs natively on Windows with supporting *.cmd scripts bundled.
The below table shows some information about what versions of Hadoop are supported by various HBase versions. Based on the version of HBase, you should select the most appropriate version of Hadoop. We are not in the Hadoop distro selection business. You can use Hadoop distributions from Apache, or learn about vendor distributions of Hadoop at http://wiki.apache.org/hadoop/Distributions%20and%20Commercial%20Support
Hadoop 2.x is faster, with more features such as short-circuit reads, which will help improve your HBase random read profile, as well as important bug fixes that will improve your overall HBase experience. You should run Hadoop 2.x rather than Hadoop 1.x if you can.
Table 2.1. Hadoop version support matrix
Key: S = supported and tested; X = not supported; NT = should run, but not tested enough.

[a] HBase requires Hadoop 1.0.3 at a minimum; there is an issue where we cannot find KerberosUtil compiling against earlier versions of Hadoop.

[b] To get 0.94.x to run on Hadoop 2.2.0, you need to change the hadoop 2 and protobuf versions in the pom.xml and then build against the hadoop 2 profile:

$ mvn clean install assembly:single -Dhadoop.profile=2.0 -DskipTests

Here is a diff with the pom.xml changes:

$ svn diff pom.xml
Index: pom.xml
===================================================================
--- pom.xml     (revision 1545157)
+++ pom.xml     (working copy)
@@ -1034,7 +1034,7 @@
     <slf4j.version>1.4.3</slf4j.version>
     <log4j.version>1.2.16</log4j.version>
     <mockito-all.version>1.8.5</mockito-all.version>
-    <protobuf.version>2.4.0a</protobuf.version>
+    <protobuf.version>2.5.0</protobuf.version>
     <stax-api.version>1.0.1</stax-api.version>
     <thrift.version>0.8.0</thrift.version>
     <zookeeper.version>3.4.5</zookeeper.version>
@@ -2241,7 +2241,7 @@
       </property>
     </activation>
     <properties>
-      <hadoop.version>2.0.0-alpha</hadoop.version>
+      <hadoop.version>2.2.0</hadoop.version>
       <slf4j.version>1.6.1</slf4j.version>
     </properties>
     <dependencies>
Because HBase depends on Hadoop, it bundles an instance of the Hadoop jar under its lib directory. The bundled jar is ONLY for use in standalone mode. In distributed mode, it is critical that the version of Hadoop that is out on your cluster match what is under HBase. Replace the hadoop jar found in the HBase lib directory with the hadoop jar you are running on your cluster to avoid version mismatch issues. Make sure you replace the jar in HBase everywhere on your cluster. Hadoop version mismatch issues have various manifestations, but often it all just looks like the cluster is hung up.
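A sketch of the jar swap; the jar names, version, and paths below are illustrative only, so match the exact Hadoop build running on your cluster:

# Remove the Hadoop jar bundled with HBase and drop in your cluster's own.
# $HBASE_HOME, $HADOOP_HOME, and the version are placeholders.
rm $HBASE_HOME/lib/hadoop-core-*.jar
cp $HADOOP_HOME/hadoop-core-1.0.4.jar $HBASE_HOME/lib/
# Repeat on every node that runs an HBase daemon.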
HBase 0.92 and 0.94 versions can work with Hadoop versions 0.20.205, 0.22.x, 1.0.x, and 1.1.x. HBase-0.94 can additionally work with Hadoop-0.23.x and 2.x, but you may have to recompile the code using the specific maven profile (see the top-level pom.xml).
As of Apache HBase 0.96.x, Apache Hadoop 1.0.x at least is required. Hadoop 2 is strongly encouraged (faster but also has fixes that help MTTR). We will no longer run properly on older Hadoops such as 0.20.205 or branch-0.20-append. Do not move to Apache HBase 0.96.x if you cannot upgrade your Hadoop.
HBase will lose data unless it is running on an HDFS that has a durable sync implementation. DO NOT use Hadoop 0.20.2, Hadoop 0.20.203.0, or Hadoop 0.20.204.0, which DO NOT have this attribute. Currently only Hadoop versions 0.20.205.x or any release in excess of this version -- this includes hadoop-1.0.0 -- have a working, durable sync. Sync has to be explicitly enabled by setting dfs.support.append to true on both the client side -- in hbase-site.xml -- and on the server side in hdfs-site.xml (the sync facility HBase needs is a subset of the append code path):

<property>
  <name>dfs.support.append</name>
  <value>true</value>
</property>
You will have to restart your cluster after making this edit. Ignore the chicken-little comment you'll find in hdfs-default.xml in the description for the dfs.support.append configuration option.
Apache HBase will run on any Hadoop 0.20.x that incorporates Hadoop security features as long as you do as suggested above and replace the Hadoop jar that ships with HBase with the secure version. If you want to read more about how to set up Secure HBase, see Section 8.1, "Secure Client Access to Apache HBase".
A Hadoop HDFS datanode has an upper bound on the number of files that it will serve at any one time. The upper bound parameter is called xcievers (yes, this is misspelled). Again, before doing any loading, make sure you have configured Hadoop's conf/hdfs-site.xml, setting the xcievers value to at least the following:

<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>
Be sure to restart your HDFS after making the above configuration.
Not having this configuration in place makes for strange-looking failures. Eventually you'll see a complaint in the datanode logs about the xcievers limit being exceeded, but on the run up to this, one manifestation is complaints about missing blocks. For example:
10/12/08 20:10:31 INFO hdfs.DFSClient: Could not obtain block
blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node:
java.io.IOException: No live nodes contain current block. Will get new
block locations from namenode and retry...
 Be careful editing XML. Make sure you close all elements. Run your file through xmllint or similar to ensure well-formedness of your document after an edit session.
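For example:

xmllint --noout conf/hbase-site.xml   # silent when the file is well-formed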
The need to up system limits is not peculiar to Apache HBase; other databases require it too. See, for example, the section Setting Shell Limits for the Oracle User in Short Guide to install Oracle 10 on Linux.
A useful read on setting configuration on your Hadoop cluster is Aaron Kimball's Configuration Parameters: What can you just ignore?
The Cloudera blog post An update on Apache Hadoop 1.0 by Charles Zedlewski has a nice exposition on how all the Hadoop versions relate. It's worth checking out if you are having trouble making sense of the Hadoop version morass.