HBase 0.96.x will run on hadoop 1.x or hadoop 2.x, but when building you must choose which to build against; we cannot make a single HBase binary that runs against both hadoop1 and hadoop2. Since we include the Hadoop we were built against -- so we can do standalone mode -- the set of modules included in the tarball changes depending on whether the hadoop1 or hadoop2 target was chosen. You can tell which HBase you have -- whether it is for hadoop1 or hadoop2 -- by looking at the version: the HBase for hadoop1 will include 'hadoop1' in its version. Ditto for hadoop2.
Maven, our build system, natively will not let you have a single product built against different dependencies. That is understandable. But neither could we convince maven to change the set of included modules and write out the correct poms with the appropriate dependencies even though we have two build targets, one for hadoop1 and another for hadoop2. So, there is a prestep required. This prestep takes the current pom.xmls as input and generates hadoop1 or hadoop2 versions of them. You then reference these generated poms when you build. Read on for examples.
Publishing to maven requires that you sign the artifacts you want to upload. To have the build do this for you, you need to make sure you have a properly configured settings.xml in your local repository (typically ~/.m2/settings.xml). Here is my settings.xml:
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
                              http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <servers>
    <!-- To publish a snapshot of some part of Maven -->
    <server>
      <id>apache.snapshots.https</id>
      <username>YOUR_APACHE_ID</username>
      <password>YOUR_APACHE_PASSWORD</password>
    </server>
    <!-- To publish a website using Maven -->
    <!-- To stage a release of some part of Maven -->
    <server>
      <id>apache.releases.https</id>
      <username>YOUR_APACHE_ID</username>
      <password>YOUR_APACHE_PASSWORD</password>
    </server>
  </servers>
  <profiles>
    <profile>
      <id>apache-release</id>
      <properties>
        <gpg.keyname>YOUR_KEYNAME</gpg.keyname>
        <!-- Keyname is something like 00A5F21E; find it by listing your keys with gpg -->
        <gpg.passphrase>YOUR_KEY_PASSWORD</gpg.passphrase>
      </properties>
    </profile>
  </profiles>
</settings>
You must use maven 3.0.x (Check by running mvn -version).
I'll explain by running through the process. See later in this section for more detail on particular steps.
If you are making a point release (for example to quickly address a critical incompatibility or security problem) off of a release branch instead of a development branch, the tagging instructions are slightly different. I'll prefix those special steps with "Point Release Only".
I would advise that before you go about making a release candidate, you do a practice run by deploying a SNAPSHOT. Also, make sure builds have been passing recently for the branch from where you are going to take your release. You should also have tried recent branch tips out on a cluster under load, running for instance our hbase-it integration test suite for a few hours, to 'burn in' the near-candidate bits.
Point Release Only: At this point you should make an svn copy of the previous release tag (ex: 0.96.1) with the new point release tag (e.g. 0.96.1.1). Any commits with changes for the point release mentioned below should be applied to the new tag.
$ svn copy http://svn.apache.org/repos/asf/hbase/tags/0.96.1 http://svn.apache.org/repos/asf/hbase/tags/0.96.1.1
$ svn checkout http://svn.apache.org/repos/asf/hbase/tags/0.96.1.1
dev-support/make_rc.sh automates most of this. It does everything except the close of the
staging repository up in apache maven, the checking of the produced artifacts to ensure they are 'good' -- e.g.
undoing the produced tarballs, eyeballing them to make sure they look right, then starting them and checking that all is
running properly -- and the signing and pushing of the tarballs to people.apache.org. Familiarize yourself
with all that is involved by reading the below before resorting to this release candidate-making script.
The Hadoop How To Release wiki page informs much of the below and may have more detail on particular sections, so it is worth reviewing.
Update CHANGES.txt with the changes since the last release. Make sure the URL to the JIRA points to the proper location listing fixes for this release. Adjust the version in all the poms appropriately. If you are making a release candidate, you must remove the -SNAPSHOT from all versions. If you are running this recipe to publish a SNAPSHOT, you must keep the -SNAPSHOT suffix on the hbase version. The Versions Maven Plugin can be of use here. To set a version in all the many poms of the hbase multi-module project, do something like this:
$ mvn clean org.codehaus.mojo:versions-maven-plugin:1.3.1:set -DnewVersion=0.96.0
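A quick way to double-check that no pom still carries a -SNAPSHOT version before you cut the candidate (a sketch; run it from the project root):

```shell
# List any pom.xml that still mentions -SNAPSHOT; for a release
# candidate build this should print nothing. The '|| true' keeps the
# exit status clean when grep finds no matches.
grep -rl --include=pom.xml -- -SNAPSHOT . || true
```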
Commit CHANGES.txt and any version changes.
Update the documentation under src/main/docbkx. This usually involves copying the latest from trunk and making version-particular adjustments to suit this release candidate version.
Now, build the src tarball. This tarball is hadoop version independent. It is just the pure src code and documentation, without a hadoop1 or hadoop2 taint. Add the -Prelease profile when building; it checks files for licenses and will fail the build if unlicensed files are present.
$ MAVEN_OPTS="-Xmx2g" mvn clean install -DskipTests assembly:single -Dassembly.file=hbase-assembly/src/main/assembly/src.xml -Prelease
Undo the tarball and make sure it looks good. A good test for the src tarball being 'complete' is to see if you can build new tarballs from this source bundle. For example:
$ tar xzf hbase-0.96.0-src.tar.gz
$ cd hbase-0.96.0
$ bash ./dev-support/generate-hadoopX-poms.sh 0.96.0 0.96.0-hadoop1-SNAPSHOT
$ bash ./dev-support/generate-hadoopX-poms.sh 0.96.0 0.96.0-hadoop2-SNAPSHOT
$ export MAVEN=/home/stack/bin/mvn/bin/mvn
$ MAVEN_OPTS="-Xmx3g" $MAVEN -f pom.xml.hadoop1 clean install -DskipTests javadoc:aggregate site assembly:single -Prelease
# Check the produced bin tarball is good -- run it, eyeball it, etc.
$ MAVEN_OPTS="-Xmx3g" $MAVEN -f pom.xml.hadoop2 clean install -DskipTests javadoc:aggregate site assembly:single -Prelease
# Check the produced bin tarball is good -- run it, eyeball it, etc.
If the source tarball is good, save it off to a version directory, i.e. a directory somewhere where you are collecting all of the tarballs you will publish as part of the release candidate. For example, if we were building an hbase-0.96.0 release candidate, we might call the directory 0.96.0RC0. Later we will publish this directory as our release candidate up on people.apache.org/~you.
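For instance (a sketch; the directory and tarball names here are illustrative):

```shell
# Create the 'version directory' and move the source tarball (if present)
# into it; the names are illustrative.
RC_DIR=0.96.0RC0
mkdir -p "$RC_DIR"
if [ -f hbase-0.96.0-src.tar.gz ]; then
  mv hbase-0.96.0-src.tar.gz "$RC_DIR"/
fi
```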
Now we are into the making of the hadoop1 and hadoop2 specific binary builds. Let's do hadoop1 first. First generate the hadoop1 poms.
We cannot use maven to publish what are in essence two hbase artifacts of the same version, where one is for hadoop1 and the other for hadoop2. So, we generate hadoop1- and hadoop2-particular poms from the checked-in pom using a dev-tool script, and we run two builds: one for the hadoop1 artifacts and one for the hadoop2 artifacts.
Check the generate-hadoopX-poms.sh script usage for what it expects by way of arguments.
You will find it in the
dev-support subdirectory. In the below, we generate hadoop1 poms with a version of
0.96.0-hadoop1 (the script will look for a version of
0.96.0 in the current pom.xml).
$ ./dev-support/generate-hadoopX-poms.sh 0.96.0 0.96.0-hadoop1
The script will work silently if all goes well. It will drop a
pom.xml.hadoop1 beside all
pom.xmls in all modules.
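To confirm the script visited every module, you can compare counts (a sketch, run from the project root):

```shell
# Every module's pom.xml should now have a pom.xml.hadoop1 beside it,
# so these two counts should be equal.
find . -name pom.xml | wc -l
find . -name pom.xml.hadoop1 | wc -l
```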
Now build the hadoop1 tarball. Note how we reference the new pom.xml.hadoop1 with the -f option. We also add the -Prelease profile when building; it checks files for licenses and will fail the build if unlicensed files are present.
Do it in two steps. First install into the local repository and then generate the documentation and assemble the tarball.
(Otherwise the build complains that hbase modules are not in the maven repo when we try to do it all in one go, especially on a fresh repo.)
It seems that you need the install goal in both steps.
$ MAVEN_OPTS="-Xmx3g" mvn -f pom.xml.hadoop1 clean install -DskipTests -Prelease
$ MAVEN_OPTS="-Xmx3g" mvn -f pom.xml.hadoop1 install -DskipTests site assembly:single -Prelease
Undo the generated tarball and check it out. Look at the doc and see if it runs, etc. Is the set of modules appropriate? E.g. do we have an hbase-hadoop2-compat in the hadoop1 tarball? If good, copy the tarball to the above-mentioned version directory.
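One way to script that module check (a sketch; the helper function name and the example tarball name are mine, not part of the build):

```shell
# Report whether a given module name appears anywhere in a tarball's
# file listing.
has_module() {
  if tar tzf "$1" | grep -q "$2"; then echo present; else echo absent; fi
}
# e.g.: has_module hbase-0.96.0-hadoop1-bin.tar.gz hbase-hadoop2-compat
```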
Point Release Only: The following step that creates a new tag can be skipped since you've already created the point release tag.
I'll tag the release at this point since it's looking good. If we find an issue later, we can delete the tag and start over. The release needs to be tagged when we do the next step.
Now deploy hadoop1 hbase to mvn. Do the mvn deploy and tgz for a particular version all together in one go; otherwise, if you flip between hadoop1 and hadoop2 builds, you might mal-publish poms and hbase-default.xml's (the version interpolations won't match).
This time we use the apache-release profile instead of just the release profile when doing mvn deploy; it will invoke the apache pom referenced by our poms. It will also sign your artifacts published to mvn as long as the settings.xml in your local repository is configured correctly (your settings.xml adds your gpg password property to the apache profile).
$ MAVEN_OPTS="-Xmx3g" mvn -f pom.xml.hadoop1 deploy -DskipTests -Papache-release
The last command above copies all artifacts for hadoop1 up to a temporary staging apache mvn repo in an 'open' state. We'll need to do more work on these maven artifacts to make them generally available, but before we do that, let's get the hadoop2 build to the same stage as this hadoop1 build.
Let's do the hadoop2 artifacts (read the above hadoop1 section closely before coming here because we don't repeat the explanation in the below).
# Generate the hadoop2 poms.
$ ./dev-support/generate-hadoopX-poms.sh 0.96.0 0.96.0-hadoop2
# Install the hbase hadoop2 jars into the local repo then build the doc and tarball.
$ MAVEN_OPTS="-Xmx3g" mvn -f pom.xml.hadoop2 clean install -DskipTests -Prelease
$ MAVEN_OPTS="-Xmx3g" mvn -f pom.xml.hadoop2 install -DskipTests site assembly:single -Prelease
# Undo the tgz and check it out. If good, copy the tarball to your 'version directory'. Now deploy to mvn.
$ MAVEN_OPTS="-Xmx3g" mvn -f pom.xml.hadoop2 deploy -DskipTests -Papache-release
Now let's get back to what is up in maven. We should now have two sets of artifacts up in the apache maven staging area, both in the 'open' state (they may both be under the one staging repository if they were pushed to maven around the same time). While in this 'open' state you can check out what you've published to make sure all is good. To do this, log in at repository.apache.org using your apache id. Find your artifacts in the staging repository. Browse the content. Make sure all artifacts made it up and that the poms look generally good. If it checks out, 'close' the repo. This will make the artifacts publicly available. You will receive an email with the URL of the temporary staging repository to give out for others to use trying out this new release candidate. Include it in the email that announces the release candidate. Folks will need to add this repo URL to their local poms or to their local settings.xml file to pull the published release candidate artifacts. If the published artifacts are incomplete or borked, just delete the 'open' staged artifacts.
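For example, a downstream project can point at the temporary staging repository with a pom fragment like the following (the repository id is arbitrary and the URL is illustrative; use the actual URL from the email):

```xml
<repositories>
  <repository>
    <id>hbase-rc-staging</id>
    <!-- Substitute the staging repository URL from the 'close' notification email. -->
    <url>https://repository.apache.org/content/repositories/orgapachehbase-NNNN/</url>
  </repository>
</repositories>
```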
See the hbase-downstreamer test for a simple example of a project that is downstream of hbase and depends on it. Check it out and run its simple test to make sure the maven hbase-hadoop1 and hbase-hadoop2 artifacts are properly deployed to the maven repository. Be sure to edit the pom to point at the proper staging repo. Make sure you are pulling from the repo when tests run and that you are not getting the artifacts from your local repo (pass -U or delete your local repo content and check that maven is pulling from the remote staging repo).
See Publishing Maven Artifacts for some pointers on this maven staging process.
We no longer publish using the maven release plugin. Instead we do mvn deploy. It seems to give us a backdoor to maven release publishing. If there is no -SNAPSHOT on the version string, then we are 'deployed' to the apache maven repository staging directory, from which we can publish URLs for candidates and later, if they pass, publish as a release (if there is a -SNAPSHOT on the version string, deploy will put the artifacts up into the apache snapshot repos).
If the hbase version ends in -SNAPSHOT, the artifacts go elsewhere. They are put into the apache snapshots repository directly and are immediately available. When making a SNAPSHOT release, this is what you want to happen.
At this stage we have three tarballs in our 'version directory' and two sets of artifacts up in maven's staging area in the 'closed' state, publicly available in a temporary staging repository whose URL you should have gotten in an email.
The above-mentioned script, make_rc.sh, does all of the above for you minus the check of the artifacts built,
the closing of the staging repository up in maven, and the tagging of the release. If you run the script, do your checks at this
stage, verifying the src and bin tarballs and checking what is up in staging using the hbase-downstreamer project. Tag before you start
the build. You can always delete the tag if the build goes haywire.
If all checks out, next put the version directory up on people.apache.org. You will need to sign and fingerprint them before you push them up. In the version directory do this:
$ for i in *.tar.gz; do echo $i; gpg --print-mds $i > $i.mds ; done
$ for i in *.tar.gz; do echo $i; gpg --armor --output $i.asc --detach-sig $i ; done
$ cd ..
# Presuming our 'version directory' is named 0.96.0RC0, now copy it up to people.apache.org.
$ rsync -av 0.96.0RC0 people.apache.org:public_html
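Before rsyncing, it can be worth a quick scripted sanity check that every tarball in the version directory has its signature and digest companions (a sketch; the loop only inspects filenames):

```shell
# Flag any tarball that is missing its .asc signature or .mds digests.
for i in *.tar.gz; do
  [ -e "$i" ] || continue   # nothing to check if there are no tarballs here
  [ -f "$i.asc" ] || echo "missing signature for $i"
  [ -f "$i.mds" ] || echo "missing digests for $i"
done
```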
Make sure the people.apache.org directory is showing and that the mvn repo urls are good. Announce the release candidate on the mailing list and call a vote.
Make sure your
settings.xml is set up properly (see above for how).
Make sure the hbase version includes -SNAPSHOT as a suffix. Here is how I published SNAPSHOTs of a checkout that had an hbase version of 0.96.0 in its poms.
First we generated the hadoop1 poms with a version that has a -SNAPSHOT suffix.
We then installed the build into the local repository and deployed it to apache. See the output for the location up in apache to which the snapshot is copied. Notice how we add the -Prelease profile when we install locally -- to find files that are without a proper license -- and then the apache-release profile to deploy to the apache maven repository.
$ ./dev-support/generate-hadoopX-poms.sh 0.96.0 0.96.0-hadoop1-SNAPSHOT
$ MAVEN_OPTS="-Xmx3g" mvn -f pom.xml.hadoop1 clean install -DskipTests javadoc:aggregate site assembly:single -Prelease
$ MAVEN_OPTS="-Xmx3g" mvn -f pom.xml.hadoop1 -DskipTests deploy -Papache-release
Next, do the same to publish the hadoop2 artifacts.
$ ./dev-support/generate-hadoopX-poms.sh 0.96.0 0.96.0-hadoop2-SNAPSHOT
$ MAVEN_OPTS="-Xmx3g" mvn -f pom.xml.hadoop2 clean install -DskipTests javadoc:aggregate site assembly:single -Prelease
$ MAVEN_OPTS="-Xmx3g" mvn -f pom.xml.hadoop2 deploy -DskipTests -Papache-release
The make_rc.sh script mentioned above (see Section 16.4.1, “Making a Release Candidate”) can help you publish SNAPSHOTs.
Make sure your hbase.version has a
-SNAPSHOT suffix and then run
the script. It will put a snapshot up into the apache snapshot repository for you.