Class CommonFSUtils

java.lang.Object
org.apache.hadoop.hbase.util.CommonFSUtils

@Private public final class CommonFSUtils extends Object
Utility methods for interacting with the underlying file system.

Note that setStoragePolicy(FileSystem, Path, String) is tested in TestFSUtils, and pre-commit will run the hbase-server tests if there is a code change in this class. See HBASE-20838 for more details.

  • Nested Class Summary

    Nested Classes
    Modifier and Type
    Class
    Description
    static class 
    Helper exception for those cases where the place where we need to check a stream capability is not where we have the needed context to explain the impact and mitigation of the missing capability.
  • Field Summary

    Fields
    Modifier and Type
    Field
    Description
    static final String
    Full access permissions (starting point for a umask)
    static final String
    Parameter name for HBase WAL directory
    private static final org.slf4j.Logger
     
    static final String
    Parameter to disable stream capability enforcement checks
    private static final Map<org.apache.hadoop.fs.FileSystem,Boolean>
     
  • Constructor Summary

    Constructors
    Modifier
    Constructor
    Description
    private
     
  • Method Summary

    Modifier and Type
    Method
    Description
    static void
    checkShortCircuitReadBufferSize(org.apache.hadoop.conf.Configuration conf)
    Check if the short-circuit read buffer size is set and, if not, set it to the HBase default.
    static org.apache.hadoop.fs.FSDataOutputStream
    create(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, org.apache.hadoop.fs.permission.FsPermission perm, boolean overwrite)
    Create the specified file on the filesystem.
    static boolean
    delete(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, boolean recursive)
    Calls fs.delete() and returns the value returned by fs.delete().
    static boolean
    deleteDirectory(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir)
    Delete if exists.
    static org.apache.hadoop.fs.FileSystem
    getCurrentFileSystem(org.apache.hadoop.conf.Configuration conf)
    Returns the filesystem of the hbase rootdir.
    static long
    getDefaultBlockSize(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path)
    Return the number of bytes that large input files should optimally be split into to minimize I/O time.
    static int
    getDefaultBufferSize(org.apache.hadoop.fs.FileSystem fs)
    Returns the default buffer size to use during writes.
    static short
    getDefaultReplication(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path)
     
    static String
    getDirUri(org.apache.hadoop.conf.Configuration c, org.apache.hadoop.fs.Path p)
    Returns the URI in string format
    static org.apache.hadoop.fs.permission.FsPermission
    getFilePermissions(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.conf.Configuration conf, String permssionConfKey)
    Get the file permissions specified in the configuration, if they are enabled.
    static org.apache.hadoop.fs.Path
    getNamespaceDir(org.apache.hadoop.fs.Path rootdir, String namespace)
    Returns the Path object representing the namespace directory under path rootdir
    static String
    getPath(org.apache.hadoop.fs.Path p)
    Return the 'path' component of a Path.
    static org.apache.hadoop.fs.Path
    getRegionDir(org.apache.hadoop.fs.Path rootdir, TableName tableName, String regionName)
    Returns the Path object representing the region directory under path rootdir
    static org.apache.hadoop.fs.Path
    getRootDir(org.apache.hadoop.conf.Configuration c)
    Get the path for the root data directory
    static org.apache.hadoop.fs.FileSystem
    getRootDirFileSystem(org.apache.hadoop.conf.Configuration c)
     
    static org.apache.hadoop.fs.Path
    getTableDir(org.apache.hadoop.fs.Path rootdir, TableName tableName)
    Returns the Path object representing the table directory under path rootdir
    static TableName
    getTableName(org.apache.hadoop.fs.Path tablePath)
    Returns the TableName object representing the table corresponding to the given table directory path
    static org.apache.hadoop.fs.FileSystem
    getWALFileSystem(org.apache.hadoop.conf.Configuration c)
     
    static org.apache.hadoop.fs.Path
    getWALRegionDir(org.apache.hadoop.conf.Configuration conf, TableName tableName, String encodedRegionName)
    Returns the WAL region directory based on the given table name and region name
    static org.apache.hadoop.fs.Path
    getWALRootDir(org.apache.hadoop.conf.Configuration c)
    Get the path for the root directory for WAL data
    static org.apache.hadoop.fs.Path
    getWALTableDir(org.apache.hadoop.conf.Configuration conf, TableName tableName)
    Returns the Table directory under the WALRootDir for the specified table name
    static org.apache.hadoop.fs.Path
    getWrongWALRegionDir(org.apache.hadoop.conf.Configuration conf, TableName tableName, String encodedRegionName)
    Deprecated.
    For compatibility, will be removed in 4.0.0.
    private static void
    invokeSetStoragePolicy(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, String storagePolicy)
     
    static boolean
    isExists(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path)
    Calls fs.exists().
    static boolean
    isHDFS(org.apache.hadoop.conf.Configuration conf)
    Return true if this is a filesystem whose scheme is 'hdfs'.
    static boolean
    isMatchingTail(org.apache.hadoop.fs.Path pathToSearch, String pathTail)
    Compares the path component of the Path URI; e.g.
    static boolean
    isMatchingTail(org.apache.hadoop.fs.Path pathToSearch, org.apache.hadoop.fs.Path pathTail)
    Compares the path component of the Path URI; e.g.
    static boolean
    isRecoveredEdits(org.apache.hadoop.fs.Path path)
    Checks if the given path is the one with 'recovered.edits' dir.
    static boolean
    isStartingWithPath(org.apache.hadoop.fs.Path rootPath, String path)
    Compares path components.
    private static boolean
    isValidWALRootDir(org.apache.hadoop.fs.Path walDir, org.apache.hadoop.conf.Configuration c)
     
    static List<org.apache.hadoop.fs.LocatedFileStatus>
    listLocatedStatus(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir)
    Calls fs.listFiles() to get FileStatus and BlockLocations together, reducing RPC calls
    static org.apache.hadoop.fs.FileStatus[]
    listStatus(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir)
    Calls fs.listStatus() and treats FileNotFoundException as non-fatal. This accommodates differences between Hadoop versions.
    static org.apache.hadoop.fs.FileStatus[]
    listStatus(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir, org.apache.hadoop.fs.PathFilter filter)
    Calls fs.listStatus() and treats FileNotFoundException as non-fatal. This accommodates differences between Hadoop versions: Hadoop 1 does not throw a FileNotFoundException and returns an empty FileStatus[], while Hadoop 2 throws FileNotFoundException.
    static void
    logFileSystemState(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path root, org.slf4j.Logger log)
    Log the current state of the filesystem from a certain root directory
    private static void
    logFSTree(org.slf4j.Logger log, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path root, String prefix)
    Recursive helper to log the state of the FS
    static String
    removeWALRootPath(org.apache.hadoop.fs.Path path, org.apache.hadoop.conf.Configuration conf)
    Checks for the presence of the WAL log root path (using the provided conf object) in the given path.
    static boolean
    renameAndSetModifyTime(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dest)
     
    static void
    setFsDefault(org.apache.hadoop.conf.Configuration c, String uri)
     
    static void
    setFsDefault(org.apache.hadoop.conf.Configuration c, org.apache.hadoop.fs.Path root)
     
    static void
    setRootDir(org.apache.hadoop.conf.Configuration c, org.apache.hadoop.fs.Path root)
     
    static void
    setStoragePolicy(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, String storagePolicy)
    Sets storage policy for given path.
    (package private) static void
    setStoragePolicy(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, String storagePolicy, boolean throwException)
     
    static void
    setWALRootDir(org.apache.hadoop.conf.Configuration c, org.apache.hadoop.fs.Path root)
     
    static org.apache.hadoop.fs.Path
    validateRootPath(org.apache.hadoop.fs.Path root)
    Verifies root directory path is a valid URI with a scheme

    Methods inherited from class java.lang.Object

    clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
  • Field Details

  • Constructor Details

  • Method Details

    • isStartingWithPath

      public static boolean isStartingWithPath(org.apache.hadoop.fs.Path rootPath, String path)
      Compares path components. Does not consider the scheme; i.e. even if the schemes differ, the function returns true as long as path starts with rootPath.
      Parameters:
      rootPath - value to check for
      path - subject to check
      Returns:
      True if path starts with rootPath
    • isMatchingTail

      public static boolean isMatchingTail(org.apache.hadoop.fs.Path pathToSearch, String pathTail)
      Compares the path component of the Path URI; e.g. given hdfs://a/b/c and /a/b/c, it compares the '/a/b/c' part. Does not consider the scheme; i.e. even if the schemes differ, the two equate as long as the path or a subpath matches.
      Parameters:
      pathToSearch - Path we will be trying to match against.
      pathTail - what to match
      Returns:
      True if pathTail is tail on the path of pathToSearch
    • isMatchingTail

      public static boolean isMatchingTail(org.apache.hadoop.fs.Path pathToSearch, org.apache.hadoop.fs.Path pathTail)
      Compares the path component of the Path URI; e.g. given hdfs://a/b/c and /a/b/c, it compares the '/a/b/c' part. If you passed in hdfs://a/b/c and b/c, it would return true. Does not consider the scheme; i.e. even if the schemes differ, the two equate as long as the path or a subpath matches.
      Parameters:
      pathToSearch - Path we will be trying to match against.
      pathTail - what to match
      Returns:
      True if pathTail is tail on the path of pathToSearch
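      Since CommonFSUtils is an @Private, HBase-internal utility, the following is only an illustrative sketch of how the scheme-insensitive comparisons above behave; the paths are made up:

      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.hbase.util.CommonFSUtils;

      public class PathMatchExample {
        public static void main(String[] args) {
          Path qualified = new Path("hdfs://namenode:8020/hbase/data/default/t1");
          // Scheme and authority are ignored; only path components are compared.
          boolean startsWith = CommonFSUtils.isStartingWithPath(new Path("/hbase"), qualified.toString());
          boolean tailMatches = CommonFSUtils.isMatchingTail(qualified, new Path("default/t1"));
          System.out.println(startsWith);  // expected: true
          System.out.println(tailMatches); // expected: true
        }
      }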
    • deleteDirectory

      public static boolean deleteDirectory(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir) throws IOException
      Delete if exists.
      Parameters:
      fs - filesystem object
      dir - directory to delete
      Returns:
      True if deleted dir
      Throws:
      IOException - e
    • getDefaultBlockSize

      public static long getDefaultBlockSize(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path)
      Return the number of bytes that large input files should optimally be split into to minimize I/O time.
      Parameters:
      fs - filesystem object
      Returns:
      the default block size for the path's filesystem
    • getDefaultReplication

      public static short getDefaultReplication(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path)
    • getDefaultBufferSize

      public static int getDefaultBufferSize(org.apache.hadoop.fs.FileSystem fs)
      Returns the default buffer size to use during writes. The size of the buffer should probably be a multiple of hardware page size (4096 on Intel x86), and it determines how much data is buffered during read and write operations.
      Parameters:
      fs - filesystem object
      Returns:
      default buffer size to use during writes
    • create

      public static org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, org.apache.hadoop.fs.permission.FsPermission perm, boolean overwrite) throws IOException
      Create the specified file on the filesystem. By default, this will:
      1. apply the umask in the configuration (if it is enabled)
      2. use the fs configured buffer size (or 4096 if not set)
      3. use the default replication
      4. use the default block size
      5. not track progress
      Parameters:
      fs - FileSystem on which to write the file
      path - Path to the file to write
      perm - initial permissions
      overwrite - Whether or not the created file should be overwritten.
      Returns:
      output stream to the created file
      Throws:
      IOException - if the file cannot be created
    • getFilePermissions

      public static org.apache.hadoop.fs.permission.FsPermission getFilePermissions(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.conf.Configuration conf, String permssionConfKey)
      Get the file permissions specified in the configuration, if they are enabled.
      Parameters:
      fs - filesystem that the file will be created on.
      conf - configuration to read for determining if permissions are enabled and which to use
      permssionConfKey - property key in the configuration to use when finding the permission
      Returns:
      the permission to use when creating a new file on the fs. If special permissions are not specified in the configuration, then the default permissions on the fs will be returned.
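      A minimal sketch combining getFilePermissions(FileSystem, Configuration, String) with create(FileSystem, Path, FsPermission, boolean); the configuration key "example.file.perms" and the target path are placeholders, not real HBase settings:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FSDataOutputStream;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.fs.permission.FsPermission;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.util.CommonFSUtils;

      public class CreateWithPermissionsExample {
        public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          FileSystem fs = FileSystem.get(conf);
          // "example.file.perms" is a hypothetical key; pass whichever permission key
          // your caller is configured with. Falls back to the fs default permissions.
          FsPermission perm = CommonFSUtils.getFilePermissions(fs, conf, "example.file.perms");
          Path out = new Path("/tmp/common-fs-utils-example");
          try (FSDataOutputStream stream = CommonFSUtils.create(fs, out, perm, true)) {
            stream.writeUTF("hello");
          }
        }
      }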
    • validateRootPath

      public static org.apache.hadoop.fs.Path validateRootPath(org.apache.hadoop.fs.Path root) throws IOException
      Verifies root directory path is a valid URI with a scheme
      Parameters:
      root - root directory path
      Returns:
      Passed root argument.
      Throws:
      IOException - if not a valid URI with a scheme
    • removeWALRootPath

      public static String removeWALRootPath(org.apache.hadoop.fs.Path path, org.apache.hadoop.conf.Configuration conf) throws IOException
      Checks for the presence of the WAL log root path (using the provided conf object) in the given path. If it exists, this method removes it and returns the String representation of remaining relative path.
      Parameters:
      path - must not be null
      conf - must not be null
      Returns:
      String representation of the remaining relative path
      Throws:
      IOException - from underlying filesystem
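      An illustrative sketch; the WAL file name below is made up:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.util.CommonFSUtils;

      public class RemoveWALRootPathExample {
        public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          Path walRoot = CommonFSUtils.getWALRootDir(conf);
          Path someWal = new Path(walRoot, "WALs/server,16020,1/example.wal");
          // Prints the portion of the path relative to the WAL root,
          // e.g. "WALs/server,16020,1/example.wal".
          System.out.println(CommonFSUtils.removeWALRootPath(someWal, conf));
        }
      }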
    • getPath

      public static String getPath(org.apache.hadoop.fs.Path p)
      Return the 'path' component of a Path. In Hadoop, Path is a URI. This method returns the 'path' component of a Path's URI: e.g. if a Path is hdfs://example.org:9000/hbase_trunk/TestTable/compaction.dir, this method returns /hbase_trunk/TestTable/compaction.dir. This method is useful if you want to print out a Path without the qualifying FileSystem instance.
      Parameters:
      p - Filesystem Path whose 'path' component we are to return.
      Returns:
      the 'path' component of the Path's URI
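      For example, a short sketch using the path from the description above:

      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.hbase.util.CommonFSUtils;

      public class GetPathExample {
        public static void main(String[] args) {
          Path p = new Path("hdfs://example.org:9000/hbase_trunk/TestTable/compaction.dir");
          System.out.println(p);                        // hdfs://example.org:9000/hbase_trunk/TestTable/compaction.dir
          System.out.println(CommonFSUtils.getPath(p)); // /hbase_trunk/TestTable/compaction.dir
        }
      }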
    • getRootDir

      public static org.apache.hadoop.fs.Path getRootDir(org.apache.hadoop.conf.Configuration c) throws IOException
      Get the path for the root data directory
      Parameters:
      c - configuration
      Returns:
      Path to hbase root directory from configuration as a qualified Path.
      Throws:
      IOException - e
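      A minimal sketch of resolving the root directory and its filesystem from a configuration (assumes an HBase site configuration is on the classpath):

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.util.CommonFSUtils;

      public class RootDirExample {
        public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          Path rootDir = CommonFSUtils.getRootDir(conf);             // qualified hbase root dir
          FileSystem rootFs = CommonFSUtils.getRootDirFileSystem(conf);
          System.out.println(rootDir + " on " + rootFs.getUri());
        }
      }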
    • setRootDir

      public static void setRootDir(org.apache.hadoop.conf.Configuration c, org.apache.hadoop.fs.Path root)
    • setFsDefault

      public static void setFsDefault(org.apache.hadoop.conf.Configuration c, org.apache.hadoop.fs.Path root)
    • setFsDefault

      public static void setFsDefault(org.apache.hadoop.conf.Configuration c, String uri)
    • getRootDirFileSystem

      public static org.apache.hadoop.fs.FileSystem getRootDirFileSystem(org.apache.hadoop.conf.Configuration c) throws IOException
      Throws:
      IOException
    • getWALRootDir

      public static org.apache.hadoop.fs.Path getWALRootDir(org.apache.hadoop.conf.Configuration c) throws IOException
      Get the path for the root directory for WAL data
      Parameters:
      c - configuration
      Returns:
      Path to the HBase WAL root directory, i.e. "hbase.wal.dir" from the configuration as a qualified Path. Defaults to the HBase root dir.
      Throws:
      IOException - e
    • getDirUri

      public static String getDirUri(org.apache.hadoop.conf.Configuration c, org.apache.hadoop.fs.Path p) throws IOException
      Returns the URI in string format
      Parameters:
      c - configuration
      p - path
      Returns:
      the URI in string format
      Throws:
      IOException
    • setWALRootDir

      public static void setWALRootDir(org.apache.hadoop.conf.Configuration c, org.apache.hadoop.fs.Path root)
    • getWALFileSystem

      public static org.apache.hadoop.fs.FileSystem getWALFileSystem(org.apache.hadoop.conf.Configuration c) throws IOException
      Throws:
      IOException
    • isValidWALRootDir

      private static boolean isValidWALRootDir(org.apache.hadoop.fs.Path walDir, org.apache.hadoop.conf.Configuration c) throws IOException
      Throws:
      IOException
    • getWALRegionDir

      public static org.apache.hadoop.fs.Path getWALRegionDir(org.apache.hadoop.conf.Configuration conf, TableName tableName, String encodedRegionName) throws IOException
      Returns the WAL region directory based on the given table name and region name
      Parameters:
      conf - configuration to determine WALRootDir
      tableName - Table that the region is under
      encodedRegionName - Region name used for creating the final region directory
      Returns:
      the region directory used to store WALs under the WALRootDir
      Throws:
      IOException - if there is an exception determining the WALRootDir
    • getWALTableDir

      public static org.apache.hadoop.fs.Path getWALTableDir(org.apache.hadoop.conf.Configuration conf, TableName tableName) throws IOException
      Returns the Table directory under the WALRootDir for the specified table name
      Parameters:
      conf - configuration used to get the WALRootDir
      tableName - Table to get the directory for
      Returns:
      a path to the WAL table directory for the specified table
      Throws:
      IOException - if there is an exception determining the WALRootDir
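      An illustrative sketch of the WAL directory helpers; the table name and encoded region name are placeholders:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.TableName;
      import org.apache.hadoop.hbase.util.CommonFSUtils;

      public class WalDirsExample {
        public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          TableName table = TableName.valueOf("exampleTable");
          Path walRoot = CommonFSUtils.getWALRootDir(conf);
          Path walTableDir = CommonFSUtils.getWALTableDir(conf, table);
          // "abc123" stands in for a real encoded region name.
          Path walRegionDir = CommonFSUtils.getWALRegionDir(conf, table, "abc123");
          System.out.println(walRoot);
          System.out.println(walTableDir);
          System.out.println(walRegionDir);
        }
      }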
    • getWrongWALRegionDir

      @Deprecated public static org.apache.hadoop.fs.Path getWrongWALRegionDir(org.apache.hadoop.conf.Configuration conf, TableName tableName, String encodedRegionName) throws IOException
      Deprecated.
      For compatibility, will be removed in 4.0.0.
      For backward compatibility with HBASE-20734, where recovered edits were stored in a wrong directory, without BASE_NAMESPACE_DIR. See HBASE-22617 for more details.
      Throws:
      IOException
    • getTableDir

      public static org.apache.hadoop.fs.Path getTableDir(org.apache.hadoop.fs.Path rootdir, TableName tableName)
      Returns the Path object representing the table directory under path rootdir
      Parameters:
      rootdir - qualified path of HBase root directory
      tableName - name of table
      Returns:
      Path for table
    • getRegionDir

      public static org.apache.hadoop.fs.Path getRegionDir(org.apache.hadoop.fs.Path rootdir, TableName tableName, String regionName)
      Returns the Path object representing the region directory under path rootdir
      Parameters:
      rootdir - qualified path of HBase root directory
      tableName - name of table
      regionName - The encoded region name
      Returns:
      Path for region
    • getTableName

      public static TableName getTableName(org.apache.hadoop.fs.Path tablePath)
      Returns the TableName object representing the table corresponding to the given table directory path
      Parameters:
      tablePath - path of table
      Returns:
      the TableName for the table
    • getNamespaceDir

      public static org.apache.hadoop.fs.Path getNamespaceDir(org.apache.hadoop.fs.Path rootdir, String namespace)
      Returns the Path object representing the namespace directory under path rootdir
      Parameters:
      rootdir - qualified path of HBase root directory
      namespace - namespace name
      Returns:
      Path for the namespace directory
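      A sketch of how the layout helpers getNamespaceDir, getTableDir, getRegionDir and getTableName compose; the namespace, table and encoded region names are placeholders:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.TableName;
      import org.apache.hadoop.hbase.util.CommonFSUtils;

      public class LayoutDirsExample {
        public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          Path rootDir = CommonFSUtils.getRootDir(conf);
          TableName table = TableName.valueOf("ns1:exampleTable");
          Path nsDir = CommonFSUtils.getNamespaceDir(rootDir, "ns1");
          Path tableDir = CommonFSUtils.getTableDir(rootDir, table);
          Path regionDir = CommonFSUtils.getRegionDir(rootDir, table, "abc123"); // encoded region name placeholder
          // Round-trips back to the TableName the directory belongs to.
          TableName parsed = CommonFSUtils.getTableName(tableDir);
          System.out.println(nsDir + "\n" + tableDir + "\n" + regionDir + "\n" + parsed);
        }
      }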
    • setStoragePolicy

      public static void setStoragePolicy(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, String storagePolicy)
      Sets storage policy for given path. If the passed path is a directory, we'll set the storage policy for all files created in the future in said directory. Note that this change in storage policy takes place at the FileSystem level; it will persist beyond this RS's lifecycle. If we're running on a version of FileSystem that doesn't support the given storage policy (or storage policies at all), then we'll issue a log message and continue. See http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html
      Parameters:
      fs - We only do anything if it implements a setStoragePolicy method
      path - the Path whose storage policy is to be set
      storagePolicy - Policy to set on path; see hadoop 2.6+ org.apache.hadoop.hdfs.protocol.HdfsConstants for the possible list, e.g. 'COLD', 'WARM', 'HOT', 'ONE_SSD', 'ALL_SSD', 'LAZY_PERSIST'.
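      A minimal sketch; the directory is a placeholder, and on filesystems without storage-policy support the call logs a message and continues, as described above:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.util.CommonFSUtils;

      public class StoragePolicyExample {
        public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          FileSystem fs = FileSystem.get(conf);
          Path hotDir = new Path("/hbase/example-cf-dir"); // placeholder directory
          // Future files created under hotDir inherit the ONE_SSD policy on HDFS.
          CommonFSUtils.setStoragePolicy(fs, hotDir, "ONE_SSD");
        }
      }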
    • setStoragePolicy

      static void setStoragePolicy(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, String storagePolicy, boolean throwException) throws IOException
      Throws:
      IOException
    • invokeSetStoragePolicy

      private static void invokeSetStoragePolicy(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, String storagePolicy) throws IOException
      Throws:
      IOException
    • isHDFS

      public static boolean isHDFS(org.apache.hadoop.conf.Configuration conf) throws IOException
      Return true if this is a filesystem whose scheme is 'hdfs'.
      Throws:
      IOException - from underlying FileSystem
    • isRecoveredEdits

      public static boolean isRecoveredEdits(org.apache.hadoop.fs.Path path)
      Checks if the given path is the one with 'recovered.edits' dir.
      Parameters:
      path - must not be null
      Returns:
      True if the given path is a 'recovered.edits' path
    • getCurrentFileSystem

      public static org.apache.hadoop.fs.FileSystem getCurrentFileSystem(org.apache.hadoop.conf.Configuration conf) throws IOException
      Returns the filesystem of the hbase rootdir.
      Throws:
      IOException - from underlying FileSystem
    • listStatus

      public static org.apache.hadoop.fs.FileStatus[] listStatus(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir, org.apache.hadoop.fs.PathFilter filter) throws IOException
      Calls fs.listStatus() and treats FileNotFoundException as non-fatal. This accommodates differences between Hadoop versions: Hadoop 1 does not throw a FileNotFoundException and returns an empty FileStatus[], while Hadoop 2 throws FileNotFoundException. Where possible, prefer FSUtils#listStatusWithStatusFilter(FileSystem, Path, FileStatusFilter) instead.
      Parameters:
      fs - file system
      dir - directory
      filter - path filter
      Returns:
      null if dir is empty or doesn't exist, otherwise FileStatus array
      Throws:
      IOException
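      An illustrative sketch; because a missing or empty directory yields null rather than an exception, callers check for null instead of catching FileNotFoundException. The directory and the ".hfile" filter are placeholders:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileStatus;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.util.CommonFSUtils;

      public class ListStatusExample {
        public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          FileSystem fs = FileSystem.get(conf);
          Path dir = new Path("/hbase/archive"); // placeholder directory
          // Only keep entries whose name ends with ".hfile" (illustrative PathFilter).
          FileStatus[] statuses = CommonFSUtils.listStatus(fs, dir, p -> p.getName().endsWith(".hfile"));
          if (statuses == null) {
            System.out.println("empty or missing: " + dir);
            return;
          }
          for (FileStatus status : statuses) {
            System.out.println(status.getPath());
          }
        }
      }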
    • listStatus

      public static org.apache.hadoop.fs.FileStatus[] listStatus(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir) throws IOException
      Calls fs.listStatus() and treats FileNotFoundException as non-fatal. This accommodates differences between Hadoop versions.
      Parameters:
      fs - file system
      dir - directory
      Returns:
      null if dir is empty or doesn't exist, otherwise FileStatus array
      Throws:
      IOException
    • listLocatedStatus

      public static List<org.apache.hadoop.fs.LocatedFileStatus> listLocatedStatus(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir) throws IOException
      Calls fs.listFiles() to get FileStatus and BlockLocations together, reducing RPC calls
      Parameters:
      fs - file system
      dir - directory
      Returns:
      LocatedFileStatus list
      Throws:
      IOException
    • delete

      public static boolean delete(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, boolean recursive) throws IOException
      Calls fs.delete() and returns the value returned by fs.delete().
      Parameters:
      fs - must not be null
      path - must not be null
      recursive - delete tree rooted at path
      Returns:
      the value returned by fs.delete()
      Throws:
      IOException - from underlying FileSystem
    • isExists

      public static boolean isExists(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path) throws IOException
      Calls fs.exists() to check whether the specified path exists.
      Parameters:
      fs - must not be null
      path - must not be null
      Returns:
      the value returned by fs.exists()
      Throws:
      IOException - from underlying FileSystem
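      A short sketch combining isExists(FileSystem, Path) and delete(FileSystem, Path, boolean); the path is a placeholder:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.util.CommonFSUtils;

      public class DeleteIfExistsExample {
        public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          FileSystem fs = FileSystem.get(conf);
          Path tmp = new Path("/hbase/.tmp/example"); // placeholder path
          if (CommonFSUtils.isExists(fs, tmp)) {
            boolean deleted = CommonFSUtils.delete(fs, tmp, true); // recursive delete
            System.out.println("deleted=" + deleted);
          }
        }
      }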
    • logFileSystemState

      public static void logFileSystemState(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path root, org.slf4j.Logger log) throws IOException
      Log the current state of the filesystem from a certain root directory
      Parameters:
      fs - filesystem to investigate
      root - root file/directory to start logging from
      log - log to output information
      Throws:
      IOException - if an unexpected exception occurs
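      A sketch of dumping the layout under the HBase root directory to a caller-supplied logger:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.util.CommonFSUtils;
      import org.slf4j.Logger;
      import org.slf4j.LoggerFactory;

      public class LogFsStateExample {
        private static final Logger LOG = LoggerFactory.getLogger(LogFsStateExample.class);

        public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          Path rootDir = CommonFSUtils.getRootDir(conf);
          FileSystem fs = CommonFSUtils.getRootDirFileSystem(conf);
          // Recursively logs every file and directory under rootDir to the supplied logger.
          CommonFSUtils.logFileSystemState(fs, rootDir, LOG);
        }
      }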
    • logFSTree

      private static void logFSTree(org.slf4j.Logger log, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path root, String prefix) throws IOException
      Recursive helper to log the state of the FS
      Throws:
      IOException
    • renameAndSetModifyTime

      public static boolean renameAndSetModifyTime(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dest) throws IOException
      Throws:
      IOException
    • checkShortCircuitReadBufferSize

      public static void checkShortCircuitReadBufferSize(org.apache.hadoop.conf.Configuration conf)
      Check if the short-circuit read buffer size is set and, if not, set it to the HBase default.
      Parameters:
      conf - must not be null
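      Typically called once during setup before handing the configuration to HDFS clients; an illustrative sketch:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.util.CommonFSUtils;

      public class ShortCircuitBufferExample {
        public static void main(String[] args) {
          Configuration conf = HBaseConfiguration.create();
          // Sets the HDFS short-circuit read buffer size to HBase's default if it is unset.
          CommonFSUtils.checkShortCircuitReadBufferSize(conf);
        }
      }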