Package org.apache.hadoop.hbase.fs
Class HFileSystem
java.lang.Object
  org.apache.hadoop.conf.Configured
    org.apache.hadoop.fs.FileSystem
      org.apache.hadoop.fs.FilterFileSystem
        org.apache.hadoop.hbase.fs.HFileSystem
- All Implemented Interfaces:
Closeable, AutoCloseable, org.apache.hadoop.conf.Configurable, org.apache.hadoop.fs.BulkDeleteSource, org.apache.hadoop.fs.PathCapabilities, org.apache.hadoop.security.token.DelegationTokenIssuer
An encapsulation for the FileSystem object that HBase uses to access data. This class allows the flexibility of using separate filesystem objects for reading and writing HFiles and WALs.
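The split described above, one filesystem for ordinary (checksummed) I/O and one that skips filesystem-level checksum verification for HFile block reads, can be sketched free of Hadoop dependencies as follows. All names here are hypothetical stand-ins, not the real HBase API:

```java
// Dependency-free sketch of the read/write split: HBase reads HFile blocks
// through a filesystem that skips checksum verification, because HBase
// validates HFile block checksums itself; writes go through the ordinary fs.
class SplitFs {
    interface SimpleFs { boolean verifiesChecksums(); }

    private final SimpleFs backingFs;     // ordinary I/O (writes, metadata)
    private final SimpleFs noChecksumFs;  // HFile block reads

    SplitFs(SimpleFs backing, SimpleFs noChecksum) {
        this.backingFs = backing;
        this.noChecksumFs = noChecksum;
    }

    SimpleFs getBackingFs()    { return backingFs; }
    SimpleFs getNoChecksumFs() { return noChecksumFs; }
}
```

The real class plays the same two roles via the `getBackingFs()` and `getNoChecksumFs()` accessors documented below.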
-
Nested Class Summary
Modifier and Type    Class    Description
(package private) static interface    HFileSystem.ReorderBlocks
    Interface to implement to add a specific reordering logic in hdfs.
(package private) static class
    We're putting at lowest priority the wal files blocks that are on the same datanode as the original regionserver which created these files.
Nested classes/interfaces inherited from class org.apache.hadoop.fs.FileSystem
org.apache.hadoop.fs.FileSystem.DirectoryEntries, org.apache.hadoop.fs.FileSystem.DirListingIterator<T extends org.apache.hadoop.fs.FileStatus>, org.apache.hadoop.fs.FileSystem.Statistics
-
Field Summary
Modifier and Type    Field
static final org.slf4j.Logger    LOG
private final org.apache.hadoop.fs.FileSystem    noChecksumFs
private static byte    unspecifiedStoragePolicyId
private final boolean    useHBaseChecksum
Fields inherited from class org.apache.hadoop.fs.FilterFileSystem
fs, swapScheme
Fields inherited from class org.apache.hadoop.fs.FileSystem
DEFAULT_FS, FS_DEFAULT_NAME_KEY, SHUTDOWN_HOOK_PRIORITY, statistics, TRASH_PREFIX, USER_HOME_PREFIX
Fields inherited from interface org.apache.hadoop.security.token.DelegationTokenIssuer
TOKEN_LOG
-
Constructor Summary
Constructor    Description
HFileSystem(org.apache.hadoop.conf.Configuration conf, boolean useHBaseChecksum)
    Create a FileSystem object for HBase regionservers.
HFileSystem(org.apache.hadoop.fs.FileSystem fs)
    Wrap a FileSystem object within a HFileSystem.
-
Method Summary
Modifier and Type    Method    Description
static boolean    addLocationsOrderInterceptor(org.apache.hadoop.conf.Configuration conf)
(package private) static boolean    addLocationsOrderInterceptor(org.apache.hadoop.conf.Configuration conf, HFileSystem.ReorderBlocks lrb)
    Add an interceptor on the calls to the namenode#getBlockLocations from the DFSClient linked to this FileSystem.
void    close()
    Close this filesystem object.
org.apache.hadoop.fs.FSDataOutputStream    createNonRecursive(org.apache.hadoop.fs.Path f, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress)
    The org.apache.hadoop.fs.FilterFileSystem does not yet support createNonRecursive.
private static org.apache.hadoop.hdfs.protocol.ClientProtocol    createReorderingProxy(org.apache.hadoop.hdfs.protocol.ClientProtocol cp, HFileSystem.ReorderBlocks lrb, org.apache.hadoop.conf.Configuration conf)
static org.apache.hadoop.fs.FileSystem    get(org.apache.hadoop.conf.Configuration conf)
    Create a new HFileSystem object, similar to FileSystem.get().
org.apache.hadoop.fs.FileSystem    getBackingFs()
    Returns the underlying filesystem.
org.apache.hadoop.fs.FileSystem    getNoChecksumFs()
    Returns the filesystem that is specially setup for doing reads from storage.
private String    getStoragePolicyForOldHDFSVersion(org.apache.hadoop.fs.Path path)
    Before Hadoop 2.8.0, there's no getStoragePolicy method for the FileSystem interface, and we need to stay compatible with it.
getStoragePolicyName(org.apache.hadoop.fs.Path path)
    Get the storage policy of the source path (directory/file).
private org.apache.hadoop.fs.FileSystem    maybeWrapFileSystem(org.apache.hadoop.fs.FileSystem base, org.apache.hadoop.conf.Configuration conf)
    Returns an instance of Filesystem wrapped into the class specified in the hbase.fs.wrapper property, if one is set in the configuration; otherwise returns the unmodified FS instance passed in as an argument.
private static org.apache.hadoop.fs.FileSystem    newInstanceFileSystem(org.apache.hadoop.conf.Configuration conf)
    Returns a brand new instance of the FileSystem.
void    setStoragePolicy(org.apache.hadoop.fs.Path path, String policyName)
    Set the source path (directory/file) to the specified storage policy.
boolean    useHBaseChecksum()
    Are we verifying checksums in HBase?
Methods inherited from class org.apache.hadoop.fs.FilterFileSystem
access, append, appendFile, canonicalizeUri, checkPath, completeLocalOutput, concat, copyFromLocalFile, copyFromLocalFile, copyFromLocalFile, copyToLocalFile, create, create, createFile, createNonRecursive, createPathHandle, createSnapshot, createSymlink, delete, deleteSnapshot, getAclStatus, getAllStoragePolicies, getCanonicalUri, getChildFileSystems, getConf, getDefaultBlockSize, getDefaultBlockSize, getDefaultReplication, getDefaultReplication, getEnclosingRoot, getFileBlockLocations, getFileChecksum, getFileChecksum, getFileLinkStatus, getFileStatus, getHomeDirectory, getInitialWorkingDirectory, getLinkTarget, getRawFileSystem, getServerDefaults, getServerDefaults, getStatus, getStoragePolicy, getTrashRoot, getTrashRoots, getUri, getUsed, getUsed, getWorkingDirectory, getXAttr, getXAttrs, getXAttrs, hasPathCapability, initialize, listCorruptFileBlocks, listLocatedStatus, listLocatedStatus, listStatus, listStatusIterator, listXAttrs, makeQualified, mkdirs, mkdirs, modifyAclEntries, msync, open, open, openFile, openFile, openFileWithOptions, openFileWithOptions, primitiveCreate, primitiveMkdir, removeAcl, removeAclEntries, removeDefaultAcl, removeXAttr, rename, rename, renameSnapshot, resolveLink, resolvePath, satisfyStoragePolicy, setAcl, setOwner, setPermission, setReplication, setTimes, setVerifyChecksum, setWorkingDirectory, setWriteChecksum, setXAttr, setXAttr, startLocalOutput, supportsSymlinks, truncate, unsetStoragePolicy
Methods inherited from class org.apache.hadoop.fs.FileSystem
append, append, append, append, areSymlinksEnabled, cancelDeleteOnExit, clearStatistics, closeAll, closeAllForUGI, copyFromLocalFile, copyToLocalFile, copyToLocalFile, create, create, create, create, create, create, create, create, create, create, create, createBulkDelete, createDataInputStreamBuilder, createDataInputStreamBuilder, createDataOutputStreamBuilder, createMultipartUploader, createNewFile, createNonRecursive, createSnapshot, delete, deleteOnExit, enableSymlinks, exists, fixRelativePart, get, get, getAdditionalTokenIssuers, getAllStatistics, getBlockSize, getCanonicalServiceName, getContentSummary, getDefaultPort, getDefaultUri, getDelegationToken, getFileBlockLocations, getFileSystemClass, getFSofPath, getGlobalStorageStatistics, getLength, getLocal, getName, getNamed, getPathHandle, getQuotaUsage, getReplication, getScheme, getStatistics, getStatistics, getStatus, getStorageStatistics, globStatus, globStatus, isDirectory, isFile, listFiles, listStatus, listStatus, listStatus, listStatusBatch, mkdirs, moveFromLocalFile, moveFromLocalFile, moveToLocalFile, newInstance, newInstance, newInstance, newInstanceLocal, open, open, primitiveMkdir, printStatistics, processDeleteOnExit, setDefaultUri, setDefaultUri, setQuota, setQuotaByStorageType
Methods inherited from class org.apache.hadoop.conf.Configured
setConf
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface org.apache.hadoop.security.token.DelegationTokenIssuer
addDelegationTokens
-
Field Details
- LOG
- noChecksumFs
- useHBaseChecksum
- unspecifiedStoragePolicyId
-
Constructor Details
-
HFileSystem
public HFileSystem(org.apache.hadoop.conf.Configuration conf, boolean useHBaseChecksum) throws IOException
Create a FileSystem object for HBase regionservers.
- Parameters:
conf - The configuration to be used for the filesystem
useHBaseChecksum - if true, use checksum verification in HBase; otherwise delegate checksum verification to the FileSystem.
- Throws:
IOException
-
HFileSystem
public HFileSystem(org.apache.hadoop.fs.FileSystem fs)
Wrap a FileSystem object within a HFileSystem. The noChecksumFs and writefs are both set to be the same specified fs. Do not verify hbase-checksums while reading data from the filesystem.
- Parameters:
fs - Set the noChecksumFs and writeFs to this specified filesystem.
-
-
Method Details
-
getNoChecksumFs
Returns the filesystem that is specially setup for doing reads from storage. This object avoids doing checksum verifications for reads.
- Returns:
The FileSystem object that can be used to read data from files.
-
getBackingFs
Returns the underlying filesystem.
- Returns:
The underlying FileSystem for this FilterFileSystem object.
- Throws:
IOException
-
setStoragePolicy
Set the source path (directory/file) to the specified storage policy.
- Overrides:
setStoragePolicy in class org.apache.hadoop.fs.FilterFileSystem
- Parameters:
path - The source path (directory/file).
policyName - The name of the storage policy: 'HOT', 'COLD', etc. See org.apache.hadoop.hdfs.protocol.HdfsConstants (Hadoop 2.6+) for the possible list, e.g. 'COLD', 'WARM', 'HOT', 'ONE_SSD', 'ALL_SSD', 'LAZY_PERSIST'.
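Since the policy is passed as a free-form string, a caller may want to sanity-check the name before invoking setStoragePolicy. A minimal sketch, assuming the Hadoop 2.6+ policy set listed above (the helper class itself is hypothetical, not part of HBase):

```java
import java.util.Locale;
import java.util.Set;

// Hypothetical helper: checks a name against the storage policies that
// HdfsConstants defines in Hadoop 2.6+, before handing it to setStoragePolicy.
class StoragePolicies {
    static final Set<String> KNOWN =
        Set.of("COLD", "WARM", "HOT", "ONE_SSD", "ALL_SSD", "LAZY_PERSIST");

    static boolean isKnownPolicy(String name) {
        return name != null && KNOWN.contains(name.toUpperCase(Locale.ROOT));
    }
}
```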
-
getStoragePolicyName
Get the storage policy of the source path (directory/file).
- Parameters:
path - The source path (directory/file).
- Returns:
Storage policy name, or null if not using DistributedFileSystem or if an exception was thrown when trying to get the policy.
-
getStoragePolicyForOldHDFSVersion
Before Hadoop 2.8.0, there's no getStoragePolicy method on the FileSystem interface, and we need to stay compatible with it. See HADOOP-12161 for more details.
- Parameters:
path - Path to get the storage policy against
- Returns:
the storage policy name
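Because the policy API may be absent on the running Hadoop version, this compatibility path has to discover it at runtime. A dependency-free sketch of that reflective "call it only if this version has it" pattern (class and method names here are hypothetical):

```java
import java.lang.reflect.Method;

// Sketch of the reflection fallback used when an API (here, getStoragePolicy)
// may not exist on the running Hadoop version: look the method up by name and
// return null instead of failing when it is absent.
class ReflectiveCall {
    static String callIfPresent(Object target, String methodName) {
        try {
            Method m = target.getClass().getMethod(methodName);
            return String.valueOf(m.invoke(target));
        } catch (ReflectiveOperationException e) {
            return null;  // method absent (or call failed): report "no policy"
        }
    }
}
```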
-
useHBaseChecksum
Are we verifying checksums in HBase?
- Returns:
True if HBase is configured to verify checksums, otherwise false.
-
close
Close this filesystem object.
- Specified by:
close in interface AutoCloseable
- Specified by:
close in interface Closeable
- Overrides:
close in class org.apache.hadoop.fs.FilterFileSystem
- Throws:
IOException
-
newInstanceFileSystem
private static org.apache.hadoop.fs.FileSystem newInstanceFileSystem(org.apache.hadoop.conf.Configuration conf) throws IOException
Returns a brand new instance of the FileSystem. It does not use the FileSystem.Cache. In newer versions of HDFS, we can directly invoke FileSystem.newInstance(Configuration).
- Parameters:
conf - Configuration
- Returns:
- A new instance of the filesystem
- Throws:
IOException
-
maybeWrapFileSystem
private org.apache.hadoop.fs.FileSystem maybeWrapFileSystem(org.apache.hadoop.fs.FileSystem base, org.apache.hadoop.conf.Configuration conf)
Returns an instance of Filesystem wrapped into the class specified in the hbase.fs.wrapper property, if one is set in the configuration; otherwise returns the unmodified FS instance passed in as an argument.
- Parameters:
base - Filesystem instance to wrap
conf - Configuration
- Returns:
- wrapped instance of FS, or the same instance if no wrapping configured.
-
addLocationsOrderInterceptor
public static boolean addLocationsOrderInterceptor(org.apache.hadoop.conf.Configuration conf) throws IOException
- Throws:
IOException
-
addLocationsOrderInterceptor
static boolean addLocationsOrderInterceptor(org.apache.hadoop.conf.Configuration conf, HFileSystem.ReorderBlocks lrb)
Add an interceptor on the calls to the namenode#getBlockLocations from the DFSClient linked to this FileSystem. See HBASE-6435 for the background. There should be no reason, except testing, to create a specific ReorderBlocks.
- Returns:
- true if the interceptor was added, false otherwise.
-
createReorderingProxy
private static org.apache.hadoop.hdfs.protocol.ClientProtocol createReorderingProxy(org.apache.hadoop.hdfs.protocol.ClientProtocol cp, HFileSystem.ReorderBlocks lrb, org.apache.hadoop.conf.Configuration conf)
-
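The interception described above wraps the DFS client's ClientProtocol in a dynamic proxy so that getBlockLocations results can be reordered before the client sees them. A dependency-free sketch of that pattern, where the NameNodeClient interface and the simple reversal are hypothetical stand-ins for the HDFS types and the ReorderBlocks logic:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch of the dynamic-proxy interception that createReorderingProxy performs
// on ClientProtocol: every call passes through to the delegate, but the result
// of getBlockLocations is post-processed (reordered) before being returned.
class ReorderingProxyDemo {
    interface NameNodeClient {
        List<String> getBlockLocations(String src);
    }

    @SuppressWarnings("unchecked")
    static NameNodeClient wrap(NameNodeClient delegate) {
        InvocationHandler handler = (proxy, method, args) -> {
            Object result = method.invoke(delegate, args);
            if (method.getName().equals("getBlockLocations")) {
                List<String> reordered = new ArrayList<>((List<String>) result);
                Collections.reverse(reordered);  // stand-in for ReorderBlocks
                return reordered;
            }
            return result;
        };
        return (NameNodeClient) Proxy.newProxyInstance(
            NameNodeClient.class.getClassLoader(),
            new Class<?>[] { NameNodeClient.class },
            handler);
    }
}
```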
get
public static org.apache.hadoop.fs.FileSystem get(org.apache.hadoop.conf.Configuration conf) throws IOException
Create a new HFileSystem object, similar to FileSystem.get(). This returns a filesystem object that avoids checksum verification in the filesystem for hfileblock-reads. For these blocks, checksum verification is done by HBase.
- Throws:
IOException
-
createNonRecursive
public org.apache.hadoop.fs.FSDataOutputStream createNonRecursive(org.apache.hadoop.fs.Path f, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress) throws IOException
The org.apache.hadoop.fs.FilterFileSystem does not yet support createNonRecursive. This is a hadoop bug and when it is fixed in Hadoop, this definition will go away.
- Overrides:
createNonRecursive in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
-