Package org.apache.hadoop.hbase.util
Class TestFSUtils

java.lang.Object
  org.apache.hadoop.hbase.util.TestFSUtils

Test FSUtils.
Nested Class Summary

Nested Classes
Modifier and Type                      Class
(package private) class
(package private) static interface     TestFSUtils.HDFSBlockDistributionFunction
Field Summary

Fields
Modifier and Type                                Field
(package private) static final int               blockSize
static final HBaseClassTestRule                  CLASS_RULE
private org.apache.hadoop.conf.Configuration     conf
private org.apache.hadoop.fs.FileSystem          fs
private HBaseTestingUtil                         htu
(package private) final String                   INVALID_STORAGE_POLICY
private static final org.slf4j.Logger            LOG
private Random                                   rand
(package private) static final long              seed
Constructor Summary

Constructors
TestFSUtils()
Method Summary

Modifier and Type    Method and Description
private void         checkAndEraseData(byte[] actual, int from, byte[] expected, String message)
void                 checkStreamCapabilitiesOnHdfsDataOutputStream()
private void         cleanupFile(org.apache.hadoop.fs.FileSystem fileSys, org.apache.hadoop.fs.Path name)
private void         doPread(org.apache.hadoop.fs.FSDataInputStream stm, long position, byte[] buffer, int offset, int length)
private void         pReadFile(org.apache.hadoop.fs.FileSystem fileSys, org.apache.hadoop.fs.Path name)
void                 setUp()
void                 testComputeHDFSBlockDistribution()
private void         testComputeHDFSBlocksDistribution(TestFSUtils.HDFSBlockDistributionFunction fileToBlockDistribution)
void                 testComputeHDFSBlocksDistributionByInputStream()
void                 testCopyFilesParallel()
void                 testDeleteAndExists()
void                 testDFSHedgedReadMetrics()
                     Ugly test that ensures we can get at the hedged read counters in the DFSClient.
void                 testFilteredStatusDoesNotThrowOnNotFound()
void                 testIsHDFS()
void                 testIsSameHdfs()
private void         testIsSameHdfs(int nnport)
void                 testLocalFileSystemSafeMode()
void                 testPermMask()
void                 testRenameAndSetModifyTime()
void                 testSetStoragePolicyDefault()
void                 testSetStoragePolicyInvalid()
void                 testSetStoragePolicyValidButMaybeNotPresent()
void                 testVersion()
private void         verifyFileInDirWithStoragePolicy
private void         verifyNoHDFSApiInvocationForDefaultPolicy
                     Note: currently the default policy is set to defer to HDFS; this case verifies that logic, and the check will need to be removed if the default policy changes.
private void         WriteDataToHDFS(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path file, int dataSize)
private void         writeVersionFile(org.apache.hadoop.fs.Path versionFile, String version)
-
Field Details

CLASS_RULE
static final HBaseClassTestRule CLASS_RULE

LOG
private static final org.slf4j.Logger LOG

htu
private HBaseTestingUtil htu

fs
private org.apache.hadoop.fs.FileSystem fs

conf
private org.apache.hadoop.conf.Configuration conf

INVALID_STORAGE_POLICY
final String INVALID_STORAGE_POLICY

blockSize
static final int blockSize

seed
static final long seed

rand
private Random rand
Constructor Details
-
TestFSUtils
public TestFSUtils()
-
-
Method Details
-
setUp
public void setUp() throws IOException
- Throws:
IOException
-
testIsHDFS
public void testIsHDFS() throws Exception
- Throws:
Exception
-
testLocalFileSystemSafeMode
public void testLocalFileSystemSafeMode() throws Exception
- Throws:
Exception
-
WriteDataToHDFS
private void WriteDataToHDFS(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path file, int dataSize) throws Exception
- Throws:
Exception
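The helper's signature suggests the common pattern of filling a file with `dataSize` random bytes for later verification. A minimal sketch of that pattern, using java.nio on the local filesystem in place of the Hadoop FileSystem API (the `writeData`/`demo` names are illustrative, not HBase code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Random;

public class WriteDataDemo {
    // Fill `file` with `dataSize` pseudo-random bytes -- the same shape as
    // WriteDataToHDFS(fs, file, dataSize), but against the local FS.
    static byte[] writeData(Path file, int dataSize, long seed) throws IOException {
        byte[] data = new byte[dataSize];
        new Random(seed).nextBytes(data);
        Files.write(file, data);
        return data;
    }

    // Write 1 KiB to a temp file and report the resulting file size.
    static long demo() {
        try {
            Path tmp = Files.createTempFile("write-data-demo", ".bin");
            writeData(tmp, 1024, 42L);
            long size = Files.size(tmp);
            Files.deleteIfExists(tmp);
            return size;
        } catch (IOException e) {
            return -1L;
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 1024
    }
}
```

Seeding the `Random` makes the written bytes reproducible, which is what lets a test later re-read and verify them.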
-
testComputeHDFSBlocksDistributionByInputStream
public void testComputeHDFSBlocksDistributionByInputStream() throws Exception
- Throws:
Exception
-
testComputeHDFSBlockDistribution
public void testComputeHDFSBlockDistribution() throws Exception
- Throws:
Exception
-
testComputeHDFSBlocksDistribution
private void testComputeHDFSBlocksDistribution(TestFSUtils.HDFSBlockDistributionFunction fileToBlockDistribution) throws Exception
- Throws:
Exception
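A blocks distribution answers, per host, how many bytes of a file's blocks that host stores. A hedged sketch of just the aggregation step using plain collections (illustrative only; HBase's real HDFSBlocksDistribution class and the FSUtils.computeHDFSBlocksDistribution API differ):

```java
import java.util.HashMap;
import java.util.Map;

public class BlockDistributionDemo {
    // Aggregate per-host byte weights from block replica locations,
    // in the spirit of what computeHDFSBlocksDistribution builds.
    static Map<String, Long> aggregate(String[][] blockHosts, long blockSize) {
        Map<String, Long> weights = new HashMap<>();
        for (String[] hosts : blockHosts) {
            for (String host : hosts) {
                // Each replica a host holds adds one block's worth of bytes.
                weights.merge(host, blockSize, Long::sum);
            }
        }
        return weights;
    }

    // Two blocks of 128 bytes each; host "a" holds a replica of both.
    static long demo() {
        String[][] replicas = { { "a", "b" }, { "a", "c" } };
        return aggregate(replicas, 128L).get("a"); // 256
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 256
    }
}
```

The host with the highest weight is the best candidate for region placement, since reads from it stay node-local.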
-
writeVersionFile
private void writeVersionFile(org.apache.hadoop.fs.Path versionFile, String version) throws IOException
- Throws:
IOException
-
testVersion
public void testVersion() throws org.apache.hadoop.hbase.exceptions.DeserializationException, IOException
- Throws:
org.apache.hadoop.hbase.exceptions.DeserializationException
IOException
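The pairing of writeVersionFile with testVersion suggests a write-then-read round-trip on a version file. A stdlib sketch of that round-trip (plain text stand-in; the DeserializationException in the real signature indicates HBase actually parses the version file through a deserialization step rather than reading raw text):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class VersionFileDemo {
    // Write a version string to a file and read it back, mirroring the
    // writeVersionFile(versionFile, version) / testVersion round-trip.
    static String roundTrip(String version) {
        try {
            Path f = Files.createTempFile("version-demo", "");
            Files.writeString(f, version);
            String read = Files.readString(f);
            Files.deleteIfExists(f);
            return read;
        } catch (IOException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println("8".equals(roundTrip("8"))); // prints true
    }
}
```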
-
testPermMask
public void testPermMask() throws Exception
- Throws:
Exception
-
testDeleteAndExists
public void testDeleteAndExists() throws Exception
- Throws:
Exception
-
testFilteredStatusDoesNotThrowOnNotFound
public void testFilteredStatusDoesNotThrowOnNotFound() throws Exception
- Throws:
Exception
-
testRenameAndSetModifyTime
public void testRenameAndSetModifyTime() throws Exception
- Throws:
Exception
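The test name suggests a rename followed by stamping the destination's modification time. A stdlib sketch of that combined operation (java.nio stand-in; the `renameAndSetModifyTime` helper below is illustrative, not the FSUtils implementation):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;

public class RenameDemo {
    // Rename src to dst, then set dst's mtime to `millis` and return the
    // mtime actually recorded -- the shape the test name implies.
    static long renameAndSetModifyTime(Path src, Path dst, long millis) throws IOException {
        Files.move(src, dst);
        Files.setLastModifiedTime(dst, FileTime.fromMillis(millis));
        return Files.getLastModifiedTime(dst).toMillis();
    }

    // Use a whole-second timestamp so coarse-grained filesystems agree.
    static long demo() {
        try {
            Path src = Files.createTempFile("rename-demo", ".src");
            Path dst = src.resolveSibling(src.getFileName() + ".dst");
            long stamped = renameAndSetModifyTime(src, dst, 1_000_000L);
            Files.deleteIfExists(dst);
            return stamped;
        } catch (IOException e) {
            return -1L;
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 1000000
    }
}
```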
-
testSetStoragePolicyDefault
public void testSetStoragePolicyDefault() throws Exception
- Throws:
Exception
-
verifyNoHDFSApiInvocationForDefaultPolicy
Note: currently the default policy is set to defer to HDFS. This case verifies that logic; the check will need to be removed if the default policy changes.
- Throws:
URISyntaxException
IOException
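The note above describes a guard: when the configured policy is the defer-to-HDFS default, no HDFS storage-policy API call should be made at all. A hedged sketch of that guard (the `DEFER_TO_HDFS` sentinel, `PolicySetter` interface, and paths are invented for illustration and are not HBase's actual API):

```java
public class StoragePolicyGuardDemo {
    // Hypothetical sentinel meaning "defer to HDFS"; assumed here, not
    // necessarily the literal value HBase uses.
    static final String DEFER_TO_HDFS = "NONE";

    interface PolicySetter { void setStoragePolicy(String path, String policy); }

    // Skip the HDFS call entirely when the policy is the defer-to-HDFS
    // default -- the behavior the verify method checks for.
    static boolean applyPolicy(PolicySetter fs, String path, String policy) {
        if (DEFER_TO_HDFS.equalsIgnoreCase(policy)) {
            return false; // no HDFS API invocation
        }
        fs.setStoragePolicy(path, policy);
        return true;
    }

    // Count how many times the underlying API is actually invoked.
    static int demo() {
        int[] calls = { 0 };
        PolicySetter counting = (p, pol) -> calls[0]++;
        applyPolicy(counting, "/demo/wal", "NONE");    // skipped
        applyPolicy(counting, "/demo/wal", "ONE_SSD"); // invoked
        return calls[0];
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 1
    }
}
```

Counting invocations through a stub, as `demo()` does, is one way such a "no API call was made" property can be asserted in a test.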
-
testSetStoragePolicyValidButMaybeNotPresent
public void testSetStoragePolicyValidButMaybeNotPresent() throws Exception
- Throws:
Exception
-
testSetStoragePolicyInvalid
public void testSetStoragePolicyInvalid() throws Exception
- Throws:
Exception
-
verifyFileInDirWithStoragePolicy
- Throws:
Exception
-
testDFSHedgedReadMetrics
public void testDFSHedgedReadMetrics() throws Exception
Ugly test that ensures we can get at the hedged read counters in the DFSClient. Does a bit of preading with hedged reads enabled, using code taken from HDFS's TestPread.
- Throws:
Exception
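A hedged read starts a second, speculative read when the first one is slow, takes whichever finishes first, and bumps a counter each time a hedge is fired. A self-contained sketch of that idea using only java.util.concurrent (illustrative; the real DFSClient implementation and its DFSHedgedReadMetrics differ):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class HedgedReadDemo {
    // Counter in the spirit of the hedged read metrics the test inspects.
    static final AtomicInteger hedgedReadOps = new AtomicInteger();

    // Start the primary read; if it misses the threshold, launch a hedged
    // second attempt and return whichever completes first.
    static String hedgedRead(Supplier<String> primary, Supplier<String> backup,
                             long thresholdMillis) throws Exception {
        CompletableFuture<String> first = CompletableFuture.supplyAsync(primary);
        try {
            return first.get(thresholdMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            hedgedReadOps.incrementAndGet(); // a hedge was fired
            CompletableFuture<String> second = CompletableFuture.supplyAsync(backup);
            return (String) CompletableFuture.anyOf(first, second).get();
        }
    }

    // A deliberately slow primary forces exactly one hedged read.
    static int demo() {
        try {
            Supplier<String> slow = () -> {
                try { Thread.sleep(500); } catch (InterruptedException ie) { }
                return "slow";
            };
            hedgedRead(slow, () -> "fast", 50);
            return hedgedReadOps.get();
        } catch (Exception e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 1
    }
}
```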
-
testCopyFilesParallel
public void testCopyFilesParallel() throws Exception
- Throws:
Exception
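The test name implies copying a set of files with one task per file. A stdlib sketch of that pattern with an ExecutorService (names and structure are illustrative, not the FSUtils.copyFilesParallel implementation):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CopyFilesParallelDemo {
    // Copy every file directly under srcDir into dstDir, one pool task per
    // file; Future.get() re-throws any individual copy failure.
    static int copyParallel(Path srcDir, Path dstDir, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try (var files = Files.list(srcDir)) {
            List<Future<?>> pending = new ArrayList<>();
            files.forEach(src -> pending.add(pool.submit(() -> {
                Files.copy(src, dstDir.resolve(src.getFileName()));
                return null;
            })));
            for (Future<?> f : pending) f.get(); // surface any copy failure
            return pending.size();
        } finally {
            pool.shutdown();
        }
    }

    // Copy three small files and count what arrived at the destination.
    static long demo() {
        try {
            Path src = Files.createTempDirectory("copy-src");
            Path dst = Files.createTempDirectory("copy-dst");
            for (int i = 0; i < 3; i++) Files.writeString(src.resolve("f" + i), "data" + i);
            copyParallel(src, dst, 2);
            try (var copied = Files.list(dst)) { return copied.count(); }
        } catch (Exception e) {
            return -1L;
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 3
    }
}
```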
-
pReadFile
private void pReadFile(org.apache.hadoop.fs.FileSystem fileSys, org.apache.hadoop.fs.Path name) throws IOException
- Throws:
IOException
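pReadFile and doPread exercise positional reads (pread): fetching bytes at an explicit offset without moving the stream's current position. The same contract exists in the JDK on FileChannel, which this self-contained sketch uses in place of FSDataInputStream (the `pread` helper name is illustrative):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class PreadDemo {
    // Positional read: fetch `length` bytes at `position`, looping because a
    // single read may return fewer bytes than requested. The channel's own
    // position is never touched -- the property doPread verifies.
    static byte[] pread(FileChannel ch, long position, int length) throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(length);
        int read = 0;
        while (read < length) {
            int n = ch.read(buf, position + read);
            if (n < 0) throw new IOException("EOF before " + length + " bytes");
            read += n;
        }
        return buf.array();
    }

    // Read the middle of a file and confirm the seek pointer stayed at 0.
    static boolean demo() {
        try {
            Path f = Files.createTempFile("pread-demo", ".bin");
            Files.write(f, "0123456789".getBytes());
            boolean ok;
            try (FileChannel ch = FileChannel.open(f, StandardOpenOption.READ)) {
                byte[] mid = pread(ch, 3, 4); // bytes "3456"
                ok = new String(mid).equals("3456") && ch.position() == 0;
            }
            Files.deleteIfExists(f);
            return ok;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints true
    }
}
```

Because a pread leaves the stream position untouched, many threads can share one open stream, which is exactly what makes hedged or parallel reads practical.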
-
checkAndEraseData
private void checkAndEraseData(byte[] actual, int from, byte[] expected, String message)
-
doPread
private void doPread(org.apache.hadoop.fs.FSDataInputStream stm, long position, byte[] buffer, int offset, int length) throws IOException
- Throws:
IOException
-
cleanupFile
private void cleanupFile(org.apache.hadoop.fs.FileSystem fileSys, org.apache.hadoop.fs.Path name) throws IOException
- Throws:
IOException
-
checkStreamCapabilitiesOnHdfsDataOutputStream
public void checkStreamCapabilitiesOnHdfsDataOutputStream() throws Exception
- Throws:
Exception
-
testIsSameHdfs
private void testIsSameHdfs(int nnport) throws IOException
- Throws:
IOException
-
testIsSameHdfs
public void testIsSameHdfs() throws IOException
- Throws:
IOException
-