Package org.apache.hadoop.hbase.util
Class TestFSUtils

java.lang.Object
  org.apache.hadoop.hbase.util.TestFSUtils

public class TestFSUtils
Test FSUtils.
Nested Class Summary
Modifier and Type                     Class
(package private) class
(package private) static interface    TestFSUtils.HDFSBlockDistributionFunction
-
Field Summary
Modifier and Type                              Field
(package private) static final int             blockSize
static final HBaseClassTestRule                CLASS_RULE
private org.apache.hadoop.conf.Configuration   conf
private org.apache.hadoop.fs.FileSystem        fs
private HBaseTestingUtil                       htu
(package private) final String                 INVALID_STORAGE_POLICY
private static final org.slf4j.Logger          LOG
private Random                                 rand
(package private) static final long            seed
-
Constructor Summary
Constructor      Description
TestFSUtils()
-
Method Summary
Modifier and Type    Method / Description
private void         checkAndEraseData(byte[] actual, int from, byte[] expected, String message)
void                 checkStreamCapabilitiesOnHdfsDataOutputStream()
private void         cleanupFile(org.apache.hadoop.fs.FileSystem fileSys, org.apache.hadoop.fs.Path name)
private void         doPread(org.apache.hadoop.fs.FSDataInputStream stm, long position, byte[] buffer, int offset, int length)
private void         pReadFile(org.apache.hadoop.fs.FileSystem fileSys, org.apache.hadoop.fs.Path name)
void                 setUp()
void                 testComputeHDFSBlockDistribution()
private void         testComputeHDFSBlocksDistribution(TestFSUtils.HDFSBlockDistributionFunction fileToBlockDistribution)
void                 testComputeHDFSBlocksDistributionByInputStream()
void                 testCopyFilesParallel()
void                 testDeleteAndExists()
void                 testDFSHedgedReadMetrics()
                     Ugly test that ensures we can get at the hedged read counters in dfsclient.
void                 testFilteredStatusDoesNotThrowOnNotFound()
void                 testIsHDFS()
void                 testIsSameHdfs()
private void         testIsSameHdfs(int nnport)
void                 testPermMask()
void                 testRenameAndSetModifyTime()
void                 testSetStoragePolicyDefault()
void                 testSetStoragePolicyInvalid()
void                 testSetStoragePolicyValidButMaybeNotPresent()
void                 testVersion()
private void         verifyFileInDirWithStoragePolicy
private void         verifyNoHDFSApiInvocationForDefaultPolicy
                     Note: currently the default policy is set to defer to HDFS and this case is to verify the logic; will need to remove the check if the default policy is changed.
private void         WriteDataToHDFS(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path file, int dataSize)
private void         writeVersionFile(org.apache.hadoop.fs.Path versionFile, String version)
-
Field Details
-
CLASS_RULE
public static final HBaseClassTestRule CLASS_RULE
-
LOG
private static final org.slf4j.Logger LOG
-
htu
private HBaseTestingUtil htu
-
fs
private org.apache.hadoop.fs.FileSystem fs
-
conf
private org.apache.hadoop.conf.Configuration conf
-
INVALID_STORAGE_POLICY
final String INVALID_STORAGE_POLICY
- See Also:
Constant Field Values
-
blockSize
static final int blockSize
- See Also:
Constant Field Values
-
seed
static final long seed
- See Also:
Constant Field Values
-
rand
private Random rand
-
Constructor Details
-
TestFSUtils
public TestFSUtils()
-
-
Method Details
-
setUp
public void setUp() throws IOException
- Throws:
IOException
-
testIsHDFS
public void testIsHDFS() throws Exception
- Throws:
Exception
-
WriteDataToHDFS
private void WriteDataToHDFS(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path file, int dataSize) throws Exception
- Throws:
Exception
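For context, writing a test file of a given size boils down to creating the path on the cluster's FileSystem and streaming a buffer into it. A minimal, hedged sketch of such a helper (the class and method names here are illustrative, not the test's actual code):

    import java.io.IOException;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public final class WriteDataSketch {
      // Writes dataSize bytes of deterministic filler to the given path, overwriting any existing file.
      static void writeData(FileSystem fs, Path file, int dataSize) throws IOException {
        byte[] data = new byte[dataSize];
        for (int i = 0; i < data.length; i++) {
          data[i] = (byte) (i % 251);
        }
        try (FSDataOutputStream out = fs.create(file, true)) {
          out.write(data); // closing the stream completes the file on HDFS
        }
      }
    }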
-
testComputeHDFSBlocksDistributionByInputStream
public void testComputeHDFSBlocksDistributionByInputStream() throws Exception
- Throws:
Exception
-
testComputeHDFSBlockDistribution
public void testComputeHDFSBlockDistribution() throws Exception
- Throws:
Exception
-
testComputeHDFSBlocksDistribution
private void testComputeHDFSBlocksDistribution(TestFSUtils.HDFSBlockDistributionFunction fileToBlockDistribution) throws Exception
- Throws:
Exception
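The block-distribution tests feed a freshly written file to FSUtils.computeHDFSBlocksDistribution and assert on which hosts hold its blocks. A hedged sketch of that kind of call, assuming the (FileSystem, FileStatus, start, length) overload; check FSUtils in your HBase version:

    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HDFSBlocksDistribution;
    import org.apache.hadoop.hbase.util.FSUtils;

    public final class BlockDistributionSketch {
      // Computes which datanodes host the blocks of the whole file and prints a summary.
      static void showDistribution(FileSystem fs, Path file) throws Exception {
        FileStatus status = fs.getFileStatus(file);
        HDFSBlocksDistribution dist =
          FSUtils.computeHDFSBlocksDistribution(fs, status, 0, status.getLen());
        System.out.println("unique blocks weight: " + dist.getUniqueBlocksTotalWeight());
        System.out.println("top hosts: " + dist.getTopHosts());
      }
    }

The TestFSUtils.HDFSBlockDistributionFunction parameter presumably lets the same assertions run against different ways of producing the distribution (for example, from a FileStatus versus from an input stream, matching the two public tests above).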
-
writeVersionFile
private void writeVersionFile(org.apache.hadoop.fs.Path versionFile, String version) throws IOException
- Throws:
IOException
-
testVersion
public void testVersion() throws org.apache.hadoop.hbase.exceptions.DeserializationException, IOException
- Throws:
org.apache.hadoop.hbase.exceptions.DeserializationException
IOException
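testVersion exercises the hbase.version file that FSUtils maintains under the HBase root directory. A hedged sketch of the write/read round trip involved, assuming the usual FSUtils.setVersion/getVersion pair (treat the exact signatures as assumptions):

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.util.FSUtils;

    public final class VersionFileSketch {
      // Writes the cluster version file under rootdir and reads it back.
      static String roundTripVersion(FileSystem fs, Path rootdir) throws Exception {
        FSUtils.setVersion(fs, rootdir);        // creates/overwrites hbase.version
        return FSUtils.getVersion(fs, rootdir); // may throw DeserializationException on a corrupt file
      }
    }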
-
testPermMask
public void testPermMask() throws Exception
- Throws:
Exception
-
testDeleteAndExists
public void testDeleteAndExists() throws Exception
- Throws:
Exception
-
testFilteredStatusDoesNotThrowOnNotFound
public void testFilteredStatusDoesNotThrowOnNotFound() throws Exception
- Throws:
Exception
-
testRenameAndSetModifyTime
public void testRenameAndSetModifyTime() throws Exception
- Throws:
Exception
-
testSetStoragePolicyDefault
public void testSetStoragePolicyDefault() throws Exception
- Throws:
Exception
-
verifyNoHDFSApiInvocationForDefaultPolicy
Note: currently the default policy is set to defer to HDFS and this case is to verify the logic; will need to remove the check if the default policy is changed.
- Throws:
URISyntaxException
IOException
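The storage-policy cases hinge on one point: when the configured policy means "defer to HDFS" (the current default, per the note above), HBase should not touch the HDFS storage-policy API at all; only an explicitly requested policy should reach the FileSystem. A hedged sketch of the underlying Hadoop call made in the non-default case (HBase's own wrapper lives in CommonFSUtils.setStoragePolicy in recent versions, but treat that name as an assumption):

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public final class StoragePolicySketch {
      // Applies an HDFS storage policy to a directory and reads it back.
      // On a non-HDFS FileSystem this may be unsupported, which is the sort of
      // condition the testSetStoragePolicy* cases probe.
      static void applyPolicy(FileSystem fs, Path dir, String policy) throws Exception {
        fs.setStoragePolicy(dir, policy); // e.g. "ALL_SSD"; an invalid name should fail or be ignored
        System.out.println("policy now: " + fs.getStoragePolicy(dir));
      }
    }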
-
testSetStoragePolicyValidButMaybeNotPresent
public void testSetStoragePolicyValidButMaybeNotPresent() throws Exception
- Throws:
Exception
-
testSetStoragePolicyInvalid
public void testSetStoragePolicyInvalid() throws Exception
- Throws:
Exception
-
verifyFileInDirWithStoragePolicy
- Throws:
Exception
-
testDFSHedgedReadMetrics
public void testDFSHedgedReadMetrics() throws Exception
Ugly test that ensures we can get at the hedged read counters in dfsclient. Does a bit of preading with hedged reads enabled, using code taken from HDFS TestPread.
- Throws:
Exception
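The hedged-read test enables the DFSClient hedged-read thread pool, drives some positional reads against the test cluster, and then checks that the counters moved. A hedged sketch of the configuration and metrics access involved (the dfs.client.* keys are standard HDFS client settings; the FSUtils accessor name is an assumption):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.util.FSUtils;
    import org.apache.hadoop.hdfs.DFSHedgedReadMetrics;

    public final class HedgedReadSketch {
      // Turns on hedged reads for the HDFS client and prints the counters exposed by DFSClient.
      static void showHedgedReadCounters(Configuration conf) throws Exception {
        conf.setInt("dfs.client.hedged.read.threadpool.size", 5);      // size of the hedge pool
        conf.setLong("dfs.client.hedged.read.threshold.millis", 100);  // wait before hedging a read

        DFSHedgedReadMetrics metrics = FSUtils.getDFSHedgedReadMetrics(conf); // assumed accessor
        System.out.println("hedged read ops:  " + metrics.getHedgedReadOps());
        System.out.println("hedged read wins: " + metrics.getHedgedReadWins());
      }
    }

The counters only change after actual pread traffic against a running cluster, which is why the test drives reads through the pReadFile/doPread helpers below.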
-
testCopyFilesParallel
public void testCopyFilesParallel() throws Exception
- Throws:
Exception
-
pReadFile
private void pReadFile(org.apache.hadoop.fs.FileSystem fileSys, org.apache.hadoop.fs.Path name) throws IOException
- Throws:
IOException
-
checkAndEraseData
private void checkAndEraseData(byte[] actual, int from, byte[] expected, String message)
-
doPread
private void doPread(org.apache.hadoop.fs.FSDataInputStream stm, long position, byte[] buffer, int offset, int length) throws IOException
- Throws:
IOException
-
cleanupFile
private void cleanupFile(org.apache.hadoop.fs.FileSystem fileSys, org.apache.hadoop.fs.Path name) throws IOException
- Throws:
IOException
-
checkStreamCapabilitiesOnHdfsDataOutputStream
public void checkStreamCapabilitiesOnHdfsDataOutputStream() throws Exception
- Throws:
Exception
-
testIsSameHdfs
private void testIsSameHdfs(int nnport) throws IOException
- Throws:
IOException
-
testIsSameHdfs
public void testIsSameHdfs() throws IOException
- Throws:
IOException
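testIsSameHdfs covers the check HBase uses to decide whether two FileSystem instances point at the same HDFS instance (needed, for example, when the WAL directory may live on a different filesystem than the root directory). A hedged sketch, assuming an FSUtils.isSameHdfs(Configuration, FileSystem, FileSystem) signature:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.util.FSUtils;

    public final class SameHdfsSketch {
      // Returns true when both paths resolve to the same HDFS (same namenode/nameservice).
      static boolean onSameHdfs(Configuration conf, Path a, Path b) throws Exception {
        FileSystem fsA = a.getFileSystem(conf);
        FileSystem fsB = b.getFileSystem(conf);
        return FSUtils.isSameHdfs(conf, fsA, fsB); // assumed signature; check your HBase version
      }
    }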
-