Uses of Enum Class
org.apache.hadoop.hbase.io.compress.Compression.Algorithm
Packages that use Compression.Algorithm

  org.apache.hadoop.hbase
  org.apache.hadoop.hbase.client
      Provides HBase Client
  org.apache.hadoop.hbase.io.compress
  org.apache.hadoop.hbase.io.encoding
  org.apache.hadoop.hbase.io.hfile
      Provides implementations of HFile and HFile BlockCache.
  org.apache.hadoop.hbase.mob
  org.apache.hadoop.hbase.regionserver
  org.apache.hadoop.hbase.regionserver.wal
  org.apache.hadoop.hbase.util
Uses of org.apache.hadoop.hbase.io.compress.Compression.Algorithm in org.apache.hadoop.hbase
Fields in org.apache.hadoop.hbase declared as Compression.Algorithm

  static final org.apache.hadoop.hbase.io.compress.Compression.Algorithm[] HBaseCommonTestingUtility.COMPRESSION_ALGORITHMS
      Deprecated. Compression algorithms to use in testing.

Methods in org.apache.hadoop.hbase that return Compression.Algorithm

  static org.apache.hadoop.hbase.io.compress.Compression.Algorithm[] HBaseTestingUtility.getSupportedCompressionAlgorithms()
      Deprecated. Get supported compression algorithms.

Methods in org.apache.hadoop.hbase with parameters of type Compression.Algorithm

  static int HBaseTestingUtility.createPreSplitLoadTestTable(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.hbase.TableName tableName, byte[][] columnFamilies, org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression, org.apache.hadoop.hbase.io.encoding.DataBlockEncoding dataBlockEncoding, int numRegionsPerServer, int regionReplication, org.apache.hadoop.hbase.client.Durability durability)
      Deprecated. Creates a pre-split table for load testing.
  static int HBaseTestingUtility.createPreSplitLoadTestTable(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.hbase.TableName tableName, byte[] columnFamily, org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression, org.apache.hadoop.hbase.io.encoding.DataBlockEncoding dataBlockEncoding)
      Deprecated. Creates a pre-split table for load testing.
  static int HBaseTestingUtility.createPreSplitLoadTestTable(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.hbase.TableName tableName, byte[] columnFamily, org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression, org.apache.hadoop.hbase.io.encoding.DataBlockEncoding dataBlockEncoding, int numRegionsPerServer, int regionReplication, org.apache.hadoop.hbase.client.Durability durability)
      Deprecated. Creates a pre-split table for load testing.
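A minimal sketch of how the (deprecated) HBaseTestingUtility.createPreSplitLoadTestTable overload above might be invoked from test code; the table name, column family, and chosen algorithm are illustrative placeholders, not values from this page:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PreSplitLoadTestSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Hypothetical table name and family; GZ compression with no data block encoding.
        int regions = HBaseTestingUtility.createPreSplitLoadTestTable(
            conf,
            TableName.valueOf("loadtest_tbl"),
            Bytes.toBytes("cf"),
            Compression.Algorithm.GZ,
            DataBlockEncoding.NONE);
        System.out.println("Pre-split table created with " + regions + " regions");
      }
    }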
Uses of org.apache.hadoop.hbase.io.compress.Compression.Algorithm in org.apache.hadoop.hbase.client
Fields in org.apache.hadoop.hbase.client declared as Compression.Algorithm

  static final org.apache.hadoop.hbase.io.compress.Compression.Algorithm ColumnFamilyDescriptorBuilder.DEFAULT_COMPRESSION
      Default compression type.

Methods in org.apache.hadoop.hbase.client that return Compression.Algorithm

  org.apache.hadoop.hbase.io.compress.Compression.Algorithm ColumnFamilyDescriptor.getCompactionCompressionType()
      Returns Compression type setting.
  org.apache.hadoop.hbase.io.compress.Compression.Algorithm ColumnFamilyDescriptor.getCompressionType()
      Returns Compression type setting.
  org.apache.hadoop.hbase.io.compress.Compression.Algorithm ColumnFamilyDescriptorBuilder.getCompressionType()
  org.apache.hadoop.hbase.io.compress.Compression.Algorithm ColumnFamilyDescriptor.getMajorCompactionCompressionType()
      Returns Compression type setting for major compactions.
  org.apache.hadoop.hbase.io.compress.Compression.Algorithm ColumnFamilyDescriptor.getMinorCompactionCompressionType()
      Returns Compression type setting for minor compactions.

Methods in org.apache.hadoop.hbase.client with parameters of type Compression.Algorithm

  org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder ColumnFamilyDescriptorBuilder.setCompactionCompressionType(org.apache.hadoop.hbase.io.compress.Compression.Algorithm value)
  org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder ColumnFamilyDescriptorBuilder.setCompressionType(org.apache.hadoop.hbase.io.compress.Compression.Algorithm value)
  org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder ColumnFamilyDescriptorBuilder.setMajorCompactionCompressionType(org.apache.hadoop.hbase.io.compress.Compression.Algorithm value)
  org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder ColumnFamilyDescriptorBuilder.setMinorCompactionCompressionType(org.apache.hadoop.hbase.io.compress.Compression.Algorithm value)
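As a usage illustration (not taken from this page), a column family's compression can be configured through the builder methods listed above; the family name and algorithm choices below are arbitrary examples:

    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ColumnFamilyCompressionSketch {
      public static void main(String[] args) {
        ColumnFamilyDescriptor cfd = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("cf"))                         // hypothetical family name
            .setCompressionType(Compression.Algorithm.SNAPPY)        // compression for store files in general
            .setCompactionCompressionType(Compression.Algorithm.GZ)  // compression used when rewriting files on compaction
            .build();
        // Read the settings back through the ColumnFamilyDescriptor accessors.
        System.out.println(cfd.getCompressionType());           // SNAPPY
        System.out.println(cfd.getCompactionCompressionType()); // GZ
      }
    }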
Uses of org.apache.hadoop.hbase.io.compress.Compression.Algorithm in org.apache.hadoop.hbase.io.compress
Methods in org.apache.hadoop.hbase.io.compress that return Compression.Algorithm

  static org.apache.hadoop.hbase.io.compress.Compression.Algorithm Compression.getCompressionAlgorithmByName(String compressName)
  static org.apache.hadoop.hbase.io.compress.Compression.Algorithm Compression.Algorithm.valueOf(String name)
  static org.apache.hadoop.hbase.io.compress.Compression.Algorithm[] Compression.Algorithm.values()
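A small sketch of the lookup methods above; it assumes getCompressionAlgorithmByName accepts the usual lower-case algorithm names ("none", "gz", "snappy", ...):

    import org.apache.hadoop.hbase.io.compress.Compression;

    public class AlgorithmLookupSketch {
      public static void main(String[] args) {
        // Lookup by the configuration-style name (assumed lower case, e.g. "gz").
        Compression.Algorithm byName = Compression.getCompressionAlgorithmByName("gz");
        // Standard enum lookup by constant name.
        Compression.Algorithm byConstant = Compression.Algorithm.valueOf("GZ");
        System.out.println(byName == byConstant); // true
        // Enumerate every defined constant.
        for (Compression.Algorithm algo : Compression.Algorithm.values()) {
          System.out.println(algo);
        }
      }
    }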
Uses of org.apache.hadoop.hbase.io.compress.Compression.Algorithm in org.apache.hadoop.hbase.io.encoding
Methods in org.apache.hadoop.hbase.io.encoding with parameters of type Compression.Algorithm

  static int EncodedDataBlock.getCompressedSize(org.apache.hadoop.hbase.io.compress.Compression.Algorithm algo, org.apache.hadoop.io.compress.Compressor compressor, byte[] inputBuffer, int offset, int length)
      Find the size of compressed data assuming that buffer will be compressed using given algorithm.
  int EncodedDataBlock.getEncodedCompressedSize(org.apache.hadoop.hbase.io.compress.Compression.Algorithm comprAlgo, org.apache.hadoop.io.compress.Compressor compressor)
      Estimate size after second stage of compression (e.g. GZ).
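A hedged sketch of estimating compressed size with the static getCompressedSize method above; obtaining and returning the Compressor via Compression.Algorithm.getCompressor()/returnCompressor() is an assumption about the surrounding API, not something stated on this page:

    import java.nio.charset.StandardCharsets;
    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.io.encoding.EncodedDataBlock;
    import org.apache.hadoop.io.compress.Compressor;

    public class CompressedSizeSketch {
      public static void main(String[] args) throws Exception {
        byte[] input = "some repetitive payload payload payload".getBytes(StandardCharsets.UTF_8);
        Compression.Algorithm algo = Compression.Algorithm.GZ;
        Compressor compressor = algo.getCompressor();   // assumed helper on the enum
        try {
          int size = EncodedDataBlock.getCompressedSize(algo, compressor, input, 0, input.length);
          System.out.println("Estimated compressed size: " + size);
        } finally {
          algo.returnCompressor(compressor);            // hand the pooled compressor back (assumed)
        }
      }
    }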
Uses of org.apache.hadoop.hbase.io.compress.Compression.Algorithm in org.apache.hadoop.hbase.io.hfile
Fields in org.apache.hadoop.hbase.io.hfile declared as Compression.Algorithm

  static final org.apache.hadoop.hbase.io.compress.Compression.Algorithm HFile.DEFAULT_COMPRESSION_ALGORITHM
      Default compression: none.

Methods in org.apache.hadoop.hbase.io.hfile that return Compression.Algorithm

  static org.apache.hadoop.hbase.io.compress.Compression.Algorithm HFileWriterImpl.compressionByName(String algoName)
  org.apache.hadoop.hbase.io.compress.Compression.Algorithm HFileContext.getCompression()
  org.apache.hadoop.hbase.io.compress.Compression.Algorithm HFileReaderImpl.getCompressionAlgorithm()
  org.apache.hadoop.hbase.io.compress.Compression.Algorithm FixedFileTrailer.getCompressionCodec()

Methods in org.apache.hadoop.hbase.io.hfile with parameters of type Compression.Algorithm

  void FixedFileTrailer.setCompressionCodec(org.apache.hadoop.hbase.io.compress.Compression.Algorithm compressionCodec)
  org.apache.hadoop.hbase.io.hfile.HFileContextBuilder HFileContextBuilder.withCompression(org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression)
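For context, the withCompression builder method above is typically used when assembling an HFileContext; a minimal sketch (the choice of GZ is arbitrary):

    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.io.hfile.HFileContext;
    import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;

    public class HFileContextCompressionSketch {
      public static void main(String[] args) {
        HFileContext context = new HFileContextBuilder()
            .withCompression(Compression.Algorithm.GZ)
            .build();
        // getCompression() returns the algorithm carried by the context.
        System.out.println(context.getCompression()); // GZ
      }
    }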
Uses of org.apache.hadoop.hbase.io.compress.Compression.Algorithm in org.apache.hadoop.hbase.mob
Methods in org.apache.hadoop.hbase.mob with parameters of type Compression.Algorithm

  static org.apache.hadoop.hbase.regionserver.StoreFileWriter MobUtils.createWriter(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.hbase.client.ColumnFamilyDescriptor family, String date, org.apache.hadoop.fs.Path basePath, long maxKeyCount, org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression, String startKey, org.apache.hadoop.hbase.io.hfile.CacheConfig cacheConfig, org.apache.hadoop.hbase.io.crypto.Encryption.Context cryptoContext, boolean isCompaction, String regionName)
      Creates a writer for the mob file in temp directory.
  static org.apache.hadoop.hbase.regionserver.StoreFileWriter MobUtils.createWriter(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.hbase.client.ColumnFamilyDescriptor family, org.apache.hadoop.fs.Path path, long maxKeyCount, org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression, org.apache.hadoop.hbase.io.hfile.CacheConfig cacheConfig, org.apache.hadoop.hbase.io.crypto.Encryption.Context cryptoContext, org.apache.hadoop.hbase.util.ChecksumType checksumType, int bytesPerChecksum, int blocksize, org.apache.hadoop.hbase.regionserver.BloomType bloomType, boolean isCompaction)
      Creates a writer for the mob file in temp directory.
  static org.apache.hadoop.hbase.regionserver.StoreFileWriter MobUtils.createWriter(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.hbase.client.ColumnFamilyDescriptor family, org.apache.hadoop.fs.Path path, long maxKeyCount, org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression, org.apache.hadoop.hbase.io.hfile.CacheConfig cacheConfig, org.apache.hadoop.hbase.io.crypto.Encryption.Context cryptoContext, org.apache.hadoop.hbase.util.ChecksumType checksumType, int bytesPerChecksum, int blocksize, org.apache.hadoop.hbase.regionserver.BloomType bloomType, boolean isCompaction, Consumer<org.apache.hadoop.fs.Path> writerCreationTracker)
      Creates a writer for the mob file in temp directory.
  static org.apache.hadoop.hbase.regionserver.StoreFileWriter MobUtils.createWriter(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.hbase.client.ColumnFamilyDescriptor family, org.apache.hadoop.hbase.mob.MobFileName mobFileName, org.apache.hadoop.fs.Path basePath, long maxKeyCount, org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression, org.apache.hadoop.hbase.io.hfile.CacheConfig cacheConfig, org.apache.hadoop.hbase.io.crypto.Encryption.Context cryptoContext, boolean isCompaction)
      Creates a writer for the mob file in temp directory.
Uses of org.apache.hadoop.hbase.io.compress.Compression.Algorithm in org.apache.hadoop.hbase.regionserver
Methods in org.apache.hadoop.hbase.regionserver that return Compression.Algorithm

  org.apache.hadoop.hbase.io.compress.Compression.Algorithm CreateStoreFileWriterParams.compression()

Methods in org.apache.hadoop.hbase.regionserver with parameters of type Compression.Algorithm

  org.apache.hadoop.hbase.regionserver.CreateStoreFileWriterParams CreateStoreFileWriterParams.compression(org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression)
      Set the compression algorithm to use.
  org.apache.hadoop.hbase.regionserver.StoreFileWriter HMobStore.createWriter(Date date, long maxKeyCount, org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression, byte[] startKey, boolean isCompaction, Consumer<org.apache.hadoop.fs.Path> writerCreationTracker)
      Creates the writer for the mob file in the mob family directory.
  org.apache.hadoop.hbase.regionserver.StoreFileWriter HMobStore.createWriterInTmp(String date, org.apache.hadoop.fs.Path basePath, long maxKeyCount, org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression, byte[] startKey, boolean isCompaction, Consumer<org.apache.hadoop.fs.Path> writerCreationTracker)
      Creates the writer for the mob file in temp directory.
  org.apache.hadoop.hbase.regionserver.StoreFileWriter HMobStore.createWriterInTmp(Date date, long maxKeyCount, org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression, byte[] startKey, boolean isCompaction)
      Creates the writer for the mob file in temp directory.
  org.apache.hadoop.hbase.regionserver.StoreFileWriter HMobStore.createWriterInTmp(org.apache.hadoop.hbase.mob.MobFileName mobFileName, org.apache.hadoop.fs.Path basePath, long maxKeyCount, org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression, boolean isCompaction, Consumer<org.apache.hadoop.fs.Path> writerCreationTracker)
      Creates the writer for the mob file in temp directory.
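A sketch of the CreateStoreFileWriterParams accessors listed above; the static create() factory used to obtain an instance is an assumption about the class and does not appear on this page:

    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.regionserver.CreateStoreFileWriterParams;

    public class StoreFileWriterParamsSketch {
      public static void main(String[] args) {
        // compression(Algorithm) sets the value; the no-arg overload reads it back.
        CreateStoreFileWriterParams params = CreateStoreFileWriterParams.create() // assumed factory
            .compression(Compression.Algorithm.LZ4);
        System.out.println(params.compression()); // LZ4
      }
    }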
Uses of org.apache.hadoop.hbase.io.compress.Compression.Algorithm in org.apache.hadoop.hbase.regionserver.wal
Fields in org.apache.hadoop.hbase.regionserver.wal declared as Compression.Algorithm

  protected org.apache.hadoop.hbase.io.compress.Compression.Algorithm AbstractProtobufWALReader.valueCompressionType

Methods in org.apache.hadoop.hbase.regionserver.wal that return Compression.Algorithm

  static org.apache.hadoop.hbase.io.compress.Compression.Algorithm CompressionContext.getValueCompressionAlgorithm(org.apache.hadoop.conf.Configuration conf)

Constructors in org.apache.hadoop.hbase.regionserver.wal with parameters of type Compression.Algorithm

  CompressionContext(Class<? extends org.apache.hadoop.hbase.io.util.Dictionary> dictType, boolean recoveredEdits, boolean hasTagCompression, boolean hasValueCompression, org.apache.hadoop.hbase.io.compress.Compression.Algorithm valueCompressionType)
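As an illustration of the static accessor above, the WAL value-compression algorithm can be resolved from a Configuration; which configuration key it consults is not covered on this page:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.regionserver.wal.CompressionContext;

    public class WalValueCompressionSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        Compression.Algorithm valueCompression =
            CompressionContext.getValueCompressionAlgorithm(conf);
        System.out.println("WAL value compression: " + valueCompression);
      }
    }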
Uses of org.apache.hadoop.hbase.io.compress.Compression.Algorithm in org.apache.hadoop.hbase.util
Fields in org.apache.hadoop.hbase.util declared as Compression.Algorithm

  protected org.apache.hadoop.hbase.io.compress.Compression.Algorithm LoadTestTool.compressAlgo

Methods in org.apache.hadoop.hbase.util with parameters of type Compression.Algorithm

  static int LoadTestUtil.createPreSplitLoadTestTable(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.hbase.TableName tableName, byte[][] columnFamilies, org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression, org.apache.hadoop.hbase.io.encoding.DataBlockEncoding dataBlockEncoding, int numRegionsPerServer, int regionReplication, org.apache.hadoop.hbase.client.Durability durability)
      Creates a pre-split table for load testing.
  static int LoadTestUtil.createPreSplitLoadTestTable(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.hbase.TableName tableName, byte[] columnFamily, org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression, org.apache.hadoop.hbase.io.encoding.DataBlockEncoding dataBlockEncoding)
      Creates a pre-split table for load testing.
  static int LoadTestUtil.createPreSplitLoadTestTable(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.hbase.TableName tableName, byte[] columnFamily, org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression, org.apache.hadoop.hbase.io.encoding.DataBlockEncoding dataBlockEncoding, int numRegionsPerServer, int regionReplication, org.apache.hadoop.hbase.client.Durability durability)
      Creates a pre-split table for load testing.
  static void CompressionTest.testCompression(org.apache.hadoop.hbase.io.compress.Compression.Algorithm algo)
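Finally, CompressionTest.testCompression above can be used to check that a codec is actually usable on the local node; a minimal sketch (the SNAPPY choice is arbitrary, and a failure is assumed to surface as an exception):

    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.util.CompressionTest;

    public class CodecAvailabilitySketch {
      public static void main(String[] args) {
        try {
          CompressionTest.testCompression(Compression.Algorithm.SNAPPY);
          System.out.println("SNAPPY codec is usable on this node");
        } catch (Exception e) {
          System.out.println("SNAPPY codec is not available: " + e.getMessage());
        }
      }
    }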