Uses of Enum Class
org.apache.hadoop.hbase.io.compress.Compression.Algorithm

Packages that use org.apache.hadoop.hbase.io.compress.Compression.Algorithm
  • Uses of org.apache.hadoop.hbase.io.compress.Compression.Algorithm in org.apache.hadoop.hbase

    Fields in org.apache.hadoop.hbase declared as org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    Modifier and Type
    Field
    Description
    static final org.apache.hadoop.hbase.io.compress.Compression.Algorithm[]
    HBaseCommonTestingUtility.COMPRESSION_ALGORITHMS
    Deprecated.
    Compression algorithms to use in testing.
    Methods in org.apache.hadoop.hbase that return org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    Modifier and Type
    Method
    Description
    static org.apache.hadoop.hbase.io.compress.Compression.Algorithm[]
    HBaseTestingUtility.getSupportedCompressionAlgorithms()
    Deprecated.
    Get supported compression algorithms.
    Methods in org.apache.hadoop.hbase with parameters of type org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    Modifier and Type
    Method
    Description
    static int
    HBaseTestingUtility.createPreSplitLoadTestTable(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.hbase.TableName tableName, byte[][] columnFamilies, org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression, org.apache.hadoop.hbase.io.encoding.DataBlockEncoding dataBlockEncoding, int numRegionsPerServer, int regionReplication, org.apache.hadoop.hbase.client.Durability durability)
    Deprecated.
    Creates a pre-split table for load testing.
    static int
    HBaseTestingUtility.createPreSplitLoadTestTable(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.hbase.TableName tableName, byte[] columnFamily, org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression, org.apache.hadoop.hbase.io.encoding.DataBlockEncoding dataBlockEncoding)
    Deprecated.
    Creates a pre-split table for load testing.
    static int
    HBaseTestingUtility.createPreSplitLoadTestTable(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.hbase.TableName tableName, byte[] columnFamily, org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression, org.apache.hadoop.hbase.io.encoding.DataBlockEncoding dataBlockEncoding, int numRegionsPerServer, int regionReplication, org.apache.hadoop.hbase.client.Durability durability)
    Deprecated.
    Creates a pre-split table for load testing.
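
    The testing-utility entry points above are deprecated but still show the typical call pattern. Below is a minimal, hypothetical sketch: it assumes a test cluster reachable through the default configuration, and the class name, table name ("loadtest"), and column family ("cf") are illustrative placeholders.

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.HBaseTestingUtility;
      import org.apache.hadoop.hbase.TableName;
      import org.apache.hadoop.hbase.io.compress.Compression;
      import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
      import org.apache.hadoop.hbase.util.Bytes;

      public class LoadTestTableExample {
        public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();

          // List the algorithms the (deprecated) testing utility considers usable in tests.
          for (Compression.Algorithm algo : HBaseTestingUtility.getSupportedCompressionAlgorithms()) {
            System.out.println("supported in tests: " + algo);
          }

          // Create a pre-split load-test table with GZ compression and no block encoding.
          int regions = HBaseTestingUtility.createPreSplitLoadTestTable(
              conf,
              TableName.valueOf("loadtest"),
              Bytes.toBytes("cf"),
              Compression.Algorithm.GZ,
              DataBlockEncoding.NONE);
          System.out.println("created " + regions + " regions");
        }
      }
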
  • Uses of org.apache.hadoop.hbase.io.compress.Compression.Algorithm in org.apache.hadoop.hbase.client

    Fields in org.apache.hadoop.hbase.client declared as org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    Modifier and Type
    Field
    Description
    static final org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    ColumnFamilyDescriptorBuilder.DEFAULT_COMPRESSION
    Default compression type.
    Methods in org.apache.hadoop.hbase.client that return org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    Modifier and Type
    Method
    Description
    org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    ColumnFamilyDescriptor.getCompactionCompressionType()
    Returns the compression type setting.
    org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    ColumnFamilyDescriptor.getCompressionType()
    Returns the compression type setting.
    org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    ColumnFamilyDescriptorBuilder.getCompressionType()
     
    org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    ColumnFamilyDescriptor.getMajorCompactionCompressionType()
    Returns the compression type setting for major compactions.
    org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    ColumnFamilyDescriptor.getMinorCompactionCompressionType()
    Returns the compression type setting for minor compactions.
    Methods in org.apache.hadoop.hbase.client with parameters of type org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    Modifier and Type
    Method
    Description
    org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder
    ColumnFamilyDescriptorBuilder.setCompactionCompressionType(org.apache.hadoop.hbase.io.compress.Compression.Algorithm value)
     
    org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder
    ColumnFamilyDescriptorBuilder.setCompressionType(org.apache.hadoop.hbase.io.compress.Compression.Algorithm value)
     
    org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder
    ColumnFamilyDescriptorBuilder.setMajorCompactionCompressionType(org.apache.hadoop.hbase.io.compress.Compression.Algorithm value)
     
    org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder
    ColumnFamilyDescriptorBuilder.setMinorCompactionCompressionType(org.apache.hadoop.hbase.io.compress.Compression.Algorithm value)
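
    As a sketch of how the client-side builder and getter methods above fit together (the class name and column family "cf" are illustrative, and the SNAPPY codec is only an example; it must actually be available on the cluster):

      import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
      import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
      import org.apache.hadoop.hbase.io.compress.Compression;
      import org.apache.hadoop.hbase.util.Bytes;

      public class ColumnFamilyCompressionExample {
        public static void main(String[] args) {
          // Use SNAPPY for regular store files and GZ for major compaction output.
          ColumnFamilyDescriptor cfd = ColumnFamilyDescriptorBuilder
              .newBuilder(Bytes.toBytes("cf"))
              .setCompressionType(Compression.Algorithm.SNAPPY)
              .setMajorCompactionCompressionType(Compression.Algorithm.GZ)
              .build();

          System.out.println("default codec: " + ColumnFamilyDescriptorBuilder.DEFAULT_COMPRESSION);
          System.out.println("compression: " + cfd.getCompressionType());
          System.out.println("major compaction compression: " + cfd.getMajorCompactionCompressionType());
        }
      }
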
     
  • Uses of org.apache.hadoop.hbase.io.compress.Compression.Algorithm in org.apache.hadoop.hbase.io.compress

    Methods in org.apache.hadoop.hbase.io.compress that return org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    Modifier and Type
    Method
    Description
    static org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    Compression.getCompressionAlgorithmByName(String compressName)
     
    static org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    Compression.Algorithm.valueOf(String name)
     
    static org.apache.hadoop.hbase.io.compress.Compression.Algorithm[]
    Compression.Algorithm.values()
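
    A minimal sketch of the lookup methods above; it assumes the lower-case codec name "gz" is one of the names accepted by getCompressionAlgorithmByName and that it resolves to the GZ constant:

      import org.apache.hadoop.hbase.io.compress.Compression;

      public class AlgorithmLookupExample {
        public static void main(String[] args) {
          // Enumerate every codec defined by the enum.
          for (Compression.Algorithm algo : Compression.Algorithm.values()) {
            System.out.println(algo.name());
          }

          // Look up by the codec name used in configuration and shell output ...
          Compression.Algorithm byName = Compression.getCompressionAlgorithmByName("gz");

          // ... or by the enum constant name, as with any Java enum.
          Compression.Algorithm byConstant = Compression.Algorithm.valueOf("GZ");

          System.out.println(byName == byConstant); // true if both resolve to GZ
        }
      }
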
     
  • Uses of org.apache.hadoop.hbase.io.compress.Compression.Algorithm in org.apache.hadoop.hbase.io.encoding

    Methods in org.apache.hadoop.hbase.io.encoding with parameters of type org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    Modifier and Type
    Method
    Description
    static int
    EncodedDataBlock.getCompressedSize(org.apache.hadoop.hbase.io.compress.Compression.Algorithm algo, org.apache.hadoop.io.compress.Compressor compressor, byte[] inputBuffer, int offset, int length)
    Find the size of the compressed data, assuming the buffer will be compressed using the given algorithm.
    int
    EncodedDataBlock.getEncodedCompressedSize(org.apache.hadoop.hbase.io.compress.Compression.Algorithm comprAlgo, org.apache.hadoop.io.compress.Compressor compressor)
    Estimate the size after the second stage of compression (e.g. LZO).
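
    A small sketch of the static size estimator above. It assumes Compression.Algorithm's getCompressor()/returnCompressor() helpers (not listed in this section) for obtaining a Hadoop Compressor, and the input buffer is just a placeholder:

      import org.apache.hadoop.hbase.io.compress.Compression;
      import org.apache.hadoop.hbase.io.encoding.EncodedDataBlock;
      import org.apache.hadoop.io.compress.Compressor;

      public class CompressedSizeExample {
        public static void main(String[] args) throws Exception {
          byte[] block = new byte[64 * 1024]; // stand-in for raw block bytes

          Compression.Algorithm algo = Compression.Algorithm.GZ;
          Compressor compressor = algo.getCompressor();
          try {
            // Size the buffer would occupy if compressed with the given algorithm.
            int size = EncodedDataBlock.getCompressedSize(algo, compressor, block, 0, block.length);
            System.out.println("compressed size: " + size + " bytes");
          } finally {
            algo.returnCompressor(compressor);
          }
        }
      }
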
  • Uses of org.apache.hadoop.hbase.io.compress.Compression.Algorithm in org.apache.hadoop.hbase.io.hfile

    Fields in org.apache.hadoop.hbase.io.hfile declared as org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    Modifier and Type
    Field
    Description
    static final org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    HFile.DEFAULT_COMPRESSION_ALGORITHM
    Default compression: none.
    Methods in org.apache.hadoop.hbase.io.hfile that return org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    Modifier and Type
    Method
    Description
    static org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    HFileWriterImpl.compressionByName(String algoName)
     
    org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    HFileContext.getCompression()
     
    org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    HFileReaderImpl.getCompressionAlgorithm()
     
    org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    FixedFileTrailer.getCompressionCodec()
     
    Methods in org.apache.hadoop.hbase.io.hfile with parameters of type org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    Modifier and Type
    Method
    Description
    void
    FixedFileTrailer.setCompressionCodec(org.apache.hadoop.hbase.io.compress.Compression.Algorithm compressionCodec)
     
    org.apache.hadoop.hbase.io.hfile.HFileContextBuilder
    HFileContextBuilder.withCompression(org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression)
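
    A sketch combining the HFile-level methods above: resolve a codec by name, attach it to an HFile write context, and read it back. The codec name "gz" and the class name are illustrative.

      import org.apache.hadoop.hbase.io.compress.Compression;
      import org.apache.hadoop.hbase.io.hfile.HFile;
      import org.apache.hadoop.hbase.io.hfile.HFileContext;
      import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
      import org.apache.hadoop.hbase.io.hfile.HFileWriterImpl;

      public class HFileContextCompressionExample {
        public static void main(String[] args) {
          // Resolve the codec from its configuration name.
          Compression.Algorithm algo = HFileWriterImpl.compressionByName("gz");

          // Build an HFile write context that compresses blocks with that codec.
          HFileContext context = new HFileContextBuilder()
              .withCompression(algo)
              .build();

          System.out.println("context codec: " + context.getCompression());
          System.out.println("default codec: " + HFile.DEFAULT_COMPRESSION_ALGORITHM); // none
        }
      }
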
     
  • Uses of org.apache.hadoop.hbase.io.compress.Compression.Algorithm in org.apache.hadoop.hbase.mob

    Methods in org.apache.hadoop.hbase.mob with parameters of type org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    Modifier and Type
    Method
    Description
    static org.apache.hadoop.hbase.regionserver.StoreFileWriter
    MobUtils.createWriter(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.hbase.client.ColumnFamilyDescriptor family, String date, org.apache.hadoop.fs.Path basePath, long maxKeyCount, org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression, String startKey, org.apache.hadoop.hbase.io.hfile.CacheConfig cacheConfig, org.apache.hadoop.hbase.io.crypto.Encryption.Context cryptoContext, boolean isCompaction, String regionName)
    Creates a writer for the mob file in temp directory.
    static org.apache.hadoop.hbase.regionserver.StoreFileWriter
    MobUtils.createWriter(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.hbase.client.ColumnFamilyDescriptor family, org.apache.hadoop.fs.Path path, long maxKeyCount, org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression, org.apache.hadoop.hbase.io.hfile.CacheConfig cacheConfig, org.apache.hadoop.hbase.io.crypto.Encryption.Context cryptoContext, org.apache.hadoop.hbase.util.ChecksumType checksumType, int bytesPerChecksum, int blocksize, org.apache.hadoop.hbase.regionserver.BloomType bloomType, boolean isCompaction)
    Creates a writer for the mob file in temp directory.
    static org.apache.hadoop.hbase.regionserver.StoreFileWriter
    MobUtils.createWriter(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.hbase.client.ColumnFamilyDescriptor family, org.apache.hadoop.fs.Path path, long maxKeyCount, org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression, org.apache.hadoop.hbase.io.hfile.CacheConfig cacheConfig, org.apache.hadoop.hbase.io.crypto.Encryption.Context cryptoContext, org.apache.hadoop.hbase.util.ChecksumType checksumType, int bytesPerChecksum, int blocksize, org.apache.hadoop.hbase.regionserver.BloomType bloomType, boolean isCompaction, Consumer<org.apache.hadoop.fs.Path> writerCreationTracker)
    Creates a writer for the mob file in temp directory.
    static org.apache.hadoop.hbase.regionserver.StoreFileWriter
    MobUtils.createWriter(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.hbase.client.ColumnFamilyDescriptor family, org.apache.hadoop.hbase.mob.MobFileName mobFileName, org.apache.hadoop.fs.Path basePath, long maxKeyCount, org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression, org.apache.hadoop.hbase.io.hfile.CacheConfig cacheConfig, org.apache.hadoop.hbase.io.crypto.Encryption.Context cryptoContext, boolean isCompaction)
    Creates a writer for the mob file in temp directory.
  • Uses of org.apache.hadoop.hbase.io.compress.Compression.Algorithm in org.apache.hadoop.hbase.regionserver

    Methods in org.apache.hadoop.hbase.regionserver that return org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    Modifier and Type
    Method
    Description
    org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    CreateStoreFileWriterParams.compression()
     
    Methods in org.apache.hadoop.hbase.regionserver with parameters of type org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    Modifier and Type
    Method
    Description
    org.apache.hadoop.hbase.regionserver.CreateStoreFileWriterParams
    CreateStoreFileWriterParams.compression(org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression)
    Set the compression algorithm to use.
    org.apache.hadoop.hbase.regionserver.StoreFileWriter
    HMobStore.createWriter(Date date, long maxKeyCount, org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression, byte[] startKey, boolean isCompaction, Consumer<org.apache.hadoop.fs.Path> writerCreationTracker)
    Creates the writer for the mob file in the mob family directory.
    org.apache.hadoop.hbase.regionserver.StoreFileWriter
    HMobStore.createWriterInTmp(String date, org.apache.hadoop.fs.Path basePath, long maxKeyCount, org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression, byte[] startKey, boolean isCompaction, Consumer<org.apache.hadoop.fs.Path> writerCreationTracker)
    Creates the writer for the mob file in temp directory.
    org.apache.hadoop.hbase.regionserver.StoreFileWriter
    HMobStore.createWriterInTmp(Date date, long maxKeyCount, org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression, byte[] startKey, boolean isCompaction)
    Creates the writer for the mob file in temp directory.
    org.apache.hadoop.hbase.regionserver.StoreFileWriter
    HMobStore.createWriterInTmp(org.apache.hadoop.hbase.mob.MobFileName mobFileName, org.apache.hadoop.fs.Path basePath, long maxKeyCount, org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression, boolean isCompaction, Consumer<org.apache.hadoop.fs.Path> writerCreationTracker)
    Creates the writer for the mob file in temp directory.
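
    The CreateStoreFileWriterParams object above uses a fluent style: the overloaded compression method acts as a setter when given an argument and as a getter without one. A minimal sketch, assuming a CreateStoreFileWriterParams.create() factory (not listed in this section) and an illustrative class name:

      import org.apache.hadoop.hbase.io.compress.Compression;
      import org.apache.hadoop.hbase.regionserver.CreateStoreFileWriterParams;

      public class StoreFileWriterParamsExample {
        public static void main(String[] args) {
          // Setter overload returns the params object for chaining.
          CreateStoreFileWriterParams params = CreateStoreFileWriterParams.create()
              .compression(Compression.Algorithm.LZ4);

          // Getter overload reads the value back.
          System.out.println("store file writer compression: " + params.compression());
        }
      }
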
  • Uses of org.apache.hadoop.hbase.io.compress.Compression.Algorithm in org.apache.hadoop.hbase.regionserver.wal

    Fields in org.apache.hadoop.hbase.regionserver.wal declared as org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    Modifier and Type
    Field
    Description
    protected org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    AbstractProtobufWALReader.valueCompressionType
     
    Methods in org.apache.hadoop.hbase.regionserver.wal that return org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    Modifier and Type
    Method
    Description
    static org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    CompressionContext.getValueCompressionAlgorithm(org.apache.hadoop.conf.Configuration conf)
     
    Constructors in org.apache.hadoop.hbase.regionserver.wal with parameters of type org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    Modifier
    Constructor
    Description
     
    CompressionContext(Class<? extends org.apache.hadoop.hbase.io.util.Dictionary> dictType, boolean recoveredEdits, boolean hasTagCompression, boolean hasValueCompression, org.apache.hadoop.hbase.io.compress.Compression.Algorithm valueCompressionType)
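
    A minimal sketch of the WAL-side lookup above. The configuration keys that enable and select WAL value compression are omitted here on purpose, so with a plain default Configuration the call simply reports whatever codec the site configuration (or the default) specifies; the class name is illustrative:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.io.compress.Compression;
      import org.apache.hadoop.hbase.regionserver.wal.CompressionContext;

      public class WalValueCompressionExample {
        public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();

          // Reads the codec configured for WAL value compression from conf.
          Compression.Algorithm algo = CompressionContext.getValueCompressionAlgorithm(conf);
          System.out.println("WAL value compression: " + algo);
        }
      }
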
     
  • Uses of org.apache.hadoop.hbase.io.compress.Compression.Algorithm in org.apache.hadoop.hbase.util

    Fields in org.apache.hadoop.hbase.util declared as org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    Modifier and Type
    Field
    Description
    protected org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    LoadTestTool.compressAlgo
     
    Methods in org.apache.hadoop.hbase.util with parameters of type org.apache.hadoop.hbase.io.compress.Compression.Algorithm
    Modifier and Type
    Method
    Description
    static int
    LoadTestUtil.createPreSplitLoadTestTable(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.hbase.TableName tableName, byte[][] columnFamilies, org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression, org.apache.hadoop.hbase.io.encoding.DataBlockEncoding dataBlockEncoding, int numRegionsPerServer, int regionReplication, org.apache.hadoop.hbase.client.Durability durability)
    Creates a pre-split table for load testing.
    static int
    LoadTestUtil.createPreSplitLoadTestTable(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.hbase.TableName tableName, byte[] columnFamily, org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression, org.apache.hadoop.hbase.io.encoding.DataBlockEncoding dataBlockEncoding)
    Creates a pre-split table for load testing.
    static int
    LoadTestUtil.createPreSplitLoadTestTable(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.hbase.TableName tableName, byte[] columnFamily, org.apache.hadoop.hbase.io.compress.Compression.Algorithm compression, org.apache.hadoop.hbase.io.encoding.DataBlockEncoding dataBlockEncoding, int numRegionsPerServer, int regionReplication, org.apache.hadoop.hbase.client.Durability durability)
    Creates a pre-split table for load testing.
    static void
    CompressionTest.testCompression(org.apache.hadoop.hbase.io.compress.Compression.Algorithm algo)
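
    CompressionTest.testCompression is a convenient way to check which codecs are actually usable on the local node, for example before picking one for the load-test overloads above. A small sketch (the class name is illustrative and error handling is deliberately simple):

      import org.apache.hadoop.hbase.io.compress.Compression;
      import org.apache.hadoop.hbase.util.CompressionTest;

      public class CodecAvailabilityCheck {
        public static void main(String[] args) {
          for (Compression.Algorithm algo : Compression.Algorithm.values()) {
            try {
              // Throws if the codec (native library or classpath dependency) is not usable here.
              CompressionTest.testCompression(algo);
              System.out.println(algo + ": OK");
            } catch (Exception e) {
              System.out.println(algo + ": unavailable (" + e.getMessage() + ")");
            }
          }
        }
      }
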