Packages that use Compression.Algorithm

Package | Description |
---|---|
org.apache.hadoop.hbase | |
org.apache.hadoop.hbase.client | Provides HBase Client. |
org.apache.hadoop.hbase.io.compress | |
org.apache.hadoop.hbase.io.encoding | |
org.apache.hadoop.hbase.io.hfile | Provides implementations of HFile and HFile BlockCache. |
org.apache.hadoop.hbase.mapreduce | Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods. |
org.apache.hadoop.hbase.mob | |
org.apache.hadoop.hbase.regionserver | |
org.apache.hadoop.hbase.regionserver.compactions | |
org.apache.hadoop.hbase.thrift2 | Provides an HBase Thrift service. |
org.apache.hadoop.hbase.util | |
Methods in org.apache.hadoop.hbase that return Compression.Algorithm

Modifier and Type | Method and Description |
---|---|
Compression.Algorithm | HColumnDescriptor.getCompactionCompression() Deprecated. As of release 2.0.0, this will be removed in HBase 3.0.0 (HBASE-13655). Use HColumnDescriptor.getCompactionCompressionType(). |
Compression.Algorithm | HColumnDescriptor.getCompactionCompressionType() Deprecated. |
Compression.Algorithm | HColumnDescriptor.getCompression() Deprecated. As of release 2.0.0, this will be removed in HBase 3.0.0 (HBASE-13655). Use HColumnDescriptor.getCompressionType(). |
Compression.Algorithm | HColumnDescriptor.getCompressionType() Deprecated. |
Methods in org.apache.hadoop.hbase with parameters of type Compression.Algorithm

Modifier and Type | Method and Description |
---|---|
HColumnDescriptor | HColumnDescriptor.setCompactionCompressionType(Compression.Algorithm value) Deprecated. Compression types supported in HBase. |
HColumnDescriptor | HColumnDescriptor.setCompressionType(Compression.Algorithm value) Deprecated. Compression types supported in HBase. |
Fields in org.apache.hadoop.hbase.client declared as Compression.Algorithm

Modifier and Type | Field and Description |
---|---|
static Compression.Algorithm | ColumnFamilyDescriptorBuilder.DEFAULT_COMPRESSION Default compression type. |
Methods in org.apache.hadoop.hbase.client that return Compression.Algorithm

Modifier and Type | Method and Description |
---|---|
Compression.Algorithm | ColumnFamilyDescriptorBuilder.ModifyableColumnFamilyDescriptor.getCompactionCompressionType() |
Compression.Algorithm | ColumnFamilyDescriptor.getCompactionCompressionType() |
Compression.Algorithm | ColumnFamilyDescriptorBuilder.getCompressionType() |
Compression.Algorithm | ColumnFamilyDescriptorBuilder.ModifyableColumnFamilyDescriptor.getCompressionType() |
Compression.Algorithm | ColumnFamilyDescriptor.getCompressionType() |
Methods in org.apache.hadoop.hbase.client with parameters of type Compression.Algorithm

Modifier and Type | Method and Description |
---|---|
ColumnFamilyDescriptorBuilder | ColumnFamilyDescriptorBuilder.setCompactionCompressionType(Compression.Algorithm value) |
ColumnFamilyDescriptorBuilder.ModifyableColumnFamilyDescriptor | ColumnFamilyDescriptorBuilder.ModifyableColumnFamilyDescriptor.setCompactionCompressionType(Compression.Algorithm type) Compression types supported in HBase. |
ColumnFamilyDescriptorBuilder | ColumnFamilyDescriptorBuilder.setCompressionType(Compression.Algorithm value) |
ColumnFamilyDescriptorBuilder.ModifyableColumnFamilyDescriptor | ColumnFamilyDescriptorBuilder.ModifyableColumnFamilyDescriptor.setCompressionType(Compression.Algorithm type) Compression types supported in HBase. |
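The builder setters above are the non-deprecated way to configure per-family compression. A minimal sketch, assuming hbase-client 2.x on the classpath; the table name `demo` and family name `d` are illustrative, and whether a given codec (e.g. SNAPPY) actually works depends on the native libraries installed on the cluster:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.util.Bytes;

public class CompressionConfigSketch {
    public static void main(String[] args) {
        // Regular store files use SNAPPY; major-compaction output uses GZ.
        ColumnFamilyDescriptor cf = ColumnFamilyDescriptorBuilder
                .newBuilder(Bytes.toBytes("d"))            // illustrative family name
                .setCompressionType(Compression.Algorithm.SNAPPY)
                .setCompactionCompressionType(Compression.Algorithm.GZ)
                .build();

        TableDescriptor table = TableDescriptorBuilder
                .newBuilder(TableName.valueOf("demo"))     // illustrative table name
                .setColumnFamily(cf)
                .build();

        // The getters in the table above mirror the setters.
        System.out.println(cf.getCompressionType());
        System.out.println(cf.getCompactionCompressionType());
    }
}
```

Separating `setCompressionType` from `setCompactionCompressionType` lets a family trade write-path CPU for storage: a fast codec for flushes, a denser one for compacted files.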
Methods in org.apache.hadoop.hbase.io.compress that return Compression.Algorithm

Modifier and Type | Method and Description |
---|---|
static Compression.Algorithm | Compression.getCompressionAlgorithmByName(String compressName) |
static Compression.Algorithm | Compression.Algorithm.valueOf(String name) Returns the enum constant of this type with the specified name. |
static Compression.Algorithm[] | Compression.Algorithm.values() Returns an array containing the constants of this enum type, in the order they are declared. |
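`valueOf()` and `values()` are the standard compiler-generated Java enum methods, so their behavior follows ordinary enum semantics. A self-contained sketch using a stand-in enum (the constants below are illustrative, not the full set HBase defines):

```java
// Stand-in enum illustrating the valueOf()/values() semantics described
// above; the constant names here are illustrative, not HBase's full list.
enum Algo { NONE, GZ, SNAPPY }

public class EnumLookupDemo {
    public static void main(String[] args) {
        // values() returns the constants in declaration order.
        Algo[] all = Algo.values();
        System.out.println(all.length); // 3
        System.out.println(all[0]);     // NONE

        // valueOf() matches the constant name exactly (case-sensitive)...
        System.out.println(Algo.valueOf("SNAPPY")); // SNAPPY

        // ...and throws IllegalArgumentException for unknown names.
        try {
            Algo.valueOf("snappy");
        } catch (IllegalArgumentException e) {
            System.out.println("no such constant");
        }
    }
}
```

Because `valueOf()` is strict about case, lookup helpers such as `getCompressionAlgorithmByName` typically exist precisely to map external, lowercase configuration strings onto enum constants.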
Methods in org.apache.hadoop.hbase.io.compress with parameters of type Compression.Algorithm

Modifier and Type | Method and Description |
---|---|
static void | Compression.decompress(byte[] dest, int destOffset, InputStream bufferedBoundedStream, int compressedSize, int uncompressedSize, Compression.Algorithm compressAlgo) Decompresses data from the given stream using the configured compression algorithm. |
Methods in org.apache.hadoop.hbase.io.encoding with parameters of type Compression.Algorithm

Modifier and Type | Method and Description |
---|---|
static int | EncodedDataBlock.getCompressedSize(Compression.Algorithm algo, org.apache.hadoop.io.compress.Compressor compressor, byte[] inputBuffer, int offset, int length) Finds the size of the compressed data, assuming the buffer will be compressed using the given algorithm. |
int | EncodedDataBlock.getEncodedCompressedSize(Compression.Algorithm comprAlgo, org.apache.hadoop.io.compress.Compressor compressor) Estimate size after second stage of compression (e.g. …). |
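Measuring the compressed footprint of a buffer range, as `getCompressedSize` does when evaluating block encodings, can be sketched with the JDK's `Deflater` standing in for the Hadoop `Compressor` the real method takes:

```java
import java.util.zip.Deflater;

public class CompressedSizeDemo {
    // Compresses buffer[offset..offset+length) and returns the compressed
    // byte count -- the measurement getCompressedSize performs, but using
    // the JDK Deflater as a stand-in for a Hadoop Compressor.
    static int compressedSize(Deflater deflater, byte[] buffer, int offset, int length) {
        deflater.reset();
        deflater.setInput(buffer, offset, length);
        deflater.finish();
        byte[] scratch = new byte[4096];
        int total = 0;
        while (!deflater.finished()) {
            // deflate() returns how many compressed bytes were written;
            // we only tally the count, discarding the scratch contents.
            total += deflater.deflate(scratch);
        }
        return total;
    }

    public static void main(String[] args) {
        byte[] repetitive = new byte[8192]; // all zeros: highly compressible
        Deflater deflater = new Deflater();
        int size = compressedSize(deflater, repetitive, 0, repetitive.length);
        deflater.end();
        // Repetitive input should shrink well below its raw length.
        System.out.println(size + " <= " + repetitive.length);
    }
}
```

Estimating the compressed size without keeping the output is useful when comparing candidate encodings: only the byte count matters, so the scratch buffer can be reused.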
Fields in org.apache.hadoop.hbase.io.hfile declared as Compression.Algorithm

Modifier and Type | Field and Description |
---|---|
private Compression.Algorithm | HFileContext.compressAlgo Compression algorithm used. |
private Compression.Algorithm | HFileReaderImpl.compressAlgo Filled when we read in the trailer. |
private Compression.Algorithm | HFileContextBuilder.compression Compression algorithm used. |
private Compression.Algorithm | FixedFileTrailer.compressionCodec The compression codec used for all blocks. |
static Compression.Algorithm | HFile.DEFAULT_COMPRESSION_ALGORITHM Default compression: none. |
Methods in org.apache.hadoop.hbase.io.hfile that return Compression.Algorithm

Modifier and Type | Method and Description |
---|---|
static Compression.Algorithm | HFileWriterImpl.compressionByName(String algoName) |
Compression.Algorithm | HFileContext.getCompression() |
Compression.Algorithm | HFileReaderImpl.getCompressionAlgorithm() |
Compression.Algorithm | HFile.Reader.getCompressionAlgorithm() |
Compression.Algorithm | FixedFileTrailer.getCompressionCodec() |
Methods in org.apache.hadoop.hbase.io.hfile with parameters of type Compression.Algorithm

Modifier and Type | Method and Description |
---|---|
void | FixedFileTrailer.setCompressionCodec(Compression.Algorithm compressionCodec) |
HFileContextBuilder | HFileContextBuilder.withCompression(Compression.Algorithm compression) |
Constructors in org.apache.hadoop.hbase.io.hfile with parameters of type Compression.Algorithm

Constructor and Description |
---|
HFileContext(boolean useHBaseChecksum, boolean includesMvcc, boolean includesTags, Compression.Algorithm compressAlgo, boolean compressTags, ChecksumType checksumType, int bytesPerChecksum, int blockSize, DataBlockEncoding encoding, Encryption.Context cryptoContext, long fileCreateTime, String hfileName) |
Methods in org.apache.hadoop.hbase.mapreduce that return types with arguments of type Compression.Algorithm

Modifier and Type | Method and Description |
---|---|
(package private) static Map<byte[],Compression.Algorithm> | HFileOutputFormat2.createFamilyCompressionMap(org.apache.hadoop.conf.Configuration conf) Runs inside the task to deserialize the column-family-to-compression-algorithm map from the configuration. |
Methods in org.apache.hadoop.hbase.mob with parameters of type Compression.Algorithm

Modifier and Type | Method and Description |
---|---|
static StoreFileWriter | MobUtils.createDelFileWriter(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, ColumnFamilyDescriptor family, String date, org.apache.hadoop.fs.Path basePath, long maxKeyCount, Compression.Algorithm compression, byte[] startKey, CacheConfig cacheConfig, Encryption.Context cryptoContext) Creates a writer for the del file in the temp directory. |
static StoreFileWriter | MobUtils.createWriter(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, ColumnFamilyDescriptor family, MobFileName mobFileName, org.apache.hadoop.fs.Path basePath, long maxKeyCount, Compression.Algorithm compression, CacheConfig cacheConfig, Encryption.Context cryptoContext, boolean isCompaction) Creates a writer for the mob file in the temp directory. |
static StoreFileWriter | MobUtils.createWriter(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, ColumnFamilyDescriptor family, org.apache.hadoop.fs.Path path, long maxKeyCount, Compression.Algorithm compression, CacheConfig cacheConfig, Encryption.Context cryptoContext, ChecksumType checksumType, int bytesPerChecksum, int blocksize, BloomType bloomType, boolean isCompaction) Creates a writer for the mob file in the temp directory. |
static StoreFileWriter | MobUtils.createWriter(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, ColumnFamilyDescriptor family, String date, org.apache.hadoop.fs.Path basePath, long maxKeyCount, Compression.Algorithm compression, byte[] startKey, CacheConfig cacheConfig, Encryption.Context cryptoContext, boolean isCompaction) Creates a writer for the mob file in the temp directory. |
static StoreFileWriter | MobUtils.createWriter(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, ColumnFamilyDescriptor family, String date, org.apache.hadoop.fs.Path basePath, long maxKeyCount, Compression.Algorithm compression, String startKey, CacheConfig cacheConfig, Encryption.Context cryptoContext, boolean isCompaction) Creates a writer for the mob file in the temp directory. |
Methods in org.apache.hadoop.hbase.regionserver with parameters of type Compression.Algorithm

Modifier and Type | Method and Description |
---|---|
StoreFileWriter | HMobStore.createDelFileWriterInTmp(Date date, long maxKeyCount, Compression.Algorithm compression, byte[] startKey) Creates the writer for the del file in the temp directory. |
private HFileContext | HStore.createFileContext(Compression.Algorithm compression, boolean includeMVCCReadpoint, boolean includesTag, Encryption.Context cryptoContext) |
StoreFileWriter | HMobStore.createWriterInTmp(Date date, long maxKeyCount, Compression.Algorithm compression, byte[] startKey, boolean isCompaction) Creates the writer for the mob file in the temp directory. |
StoreFileWriter | HStore.createWriterInTmp(long maxKeyCount, Compression.Algorithm compression, boolean isCompaction, boolean includeMVCCReadpoint, boolean includesTag, boolean shouldDropBehind) |
StoreFileWriter | HMobStore.createWriterInTmp(MobFileName mobFileName, org.apache.hadoop.fs.Path basePath, long maxKeyCount, Compression.Algorithm compression, boolean isCompaction) Creates the writer for the mob file in the temp directory. |
StoreFileWriter | HMobStore.createWriterInTmp(String date, org.apache.hadoop.fs.Path basePath, long maxKeyCount, Compression.Algorithm compression, byte[] startKey, boolean isCompaction) Creates the writer for the mob file in the temp directory. |
Fields in org.apache.hadoop.hbase.regionserver.compactions declared as Compression.Algorithm

Modifier and Type | Field and Description |
---|---|
protected Compression.Algorithm | Compactor.compactionCompression |
Methods in org.apache.hadoop.hbase.thrift2 that return Compression.Algorithm

Modifier and Type | Method and Description |
---|---|
static Compression.Algorithm | ThriftUtilities.compressionAlgorithmFromThrift(org.apache.hadoop.hbase.thrift2.generated.TCompressionAlgorithm in) |
Methods in org.apache.hadoop.hbase.thrift2 with parameters of type Compression.Algorithm

Modifier and Type | Method and Description |
---|---|
static org.apache.hadoop.hbase.thrift2.generated.TCompressionAlgorithm | ThriftUtilities.compressionAlgorithmFromHBase(Compression.Algorithm in) |
Methods in org.apache.hadoop.hbase.util with parameters of type Compression.Algorithm

Modifier and Type | Method and Description |
---|---|
static void | CompressionTest.testCompression(Compression.Algorithm algo) |
Copyright © 2007–2020 The Apache Software Foundation. All rights reserved.