A Partitioner implementation that partitions records into different HBase regions based on the region splits.
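As an illustration, such a partitioner can pick a partition by binary searching the sorted region start keys; a minimal sketch, with a hypothetical class name and a raw byte-array key type rather than the connector's exact API:

```scala
import java.util.Arrays

import org.apache.hadoop.hbase.util.Bytes
import org.apache.spark.Partitioner

// Illustrative sketch: one partition per region, chosen by binary search
// over the sorted region start keys.
class RegionSplitPartitioner(startKeys: Array[Array[Byte]]) extends Partitioner {

  override def numPartitions: Int = startKeys.length

  override def getPartition(key: Any): Int = {
    val rowKey = key.asInstanceOf[Array[Byte]]
    val idx = Arrays.binarySearch(startKeys, rowKey, Bytes.BYTES_COMPARATOR)
    // A negative result encodes the insertion point (-insertionPoint - 1);
    // the owning region is the one immediately before that point.
    if (idx >= 0) idx else Math.max(0, -idx - 2)
  }
}
```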
This is a wrapper over a byte array so that it can serve as a key in a HashMap.
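Raw byte arrays compare by reference in Java, so they cannot serve as HashMap keys directly; a minimal sketch of the wrapping idea (illustrative, not necessarily the exact class):

```scala
import java.util.Arrays

import org.apache.hadoop.hbase.util.Bytes

// Illustrative sketch: value-based equality and hashing over the wrapped bytes.
class ByteArrayWrapper(val value: Array[Byte])
    extends Comparable[ByteArrayWrapper] with Serializable {

  override def equals(other: Any): Boolean = other match {
    case that: ByteArrayWrapper => Arrays.equals(value, that.value)
    case _ => false
  }

  override def hashCode(): Int = Arrays.hashCode(value)

  override def compareTo(that: ByteArrayWrapper): Int =
    Bytes.compareTo(value, that.value)
}
```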
A wrapper class that allows a column family and a qualifier together to serve as the key of a HashMap.
Contains information related to a filter for a given column.
A collection of ColumnFilters indexed by column names.
DefaultSource for integration with Spark's DataFrame data source API.
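As an illustration, a DataFrame read through this source looks roughly like the following; the format string is the package name, while the option names follow one version of the connector's examples and may differ in other releases:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("hbase-df").getOrCreate()

// "hbase.table" and "hbase.columns.mapping" are assumptions based on one
// connector release; check the version you are running.
val df = spark.read
  .format("org.apache.hadoop.hbase.spark")
  .option("hbase.table", "t1")
  .option("hbase.columns.mapping",
    "KEY_FIELD STRING :key, VALUE_FIELD STRING c:v")
  .load()

// Filters like this one are candidates for push-down into the HBase scan.
df.filter(df("KEY_FIELD") > "row5").show()
```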
Dynamic logic for SQL push-down. There is an instance for the most common operations and a pass-through for other operations not covered here.
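A sketch of the general pattern (trait and class names here are hypothetical): a small expression tree where the common comparisons are evaluated directly and anything unrecognized falls back to a pass-through that accepts every row, leaving Spark to re-apply the full filter after the scan:

```scala
// Illustrative sketch of push-down logic as a small expression tree.
trait LogicExpr {
  def execute(value: Long): Boolean
}

case class EqualExpr(target: Long) extends LogicExpr {
  def execute(value: Long): Boolean = value == target
}

case class GreaterThanExpr(target: Long) extends LogicExpr {
  def execute(value: Long): Boolean = value > target
}

case class AndExpr(left: LogicExpr, right: LogicExpr) extends LogicExpr {
  def execute(value: Long): Boolean = left.execute(value) && right.execute(value)
}

// Pass-through for operations not covered: accept everything and let
// Spark evaluate the original predicate on the returned rows.
case object PassThroughExpr extends LogicExpr {
  def execute(value: Long): Boolean = true
}
```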
This object is a clean way to store and sort all cells that will be bulk loaded into a single row.
This object holds optional settings for how a given column family's writer will work.
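In spirit it is a simple serializable holder; a sketch with field names that are assumptions, not necessarily the shipped class:

```scala
// Illustrative sketch: per-column-family options applied when writing HFiles.
class FamilyHFileWriteOptions(
    val compression: String,        // e.g. "GZ", "SNAPPY", "NONE"
    val bloomType: String,          // e.g. "ROW", "ROWCOL", "NONE"
    val blockSize: Int,             // HFile block size in bytes
    val dataBlockEncoding: String)  // e.g. "PREFIX", "NONE"
  extends Serializable
```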
Tracks the state of 'HBaseConnectionCache' for logging.
Denotes a unique key to an HBase Connection instance.
HBaseContext is a façade for HBase operations like bulk put, get, increment, delete, and scan.
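A sketch of a typical bulk put, following the connector's documented usage (exact signatures can vary across versions; the table "t1" with family "c" is assumed to exist):

```scala
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.spark.HBaseContext
import org.apache.hadoop.hbase.util.Bytes
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("hbase-bulk-put"))
val hbaseContext = new HBaseContext(sc, HBaseConfiguration.create())

// Two (rowKey, value) records destined for table "t1", family "c".
val rdd = sc.parallelize(Seq(
  (Bytes.toBytes("row1"), Bytes.toBytes("value1")),
  (Bytes.toBytes("row2"), Bytes.toBytes("value2"))))

// bulkPut maps each record to a Put and writes them out in parallel.
hbaseContext.bulkPut[(Array[Byte], Array[Byte])](
  rdd,
  TableName.valueOf("t1"),
  r => new Put(r._1).addColumn(Bytes.toBytes("c"), Bytes.toBytes("q"), r._2))
```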
Implementation of Spark's BaseRelation that builds up our scan logic, performs scan pruning, filter push-down, and value conversions.
This is the Java wrapper over HBaseContext, which is written in Scala.
This is the key to be used for sorting and shuffling.
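A sketch of the idea, assuming the key carries rowKey, family, and qualifier and compares them in that order (the shape is an assumption):

```scala
import org.apache.hadoop.hbase.util.Bytes

// Illustrative sketch: composite key compared by rowKey, then family,
// then qualifier, so shuffled cells come out in HFile order.
class KeyFamilyQualifier(
    val rowKey: Array[Byte],
    val family: Array[Byte],
    val qualifier: Array[Byte])
  extends Comparable[KeyFamilyQualifier] with Serializable {

  override def compareTo(that: KeyFamilyQualifier): Int = {
    var result = Bytes.compareTo(rowKey, that.rowKey)
    if (result == 0) result = Bytes.compareTo(family, that.family)
    if (result == 0) result = Bytes.compareTo(qualifier, that.qualifier)
    result
  }
}
```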
Contains information related to a filter for a given column.
Construct to contain a single scan range's information.
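A sketch of the shape such a construct might take (field names are assumptions): a lower and upper row-key bound plus flags recording whether each bound is inclusive:

```scala
// Illustrative sketch: one contiguous row-key range for a scan.
class ScanRange(
    var upperBound: Array[Byte],
    var isUpperBoundEqualTo: Boolean,   // true if the upper bound is inclusive
    var lowerBound: Array[Byte],
    var isLowerBoundEqualTo: Boolean)   // true if the lower bound is inclusive
  extends Serializable
```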
Status object that stores static functions and also holds information about the last execution, which can be used for unit testing.
HBaseDStreamFunctions contains a set of implicit functions that can be applied to a Spark DStream so that we can easily interact with HBase.
HBaseRDDFunctions contains a set of implicit functions that can be applied to a Spark RDD so that we can easily interact with HBase.
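As an illustration, the RDD variant is typically used as below, reusing the `rdd` and `hbaseContext` from the HBaseContext sketch above; the DStream variant mirrors it (exact method sets can vary across versions):

```scala
import org.apache.hadoop.hbase.TableName
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.spark.HBaseRDDFunctions._
import org.apache.hadoop.hbase.util.Bytes

// The import above implicitly adds hbaseBulkPut (and similar methods) to the RDD.
rdd.hbaseBulkPut(
  hbaseContext,
  TableName.valueOf("t1"),
  (r: (Array[Byte], Array[Byte])) =>
    new Put(r._1).addColumn(Bytes.toBytes("c"), Bytes.toBytes("q"), r._2))
```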
At the top level, the converters provide three high-level interfaces.