Package org.apache.hadoop.hbase.mapred
Class TableRecordReaderImpl
java.lang.Object
org.apache.hadoop.hbase.mapred.TableRecordReaderImpl
Iterate over an HBase table's data, returning (ImmutableBytesWritable, Result) pairs.
Field Summary
Fields
- private byte[] endRow
- private Table htable
- private byte[] lastSuccessfulRow
- private static final org.slf4j.Logger LOG
- private int logPerRowCount
- private boolean logScannerActivity
- private int rowcount
- private ResultScanner scanner
- private byte[] startRow
- private long timestamp
- private byte[][] trrInputColumns
- private Filter trrRowFilter
Constructor Summary
Constructors
- TableRecordReaderImpl()
Method Summary
- void close()
- ImmutableBytesWritable createKey()
- Result createValue()
- long getPos()
- float getProgress()
- (package private) byte[] getStartRow()
- void init() - Build the scanner.
- boolean next(ImmutableBytesWritable key, Result value)
- void restart(byte[] firstRow) - Restart from survivable exceptions by creating a new scanner.
- void setEndRow(byte[] endRow)
- void setHTable(Table htable)
- void setInputColumns(byte[][] inputColumns)
- void setRowFilter(Filter rowFilter)
- void setStartRow(byte[] startRow)
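The methods above can be driven by hand in a hypothetical usage sketch. It assumes an open HBase Connection, a table named "mytable" with a column "info:name", and row bounds "row-0000"/"row-9999" - all of these names are placeholders, not part of this API's contract:

```java
import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapred.TableRecordReaderImpl;
import org.apache.hadoop.hbase.util.Bytes;

public class SplitScanExample {
  // conn, table name, column, and row bounds are assumptions for illustration
  static void scanSplit(Connection conn) throws IOException {
    Table table = conn.getTable(TableName.valueOf("mytable"));
    TableRecordReaderImpl reader = new TableRecordReaderImpl();
    reader.setHTable(table);
    reader.setInputColumns(new byte[][] { Bytes.toBytes("info:name") });
    reader.setStartRow(Bytes.toBytes("row-0000")); // first row of the split
    reader.setEndRow(Bytes.toBytes("row-9999"));   // last row of the split
    reader.init();                                 // builds the scanner

    ImmutableBytesWritable key = reader.createKey();
    Result value = reader.createValue();
    while (reader.next(key, value)) {              // fills key and value in place
      System.out.println(Bytes.toString(key.get()));
    }
    reader.close();
  }
}
```

Typically this wiring is done by the framework through TableInputFormatBase rather than by user code; the sketch only makes the call order visible.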
Field Details
- private static final org.slf4j.Logger LOG
- private byte[] startRow
- private byte[] endRow
- private byte[] lastSuccessfulRow
- private Filter trrRowFilter
- private ResultScanner scanner
- private Table htable
- private byte[][] trrInputColumns
- private long timestamp
- private int rowcount
- private boolean logScannerActivity
- private int logPerRowCount
Constructor Details
- TableRecordReaderImpl
public TableRecordReaderImpl()
Method Details
- restart
Restart from survivable exceptions by creating a new scanner.
Throws:
IOException
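The recovery pattern restart() enables can be sketched as follows. This is a non-authoritative illustration of calling code, not the class's internal logic; reader, key, value, and lastRow (the last successfully returned row key) are assumed to exist:

```java
// Sketch only: on a survivable scanner failure, reopen at the last good row.
boolean hasMore;
try {
  hasMore = reader.next(key, value);
} catch (IOException e) {
  reader.restart(lastRow);   // new scanner positioned at lastRow
  reader.next(key, value);   // re-read and discard the already-seen row
  hasMore = reader.next(key, value);
}
```

The lastSuccessfulRow field listed above suggests the class tracks this position itself for the same purpose.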
- init
Build the scanner. Not done in constructor to allow for extension.
Throws:
IOException
- getStartRow
byte[] getStartRow()
- setHTable
Parameters:
htable - the table to scan.
- setInputColumns
Parameters:
inputColumns - the columns to be placed in Result.
- setStartRow
Parameters:
startRow - the first row in the split
- setEndRow
Parameters:
endRow - the last row in the split
- setRowFilter
Parameters:
rowFilter - the Filter to be used.
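For example, a filter can narrow the scan to one row-key prefix. This is a hedged sketch; reader is assumed to be an already-configured TableRecordReaderImpl, and the "user-" prefix is a placeholder:

```java
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.util.Bytes;

// Only rows whose key starts with "user-" will be returned by the scanner.
Filter rowFilter = new PrefixFilter(Bytes.toBytes("user-"));
reader.setRowFilter(rowFilter);
```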
- close
- createKey
See Also:
RecordReader.createKey()
- createValue
See Also:
RecordReader.createValue()
- getPos
- getProgress
- next
Parameters:
key - ImmutableBytesWritable as input key
value - Result as input value
Returns:
true if there was more data
Throws:
IOException