@InterfaceAudience.Private public final class BlockIOUtils extends Object
Modifier | Constructor and Description |
---|---|
private | BlockIOUtils() |
Modifier and Type | Method and Description |
---|---|
private static int | copyToByteBuff(byte[] buf, int offset, int len, ByteBuff out) |
static boolean | isByteBufferReadable(org.apache.hadoop.fs.FSDataInputStream is) |
static boolean | preadWithExtra(ByteBuff buff, org.apache.hadoop.fs.FSDataInputStream dis, long position, int necessaryLen, int extraLen) - Read at least necessaryLen bytes from an input stream and, if available, extraLen bytes as well. |
static void | readFully(ByteBuff buf, org.apache.hadoop.fs.FSDataInputStream dis, int length) - Read length bytes into ByteBuffers directly. |
static void | readFullyWithHeapBuffer(InputStream in, ByteBuff out, int length) - Copy bytes from an InputStream to a ByteBuff using a temporary heap byte[] (default size is currently 1024). |
static boolean | readWithExtra(ByteBuff buf, org.apache.hadoop.fs.FSDataInputStream dis, int necessaryLen, int extraLen) - Read bytes into ByteBuffers directly; the buffers end up containing either necessaryLen bytes or necessaryLen + extraLen bytes, depending on how many bytes the last read returns. |
private static boolean | readWithExtraOnHeap(InputStream in, byte[] buf, int bufOffset, int necessaryLen, int extraLen) - Read at least necessaryLen bytes from an input stream and, if available, extraLen bytes as well. |
private BlockIOUtils()
public static boolean isByteBufferReadable(org.apache.hadoop.fs.FSDataInputStream is)
public static void readFully(ByteBuff buf, org.apache.hadoop.fs.FSDataInputStream dis, int length) throws IOException

Read length bytes into ByteBuffers directly.

Parameters:
buf - the destination ByteBuff
dis - the HDFS input stream, which implements the ByteBufferReadable interface
length - bytes to read

Throws:
IOException - exception to throw if any error happens
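A minimal usage sketch (not part of the Javadoc): reading a fixed number of bytes from the start of a file into a ByteBuff. The fs, path, and headerSize names are hypothetical placeholders, and the ByteBuff here is simply a SingleByteBuff wrapping a heap ByteBuffer.

```java
import java.io.IOException;
import java.nio.ByteBuffer;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.io.util.BlockIOUtils;
import org.apache.hadoop.hbase.nio.ByteBuff;
import org.apache.hadoop.hbase.nio.SingleByteBuff;

public class ReadFullyExample {
  // Read exactly headerSize bytes from the start of the file into a ByteBuff.
  static ByteBuff readHeader(FileSystem fs, Path path, int headerSize) throws IOException {
    ByteBuff header = new SingleByteBuff(ByteBuffer.allocate(headerSize));
    try (FSDataInputStream dis = fs.open(path)) {
      // Reads headerSize bytes into the ByteBuff directly.
      BlockIOUtils.readFully(header, dis, headerSize);
    }
    return header;
  }
}
```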
public static void readFullyWithHeapBuffer(InputStream in, ByteBuff out, int length) throws IOException

Copy bytes from an InputStream to a ByteBuff using a temporary heap byte[] (default size is currently 1024).

Parameters:
in - the InputStream to read
out - the destination ByteBuff
length - bytes to read

Throws:
IOException - if any IO error is encountered
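A minimal usage sketch (an assumption, not from the Javadoc): copying a known number of bytes from a plain InputStream, such as a decompression stream that cannot read into ByteBuffers, into a ByteBuff. The drainTo name and uncompressedSize parameter are hypothetical.

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;

import org.apache.hadoop.hbase.io.util.BlockIOUtils;
import org.apache.hadoop.hbase.nio.ByteBuff;
import org.apache.hadoop.hbase.nio.SingleByteBuff;

public class ReadFullyWithHeapBufferExample {
  // Copy uncompressedSize bytes from the InputStream into a new ByteBuff.
  static ByteBuff drainTo(InputStream in, int uncompressedSize) throws IOException {
    ByteBuff out = new SingleByteBuff(ByteBuffer.allocate(uncompressedSize));
    // Copies through a temporary heap byte[], so the source stream does not
    // need to support ByteBufferReadable.
    BlockIOUtils.readFullyWithHeapBuffer(in, out, uncompressedSize);
    return out;
  }
}
```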
private static boolean readWithExtraOnHeap(InputStream in, byte[] buf, int bufOffset, int necessaryLen, int extraLen) throws IOException

Read at least necessaryLen bytes from an input stream and, if available, extraLen bytes as well. Analogous to IOUtils.readFully(InputStream, byte[], int, int), but specifies a number of "extra" bytes to also optionally read.

Parameters:
in - the input stream to read from
buf - the buffer to read into
bufOffset - the destination offset in the buffer
necessaryLen - the number of bytes that are absolutely necessary to read
extraLen - the number of extra bytes that would be nice to read

Throws:
IOException - if failed to read the necessary bytes
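Since readWithExtraOnHeap is private, it cannot be called directly; the sketch below only illustrates the necessaryLen/extraLen contract described above, and is not the class's actual implementation.

```java
import java.io.IOException;
import java.io.InputStream;

public class ReadWithExtraContractSketch {
  // Read at least necessaryLen bytes; opportunistically read up to extraLen more.
  // Returns true only if the optional extra bytes were fully read as well.
  static boolean readAtLeast(InputStream in, byte[] buf, int bufOffset,
      int necessaryLen, int extraLen) throws IOException {
    int bytesRemaining = necessaryLen + extraLen;
    while (bytesRemaining > extraLen) {
      // Still missing necessary bytes, so keep reading.
      int ret = in.read(buf, bufOffset, bytesRemaining);
      if (ret < 0) {
        // EOF before the necessary bytes arrived: this is an error.
        throw new IOException("Premature EOF: still need "
            + (bytesRemaining - extraLen) + " necessary bytes");
      }
      bufOffset += ret;
      bytesRemaining -= ret;
    }
    return bytesRemaining <= 0;
  }
}
```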
public static boolean readWithExtra(ByteBuff buf, org.apache.hadoop.fs.FSDataInputStream dis, int necessaryLen, int extraLen) throws IOException

Read bytes into ByteBuffers directly; the buffers end up containing either necessaryLen bytes or necessaryLen + extraLen bytes, depending on how many bytes the last read returns.

Parameters:
buf - the destination ByteBuff
dis - input stream to read
necessaryLen - bytes which we must read
extraLen - bytes which we may read

Throws:
IOException - if failed to read the necessary bytes
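A minimal usage sketch (hypothetical names, not from the Javadoc): reading one block of blockSize bytes while opportunistically pulling in up to hdrSize more bytes in the same call, then checking the return value to see whether the extra bytes actually arrived.

```java
import java.io.IOException;
import java.nio.ByteBuffer;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.hbase.io.util.BlockIOUtils;
import org.apache.hadoop.hbase.nio.ByteBuff;
import org.apache.hadoop.hbase.nio.SingleByteBuff;

public class ReadWithExtraExample {
  static ByteBuff readBlock(FSDataInputStream dis, int blockSize, int hdrSize)
      throws IOException {
    ByteBuff buf = new SingleByteBuff(ByteBuffer.allocate(blockSize + hdrSize));
    // Must read blockSize bytes; may read up to hdrSize more if available.
    boolean gotExtra = BlockIOUtils.readWithExtra(buf, dis, blockSize, hdrSize);
    if (!gotExtra) {
      // Only the necessary bytes were read; cap the buffer at blockSize.
      buf.limit(blockSize);
    }
    return buf;
  }
}
```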
public static boolean preadWithExtra(ByteBuff buff, org.apache.hadoop.fs.FSDataInputStream dis, long position, int necessaryLen, int extraLen) throws IOException

Read at least necessaryLen bytes from an input stream and, if available, extraLen bytes as well. Analogous to IOUtils.readFully(InputStream, byte[], int, int), but uses positional read and specifies a number of "extra" bytes that would be desirable but not absolutely necessary to read.

Parameters:
buff - ByteBuff to read into
dis - the input stream to read from
position - the position within the stream from which to start reading
necessaryLen - the number of bytes that are absolutely necessary to read
extraLen - the number of extra bytes that would be nice to read

Throws:
IOException - if failed to read the necessary bytes
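A minimal usage sketch (hypothetical names): the positional variant, which reads starting at an explicit offset in the file rather than the stream's current position.

```java
import java.io.IOException;
import java.nio.ByteBuffer;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.hbase.io.util.BlockIOUtils;
import org.apache.hadoop.hbase.nio.ByteBuff;
import org.apache.hadoop.hbase.nio.SingleByteBuff;

public class PreadWithExtraExample {
  static ByteBuff preadBlock(FSDataInputStream dis, long offset, int blockSize,
      int hdrSize) throws IOException {
    ByteBuff buff = new SingleByteBuff(ByteBuffer.allocate(blockSize + hdrSize));
    // Positional read starting at offset: blockSize bytes are required,
    // and up to hdrSize extra bytes are read if available.
    boolean extraRead = BlockIOUtils.preadWithExtra(buff, dis, offset, blockSize, hdrSize);
    if (!extraRead) {
      buff.limit(blockSize);
    }
    return buff;
  }
}
```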
private static int copyToByteBuff(byte[] buf, int offset, int len, ByteBuff out) throws IOException

Throws:
IOException