@InterfaceAudience.Private public abstract class AbstractRpcClient extends Object implements RpcClient
Modifier and Type | Class and Description |
---|---|
static class | AbstractRpcClient.BlockingRpcChannelImplementation: Blocking rpc channel that goes via hbase rpc. |
Modifier and Type | Field and Description |
---|---|
protected String | clusterId |
protected Codec | codec |
protected org.apache.hadoop.io.compress.CompressionCodec | compressor |
protected org.apache.hadoop.conf.Configuration | conf |
protected int | connectTO |
protected long | failureSleep |
protected boolean | fallbackAllowed |
protected IPCUtil | ipcUtil |
protected SocketAddress | localAddr |
static org.apache.commons.logging.Log | LOG |
protected int | maxRetries |
protected int | minIdleTimeBeforeClose |
protected int | readTO |
protected boolean | tcpKeepAlive |
protected boolean | tcpNoDelay |
protected UserProvider | userProvider |
protected int | writeTO |
Fields inherited from interface RpcClient: DEFAULT_CODEC_CLASS, DEFAULT_SOCKET_TIMEOUT_CONNECT, DEFAULT_SOCKET_TIMEOUT_READ, DEFAULT_SOCKET_TIMEOUT_WRITE, FAILED_SERVER_EXPIRY_DEFAULT, FAILED_SERVER_EXPIRY_KEY, IDLE_TIME, IPC_CLIENT_FALLBACK_TO_SIMPLE_AUTH_ALLOWED_DEFAULT, IPC_CLIENT_FALLBACK_TO_SIMPLE_AUTH_ALLOWED_KEY, PING_CALL_ID, SOCKET_TIMEOUT_CONNECT, SOCKET_TIMEOUT_READ, SOCKET_TIMEOUT_WRITE, SPECIFIC_WRITE_THREAD
Constructor and Description |
---|
AbstractRpcClient(org.apache.hadoop.conf.Configuration conf, String clusterId, SocketAddress localAddr): Construct an IPC client for the cluster clusterId |
Modifier and Type | Method and Description |
---|---|
protected abstract Pair<com.google.protobuf.Message,CellScanner> | call(PayloadCarryingRpcController pcrc, com.google.protobuf.Descriptors.MethodDescriptor md, com.google.protobuf.Message param, com.google.protobuf.Message returnType, User ticket, InetSocketAddress isa): Make a call, passing param, to the IPC server running at address which is servicing the protocol protocol, with the ticket credentials, returning the value. |
com.google.protobuf.BlockingRpcChannel | createBlockingRpcChannel(ServerName sn, User ticket, int defaultOperationTimeout): Creates a "channel" that can be used by a blocking protobuf service. |
static String | getDefaultCodec(org.apache.hadoop.conf.Configuration c) |
protected static int | getPoolSize(org.apache.hadoop.conf.Configuration config): Return the pool size specified in the configuration, which is applicable only if the pool type is PoolMap.PoolType.RoundRobin. |
protected static PoolMap.PoolType | getPoolType(org.apache.hadoop.conf.Configuration config): Return the pool type specified in the configuration, which must be set to either PoolMap.PoolType.RoundRobin or PoolMap.PoolType.ThreadLocal, otherwise default to the former. |
boolean | hasCellBlockSupport() |
protected IOException | wrapException(InetSocketAddress addr, Exception exception): Takes an Exception and the address we were trying to connect to and returns an IOException with the input exception as the cause. |
Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface RpcClient: cancelConnections, close
public static final org.apache.commons.logging.Log LOG
protected final org.apache.hadoop.conf.Configuration conf
protected String clusterId
protected final SocketAddress localAddr
protected UserProvider userProvider
protected final IPCUtil ipcUtil
protected final int minIdleTimeBeforeClose
protected final int maxRetries
protected final long failureSleep
protected final boolean tcpNoDelay
protected final boolean tcpKeepAlive
protected final Codec codec
protected final org.apache.hadoop.io.compress.CompressionCodec compressor
protected final boolean fallbackAllowed
protected final int connectTO
protected final int readTO
protected final int writeTO
public AbstractRpcClient(org.apache.hadoop.conf.Configuration conf, String clusterId, SocketAddress localAddr)
Construct an IPC client for the cluster clusterId.
Parameters:
conf - configuration
clusterId - the cluster id
localAddr - client socket bind address.

public static String getDefaultCodec(org.apache.hadoop.conf.Configuration c)
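The constructor is only invoked by concrete RpcClient implementations, but getDefaultCodec is a public static helper. Below is a minimal sketch of calling it, assuming the standard org.apache.hadoop.hbase.ipc package location and an HBaseConfiguration-backed Configuration; it is illustrative only, not part of this page.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ipc.AbstractRpcClient;

public class DefaultCodecCheck {
  public static void main(String[] args) {
    // Layer hbase-site.xml on top of the Hadoop defaults.
    Configuration conf = HBaseConfiguration.create();

    // Public static helper declared on this class (see the signature above);
    // it resolves the codec class name used for cell blocks (may be null or
    // empty if cell-block codecs are disabled in the configuration).
    String codecClass = AbstractRpcClient.getDefaultCodec(conf);
    System.out.println("RPC codec class: " + codecClass);
  }
}
```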
public boolean hasCellBlockSupport()
Specified by:
hasCellBlockSupport in interface RpcClient
Returns:
True if this client has a Codec and so supports cell blocks.

protected static PoolMap.PoolType getPoolType(org.apache.hadoop.conf.Configuration config)
Return the pool type specified in the configuration, which must be set to either PoolMap.PoolType.RoundRobin or PoolMap.PoolType.ThreadLocal, otherwise default to the former.
For applications with many user threads, use a small round-robin pool. For applications with few user threads, you may want to try using a thread-local pool. In any case, the number of RpcClient instances should not exceed the operating system's hard limit on the number of connections.
Parameters:
config - configuration
Returns:
PoolMap.PoolType.RoundRobin or PoolMap.PoolType.ThreadLocal
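A sketch of steering getPoolType/getPoolSize through configuration. The property names below ("hbase.client.ipc.pool.type", "hbase.client.ipc.pool.size") are assumptions about the keys these helpers read, and since both methods are protected static they are normally consulted only by RpcClient implementations.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class PoolConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();

    // ASSUMED keys: the usual HBase client connection-pool settings.
    conf.set("hbase.client.ipc.pool.type", "RoundRobin"); // or "ThreadLocal"
    conf.setInt("hbase.client.ipc.pool.size", 5);         // only honored for RoundRobin

    // A subclass of AbstractRpcClient would then see getPoolType(conf) resolve
    // to PoolMap.PoolType.RoundRobin and getPoolSize(conf) return 5 when it
    // sizes its per-server connection pool.
  }
}
```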
protected static int getPoolSize(org.apache.hadoop.conf.Configuration config)
Return the pool size specified in the configuration, which is applicable only if the pool type is PoolMap.PoolType.RoundRobin.
Parameters:
config - configuration

protected abstract Pair<com.google.protobuf.Message,CellScanner> call(PayloadCarryingRpcController pcrc, com.google.protobuf.Descriptors.MethodDescriptor md, com.google.protobuf.Message param, com.google.protobuf.Message returnType, User ticket, InetSocketAddress isa) throws IOException, InterruptedException
Make a call, passing param, to the IPC server running at address which is servicing the protocol protocol, with the ticket credentials, returning the value. Throws exceptions if there are network problems or if the remote code threw an exception.
Parameters:
ticket - Be careful which ticket you pass. A new user will mean a new Connection. UserProvider.getCurrent() makes a new instance of User each time so will be a new Connection each time.
Throws:
InterruptedException
IOException
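Because call(...) is the only abstract method declared here, a transport implementation plugs in by extending this class. The sketch below assumes the HBase 1.x package locations (org.apache.hadoop.hbase.ipc, .security, .util) and only shows the shape of the contract; the wire handling is left as a comment rather than pretending to be a real transport.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.SocketAddress;

import com.google.protobuf.Descriptors;
import com.google.protobuf.Message;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.CellScanner;
import org.apache.hadoop.hbase.ipc.AbstractRpcClient;
import org.apache.hadoop.hbase.ipc.PayloadCarryingRpcController;
import org.apache.hadoop.hbase.security.User;
import org.apache.hadoop.hbase.util.Pair;

// Kept abstract so the sketch does not have to implement the RpcClient
// methods (cancelConnections, close) that this page lists as still abstract.
public abstract class SketchRpcClient extends AbstractRpcClient {

  public SketchRpcClient(Configuration conf, String clusterId, SocketAddress localAddr) {
    super(conf, clusterId, localAddr); // the constructor documented above
  }

  @Override
  protected Pair<Message, CellScanner> call(PayloadCarryingRpcController pcrc,
      Descriptors.MethodDescriptor md, Message param, Message returnType,
      User ticket, InetSocketAddress isa) throws IOException, InterruptedException {
    // A real implementation would: pick or open a connection to `isa` keyed by
    // `ticket`, send `md` + `param` (plus any cell block carried by `pcrc`),
    // block until the server replies, and return the response Message together
    // with the decoded CellScanner.
    throw new UnsupportedOperationException("transport not implemented in this sketch");
  }
}
```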
public com.google.protobuf.BlockingRpcChannel createBlockingRpcChannel(ServerName sn, User ticket, int defaultOperationTimeout) throws UnknownHostException
Description copied from interface: RpcClient
Creates a "channel" that can be used by a blocking protobuf service.
Specified by:
createBlockingRpcChannel in interface RpcClient
Parameters:
sn - server name describing location of server
ticket - which is to use the connection
defaultOperationTimeout - default rpc operation timeout
Throws:
UnknownHostException
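A hedged usage sketch: given some concrete AbstractRpcClient (this page does not name one, so the instance is supplied by the caller), the returned BlockingRpcChannel can back any generated protobuf blocking stub. User.getCurrent() and ServerName are assumed to live in their usual HBase packages.

```java
import java.io.IOException;

import com.google.protobuf.BlockingRpcChannel;

import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.ipc.AbstractRpcClient;
import org.apache.hadoop.hbase.security.User;

public class BlockingChannelSketch {
  // Resolve a blocking channel to `sn`; createBlockingRpcChannel resolves the
  // host name, so UnknownHostException (an IOException) can surface here.
  static BlockingRpcChannel channelTo(AbstractRpcClient client, ServerName sn,
      int rpcTimeoutMs) throws IOException {
    return client.createBlockingRpcChannel(sn, User.getCurrent(), rpcTimeoutMs);
  }
  // A generated blocking stub, e.g. SomeService.newBlockingStub(channel)
  // (hypothetical service name), can then issue RPCs over the channel.
}
```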
protected IOException wrapException(InetSocketAddress addr, Exception exception)
Takes an Exception and the address we were trying to connect to and returns an IOException with the input exception as the cause.
Parameters:
addr - target address
exception - the relevant exception
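Since wrapException is protected, it is only reachable from a subclass. The fragment below is a hypothetical piece of socket-handling code inside such a subclass (remoteAddress is an assumed InetSocketAddress variable, connectTO is the inherited connect timeout field); it shows the intended pattern of surfacing the target address while keeping the original failure as the cause.

```java
// Inside an AbstractRpcClient subclass (fragment, not a standalone class):
java.net.Socket socket = new java.net.Socket();
try {
  socket.connect(remoteAddress, connectTO);   // connectTO: inherited connect timeout
} catch (java.net.ConnectException e) {
  // The returned IOException names `remoteAddress` and carries `e` as its cause.
  throw wrapException(remoteAddress, e);
}
```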