Uses of Class
org.apache.hadoop.hbase.security.User
Packages that use User
  org.apache.hadoop.hbase.client  Provides HBase Client
  org.apache.hadoop.hbase.ipc  Tools to help define network clients and servers.
  org.apache.hadoop.hbase.mapreduce  Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods.
Uses of User in org.apache.hadoop.hbase
Fields in org.apache.hadoop.hbase declared as User

Methods in org.apache.hadoop.hbase that return User
  static User  HBaseTestingUtility.getDifferentUser(org.apache.hadoop.conf.Configuration c, String differentiatingSuffix)  Deprecated. This method clones the passed c configuration, setting a new user into the clone.
  static User  AuthUtil.loginClient(org.apache.hadoop.conf.Configuration conf)  Deprecated. For a kerberized cluster, return the login user (from kinit or from the keytab if specified).
  private static User  AuthUtil.loginClientAsService(org.apache.hadoop.conf.Configuration conf)  Deprecated. For a kerberized cluster, return the login user (from kinit or from the keytab).
  private static User  AuthUtil.loginFromKeytabAndReturnUser(UserProvider provider)  Deprecated.

Methods in org.apache.hadoop.hbase with parameters of type User
  LocalHBaseCluster.addRegionServer(org.apache.hadoop.conf.Configuration config, int index, User user)
  HBaseTestingUtility.getAsyncConnection(User user)  Deprecated. Get an assigned AsyncClusterConnection to the cluster.
  HBaseTestingUtility.getConnection(User user)  Deprecated. Get an assigned Connection to the cluster.
  int  HBaseRpcServicesBase.getPriority(org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.RequestHeader header, org.apache.hbase.thirdparty.com.google.protobuf.Message param, User user)
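
A minimal sketch of the test helpers above: obtain a second, differently named User and ask the testing utility for a Connection bound to it. The surrounding class and method names (DifferentUserExample, connectAsOtherUser, the "other" suffix) are illustrative only, and the exception signature is simplified; both listed helpers are marked Deprecated.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.security.User;

    public class DifferentUserExample {
      // Opens a cluster connection on behalf of a second, differently named user.
      static Connection connectAsOtherUser(HBaseTestingUtility util) throws Exception {
        Configuration conf = util.getConfiguration();
        // Clones the configuration and installs a new user name derived from the suffix.
        User other = HBaseTestingUtility.getDifferentUser(conf, "other");
        // Hands the User back to the testing utility to build a Connection bound to it.
        return util.getConnection(other);
      }
    }
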
Uses of User in org.apache.hadoop.hbase.backup.master
Methods in org.apache.hadoop.hbase.backup.master with parameters of type User
  void  LogRollMasterProcedureManager.checkPermissions(org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ProcedureDescription desc, AccessChecker accessChecker, User user)
Uses of User in org.apache.hadoop.hbase.client
Fields in org.apache.hadoop.hbase.client declared as User
  protected final User  AsyncConnectionImpl.user
  private final User  ClusterIdFetcher.user
  private final User  ConnectionRegistryRpcStubHolder.user

Methods in org.apache.hadoop.hbase.client that return User

Methods in org.apache.hadoop.hbase.client with parameters of type User
  (package private) static ConnectionRegistry  Returns the connection registry implementation to use, for the given connection url uri.
  (package private) static ConnectionRegistry  Returns the connection registry implementation to use.
  Instantiate the ConnectionRegistry using the given parameters.
  RpcConnectionRegistryURIFactory.create(URI uri, org.apache.hadoop.conf.Configuration conf, User user)
  ZKConnectionRegistryURIFactory.create(URI uri, org.apache.hadoop.conf.Configuration conf, User user)
  static AsyncClusterConnection  ClusterConnectionFactory.createAsyncClusterConnection(URI uri, org.apache.hadoop.conf.Configuration conf, SocketAddress localAddress, User user)  Create a new AsyncClusterConnection instance.
  static AsyncClusterConnection  ClusterConnectionFactory.createAsyncClusterConnection(org.apache.hadoop.conf.Configuration conf, SocketAddress localAddress, User user)  Create a new AsyncClusterConnection instance.
  private static AsyncClusterConnection  ClusterConnectionFactory.createAsyncClusterConnection(org.apache.hadoop.conf.Configuration conf, ConnectionRegistry registry, SocketAddress localAddress, User user)
  static AsyncClusterConnection  ClusterConnectionFactory.createAsyncClusterConnection(ConnectionRegistryEndpoint endpoint, org.apache.hadoop.conf.Configuration conf, SocketAddress localAddress, User user)  Create a new AsyncClusterConnection instance to be used at server side where we have a ConnectionRegistryEndpoint.
  static CompletableFuture<AsyncConnection>  ConnectionFactory.createAsyncConnection(URI connectionUri, org.apache.hadoop.conf.Configuration conf, User user)  Create a new AsyncConnection instance using the passed connectionUri, conf and user.
  static CompletableFuture<AsyncConnection>  ConnectionFactory.createAsyncConnection(URI connectionUri, org.apache.hadoop.conf.Configuration conf, User user, Map<String, byte[]> connectionAttributes)  Create a new AsyncConnection instance using the passed connectionUri, conf and user.
  static CompletableFuture<AsyncConnection>  ConnectionFactory.createAsyncConnection(org.apache.hadoop.conf.Configuration conf, User user)  Create a new AsyncConnection instance using the passed conf and user.
  static CompletableFuture<AsyncConnection>  ConnectionFactory.createAsyncConnection(org.apache.hadoop.conf.Configuration conf, User user, Map<String, byte[]> connectionAttributes)  Create a new AsyncConnection instance using the passed conf and user.
  static Connection  ConnectionFactory.createConnection(URI connectionUri, org.apache.hadoop.conf.Configuration conf, ExecutorService pool, User user)  Create a new Connection instance using the passed conf instance.
  static Connection  ConnectionFactory.createConnection(URI connectionUri, org.apache.hadoop.conf.Configuration conf, ExecutorService pool, User user, Map<String, byte[]> connectionAttributes)  Create a new Connection instance using the passed conf instance.
  static Connection  ConnectionFactory.createConnection(URI connectionUri, org.apache.hadoop.conf.Configuration conf, User user)  Create a new Connection instance using the passed conf instance.
  static Connection  ConnectionFactory.createConnection(org.apache.hadoop.conf.Configuration conf, ExecutorService pool, User user)  Create a new Connection instance using the passed conf instance.
  static Connection  ConnectionFactory.createConnection(org.apache.hadoop.conf.Configuration conf, ExecutorService pool, User user, Map<String, byte[]> connectionAttributes)  Create a new Connection instance using the passed conf instance.
  static Connection  ConnectionFactory.createConnection(org.apache.hadoop.conf.Configuration conf, User user)  Create a new Connection instance using the passed conf instance.

Constructors in org.apache.hadoop.hbase.client with parameters of type User
  protected  AbstractRpcBasedConnectionRegistry(org.apache.hadoop.conf.Configuration conf, User user, String hedgedReqsFanoutConfigName, String initialRefreshDelaySecsConfigName, String refreshIntervalSecsConfigName, String minRefreshIntervalSecsConfigName)
  AsyncClusterConnectionImpl(org.apache.hadoop.conf.Configuration conf, ConnectionRegistry registry, String clusterId, SocketAddress localAddress, User user)
  AsyncConnectionImpl(org.apache.hadoop.conf.Configuration conf, ConnectionRegistry registry, String clusterId, SocketAddress localAddress, User user)
  AsyncConnectionImpl(org.apache.hadoop.conf.Configuration conf, ConnectionRegistry registry, String clusterId, SocketAddress localAddress, User user, Map<String, byte[]> connectionAttributes)
  (package private)  ClusterIdFetcher(org.apache.hadoop.conf.Configuration conf, User user, RpcControllerFactory rpcControllerFactory, Set<ServerName> bootstrapServers)
  (package private)  ConnectionRegistryRpcStubHolder(org.apache.hadoop.conf.Configuration conf, User user, RpcControllerFactory rpcControllerFactory, Set<ServerName> bootstrapNodes)
  (package private)  MasterRegistry(org.apache.hadoop.conf.Configuration conf, User user)  Deprecated.
  (package private)  RpcConnectionRegistry(org.apache.hadoop.conf.Configuration conf, User user)
  (package private)  ZKConnectionRegistry(org.apache.hadoop.conf.Configuration conf, User ignored)  Deprecated.
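
A minimal sketch of the ConnectionFactory overloads listed above: wrap an existing Hadoop UserGroupInformation in an HBase User and pass it explicitly instead of relying on the ambient login user. The class and method names (ConnectAsUserExample, connectAs) are illustrative only.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.security.User;
    import org.apache.hadoop.security.UserGroupInformation;

    public class ConnectAsUserExample {
      static Connection connectAs(UserGroupInformation ugi) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        // Wrap the Hadoop UGI in an HBase User; the same User could also be handed to
        // ConnectionFactory.createAsyncConnection(conf, user) for the async client.
        User user = User.create(ugi);
        return ConnectionFactory.createConnection(conf, user);
      }
    }
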
Uses of User in org.apache.hadoop.hbase.coprocessor
Fields in org.apache.hadoop.hbase.coprocessor declared as User

Methods in org.apache.hadoop.hbase.coprocessor that return types with arguments of type User
  ObserverContext.getCaller()  Returns the active user for the coprocessor call.
  ObserverContextImpl.getCaller()

Constructors in org.apache.hadoop.hbase.coprocessor with parameters of type User
  ObserverContextImpl(User caller)
  ObserverContextImpl(User caller, boolean bypassable)
  (package private)  ObserverOperation(CoprocessorHost.ObserverGetter<C, O> observerGetter, User user)
  (package private)  ObserverOperation(CoprocessorHost.ObserverGetter<C, O> observerGetter, User user, boolean bypassable)
  ObserverOperationWithoutResult(CoprocessorHost.ObserverGetter<C, O> observerGetter, User user)
  ObserverOperationWithoutResult(CoprocessorHost.ObserverGetter<C, O> observerGetter, User user, boolean bypassable)
  ObserverOperationWithResult(CoprocessorHost.ObserverGetter<C, O> observerGetter, R result, User user)
  private  ObserverOperationWithResult(CoprocessorHost.ObserverGetter<C, O> observerGetter, R result, User user, boolean bypassable)
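
A minimal sketch of reading the active caller via ObserverContext.getCaller() inside a coprocessor hook. The helper class and method names (CallerNames, resolveCallerName) and the "<system>" fallback are hypothetical; getCaller() is assumed here to return an Optional<User>, matching the "return types with arguments of type User" listing above.

    import java.util.Optional;
    import org.apache.hadoop.hbase.coprocessor.ObserverContext;
    import org.apache.hadoop.hbase.security.User;

    public final class CallerNames {
      // Returns the short name of the user that issued the request, or a fallback
      // when the context carries no caller (for example, internal operations).
      static String resolveCallerName(ObserverContext<?> ctx) {
        Optional<User> caller = ctx.getCaller();
        return caller.map(User::getShortName).orElse("<system>");
      }
    }
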
Uses of User in org.apache.hadoop.hbase.ipc
Fields in org.apache.hadoop.hbase.ipc declared as User
  protected final User  AbstractRpcClient.AbstractRpcChannel.ticket
  (package private) final User  ConnectionId.ticket
  protected final User  ServerCall.user
  protected User  ServerRpcConnection.user

Methods in org.apache.hadoop.hbase.ipc that return User

Methods in org.apache.hadoop.hbase.ipc that return types with arguments of type User
  RpcCallContext.getRequestUser()  Returns the user credentials associated with the current RPC request or not present if no credentials were provided.
  RpcServer.getRequestUser()  Returns the user credentials associated with the current RPC request or not present if no credentials were provided.
  ServerCall.getRequestUser()

Methods in org.apache.hadoop.hbase.ipc with parameters of type User
  private org.apache.hbase.thirdparty.com.google.protobuf.Message  AbstractRpcClient.callBlockingMethod(org.apache.hbase.thirdparty.com.google.protobuf.Descriptors.MethodDescriptor md, HBaseRpcController hrc, org.apache.hbase.thirdparty.com.google.protobuf.Message param, org.apache.hbase.thirdparty.com.google.protobuf.Message returnType, User ticket, Address isa)  Make a blocking call.
  private Call  AbstractRpcClient.callMethod(org.apache.hbase.thirdparty.com.google.protobuf.Descriptors.MethodDescriptor md, HBaseRpcController hrc, org.apache.hbase.thirdparty.com.google.protobuf.Message param, org.apache.hbase.thirdparty.com.google.protobuf.Message returnType, User ticket, Address addr, org.apache.hbase.thirdparty.com.google.protobuf.RpcCallback<org.apache.hbase.thirdparty.com.google.protobuf.Message> callback)
  org.apache.hbase.thirdparty.com.google.protobuf.BlockingRpcChannel  AbstractRpcClient.createBlockingRpcChannel(ServerName sn, User ticket, int rpcTimeout)
  org.apache.hbase.thirdparty.com.google.protobuf.BlockingRpcChannel  RpcClient.createBlockingRpcChannel(ServerName sn, User user, int rpcTimeout)  Creates a "channel" that can be used by a blocking protobuf service.
  org.apache.hbase.thirdparty.com.google.protobuf.RpcChannel  AbstractRpcClient.createRpcChannel(ServerName sn, User user, int rpcTimeout)
  org.apache.hbase.thirdparty.com.google.protobuf.RpcChannel  RpcClient.createRpcChannel(ServerName sn, User user, int rpcTimeout)  Creates a "channel" that can be used by a protobuf service.
  int  AnnotationReadingPriorityFunction.getPriority(org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.RequestHeader header, org.apache.hbase.thirdparty.com.google.protobuf.Message param, User user)  Returns a 'priority' based on the request type.
  int  PriorityFunction.getPriority(org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.RequestHeader header, org.apache.hbase.thirdparty.com.google.protobuf.Message param, User user)  Returns the 'priority type' of the specified request.
  static int

Constructors in org.apache.hadoop.hbase.ipc with parameters of type User
  protected  AbstractRpcChannel(AbstractRpcClient<?> rpcClient, Address addr, User ticket, int rpcTimeout)
  protected  BlockingRpcChannelImplementation(AbstractRpcClient<?> rpcClient, Address addr, User ticket, int rpcTimeout)
  ConnectionId(User ticket, String serviceName, Address address)
  protected  RpcChannelImplementation(AbstractRpcClient<?> rpcClient, Address addr, User ticket, int rpcTimeout)
  RpcObserverOperation(User user)
  RpcObserverOperation(User user, boolean bypassable)
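
A minimal, server-side sketch of RpcServer.getRequestUser() from the listing above: it is static and only yields a value when called on an RPC handler thread while a request is in flight. The helper class and method names (RequestUserCheck, isRequestFrom) are hypothetical.

    import java.util.Optional;
    import org.apache.hadoop.hbase.ipc.RpcServer;
    import org.apache.hadoop.hbase.security.User;

    public final class RequestUserCheck {
      // True when the RPC currently being handled was issued by the given user name.
      static boolean isRequestFrom(String expectedShortName) {
        Optional<User> requestUser = RpcServer.getRequestUser();
        return requestUser.map(u -> expectedShortName.equals(u.getShortName())).orElse(false);
      }
    }
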
Uses of User in org.apache.hadoop.hbase.mapreduce
Methods in org.apache.hadoop.hbase.mapreduce with parameters of type User
  private static void  TableMapReduceUtil.addTokenForJob(IOExceptionSupplier<Connection> connSupplier, User user, org.apache.hadoop.mapreduce.Job job)
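
The addTokenForJob(...) listed above is private; a job normally attaches HBase delegation tokens through the public TableMapReduceUtil.initCredentials entry point, as in this minimal sketch (the class name SecureJobSetup, the job name, and the configuration handling are illustrative; a secure cluster is assumed).

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.mapreduce.Job;

    public class SecureJobSetup {
      static Job newJobWithHBaseTokens() throws IOException {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "hbase-secure-job");
        // Obtains an authentication token for the current user and stores it in the
        // job credentials so map tasks can talk to HBase.
        TableMapReduceUtil.initCredentials(job);
        return job;
      }
    }
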
Uses of User in org.apache.hadoop.hbase.master
Methods in org.apache.hadoop.hbase.master with parameters of type User
  void  MasterCoprocessorHost.postCompletedCreateTableAction(TableDescriptor htd, RegionInfo[] regions, User user)
  void  MasterCoprocessorHost.postCompletedDeleteTableAction(TableName tableName, User user)
  void  MasterCoprocessorHost.postCompletedDisableTableAction(TableName tableName, User user)
  void  MasterCoprocessorHost.postCompletedEnableTableAction(TableName tableName, User user)
  void  MasterCoprocessorHost.postCompletedMergeRegionsAction(RegionInfo[] regionsToMerge, RegionInfo mergedRegion, User user)  Invoked after completing merge regions operation
  void  MasterCoprocessorHost.postCompletedModifyTableAction(TableName tableName, TableDescriptor oldDescriptor, TableDescriptor currentDescriptor, User user)
  void  MasterCoprocessorHost.postCompletedSplitRegionAction(RegionInfo regionInfoA, RegionInfo regionInfoB, User user)  Invoked just after a split
  void  MasterCoprocessorHost.postCompletedTruncateTableAction(TableName tableName, User user)
  void  MasterCoprocessorHost.postMergeRegionsCommit(RegionInfo[] regionsToMerge, RegionInfo mergedRegion, User user)  Invoked after merge regions operation writes the new region to hbase:meta
  void  MasterCoprocessorHost.postRollBackMergeRegionsAction(RegionInfo[] regionsToMerge, User user)  Invoked after rollback merge regions operation
  void  MasterCoprocessorHost.postRollBackSplitRegionAction(User user)  Invoked just after the rollback of a failed split
  void  MasterCoprocessorHost.postSnapshot(SnapshotDescription snapshot, TableDescriptor hTableDescriptor, User user)
  void  MasterCoprocessorHost.postTruncateRegionAction(RegionInfo region, User user)  Invoked after calling the truncate region procedure
  void  MasterCoprocessorHost.preCreateTableAction(TableDescriptor htd, RegionInfo[] regions, User user)
  void  MasterCoprocessorHost.preDeleteTableAction(TableName tableName, User user)
  void  MasterCoprocessorHost.preDisableTableAction(TableName tableName, User user)
  void  MasterCoprocessorHost.preEnableTableAction(TableName tableName, User user)
  void  MasterCoprocessorHost.preMergeRegionsAction(RegionInfo[] regionsToMerge, User user)  Invoked just before a merge
  void  MasterCoprocessorHost.preMergeRegionsCommit(RegionInfo[] regionsToMerge, List<Mutation> metaEntries, User user)  Invoked before merge regions operation writes the new region to hbase:meta
  void  MasterCoprocessorHost.preModifyTableAction(TableName tableName, TableDescriptor currentDescriptor, TableDescriptor newDescriptor, User user)
  void  MasterCoprocessorHost.preSnapshot(SnapshotDescription snapshot, TableDescriptor hTableDescriptor, User user)
  void  MasterCoprocessorHost.preSplitAfterMETAAction(User user)  This will be called after update META step as part of split table region procedure.
  void  MasterCoprocessorHost.preSplitBeforeMETAAction(byte[] splitKey, List<Mutation> metaEntries, User user)  This will be called before update META step as part of split table region procedure.
  void  MasterCoprocessorHost.preSplitRegionAction(TableName tableName, byte[] splitRow, User user)  Invoked just before a split
  void  MasterCoprocessorHost.preTruncateRegionAction(RegionInfo region, User user)  Invoked just before calling the truncate region procedure
  void  MasterCoprocessorHost.preTruncateTableAction(TableName tableName, User user)

Constructors in org.apache.hadoop.hbase.master with parameters of type User
  MasterObserverOperation(User user)
  MasterObserverOperation(User user, boolean bypassable)
Uses of User in org.apache.hadoop.hbase.master.procedure
Fields in org.apache.hadoop.hbase.master.procedure declared as User

Methods in org.apache.hadoop.hbase.master.procedure that return User
  MasterProcedureEnv.getRequestUser()
  protected User  AbstractStateMachineTableProcedure.getUser()
  static User  MasterProcedureUtil.toUserInfo(org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.UserInformation userInfoProto)

Methods in org.apache.hadoop.hbase.master.procedure with parameters of type User
  protected void
  static org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.UserInformation  MasterProcedureUtil.toProtoUserInfo(User user)
Uses of User in org.apache.hadoop.hbase.master.snapshot
Methods in org.apache.hadoop.hbase.master.snapshot with parameters of type User
  void  SnapshotManager.checkPermissions(org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ProcedureDescription desc, AccessChecker accessChecker, User user)
Uses of User in org.apache.hadoop.hbase.mob
Methods in org.apache.hadoop.hbase.mob with parameters of type User
  List<org.apache.hadoop.fs.Path>  DefaultMobStoreCompactor.compact(CompactionRequestImpl request, ThroughputController throughputController, User user)
Uses of User in org.apache.hadoop.hbase.procedure
Methods in org.apache.hadoop.hbase.procedure with parameters of type User
  abstract void  MasterProcedureManager.checkPermissions(org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ProcedureDescription desc, AccessChecker accessChecker, User user)  Check for required permissions before executing the procedure.
Uses of User in org.apache.hadoop.hbase.procedure.flush
Methods in org.apache.hadoop.hbase.procedure.flush with parameters of type User
  void  MasterFlushTableProcedureManager.checkPermissions(org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ProcedureDescription desc, AccessChecker accessChecker, User user)
Uses of User in org.apache.hadoop.hbase.procedure2
Methods in org.apache.hadoop.hbase.procedure2 with parameters of type User
  boolean  ProcedureExecutor.isProcedureOwner(long procId, User user)  Check if the user is this procedure's owner
  void  ProcedureExecutor.setFailureResultForNonce(NonceKey nonceKey, String procName, User procOwner, IOException exception)  If the procedure failed before it was submitted, we may want to give back the same error to the requests with the same nonceKey.
  void

Constructors in org.apache.hadoop.hbase.procedure2 with parameters of type User
  FailedProcedure(long procId, String procName, User owner, NonceKey nonceKey, IOException exception)
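
A minimal sketch of a server-side ownership check built from ProcedureExecutor.isProcedureOwner(...) above, together with Superusers.isSuperUser(...) from the security section further down. The wrapper class and method (ProcedureOwnership, checkMayAbort) are hypothetical, and Superusers is assumed to be already initialized, as it is inside a running master.

    import java.io.IOException;
    import org.apache.hadoop.hbase.procedure2.ProcedureExecutor;
    import org.apache.hadoop.hbase.security.Superusers;
    import org.apache.hadoop.hbase.security.User;

    public final class ProcedureOwnership {
      // Rejects the request unless the caller owns the procedure or is a super user.
      static void checkMayAbort(ProcedureExecutor<?> executor, long procId, User caller)
          throws IOException {
        if (!Superusers.isSuperUser(caller) && !executor.isProcedureOwner(procId, caller)) {
          throw new IOException(caller.getShortName() + " is not the owner of procedure " + procId);
        }
      }
    }
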
Uses of User in org.apache.hadoop.hbase.regionserver
Fields in org.apache.hadoop.hbase.regionserver declared as User
  private User  CompactSplit.CompactionRunner.user
  private final User  SplitRequest.user

Methods in org.apache.hadoop.hbase.regionserver that return User

Methods in org.apache.hadoop.hbase.regionserver with parameters of type User
  List<org.apache.hadoop.fs.Path>  DateTieredStoreEngine.DateTieredCompactionContext.compact(ThroughputController throughputController, User user)
  List<org.apache.hadoop.fs.Path>  DefaultStoreEngine.DefaultCompactionContext.compact(ThroughputController throughputController, User user)
  boolean  HRegion.compact(CompactionContext compaction, HStore store, ThroughputController throughputController, User user)  We are trying to remove / relax the region read lock for compaction.
  HStore.compact(CompactionContext compaction, ThroughputController throughputController, User user)  Compact the StoreFiles.
  List<org.apache.hadoop.fs.Path>  StripeStoreEngine.StripeCompaction.compact(ThroughputController throughputController, User user)
  private org.apache.hadoop.fs.Path  SecureBulkLoadManager.createStagingDir(org.apache.hadoop.fs.Path baseDir, User user, String randomDir)
  private org.apache.hadoop.fs.Path  SecureBulkLoadManager.createStagingDir(org.apache.hadoop.fs.Path baseDir, User user, TableName tableName)
  private void  CompactSplit.CompactionRunner.doCompaction(User user)
  protected List<HStoreFile>  HStore.doCompaction(CompactionRequestImpl cr, Collection<HStoreFile> filesToCompact, User user, long compactionStartTime, List<org.apache.hadoop.fs.Path> newFiles)
  void  RegionCoprocessorHost.postCompact(HStore store, HStoreFile resultFile, CompactionLifeCycleTracker tracker, CompactionRequest request, User user)  Called after the store compaction has completed.
  void  RegionCoprocessorHost.postCompactSelection(HStore store, List<HStoreFile> selected, CompactionLifeCycleTracker tracker, CompactionRequest request, User user)  Called after the HStoreFiles to be compacted have been selected from the available candidates.
  void  RegionCoprocessorHost.preCleanupBulkLoad(User user)
  RegionCoprocessorHost.preCompact(HStore store, InternalScanner scanner, ScanType scanType, CompactionLifeCycleTracker tracker, CompactionRequest request, User user)  Called prior to rewriting the store files selected for compaction
  RegionCoprocessorHost.preCompactScannerOpen(HStore store, ScanType scanType, CompactionLifeCycleTracker tracker, CompactionRequest request, User user)  Called prior to opening store scanner for compaction.
  boolean  RegionCoprocessorHost.preCompactSelection(HStore store, List<HStoreFile> candidates, CompactionLifeCycleTracker tracker, User user)  Called prior to selecting the HStoreFiles for compaction from the list of currently available candidates.
  void  RegionCoprocessorHost.prePrepareBulkLoad(User user)
  void
  void  CompactSplit.requestCompaction(HRegion region, String why, int priority, CompactionLifeCycleTracker tracker, User user)
  void  CompactSplit.requestCompaction(HRegion region, HStore store, String why, int priority, CompactionLifeCycleTracker tracker, User user)
  HStore.requestCompaction(int priority, CompactionLifeCycleTracker tracker, User user)
  private void  CompactSplit.requestCompactionInternal(HRegion region, String why, int priority, boolean selectNow, CompactionLifeCycleTracker tracker, CompactSplit.CompactionCompleteTracker completeTracker, User user)
  protected void  CompactSplit.requestCompactionInternal(HRegion region, HStore store, String why, int priority, boolean selectNow, CompactionLifeCycleTracker tracker, CompactSplit.CompactionCompleteTracker completeTracker, User user)
  private void  CompactSplit.requestSplit(Region r, byte[] midKey, User user)
  private Optional<CompactionContext>  CompactSplit.selectCompaction(HRegion region, HStore store, int priority, CompactionLifeCycleTracker tracker, CompactSplit.CompactionCompleteTracker completeTracker, User user)
  void  Stops the regionserver.

Constructors in org.apache.hadoop.hbase.regionserver with parameters of type User
  CompactionRunner(HStore store, HRegion region, CompactionContext compaction, CompactionLifeCycleTracker tracker, CompactSplit.CompactionCompleteTracker completeTracker, ThreadPoolExecutor parent, User user)
  RegionObserverOperationWithoutResult(User user, boolean bypassable)
  (package private)  SplitRequest(Region region, byte[] midKey, HRegionServer hrs, User user)
Uses of User in org.apache.hadoop.hbase.regionserver.compactions
Methods in org.apache.hadoop.hbase.regionserver.compactions with parameters of type User
  abstract List<org.apache.hadoop.fs.Path>  CompactionContext.compact(ThroughputController throughputController, User user)
  protected final List<org.apache.hadoop.fs.Path>  Compactor.compact(CompactionRequestImpl request, Compactor.InternalScannerFactory scannerFactory, Compactor.CellSinkFactory<T> sinkFactory, ThroughputController throughputController, User user)
  List<org.apache.hadoop.fs.Path>  DateTieredCompactor.compact(CompactionRequestImpl request, List<Long> lowerBoundaries, Map<Long, String> lowerBoundariesPolicies, ThroughputController throughputController, User user)
  List<org.apache.hadoop.fs.Path>  DefaultCompactor.compact(CompactionRequestImpl request, ThroughputController throughputController, User user)  Do a minor/major compaction on an explicit set of storefiles from a Store.
  List<org.apache.hadoop.fs.Path>  StripeCompactor.compact(CompactionRequestImpl request, int targetCount, long targetSize, byte[] left, byte[] right, byte[] majorRangeFromRow, byte[] majorRangeToRow, ThroughputController throughputController, User user)
  List<org.apache.hadoop.fs.Path>  StripeCompactor.compact(CompactionRequestImpl request, List<byte[]> targetBoundaries, byte[] majorRangeFromRow, byte[] majorRangeToRow, ThroughputController throughputController, User user)
  List<org.apache.hadoop.fs.Path>  StripeCompactionPolicy.BoundaryStripeCompactionRequest.execute(StripeCompactor compactor, ThroughputController throughputController, User user)
  List<org.apache.hadoop.fs.Path>  StripeCompactionPolicy.SplitStripeCompactionRequest.execute(StripeCompactor compactor, ThroughputController throughputController, User user)
  abstract List<org.apache.hadoop.fs.Path>  StripeCompactionPolicy.StripeCompactionRequest.execute(StripeCompactor compactor, ThroughputController throughputController, User user)  Executes the request against compactor (essentially, just calls correct overload of compact method), to simulate more dynamic dispatch.
  private InternalScanner  Compactor.postCompactScannerOpen(CompactionRequestImpl request, ScanType scanType, InternalScanner scanner, User user)  Calls coprocessor, if any, to create scanners - after normal scanner creation.
  private ScanInfo  Compactor.preCompactScannerOpen(CompactionRequestImpl request, ScanType scanType, User user)
  void  CompactionRequester.requestCompaction(HRegion region, String why, int priority, CompactionLifeCycleTracker tracker, User user)  Request compaction on all the stores of the given region.
  void  CompactionRequester.requestCompaction(HRegion region, HStore store, String why, int priority, CompactionLifeCycleTracker tracker, User user)  Request compaction on the given store.
Uses of User in org.apache.hadoop.hbase.replication.regionserver
Methods in org.apache.hadoop.hbase.replication.regionserver with parameters of type User
  private org.apache.hadoop.fs.Path  HFileReplicator.createStagingDir(org.apache.hadoop.fs.Path baseDir, User user, String randomDir)
  private org.apache.hadoop.fs.Path  HFileReplicator.createStagingDir(org.apache.hadoop.fs.Path baseDir, User user, TableName tableName)
Uses of User in org.apache.hadoop.hbase.security
Subclasses of User in org.apache.hadoop.hbase.security
  static final class  Bridges User invocations to underlying calls to UserGroupInformation for secure Hadoop 0.20 and versions 0.21 and above.

Fields in org.apache.hadoop.hbase.security declared as User

Methods in org.apache.hadoop.hbase.security that return User
  static User  User.create(org.apache.hadoop.security.UserGroupInformation ugi)  Wraps an underlying UserGroupInformation instance.
  UserProvider.create(org.apache.hadoop.security.UserGroupInformation ugi)  Wraps an underlying UserGroupInformation instance.
  static User  User.createUserForTesting(org.apache.hadoop.conf.Configuration conf, String name, String[] groups)  Generates a new User instance specifically for use in test code.
  static User  User.SecureHadoopUser.createUserForTesting(org.apache.hadoop.conf.Configuration conf, String name, String[] groups)  Create a user for testing.
  static User  User.getCurrent()  Returns the User instance within current execution context.
  UserProvider.getCurrent()  Return the current user within the current execution context
  static User  Superusers.getSystemUser()

Methods in org.apache.hadoop.hbase.security with parameters of type User
  static boolean  Superusers.isSuperUser(User user)  Check if the current user is a super user
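
A minimal sketch of the most common entry points listed above: User.getCurrent(), UserProvider.getCurrent(), and the test-only createUserForTesting(). The runAs action body and the user/group names are illustrative only.

    import java.security.PrivilegedExceptionAction;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.security.User;
    import org.apache.hadoop.hbase.security.UserProvider;

    public class UserBasics {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // Current user from the execution context, directly or via UserProvider.
        User current = User.getCurrent();
        User viaProvider = UserProvider.instantiate(conf).getCurrent();
        System.out.println(current.getShortName() + " / " + viaProvider.getShortName());

        // Test-only helper: a User with a fixed name and group membership.
        User testUser = User.createUserForTesting(conf, "alice", new String[] { "staff" });
        // Run an action with the test user's credentials.
        String who = testUser.runAs((PrivilegedExceptionAction<String>) () -> testUser.getShortName());
        System.out.println(who);
      }
    }
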
Uses of User in org.apache.hadoop.hbase.security.access
Subclasses of User in org.apache.hadoop.hbase.security.access
  static class  A temporary user class to instantiate User instance based on the name and groups.

Fields in org.apache.hadoop.hbase.security.access declared as User
  private User  AccessControlFilter.user
  private final User  AuthResult.user

Methods in org.apache.hadoop.hbase.security.access that return User
  private User  AccessController.getActiveUser(ObserverContext<?> ctx)  Returns the active user to which authorization checks should be applied.
  private User  SnapshotScannerHDFSAclController.getActiveUser(ObserverContext<?> ctx)
  AuthResult.getUser()
  AccessChecker.validateCallerWithFilterUser(User caller, TablePermission tPerm, String inputUserName)

Methods in org.apache.hadoop.hbase.security.access with parameters of type User
  boolean  AuthManager.accessUserTable(User user, TableName table, Permission.Action action)  Checks if the user has access to the full table or at least a family/qualifier for the specified action.
  static AuthResult  AuthResult.allow(String request, String reason, User user, Permission.Action action, String namespace)
  static AuthResult  AuthResult.allow(String request, String reason, User user, Permission.Action action, TableName table, byte[] family, byte[] qualifier)
  static AuthResult  AuthResult.allow(String request, String reason, User user, Permission.Action action, TableName table, Map<byte[], ? extends Collection<?>> families)
  boolean  AuthManager.authorizeCell(User user, TableName table, Cell cell, Permission.Action action)  Check if user has the given action privilege in cell scope.
  boolean  AuthManager.authorizeUserFamily(User user, TableName table, byte[] family, Permission.Action action)  Check if user has the given action privilege in table:family scope.
  boolean  AuthManager.authorizeUserGlobal(User user, Permission.Action action)  Check if user has the given action privilege in global scope.
  boolean  AuthManager.authorizeUserNamespace(User user, String namespace, Permission.Action action)  Check if user has the given action privilege in namespace scope.
  boolean  AuthManager.authorizeUserTable(User user, TableName table, byte[] family, byte[] qualifier, Permission.Action action)  Check if user has the given action privilege in table:family:qualifier scope.
  boolean  AuthManager.authorizeUserTable(User user, TableName table, byte[] family, Permission.Action action)  Check if user has the given action privilege in table:family scope.
  boolean  AuthManager.authorizeUserTable(User user, TableName table, Permission.Action action)  Check if user has the given action privilege in table scope.
  private boolean  AccessController.checkCoveringPermission(User user, AccessController.OpType request, RegionCoprocessorEnvironment e, byte[] row, Map<byte[], ? extends Collection<?>> familyMap, long opTs, Permission.Action... actions)  Determine if cell ACLs covered by the operation grant access.
  private void  AccessController.checkForReservedTagPresence(User user, Mutation m)
  void  AccessChecker.checkLockPermissions(User user, String namespace, TableName tableName, RegionInfo[] regionInfos, String reason)
  void  NoopAccessChecker.checkLockPermissions(User user, String namespace, TableName tableName, RegionInfo[] regionInfos, String reason)
  private void  AccessController.checkSystemOrSuperUser(User activeUser)
  static AuthResult  AuthResult.deny(String request, String reason, User user, Permission.Action action, String namespace)
  static AuthResult  AuthResult.deny(String request, String reason, User user, Permission.Action action, TableName table, byte[] family, byte[] qualifier)
  static AuthResult  AuthResult.deny(String request, String reason, User user, Permission.Action action, TableName table, Map<byte[], ? extends Collection<?>> families)
  static List<Permission>  PermissionStorage.getCellPermissionsForUser(User user, ExtendedCell cell)
  private boolean  AccessController.hasFamilyQualifierPermission(User user, Permission.Action perm, RegionCoprocessorEnvironment env, Map<byte[], ? extends Collection<byte[]>> familyMap)  Returns true if the current user is allowed the given action over at least one of the column qualifiers in the given column families.
  boolean  AccessChecker.hasUserPermission(User user, String request, Permission permission)  Authorizes that the current user has the given permissions.
  boolean  NoopAccessChecker.hasUserPermission(User user, String request, Permission permission)
  void  AccessChecker.performOnSuperuser(String request, User caller, String userToBeChecked)  Check if the caller is granting or revoking a superuser's or supergroup's permissions.
  void  NoopAccessChecker.performOnSuperuser(String request, User caller, String userToBeChecked)
  private AuthResult  AccessChecker.permissionGranted(String request, User user, Permission.Action permRequest, TableName tableName, byte[] family, byte[] qualifier)
  AccessChecker.permissionGranted(String request, User user, Permission.Action permRequest, TableName tableName, Map<byte[], ? extends Collection<?>> families)  Check the current user for authorization to perform a specific action against the given set of row data.
  private AuthResult  AccessController.permissionGranted(AccessController.OpType opType, User user, RegionCoprocessorEnvironment e, Map<byte[], ? extends Collection<?>> families, Permission.Action... actions)  Check the current user for authorization to perform a specific action against the given set of row data.
  NoopAccessChecker.permissionGranted(String request, User user, Permission.Action permRequest, TableName tableName, Map<byte[], ? extends Collection<?>> families)
  private void  AccessController.preGetUserPermissions(User caller, String userName, String namespace, TableName tableName, byte[] family, byte[] qualifier)
  private void  AccessController.preGrantOrRevoke(User caller, String request, UserPermission userPermission)
  private void  AccessController.preHasUserPermissions(User caller, String userName, List<Permission> permissions)
  void  AccessChecker.requireAccess(User user, String request, TableName tableName, Permission.Action... permissions)  Authorizes that the current user has any of the given permissions to access the table.
  void  NoopAccessChecker.requireAccess(User user, String request, TableName tableName, Permission.Action... permissions)
  void  AccessChecker.requireGlobalPermission(User user, String request, Permission.Action perm, String namespace)  Checks that the user has the given global permission.
  void  AccessChecker.requireGlobalPermission(User user, String request, Permission.Action perm, TableName tableName, Map<byte[], ? extends Collection<byte[]>> familyMap, String filterUser)  Checks that the user has the given global permission.
  void  NoopAccessChecker.requireGlobalPermission(User user, String request, Permission.Action perm, String namespace)
  void  NoopAccessChecker.requireGlobalPermission(User user, String request, Permission.Action perm, TableName tableName, Map<byte[], ? extends Collection<byte[]>> familyMap, String filterUser)
  void  AccessChecker.requireNamespacePermission(User user, String request, String namespace, String filterUser, Permission.Action... permissions)  Checks that the user has the given global or namespace permission.
  void  AccessChecker.requireNamespacePermission(User user, String request, String namespace, TableName tableName, Map<byte[], ? extends Collection<byte[]>> familyMap, Permission.Action... permissions)  Checks that the user has the given global or namespace permission.
  void  NoopAccessChecker.requireNamespacePermission(User user, String request, String namespace, String filterUser, Permission.Action... permissions)
  void  NoopAccessChecker.requireNamespacePermission(User user, String request, String namespace, TableName tableName, Map<byte[], ? extends Collection<byte[]>> familyMap, Permission.Action... permissions)
  void  AccessChecker.requirePermission(User user, String request, String filterUser, Permission.Action perm)  Authorizes that the current user has global privileges for the given action.
  void  AccessChecker.requirePermission(User user, String request, TableName tableName, byte[] family, byte[] qualifier, String filterUser, Permission.Action... permissions)  Authorizes that the current user has any of the given permissions for the given table, column family and column qualifier.
  void  NoopAccessChecker.requirePermission(User user, String request, String filterUser, Permission.Action perm)
  void  NoopAccessChecker.requirePermission(User user, String request, TableName tableName, byte[] family, byte[] qualifier, String filterUser, Permission.Action... permissions)
  void  AccessChecker.requireTablePermission(User user, String request, TableName tableName, byte[] family, byte[] qualifier, Permission.Action... permissions)  Authorizes that the current user has any of the given permissions for the given table, column family and column qualifier.
  void  NoopAccessChecker.requireTablePermission(User user, String request, TableName tableName, byte[] family, byte[] qualifier, Permission.Action... permissions)
  AccessChecker.validateCallerWithFilterUser(User caller, TablePermission tPerm, String inputUserName)

Constructors in org.apache.hadoop.hbase.security.access with parameters of type User
  (package private)  AccessControlFilter(AuthManager mgr, User ugi, TableName tableName, AccessControlFilter.Strategy strategy, Map<ByteRange, Integer> cfVsMaxVersions)
  AuthResult(boolean allowed, String request, String reason, User user, Permission.Action action, String namespace)
  AuthResult(boolean allowed, String request, String reason, User user, Permission.Action action, TableName table, byte[] family, byte[] qualifier)
  AuthResult(boolean allowed, String request, String reason, User user, Permission.Action action, TableName table, Map<byte[], ? extends Collection<?>> families)
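
A minimal, server-side sketch of the AuthManager.authorizeUserTable(...) overload listed above. AuthManager is an internal (private-audience) class, so this only illustrates the call shape used by AccessController, not a public client API; the wrapper class and method names (TableWriteCheck, canWrite) are hypothetical.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.security.User;
    import org.apache.hadoop.hbase.security.access.AuthManager;
    import org.apache.hadoop.hbase.security.access.Permission;

    public final class TableWriteCheck {
      // True when the user may WRITE anywhere in the given table.
      static boolean canWrite(AuthManager authManager, User user, TableName table) {
        return authManager.authorizeUserTable(user, table, Permission.Action.WRITE);
      }
    }
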
Uses of User in org.apache.hadoop.hbase.security.provider
Methods in org.apache.hadoop.hbase.security.provider with parameters of type User
  org.apache.hadoop.security.UserGroupInformation  GssSaslClientAuthenticationProvider.getRealUser(User user)
  default org.apache.hadoop.security.UserGroupInformation  SaslClientAuthenticationProvider.getRealUser(User ugi)  Returns the "real" user, the user who has the credentials being authenticated by the remote service, in the form of a UserGroupInformation object.
  org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.UserInformation  DigestSaslClientAuthenticationProvider.getUserInfo(User user)
  org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.UserInformation  GssSaslClientAuthenticationProvider.getUserInfo(User user)
  org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.UserInformation  SaslClientAuthenticationProvider.getUserInfo(User user)  Constructs a RPCProtos.UserInformation from the given UserGroupInformation
  org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.UserInformation  SimpleSaslClientAuthenticationProvider.getUserInfo(User user)
  Pair<SaslClientAuthenticationProvider, org.apache.hadoop.security.token.Token<? extends org.apache.hadoop.security.token.TokenIdentifier>>  AuthenticationProviderSelector.selectProvider(String clusterId, User user)  Chooses the authentication provider which should be used given the provided client context from the authentication providers passed in via AuthenticationProviderSelector.configure(Configuration, Collection).
  Pair<SaslClientAuthenticationProvider, org.apache.hadoop.security.token.Token<? extends org.apache.hadoop.security.token.TokenIdentifier>>  BuiltInProviderSelector.selectProvider(String clusterId, User user)
  Pair<SaslClientAuthenticationProvider, org.apache.hadoop.security.token.Token<? extends org.apache.hadoop.security.token.TokenIdentifier>>  SaslClientAuthenticationProviders.selectProvider(String clusterId, User clientUser)  Chooses the best authentication provider and corresponding token given the HBase cluster identifier and the user.
Uses of User in org.apache.hadoop.hbase.security.provider.example
Methods in org.apache.hadoop.hbase.security.provider.example with parameters of type User
  org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.UserInformation  ShadeSaslClientAuthenticationProvider.getUserInfo(User user)
  Pair<SaslClientAuthenticationProvider, org.apache.hadoop.security.token.Token<? extends org.apache.hadoop.security.token.TokenIdentifier>>  ShadeProviderSelector.selectProvider(String clusterId, User user)
Uses of User in org.apache.hadoop.hbase.security.token
Methods in org.apache.hadoop.hbase.security.token with parameters of type User
  static void  TokenUtil.addTokenForJob(Connection conn, User user, org.apache.hadoop.mapreduce.Job job)  Checks for an authentication token for the given user, obtaining a new token if necessary, and adds it to the credentials for the given map reduce job.
  static void  TokenUtil.addTokenForJob(Connection conn, org.apache.hadoop.mapred.JobConf job, User user)  Checks for an authentication token for the given user, obtaining a new token if necessary, and adds it to the credentials for the given map reduce job.
  static boolean  TokenUtil.addTokenIfMissing(Connection conn, User user)  Checks if an authentication token exists for the connected cluster, obtaining one if needed and adding it to the user's credentials.
  private static org.apache.hadoop.security.token.Token<AuthenticationTokenIdentifier>  TokenUtil.getAuthToken(Connection conn, User user)  Get the authentication token of the user for the cluster specified in the configuration
  static void  ClientTokenUtil.obtainAndCacheToken(Connection conn, User user)  Obtain an authentication token for the given user and add it to the user's credentials.
  static void  TokenUtil.obtainAndCacheToken(Connection conn, User user)
  (package private) static org.apache.hadoop.security.token.Token<AuthenticationTokenIdentifier>  ClientTokenUtil.obtainToken(Connection conn, User user)  Obtain and return an authentication token for the given user.
  static org.apache.hadoop.security.token.Token<AuthenticationTokenIdentifier>  TokenUtil.obtainToken(Connection conn, User user)  Deprecated. External users should not use this method, will be removed in 4.0.0.
  static void  TokenUtil.obtainTokenForJob(Connection conn, User user, org.apache.hadoop.mapreduce.Job job)  Obtain an authentication token on behalf of the given user and add it to the credentials for the given map reduce job.
  static void  TokenUtil.obtainTokenForJob(Connection conn, org.apache.hadoop.mapred.JobConf job, User user)  Obtain an authentication token on behalf of the given user and add it to the credentials for the given map reduce job.
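
A minimal sketch of TokenUtil.obtainTokenForJob(...) from the listing above, assuming a kerberized cluster and an already-open Connection; the wrapper class and method names (TokenSetup, attachHBaseToken) and the exception list are illustrative.

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.security.User;
    import org.apache.hadoop.hbase.security.token.TokenUtil;
    import org.apache.hadoop.mapreduce.Job;

    public final class TokenSetup {
      static void attachHBaseToken(Connection conn, Job job) throws IOException, InterruptedException {
        User user = User.getCurrent();
        // Obtains an authentication token on behalf of the user and adds it to the
        // credentials of the given MapReduce job.
        TokenUtil.obtainTokenForJob(conn, user, job);
      }
    }
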
Uses of User in org.apache.hadoop.hbase.security.visibility
Methods in org.apache.hadoop.hbase.security.visibility that return User

Methods in org.apache.hadoop.hbase.security.visibility with parameters of type User
  DefinedSetFilterScanLabelGenerator.getLabels(User user, Authorizations authorizations)
  EnforcingScanLabelGenerator.getLabels(User user, Authorizations authorizations)
  FeedUserAuthScanLabelGenerator.getLabels(User user, Authorizations authorizations)
  ScanLabelGenerator.getLabels(User user, Authorizations authorizations)  Helps to get a list of labels associated with a UGI
  SimpleScanLabelGenerator.getLabels(User user, Authorizations authorizations)
  boolean  DefaultVisibilityLabelServiceImpl.havingSystemAuth(User user)
  boolean  VisibilityLabelService.havingSystemAuth(User user)  System checks for user auth during admin operations.
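
A minimal sketch of a custom ScanLabelGenerator implementing the getLabels(User, Authorizations) method listed above. It simply passes the client-supplied labels through, roughly what the listed SimpleScanLabelGenerator does; that ScanLabelGenerator extends Configurable is assumed here from the built-in implementations, so verify against your HBase version. The class name PassThroughScanLabelGenerator is hypothetical.

    import java.util.List;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.security.User;
    import org.apache.hadoop.hbase.security.visibility.Authorizations;
    import org.apache.hadoop.hbase.security.visibility.ScanLabelGenerator;

    public class PassThroughScanLabelGenerator implements ScanLabelGenerator {
      private Configuration conf;

      @Override
      public void setConf(Configuration conf) {
        this.conf = conf;
      }

      @Override
      public Configuration getConf() {
        return conf;
      }

      @Override
      public List<String> getLabels(User user, Authorizations authorizations) {
        // Return the labels the client supplied on the Scan/Get, unchanged; a real
        // generator would typically filter them against the labels granted to user.
        return authorizations == null ? null : authorizations.getLabels();
      }
    }
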
Uses of User in org.apache.hadoop.hbase.snapshot
Methods in org.apache.hadoop.hbase.snapshot with parameters of type User
  static boolean  SnapshotDescriptionUtils.isSnapshotOwner(SnapshotDescription snapshot, User user)  Check if the user is this table snapshot's owner
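
A minimal sketch of the ownership check above, mirroring the gate the master applies before snapshot operations. The wrapper class and method (SnapshotOwnership, ensureOwner) are hypothetical, and SnapshotDescription is assumed here to be the client-side org.apache.hadoop.hbase.client.SnapshotDescription.

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.SnapshotDescription;
    import org.apache.hadoop.hbase.security.User;
    import org.apache.hadoop.hbase.snapshot.SnapshotDescriptionUtils;

    public final class SnapshotOwnership {
      // Fails unless the caller owns the snapshot.
      static void ensureOwner(SnapshotDescription snapshot, User caller) throws IOException {
        if (!SnapshotDescriptionUtils.isSnapshotOwner(snapshot, caller)) {
          throw new IOException(caller.getShortName() + " does not own snapshot " + snapshot.getName());
        }
      }
    }
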
Uses of User in org.apache.hadoop.hbase.thrift2.client
Fields in org.apache.hadoop.hbase.thrift2.client declared as User

Constructors in org.apache.hadoop.hbase.thrift2.client with parameters of type User
  ThriftConnection(org.apache.hadoop.conf.Configuration conf, ExecutorService pool, User user, Map<String, byte[]> connectionAttributes)
Uses of User in org.apache.hadoop.hbase.util
Fields in org.apache.hadoop.hbase.util declared as User
  private User  LoadTestTool.userOwner
  private User  MultiThreadedUpdaterWithACL.userOwner
  private User  MultiThreadedWriterWithACL.userOwner

Fields in org.apache.hadoop.hbase.util with type parameters of type User
  MultiThreadedReaderWithACL.users
  MultiThreadedUpdaterWithACL.users

Constructors in org.apache.hadoop.hbase.util with parameters of type User
  MultiThreadedUpdaterWithACL(LoadTestDataGenerator dataGen, org.apache.hadoop.conf.Configuration conf, TableName tableName, double updatePercent, User userOwner, String userNames)
  MultiThreadedWriterWithACL(LoadTestDataGenerator dataGen, org.apache.hadoop.conf.Configuration conf, TableName tableName, User userOwner)