
Custom Hive Permission Control (4): Extending Hive to Implement Custom Permission Control



In the first three parts of this series, the base data for Hive permission control was put in place: the user permission configuration is now implemented and can be maintained and managed through a web interface. The most important remaining work is modifying the Hive source code itself, mainly org.apache.hadoop.hive.conf.HiveConf and org.apache.hadoop.hive.ql.Driver.

First, to carry our custom settings, extend org.apache.hadoop.hive.conf.HiveConf:

public static enum ConfVars {
    ...
    KUXUNUSER("hive.kuxun.username", ""),                             // user name
    KUXUNPASSWORD("hive.kuxun.password", ""),                         // password
    KUXUN_HIVESERVER_URL("hive.kuxun.hiveserver.url", ""),            // JDBC URL of the authorization database
    KUXUN_HIVESERVER_USER("hive.kuxun.hiveserver.username", ""),      // authorization database user name
    KUXUN_HIVESERVER_PASSWORD("hive.kuxun.hiveserver.password", ""),  // authorization database password
    KUXUN_RESERVE_A("hive.kuxun.resrver.a", ""),                      // reserved
    KUXUN_RESERVE_B("hive.kuxun.resrver.b", ""),                      // reserved
    KUXUN_RESERVE_C("hive.kuxun.resrver.c", ""),                      // reserved
    KUXUN_RESERVE_D("hive.kuxun.resrver.d", ""),                      // reserved
    ...
}
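These values are read back through the standard HiveConf API. As a rough sketch of how the three hive.kuxun.hiveserver.* settings might be consumed to reach the authorization database (for example from the UserAuthDataMode helper used below), something along these lines would work; the class and method names here are illustrative and not part of the actual patch:

import java.sql.Connection;
import java.sql.DriverManager;
import org.apache.hadoop.hive.conf.HiveConf;

// Illustrative helper only: opens a JDBC connection to the authorization database
// using the ConfVars added above. Driver loading and error handling are simplified.
public class AuthDbConnectionFactory {
    public static Connection open(HiveConf conf) throws Exception {
        String url  = HiveConf.getVar(conf, HiveConf.ConfVars.KUXUN_HIVESERVER_URL);
        String user = HiveConf.getVar(conf, HiveConf.ConfVars.KUXUN_HIVESERVER_USER);
        String pass = HiveConf.getVar(conf, HiveConf.ConfVars.KUXUN_HIVESERVER_PASSWORD);
        Class.forName("com.mysql.jdbc.Driver");   // hive-site.xml below points at a MySQL database
        return DriverManager.getConnection(url, user, pass);
    }
}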

Next, extend the org.apache.hadoop.hive.ql.Driver class.
Add two private fields to hold the user name and password passed in:
private String username = "";
private String password = "";

In the run() method, read username and password from the configuration:
this.username = HiveConf.getVar(conf, HiveConf.ConfVars.KUXUNUSER);
this.password = HiveConf.getVar(conf, HiveConf.ConfVars.KUXUNPASSWORD);

Then add the following method:
private void doAuthorizationExtend(BaseSemanticAnalyzer sem) throws HiveException, AuthorizationException {
    // Load the user's permission information from the authorization database.
    UserAuthDataMode ua;
    try {
        ua = new UserAuthDataMode(this.username, this.password, this.conf);
        ua.run();
    } catch (Exception e) {
        throw new AuthorizationException(e.getMessage());
    }
    if (ua.isSuperUser()) {
        LOG.error("current user is super user, do not check authorization.");
        return;
    }
    LOG.warn("current user is [" + this.username + "]. start check authorization.......");
    // userCommand is another field added to Driver that stores the submitted statement.
    LOG.warn("current user [" + this.username + "] execute command [" + this.userCommand + "].");
    HashSet<ReadEntity> inputs = sem.getInputs();
    SessionState ss = SessionState.get();
    HiveOperation op = ss.getHiveOperation();
    if (op != null) {
        // Write operations are not checked here: hiveserver does not expose them.
    }
    LOG.debug("---------auth KUXUN--------------");
    if (inputs != null && inputs.size() > 0) {
        // Limit how many partitions a single job may read.
        if (inputs.size() > ua.getMaxMapCount()) {
            String errorMsg = "The max partition number which you can handle in one job is ["
                + ua.getMaxMapCount() + "], but current is [" + inputs.size() + "]. Permission denied.";
            Exception ex = new Exception(errorMsg);
            throw new AuthorizationException(errorMsg, ex);
        }
        for (ReadEntity read : inputs) {
            if (read.getPartition() != null) {
                Table tbl = read.getTable();
                String tblName = tbl.getTableName();
                LOG.debug("-----dbName.tableName---------" + tbl.getDbName() + "." + tblName);
                String tblFullName = tbl.getDbName() + "." + tblName;
                // If the table's database is not among the user's authorized databases and the
                // table is not among the user's authorized tables, deny access.
                if (ua.getDbNameList().indexOf(tbl.getDbName()) < 0
                        && ua.getTableNameList().indexOf(tblFullName) < 0) { // accessor name assumed; this part of the listing was garbled
                    throw new AuthorizationException("table [" + tblFullName + "] Permission denied.");
                }
                // Check the partition values being read against the user's partition permissions.
                Partition part = read.getPartition();
                List<String> partValueList = part.getValues();
                List<FieldSchema> partList = tbl.getPartitionKeys();
                int partSize = partList.size();
                for (int i = 0; i < partSize; i++) {
                    // ... per-partition-key checks (garbled in the original listing) ...
                }
            }
        }
        // Column-level checks on the query plan.
        if (op.equals(HiveOperation.CREATETABLE_AS_SELECT) || op.equals(HiveOperation.QUERY)) {
            SemanticAnalyzer querySem = (SemanticAnalyzer) sem;
            ParseContext parseCtx = querySem.getParseContext();
            Map<TableScanOperator, Table> tsoTopMap = parseCtx.getTopToTable();
            for (Map.Entry<String, Operator<? extends Serializable>> topOpMap : querySem
                    .getParseContext().getTopOps().entrySet()) {
                Operator<? extends Serializable> topOp = topOpMap.getValue();
                if (topOp instanceof TableScanOperator && tsoTopMap.containsKey(topOp)) {
                    TableScanOperator tableScanOp = (TableScanOperator) topOp;
                    Table tbl = tsoTopMap.get(tableScanOp);
                    String dbName = tbl.getDbName();
                    String tblName = tbl.getTableName();
                    List<Integer> neededColumnIds = tableScanOp.getNeededColumnIDs();
                    List<FieldSchema> columns = tbl.getCols();
                    List<String> cols = new ArrayList<String>();
                    if (neededColumnIds != null) {
                        LOG.debug("-------neededColumnIds-----" + neededColumnIds.size());
                    } else {
                        LOG.debug("-------neededColumnIds-----null");
                    }
                    if (neededColumnIds != null && neededColumnIds.size() > 0) {
                        for (int i = 0; i < neededColumnIds.size(); i++) {
                            cols.add(columns.get(neededColumnIds.get(i)).getName());
                        }
                    } else {
                        for (int i = 0; i < columns.size(); i++) {
                            cols.add(columns.get(i).getName());
                        }
                    }
                    // A non-partitioned table must itself appear in the user's authorized objects.
                    String fullTableName = dbName + "." + tblName;
                    if (ua.getDbNameList().indexOf(tbl.getDbName()) < 0
                            && ua.getTableNameList().indexOf(fullTableName) < 0) { // accessor name assumed, as above
                        throw new AuthorizationException("table [" + fullTableName + "] Permission denied.");
                    }
                    // Columns the user is explicitly forbidden to read.
                    if (ua.getExcludeColumnList().containsKey(fullTableName)) {
                        List<String> authColList = ua.getExcludeColumnList().get(fullTableName);
                        for (String col : cols) {
                            if (authColList.indexOf(col) != -1) {
                                throw new AuthorizationException("table [" + fullTableName + "] column [" + col + "] Permission denied.");
                            }
                            LOG.debug("--------col------------" + dbName + "." + tblName + ":" + col);
                        }
                    }
                    // Columns that every query against this table is required to reference.
                    if (ua.getIncludeColumnList().containsKey(fullTableName)) {
                        List<String> authColList = ua.getIncludeColumnList().get(fullTableName);
                        for (String authCol : authColList) {
                            if (cols.indexOf(authCol) == -1) {
                                throw new AuthorizationException("table [" + fullTableName + "] must contain column [" + authCol + "]. Permission denied.");
                            }
                        }
                    }
                }
            }
        }
    }
}
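The method above only depends on a handful of accessors on UserAuthDataMode, the helper built in the earlier parts of this series to load a user's permissions from the authorization database. For readers jumping in here, a minimal sketch of the contract the Driver code assumes is shown below; the field names and loading logic are illustrative, not the actual implementation:

import java.util.List;
import java.util.Map;
import org.apache.hadoop.hive.conf.HiveConf;

// Sketch of the interface doAuthorizationExtend() relies on; the real class was
// implemented in parts 1-3 and populates these fields from the authorization database.
public class UserAuthDataMode {
    private final String username;
    private final String password;
    private final HiveConf conf;

    private boolean superUser;
    private int maxMapCount;                              // max partitions one job may read
    private List<String> dbNameList;                      // databases the user may read
    private List<String> tableNameList;                   // db.table names the user may read (accessor assumed above)
    private Map<String, List<String>> excludeColumnList;  // db.table -> columns the user must not read
    private Map<String, List<String>> includeColumnList;  // db.table -> columns every query must reference

    public UserAuthDataMode(String username, String password, HiveConf conf) {
        this.username = username;
        this.password = password;
        this.conf = conf;
    }

    public void run() throws Exception {
        // Connect to the database configured by hive.kuxun.hiveserver.* and fill the fields above.
    }

    public boolean isSuperUser() { return superUser; }
    public int getMaxMapCount() { return maxMapCount; }
    public List<String> getDbNameList() { return dbNameList; }
    public List<String> getTableNameList() { return tableNameList; }
    public Map<String, List<String>> getExcludeColumnList() { return excludeColumnList; }
    public Map<String, List<String>> getIncludeColumnList() { return includeColumnList; }
}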

In the compile() method, add a call to the custom authorization check:
public int compile(String command, boolean resetTaskIds) {
    // ... existing compile() logic, after semantic analysis has produced sem ...
    try {
        doAuthorizationExtend(sem);
    } catch (AuthorizationException authExp) {
        errorMessage = "FAILED: Kuxun Authorization failed: " + authExp.getMessage()
            + " Please contact anyoneking@163.com for your information.";
        console.printError("Kuxun Authorization failed: " + authExp.getMessage()
            + " Please contact anyoneking@163.com for your information.");
        return 403;
    }
    // ... rest of compile() ...
}

Note: errorMessage must be assigned when the exception is caught; otherwise, when a query is rejected, the Hive client shows no error message at all and only prints null.

Once this is done, rebuild the jar and place it in Hive's lib directory.
At the same time, update hive-site.xml so that the corresponding settings are passed in:

<property>
  <name>hive.kuxun.username</name>
  <value>test</value>
</property>
<property>
  <name>hive.kuxun.password</name>
  <value>test</value>
</property>
<property>
  <name>hive.kuxun.hiveserver.url</name>
  <value>jdbc:mysql://localhost:3306/hiveserver</value>
  <description>hiveserver jdbc connection url</description>
</property>
<property>
  <name>hive.kuxun.hiveserver.username</name>
  <value>test</value>
  <description>username to use against hiveserver database</description>
</property>
<property>
  <name>hive.kuxun.hiveserver.password</name>
  <value>test</value>
  <description>password to use against hiveserver database</description>
</property>
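Since hive-site.xml can only carry one static user, in practice each client session would presumably supply its own credentials, for example by issuing set commands through the HiveServer JDBC driver before running queries. A rough sketch, assuming the HiveServer1 driver used in this series and placeholder host, port, table and credentials:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class KuxunHiveClientExample {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");   // HiveServer1 JDBC driver
        Connection conn = DriverManager.getConnection("jdbc:hive://localhost:10000/default", "", "");
        Statement stmt = conn.createStatement();
        // Hand the per-user credentials to the session; Driver.run() reads them back via HiveConf.
        stmt.execute("set hive.kuxun.username=test");
        stmt.execute("set hive.kuxun.password=test");
        // Statements submitted afterwards are checked by doAuthorizationExtend().
        stmt.executeQuery("select * from some_db.some_table limit 10");
        conn.close();
    }
}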
