Getting the paths of the auto-generated keytabs on a CDH cluster

  1. export dirname=/var/run/cloudera-scm-agent/process/
  2. find $dirname -not -empty -ls | grep keytab
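
For example, a minimal sketch of copying one of the generated keytabs into the working directory for the kinit step below (the filename and matched path are illustrative; the process directory name varies per host):

  1. cp "$(find $dirname -name hdfs.keytab | head -n 1)" ./hdfs.keytab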

General authentication

  1. kinit -kt hdfs.keytab hdfs/cdh01@FOO.COM
  2. klist -ket xx.keytab
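
After kinit succeeds, the current ticket cache can be checked with:

  1. klist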

Impala

Download the Impala JDBC driver JAR dependency and upload it to Hive's auxlib directory. The download URL is:
https://www.cloudera.com/downloads/connectors/impala/jdbc/2-6-12.html

  1. beeline -d "com.cloudera.impala.jdbc41.Driver" -u "jdbc:impala://eng-cdh3:21050;AuthMech=1;KrbRealm=DTSTACK.COM;KrbHostFQDN=eng-cdh3;KrbServiceName=impala"
  2. !connect jdbc:impala://eng-cdh3:21050;AuthMech=1;KrbRealm=DTSTACK.COM;KrbHostFQDN=eng-cdh3;KrbServiceName=impala
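
AuthMech=1 selects Kerberos authentication, so a valid ticket is required before connecting; a hedged example (the keytab and principal here are illustrative):

  1. kinit -kt user.keytab user@DTSTACK.COM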

https://docs.cloudera.com/documentation/enterprise/5-14-x/topics/impala_proxy.html
https://blog.csdn.net/yu616568/article/details/53821439

Kafka

  1. export KAFKA_OPTS="-Djava.security.auth.login.config=/root/kafka/jaas.conf"
  2. kafka-console-producer --broker-list eng-cdh1:9092,eng-cdh2:9092 --topic test --producer.config client.properties
  3. kafka-console-consumer --topic test --from-beginning --bootstrap-server eng-cdh1:9092 --consumer.config client.properties
  4. kafka-console-consumer --topic wuren_foo --from-beginning --bootstrap-server kudu1:9092
  5. kafka-topics --create --topic ods_wuren --replication-factor 1 --partitions 1 --zookeeper cdp01.dtstack.com:2181,cdp02.dtstack.com:2181,cdp03.dtstack.com:2181/kafka

Contents of client.properties

  1. security.protocol=SASL_PLAINTEXT
  2. sasl.kerberos.service.name=kafka

Example jaas.conf contents

  1. KafkaClient {
  2. com.sun.security.auth.module.Krb5LoginModule required
  3. useKeyTab=true
  4. keyTab="/root/kafka/kafka.keytab"
  5. principal="kafka/eng-cdh1@DTSTACK.COM";
  6. };
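
If authentication fails, JVM Kerberos debug output can be enabled through the same variable (a hedged example that combines it with the jaas.conf setting above):

  1. export KAFKA_OPTS="-Djava.security.auth.login.config=/root/kafka/jaas.conf -Dsun.security.krb5.debug=true"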

Python

Writing a Python script for Kafka with Kerberos:
https://github.com/dpkp/kafka-python/issues/1336
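
kafka-python relies on the gssapi package for Kerberos (SASL/GSSAPI) support; a hedged install example:

  1. pip install kafka-python gssapi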

References

https://docs.cloudera.com/runtime/7.2.1/kafka-securing/topics/kafka-secure-kerberos-enable.html

HBase

  1. hbase shell
  2. put '<table name>','row1','<colfamily:colname>','<value>'
  3. put 'dim_foo','1','cf1:id','1'
  4. put 'dim_foo', '1', 'cf1:name', 'foo'
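
To verify the writes (and that the authenticated user has read access), the rows can be read back in the same shell:

  1. get 'dim_foo', '1'
  2. scan 'dim_foo'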

ZooKeeper

Even after Kerberos is enabled, ZooKeeper clients can still connect; access is instead controlled per znode through ACLs. This differs from other components, where unauthenticated users cannot connect at all once Kerberos is enabled.

Connecting to ZK from the command line after Kerberos authentication

1. Configure the JAAS file

Sample jaas.conf

  1. Client {
  2. com.sun.security.auth.module.Krb5LoginModule required
  3. useKeyTab=true
  4. keyTab="/root/hbase.keytab"
  5. storeKey=true
  6. useTicketCache=false
  7. principal="hbase/krb02.k.com@K.COM";
  8. };

2. Set the environment variable

  1. export JVMFLAGS="-Djava.security.auth.login.config=/etc/zookeeper/conf/jaas.conf"

3. Connect to the ZK cluster

Note: be sure to specify the ZK server's hostname here; connecting to the default 127.0.0.1 will cause Kerberos authentication to fail.

  1. bin/zkCli.sh -server krb02.k.com:2181

Once connected, a znode's ACL can be inspected inside the ZK shell; for example, /hbase grants read-only access to world and full access to the SASL-authenticated hbase principal:

  1. getAcl /hbase
  2. 'world,'anyone
  3. : r
  4. 'sasl,'hbase
  5. : cdrwa
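
A hedged example of granting a SASL-authenticated principal full access to a znode (the path and principal here are illustrative):

  1. setAcl /foo sasl:hbase:cdrwa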

Hive

In the Hive JDBC URL, set the principal parameter to HiveServer2's own service principal, not the principal the client authenticates as.

  1. jdbc:hive2://eng-cdh3:10001/default;principal=hive/eng-cdh3@DTSTACK.COM
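
For example, connecting with beeline after kinit, using the URL above:

  1. beeline -u "jdbc:hive2://eng-cdh3:10001/default;principal=hive/eng-cdh3@DTSTACK.COM"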