HBase: Kerberize/SSL Installation

In this tutorial I will show you how to use Kerberos and SSL with HBase. I will use self-signed certificates for this example. Before you begin, ensure you have installed the Kerberos server, Hadoop, and ZooKeeper.

This tutorial assumes your hostname is “hadoop”.

We will install a Master, a RegionServer, and a REST client.

Create Kerberos Principals

cd /etc/security/keytabs/

sudo kadmin.local

#You can list principals
listprincs

#Create the following principals
addprinc -randkey hbase/hadoop@REALM.CA
addprinc -randkey hbaseHTTP/hadoop@REALM.CA

#Create the keytab files.
#You will need these for HBase to be able to log in
xst -k hbase.service.keytab hbase/hadoop@REALM.CA
xst -k hbaseHTTP.service.keytab hbaseHTTP/hadoop@REALM.CA

Set Keytab Permissions/Ownership

sudo chown root:hadoopuser /etc/security/keytabs/*
sudo chmod 750 /etc/security/keytabs/*
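With `750` and `root:hadoopuser`, root can manage the keytabs, members of the hadoopuser group can read them, and everyone else gets nothing. A minimal sketch you can run anywhere to see what that mode looks like, using a throwaway file instead of the real keytabs:

```shell
# Demonstrate the intended keytab mode on a throwaway file; apply the
# same chmod/chown to /etc/security/keytabs/* on the real host.
set -e
demo=$(mktemp -d)
touch "$demo/hbase.service.keytab"
chmod 750 "$demo/hbase.service.keytab"
# %a prints the octal mode: 750 = owner rwx, group r-x, other none
stat -c '%a' "$demo/hbase.service.keytab"
rm -r "$demo"
```

On the real host you can also confirm the keytab contents with `klist -kt /etc/security/keytabs/hbase.service.keytab`.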

Install HBase

wget http://apache.forsale.plus/hbase/2.1.0/hbase-2.1.0-bin.tar.gz
tar -zxvf hbase-2.1.0-bin.tar.gz
sudo mv hbase-2.1.0 /usr/local/hbase/
cd /usr/local/hbase/conf/

Setup .bashrc:

 sudo nano ~/.bashrc

Add the following to the end of the file.

#HBASE VARIABLES START
export HBASE_HOME=/usr/local/hbase
export PATH=$PATH:$HBASE_HOME/bin
export HBASE_CONF_DIR=$HBASE_HOME/conf
#HBASE VARIABLES END

 source ~/.bashrc
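To confirm the variables expand as intended after sourcing, you can echo them. The sketch below sets them the same way `~/.bashrc` does, so it is runnable even before HBase is installed:

```shell
# Set the HBase variables as ~/.bashrc would, then confirm that
# HBASE_CONF_DIR resolves relative to HBASE_HOME.
export HBASE_HOME=/usr/local/hbase
export PATH=$PATH:$HBASE_HOME/bin
export HBASE_CONF_DIR=$HBASE_HOME/conf
echo "$HBASE_CONF_DIR"   # prints /usr/local/hbase/conf
```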

hbase_client_jaas.conf

Client {
        com.sun.security.auth.module.Krb5LoginModule required
        useKeyTab=false
        useTicketCache=true;
};

hbase_server_jaas.conf

Client {
        com.sun.security.auth.module.Krb5LoginModule required
        useKeyTab=true
        useTicketCache=false
        keyTab="/etc/security/keytabs/hbase.service.keytab"
        principal="hbase/hadoop@REALM.CA";
};

regionservers

hadoop

hbase-env.sh

Add or modify the following settings.

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/
export HBASE_CONF_DIR=${HBASE_CONF_DIR:-/usr/local/hbase/conf}
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-/usr/local/hadoop/etc/hadoop}
export HBASE_CLASSPATH="$CLASSPATH:$HADOOP_CONF_DIR"
export HBASE_REGIONSERVERS=${HBASE_CONF_DIR}/regionservers
export HBASE_LOG_DIR=${HBASE_HOME}/logs
export HBASE_PID_DIR=/home/hadoopuser
export HBASE_MANAGES_ZK=false
export HBASE_OPTS="-Djava.security.auth.login.config=$HBASE_CONF_DIR/hbase_client_jaas.conf"
export HBASE_MASTER_OPTS="-Djava.security.auth.login.config=$HBASE_CONF_DIR/hbase_server_jaas.conf"
export HBASE_REGIONSERVER_OPTS="-Djava.security.auth.login.config=$HBASE_CONF_DIR/hbase_server_jaas.conf"

hbase-site.xml

<configuration>
	<property>
		<name>hbase.rootdir</name>
		<value>hdfs://hadoop:54310/hbase</value>
	</property>
	<property>
		<name>hbase.zookeeper.property.dataDir</name>
		<value>/usr/local/zookeeper/data</value>
	</property>
	<property>
		<name>hbase.cluster.distributed</name>
		<value>true</value>
	</property>
	<property>
		<name>hbase.regionserver.kerberos.principal</name>
		<value>hbase/_HOST@REALM.CA</value>
	</property>
	<property>
		<name>hbase.regionserver.keytab.file</name>
		<value>/etc/security/keytabs/hbase.service.keytab</value>
	</property>
	<property>
		<name>hbase.master.kerberos.principal</name>
		<value>hbase/_HOST@REALM.CA</value>
	</property>
	<property>
		<name>hbase.master.keytab.file</name>
		<value>/etc/security/keytabs/hbase.service.keytab</value>
	</property>
	<property>
		<name>hbase.security.authentication.spnego.kerberos.principal</name>
		<value>hbaseHTTP/_HOST@REALM.CA</value>
	</property>
	<property>
		<name>hbase.security.authentication.spnego.kerberos.keytab</name>
		<value>/etc/security/keytabs/hbaseHTTP.service.keytab</value>
	</property>
	<property>
		<name>hbase.security.authentication</name>
		<value>kerberos</value>
	</property>
	<property>
		<name>hbase.security.authorization</name>
		<value>true</value>
	</property>
	<property>
		<name>hbase.rpc.protection</name>
		<value>integrity</value>
	</property>
	<property>
		<name>hbase.rpc.engine</name>
		<value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
	</property>
	<property>
		<name>hbase.coprocessor.master.classes</name>
		<value>org.apache.hadoop.hbase.security.access.AccessController</value>
	</property>
	<property>
		<name>hbase.coprocessor.region.classes</name>
		<value>org.apache.hadoop.hbase.security.token.TokenProvider,org.apache.hadoop.hbase.security.access.AccessController</value>
	</property>
	<property>
		<name>hbase.security.authentication.ui</name>
		<value>kerberos</value>
		<description>Controls what kind of authentication should be used for the HBase web UIs.</description>
	</property>
	<property>
		<name>hbase.master.port</name>
		<value>16000</value>
	</property>
	<property>
		<name>hbase.master.info.bindAddress</name>
		<value>0.0.0.0</value>
	</property>
	<property>
		<name>hbase.master.info.port</name>
		<value>16010</value>
	</property>
	<property>
		<name>hbase.regionserver.hostname</name>
		<value>hadoop</value>
	</property>
	<property>
		<name>hbase.regionserver.port</name>
		<value>16020</value>
	</property>
	<property>
		<name>hbase.regionserver.info.port</name>
		<value>16030</value>
	</property>
	<property>
		<name>hbase.regionserver.info.bindAddress</name>
		<value>0.0.0.0</value>
	</property>
	<property>
		<name>hbase.master.ipc.address</name>
		<value>0.0.0.0</value>
	</property>
	<property>
		<name>hbase.regionserver.ipc.address</name>
		<value>0.0.0.0</value>
	</property>
	<property>
		<name>hbase.ssl.enabled</name>
		<value>true</value>
	</property>
	<property>
		<name>hadoop.ssl.enabled</name>
		<value>true</value>
	</property>
	<property>
		<name>ssl.server.keystore.keypassword</name>
		<value>startrek</value>
	</property>
	<property>
		<name>ssl.server.keystore.password</name>
		<value>startrek</value>
	</property>
	<property>
		<name>ssl.server.keystore.location</name>
		<value>/etc/security/serverKeys/keystore.jks</value>
	</property>
	<property>
		<name>hbase.rest.ssl.enabled</name>
		<value>true</value>
	</property>
	<property>
		<name>hbase.rest.ssl.keystore.store</name>
		<value>/etc/security/serverKeys/keystore.jks</value>
	</property>
	<property>
		<name>hbase.rest.ssl.keystore.password</name>
		<value>startrek</value>
	</property>
	<property>
		<name>hbase.rest.ssl.keystore.keypassword</name>
		<value>startrek</value>
	</property>
	<property>
		<name>hbase.superuser</name>
		<value>hduser</value>
	</property>
	<property>
		<name>hbase.tmp.dir</name>
		<value>/tmp/hbase-${user.name}</value>
	</property>
	<property>
		<name>hbase.local.dir</name>
		<value>${hbase.tmp.dir}/local</value>
	</property>
	<property>
		<name>hbase.zookeeper.property.clientPort</name>
		<value>2181</value>
	</property>
	<property>
		<name>hbase.unsafe.stream.capability.enforce</name>
		<value>false</value>
	</property>
	<property>
		<name>hbase.zookeeper.quorum</name>
		<value>hadoop</value>
	</property>
	<property>
		<name>zookeeper.znode.parent</name>
		<value>/hbase-secure</value>
	</property>
	<property>
		<name>hbase.regionserver.dns.interface</name>
		<value>enp0s3</value>
	</property>
        <property>
                <name>hbase.rest.authentication.type</name>
                <value>kerberos</value>
        </property>
        <property>
                <name>hadoop.proxyuser.HTTP.groups</name>
                <value>*</value>
        </property>
        <property>
                <name>hadoop.proxyuser.HTTP.hosts</name>
                <value>*</value>
        </property>
        <property>
                <name>hbase.rest.authentication.kerberos.keytab</name>
                <value>/etc/security/keytabs/hbaseHTTP.service.keytab</value>
        </property>
        <property>
                <name>hbase.rest.authentication.kerberos.principal</name>
                <value>hbaseHTTP/_HOST@REALM.CA</value>
        </property>
        <property>
                <name>hbase.rest.kerberos.principal</name>
                <value>hbase/_HOST@REALM.CA</value>
        </property>
        <property>
                <name>hbase.rest.keytab.file</name>
                <value>/etc/security/keytabs/hbase.service.keytab</value>
        </property>
</configuration>
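A stray tag in hbase-site.xml will stop the daemons from starting, so it is worth checking well-formedness before launching anything. A sketch using python3 (preinstalled on Ubuntu); on the real host, point the parser at /usr/local/hbase/conf/hbase-site.xml — the temp file here just makes the commands runnable anywhere:

```shell
# Write a minimal configuration to a temp file and parse it; an
# unbalanced tag would make ET.parse raise and the check fail.
f=$(mktemp)
cat > "$f" <<'EOF'
<configuration>
  <property>
    <name>hbase.security.authentication</name>
    <value>kerberos</value>
  </property>
</configuration>
EOF
python3 -c "import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1]); print('well-formed')" "$f"
rm "$f"
```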

Change Ownership of HBase files

sudo chown hadoopuser:hadoopuser -R /usr/local/hbase/*

Hadoop HDFS Config Changes

You will need to add the following proxy-user properties to the core-site.xml file of Hadoop.

nano /usr/local/hadoop/etc/hadoop/core-site.xml

<property>
	<name>hadoop.proxyuser.hbase.hosts</name>
	<value>*</value>
</property>
<property>
	<name>hadoop.proxyuser.hbase.groups</name>
	<value>*</value>
</property>
<property>
	<name>hadoop.proxyuser.HTTP.hosts</name>
	<value>*</value>
</property>
<property>
	<name>hadoop.proxyuser.HTTP.groups</name>
	<value>*</value>
</property>

AutoStart

crontab -e

@reboot /usr/local/hbase/bin/hbase-daemon.sh --config /usr/local/hbase/conf/ start master
@reboot /usr/local/hbase/bin/hbase-daemon.sh --config /usr/local/hbase/conf/ start regionserver
@reboot /usr/local/hbase/bin/hbase-daemon.sh --config /usr/local/hbase/conf/ start rest --infoport 17001 -p 17000

Validation

kinit -kt /etc/security/keytabs/hbase.service.keytab hbase/hadoop@REALM.CA
hbase shell
status 'detailed'
whoami
kdestroy

References

https://hbase.apache.org/0.94/book/security.html
https://pivotalhd-210.docs.pivotal.io/doc/2100/webhelp/topics/ConfiguringSecureHBase.html
https://ambari.apache.org/1.2.5/installing-hadoop-using-ambari/content/ambari-kerb-2-3-2-1.html
https://hbase.apache.org/book.html#_using_secure_http_https_for_the_web_ui

Kafka: Kerberize/SSL

In this tutorial I will show you how to use Kerberos and SSL with Kafka. I will use self-signed certificates for this example. Before you begin, ensure you have installed the Kerberos server and Kafka.

If you don’t want to use the built-in ZooKeeper, you can set up your own instead.

This tutorial assumes your hostname is “hadoop”.

Create Kerberos Principals

cd /etc/security/keytabs/

sudo kadmin.local

#You can list principals
listprincs

#Create the following principals
addprinc -randkey kafka/hadoop@REALM.CA
addprinc -randkey zookeeper/hadoop@REALM.CA

#Create the keytab files.
#You will need these for Kafka and ZooKeeper to be able to log in
xst -k kafka.service.keytab kafka/hadoop@REALM.CA
xst -k zookeeper.service.keytab zookeeper/hadoop@REALM.CA

Set Keytab Permissions/Ownership

sudo chown root:hadoopuser /etc/security/keytabs/*
sudo chmod 750 /etc/security/keytabs/*

Hosts Update

sudo nano /etc/hosts

#Remove 127.0.1.1 line

#Change 127.0.0.1 to the following
127.0.0.1 realm.ca hadoop localhost

Ubuntu Firewall

sudo ufw disable

SSL

Setup SSL Directories if you have not previously done so.

sudo mkdir -p /etc/security/serverKeys
sudo chown -R root:hadoopuser /etc/security/serverKeys/
sudo chmod 755 /etc/security/serverKeys/

cd /etc/security/serverKeys

Setup Keystore

sudo keytool -genkey -alias NAMENODE -keyalg RSA -keysize 1024 -dname "CN=NAMENODE,OU=ORGANIZATION_UNIT,C=canada" -keypass PASSWORD -keystore /etc/security/serverKeys/keystore.jks -storepass PASSWORD
sudo keytool -export -alias NAMENODE -keystore /etc/security/serverKeys/keystore.jks -rfc -file /etc/security/serverKeys/NAMENODE.csr -storepass PASSWORD

Setup Truststore

sudo keytool -import -noprompt -alias NAMENODE -file /etc/security/serverKeys/NAMENODE.csr -keystore /etc/security/serverKeys/truststore.jks -storepass PASSWORD

Generate Self-Signed Certificate

sudo openssl genrsa -out /etc/security/serverKeys/NAMENODE.key 2048

sudo openssl req -x509 -new -key /etc/security/serverKeys/NAMENODE.key -days 300 -out /etc/security/serverKeys/NAMENODE.pem

sudo keytool -keystore /etc/security/serverKeys/keystore.jks -alias NAMENODE -certreq -file /etc/security/serverKeys/NAMENODE.cert -storepass PASSWORD -keypass PASSWORD

sudo openssl x509 -req -CA /etc/security/serverKeys/NAMENODE.pem -CAkey /etc/security/serverKeys/NAMENODE.key -in /etc/security/serverKeys/NAMENODE.cert -out /etc/security/serverKeys/NAMENODE.signed -days 300 -CAcreateserial
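To confirm the signing step worked, `openssl verify` should report the signed certificate as OK against the CA certificate. The sketch below reproduces the whole flow with throwaway keys in a temp directory so it is runnable anywhere; on the real host you would run the final verify against the NAMENODE.pem and NAMENODE.signed files above.

```shell
set -e
d=$(mktemp -d)
# Throwaway CA: private key plus self-signed certificate
openssl genrsa -out "$d/ca.key" 2048 2>/dev/null
openssl req -x509 -new -key "$d/ca.key" -days 1 -subj "/CN=demo-ca" -out "$d/ca.pem"
# Server key and CSR, signed by the CA (same as the x509 -req step above)
openssl genrsa -out "$d/server.key" 2048 2>/dev/null
openssl req -new -key "$d/server.key" -subj "/CN=hadoop" -out "$d/server.csr"
openssl x509 -req -CA "$d/ca.pem" -CAkey "$d/ca.key" -in "$d/server.csr" \
    -out "$d/server.crt" -days 1 -CAcreateserial 2>/dev/null
# Prints "<path>: OK" when the certificate chains to the CA
openssl verify -CAfile "$d/ca.pem" "$d/server.crt"
rm -r "$d"
```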

Setup File Permissions

sudo chmod 440 /etc/security/serverKeys/*
sudo chown root:hadoopuser /etc/security/serverKeys/*

Edit server.properties Config

cd /usr/local/kafka/config

sudo nano server.properties

#Edit or Add the following properties.
ssl.endpoint.identification.algorithm=HTTPS
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.key.password=PASSWORD
ssl.keystore.location=/etc/security/serverKeys/keystore.jks
ssl.keystore.password=PASSWORD
ssl.truststore.location=/etc/security/serverKeys/truststore.jks
ssl.truststore.password=PASSWORD
listeners=SASL_SSL://:9094
security.inter.broker.protocol=SASL_SSL
ssl.client.auth=required
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
ssl.keystore.type=JKS
ssl.truststore.type=JKS
sasl.kerberos.service.name=kafka
zookeeper.connect=hadoop:2181
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI

Edit zookeeper.properties Config

sudo nano zookeeper.properties

#Edit or Add the following properties.

server.1=hadoop:2888:3888
clientPort=2181
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=SASL
jaasLoginRenew=3600000

Edit producer.properties Config

sudo nano producer.properties

bootstrap.servers=hadoop:9094
security.protocol=SASL_SSL
sasl.kerberos.service.name=kafka
ssl.truststore.location=/etc/security/serverKeys/truststore.jks
ssl.truststore.password=PASSWORD
ssl.keystore.location=/etc/security/serverKeys/keystore.jks
ssl.keystore.password=PASSWORD
ssl.key.password=PASSWORD
sasl.mechanism=GSSAPI

Edit consumer.properties Config

sudo nano consumer.properties

zookeeper.connect=hadoop:2181
bootstrap.servers=hadoop:9094
group.id=securing-kafka-group
security.protocol=SASL_SSL
sasl.kerberos.service.name=kafka
ssl.truststore.location=/etc/security/serverKeys/truststore.jks
ssl.truststore.password=PASSWORD
sasl.mechanism=GSSAPI

Add zookeeper_jaas.conf Config

sudo nano zookeeper_jaas.conf

Server {
  com.sun.security.auth.module.Krb5LoginModule required
  debug=true
  useKeyTab=true
  keyTab="/etc/security/keytabs/zookeeper.service.keytab"
  storeKey=true
  useTicketCache=true
  refreshKrb5Config=true
  principal="zookeeper/hadoop@REALM.CA";
};

Add kafkaserver_jaas.conf Config

sudo nano kafkaserver_jaas.conf

KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    debug=true
    useKeyTab=true
    storeKey=true
    refreshKrb5Config=true
    keyTab="/etc/security/keytabs/kafka.service.keytab"
    principal="kafka/hadoop@REALM.CA";
};

KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useTicketCache=true
    refreshKrb5Config=true
    debug=true
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka.service.keytab"
    principal="kafka/hadoop@REALM.CA";
};

Edit kafka-server-start.sh

cd /usr/local/kafka/bin/

sudo nano kafka-server-start.sh

jaas="$base_dir/../config/kafkaserver_jaas.conf"

export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=$jaas"

Edit zookeeper-server-start.sh

sudo nano zookeeper-server-start.sh

jaas="$base_dir/../config/zookeeper_jaas.conf"

export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=$jaas"

Kafka-ACL

cd /usr/local/kafka/bin/

#Grant topic access and cluster access
./kafka-acls.sh  --operation All --allow-principal User:kafka --authorizer-properties zookeeper.connect=hadoop:2181 --add --cluster
./kafka-acls.sh  --operation All --allow-principal User:kafka --authorizer-properties zookeeper.connect=hadoop:2181 --add --topic TOPIC

#Grant all groups for a specific topic
./kafka-acls.sh --operation All --allow-principal User:kafka --authorizer-properties zookeeper.connect=hadoop:2181 --add --topic TOPIC --group *

#If you want to remove cluster access
./kafka-acls.sh --authorizer-properties zookeeper.connect=hadoop:2181 --remove --cluster

#If you want to remove topic access
./kafka-acls.sh --authorizer-properties zookeeper.connect=hadoop:2181 --remove --topic TOPIC

#List access for cluster
./kafka-acls.sh --list --authorizer-properties zookeeper.connect=hadoop:2181 --cluster

#List access for topic
./kafka-acls.sh --list --authorizer-properties zookeeper.connect=hadoop:2181 --topic TOPIC

kafka-console-producer.sh

If you want to test using the console producer you need to make these changes.

cd /usr/local/kafka/bin/
nano kafka-console-producer.sh

#Add the below before the last line

base_dir=$(dirname $0)
jaas="$base_dir/../config/kafkaserver_jaas.conf"
export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=$jaas"


#Now you can run the console producer
./kafka-console-producer.sh --broker-list hadoop:9094 --topic TOPIC --producer.config ../config/producer.properties

kafka-console-consumer.sh

If you want to test using the console consumer you need to make these changes.

cd /usr/local/kafka/bin/
nano kafka-console-consumer.sh

#Add the below before the last line

base_dir=$(dirname $0)
jaas="$base_dir/../config/kafkaserver_jaas.conf"
export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=$jaas"


#Now you can run the console consumer
./kafka-console-consumer.sh --bootstrap-server hadoop:9094 --topic TOPIC --consumer.config ../config/consumer.properties --from-beginning

References

https://www.confluent.io/blog/apache-kafka-security-authorization-authentication-encryption/
https://github.com/confluentinc/securing-kafka-blog/blob/master/manifests/default.pp

HortonWorks: SSL Setup

This entry is part 3 of 7 in the series HortonWorks

If you want to use SSL with the Ambari server (note this does not yet secure Hadoop itself), follow the steps below. This does not cover creating an SSL certificate, as many tutorials on creating self-signed certs are available.

Step 1: Stop the Ambari Server

sudo ambari-server stop

Step 2: Run Ambari Server Security Setup Command

sudo ambari-server setup-security

Select option 1 at the prompts. Note that you cannot use port 443 for HTTPS, as that port is reserved in Ambari; the default is 8443, which is what they recommend. Enter the path to your certificate file (/etc/ssl/certs/hostname.cer) and the path to your encrypted key file (/etc/ssl/private/hostname.key), then follow the rest of the prompts.

Step 3: Start Ambari Server

sudo ambari-server start

Step 4: Log in to the Ambari server, now available at https://hostname:8443