Package manager installers:
- YUM for Red Hat, CentOS, and Oracle Linux
- Zypper for SLES 11
- APT for Ubuntu 12
-Kerberos: server components and users are referred to as principals. Kerberos is an authentication mechanism used when principals connect to server components; communication between server components is likewise authenticated through Kerberos.
When installed on a cluster, the Kerberos setup has two parts: the KDC (Key Distribution Center) server and the Kerberos database. After installing the Kerberos server, start the server and create a KDC admin.
Realm: the set of hosts, services, and users over which the Kerberos server has control.
Installing Kerberos on a CentOS host in the cluster using yum, step by step:
1) yum install krb5-server krb5-libs krb5-auth-dialog krb5-workstation
2) This also installs the krb5 workstation tools. Edit the Kerberos configuration file at its install path:
cli>gedit /etc/krb5.conf
3) In the krb5.conf file, set the realm properties to your FQDN:
kdc = FQDN
admin_server = FQDN
4) Create the database that stores the principals (server components and users):
cli>kdb5_util create -s
5) Start the KDC and admin services:
cli>/etc/rc.d/init.d/krb5kdc start
cli>/etc/rc.d/init.d/kadmin start
6) Create the admin principal using the kadmin.local utility on the KDC machine:
/usr/sbin>kadmin.local -q "addprinc admin/admin"
7) In /var/kerberos/krb5kdc/kadm5.acl, check for the admin entry and edit it for your realm:
*/admin@FQDN *
Then restart the kadmind process.
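For reference, the realm settings from steps 3 and 7 might look like the following sketch, where EXAMPLE.COM and kdc.example.com are placeholder values for your own realm and KDC FQDN:

```
# /etc/krb5.conf (fragment)
[realms]
  EXAMPLE.COM = {
    kdc = kdc.example.com
    admin_server = kdc.example.com
  }

# /var/kerberos/krb5kdc/kadm5.acl
*/admin@EXAMPLE.COM *
```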
The next step is to set up Kerberos for the Ambari Server.
The views that can be enabled in Ambari Web need access to certain service components, such as the YARN Application Timeline Server (ATS). The ATS APIs require SPNEGO authentication. Therefore, the Ambari Server requires a Kerberos principal in order to authenticate via SPNEGO against these APIs. An Ambari Server configured with a Kerberos principal and keytab allows views to authenticate via SPNEGO against cluster components.
1) On the host where the Ambari Server is installed, create the principal:
addprinc -randkey ambari-server@FQDN
2) To generate a keytab
xst -k ambari.server.keytab ambari-server@EXAMPLE.COM
3) Place that keytab on the Ambari Server host:
/etc/security/keytabs/ambari.server.keytab
4) Stop the Ambari Server: ambari-server stop
5) Run the security setup command: ambari-server setup-security
6) Select 3 for Setup Ambari kerberos JAAS configuration.
7) Enter the Kerberos principal name for the Ambari Server you set up earlier.
8) Enter the path to the keytab for the Ambari principal.
9) Restart Ambari Server.
ambari-server restart
On the Ambari Server, run the special setup command and answer the prompts: ambari-server setup-security
· Select Option 3: Choose one of the following options:
o [1] Enable HTTPS for Ambari server.
o [2] Encrypt passwords stored in ambari.properties file.
o [3] Setup Ambari kerberos JAAS configuration.
To map principals from the KDC into Hadoop users and groups:
1) In core-site.xml, set the property that maps Kerberos principals into Hadoop user and group names:
hadoop.security.auth_to_local = DEFAULT (the default rule translates principal names from existing systems into the short-name syntax Hadoop expects)
Auth-to-local rules consist of three parts: a base, a filter, and a substitution.
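As an illustration of base, filter, and substitution (the realm EXAMPLE.COM and the NameNode-to-hdfs mapping below are placeholder assumptions, not values from this cluster), a rule set in core-site.xml might look like:

```
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[2:$1@$0](nn@EXAMPLE\.COM)s/.*/hdfs/
    DEFAULT
  </value>
</property>
```

Here [2:$1@$0] is the base (it rebuilds a two-component principal such as nn/host@EXAMPLE.COM as nn@EXAMPLE.COM), (nn@EXAMPLE\.COM) is the filter, and s/.*/hdfs/ is the substitution that maps the matching principal to the hdfs user.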
++++++++++++++++++++++++++++++++++++++++++++++++
Configuring Ambari for LDAP or Active Directory Authentication
- mkdir /etc/ambari-server/keys (if the keys directory does not exist, create it)
- $JAVA_HOME/bin/keytool -import -trustcacerts -alias root -file $PATH_TO_YOUR_LDAPS_CERT -keystore /etc/ambari-server/keys/ldaps-keystore.jks
- Set a password when prompted. You will use this during ambari-server setup-ldap.
ambari-server setup-ldap
- At the Primary URL* prompt, enter the server URL and port you collected above. Prompts marked with an asterisk are required values.
- At the Secondary URL* prompt, enter the secondary server URL and port. This value is optional.
- At the Use SSL* prompt, enter your selection. If using LDAPS, enter true.
- At the User object class* prompt, enter the object class that is used for users.
- At the User name attribute* prompt, enter your selection. The default value is uid.
- At the Group object class* prompt, enter the object class that is used for groups.
- At the Group name attribute* prompt, enter the attribute for group name.
- At the Group member attribute* prompt, enter the attribute for group membership.
- At the Distinguished name attribute* prompt, enter the attribute that is used for the distinguished name.
- At the Base DN* prompt, enter your selection.
- At the Referral method* prompt, enter to follow or ignore LDAP referrals.
- At the Bind anonymously* prompt, enter your selection.
- At the Manager DN* prompt, enter your selection if you have set Bind anonymously to false.
- At the Enter the Manager Password* prompt, enter the password for your LDAP manager DN.
- If you set Use SSL* = true in step 3, the following prompt appears: Do you want to provide custom TrustStore for Ambari?
Consider the following options and respond as appropriate.
- More secure option: If using a self-signed certificate that you do not want imported to the existing JDK keystore, enter y.
For example, you want this certificate used only by Ambari, not by any other applications run by the JDK on the same host.
If you choose this option, additional prompts appear. Respond to the additional prompts as follows:
- At the TrustStore type prompt, enter jks.
- At the Path to TrustStore file prompt, enter /keys/ldaps-keystore.jks (or the actual path to your keystore file).
- At the Password for TrustStore prompt, enter the password that you defined for the keystore.
- Less secure option: If using a self-signed certificate that you want to import and store in the existing, default JDK keystore, enter n.
- Convert the SSL certificate to X.509 format, if necessary, by executing the following command:
openssl x509 -in slapd.pem -out <slapd.crt>
Where <slapd.crt> is the path to the X.509 certificate.
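As a runnable sketch of this conversion step (using a throwaway self-signed certificate in place of a real slapd.pem; the subject CN ldap.example.com is a placeholder):

```shell
# Work in a scratch directory so nothing touches real certificate paths
WORK="$(mktemp -d)"

# Generate a stand-in self-signed certificate (placeholder subject)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=ldap.example.com" \
    -keyout "$WORK/slapd.key" -out "$WORK/slapd.pem"

# The conversion step from the text: emit the certificate as X.509
openssl x509 -in "$WORK/slapd.pem" -out "$WORK/slapd.crt"

# Inspect the result
openssl x509 -in "$WORK/slapd.crt" -noout -subject
```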
- Import the SSL certificate to the existing keystore, for example the default jre certificates storage, using the following instruction:
/usr/jdk64/jdk1.7.0_45/bin/keytool -import -trustcacerts -file slapd.crt -keystore /usr/jdk64/jdk1.7.0_45/jre/lib/security/cacerts
Here Ambari is set up to use JDK 1.7, so the certificate must be imported into the JDK 7 keystore.
- Review your settings and if they are correct, select y.
- Start or restart the Server: ambari-server restart
The users you have just imported are initially granted the Ambari User privilege. Ambari Users can read metrics, view service status and configuration, and browse job information. For these new users to be able to start or stop services, modify configurations, and run smoke tests, they need to be Admins. To make this change, as an Ambari Admin, use Manage Ambari > Users > Edit. For instructions, see Managing Users and Groups.
Active Directory Configuration
Directory server implementations use specific object classes and attributes for storing identities. The example below shows only the configuration properties that are specific to Active Directory.
Run ambari-server setup-ldap and provide the following information about your Domain.
Prompt                              | Example AD Values
User object class* (posixAccount)   | user
User name attribute* (uid)          | cn
Group object class* (posixGroup)    | group
Group member attribute* (memberUid) | member
Synchronizing LDAP Users and Groups
Run the LDAP synchronize command and answer the prompts to initiate the sync: ambari-server sync-ldap [option]
To perform this operation, your Ambari Server must be running.
· When prompted, you must provide credentials for an Ambari Admin.
· When syncing LDAP, local user accounts with matching usernames switch to the LDAP type, which means they will authenticate against the external LDAP and not against the local Ambari user store.
· LDAP sync only syncs up to 1000 users. If your LDAP contains more than 1000 users and you plan to import them all, you must use the --users option when syncing and specify a filtered list of users, performing the import in batches.
The utility provides three options for synchronization:
· a specific set of users and groups, or
· the existing users and groups in Ambari, or
· all users and groups
Review log files for failed synchronization attempts at /var/log/ambari-server/ambari-server.log on the Ambari Server host.
Specific Set of Users and Groups
ambari-server sync-ldap --users users.txt --groups groups.txt
Use this option to synchronize a specific set of users and groups from LDAP into Ambari. Provide the command a text file of comma-separated users and groups. The comma separated entries in each of these files should be based off of the values in LDAP of the attributes chosen during setup. The "User name attribute" should be used for the users.txt file, and the "Group name attribute" should be used for the groups.txt file. This command will find, import, and synchronize the matching LDAP entities with Ambari.
Group membership is determined using the Group Membership Attribute (groupMembershipAttr) specified during setup-ldap. User name is determined by using the Username Attribute (usernameAttribute) specified during setup-ldap.
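For example, if the configured user name attribute is uid and the group name attribute is cn, the two files might contain entries like these (the names are hypothetical):

```
users.txt:   jdoe,asmith,ops1
groups.txt:  hadoop-admins,hadoop-users
```

Running ambari-server sync-ldap --users users.txt --groups groups.txt would then find, import, and synchronize exactly those three users and two groups.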
Existing Users and Groups
ambari-server sync-ldap --existing
After you have performed a synchronization of a specific set of users and groups, you use this option to synchronize only those entities that are in Ambari with LDAP. Users will be removed from Ambari if they no longer exist in LDAP, and group membership in Ambari will be updated to match LDAP.
Group membership is determined using the Group Membership Attribute specified during setup-ldap.
All Users and Groups
Only use this option if you are sure you want to synchronize all users and groups from LDAP into Ambari. If you only want to synchronize a subset of users and groups, use the specific set of users and groups option.
ambari-server sync-ldap --all
This will import all entities with matching LDAP user and group object classes into Ambari.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Prerequisites: before enabling Kerberos, the JCE (Java Cryptography Extension) unlimited-strength policy files must be installed on every node.
1) Obtain the JCE policy file appropriate for the JDK version in your cluster.
2) On the Ambari Server and on each host in the cluster, add the unlimited security policy JCE jars to $JAVA_HOME/jre/lib/security/. For example, run the following to extract the policy jars into the JDK installed on your host:
unzip -o -j -q jce_policy-8.zip -d /usr/jdk64/jdk1.8.0_60/jre/lib/security/
3) After that, start the wizard in Ambari:
· Log in to Ambari Web and browse to Admin > Kerberos.
· Click "Enable Kerberos" to launch the wizard.
Note: Ambari Metrics will not be secured with Kerberos unless it is configured for distributed metrics storage.
--When running in embedded mode, confirm that the "hbase.rootdir" and "hbase.tmp.dir" directory configurations in Ambari Metrics > Configs > Advanced > ams-hbase-site use a sufficiently sized and not heavily utilized partition.
--Note: If your cluster is configured for a highly available NameNode, set the hbase.rootdir value to use the HDFS nameservice instead of the NameNode hostname:
hdfs://hdfsnameservice/apps/ams/metrics
· Copy the metric data from the AMS local directory to an HDFS directory. This is the value of hbase.rootdir in Advanced ams-hbase-site used when running in embedded mode. For example:
su - hdfs -c 'hdfs dfs -copyFromLocal /var/lib/ambari-metrics-collector/hbase/* /apps/ams/metrics'
su - hdfs -c 'hdfs dfs -chown -R ams:hadoop /apps/ams/metrics'
Script for generating service principals and keytabs: in Ambari, before enabling Kerberos, the required principals and keytabs for service components must be created on the KDC host machine using the kadmin.local utility.
Because service components may be installed on hosts with different FQDNs, each component's principal name generally includes the host FQDN; for example, the DataNode principal would be dn/FQDN.
--So, to get the full list of default principal names and keytabs, use this workaround:
--Select Admin view -> Security -> Enable Security and run the Add Security wizard using the default values. At the bottom of the third page, Create Principals and Keytabs, click Download CSV. Then use the Back button to exit the wizard until you have finished your setup.
Table 13.3. Ambari Principals
User                   | Mandatory Principal Name
Ambari Smoke Test User | ambari-user
Ambari HDFS Test User  | hdfs
Ambari HBase Test User | hbase
mkdir -p /etc/security/keytabs/
chown root:hadoop /etc/security/keytabs
chmod 750 /etc/security/keytabs
· Copy the appropriate keytab file to each host. If a host runs more than one component (for example, both TaskTracker and DataNode), copy keytabs for both components. The Ambari Test User keytabs should be copied to the NameNode host.
· Set appropriate permissions for the keytabs.
a. On the HDFS NameNode and SecondaryNameNode hosts:
chown hdfs:hadoop /etc/security/keytabs/nn.service.keytab
chmod 400 /etc/security/keytabs/nn.service.keytab
chown root:hadoop /etc/security/keytabs/spnego.service.keytab
chmod 440 /etc/security/keytabs/spnego.service.keytab
b. On the HDFS NameNode host, for the Ambari Test Users:
chown ambari-qa:hadoop /etc/security/keytabs/smokeuser.headless.keytab
chmod 440 /etc/security/keytabs/smokeuser.headless.keytab
chown hdfs:hadoop /etc/security/keytabs/hdfs.headless.keytab
chmod 440 /etc/security/keytabs/hdfs.headless.keytab
chown hbase:hadoop /etc/security/keytabs/hbase.headless.keytab
chmod 440 /etc/security/keytabs/hbase.headless.keytab
c. On each host that runs an HDFS DataNode:
chown hdfs:hadoop /etc/security/keytabs/dn.service.keytab
chmod 400 /etc/security/keytabs/dn.service.keytab
d. On the host that runs the MapReduce JobTracker:
chown mapred:hadoop /etc/security/keytabs/jt.service.keytab
chmod 400 /etc/security/keytabs/jt.service.keytab
e. On each host that runs a MapReduce TaskTracker:
chown mapred:hadoop /etc/security/keytabs/tt.service.keytab
chmod 400 /etc/security/keytabs/tt.service.keytab
f. On the host that runs the Oozie Server:
chown oozie:hadoop /etc/security/keytabs/oozie.service.keytab
chmod 400 /etc/security/keytabs/oozie.service.keytab
chown root:hadoop /etc/security/keytabs/spnego.service.keytab
chmod 440 /etc/security/keytabs/spnego.service.keytab
g. On the host that runs the Hive Metastore, HiveServer2, and WebHCat:
chown hive:hadoop /etc/security/keytabs/hive.service.keytab
chmod 400 /etc/security/keytabs/hive.service.keytab
chown root:hadoop /etc/security/keytabs/spnego.service.keytab
chmod 440 /etc/security/keytabs/spnego.service.keytab
h. On hosts that run the HBase MasterServer, RegionServer, and ZooKeeper:
chown hbase:hadoop /etc/security/keytabs/hbase.service.keytab
chmod 400 /etc/security/keytabs/hbase.service.keytab
chown zookeeper:hadoop /etc/security/keytabs/zk.service.keytab
chmod 400 /etc/security/keytabs/zk.service.keytab
i. On the host that runs the Nagios server:
chown nagios:nagios /etc/security/keytabs/nagios.service.keytab
chmod 400 /etc/security/keytabs/nagios.service.keytab
· Verify that the correct keytab files and principals are associated with the correct service using the klist command. For example, on the NameNode:
klist -k -t /etc/security/keytabs/nn.service.keytab
Do this for each respective service in your cluster.
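The permission scheme above can be sketched as a small script. This is a minimal sketch only: it uses a scratch directory in place of /etc/security/keytabs and skips the chown steps, since the service accounts (hdfs, hbase, and so on) only exist on a real cluster.

```shell
# Scratch stand-in for /etc/security/keytabs
KEYTAB_DIR="$(mktemp -d)"

# Service keytabs are private to their service user: mode 400
for kt in nn dn jt tt oozie hive hbase zk; do
    : > "$KEYTAB_DIR/$kt.service.keytab"
    chmod 400 "$KEYTAB_DIR/$kt.service.keytab"
done

# SPNEGO and headless (test user) keytabs are group-readable: mode 440
for kt in spnego.service smokeuser.headless hdfs.headless hbase.headless; do
    : > "$KEYTAB_DIR/$kt.keytab"
    chmod 440 "$KEYTAB_DIR/$kt.keytab"
done

# The keytab directory itself is root:hadoop 750 in the real layout
chmod 750 "$KEYTAB_DIR"

ls -l "$KEYTAB_DIR"
```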
==================================
Linux iptables works with chains (a chain is a set of rules).
Common TCP and UDP ports:
- FTP: TCP 21 (control) / 20 (data)
- SSH (secure shell): TCP 22
- Telnet: TCP 23
- SMTP (email out): TCP 25
- POP3 (email in): TCP 110
- IMAP (email in): TCP 143
- VPN (PPTP): TCP 1723
- Kerberos: 88
- Web/HTTP: TCP 80
- SSL (secure socket layer)/HTTPS: TCP 443
- DNS: UDP 53
- DHCP: UDP 67/68
- Samba: 137/139/445
- NetBIOS: 137/139
- Active Directory: 445
- SNMP: 161/162
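Most of the assignments above can be cross-checked against the local services database; a minimal sketch, assuming a standard Linux /etc/services file is present:

```shell
# Look up well-known ports from the local services database (/etc/services)
getent services smtp   # outgoing mail, commonly 25/tcp
getent services ssh    # secure shell, commonly 22/tcp
```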
There are three built-in chains in the iptables filter table: INPUT, FORWARD, and OUTPUT.
---Define the specific rules you want first, and at the end define a catch-all rule specifying the traffic you do not want to allow.
---vsftpd: very secure file transfer protocol daemon, port 21.
--cli>sudo iptables -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT
--cli>sudo iptables -A INPUT -i eth0 -j DROP
--cli>sudo iptables -L --line-numbers
--cli>sudo iptables -D INPUT <rule number> (the number of the line in the list, counted from top to bottom)
Sample output:
cli>sudo iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source      destination
ACCEPT     all  --  anywhere    anywhere    state RELATED,ESTABLISHED
ACCEPT     icmp --  anywhere    anywhere
ACCEPT     all  --  anywhere    anywhere
ACCEPT     tcp  --  anywhere    anywhere    state NEW tcp dpt:ssh
REJECT     all  --  anywhere    anywhere    reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target     prot opt source      destination
REJECT     all  --  anywhere    anywhere    reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target     prot opt source      destination
--------------------------------------------------------------------
To inspect the iptables tables and their chains, run the following commands:
cli>sudo iptables -t filter -L
cli>sudo iptables -t nat -L
cli>sudo iptables -t mangle -L
cli>sudo iptables -t raw -L
cli>sudo iptables --help