
6.4 Security and OSCAR

OSCAR uses a layered approach to security. The architecture used in this chapter, a single-server node as the only connection to the external network, implies that everything must go through the server. If you can control the placement of the server on the external network, e.g., behind a corporate firewall, you can minimize the threat to the cluster. While outside the scope of this discussion, this is something you should definitely investigate.

The usual advice for securing a server applies to an OSCAR server. For example, you should disable unneeded services and delete unused accounts. With a Red Hat installation, TCP wrappers is compiled into xinetd and available by default. You'll need to edit the /etc/hosts.allow and /etc/hosts.deny files to configure this correctly. There are a number of good books (and web pages) on security. Get one and read it!
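As a sketch, TCP wrappers configuration might look like the following (the internal network address is a placeholder; substitute your cluster's actual private network):

```shell
# /etc/hosts.allow -- permit connections from the cluster's
# internal network only (192.168.1.0/255.255.255.0 is a placeholder)
ALL : 192.168.1.0/255.255.255.0

# /etc/hosts.deny -- refuse everything not explicitly allowed above
ALL : ALL
```

With this pair of files, xinetd-managed services answer only to hosts on the internal network; all other connection attempts are logged and refused.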

6.4.1 pfilter

In an OSCAR cluster, access to the cluster is controlled through pfilter, a package included in the OSCAR distribution. pfilter is both a firewall and a compiler for firewall rulesets. (The pfilter software can be downloaded separately from http://pfilter.sourceforge.net/.)

pfilter is run as a service, which makes it easy to start it, stop it, or check its status.

[root@amy root]# service pfilter stop
Stopping pfilter:                                          [  OK  ]
[root@amy root]# service pfilter start
Starting pfilter:                                          [  OK  ]
[root@amy root]# service pfilter status
pfilter is running

If you are having communications problems between nodes, you may want to temporarily disable pfilter. Just don't forget to restart it when you are done!

You can request a list of the chains or rules used by pfilter with the service command.

[root@amy root]# service pfilter chains
table filter:
...

This produces a lot of output that is not included here.

The configuration file for pfilter, /etc/pfilter.conf, contains the rules used by pfilter and can be edited if you need to change them. The OSCAR installation adds some rules to the default configuration. These appear to be quite reasonable, so it is unlikely that you'll need to make any changes. The manpages for pfilter.conf(5) and pfilter.rulesets(5) provide detailed instructions should you wish to make changes. While the rules use a very simple and readable syntax, instruction in firewall rulesets is outside the scope of this book.
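Because pfilter compiles its rulesets into standard Linux netfilter rules, you can also inspect the result directly with the generic iptables command (not part of pfilter itself); this requires root privileges:

```shell
# List the active netfilter rules that pfilter has installed,
# without resolving hostnames
iptables -L -n

# The same listing with per-rule packet and byte counters
iptables -L -n -v
```

Comparing this output against /etc/pfilter.conf is a quick sanity check that your edits took effect after restarting the pfilter service.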

6.4.2 SSH and OPIUM

Within the cluster, OSCAR is designed to use the SSH protocol for communications. Use of older protocols such as TELNET or RSH is strongly discouraged and really isn't needed. OpenSSH is set up for you as part of the installation by OPIUM, the OSCAR Password Installer and User Manager tool. OPIUM installs scripts that automatically generate SSH keys for users. Once OSCAR is installed, the next time a user logs in or starts a new shell, she will see the output from the key generation script. (Key generation is actually enabled at any point after Step 3 of the OSCAR installation.) Figure 6-19 shows such a login. Note that no action is required on the part of the user; apart from the display of a few messages, the process is transparent to users.

Figure 6-19. Key setup upon login [figs/hplc_0619.gif]
Once you set up the cluster, you should be able to use the ssh command to log in to any node from any other node, including the server, without using a password. On first use, you will see a warning that the host has been added to the list of known hosts. All this is normal. (The key-generation scripts themselves are installed in the directory /etc/profile.d.)
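A quick way to confirm that passwordless login is working is the following (the node name oscarnode1 is a placeholder; substitute one of your clients):

```shell
# BatchMode=yes makes ssh fail immediately instead of prompting
# for a password, so a broken key setup is obvious
ssh -o BatchMode=yes oscarnode1 hostname
```

If the keys are set up correctly, this prints the client's hostname and exits; if not, ssh reports a permission error rather than hanging at a password prompt.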

The OpenSSH configuration was not designed to work with other systems such as Kerberos or NIS.


In addition to setting up OpenSSH on the cluster, OPIUM includes a sync_users script that synchronizes password and group files across the cluster, using C3 as the transport mechanism. By default, this is run every 15 minutes by cron. It can also be run by root with the --force option if you don't want to wait for cron; it cannot be run by other users. OPIUM is installed in /opt/opium, with sync_users in the bin subdirectory. The configuration file for sync_users, sync_user.conf, is in the etc subdirectory. You can edit the configuration file to change how often cron runs sync_users or which files are updated, among other things. (The name sync_users is something of a misnomer, since the script can be used to update any file.)

Because the synchronization is done from the server to the clients, it is important that passwords always be changed on the server and never on a client. The next time sync_users runs, any password changed on a client will be lost as the password changes on the server propagate to the clients.
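Putting this together, a password change on the server, followed by an immediate push to the clients, might look like this (the username jsmith is a placeholder):

```shell
# On the server only -- never on a client:
passwd jsmith

# Propagate the updated password and group files to the clients
# right away, rather than waiting for the next cron-driven run
/opt/opium/bin/sync_users --force
```

Running sync_users by hand immediately after the change closes the window in which a user could log in to a client with the old password.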

