Software Overview


The BEAM Beowulf system aims to provide high-performance computing by running applications as a set of tasks executing in parallel on a number of processing nodes interconnected by a fast communications network. The software is based upon the popular Linux operating system with public domain parallel processing environments installed.

While high on the 'wish list' is the goal of making the development of parallel programs the same as that of simple single-processor versions, in reality the multi-processor topology inevitably impinges upon how programs for the system are developed, and as is so often the case there are a number of optional routes. For Beowulf systems a number of packages are available, the main two being PVM (Parallel Virtual Machine) and MPI (Message Passing Interface). Both of these systems are installed and either may be used. Both have their merits, and we have included a simple tutorial on each. Both also provide a number of administration tools.

Beowulf System

The Beowulf system is based upon the Linux operating system. This multi-user, multi-tasking, Unix-like operating system provides full access to system resources from any node in the system. The Linux kernel used is a full SMP kernel with some parallel system extensions; it manages the local processors, memory, system devices and communications for each node.

The system consists of a host node and any number of slave nodes. The software on each node is essentially the same, but the configuration differs on the host node. The host is the master node in the system: users log into it, all user files are stored there, and it provides services to all of the slave nodes. The host node also provides an X-Windows based user interface for a single operator.

All of the nodes are connected to a display monitor, keyboard and mouse via a switch that allows any of the nodes to be accessed. By default the monitor, keyboard and mouse are connected to the host node for user operation.

Networking System

The nodes are interconnected using 100Mbit/s Ethernet through a "wire speed" Ethernet switch. This provides a peak bi-directional data rate of 200Mbit/s to each node. The host node has two Ethernet cards: one is connected to the Beowulf cluster and the other to the site's local area network. The cluster has been configured to use a private IP address range. The host node provides all network services for the slave nodes, including network configuration information, name services, user configuration and file systems.
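As an illustration, the host's two interfaces might be configured with Red Hat ifcfg files along these lines. This is a sketch only: the device names and the 192.168.1.0/24 range are assumptions for illustration, not the cluster's actual settings.

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- cluster-facing interface
DEVICE=eth0
IPADDR=192.168.1.1       # illustrative private address; the real range is site-specific
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth1 -- site LAN interface
DEVICE=eth1
BOOTPROTO=dhcp           # or a static address assigned by the site administrator
ONBOOT=yes
```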

Services Provided by the Host

Network Name Service

The host node provides a Domain Name Service for the slave nodes. Each of the slave nodes runs a caching-only name server linked to the host's name server. The host's name server serves the beowulf domain and caches the site's network information from the site's name server.
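A slave node's caching-only set-up might look like the following BIND configuration fragment. The file locations and the forwarder address are assumptions, not the cluster's actual files:

```
// /etc/named.conf on a slave node -- caching-only, forwarding to the host
options {
    directory "/var/named";
    // send all queries to the host node's name server first
    forwarders { 192.168.1.1; };   // illustrative host address
};

// root hints, used if the forwarder cannot answer
zone "." {
    type hint;
    file "named.ca";
};
```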

Node Network Configuration

On boot, each node requests its configuration from the host node using the BOOTP protocol, identifying itself by its Ethernet card's built-in hardware address. Included in this information is the host name and IP address of the node.
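On the host this mapping is conventionally held in /etc/bootptab, with one entry per slave. A hedged sketch follows; the node name, hardware address and IP address are invented for illustration:

```
# /etc/bootptab on the host node -- one entry per slave node
# ht = hardware type, ha = hardware (MAC) address, ip = address to assign
node1:ht=ethernet:ha=00A0C9123456:ip=192.168.1.2
```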

User Configuration

Password and group information is provided to each node using the NIS system. Users must therefore use yppasswd, rather than passwd, to change their password.

File Systems

The host node provides the main file system storage for the system. All of the slave nodes mount the host node's file systems using NFS. The main file systems mounted include:

- System applications and configuration files
- User directories
- Shared large file area
Each slave node has its own root (/) file system. This contains the kernel, main system programs and node configuration files. The /tmp file system, used for temporary files, is in this root file system. Each node also has its own swap area for applications that use a lot of virtual memory.
Each slave has extra disk space available that is not used. This could be used to cache large data sets if required.
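A slave node's /etc/fstab might therefore contain entries along these lines. This is a sketch: the server name "host", the device names and the /home mount are assumptions (only the NFS-mounted /usr is stated elsewhere in this document):

```
# /etc/fstab on a slave node -- local root and swap, NFS mounts from the host
/dev/hda1    /       ext2   defaults       1 1
/dev/hda2    swap    swap   defaults       0 0
host:/usr    /usr    nfs    ro,hard,intr   0 0
host:/home   /home   nfs    rw,hard,intr   0 0
```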

Installed Software

The host node has a complete installation of Red Hat Linux together with all the normal applications. Each slave node has access to this software via the NFS-mounted /usr directory. In addition to the normal Linux release, the following main software packages have been added:
Package   Usage
PVM       Parallel Virtual Machine parallel processing environment
MPI       Message Passing Interface parallel processing environment
Bmon      BEAM Beowulf system monitor and administration tool
DDD       GUI symbolic debugger
NEdit     GUI text editor
Btools    Various parallel processing tools

Installing Software

The software on the host node was installed from BEAM's master CD-ROM and then configured for the Beowulf cluster. Due to the amount of configuration necessary, we recommend restoring the software from the supplied master backup tape rather than re-installing from scratch.
The software on each slave node is installed from the host node over the network: the node is booted from a floppy disk and the shell script /usr/beowulf/install/InstallNode is run.

Because the system is based on Linux, a huge number of applications are available for it on the Internet. These are best installed in the /usr directory on the host node. If an application installs parts into the root file system, those files must be copied to each of the nodes. This is best accomplished by re-installing each node from scratch, as this recreates the node from the files on the host node and is quick to do.

Parallel Processing Software

There are two parallel processing environments installed on the Beowulf system: PVM and MPI. These two environments provide essentially the same facilities in slightly different ways. Both use a message passing approach in which a number of tasks perform work and communicate with the other tasks by exchanging messages. At BEAM we tend to favour the MPI system as it is based upon industry standards and has a cleaner API than PVM.
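As a flavour of the message passing style, a minimal MPI program in C might look like the sketch below. It is illustrative only: each task reports its rank, and task 0 receives a greeting from every other task. Compiling (typically with mpicc) and launching on the cluster (typically with mpirun) are not shown.

```c
/* hello_mpi.c -- minimal MPI message passing sketch */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* join the parallel environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this task's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of tasks */

    if (rank != 0) {
        /* every worker sends its rank number to task 0 */
        MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else {
        int i, who;
        MPI_Status status;
        printf("task 0 of %d started\n", size);
        for (i = 1; i < size; i++) {
            /* collect one message from each of the other tasks, in any order */
            MPI_Recv(&who, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                     MPI_COMM_WORLD, &status);
            printf("greeting from task %d\n", who);
        }
    }

    MPI_Finalize();
    return 0;
}
```

The same structure carries over to PVM, with pvm_send/pvm_recv in place of the MPI calls.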
All of the parallel processing software has been installed in the /usr/beowulf directory. Also included are some system scripts to start the daemons and set up the users' environment variables.

File                              Usage
/etc/rc.d/init.d/beowulf          Starts up the daemons on each node
/usr/beowulf/bin/beowulf_shell    Sets up the user environment variables
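The beowulf_shell script might, for example, export variables along these lines. This is a sketch, not a listing of the actual script: PVM_ROOT and PVM_ARCH are the standard PVM variables, while MPI_HOME and the paths shown are assumptions.

```shell
# sketch of /usr/beowulf/bin/beowulf_shell -- environment set-up for PVM and MPI
PVM_ROOT=/usr/beowulf/pvm3 ; export PVM_ROOT    # assumed install path
PVM_ARCH=LINUX             ; export PVM_ARCH    # PVM's architecture name
MPI_HOME=/usr/beowulf/mpi  ; export MPI_HOME    # assumed variable and path
PATH=$PATH:$PVM_ROOT/lib:$MPI_HOME/bin ; export PATH
```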

User Configuration and Usage

Users log into the host node either at the system console or from a remote machine on the local area network. Each user has a login id and a home directory to work from. If required, a user's home directory can be mounted from a remote file server. Users can then create, compile and run applications on the cluster using PVM or MPI. Additionally, users can log into any of the cluster's nodes and run applications on them directly.
When a user logs in to the system, the file /etc/profile.d/ is run to load the environment variables from /usr/beowulf/bin/beowulf_shell. If the user performs an rsh to one of the nodes, as the MPI and PVM systems do, the file /etc/bashrc is run, which also calls /usr/beowulf/bin/beowulf_shell.
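The hook in /etc/bashrc can be as small as a guarded source of the shared script. A sketch follows; the guard variable name is an assumption, used only to avoid setting the environment twice:

```
# fragment of /etc/bashrc -- run for rsh sessions as well as logins
if [ -z "$BEOWULF_ENV_SET" ]; then
    [ -r /usr/beowulf/bin/beowulf_shell ] && . /usr/beowulf/bin/beowulf_shell
    BEOWULF_ENV_SET=1 ; export BEOWULF_ENV_SET
fi
```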

Other Information

Information on the Net

There is much information available on the World Wide Web regarding Beowulf-class systems.

Software and Source Code