User Guide

The following user guide pertains to the Wesley cluster. Training for the visualization facilities is done on an individual basis and varies depending on the requirements of your project.

Logging Into the Head Node

Logging into Wesley is done through the SSH protocol and requires that you have an SSH client installed on your computer. Linux and macOS have built-in SSH clients available from their command lines. Windows users can download and install one of several freely available SSH clients; we recommend MobaXterm or PuTTY.
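
For example, to connect from a Linux or macOS terminal (the hostname shown here is illustrative; use the address you were given when your account was created):

ssh your_username@wesley.example.edu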

When you log in to Wesley you are actually logging in to the head node (sometimes also referred to as the master, the frontend, or the login node). The head node is where you will perform virtually all of your work, including submitting your programs (jobs) for execution, monitoring your jobs, editing, compiling and debugging programs, managing your files, and so on.

Logging Into a Compute Node

In some circumstances you may need to log in to one of the compute nodes. For example, you may need to monitor the processes of one of your jobs on that node (using top, for instance), or manage temporary job files on a node's local scratch disk (/data). For small tasks like these, it is acceptable to log on to the node directly using SSH. To do so, you must first be logged in to the head node, and then from there use the ssh command to log in to the node itself. For example, to log in to the compute node named wes-04-03 you would issue the following command from the head node:

ssh wes-04-03
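
Once on the node, you can, for example, watch just your own processes with top (the username below is a placeholder for your own account name):

top -u your_username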

In some situations you may need to run a software package interactively on a compute node. This might be the case if, for example, your interactive work will require more than the 14 GB RAM limit per user on the head node, or if it will require access to hardware only available on a particular node (e.g. GPU software development using CUDA). In these cases you should use the queuing system to submit an interactive job. The queuing system will find a free node, reserve a CPU core and the appropriate amount of RAM, and then log you in to the node to give you an interactive shell. By using interactive jobs in this manner, you will not interfere with other running jobs on the cluster, and other users' jobs will never interfere with yours. As an example, to get an interactive shell on any node that has 24 GB of available RAM and reserve it for 4 hours, use:

qsub -I -l pvmem=24gb -l walltime=4:00:00

Access Restrictions (Firewall)

Wesley is accessible from any computer on the campus network and from several off-campus, but well-known, networks. For security reasons Wesley is not visible to the entire world.

Changing Your Password

To change your password, log in to the head node and type:

passwd

The first time that you log in, you should change the initial password that was assigned to you. It is also recommended that you change your password periodically.

Transferring Files

Files can be transferred between Wesley and your computer using the SFTP or SCP protocols (both are secure protocols based on SSH). macOS and Linux have these available through the command line. Windows users can download free clients such as MobaXterm or WinSCP.
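
For example, to copy a file from your computer to your Wesley home directory from a Linux or macOS terminal (the hostname and file names are illustrative):

scp myfile.dat your_username@wesley.example.edu:~/

To copy a results file from Wesley back to the current directory on your computer:

scp your_username@wesley.example.edu:~/results.dat .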

File Storage

Home Directory

Your home directory is located at /home/your_username. /home is a filesystem that is physically located on the storage node of the cluster but exported using the NFS protocol, so it is fully accessible from the head node and all of the compute nodes. You see exactly the same files under /home no matter which node you are logged in to or running jobs on.

There are currently no quotas imposed on the amount of data that you can store in your home directory. However, since the entire /home filesystem is only 9 TB and is shared by all users, it is recommended that you keep your usage below 30 GB.
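
To check how much space your home directory is currently using, run:

du -sh ~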

If usage on /home becomes a problem, the users with the largest amounts of data will be contacted directly and asked to reduce their usage. If problems persist, system-imposed disk quotas will be implemented.

Scratch File System

Some HPC applications require large and/or many temporary scratch files while running, but these files are not kept after the calculations complete. In other cases, output files are kept, but they are large and written with very frequent small write operations. Because /home is accessed over a relatively slow 1 Gbps Ethernet network, and because there may be hundreds of these types of applications running simultaneously, read and write performance on /home can be severely impacted.

To address this issue, each compute node is equipped with a 1 TB locally attached disk (in reality, two disks in RAID 0 (striped)). High-I/O jobs running on a compute node can use this disk for their scratch files in order to obtain better performance and increase overall cluster efficiency.

To provide access to these local disks, a directory /data/your_user_name is automatically created for you on each node. To make use of it, simply write your job control script appropriately, as in the sketch below.
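
A minimal sketch of this approach, assuming a Torque/PBS-style batch script (the program name, file names, and resource requests are placeholders):

#!/bin/bash
#PBS -l walltime=2:00:00
#PBS -l pvmem=4gb

# create a per-job scratch directory on the node's local disk
SCRATCH=/data/$USER/$PBS_JOBID
mkdir -p $SCRATCH
cd $SCRATCH

# copy input files from /home, run the program, copy results back
cp $PBS_O_WORKDIR/input.dat .
my_program input.dat > output.dat
cp output.dat $PBS_O_WORKDIR/

# clean up the scratch directory
cd $PBS_O_WORKDIR
rm -rf $SCRATCH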

Running Programs (Jobs)

Running programs on the cluster is somewhat different from running a program on a standalone Linux/Unix machine. In general, your programs must be submitted to a job queue rather than run directly on the login/head node. An automatic scheduler is responsible for identifying compute nodes with adequate resources, taking jobs from the queue and starting them on the assigned compute nodes, and cleaning things up after each job completes. Several user commands are provided for submitting your jobs to the queue and for monitoring and managing (e.g. deleting) them. These commands are described in detail in the job control section.
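
As a quick illustration (the script name is a placeholder; see the job control section for details), a batch script is submitted with qsub and your queued and running jobs can be listed with qstat:

qsub myjob.sh
qstat -u your_username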

Running programs directly on the head node itself should be limited to editing, compiling, debugging, simple interactive graphical applications (e.g. word processing, graphing), managing your files, and submitting/managing your jobs. Limits are imposed on the memory size and run times of user processes on the login node to prevent the system from being overloaded.

Using the GPGPUs

Running CUDA Programs

The NVIDIA Tesla M2050 GPGPU is attached to the compute node wes-00-00. In order to run a program that requires the GPU, you must explicitly request wes-00-00 in your job control script.
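
For example, following the resource syntax used in the interactive example below, the request can be placed in your batch script as a directive:

#PBS -l host=wes-00-00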

Compiling and Debugging CUDA Code

Open an interactive job on wes-00-00 in order to compile your CUDA GPU code or to interactively debug it using NVIDIA's CUDA debugger (cuda-gdb). For example, the following will request a 4-hour session on wes-00-00 and reserve 8 GB of RAM:

qsub -I -X -l host=wes-00-00 -l walltime=4:00:00 -l pvmem=8gb

The -X option turns on X11 (graphics) forwarding and is required if you intend to run any graphical user interface. This also requires that you enabled X11 forwarding when you initially logged in to the head node.
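
For example, X11 forwarding is enabled at login with the -X option to ssh (the hostname shown is illustrative). Once the interactive session starts on wes-00-00, you can compile with host and device debugging symbols and launch the debugger (the source file name is a placeholder):

ssh -X your_username@wesley.example.edu
nvcc -g -G -o myprog myprog.cu
cuda-gdb ./myprog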