How to Use Sun Grid Engine
This is the page for how to use Sun Grid Engine (SGE) to run jobs on the BabylonCluster.
- SunGridEngineExamples (actual examples of how to make SGE do the stuff described below - you should READ THIS BEFORE LAUNCHING SGE JOBS, because examples are very often more useful than reading the manual)
- some How To pages from the SGE project site, with useful stuff, particularly:
- this file, which is just man sge_intro - it's a good overview of all the SGE commands and what man files to consult for more info
Please leave comments (notes, questions, etc.) below, or edit the wiki as appropriate.
- The master/submit node is sheridan. Everywhere below, it is implied that you are logged into sheridan as yourself (please do not log in as sgeadmin unless there is something drastically going wrong that you need SGE manager privileges to fix). You should never have to log into any of the exec nodes, other than to troubleshoot errors or clean up a mess.
- The general algorithm for submitting jobs to SGE is:
- Copy the files (scripts, data, source code, makefiles, binaries, libraries, etc.) that your job needs to the appropriate directories on the cluster NFS (mounted on /mnt/nfs/ everywhere).
- Make sure that the environment you want for your job (like $PATH, $PERL5LIB, and other environment variables) is set in your cluster home directory (/mnt/nfs/users/<username>/).
- SSH into sheridan as yourself.
- Submit jobs there, making sure their output is written to the NFS/RAID where it doesn't conflict with other users' files.
- Your jobs will run on the exec nodes, and you can check their status with qstat (or qstat -j <job number> for details on a specific job). Essentially, your job runs as if you had logged into some exec node under your username and run the command you submitted to qsub there.
- N.B. about the path where your job will run:
- if you use the -cwd parameter to qsub, your job will execute in the path that you submitted it from;
- if you do not use the -cwd parameter to qsub, your job will execute in your home directory (/mnt/nfs/users/<your user name>/).
- If you don't specify a queue name (see below) when submitting a job, your job will be put in whatever queue is available, indiscriminately. This is useful when you don't care which node your job runs on.
- Your job will run on the exec node in the environment of that node, plus whatever is specified by .bash_profile in your home directory (/mnt/nfs/users/<your user name>/). Your job can therefore access some useful environment variables that give you details on what's going on - e.g. $HOSTNAME will tell you what node you are on, and so on.
- Note that for all jobs you submit, they will (unless otherwise specified) run on whatever shell is specified for your username in /etc/passwd on the exec node where that job gets scheduled. For most people, that shell is /bin/bash. If you want to execute your script on a different shell, use qsub -S, or alternately if you don't want a shell at all (i.e. use the exec sys call directly to launch your job), use qsub -shell n.
- If your jobs involve a lot of reading/writing temporary files to disk, I would recommend writing to the local disk of the exec node instead of to the NFS - that is, write to /tmp/ instead of /mnt/nfs/tmp/.
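As a concrete sketch of that last tip, a job script can do all of its heavy temporary I/O in a scratch directory on the exec node's local disk and copy only the final results back to the NFS. The function name and file names below are ours, purely for illustration:

```shell
# run_with_local_scratch: hypothetical helper (the name is ours) showing the
# pattern: do heavy temporary I/O in a scratch dir under the exec node's
# local /tmp, then copy only the final results to a destination on the NFS.
run_with_local_scratch() {
    dest=$1
    scratch=$(mktemp -d /tmp/myjob.XXXXXX)   # local disk, not the NFS
    # ... all the I/O-heavy intermediate work happens in $scratch ...
    echo "final result" > "$scratch/result.txt"
    mkdir -p "$dest"
    cp "$scratch/result.txt" "$dest/"        # only the results touch the NFS
    rm -rf "$scratch"                        # free the exec node's local disk
}
```

A job script would call it with a destination on the NFS, e.g. run_with_local_scratch /mnt/nfs/users/$USER/results.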
Note: these commands all have man pages, which you should read on sheridan, as the man pages on the exec nodes seem to be... different (especially with -j <job number>).
Our current queues are listed here.
Submit a job
$ qsub -e <error log path/filename> -o <output log path/filename> <job executable> <parameters to job executable>
The -e and -o options specify which files standard error and standard output are redirected to when your job runs. They can be omitted, in which case standard error and standard output are written to files in your NFS home directory named after your job executable and job number (<job name>.e<job number> and <job name>.o<job number>), so the odds of a filename conflict are practically nonexistent.
The job executable can be a shell script or a binary. Note that your job will run under the shell specified for your username in /etc/passwd on the exec node where it is scheduled, and from your home directory on the NFS. That is, your job runs as if that shell had been started and someone executed the job executable from within it, from within your home directory. If you don't like this default behavior, you can specify the interpreting shell you want using:
$ qsub -S <path to interpreting shell executable> ...
If you use the -S option, your job will not run under your default shell, so you will lose the environment set up in your .bash_profile - for example, your $PATH will not be available. So, use the -S option with caution!
So, for example, if you want to submit a Perl program, you can do it by specifying the Perl interpreter as your shell:
$ qsub -S /usr/bin/perl someProgram.pl
Alternately, you don't have to use a shell at all, and can have your job and its parameters started directly via the exec system call. The syntax is:
$ qsub -shell n ...
If you want to export a local environment variable to the exec node, use:
$ qsub -v <env variable name>
For example:
$ qsub -v PATH
The only reason you would ever want to use this is if you don't want to put the variable in .bash_profile in your NFS home directory.
Lastly, the -cwd option sets the working directory of your job to the directory you were in when you submitted it. So, instead of executing in your home directory, the job will execute on the exec node in the same NFS path from which you submitted it on the head node. The syntax is:
$ qsub -cwd ...
Here's an example, where you start on your local machine, SSH into sheridan, and submit a job that just prints the directory it is running from:
[yourusername@yourlocalmachine ~]$ ssh sheridan
[yourusername@sheridan ~]$ cd /mnt/nfs/users/<yourusername>/job_scripts
[yourusername@sheridan job_scripts]$ qsub -cwd -o out.log -e err.log pwd-wrapper.bash
This job will print the path it was running in (/mnt/nfs/users/<yourusername>/job_scripts/) to the file out.log in... yep, /mnt/nfs/users/<yourusername>/job_scripts/. Note that we are using relative paths instead of absolute paths, because the -cwd option lets us be certain which directory the job executes in.
Submit a job to a single, specific host
$ qsub -l hostname=<host name> ...
The -l parameter sets what resources your job needs (such as which queues, how much memory, etc. - see man qsub for more info), using the syntax -l <resource>=<value>. SGE will then dispatch your job only to hosts that satisfy the requested resources. See man complex for a list of what you are allowed to specify as <resource>.
Submit a job to a specific queue
See the queue list for what our queues currently are.
$ qsub -l qname=<queue name> ...
Get a list of the queues you can submit stuff to
$ qconf -sql
Delete a job
$ qdel <job number list>
Get status/information about a specific job
(including possible reasons why it's not working)
If the job is not yet terminated:
$ qstat -j <job number>
But if the job terminated:
$ qacct -j <job number>
Get status/information about all jobs submitted by some user
$ qacct -j -o <user name>
qstat and qacct have a lot of useful options for job accounting; see man qstat and man qacct for a straightforward explanation.
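The two commands can be combined into a small helper that works whether or not the job has already finished. This is our sketch (the function name is ours, not part of SGE), guarded so it fails gracefully when run off-cluster:

```shell
# job_info: hypothetical helper (the name is ours). qstat -j only knows
# about jobs still in the queue; once a job finishes, its record moves to
# the accounting logs, which qacct -j reads instead.
job_info() {
    job=$1
    if ! command -v qstat >/dev/null 2>&1; then
        echo "SGE commands not on PATH - run this on sheridan"
        return 1
    fi
    qstat -j "$job" 2>/dev/null || qacct -j "$job"
}
```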
Why Won't My Job Run Correctly? (aka How To Diagnose Problems)
This section has been moved here
SGE "To Do" List
This is now summarized under the SGE subsection of SysAdminTasks, and this page will probably cease being maintained.
We are tapping only a fraction of SGE's features, but as I learn the system more, the pages (SunGridEngine and HowToAdministerSunGridEngine) will grow. Some particular things to look at are:
- adding spare machines in the lab as an SGE queue
- scheduling and spooling optimizations
- setting up user notification e-mails so that users are notified when their jobs encounter problems (would be very, very useful)
- policy management (making sharing cluster resources more fair, not just on a "first come, first served" basis as it is now)
- installing a shadow master host
- making the Macs in the lab submit and administrative hosts (so people don't have to log into sheridan all the time; they can just submit jobs from the Macs directly)
Of course any help in figuring stuff out is appreciated...
COMMENTS (QUESTIONS, NOTES, SUGGESTIONS, ETC.)
-- Started by: AndrewUzilov
- 08 Apr 2006