Reaching Others University at Buffalo - The State University of New York

SLURM - Job Scheduler

What is SLURM?

SLURM (Simple Linux Utility for Resource Management) is a workload manager that provides a framework for job queues, allocation of compute nodes, and the starting and execution of jobs.

Using SLURM

The cluster compute nodes are available in SLURM partitions.  Users submit jobs to request node resources in a partition.  The SLURM partitions available for general use are general-compute, debug, gpu, largemem and supporters.  The default is the general-compute partition.
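A partition is selected at submission time with the --partition option.  As a minimal sketch (the job name and test command are illustrative), a job script requesting the debug partition might look like this:

```shell
#!/bin/bash
#SBATCH --partition=debug        # run in the debug partition instead of the default general-compute
#SBATCH --nodes=1                # request one compute node
#SBATCH --time=00:10:00          # wall-clock limit of 10 minutes
#SBATCH --job-name=partition-test

# The commands below run on the allocated compute node.
hostname
```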

Login to Front-end of CCR Cluster

Login to rush.ccr.buffalo.edu using ssh (Secure Shell).  See the user guide for details on logging in from Linux, Mac and Windows.
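From a Linux or Mac terminal, the login is a single ssh command ("username" below is a placeholder for your CCR account name):

```shell
# Replace "username" with your CCR account name.
ssh username@rush.ccr.buffalo.edu
```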

User Guide: Login

Enabling X-Display will allow use of the sview and slurmjobvis commands.

User Guide: X-Display
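On Linux and Mac, X-Display forwarding can be requested directly on the ssh command line (the username is again a placeholder; see the user guide for Windows options):

```shell
# -X enables X11 forwarding so graphical tools such as
# sview and slurmjobvis can display on your local screen.
ssh -X username@rush.ccr.buffalo.edu
```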

Overview of SLURM Commands

  • squeue - show status of jobs in queue
  • scancel - delete a job
  • sinfo - show status of compute nodes
  • srun - run a command on allocated compute nodes
  • sbatch - submit a job script
  • fisbatch - wrapper script for submitting an interactive job
  • salloc - allocate compute nodes for interactive use
  • slurmjobvis - graphical tool for monitoring running jobs
  • snodes - tool for viewing details of compute nodes
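A few common invocations of these commands (the username, script name and job ID below are placeholders):

```shell
# Show all of your queued and running jobs.
squeue -u username

# Show the state of the compute nodes in the debug partition.
sinfo -p debug

# Submit a job script; sbatch prints the job ID it assigns.
sbatch myjob.sh

# Cancel the job using the job ID printed by sbatch.
scancel 123456
```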
 
  • How to submit a SLURM script job. (8/7/14)
  • How to submit an interactive SLURM job. (7/19/14)
  • How to check the status of a job, monitor a running job, and show the status of partitions and nodes. (8/6/14)
  • How to view job priority, including the factors, formula and weights that determine it. (8/6/14)
  • How to retrieve job history and accounting information. (8/6/14)
  • Job arrays - a job array is a collection of similar jobs. (8/22/14)
  • Guide to using PI (principal investigator) partitions.  PI partitions are resources purchased by faculty for use by a specific research group; these nodes are not available to all CCR users. (7/19/14)
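The job arrays mentioned above are requested with a single --array directive in the job script.  A minimal sketch (the index range and input file names are illustrative):

```shell
#!/bin/bash
#SBATCH --partition=general-compute
#SBATCH --time=00:30:00
#SBATCH --array=1-10             # submit 10 similar jobs, indexed 1 through 10

# SLURM sets SLURM_ARRAY_TASK_ID to the index of each array element,
# so each task can select its own input.
echo "Processing input_${SLURM_ARRAY_TASK_ID}.dat"
```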