🔌 Toolbox of short, reusable pieces of code and knowledge.
(only relevant to Stanford affiliates)
Stanford offers Linux clusters that students can use to perform computational tasks. The main ones to use are the Farmshare machines, rice.stanford.edu and oat.stanford.edu. Fortunately, Farmshare provides access to a Tesla K40 GPU via oat.stanford.edu.
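If direct SSH to the GPU machine is enabled for your account, a quick sanity check is the following (a minimal sketch; sunetid is a placeholder for your own SUNet ID):

ssh sunetid@oat.stanford.edu
nvidia-smi   # should list the Tesla K40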
Besides giving you an easy-to-access remote machine to do development on, these clusters let you do things like submit Slurm jobs for intensive parallel computing. Here is an example.

First, log into rice.stanford.edu.
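For example (sunetid is again a placeholder, and the directory name simply matches the one used in the output below):

ssh sunetid@rice.stanford.edu
mkdir -p ~/dev/job_sample && cd ~/dev/job_sample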
Create a file cool.py which contains the following code:

import time
print("IT IS RUNNING!")
time.sleep(100)
print("Done!")
Then create a batch script run.sh:

#!/bin/bash
#
#SBATCH --job-name=sample
#
srun python cool.py
Note that you should prepend the command you would normally use to run the script with srun (Slurm's job-step launcher).
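For reference, a slightly fuller batch script might look like the following; the extra directives are standard Slurm options, but the specific values are arbitrary assumptions rather than Farmshare defaults:

#!/bin/bash
#
#SBATCH --job-name=sample
#SBATCH --output=slurm-%j.out    # %j expands to the job ID
#SBATCH --time=00:10:00          # wall-clock time limit
#SBATCH --cpus-per-task=1        # CPU cores per task
#SBATCH --mem=1G                 # memory per node
#
srun python cool.py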
Submit the job with sbatch run.sh. This returns something like:

Submitted batch job 8292018
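If you would rather capture the job ID in a script than copy it by hand, sbatch's --parsable flag prints just the ID (a small sketch, not part of the original walkthrough):

JOBID=$(sbatch --parsable run.sh)
echo "Submitted job ${JOBID}"
tail -F "slurm-${JOBID}.out"   # follow the log; -F keeps retrying until the file appears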
You should then see the file slurm-8292018.out, which is the output log of the script you ran. It will be populated once the run is complete. You can view the queue of jobs running on the cluster via squeue -r | grep ${USER}, and you should see your job:
(base) sunetid@rice06:~/dev/job_sample$ squeue
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
8292018 normal sample sunetid R 1:12 1 wheat07
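If you want the queue view to refresh on its own, one convenient (purely optional) option is to wrap a user-filtered squeue in watch:

watch -n 10 "squeue -u ${USER}"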
After the run is complete, check out slurm-8292018.out:
(base) sunetid@rice06:~/dev/job_sample$ cat slurm-8292018.out
IT IS RUNNING!
Done!
This is pretty cool because you can submit some jobs and then log out of your instance! You can view a history of your jobs via sacct.
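For example, to inspect a single job with a few useful columns (the job ID and column list here are just for illustration):

sacct -j 8292018 --format=JobID,JobName,Partition,State,Elapsed,ExitCode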
To run a job on the GPU nodes, submit to the GPU partition and QOS (see the Farmshare FAQ for details):

sbatch --partition=gpu --qos=gpu --gres=gpu:<num gpus> --time <days-hours:minutes> run.sh

For example:

sbatch --partition=gpu --qos=gpu --gres=gpu:4 --time 1-23:58 run.sh
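To double-check inside the job that the GPUs were actually allocated, you can add a couple of lines to run.sh before the srun command (a sketch; Slurm exports CUDA_VISIBLE_DEVICES for the devices it grants you):

echo "Allocated GPUs: ${CUDA_VISIBLE_DEVICES}"
nvidia-smi   # lists the devices visible to this job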