UMBC HPC Bootcamp

Last updated March 24, 2025

iHARP at UMBC held a High Performance Computing (HPC) bootcamp in November 2024; the resources developed for it are collected below. Please note that access to some materials and high-performance computing resources is restricted to UMBC members.

Introduction

The video and document (access restricted to UMBC) introduce basic Bash commands for navigating the Linux terminal, including file navigation and manipulation. They then walk users through accessing Ada, the university’s High-Performance Computing (HPC) cluster, via CLI tools. This section provides instructions for logging in, navigating the cluster environment, and accessing resources.
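
As a quick reference, the sketch below shows the kind of terminal navigation and login commands the materials cover. The SSH address and username are placeholders rather than the actual Ada login details, which are given in the restricted documentation.

    pwd                      # print the current working directory
    ls -lh                   # list files with sizes and permissions
    cd my_project            # move into a directory
    mkdir results            # create a new directory
    cp input.txt results/    # copy a file
    mv results/input.txt results/data.txt    # move or rename a file
    cat results/data.txt     # print a file's contents

    # Log in to the Ada cluster over SSH (placeholder address; use the
    # hostname given in the bootcamp documentation)
    ssh YOUR_UMBC_ID@<ada-login-address>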

The resources also cover job management with SLURM, teaching users how to write, submit, and track SLURM jobs on Ada. This includes creating job scripts, submitting tasks to the cluster, tracking job progress, and optimizing resource usage so that computational workloads run smoothly and efficiently on the HPC cluster.
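
As a minimal sketch of that workflow, a batch script like the one below requests resources and runs a program; the partition, module, and resource values shown are illustrative placeholders, and the correct settings for Ada are listed in the bootcamp documentation.

    #!/bin/bash
    #SBATCH --job-name=example
    #SBATCH --partition=<partition-name>   # placeholder; see the bootcamp docs
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=8G
    #SBATCH --time=01:00:00
    #SBATCH --output=example_%j.out        # %j expands to the SLURM job ID

    module load Python                     # module name may differ on Ada
    python my_script.py

Saved as, for example, example_job.slurm, the script is submitted and tracked with the standard SLURM commands:

    sbatch example_job.slurm    # submit the script to the scheduler
    squeue -u $USER             # show your pending and running jobs
    scancel <jobid>             # cancel a job by its ID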


Instructional Introduction Video on UMBC’s High Performance Computing Ada Cluster

The bootcamp recording includes a voice-over walkthrough of UMBC’s Ada cluster.

Click the arrow in the circle at the center of the image to play the video. To view the video chapters, click the up arrow at the bottom of the screen.

UMBC High Performance Computing (HPC) Bootcamp Walk-through Documentation

Click here to navigate to the Google Document that contains written documentation and instructions for accessing UMBC’s High-Performance Computing (HPC) Ada cluster.

*Please note that you must be a UMBC member to access the Google Document.


  • UMBC HPC Bootcamp 2024 0:04
  • Let’s log in to ADA 0:25
  • High Performance Computing 1:10
  • What is HPC? 1:15
  • Purpose of HPC? 2:10
  • Nodes in ADA 3:04
  • Login Node 3:16
  • Worker Nodes 4:13
  • Storage System 5:37
  • Storage on Ada 5:43
  • Home directory vs Working directory 5:55
  • Home Directory 7:16
  • Shared Group Storage 8:16
  • HPCF Research Storage 9:13
  • Scratch Space 11:22
  • List of Commands 12:49
  • ID Share 13:04
  • List of Commands 17:55
  • List of Commands 19:22
  • Running the Bash Script created earlier 21:46
  • 3. Modules 21:52
  • my_test.sh 22:34
  • Environment Setup 25:10
  • Bash Environment 25:28
  • Why we need modules? 27:33
  • Solution 28:51
  • List of Commands 30:08
  • ID Share 31:58
  • Python Environment 32:38
  • Python Environment 33:29
  • Venv Environment 34:32
  • Conda Environment 37:37
  • SLURM 42:00
  • iHARP 42:33
  • Requesting Resources 43:21
  • Requesting Memory 44:36
  • Requesting GPUs 45:21
  • Requesting Time 46:21
  • Batch Jobs 47:48
  • Running MNIST script through sbatch 49:05
  • Install the following packages 50:14
  • How to create a shell file to request resources 50:53
  • 6. Jupyter Notebooks 57:59
  • List of Commands 1:00:11
  • Monitoring SLURM Jobs 1:06:21
  • Monitoring the status of the running jobs 1:06:27
  • SQUEUE Command 1:06:57
  • Modifying Jobs 1:07:27
  • Case Study: Using patch-CNN in High-Performance Computing… 1:08:39
  • Overview 1:09:29
  • Problem Statement 1:10:21
  • Motivation 1:11:05
  • Model Explanation 1:11:53
  • Research Objectives 1:13:13
  • Methodology Overview 1:13:59
  • Challenges 1:15:23
  • Results 1:16:13
  • (seconds) 1:16:33
  • Conclusion 1:16:49
  • Code 1:18:10
  • Antarctica 1:18:13
  • median IC: 1:18:54
  • myenv Idle 1:20:23
  • Cancelling jobs 1:20:4
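
The chapters above also walk through setting up Python environments with venv and conda before submitting jobs. The commands below are a minimal sketch of that workflow; the module, environment, and package names are placeholders rather than the exact ones used in the video.

    # venv-based environment (module name is a placeholder)
    module load Python
    python -m venv ~/envs/myenv
    source ~/envs/myenv/bin/activate
    pip install numpy pandas

    # conda-based environment (assumes a conda installation or module is available)
    conda create -n myenv python=3.11
    conda activate myenv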