Open OnDemand: Eureka2¶
The OOD GUI-based interactive applications can be launched from any available partition on Eureka2. This allows you to run jobs for up to 8 hours on the debug partition and up to 7 days on the other partitions.
Unlike AISurrey, where submitted jobs (including OOD interactive apps) run inside Apptainer images by default, Eureka2 supports Apptainer images but does not use them for its OOD interactive apps. Instead, the OOD interactive applications, such as VS Code and JupyterLab, start in a standard Anaconda environment. Once the interactive session is running, you can activate your preferred Conda environment as needed.
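After activating an environment inside the session, you can confirm which interpreter and Conda environment are in effect. A minimal check (the `CONDA_DEFAULT_ENV` variable is set by `conda activate`; if no environment is active it is absent):

```python
import os
import sys

# Show which Python interpreter and Conda environment are currently active.
print("Interpreter:", sys.executable)
print("Conda env:  ", os.environ.get("CONDA_DEFAULT_ENV", "(none active)"))
```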
When starting both VS Code and JupyterLab sessions, you can specify the maximum duration (in hours), the number of CPU cores, the partition (queue), and whether you’d like to use a custom Conda environment.
Note
GPU resources are not available on all partitions. You can only specify the number and type of GPUs when submitting jobs to the gpu partition. However, these GPUs are MIG-partitioned to allow multiple users to share the available GPU slices.
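One way to see which GPU slices the scheduler has exposed to your session is to inspect the `CUDA_VISIBLE_DEVICES` environment variable, which Slurm typically sets inside GPU allocations. A sketch, assuming MIG slices appear as comma-separated device IDs (the helper name and sample values are illustrative):

```python
import os

def assigned_gpu_slices(env=None):
    """Return the list of GPU/MIG device IDs exposed to this job.

    Slurm typically sets CUDA_VISIBLE_DEVICES to a comma-separated list,
    e.g. "MIG-<uuid>,MIG-<uuid>" for MIG slices or "0,1" for full GPUs.
    """
    env = os.environ if env is None else env
    value = env.get("CUDA_VISIBLE_DEVICES", "")
    return [d for d in value.split(",") if d]

print(assigned_gpu_slices())
```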
Below, we explain how to start both OOD interactive applications: VS Code and JupyterLab.
In addition to the common features you select when starting a VS Code session on OOD, you can also choose the specific VS Code version you’d like to use during your interactive session.
Screenshot of the Eureka2-OOD VS-Code session.¶
To select a custom Conda environment to work in during your VS Code session:
1- Press Ctrl+Shift+P (Cmd+Shift+P on macOS) to open the Command Palette.
2- Type Python: Select Interpreter and press Enter. VS Code will display a list of available Python interpreters, including those from Conda environments.
3- Click on the interpreter corresponding to your desired Conda environment.
4- If PyTorch is installed in the activated Conda environment, you can run the following code snippet to determine the available GPU MIG instances and the number of CPU cores:
import os

# Number of CPU cores reported for the node
num_cpu_cores = os.cpu_count()
print(num_cpu_cores)

import torch

# Number of GPU (MIG) devices visible to PyTorch
num_gpus = torch.cuda.device_count()
print("Num_GPUS=" + str(num_gpus))

# Print the name of each available GPU
for i in range(num_gpus):
    gpu_name = torch.cuda.get_device_name(i)
    print(f"GPU {i}: {gpu_name}")
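Note that `os.cpu_count()` reports every core on the node, not necessarily only those allocated to your job. On Linux, a more job-aware sketch (assuming Slurm sets `SLURM_CPUS_ON_NODE` inside allocations, as it normally does) is:

```python
import os

# Cores this process is actually allowed to run on (respects cgroup limits)
allocated_cores = len(os.sched_getaffinity(0))
print("Affinity cores:", allocated_cores)

# Slurm also records the allocation in an environment variable
print("SLURM_CPUS_ON_NODE:", os.environ.get("SLURM_CPUS_ON_NODE", "(not in a Slurm job)"))
```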
Unlike VS Code, when using JupyterLab in an OOD session you must specify the path to the bin directory of your custom Conda environment when launching the session in order to run code within that environment. This ensures that JupyterLab uses the Python interpreter and packages associated with your Conda environment. Refer to the following screenshot for guidance on how to configure this.
Screenshot of the Eureka2-OOD JupyterLab session.¶
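Once a notebook is running, you can verify that the kernel is using the interpreter from your Conda environment's bin directory. A small sketch (the helper function and the environment name "myenv" are hypothetical):

```python
import sys
from pathlib import Path

def uses_conda_env(executable: str, env_name: str) -> bool:
    """Return True if the interpreter path lies under a Conda env of that name."""
    parts = Path(executable).parts
    return "envs" in parts and env_name in parts

# Check the running kernel against a hypothetical environment named "myenv"
print(sys.executable)
print(uses_conda_env(sys.executable, "myenv"))
```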
Once your JupyterLab session starts, create a new notebook and run the same Python code snippet used in the VS Code session to determine the number of CPU cores and GPU MIG instances. Make sure that your Conda environment includes the PyTorch package so the code runs successfully.