Quick start

Tutorial

This is a guide to help you get started with Genv.

Note

This tutorial uses Genv locally on your GPU machine. If you want to work with one or more remote machines, check out the quick start tutorial of remote features.

Before beginning, make sure that you are running on a GPU machine. This could be either your local machine or a remote one over SSH. In this example, the machine has two GPUs.

You can verify that by running the command:

$ nvidia-smi
Tue Apr  4 11:17:31 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.161.03   Driver Version: 470.161.03   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
...

First, you will need to install Genv. You can choose your preferred way of installation: using pip or Conda.
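For reference, a typical installation might look like the following (the Conda channel name is an assumption; check the Genv installation docs for your setup):

```shell
# Install with pip, into your active Python environment:
pip install genv

# Or with Conda (channel name assumed):
conda install -c conda-forge genv
```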

Verify the installation by running the command:

$ genv status
Environment is not active

You can see that the terminal is not running in an active environment.

Note

You can run genv to see all the available commands.

Let’s see device information:

$ genv devices
ID      ENV ID      ENV NAME        ATTACHED
0
1

We can see that we have two devices and that both are available, since no GPU environment is attached to either of them.

Let’s now list the environments. You will see that there are no active environments yet:

$ genv envs
ID      USER            NAME            CREATED              PID(S)

Now, let’s activate a new environment and give it a name:

$ genv activate --name quick-start
(genv) $

Note

You can pass --no-prompt to genv activate to keep the shell prompt unchanged.

We can now rerun the genv envs command and see our environment:

$ genv envs
ID      USER            NAME            CREATED              PID(S)
13320   raz(1040)       quick-start     42 seconds ago       13320

Environments are detached from all GPUs when first activated. You can see this with the following command:

$ nvidia-smi
No devices were found

You can see that even though we have two GPUs on the machine, nvidia-smi sees none of them. Any CUDA application you run will see no GPUs either.

Note

Genv sets CUDA_VISIBLE_DEVICES, the environment variable that controls which GPU indices CUDA uses, to -1. You can see this with the command:

$ echo $CUDA_VISIBLE_DEVICES
-1

However, nvidia-smi is based on NVML rather than CUDA, so Genv handles it differently, using a shim. You can see this with the command:

$ vim $(which nvidia-smi)
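To see why -1 hides every device: the CUDA runtime reads CUDA_VISIBLE_DEVICES left to right and stops at the first invalid entry, so a leading -1 yields an empty device list. Here is a minimal sketch of that parsing rule (visible_device_indices is a hypothetical helper for illustration, not part of Genv):

```python
import os

def visible_device_indices(value):
    # Mimic the CUDA runtime's parsing of CUDA_VISIBLE_DEVICES:
    # entries are read left to right, and parsing stops at the
    # first invalid (e.g. negative) entry.
    indices = []
    for entry in value.split(","):
        entry = entry.strip()
        if not entry.isdigit():
            break
        indices.append(int(entry))
    return indices

os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
print(visible_device_indices(os.environ["CUDA_VISIBLE_DEVICES"]))  # []
```

With the variable set to "-1", the list is empty, which is why CUDA applications in a detached environment see no GPUs.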

Let’s now attach devices to the environment. We will configure the environment device count to 1 and let Genv pick the device index for us.

$ genv config gpus 1
$ genv attach

You can now run genv status and see information about your activated environment. Running nvidia-smi will also show our single attached device:

$ nvidia-smi
...
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:00:04.0 Off |                    0 |
| N/A   52C    P8    17W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
...

You can see the device index and its memory information. In our case, it’s the device at index 0, which has just under 16GB of GPU memory.

Now, if we don’t need all of the GPU’s memory, we can configure the environment’s GPU memory capacity with the command:

$ genv config gpu-memory 4g

Rerunning nvidia-smi will now show the configured amount as the total memory of the device:

$ nvidia-smi
...
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:00:04.0 Off |                    0 |
| N/A   52C    P8    17W /  70W |      0MiB / 3814MiB  |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
...
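The reported total of 3814MiB is consistent with "4g" being interpreted as 4×10⁹ bytes (decimal gigabytes) while nvidia-smi reports binary MiB: 4·10⁹ / 2²⁰ ≈ 3814. A small sketch of this conversion (the decimal interpretation is an assumption inferred from the output above, and to_mib is a hypothetical helper):

```python
def to_mib(spec):
    # Convert a memory spec like "4g" into binary MiB,
    # assuming decimal units (k = 10^3, m = 10^6, g = 10^9).
    units = {"k": 10**3, "m": 10**6, "g": 10**9}
    number, unit = spec[:-1], spec[-1].lower()
    return int(number) * units[unit] // 2**20

print(to_mib("4g"))  # 3814
```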

If you run any GPU-consuming application, you will see its memory usage as well as its process in the output of nvidia-smi.

If you have more than one device in your machine, you can attach the second device to your environment with the command:

$ genv attach --index 1

Now, nvidia-smi will show information on both devices.

That wraps up the Genv quick start tutorial.

Where to Go Next

If you have more than a single GPU machine, it is recommended to follow the quick start tutorial of Genv remote features.

Additionally, you should check out the usage guide to learn more Genv features.