Installing NVIDIA GPU Drivers on SUSE Linux Micro
- WHAT?
NVIDIA GPU drivers unlock the full computing potential of the GPU.
- WHY?
To learn how to install NVIDIA GPU drivers on SUSE Linux Micro so that higher-level applications, such as AI workloads, can fully use the computing power of the GPU.
- EFFORT
Understanding the information in this article and installing NVIDIA GPU drivers on your SUSE Linux Micro host requires less than one hour of your time and basic Linux administration skills.
- GOAL
You can install NVIDIA GPU drivers on a SUSE Linux Micro host with a supported GPU card attached.
1 Installing NVIDIA GPU drivers on SUSE Linux Micro #
1.1 Introduction #
This guide demonstrates how to implement host-level NVIDIA GPU support via the open-driver on SUSE Linux Micro 6.1. The open-driver is part of the core SUSE Linux Micro package repositories. Therefore, there is no need to compile it or download executable packages. This driver is built into the operating system rather than dynamically loaded by the NVIDIA GPU Operator. This configuration is desirable for customers that want to pre-build all artifacts required for deployment into the image, and where the dynamic selection of the driver version via Kubernetes is not a requirement.
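If you want to confirm that the open-driver is indeed available from the core repositories before you begin, an optional check (not part of the procedure below) is to search for it with zypper:

# zypper se nvidia-open-driver

The package installed later in this guide, nvidia-open-driver-G06-signed-cuda-kmp-default, should appear in the results.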
1.2 Requirements #
This guide assumes that you already have the following available:
At least one host with SUSE Linux Micro 6.1 installed, physical or virtual.
Your hosts are attached to a subscription as this is required for package access.
A compatible NVIDIA GPU installed or fully passed through to the virtual machine in which SUSE Linux Micro is running.
Access to the root user. These instructions assume you are the root user and are not escalating your privileges via sudo.
1.3 Considerations before the installation #
1.3.1 Select the driver generation #
You must verify the driver generation for the NVIDIA GPU that your system has. For modern GPUs, the G06 driver is the most common choice. Find more details in the support database.

This section details the installation of the G06 generation of the driver.
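If you are unsure which NVIDIA GPU your system has, one optional way to identify it before choosing the driver generation (not part of the procedure itself, and assuming the pciutils package providing lspci is installed) is to query the PCI bus:

# lspci | grep -i nvidia

The model name in the output can then be matched against the support database.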
1.3.2 Additional NVIDIA components #
Besides the NVIDIA open-driver provided by SUSE as part of SUSE Linux Micro, you might also need additional NVIDIA components. These could include OpenGL libraries, CUDA toolkits, command-line utilities such as nvidia-smi, and container-integration components such as nvidia-container-toolkit. Many of these components are not shipped by SUSE as they are proprietary NVIDIA software.

This section describes how to configure additional repositories that give you access to these components and provides examples of using these tools to achieve a fully functional system.
1.4 The installation procedure #
On the SUSE Linux Micro host, open a transactional-update shell session to create a new read/write snapshot of the underlying operating system, so that you can make changes to the immutable platform.

# transactional-update shell
When you are in the transactional-update shell session, add a package repository from NVIDIA. This allows pulling in additional utilities, for example, nvidia-smi.

transactional update # zypper ar \
  https://developer.download.nvidia.com/compute/cuda/repos/sles15/x86_64/ \
  cuda-sle15
transactional update # zypper --gpg-auto-import-keys refresh
Install the Open Kernel driver KMP and detect the driver version.
transactional update # zypper install -y --auto-agree-with-licenses \
  nvidia-open-driver-G06-signed-cuda-kmp-default
transactional update # version=$(rpm -qa --queryformat '%{VERSION}\n' \
  nvidia-open-driver-G06-signed-cuda-kmp-default \
  | cut -d "_" -f1 | sort -u | tail -n 1)
You can then install the appropriate packages for additional utilities that are useful for testing purposes.
transactional update # zypper install -y --auto-agree-with-licenses \
  nvidia-compute-utils-G06=${version}
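Optionally, before leaving the transactional-update shell, you can verify that a driver version was detected and that the expected packages are present in the new snapshot. This sanity check is not part of the original procedure, and the exact package list may differ on your system.

transactional update # echo "${version}"
transactional update # rpm -qa | grep -E 'nvidia-(open-driver|compute-utils)'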
Exit the transactional-update session and reboot to the new snapshot that contains the changes you have made.

transactional update # exit
# reboot

After the system has rebooted, log back in and use the nvidia-smi tool to verify that the driver is loaded successfully and that it can both access and enumerate your GPUs.

# nvidia-smi
The output of this command should be similar to the following example, in which the system has one GPU.
Fri Aug 1 14:53:26 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.133.20             Driver Version: 570.133.20     CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  Tesla T4                       On  |   00000000:00:1E.0 Off |                    0 |
| N/A   34C    P8             10W /  70W  |       0MiB / 15360MiB  |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                             GPU Memory |
|        ID   ID                                                              Usage      |
|=========================================================================================|
|  No running processes found                                                            |
+-----------------------------------------------------------------------------------------+
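As an additional, optional check that is not part of the original procedure, you can confirm that the NVIDIA kernel modules are loaded on the host:

# lsmod | grep -i nvidia

The nvidia kernel module should appear in the output; if the output is empty, the driver did not load.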
1.5 Validation of the driver installation #
Running the nvidia-smi command has verified that, at the host level, the NVIDIA device can be accessed and that the drivers are loading successfully. To validate that the GPU is functioning, you need to verify that it can take instructions from a user-space application, ideally via a container and through the CUDA library, as that is typically what a real workload would use. For this, we can make a further modification to the host OS by installing nvidia-container-toolkit.
Open another transactional-update shell.
# transactional-update shell
Install the nvidia-container-toolkit package from the NVIDIA Container Toolkit repository.
transactional update # zypper ar \
  "https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo"
transactional update # zypper --gpg-auto-import-keys install \
  -y nvidia-container-toolkit

The nvidia-container-toolkit.repo file contains a stable repository, nvidia-container-toolkit, and an experimental repository, nvidia-container-toolkit-experimental. Use the stable repository for production use. The experimental repository is disabled by default.

Exit the transactional-update session and reboot to the new snapshot that contains the changes you have made.

transactional update # exit
# reboot

Verify that the system can successfully enumerate the devices using the NVIDIA Container Toolkit. The output should be verbose, with INFO and WARN messages, but no ERROR messages.
# nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
This ensures that any container started on the machine can employ discovered NVIDIA GPU devices.
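Optionally, you can also list the device names that the generated CDI specification exposes. The nvidia-ctk cdi list subcommand shown below is an extra check, not part of the original procedure, and is available in recent versions of the NVIDIA Container Toolkit; expect entries such as nvidia.com/gpu=0 and nvidia.com/gpu=all.

# nvidia-ctk cdi list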
You can then run a Podman-based container. Doing this via podman gives you a good way of validating access to the NVIDIA device from within a container, which should give you confidence for doing the same with Kubernetes at a later stage.

Give Podman access to the labeled NVIDIA devices that were prepared by the previous command and run the bash command.

# podman run --rm --device nvidia.com/gpu=all \
  --security-opt=label=disable \
  -it registry.suse.com/bci/bci-base:latest bash
You can now execute commands from within a temporary Podman container. The container does not have access to your underlying system and is ephemeral: whatever you change inside it does not persist, and you cannot break anything on the underlying host.
Inside the container, install the required CUDA libraries. Identify their version from the output of the nvidia-smi command. Following the example above, we install CUDA version 12.8 together with many examples, demos and development kits to fully validate the GPU.

# zypper ar \
  http://developer.download.nvidia.com/compute/cuda/repos/sles15/x86_64/ \
  cuda-sle15-sp6
# zypper --gpg-auto-import-keys refresh
# zypper install -y cuda-libraries-12-8 cuda-demo-suite-12-8
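Optionally, you can list the contents of the demo suite directory to confirm that the deviceQuery binary was installed by the cuda-demo-suite-12-8 package. This extra check is not part of the original procedure.

# ls /usr/local/cuda-12.8/extras/demo_suite/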
Inside the container, run the deviceQuery CUDA example of the same version, which comprehensively validates GPU access via CUDA and from within the container itself.

# /usr/local/cuda-12.8/extras/demo_suite/deviceQuery
Starting...

 CUDA Device Query (Runtime API)

Detected 1 CUDA Capable device(s)

Device 0: "Tesla T4"
  CUDA Driver Version / Runtime Version          12.8 / 12.8
  CUDA Capability Major/Minor version number:    7.5
  Total amount of global memory:                 14914 MBytes (15638134784 bytes)
  (40) Multiprocessors, ( 64) CUDA Cores/MP:     2560 CUDA Cores
  GPU Max Clock rate:                            1590 MHz (1.59 GHz)
  Memory Clock rate:                             5001 Mhz
  Memory Bus Width:                              256-bit
  L2 Cache Size:                                 4194304 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1024
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z):  (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z):  (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 3 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Enabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 0 / 30
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 12.8, CUDA Runtime Version = 12.8, NumDevs = 1, Device0 = Tesla T4
Result = PASS

From inside the container, you can continue to run any other CUDA workload, such as compilers, to run further tests. When finished, you can exit the container.
# exit

Important: Changes you have made in the container and packages you have installed inside will be lost and will not impact the underlying operating system.
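If you later want a quick, non-interactive re-check from the host, you can run nvidia-smi in a one-shot container. This variation is not part of the original procedure and assumes that the generated CDI specification mounts the host's nvidia-smi binary into the container, which it typically does.

# podman run --rm --device nvidia.com/gpu=all \
  --security-opt=label=disable \
  registry.suse.com/bci/bci-base:latest nvidia-smi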
2 Legal Notice #
Copyright© 2006–2025 SUSE LLC and contributors. All rights reserved.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled “GNU Free Documentation License”.
For SUSE trademarks, see https://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.
All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors, nor the translators shall be held liable for possible errors or the consequences thereof.