
Introduction to SUSE AI

Publication Date: 12 Nov 2024
WHAT?

SUSE AI is an open generative AI solution.

WHY?

To learn about AI and the benefits of running a private AI service inside your company or in the cloud.

EFFORT

Understanding the basics and benefits of SUSE AI requires less than 30 minutes of your time.

GOAL

To make you realize that SUSE AI is the right choice to run private and secure AI workloads.

1 Introduction

This topic describes what SUSE AI is, which components it consists of, and what benefits it brings.

1.1 What is SUSE AI?

SUSE AI is an open generative AI solution. It offers customers the freedom to run private AI workloads on-premises, in the cloud, or even in air-gapped environments. SUSE AI provides secure, auditable AI capabilities and ensures complete control over data and compliance with regulatory requirements.

1.2 What are the benefits of using SUSE AI?

Running the SUSE AI service brings the following benefits:

  • Cloud AI services raise concerns about data security and accessibility. AI applications running on-premises, however, increase data privacy, security and compliance with regulatory standards.

  • SUSE AI provides a user-friendly interface for managing and deploying AI models.

  • Pre-built NVIDIA drivers and the NVIDIA GPU Operator increase AI performance.

  • SUSE AI components are dynamically scalable, from the Web user interface to the underlying data store.

  • SUSE AI cares about security: it uses TLS certificates for Web UI and API access, and data stored in the database is encrypted as well.

  • SUSE applies a high level of quality control to the whole software production chain used by the AI stack.

1.3 What are typical scenarios for using SUSE AI?

After you deploy and configure SUSE AI inside your company and extend its AI knowledge base with your own data, you can:

  • Implement AI-driven chatbots to handle customer inquiries, providing continuous support and reducing the load on human agents.

  • Build a knowledge base where users can easily interact with chatbots to ask questions about the company policies, processes and workflows.

  • Generate reports and summaries on business performance or sales metrics with minimal manual input.

  • Automate the generation of blog posts or social media content, ensuring consistency and saving time.

  • Generate personalized e-mail content tailored to individual customer segments.

  • Offer personalized product or content recommendations based on customer preferences.

  • Forecast trends, customer behaviors, and market shifts, enabling more informed strategic decisions.

1.4 How does SUSE AI work?

This section describes individual components of SUSE AI and what happens after you enter a user prompt for an AI-driven chatbot.

1.4.1 Structure of SUSE AI

SUSE AI is designed to run on a cluster of nodes. It consists of two separate components—the SUSE AI Foundation and the SUSE AI Library.

The SUSE AI Foundation includes:

  • SUSE Linux Micro as the underlying operating system with the optional NVIDIA driver installed.

  • An RKE2 cluster managed by Rancher Manager, ensuring container and application lifecycle management.

  • NVIDIA GPU Operator to utilize the NVIDIA GPU computing power and capabilities for processing AI-related tasks.

  • SUSE Security for security and compliance.

  • SUSE Observability providing advanced performance and data monitoring.

The SUSE AI Library includes:

Ollama (https://ollama.com)

A platform that simplifies installation and management of large language models (LLM) on local devices.

Open WebUI (https://openwebui.com)

An extensible Web user interface for the Ollama LLM runner.

Milvus (https://milvus.io)

A vector database built for generative AI applications with minimal performance loss.

Figure 1: Basic schema of SUSE AI
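Ollama, listed above, exposes a local REST API for running prompts against installed models. The following sketch shows how a client could call that API from Python. It assumes a default Ollama installation listening on localhost:11434 and a model named llama3 that has already been pulled; adjust both for your deployment.

```python
import json
import urllib.request

# Default endpoint of a locally running Ollama instance (deployment-specific).
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_generate_request(model: str, prompt: str, stream: bool = False) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": stream}
    return json.dumps(payload).encode("utf-8")


def ask_ollama(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama instance and return the response text."""
    body = build_generate_request(model, prompt)
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Usage (requires a running Ollama server with the model pulled, e.g. `ollama pull llama3`):
#   print(ask_ollama("llama3", "Summarize what a vector database is in one sentence."))
```

In a real SUSE AI deployment, users typically interact with the model through Open WebUI instead; the direct API is useful for scripting and integration tasks.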

1.4.2 Processing user prompts by an AI-driven chatbot

When you enter a user prompt, several processes happen in the background to generate the response.

  1. Input processing. The AI first processes the text of your prompt to understand its meaning. This involves identifying the subject, intent and any details or context provided. This process is called Natural Language Understanding (NLU).

  2. Contextual analysis. If you are interacting with AI in a session where you have already asked previous questions, the AI considers the context of the conversation. This results in more relevant and coherent answers.

  3. Knowledge retrieval. The AI retrieves information from its pre-trained knowledge base. This database includes facts, data and concepts that the AI has been trained on. AI models can also utilize retrieval-augmented generation (RAG) systems to get contextual information from the data provided by the organization. If the AI has access to real-time data, it can search for the latest information online to provide an up-to-date response.

  4. Response generation. Using natural language generation (NLG) techniques, the AI constructs a coherent and grammatically correct response based on the information it retrieved.

  5. Output. The AI delivers the response to you in a human-readable format. This might be a straightforward answer, a detailed explanation or a step-by-step guide, depending on the complexity of your question.

  6. Feedback loop (optional). In specific AI systems, your feedback or follow-up questions can help refine future responses, allowing the AI to improve its answers over time.
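The retrieval and generation steps above can be sketched in miniature. The toy code below uses a plain bag-of-words similarity and an in-memory document list purely for illustration; in SUSE AI, documents would instead be embedded with a neural model and stored in Milvus, and the augmented prompt would be sent to an LLM served by Ollama. All document texts and function names here are hypothetical.

```python
import math
from collections import Counter

# A toy document store standing in for the organization's knowledge base.
DOCUMENTS = [
    "Vacation requests must be approved by your manager two weeks in advance.",
    "The VPN client is preinstalled on all company laptops.",
    "Expense reports are submitted through the finance portal by month end.",
]


def embed(text: str) -> Counter:
    """Stand-in embedding: a sparse bag-of-words vector (real systems use neural embeddings)."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Step 3 (knowledge retrieval): return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]


def build_prompt(query: str, docs: list[str]) -> str:
    """Step 4 (response generation): augment the user prompt with retrieved context."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"


prompt = build_prompt("How do I get my vacation request approved?", DOCUMENTS)
```

The resulting prompt pairs the most relevant stored document with the user's question, which is the core idea behind the RAG approach mentioned in step 3.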