Applies to SUSE OpenStack Cloud 8

2 Get started with OpenStack

The OpenStack project is an open source cloud computing platform for all types of clouds, which aims to be simple to implement, massively scalable, and feature rich. Developers and cloud computing technologists from around the world create the OpenStack project.

OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a set of interrelated services. Each service offers an Application Programming Interface (API) that facilitates this integration. Depending on your needs, you can install some or all services.

The following table describes the OpenStack services that make up the OpenStack architecture:

Table 2.1: OpenStack Services


Dashboard

Provides a web-based self-service portal to interact with underlying OpenStack services, such as launching an instance, assigning IP addresses, and configuring access controls.

Compute

Manages the lifecycle of compute instances in an OpenStack environment. Responsibilities include spawning, scheduling, and decommissioning of virtual machines on demand.

Networking

Enables Network-Connectivity-as-a-Service for other OpenStack services, such as OpenStack Compute. Provides an API for users to define networks and the attachments into them. Has a pluggable architecture that supports many popular networking vendors and technologies.

Object Storage


Stores and retrieves arbitrary unstructured data objects via a RESTful, HTTP-based API. It is highly fault tolerant thanks to its data replication and scale-out architecture. It is not implemented like a file server with mountable directories; instead, it writes objects and files to multiple drives, ensuring the data is replicated across a server cluster.

Block Storage


Provides persistent block storage to running instances. Its pluggable driver architecture facilitates the creation and management of block storage devices.

Identity service


Provides an authentication and authorization service for other OpenStack services. Provides a catalog of endpoints for all OpenStack services.

Image service


Stores and retrieves virtual machine disk images. OpenStack Compute makes use of this during instance provisioning.



Telemetry

Monitors and meters the OpenStack cloud for billing, benchmarking, scalability, and statistical purposes.



Orchestration

Orchestrates multiple composite cloud applications by using either the native HOT template format or the AWS CloudFormation template format, through both an OpenStack-native REST API and a CloudFormation-compatible Query API.

Database service


Provides scalable and reliable Cloud Database-as-a-Service functionality for both relational and non-relational database engines.

Data processing service


Provides capabilities to provision and scale Hadoop clusters in OpenStack by specifying parameters like Hadoop version, cluster topology and nodes hardware details.

2.1 Conceptual architecture

The following diagram shows the relationships among the OpenStack services:

2.2 Logical architecture

To design, deploy, and configure OpenStack, administrators must understand the logical architecture.

As shown in Section 2.1, “Conceptual architecture”, OpenStack consists of several independent parts, named the OpenStack services. All services authenticate through a common Identity service. Individual services interact with each other through public APIs, except where privileged administrator commands are necessary.

Internally, OpenStack services are composed of several processes. All services have at least one API process, which listens for API requests, preprocesses them and passes them on to other parts of the service. With the exception of the Identity service, the actual work is done by distinct processes.

The processes of one service communicate with each other through an AMQP message broker, and each service's state is stored in a database. When deploying and configuring your OpenStack cloud, you can choose among several solutions: for example, RabbitMQ for messaging, and MySQL, MariaDB, or SQLite for the database.

Users can access OpenStack via the web-based user interface described in Section 2.3.7, “Dashboard overview”, via command-line clients, and by issuing API requests through tools like browser plug-ins or curl. For applications, several SDKs are available. Ultimately, all these access methods issue REST API calls to the various OpenStack services.
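For example, a client obtains a token from the Identity service by POSTing a JSON body to its REST API. The sketch below only builds that request body; the endpoint URL and credentials are placeholders, not values from this guide:

```python
import json

# Hypothetical endpoint -- replace with your deployment's Identity URL.
KEYSTONE_URL = "http://controller:5000/v3/auth/tokens"

def build_auth_request(username, password, project, domain="Default"):
    """Build a Keystone v3 password-authentication request body."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": domain},
                        "password": password,
                    }
                },
            },
            "scope": {
                "project": {"name": project, "domain": {"name": domain}}
            },
        }
    }

# A client would POST this body to KEYSTONE_URL and read the issued
# token from the X-Subject-Token response header.
body = json.dumps(build_auth_request("demo", "secret", "demo"))
```

Every other access method, from the dashboard to the SDKs, ultimately produces HTTP requests of this kind.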

The following diagram shows the most common, but not the only possible, architecture for an OpenStack cloud:

2.3 OpenStack services

This section describes OpenStack services in detail.

2.3.1 Compute service overview

Use OpenStack Compute to host and manage cloud computing systems. OpenStack Compute is a major part of an Infrastructure-as-a-Service (IaaS) system. The main modules are implemented in Python.

OpenStack Compute interacts with OpenStack Identity for authentication; OpenStack Image service for disk and server images; and OpenStack Dashboard for the user and administrative interface. Image access is limited by projects, and by users; quotas are limited per project (the number of instances, for example). OpenStack Compute can scale horizontally on standard hardware, and download images to launch instances.

OpenStack Compute consists of the following areas and their components:

nova-api service

Accepts and responds to end user compute API calls. The service supports the OpenStack Compute API, the Amazon EC2 API, and a special Admin API for privileged users to perform administrative actions. It enforces some policies and initiates most orchestration activities, such as running an instance.

nova-api-metadata service

Accepts metadata requests from instances. The nova-api-metadata service is generally used when you run in multi-host mode with nova-network installations. For details, see Metadata service in the OpenStack Administrator Guide.

nova-compute service

A worker daemon that creates and terminates virtual machine instances through hypervisor APIs. For example:

  • XenAPI for XenServer/XCP

  • libvirt for KVM or QEMU

  • VMwareAPI for VMware

Processing is fairly complex. Basically, the daemon accepts actions from the queue and performs a series of system commands such as launching a KVM instance and updating its state in the database.
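That accept-and-execute loop can be sketched as a small queue consumer. This toy version (not nova code) substitutes a dictionary for the cloud database and plain function calls for the hypervisor API:

```python
import queue

instances = {}  # stands in for the cloud database

def handle(action, instance_id):
    """Perform one action and record the resulting instance state."""
    if action == "launch":
        instances[instance_id] = "ACTIVE"   # e.g. start a KVM guest here
    elif action == "terminate":
        instances[instance_id] = "DELETED"  # e.g. destroy the guest here

def run_worker(q):
    """Drain the queue, mimicking nova-compute's accept-and-execute loop."""
    while not q.empty():
        action, instance_id = q.get()
        handle(action, instance_id)

q = queue.Queue()
q.put(("launch", "vm-1"))
q.put(("launch", "vm-2"))
q.put(("terminate", "vm-2"))
run_worker(q)
# instances is now {"vm-1": "ACTIVE", "vm-2": "DELETED"}
```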

nova-scheduler service

Takes a virtual machine instance request from the queue and determines on which compute server host it runs.
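Conceptually, scheduling is a filter-then-weigh pass over candidate hosts. The sketch below uses invented host data, not the actual nova-scheduler code, and picks the host with the most free RAM among those that can fit the request:

```python
hosts = [
    {"name": "compute1", "free_ram_mb": 2048, "free_disk_gb": 100},
    {"name": "compute2", "free_ram_mb": 8192, "free_disk_gb": 40},
    {"name": "compute3", "free_ram_mb": 4096, "free_disk_gb": 10},
]

def schedule(hosts, ram_mb, disk_gb):
    """Filter out hosts that cannot fit the request, then weigh by free RAM."""
    candidates = [h for h in hosts
                  if h["free_ram_mb"] >= ram_mb and h["free_disk_gb"] >= disk_gb]
    if not candidates:
        raise RuntimeError("No valid host found")
    return max(candidates, key=lambda h: h["free_ram_mb"])["name"]

print(schedule(hosts, ram_mb=1024, disk_gb=20))  # compute2
```

compute3 is filtered out for lack of disk, and compute2 wins the weighing because it has the most free RAM.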

nova-conductor module

Mediates interactions between the nova-compute service and the database. It eliminates direct accesses to the cloud database made by the nova-compute service. The nova-conductor module scales horizontally. However, do not deploy it on nodes where the nova-compute service runs. For more information, see Configuration Reference Guide.

nova-cert module

A server daemon that serves the Nova Cert service for X509 certificates. Used to generate certificates for euca-bundle-image. Only needed for the EC2 API.

nova-consoleauth daemon

Authorizes tokens for users that console proxies provide. See nova-novncproxy. This service must be running for console proxies to work. You can run proxies of either type against a single nova-consoleauth service in a cluster configuration. For information, see About nova-consoleauth.

nova-novncproxy daemon

Provides a proxy for accessing running instances through a VNC connection. Supports browser-based novnc clients.

nova-spicehtml5proxy daemon

Provides a proxy for accessing running instances through a SPICE connection. Supports browser-based HTML5 client.

The queue

A central hub for passing messages between daemons. Usually implemented with RabbitMQ, but it can also be implemented with another message queue, such as ZeroMQ.

SQL database

Stores most build-time and run-time states for a cloud infrastructure, including:

  • Available instance types

  • Instances in use

  • Available networks

  • Projects

Theoretically, OpenStack Compute can support any database that SQLAlchemy supports. Common databases are SQLite3 for test and development work, MySQL, MariaDB, and PostgreSQL.
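As a small illustration of the run-time state kept in the SQL database, this self-contained SQLite sketch (table and column names are invented for the example, not nova's real schema) records and queries instance states:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # SQLite is the usual choice for test work
conn.execute("CREATE TABLE instances (uuid TEXT PRIMARY KEY, "
             "flavor TEXT, state TEXT)")
conn.execute("INSERT INTO instances VALUES ('a1', 'm1.small', 'ACTIVE')")
conn.execute("INSERT INTO instances VALUES ('b2', 'm1.large', 'BUILD')")

# Query the run-time state, as a service process would.
rows = conn.execute(
    "SELECT uuid FROM instances WHERE state = 'ACTIVE'").fetchall()
print(rows)  # [('a1',)]
```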

2.3.2 Storage concepts

The OpenStack stack uses the following storage types:

Table 2.2: Storage types

On-instance / ephemeral

  • Runs operating systems and provides scratch space

  • Persists until the VM is terminated

  • Access associated with a VM

  • Implemented as a filesystem underlying OpenStack Compute

  • Encryption is available

  • Administrator configures size settings, based on flavors

  • Example: 10 GB first disk, 30 GB/core second disk

Block storage (cinder)

  • Used for adding additional persistent storage to a virtual machine (VM)

  • Persists until deleted

  • Access associated with a VM

  • Mounted via OpenStack Block Storage controlled protocol (for example, iSCSI)

  • Encryption is available

  • Sizing based on need

  • Example: 1 TB "extra hard drive"

Object Storage (swift)

  • Used for storing virtual machine images and data

  • Persists until deleted

  • Available from anywhere

  • Accessed through a REST API

  • Encryption is a work in progress, expected for the Mitaka release

  • Easily scalable for future growth

  • Example: 10s of TBs of data set storage

File Storage (manila)

  • Used for providing file shares to a virtual machine

  • Persists until deleted

  • Access can be provided to a VM

  • Provides Shared File System service via nfs, cifs, glusterfs, or hdfs protocol

  • Encryption is not available yet

  • Sizing based on need

  • Example: 1 TB of file share

  • You cannot use OpenStack Object Storage like a traditional hard drive. Object Storage relaxes some of the constraints of a POSIX-style file system to gain other benefits: objects are accessed through an HTTP API, so the system does not have to provide atomic operations (it relies on eventual consistency instead), can scale easily, and avoids a central point of failure.

  • The OpenStack Image service is used to manage the virtual machine images in an OpenStack cluster, not store them. It provides an abstraction to different methods for storage - a bridge to the storage, not the storage itself.

  • The OpenStack Object Storage can function on its own. The Object Storage (swift) product can be used independently of the Compute (nova) product.
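The write-to-multiple-drives behavior can be illustrated with a toy object store (a plain dictionary per "drive", not swift code) that replicates each PUT and can serve a GET from any surviving copy:

```python
REPLICAS = 3
drives = [dict() for _ in range(5)]  # five stand-in storage drives

def put(name, data):
    """Write the object to REPLICAS different drives, chosen by hash."""
    start = hash(name) % len(drives)
    for i in range(REPLICAS):
        drives[(start + i) % len(drives)][name] = data

def get(name):
    """Read from any drive holding a copy -- replication tolerates failures."""
    for drive in drives:
        if name in drive:
            return drive[name]
    raise KeyError(name)

put("photos/cat.jpg", b"...")
copies = sum(1 for d in drives if "photos/cat.jpg" in d)
print(copies)  # 3
```

Losing any single drive still leaves two copies, which is the essence of the fault tolerance described above.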

2.3.3 Object Storage service overview

The OpenStack Object Storage is a multi-tenant object storage system. It is highly scalable and can manage large amounts of unstructured data at low cost through a RESTful HTTP API.

It includes the following components:

Proxy servers (swift-proxy-server)

Accepts OpenStack Object Storage API and raw HTTP requests to upload files, modify metadata, and create containers. It also serves file or container listings to web browsers. To improve performance, the proxy server can use an optional cache that is usually deployed with memcache.

Account servers (swift-account-server)

Manages accounts defined with Object Storage.

Container servers (swift-container-server)

Manages the mapping of containers or folders, within Object Storage.

Object servers (swift-object-server)

Manages actual objects, such as files, on the storage nodes.

Various periodic processes

Performs housekeeping tasks on the large data store. The replication services ensure consistency and availability through the cluster. Other periodic processes include auditors, updaters, and reapers.

WSGI middleware

Handles authentication and is usually OpenStack Identity.

swift client

Enables users to submit commands to the REST API through a command-line client authorized as either an admin user, reseller user, or swift user.


swift-init

Script that initializes the building of the ring file, takes daemon names as parameter, and offers commands. Documented in Managing Services.


swift-recon

A CLI tool used to retrieve various metrics and telemetry information about a cluster that has been collected by the swift-recon middleware.


swift-ring-builder

Storage ring build and rebalance utility. Documented in Managing the Rings.
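The ring maps each object name to a partition, and partitions to devices. A minimal sketch of that idea (fixed partition power and an invented device list, not the real ring format):

```python
import hashlib

PART_POWER = 4                      # 2**4 = 16 partitions for the sketch
devices = ["sda", "sdb", "sdc"]

def partition(name):
    """Hash the object name into a fixed partition, as the ring does."""
    digest = hashlib.md5(name.encode()).digest()
    return int.from_bytes(digest[:4], "big") >> (32 - PART_POWER)

def device_for(name):
    """Assign the partition to a device; the real ring balances by weight."""
    return devices[partition(name) % len(devices)]

# The mapping is deterministic: every proxy computes the same placement
# without consulting a central lookup service.
assert device_for("account/container/object") == device_for("account/container/object")
```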

2.3.4 Block Storage service overview

The OpenStack Block Storage service (cinder) adds persistent storage to a virtual machine. Block Storage provides an infrastructure for managing volumes, and interacts with OpenStack Compute to provide volumes for instances. The service also enables management of volume snapshots, and volume types.

The Block Storage service consists of the following components:


cinder-api

Accepts API requests, and routes them to the cinder-volume service for action.


cinder-volume

Responds to read and write requests sent to the Block Storage service to maintain state. It interacts with processes such as the cinder-scheduler through a message queue, and can interact with a variety of storage providers through a driver architecture.

cinder-scheduler daemon

Selects the optimal storage provider node on which to create the volume. A similar component to the nova-scheduler.

cinder-backup daemon

The cinder-backup service provides backing up volumes of any type to a backup storage provider. Like the cinder-volume service, it can interact with a variety of storage providers through a driver architecture.
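The "driver architecture" mentioned for cinder-volume and cinder-backup is, in essence, a common interface with per-backend implementations chosen by configuration. A schematic (the class and backend names are invented for illustration, not cinder's real drivers):

```python
from abc import ABC, abstractmethod

class VolumeDriver(ABC):
    """Interface that each storage backend implements."""
    @abstractmethod
    def create_volume(self, name, size_gb): ...

class LVMDriver(VolumeDriver):
    def create_volume(self, name, size_gb):
        return f"lvm:{name}:{size_gb}G"     # would invoke lvcreate here

class NFSDriver(VolumeDriver):
    def create_volume(self, name, size_gb):
        return f"nfs:{name}:{size_gb}G"     # would create a file-backed volume

def load_driver(backend):
    """The service picks one driver from its configuration at startup."""
    return {"lvm": LVMDriver, "nfs": NFSDriver}[backend]()

print(load_driver("lvm").create_volume("vol1", 10))  # lvm:vol1:10G
```

Because callers only see the `VolumeDriver` interface, a new storage provider can be supported by adding one class.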

Messaging queue

Routes information between the Block Storage processes.

2.3.5 Shared File Systems service overview

The OpenStack Shared File Systems service (manila) provides file storage to a virtual machine. The Shared File Systems service provides an infrastructure for managing and provisioning of file shares. The service also enables management of share types as well as share snapshots if a driver supports them.

The Shared File Systems service consists of the following components:


manila-api

A WSGI app that authenticates and routes requests throughout the Shared File Systems service. It supports the OpenStack APIs.


manila-data

A standalone service whose purpose is to receive requests, process data operations such as copying, share migration, or backup, and send back a response after an operation has been completed.


manila-scheduler

Schedules and routes requests to the appropriate share service. The scheduler uses configurable filters and weighers to route requests. The Filter Scheduler is the default and enables filters on things like capacity, availability zone, share types, and capabilities, as well as custom filters.


manila-share

Manages back-end devices that provide shared file systems. A manila-share process can run in one of two modes: with or without handling of share servers. Share servers export file shares via share networks. When share servers are not used, the networking requirements are handled outside of manila.

Messaging queue

Routes information between the Shared File Systems processes.

For more information, see OpenStack Configuration Reference.

2.3.6 Networking service overview

OpenStack Networking (neutron) allows you to create and attach interface devices managed by other OpenStack services to networks. Plug-ins can be implemented to accommodate different networking equipment and software, providing flexibility to OpenStack architecture and deployment.

It includes the following components:


neutron-server

Accepts and routes API requests to the appropriate OpenStack Networking plug-in for action.

OpenStack Networking plug-ins and agents

Plug and unplug ports, create networks or subnets, and provide IP addressing. These plug-ins and agents differ depending on the vendor and technologies used in the particular cloud. OpenStack Networking ships with plug-ins and agents for Cisco virtual and physical switches, NEC OpenFlow products, Open vSwitch, Linux bridging, and the VMware NSX product.

The common agents are L3 (layer 3), DHCP (dynamic host IP addressing), and a plug-in agent.

Messaging queue

Used by most OpenStack Networking installations to route information between the neutron-server and various agents. Also acts as a database to store networking state for particular plug-ins.

OpenStack Networking mainly interacts with OpenStack Compute to provide networks and connectivity for its instances.

2.3.7 Dashboard overview

The OpenStack Dashboard is a modular Django web application that provides a graphical interface to OpenStack services.

The dashboard is usually deployed through mod_wsgi in Apache. You can modify the dashboard code to make it suitable for different sites.

From a network architecture point of view, this service must be accessible to customers and the public API for each OpenStack service. To use the administrator functionality for other services, it must also connect to Admin API endpoints, which should not be accessible by customers.

2.3.8 Identity service overview

The OpenStack Identity service (keystone) provides a single point of integration for managing authentication, authorization, and a catalog of services.

The Identity service is typically the first service a user interacts with. Once authenticated, an end user can use their identity to access other OpenStack services. Likewise, other OpenStack services leverage the Identity service to ensure users are who they say they are and discover where other services are within the deployment. The Identity service can also integrate with some external user management systems (such as LDAP).

Users and services can locate other services by using the service catalog, which is managed by the Identity service. As the name implies, a service catalog is a collection of available services in an OpenStack deployment. Each service can have one or many endpoints and each endpoint can be one of three types: admin, internal, or public.

In a production environment, different endpoint types might reside on separate networks exposed to different types of users for security reasons. For instance, the public API network might be visible from the Internet so customers can manage their clouds. The admin API network might be restricted to operators within the organization that manages cloud infrastructure. The internal API network might be restricted to the hosts that contain OpenStack services.

OpenStack supports multiple regions for scalability. However, Cloud 8 does not support multiple regions, so this guide uses the default RegionOne region. In addition, for simplicity, this guide uses the management network for all endpoint types.

Together, regions, services, and endpoints created within the Identity service comprise the service catalog for a deployment. Each OpenStack service in your deployment needs a service entry with corresponding endpoints stored in the Identity service. This can all be done after the Identity service has been installed and configured.
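Structurally, the catalog is a list of services, each carrying typed endpoints. This sketch shows a lookup by service type and endpoint interface; the URLs are placeholders, not addresses from this guide:

```python
catalog = [
    {
        "type": "compute",
        "name": "nova",
        "endpoints": [
            {"interface": "public",   "region": "RegionOne",
             "url": "http://controller:8774/v2.1"},
            {"interface": "internal", "region": "RegionOne",
             "url": "http://controller:8774/v2.1"},
            {"interface": "admin",    "region": "RegionOne",
             "url": "http://controller:8774/v2.1"},
        ],
    },
]

def endpoint_for(catalog, service_type, interface, region="RegionOne"):
    """Resolve a service endpoint the way clients use the catalog."""
    for service in catalog:
        if service["type"] != service_type:
            continue
        for ep in service["endpoints"]:
            if ep["interface"] == interface and ep["region"] == region:
                return ep["url"]
    raise LookupError(f"{service_type}/{interface} not in catalog")

print(endpoint_for(catalog, "compute", "public"))
```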

The Identity service contains these components:


Server

A centralized server provides authentication and authorization services using a RESTful interface.


Drivers

Drivers, or a service back end, are integrated with the centralized server. They are used for accessing identity information in repositories external to OpenStack, and they may already exist in the infrastructure where OpenStack is deployed (for example, SQL databases or LDAP servers).


Modules

Middleware modules run in the address space of the OpenStack component that is using the Identity service. These modules intercept service requests, extract user credentials, and send them to the centralized server for authorization. The integration between the middleware modules and OpenStack components uses the Python Web Server Gateway Interface.
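A schematic of such a middleware module follows. It is greatly simplified: the real keystonemiddleware validates each token against the Identity server, whereas this sketch checks a local stand-in set:

```python
VALID_TOKENS = {"gAAAAA-demo-token"}  # stand-in for asking the Identity server

class AuthTokenMiddleware:
    """Intercept each request, extract X-Auth-Token, reject unknown tokens."""
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        token = environ.get("HTTP_X_AUTH_TOKEN")
        if token not in VALID_TOKENS:
            start_response("401 Unauthorized", [("Content-Type", "text/plain")])
            return [b"Authentication required"]
        return self.app(environ, start_response)

def service_app(environ, start_response):
    """The wrapped OpenStack service, reduced to a trivial WSGI app."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

app = AuthTokenMiddleware(service_app)

# Minimal WSGI invocation with a valid token:
status = {}
body = app({"HTTP_X_AUTH_TOKEN": "gAAAAA-demo-token"},
           lambda s, h: status.update(code=s))
print(status["code"])  # 200 OK
```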

2.3.9 Image service overview

The OpenStack Image service is central to Infrastructure-as-a-Service (IaaS) as shown in Section 2.1, “Conceptual architecture”. It accepts API requests for disk or server images, and metadata definitions from end users or OpenStack Compute components. It also supports the storage of disk or server images on various repository types, including OpenStack Object Storage.

A number of periodic processes run on the OpenStack Image service to support caching. Replication services ensure consistency and availability through the cluster. Other periodic processes include auditors, updaters, and reapers.

The OpenStack Image service includes the following components:


glance-api

Accepts Image API calls for image discovery, retrieval, and storage.


glance-registry

Stores, processes, and retrieves metadata about images. Metadata includes items such as size and type.

Note: The registry is a private internal service meant for use by the OpenStack Image service. Do not expose this service to users.


Database

Stores image metadata; you can choose your database depending on your preference. Most deployments use MySQL or SQLite.

Storage repository for image files

Various repository types are supported including normal file systems (or any filesystem mounted on the glance-api controller node), Object Storage, RADOS block devices, VMware datastore, and HTTP. Note that some repositories will only support read-only usage.

Metadata definition service

A common API for vendors, admins, services, and users to meaningfully define their own custom metadata. This metadata can be used on different types of resources like images, artifacts, volumes, flavors, and aggregates. A definition includes the new property's key, description, constraints, and the resource types which it can be associated with.

2.3.10 Telemetry service overview

Telemetry Data Collection service

The Telemetry Data Collection services provide the following functions:

  • Efficiently polls metering data related to OpenStack services.

  • Collects event and metering data by monitoring notifications sent from services.

  • Publishes collected data to various targets including data stores and message queues.

The Telemetry service consists of the following components:

A compute agent (ceilometer-agent-compute)

Runs on each compute node and polls for resource utilization statistics. Currently, the compute agent is the only agent of this type; other types may be added in the future.

A central agent (ceilometer-agent-central)

Runs on a central management server to poll for resource utilization statistics for resources not tied to instances or compute nodes. Multiple agents can be started to scale service horizontally.

A notification agent (ceilometer-agent-notification)

Runs on a central management server(s) and consumes messages from the message queue(s) to build event and metering data.

A collector (ceilometer-collector)

Runs on central management server(s) and dispatches collected telemetry data to a data store or external consumer without modification.

An API server (ceilometer-api)

Runs on one or more central management servers to provide data access from the data store.

Telemetry Alarming service

The Telemetry Alarming services trigger alarms when the collected metering or event data break the defined rules.

The Telemetry Alarming service consists of the following components:

An API server (aodh-api)

Runs on one or more central management servers to provide access to the alarm information stored in the data store.

An alarm evaluator (aodh-evaluator)

Runs on one or more central management servers to determine when alarms fire due to the associated statistic trend crossing a threshold over a sliding time window.
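The threshold evaluation can be pictured as a check over a sliding window of recent samples. A toy version follows; the window size and averaging rule are invented for the example, not aodh's configuration:

```python
from collections import deque

def make_evaluator(threshold, window=3):
    """Fire when the average of the last `window` samples exceeds threshold."""
    samples = deque(maxlen=window)

    def evaluate(value):
        samples.append(value)
        if len(samples) < window:
            return "insufficient data"
        avg = sum(samples) / window
        return "alarm" if avg > threshold else "ok"

    return evaluate

evaluate = make_evaluator(threshold=80.0)
print([evaluate(v) for v in [70, 75, 85, 95, 99]])
# ['insufficient data', 'insufficient data', 'ok', 'alarm', 'alarm']
```

The alarm fires only once the trend, not a single spike, crosses the threshold.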

A notification listener (aodh-listener)

Runs on a central management server and determines when to fire alarms. The alarms are generated based on defined rules against events, which are captured by the Telemetry Data Collection service's notification agents.

An alarm notifier (aodh-notifier)

Runs on one or more central management servers to allow alarms to be set based on the threshold evaluation for a collection of samples.

These services communicate by using the OpenStack messaging bus. Only the collector and API server have access to the data store.

2.3.11 Orchestration service overview

The Orchestration service provides a template-based orchestration for describing a cloud application by running OpenStack API calls to generate running cloud applications. The software integrates other core components of OpenStack into a one-file template system. The templates allow you to create most OpenStack resource types such as instances, floating IPs, volumes, security groups, and users. It also provides advanced functionality such as instance high availability, instance auto-scaling, and nested stacks. This enables OpenStack core projects to receive a larger user base.
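A HOT template is a declarative description of resources. It is sketched here as the equivalent Python dictionary (real templates are YAML, and the property values below are placeholders):

```python
template = {
    "heat_template_version": "2016-04-08",
    "description": "One server built from a user-supplied image",
    "parameters": {
        "image": {"type": "string"},
    },
    "resources": {
        "server": {
            "type": "OS::Nova::Server",
            "properties": {
                "image": {"get_param": "image"},   # filled in at stack-create
                "flavor": "m1.small",
            },
        },
    },
}

# heat-engine walks this resource graph and issues the OpenStack API
# calls needed to realize each declared resource.
print(sorted(template["resources"]))  # ['server']
```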

The service enables deployers to integrate with the Orchestration service directly or through custom plug-ins.

The Orchestration service consists of the following components:

heat command-line client

A CLI that communicates with the heat-api to run AWS CloudFormation APIs. End developers can directly use the Orchestration REST API.

heat-api component

An OpenStack-native REST API that processes API requests by sending them to the heat-engine over Remote Procedure Call (RPC).

heat-api-cfn component

An AWS Query API that is compatible with AWS CloudFormation. It processes API requests by sending them to the heat-engine over RPC.


heat-engine

Orchestrates the launching of templates and provides events back to the API consumer.

2.3.12 Database service overview

The Database service provides scalable and reliable cloud provisioning functionality for both relational and non-relational database engines. Users can quickly and easily use database features without the burden of handling complex administrative tasks. Cloud users and database administrators can provision and manage multiple database instances as needed.

The Database service provides resource isolation at high performance levels and automates complex administrative tasks such as deployment, configuration, patching, backups, restores, and monitoring.

Process flow example

This example is a high-level process flow for using Database services:

  1. The OpenStack Administrator configures the basic infrastructure using the following steps:

    1. Install the Database service.

    2. Create an image for each type of database. For example, one for MySQL and one for MongoDB.

    3. Use the trove-manage command to import images and offer them to tenants.

  2. The OpenStack end user deploys the Database service using the following steps:

    1. Create a Database service instance using the trove create command.

    2. Use the trove list command to get the ID of the instance, followed by the trove show command to get the IP address of it.

    3. Access the Database service instance using typical database access commands. For example, with MySQL:

      $ mysql -u myuser -p -h TROVE_IP_ADDRESS mydb


The Database service includes the following components:

python-troveclient command-line client

A CLI that communicates with the trove-api component.

trove-api component

Provides an OpenStack-native RESTful API that supports JSON to provision and manage Trove instances.

trove-conductor service

Runs on the host, and receives messages from guest instances that want to update information on the host.

trove-taskmanager service

Instruments the complex system flows that support provisioning instances, managing the lifecycle of instances, and performing operations on instances.

trove-guestagent service

Runs within the guest instance. Manages and performs operations on the database itself.

2.3.13 Data Processing service overview

The Data processing service for OpenStack (sahara) aims to provide users with a simple means to provision data processing (Hadoop, Spark) clusters by specifying several parameters like Hadoop version, cluster topology, node hardware details and a few more. After a user fills in all the parameters, the Data processing service deploys the cluster in a few minutes. Sahara also provides a means to scale already provisioned clusters by adding or removing worker nodes on demand.

The solution addresses the following use cases:

  • Fast provisioning of Hadoop clusters on OpenStack for development and QA.

  • Utilization of unused compute power from general purpose OpenStack IaaS cloud.

  • Analytics-as-a-Service for ad-hoc or bursty analytic workloads.

Key features are:

  • Designed as an OpenStack component.

  • Managed through REST API with UI available as part of OpenStack Dashboard.

  • Support for different Hadoop distributions:

    • Pluggable system of Hadoop installation engines.

    • Integration with vendor specific management tools, such as Apache Ambari or Cloudera Management Console.

  • Predefined templates of Hadoop configurations with the ability to modify parameters.

  • User-friendly UI for ad-hoc analytics queries based on Hive or Pig.

2.4 Feedback

To provide feedback on documentation, join and use the openstack-docs@lists.openstack.org mailing list at OpenStack Documentation Mailing List, or report a bug.
