2 Business aspect #
Agility is driving developers toward more cloud-native methodologies that focus on microservices architectures and streamlined workflows. Container technologies, like Kubernetes, embody this agile approach and help enable cloud-native transformation.
By unifying IT operations with Kubernetes, organizations realize key benefits like increased reliability, improved security and greater efficiencies with standardized automation. Enterprises therefore adopt Kubernetes infrastructure platforms to deliver:
- Cluster Operations
Improved Production and DevOps efficiencies with simplified cluster usage and robust operations
- Security Policy & User Management
Consistent security policy enforcement plus advanced user management on any Kubernetes infrastructure
- Access to Shared Tools & Services
A high level of reliability with easy, consistent access to a broad set of tools and services
2.1 Business problem #
Many organizations are deploying Kubernetes clusters everywhere — in the cloud, on-premises, and at the edge — to unify IT operations. Such organizations can realize dramatic benefits, including:
- Consistently deliver a high level of reliability on any infrastructure
- Improve DevOps efficiency with standardized automation
- Ensure enforcement of security policies on any infrastructure
However, relying on upstream Kubernetes alone can introduce extra overhead and risk, because Kubernetes clusters are typically deployed:
- Without central visibility
- Without consistent security policies
- And must each be managed independently
Deploying a scalable Kubernetes infrastructure requires consideration of a larger ecosystem, encompassing many software and infrastructure components and providers. Further, it requires the ability to continually address the needs and concerns of:
- Developers
Developers focus on writing code to build their apps securely using a preferred workflow, and need a simple, push-button deployment mechanism for their containerized workloads wherever needed.
- IT Operators
General infrastructure requirements still rely upon the traditional IT pillars for the stacked, underlying infrastructure. Ease of deployment, availability, scalability, resiliency, performance, security and integrity remain core concerns to be addressed for administrative control and observability.
Beyond the core infrastructure software layers of managed Kubernetes clusters, organizations may also be impacted by:
- Compute Platform
Potential inconsistencies and impacts of multiple target system platforms for the distributed deployments of the cluster elements, across:
physical bare-metal servers, hypervisors and virtual machines
2.2 Business value #
With Rancher Kubernetes Engine (RKE), the operation of Kubernetes is easily automated and entirely independent of the operating system and platform being run. Using a supported version of the container runtime engine, one can deploy and run Kubernetes with Rancher Kubernetes Engine. It builds a cluster from a single command in a few minutes, and its declarative configuration makes Kubernetes upgrades atomic and safe.
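To make the declarative workflow concrete, the following is a minimal sketch of the kind of cluster configuration file RKE consumes; the node addresses, SSH user and version string are illustrative assumptions, not values from this reference.

```yaml
# cluster.yml -- minimal illustrative RKE cluster definition (hypothetical hosts)
nodes:
  - address: 192.168.1.10        # assumed control plane / etcd host
    user: rancher                # assumed SSH user with access to the container runtime
    role: [controlplane, etcd]
  - address: 192.168.1.11        # assumed worker host
    user: rancher
    role: [worker]
# Pin the Kubernetes release; editing this value and re-running the
# deployment command is how an upgrade is driven declaratively.
kubernetes_version: v1.24.17-rancher1-1   # example version string
```

Running `rke up` against such a file builds the cluster; re-running it after a change reconciles the cluster toward the declared state, which is what makes upgrades predictable and repeatable.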
By allowing operations teams to focus on infrastructure and developers to deploy code the way they want to, SUSE and the Rancher offerings help bring products to market faster and accelerate an organization’s digital transformation.
SUSE Rancher is a fundamental part of the complete software stack for teams adopting containers. It provides DevOps teams with integrated tools for running containerized workloads while also addressing the operational and security challenges of managing multiple Kubernetes clusters across any targeted infrastructure.
- Developers
SUSE Rancher makes it easy to securely deploy containerized applications no matter where the Kubernetes infrastructure runs: in the cloud, on-premises, or at the edge. Developers can use Helm or the App Catalog to deploy and manage applications across any or all of these environments, ensuring multi-cluster consistency with a single deployment process (see the sketch after this list).
- IT Operators
SUSE Rancher not only deploys and manages production-grade Kubernetes clusters from datacenter to cloud to the edge, it also unites them with centralized authentication, access control and observability. Further, it streamlines cluster deployment on bare metal or virtual machines and maintains them using defined security policies.
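As one illustration of that single, multi-cluster deployment process, the sketch below assumes the use of Fleet, the continuous-delivery component that ships with SUSE Rancher, to roll the same Helm chart out to every downstream cluster matching a label; the repository URL, chart path and cluster label are hypothetical.

```yaml
# Hypothetical Fleet GitRepo: deliver one Helm chart to all matching downstream clusters
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: sample-app
  namespace: fleet-default                      # workspace Rancher uses for downstream clusters
spec:
  repo: https://github.com/example/sample-app   # assumed Git repository holding the chart
  branch: main
  paths:
    - charts/sample-app                         # assumed path to the Helm chart
  targets:
    - clusterSelector:
        matchLabels:
          env: production                       # assumed label applied to the target clusters
```

Applying this single object on the Rancher management cluster deploys, and keeps in sync, the same workload on every selected cluster, which is the multi-cluster consistency described above.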
With this increased consistency of the managed Kubernetes infrastructure clusters, organizations benefit from an even higher level of the Cloud Native Computing model, where each layer relies only upon the API and version of the adjacent layer, such as:
- Compute Platform
Using the above software application and technology solutions with the server platforms offered by Cisco Unified Computing System (UCS) brings increased productivity, reduced total cost of ownership, and scalability into your computing realm. Cisco UCS is based upon industry-standard, x86-architecture servers with Cisco innovations and delivers a better balance of CPU, memory, and I/O resources. This balance brings processor power to life with more than 150 world-record-setting benchmark results that demonstrate leadership in application areas including virtualization, cloud computing, enterprise applications, database management systems, enterprise middleware, high-performance computing, and basic CPU integer and floating-point performance metrics.
- Match servers to workloads
The breadth of the server product line makes the process of matching servers to workloads straightforward, enabling you to achieve the best balance of CPU, memory, I/O, internal disk, and external storage-access resources using the blade, rack, multinode, or storage server form factor that best meets your organization’s data center requirements and preferred purchasing model. These servers are powered by AMD EPYC processors or Intel Xeon Scalable processors.
- Industry-leading bandwidth
Cisco UCS virtual interface cards have dramatically simplified the deployment of servers for specific applications. By making the number and type of I/O devices programmable on demand, they enable organizations to deploy and repurpose server I/O configurations without ever touching the hardware.
- Lower infrastructure cost
Cisco UCS is designed for lower infrastructure cost per server, a choice that makes scaling fast, easy, and inexpensive in comparison to manually configured approaches.
- Rack server deployment flexibility
Cisco UCS C-Series Rack Servers are unique in the industry because they can be integrated with Cisco UCS connectivity and management or used as stand-alone servers.
- Integrated Management Controller (IMC)
The IMC runs in the system’s Baseboard Management Controller (BMC). When a Cisco UCS C-Series Rack Server is integrated into a Cisco UCS domain, the fabric interconnects interface with the IMC to make the server part of a single unified management domain. When a server is used as a standalone server, direct access to the IMC through the server’s management port allows a range of software tools (including Cisco Intersight) to configure the server through its API.