Kubernetes on your Platform

As of Apprenda 7.0.0, you can add a Kubernetes cluster to your Platform and use Platform features to deploy and manage applications on the cluster. If you already have a cluster added to your Platform, learn more about creating applications with Kubernetes components.

Cluster Requirements

The Platform will not create a Kubernetes cluster for you; you must have an existing cluster before it can be added to the Platform. Clusters added to the Platform must meet the following requirements:

  • Running Kubernetes 1.4 or later
  • The cluster URL is resolvable to an IP address
  • A user that is able to interact with the Kubernetes API or, as of Platform version 7.2.0, an X.509 certificate signed by a Certificate Authority that is trusted by the Kubernetes API server
  • Cluster nodes are accessible to the Platform; this can be a subset of nodes on the cluster that the Platform can use as gateways to access the cluster

If your cluster meets these requirements, add it to the Platform on the Infrastructure>Clouds page in the SOC.
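
To quickly sanity-check a cluster against these requirements, you can verify the Kubernetes version and confirm that the cluster URL resolves from a machine with network access to the cluster. The hostname below is a placeholder for your own cluster URL.

kubectl version          # the reported Server Version should be 1.4 or later
nslookup k8s.example.com # the cluster URL should resolve to an IP address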

Creating a cluster

If you do not already have a cluster, we recommend using the Kismatic Enterprise Toolkit (KET) to set up your cluster. KET is Apprenda's open source cluster installer that simplifies the process of creating a Kubernetes cluster for enterprise environments.

See the KET docs for more about installing Kubernetes with KET.

Why add a Kubernetes cluster?

The Platform acts as a governing layer for your Kubernetes cluster, allowing you to leverage the native behavior of the Platform to manage application settings before deploying the application onto the cluster. Using the Platform to manage both your new Kubernetes applications and your existing applications simplifies interactions between the two groups. The Platform also allows you to separate the concerns of developers and cluster managers, letting developers focus on building better applications and operators focus on efficiently managing infrastructure.

Utilizing Platform features like Resource Policies, Custom Properties, or Platform Registry Settings, Platform Operators can establish rules for the behavior of Pods on the cluster. The Platform will automatically inject your desired configurations into spec files at deploy time.
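
If you want to confirm what the Platform injected for a deployed component, one option is to inspect the live objects with kubectl. The Pod name below is a placeholder, and the acp Namespace is the Platform default described later on this page.

kubectl get pods --namespace acp
kubectl get pod my-component-pod --namespace acp -o yaml   # look under spec.containers[].resources for the injected requests and limits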

The Platform also abstracts away routing concerns by creating Services for all your applications by default. Developers are able to focus on developing their applications without having to be concerned with the internal workings of Kubernetes. Once a spec file is uploaded, the Platform handles the necessary configuration for things like aligning with resource management policies or exposing your application at a URL.
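
As a quick illustration, the Services the Platform creates can be listed and inspected like any other Service on the cluster (the Service name below is a placeholder):

kubectl get services --namespace acp
kubectl describe service my-app --namespace acp   # shows the ports and endpoints Kubernetes is routing for the application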

How does the Platform connect to a cluster?

The Platform does not make any configuration changes to your cluster and will not affect any existing applications on the cluster. Instead, the Platform will create a new Namespace for all applications that are created via the Platform. The default Namespace is acp, for the Apprenda Cloud Platform. The Platform will not know about applications in other Namespaces, so it's recommended, though not required, that you use a cluster that is solely dedicated to applications from the Platform.
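
Because everything the Platform deploys lives in its own Namespace, you can see the Platform's footprint on the cluster by scoping kubectl to that Namespace:

kubectl get namespaces                      # the Platform's applications live in the acp Namespace by default
kubectl get pods,services --namespace acp   # resources in other Namespaces are not managed by the Platform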

Gateway Nodes

Gateway nodes are how the Platform is able to access the cluster. Assigned when the cluster is added to the Platform, they allow the Platform to connect to the cluster to route traffic to deployed Pods.

The Platform must be able to reach every node assigned as a gateway node, and you must have at least one gateway node in order for the Platform to interact with the cluster. It's recommended that you assign more than one gateway node to handle traffic loads and to ensure that at least one node is always available for the Platform to connect to the cluster.

Accessing the Kubernetes API

The Kubernetes REST API is the main entry point to a Kubernetes cluster. It is used by operators and developers to manage all the resources that are deployed on the cluster. In order to access the API, the user has to authenticate using one of the supported authentication strategies, such as HTTP Basic Auth, static tokens, or X.509 client certificates. Once the user is authenticated, the Authorization module kicks in to verify that the user has permission to perform the requested action.

The Platform interacts with the Kubernetes API to deploy and manage the Kubernetes components of applications. In order to communicate with the cluster, the Platform’s Cluster Manager needs to authenticate with the Kubernetes API server. As of Platform version 7.2.0, the Platform supports different authentication strategies, and allows the operator to select the strategy that should be used. In Platform versions before 7.2.0, only Basic Authentication is supported.

Basic Authentication uses HTTP Basic Authentication, in which the Cluster Manager sends a username and password when issuing a request to the API server.
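
For example, with Basic Authentication the same kind of request the Cluster Manager makes can be issued manually with curl. The URL, port, and credentials below are placeholders; -k skips server certificate verification and should only be used for a quick test.

curl -k -u apprenda:secret https://k8s.example.com:6443/api/v1/namespaces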

X.509 Client Certificate relies on X.509 certificates signed by a Certificate Authority that is trusted by the Kubernetes API server. The Cluster Manager presents this certificate to the Kubernetes API server when issuing requests. If the API server trusts the signing CA of the certificate presented by the Cluster Manager, the request will be authenticated. Once authenticated, the API server will derive the username and groups from the certificate itself. The username is derived from the Common Name of the Subject, and the groups are obtained from the Organization fields of the certificate.
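
You can check which username and groups a client certificate will map to by reading its Subject with OpenSSL (admin.pem here is the client certificate file used in the example below):

openssl x509 -in admin.pem -noout -subject   # CN becomes the username, O entries become the groups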

To create a PFX file from an existing certificate and private key, you may use OpenSSL.

Requirements:

  • OpenSSL
  • Client certificate
  • Client certificate’s corresponding private key
  • Certificate Authority’s certificate or public key

The following OpenSSL command will create an admin.pfx file using the client certificate file named admin.pem, the corresponding private key file named admin-key.pem and the CA cert file named ca.pem.

openssl pkcs12 -export -out admin.pfx -inkey admin-key.pem -in admin.pem -certfile ca.pem
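
If you want to confirm the contents of the resulting file before using it, OpenSSL can read the PFX back (you will be prompted for the export password you chose):

openssl pkcs12 -in admin.pfx -info -noout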

Viewing cluster nodes on the Platform

When the Platform first connects to a cluster, it asks Kubernetes for the topology of the cluster and then adds each of the reported nodes as a Kubernetes Linux Host. The reported nodes are visible on the Infrastructure>Servers page of the SOC. You will be able to view all nodes on the cluster and the Pods deployed to each node through the SOC.
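
The node list shown in the SOC should match what the cluster itself reports:

kubectl get nodes -o wide   # names and addresses of the nodes the Platform adds as Kubernetes Linux Hosts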

Resource Management for Kubernetes Components

You can create and assign Resource Policies to Kubernetes components to help manage resource consumption on your cluster. Resource Policies are translated into resource requests and limits in the PodTemplate for a Kubernetes component by the Platform at deploy time. When the Pods are deployed to the cluster, Kubernetes handles throttling resources. See the Kubernetes documentation for more about resource management on Kubernetes.

An important difference in resource management between Kubernetes and the Platform is that Kubernetes applies resource limits at the container level while the Platform applies them at the Pod level. This means that all containers within the same Pod are each given the same resource limits as the assigned Resource Policy, making the total allotment n * CPU/Memory Limit (where n is the number of containers in the Pod). For example, if a Pod is defined with 2 containers and assigned a Resource Policy that sets a limit of .1 CPU, both containers will be assigned a limit of .1, making the total consumption limit .2 cores.
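
One way to see this behavior is to describe a Pod deployed by the Platform; each container reports its own limits, which for a .1 CPU Resource Policy on a two-container Pod works out to .1 CPU (100m) per container and .2 CPU for the Pod as a whole. The Pod name below is a placeholder.

kubectl describe pod my-component-pod --namespace acp   # check the Limits and Requests listed for each container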

Heapster

The Platform utilizes Heapster to collect resource utilization information for the cluster. In order to view this information on the Platform, you must configure your cluster to use Heapster. This is not required to add the cluster to the Platform, but it is recommended so that Platform Operators and Developers can view accurate information in the SOC and Development Portal. See Heapster's documentation for more on how to configure it on your cluster.
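
A quick way to check whether Heapster is available is to look for its Pod in the kube-system Namespace (the exact Pod name and labels depend on how Heapster was deployed):

kubectl get pods --namespace kube-system   # look for a heapster Pod in the Running state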

Note that utilization information will be reported in fractions of cores instead of MHz to more accurately reflect how Kubernetes tracks resource consumption on your cluster. The Platform will poll your cluster at intervals defined by Kubernetes.PodUtilizationUpdatePeriodSeconds (default: 60 seconds) to collect utilization data if Kubernetes.CapturePodUtilization is enabled (default: True). If you do not set up Heapster, no utilization information will be available on the Platform.