This is documentation for the current major version Apprenda 7.
Documentation for older versions is also available.

Kubernetes on your Platform

You can add a Kubernetes cluster to your Platform and use Platform features to deploy and manage applications on the cluster. If you already have a cluster added to your Platform, see more about creating applications with Kubernetes components.

Cluster Requirements

The Platform will not create a Kubernetes cluster for you; you must have an existing cluster before it can be added to the Platform. Clusters added to the Platform must meet the following requirements:

  • Running Kubernetes 1.4 or later
  • The cluster URL is resolvable to an IP address
  • A user that can interact with the Kubernetes API (e.g., create and delete Pods) and that can be used by the Platform
  • Cluster nodes are accessible to the Platform. This can be a subset of nodes on the cluster that the Platform can use as gateways to access the cluster
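One of the requirements above, a cluster URL that resolves to an IP address, is easy to check ahead of time. The sketch below uses Python's standard library; the hostname you pass in would be your own cluster URL:

```python
import socket

def cluster_url_resolves(hostname: str) -> bool:
    """Return True if the given cluster hostname resolves to an IP address."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        # DNS lookup failed; the Platform would not be able to reach this URL.
        return False
```

For example, `cluster_url_resolves("k8s.example.internal")` should return True before you attempt to add that cluster to the Platform.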

If your cluster meets these requirements, add it to the Platform on the Infrastructure>Clouds page in the SOC.

Creating a cluster

If you do not already have a cluster, we recommend using the Kismatic Enterprise Toolkit (KET) to set up your cluster. KET is Apprenda's open source cluster installer that simplifies the process of creating a Kubernetes cluster for enterprise environments.

See more about installing with KET.

Why add a Kubernetes cluster?

The Platform acts as a governing layer to your Kubernetes cluster, allowing you to leverage the native behavior of the Platform to manage application settings before deploying the application onto the cluster. Using the Platform to manage both your new Kubernetes applications and existing applications will simplify interactions between the two groups. The Platform also separates the concerns of developers and cluster managers, allowing developers to focus on building better applications and operators to focus on efficiently managing infrastructure.

Utilizing Platform features like Resource Policies, Custom Properties, or Platform Registry Settings, Platform Operators can establish rules for the behavior of Pods on the cluster. The Platform will automatically inject your desired configurations into spec files at deploy time.
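Conceptually, the injection step is a merge of Platform-managed settings into the uploaded spec. The function below is a hypothetical illustration of that idea, not the Platform's actual implementation; the `resource_limits` and `custom_properties` arguments stand in for values derived from Resource Policies and Custom Properties:

```python
import copy

def inject_platform_settings(pod_spec: dict, resource_limits: dict,
                             custom_properties: dict) -> dict:
    """Hypothetical sketch: merge Platform-managed settings into a Pod spec.

    The real injection logic is internal to the Platform and may differ;
    this only shows the general shape of the transformation.
    """
    spec = copy.deepcopy(pod_spec)  # leave the uploaded spec untouched
    # Surface Custom Properties as Pod labels so they are visible on the cluster.
    spec.setdefault("metadata", {}).setdefault("labels", {}).update(custom_properties)
    # Apply the Resource Policy's limits to every container in the Pod.
    for container in spec.get("spec", {}).get("containers", []):
        container.setdefault("resources", {})["limits"] = dict(resource_limits)
    return spec
```

A developer's spec file stays free of these operational details; the governing layer fills them in at deploy time.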

The Platform also abstracts away routing concerns by creating Services for all your applications by default and registering them with the Platform's Load Manager. Developers can focus on developing their applications without having to be concerned with the internal workings of Kubernetes. Once a spec file is uploaded, the Platform handles the necessary configurations for things like aligning with resource management policies or exposing your application at a URL.

How does the Platform connect to a cluster?

The Platform does not make any configuration changes to your cluster and will not affect any existing applications on the cluster. Instead, the Platform will create a new Namespace for all applications that are created via the Platform. The default Namespace is acp, for the Apprenda Cloud Platform. The Platform will not know about applications in other Namespaces, so it's recommended, though not required, that you use a cluster that is solely dedicated to applications from the Platform.

Gateway Nodes

Gateway nodes are how the Platform is able to access the cluster. Assigned when the cluster is added to the Platform, they allow the Platform to connect to the cluster to route traffic to deployed Pods.

The Platform must be able to reach all nodes assigned as gateway nodes, and you must have at least one gateway node assigned in order for the Platform to interact with the cluster. It's recommended that you assign more than one node as a gateway to handle traffic loads and to ensure that at least one node is always available for the Platform to connect to the cluster.

Viewing cluster nodes on the Platform

When the Platform first connects to a cluster, it asks Kubernetes for the topology of the cluster and then adds each of the reported nodes as a Kubernetes Linux Host. The reported nodes are visible on the Infrastructure>Servers page of the SOC. You will be able to view all nodes on the cluster and the Pods deployed to each node through the SOC.

Resource Management for Kubernetes Components

You can create and assign Resource Policies to Kubernetes components to help manage resource consumption on your cluster. Resource Policies are translated into resource requests and limits in a PodTemplate for a Kubernetes component by the Platform at deploy time. When the Pods are deployed to the cluster, Kubernetes handles throttling resources. See the Kubernetes documentation for more about resource management on Kubernetes.

An important difference in resource management between Kubernetes and the Platform is that Kubernetes applies resource limits at the container level, while the Platform applies them at the Pod level. This means that all containers within the same Pod are each given the same resource limits as the assigned Resource Policy, making the total allotment n * CPU/Memory Limit (where n is the number of containers in the Pod). For example, if a Pod is defined with 2 containers and assigned a Resource Policy that sets a limit of .1 CPU, both containers will be assigned a limit of .1, making the total consumption limit .2 cores.
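The arithmetic above can be written out as a small helper. This is only an illustration of the n * limit rule, not Platform code:

```python
def total_pod_limit(policy_cpu_limit: float, container_count: int) -> float:
    """Total CPU a Pod may consume when a Resource Policy's per-Pod limit
    is applied to each of its containers individually."""
    return policy_cpu_limit * container_count

# The worked example from the text: 2 containers, each limited to .1 CPU,
# gives a total consumption limit of .2 cores.
print(total_pod_limit(0.1, 2))
```

Keep this multiplier in mind when sizing Resource Policies for multi-container Pods, since a policy written with a single container in mind will allow proportionally more consumption.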


The Platform utilizes Heapster to collect resource utilization information for the cluster. In order to view this information on the Platform, you must configure your cluster to use Heapster. This is not required to add the cluster to the Platform, but it is recommended so that Platform Operators and Developers can view accurate information in the SOC and Development Portal. See Heapster's documentation for more on how to configure it on your cluster.

Note that utilization information will be reported in fractions of cores instead of MHz to more accurately reflect resource consumption on your cluster as Kubernetes tracks it. The Platform will poll your cluster at the interval defined in the Kubernetes.PodUtilizationUpdatePeriodSeconds registry setting (default: 60 seconds) to collect utilization data if Kubernetes.CapturePodUtilization is enabled (default: True). If you do not set up Heapster, no utilization information will be available on the Platform.