This is documentation for the current major version Apprenda 7.

Deploy and Manage Kubernetes Components

This page explains how you can create and deploy guest applications on a Kubernetes cluster using your Platform. Before deploying applications, Platform Operators must first add a cluster to the Platform. You should be familiar with application fundamentals and working with Kubernetes before continuing with this guide.

Pod Definition Requirements

Pod specs that you plan to upload to the Platform must meet the following requirements:

  • Only ReplicationControllers, ReplicaSets, or Deployments are allowed (the Platform will create a Service for you automatically)
  • Spec files must be in YAML, YML, or JSON
  • Only a single resource can be defined in the spec file

See the Kubernetes documentation for more about Pods.


At least one container must be defined in the spec file, but you can define more. If the containers defined in the spec expose more than one port in total, exactly one of those ports must be named "http" to indicate that it is the port the Platform routes HTTP traffic to for the application. If only one port is specified, it will be automatically selected for this purpose.
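As a sketch, a spec file meeting these requirements might look like the following single-resource Deployment (the names, image, registry, and port numbers are illustrative assumptions, and the API version available will depend on your cluster):

```yaml
# Hypothetical single-resource spec: one Deployment and nothing else in the file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8scomp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: k8scomp
  template:
    metadata:
      labels:
        app: k8scomp
    spec:
      containers:
      - name: web
        image: example.registry.local/team/web:1.0  # must come from a whitelisted registry, if one is configured
        ports:
        - name: http            # with multiple ports, exactly one must be named "http"
          containerPort: 8080
        - name: metrics
          containerPort: 9090
```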

Image Registry

Platform Operators have the ability to whitelist registries that can be used for deployments by setting the Kubernetes.ContainerRegistryWhitelist Platform Registry Setting. If this has been set, you will only be able to use the specified registries in the spec file and images will only be pulled from whitelisted registries. If Kubernetes.ContainerRegistryWhitelist is not set, images can be pulled from any registry defined in a spec. Consult your Platform Operators if you have questions about the whitelisted image registries.


The Platform will not create volumes for you. If you want to use a volume with your application, Platform Operators should create it first and your spec file should reference it.

Creating an Apprenda Archive for a Kubernetes Component

You can upload a Kubernetes component to the Platform by packaging it into an Apprenda Archive or by simply uploading the spec file. If you are creating an application that includes a Kubernetes component and other components, you should create an archive that holds all the components of the application. Only one pod component is allowed per archive, and you cannot define a component for another HTTP-routable component (such as .NET interfaces or a WAR) if a pod is present in an archive. See more about including additional components in an application archive.

Once you have a valid archive or Pod spec file, you can upload it to the Platform as you would any other archive.

Include Kubernetes Pods in a folder named pods in the root of the archive. The pods folder should include a subfolder (named for the component) that contains the Pod spec YAML, YML, or JSON file. 

For example, an archive might include a pods directory containing a subfolder named for the Kubernetes component, such as k8scomp. The spec file for the component (a ReplicationController, ReplicaSet, or Deployment) sits within the k8scomp folder.
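As a sketch, the layout of such an archive for a component named k8scomp might look like this (the archive and spec file names are illustrative):

```
MyArchive/
├── DeploymentManifest.xml    (optional)
└── pods/
    └── k8scomp/
        └── k8scomp.yaml      (the ReplicationController, ReplicaSet, or Deployment spec)
```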

Token Switching

You can use the Platform's token switching capabilities on the spec file for your Kubernetes components. This can be useful for defining environment variables for your containers in the spec file and letting the Platform inject the correct values before deployment to the cluster. See how to define environment variables in Kubernetes spec files.
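As an illustrative sketch, an environment variable in the container spec could hold a token placeholder that the Platform replaces before deployment. The variable name and the TOKEN_PLACEHOLDER value below are hypothetical; use the token syntax your Platform defines:

```yaml
# Fragment of a container spec; the placeholder value shown is hypothetical,
# standing in for whatever token syntax your Platform uses.
containers:
- name: web
  image: example.registry.local/team/web:1.0
  env:
  - name: DB_CONNECTION            # hypothetical variable name
    value: "TOKEN_PLACEHOLDER"     # replaced by Platform token switching before deployment
```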

Deployment Manifest

You can also include a Deployment Manifest in the root of the archive to define some configuration options for your application. If you include a Deployment Manifest, it should reference the same Pod name as the component subfolder in the pods folder of the archive. For example, a Deployment Manifest for an archive with a k8scomp subfolder should use k8scomp as the name of the Pod.

<?xml version="1.0"?>
<appManifest xmlns:xs="" xmlns:xsi="" xsi:schemaLocation="" xmlns="">
  <linuxServices>
    <pod name="k8scomp" throttlingPolicy="TestPolicy" instanceCount="3" scalingType="Manual">
      <customProperty name="PodProperty">
        <propertyValue value="Value1" />
      </customProperty>
    </pod>
  </linuxServices>
</appManifest>

You can include configuration options to define an instance count, resource policy, scaling type, scaling schedule, and Custom Properties. See more about Deployment Manifest attributes.

Note that in cases where configuration options differ between the spec and the Deployment Manifest, the Platform will use the setting value from the spec file. For example, if you define a ReplicaSet with a replica count of 3 and also set instanceCount to 2 in the Deployment Manifest, the Platform will configure the application's scaling settings to 3. The only exception is resource management, where Platform Resource Policies will override limits defined in the spec file.

Deploying to the Cluster

When deploying to the cluster, the Platform performs some configuration on your uploaded spec file before sending it to the cluster to be deployed with Kubernetes. The Platform converts your uploaded spec file into the equivalent ReplicaSet regardless of the type defined in the spec file (ReplicationController, Deployment, or ReplicaSet). At this time the Platform also creates a Service for the ReplicaSet to route traffic from the Platform to the Pods of your application, and assigns Labels to identify the application on the cluster. Any Custom Properties will also be assigned as Labels during this process.

All applications will be deployed to the dedicated Platform Namespace on the cluster, acp.

Resource Management

Both Kubernetes and the Apprenda Cloud Platform have ways to manage resources used by deployed workloads. To avoid conflicts and to allow Platform Operators to help manage cluster resources, Platform Resource Policies will overwrite any resource limits in a PodTemplate. The actual throttling of resources is managed by Kubernetes; however, when deploying the application, the Platform configures the PodTemplate to use resource limits from assigned Resource Policies instead of what was defined in the uploaded spec file.

An important difference in resource management between Kubernetes and the Platform is that Kubernetes applies resource limits at the container level, while the Platform applies them at the Pod level. This means that every container within the same Pod is given the same resource limits as the assigned Resource Policy, making the total allotment n * CPU/Memory Limit (where n is the number of containers in the Pod). For example, if a Pod is defined with 2 containers and is assigned a Resource Policy that sets a limit of 0.1 CPU, both containers will be assigned a limit of 0.1, making the total consumption limit 0.2 cores.
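As a sketch of that arithmetic, a PodTemplate with 2 containers under a 0.1 CPU Resource Policy would end up with per-container limits like the following (container names and images are illustrative assumptions):

```yaml
# Hypothetical PodTemplate fragment after the Platform applies a 0.1 CPU Resource Policy.
# Each container receives the full policy limit, so the Pod may consume up to 0.2 cores total.
containers:
- name: app
  image: example.registry.local/team/app:1.0
  resources:
    limits:
      cpu: "100m"     # 0.1 CPU from the Resource Policy
- name: sidecar
  image: example.registry.local/team/sidecar:1.0
  resources:
    limits:
      cpu: "100m"     # the same 0.1 CPU limit, applied per container
```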

Resource Policies can be managed by Developers as a Component-level configuration setting in the Developer Portal or can be assigned via the throttlingPolicy attribute in the Deployment Manifest. Pod-specific Resource Policies need to be created by Platform Operators in the SOC before they can be assigned to Kubernetes components. If throttling is disabled on the Platform, the resource limits and requests in the spec file will be used on the cluster. See more about how Kubernetes manages resources.

Custom Properties

You can use Custom Properties on your Kubernetes components to customize an application's behavior. Properties are assigned as Labels to Pods, with a Platform prefix followed by the property name. The Platform will automatically inject Custom Property labels into the spec file when a Kubernetes application is deployed.

Custom Properties can be managed by Developers as a Component-level configuration setting in the Developer Portal or can be assigned via the customProperty attribute in the Deployment Manifest. Pod-specific Custom Properties need to be created by Platform Operators in the SOC before they can be assigned to Kubernetes components.

Platform Functionality Limitations for Kubernetes Components

The following are not available for Kubernetes applications for the current version of the Platform:

  • Any User Access Model other than "Anyone" (this corresponds to the Application Services Level of "None" in the Deployment Manifest). Authentication, Authorization, and Multi-tenancy are not supported at this time.
  • Any Platform functionality that requires a User Access Model other than "Anyone" (such as Securables and Features)
  • Apprenda Guest Application API usage
  • Application request handling settings: sticky sessions, forcing secure access, or creating a redirect rule for a front-end load balancer
  • Anything that is backed by the distributed cache (including session replication)
  • Debugging (ability to SSH into processes)
  • Deployment Policies (Kubernetes handles host selection for Pod deployment)