Deploy and Manage Docker Workloads

This page explains how developers can deploy, monitor, and scale guest applications on the Platform using Docker. Before deploying applications using Docker, a Platform Operator must first configure your Platform to support Docker workloads, and you should be familiar with the Platform's Linux Services, Custom Properties, Bootstrap Policies, and Docker functionality.

Configuring a Docker workload to run on your Platform

Getting applications running in Docker follows the same development process as other applications for the Platform, with the following configurations.

Apprenda Archive 

To deploy an application on the Platform using Docker, you must first package the application as an application archive. At a minimum, the archive must include a Deployment Manifest and an artifact-specific sub-folder inside the linuxServices folder.

The artifact-specific folder does not need to contain any application binaries because the image and other Docker configuration options for the application are set as Docker-specific Custom Properties and interpreted at deploy time. Instead, this folder is used to include initial content for a data volume if your application requires custom data. If no additional content is needed, you must still place an empty text file inside the archive folder to meet Platform requirements for Application Archives. Find more information on including content for data volumes in the Using Volumes section of this page.

In the example below, the archive includes in its linuxServices folder a subdirectory for the application nginx. The subdirectory houses an empty text file, slug.txt.

Deployment Manifest

The application archive must include a Deployment Manifest that references the artifact-specific sub-folder by name as a Linux Service and declares an HTTP-mapped port.

The Deployment Manifest below shows the minimum requirements for deploying with Docker, including the Docker Deploy and Docker Image Name Custom Property definitions. You must set those two Custom Properties for all Docker-deployed applications because they signify to the Platform that the application should be deployed as a Docker container (Docker Deploy) and specify the image to use for deployment (Docker Image Name). You can include any Docker Custom Properties for managing the container in the Deployment Manifest, like the manifest below, or set them after the archive is uploaded.

Sample DeploymentManifest.xml for a Docker Workload

<?xml version="1.0" encoding="utf-8"?>
<appManifest xmlns:xs="" xmlns:xsi="" xsi:schemaLocation="" xmlns="">
  <presentation strategy="CommingledAppRoot" scalingType="Manual"/>
  <service name="nginx">
    <customProperty name="DockerDeploy">
      <propertyValue value="Registry"/>
    </customProperty>
    <customProperty name="DockerImageName">
      <propertyValue value=""/>
    </customProperty>
    <ports>
      <dynamicPort httpMapped="true" portName="HTTP_80"/>
    </ports>
  </service>
</appManifest>

Port allocation and HTTP routing

You can specify ports for a Docker container through the Deployment Manifest. There are two types of ports that you can define: static and dynamic. An application can have more than one static or dynamic port (or both) defined for it.

Static ports (included primarily to support legacy applications) are pre-set. Specifying a static port in the Deployment Manifest permits one instance of the application to listen on that port. This means that if you define a static port, the application cannot scale on a single Linux server, as the port is allocated to the first instance deployed.

<ports>
   <staticPort httpMapped="true" num="50000" />
</ports>

Dynamic ports are allocated by the Platform at deploy time.

<ports>
   <dynamicPort httpMapped="true" portName="HTTP_99" />
</ports>

If you are using a version of the Docker deployer before 1.7, format portName values as XXXX_YYYY, where everything before the underscore (XXXX_) is an alphanumeric identifier and YYYY is the internal port to which the external dynamic port chosen by the Platform is mapped when executing docker run. For example, if the portName value is HTTP_99 and the Platform-allocated port is 10005, the final port mapping for the container is 10005:99.
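As an illustration, the pre-1.7 portName convention can be modeled with a small Python sketch (the helper functions here are hypothetical, not part of the Platform):

```python
def parse_port_name(port_name: str) -> tuple[str, int]:
    """Split a pre-1.7 portName (e.g. 'HTTP_99') into its
    alphanumeric identifier and the internal container port."""
    identifier, _, internal = port_name.rpartition("_")
    return identifier, int(internal)

def docker_run_mapping(port_name: str, allocated_port: int) -> str:
    """Build the host:container mapping passed to 'docker run -p'."""
    _, internal = parse_port_name(port_name)
    return f"{allocated_port}:{internal}"

# portName HTTP_99 plus a Platform-allocated port of 10005
# yields the final mapping 10005:99.
print(docker_run_mapping("HTTP_99", 10005))  # -> 10005:99
```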

If you are using version 1.7 or later of the Docker deployer, use the Docker Port Mapping Custom Property to specify internal port mappings for a container. Docker Port Mapping accepts a comma-separated list of name identifiers and ports, static or dynamic, for the container. The name identifiers should match the values defined in portName in the Deployment Manifest. When deploying, the Platform matches each name in the portName field with one in the Docker Port Mapping property and assigns the port to the container. For example, for a dynamic port:

  • portName = Container_Port
  • Docker Port Mapping value = Container_Port:99

The Platform will allocate a random port (for example, 10005), find the mapped value matching the portName in the Docker Port Mapping property (99), and use the result as the port mapping when starting the container: 10005:99.
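The 1.7+ matching step can be sketched as follows (a minimal Python sketch; the function and the `allocate` callback are illustrative assumptions, not Platform API):

```python
def resolve_port_mappings(port_names, mapping_property, allocate):
    """Match each manifest portName against the comma-separated
    Docker Port Mapping value and return host:container pairs.
    'allocate' stands in for the Platform's dynamic port chooser."""
    # Parse e.g. "Container_Port:99,Admin_Port:8080" into a dict.
    internal = dict(
        entry.strip().split(":") for entry in mapping_property.split(",")
    )
    mappings = []
    for name in port_names:
        host_port = allocate(name)  # dynamic port chosen at deploy time
        mappings.append(f"{host_port}:{internal[name]}")
    return mappings

# portName Container_Port with mapping value Container_Port:99 and an
# allocated port of 10005 produces the docker run mapping 10005:99.
print(resolve_port_mappings(["Container_Port"], "Container_Port:99",
                            lambda name: 10005))  # -> ['10005:99']
```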

HTTP Mapped

You can define more than one static or dynamic port for each application; however, you can only define one port as httpMapped. When a static or dynamic port is marked httpMapped="true", the Platform will use that port to route HTTP traffic to the Docker Workload using the URL type defined in the Developer Portal. For path-based URLs the workload must serve requests at the given path (e.g. /myapp-v1); for subdomain-based URLs the workload must serve requests at the root context (/).

Deploying and managing Docker workloads

Once your application has been configured and packaged, it can be uploaded to the Platform like any other guest application as explained in the Create a New Application section of our documentation.

Prior to promoting the application to the Sandbox or Published stages, you must ensure that it has been tagged for deployment using Docker and that the Docker image name has been set. To do this, go to the application’s Configure>Components section in the Developer Portal. Select the Linux Services Application Tier, then look under Deployment for the Custom Properties list.

To set the application to deploy using Docker, set Docker Deploy to Registry.

To set the Docker image for the container, set Docker Image Name to the registry and image name you wish to use, formatted like registry/imageName.

Be sure to save your changes. Other Docker Custom Properties can also be configured through this page.

Once the application has been properly tagged, you can promote and deploy it like any other Apprenda guest application as explained in the Deploying Your Applications section of our documentation. Some configuration and management options will be limited.

Docker Workload functionality supported in the current version of Apprenda

The following are available as part of the Docker functionality in the current version of the Apprenda Platform:

Limitations to Docker Workloads support

The following are not available for Linux Services for the current version of the Apprenda Platform:

  • Any User Access Model other than “Anyone” (this corresponds to the Application Services Level of “None” in the Deployment Manifest). Authentication, Authorization, and Multi-tenancy are not supported at this time
  • Any Platform functionality that requires a User Access Model other than “Anyone” (such as Securables and Features)
  • Apprenda Guest Application API usage
  • Anything that is backed by the distributed cache (including session replication)
  • Debugging (ability to SSH into processes)
  • Token switching

Using volumes

The Platform supports Docker’s volume hosting capabilities. Custom Properties are used to configure one or more paths to mount into a container or host for each application deployed with Docker. During deployment, the Platform utilizes the native Docker volume functionality to configure paths and mount a volume to the specific location inside the container or host based on the values entered in the bind mount Custom Properties.

The Platform supports three types of bindings for volumes: Local, Shared, and Host. Because it is impossible to know where a container will be hosted until it is deployed on the Platform, use relative file system paths for all values of the Local and Shared bind mount Custom Properties.

Local Volume Binding

Configured using the Docker Bind Local Custom Property, local volume bindings will place host paths directly into the deployed container file system. These volumes are tied to the life cycle of the container and are deleted when the container instance is undeployed.

All directories listed in Docker Bind Local are mounted to the docker-binds directory of the deployed workload file system, located at /apprenda/persistent-instance-state/instances/<ID> (where <ID> is the component's Platform-assigned identifier) on a given host.

For example, if the Local binding was set as the following:

  • Docker Bind Local is set to /usr/share/nginx/html

The directory is bind mounted inside the container at /apprenda/persistent-instance-state/instances/<ID>/docker-binds/usr/share/nginx/html when the container is deployed.
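The path construction above can be sketched in Python (the helper and the instance identifier "inst-42" are hypothetical illustrations; the base path is the one the docs give):

```python
from pathlib import PurePosixPath

def local_bind_target(instance_id: str, bind_path: str) -> str:
    """Compute where a Docker Bind Local path lands inside the
    deployed workload file system."""
    base = PurePosixPath("/apprenda/persistent-instance-state/instances")
    # Strip the leading slash so the path nests under docker-binds.
    relative = PurePosixPath(bind_path.lstrip("/"))
    return str(base / instance_id / "docker-binds" / relative)

print(local_bind_target("inst-42", "/usr/share/nginx/html"))
# -> /apprenda/persistent-instance-state/instances/inst-42/docker-binds/usr/share/nginx/html
```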

Shared Volume Binding

Shared binds work similarly to Local binds; however, volumes are mounted to a directory outside the deployed container file system. This means that the contents of that directory will survive the deployment and undeployment of a single deployed workload: data is persisted for all workloads of an application on a given host. To use Shared binding, use the Docker Bind Shared Root Directory and Docker Bind Shared Custom Properties to define a target directory for mountings on all hosts and the directories to mount.

Docker Bind Shared Root Directory defines the target directory for Shared volume mountings on all hosts. Platform Operators need to create this property before you can use Shared volume mounting; the default recommendation is to use /apprenda/docker-binds/. To share information among all your Docker workloads, it is recommended that the target directory used as the Docker Bind Shared Root Directory be a mounted external file system shared among all the Linux servers that will host Docker workloads.

Directories provided by Developers in Docker Bind Shared are mounted inside the Docker Bind Shared Root Directory upon application deployment. The Platform will generate a Tenant, application, and application version specific directory hierarchy for each deployed workload inside the Docker Bind Shared Root Directory to hold all mounted directories.

For example, if the Shared bindings were set as the following:

  • Docker Bind Shared Root Directory is set to /apprenda/docker-binds/
  • Docker Bind Shared is set to /usr/share/nginx/html

The bind mount is placed in /apprenda/docker-binds/<Tenant_Alias>/<Application_Alias>/<Version_Alias>/usr/share/nginx/html on the hosting server. Any additional directories listed for Docker Bind Shared will also be placed inside /apprenda/docker-binds/<Tenant_Alias>/<Application_Alias>/<Version_Alias>/.
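The Shared bind hierarchy can be sketched the same way (a hypothetical helper; the aliases "acme", "myApp", and "v1" are made-up example values):

```python
from pathlib import PurePosixPath

def shared_bind_target(root, tenant, app, version, bind_path):
    """Compute the host-side location of a Docker Bind Shared path:
    <root>/<Tenant_Alias>/<Application_Alias>/<Version_Alias>/<bind path>."""
    return str(
        PurePosixPath(root)
        / tenant / app / version
        / PurePosixPath(bind_path.lstrip("/"))
    )

print(shared_bind_target("/apprenda/docker-binds", "acme", "myApp", "v1",
                         "/usr/share/nginx/html"))
# -> /apprenda/docker-binds/acme/myApp/v1/usr/share/nginx/html
```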

Host Volume Mounting

This type of bind works differently than a Shared or Local mount, allowing you to mount any host path into any path in a deployed container. The Docker Bind Host Custom Property expects input in the form of /absolute/host/path:/absolute/container/path and will mount the absolute host path directory inside the absolute container path when the workload is deployed.

Due to the increased security risks of mounting any host directory into a guest application, to use this type of bind mounting, Platform Operators must first provide a colon-separated list of approved directories to use as volumes in the Docker Bind Host Approved Directories Custom Property. Developers are only able to use directories found in the Docker Bind Host Approved Directories list when setting Docker Bind Host for any application. If a directory used in Docker Bind Host is not on the approved list, the Platform will ignore it when the container is deployed.
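The approval check can be sketched as follows (a hypothetical helper; the assumption that multiple host binds are comma-separated is ours, as the docs only show the single-entry /host:/container form):

```python
def filter_host_binds(binds: str, approved: str) -> list[str]:
    """Keep only host binds whose host path appears on the approved
    list; unapproved entries are dropped, mirroring the Platform's
    documented behavior of ignoring them at deploy time."""
    approved_dirs = set(approved.split(":"))
    kept = []
    for entry in binds.split(","):
        host_path = entry.partition(":")[0]  # text before the first colon
        if host_path in approved_dirs:
            kept.append(entry)
    return kept

# /data is on the approved list, /etc is not, so only one bind survives.
print(filter_host_binds("/data:/var/data,/etc:/host-etc",
                        "/data:/opt/shared"))  # -> ['/data:/var/data']
```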

Initializing Volumes with Archive Content

For Local or Shared bind mounts, you have the ability to include content in the application archive to populate the volume at deploy time. This is a great way to inject content into a basic image to customize container behavior without having to rebuild the image.

Include content to use for volumes in the application archive in the artifact-specific sub-folder of the linuxServices folder. Use the same file hierarchy as defined in the Docker Bind Local or Docker Bind Shared properties.

When the application is deployed, before mounting a given directory, the archive is checked for a file hierarchy that matches a bind mount location in the Docker Bind Local or Docker Bind Shared properties. If any matches are found, the directory is mounted and the content from the archive is copied into the mount. Note that content from an application archive is copied for every container deployment and will overwrite data that is already in the mount location. Only use this type of content injection for data that will be consistent for every deployment; use a separate mounting location for data you wish to persist.
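The match-and-copy step can be sketched like this (a hypothetical helper; the overwrite-on-every-deployment behavior follows the docs, while the function itself is our illustration):

```python
import shutil
from pathlib import Path

def seed_volume(archive_dir: Path, bind_path: str, mount_dir: Path) -> bool:
    """If the archive contains a file hierarchy matching the bind
    path, copy it into the mount, overwriting any existing files.
    Returns True if matching content was found and copied."""
    source = archive_dir / bind_path.lstrip("/")
    if not source.is_dir():
        return False  # no matching hierarchy in the archive
    shutil.copytree(source, mount_dir, dirs_exist_ok=True)
    return True
```

For example, an archive folder containing usr/share/nginx/html/index.html, paired with a bind path of /usr/share/nginx/html, would have index.html copied into the mount on every deployment.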

For example, if you wanted to replace the index.html for a typical nginx image deployment with a custom index.html file you would start by creating an archive, like the one shown below. Place the file hierarchy usr/share/nginx/html with an HTML file index.html inside the nginx folder. Depending on how you wanted to mount the volume, set either the Docker Bind Local or Docker Bind Shared property to the same file hierarchy: /usr/share/nginx/html (the Deployment Manifest below uses Docker Bind Local).

When the following example archive and Deployment Manifest are deployed on the Platform, the Docker deployer will create the desired mount at /apprenda/persistent-instance-state/instances/<ID>/docker-binds/usr/share/nginx/html and copy index.html into the mount in the container file system. All deployed instances of this nginx application will use the customized index file instead of the default index file.

Sample Archive LinuxService Folder with Volume Content

Sample DeploymentManifest.xml for Local Bind

<?xml version="1.0" encoding="utf-8"?>
<appManifest xmlns:xs="" xmlns:xsi="" xsi:schemaLocation="" xmlns="">
  <presentation strategy="CommingledAppRoot" scalingType="Manual"/>
  <service name="nginx">
    <customProperty name="DockerDeploy">
      <propertyValue value="Registry"/>
    </customProperty>
    <customProperty name="DockerImageName">
      <propertyValue value=""/>
    </customProperty>
    <customProperty name="DockerBindLocal">
      <propertyValue value="/usr/share/nginx/html/"/>
    </customProperty>
    <ports>
      <dynamicPort httpMapped="true" portName="HTTP_80"/>
    </ports>
  </service>
</appManifest>

Using overlay networking

You can connect containers deployed by the Platform through an overlay network. To use this kind of networking, it must first be configured for your Docker installations on all relevant hosts. See Docker’s documentation on how to configure support for overlay networking.

Once overlay networking is set up, you can begin configuring the components of your application to communicate with each other. You can control the scope of the network an application belongs to by using the Docker Network Scope Custom Property. This property accepts the values App, Tenant, or Global for any application. The Docker Network property is also used to name the overlay network for Tenant and Global scoped workloads.

Service Names

Workloads connected to the same overlay network can be looked up in DNS by their service names and can connect to each other directly.

The service name for a workload attached to an App scoped overlay network is the component alias of the workload.

For Tenant and Global scoped networks, the service name is the application, version, and component aliases separated by hyphens (-). For example, given the following:

  • Application alias: myApp
  • Version alias: v1
  • Component alias: frontend

The service name for a deployed workload of the frontend component connected to the network is myApp-v1-frontend.
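The naming rules above can be sketched in Python (a hypothetical helper, using the aliases from the example):

```python
def service_name(scope, app_alias, version_alias, component_alias):
    """Build the DNS service name for a workload on an overlay network.
    App-scoped networks use the bare component alias; Tenant and Global
    scopes join the application, version, and component aliases with
    hyphens."""
    if scope == "App":
        return component_alias
    return f"{app_alias}-{version_alias}-{component_alias}"

print(service_name("Global", "myApp", "v1", "frontend"))  # -> myApp-v1-frontend
print(service_name("App", "myApp", "v1", "frontend"))     # -> frontend
```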

If an application component is horizontally scaled, service name look-up will return a list of IP addresses for the deployed workloads of the component. Look-ups of the same component will not return the workload IPs in the same order.

Application Scope

Setting Docker Network Scope to App means that deployments of an application's workloads are able to communicate with each other over an overlay network. The name of the overlay network is derived from the Tenant, application, and version aliases. This scope is useful for workload interactions that depend on non-HTTP connectivity. For example, use it to connect workloads of a middleware component to a back-end database or message queue without having to route communication through the Platform Load Balancer.

Tenant scope

A scope of Tenant will create an overlay network that can connect application workloads from the same Tenant. To use this scope, also set the Docker Network Custom Property. The name of the network combines the Tenant alias and the value of Docker Network.

Tenants can create more than one Tenant scoped overlay network to connect any workloads they own. Connect workloads to the same network by assigning the Tenant scope and setting the same value for Docker Network.

Global Scope

Global scope means that workloads are connected to an overlay network that spans the entire Platform, regardless of which Tenant they belong to. For this scope you must also set the Docker Network Custom Property, which is the name of the network. Connect application components to the same network by assigning the Global scope and setting the same value for Docker Network.

Before establishing a Global scoped overlay network on your Platform you should understand the requirements and security vulnerabilities of using one. Workloads using a Global overlay network are at a higher risk of network based attacks and should be protected at the network level before being added to the Global overlay network.

Checking container deployment readiness

For some container deployments, there may be a delay between when the container is started and when the application within it is ready to fulfill external requests. This delay in application start-up may cause problems for requests trying to connect to an application that is not ready. To prevent traffic being routed too soon, you can enable the Docker Readiness Check Custom Property to test container readiness before the application is added to the Platform Load Balancer and routed requests.

Disabled by default, you can enable these checks for a container by setting the Docker Readiness Check Custom Property to True. Once enabled, for each newly deployed workload the Docker Deployer will send HTTP or HTTPS GET requests to the container until a valid Response Code is returned (a response code less than 300 is considered healthy). The Platform will test for a valid response using the path provided in the Docker Readiness Check Path and the HTTP scheme in the Docker Readiness Check Schema Custom Properties.

The Docker Deployer will continue to test the application for a healthy response until the Docker Readiness Check Timeout Seconds time limit is reached. After the timeout period has expired (the default is 300 seconds), the deployment is considered failed, an error is logged, and the Platform removes the container. The Platform will then deploy a new container to meet scaling requirements, repeating the process until a container is successfully deployed.
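The check-and-timeout loop can be sketched as follows (a minimal Python sketch; the probe callback, polling interval, and injectable clock/sleep are our illustrative assumptions, not Platform API):

```python
import time

def wait_until_ready(probe, timeout_seconds=300, interval=5,
                     clock=time.monotonic, sleep=time.sleep):
    """Poll a readiness probe until it returns an HTTP status code
    below 300 or the timeout elapses. Returns True when the
    container answers with a healthy response in time."""
    deadline = clock() + timeout_seconds
    while clock() < deadline:
        try:
            if probe() < 300:  # any 1xx/2xx status counts as healthy
                return True
        except OSError:
            pass               # container not accepting connections yet
        sleep(interval)
    return False

# A probe that becomes healthy on its third attempt:
responses = iter([503, 503, 200])
print(wait_until_ready(lambda: next(responses), timeout_seconds=10,
                       interval=0, sleep=lambda s: None))  # -> True
```

A real check would issue GET requests against the configured path and scheme; the probe is abstracted here so the retry/timeout logic stands alone.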