Resource Throttling on the Platform is designed to give Platform Operators the ability to limit the resources a given guest application can consume. In addition, a Platform Operator can set global resource limits on CPU, Memory, and Storage that will apply to all Development Teams, ensuring that no one group monopolizes resources on the Platform. Both forms of Resource Throttling are explained in greater detail below.
Note that Resource Utilization is governed by a different set of controls than Resource Throttling. See more about resource utilization and tracking.
Navigate to the Resource Policies page in the SOC (via the Configuration link in the top menu bar).
If any active, deployable Resource Policies have already been configured for your Platform, you will see them listed on the right side of the screen in the Compute Policies tab. By default the Platform is installed with an unlimited Resource Policy. You may use this if you wish, or configure your own policies for guest applications.
To configure a new Resource Policy, click on the New Policy button at the top of the Resource Policy page.
This will open a Resource Policy Definition window:
Due to the differences in managing CPU and memory resources on Kubernetes clusters, Resource Policies created for Pods can only be applied to Pods. See more about Resource Policies assigned to Pods.
You can create policies that apply to any combination of non-Kubernetes components (User Interfaces, Services, Java Web Apps, and Linux Services).
Note that Databases appear as an option in the Resource Policy Definition window only if database resource throttling is enabled; if it is disabled, Databases will not be available.
Input a Name for the policy, a Description (which can be seen by Development Teams when selecting a policy in the Development Portal), and any Notes (which are visible to Platform Operators through the SOC but are not visible to Development Teams).
Mark the policy as Active and Deployable so that it can be used. See more about how to manage available Resource Policies.
Set the Resource limits using the sliders.
The minimum and maximum limits available, as well as the increments in between, are configurable by changing some Platform Registry Settings. It is possible to set an unlimited amount of CPU speed and/or memory, by sliding the bar all the way to the right. Choosing “unlimited” for a resource type (Memory, for instance) will mean that any component tied to that Resource Policy will not be throttled based on that resource type.
Set any Cost Definition for resources used. If you want to set a Cost Definition, you must also choose the Currency from the dropdown, input the Cost, and set the Unit Label. This information will be visible to Development Teams, but appears for communication purposes only, as the Platform does not automatically calculate or regulate chargeback for Development Team resource usage.
Click Save when you finish configuring your Resource Policy. Be sure to review your configuration settings for the Resource Policy, as you are not able to edit resource limits once the policy is created. Also see the section about managing Resource Policies and setting default component policies for more information about applying the policies you create.
To configure a new Storage Quota, click on the Storage Quotas tab at the top of the Resource Policy page, and then click on the New Quota button.
This will open a Storage Quota Definition window:
Input a Name for the quota, a Description (which can be seen by Development Teams when selecting a quota in the Development Portal), and any Notes (which are visible to Platform Operators through the SOC but are not visible to Development Teams).
Mark the quota as Active and Deployable so that it can be used. See more about how to manage available Resource Policies and Storage Quotas.
Fill in the storage information.
If needed, set a Cost Definition per block of storage used. If you do, choose the Currency from the dropdown, input the Cost, and set the Unit Label. This information will be visible to Development Teams, but appears for communication purposes only.
You are not able to edit resource limits for a Resource Policy or Storage Quota. Once created, you will only be able to edit the Name, Description, Notes, and whether the policy or quota is Active and Deployable. For Resource Policies you are also able to edit which application components the policy can be applied to. If you’d like to adjust the resource limits of the policy or quota, you need to create a new policy or quota with the desired configuration.
You can edit and delete Resource Policies or Storage Quotas from the Resource Policy page (under the Configuration menu in the SOC).
To Edit a policy or quota, click the Edit option on the bottom of the policy or quota you want to change.
You are also able to edit a Resource Policy or Storage Quota’s Custom Properties by clicking on the View Custom Properties link. Read more about setting and utilizing Custom Properties.
To Delete a policy or quota, click on the Delete link on the bottom of the policy or quota you want to delete.
When a Resource Policy or Storage Quota is Active it is available for Development Teams to assign to application components. An application’s components must have active Resource Policies assigned; otherwise, the application can’t be promoted to the Sandbox or Published stages, and components cannot be deployed through the Developer Portal.
Mark a policy as Inactive if you want to prevent Development Teams from assigning it to application components. This is a useful first step for phasing out a policy. Components with an Inactive policy may still be deployed when an application is launched, and a Platform Operator can still deploy the application’s UI, .NET service, or Java Web App components through the SOC.
When a Resource Policy is Deployable, components tied to it can be deployed by Developers (through application promotion or through the Developer Portal), and deployed by launching the application. Application components with an Undeployable policy can’t be deployed by Developers, by launching the application, or by Platform Operators in the SOC. Mark a policy as Undeployable if you want to prevent all deployment of assigned components. This is a useful second step for phasing out a policy.
Tips for Resource Policy Management: It is highly recommended that Platform Operators clearly communicate any anticipated changes in Resource Policy availability to Development Teams. When phasing out a policy, for example, new policies should be made available and clear deadlines for unassigning retiring policies should be set. As needed, you can also use the Active and Deployable flags to help enforce such deadlines. Removing the Active flag, for instance, will prevent any Development Teams from further assigning the policy, and will encourage those that have it assigned to adopt a new policy, as their ability to manipulate the application will be restricted. You can then remove the Deployable flag, which will prevent launching of the application, as another step of phasing out the old policy and encouraging the adoption of a new one.
You can define default Service Policy, User Interface Policy, Java Web App Policy, Storage Policy, Pod Compute Policy, or Storage Quota selections from the Defaults section on the left side of the Resource Policy page. The defined default Resource Policy for a component type is available for all components of that type, and Development Teams will be given the option of assigning it to their application’s components through a special link. The default policy will only be available if it is Active.
The default will not be automatically assigned to new components, however, unless Automatic Policy Assignment is enabled. If the default Resource Policy is not configured with Automatic Policy Assignment enabled, Development Teams will have to manually select a policy before deploying their applications.
To set a default,
Select the policy or quota in the dropdown menu
Click Save Changes (below the Deployment Strategies section)
Resource Throttling works by limiting the resources that a guest application’s components can consume to those defined in their Resource Policy. It also determines the number of “slots” available on the Platform to host .NET service, Java Web App, and UI deployments tied to each Resource Policy with limited (not “unlimited”) resource definitions. In other words, once a service, Java Web App, or UI is deployed, a portion of CPU and memory equal to the limits in its Resource Policy is “reserved” for that deployment. Once the “reserved” resources reach the total resources available on the Platform, no new deployments will be allowed.
Because it is unlikely that all guest applications hosted on the Platform will require all of their resource allocations at once, the PhysicalHost.CpuThrottling.AllocationFactor and PhysicalHost.MemoryThrottling.AllocationFactor settings can be configured in the Platform Registry to tell the Platform to allow the “reserved” resource allocations to exceed the actual physical resources present. For each of these settings, a value of 1 will result in a strict correlation of the “reserved” resources to the Platform’s actual resources; a value of 2 will allow the “reserved” resource allocation a limit of twice the actual Platform resources; a value of 3 will allow the “reserved” resource allocation a limit of three times the actual platform resources, and so on.
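The reservation arithmetic above can be sketched as follows. This is a simplified illustrative model, not a Platform API; only the AllocationFactor setting names come from the Platform Registry.

```python
# Simplified model of CPU "slot" reservation with an allocation factor.
# PhysicalHost.CpuThrottling.AllocationFactor is a real registry setting;
# the function and sample values here are hypothetical.
def can_deploy(physical_cpu_mhz, allocation_factor, reserved_mhz, policy_limit_mhz):
    """A new deployment is allowed while total reservations stay within
    the physical capacity multiplied by the allocation factor."""
    reservable_capacity = physical_cpu_mhz * allocation_factor
    return reserved_mhz + policy_limit_mhz <= reservable_capacity

# 10,000 MHz of physical CPU with a factor of 2 allows 20,000 MHz of reservations
print(can_deploy(10_000, 2, 18_000, 1_500))  # True  (19,500 <= 20,000)
print(can_deploy(10_000, 2, 19_000, 1_500))  # False (20,500 > 20,000)
```

The same arithmetic applies to memory via PhysicalHost.MemoryThrottling.AllocationFactor.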
Note that there is also a minimum CPU usage on the Platform that must be met before Resource Throttling of CPU will take effect. This is controlled by the PhysicalHost.CpuThrottling.TotalCpuThreshold setting. The default value for this setting is 75, meaning that CPU usage must hit 75% before Throttling takes effect. This setting, along with those mentioned above, can be configured on the Platform Registry Settings page. The PhysicalHost.CpuThrottling.TotalCpuThreshold setting only affects throttling for CPU; Memory ceilings for processes are always in effect when Resource Throttling is enabled.
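The threshold behavior can be sketched the same way. The helper below is illustrative; only the setting name and its default value of 75 come from this document.

```python
# Sketch of the PhysicalHost.CpuThrottling.TotalCpuThreshold behavior:
# CPU throttling only engages once overall Platform CPU usage crosses
# the configured percentage (default 75).
def cpu_throttling_active(total_cpu_usage_pct, threshold_pct=75):
    return total_cpu_usage_pct >= threshold_pct

print(cpu_throttling_active(60))  # False: below the 75% default threshold
print(cpu_throttling_active(80))  # True: CPU throttling takes effect
```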
Kubernetes handles throttling for deployed Pods. The Platform manages the assigned limits through Resource Policies, but once a Pod is deployed onto the cluster, its resource consumption is managed by Kubernetes. See the Kubernetes documentation for more about resource management on Kubernetes.
Additionally, the Platform will only display CPU and memory allotments in the SOC or Developer Portal for Kubernetes components if Heapster has been configured for your Platform and the URL added to your cluster configuration settings. Note that utilization information will be reported in fractions of cores instead of MHz to more accurately reflect the resource consumption on your cluster as Kubernetes tracks it. The Platform will poll your cluster at intervals defined by the Kubernetes.PodUtilizationUpdatePeriodSeconds setting (default: 60 seconds) to collect utilization data if Kubernetes.CapturePodUtilization is enabled (default: True). If you do not set up Heapster, no utilization information will be available on the Platform.
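Since Kubernetes expresses CPU in cores and millicores rather than MHz, a fraction-of-cores figure maps to millicores as shown below. This is a trivial illustrative conversion, not a Platform API.

```python
# Kubernetes measures CPU in millicores (1000m = 1 core). The Platform's
# fraction-of-cores display corresponds to millicores divided by 1000.
# The helper name is hypothetical.
def millicores_to_fraction(millicores):
    return millicores / 1000.0

print(millicores_to_fraction(250))  # 0.25 of a core
```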
Once you have configured active, deployable Resource Policies for guest applications, you can enable Resource Throttling. It is possible, but not recommended, to enable throttling without first configuring Resource Policies. If you enable throttling before creating usable Resource Policies, it will prevent Development Teams from deploying or launching their applications.
Throttling can be enabled from the Resource Policies page.
To enable throttling, click on the Enable Throttling button.
To disable throttling, click the Disable Throttling button. (This will replace the Enable Throttling button when throttling is enabled.)
Throttling of database Memory and CPU with Resource Policies is no longer supported by default when Resource Throttling is enabled on the Platform. Only Storage Quotas can be used to manage database resources when Resource Throttling is enabled.
To enable throttling of CPU and Memory resources for application database components, you need to grant the SQL Server instance’s SQL Account the serveradmin and sysadmin roles and change the Platform Registry Setting Apprenda.DB.UseResourceGovernor to True. The SQL Account must be granted these elevated roles before database throttling will work on your Platform.
Note that the permissions needed to enable database throttling grant users more control over databases than is necessary to run the Platform. This feature should only be enabled if the security implications of granting higher access levels are understood.
Once Resource Throttling is enabled, you can configure how the Platform deploys service and UI components for guest applications. At the bottom left of the Resource Policies page, you will see the Distribution Strategies section, which includes a pull-down menu each for service, UI, and Java Web App deployments. For each of these, you can base future component deployment placement on either RAM or CPU usage.
To change the distribution strategy for Services, User Interfaces, Java Web Apps, and Linux Services,
Select the strategy you want from the dropdown for the corresponding component type
Click Save Changes. This will also save any changes made to default Resource Policy assignments.
When filtering for acceptable servers for new database workload placement, the Platform will always choose the best available server from the final group of applicable servers. This means that if one server in the final group has more disk space than the others, that server will always be chosen to host new workloads.
If equal distribution of database workloads is a concern, Application Deployment Policies should be used to narrow down the list of servers to a homogeneous group of servers. Similarly, in the event that the Platform has high performance servers for hosting critical workloads, Application Resource Policies can be used to control deployments to those servers and keep non-critical workloads from consuming their resources.
The following Platform Registry Settings can help you define minimum and maximum resource limits for your Platform. Because Kubernetes manages CPU resources in fractions of cores, Kubernetes limits are managed separately from the limits of other component types.
| Setting | Description | Values |
| --- | --- | --- |
| ResourcePolicies.MaxCpuCores | Maximum CPU cores available for resource policies | Number of cores |
| ResourcePolicies.MaxCpuCoresInFractionOfCores | Maximum CPU fraction-of-core limit for Kubernetes workloads | Fraction of cores (limit to 1 decimal place) |
| ResourcePolicies.MaxCpuSpeed | Maximum CPU speed available for resource policies | In MHz |
| ResourcePolicies.MaxMemory | Maximum RAM available for resource policies | In MB |
| ResourcePolicies.MinCpuCores | Minimum CPU cores available for resource policies | Number of cores |
| ResourcePolicies.MinCpuCoresInFractionOfCores | Minimum CPU fraction-of-core limit for Kubernetes workloads | Fraction of cores (limit to 1 decimal place) |
| ResourcePolicies.MinCpuSpeed | Minimum CPU speed available for resource policies | In MHz |
| ResourcePolicies.MinMemory | Minimum RAM available for resource policies | In MB |
| ResourcePolicies.StepCpuCores | Increment between CPU cores when creating resource policies | Integer > 0 |
| ResourcePolicies.StepCpuCoresInFractionOfCores | Increment between fractions of CPU cores when creating resource policies for Kubernetes | Float > 0.0 (limit to 1 decimal place) |
| ResourcePolicies.StepCpuSpeed | Increment between CPU speeds when creating resource policies | Integer > 0 |
| ResourcePolicies.StepMemory | Increment between RAM allocations when creating resource policies | Integer > 0 |
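As an illustrative sketch of how the Min/Max/Step settings interact (the helper is hypothetical, not a Platform API): the values selectable for a limit run from the minimum to the maximum in the configured increments, with “unlimited” at the far-right slider position.

```python
# Illustrative sketch: how Min/Max/Step registry settings could define the
# values selectable on a Resource Policy slider. Only the setting semantics
# come from the table above; the function itself is hypothetical.
def slider_values(minimum, maximum, step):
    """Return the selectable limits between minimum and maximum, inclusive,
    plus the "unlimited" option at the far right of the slider."""
    return list(range(minimum, maximum + 1, step)) + ["unlimited"]

# e.g., MinCpuSpeed=500, MaxCpuSpeed=2000, StepCpuSpeed=500 (values in MHz)
print(slider_values(500, 2000, 500))  # [500, 1000, 1500, 2000, 'unlimited']
```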
To view current Resource Policy and Storage Quota assignments for Development Teams’ existing application components and component Workloads, use the Workload Allocations and Component Assignments tabs at the top of the screen. The Workload Allocations tab contains a list of all deployed component Workloads on the Apprenda Platform, along with information such as the Development Team that owns the Workload, the app component that the Workload is a deployed instance of, the Resource Policy/Storage Quota that the Workload was deployed under*, and the machine that the Workload is deployed to. The Component Assignments tab, on the other hand, contains a list of all application components existing on the Platform, along with information about the associated application name, the Development Team that owns the app component, and the Resource Policy/Storage Quota currently assigned to the component.
*Please Note: In the event that the assigned Resource Policy for an app component is changed after an initial component Workload has been deployed, that Workload will retain the original Resource Policy assignment that it was deployed under for the duration of its existence. All Workloads deployed after the Resource Policy assignment is updated, though, will be deployed under that updated policy.
Before Resource Limits can be set for Development Teams, at least one active Resource Policy must be configured and Resource Throttling must be enabled; the Platform cannot calculate Development Team resource limits otherwise. Development Team Limits are enforced based on application component deployment and are calculated from the Resource Policies attached to a Development Team’s applications.
A Development Team has met its resource limit when all of the resource slots that are “reserved” by the Team’s deployed applications are filled. The “reserved” resource slots are determined by the guest applications’ assigned Resource Policies. The Platform uses those limits to keep track of how much a Development Team has used by adding up the slots used by deployed applications.
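A simplified model of that bookkeeping is sketched below. The data shapes and helper are hypothetical; the rules, including that components deployed under an “unlimited” policy are excluded from the total, come from this document.

```python
# Simplified model of tallying a Development Team's reserved CPU against
# its limit. Workload records and the helper are hypothetical.
def team_cpu_reserved(workloads):
    """Sum CPU reservations, skipping workloads deployed under "unlimited"
    policies, which do not count toward a team's total."""
    return sum(w["policy_cpu_mhz"] for w in workloads
               if w["policy_cpu_mhz"] != "unlimited")

workloads = [
    {"policy_cpu_mhz": 1000},
    {"policy_cpu_mhz": 2000},
    {"policy_cpu_mhz": "unlimited"},  # excluded from the total
]
team_limit_mhz = 4000  # e.g., a MaxDeveloperCpuAllocation value
print(team_cpu_reserved(workloads))               # 3000
print(team_cpu_reserved(workloads) < team_limit_mhz)  # True: room to deploy
```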
Once a Development Team has met its limit, no further service or UI deployments are allowed from that Development Team’s applications. The Development Team may choose to undeploy instances in order to free up space in their group allocation.
Resource limits for Development Teams can be configured through the following Platform Registry Settings. They can be set as global limits that will apply to all Development Teams on the Platform, or you can set limits for a specific Development Team to override the global limits. Development Teams without limit overrides will use the default settings. Note that you do not need to set overrides for all three settings in order to create a Development Team override for CPU, memory, or storage.
To set a global resource limit, set one of the following Registry settings to your desired limit.
To set a resource limit override for a Development Team, add one of the following Registry settings as a new setting, appending a “-” followed by the Development Team alias to the end of the setting name. This will be a separate setting from the global limit. For example: PhysicalHost.ResourceAllocation.MaxDeveloperCpuAllocation-myDevTeamAlias. Development Team aliases can be found on the Access > Development Teams page in the SOC.
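The resulting lookup order can be sketched as follows. The setting name and the “-alias” suffix convention come from this document; the registry dictionary and helper function are hypothetical.

```python
# Illustrative lookup order for a team-specific limit override: prefer the
# "<setting>-<alias>" override, then fall back to the global setting.
def effective_limit(registry, base_setting, team_alias):
    override_key = f"{base_setting}-{team_alias}"
    # -1 (no limit) is used as the fallback when neither setting exists
    return registry.get(override_key, registry.get(base_setting, -1))

registry = {
    "PhysicalHost.ResourceAllocation.MaxDeveloperCpuAllocation": 8000,
    "PhysicalHost.ResourceAllocation.MaxDeveloperCpuAllocation-myDevTeamAlias": 4000,
}
base = "PhysicalHost.ResourceAllocation.MaxDeveloperCpuAllocation"
print(effective_limit(registry, base, "myDevTeamAlias"))  # 4000 (override wins)
print(effective_limit(registry, base, "otherTeam"))       # 8000 (global limit)
```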
Note that no throttling will take place when these values are set to -1 (as either a global limit or a Development Team specific limit). Also, the total resource allocation for a Development Team will not take into account any application components deployed under an “unlimited” Resource Policy or Storage Quota.
| Description | Values |
| --- | --- |
| Sets the total amount, in MHz, of CPU that a given Developer's applications can collectively use on the Platform. For this setting to have any effect, Resource Throttling must be enabled. The total is calculated by measuring allocations against Resource Policies assigned to the applications. Applications deployed against an unlimited Resource Policy do not count towards the total. | -1 signifies no limit; otherwise any positive integer |
| Sets the total amount, in MB, of Memory that a given Developer's applications can collectively use on the Platform. For this setting to have any effect, Resource Throttling must be enabled. The total is calculated by measuring allocations against Resource Policies assigned to the applications. Applications deployed against an unlimited Resource Policy do not count towards the total. | -1 signifies no limit; otherwise any positive integer |
| Sets the total amount, in MB, of storage that a given Developer's applications can collectively use on the Platform. For this setting to have any effect, Resource Throttling must be enabled. The total is calculated by measuring allocations against Resource Policies assigned to the applications. Applications deployed against an unlimited Resource Policy do not count towards the total. | -1 signifies no limit; otherwise any positive integer |
Your Platform was installed with two Resource Policies: an Apprenda Core Services policy and a Cataloging Service policy. Both were installed as Inactive, meaning that they are not available for use for guest applications. These policies are essential for your Platform to work properly; as such, they should not be altered.