This guide describes what you should be aware of when you want to migrate an existing Tomcat application to run on Azure Kubernetes Service (AKS).
To ensure a successful migration, before you start, complete the assessment and inventory steps described in the following sections.
External resources, such as data sources, JMS message brokers, and others, are injected via Java Naming and Directory Interface (JNDI). Some of these resources may require migration or reconfiguration.
Inspect the META-INF/context.xml file. Look for <Resource> elements inside the <Context> element.
Inspect the $CATALINA_BASE/conf/context.xml and $CATALINA_BASE/conf/server.xml files as well as the .xml files found in $CATALINA_BASE/conf/<engine-name>/<host-name> directories.
In context.xml files, JNDI resources will be described by the <Resource> elements inside the top-level <Context> element. In server.xml files, JNDI resources will be described by the <Resource> elements inside the <GlobalNamingResources> element.
Datasources are JNDI resources with the type attribute set to javax.sql.DataSource. For each datasource, document the following information:
- What is the datasource name?
- What is the connection pool configuration?
- Where can I find the JDBC driver JAR file?
For more information, see JNDI Datasource HOW-TO in the Tomcat documentation.
It isn't feasible to document every possible external dependency in this guide. It's your team's responsibility to verify that you can satisfy every external dependency of your application after the migration.
Check all properties and configuration files on the production server(s) for any secret strings and passwords. Be sure to check server.xml and context.xml in $CATALINA_BASE/conf. You may also find configuration files containing passwords or credentials inside your application. These may include META-INF/context.xml, and, for Spring Boot applications, application.properties or application.yml files.
Any usage of the file system on the application server will require reconfiguration or, in rare cases, architectural changes. You may identify some or all of the following scenarios.
If your application currently serves static content, you need an alternate location for it. You should consider moving static content to Azure Blob Storage and adding Azure Front Door for fast downloads globally. For more information, see Static website hosting in Azure Storage and Integrate an Azure Storage account with Azure Front Door.
For files that are frequently written and read by your application (such as temporary data files), or static files that are visible only to your application, you can mount Azure Storage shares as persistent volumes. For more information, see Create and use a volume with Azure Files in Azure Kubernetes Service (AKS).
To identify the session persistence manager in use, inspect the context.xml files in your application and Tomcat configuration. Look for the <Manager> element, and then note the value of the className attribute.
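For example, a file-based configuration that uses the built-in managers discussed below might look like the following illustrative snippet:

<Context>
  <Manager className="org.apache.catalina.session.PersistentManager">
    <Store className="org.apache.catalina.session.FileStore" directory="sessions" />
  </Manager>
</Context>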
Tomcat's built-in PersistentManager implementations, such as StandardManager or FileStore, aren't designed for use with a distributed, scaled platform such as Kubernetes. Because AKS may load balance among several pods and transparently restart any pod at any time, persisting mutable state to a file system isn't recommended.
If session persistence is required, you'll need to use an alternate PersistentManager implementation that will write to an external data store, such as VMware Tanzu Session Manager with Redis Cache. For more information, see Use Redis as a session cache with Tomcat.
Certain production scenarios may require additional changes or impose additional limitations. While such scenarios can be infrequent, it is important to ensure that they are either inapplicable to your application or correctly resolved.
Scheduled jobs, such as Quartz Scheduler tasks or cron jobs, can't be used with containerized Tomcat deployments. If your application is scaled out, one scheduled job may run more than once per scheduled period. This situation can lead to unintended consequences.
Inventory any scheduled jobs, inside or outside the application server.
If your application contains any code that depends on the operating system it runs on, you need to refactor it to remove those dependencies. For instance, any uses of / or \ in file system paths may need to be replaced with File.separator or Paths.get.
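The following minimal Java example, with hypothetical directory and file names, contrasts a hard-coded separator with the platform-neutral APIs:

import java.io.File;
import java.nio.file.Path;
import java.nio.file.Paths;

public class PathExample {
    public static void main(String[] args) {
        // OS-specific: assumes the Windows separator and breaks on other platforms
        String legacyPath = "data\\reports\\output.txt";

        // Portable: java.nio.file composes the path with the correct separator
        Path portablePath = Paths.get("data", "reports", "output.txt");

        // File.separator is an alternative when string concatenation is unavoidable
        String joinedPath = "data" + File.separator + "reports" + File.separator + "output.txt";

        System.out.println(legacyPath + " | " + portablePath + " | " + joinedPath);
    }
}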
MemoryRealm requires a persisted XML file. On Kubernetes, this file needs to be added to the container image or uploaded to shared storage that is made available to containers. The pathname attribute will have to be modified accordingly. To determine whether MemoryRealm is currently used, inspect your server.xml and context.xml files and search for <Realm> elements where the className attribute is set to org.apache.catalina.realm.MemoryRealm.
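Such a configuration looks similar to the following illustrative example:

<Realm className="org.apache.catalina.realm.MemoryRealm"
       pathname="conf/tomcat-users.xml" />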
In containerized deployments, SSL sessions are typically offloaded outside the application container, usually by the ingress controller. If your application requires SSL session tracking, ensure the SSL traffic gets passed through to the application container directly.
If AccessLogValve is used, the directory parameter should be set to a mounted Azure Files share or one of its subdirectories.
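For example, assuming the share is mounted in the container at /tomcat_logs (the mount path used later in this guide), the valve configuration might look like this:

<Valve className="org.apache.catalina.valves.AccessLogValve"
       directory="/tomcat_logs"
       prefix="localhost_access_log"
       suffix=".txt"
       pattern="%h %l %u %t &quot;%r&quot; %s %b" />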
Before you create container images, migrate your application to the JDK and Tomcat that you intend to use on AKS. Test your application thoroughly to ensure compatibility and performance.
In the pre-migration, you'll likely have identified secrets and external dependencies, such as datasources, in server.xml and context.xml files. For each item thus identified, replace any username, password, connection string, or URL with an environment variable.
Note
Microsoft recommends using the most secure authentication flow available. The credential-based authentication flow described in this procedure, whether for databases, caches, messaging, or AI services, requires a very high degree of trust in the application and carries risks not present in other flows. Use this flow only when more secure options, like managed identities for passwordless or keyless connections, are not viable. For local machine operations, prefer user identities for passwordless or keyless connections.
For example, suppose the context.xml file contains the following element:
<Resource
name="jdbc/dbconnection"
type="javax.sql.DataSource"
url="jdbc:postgresql://postgresdb.contoso.com/wickedsecret?ssl=true"
driverClassName="org.postgresql.Driver"
username="postgres"
password="{password}"
/>
In this case, you could change it as shown in the following example:
<Resource
name="jdbc/dbconnection"
type="javax.sql.DataSource"
url="${postgresdb.connectionString}"
driverClassName="org.postgresql.Driver"
username="${postgresdb.username}"
password="${postgresdb.password}"
/>
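Tomcat resolves ${...} placeholders in its configuration files from Java system properties. One common approach, shown here only as a sketch with hypothetical environment variable names, is to map the environment variables injected into the container to matching system properties in the startup script:

# Illustrative fragment for the container startup script: map environment
# variables (injected by Kubernetes) to the system properties referenced
# in server.xml and context.xml.
export CATALINA_OPTS="$CATALINA_OPTS \
  -Dpostgresdb.connectionString=${POSTGRESDB_CONNECTION_STRING} \
  -Dpostgresdb.username=${POSTGRESDB_USERNAME} \
  -Dpostgresdb.password=${POSTGRESDB_PASSWORD}"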
With the exception of the first step ("Provision container registry and AKS"), we recommend that you follow the steps below individually for each application (WAR file) you wish to migrate.
Note
Some Tomcat deployments may have multiple applications running on a single Tomcat server. If this is the case in your deployment, we strongly recommend running each application in a separate pod. This enables you to optimize resource utilization for each application while minimizing complexity and coupling.
Create a container registry and an Azure Kubernetes Service cluster whose identity is allowed to pull images from the registry; the --attach-acr parameter used below grants the cluster the AcrPull role on the registry. Be sure to choose the appropriate network model for your cluster's networking requirements.
az group create \
--resource-group $resourceGroup \
--location eastus
az acr create \
--resource-group $resourceGroup \
--name $acrName \
--sku Standard
az aks create \
--resource-group $resourceGroup \
--name $aksName \
--attach-acr $acrName \
--network-plugin azure
Clone the Tomcat On Containers Quickstart GitHub repository. It contains a Dockerfile and Tomcat configuration files with a number of recommended optimizations. In the steps below, we outline modifications you'll likely need to make to these files before building the container image and deploying to AKS.
If you intend to use Tomcat Clustering on AKS, ensure that the necessary port ranges are exposed in the Dockerfile. To specify the server IP address in server.xml, use a value from a variable that is initialized at container startup to the pod's IP address.
Alternatively, session state can be persisted to an alternate location to be available across replicas.
To determine whether your application uses clustering, look for the <Cluster> element inside the <Host> or <Engine> elements in the server.xml file.
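For example, a clustered configuration typically contains an element similar to this minimal, illustrative form:

<Engine name="Catalina" defaultHost="localhost">
  <!-- The presence of this element indicates that Tomcat Clustering is in use -->
  <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" />
  <!-- Host, Realm, and Valve elements omitted for brevity -->
</Engine>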
Edit server.xml to add the resources you prepared in the pre-migration steps, such as Data Sources, as shown in the following example:
Note
Microsoft recommends using the most secure authentication flow available. The credential-based authentication flow described in this procedure, whether for databases, caches, messaging, or AI services, requires a very high degree of trust in the application and carries risks not present in other flows. Use this flow only when more secure options, like managed identities for passwordless or keyless connections, are not viable. For local machine operations, prefer user identities for passwordless or keyless connections.
<!-- Global JNDI resources
Documentation at /docs/jndi-resources-howto.html
-->
<GlobalNamingResources>
<!-- Editable user database that can also be used by
UserDatabaseRealm to authenticate users
-->
<Resource name="UserDatabase" auth="Container"
type="org.apache.catalina.UserDatabase"
description="User database that can be updated and saved"
factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
pathname="conf/tomcat-users.xml"
/>
<!-- Migrated datasources here: -->
<Resource
name="jdbc/dbconnection"
type="javax.sql.DataSource"
url="${postgresdb.connectionString}"
driverClassName="org.postgresql.Driver"
username="${postgresdb.username}"
password="${postgresdb.password}"
/>
<!-- End of migrated datasources -->
</GlobalNamingResources>
For additional data source instructions, see the following sections of the JNDI Datasource How-To in the Tomcat documentation:
The simplest way to build and upload the image to Azure Container Registry (ACR) for use by AKS is to use the az acr build command. This command doesn't require Docker to be installed on your computer. For example, if you have the Dockerfile above and the application package petclinic.war in the current directory, you can build the container image in ACR with one step:
az acr build \
--image "${acrName}.azurecr.io/petclinic:{{.Run.ID}}" \
--registry $acrName \
--build-arg APP_FILE=petclinic.war \
--build-arg SERVER_XML=prod.server.xml .
You can omit the --build-arg APP_FILE... parameter if your WAR file is named ROOT.war. You can omit the --build-arg SERVER_XML... parameter if your server XML file is named server.xml. Both files must be in the same directory as the Dockerfile.
Alternatively, you can use Docker CLI to build the image locally. This approach can simplify testing and refining the image before initial deployment to ACR. However, it requires Docker CLI to be installed and Docker daemon to be running.
# Build the image locally
sudo docker build . --build-arg APP_FILE=petclinic.war -t "${acrName}.azurecr.io/petclinic:1"
# Run the image locally
sudo docker run -d -p 8080:8080 "${acrName}.azurecr.io/petclinic:1"
# Your application can now be accessed with a browser at http://localhost:8080.
# Log into ACR
sudo az acr login --name $acrName
# Push the image to ACR
sudo docker push "${acrName}.azurecr.io/petclinic:1"
For more information, see the Learn module for Building and storing container images in Azure.
If your application is to be accessible from outside your internal or virtual network(s), a public static IP address will be required. This IP address should be provisioned inside the cluster's node resource group.
export nodeResourceGroup=$(az aks show \
--resource-group $resourceGroup \
--name $aksName \
--query 'nodeResourceGroup' \
--output tsv)
export publicIp=$(az network public-ip create \
--resource-group $nodeResourceGroup \
--name applicationIp \
--sku Standard \
--allocation-method Static \
--query 'publicIp.ipAddress' \
--output tsv)
echo "Your public IP address is ${publicIp}."
Create and apply your Kubernetes YAML file(s). If you're creating an external load balancer (whether to your application or to an ingress controller), be sure to provide the IP address provisioned in the previous section as the LoadBalancerIP.
Include externalized parameters as environment variables. Don't include secrets (such as passwords, API keys, and JDBC connection strings). Secrets are covered in the Configure KeyVault FlexVolume section.
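The following manifest is a sketch only; the image name, labels, ports, and environment variable names are placeholders that you need to adapt to your application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: petclinic
spec:
  replicas: 2
  selector:
    matchLabels:
      app: petclinic
  template:
    metadata:
      labels:
        app: petclinic
    spec:
      containers:
      - name: petclinic
        image: <your-acr-name>.azurecr.io/petclinic:<tag>
        ports:
        - containerPort: 8080
        env:
        # Non-secret externalized parameters only; secrets are covered in the
        # Configure KeyVault FlexVolume section.
        - name: POSTGRESDB_USERNAME
          value: "postgres"
---
apiVersion: v1
kind: Service
metadata:
  name: petclinic
spec:
  type: LoadBalancer
  # The static IP address provisioned in the previous section.
  loadBalancerIP: <your-static-ip>
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: petclinic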
If your application requires non-volatile storage, configure one or more Persistent Volumes.
You might want to create a Persistent Volume using Azure Files mounted to the Tomcat logs directory /tomcat_logs to retain logs centrally. For more information, see Dynamically create and use a persistent volume with Azure Files in Azure Kubernetes Service (AKS).
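For example, a dynamically provisioned claim might look like the following sketch; it assumes the azurefile-csi storage class that AKS provides by default:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tomcat-logs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile-csi
  resources:
    requests:
      storage: 5Gi

You would then reference the claim in the deployment's pod template (volumes plus volumeMounts) and mount it at /tomcat_logs so access logs survive pod restarts.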
Create an Azure KeyVault and populate all the necessary secrets. Then, configure a KeyVault FlexVolume to make those secrets accessible to pods.
You'll need to modify the startup script (startup.sh in the Tomcat on Containers GitHub repository) to import the certificates into the local keystore on the container.
Note
Microsoft recommends using the most secure authentication flow available. The credential-based authentication flow described in this procedure, whether for databases, caches, messaging, or AI services, requires a very high degree of trust in the application and carries risks not present in other flows. Use this flow only when more secure options, like managed identities for passwordless or keyless connections, are not viable. For local machine operations, prefer user identities for passwordless or keyless connections.
To execute scheduled jobs on your AKS cluster, define Cron Jobs as needed.
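For example, a job that runs nightly could be defined as follows; the image, command, and schedule are placeholders:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 2 * * *"        # 02:00 UTC every day
  concurrencyPolicy: Forbid    # don't start a new run while one is still in progress
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: nightly-report
            image: <your-acr-name>.azurecr.io/report-job:<tag>
            args: ["run-report"]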
Now that you've migrated your application to AKS, you should verify that it works as you expect. Once you've done that, we have some recommendations for you that can make your application more cloud native.
Consider adding a DNS name to the IP address allocated to your ingress controller or application load balancer. For more information, see Use TLS with an ingress controller on Azure Kubernetes Service (AKS).
Consider adding Helm charts for your application. A Helm chart allows you to parameterize your application deployment for use and customization by a more diverse set of customers.
Design and implement a DevOps strategy. To maintain reliability while increasing your development velocity, consider automating deployments and testing with Azure Pipelines.
Enable Azure Monitor for the cluster to allow the collection of container logs, track utilization, and so on.
Consider exposing application-specific metrics via Prometheus. Prometheus is an open-source metrics framework broadly adopted in the Kubernetes community. You can configure Prometheus Metrics scraping in Azure Monitor instead of hosting your own Prometheus server to enable metrics aggregation from your applications and automated response to or escalation of aberrant conditions.
Design and implement a business continuity and disaster recovery strategy. For mission-critical applications, consider a multi-region deployment architecture.
Review the Kubernetes Version Support policy. It's your responsibility to keep updating your AKS cluster to ensure it's always running a supported version.
Have all team members responsible for cluster administration and application development review the pertinent AKS best practices.
Evaluate the items in the logging.properties file. Consider eliminating or reducing some of the logging output to improve performance.
Consider monitoring the code cache size and adding the parameters -XX:InitialCodeCacheSize and -XX:ReservedCodeCacheSize to the JAVA_OPTS variable in the Dockerfile to further optimize performance.
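For example, assuming the Dockerfile already defines JAVA_OPTS and passes it to Tomcat, you could append illustrative starting values such as the following and tune them based on observed usage:

# Illustrative sizes only; adjust after monitoring actual code cache usage.
ENV JAVA_OPTS="${JAVA_OPTS} -XX:InitialCodeCacheSize=64m -XX:ReservedCodeCacheSize=256m"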