Jon Clausen

July 24, 2017

We've been using our CommandBox Docker images for a while now for multi-tier development and deployment. We've also received a lot of great feedback from the community, which has helped expand the power and flexibility of those images in orchestrating CFML server environments.

One important aspect of non-development deployments of applications on the CommandBox image is the need to warm up the server by seeding the CFML engine file system and configuration before the application is deployed to its target environment/tier. Other than the default Lucee 4.5 engine, which CommandBox itself runs on, any CFML engine specified in your application's server.json file is downloaded on server start. Depending on the latency of your Docker environment's connection, this can mean that a bare-bones first run of your application takes minutes to start up, rather than seconds. For obvious reasons, this is not desirable.

The solution for fast startup times in production is to run pre-compiled images, configured for the environment, which contain a fully seeded file system and a warmed-up CFML server. These "baked" images also have the benefit of requiring no additional external network connectivity, other than a connection to a Docker registry from which to pull the image. During the warm-up process, the server files are installed, configuration is applied, and tests are performed against the running image. In addition, any tier-specific configuration adjustments are made during the "baking" process, to ensure the application starts up quickly and efficiently in its target environment.

In order to do this, you will need a few things:

  1. A Docker environment from which to build a fully warmed-up image. Most typically, this is your CI build environment, but it can even be your local machine if you wish to build and deploy manually. We'll demonstrate it locally here.
  2. A private Docker registry, which will hold your pre-built images. You can obtain one through Docker Hub's commercial services, or use a third-party private registry provider. If your VCS/CI environment uses GitLab, this functionality is built into the latest version of the GitLab Omnibus package. In addition, AWS has its own container registry for deployments on its services, as does Microsoft Azure. You can also deploy your own registry on any Docker-enabled host. The general steps are similar across platforms.
  3. Lastly, you'll need a running Docker container service to which you can deploy your pre-built images. For this tutorial, we'll be using a service deployed on Docker Swarm. You can create and test your own Swarm setup, with a sandbox lifetime of about 4 hours, using Play with Docker (which is how I tested the functionality of the scripts in this tutorial).

Let's create a simple test "application" to deploy. Starting from an empty directory, let's create a .cfm file that will be served by our chosen CFML engine from inside of a container:

echo '<cfsetting enablecfoutputonly="true"/>' > index.cfm
echo '<cfoutput><h1>We are up and running on Docker!</h1></cfoutput>' >> index.cfm

Now let's create a Dockerfile we can use to warm up our images:

touch Dockerfile

Now open this Dockerfile so we can add our warm-up steps. We're going to perform a basic warm-up of the image, with no additional configuration, which will simply seed the server files and set the deployment to HEADLESS, meaning the Lucee admin interfaces will not be accessible.

Add the following lines, so that our Docker build copies our application files and then warms up the image:

FROM ortussolutions/commandbox

# Copy application files to the server root
COPY ./ ${APP_DIR}/

# Sets our server to start up in headless mode forever
ENV HEADLESS true

# Warm up our server
# Set our image testing flag up to prevent the server from tailing output and hanging up the build
# Then we start and stop the server, and unset the testing variable
RUN export IMAGE_TESTING=true && \
	$BUILD_DIR/run.sh && \
	cd $APP_DIR && box server stop && \
	unset IMAGE_TESTING && \
	echo "Container successfully warmed up"
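Since we'll be checking the container's health before deploying it later on, you may also want the Docker engine to track health explicitly. This is a hypothetical addition to the Dockerfile, assuming the default CommandBox port of 8080 and that curl is available in the image:

```dockerfile
# Hypothetical health check: poll the site root on the default CommandBox port
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
	CMD curl --fail http://localhost:8080/ || exit 1
```

With a health check declared, `docker inspect` exposes a `.State.Health.Status` field in addition to `.State.Running`.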

Lastly, let's set a custom CFML engine (along with a small heap size) that we will use for this application, bypassing the default Lucee 4.5 engine that CommandBox uses:

box server set jvm.heapSize=64 app.cfengine=lucee@5
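For reference, running that command writes the settings into the server.json file in your project root, which should look something like this:

```json
{
    "jvm":{
        "heapSize":"64"
    },
    "app":{
        "cfengine":"lucee@5"
    }
}
```

Because this file is copied into the image, the warm-up step downloads and seeds Lucee 5 at build time rather than at container start.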

Now that you have a file system in place, it's time to set up a CI pipeline. There are three basic actions which need to be executed during the build and publish process:

  1. Build the image
  2. Test the image - in this case we are simply checking that the health check passed. For a real application, we would obviously want to integrate actual testing into that phase
  3. Deploy the image, if it passes the tests

This can be performed manually, on your local machine, with only a few simple commands:

# Build our image
docker build --no-cache -t my-test-app -f ./Dockerfile ./

# Log in and push to the private registry
docker login -u [Registry Username] -p [Registry Password]
docker tag my-test-app [Registry Host]/my-test-app
docker push [Registry Host]/my-test-app

# Test the image
docker run --name app-test -d [Registry Host]/my-test-app
RUNNING=$(docker inspect --format="{{.State.Running}}" app-test 2> /dev/null)
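A note on that check: `docker inspect --format="{{.State.Running}}"` prints `true` or `false`, and nothing at all if the container doesn't exist, so comparing against `"true"` explicitly is safer than testing `!= "false"`, which would also pass on an empty result. The gating logic can be sketched as a small helper (`should_deploy` is a hypothetical name standing in for the deploy step):

```shell
#!/bin/bash
# Hypothetical helper: decide whether to deploy, given the output of
#   docker inspect --format="{{.State.Running}}" app-test
# which prints "true", "false", or nothing if the container doesn't exist.
should_deploy() {
	if [ "$1" = "true" ]; then
		echo "deploy"
	else
		echo "skip"
	fi
}

should_deploy "true"   # prints "deploy"
should_deploy "false"  # prints "skip"
should_deploy ""       # prints "skip" (container never started)
```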

# Deploy the image if the health check passed, by SSH-ing into our Docker Swarm master and applying a force update (which pulls a fresh copy of the image from the registry)
if [ "$RUNNING" = "true" ]; then
	sudo docker login -u [Registry Username] -p [Registry Password] &&
	sudo docker service update --with-registry-auth --force --image [Registry Host]/my-test-app my-docker-service-name
fi

When these script commands are run, your image will be created, pushed to your registry, and then deployed to your Docker Swarm. Note: if you are using Play with Docker, you'll have to run the login and deployment commands on the Swarm master manually, since SSH access is disallowed. Since the server engine is warmed up and pre-configured, your container startup times are greatly reduced.

The next step in the process, which we'll save for a future post, is to build a fully automated build, test, and deployment pipeline in CI. The commands used and the warm-up process for your production images remain largely the same.
