
Jon Clausen

July 24, 2017

We've been using our CommandBox Docker images for a while now for multi-tier development and deployment. We've also received a lot of great feedback from the community that has helped expand the power and flexibility of those images in orchestrating CFML server environments.

One important aspect of non-development deployments of applications on the CommandBox image is the need to warm up the server by seeding the CFML engine's file system and configuration before the application is deployed to its target environment/tier. Other than the default Lucee 4.5 engine, which CommandBox itself runs on, any CFML engine specified in your application's server.json file is downloaded on server start. Depending on the latency of your Docker environment's connection, this can mean that a bare-bones first run of your application takes minutes to start up, rather than seconds. For obvious reasons, this is not desirable.
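
For reference, that engine selection lives in your application's server.json. A minimal example might look like the following (the adobe@2016 value is just an illustration):

{
    "app": {
        "cfengine": "adobe@2016"
    }
}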

The solution for fast startup times in production is to run pre-compiled images, configured for the environment, which contain a fully seeded file system and a warmed-up CFML server. These "baked" images also have the benefit of requiring no additional external network connectivity - other than a connection to a Docker registry from which to pull the image. During the warm-up process, the server files are installed, configuration is applied, and tests are performed against the running image. In addition, any tier-specific configuration adjustments are made during the "baking" process to ensure the application starts up quickly and efficiently in its target environment.

In order to do this, you will need a few things:

  1. A Docker environment from which to build a fully warmed-up image. Most typically this is your CI build environment, but it can even be your local machine if you wish to build and deploy manually. We'll demonstrate it locally here.
  2. A private Docker registry, which will hold your pre-built images. You can obtain this through Docker Hub's commercial services or use a third-party private registry provider. If your VC/CI environment uses GitLab, this functionality is built into the latest version of the GitLab Omnibus package. In addition, AWS has its own container registry for deployments on their services, as does Microsoft Azure. You can also deploy your own on any Docker-enabled host (see the quick self-hosted sketch just after this list). The general steps are similar across platforms.
  3. Lastly, you'll need a Docker container service running to which you can deploy your pre-built images. For this tutorial, we'll be using a service deployed on Docker Swarm. You can create and test your own Swarm setup, with a sandbox lifetime of about 4 hours, using Play with Docker (which is how I tested the functionality of the scripts in this tutorial).
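
If you just want to experiment before committing to a hosted registry, Docker's official registry image will give you a bare-bones private registry on any Docker host. This is only a sketch - a production registry would need TLS and authentication in front of it:

# Run a throwaway private registry on port 5000 (sketch only - no TLS/auth)
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# Images pushed to it from the same host are addressed as localhost:5000/<image-name>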

Let's create a simple test "application" to deploy. Starting from an empty directory, create a .cfm file that will be served by our chosen CFML engine from inside a container:

echo '<cfsetting enablecfoutputonly="true"/>' > index.cfm
echo '<cfoutput><h1>We are up and running on Docker!</h1></cfoutput>' >> index.cfm

Now let's create a Dockerfile we can use to warm up our images:

touch Dockerfile

Now open this Dockerfile so we can add our warm up steps. We're going to perform a basic warm up of the image, with no additional configuration, which will simply seed the server files and set the deployment to HEADLESS - meaning the Lucee admin interfaces will not be accessible.

Add the following lines, so that our Docker build copies our application files and then warms up the image:

FROM ortussolutions/commandbox

# Copy application files to root
COPY ./ $APP_DIR/

# Set our server to start up in headless mode
ENV HEADLESS true

# Warm up our server
# Set our image testing flag to prevent the server from tailing output and hanging up the build
# Then we start, stop the server, and unset the testing variable
RUN export IMAGE_TESTING_IN_PROGRESS=true && \
	$BUILD_DIR/run.sh && \
	cd $APP_DIR && box server stop && \
	unset IMAGE_TESTING_IN_PROGRESS && \
	echo "Container successfully warmed up"

Lastly, let's set a custom CFML engine (along with a small heap size) to use for this application, bypassing the default Lucee 4.5 engine that CommandBox uses:

box server set jvm.heapSize=64 app.cfengine=lucee@5
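
That command writes to the server.json in our application root; the result should look roughly like this:

{
    "jvm": {
        "heapSize": "64"
    },
    "app": {
        "cfengine": "lucee@5"
    }
}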

Now that you have a file system in place, it's time to set up a CI pipeline. There are three basic actions which need to be executed during the build and publish process:

  1. Build the image
  2. Test the image - in this case we are simply checking that the health check passed. For a real app, we would obviously want to integrate actual testing into that phase
  3. Deploy the image, if it passes the tests

This can be performed manually, on your local machine, with only a few simple commands:

# Build our image
docker build --no-cache -t my-test-app -f ./Dockerfile ./

# Log in and push to the private registry
docker login -u [Registry Username] -p [Registry Password] my.privateregistry.com
docker tag my-test-app my.privateregistry.com/my-test-app:latest
docker push my.privateregistry.com/my-test-app:latest

# Test the image
docker run --name app-test -d my.privateregistry.com/my-test-app:latest
RUNNING=$(docker inspect --format="{{.State.Running}}" app-test 2> /dev/null)

# Deploy the image if the health check passed, by SSH-ing into our Docker Swarm master and applying a force update (which pulls a fresh copy of the image from the registry)
if [ "$RUNNING" != "false" ]; then 
	ssh gitlab@myswarmmaster.mydomain.com
	 "
	sudo docker login -u [Registry Username] -p [ Registry Password ] my.privateregistry.com &&
	sudo docker service update --with-registry-auth --force --image my.privateregistry.com/my-test-app:latest  my-docker-service-name
    "
fi
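
A slightly stricter gate is to wait for the image's own health check, rather than only confirming that the container is running. The following is a sketch - it assumes the image defines a HEALTHCHECK (as the test step above implies) and that we give it a minute to report:

# Optional stricter check: gate the deploy on the container's reported health status
sleep 60
HEALTH=$(docker inspect --format="{{.State.Health.Status}}" app-test 2> /dev/null)
if [ "$HEALTH" = "healthy" ]; then
	echo "Health check passed - safe to deploy"
fi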

When these script commands are run, your image will be created, pushed to your registry, and then deployed to your Docker Swarm. Note: if you are using Play with Docker, you'll have to run the login and deployment commands on the Swarm master yourself, since SSH access is disallowed. Since the server engine is warmed up and pre-configured, your container startup times are greatly reduced.
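
Once the service update completes, a quick smoke test confirms the app is actually serving. This assumes your Swarm service publishes the image's default port 8080 and that the host below is reachable from your machine:

# Hit the warmed-up app through the Swarm's published port (hostname and port are assumptions)
curl -fsS http://myswarmmaster.mydomain.com:8080/index.cfm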

The next step in the process, which we'll save for a future post, is to build a fully automated build, test, and deployment pipeline in CI. The commands used and the warm-up process for your production images remain largely the same.
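
As a teaser, those three steps map naturally onto CI stages. A minimal GitLab CI sketch might look something like the following - the stage names and the REGISTRY_USER/REGISTRY_PASSWORD variables are assumptions for illustration, not the pipeline we'll cover in that post:

# .gitlab-ci.yml (hypothetical sketch)
stages:
  - build
  - test
  - deploy

build_image:
  stage: build
  script:
    - docker build --no-cache -t my.privateregistry.com/my-test-app:latest -f ./Dockerfile ./

test_image:
  stage: test
  script:
    - docker run --name app-test -d my.privateregistry.com/my-test-app:latest
    - test "$(docker inspect --format='{{.State.Running}}' app-test)" = "true"

deploy_image:
  stage: deploy
  script:
    - docker login -u "$REGISTRY_USER" -p "$REGISTRY_PASSWORD" my.privateregistry.com
    - docker push my.privateregistry.com/my-test-app:latest
    - ssh gitlab@myswarmmaster.mydomain.com "sudo docker service update --with-registry-auth --force --image my.privateregistry.com/my-test-app:latest my-docker-service-name"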
