Google recently released Cloud Run, a serverless platform based on Knative that offers hosting of stateless web services which only run, and for which you are only charged, while they’re serving requests. For many projects, including many hobby projects, this solves the problem of having to pay for a server that isn’t guaranteed to always be serving requests, while also allowing your services to scale up to meet spikes in demand.

At a very basic level, Cloud Run is able to quickly start up a container image to serve requests when they come in, and will shut the container down, after a cooldown period, while there are no requests. This means you get flexible pricing, as you only pay for server resources while your server is in use, and it has tons of use cases, like development versions of a service, services that serve large quantities of requests at specific times (think services to support sporting or other event-based apps), or really any stateless service that you want to be able to grow, but also to scale down to zero, when needed.

Ktor also happens to be a great fit for many of these use cases, as it is a library that makes it easy to create REST services, can be easily packaged into a Docker container, and is well suited to the stateless nature of Cloud Run services. While any Docker container that runs a web server can work with Cloud Run, in this article I’d like to share a quick and easy way to set up and deploy a Ktor web service in Google’s Cloud Run.
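To ground this, here's roughly the shape of service I mean: a minimal Ktor module (a sketch assuming Ktor 1.3.x with the Netty engine; the package, route, and response text are placeholders, not from my actual project):

```kotlin
package com.example

import io.ktor.application.Application
import io.ktor.application.call
import io.ktor.response.respondText
import io.ktor.routing.get
import io.ktor.routing.routing

// Referenced from application.conf, so io.ktor.server.netty.EngineMain
// can load it on startup.
fun Application.module() {
    routing {
        get("/") {
            call.respondText("Hello from Cloud Run!")
        }
    }
}
```

With EngineMain, the listening port comes from application.conf; on Cloud Run you'd typically wire it to the PORT environment variable Cloud Run provides, e.g. `port = ${?PORT}` in the deployment block.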

The Jib Gradle Plugin

For this example we don’t actually need to look at any Ktor code, just Gradle build files, so there’s a fair argument that this applies to other stateless JVM service libraries as well, and I’m sure it does. But I haven’t tested it with others, so I’ll be discussing this in the context of Ktor.

While there are Ktor examples showing how to build Docker container images of your servers, I’ve recently started using a Gradle plugin called Jib. From the Google Container Tools team, Jib is a Gradle plugin (also available as a Maven plugin or CLI tool) for building Docker container images of your Java applications. Jib is capable of building and deploying container images from a Gradle task, without the need to write your own Dockerfiles, have deep knowledge of container optimization, or even have Docker installed.

Jib also creates a container with your exploded WAR or JAR file, which speeds up initialization for on-demand services like Cloud Run, and builds a separate layer for your dependencies, so you don’t need to use a shadow plugin to embed your dependencies into your final JAR file.

The Migration

My migration from a shaded JAR file and custom Dockerfile to using Jib was extremely straightforward. For years I’d been using the Gradle Shadow Plugin to compile my servers to a single JAR file, then writing a Dockerfile to build and serve the JAR from an Alpine OpenJDK container. This works reasonably well, but the Jib plugin greatly reduces the overhead.
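For reference, that older approach looks roughly like this (a sketch, not my exact Dockerfile; the JAR name is whatever the Shadow plugin produces, typically suffixed with `-all`):

```dockerfile
FROM openjdk:8-jre-alpine

# Copy in the single shaded JAR produced by the Shadow plugin.
COPY build/libs/server-all.jar /app/server.jar

EXPOSE 8080

ENTRYPOINT ["java", "-jar", "/app/server.jar"]
```

It works, but it means maintaining a Dockerfile, a Shadow configuration, and a local Docker install, all of which Jib makes unnecessary.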

Here’s my build.gradle.kts file.

plugins {
  kotlin("jvm") version "1.3.72"
  application
  id("com.google.cloud.tools.jib") version "2.7.0"
}

val main_class = "io.ktor.server.netty.EngineMain"

application {
  mainClassName = main_class

  applicationDefaultJvmArgs = emptyList()
}

// The projectId can be overridden by adding a `-P projectId=...` flag
// at the command line.
val projectId = project.findProperty("projectId") ?: "pigment-staging"
val image = "gcr.io/$projectId/gallery-pages"

jib {
  to.image = image

  container {
    ports = listOf("8080")
    mainClass = main_class

    // good defaults intended for Java 8 (>= 8u191) containers
    jvmFlags = listOf(
      "-server",
      "-Djava.awt.headless=true",
      "-XX:InitialRAMFraction=2",
      "-XX:MinRAMFraction=2",
      "-XX:MaxRAMFraction=2",
      "-XX:+UseG1GC",
      "-XX:MaxGCPauseMillis=100",
      "-XX:+UseStringDeduplication"
    )
  }
}

val deploy by tasks.registering(Exec::class) {
  commandLine = "gcloud run deploy gallery-pages --image $image --project $projectId --platform managed --region us-central1".split(" ")
  dependsOn(tasks.named("jib"))
}

The first step is to add the Jib plugin to the plugins block of your build file. In my case I also removed the Gradle Shadow Plugin and its configuration, since it’s no longer needed.

That’s followed by a standard application configuration block, which allows running the server using the ./gradlew run command.

After this comes the jib configuration. Aside from the to.image, which specifies the tag of the container image that will be generated, this is quite similar to the application configuration, specifying the main class and some configuration options. We also add the ports, which tell Jib which ports should be exposed in the container.
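One thing this assumes is that Jib can authenticate to Google Container Registry when it pushes the image. If you have the Cloud SDK installed, registering gcloud as a Docker credential helper is usually all that’s needed:

```shell
# Configures Docker's config (which Jib also reads) to use
# gcloud credentials for gcr.io registries.
gcloud auth configure-docker
```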

The last bit is my own personal deploy task. This allows me to not only build the container from Gradle, but also deploy it as a new instance in Cloud Run, using the ./gradlew deploy command. By default, this will deploy to my staging project, but I can easily override the project with a command line parameter: ./gradlew deploy -P projectId=pigment-prod.
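One small detail worth calling out: Exec.commandLine expects the executable and each argument as separate list elements rather than a single shell string, which is why the command above is split on spaces. A quick illustration:

```kotlin
fun main() {
    val cmd = "gcloud run deploy gallery-pages --platform managed".split(" ")
    // The first element is the executable; the rest are individual arguments.
    println(cmd.first()) // gcloud
    println(cmd.size)    // 6
}
```

This also means any argument that itself contains a space would need to be added as its own list element, rather than relying on the split.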

Without Cloud Run?

While Jib builds an optimized container for Knative and other on-demand services like Cloud Run, it is also capable of building a container using your locally installed Docker service, and of pushing to registries other than Google Container Registry, which I used above. By running ./gradlew jibDockerBuild, the container will be built using the local Docker service, allowing it to be run locally with the following command:

docker run --rm -p 8080:8080 gcr.io/pigment-staging/gallery-pages

Similarly, the image could be pushed to an internal registry, and hosted from your own environment, using Docker Compose, Kubernetes, or whatever other container service you choose.
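For instance, pointing Jib at a private registry is mostly a matter of changing the image tag (the registry host and Gradle property names below are hypothetical):

```kotlin
// build.gradle.kts — a sketch assuming a private registry at registry.example.com.
jib {
  to {
    image = "registry.example.com/my-team/gallery-pages"
    auth {
      // Pull credentials from Gradle properties rather than hard-coding them.
      username = project.findProperty("registryUser")?.toString() ?: ""
      password = project.findProperty("registryPassword")?.toString() ?: ""
    }
  }
}
```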

Quick and Easy

As I mentioned, Jib makes containerizing Ktor servers super quick and easy, and Cloud Run makes hosting them quick, easy, and affordable. I’ve been hosting production services on Cloud Run throughout its beta, and if you haven’t had the chance yet, I’d recommend checking it out.