Monosoul's Dev Blog A blog to write down dev-related stuff I face

Using Liquibase with Kubernetes

If you’re using Liquibase for database versioning with Kubernetes to deploy your app, you might have faced an issue when a migration gets stuck because Liquibase can’t acquire lock. It might look somewhat like this:

liquibase.exception.LockException: Could not acquire change log lock.  Currently locked by LockOwner ...
        at liquibase.lockservice.StandardLockService.waitForLock(
        at liquibase.Liquibase.update(
        at liquibase.Liquibase.update(

If you face this issue, you probably run the DB migration on application startup, and chances are high that you’re using Spring Framework. The reasons for the issue vary, but one of them might be autoscaling enabled in Kubernetes: several instances start at once, and the one holding the lock gets killed before it can release it.

The issue

When running DB migrations, Liquibase uses a table called DATABASECHANGELOGLOCK for locking, to make sure no other instance runs the same migration concurrently. Liquibase sets the LOCKED column to 1 before running a migration and back to 0 after it finishes. But if the instance running the migration gets killed before it can finish, or any other issue happens, the lock is never released and you end up with the exception above.

In that case you have two ways of solving this issue:

  1. Run Liquibase’s releaseLocks command from the command line:
    liquibase --changeLogFile=mainchangelog.xml releaseLocks
    You can read more about it in the Liquibase documentation.
  2. Run a SQL statement that clears the lock row manually:
    UPDATE DATABASECHANGELOGLOCK SET LOCKED=0, LOCKGRANTED=null, LOCKEDBY=null;

The solution

The official Liquibase blog has a working solution for the issue, but it requires you to:

  • have another copy of your migration scripts in the container;
  • provide authentication details to connect to the DB in a way different from the one you use in your app (be it a Hashicorp’s vault integration or something else);
  • possibly even build a separate Docker image.

But there’s a way to take advantage of the authentication/connection infrastructure already packed into your app’s jar file, while still using init containers.

In the following examples I’ll be using Kotlin and Spring Framework, but you can apply a similar solution with any other language/framework.

Using command line to alter the context

Let’s change our main app class to take an argument and change the context configuration based on that:

import org.springframework.boot.ApplicationContextFactory.ofContextClass
import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration
import org.springframework.boot.autoconfigure.liquibase.LiquibaseAutoConfiguration
import org.springframework.boot.builder.SpringApplicationBuilder
import org.springframework.boot.runApplication
import org.springframework.context.annotation.AnnotationConfigApplicationContext
import org.springframework.context.annotation.Import

@SpringBootApplication
class Application

@Import(DataSourceAutoConfiguration::class, LiquibaseAutoConfiguration::class)
class LiquibaseInit

fun main(args: Array<String>) {
    if (args.contains("dbinit")) {
        // start only the beans needed to run the Liquibase migration
        SpringApplicationBuilder(LiquibaseInit::class.java)
            .contextFactory(ofContextClass(AnnotationConfigApplicationContext::class.java))
            .profiles("dbinit")
            .run(*args)
    } else {
        // start the full application context
        runApplication<Application>(*args)
    }
}

Let’s go through the code in this example.

We declare two classes here: Application (the main app class, annotated with @SpringBootApplication) and LiquibaseInit (importing DataSourceAutoConfiguration and LiquibaseAutoConfiguration via @Import). The first one spins up the whole context, while the latter only spins up the beans necessary for Liquibase to run the migration.

❗NOTE: with Spring Boot versions older than 2.4.0 use SpringApplicationBuilder#contextClass instead of the contextFactory method.

Inside the main function we check whether the arguments array contains the string dbinit, and if it does, we start an application context from the LiquibaseInit class. We also activate a Spring profile with the same name; more on that below.

Using Spring profiles to disable migrations on app startup

If we’re to run DB migrations using Kubernetes’ init containers, we should make sure not to run them on app startup as well. Otherwise we might still face the same issue. This is where we will use the profile mentioned above: we will only run Liquibase migrations when the dbinit profile is active.

Here’s an example of how to do that in application.yml:

spring:
  liquibase:
    change-log: classpath:/db/changelog/db.changelog-master.xml
    user: db_user
    password: password
    default-schema: db_schema
    enabled: false

---
spring:
  profiles: dbinit
  liquibase:
    enabled: true

Here we have Liquibase disabled by default, while dbinit profile enables it. Keep in mind that I’ve omitted some of the configuration properties unrelated to Liquibase, like datasource configuration and other stuff.
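If your project uses application.properties instead of YAML, the same idea might look like this. This is a sketch assuming Spring Boot 2.4+, where multi-document properties files (the #--- separator) and spring.config.activate.on-profile are available:

```properties
# Liquibase is disabled by default
spring.liquibase.change-log=classpath:/db/changelog/db.changelog-master.xml
spring.liquibase.enabled=false
#---
# the dbinit profile re-enables it
spring.config.activate.on-profile=dbinit
spring.liquibase.enabled=true
```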

Using init containers to run migration

Now that we have everything we need, there’s only one step left – to configure Kubernetes to run the container with a custom command that passes the new run argument.

Here’s a part of myapp.yaml where we configure it:

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
    - name: myapp-container
      image: myapp-image
      envFrom:
        - secretRef:
            name: db-secrets
  initContainers:
    - name: myapp-liquibase
      image: myapp-image
      command:
        - "java"
        - "-jar"
        - "/app/service.jar"
        - "dbinit"
      envFrom:
        - secretRef:
            name: db-secrets

You can read more about init containers in the official Kubernetes documentation. What’s happening here is that we reuse the same DB credentials (the db-secrets secret) for both myapp-container and myapp-liquibase, and we reuse the same image for both of them.
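To check that the migration actually ran before the app started, you can inspect the init container’s logs. A sketch; the pod and container names come from the example above:

```shell
# logs of the Liquibase init container (it runs to completion before the app starts)
kubectl logs myapp-pod -c myapp-liquibase

# logs of the main application container, once it is running
kubectl logs myapp-pod -c myapp-container
```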


With this approach you can use a single Docker image both to run your DB migrations and to run the app itself, while making sure you won’t end up with a stuck lock.

Hope this article helps you!

Happy coding!
