Friday, December 31, 2021

Getting rid of Liquibase changelog lock

If you're using Liquibase you have most probably run into the database changelog lock. The Liquibase documentation describes it as:

Liquibase uses the DATABASECHANGELOGLOCK table to ensure only one instance of Liquibase is running at one time.

Because Liquibase simply reads from the DATABASECHANGELOG table to determine which changesets need to run, if multiple instances of Liquibase are executed against the same database concurrently you will get conflicts. This can happen if multiple developers use the same database instance or if there are multiple servers in a cluster which auto-run Liquibase on startup.

To get rid of the lock you can run liquibase releaseLocks, which in turn executes UPDATE DATABASECHANGELOGLOCK SET LOCKED=0 at the database level.
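If you prefer to issue that statement yourself (say, from a maintenance script), it is a one-liner against the lock table. A minimal Python sketch, using sqlite3 purely as a stand-in for whatever DB-API driver your database uses (the table layout here only mimics Liquibase's defaults):

```python
import sqlite3

def release_changelog_lock(conn):
    """Run the same statement 'liquibase releaseLocks' executes on the DB."""
    conn.execute("UPDATE DATABASECHANGELOGLOCK SET LOCKED = 0")
    conn.commit()

# Demo against an in-memory stand-in for the real lock table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE DATABASECHANGELOGLOCK "
    "(ID INT, LOCKED INT, LOCKGRANTED TEXT, LOCKEDBY TEXT)"
)
conn.execute(
    "INSERT INTO DATABASECHANGELOGLOCK VALUES "
    "(1, 1, '2021-12-31 10:00:00', 'some-host')"
)

release_changelog_lock(conn)
locked = conn.execute("SELECT LOCKED FROM DATABASECHANGELOGLOCK").fetchone()[0]
print(locked)  # 0
```

Swap the sqlite3 connection for your actual driver (cx_Oracle, psycopg2, etc.); the UPDATE statement itself is the same.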

In theory you can disable the lock using https://github.com/liquibase/liquibase-nochangeloglock, but that is dangerous if you have several instances of your app/service running in parallel.

There are cases where the lock gets in the way and you want to automate its release. At the moment (December 2021) Liquibase has several pending PRs that will bind this lock to a particular process. Until those are merged, however, your best option is a Liquibase extension like https://github.com/oridool/liquibase-locking. That particular extension seems to be bound to PostgreSQL, though, and in my case I was using a different database.

I wanted to keep the locking to prevent accidental changelog modification by two instances at the same time, but I also had to work within the following boundary conditions:

  • usually only one instance of the service/application is running (a monolith or a scheduled task, for instance)
  • the application is often killed or exits during startup, while the Liquibase changelog is being applied
  • a PaaS health check or probe (Kubernetes, Mesos/Marathon, Cloud Foundry) does not allow the Liquibase changelog run to take more than a certain amount of time

So I decided to create a Liquibase extension that: 

  • releases the lock if it was created more than max-timeout minutes ago
  • works with the default strategy (waits for the lock) otherwise
 
Be aware that you can only use it safely if you have a single instance of your application and it cannot run for more than the configured max timeout.

Friday, October 15, 2021

Importing sqlldr "loader" format data in Oracle SQL Developer

My previous post detailed how to back up & restore tables. However, it turns out that restoring/importing a single table is harder than it should be.

The problem lies mostly in the UI of Oracle's SQL Developer. By default it expects a CSV/Excel/DAT file, and opening a loader-formatted file (.ldr) is not an option.

Googling "how to import loader format data" brought some hints though. The first result was about using sqlldr directly, a separate executable (part of the Instant Client) that you need to download from the Oracle site.

The second result was on Jeff Smith's site and helped me realize that I could do the same with SQL Developer. After all, the backup I made had already produced loader format files, so it was strange that I could not import them back.

Jeff's post suggested that loader files are CSV-like, and inspecting them proved it. Then it hit me: the backup required separators and delimiters, so the loader format is not much different from CSV.

So I renamed the file to .csv and tried to follow Jeff's steps. No luck: things have obviously changed a bit since 2012.

What worked for me though was:

  1. Rename the .ldr file to .csv
  2. Delete the first space character in the file

    As the .ldr file consists of a single line that can be several gigabytes long, most text editors cannot cope with it, so you need to do this from the command line.

  3. Select your table and start Import
  4. Open your .ldr file (masked as .csv now) for import
  5. Stop reading the preview when asked to do so
  6. Select the following options

    Set Line Terminator to |{EOL}<space>
    Set Delimiter to |
    Deselect Header, as loader files have no header info

    Selecting the terminator and delimiter will produce several warnings that the data cannot be previewed. Dismiss them; once you have entered everything you should see your data in the File Contents pane.

    Carefully inspect the data. If you see a space in front of the data in the first column, check that your Line Terminator ends with the space character (invisible on the screenshot).


  7. Select import method

    I used Insert (without opening the resulting SQL in the workspace), but you might check Jeff's advice on how to use SQL Loader instead.

  8. Select columns & definitions

    There should be no warnings in these two steps. If your first column (usually ID) reports that the data is not appropriate for non-null columns, check whether that first space character is still in your file and, if so, remove it.

  9. Start the import
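Step 2 above (removing that first space) is the only fiddly part, since the whole file is one multi-gigabyte line. A small Python sketch that streams the file in chunks and drops only the first space character, rather than loading everything into an editor (file names and contents are placeholders):

```python
def drop_first_space(src_path, dst_path, chunk_size=1024 * 1024):
    """Copy src to dst, omitting only the first space character encountered."""
    dropped = False
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break
            if not dropped and b" " in chunk:
                i = chunk.index(b" ")
                chunk = chunk[:i] + chunk[i + 1:]
                dropped = True
            dst.write(chunk)

# Demo on a tiny file; only the leading space is removed,
# the field-separating spaces stay intact.
with open("table.ldr", "wb") as f:
    f.write(b' 1|"John" 2|"Jane"')

drop_first_space("table.ldr", "table.csv")
print(open("table.csv", "rb").read())  # b'1|"John" 2|"Jane"'
```

A stream editor such as sed can do the same in one command; the point is simply to avoid opening the multi-gigabyte line in a text editor.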

id_rsa.pub: invalid format, error in libcrypto

After I upgraded my Linux and got Python 3.10 by default, it turned out that Ansible 2.9 will no longer run and is unsupported together with...