CharlieTap
A Blazing Fast 🔥 Kotlin Native Cross Compilation Environment

This post was written over the course of one evening; it's a brain dump of information I've accumulated recently. You can expect the following:

  • Me continuously going off on tangents and failing 🤣
  • A full development environment for cross compiling Kotlin Native 🔥🔥🔥
  • Some real bleeding edge Docker techniques 🔪
  • Remote development inside a container from IntelliJ 🤯
  • Emojis 🤓 🤖 🦄

Let's set the scene


As part of an upcoming project involving io_uring (Linux's asynchronous I/O interface), I needed to create an environment capable of compiling a Kotlin Native application on my shiny Apple Silicon machine. Given my OS is macOS and I'm trying to develop an application for Linux, I understood this would require some form of cross compilation.

So I took a look at the official Kotlin site and found this snippet:

While cross-platform compilation is possible, which means using one platform to compile for a different one, in this Kotlin case we'll be targeting the same platform we're compiling on.

Easy peasy. So I set up my application and configured one target for my native application as follows:



linuxArm64("native")



We're just going to build for now; obviously, the binary produced couldn't run on a Mac.
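For context, that one-liner sits inside the kotlin block of build.gradle.kts. A minimal sketch of the surrounding build script (the plugin version here is an illustrative placeholder, not taken from the post) might look like:

```kotlin
// build.gradle.kts (sketch, not the post's actual build file)
plugins {
    kotlin("multiplatform") version "1.8.21" // placeholder version
}

kotlin {
    linuxArm64("native") {
        binaries {
            executable() // emits the .kexe that linkDebugExecutableNative links
        }
    }
}
```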



./gradlew linkDebugExecutableNative



And .... nothing. This is going to be a theme throughout this blog post. If you run this on any Mac you will receive no artifacts whatsoever. I wasted an hour or so trying different permutations before realising that whilst cross compilation of some sort may be possible in Kotlin, macOS to Linux right now isn't happening.

To VM or not to VM


Given we clearly aren't capable of compiling and running this application on our own machine, we're going to need a Linux environment. This is where Docker comes in: with Docker we can spin up containers with whatever OS we want, and those containers can also target different CPU architectures (why would I want to do that? ... you'll see).

My reason for reaching for Docker over spinning up a VM (yes, I know Docker for Mac uses a VM behind the scenes) is that Docker makes it easy to try different OSes quickly. Additionally, Docker allows me to mount my filesystem into the container's filesystem, which I hoped would let me develop on my machine and have the build and run of my application happen inside the container.

So, my first decision: what base image should I go for? In the interest of speed I took a look at what Gradle themselves had built on their Docker Hub repo. I figured this would at least mean I wouldn't have to worry about configuring both Gradle and the JDK, and I could just focus on the libs I need for my application. If you've worked in this space before you'll know that images extend other images, and typically you end up with the following set of operating systems to choose from:

  • scratch (this is no OS so not useful)
  • alpine (this uses musl libc and is very small)
  • debian
  • ubuntu

I like small containers, and Gradle had an Alpine-based image with JDK 17 built in, so I went for this. It's worth noting I'm on an ARM machine, and therefore this will pull an ARM container by default.
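A quick sanity-check sketch: you can always confirm which CPU architecture a shell (or a container you've exec'd into) is actually running on with uname. On an Apple Silicon host, an image pulled without a --platform flag reports aarch64, while an amd64 image reports x86_64.

```shell
# Print the machine architecture of the current environment.
# Inside an ARM container this prints aarch64; inside an amd64 one, x86_64.
uname -m
```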

Before I set up a full environment, I wrote a quick Dockerfile to validate that it built my "Hello World" linuxArm64 application correctly.



FROM gradle:8.1.1-jdk17-alpine

RUN apk update && \
    apk add --no-cache \
    build-base \
    liburing-dev

COPY . /app
WORKDIR /app

RUN ./gradlew --stacktrace linkDebugExecutableNative


CMD ["build/bin/native/DebugExecutable/koru.kexe"]



Let's try to build that:



docker build --platform linux/amd64 -t koru-dev -f Dockerfile  .



But ... no bueno. It turns out that you cannot compile Kotlin Native applications on a Linux ARM host.

Okay, no problem, we change the Dockerfile a little:



FROM --platform=linux/amd64 gradle:8.1.1-jdk17-alpine



We change our Kotlin Native target also



linuxX64("native")



Boom! Another crash 😔

This time the root of the problem is JDK 17. I'll admit I didn't fight this one too much and just downgraded to JDK 11.



FROM --platform=linux/amd64 gradle:8.1.1-jdk11-alpine



💥 💥 💥

Still nothing. It turns out that the decision to use a container based on musl libc came back to bite me. The Kotlin Native toolchain uses glibc in places, meaning a musl-based container would be missing a bunch of necessary libs.

I had a quick stab at adding some compat:



RUN apk update && \
    apk add --no-cache \
    build-base \
    liburing-dev \
    gcompat



💥 💥 💥

Same result. I iterated some more and actually got pretty close to making this work (I think it's possible), but eventually chose to cut my losses and went with an Ubuntu base image.



FROM --platform=linux/amd64 gradle:8.1.1-jdk11



And with that, I had something build! Slowly, but building! Does it run?



docker run --rm -ti --platform linux/amd64 koru-dev



The beautiful "Hello World" graces my terminal.

The time you enjoy wasting is not wasted time


Okay, it's slow, but building the source into a container image was never the original plan; the original plan was to mount my filesystem at runtime and have the build/execution happen when the container runs.

Given I'd already wasted a ton of time on this, surely it wasn't time for another risk, right? Well, I'm stupid, so that's what I did 🤣.

You see, I've worked with containers for a long time, and the pattern I planned to use, mounting the filesystem at runtime, has some flaws. Working this way you end up with slightly different containers between dev and your other environments (where the binary itself is baked in), and I'd recently learned about a way to potentially get great dev environment performance whilst creating an image I could use agnostic of environment.

Let's give it a go:



FROM  --platform=linux/amd64 gradle:8.1.1-jdk11

RUN apt-get update && \
    apt-get install -y \
    build-essential \
    liburing-dev


COPY . /app
WORKDIR /app

RUN \
    --mount=type=cache,target=/app/.gradle,rw \
    --mount=type=cache,target=/app/bin/build,rw \
    --mount=type=cache,target=/home/gradle/.gradle,rw \
    gradle --stacktrace linkDebugExecutableNative


RUN \
    --mount=type=cache,target=/app/bin/build,rw \
    find /app/bin/build/ \
        -maxdepth 4 \
        -type f -executable \
        -exec cp {} /usr/local/bin \;

CMD ["build/bin/native/DebugExecutable/koru.kexe"]


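That find ... -exec cp step is worth a closer look: it fishes any executables out of the build directory (which lives in a cache mount, so it isn't part of the final image layers) and copies them into /usr/local/bin, where they persist. Here's a standalone sketch of the same pattern outside Docker, using made-up file names:

```shell
# Sketch: copy every executable file found under a build directory into a
# bin directory, mirroring the find ... -exec cp step in the Dockerfile.
# (GNU find's -executable flag, as available in the Linux build container.)
mkdir -p demo/build/native demo/bin
printf '#!/bin/sh\necho hello\n' > demo/build/native/app.kexe
chmod +x demo/build/native/app.kexe
find demo/build \
    -maxdepth 4 \
    -type f -executable \
    -exec cp {} demo/bin \;
ls demo/bin
```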

The new lines add cache volumes during the build phase of the container. Our first run came out at around 1 minute 30, and the second ... 1 minute. That isn't half bad. But I noticed in the logs that Konan, the Kotlin Native compiler, was being downloaded every time. After some fiddling I found a solution:
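One caveat I should flag: RUN --mount=type=cache is BuildKit syntax, so the build needs BuildKit enabled. Recent Docker Desktop versions default to it; a hedged sketch of opting in explicitly looks like:

```dockerfile
# syntax=docker/dockerfile:1
# The syntax directive above (placed on the very first line of the Dockerfile)
# selects the BuildKit Dockerfile frontend, which understands
# RUN --mount=type=cache. If your Docker still uses the legacy builder,
# export DOCKER_BUILDKIT=1 in your shell before running docker build.
```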



# For some reason konan seems to wipe the $HOME/.konan folder between builds
# By defining this env variable konan will write cache out to a folder we are already using for caching
ENV KONAN_DATA_DIR=/home/gradle/.gradle



✅ 🚀

45 seconds! To build, that is; a second or so to run. That's pretty fast, but remember this is Hello World, and 45 seconds is still a poor development cycle. A large portion of the build time can be attributed to the Gradle daemon warming up and bootstrapping a JDK, something that wouldn't be an issue if we were running back-to-back builds on a running container...

I had to compare it to the OG solution, for science! Worst case, we can use these image build time optimisations in our prod container.

So what does the other solution look like?



FROM  --platform=linux/amd64 gradle:8.1.1-jdk11

ENV KONAN_DATA_DIR=/home/gradle/.gradle

# /home/gradle/.gradle is already declared a volume in the base image
#VOLUME /app/bin/build

WORKDIR /app

RUN \
    --mount=type=cache,target=/var/cache/apt \
    apt-get update && \
    apt-get install -y \
    build-essential \
    liburing-dev

CMD ["gradle", "-t", "runDebugExecutableNative", "--debug"]



However, we will be doing more than just building and running; we'll also have to mount our local filesystem into the container. The container won't simply run our program and exit either; instead it will build our project continuously, recompiling and rerunning tasks every time the files change.



docker build --platform linux/amd64 -t koru-dev -f Dockerfile  .




docker run --rm -ti --platform linux/amd64 -v "$(pwd):/app" koru-dev



💥 💥 💥

Back to things breaking 😭, and it's this line:



CMD ["gradle", "-t", "runDebugExecutableNative", "--debug"]



The short of it is that this is the command that gets run when the container starts; the -t flag puts Gradle in continuous mode, rerunning the task every time your project files change. Unfortunately the file watching implementation is broken here (I never found a good reason as to why), and this results in the task being run 1000 times per second. Okay, plan B: we will just launch the container with a shell and connect to the shell ourselves, issuing our own commands.

A minor change



CMD ["/bin/bash"]



We're back in action. I connect to the shell and issue my Gradle command:



./gradlew runDebugExecutableNative



The first run is slow; this is expected. The second, however, takes around 45 seconds... How could it be this bad, given we have a warm Gradle daemon ready to go?

It transpires both our container and its emulated filesystem are running slowly. My first tip of this post: enable Rosetta 2 translation for x86 images in the Docker settings!


Remember, we're running x86 images now; Docker supports running Rosetta's translation over them ahead of time to get them running natively 😍.

Whilst you're in the settings, I want you to also enable VirtioFS, which can be found under the General tab. This will give a speedier filesystem for our bind-mounted project code.


With those changes made, I restart Docker entirely and issue my command again inside the running container:



./gradlew runDebugExecutableNative



✅ 🚀🚀🚀

19 seconds!!! That's absolutely rapid, and really no different from Kotlin Native builds on my own machine.

This approach is definitively faster, it seems the Gradle Daemon being up and running makes all the difference.

That's a wrap, right? Well ....

If you're only using the standard library, yes, but I need C interop for io_uring, and it turns out IntelliJ is unable to use the bindings that get generated inside of the container, despite the fact they share the same filesystem.
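For context, hooking io_uring up via cinterop roughly means a .def file plus a cinterops block in the build script. A sketch of the shape of that wiring (the file path and entry name are my own placeholders, not taken from the post):

```kotlin
// build.gradle.kts (sketch): declare a cinterop for liburing on the native target.
kotlin {
    linuxX64("native") {
        compilations.getByName("main") {
            cinterops {
                val liburing by creating {
                    // Hypothetical path; the .def file points the cinterop tool
                    // at liburing's headers so Kotlin bindings can be generated.
                    defFile(project.file("src/nativeInterop/cinterop/liburing.def"))
                }
            }
        }
        binaries.executable()
    }
}
```

It's these generated bindings that materialise during the container build, which seems to be why the host-side IDE never resolves them properly.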

😠😠😠🤬🤬🤬😭😭😭

Another tangent 📐


It's okay, I can work with this... the container is working and I've invested a ton of time optimising it, so let's not throw that away. IntelliJ is struggling because it's outside the container, observing files being generated on a Linux machine.

What if we could get it on the inside? It's possible for the IDE to connect to remote environments and run a backend inside the container whilst the frontend runs on the host. You might have heard of IntelliJ connecting to GitHub Codespaces or other remote development environments; why couldn't it connect to our Docker container?

Well it can!

I make some changes once again to the cross compilation development container, this time adding an SSH server and making it the container's main process.



FROM  --platform=linux/amd64 gradle:8.1.1-jdk11

ENV KONAN_DATA_DIR=/home/gradle/.gradle

WORKDIR /app

RUN \
    --mount=type=cache,target=/var/cache/apt \
    apt-get update && \
    apt-get install -y \
    build-essential \
    liburing-dev \
    openssh-server \
    sudo

RUN useradd -rm -d /home/ubuntu -s /bin/bash -g root -G sudo -u 1001 test

RUN echo 'test:test' | chpasswd

RUN service ssh start

EXPOSE 22

CMD ["/usr/sbin/sshd","-D"]



I build and run the container one final time. Do note how I'm now binding port 22 on my local machine to that of the container in the run command.



docker build --platform linux/amd64 -t koru-dev -f Dockerfile  .




docker run --rm -ti --platform linux/amd64 -v "$(pwd):/app" -p 22:22 koru-dev



It started okay; let's try and connect to it. Inside IntelliJ I go to File > Remote Development...


Enter the following credentials; remember, we made a user called test with a password of test when we built the Docker container.


After what feels like an eternity, I'm greeted with an IDE with working intellisense on my C bindings !!!!!!!


Final thoughts

Despite wasting a week trying to get this working, I have in the process learned a lot and opened up some opportunities for the future.

I did take some time to tidy up after myself: I moved the cross compilation Dockerfile into a file called Dockerfile.dev and made a second file for production called Dockerfile, which has all the optimisations I worked on originally. I also made a small Makefile to save myself writing the same Docker commands over and over.



BINARY_NAME=koru.kexe
IMAGE_NAME=koru

BUILD_ARG_BINARY=--build-arg BINARY=$(BINARY_NAME)

.DEFAULT_GOAL := help
.PHONY: build build-dev build-release build-verbose debug run run-dev run-release help

build:
    docker build --platform linux/amd64 $(BUILD_ARG_BINARY) -t $(IMAGE_NAME) .
build-dev:
    docker build --platform linux/amd64 -t $(IMAGE_NAME)-dev -f Dockerfile.dev .
build-release:
    docker build --platform linux/amd64 $(BUILD_ARG_BINARY) --build-arg MODE=release -t $(IMAGE_NAME) .
build-verbose:
    docker build --platform linux/amd64 $(BUILD_ARG_BINARY) -t $(IMAGE_NAME) --progress=plain .
debug: build
    docker run --rm -ti --platform linux/amd64 $(IMAGE_NAME) /bin/bash
run: build
    docker run --rm -ti --platform linux/amd64 $(IMAGE_NAME)
run-dev: build-dev
    docker run --rm -ti --platform linux/amd64 -v "$(CURDIR):/app" -p 22:22 $(IMAGE_NAME)-dev
run-release: build-release
    docker run --rm -ti --platform linux/amd64 $(IMAGE_NAME)

help:
    @echo "Available targets:"
    @echo "  build         Build the Docker image in debug mode"
    @echo "  build-dev     Build the development Docker image"
    @echo "  build-release Build the Docker image in release mode"
    @echo "  build-verbose Build the Docker image in debug mode with verbose output"
    @echo "  debug         Build (if necessary) and run the container attaching a bash shell"
    @echo "  run           Build (if necessary) and run the Docker container"
    @echo "  run-dev       Build (if necessary) and run the development container"
    @echo "  run-release   Build (if necessary) and run the release container"
    @echo "  help          Display this help message"



With this, I just run the following to spin up the environment:



make run-dev 



If you're interested in any of the above, I've made the project I'm working on public here. Just please bear in mind this repository is part of a larger project, so things may change.

I think that's a wrap. I hope you learned something, I sure did!

Top comments (1)

Alex Dmitriev

Thanks for sharing your article.

Just for reference, I was able to find a sample Kotlin/Native project which cross compiles to multiple targets, worked well on my Mac: github.com/JakeWharton/mosaic/blob....

Though it looks like cross compilation won't be that easy when the project depends on a C library; cinterop would need to generate a library shim, and that's platform-specific. That's where the Docker approach can come in handy, I think.