1. Introduction
In this article, we are going to take a trial-and-error look at two different implementations of coroutines, also known as continuations, on the JVM: Java Virtual Threads, which are part of Project Loom, and Kotlin Coroutines, provided as a DSL running on the JVM. Because of the nature of this article, it will be subject to frequent reviews. The supporting code is located on GitHub.
A bit of history
In recent years, if you have been working around the JVM, you must have noticed that a new player is in town. Enter Kotlin. Long story short, Kotlin started within the JetBrains R&D department and was named after Kotlin island, in the neighborhood of St. Petersburg.
This is the short Kotlin story so far. But in order to understand where we are this year (2022) and how far we've come, we need to go down memory lane and understand how Java has developed and when and how other languages derived from Java have flourished. This way we can have a better picture and draw better-informed conclusions.
Before we continue, we have to honor the people who, according to extensive documentation, started all of this JVM revolution. James Gosling is considered by many to be the inventor of Java and the JVM. Without him, nothing that has been invented afterward on top of the JVM would be possible. In the same way, Martin Odersky is pretty much the inventor of Scala. Finally, for Kotlin, we can only say that the team leader of the JetBrains team responsible for further developments is Dmitry Jemerov.
The table above is a short sketch consolidating major highlights in the history of three languages that share a common ecosystem, the Java Virtual Machine. Java has existed since before 1995, Scala since 2001, and Kotlin since 2010. Java is the oldest JVM language and Kotlin is the newest. Java started at least 15 years before the early beginnings of Kotlin, and Scala started 9 years before Kotlin.
I couldn't find precisely where, but examining the commits for Project Loom I could see that the first commit happened in 2007. What this tells us is that it is very likely that the idea of Loom started around that year. Loom is a project that, pretty much like coroutines in Kotlin, focuses on making maximum usage of system threads by fragmenting them into separate independent processes. Loom calls these processes virtual threads. In Kotlin, an experimental release supporting this same idea with coroutines was released in 2018. Project Loom in Java is scheduled to be released in 2022.
With regard to Kotlin, it is quite hard to pinpoint what exactly was the motivation to create a new language. The best I can find is that "new features needed to be added". In this article, I want to share with you what I have found about Kotlin Coroutines and Java Virtual Threads and then reveal the conclusion I came up with. I myself have never been a part of the Java Loom team, nor have I been a part of the Kotlin Coroutines team. I have written this article on the basis of source code, international conference videos, and papers.
But before we continue: coroutines were invented a long time ago, and if you are not aware of it, here is a great revelation. They are indeed very old, and the concept is arguably even older than 1958, which is only the year when the term was coined by Melvin Conway (and later explored in depth by Donald Knuth). People have created their own implementations of this over the years, for example, this one by codecop.
1.1. Motivation
Software engineering has changed through the years, and undoubtedly everyone strives to make everything better. We want it to be easier to create software and make our code work. To do that, we have created syntaxes and semantics that enable us to develop with an increasing level of simplicity. When Kotlin came to the scene I was almost immediately sold on the idea of it being superior to Java on many levels. That's what the Kotlin community mostly promotes. A few months into it, I realized a few things that defeated the reason for my excitement. As time went by I got more and more of the idea that Kotlin is just another language and that maybe what truly makes it exciting is that it is different. Something new breaks up the routine and makes room for creativity. One thing I didn't change my mind about is that Kotlin, done the right way, can produce code that is much more beautiful than Java. But beauty is something I don't want to discuss in this article. What this article really is about is performance. We are not going to discuss Kotlin and Java alone. We are going to discuss two implementations that make use of system threads and a very old concept called coroutines. In Java, this is called virtual threads in Project Loom, and in Kotlin this is called… well… coroutines. As we go along in the code on both sides, we'll make pit-stops, compare code against each other, and see the differences. But first, let's go into a bit of theory to understand exactly what we are talking about, discuss why this was not a revolution before, and why it has taken so much time for languages to develop interfaces and semantics that use system threads more efficiently.
1.2 Coroutines, what are they?
If we take the literal meaning of coroutines, purely on a semantic level, we get Co and Routines. A routine is just some instruction that runs. A coroutine is something that runs along with it. Running along, in this case, means literally suspending the original routine, allowing a completely different routine to start, and then resuming the original routine.
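As a quick aside, and jumping ahead for a moment, the same suspend-and-resume idea can be seen in a minimal Kotlin sketch of my own using the sequence builder, which the compiler implements as a coroutine: each yield suspends the routine and hands control back to the caller.

```kotlin
fun main() {
    // each yield suspends this block; iterating the sequence resumes it
    val steps = sequence {
        yield("Collect Epoxy and Resin")
        yield("Mix ingredients")
        yield("Polish wooden plank")
    }
    for (step in steps) println("==> $step")
}
```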
To illustrate this, I've gone back to 1985 and, with the help of the internet, I've created a small program in C++ that shows some instructions about creating a table with epoxy (don't follow these instructions if you want to create a real epoxy table; creating tables with epoxy requires safety gear and protection, so get informed first). Why C++? Well, why not? And further, I think it is very important to start out from a neutral point. If we get these basics right, then we are on a roll! So this is the main program:
#include <stdio.h>
#include <pthread.h>
#include <unistd.h>
#include <sys/time.h>

// Helper not shown in the original snippet: current timestamp in milliseconds.
long long currentTSMills()
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (long long) tv.tv_sec * 1000 + tv.tv_usec / 1000;
}

int processes(int startIndex, int endIndex);

int main()
{
printf("╔════════════════════════════════════════════════╗\n");
printf("║ Welcome to the Epoxy Table manufacture factory ║\n");
printf("╚════════════════════════════════════════════════╝\n");
printf("--- Tables are made in parallel steps ---\n");
int i;
for (; i=processes(1, 11);){
printf("---> ▶️ Execution (%d)starts here with id %d!\n",i,pthread_self());
switch(i){
case 1: printf("==>Collect Epoxy and Resin\n"); break;
case 2: printf("==>Mix ingredients\n"); break;
case 3: printf("==>Polish wooden plank\n"); break;
case 4: printf("==>Water seal container\n"); break;
case 5: printf("==>Pour epoxy mix\n"); break;
case 6: printf("==>Wait for it to dry\n"); break;
case 7: printf("==>Polish final edges and result\n"); break;
case 8: printf("==>Fine polish\n"); break;
case 9: printf("==>Paint\n"); break;
case 10: printf("==>Slow dry surface\n");
}
}
return 0;
}
So, we have a bunch of cases (10 to be exact) and, for now, this piece of code doesn't seem to show a lot. We do have something that should get your attention already, and that is pthread_self(). Another thing is processes(1, 11), which is included in the condition check of the for-loop. Let's dive into this method:
int processes(int startIndex, int endIndex)
{
static long long int i;
static int state = 0;
switch (state)
{
case 0: state = 1;
for (i = startIndex; i < endIndex; i++)
{
sleep(1);
printf("👍 Ordering at %lld. This suspends the next run (coroutine)\n",currentTSMills());
return i;
case 1:
sleep(1);
printf("✅ Ending step. This is the callback (start of the coroutine): %lld at %lld with id %d\n",i, currentTSMills(),pthread_self());
}
}
puts("┌──────────────┐");
puts("│ 🛑 Finishe
d! │");
puts("└──────────────┘");
state = 0;
return 0;
}
So, here, we have a strange switch-case. We are assigning 1 to state within the case for 0. This causes the main thread to split before returning. It doesn't technically split into two, but it does suspend at runtime to allow the other routine to start. This means that when the routine hits return i, it will suspend itself, and the thread will first run whatever is in the main for-loop and only then finish running what's in case 1. Looking at this in C++, it may seem highly counterintuitive, but if we run the code we see this phenomenon taking place, and we can also see that, although the main thread has suspended and resumed different routines, they are all hanging on the same thread:
So this is essentially what a coroutine is. In this C++ example, everything runs synchronously on a single thread. There are also many ways to implement a coroutine. What Project Loom and Kotlin Coroutines saw as a gold mine in the second half of the 2000s decade was to explore this and implement coroutines in an asynchronous way. Both languages have evolved, and both still have experimental features running on their respective implementations. However, Java is still in the EAB (Early Access Build) stage, although its development started much earlier.
1.3 Java virtual threads
In order to discuss Java Virtual Threads, we have to get familiar with a few basic concepts: Fibers, Continuations, and of course Virtual Threads.
- Fibers: To be very clear, fiber is just another way to refer to Virtual Threads. There is nothing magic about it.
- Virtual Threads: They have been named this way to better reflect their actual behavior. For the developer, there is no apparent difference between a Thread (a platform or system thread) and a Virtual Thread (something run by a carrier thread that executes independently, allowing more processes to run).
- Carrier Thread: A term that at first seems to be used by the hip and happening and looks like just another way to refer to a platform thread or system thread. However, it has a much more important role than that. A carrier thread is where a virtual thread executes. This becomes more visible when we look into the code further down below, and in the small sketch after this list.
- Continuation: Fibers and Virtual Threads are continuations. A continuation is just something that allows us to continue after yielding a result. This is the very low level of all virtual threads and how they work. We have seen before how coroutines work; this is exactly how continuations work. In fact, coroutines are just another name for continuations. In the code example at the beginning of this article, there would be two continuations: the one at the start of the execution and another when we start with the text "Ending step".
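To make the carrier-thread idea visible, here is a minimal sketch of my own (assuming JDK 21, or a JDK 19+ build with preview features): the default toString of a virtual thread shows the ForkJoinPool worker it is currently mounted on.

```kotlin
fun main() {
    Thread.startVirtualThread {
        // prints something like: VirtualThread[#23]/runnable@ForkJoinPool-1-worker-1,
        // i.e. the virtual thread plus the carrier thread it is mounted on
        println(Thread.currentThread())
    }.join()
}
```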
1.3.1 What are Java Virtual Threads?
At this point, and from the above, I think you are getting a very clear idea of what this whole continuations and coroutines business is about. The same thing, right? The theory seems to be the same, but the implementations differ. At this stage let's have a look at some of the highlights of the implementation of Virtual Threads (at least in my view):
public static Thread startVirtualThread(Runnable task) {
Objects.requireNonNull(task);
var thread = ThreadBuilders.newVirtualThread(null, null, 0, task);
thread.start();
return thread;
}
in JDK 21
At this point, nothing much happens. We receive a plain Runnable, and we get into the method. We are now executing inside the JDK itself, and this is JDK-internal code. Once there, Loom creates a VirtualThread with our task as a parameter and starts it. When we start a virtual thread this way, we do so by making the first two parameters null, the third 0, and the fourth our task. Let's dive into the VirtualThread first and see if we find signs of anything remotely similar to what we've seen and learned about what a continuation is:
VirtualThread(Executor scheduler, String name, int characteristics, Runnable task) {
super(name, characteristics, /*bound*/ false);
Objects.requireNonNull(task);
// choose scheduler if not specified
if (scheduler == null) {
Thread parent = Thread.currentThread();
if (parent instanceof VirtualThread vparent) {
scheduler = vparent.scheduler;
} else {
scheduler = DEFAULT_SCHEDULER;
}
}
this.scheduler = scheduler;
this.cont = new VThreadContinuation(this, task);
this.runContinuation = this::runContinuation;
}
in JDK 21
What this means is that we create a virtual thread without a scheduler, without a name, and with 0 characteristics. And what does this all mean? Skipping a few steps, the thread initialization will assign an id to it and no characteristics. Since we don't give it a name, our thread will not be identifiable by name, not by default at least. Before launching our thread, we get a scheduler. Here we come across code that ensures that, if the parent thread is itself a VirtualThread, its scheduler is reused; otherwise the default scheduler is used. The new scheduler is only assigned if no scheduler is given in the constructor, and it is assigned on the basis of the parent thread, which is the current thread. Once we have the scheduler, we can finally create a continuation (VThreadContinuation) with the current VirtualThread and the runnable task we have given. Finally, we assign the runContinuation property with the runContinuation lambda in order to be able to execute it later.
So now we have created a Virtual Thread with the scheduler of the platform thread, no name, one id, and 0 characteristics, we have assigned a continuation to it, and we have assigned the runContinuation property with the runContinuation lambda. The scheduler we have just obtained is a ForkJoinPool, which is created by default with a parallelism level equal to the number of CPUs provided by the machine and a maximum worker pool of 256.
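As far as I can tell, those defaults can be inspected and tuned through the documented virtual-thread scheduler system properties; here is a hedged sketch of my own (not from the JDK sources) that sets them before the first virtual thread is created:

```kotlin
fun main() {
    // must be set before the first virtual thread is created
    System.setProperty("jdk.virtualThreadScheduler.parallelism", "4")
    System.setProperty("jdk.virtualThreadScheduler.maxPoolSize", "256")
    println("available processors: ${Runtime.getRuntime().availableProcessors()}")
    Thread.startVirtualThread { println(Thread.currentThread()) }.join()
}
```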
From here onwards, it becomes quite complicated to describe what happens, given that this involves quite a lot of native code calls, which I do not know much about and it is irrelevant for this article. Relevant for this article, though, are the states a virtual thread goes through in its lifecycle. A virtual thread can potentially go through the following states (they are all int values):
- New 0: State on the start of the thread.
- Started 1: The virtual thread has started.
- Runnable 2: The thread is unmounted and this state can be assigned to a thread after it has status Yielding. The thread is not running at this time.
- Running 3: The thread is running and it is mounted.
- Parking 4: Starts disabling thread for scheduling unless the thread has a permit.
- Parked 5: The thread gets Parked after a status Parking and after yielding. Parked means, in other words, waiting to be Scheduled.
- Pinned 6: A thread gets pinned, when being delayed by a synchronized process, or performing some virtual thread unsupported operation as is the case of some IO operations. Other IO operations are performed in a non-blocking way. More precisely, pinning is a way to not allow a Virtual Thread to unmount if it is waiting for an object that is not available yet.
- Yielding 7: The thread gets unmounted in order to yield its control of the processor and then it gets mounted again when it is allowed to do so again. In other words, it’s just returning the carrier Thread. This is also a form of context switching. Sleeping with (0) will trigger this state immediately.
- Terminated 99: Final state of the Virtual Thread. It will not be used again.
- Suspended 256: A Virtual Thread can be suspended after unmount.
- Runnable Suspended: The thread can be runnable and suspended.
- Parked Suspended: The thread can be parked and suspended.
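The internal int states above are not exposed directly, but Thread.getState() maps them onto the familiar Thread.State values. A minimal sketch of my own:

```kotlin
fun main() {
    val vt = Thread.startVirtualThread { Thread.sleep(500) } // sleeping parks the virtual thread
    Thread.sleep(100)
    println(vt.state)  // typically TIMED_WAITING while it is parked
    vt.join()
    println(vt.state)  // TERMINATED
}
```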
When a virtual thread needs to sleep, it performs a delay operation. This requires something called yielding. By yielding, we unmount the current virtual thread from its current system thread and yield control to another virtual thread. If we are performing a blocking operation and the thread is pinning, one system thread will be blocked, but the others won't. This means that, for example, if you have 12 cores, 11 will be used to manage virtual threads and only 1 will be blocked waiting. Blocking happens when using operations that block in native code; for example, synchronized blocks and Object.wait() cause the thread to be pinned:
@Hidden
@ChangesCurrentThread
private boolean yieldContinuation() {
// unmount
notifyJvmtiUnmount(/*hide*/true);
unmount();
try {
return Continuation.yield(VTHREAD_SCOPE);
} finally {
// re-mount
mount();
notifyJvmtiMount(/*hide*/false);
}
}
in JDK 21
Sleeping is one way a virtual thread pauses its execution. It behaves differently from a virtual thread that is running synchronized code. For that combination we need another concept, parking, implemented in VirtualThread.java:
@Override
void park() {
assert Thread.currentThread() == this;
// complete immediately if parking permit available or interrupted
if (getAndSetParkPermit(false) || interrupted)
return;
// park the thread
boolean yielded = false;
setState(PARKING);
try {
yielded = yieldContinuation(); // may throw
} finally {
assert (Thread.currentThread() == this) && (yielded == (state() == RUNNING));
if (!yielded) {
assert state() == PARKING;
setState(RUNNING);
}
}
// park on the carrier thread when pinned
if (!yielded) {
parkOnCarrierThread(false, 0);
}
}
in JDK 21
Parking happens when we use some kind of scheduled process, for example a queue or certain IO operations. If these cannot run and have to block on native processes, as in the synchronized test case mentioned above, the thread will change state from PARKING to PINNED:
private void parkOnCarrierThread(boolean timed, long nanos) {
assert state() == RUNNING;
VirtualThreadPinnedEvent event;
try {
event = new VirtualThreadPinnedEvent();
event.begin();
} catch (OutOfMemoryError e) {
event = null;
}
setState(PINNED);
try {
if (!parkPermit) {
if (!timed) {
U.park(false, 0);
} else if (nanos > 0) {
U.park(false, nanos);
}
}
} finally {
setState(RUNNING);
}
// consume parking permit
setParkPermit(false);
if (event != null) {
try {
event.commit();
} catch (OutOfMemoryError e) {
// ignore
}
}
}
in JDK 21
I provided an example with the test case saveWordsParking:
@Synchronized
override fun saveWordsParking(words: List<String>): String? {
try {
Thread.sleep(100)
} catch (e: InterruptedException) {
throw RuntimeException(e)
}
...
}
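To actually observe the pinning that this test case triggers, a hedged sketch of my own: on the JDK builds this article targets, running the JVM with -Djdk.tracePinnedThreads=full prints a stack trace whenever a virtual thread blocks while pinned, for example when it sleeps inside a synchronized block.

```kotlin
fun main() {
    val lock = Any()
    Thread.startVirtualThread {
        synchronized(lock) {
            // sleeping while holding a monitor keeps the virtual thread pinned to its carrier
            Thread.sleep(100)
        }
    }.join()
}
```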
Parked, however, is quite an odd state, and I wasn't able to reproduce it. This has to do with the variable notifyJvmtiEvents, which apparently does something about mounting and unmounting using native methods. According to the literature, Parked is a status identifying a thread in a scheduler that is not doing anything and is waiting for its turn to be unparked and taken by the scheduler. This should be the case with non-blocking operations that the JVM can manage, i.e. independent of native code.
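Although I could not reproduce the Parked state through the test code, a minimal sketch of my own that parks and unparks a virtual thread explicitly shows the behavior described above:

```kotlin
import java.util.concurrent.locks.LockSupport

fun main() {
    val vt = Thread.startVirtualThread {
        LockSupport.park()   // the virtual thread unmounts and waits to be unparked
        println("unparked on ${Thread.currentThread()}")
    }
    Thread.sleep(100)        // give it time to park
    LockSupport.unpark(vt)   // schedule it to continue
    vt.join()
}
```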
1.4 Kotlin coroutines
As we have seen before, coroutines are very similar to virtual threads. There is actually no major difference between the two of them in theory. However, their implementations do differ. But before delving into them as we did with virtual threads, let's get familiar with some of the terms of the Kotlin world:
- suspend: Refers to the act of creating a coroutine. A function marked as suspend runs only in a coroutine context. This context may be switched to another during execution.
- delay: A delay is kind of like sleep, but it will just pause or suspend the running coroutine for as long as we tell it to.
- coroutine: Just like virtual threads, a coroutine runs on a platform thread. It can also automatically switch context.
1.4.1 What are Kotlin Coroutines?
Kotlin, as you have probably figured out by now, is still nothing more than a simple DSL that enables some new syntax with the goal of making it easy for programmers to build their applications. What this entails is a bit of confusion when first interpreting the code and the bytecode. So, instead of clicking on something like startVirtualThread with our favorite IDE, as in the case of Java, in Kotlin's case we need to find a way to enter the suspend code. We start by looking at an example like this one:
suspend fun readWordFlowBack(words: List<String>) = wordsFlow(words).toList().joinToString(" ")
Depending on your IDE, you'll find different ways to do the following. In IntelliJ, there is, fortunately, a tool that allows us to see the decompiled resulting bytecode as Java:
Once here, we can click on the button Decompile:
And we finally get this kind of code:
Pretty messy, right? Well, this is the way that we currently, in 2022, get to decompile Kotlin code into Java code. It's not really Java code per se, but it gives us a window into how things are truly translated for the JVM. If we want to skip these steps and see exactly how the code gets compiled, then you probably need to go to the command line. Just out of curiosity, if you do go to the command line and list the files in the target directory, you'll see a lot more files than what you normally see among compiled Java classes:
Note that we have quite a few classes, some with the actual method names. Not very nice to see, but Kotlin does this because Kotlin is a layer on top of Java. In other words, it's a DSL (Domain Specific Language). This means that we will not be getting bytecode classes exactly like the ones we get from Java code. In the end, you do not need Java code, because the bytecode is what's being generated under the hood at compile time. Another curious fact is that when you use IntelliJ, by default you don't really see all of these files. The only thing you see is their Kotlin counterparts in an interpreted way.
Anyway, let’s go back to the decompiled code. Did you notice that we are using a Continuation? We have seen that before in Java correct? Let’s delve into it in the same way we did in Java:
@kotlin.SinceKotlin
public interface Continuation<in T> {
public abstract val context: kotlin.coroutines.CoroutineContext
public abstract fun resumeWith(result: kotlin.Result<T>): kotlin.Unit
}
in Kotlin 1.8
We see that a Continuation is an interface, and it has a CoroutineContext and a resumeWith function.
And this is really as far as we seem to be able to go in evaluating coroutines because the whole library is developed with Kotlin source code and that makes it reasonably difficult to see how that gets translated to Java. I guess the point I’m trying to make is that it doesn’t look like Kotlin coroutines are that much different than Java virtual threads at this point. But, on the other hand, just because the source code is written in Kotlin, it does not really mean that we can’t read it. So let’s try that.
@kotlin.PublishedApi
@kotlin.SinceKotlin
internal final expect class SafeContinuation<in T> : kotlin.coroutines.Continuation<T> {
internal constructor(delegate: kotlin.coroutines.Continuation<T>, initialResult: kotlin.Any?) { /* compiled code */
}
@kotlin.PublishedApi
internal constructor(delegate: kotlin.coroutines.Continuation<T>) { /* compiled code */
}
public expect open val context: kotlin.coroutines.CoroutineContext /* compiled code */
@kotlin.PublishedApi
internal final expect fun getOrThrow(): kotlin.Any? { /* compiled code */
}
public open expect fun resumeWith(result: kotlin.Result<T>): kotlin.Unit { /* compiled code */
}
}
in KotlinX 1.10.0
SafeContinuation is an implementation of Continuation. The expect keyword is used in Kotlin much the same way as native is in Java. In other words, in Kotlin this just means that the implementation is platform-dependent and, of course, not easy to access either. Further down the line in the coroutines code, it gets quite difficult to understand anything. Whereas in Java I could debug through the whole JDK, in Kotlin it gets quite difficult, and I'm assuming that this has to do with the fact that suspend is interpreted as a keyword in IntelliJ and not as ordinary code. Thus, we don't really get to debug things like Continuation that easily. But hold on! Of course we can! With Kotlin, just as much as with Java, we sometimes need to guess where the code is going to fall into. So we take a wild guess by opening the run method in DispatchedTask.kt:
final override fun run() {
assert { resumeMode != MODE_UNINITIALIZED } // should have been set before dispatching
try {
val delegate = delegate as DispatchedContinuation<T>
val continuation = delegate.continuation
withContinuationContext(continuation, delegate.countOrElement) {
val context = continuation.context
val state = takeState() // NOTE: Must take state in any case, even if cancelled
val exception = getExceptionalResult(state)
/*
* Check whether continuation was originally resumed with an exception.
* If so, it dominates cancellation, otherwise the original exception
* will be silently lost.
*/
val job = if (exception == null && resumeMode.isCancellableMode) context[Job] else null
if (job != null && !job.isActive) {
val cause = job.getCancellationException()
cancelCompletedResult(state, cause)
continuation.resumeWithStackTrace(cause)
} else {
if (exception != null) {
continuation.resumeWithException(exception)
} else {
continuation.resume(getSuccessfulResult(state))
}
}
}
} catch (e: DispatchException) {
handleCoroutineException(delegate.context, e.cause)
} catch (e: Throwable) {
handleFatalException(e)
}
}
in KotlinX 1.10.0
If you run my Kotlin example, you'll see that the code falls in here. This dispatched task is what allows our coroutine to run.
In Kotlin, we can start coroutines in several ways. We can use suspend in a function and get something to call it, we can start a coroutine context with withContext, we can implement them using runBlocking, plus many other ways. In our test example we are using something like this:
GlobalScope.launch {
withContext(IO) {
FileOutputStream(File(File(dumpDir, "kotlin"), "$methodName.csv"), true).use { oos ->
(0..repeats).map {
startProcessAsync(oos) {
toTest()
}
}.awaitAll()
}
}
}.join()
Just remember that GlobalScope is fine to use for experiments, but we should be careful using it in production code, because the coroutine context will stay open during the runtime of the whole application.
IntelliJ can help us figure out where coroutines are starting. In this example we are actually creating 3 coroutines:
- suspend creates a coroutine with the context of the caller.
- GlobalScope.launch will launch a coroutine in a global context (strongly advised against; it is always recommended to use coroutineScope instead).
- withContext(IO) will create a coroutine in an IO context.
The keyword suspend creates a coroutine. We don't see it in the example; it is associated with the parent function, suspend fun generalTest(). For that, please look for this example in the code. Then we start a new GlobalScope. The GlobalScope will start a coroutine with a global context. And of course, under it, we can start another coroutine with withContext(IO).
public fun CoroutineScope.launch(
context: CoroutineContext = EmptyCoroutineContext,
start: CoroutineStart = CoroutineStart.DEFAULT,
block: suspend CoroutineScope.() -> Unit
): Job {
val newContext = newCoroutineContext(context)
val coroutine = if (start.isLazy)
LazyStandaloneCoroutine(newContext, block) else
StandaloneCoroutine(newContext, active = true)
coroutine.start(start, coroutine, block)
return coroutine
}
in KotlinX 1.10.0
A deeper dive into the coroutine implementation at Tasks.kt shows us that a coroutine has a mode and a state. A coroutine can have these modes:
- TASK_NON_BLOCKING 0: The task is CPU bound and will not block.
- TASK_PROBABLY_BLOCKING 1: The task will probably block. This works like a hint and, just like we saw in virtual threads, this will let the scheduler know that a system thread might be needed.

The states available for a Kotlin coroutine worker in CoroutineScheduler.kt are:

- CPU_ACQUIRED: The worker acquires a CPU token and with it tries to execute a task in a non-blocking way.
- BLOCKING: The task is blocking, and the only mode that allows this is TASK_PROBABLY_BLOCKING.
- PARKING: The worker parks a thread; pretty much like we saw before, parking happens when the thread cannot temporarily be executed.
- DORMANT: The worker stays dormant until it can execute another task. This is different from PARKING, because PARKING means that the worker is already responsible for a task.
- TERMINATED: This is the last state of the worker.

Finally, coroutines have these states in DispatchedCoroutine.kt:

- RESUMED 2: Only possible to set when the coroutine is still UNDECIDED. The coroutine is proceeding with the execution.
- SUSPENDED 1: Only possible to set when a coroutine is still UNDECIDED. The coroutine is suspended.
- UNDECIDED 0: The initial status of a coroutine (also described as _decision in the source code).
These are the familiar states when we launch a coroutine. During design time, we aren’t really concerned about how the Worker does its thing, and we are definitely not concerned with the modes. However, it can be incredibly helpful to know these basic concepts about coroutines or at least be aware that they exist.
As a recap, a coroutine can start with a suspend function, withContext, or launch. withContext and launch do not work outside a coroutine context. If you need to create such a context, then you need to use something like runBlocking or a suspend function.
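A minimal sketch of my own illustrating that recap, assuming kotlinx.coroutines is on the classpath: runBlocking provides the outer context, launch starts a child coroutine in it, and withContext suspends the caller while switching context.

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.withContext

fun main() = runBlocking {                    // provides the outer coroutine context
    launch {                                  // starts a child coroutine in that context
        println("launch on ${Thread.currentThread().name}")
    }
    val line = withContext(Dispatchers.IO) {  // suspends the caller, runs on the IO dispatcher
        "withContext on ${Thread.currentThread().name}"
    }
    println(line)
}
```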
Similarities between Virtual Threads and coroutines
Now that we have examined the code, let's try to make more sense of it by diving into the theory. The theory about coroutines and Java virtual threads can be found pretty much anywhere on the internet, and the repo where I performed the tests contains many links to information about it. Perhaps what we need to know at this point, in its very basics, about the two implementations is that:
1. Both are based on the original coroutine principle coined in 1958. This is indeed no new concept.
2. Both are based on the idea that you can suspend one function's runtime to give way to another function's runtime.
3. Both implement ideas of suspending and waiting on the main thread using concepts like pinning, dormant, and parking.
4. Both are managed by the JVM and not by the operating system.
5. Both avoid the creation of a whole new platform thread and take advantage of already running ones, started in a thread pool: a ForkJoinPool for Java Virtual Threads and the CoroutineScheduler for Kotlin Coroutines.
6. Although we can only run as many platform threads in parallel as we have CPU cores, we can launch as many of these lightweight processes as we want at the same time, up to the limits our machine can handle. The illusion that we perform more in parallel is created by not allowing system threads to block whenever that is possible.
7. Neither technically sleeps, at least not in a blocking state. In Java, this is done seamlessly with Thread.sleep, which uses non-blocking techniques by giving the thread a PARKING status and giving it a permit. Parking means, in other words, sleeping, and unparking means waking up. In Kotlin, delay ensures that the current execution gets scheduled to execute later, and a deep dive shows that parking and unparking are also part of the implementation (see the sketch after this list).
8. Both have different ways of doing PINNING. In Java, pinning is done to hold a virtual thread tight to its carrier thread; this happens in synchronized methods. In Kotlin coroutines, the execution is pinned to one single CPU thread, and suspend and resume operations make sure the coroutine runs on that same thread until the end. In the same way, Kotlin has synchronized methods, and of course they also use pinning.
9. In both cases, a Thread is a thin wrapper around a native thread.
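Point 7 can be made concrete with a minimal sketch of my own (again assuming kotlinx.coroutines): a thousand coroutines can all "sleep" concurrently on a handful of threads, because delay suspends instead of blocking.

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking
import kotlin.system.measureTimeMillis

fun main() {
    val elapsed = measureTimeMillis {
        runBlocking {
            repeat(1_000) {
                launch { delay(100) }  // 1000 coroutines delay concurrently
            }
        }                              // runBlocking returns when all children complete
    }
    println("1000 concurrent 100 ms delays took ~$elapsed ms") // roughly 100-200 ms, not 100 s
}
```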
2. Java Virtual Threads Test Implementation
In order to perform these test sets, I created a small framework that allows me to measure the running time of different methods with different complexities in time and space. The idea is to give enough variation across different kinds of progressions and see how that all plays out when deploying several Java virtual threads at the same time. For these tests, I'm not interested in measuring the individual time it takes for one particular virtual thread to execute. Instead, I want to measure the whole and see how it all plays out. The code for the performance measurements also contains reporting code, file management code, and CSV file generation algorithms to help determine how many virtual threads were allowed to deploy at one single point in time. Let's have a look at the method that receives a lambda as a parameter, along with other arguments, in order to execute and measure the duration of each individual test:
private <T> void performTest(
String testName,
String methodName,
String timeComplexity,
String spaceComplexity,
Supplier<T> sampleTest,
Runnable toTest,
int repeats) {
try (var oos = new FileOutputStream(
new File(
new File(dumpDir, "java"),
String.format("%s.csv", methodName)),
true)) {
log.info("===> Starting test: {}: {} <===", testName, sampleTest.get());
log.info("***> Processing took {} milliseconds", measureTimeMillis(() -> {
final List<Thread> threadStream = range(0, repeats).mapToObj(i ->
startProcessAsync(toTest, oos)).toList();
log.info("---> Just sent {} threads", repeats);
threadStream
.forEach(thread -> {
try {
thread.join();
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
});
}, testName, methodName, timeComplexity, spaceComplexity, repeats));
} catch (IOException e) {
throw new RuntimeException(e);
}
}
So what I’ve created here is just a method inspired by a few things I’ve learned with Kotlin. Let’s look at them individually:
- testName is just the name of a test.
- methodName is a parameter that lets us know which method we are testing. In Kotlin, we'll see later that we can easily get method names via reflection without much hassle. In Java, though, I still had to hardcode the method name and use it as an input parameter as a quick win.
- timeComplexity is literally a String where you can put whatever you want, but it is meant to express the big O notation for the method being tested. This is important in order to see if method complexity plays any role whatsoever in the performance.
- spaceComplexity is also literally a String, but in this case it is used for space complexity.
- sampleTest is just a supplier so that we see a fragment of the output of a single test in the logs.
- toTest is the actual test to be run.
- repeats is how many times it will run.
Just for clarity, the timeComplexity and the spaceComplexity should be tested in a progressive fashion, going from a small input to a slowly increasing input. The progression will be available sometime in the future on my website http://joaofilipesabinoesperancinha.nl. Progression tests are a bit difficult to run because of the limitations of a personal computer, so these two factors do not play a significant role in the results of this article. The individual implementation of each method should be easy to read in the project I've created for this article.
startProcessAsync is where the startVirtualThread method is called:
private Thread startProcessAsync(Runnable runnable, FileOutputStream oos) {
final Runnable threadRunnable = () -> {
var start = LocalDateTime.now();
runnable.run();
var end = LocalDateTime.now();
try {
oos.write(
String.format("%s,%s,%s\n", start, end, Thread.currentThread()).getBytes(StandardCharsets.UTF_8)
);
oos.flush();
} catch (IOException e) {
throw new RuntimeException (e);
}
};
return startVirtualThread(threadRunnable);
}
3. Diving into Coroutines
Coroutines have a slightly more extensive paradigm than Java Virtual Threads, because they provide us with different options for how to start them. Java Virtual Threads have options too, but Kotlin goes a few steps further by changing its own syntax to accommodate them. Its complexity, however, quite probably makes it less approachable for many developers. To me, that makes it very interesting, but maybe to the average developer it might be a step too far. In short, Kotlin coroutines allow us to start an execution asynchronously and wait for the return object, or do the same thing and not wait for the return object; they allow us to suspend the current coroutine and execute another one instead, on a different or the same context; they offer 4 common abstractions for the running context; they allow us to "sleep" under the name delay, which in the end is the scheduling of a sleeping action; and they allow us to create special IO-specific contexts with coroutine capabilities enabled. These are the basics of what we are going to look at in this section. For now, let's have a look at the following:
private fun runDelayExample() = runBlocking {
println(
"3 - This is the parent coroutine, it will not be suspended by launch ==> (${Thread.currentThread().name}) - ${
Thread.currentThread().threadId()
}"
)
launch {
delay(2000)
println(
"1 - (launch) - This launches a coroutine in parallel ==> (${Thread.currentThread().name}) - ${
Thread.currentThread().threadId()
}"
)
println(
"1 - (launch) - The coroutine should remain pinned to the original thread up until the end ==> (${Thread.currentThread().name}) - ${
Thread.currentThread().threadId()
}"
)
}
val deferred = async {
println(
"4 - (async) - This coroutine is asynchronous and therefore it's thread has to be another ==> (${Thread.currentThread().name}) - ${
Thread.currentThread().threadId()
}"
)
}
println(
"3 - The parent coroutine will get suspended with a withContext ==> (${Thread.currentThread().name}) - ${
Thread.currentThread().threadId()
}"
)
withContext(IO) {
println(
"2 - (IO) This couroutine has suspended the caller coroutine and now it will be parked or scheduled to run later ==> (${Thread.currentThread().name}) - ${
Thread.currentThread().threadId()
}"
)
delay(500)
println(
"2 - (IO) Although the coroutine has been parked, it is now unparked and it remains on the same thread ==> (${Thread.currentThread().name}) - ${
Thread.currentThread().threadId()
}"
)
}
withContext(Unconfined) {
println(
"6 - (Unconfined) This coroutine has suspended the caller coroutine and now it will be parked or scheduled to run later ==> (${Thread.currentThread().name}) - ${
Thread.currentThread().threadId()
}"
)
delay(500)
println(
"6 - (Unconfined) Although the coroutine has been parked, it is now unparked and it remains on the same thread ==> (${Thread.currentThread().name}) - ${
Thread.currentThread().threadId()
}"
)
}
withContext(Default) {
println(
"7 - (Default) This couroutine has suspended the caller coroutine and now it will be parked or scheduled to run later ==> (${Thread.currentThread().name}) - ${
Thread.currentThread().threadId()
}"
)
delay(500)
println(
"7 - (Default) Although the coroutine has been parked, it is now unparked and it remains on the same thread ==> (${Thread.currentThread().name}) - ${
Thread.currentThread().threadId()
}"
)
}
println(
"3 - This coroutine has been suspended but now running and still on the same thread ==> (${Thread.currentThread().name}) - ${
Thread.currentThread().threadId()
}"
)
deferred.await()
delay(2000)
}
You'll find in many tutorials that people use thread-like squiggles to represent the way coroutines work. I used to do that, but in my own opinion it can be a bit misleading. Or you could argue that it is just an introductory representation for the initiates. However, coroutines do not work so much like threads, although you may have that impression at some points in the code. At this point, if you read all of the above, you probably already understand why I am saying this. And if you run the above code, located in the class CoroutinesShortExplained.kt, you'll see that much of it runs on thread main. So you may be asking yourself: why is it that on a single thread we can wait 2 seconds and then 2 seconds again, and yet that part takes only about 2 seconds to execute? That's because, unlike Thread.sleep in Java, the delay operation schedules the current coroutine to execute later and parks it. This releases the main thread to continue execution. When the 2 seconds have passed, the coroutine gets unparked and starts again. With async, we do the same as with launch, but in this case we return whatever the receiver returns; here that is just Unit, because it returns nothing. Finally, we encounter withContext, which has the effect of adding 500 ms at a time to the whole waiting time of this function. The reason is that withContext performs context switching: it suspends the calling coroutine, runs its own execution, and returns to the caller at the end of it, regardless of the system thread running it. So the three withContext blocks contribute 3 x 500 ms, the final delay contributes 2000 ms, and the 2000 ms delay inside launch overlaps with all of that. This is why, when we run the whole code, we get approximately 3500 ms in runtime:
So these are the basics, but it is also important to have an idea of what the 4 different contexts do, and to remember that for Android there are a lot more and that we can make custom ones (a small sketch after this list prints the thread each dispatcher uses):
- IO: This context manages coroutines during blocking operations, much like Java Virtual Threads do during PINNING. You can see this in execution results number 2. It is purposely made to be used during IO operations, in order to allow, when possible, IO operations to be executed in a non-blocking way.
- Default: It uses at least 2 cores and by default uses a pool of threads containing as many threads as there are available cores. You can see this in execution results number 7. It will use a different thread from the available JVM pool of threads if possible; otherwise, it will use the first one.
- Unconfined: It means that the dispatcher will not necessarily continue to execute on the same thread. You can see this in execution results number 6. Its criterion is to use the first available thread, making it quite fast. A subtle difference between this one and Default is that Default chooses the first different thread if possible, whereas Unconfined allows the dispatcher to pick any first available one.
- Main: This one is platform-dependent and does not have to exist. It is sometimes referred to as an Android-specific context, but in reality it just refers to whatever implementation the platform where you are running defines it to be.
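Here is the small sketch mentioned above (mine, not from the test repo), printing the thread each of these dispatchers runs on; IO and Default typically report a DefaultDispatcher-worker, while Unconfined stays on the calling thread:

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.withContext

fun main() = runBlocking {
    println("caller:     ${Thread.currentThread().name}")  // main
    withContext(Dispatchers.IO) { println("IO:         ${Thread.currentThread().name}") }
    withContext(Dispatchers.Default) { println("Default:    ${Thread.currentThread().name}") }
    withContext(Dispatchers.Unconfined) { println("Unconfined: ${Thread.currentThread().name}") }
}
```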
In Project Loom, Thread.sleep cannot necessarily be considered a blocking operation anymore, not strictly at least. However, when running Kotlin coroutines, the executing thread is not a virtual thread. It is instead a Worker provided by the Kotlin coroutines core library. Worker is a subclass of Thread, so a coroutine worker is also a Thread, but because it is not of the type VirtualThread, it will not be scheduled to sleep and will instead still block the whole execution:
static Thread newVirtualThread(Executor scheduler,
String name,
int characteristics,
Runnable task) {
if (ContinuationSupport.isSupported()) {
return new VirtualThread(scheduler, name, characteristics, task);
} else {
if (scheduler != null)
throw new UnsupportedOperationException();
return new BoundVirtualThread(name, characteristics, task);
}
}
in JDK 21
4. Coroutines Test Implementation
The implementation of the coroutines test function is quite similar to its Java counterpart, but it is important that we have a quick look at it:
private suspend fun <T> performTest(
testName: String,
methodName: String,
timeComplexity: String = "n/a",
spaceComplexity: String = "n/a",
sampleTest: suspend () -> T,
toTest: suspend () -> T,
repeats: Int
) {
log.info("===> {} : {}", testName, sampleTest())
log.info(
"***> Processing took ${
measureTimeMillisSave(
testName,
methodName,
timeComplexity,
spaceComplexity,
repeats = repeats
) {
withContext(IO) {
FileOutputStream(
File(
File(
dumpDir,
"kotlin"
),
"$methodName.csv"
),
true
).use { oos ->
(0..repeats).map {
startProcessAsync(oos) {
toTest()
}
}.awaitAll()
}
}
log.info("Just sent {} threads", repeats)
}
} milliseconds"
)
}
Although this bit seems to be the same, there is a small difference. Since we want to save data to a file, and we want all of those writes to be non-blocking, we start the whole process with a coroutine under the IO context. Once we achieve that, we can then start the method to be tested under an async context:
private fun <T> CoroutineScope.startProcessAsync(
oos: FileOutputStream?,
function: suspend () -> T
) =
async {
val start = LocalDateTime.now()
function()
val end = LocalDateTime.now()
oos?.let {
oos.write(
"$start,$end,${currentThread()}\n".toByteArray(StandardCharsets.UTF_8)
)
oos.flush()
}
}
4.1. Before testing
One thing that has made this article difficult to write is to clearly explain the goal here. Am I trying to measure how Virtual Threads perform in relation to Coroutines and vice versa? Absolutely! Are Virtual Threads and Coroutines made to answer performance issues? The short answer is a big massive maybe...! The long answer is complicated. The problem Continuations are solving is the shortage of resources we have. By making the JVM handle concurrency, we can now write code in a structured-concurrency way, we are allowed to trigger several processes at the same time, and we can encapsulate them.
Explaining why both Java Virtual Threads and Kotlin Coroutines allow us to program in a structured-concurrency kind of way would be a whole new article in itself and really off-topic, but I think that if we just use our common sense on the short definition, we can immediately see why this is so:
Structured concurrency means that lifetimes of concurrent functions are cleanly nested
We trigger them, but we don't necessarily start running them. Platform threads are very expensive: they take up space and start-up time, and the number that can truly run in parallel is limited by the number of cores of your machine. What this means in practice, as a result of any implementation of Continuations, is that suddenly we have so many resources that there are already discussions about whether concurrent and asynchronous programming is even worth the effort anymore. What my tests do is allow me to exhaust the resources up to a point where the implementation on both sides of this discussion gets challenged. That's where performance tests come in. Managing Continuations when resources are exhausted needs to be done in an intelligent way, and this is why I'm stress-testing these two implementations. I could find that Coroutines are much better than Virtual Threads, or I could find that Virtual Threads are way better. Or maybe I will find no difference at all, which could actually be the case, since we have seen that there does not seem to be any major difference between the two implementations.
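To make the quoted definition concrete, here is a minimal sketch of my own (not from the test repo) of structured concurrency in Kotlin: the children started inside coroutineScope cannot outlive it.

```kotlin
import kotlinx.coroutines.async
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.delay
import kotlinx.coroutines.runBlocking

suspend fun loadBoth(): Pair<String, String> = coroutineScope {
    val a = async { delay(100); "first" }   // child coroutine 1
    val b = async { delay(100); "second" }  // child coroutine 2
    a.await() to b.await()                  // the scope only completes when both children do
}

fun main() = runBlocking { println(loadBoth()) }
```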
There is of course a lot of code built in order to make it possible to generate such tests. If you run make clean build-run at the root of the application, you'll see that a dump directory will be generated. Inside you'll find two directories, java and kotlin. This is where the results of our tests go. There are two types of files generated in each of them: a readable markdown file and several quite unreadable CSV files. These CSV files are created in pairs. One file contains the method name and the other contains the method name but ends in -ms. The first two columns of the first file contain the start and end timestamps per virtual thread/coroutine. The third column contains the name of the running thread that carried that process.
Finally, on the root, another markdown file is generated with a short comparison report about the different methods implemented the same way, as much as possible in Java and Kotlin. This file is called Log.md.
But we still have to look at another visualization behind the theory of both technologies. The idea is that you can execute something else while you suspend the previous execution. Virtual threads work a bit like this, and this is just an oversimplified representation:
Coroutines give in practice the same kind of structure, and again this is just another oversimplified example:
The only thing that is happening in both cases, regardless of how they are implemented at a low level, is a switch between available threads. In a concurrent environment with just the use of platform threads, making a blocking call always means waiting for the blocking call to finish before being allowed to continue. Coroutines or Continuations exploit threads to the maximum by making sure that we avoid blocking whenever that is possible. If we are waiting for a blocking call, then we'll get back to that coroutine when we are done, but in the meantime we just let another coroutine move around in another thread or even on the same thread. This is what now allows us to implement in a structured-concurrency way, which is something we still need to explicitly do in the code if we want to.
They may be different at a low level, but what I see is that at a high level, both Kotlin coroutines and Java Virtual Threads (also known in the old days as fibers) are exactly the same thing.
To make this article a bit more interesting, I've made the data source that all of these algorithms run against a small, growing novel. The longer it gets, the harder the two different implementations have to work. It's all available in the GoodStory.md file located in the project repository.
5. Test results
As I mentioned before, the best way to run these tests is via the command line, but you can also run them via IntelliJ. If you run them via IntelliJ you'll need to run at least two main classes, one for Java and the other for Kotlin. These are respectively GoodStoryJava.java and GoodStoryKotlin.kt. We'll need to run them with these parameters:
-f docs/good.story/GoodStory.md -lf Log.md -dump dump
And for Java specifically, we'll have to enable preview features:
--enable-preview
If you have VisualVM, please have it running at the same time. I was able to grab these snapshots just before VisualVM crashed:
And I was able to capture this for the Kotlin coroutines project in the same way:
There are a few differences between the two captures, but that's mostly just a naming difference: we get ForkJoinPool-1-worker-N for Java virtual threads and DefaultDispatcher-worker-N for Kotlin coroutines. These workers are responsible for coordinating coroutines, the coroutine context, context switching, and assigning a coroutine to a system thread. The Java ForkJoinPool starts with a maximum setting of 256 workers. The CoroutineScheduler starts with a maximum setting of 2097150 workers.
I've created some CSV files to get an idea of how many virtual threads or coroutines are executing at any given time. These are not accurate, because they assume that these two kinds of processes run continuously and never switch context during these runs, per continuation. However, we now know that this isn't necessarily true all the time. Anyway, it's worth the effort to look into them. Let's look at one of the heaviest processes we ran in these two projects, the method/function repetitionCount. This method checks how many words are repeated more than once. This means that if we find two words "dog", then that is 1 repetition, and for every other "dog" found we add one more to that count. If we look at the count generated for Java, we find that the number of active virtual threads at any given time was 12:
For Kotlin we find something off. We see that the number of active coroutines at any given time rose up to 63:
How does this happen? Well, for Java Virtual Threads, it makes perfect sense that only 12 are active at any given time. For Kotlin Coroutines it's just strange. In this case, it's not really clear to me what happened, but I'm guessing that this number of 63 is just a misleading result: should a coroutine change context in the middle of a run, or get suspended for whatever reason, then the start and end timestamps will encompass a longer delta than usual, and the result no longer fits the initial assumption that the asynchronous processes we started ran continuously without being suspended once started. We should have gotten 12 or fewer, because that's how many cores my machine has. Not 63! I can only wish at this point.
Finally, let's have a look at the general results, where we can compare different runs of 10000 repetitions for each implemented algorithm:
Time | Method | Time Complexity | Space Complexity | Repetitions | Java Duration (ms) | Kotlin Duration (ms) | Kotlin Loom Duration (ms) | Machine |
---|---|---|---|---|---|---|---|---|
2023-01-16T20:25:14.937724788 | wait0Nanos - Wait 0 Nanos - Running - Yielding - Virtual Thread | n/a | n/a | 2 | 10 | -1 | -1 | |
2023-01-16T20:25:14.944936772 | wait100Mills - Wait 100 Mills - Running - Parking - Yield - Virtual Thread | n/a | n/a | 2 | 103 | -1 | -1 | |
2023-01-16T20:25:14.944998573 | saveWordsNio - Write to 1 file - Yield - Virtual Thread | n/a | n/a | 2 | 130 | -1 | -1 | |
2023-01-16T20:25:14.945033813 | saveWordsNio - Write to 1 file - Pinning - Yield - Virtual Thread | n/a | n/a | 2 | 233 | -1 | -1 | |
2023-01-16T20:25:14.945061943 | findAllUniqueWords - All Unique Words | n/a | n/a | 10000 | 1674 | 3911 | 1654 | |
2023-01-16T20:25:14.945090147 | findAllUniqueWordsWithCount - All Words with count | n/a | n/a | 10000 | 1194 | 1268 | 1068 | |
2023-01-16T20:25:14.945118644 | revertText - Reverted Text | O(n) | O(1) | 10000 | 168 | 844 | 360 | |
2023-01-16T20:25:14.945157538 | contentSplitIterateSubtractAndSum - Double iteration of an array of words | O(n^2) | O(1) | 10000 | 728 | 2987 | 1848 | |
2023-01-16T20:25:14.945200581 | repetitionCount - Repetition count | O(n^2) | O(n) | 10000 | 3180 | 2470 | 1012 | |
2023-01-16T20:25:14.945248030 | createAvlTree - Create AVL Tree | O(log n) | O(n) | 10000 | 302 | 1577 | 725 | |
2023-01-16T20:25:14.945295143 | findPrimeSecret - Secret word in Sieve of Eratosthenes | O(n * log(log n)) | O(n) | 10000 | 699 | 2593 | 470 | |
2023-01-16T20:25:14.945338146 | createSplayTree - Create Splay Tree | O(log n) | O(n) | 10000 | 155 | 530 | 248 | |
2023-01-16T20:25:14.945382500 | quickSort - Quick sort | O(n * log n) | O(log n) | 10000 | 1770 | 4075 | 3993 | |
2023-01-16T20:25:14.945426448 | makeTextFromWordFlow - Make text from word Flow | n/a | n/a | 10000 | 693 | 614 | 361 | |
2023-01-16T20:25:14.945485517 | createIntersectionWordList - Intersection Text Algorithm | O(n) | O(n) | 10000 | 85 | 299 | 181 | |
2023-01-16T20:25:14.945525274 | controlTest - N/A | n/a | n/a | 10000 | 517 | 463 | 43 | |
2023-01-16T20:25:14.945594012 | generalTest - N/A | n/a | n/a | 10000 | 112 | 46 | 58 | |
2023-01-16T20:25:14.945640460 | findAllUniqueWords - wait0Nanos | n/a | n/a | 2 | -1 | 16 | 18 |
Looking at the table, we see that in almost all cases the duration of throwing ten thousand virtual threads or coroutines at methods/functions of approximately the same complexity isn't really that different. In fact, zooming in more closely almost gives us the idea that Project Loom is better in terms of performance. Still, it is not enough to draw conclusions. At this point, I've exhausted the limits of my local machine and it has worked enough in these tests. There are indications throughout my tests that Project Loom's Virtual Threads do seem to perform better than Coroutines, but, as I mentioned before, that is not a definite conclusion. It is just a correlation, an idea if you will. I still wasn't able to definitively prove that one is better than the other. What I was able to establish is that, in my current local environment, there is nothing, absolutely nothing, that makes me doubt either of these approaches to solving the same problem. Both of them seem equally good, and the slight indication that Java Virtual Threads do better is still just an indication. The other reason it is just an indication is that on other occasions I have run these same tests and all the coroutines implementations did better than Java Virtual Threads. It's just that the frequency mostly seems to favor Java Virtual Threads, but this isn't material enough to draw any conclusions. And maybe not being able to draw any conclusions is in itself a conclusion already, but I'll let you decide that.
Conclusion
When I compare both implementations of this same idea of Continuations, I didn't really see any major difference in practice. I find both Kotlin coroutines and Java Virtual Threads to be great technologies alike. When exhausting the system with coroutines and forcing all sorts of algorithms into action to optimize that, I didn't see any major difference in performance.
Here is the thing: Kotlin is here to stay, and so is Java. My point with this article was to get both sides of the discussion to take a good look at what both languages offer. Kotlin was invented in 2010 and Java has existed since 1995. Just as Scala was, Kotlin was also created to "provide features not available before". Well, that is a tough pill for me to swallow. Do you know why? Because everything that is available in Kotlin, and that we say was "needed" in Kotlin, I keep finding available in Java too, just under a different style. This ranges from what we nowadays call idiomatic Kotlin to what we nowadays call idiomatic Java.
Since Java 8, back in 2014, we have had lambdas, which was the first time Java began addressing concerns about the lack of more expressive constructs. Lambda-based operations do the same work that for, while, and do {} while loops do, in the same way that receivers in Kotlin do, and they can make everything painfully slow. You only realize this when implementing algorithms for high-availability applications or when doing exercises on coding sites that care about big O notation. That may be an exaggeration, and I love the elegance that both bring, so I use them heavily too, but my point is that they are not everything. When we invest in sequences, lambdas, receivers, and map-reduce operations, we are to some degree penalizing performance. Does it matter? It only matters when it matters, so my best advice is simply to know them well. We all truly love lambdas and receivers, but don’t let them become a point of anger in your daily coder’s life, because sometimes the good old for loop can make a real difference.
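To make that trade-off concrete, here is a minimal Kotlin sketch (the function names are mine, purely illustrative, and not from the article’s repository): both functions compute the same total, but the chained version allocates lambdas and sequence machinery that the plain loop never touches, which is exactly the kind of overhead that only shows up in hot paths.

```kotlin
// Both functions compute the total length of all words longer than 3 characters.

// Plain loop: one pass, no intermediate objects or lambda allocations.
fun totalLongWordLengthLoop(words: List<String>): Int {
    var total = 0
    for (word in words) {
        if (word.length > 3) total += word.length
    }
    return total
}

// Functional chain: more declarative, but filter/map allocate lambdas and
// sequence wrappers that the loop above avoids.
fun totalLongWordLengthChain(words: List<String>): Int =
    words.asSequence()
        .filter { it.length > 3 }
        .map { it.length }
        .sum()

fun main() {
    val words = listOf("loom", "vs", "coroutines", "jvm")
    println(totalLongWordLengthLoop(words))   // 14
    println(totalLongWordLengthChain(words))  // 14
}
```

For small collections the difference is negligible; it is in tight, frequently executed paths that the extra allocations start to matter.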
If we talk, for example, about extension functions being better than static methods in Java, that is also not a strong standpoint. When I see these discussions, or get dragged into them, what I usually observe is that one side is extremely passionate about its language of choice, but what is really happening, in my view, is just people defending their personal preferences. Me, I prefer to be objective, and I can’t see anything objectively concerning about either of these languages. They are just different. And that’s great!
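To illustrate why I see extension functions and static helper methods as two styles of the same thing, here is a small hedged sketch (the shout example is hypothetical, not from the article’s code): the extension function reads as a member call, yet it compiles down to a static method that takes the receiver as its first parameter, much like the utility-object version.

```kotlin
// Extension function: reads as if String had a shout() member...
fun String.shout(): String = uppercase() + "!"

// ...but it compiles to roughly the same shape as a static utility method.
object StringUtils {
    fun shout(value: String): String = value.uppercase() + "!"
}

fun main() {
    println("hello loom".shout())            // HELLO LOOM!
    println(StringUtils.shout("hello loom")) // HELLO LOOM!
}
```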
Java is in many ways the parent of both Scala and Kotlin. I think it is rather senseless to want or wish for Kotlin to take over Java. I personally think that all of these languages should exist and that we should learn from all of them, because the very fact that they are different yet end up doing the same things is exactly what keeps us engaged and helps us understand different perspectives on code. I don’t want Java, Kotlin, or Scala to go away. I want all of them, and the other languages too, to keep evolving, and I want to learn from all of them. Hey, remember that I started programming with tapes on a ZX-Spectrum 48K machine with rubber keys? That was in the late 80s for me. It probably has no relevance in today’s world, but having that reference does allow me to better understand where we are, which problems we faced in the past and face in the present, and which problems we may find in the future. The enrichment that more languages bring to the world is frequently overlooked.
I could go on forever, but what I really want to say with this article is plain and simple. Kotlin is a new player in town and so is its coroutines implementation, and we all love them. But no matter what, I fail to see the engineering added value of these technologies in relation to Java Virtual Threads. I think Kotlin is just different, and that adds a new flavor to the JVM. However, for every single criticism of Kotlin I became aware of, it turns out I can see the same in Java, and for every single compliment about Kotlin, I can find exactly the same in Java. It just has a different style. Of course, many things aren’t integrated into the Java SDK, but Kotlin is still essentially a DSL on top of the JVM. This means that if I use something like Lombok in Java, I’d probably be getting much the same, right? It’s just another DSL, just like Kotlin. Well, many of you reading this would be up in arms saying that Lombok is "a terrible idea", and then I would say "but we have records in Java now!", and then you’d say "yeah, but data classes do all of that together, you can make everything immutable, and it looks so much better!". That’s all amazing, and I agree with that last statement. Kotlin does look better. Or does it? Maybe I prefer using annotations, maybe I prefer using @Builder instead of data class, maybe I want to be reminded that behind a single data keyword I get a hashCode implementation, an equals, getters and setters, and that if I use val on all of my properties I get an immutable object! This is where I think Kotlin is a genius language. It is still unclear to me what engineering benefit it adds to the code, and yet, by riding on our instincts and current trends, it has found a golden opportunity to fill a perceived gap that many developers and engineers face these days: boilerplate, repeated code, difficult code, engineering costs, and so on. Plus it provides an amazing style of programming when it comes to ensuring structured concurrency. And of course there is our desire to do something stimulating and new. New syntax and new semantics create a whole new playground, and that is simply a positive thing.
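To ground the record/Lombok/data class comparison, here is a minimal sketch, assuming nothing beyond the Kotlin standard library (the Word class is illustrative, not from the article’s code): a single data keyword with val properties yields structural equals, hashCode, toString, copy, and an effectively immutable value, which is roughly the territory Lombok annotations or Java records cover in their own styles.

```kotlin
// `data` generates equals(), hashCode(), toString(), copy() and componentN();
// `val` makes every property read-only, so instances are effectively immutable.
data class Word(val text: String, val count: Int)

fun main() {
    val original = Word("loom", 3)
    val bumped = original.copy(count = original.count + 1) // new instance; original untouched

    println(original)                     // Word(text=loom, count=3)
    println(bumped)                       // Word(text=loom, count=4)
    println(original == Word("loom", 3))  // true: structural equality from the generated equals()
}
```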
Neither Kotlin nor Java is, in my view, better than the other in a strict engineering sense. You may of course disagree. And if you come from an Android background, I think you’ll have much more to say here than I possibly could. I am very aware that Kotlin has been massively embraced by Android developers, and that sounds good to me. My opinion (or lack thereof) comes from a services-implementation-only perspective. Android has a lot more to it, so I have to abstain from commenting on that one. For now, that is.
If you have to pick a new technology, my advice is: just pick the one you like best. I seriously doubt you’ll find any performance benefit coming from the language itself. Be in line with your team as well. If they have a passion for Kotlin, go for it; if they have a passion for Java, go for it. It is in passion that you’ll find the most productivity. If efficiency is your only concern then, and there is wide consensus on this, you may want to stay away from anything JVM-related in the first place. It can be difficult to get things up and running on the JVM, and that is why many are turning to native solutions. What I also want to point out is that coroutines are sometimes discussed in the context of multithreading and of providing more threads. That is just not the case. The paradigm around coroutines is much closer to reactive programming than to anything else. The reason I say this is that coroutines make much more efficient use of system (platform) threads. While that may sound like multithreading, it is not; it is, if you will, a way to stop threads from pausing for no good reason, as they used to. Whether you decide to use Kotlin coroutines or the upcoming Virtual Threads in JDK 21 under Project Loom is entirely up to you.
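As a minimal sketch of that "efficient use of platform threads" point, assuming the standard kotlinx.coroutines library (the numbers and names are illustrative): ten thousand coroutines suspend on delay() while only a handful of platform threads ever run them, because suspension releases the carrier thread instead of blocking it.

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.delay
import kotlinx.coroutines.joinAll
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking
import java.util.concurrent.ConcurrentHashMap

fun main() = runBlocking {
    // Record every platform thread that actually carries a coroutine.
    val threadsSeen = ConcurrentHashMap.newKeySet<String>()

    // 10_000 concurrent coroutines; delay() suspends without blocking a platform thread.
    val jobs = List(10_000) {
        launch(Dispatchers.Default) {
            threadsSeen += Thread.currentThread().name
            delay(100) // suspension point: the carrier thread is freed to run other coroutines
        }
    }
    jobs.joinAll()

    // Typically prints a number close to the CPU core count, not 10_000.
    println("Platform threads used: ${threadsSeen.size}")
}
```

A virtual-thread version of the same experiment behaves similarly: the JVM multiplexes many virtual threads over a small pool of carrier threads, which is exactly the idea both projects share.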
The idea that Java has to defend itself against Kotlin, or that Kotlin possibly represents a contender for Java’s ruling position, was my initial motivation to write this article, because, just like the story of Lucy will one day show, sometimes we just tell each other very good stories that end up meaning nothing. I will personally keep programming in whatever language I feel like when I wake up. At work, I stick to the plan. In my spare time, though, I just choose whatever I feel like at the time, and that includes Java, Kotlin, Scala, Go, Rust, Python, Ruby, PHP, JavaScript, etc.
As I mentioned in the introduction, this article will be subject to more frequent reviews given its experimental nature.
I have placed all the source code of this application in GitLab.
I hope that you have enjoyed this article as much as I enjoyed writing it.
Thank you for reading!
5. Resources
- Coroutines are not about multi-threading at all
- Structured concurrency by Roman Elizarov
- libdill: Structured Concurrency for C
- Java Virtual Threads by Gaetano Piazzolla
- Carrier Kernel Thread Pinning of Virtual Threads (Project Loom)
- Why Continuations are Coming to Java
- Coroutines overview
- Scala (programming language)
- History of Scala
- Project Loom (Java 19)
- Java (programming language) ☕
- Project Loom: Fibers and Continuations for the Java Virtual Machine
- Coming to Java 19: Virtual threads and platform threads
- STAR method interview ✨
- Amazon Leadership Examples on Youtube
- System Design Messenger on Youtube
- Behavioral Interview Prep
- System - Design - Primer
- Grokking the System Design Interview
- Grokking the Coding Interview: Patterns for Coding Questions
- Big O Notation and Time/Space Complexity
- Analysis of Algorithms | Big-O analysis
- BTech smart class - Introduction to algorithms
- Splay tree
- Big-O Quiz
- Sieve of Eratosthenes
- Binary search tree
- The height of an AVL tree containing n nodes
- AVL Tree
- Data Structure and Algorithms - AVL Trees
- AVL Tree Insertion, Rotation, and Balance Factor Explained
- What is an AVL tree? 🌳
- AVL Tree program in Java
- How to insert Strings into an AVL Tree
- Big O Factorial Time Complexity
- BIG O NOTATION PRIMER
- What would cause an algorithm to have O(log log n) complexity?
- What does O(log n) mean exactly?
- Big O Notation, Part Two: Space Complexity
- ALGORITHMS IN KOTLIN, BIG-O-NOTATION, PART 1/7
- Big O Cheat Sheet
- Time complexity vs. space complexity
- Complexity and Big-O Notation
- Going inside Java’s Project Loom and virtual threads
- Kotlin Coroutines dispatchers
- VisualVM
- Picocli
- Issues with Spring, how Micronaut solves it, and latter’s support for GraalVM
- Kotlin Coroutines
- Java Project Loom
- GitHub Action for GraalVM
- Project Loom: Understand the new Java concurrency model