Coroutines have been available since Kotlin 1.1, but Kotlin 1.3 lifted them out of experimental mode, which means the coroutines API is now finalized and ready to be used in production environments. Many brave souls started using coroutines in the early days, but the API changed with the release of Kotlin 1.3, so old coroutine code needs some adjustments in order to follow the latest standards.
What are Coroutines?
Coroutines are similar to threads, but they can greatly improve code readability, which is one of the most crucial things every software engineer should strive for. Coroutines may look like normal functions in Kotlin, but they have certain “special abilities”, such as pausing execution halfway through a function and yielding control to other coroutines. This way of executing a program is known as cooperative multitasking, and it allows us to simplify our programs and improve their performance. Coroutines are nothing new in the programming world, but they have recently regained popularity as a tool for application development.
Since coroutines are not that different from normal functions in many ways, you can turn any function into a suspending one just by adding the suspend keyword before its name. That alone doesn’t make it cooperate well with its neighbors, though; it’s your responsibility to make sure it isn’t too “greedy” in terms of consuming precious CPU time.
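A minimal sketch of such a conversion (fetchGreeting is a hypothetical function; the call to delay() is what actually makes it suspend instead of blocking):

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.runBlocking

// Hypothetical example: the suspend keyword lets this function pause
// without blocking its thread, but only because it calls another
// suspending function (delay) inside.
suspend fun fetchGreeting(): String {
    delay(100)  // suspension point: frees the thread for other coroutines
    return "Hello from a coroutine"
}

fun main() = runBlocking {
    println(fetchGreeting())  // prints Hello from a coroutine
}
```

A suspend function that only spins the CPU in a loop never reaches a suspension point, which is exactly the “greedy” behavior described above.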
Benefits of Coroutines
One of the main benefits of coroutines is their readability: they look similar to regular functions, but they are much more powerful. Code readability tends to be the first victim of concurrency, because Java’s concurrency APIs are complex and error-prone, and alternatives such as RxJava can also complicate the code and mess with its structure.
Another important fact about coroutines is that they are lightweight: they don’t need a thread context switch in order to share computing power with their neighbors, which leads to significant performance gains. You can spawn hundreds of thousands of coroutines on your laptop without crashing your system. Threads, on the contrary, are expensive, and you should keep a close eye on how many threads your program uses.
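To illustrate, here is a toy benchmark (the count of 100,000 and the AtomicInteger counter are just for demonstration); doing the same with 100,000 threads would exhaust memory:

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking
import java.util.concurrent.atomic.AtomicInteger

// Launch n coroutines that each do a tiny bit of suspending work.
fun countWithCoroutines(n: Int): Int = runBlocking {
    val counter = AtomicInteger()
    val jobs = List(n) {
        launch {
            delay(10)                  // suspends, does not block a thread
            counter.incrementAndGet()
        }
    }
    jobs.forEach { it.join() }
    counter.get()
}

fun main() {
    println(countWithCoroutines(100_000))  // prints 100000
}
```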
Dangers of Coroutines
The biggest issue with coroutines is that they can make your programs more fragile, and that’s the consequence of giving a lot of power to each particular coroutine. Badly implemented coroutines often act “greedy”, which can reduce a program’s performance dramatically, so you have to make sure that all of your coroutines are cooperative and willing to share resources with their neighbors.
Coroutine Context
A coroutine context is just a set of CoroutineContext.Element instances. Every coroutine has a context, and it’s usually created automatically when you launch a coroutine using one of the popular coroutine builders. Let’s take CoroutineScope.launch() as an example:
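A hypothetical launch() call that passes a couple of explicit context elements (a dispatcher and a CoroutineName) to the builder; the element can then be read back out of the child’s context:

```kotlin
import kotlinx.coroutines.CoroutineName
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

// launch() combines the scope's context with the elements we pass in,
// joined together with the + operator.
fun launchNamed(): String = runBlocking {
    var observed = ""
    val job = launch(Dispatchers.Default + CoroutineName("worker")) {
        // the child's context contains the CoroutineName element we supplied
        observed = coroutineContext[CoroutineName]?.name ?: "unnamed"
    }
    job.join()
    observed
}

fun main() {
    println(launchNamed())  // prints worker
}
```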
There are four kinds of elements that can be stored in a CoroutineContext:
- CoroutineName - used to name a coroutine, mostly for debugging purposes
- Job - a cancellable entity with a life-cycle; it can have a parent job as well as child jobs. Jobs are hierarchical: cancelling a parent cancels all of its children, while the normal cancellation of a child does not cancel its parent
- ContinuationInterceptor - an element that intercepts coroutine continuations; coroutine dispatchers implement this interface
- CoroutineExceptionHandler - an optional element used to handle uncaught exceptions
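A sketch of the last element in action (the handler, scope, and message plumbing are all illustrative; note that a CoroutineExceptionHandler only fires for uncaught exceptions in root coroutines):

```kotlin
import kotlinx.coroutines.CoroutineExceptionHandler
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.SupervisorJob
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

fun handleCrash(): String {
    var message = ""
    val handler = CoroutineExceptionHandler { _, e ->
        message = "Caught: ${e.message}"
    }
    // Install the handler on a standalone scope so the launched
    // coroutine is a root coroutine and the handler applies.
    val scope = CoroutineScope(SupervisorJob() + handler)
    runBlocking {
        scope.launch { throw IllegalStateException("boom") }.join()
    }
    return message
}

fun main() {
    println(handleCrash())  // prints Caught: boom
}
```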
Coroutine Dispatchers
Coroutine dispatchers determine which threads their coroutines use for execution. Imagine a situation where you have to interact with some UI elements and you really need to access them from the UI thread. You could create a special CoroutineDispatcher for UI interactions, but one is usually provided by the coroutines library.
There are four predefined dispatchers that should be enough for most apps:
Default: That’s what coroutines use by default. This dispatcher is backed by a pool of threads whose size equals the number of CPU cores available to your program (with a minimum of two), so it can run coroutines in parallel. One of the good uses for this dispatcher is running CPU-intensive calculations: this way, we can split our coroutines evenly between all of the available CPU cores.
IO: The main purpose of this dispatcher is to handle IO tasks such as reading from or writing to files or performing network requests. This dispatcher also uses a pool of threads, so it’s capable of parallel execution too, but its pool can be substantially bigger than the one used by the Default dispatcher.
Unconfined: This dispatcher is not confined to any thread which makes it unpredictable and you should avoid using it unless you have a strong reason to do otherwise.
Main: The implementation of this dispatcher varies from platform to platform, and it is always confined to the UI thread. It’s probably a good idea to execute all of the UI-interacting parts of your coroutines with this dispatcher.
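A small sketch of picking dispatchers per task with withContext() (the workloads here are stand-ins for real CPU-heavy and blocking-IO code):

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.withContext

// CPU-bound work belongs on Default; blocking IO belongs on IO.
suspend fun loadAndCompute(): String {
    val sum = withContext(Dispatchers.Default) {
        (1..1000).sum()              // stand-in for a CPU-heavy computation
    }
    val text = withContext(Dispatchers.IO) {
        "pretend file contents"      // stand-in for a blocking read
    }
    return "$sum / $text"
}

fun main() = runBlocking {
    println(loadAndCompute())  // prints 500500 / pretend file contents
}
```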
Starting a Coroutine
Here are the most popular ways to start a coroutine:
- CoroutineScope.launch() - launches a coroutine in a non-blocking way
- runBlocking() - launches a coroutine and blocks until it finishes. This function is useful in unit tests when you want to do all of the assertions before the test function returns
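A sketch showing both builders together (the event list is only there to make the ordering visible):

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

// runBlocking blocks the current thread until it and all of its
// children complete; launch returns immediately with a Job.
fun startOrder(): List<String> {
    val events = mutableListOf<String>()
    runBlocking {
        launch {
            delay(50)
            events += "child done"
        }
        events += "parent continues"   // runs before the child's body
    }
    events += "runBlocking returned"   // only after the child finished
    return events
}

fun main() {
    println(startOrder())
}
```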
Once started, your coroutine is free to call other suspending functions directly.
Every coroutine builder returns a Job instance that represents the launched coroutine:
val job: Job = scope.launch { /* ... */ }
Job is a cancellable entity with a life-cycle that culminates in its completion. You can always check the state of a job by reading its isActive, isCompleted, or isCancelled properties.
Probably the most important methods of the Job interface are:
- cancel() - cancels the job and its children
- join() - suspends until this job is complete
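A hypothetical cancellation example using both methods (the delays are arbitrary; delay() is also what gives the coroutine its cancellation points):

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

fun cancelDemo(): Boolean = runBlocking {
    val job = launch {
        repeat(1000) {
            delay(100)  // each delay is a point where cancellation can happen
        }
    }
    delay(250)      // let the job run for a while
    job.cancel()    // ask the job (and its children) to stop
    job.join()      // suspend until the job has actually finished
    job.isCancelled
}

fun main() {
    println(cancelDemo())  // prints true
}
```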
Async and Deferred
There is a special coroutine builder called CoroutineScope.async() which lets you start another coroutine in a non-blocking way. This function returns a Deferred result, which means the value might not be available yet. You can call its await() method at some point in the future, so it’s pretty easy to execute several coroutines in parallel and then call await() on each of them to collect the results.
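A sketch of running two coroutines in parallel with async() (the delays and values are arbitrary):

```kotlin
import kotlinx.coroutines.async
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.delay
import kotlinx.coroutines.runBlocking

// Both async blocks start immediately and run concurrently;
// await() suspends until each Deferred result is ready.
suspend fun sumConcurrently(): Int = coroutineScope {
    val first = async { delay(100); 21 }
    val second = async { delay(100); 21 }
    first.await() + second.await()
}

fun main() = runBlocking {
    println(sumConcurrently())  // prints 42
}
```

Because the two computations overlap, the whole thing takes roughly one delay, not two.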
Coroutine Scopes
Lifecycle management is an important part of software development. Apps are composed of many components, and each of those components might have its own lifecycle, separate from the rest of the app. Let’s say we have an Android app with a special screen for displaying some kind of data that needs to be fetched from the server. A user might open this screen and see a loading indicator. At this point, they may keep waiting, or they might get bored and close the screen before the data loads. If the user leaves, we need a way to wind down all of the coroutines related to fetching that data, but how can we do that? Well, that’s what scopes are for!
Every coroutine builder is an extension of CoroutineScope, so the only thing you have to do is define a special scope for each component that has a separate lifecycle. Let’s take Android’s ViewModel as an example:
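Here is a runnable sketch of the idea: ScopedViewModel and DataViewModel are hypothetical classes, and Dispatchers.Default stands in for Dispatchers.Main, which is only available on Android (androidx now ships a ready-made viewModelScope that works much the same way):

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.Job
import kotlinx.coroutines.SupervisorJob
import kotlinx.coroutines.cancel
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

// A ViewModel-like component that owns a scope tied to its lifecycle.
abstract class ScopedViewModel {
    // SupervisorJob so one failing child doesn't cancel its siblings;
    // on Android you would use Dispatchers.Main here instead of Default.
    protected val scope = CoroutineScope(SupervisorJob() + Dispatchers.Default)

    fun clear() = onCleared()

    protected open fun onCleared() {
        scope.cancel()  // winds down every coroutine started in this scope
    }
}

class DataViewModel : ScopedViewModel() {
    fun loadData(): Job = scope.launch {
        delay(1000)  // stand-in for a network request
        println("data loaded")
    }
}

fun main() = runBlocking {
    val vm = DataViewModel()
    val job = vm.loadData()
    vm.clear()                // simulate the component being destroyed
    job.join()
    println(job.isCancelled)  // prints true
}
```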
A component like this has its own scope, which on Android typically uses Dispatchers.Main as its default dispatcher. We can easily stop whatever we’re doing once we receive the onCleared() callback from the framework component that manages ViewModels.
Coroutines are a powerful addition to the Kotlin language: they allow us to write complex multi-threaded logic that doesn’t look very different from the code we usually write for single-threaded use. Their readability is superior to any of the popular alternatives, and I think coroutines will only become more popular among Kotlin developers in the future.