Why is everybody so mad about concurrency?

In early April, PieSoft programmers were listening to, learning from, and being inspired by the latest technical developments in the Java ecosystem at the international conference JPoint 2019. The event took place in Moscow, Russia and brought together over a thousand participants and 43 speakers, with talks running on four parallel tracks. Among the many topics discussed at the event, particular emphasis was placed on building successful products by improving concurrent engineering, parallel execution, tools, and best practices. Which means it’s time for you to find out what concurrency is and why it is so essential in today’s Java world.

So, what if you look it up in the dictionary?

Actually, there are several different concepts related to the issue:

  • concurrency
  • parallel computing
  • multithreading
  • asynchrony


Concurrency means working with multi-threaded code. In other words, it is the ability to execute two or more statements simultaneously. Imagine that the code is water flowing through a pipe, and you need to pump this water from A to B (like code execution from start, main, to finish, exit). Given that you can’t change the pipe’s diameter, you simply add one more pipe. In theory, you get more throughput. The number of “pipes” that can be used with benefit depends on the number of cores in the processor.

However, concurrency is the most general of these terms: it simply means that more than one task is in progress at the same time. For example, you can watch TV and post photos on Facebook at the same time. Oh, come on! Even Windows 95 could simultaneously play music and show pictures.

So, when we say concurrency, we don’t specify how that concurrency is achieved: by pausing some tasks and switching to others, by truly simultaneous execution, by delegating work to other devices, or something else. It doesn’t matter.

Concurrent execution simply means that more than one task will be completed over a certain period of time. That’s it.

Parallel computing

We talk about parallel computing when more than one computing device (for example, a processor core) is involved, and these devices simultaneously perform several tasks.

Parallel execution is a strict subset of concurrent computing. This means that parallel programming is impossible on a computer with a single single-core processor.


Multithreading is one of the ways to implement concurrent execution by introducing the abstraction of a “worker thread”.

Threads “abstract” low-level details away from the user and allow you to perform more than one task “in parallel.” The operating system, runtime environment, or library hides whether multithreaded execution is merely concurrent (when there are more threads than physical processors) or truly parallel (when the number of threads is fewer than or equal to the number of processors and several tasks are physically executed at the same time).
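A quick way to see this boundary on your own machine is to ask the JVM how many logical processors it has; a minimal sketch:

```java
public class CpuCount {
    public static void main(String[] args) {
        // Number of logical processors available to the JVM. If you run more
        // runnable threads than this, some of them are only concurrent, not parallel.
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("Available processors: " + cores);
    }
}
```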


Asynchrony implies that an operation can be performed by someone else: a remote web site, a server, or another device outside the current system.

The core attribute of such operations is that starting them requires significantly less time than the main work. That allows you to launch many asynchronous operations simultaneously even on a device with limited computing resources.
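In Java, such operations are often modeled with CompletableFuture: starting the work returns immediately, and the result is collected later. A minimal sketch, where the remote call is simulated by a hypothetical fetchGreeting() that just sleeps:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class AsyncDemo {
    // Hypothetical stand-in for a slow remote call.
    static String fetchGreeting() {
        try {
            TimeUnit.MILLISECONDS.sleep(100); // simulate network latency
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "hello";
    }

    public static void main(String[] args) {
        // Starting the operation is cheap and returns immediately...
        CompletableFuture<String> future = CompletableFuture.supplyAsync(AsyncDemo::fetchGreeting);
        System.out.println("request sent, doing other work...");
        // ...and we block for the result only when we actually need it.
        String greeting = future.join();
        System.out.println("got: " + greeting);
    }
}
```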

Concurrency in practice

At the moment, almost no large enterprise project can do without multithreading, which allows us to use the capabilities of the hardware in full. That is why a deep understanding of its basic principles is so important. With experience, this understanding should only deepen.

So, here is what you need to understand:

  • Implementing multithreading in your programming language
  • Problems that arise when developing multi-threaded applications
  • Ways to avoid these problems

Java multithreading implementation

Java provides the ability to create multi-threaded applications in which different threads run simultaneously. It should be noted that the basic language tools are quite low-level: it is not easy to correctly use the keywords volatile and synchronized, or the methods wait(), notify(), and notifyAll(). Developers often need higher-level constructs (thread pools, monitors, semaphores, etc.).
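To see how low-level the basics are, here is a minimal sketch of creating, starting, and waiting for a thread by hand:

```java
public class RawThreads {
    public static void main(String[] args) throws InterruptedException {
        // A Thread wraps a Runnable; nothing runs until start() is called.
        Thread worker = new Thread(() ->
                System.out.println("hello from " + Thread.currentThread().getName()));
        worker.start(); // begins executing the Runnable on a new thread
        worker.join();  // block the current thread until the worker finishes
        System.out.println("worker finished");
    }
}
```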

Issues that are worth exploring when working with Java

Thread management in the Java virtual machine (create / start / stop).
The code in each thread can run in parallel with the code in other threads, so several tasks can be performed simultaneously. Here we have to deal with such concepts as threads, thread pools, and futures.
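For example, a thread pool together with a Future lets you hand a task off and collect its result later; a minimal sketch:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // submit() starts the task on a pool thread and returns a handle immediately.
        Future<Integer> answer = pool.submit(() -> 6 * 7);
        // get() blocks the current thread until the result is ready.
        System.out.println("result: " + answer.get()); // prints "result: 42"
        pool.shutdown();
    }
}
```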
Program flow control (thread synchronization).
There are situations when the code in one thread has to wait for the completion of a task in another thread. This is achieved using a variety of synchronization tools. A major hazard in this area is deadlock, when several threads are each waiting for some action from the others.
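As an illustration, one thread can wait for another with a CountDownLatch (Thread.join() would also work for this simple case); a minimal sketch:

```java
import java.util.concurrent.CountDownLatch;

public class WaitDemo {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        StringBuilder result = new StringBuilder();

        Thread worker = new Thread(() -> {
            result.append("computed");
            done.countDown(); // signal that the work is finished
        });
        worker.start();

        done.await(); // block until the worker signals
        // The latch also establishes happens-before, so the result is visible here.
        System.out.println(result);
    }
}
```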
Control of access to memory (data) in a multi-threaded environment.
Here it is important to understand the Java Memory Model, the visibility of variables, the atomicity of operations, how race conditions occur, and thread-safe collections.
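For instance, a thread-safe collection such as ConcurrentHashMap guarantees that concurrent updates are not lost; a minimal sketch:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SafeMapDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<String, Integer> hits = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 1000; i++) {
            // merge() is atomic on ConcurrentHashMap, so no increments are lost
            // even when several pool threads update the same key.
            pool.submit(() -> hits.merge("page", 1, Integer::sum));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(hits.get("page")); // prints 1000
    }
}
```

With a plain HashMap the same code could lose updates or corrupt the map, because HashMap gives no guarantees under concurrent modification.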

Problems that arise when developing multi-threaded applications with Java

In fact, there can be many problems, but the most common are:

– Deadlock (mentioned above), which occurs when threads block resources from each other: each thread holds a resource while waiting for one held by another, so none of them can finish its work.

The simplest example of this is when two threads are waiting for results from each other. Here we can draw an analogy with plumbers and electricians. Imagine that the wiring is “flooded”. The electricians are afraid to start work until the plumbers fix the leak, and the plumbers are not eager to get an unexpected jolt of electricity while they work :).
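The same situation can be sketched in Java. With plain synchronized blocks taken in opposite orders the program would simply hang, so this sketch uses ReentrantLock.tryLock() with a timeout, letting each “crew” detect the problem and back off instead of waiting forever:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class DeadlockDemo {
    static final ReentrantLock pipe = new ReentrantLock();   // the plumbers' resource
    static final ReentrantLock wiring = new ReentrantLock(); // the electricians' resource

    // Take two locks in the given order; back off if the second one is busy.
    static boolean work(String name, ReentrantLock first, ReentrantLock second)
            throws InterruptedException {
        first.lock();
        try {
            Thread.sleep(50); // give the other thread time to grab its first lock
            if (second.tryLock(200, TimeUnit.MILLISECONDS)) {
                try {
                    return true; // got both resources
                } finally {
                    second.unlock();
                }
            }
            System.out.println(name + ": could not get the second lock - would have deadlocked");
            return false;
        } finally {
            first.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // The two threads acquire the same locks in opposite orders.
        Thread plumbers = new Thread(() -> {
            try { work("plumbers", pipe, wiring); } catch (InterruptedException ignored) { }
        });
        Thread electricians = new Thread(() -> {
            try { work("electricians", wiring, pipe); } catch (InterruptedException ignored) { }
        });
        plumbers.start();
        electricians.start();
        plumbers.join();
        electricians.join();
    }
}
```

Typically both threads end up holding their first lock and failing to get the second, which is exactly the cycle that would freeze a synchronized-based version forever.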

– Race condition: a situation when two or more threads try to access shared data and change it simultaneously. The result then depends on which thread gained access first.
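Here is a minimal sketch of such a race: counter++ looks like one operation but is actually read-modify-write, so parallel increments can be lost:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class RaceDemo {
    static int counter = 0; // shared, unsynchronized

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 100_000; i++) {
            // counter++ is read-modify-write: two threads can read the same
            // value and both write value + 1, losing one increment.
            pool.submit(() -> counter++);
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        // Usually prints less than 100000 because of lost updates.
        System.out.println("counter = " + counter);
    }
}
```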

The basic ways to avoid these problems are:

Deadlock – always control the order in which we acquire resources. To do this, we must clearly understand what is happening in our system.
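One common discipline (a sketch of one option, not the only one) is to impose a single global lock-acquisition order, so that a cycle of waiting threads can never form:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockOrdering {
    static final ReentrantLock lockA = new ReentrantLock();
    static final ReentrantLock lockB = new ReentrantLock();

    // Every thread takes lockA before lockB, never the reverse,
    // so no thread can hold lockB while waiting for lockA.
    static void doWork(String who) {
        lockA.lock();
        try {
            lockB.lock();
            try {
                System.out.println(who + " holds both locks");
            } finally {
                lockB.unlock();
            }
        } finally {
            lockA.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> doWork("t1"));
        Thread t2 = new Thread(() -> doWork("t2"));
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("done"); // always terminates
    }
}
```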

Race condition – always make sure that access to shared resources is organized properly (using synchronized and other concurrency utilities). For example, we can allow the file to be read at any time but restrict write access, etc.
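For example, a minimal sketch of a counter whose increment is protected with synchronized, so no updates are lost:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SafeCounter {
    private int value = 0;

    // synchronized makes the read-modify-write a single atomic step
    // and guarantees the new value is visible to other threads.
    synchronized void increment() { value++; }
    synchronized int get() { return value; }

    public static void main(String[] args) throws InterruptedException {
        SafeCounter counter = new SafeCounter();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 100_000; i++) {
            pool.submit(counter::increment);
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(counter.get()); // prints 100000
    }
}
```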
