Bark of the byte

Running with bytes and other wild creatures

Part 0.1 Defining concurrent vs parallel

The aim of this post is not to produce a universally applicable definition, but rather to investigate the current definitions and then establish working definitions as a reference for the rest of these blog posts.

Unfortunately, there is much confusion and disagreement about the difference between concurrent and parallel programming; just look at the comments here. The reasons for the disparity and confusion are:

1. The terms parallel and concurrent are already overloaded and mean different things in different contexts.

2. There is more than one dimension when considering concurrent versus parallel. There is the system as a whole, the processor (the hamster on the wheel that drives the system) and the algorithm (the expression of the steps to solve the problem). The algorithm or program is where most of the confusion lies, because the experts (who know the most about it) seem to bully the definitions to suit their particular niche.

3. The end-user experience is the same provided that the processor is fast enough. This is an implementation detail. The end-user is often unaware (as they should be) of whether a system or algorithm is concurrent or parallel.

Why bother differentiating between concurrent and parallel?

Performance. You may notice that I qualified point 3 above with ‘provided that the processor is fast enough’. To use an old idiom, ‘many hands make light work’. If we have a problem and we are able to split it and farm it out to two different cores, computers, people or some other type of worker so that they can work in parallel, then we have essentially halved the time that it will take to solve the problem.
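To make that concrete, here is a minimal Go sketch of the ‘many hands’ idea: a summation split between two workers that the runtime can schedule onto separate cores. The names (sumRange and so on) are my own, invented for illustration.

```go
package main

import (
	"fmt"
	"sync"
)

// sumRange adds up one chunk of the numbers and reports back.
func sumRange(nums []int, out *int, wg *sync.WaitGroup) {
	defer wg.Done()
	total := 0
	for _, n := range nums {
		total += n
	}
	*out = total
}

func main() {
	nums := make([]int, 1_000_000)
	for i := range nums {
		nums[i] = i
	}

	var left, right int
	var wg sync.WaitGroup
	wg.Add(2)

	// Split the problem in half and farm each half out to a worker.
	mid := len(nums) / 2
	go sumRange(nums[:mid], &left, &wg)
	go sumRange(nums[mid:], &right, &wg)

	wg.Wait() // both workers have finished; combine their results
	fmt.Println("total:", left+right)
}
```

On a single-core machine the same program still runs correctly, it just no longer halves the wall-clock time, which is exactly the ‘provided that the processor is fast enough’ caveat.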

Let’s investigate the current definitions

If you look at the English definitions for concurrent and parallel (both at Oxford Dictionaries and at Merriam-Webster) you will find that, with regard to time, they both mean ‘happening at the same time’.

Parallel lines and concurrent lines are also defined in geometry. The problem is that these definitions are concerned with lines through space while we are concerned with operations in time.

If you look at Wikipedia, the difference basically boils down to ‘overlapping time periods’ (concurrent) versus ‘simultaneously’ (parallel).

If you look at the community-favoured answers on Stack Overflow you get:

  1. An ASCII diagram illustrating the ‘overlapping time periods’ vs ‘simultaneous’ idea.
  2. Definitions that are similar to the ‘overlapping time periods’ vs ‘simultaneous’ idea.
  3. A definition describing concurrency as overlapping, non-deterministic execution and parallelism as a way of achieving improved throughput.

Then there are some expert definitions (this is just a selection of representative posts that I think explain their point well):

  1. Parallelism is not concurrency. This is in a similar vein to point 3 above.
  2. Concurrency is not Parallelism (it's better!). ‘Concurrency is about dealing with lots of things at once. Parallelism is about doing lots of things at once.’ This actually fits well with the time-based concepts and is discussing programming design.

Now, most of this confusion occurs when we enter the realm of the program / algorithm, not the processor or system. For the rest of this post we are discussing ‘concurrent’ and ‘parallel’ as they apply to programs and algorithms.

Time versus determinism

Boiled down, there are two competing ideas:

1. Concurrency = overlapping time periods. Parallel = simultaneous.

2. Concurrency = non-deterministic. Parallel = deterministic.

Unfortunately, the two definitions cannot coexist, because the first defines parallel as a wholly contained subset of concurrency, while in the second they are separate entities.

If you look at the reasoning behind defining parallelism as a separate entity from concurrency, a central point is determinism, as described here:

Concurrency is concerned with nondeterministic composition of programs (or their components)

Parallelism [is concerned with the] efficiency of programs with deterministic behavior.

Let’s look at two commonly used examples:

  1. A web server. This is non-deterministic. You can’t tell from one moment to the next which event will arrive; the program needs to handle and coordinate these events as they come (a sketch follows this list).
  2. A quicksort algorithm. The result is deterministic: the program can subdivide the problem as it wishes and knows the outcome, a sorted list.
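To illustrate the first case, here is a toy Go sketch (not a real web server) in which two simulated clients deliver events with unpredictable timing, so the handler cannot know which will arrive first. The names and delays are invented for illustration.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// client simulates a request arriving after an unpredictable network delay.
func client(name string, events chan<- string) {
	time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond)
	events <- name
}

func main() {
	events := make(chan string)
	go client("request A", events)
	go client("request B", events)

	// The program cannot know which request turns up first; the order of
	// these two lines can change from run to run.
	fmt.Println("handled:", <-events)
	fmt.Println("handled:", <-events)
}
```

Run it a few times and the order of the two printed lines changes; that variation between runs is the non-determinism this definition is pointing at.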

What’s the problem?

I should note that what I describe here are the problems I have in trying to fit definitions across multiple layers. The problem does not exist for those who use these definitions, because they are accurate within their own field. The first problem is that the determinism-based definition doesn’t translate well up the stack to processors and systems. When we think of operations running independently we don’t think in terms of determinism, we think in terms of steps over time. If you get hit by two tennis balls you don’t think ‘that was a non-deterministic process, I couldn’t tell which ball was going to arrive first’. You think ‘those balls hit me at the same time’. When a person sees a number of bees buzzing around a hive, they tend to think of how the bees all appear to be doing things at the same time, rather than whether their activity is deterministic or not.

The problem is that we have swapped time for determinism, yet when we think about processing and algorithms we generally think in terms of time rather than determinism. When we think of things running in parallel, it is actually the time that we visualise: two operations occurring simultaneously. By the determinism definition we could write a program where things happen at the same time and yet it would not be a parallel program. For example, a web server program allows events to arrive and be handled simultaneously, but by the determinism definition it is not a parallel algorithm. Even if this program has those horrible side effects and nasty mutexes, there will be times (possibly the majority of the time) when all processors are busy processing simultaneously.
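The following hedged Go sketch illustrates that point: a server-style program whose handlers overlap in time and, on a multi-core machine, genuinely execute simultaneously, even though by the determinism-based definition it would be classed as concurrent rather than parallel. The shared counter and handler body are invented for illustration.

```go
package main

import (
	"fmt"
	"sync"
)

var (
	mu       sync.Mutex
	requests int
)

func handle(wg *sync.WaitGroup) {
	defer wg.Done()
	// Real work would happen here, at the same time as the other handlers.
	mu.Lock() // the "nasty mutex" guarding shared state
	requests++
	mu.Unlock()
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		// Each handler overlaps in time with the others and, on a
		// multi-core machine, runs simultaneously with them.
		go handle(&wg)
	}
	wg.Wait()
	fmt.Println("requests handled:", requests)
}
```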

We are in the business of doing things at the same time, and with the determinism definition the time dimension has been swapped for determinism (I know there is more to it than that, but that seems to be the fundamental part). While I see value in differentiating deterministic programs (see later in this post), I have a hard time adopting these definitions for concurrent and parallel because:

  1. It seems less intuitive to consume as a non-expert. It is easy to understand the concept of activities running at the same time and how that relates to concurrency and parallelism, less easy to understand how determinism relates (as an introduction).
  2. It seems more difficult to understand the natural evolution from sequential to single-processor multi-tasking to multiple processors purely in terms of determinism.
  3. It seems to create a disconnect with the natural time metaphor used for processors and systems, to the point that concurrent (non-parallel) programs can run largely in parallel.

Differentiating deterministic, side-effect-free programs and algorithms

Note that I do think it is valuable to differentiate deterministic, side-effect-free programs from their badly behaved (but still very useful) non-deterministic, lock-ridden counterparts. However, I see that as a refinement of the definition of parallel. This would avoid confusion in the grey areas of concurrent vs parallel and elevate this deterministic class of program from merely parallel to ‘deterministic parallel’ (or something like that). This would also keep the time metaphor, so that non-deterministic programs that are processed largely in parallel could still be considered parallel.

What definition will we use?

So, in these posts, unless I am convinced otherwise, I will go with the time-based definitions, but highlight deterministic, side-effect-free programs or algorithms when relevant. So:

  • Concurrency is concerned with operations in overlapping time periods.
  • Parallelism is concerned with simultaneous operations.

Regarding programs and algorithms, a web server program is a parallel program. So is a quicksort program. However, the quicksort program is a deterministically parallel program, and that means that compilers/interpreters/processors can do cool things with it, like automatically running it in parallel without fear of bringing the walls down or getting locked out.
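As a rough sketch of why that determinism matters, here is a quicksort in Go that recurses into separate goroutines. Because each recursive call works on a disjoint slice with no shared mutable state, the parallel version always produces the same sorted result as the sequential one. The threshold and names are illustrative, not a tuned implementation.

```go
package main

import (
	"fmt"
	"sync"
)

// quicksort sorts a in place, sorting large halves in separate goroutines.
// Each recursive call owns a disjoint sub-slice, so the result is the same
// whether the halves run sequentially or in parallel.
func quicksort(a []int) {
	if len(a) < 2 {
		return
	}
	p := partition(a)
	if len(a) > 1024 { // illustrative threshold: only parallelise big slices
		var wg sync.WaitGroup
		wg.Add(2)
		go func() { defer wg.Done(); quicksort(a[:p]) }()
		go func() { defer wg.Done(); quicksort(a[p+1:]) }()
		wg.Wait()
	} else {
		quicksort(a[:p])
		quicksort(a[p+1:])
	}
}

// partition performs a simple Lomuto partition around the last element.
func partition(a []int) int {
	pivot := a[len(a)-1]
	i := 0
	for j := 0; j < len(a)-1; j++ {
		if a[j] < pivot {
			a[i], a[j] = a[j], a[i]
			i++
		}
	}
	a[i], a[len(a)-1] = a[len(a)-1], a[i]
	return i
}

func main() {
	nums := []int{5, 3, 8, 1, 9, 2, 7, 4, 6, 0}
	quicksort(nums)
	fmt.Println(nums) // always [0 1 2 3 4 5 6 7 8 9]
}
```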

So, how do we then describe the difference between concurrent and parallel in terms of programs and algorithms? With our definition of concurrent being ‘overlapping time periods’, parallel being ‘simultaneous’, and concurrent including all parallel, the easiest way is to ask: what are examples of concurrent algorithms that are not parallel? Or, what are examples of programs that are designed to run operations in overlapping time periods, but not simultaneously? An example is a timer-based program. These are actually very common. JavaScript only recently had Web Workers introduced. Before that, if you wanted to perform operations in overlapping time periods (such as animating two separate elements moving at once) you would create one or more timers that would fire at certain intervals. To the user, it would appear that two operations were happening at once, but we know that we actually just interleaved the operations.
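The same interleaving idea can be sketched in Go rather than JavaScript (the intervals and frame counts are made up): restricting the runtime to a single processor means the two ‘animations’ run in overlapping time periods but never simultaneously, which is exactly concurrency without parallelism.

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

// animate pretends to draw three frames of an animation, one per tick.
func animate(name string, interval time.Duration, done chan<- bool) {
	for frame := 1; frame <= 3; frame++ {
		time.Sleep(interval) // stand-in for a timer firing
		fmt.Printf("%s: frame %d\n", name, frame)
	}
	done <- true
}

func main() {
	// One processor: operations can interleave (concurrency) but never
	// execute simultaneously (parallelism).
	runtime.GOMAXPROCS(1)

	done := make(chan bool)
	go animate("element A", 30*time.Millisecond, done)
	go animate("element B", 50*time.Millisecond, done)

	// Frames from A and B appear interleaved, as if both were animating
	// at once, even though only one runs at any given instant.
	<-done
	<-done
}
```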
