Bark of the byte

Running with bytes and other wild creatures

Part 1: The opposite of concurrent

In this set of blog posts, we investigate concurrent programming and friends. This includes concurrency, parallel programming, threads, workers, locks, side effects, STM and everything in between. Each topic will be accompanied by a demonstrative drama played out on board a spaceship in the far reaches of a solar system.

Introduction

In terms of computing, the opposite of concurrent is sequential. Sequential systems execute a set of instructions one by one and in the exact order they are specified. Let’s see this in action by embarking on our space saga. 

HMS Optimize

Consider the HMS Optimize, currently 5 parsecs from terra firma. Everyone on board depends on three Space Cadets (two souls and a robot) for the efficient running of the ship. Everything goes smoothly so long as three objects can be manufactured: Large Copper Bricks, Medium Jade Balls and Small Saccharide Cubes. These three objects can be melted, frozen, fried, sautéed and molded to suit any requirement, such as supporting struts, fuselage liners, toothbrushes and energy drinks.

Today is a very exciting day, because the crew take ownership of The Materializer CHS, where CHS stands for Concurrent Helper System. This is a contraption that, given the right recipe, will materialize the desired objects out of the ether. Its arrival is fortunate, because up until now the forging of these objects was the sole domain of the on-board alchemist, who has gone missing. The alchemist was a testy character - erratic, moody and prone to shouting 'I the Alchemist, raze!' when beating the Cadets at PONG, so he has not really been missed.

We have the privilege of joining the live feed as Space Cadet 1, also known as 'Copper', gives the system a test run. Please familiarize yourself with the first page of the Materializer CHS manual and then click the button to join.

As you can see, the test run went smoothly. The Cadet used the control particle to load the recipe, then kicked off the Materializer Factory. Once complete, the Large Copper Brick (LCB) was delivered. This is an example of a sequential algorithm. The algorithm is:

  1. Wait for the size component from head office.
  2. Add the size to the recipe (large).
  3. Wait for the ingredient component from head office.
  4. Add the ingredient to the recipe (copper).
  5. Wait for the model component from head office.
  6. Add the model to the recipe (brick).
  7. Start the factory materialize process.
  8. Factory materializes the LCB.
  9. Factory deploys the LCB.
  10. Go to step 1.
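The steps above can be sketched in C#. The names here (WaitForComponent, RunOnce and the head-office queue) are hypothetical stand-ins for the Materializer's real controls, which the manual does not document; the point is that each step blocks until the previous one has finished.

```csharp
using System;
using System.Collections.Generic;

class SequentialMaterializer
{
    // Simulates blocking until head office delivers the next component.
    static string WaitForComponent(Queue<string> headOffice) =>
        headOffice.Dequeue();

    // Steps 1-9 of the algorithm: each step completes before the next starts.
    public static string RunOnce(Queue<string> headOffice)
    {
        string size = WaitForComponent(headOffice);        // steps 1-2
        string ingredient = WaitForComponent(headOffice);  // steps 3-4
        string model = WaitForComponent(headOffice);       // steps 5-6
        string recipe = $"{size} {ingredient} {model}";    // step 7
        return $"deployed: {recipe}";                      // steps 8-9
    }

    static void Main()
    {
        var headOffice = new Queue<string>(new[] { "large", "copper", "brick" });
        Console.WriteLine(RunOnce(headOffice)); // deployed: large copper brick
    }
}
```

Notice that the Cadet (the caller) is idle inside every WaitForComponent call - exactly the inefficiency hinted at below.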

Now, if you are thinking that it would be more efficient to get all the calibrations up front and then enter the size, ingredient and model in parallel, then you are going in the right direction for the upcoming blog posts. Unfortunately, spaceship policy dictates that each calibration has to be signed off, counter-signed and rubber stamped, in order and one by one, so this process cannot be altered at this point.

If you are thinking that it seems inefficient that the Cadet is inactive while waiting for the factory to build and deploy, then you are also on the right track.

OK, so that algorithm is sequential, but what about the processor? I can tell you that the Materializer System is capable of concurrent processing (it says so in the manual). However, this ability to run operations concurrently has no effect on the execution of the algorithm, because the algorithm is expressed in a sequential manner. The Materializer Factory, which is one component of the Materializer System, is a sequential processor: it does the following steps one by one and in order:

  1. Read the recipe. 
  2. Materialize the object specified in the recipe. 
  3. Deploy the materialized object. 
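The factory's own loop can be sketched the same way. The type and method names below are hypothetical, since the manual does not describe the Materializer's internals; what matters is that the factory handles exactly one recipe at a time, completing each step before the next begins.

```csharp
using System;

class MaterializerFactory
{
    // One recipe in, one object out - strictly step by step.
    public static string Process(string recipe)
    {
        string parsed = recipe.Trim();          // 1. read the recipe
        string obj = $"materialized {parsed}";  // 2. materialize the object
        return $"deployed {obj}";               // 3. deploy the materialized object
    }

    static void Main()
    {
        Console.WriteLine(Process("large copper brick"));
        // deployed materialized large copper brick
    }
}
```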

If you think that this is slow and lacking in excitement, then read on to the next post when disaster strikes.  

Analysis

In our context, there are two aspects to sequential systems:

  1. Sequential processing. In this case, the system that is processing the algorithm is only capable of running one step at a time. A single core, single threaded computer that is not capable of switching tasks is an example of a sequential processor.
  2. Sequential algorithm. A sequential algorithm is one that is written in such a way that the system that processes it does not run any part of it concurrently. This is different from whether the problem has the potential to be run concurrently. For example, the throughput of a program that reads multiple files from disk and then writes them across the network can be improved by using multiple threads of execution so that the files can be processed simultaneously. However, if the algorithm (program in this case) is expressed as a sequential set of steps such as: read file 1, send file 1, read file 2, send file 2, … then it is a sequential algorithm.  
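The read-then-send example can be sketched as a sequential algorithm in C#. The SendOverNetwork helper is a hypothetical stand-in for a real network call; each file is fully read and "sent" before the next one is touched.

```csharp
using System;
using System.Collections.Generic;
using System.IO;

class SequentialFileSender
{
    // Stand-in for the real network write; just reports what was sent.
    static string SendOverNetwork(string name, byte[] data) =>
        $"sent {name} ({data.Length} bytes)";

    // Sequential algorithm: read file 1, send file 1, read file 2, send file 2, ...
    public static List<string> SendAll(IEnumerable<string> paths)
    {
        var log = new List<string>();
        foreach (string path in paths)
        {
            byte[] data = File.ReadAllBytes(path);                  // read file i
            log.Add(SendOverNetwork(Path.GetFileName(path), data)); // send file i
        }
        return log;
    }

    static void Main()
    {
        string dir = Path.Combine(Path.GetTempPath(), "seq-demo");
        Directory.CreateDirectory(dir);
        File.WriteAllText(Path.Combine(dir, "file1.txt"), "hello");
        File.WriteAllText(Path.Combine(dir, "file2.txt"), "world!");
        foreach (string line in SendAll(Directory.GetFiles(dir)))
            Console.WriteLine(line);
    }
}
```

A concurrent version could overlap the disk reads with the network sends, which is exactly the kind of restructuring the later posts will explore.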

Basically, the difference between 1 and 2 is the difference between the system that is running the algorithm versus the algorithm itself. You could have a 256 core computer, but if the algorithm can only be run step by step then 255 cores will be idle (assuming no other processing is occurring at the time). Conversely, if the algorithm is begging to be segmented into different threads of execution, but the processor does not support concurrent execution (such as multiple processes or threads), then it will run sequentially. The compiler for such a computer would likely not support concurrent concepts, such as threads and timers, so you would not be able to express the algorithm concurrently in any case. In this set of blog posts, we will be focusing more on the algorithm than the capabilities of the computing system. We will assume that the computing system is at the level of most computers today, which means that it supports concurrency.

There is another dimension to all this, which is concurrent computing vs parallel computing. Don't get too bogged down in the details yet, but if you are interested, it is discussed in this post.

Some real world examples

Automated call handler

Consider the simple call handling service shown in the diagram below.

 

This system will not benefit by running any two steps simultaneously. For example, calling the overflow number at the same time as the intended recipient is pointless, because it is always more desirable to connect the caller to the recipient instead of an overflow agent. If the overflow agent answered a few seconds before the intended recipient, then the system would need to disconnect the overflow agent and connect the recipient. This would be a waste of the overflow agent's time and confusing to the caller.

The Fibonacci sequence

A type of problem that is not suited to a parallel algorithm is one where subsequent steps are dependent on prior computations. One example of this is calculating the Fibonacci sequence. Every item in the sequence is the sum of the previous two items, with 0 and 1 as the starting (seed) numbers. A C# implementation to return an array containing the first n Fibonacci numbers is shown below:

        Int64[] FibonacciSequence( int n )
        {
            Int64[] result = new Int64[n];
            //seed with the starting numbers (guard against n < 2)
            if( n > 0 ) result[0] = 0;
            if( n > 1 ) result[1] = 1;
            //iterate through the remaining positions
            for( int i = 2; i < n; i++ )
            {
                result[i] = result[i - 1] + result[i - 2];
            }
            return result;
        }


Since we don’t know the next Fibonacci number until we have calculated the previous ones, there is no way to break up the problem into smaller parts and run each sub-part in parallel. Admittedly, there is some fancy maths we could employ to compute a distant Fibonacci number directly and use it to seed another thread, but that gets very tricky, requires juggling numbers bigger than a 64 bit integer can hold, and really needs to be weighed up against how many numbers you want to generate. I will blog about this in a subsequent post.

Wrap up

We learnt about both sequential processing and sequential algorithms. In the next post, things will get even more exciting as we dive without fear into the world of concurrent programming.
