Performance Programming: Introduction to Parallelism and Concurrency

Marho Onothoja
3 min read · Feb 11, 2021


Built for speed

The world is getting increasingly fast: faster cars, planes breaking the sound barrier, rockets reaching escape velocity, and so on. People are increasingly looking for ways to improve performance any way they can. This is why you have multiple different algorithms that do the exact same thing, with the only difference being their time and space complexity, that is, the time it takes to complete the task and the memory it uses; a topic for another series.

Software performance has come a long way since the 1900s: faster compilers and interpreters, optimized algorithms, improved design patterns, and new paradigms, all of which improve run times across various languages. Video games today can render imagery at qualities that were never initially conceived. Optimization has played a huge role in the software space, consistently delivering groundbreaking results and making other seemingly impossible feats possible (like how fast the internet runs these days, machine learning, etc.). When building new software for a particular purpose, optimization and speed are two things that are often considered.

A rendered image made with Unreal Engine, a real-time 3D engine.

Hardware has been built to reach breakneck speeds. This is why supercomputers exist, why computers have GPUs, and why most modern laptops have multiple cores; but having all of that means little if you do not know how to make the best of it. Modern software takes advantage of these improvements in hardware and its resources, which is why playing a game like FIFA on your computer comes with recommended specifications: without them you might experience lag and other inconveniences.

In summary, what I am saying is that sometimes using the best algorithms and data structures just won't cut it. Of course, there are still a great many things to consider while optimizing your code for speed, like caching and memory usage; more on those later in the series.

So what do you do when rewriting your code into the best version it can be just isn't enough to meet your speed requirements?

This is where the concepts of parallelism and concurrency come into play. Both attempt to make the fullest possible use of what your hardware is capable of.

In this series, I intend to discuss some key ways of improving the performance of your software, which include, but are not limited to, the use of parallelism and concurrency in your code base.

If you have ever heard of these terms before, it is important to note that parallelism and concurrency are not the same thing, although they share some similarities. Parallelism is about running tasks simultaneously. Concurrency, on the other hand, is about scheduling multiple tasks to share time: one task runs for a period and stops, handing control over to another task, which runs for its own period before handing control over again, and this cycle goes on until all the tasks are complete.
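To make the distinction concrete, here is a minimal Python sketch of my own (the function names `tick` and `crunch` are just placeholders, not anything from this series): the concurrent part interleaves two tasks on a single thread with asyncio, while the parallel part runs two CPU-bound jobs at the same time in separate processes with multiprocessing.

```python
import asyncio
import multiprocessing

# Concurrency: two tasks take turns on one thread.
# Each "await asyncio.sleep" hands control back to the scheduler,
# which lets the other task run for a while.
async def tick(name, delay):
    for i in range(3):
        print(f"{name}: step {i}")
        await asyncio.sleep(delay)  # yield control to other tasks

async def concurrent_demo():
    await asyncio.gather(tick("task-A", 0.1), tick("task-B", 0.1))

# Parallelism: two CPU-bound jobs run at the same time
# in separate processes, each on its own core (if available).
def crunch(n):
    return sum(i * i for i in range(n))

def parallel_demo():
    with multiprocessing.Pool(processes=2) as pool:
        results = pool.map(crunch, [10_000_000, 10_000_000])
    print("results:", results)

if __name__ == "__main__":
    asyncio.run(concurrent_demo())
    parallel_demo()
```

Running it, the prints from task-A and task-B interleave (concurrency), while the two calls to `crunch` are genuinely computed at the same time (parallelism).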

Some of the ways to achieve parallelism or concurrency include:

  1. Threading
  2. Multiprocessing
  3. Asynchronous programming
  4. Clusters

Where you find one of these, you are very likely to find the rest, and this series is dedicated to looking at the core concept behind all four of them, their best use cases, and the possible drawbacks you might face while using any of them, so that you know which to use for your project and when and where in your code to use it.
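As a quick preview (a sketch of my own, not necessarily the exact APIs the rest of the series will use), Python's standard library exposes the first three directly, while clusters usually mean several machines working together under an external framework:

```python
import asyncio
import multiprocessing
import threading

def work(label):
    # A stand-in for real work; it just reports which mechanism ran it.
    print(f"{label} running")

async def async_work(label):
    print(f"{label} running")
    await asyncio.sleep(0)  # a point where other tasks could take over

if __name__ == "__main__":
    # 1. Threading: several threads inside one process.
    t = threading.Thread(target=work, args=("thread",))
    t.start()
    t.join()

    # 2. Multiprocessing: a separate process, ideally on another core.
    p = multiprocessing.Process(target=work, args=("process",))
    p.start()
    p.join()

    # 3. Asynchronous programming: cooperative tasks on one thread.
    asyncio.run(async_work("async task"))

    # 4. Clusters: many machines cooperating; this usually requires an
    #    external framework rather than the standard library alone.
```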

In the next post in this series I'll be discussing concurrency, or more accurately threading and how it makes use of concurrency. See you there.
