Parallel Computing

Parallel computing uses multiple processors to work on a single task simultaneously, in order to increase the speed and efficiency of the computation. The task is divided into smaller sub-tasks, each sub-task is assigned to a separate processor, and all processors work at the same time. The results of the sub-tasks are then combined to produce the final result.

Parallel computing is used in a wide range of applications, such as scientific simulations, data analysis, and artificial intelligence.

The key objective of parallel computing is to increase the available computing power and to reduce the time it takes to process applications and solve problems, which was a major limitation of serial computing.

Parallel computing infrastructure is generally housed in a data centre, where processors are installed in server racks.

Advantages of Parallel Computing:

  • Multiple resources can be used at the same time
  • Greater memory for resource and task allocation
  • Time saving, since multiple applications can run simultaneously
  • Faster execution than serial computing

Disadvantages of Parallel Computing:

  • Parallel systems consume a lot of power
  • Maintenance cost is high
  • Complex computing architecture
  • Initial set up cost is expensive

Parallel computing is generally classified into four types:

  • Bit-Level Parallelism

It is a type of parallel computing based on increasing the processor word size. A larger word size reduces the number of instructions a processor must execute in order to carry out an operation on large-sized data.
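The word-size effect can be illustrated by hand. In this hedged sketch, Python integers and bit masks emulate registers: an 8-bit processor needs two steps (plus carry handling) to add two 16-bit numbers, while a 16-bit processor does it in a single operation:

```python
# Bit-level parallelism illustration: adding two 16-bit numbers.

def add16_on_8bit(a, b):
    # An 8-bit processor works one byte at a time.
    lo = (a & 0xFF) + (b & 0xFF)                # step 1: add low bytes
    carry = lo >> 8
    hi = ((a >> 8) + (b >> 8) + carry) & 0xFF   # step 2: add high bytes + carry
    return (hi << 8) | (lo & 0xFF)

def add16_on_16bit(a, b):
    # A 16-bit processor does the same work in one "instruction".
    return (a + b) & 0xFFFF

a, b = 0x1234, 0x0FCD
assert add16_on_8bit(a, b) == add16_on_16bit(a, b)
print(hex(add16_on_16bit(a, b)))  # 0x2201
```

Doubling the word size halves the number of add instructions needed for data of this width, which is exactly the saving bit-level parallelism provides.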

  • Instruction-Level Parallelism

This is the parallel execution of a sequence of instructions by a computer program. With the hardware approach to ILP, the processor decides at run time which instructions to execute in parallel, also known as dynamic parallelism. With the software approach, static parallelism is applied (i.e., the compiler decides which instructions to execute in parallel).
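Python does not expose instruction scheduling, but the data dependences that ILP exploits can be shown with a hedged, hand-annotated sketch: two statements with no shared operands could be issued in the same cycle by a superscalar processor, while a dependent statement must wait for both:

```python
# ILP sketch: which statements are independent?
x = 2 * 3   # independent: uses only constants
y = 4 + 5   # independent: no operands shared with the line above,
            # so the hardware (or compiler) may execute both in parallel
z = x + y   # dependent: needs the results of x and y, so it must wait
print(z)    # 15
```

Detecting exactly this kind of independence, at run time in hardware (dynamic) or at compile time in software (static), is what instruction-level parallelism amounts to.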

  • Task Parallelism

This is a form of parallelism in which different calculations are executed concurrently, on the same or different sets of data.
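A minimal task-parallelism sketch using the standard concurrent.futures module: two *different* calculations (mean and maximum, both arbitrary illustrative choices) run concurrently on the same data set:

```python
# Task parallelism: different computations run concurrently on the same data.
from concurrent.futures import ThreadPoolExecutor

def mean(xs):
    return sum(xs) / len(xs)

def maximum(xs):
    return max(xs)

data = [3, 1, 4, 1, 5, 9, 2, 6]

with ThreadPoolExecutor() as pool:
    f_mean = pool.submit(mean, data)     # task 1
    f_max = pool.submit(maximum, data)   # task 2: a different calculation
    results = (f_mean.result(), f_max.result())

print(results)  # (3.875, 9)
```

Contrast this with data parallelism, where the *same* computation runs on different slices of the data.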

  • Superword-Level Parallelism

It is a vectorization technique, based on loop unrolling and basic-block vectorization, that exploits the parallelism of straight-line (inline) code.
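Superword-level parallelism is a compiler optimization, so it cannot be triggered directly from Python; this hedged sketch instead mimics the transformation by hand. The four independent, isomorphic scalar adds below are exactly the pattern an SLP pass would pack into a single 4-wide SIMD add:

```python
# SLP sketch: packing isomorphic scalar statements into one vector operation.
a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]
c = [0.0] * 4

# Before SLP: a basic block of four independent scalar adds.
c[0] = a[0] + b[0]
c[1] = a[1] + b[1]
c[2] = a[2] + b[2]
c[3] = a[3] + b[3]

# After SLP (conceptually): one packed operation over all four lanes.
c_packed = [x + y for x, y in zip(a, b)]

assert c == c_packed
print(c_packed)  # [11.0, 22.0, 33.0, 44.0]
```

Unlike loop vectorization, SLP finds these packs in straight-line code, often after loop unrolling has exposed them.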

Application of Parallel Computing in Modern Day

  • Medical Imaging and Diagnosis

Over the past two decades there has been an increase in the use of parallel computing for medical imaging and diagnosis. It has been used to accelerate brain fibre tracking, vertebrae detection and segmentation in X-ray images, 3D compressive MRI reconstruction, and more.

These breakthroughs have helped with faster response times for time-critical applications.

  • Artificial Intelligence, Advanced Graphics & Virtual Reality

GPU (Graphics Processing Unit) parallel computing has enabled better graphics rendering for 2D and 3D gaming (essential with the rise of VR gaming), greater throughput for machine learning and AI, and higher-quality graphics for video editing and content creation.

  • Biomedical Engineering

Parallel computing has accelerated research, made it possible to replicate patient behaviour, and helped visualise complex biological models.

  • Weather Prediction

With data collected and processed from various weather tools such as Doppler radar, radiosondes, satellites, and ASOS (automated surface-observing systems), weather stations can deliver faster, more accurate forecasts and simulate the likely track of natural hazards and disasters.

  • Online Search Engines

Thousands of computers across the world work together to produce the results of a search online. Search engines may be the most common use of parallel computing.

  • Logistical Planning and Tracking for Transportation

Parallel computing has made granular data tracking and real-time aggregate analysis possible for logistics applications, making it easier to analyse, prioritise, and handle planning and issues.

  • Aerospace

Parallel computing is used to simulate astronomical phenomena that would take millions or billions of years to unfold, such as galaxies merging, black holes swallowing astronomical objects, and stars colliding.

  • Energy

Supercomputing is also needed in oil and gas for seismic data processing, providing a clearer picture of underground strata for drilling and extracting fossil fuels.

  • Business and Finance

Every major aspect of banking (credit scoring, risk modelling, fraud detection, etc.) is powered by GPU-accelerated fintech.

  • Pharmaceutical Design

Parallel computing has been used to run molecular dynamics simulations, which have proved very useful in drug discovery; to run simulations that help us understand how neurotransmitters work and communicate; and in genetic disease research.

Other applications of Parallel computing include:

  • Tracking, processing and storing big data
  • Economic forecasting
  • Collaborative digital workspaces

Examples of Parallel Computing

  • Smartphones

The iPhone 14 has a 6-core CPU and a 5-core Apple GPU, and the Samsung Galaxy Z Flip 4 has an 8-core CPU and an Adreno 730 GPU. Various other smartphones have processors that enable parallel processing.

  • Laptops & Desktops

Modern computers are powered by Intel Core, AMD Ryzen and Apple M-series processors. These multi-core chips are everyday examples of parallel computing hardware.

  • Illiac IV

The first massively parallel computer, built with the help of NASA and the Air Force at the University of Illinois. It had 64 FPUs and could process 131,072 bits at a time.

  • NASA’s Space Shuttle Computer System

The Space Shuttle's avionics were controlled by five IBM AP-101 computers working in parallel, processing large amounts of fast-paced real-time data; each computer could perform about 480,000 instructions per second.

  • American Summit Supercomputer

This machine, built by the U.S. Department of Energy, was the most powerful supercomputer in the world at its launch. It is a 200-petaFLOPS machine, meaning it can perform 200 quadrillion calculations per second. To put that in perspective: if the 7+ billion people on Earth each did one calculation every second, it would take them about 10 months to achieve what this supercomputer can do in a single second.

  • SETI

SETI (the Search for Extraterrestrial Intelligence) monitors millions of radio signals day and night in search of evidence of extraterrestrial life. To manage this workload, the SETI@home project used volunteer parallel computing through BOINC (Berkeley Open Infrastructure for Network Computing), with millions of people donating idle computer time to help process the signals.

  • pSIMS

A team from the University of Chicago Computation Institute started the pSIMS (Parallel System for Integrating Impact Models and Sectors) project to better understand the effects of climate and climate change over time. It has been used to create models of environments, such as forests and oceans, and to simulate the resilience of the global food system. The framework currently processes data across numerous supercomputers, clusters and cloud computing technologies.

  • Bubba

Bubba is a supercomputer created to process seismic data, enabling drillers to work difficult terrain. It is made up of thousands of Intel Xeon Phi coprocessors, cooled in chilled oil baths for exceptionally high-performance data processing.

  • Clara

Parallel computing is also popular in healthcare technology and smart hospital operations. Clara is Nvidia's AI-powered platform that enables the creation of 3D models and the automation of patient-monitoring tasks through deep learning in medicine.

  • The Internet of Things (IoT)

With over 20 billion devices and more than 50 billion sensors (soil sensors, smart cars, drones, pressure sensors, etc.), parallel computing keeps up with the avalanche of real-time telemetry data from the IoT.

Other examples of parallel computing include:

  • Bitcoin

  • Multithreading (e.g. Block-STM)

  • Parallel computing in Python (e.g. the multiprocessing module)

  • Parallel computing in R

  • MATLAB Parallel Computing Toolbox

  • Tesla GPUs

  • Ansys Fluent

Additional Resources: