Concurrency Concepts

Foundational concepts for understanding concurrent programming.

Overview

This section covers the theoretical foundations that apply across programming languages.

Topics

| Topic | Description |
|---|---|
| Fundamentals | Processes vs. threads, concurrency vs. parallelism |
| Synchronization | Coordinating access to shared resources |
| Common Problems | Race conditions, deadlocks, and how to avoid them |

Key Terminology

| Term | Definition |
|---|---|
| Process | Independent execution unit with its own memory space |
| Thread | Lightweight execution unit sharing memory with its process |
| Concurrency | Managing multiple tasks whose execution may interleave |
| Parallelism | Executing multiple tasks simultaneously |
| Race condition | A bug caused by timing-dependent access to shared state |
| Deadlock | Circular waiting in which no thread can proceed |
| Critical section | Code that accesses a shared resource and must not run in more than one thread at a time |
| Mutex | A lock ensuring only one thread accesses a resource at a time |
| Semaphore | A counting primitive allowing up to N threads to access a resource at once |
| Context switch | The OS switching execution between threads or processes |
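
Several of these terms can be made concrete in a short Python sketch (the function and variable names are illustrative): a mutex (`threading.Lock`) turns the read-modify-write on a shared counter into a critical section, which prevents a race condition.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n: int) -> None:
    global counter
    for _ in range(n):
        # The lock makes this read-modify-write a critical section:
        # only one thread at a time may execute it.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000
```

Removing the `with lock:` line reintroduces the race: `counter += 1` is not atomic, so concurrent updates can be lost and the final count may fall short of 400000.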

Mental Models

The Restaurant Analogy

Single-threaded:

- One chef does everything sequentially
- Simple, but slow for complex meals

Multi-threaded:

- One chef, multiple prep stations
- The chef moves between stations (context switching)
- Food prep overlaps

Multi-process:

- Multiple chefs, each with their own kitchen
- True parallel cooking
- The chefs must coordinate final plating

The Assembly Line

Sequential:

[Build] → [Paint] → [Test] → [Pack]
   1         1         1        1    = 4 time units

Pipelined (concurrent):

Time 1: [Build₁]
Time 2: [Paint₁] [Build₂]
Time 3: [Test₁]  [Paint₂] [Build₃]
Time 4: [Pack₁]  [Test₂]  [Paint₃] [Build₄]

Throughput: once the pipeline fills (time 4), one item completes every time unit ≈ 1 item/unit

Parallel:

Line 1: [Build₁] → [Paint₁] → [Test₁] → [Pack₁]
Line 2: [Build₂] → [Paint₂] → [Test₂] → [Pack₂]

Throughput: 2 items in 4 time units = 0.5 items/unit

Amdahl's Law

The theoretical speedup from parallelization is limited by the sequential portion.

Speedup = 1 / (S + P/N)

Where:
S = Sequential fraction (can't be parallelized)
P = Parallel fraction (1 - S)
N = Number of processors

Example:

- 90% of the code can be parallelized (P = 0.9)
- 10% must be sequential (S = 0.1)
- With 10 processors: Speedup = 1 / (0.1 + 0.9/10) ≈ 5.26x
- With 100 processors: Speedup = 1 / (0.1 + 0.9/100) ≈ 9.17x
- With ∞ processors: Speedup = 1 / 0.1 = 10x (the maximum)

Lesson: Reducing the sequential portion matters more than adding processors.
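
The arithmetic above is easy to check with a small helper (the function name is illustrative):

```python
def amdahl_speedup(s: float, n: int) -> float:
    """Speedup = 1 / (S + P/N), where P = 1 - S."""
    return 1.0 / (s + (1.0 - s) / n)

print(round(amdahl_speedup(0.1, 10), 2))    # 5.26
print(round(amdahl_speedup(0.1, 100), 2))   # 9.17
print(round(amdahl_speedup(0.1, 10**9), 2)) # approaches the 10x ceiling
```

Note how going from 10 to 100 processors (10x the hardware) buys less than a 2x improvement: the sequential 10% dominates.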

Common Patterns

Task Parallelism

Different tasks run in parallel.

Task A: [████████████]      Process 1
Task B: [████████████████]  Process 2
Task C: [████████]          Process 3
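
A minimal Python sketch of task parallelism (task names and durations are illustrative). The tasks here only sleep, i.e. they wait as if on I/O, so threads suffice even in CPython; CPU-bound tasks would need processes instead.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def task(name: str, duration: float) -> str:
    time.sleep(duration)  # stand-in for real work
    return f"{name} done"

# Three unrelated tasks run at the same time; total wall time is roughly
# the longest task (0.3s), not the sum of all three (0.6s).
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(task, n, d)
               for n, d in [("A", 0.3), ("B", 0.2), ("C", 0.1)]]
    results = [f.result() for f in futures]

print(results)  # ['A done', 'B done', 'C done']
```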

Data Parallelism

Same operation on different data.

Data: [1, 2, 3, 4, 5, 6, 7, 8]
      [1, 2] → Process 1 → [1, 4]
      [3, 4] → Process 2 → [9, 16]
      [5, 6] → Process 3 → [25, 36]
      [7, 8] → Process 4 → [49, 64]
Result: [1, 4, 9, 16, 25, 36, 49, 64]

Pipeline Parallelism

Stages process different items simultaneously.

Item 1: [Stage A] → [Stage B] → [Stage C]
Item 2:            [Stage A] → [Stage B] → [Stage C]
Item 3:                       [Stage A] → [Stage B] → [Stage C]
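
A queue-based sketch of this pattern in Python, with two illustrative stages (add one, then multiply by ten). Each stage is a thread that pulls from its input queue and pushes to its output queue, so different items occupy different stages at the same time; a sentinel object shuts the pipeline down.

```python
import queue
import threading

SENTINEL = object()

def stage(fn, inbox: queue.Queue, outbox: queue.Queue) -> None:
    # Pull items, transform them, and pass them downstream until the
    # sentinel arrives; then forward the sentinel and exit.
    while (item := inbox.get()) is not SENTINEL:
        outbox.put(fn(item))
    outbox.put(SENTINEL)

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=stage, args=(lambda x: x + 1, q1, q2)).start()   # Stage A
threading.Thread(target=stage, args=(lambda x: x * 10, q2, q3)).start()  # Stage B

for item in [1, 2, 3]:
    q1.put(item)
q1.put(SENTINEL)

results = []
while (item := q3.get()) is not SENTINEL:
    results.append(item)
print(results)  # [20, 30, 40]
```

The queues preserve order, so results come out in the same sequence the items went in.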

When Concurrency Helps

| Situation | Concurrency Benefit |
|---|---|
| Waiting for I/O | Do other work while waiting |
| User interfaces | Keep the UI responsive during computation |
| Batch processing | Process multiple items at once |
| Web servers | Handle multiple requests simultaneously |
| Real-time systems | Meet timing deadlines |
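
The I/O case is the easiest win to demonstrate. In this sketch, `fake_io` stands in for a network or disk wait; five 0.1-second waits overlap in a thread pool, so the whole batch finishes in roughly 0.1 seconds instead of 0.5.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_io(i: int) -> int:
    time.sleep(0.1)  # stand-in for a network or disk wait
    return i

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(fake_io, range(5)))
elapsed = time.perf_counter() - start

print(results)        # [0, 1, 2, 3, 4]
print(elapsed < 0.5)  # True: the five waits overlap instead of adding up
```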

When Concurrency Hurts

| Situation | Why It Hurts |
|---|---|
| Simple sequential logic | Overhead exceeds benefit |
| Heavy shared state | Synchronization kills parallelism |
| Already fast enough | Complexity without benefit |
| Debugging nightmare | Race conditions are subtle and hard to reproduce |

Next Steps

  1. Fundamentals — Deep dive into processes and threads
  2. Synchronization — How to coordinate concurrent access
  3. Problems — Common pitfalls and solutions