Threads and Synchronization
A thread is an independent path of execution. Multiple threads can run at the same time on different CPU cores, sharing the same memory.
Why Threads?
Threads let one program do several things at once. A web server can handle many requests in parallel. A game can run physics on one thread while rendering on another. The downside is that threads share memory, which makes coordinating them tricky.
Race Conditions
A race condition happens when two threads read and write the same data without coordination. The classic example is incrementing a shared counter. Each counter += 1 is really three steps: read, add, write. If two threads interleave those steps, increments get lost and the final value is wrong.
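A minimal sketch of a lost update, with the thread interleaving written out by hand (no real threads, so the outcome is deterministic):

```python
counter = 0

# Thread A executes the "read" step of counter += 1.
a_local = counter          # A reads 0

# The scheduler switches to thread B, which also reads.
b_local = counter          # B reads 0, before A has written

# Both threads add and write; B's write clobbers A's.
counter = a_local + 1      # A writes 1
counter = b_local + 1      # B writes 1 -- A's increment is lost

print(counter)             # 1, not the expected 2
```

Two increments ran, but the counter only moved by one. With real threads this interleaving happens unpredictably, which is what makes race conditions hard to reproduce.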
Locks and Mutexes
A mutex (mutual exclusion lock) ensures only one thread enters a critical section at a time. A thread acquires the lock, does its work, and releases the lock so others can proceed. Locks fix race conditions but slow things down, so keep critical sections small.
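A minimal Python sketch of the pattern, using `threading.Lock` to protect the shared counter (the thread count and iteration count are illustrative):

```python
import threading

counter = 0
lock = threading.Lock()
ITERS = 100_000

def worker():
    global counter
    for _ in range(ITERS):
        with lock:          # acquire; released automatically on exit
            counter += 1    # critical section: read, add, write

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always 400000 with the lock in place
```

The `with lock:` block keeps the critical section to a single statement, which follows the advice above: hold the lock for as little work as possible.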
Language Differences
Python has real OS threads, but the Global Interpreter Lock (GIL) means only one thread executes Python bytecode at a time, so CPU-bound code gains nothing from threads. Threads still help for I/O-bound work, because a thread releases the GIL while it waits.
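A quick sketch of why threads still pay off for I/O-bound work under the GIL: time.sleep releases the GIL (like a real network or disk wait), so four 0.2-second waits overlap instead of running back to back.

```python
import threading
import time

def fake_io():
    time.sleep(0.2)   # sleeping releases the GIL, like real I/O

start = time.perf_counter()
threads = [threading.Thread(target=fake_io) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# Four overlapping 0.2 s waits finish in roughly 0.2 s, not 0.8 s.
print(f"{elapsed:.2f}s")
```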
JavaScript is single-threaded inside one event loop. There are no race conditions on shared variables in normal code because only one task runs at a time; code between await points executes without interruption. Web Workers add real parallelism, but each worker has its own isolated memory and communicates by message passing.
C++ and Java offer real preemptive threads with shared memory. You must use a mutex or synchronized block to protect shared state.
Try It Yourself
- Remove the lock from your language's example and run it many times. Watch the final counter drift below the expected value.
- Replace the mutex with an atomic integer (std::atomic<int> in C++, AtomicInteger in Java) and compare performance.
- Write two threads where one produces numbers into a list and the other consumes them, with a lock around the list.
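The producer/consumer exercise can be sketched like this in Python (a simplified version: the producer emits a fixed run of numbers, and a done flag, also guarded by the lock, tells the consumer when to stop):

```python
import threading
import time

items = []                 # shared list, guarded by lock
lock = threading.Lock()
done = False
consumed = []              # what the consumer has taken, for inspection

def producer():
    global done
    for n in range(10):
        with lock:
            items.append(n)
    with lock:
        done = True

def consumer():
    while True:
        with lock:
            if items:
                consumed.append(items.pop(0))
            elif done:
                break
        time.sleep(0)      # yield so the producer can make progress

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()

print(consumed)  # [0, 1, ..., 9], in order
```

Polling the list in a loop works but wastes cycles; a condition variable (threading.Condition) would let the consumer sleep until the producer signals that an item is ready.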