Note: This blog was generated by Claude.

Race Conditions in Java: What Every Developer Should Know

“Concurrency bugs are the hardest bugs to find, reproduce, and fix. Understanding race conditions is the first step to writing code that doesn’t lie to you in production.”


The Core Assumption

Any thread can be paused at any point — between any two instructions — and another thread can run.

Not between methods. Not between lines. Between any two bytecode/machine instructions. The OS scheduler is ruthless and gives you zero guarantees about timing.

Take these innocent-looking lines:

lastNumber.set(i);
lastFactors.set(factors);

You see two lines. The scheduler sees a pause opportunity between them (and within each):

──── Thread A ────────────────────────────────────────
lastNumber.set(7);
        ↑ scheduler pauses Thread A RIGHT HERE
                    ──── Thread B reads ────────────────
                    lastNumber.get()  → 7      ✓
                    lastFactors.get() → [2,3]  ✗ (still old!)
        ↓ Thread A resumes
lastFactors.set([7]);   ← too late, damage done
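One way to close that window — a sketch, assuming lastNumber and lastFactors cache the most recent number and its factors — is to publish both values as a single immutable snapshot through one AtomicReference, so no thread can ever observe one updated and the other stale:

```java
import java.math.BigInteger;
import java.util.concurrent.atomic.AtomicReference;

// Immutable snapshot: both fields are assigned once, in the constructor
final class Cache {
    final BigInteger number;
    final BigInteger[] factors;
    Cache(BigInteger number, BigInteger[] factors) {
        this.number = number;
        this.factors = factors;
    }
}

class FactorCache {
    private final AtomicReference<Cache> cache = new AtomicReference<>();

    void put(BigInteger number, BigInteger[] factors) {
        // A single atomic write replaces both values together —
        // readers can never see a half-updated pair
        cache.set(new Cache(number, factors));
    }

    BigInteger[] getFactors(BigInteger number) {
        Cache c = cache.get(); // one atomic read of the whole snapshot
        return (c != null && c.number.equals(number)) ? c.factors : null;
    }
}
```

The trick is that the pair becomes one value: there is no intermediate state for the scheduler to expose.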

What Is a Race Condition?

A race condition occurs when the correctness of a program depends on the relative timing or interleaving of operations across multiple threads. When two or more threads access shared mutable state simultaneously — and at least one of them is writing — the result becomes unpredictable.

The dangerous part? Your code may look perfectly correct. It may pass every unit test. It may run fine in development. Then, under production load with multiple threads racing through the same code path, it silently produces wrong results.

The term “race” is apt: threads are literally racing to execute operations, and whoever wins the race determines the outcome — not the logic you intended.


Why Is It So Hard to Catch?

Race conditions are non-deterministic. The bug only appears when threads interleave in a specific way. That interleaving depends on:

  • CPU scheduling decisions
  • Garbage collection pauses
  • System load at that moment
  • Hardware architecture (number of cores, cache behaviour)

A test suite running sequentially will never expose it. A bug might appear once every million requests — and vanish the moment you attach a debugger, because the debugger changes thread timing.

This is what makes race conditions genuinely dangerous: they hide until the worst possible moment.


Common Types of Race Conditions

1. Read-Modify-Write

You read a value, compute something from it, then write the modified value back. If another thread reads the same value before you write, one update is silently lost.

The classic example — a shared counter:

// Shared state
private int counter = 0;

// Thread 1 and Thread 2 both call this simultaneously
public void increment() {
    counter++; // NOT atomic — this is read, add, write: 3 operations
}

What actually happens:

Thread 1: reads counter = 5
Thread 2: reads counter = 5       ← both see 5 before either writes
Thread 1: writes counter = 6
Thread 2: writes counter = 6      ← Thread 1's increment is lost

Expected: counter = 7. Actual: counter = 6. This is called a lost update.

Fix — use atomic operations:

private final AtomicInteger counter = new AtomicInteger(0);

public void increment() {
    counter.incrementAndGet(); // single atomic CAS operation
}

Or protect the compound action with a lock:

private int counter = 0;
private final Object lock = new Object();

public void increment() {
    synchronized (lock) {
        counter++;
    }
}
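A quick harness (a sketch) that hammers the AtomicInteger version from several threads; because incrementAndGet is atomic, the final count is always exact:

```java
import java.util.concurrent.atomic.AtomicInteger;

class AtomicCounterDemo {
    // Run `threads` threads, each incrementing `perThread` times,
    // and return the final count
    static int run(int threads, int perThread) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger(0);
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < perThread; i++) {
                    counter.incrementAndGet(); // atomic CAS — no lost updates
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) {
            w.join(); // wait for every increment to complete
        }
        return counter.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(8, 100_000)); // prints 800000
    }
}
```

Swap the AtomicInteger for a plain int and the result drops below 800000 on most runs — the lost updates become visible.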

2. Check-Then-Act

You check a condition and then act on the assumption that the condition is still true. But another thread can invalidate the condition between your check and your action.

Example — conditional map insertion:

// NOT thread-safe
if (!map.containsKey(key)) {
    map.put(key, value); // another thread may have inserted between these two lines
}

Both threads can pass the containsKey check simultaneously and both call put, defeating any uniqueness guarantee.

Another classic — lazy initialisation:

// NOT thread-safe
if (instance == null) {
    instance = new ExpensiveObject(); // two threads can both pass the null check
}

Fix — use atomic compound operations:

// ConcurrentHashMap provides atomic compound operations
map.putIfAbsent(key, value);

// Or for lazy init — use double-checked locking correctly
private volatile ExpensiveObject instance;

public ExpensiveObject getInstance() {
    if (instance == null) {
        synchronized (this) {
            if (instance == null) {           // second check inside the lock
                instance = new ExpensiveObject();
            }
        }
    }
    return instance;
}

Note: volatile is essential here. Without it, the write to instance can be reordered before the constructor finishes, letting another thread observe a non-null reference to a partially constructed object.
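When the singleton needs no construction parameters, a simpler alternative to double-checked locking is the initialization-on-demand holder idiom, which leans on the JVM's guarantee that class initialisation is thread-safe (a sketch, with ExpensiveObject as a stand-in):

```java
class ExpensiveObject {
    // stand-in for a costly-to-construct resource
}

class Singleton {
    private Singleton() {}

    // Holder isn't loaded until getInstance() is first called;
    // the JVM serialises class initialisation, so exactly one
    // ExpensiveObject is ever created — no volatile, no explicit locks
    private static class Holder {
        static final ExpensiveObject INSTANCE = new ExpensiveObject();
    }

    static ExpensiveObject getInstance() {
        return Holder.INSTANCE;
    }
}
```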


3. Stale Data / Visibility Race

This one doesn’t even need a compound action. A single read can return an outdated value because of CPU caching and compiler/JVM reordering. One thread writes a value, but another thread reads from its local cache and never sees the update.

Example — a stop flag:

// NOT thread-safe
private boolean running = true;

// Thread 2 calls this
public void stop() {
    running = false;
}

// Thread 1 runs this — may loop forever
public void run() {
    while (running) {
        doWork();
    }
}

Thread 1 may cache running = true in a CPU register and never check main memory again. Thread 2’s write is invisible to it.

Why Thread 1 never sees the change

1. The compiler optimizes. The compiler sees that Thread 1 never modifies running itself, so it assumes the value can’t change. It rewrites the loop roughly as:

// Compiler transforms this:
while (running) { doWork(); }

// Into something like this:
boolean cached = running; // read once
while (cached) { doWork(); } // never re-reads memory

2. The CPU has its own cache (L1/L2). Each CPU core has its own cache layer. When Thread 1 reads running, it loads the value into its core’s cache (or a register). Thread 2 runs on a different core, writes false to its cache — but that write may not propagate to Thread 1’s cache immediately.

Core 1 cache: running = true  ← Thread 1 reads this, stuck here
Core 2 cache: running = false ← Thread 2 wrote this
         RAM: running = ???   ← may not even be updated yet

3. CPU instruction reordering. Modern CPUs execute instructions out of order for performance. Even if Thread 2 writes false, there’s no guarantee Thread 1 will observe that write in any particular timeframe without a memory barrier.

Fix — use volatile:

private volatile boolean running = true;

volatile forbids these optimisations: every write to the field happens-before every subsequent read of it, so the read can’t be hoisted out of the loop and Thread 2’s write is guaranteed to become visible to Thread 1.
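The full pattern, made runnable (a sketch — doWork() replaced by a trivial counter):

```java
class StoppableWorker implements Runnable {
    // volatile: the stop() write is guaranteed visible to the loop's read
    private volatile boolean running = true;
    private long iterations = 0;

    void stop() {
        running = false;
    }

    @Override
    public void run() {
        while (running) {
            iterations++; // stand-in for doWork()
        }
    }
}
```

Without volatile, the JIT may hoist the flag read and the loop can spin forever; with it, a stop() call from any thread reliably ends the loop.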


4. Deadlock

A liveness problem rather than a data corruption issue, but closely related to concurrency. Two threads each hold a lock the other needs, so both wait forever — the application hangs.

Example:

Object lockA = new Object();
Object lockB = new Object();

// Thread 1
synchronized (lockA) {
    synchronized (lockB) { // waits for Thread 2 to release B
        // ...
    }
}

// Thread 2
synchronized (lockB) {
    synchronized (lockA) { // waits for Thread 1 to release A → DEADLOCK
        // ...
    }
}

Fix — always acquire locks in a consistent global order:

// Both threads acquire lockA before lockB — deadlock impossible
synchronized (lockA) {
    synchronized (lockB) {
        // ...
    }
}
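When the two locks aren’t known statically — say, transferring between two arbitrary accounts — a common technique (described in Java Concurrency in Practice) is to induce an order from System.identityHashCode, with a global tie-breaker lock for the rare hash collision. A sketch:

```java
class OrderedLocking {
    // Fallback lock used only when the two identity hashes collide
    private static final Object TIE_LOCK = new Object();

    // Acquire both monitors in a globally consistent order, then run action
    static void withBothLocks(Object a, Object b, Runnable action) {
        int ha = System.identityHashCode(a);
        int hb = System.identityHashCode(b);
        if (ha < hb) {
            synchronized (a) { synchronized (b) { action.run(); } }
        } else if (ha > hb) {
            synchronized (b) { synchronized (a) { action.run(); } }
        } else {
            // Hash tie: serialise through TIE_LOCK so order no longer matters
            synchronized (TIE_LOCK) {
                synchronized (a) { synchronized (b) { action.run(); } }
            }
        }
    }
}
```

Every thread now agrees on which lock comes first, so the circular wait that defines deadlock can’t form.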

Or use tryLock with a timeout from java.util.concurrent.locks:

ReentrantLock lockA = new ReentrantLock();
ReentrantLock lockB = new ReentrantLock();

// tryLock(timeout) can throw InterruptedException — the enclosing
// method must declare or handle it
if (lockA.tryLock(1, TimeUnit.SECONDS)) {
    try {
        if (lockB.tryLock(1, TimeUnit.SECONDS)) {
            try {
                // do work
            } finally {
                lockB.unlock();
            }
        } // else: couldn't get lockB — back off and retry instead of waiting forever
    } finally {
        lockA.unlock();
    }
}

5. Livelock

Similar to deadlock, but threads aren’t blocked — they keep actively responding to each other and making no progress. Like two people in a corridor repeatedly stepping in the same direction to let the other pass.

Conceptual example:

// Thread 1: keeps retrying if Thread 2 is also retrying
while (thread2IsRetrying) {
    Thread.sleep(100);
    retry();
}

// Thread 2: same logic — both loop forever responding to each other
while (thread1IsRetrying) {
    Thread.sleep(100);
    retry();
}

Fix — introduce randomised backoff so threads don’t respond in perfect lockstep:

Thread.sleep(random.nextInt(100)); // random delay breaks the symmetry
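A slightly fuller sketch of randomised (jittered) exponential backoff — attemptWork here is a hypothetical operation that returns true on success:

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.function.BooleanSupplier;

class Backoff {
    // Retry attemptWork up to maxAttempts times, sleeping a random,
    // exponentially growing interval between failures
    static boolean retryWithBackoff(BooleanSupplier attemptWork, int maxAttempts)
            throws InterruptedException {
        long baseMillis = 10;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            if (attemptWork.getAsBoolean()) {
                return true;
            }
            // Random jitter breaks the lockstep symmetry behind livelock;
            // doubling the cap each attempt spreads contending threads apart
            long cap = baseMillis << Math.min(attempt, 6);
            Thread.sleep(ThreadLocalRandom.current().nextLong(cap + 1));
        }
        return false;
    }
}
```

Two threads using this can no longer retry in perfect synchrony: their random delays drift apart, and one of them eventually gets through.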

6. Starvation

A thread is perpetually denied CPU time or lock access because other threads keep getting priority. The thread is alive but never makes progress.

Common causes:

  • Unfair locks (non-fair ReentrantLock)
  • High-priority threads monopolising CPU
  • A thread holding a lock for too long

Fix — use fair locks:

// true = fair mode — threads acquire lock in arrival order
ReentrantLock fairLock = new ReentrantLock(true);

Summary

Race Condition       Root Cause                                      Fix
Read-Modify-Write    Value changed between read and write-back       AtomicInteger, synchronized
Check-Then-Act       Condition invalidated between check and act     putIfAbsent, computeIfAbsent, synchronized
Stale Data           CPU cache / compiler reordering hides writes    volatile, synchronized
Deadlock             Circular lock dependency                        Consistent lock ordering, tryLock
Livelock             Circular active response with no progress       Randomised backoff
Starvation           Unfair resource scheduling                      Fair locks, avoid long critical sections

The Underlying Principle

Every race condition is a variation of the same mistake: assuming the world stays still between two operations. The window of time between any two statements is enough for another thread to change everything.

The solution is always to either:

  1. Make the compound action atomic — so no thread can observe intermediate state
  2. Eliminate shared mutable state — immutable objects and local variables can’t race
  3. Coordinate access — locks, volatile, or higher-level concurrent utilities

The java.util.concurrent package exists precisely to give you safe, well-tested abstractions so you don’t have to manage this yourself. Prefer ConcurrentHashMap over HashMap + synchronized. Prefer AtomicInteger over int + synchronized. Prefer ExecutorService over raw Thread.


Further Reading

  • Java Concurrency in Practice — Brian Goetz et al. (the definitive reference)
  • The Art of Multiprocessor Programming — Herlihy & Shavit (for lock-free deep dives)
  • A Critique of ANSI SQL Isolation Levels — Berenson et al. (same concepts, database context)
  • JSR-133 Java Memory Model — the formal spec behind volatile and synchronized

Understanding race conditions is not about memorising patterns — it’s about building a mental model where you always ask: “what happens if another thread runs here?” Once that instinct is second nature, your concurrent code becomes dramatically safer.