This repo explores different approaches to protecting shared mutable state in Swift, and how they behave under concurrency. It intentionally includes both correct and incorrect implementations to show real-world failure modes.
```swift
public final class UnprotectedReaderWriter<Value>: ReaderWriter {
    private nonisolated(unsafe) var mutableState: Value

    public init(value: Value) {
        self.mutableState = value
    }

    public func write<R>(_ body: (inout Value) -> R) -> R {
        body(&mutableState)
    }

    @discardableResult
    public func read() -> Value {
        mutableState
    }
}
```

- No synchronization at all: every concurrent use is a data race.
- `nonisolated(unsafe)` opts out of the compiler's checking.
- Behavior in tests:
  - `Int`:
    - Concurrent increments are racy but usually don't crash.
    - On current CPUs, aligned `Int` loads/stores tend to be atomic at the word level.
    - Result: lost updates and a wrong final count, but still valid integer values → "seems fine" but logically broken (see the sketch after this list).
  - `String` and `[Int]`:
    - Mutate complex, reference-counted, copy-on-write storage.
    - Concurrent mutations corrupt internal invariants (refcounts, buffers, lengths, capacities).
    - Result: allocator/runtime crashes or traps → crash in production :)
- Key lesson: data races are always undefined behavior, but:
  - Simple value types often "only" give you wrong answers.
  - Rich CoW/reference-counted types often crash quickly when raced.
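To make the lost-update failure mode concrete, here is a minimal sketch of the kind of stress harness that exposes it. This is illustrative caller code, not the repo's actual tests; under strict concurrency checking, capturing an unsynchronized type in a concurrent closure may also require one of the `@unchecked Sendable` variants shown next.

```swift
import Dispatch

// Illustrative harness: hammer the unprotected box from many threads at once.
let box = UnprotectedReaderWriter(value: 0)
let iterations = 100_000

DispatchQueue.concurrentPerform(iterations: iterations) { _ in
    // Unsynchronized read-modify-write: increments from different threads
    // can interleave and overwrite each other.
    box.write { $0 += 1 }
}

// Typically prints a value well below `iterations`: updates were silently lost,
// but every intermediate value was still a valid Int, so nothing crashed.
print("expected \(iterations), got \(box.read())")
```

Swapping `Int` for `String` or `[Int]` in the same kind of harness is typically where the crashes described above start to appear.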
There are also specialized `@unchecked Sendable` variants:

```swift
public final class UnprotectedIntReaderWriter: @unchecked Sendable { ... }
public final class UnprotectedArrayReaderWriter: @unchecked Sendable { ... }
public final class UnprotectedStringReaderWriter: @unchecked Sendable { ... }
```

- All three are unsafe by design.
- `@unchecked Sendable` just silences the compiler; it does not add synchronization (see the sketch below).
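As a hedged illustration of what that silencing means in practice (the type below is hypothetical, not one of the repo's classes): marking a class `@unchecked Sendable` lets it cross concurrency boundaries without diagnostics, even though nothing protects its state.

```swift
// Hypothetical type: the conformance is a promise to the compiler, not a fix.
final class UnsyncedCounter: @unchecked Sendable {
    var count = 0   // still completely unprotected
}

let counter = UnsyncedCounter()
for _ in 0..<4 {
    Task.detached {
        counter.count += 1   // compiles cleanly, races exactly as before
    }
}
```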
```swift
public final class SyncQueueReaderWriter<Value: Sendable>: ReaderWriter {
    private let queue = DispatchQueue(
        label: "SyncQueueReaderWriter"
    )
    private nonisolated(unsafe) var value: Value

    public init(value: Value) {
        self.value = value
    }

    public func write<R>(_ body: (inout Value) -> R) -> R {
        queue.sync { body(&value) }
    }

    @discardableResult
    public func read() -> Value {
        queue.sync { value }
    }
}
```

- Uses a private serial `DispatchQueue` and `queue.sync` for both reads and writes.
- All access to `value` is serialized → no data race on `value` when the API is used correctly.
- `Value: Sendable` is required; otherwise we can't guarantee that a value handed out by `read` won't participate in a data race of its own.
- Deadlock risk (classic GCD pattern):
  - Calling `queue.sync` from a block already running on the same serial queue deadlocks.
  - In this API, that means consumer bugs like:

    ```swift
    let box = SyncQueueReaderWriter<Int>(value: 0)
    box.write { value in
        value = box.read() + 1 // deadlocks: nested sync on same serial queue
    }
    ```

  - The outer `write` holds the queue; the inner `read` tries to `sync` on that queue again.
- Insight:
  - This class is safe for non-re-entrant usage.
  - It's vulnerable if clients call `read`/`write` from inside `write` closures (re-entrancy on the same queue); a safe calling pattern is sketched below.
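A minimal sketch of the safe calling pattern (caller-side code only, assuming the class above is in scope): do all the work inside a single closure instead of nesting `read` inside `write`.

```swift
let box = SyncQueueReaderWriter<Int>(value: 0)

// Safe: one trip through the serial queue, no nested `read` inside `write`.
box.write { value in value += 1 }

// The current value is already available inside the closure, so there is
// never a reason to call `box.read()` from in here; return what you need.
let newValue = box.write { value -> Int in
    value += 1
    return value
}
print(newValue) // 2
```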
```swift
public final class ConcurrentQueueReaderWriter<Value: Sendable>: Sendable {
    private let queue = DispatchQueue(
        label: "ConcurrentQueueReaderWriter",
        attributes: .concurrent,
        target: DispatchQueue.global(qos: .userInitiated)
    )
    private nonisolated(unsafe) var value: Value

    public init(value: Value) {
        self.value = value
    }

    public func write<R>(_ body: (inout Value) -> R) -> R {
        queue.sync(flags: .barrier) {
            body(&value)
        }
    }

    @discardableResult
    public func read() -> Value {
        queue.sync {
            value
        }
    }
}
```

- Uses a concurrent GCD queue:
  - `read` uses `queue.sync` ⇒ multiple readers can run concurrently.
  - `write` uses `queue.sync(flags: .barrier)` ⇒ writers are exclusive and wait for prior operations to finish.
- Semantics:
  - Read–write pattern with synchronous exclusive writers (see the usage sketch after this list):
    - Many concurrent readers.
    - Writes block the caller until they're fully applied.
    - When `write` returns, the mutation is visible to subsequent reads/writes.
- Safety:
  - Internally race-free with respect to `value` as long as all access goes through `read`/`write`.
  - `Value: Sendable` is enforced; the wrapper itself relies on manual synchronization.
- Deadlock risk:
  - External callers doing `read`/`write` from other queues/threads are fine.
  - As with the serial version, re-entrant usage (e.g. calling `write` from a closure already running on this same queue) can create self-deadlock scenarios; the abstraction assumes callers don't do this.
- Caveat for both queue-based variants:
  - Returning `Value` directly means reference-like payloads can be mutated off-queue, which can reintroduce races the wrapper can't prevent.
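A usage sketch under the semantics above (illustrative caller code, not from the repo's tests): reads overlap freely, while barrier writes serialize every mutation.

```swift
import Dispatch

let box = ConcurrentQueueReaderWriter(value: [Int]())

DispatchQueue.concurrentPerform(iterations: 1_000) { i in
    if i.isMultiple(of: 10) {
        // Barrier: waits for in-flight reads, then runs alone on the queue.
        box.write { $0.append(i) }
    } else {
        // Plain sync: may overlap with other reads.
        box.read()
    }
}

print(box.read().count) // 100: no lost appends, no corrupted CoW storage
```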
```swift
import os

public struct LockedReaderWriter<Value>: Sendable {
    // Uses os_unfair_lock under the hood. It's not a recursive lock:
    // attempting to lock it again from the same thread while it's already held will crash.
    let value: OSAllocatedUnfairLock<Value>

    public init(value: Value) {
        self.value = .init(uncheckedState: value)
    }

    public func write<R>(_ body: (inout Value) throws -> R) rethrows -> R {
        try value.withLockUnchecked(body)
    }

    public func read<R>(_ body: (Value) throws -> R) rethrows -> R {
        try value.withLockUnchecked { try body($0) }
    }
}
```

- Wraps `OSAllocatedUnfairLock<Value>` (which uses `os_unfair_lock`).
- Mutual exclusion:
  - `write` and `read` both run under the same unfair lock.
  - No concurrent access to `value` through these APIs ⇒ no data race on `value`.
- Non-recursive:
  - `os_unfair_lock` is not recursive; re-locking it on the same thread while already held is undefined / a crash.
  - That means nested `write`/`read` on the same instance (on the same thread) are invalid (a safe calling pattern is sketched below):

    ```swift
    rw.write { value in
        rw.read { v in ... } // UB: same thread, same lock, re-entrant
    }
    ```
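Caller-side sketch (hypothetical usage, not taken from the repo): keep each lock acquisition flat and never nest `read`/`write` on the same instance.

```swift
let counter = LockedReaderWriter(value: 0)

// Fine: each call takes and releases the unfair lock exactly once.
counter.write { $0 += 1 }
let snapshot = counter.read { $0 }

// Invalid: would re-lock the same non-recursive lock on the current thread.
// counter.write { _ in counter.read { $0 } }

print(snapshot) // 1
```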
```swift
public actor ActorReaderWriter<Value> {
    private var value: Value

    public init(value: Value) {
        self.value = value
    }

    public func write<R>(_ body: (inout Value) -> R) -> R {
        body(&value)
    }

    public func read() -> Value {
        value
    }
}
```

- Uses a Swift actor instead of explicit locks or queues.
- Actor guarantees:
  - Only one task accesses `value` at a time.
  - No explicit synchronization required; the runtime enforces isolation.
- `write` and `read` are synchronous (non-async) methods:
  - They execute atomically inside the actor; there is no suspension inside them.
  - No lock-style deadlocks or GCD self-`sync` issues (see the usage sketch after this list).
- Re-entrancy:
  - You can't simply `await` the same actor from inside its isolated context; the compiler catches many problematic patterns that would be easy to write with locks/queues.
- Same external caveat: if you return a reference-like `Value` and mutate it off-actor, you can race outside the actor's control.
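A usage sketch (hypothetical caller code, assuming the non-capturing closure can be sent into the actor under strict concurrency): callers hop onto the actor with `await`, and increments never interleave.

```swift
let box = ActorReaderWriter(value: 0)

await withTaskGroup(of: Void.self) { group in
    for _ in 0..<1_000 {
        group.addTask {
            // Each mutation runs isolated on the actor, one at a time.
            await box.write { $0 += 1 }
        }
    }
}

print(await box.read()) // 1000: no lost updates, no locks, no queues
```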
- Data races on simple values (`Int`) often "just" produce wrong results:
  - Still UB in Swift's memory model.
  - But operations usually stay within valid value ranges, so crashes are rare.
- Data races on complex CoW / reference-counted types (`String`, `[Int]`) tend to crash:
  - They manipulate shared heap buffers, refcounts, and metadata.
  - Races corrupt those invariants, triggering use-after-free or allocator/runtime failures.
- Lock and queue abstractions are easy to get mostly right but still fragile:
  - Safe only if users:
    - Don't re-enter `read`/`write` from inside their closures on the same queue/lock.
    - Don't leak mutable references and then mutate them unsafely elsewhere.
  - Useful when users:
    - Don't want to, or can't, introduce async/await in their piece of the codebase.
- Actors are the safest high-level abstraction:
  - Isolation and `Sendable` checking are enforced by the compiler/runtime.
  - Many patterns that deadlock or crash with manual locking become compile-time errors.
  - But they require an async context, which introduces another level of complexity.
This project serves as a catalog of how different synchronization strategies behave in Swift, from completely unsafe to fully actor-safe. It illustrates how subtle the distinction is between "works most of the time" and "actually correct."