16 Years of iOS Concurrency: From Dispatch Queues to async/await
I’ve been writing concurrent iOS code since before GCD was called GCD. The patterns changed every few years — manual threads, dispatch queues, NSOperation, Combine, and now structured concurrency. The sharp edges never went away. They just moved somewhere else.
Here’s what I learned shipping concurrent code across three eras, what patterns survived, and where the new stuff actually helps.
The dispatch queue era: 2010–2018
Every iOS developer of a certain age has written this:
DispatchQueue.global(qos: .userInitiated).async {
let data = heavyComputation()
DispatchQueue.main.async {
self.resultLabel.text = "\(data)"
}
}
This was the default pattern for a decade. It worked. It also produced an entire generation of bugs that looked exactly the same: weak self was forgotten, a completion handler never fired, or a serial queue deadlocked against itself.
What actually worked:
Serial queues as synchronization primitives. Before actors existed, a private serial queue with a sync block was the safest way to guard shared mutable state:
private let queue = DispatchQueue(label: "com.app.state")
private var _items: [Item] = []
func addItem(_ item: Item) {
queue.sync { _items.append(item) }
}
No locks, no semaphores. The queue itself guarantees mutual exclusion. I shipped this pattern in production across multiple apps and it never broke.
What didn’t:
DispatchGroup with manual completion handlers. The pyramid of closures, the forgotten leave() calls, the edge case where the group completes on a background thread and you update UI without switching to main. Every codebase I’ve touched had at least one DispatchGroup bug.
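For contrast, here is a minimal sketch of the pattern done correctly. The loader and its string results are hypothetical stand-ins for real network calls; the comments mark exactly the spots where the classic bugs lived:

```swift
import Dispatch
import Foundation

// Hypothetical loader: two concurrent jobs joined by a DispatchGroup.
// Every enter() must be balanced by exactly one leave() on every path.
func loadBoth(completion: @escaping ([String]) -> Void) {
    let group = DispatchGroup()
    let lock = NSLock()        // results is touched from two queues
    var results: [String] = []

    group.enter()
    DispatchQueue.global().async {
        lock.lock(); results.append("user"); lock.unlock()
        group.leave()          // forget this and notify never fires
    }

    group.enter()
    DispatchQueue.global().async {
        lock.lock(); results.append("prefs"); lock.unlock()
        group.leave()
    }

    // In an app this should usually be .main; notifying on a background
    // queue and then touching UI was exactly the bug described above.
    group.notify(queue: .global()) { completion(results) }
}
```

Balancing enter() and leave() by hand on every error path is the part that rotted in real codebases.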
NSOperationQueue with dependencies. The idea was elegant — define operations, chain them with addDependency, let the queue handle ordering. In practice, cancelling a mid-chain operation or debugging why one never started was a time sink every single time.
let op1 = BlockOperation { /* parse JSON */ }
let op2 = BlockOperation { /* save to database */ }
op2.addDependency(op1)
queue.addOperations([op1, op2], waitUntilFinished: false)
Clean on paper. Brittle under real-world conditions like network errors, backgrounding, or a user navigating away mid-operation.
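The brittleness is easy to demonstrate: cancellation does not flow through addDependency. A small sketch (the log array is instrumentation for the example, not part of the pattern):

```swift
import Foundation

let queue = OperationQueue()
let lock = NSLock()
var log: [String] = []

let op1 = BlockOperation { lock.lock(); log.append("parse"); lock.unlock() }
let op2 = BlockOperation { lock.lock(); log.append("save"); lock.unlock() }
op2.addDependency(op1)

// Cancel op1 before it runs. Dependencies only order execution; they
// don't propagate cancellation, so op2's dependency is "satisfied" the
// moment the cancelled op1 finishes without ever executing its block.
op1.cancel()
queue.addOperations([op1, op2], waitUntilFinished: true)
// op2 still ran, "saving" data that was never parsed.
```

Each operation has to check its predecessors' state itself, which is the boilerplate the elegant diagram never showed.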
The Combine interlude: 2019–2022
Combine arrived with iOS 13 and promised to unify async code under one reactive paradigm. For UI binding and chaining network calls, it was genuinely good:
$searchText
.debounce(for: .seconds(0.3), scheduler: RunLoop.main)
.removeDuplicates()
.flatMap { api.search($0) }
.receive(on: DispatchQueue.main)
.sink { result in /* update UI */ }
A search bar with debounce, dedup, and async fetching — in four operators. The GCD equivalent was 40 lines of nested callbacks, timers, and cancellation flags.
But Combine overstayed its welcome when people started using it for things that didn’t need to be reactive. A simple button tap that fires a network request shouldn’t be a 4-operator pipeline. PassthroughSubject became the new delegate — implicit, untraceable, scattered across files.
The other problem: cancellables. Every subscription returned an AnyCancellable that had to be stored somewhere. Forgot to keep it? Your pipeline silently stops. Stored it in the wrong object? Memory leak. Set<AnyCancellable> became the new [weak self] — boilerplate everyone copied without understanding.
Combine wasn’t wrong. It was just asking every problem to be a reactive nail, and not every problem was.
Structured concurrency: 2022–present
async/await landed in Swift 5.5 and immediately solved the biggest pain point: the callback pyramid.
// Before: 17 lines
func loadProfile(completion: @escaping (Result<Profile, Error>) -> Void) {
fetchUser { userResult in
switch userResult {
case .success(let user):
fetchPreferences(user.id) { prefsResult in
switch prefsResult {
case .success(let prefs):
completion(.success(Profile(user: user, prefs: prefs)))
case .failure(let error):
completion(.failure(error))
}
}
case .failure(let error):
completion(.failure(error))
}
}
}
// After: 5 lines
func loadProfile() async throws -> Profile {
let user = try await fetchUser()
let prefs = try await fetchPreferences(user.id)
return Profile(user: user, prefs: prefs)
}
That’s not cosmetic. Every line removed is a branch where a completion handler could have been dropped, a capture list forgotten, or an error swallowed. The compiler now enforces what code review used to catch.
Actors replace serial queues
The private serial queue pattern from the GCD era maps directly to actors:
// After: the serial queue + sync pattern, rewritten as an actor
actor ItemStore {
private var items: [Item] = []
func add(_ item: Item) {
items.append(item)
}
func getAll() -> [Item] {
items
}
}
The compiler guarantees mutual exclusion. No queue, no sync {}, no way to accidentally read items from outside the actor. I migrated a state manager from a serial queue to an actor in about ten minutes — the API stayed the same, the implementation got simpler, and the compiler now catches what the queue pattern silently allowed.
@MainActor kills a class of crashes
Updating UI from a background thread has been the number one concurrency crash in iOS since UIKit existed. @MainActor doesn’t make it impossible, but it makes it a compile-time error instead of a runtime purple flash:
@MainActor
class ViewModel: ObservableObject {
@Published var items: [Item] = []
func load() async {
let fetched = await api.fetchItems() // suspends, runs on background
items = fetched // compiler knows this must be on MainActor
}
}
Before Swift 5.5, this would have been another DispatchQueue.main.async { self?.items = ... } that someone forgot. Now the compiler catches it.
The new sharp edges
Structured concurrency didn’t eliminate foot-guns. It just moved them.
Actor reentrancy
The biggest surprise for most people: actors suspend at await. While suspended, another call can enter the same actor. If that call modifies state, the first call resumes with a different world than it left:
actor Cache {
private var entries: [String: Data] = [:]
func fetch(_ key: String) async -> Data? {
if let cached = entries[key] {
return cached
}
let data = try? await download(key) // ← suspension point
entries[key] = data // another call may have written here
return data
}
}
If two calls hit fetch("profile") simultaneously, both see the cache miss, both download, and both write. The second write overwrites the first. No compiler warning because each operation is individually safe — the problem is the interleaving across suspension points.
The fix is to re-check state after every await:
if let data = try? await download(key) {
    if entries[key] == nil { // re-check: another call may have won the race
        entries[key] = data
    }
}
return entries[key]
The compiler won't help you here: any invariant you checked before an await has to be re-checked after it. You have to know reentrancy exists to guard against it.
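A more robust fix than re-checking after every await is to dedupe the work itself: store the in-flight Task so concurrent callers share one download. This is a common community pattern rather than anything blessed by the language; download(_:) and downloadCount here are hypothetical stand-ins for a real fetch:

```swift
import Foundation

// Remember the in-flight Task so concurrent callers await one shared
// download instead of racing past the same cache miss.
actor DedupCache {
    private var entries: [String: Data] = [:]
    private var inFlight: [String: Task<Data, Never>] = [:]
    private(set) var downloadCount = 0   // instrumentation for the example

    func fetch(_ key: String) async -> Data {
        if let cached = entries[key] { return cached }
        if let running = inFlight[key] {
            return await running.value   // join the download already running
        }
        let task = Task { await self.download(key) }
        inFlight[key] = task             // recorded before any suspension point
        let data = await task.value
        entries[key] = data
        inFlight[key] = nil
        return data
    }

    private func download(_ key: String) async -> Data {
        downloadCount += 1
        try? await Task.sleep(nanoseconds: 10_000_000)  // simulate latency
        return Data(key.utf8)
    }
}
```

The key detail: inFlight is written before the first await, so a reentrant call sees it and joins rather than starting a second download.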
Task.detached and why you don’t want it
Task.detached creates a task that doesn’t inherit the parent’s priority, task-local values, or actor context. It’s almost never what you want:
@MainActor
func refresh() {
Task {
// This inherits @MainActor — UI updates are safe
let data = await fetch()
updateUI(data)
}
Task.detached {
// This does NOT inherit @MainActor — actor context is lost
// Priority is not inherited from the parent
// Task-local values (like database connections) are gone
}
}
The rare legitimate use case is a fire-and-forget background job that genuinely must not inherit the caller's actor context, priority, or task-local values.
Sendable warnings
Swift 5.7+ emits warnings when non-Sendable types cross concurrency boundaries. Most of the time, the warning is correct and you should either mark the type Sendable or use @unchecked Sendable with a comment explaining why.
But the noise-to-signal ratio is high at first. A UIView subclass with stored properties will generate warnings because UIView isn’t Sendable. The fix is usually @MainActor on the class — not @unchecked Sendable.
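A UIKit-free sketch of the same situation. Tracker is a hypothetical class with unisolated mutable state; under strict checking, sending an instance across a Task boundary warns, and the cure is isolation, not @unchecked Sendable:

```swift
// Hypothetical type: mutable state, no isolation. Capturing an instance
// in a Task is what triggers the non-Sendable warning under strict
// concurrency checking.
class Tracker {
    var events: [String] = []
    func record(_ e: String) { events.append(e) }
}

// The fix that mirrors the UIView case: pin the type to an actor so the
// compiler knows every access happens in one isolation domain.
@MainActor
final class MainTracker {
    var events: [String] = []
    func record(_ e: String) { events.append(e) }
}
```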
Swift 6 and complete data isolation
Swift 6 flipped the switch from warnings to errors. Enable SWIFT_STRICT_CONCURRENCY = complete and the compiler enforces full data isolation — every value that crosses a concurrency domain must be Sendable, and every actor-isolated property must be accessed in the right context.
This is the biggest language-level shift since Swift 3. Entire codebases that compiled cleanly under Swift 5 now light up with hundreds of errors. The compiler isn’t being pedantic — it’s catching real data races that were silently shipping.
What breaks first:
Any shared mutable state that was never properly isolated. A global variable that multiple threads touch. A delegate callback that fires on a random queue and mutates a view controller property. A closure captured by an escaping block that crosses actor boundaries without @Sendable.
The most common pattern I’ve seen break is the lazy singleton with internal mutable state:
class ImageCache {
static let shared = ImageCache()
private var storage: [URL: UIImage] = [:]
func image(for url: URL) -> UIImage? {
storage[url] // accessed from any queue, no isolation
}
}
Under Swift 6, this doesn’t compile until you either protect storage with an actor, mark the class @MainActor, or use a lock. The fix that hurts least:
@MainActor
class ImageCache {
static let shared = ImageCache()
private var storage: [URL: UIImage] = [:]
func image(for url: URL) -> UIImage? {
storage[url]
}
}
Callers now get warnings if they access the cache from a background context — which is exactly what you want.
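The actor route mentioned above looks like this, sketched with Data standing in for UIImage so the example stays UIKit-free. Callers trade synchronous access for an await, but background code no longer has to hop to the main thread just to read the cache:

```swift
import Foundation

// Actor alternative to @MainActor: storage is isolated to the actor's
// own executor, so every access becomes asynchronous but no caller is
// forced onto the main thread.
actor ImageDataCache {
    private var storage: [URL: Data] = [:]

    func data(for url: URL) -> Data? {
        storage[url]
    }

    func store(_ data: Data, for url: URL) {
        storage[url] = data
    }
}
```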
The migration playbook:
- Enable SWIFT_STRICT_CONCURRENCY = minimal first. Fix the warnings one module at a time.
- Annotate UI classes with @MainActor — this clears the majority of warnings.
- Convert shared mutable state to actors or @MainActor classes.
- Mark model types as Sendable (structs and enums with Sendable stored properties get it automatically).
- Only then flip to complete.
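The Sendable step in practice looks like this (the types are illustrative):

```swift
// Value types whose stored properties are all Sendable get the
// conformance implicitly; declaring it explicitly turns a future
// non-Sendable property into a compile error instead of a silent loss
// of the guarantee.
struct User: Sendable {
    let id: Int
    let name: String
}

enum LoadState: Sendable {
    case idle
    case loaded([User])   // [User] is Sendable because User is
}
```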
This isn’t a weekend migration for a large codebase. But every warning you fix is a potential data race you won’t debug at 3 AM. The compiler is doing what code review can’t — tracing every value across every concurrency boundary and asking “is this safe?”
Where it’s going:
Swift 6’s data isolation model is the endgame the language has been building toward since async/await landed in 5.5. The transition is painful, but the destination is worth it: a world where the compiler guarantees your concurrent code is free of low-level data races. Not “probably fine.” Guaranteed.
Among mainstream languages, only Rust makes a comparable claim.
Testing async code
XCTestExpectation was always awkward:
let exp = expectation(description: "load")
viewModel.load { exp.fulfill() } // completion-handler era API
wait(for: [exp], timeout: 5)
The new Swift Testing framework makes this better with native await support:
@Test func loadReturnsItems() async throws {
let vm = ViewModel()
await vm.load()
#expect(vm.items.count > 0)
}
One assertion, no expectation boilerplate. The test reads like regular code. This alone makes migration from XCTest worth considering.
What I’d tell someone starting today
Concurrency hasn’t gotten easier. The patterns got better, but the mental model got more complex — and Swift 6 just raised the bar further with strict concurrency checking. A junior developer writing async/await can be productive faster than someone learning GCD in 2014. But when something goes wrong — and it will — debugging a suspension point, an actor reentrancy bug, or a Sendable violation requires understanding what the compiler is doing under the hood.