DispatchQueue in iOS

Abhishek Singh
7 min read · Jan 18, 2022

Hi Everyone,

This is the second article in our Introduction to Concurrency in iOS series, and it focuses on DispatchQueues and how to use them. In the previous article of this series, we learned that a queue can be one of two types, i.e. a serial queue or a concurrent queue, and we covered their basics.

Based on those fundamentals, can we guess which one to use when? That is, for what kind of problem should we use a serial queue, and for what kind a concurrent queue? If we paid enough attention to the last article, the answer is simple.

Whenever the tasks in a system depend on each other, we should run them on a serial queue; if the tasks are independent, we can run them concurrently.

What are DispatchQueues?

A DispatchQueue is a queue in the classic sense: it follows the first-in-first-out (FIFO) rule. Tasks on a DispatchQueue may or may not complete in FIFO order, but they always start executing in FIFO order.
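For instance, if we push a few tasks onto a concurrent queue, they are dequeued in the order they were submitted, but they may finish in any order. A minimal sketch (the label is a placeholder):

import Foundation

// Tasks are dequeued in FIFO order, but on a concurrent queue they may finish in any order.
let demoQueue = DispatchQueue(label: "com.example.demo", attributes: .concurrent)

for i in 1...3 {
    demoQueue.async {
        print("Task \(i) started")
        sleep(UInt32(4 - i))          // later tasks sleep less...
        print("Task \(i) finished")   // ...so they may well finish first
    }
}

sleep(5) // keep the playground alive long enough to see all the output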

Types of DispatchQueues

There are three types of DispatchQueue as below:

  1. Main Queue: a serial queue created by the system and bound to the application’s main thread.
  2. Global Queue: concurrent queues shared by the system.
  3. Custom Queue: queues we create ourselves, which can be either serial or concurrent.
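In code, the three kinds of queue are obtained roughly like this (the custom labels are placeholders):

import Foundation

// 1. Main queue: a serial queue bound to the main thread
let mainQueue = DispatchQueue.main

// 2. Global queue: a system-provided concurrent queue
let globalQueue = DispatchQueue.global()

// 3. Custom queues: serial by default, concurrent when asked for
let customSerial = DispatchQueue(label: "com.example.serial")
let customConcurrent = DispatchQueue(label: "com.example.concurrent", attributes: .concurrent)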

Global and custom queues also support a quality of service (QoS). iOS provides several QoS classes, i.e. UserInteractive, UserInitiated, Utility, Default and Background, and we choose one based on the importance and urgency of the task.
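For example, a QoS class can be requested when fetching a global queue or when creating a custom one (the labels here are placeholders):

import Foundation

// Global queue with an explicit QoS class
let backgroundQueue = DispatchQueue.global(qos: .background)

// Custom serial queue with a QoS hint
let userInitiatedQueue = DispatchQueue(label: "com.example.userInitiated", qos: .userInitiated)

// QoS can also be combined with the concurrent attribute
let utilityConcurrent = DispatchQueue(label: "com.example.utility",
                                      qos: .utility,
                                      attributes: .concurrent)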

Let’s discuss custom queues in more detail: how to create one, and when to use a serial queue versus a concurrent queue.

Let’s take a task to perform in a mobile application and decide between a serial and a concurrent queue based on it. The task consists of the following steps:

  1. Call an API.
  2. Once the response is received from the API, save it to the local database.
  3. After the response has been saved to the local database, notify the user that the task is complete.

Let’s first use a serial queue for the above task:

import Foundation

let serialQueue = DispatchQueue(label: "com.serialQueue.Abhishek")

// Call the API
func callAPIToGetResponse(completionBlock: @escaping (String) -> Void) {
    sleep(2)
    print("API response received")
    completionBlock("Desired response")
}

// Save the response to the local DB
func saveResponseToDB(completion: @escaping (String) -> Void) {
    sleep(1)
    print("Response saved to local db")
    completion("Desired response")
}

// Notify the user once the data is saved
var notifyUser: (String) -> Void = { res in
    print("User data with: \(res) is saved")
}

serialQueue.async {
    callAPIToGetResponse(completionBlock: { response in
        saveResponseToDB(completion: notifyUser)
    })
}

Executing the above code gives us the right answer without a doubt. The result is:

API response received
Response saved to local db
User data with: Desired response is saved

But if the same use case were run in a concurrent environment, the result would be very different.

Now, if we choose to use a concurrent queue for the above tasks, the execution order becomes unpredictable and the behaviour is not what we want.

Let’s try this in the playground:

import Foundation

// Call the API
func callAPIToGetResponse() {
    sleep(2)
    print("call api to get the response")
}

// Save the response to the local DB
func saveTheResponseToDB() {
    sleep(1)
    print("Response saved")
}

// Notify the user about the completion of the task
func notifyUser() {
    sleep(1)
    print("All task done")
}

let concurrentQueue = DispatchQueue(label: "com.concurrent", attributes: .concurrent)

concurrentQueue.async {
    callAPIToGetResponse()
}
concurrentQueue.async {
    saveTheResponseToDB()
}
concurrentQueue.async {
    notifyUser()
}

I have intentionally added the sleep calls. If I hadn’t used sleep, the tasks would likely have appeared to execute sequentially. This shows that even a concurrent queue still follows the basic property of a queue, first in, first out: tasks start in the order they were enqueued. But once a task takes a little longer to finish, time slicing kicks in and threads spend varying amounts of time on the concurrent tasks. So every item starts based on its position in the queue, but its completion depends entirely on the OS scheduler and the length of the task.

One more thing: if I had used sync instead of async, the tasks would also have ended up executing sequentially, because each sync call blocks until its task finishes.

Now, after running the above code, we get a response like this:

Response saved
All task done
call api to get the response

The output tells the whole story: we would never want to save a response to the database before we have actually received one, let alone notify the user first. This is a straightforward example of when sequential execution is required.

Now, there might be a scenario where I have more than 10 APIs whose data needs to be saved, but the order of their responses doesn’t matter. In this case, all the APIs can be called concurrently, while the remaining operations, saving the data to the DB and notifying the user, are done sequentially.

Let's try the same in the playground:

import Foundation

let serialQueue = DispatchQueue(label: "com.serialQueue.Abhishek")
let concurrentQueue = DispatchQueue(label: "com.concurrent", attributes: .concurrent)

// Call the API
func callAPIToGetResponse(apiIndex: Int, completionBlock: @escaping (Int) -> Void) {
    sleep(2)
    print("API with ID: \(apiIndex) called")
    completionBlock(apiIndex)
}

// Save the response to the local DB
func saveResponseToDB(index: Int, completion: @escaping (Int) -> Void) {
    sleep(1)
    print("Response saved \(index)")
    completion(index)
}

// Notify the user once the data is saved
var notifyUser: (Int) -> Void = { index in
    print("User data with index: \(index) is saved")
}

// This is the concurrent part, where we call the APIs.
for i in 0..<5 {
    concurrentQueue.async {
        callAPIToGetResponse(apiIndex: i, completionBlock: { response in
            // The rest of the work stays sequential on the serial queue.
            serialQueue.async {
                saveResponseToDB(index: i, completion: notifyUser)
            }
        })
    }
}

In the above code snippet, we have 5 tasks that call 5 APIs (hypothetically). Once we receive a response from any API, we call another method, “saveResponseToDB”, to save the response, followed by a callback to notify the user. The output looks like this:

API with ID: 2 called
API with ID: 0 called
API with ID: 1 called
API with ID: 3 called
API with ID: 4 called
Response saved 2
User data with index: 2 is saved
Response saved 0
User data with index: 0 is saved
Response saved 1
User data with index: 1 is saved
Response saved 3
User data with index: 3 is saved
Response saved 4
User data with index: 4 is saved

The above output shows that the API calls are concurrent and arrive in random order, while the rest of the work needed to be sequential, so we created a separate serial dispatch queue and used it to call the “saveResponseToDB” method.

Let’s now understand the difference between the “sync” and “async” methods:

sync method: This method blocks the calling thread while the submitted task executes on the queue. Only once the task has completed is the calling thread allowed to execute the next statement.

async method: This method returns immediately so the calling thread can continue with the next statement, while the submitted task is executed later on the queue it was dispatched to.
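A small sketch of the difference on a custom serial queue (the label is a placeholder):

import Foundation

let queue = DispatchQueue(label: "com.example.syncVsAsync")

// async: returns immediately, the task runs later on the queue
queue.async {
    sleep(1)
    print("async task done")
}
print("statement after async runs without waiting")

// sync: blocks the calling thread until the task has finished
queue.sync {
    sleep(1)
    print("sync task done")
}
print("statement after sync runs only after the task")

// Note: never call sync on the queue you are currently running on
// (e.g. DispatchQueue.main.sync from the main thread), as that deadlocks.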

In our last article we discussed avoiding race conditions by using DispatchQueue. Let’s reproduce one of the race-condition scenarios below:

import Foundation

let concurrentQueueRace = DispatchQueue(label: "com.concurrent.queue", attributes: .concurrent)

class TestConcurrency {
    var store: [Int] = [1, 2, 3, 4, 5, 6, 7, 8, 9]

    func removeFirstObject(callingMethod: String) {
        print("Before operation of \(callingMethod) our array looked like: \(store)")
        if store.first == 1 {
            sleep(1)
            print("\(callingMethod) has removed first index \(store.removeFirst())")
            print("After operation of \(callingMethod) our array looked like: \(store)")
        } else {
            print("\(callingMethod) cannot remove first index")
        }
    }
}

let testConcurrency = TestConcurrency()

concurrentQueueRace.async {
    testConcurrency.removeFirstObject(callingMethod: "firstQueue")
}
concurrentQueueRace.async {
    testConcurrency.removeFirstObject(callingMethod: "secondQueue")
}

As shown in the above example, there is a method, removeFirstObject(callingMethod: String), whose job is to remove the first element of the array held by the class.

Calling this concurrently gives different, seemingly random results every time we run it. Here is one of the outputs from running the above code in the playground:

Before operation of firstQueue our array looked like: [1, 2, 3, 4, 5, 6, 7, 8, 9]
Before operation of secondQueue our array looked like: [1, 2, 3, 4, 5, 6, 7, 8, 9]
secondQueue has removed first index 1
firstQueue has removed first index 1
After operation of secondQueue our array looked like: [2, 4, 5, 6, 7, 8, 9]
After operation of firstQueue our array looked like: [2, 4, 5, 6, 7, 8, 9]

Based on the output, if “secondQueue” had already removed the first element, then “firstQueue” should have hit the else branch of the function. Instead, both succeeded in removing an element from the array. This is a race condition. Now, let’s see how we can avoid it with the example below:

import Foundation

let concurrentQueueRace = DispatchQueue(label: "com.concurrent.queue", attributes: .concurrent)
let lockQueue = DispatchQueue(label: "com.lock.queue", attributes: .concurrent)

class TestConcurrency {
    var store: [Int] = [1, 2, 3, 4, 5, 6, 7, 8, 9]

    func removeFirstObject(callingMethod: String) {
        print("Before operation of \(callingMethod) our array looked like: \(store)")
        lockQueue.async(flags: .barrier) { [weak self] in
            guard let weakSelf = self else { return }
            if weakSelf.store.first == 1 {
                sleep(1)
                print("\(callingMethod) has removed first index \(weakSelf.store.removeFirst())")
                print("After operation of \(callingMethod) our array looked like: \(weakSelf.store)")
            } else {
                print("\(callingMethod) cannot remove first index")
            }
        }
    }
}

let testConcurrency = TestConcurrency()

concurrentQueueRace.async {
    testConcurrency.removeFirstObject(callingMethod: "firstQueue")
}
concurrentQueueRace.async {
    testConcurrency.removeFirstObject(callingMethod: "secondQueue")
}

In the above example, we have introduced another queue and dispatch the mutation with the barrier flag. A barrier block effectively locks the resource for the duration of the operation, so at any given time only a single task can mutate the value. Only once the task currently holding the barrier has finished its work is the next task allowed to touch the shared resource. Below is the output:

Before operation of firstQueue our array looked like: [1, 2, 3, 4, 5, 6, 7, 8, 9]
Before operation of secondQueue our array looked like: [1, 2, 3, 4, 5, 6, 7, 8, 9]
firstQueue has removed first index 1
After operation of firstQueue our array looked like: [2, 3, 4, 5, 6, 7, 8, 9]
secondQueue cannot remove first index

This behaves exactly as we expected, so we can say that we have successfully avoided the race condition in our code.
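As a side note, the same barrier idea is often used to build a small thread-safe wrapper around a value: writes are dispatched with async(flags: .barrier) and reads with sync, so reads can overlap while writes stay exclusive. A minimal sketch, with illustrative names that are not from the examples above:

import Foundation

final class ThreadSafeStore {
    private var store: [Int] = []
    private let queue = DispatchQueue(label: "com.example.store", attributes: .concurrent)

    // Writes take a barrier slot, so they run exclusively
    func append(_ value: Int) {
        queue.async(flags: .barrier) { [weak self] in
            self?.store.append(value)
        }
    }

    // Reads use sync and can safely overlap with each other
    var values: [Int] {
        queue.sync { store }
    }
}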

This is all for this article. I’ll see you in our next blog😊.
