Multithreaded programming is an indispensable part of everyday development work. The multithreading technologies most commonly used in iOS development are GCD, NSOperation, and NSThread. This article, part of a series on multithreading, explains what GCD is and how to use it.

1. GCD Overview

GCD is hardly a stranger to iOS developers: we reach for it all the time to handle multithreaded work. So what exactly is GCD?

Grand Central Dispatch (GCD) is Apple's relatively recent solution for multicore programming. It is mainly used to optimize applications for multi-core processors and other symmetric multiprocessing systems, and it runs concurrent tasks on top of a thread-pool model. It was first introduced in Mac OS X 10.6 Snow Leopard and is available on iOS 4 and later.

GCD has some very clear advantages, and they are what give it such an important role in multithreaded programming:

  1. GCD is Apple's solution for parallel computation on multi-core hardware.
  2. GCD makes good use of the available CPU cores.
  3. GCD does not require developers to manage thread lifecycles.
  4. It is easy to use: you only tell GCD what task to execute, and you never write any thread-management code.

2. GCD Tasks and Queues

Many junior developers have only a fuzzy picture of the relationship between GCD tasks and queues; in fact, a queue is simply a container that holds tasks. To understand GCD well, it is worth first getting the concepts of tasks and queues straight.

2.1 GCD Tasks

A task is an operation to be performed: the piece of code, placed in a block, that GCD runs on some thread. A task can be executed in one of two ways, synchronously or asynchronously. The main differences between the two are whether the call waits for the queued work to finish, and whether it is able to open a new thread.

  • Synchronous execution (sync): the task is added to the queue synchronously, and the call keeps waiting until the work ahead of it in the queue (and the task itself) has finished; a synchronous task can only run on the current thread and cannot open a new one.
  • Asynchronous execution (async): the task is added to the queue asynchronously, and the call returns right away without waiting for the other tasks in the queue; an asynchronous task may run on a new thread, so async has the ability to open new threads.

2.2 GCD Queues

Queue: a queue is a special kind of linear list. The end where insertions are allowed is called the tail, and the end where removals are allowed is called the head, which makes it a first-in, first-out (FIFO) structure.

(Diagram: a FIFO queue, with new elements enqueued at the tail and dequeued from the head)

In GCD, a queue is the waiting line for task execution: it is where tasks are stored. Following the structure of a queue, new tasks are always appended at the tail, while the task to execute is always taken from the head; each time a task is read out, it is removed from the queue.

GCD queues come in two kinds, serial queues and concurrent queues, and both follow the FIFO (first in, first out) principle. The main differences between them are the order in which tasks execute and the number of threads they open.

  • Serial queue: only one thread is opened; only one task runs at a time, and the next task starts only after the previous one has finished.
  • Concurrent queue: multiple tasks can run at the same time, that is, multiple threads are opened so several tasks execute simultaneously.

The difference between the two is illustrated below:

(Diagram: a serial queue hands tasks one at a time to a single thread)
(Diagram: a concurrent queue distributes tasks across multiple threads, which run them in parallel)

3. Basic Usage of GCD

Using GCD is simple: first create a queue, then append tasks to it, and the system executes those tasks according to their type.

3.1 Creating Queues

  • Creating a queue is simple: just call dispatch_queue_create with the appropriate arguments. The function takes two parameters:
    • The first parameter is a unique identifier (label) for the queue; it may be NULL.
    • The second parameter says whether the queue is serial or concurrent: DISPATCH_QUEUE_SERIAL creates a serial queue, DISPATCH_QUEUE_CONCURRENT a concurrent one.

    // Create a serial queue
    dispatch_queue_t queue = dispatch_queue_create("com.thread.demo", DISPATCH_QUEUE_SERIAL);
    // Create a concurrent queue
    dispatch_queue_t queue = dispatch_queue_create("com.thread.demo", DISPATCH_QUEUE_CONCURRENT);

  • GCD provides a global concurrent queue by default; call dispatch_get_global_queue to obtain it. The function takes two parameters:
    • The first is a long value describing the queue priority: DISPATCH_QUEUE_PRIORITY_HIGH, DISPATCH_QUEUE_PRIORITY_LOW, DISPATCH_QUEUE_PRIORITY_BACKGROUND, or DISPATCH_QUEUE_PRIORITY_DEFAULT; DISPATCH_QUEUE_PRIORITY_DEFAULT is what you normally pass.
    • The second parameter is currently unused; pass 0.

    // Get the global concurrent queue
    dispatch_queue_t globalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

  • GCD also provides the main queue by default, obtained with dispatch_get_main_queue. Every task placed on the main queue runs on the main thread. The main queue is a serial queue.

    // Main queue
    dispatch_queue_t mainQueue = dispatch_get_main_queue();

Note: the main queue is essentially just an ordinary serial queue. It only looks special because, by default, the code we write starts out on the main queue, and everything on the main queue is executed on the main thread.

3.2 Creating Tasks

GCD creates a synchronous task with dispatch_sync and an asynchronous task with dispatch_async; the work of the task goes inside the block.

    // Asynchronous task
    dispatch_async(queue, ^{
        // code executed asynchronously
    });
    // Synchronous task
    dispatch_sync(queue, ^{
        // code executed synchronously
    });

A task has to be submitted to a queue in order to run. Taking the special status of the main queue into account, and leaving nested tasks aside for now, there are six combinations: sync task + serial queue, sync task + concurrent queue, async task + serial queue, async task + concurrent queue, main queue + sync task, and main queue + async task. Let's analyze each of them.

  1. Sync task + serial queue: no new thread is opened; the tasks execute serially.
  2. Sync task + concurrent queue: no new thread is opened; even though the tasks sit in a concurrent queue, a synchronous call runs them one by one on the current thread (by default only the main thread is in play and no child thread is opened), so execution is still serial.
  3. Async task + serial queue: a new thread is opened, but the tasks still execute serially.
  4. Async task + concurrent queue: new threads are opened and the tasks execute concurrently.
  5. Main queue + sync task: the main queue is a serial queue whose tasks run serially on the main thread; appending a synchronous task to the main queue makes the appended task and the work already running on the main thread wait for each other, blocking the main thread and causing a deadlock.
  6. Main queue + async task: the main queue is a serial queue whose tasks run on the main thread; even an appended asynchronous task does not open a new thread, and the tasks execute serially.

Besides the main queue + sync task deadlock mentioned above, a serial queue can also end up blocking the very thread it is running on, again producing a deadlock. This typically happens when the same serial queue is used in a nested way.

For example, in the code below, inside an async task running on a serial queue we dispatch synchronously to that same serial queue:

    dispatch_queue_t queue = dispatch_queue_create("com.thread.demo", DISPATCH_QUEUE_SERIAL);
    dispatch_async(queue, ^{       // async task + serial queue
        dispatch_sync(queue, ^{    // sync task + the same serial queue
            sleep(1);              // simulate time-consuming work
            NSLog(@"1");
        });
    });

Running this code makes the task appended to the serial queue and the task already running on it wait for each other; the serial queue is blocked, and the thread it runs on (a background thread this time) deadlocks. The main-queue deadlock happens for exactly the same reason, which again shows that the main queue is not really special.
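For contrast, here is a minimal sketch (my own example, not from the original article) of two ways to avoid that mutual wait: dispatch the inner block asynchronously, or target a different queue.

    dispatch_queue_t queue = dispatch_queue_create("com.thread.demo", DISPATCH_QUEUE_SERIAL);
    dispatch_async(queue, ^{
        // Safe: the inner block is only enqueued; the outer block can finish,
        // and the serial queue runs the inner block afterwards.
        dispatch_async(queue, ^{
            NSLog(@"1");
        });

        // Also safe: the sync call targets a different serial queue, so the queue
        // we are currently draining is never asked to wait on itself.
        dispatch_queue_t other = dispatch_queue_create("com.thread.demo.other", DISPATCH_QUEUE_SERIAL);
        dispatch_sync(other, ^{
            NSLog(@"2");
        });
    });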

3.3 GCD Usage Examples

3.3.1 Sync Task + Serial Queue

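A minimal sketch of this combination (the method name, queue label, and the output noted in the comments are illustrative assumptions, not the original demo):

    // Sync task + serial queue
    - (void)syncTaskWithSerialQueue {
        dispatch_queue_t queue = dispatch_queue_create("com.thread.demo", DISPATCH_QUEUE_SERIAL);
        NSLog(@"start - %@", [NSThread currentThread]);
        dispatch_sync(queue, ^{
            NSLog(@"task 1 - %@", [NSThread currentThread]);
        });
        dispatch_sync(queue, ^{
            NSLog(@"task 2 - %@", [NSThread currentThread]);
        });
        NSLog(@"end - %@", [NSThread currentThread]);
        // Typical output: start, task 1, task 2, end, all on the same (main) thread.
    }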

As the output of the code above shows, no new thread is opened and the tasks execute in order.

3.3.2 Sync Task + Concurrent Queue

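An illustrative sketch of this combination (names and output are assumptions):

    // Sync task + concurrent queue
    - (void)syncTaskWithConcurrentQueue {
        dispatch_queue_t queue = dispatch_queue_create("com.thread.demo", DISPATCH_QUEUE_CONCURRENT);
        NSLog(@"start - %@", [NSThread currentThread]);
        dispatch_sync(queue, ^{
            NSLog(@"task 1 - %@", [NSThread currentThread]);
        });
        dispatch_sync(queue, ^{
            NSLog(@"task 2 - %@", [NSThread currentThread]);
        });
        NSLog(@"end - %@", [NSThread currentThread]);
        // Typical output: start, task 1, task 2, end, still all on the calling (main) thread.
    }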

As the output shows, a synchronous task does not open a new thread: even though the tasks sit in a concurrent queue, only the current (main) thread is used and no child thread is opened, so the tasks execute serially.

3.3.3 Async Task + Serial Queue

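An illustrative sketch of this combination (names and output are assumptions):

    // Async task + serial queue
    - (void)asyncTaskWithSerialQueue {
        dispatch_queue_t queue = dispatch_queue_create("com.thread.demo", DISPATCH_QUEUE_SERIAL);
        NSLog(@"start - %@", [NSThread currentThread]);
        dispatch_async(queue, ^{
            NSLog(@"task 1 - %@", [NSThread currentThread]);
        });
        dispatch_async(queue, ^{
            NSLog(@"task 2 - %@", [NSThread currentThread]);
        });
        NSLog(@"end - %@", [NSThread currentThread]);
        // Typical output: start and end print first on the main thread;
        // task 1 and task 2 then run, in order, on one newly created thread.
    }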

As the output shows, one new thread is opened, which confirms that asynchronous tasks are able to spawn threads; but because the tasks are in a serial queue, they still execute in order.

3.3.4 Async Task + Concurrent Queue

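An illustrative sketch of this combination (names and output are assumptions):

    // Async task + concurrent queue
    - (void)asyncTaskWithConcurrentQueue {
        dispatch_queue_t queue = dispatch_queue_create("com.thread.demo", DISPATCH_QUEUE_CONCURRENT);
        NSLog(@"start - %@", [NSThread currentThread]);
        for (int i = 0; i < 3; i++) {
            dispatch_async(queue, ^{
                NSLog(@"task %d - %@", i, [NSThread currentThread]);
            });
        }
        NSLog(@"end - %@", [NSThread currentThread]);
        // Typical output: start and end first; the three tasks then finish in an
        // unpredictable order on several different threads.
    }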

As the output shows, several threads are created and the tasks complete in no fixed order, that is, they execute concurrently.

3.3.5 Main Queue + Sync Task

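A sketch of what this demo presumably looked like (only the method name syncTaskWithMainQueue is taken from the explanation below; the rest is an illustrative assumption):

    // Calling this from the main thread deadlocks; libdispatch detects the situation
    // and the process crashes.
    - (void)syncTaskWithMainQueue {
        NSLog(@"start - %@", [NSThread currentThread]);
        dispatch_sync(dispatch_get_main_queue(), ^{
            NSLog(@"task 1 - %@", [NSThread currentThread]);   // never reached
        });
        NSLog(@"end - %@", [NSThread currentThread]);          // never reached
    }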

Clearly the code above crashes. We call syncTaskWithMainQueue on the main thread, which amounts to placing the syncTaskWithMainQueue work on the main queue. A synchronous dispatch has to wait for the tasks already in the queue to finish before it can proceed. So when task 1 is appended to the main queue, task 1 waits for the main thread to finish the syncTaskWithMainQueue work, while syncTaskWithMainQueue in turn waits for task 1 to finish. The two wait on each other, and we get a deadlock.

3.3.6 Main Queue + Async Task

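An illustrative sketch of this combination (names and output are assumptions):

    // Async task + main queue
    - (void)asyncTaskWithMainQueue {
        NSLog(@"start - %@", [NSThread currentThread]);
        dispatch_async(dispatch_get_main_queue(), ^{
            NSLog(@"task 1 - %@", [NSThread currentThread]);
        });
        dispatch_async(dispatch_get_main_queue(), ^{
            NSLog(@"task 2 - %@", [NSThread currentThread]);
        });
        NSLog(@"end - %@", [NSThread currentThread]);
        // Typical output: start, end, task 1, task 2, every line on the main thread.
    }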

As the output shows, even though the tasks are asynchronous, no new thread is opened: everything still runs on the main thread, and the tasks execute in order.

4. GCD Source-Code Analysis

4.1 How Is a Queue Created?


Let's set a symbolic breakpoint on dispatch_queue_create to see which system library it lives in.

(Screenshots: the symbolic breakpoint stops inside libdispatch.dylib)

It turns out to be libdispatch.dylib, so we download the latest source drop from Apple's open-source site, libdispatch 1173.40.5.

4.1.1 Serial and Concurrent Queues

Now let's follow the flow of the source step by step.

  • Analysis of dispatch_queue_create

    // How does creation tell serial from concurrent?
    // The attribute is decoded into a dqai struct whose bit fields carry the answer (see below).
    dispatch_queue_t
    dispatch_queue_create(const char *label, dispatch_queue_attr_t attr)
    {
        return _dispatch_lane_create_with_target(label, attr,
                DISPATCH_TARGET_QUEUE_DEFAULT, true);
    }
  • Analysis of _dispatch_lane_create_with_target

    DISPATCH_NOINLINE
    static dispatch_queue_t
    _dispatch_lane_create_with_target(const char *label, dispatch_queue_attr_t dqa,
            dispatch_queue_t tq, bool legacy)
    {
        // build the dqai attribute info; for a serial queue (NULL attr) this returns { } (all zeros)
        dispatch_queue_attr_info_t dqai = _dispatch_queue_attr_to_info(dqa);
        //
        // Step 1: Normalize arguments (qos, overcommit, tq)
        //
        dispatch_qos_t qos = dqai.dqai_qos;
    #if !HAVE_PTHREAD_WORKQUEUE_QOS
        if (qos == DISPATCH_QOS_USER_INTERACTIVE) {
            dqai.dqai_qos = qos = DISPATCH_QOS_USER_INITIATED;
        }
        if (qos == DISPATCH_QOS_MAINTENANCE) {
            dqai.dqai_qos = qos = DISPATCH_QOS_BACKGROUND;
        }
    #endif // !HAVE_PTHREAD_WORKQUEUE_QOS
        _dispatch_queue_attr_overcommit_t overcommit = dqai.dqai_overcommit;
        if (overcommit != _dispatch_queue_attr_overcommit_unspecified && tq) {
            if (tq->do_targetq) {
                DISPATCH_CLIENT_CRASH(tq, "Cannot specify both overcommit and "
                        "a non-global target queue");
            }
        }
        if (tq && dx_type(tq) == DISPATCH_QUEUE_GLOBAL_ROOT_TYPE) {
            // Handle discrepancies between attr and target queue, attributes win
            if (overcommit == _dispatch_queue_attr_overcommit_unspecified) {
                if (tq->dq_priority & DISPATCH_PRIORITY_FLAG_OVERCOMMIT) {
                    overcommit = _dispatch_queue_attr_overcommit_enabled;
                } else {
                    overcommit = _dispatch_queue_attr_overcommit_disabled;
                }
            }
            if (qos == DISPATCH_QOS_UNSPECIFIED) {
                qos = _dispatch_priority_qos(tq->dq_priority);
            }
            tq = NULL;
        } else if (tq && !tq->do_targetq) {
            // target is a pthread or runloop root queue, setting QoS or overcommit
            // is disallowed
            if (overcommit != _dispatch_queue_attr_overcommit_unspecified) {
                DISPATCH_CLIENT_CRASH(tq, "Cannot specify an overcommit attribute "
                        "and use this kind of target queue");
            }
        } else {
            if (overcommit == _dispatch_queue_attr_overcommit_unspecified) {
                // Serial queues default to overcommit!
                overcommit = dqai.dqai_concurrent ?
                        _dispatch_queue_attr_overcommit_disabled :
                        _dispatch_queue_attr_overcommit_enabled;
            }
        }
        if (!tq) {
            tq = _dispatch_get_root_queue(
                    qos == DISPATCH_QOS_UNSPECIFIED ? DISPATCH_QOS_DEFAULT : qos,
                    overcommit == _dispatch_queue_attr_overcommit_enabled)->_as_dq;
            if (unlikely(!tq)) {
                DISPATCH_CLIENT_CRASH(qos, "Invalid queue attribute");
            }
        }
        //
        // Step 2: Initialize the queue
        //
        if (legacy) {
            // if any of these attributes is specified, use non legacy classes
            if (dqai.dqai_inactive || dqai.dqai_autorelease_frequency) {
                legacy = false;
            }
        }
        const void *vtable;
        dispatch_queue_flags_t dqf = legacy ? DQF_MUTABLE : 0;
        // dqai.dqai_concurrent distinguishes serial from concurrent,
        // and DISPATCH_VTABLE then picks the matching class:
        // OS_dispatch_##name##_class -> OS_dispatch_queue_concurrent_class
        if (dqai.dqai_concurrent) {
            // OS_dispatch_queue_concurrent
            vtable = DISPATCH_VTABLE(queue_concurrent);
        } else {
            vtable = DISPATCH_VTABLE(queue_serial);
        }
        switch (dqai.dqai_autorelease_frequency) {
        case DISPATCH_AUTORELEASE_FREQUENCY_NEVER:
            dqf |= DQF_AUTORELEASE_NEVER;
            break;
        case DISPATCH_AUTORELEASE_FREQUENCY_WORK_ITEM:
            dqf |= DQF_AUTORELEASE_ALWAYS;
            break;
        }
        // copy the label if necessary
        if (label) {
            const char *tmp = _dispatch_strdup_if_mutable(label);
            if (tmp != label) {
                dqf |= DQF_LABEL_NEEDS_FREE;
                label = tmp;
            }
        }
        // allocate memory and create the queue object: dq is what dispatch_queue_create hands back
        dispatch_lane_t dq = _dispatch_object_alloc(vtable,
                sizeof(struct dispatch_lane_s));
        // dispatch_object_t is the root type that every other dispatch object derives from
        // init: a concurrent queue gets the maximum width (0xffe, i.e. 4094), a serial queue gets width 1
        _dispatch_queue_init(dq, dqf, dqai.dqai_concurrent ?
                DISPATCH_QUEUE_WIDTH_MAX : 1, DISPATCH_QUEUE_ROLE_INNER |
                (dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));
        // assign the label
        dq->dq_label = label;
        // assign the priority
        dq->dq_priority = _dispatch_priority_make((dispatch_qos_t)dqai.dqai_qos,
                dqai.dqai_relpri);
        if (overcommit == _dispatch_queue_attr_overcommit_enabled) {
            dq->dq_priority |= DISPATCH_PRIORITY_FLAG_OVERCOMMIT;
        }
        if (!dqai.dqai_inactive) {
            _dispatch_queue_priority_inherit_from_target(dq, tq);
            _dispatch_lane_inherit_wlh_from_target(dq, tq);
        }
        _dispatch_retain(tq);
        dq->do_targetq = tq;
        _dispatch_object_debug(dq, "%s", __func__);
        return _dispatch_trace_queue_create(dq)._dq;
    }
  • Analysis of _dispatch_queue_attr_to_info

    dispatch_queue_attr_info_t
    _dispatch_queue_attr_to_info(dispatch_queue_attr_t dqa)
    {
        dispatch_queue_attr_info_t dqai = { };
        if (!dqa) return dqai;
    #if DISPATCH_VARIANT_STATIC
        // DISPATCH_QUEUE_CONCURRENT returns right here with dqai_concurrent set;
        // a plain serial queue (NULL attr) already returned above
        if (dqa == &_dispatch_queue_attr_concurrent) {
            dqai.dqai_concurrent = true;
            return dqai;
        }
    #endif
        if (dqa < _dispatch_queue_attrs ||
                dqa >= &_dispatch_queue_attrs[DISPATCH_QUEUE_ATTR_COUNT]) {
            DISPATCH_CLIENT_CRASH(dqa->do_vtable, "Invalid queue attribute");
        }
        // Apple's own indexing scheme into the attribute table
        size_t idx = (size_t)(dqa - _dispatch_queue_attrs);
        // bit-field style decoding: each group of values encodes one attribute to save space,
        // much like an isa pointer packs nonpointer, has_assoc, etc. into individual bits
        dqai.dqai_inactive = (idx % DISPATCH_QUEUE_ATTR_INACTIVE_COUNT);
        idx /= DISPATCH_QUEUE_ATTR_INACTIVE_COUNT;
        dqai.dqai_concurrent = !(idx % DISPATCH_QUEUE_ATTR_CONCURRENCY_COUNT);
        idx /= DISPATCH_QUEUE_ATTR_CONCURRENCY_COUNT;
        dqai.dqai_relpri = -(int)(idx % DISPATCH_QUEUE_ATTR_PRIO_COUNT);
        idx /= DISPATCH_QUEUE_ATTR_PRIO_COUNT;
        dqai.dqai_qos = idx % DISPATCH_QUEUE_ATTR_QOS_COUNT;
        idx /= DISPATCH_QUEUE_ATTR_QOS_COUNT;
        dqai.dqai_autorelease_frequency =
                idx % DISPATCH_QUEUE_ATTR_AUTORELEASE_FREQUENCY_COUNT;
        idx /= DISPATCH_QUEUE_ATTR_AUTORELEASE_FREQUENCY_COUNT;
        dqai.dqai_overcommit = idx % DISPATCH_QUEUE_ATTR_OVERCOMMIT_COUNT;
        idx /= DISPATCH_QUEUE_ATTR_OVERCOMMIT_COUNT;
        return dqai;
    }
  • Analysis of DISPATCH_VTABLE

The macro chain below also shows that a queue is really an object. Substituting ##name, we get

OS_dispatch_queue_serial and OS_dispatch_queue_concurrent,

which are exactly the serial-queue and concurrent-queue classes.

    #define DISPATCH_VTABLE(name) DISPATCH_OBJC_CLASS(name)
    ->
    #define DISPATCH_OBJC_CLASS(name) (&DISPATCH_CLASS_SYMBOL(name))
    ->
    #define DISPATCH_CLASS_SYMBOL(name) OS_dispatch_##name##_class
    ->
    #define DISPATCH_CLASS(name) OS_dispatch_##name
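Expanding the chain by hand for the concurrent case (just a manual expansion of the macros above):

    vtable = DISPATCH_VTABLE(queue_concurrent);
           //= DISPATCH_OBJC_CLASS(queue_concurrent)
           //= (&DISPATCH_CLASS_SYMBOL(queue_concurrent))
           //= &OS_dispatch_queue_concurrent_class   // the class backing concurrent queues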

4.1.2 mainQueue and globalQueue

  • Analysis of _dispatch_get_root_queue

    DISPATCH_ALWAYS_INLINE DISPATCH_CONST
    static inline dispatch_queue_global_t
    _dispatch_get_root_queue(dispatch_qos_t qos, bool overcommit) // e.g. (4, YES/NO)
    {
        if (unlikely(qos < DISPATCH_QOS_MIN || qos > DISPATCH_QOS_MAX)) {
            DISPATCH_CLIENT_CRASH(qos, "Corrupted priority");
        }
        // for the default QoS this picks array index 6 or 7;
        // _dispatch_root_queues[] below is a static array that carries a lot of
        // information, including the target queues
        return &_dispatch_root_queues[2 * (qos - 1) + overcommit];
    }
    struct dispatch_queue_global_s _dispatch_root_queues[] = {
    #define _DISPATCH_ROOT_QUEUE_IDX(n, flags) \
            ((flags & DISPATCH_PRIORITY_FLAG_OVERCOMMIT) ? \
            DISPATCH_ROOT_QUEUE_IDX_##n##_QOS_OVERCOMMIT : \
            DISPATCH_ROOT_QUEUE_IDX_##n##_QOS)
    #define _DISPATCH_ROOT_QUEUE_ENTRY(n, flags, ...) \
            [_DISPATCH_ROOT_QUEUE_IDX(n, flags)] = { \
                DISPATCH_GLOBAL_OBJECT_HEADER(queue_global), \
                .dq_state = DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE, \
                .do_ctxt = _dispatch_root_queue_ctxt(_DISPATCH_ROOT_QUEUE_IDX(n, flags)), \
                .dq_atomic_flags = DQF_WIDTH(DISPATCH_QUEUE_WIDTH_POOL), \
                .dq_priority = flags | ((flags & DISPATCH_PRIORITY_FLAG_FALLBACK) ? \
                        _dispatch_priority_make_fallback(DISPATCH_QOS_##n) : \
                        _dispatch_priority_make(DISPATCH_QOS_##n, 0)), \
                __VA_ARGS__ \
            }
        _DISPATCH_ROOT_QUEUE_ENTRY(MAINTENANCE, 0,
            .dq_label = "com.apple.root.maintenance-qos",
            .dq_serialnum = 4,
        ),
        _DISPATCH_ROOT_QUEUE_ENTRY(MAINTENANCE, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
            .dq_label = "com.apple.root.maintenance-qos.overcommit",
            .dq_serialnum = 5,
        ),
        _DISPATCH_ROOT_QUEUE_ENTRY(BACKGROUND, 0,
            .dq_label = "com.apple.root.background-qos",
            .dq_serialnum = 6,
        ),
        _DISPATCH_ROOT_QUEUE_ENTRY(BACKGROUND, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
            .dq_label = "com.apple.root.background-qos.overcommit",
            .dq_serialnum = 7,
        ),
        _DISPATCH_ROOT_QUEUE_ENTRY(UTILITY, 0,
            .dq_label = "com.apple.root.utility-qos",
            .dq_serialnum = 8,
        ),
        _DISPATCH_ROOT_QUEUE_ENTRY(UTILITY, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
            .dq_label = "com.apple.root.utility-qos.overcommit",
            .dq_serialnum = 9,
        ),
        _DISPATCH_ROOT_QUEUE_ENTRY(DEFAULT, DISPATCH_PRIORITY_FLAG_FALLBACK,
            .dq_label = "com.apple.root.default-qos",
            .dq_serialnum = 10,
        ),
        _DISPATCH_ROOT_QUEUE_ENTRY(DEFAULT,
                DISPATCH_PRIORITY_FLAG_FALLBACK | DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
            .dq_label = "com.apple.root.default-qos.overcommit",
            .dq_serialnum = 11,
        ),
        _DISPATCH_ROOT_QUEUE_ENTRY(USER_INITIATED, 0,
            .dq_label = "com.apple.root.user-initiated-qos",
            .dq_serialnum = 12,
        ),
        _DISPATCH_ROOT_QUEUE_ENTRY(USER_INITIATED, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
            .dq_label = "com.apple.root.user-initiated-qos.overcommit",
            .dq_serialnum = 13,
        ),
        _DISPATCH_ROOT_QUEUE_ENTRY(USER_INTERACTIVE, 0,
            .dq_label = "com.apple.root.user-interactive-qos",
            .dq_serialnum = 14,
        ),
        _DISPATCH_ROOT_QUEUE_ENTRY(USER_INTERACTIVE, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
            .dq_label = "com.apple.root.user-interactive-qos.overcommit",
            .dq_serialnum = 15,
        ),
    };
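Putting the index formula and the array together, here is a worked example for the default global queue (my own arithmetic, based on the constants above):

    // dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0) maps the priority to
    // DISPATCH_QOS_DEFAULT, whose raw value is 4, with overcommit = false.
    // index = 2 * (qos - 1) + overcommit = 2 * (4 - 1) + 0 = 6
    // _dispatch_root_queues[6] is the entry labeled "com.apple.root.default-qos" (dq_serialnum = 10);
    // with overcommit the index would be 7, i.e. "com.apple.root.default-qos.overcommit".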

4.1.2.1 dispatch_get_global_queue (the global concurrent queue)

    dispatch_queue_global_t
    dispatch_get_global_queue(long priority, unsigned long flags)
    {
        dispatch_assert(countof(_dispatch_root_queues) ==
                DISPATCH_ROOT_QUEUE_COUNT);
        if (flags & ~(unsigned long)DISPATCH_QUEUE_OVERCOMMIT) {
            return DISPATCH_BAD_INPUT;
        }
        dispatch_qos_t qos = _dispatch_qos_from_queue_priority(priority);
    #if !HAVE_PTHREAD_WORKQUEUE_QOS
        if (qos == QOS_CLASS_MAINTENANCE) {
            qos = DISPATCH_QOS_BACKGROUND;
        } else if (qos == QOS_CLASS_USER_INTERACTIVE) {
            qos = DISPATCH_QOS_USER_INITIATED;
        }
    #endif
        if (qos == DISPATCH_QOS_UNSPECIFIED) {
            return DISPATCH_BAD_INPUT;
        }
        // index into the static _dispatch_root_queues[] array shown earlier
        return _dispatch_get_root_queue(qos, flags & DISPATCH_QUEUE_OVERCOMMIT);
    }

4.1.2.2 dispatch_get_main_queue (the main queue, a serial queue)

The declaration Apple exposes to us:

    dispatch_queue_main_t
    dispatch_get_main_queue(void)
    {
        // jump to the definition of dispatch_queue_main_t (below)
        return DISPATCH_GLOBAL_OBJECT(dispatch_queue_main_t, _dispatch_main_q);
    }

    DISPATCH_DECL_SUBCLASS(dispatch_queue_main, dispatch_queue_serial);

    // name -> the subclass being declared, base -> its superclass
    #define DISPATCH_DECL_SUBCLASS(name, base) OS_OBJECT_DECL_SUBCLASS(name, base)

And in the libdispatch source:

    // this declares name as a subclass of super; here dispatch_queue_main
    // ends up declared as a subclass of dispatch_queue_serial
    #define OS_OBJECT_DECL_SUBCLASS(name, super) \
            OS_OBJECT_DECL_IMPL(name, <OS_OBJECT_CLASS(super)>)

    struct dispatch_queue_static_s _dispatch_main_q = {
        DISPATCH_GLOBAL_OBJECT_HEADER(queue_main),
    #if !DISPATCH_USE_RESOLVERS
        .do_targetq = _dispatch_get_default_queue(true),
    #endif
        .dq_state = DISPATCH_QUEUE_STATE_INIT_VALUE(1) |
                DISPATCH_QUEUE_ROLE_BASE_ANON,
        .dq_label = "com.apple.main-thread",
        .dq_atomic_flags = DQF_THREAD_BOUND | DQF_WIDTH(1),
        .dq_serialnum = 1,
    };

    dispatch_queue_main_t
    dispatch_get_main_queue(void)
    {
        return DISPATCH_GLOBAL_OBJECT(dispatch_queue_main_t, _dispatch_main_q);
    }

    #define DISPATCH_GLOBAL_OBJECT(type, object) ((OS_OBJECT_BRIDGE type)&(object))

4.2 Source Analysis of the Async Function

4.2.1 How Threads Are Created

    void
    dispatch_async(dispatch_queue_t dq, dispatch_block_t work)
    {
        dispatch_continuation_t dc = _dispatch_continuation_alloc();
        uintptr_t dc_flags = DC_FLAG_CONSUME;
        dispatch_qos_t qos;
        // task wrapper: the continuation receives and saves the block (function-style)
        qos = _dispatch_continuation_init(dc, dq, work, 0, dc_flags);
        // hand the continuation off; this call path is what eventually creates the thread
        _dispatch_continuation_async(dq, dc, qos, dc->dc_flags);
    }
  • _dispatch_continuation_init

    static inline dispatch_qos_t
    _dispatch_continuation_init(dispatch_continuation_t dc,
            dispatch_queue_class_t dqu, dispatch_block_t work,
            dispatch_block_flags_t flags, uintptr_t dc_flags)
    {
        void *ctxt = _dispatch_Block_copy(work);
        dc_flags |= DC_FLAG_BLOCK | DC_FLAG_ALLOCATED;
        if (unlikely(_dispatch_block_has_private_data(work))) {
            dc->dc_flags = dc_flags;
            dc->dc_ctxt = ctxt;
            // will initialize all fields but requires dc_flags & dc_ctxt to be set
            return _dispatch_continuation_init_slow(dc, dqu, flags);
        }
        // the function that will later invoke the block
        dispatch_function_t func = _dispatch_Block_invoke(work);
        if (dc_flags & DC_FLAG_CONSUME) {
            func = _dispatch_call_block_and_release;
            /*
             note:
             _dispatch_call_block_and_release(void *block)
             {
                 void (^b)(void) = block;
                 b();
                 Block_release(b);
             }
             */
        }
        return _dispatch_continuation_init_f(dc, dqu, ctxt, func, flags, dc_flags);
    }
  • _dispatch_continuation_init_f

    static inline dispatch_qos_t
    _dispatch_continuation_init_f(dispatch_continuation_t dc,
            dispatch_queue_class_t dqu, void *ctxt, dispatch_function_t f,
            dispatch_block_flags_t flags, uintptr_t dc_flags)
    {
        pthread_priority_t pp = 0;
        dc->dc_flags = dc_flags | DC_FLAG_ALLOCATED;
        dc->dc_func = f; // save the task's function
        dc->dc_ctxt = ctxt;
        // in this context DISPATCH_BLOCK_HAS_PRIORITY means that the priority
        // should not be propagated, only taken from the handler if it has one
        if (!(flags & DISPATCH_BLOCK_HAS_PRIORITY)) {
            pp = _dispatch_priority_propagate();
        }
        _dispatch_continuation_voucher_set(dc, flags);
        return _dispatch_continuation_priority_set(dc, dqu, pp, flags);
    }
  • _dispatch_continuation_async

    static inline void
    _dispatch_continuation_async(dispatch_queue_class_t dqu,
            dispatch_continuation_t dc, dispatch_qos_t qos, uintptr_t dc_flags)
    {
    #if DISPATCH_INTROSPECTION
        if (!(dc_flags & DC_FLAG_NO_INTROSPECTION)) {
            _dispatch_trace_item_push(dqu, dc);
        }
    #else
        (void)dc_flags;
    #endif
        // dx_push is a macro -> dx_vtable -> dq_push
        // for a global (concurrent root) queue, .dq_push = _dispatch_root_queue_push
        return dx_push(dqu._dq, dc, qos);
    }
  • _dispatch_root_queue_push

    void
    _dispatch_root_queue_push(dispatch_queue_global_t rq, dispatch_object_t dou,
            dispatch_qos_t qos)
    {
    #if DISPATCH_USE_KEVENT_WORKQUEUE
        dispatch_deferred_items_t ddi = _dispatch_deferred_items_get();
        if (unlikely(ddi && ddi->ddi_can_stash)) {
            dispatch_object_t old_dou = ddi->ddi_stashed_dou;
            dispatch_priority_t rq_overcommit;
            rq_overcommit = rq->dq_priority & DISPATCH_PRIORITY_FLAG_OVERCOMMIT;
            if (likely(!old_dou._do || rq_overcommit)) {
                dispatch_queue_global_t old_rq = ddi->ddi_stashed_rq;
                dispatch_qos_t old_qos = ddi->ddi_stashed_qos;
                ddi->ddi_stashed_rq = rq;
                ddi->ddi_stashed_dou = dou;
                ddi->ddi_stashed_qos = qos;
                _dispatch_debug("deferring item %p, rq %p, qos %d",
                        dou._do, rq, qos);
                if (rq_overcommit) {
                    ddi->ddi_can_stash = false;
                }
                if (likely(!old_dou._do)) {
                    return;
                }
                // push the previously stashed item
                qos = old_qos;
                rq = old_rq;
                dou = old_dou;
            }
        }
    #endif
    #if HAVE_PTHREAD_WORKQUEUE_QOS
        if (_dispatch_root_queue_push_needs_override(rq, qos)) {
            return _dispatch_root_queue_push_override(rq, dou, qos);
        }
    #else
        (void)qos;
    #endif
        _dispatch_root_queue_push_inline(rq, dou, dou, 1);
    }
  • Into _dispatch_root_queue_push_inline

    static inline void
    _dispatch_root_queue_push_inline(dispatch_queue_global_t dq,
            dispatch_object_t _head, dispatch_object_t _tail, int n)
    {
        struct dispatch_object_s *hd = _head._do, *tl = _tail._do;
        if (unlikely(os_mpsc_push_list(os_mpsc(dq, dq_items), hd, tl, do_next))) {
            return _dispatch_root_queue_poke(dq, n, 0);
        }
    }
  • Into _dispatch_root_queue_poke

    void
    _dispatch_root_queue_poke(dispatch_queue_global_t dq, int n, int floor)
    {
        if (!_dispatch_queue_class_probe(dq)) {
            return;
        }
    #if !DISPATCH_USE_INTERNAL_WORKQUEUE
    #if DISPATCH_USE_PTHREAD_POOL
        if (likely(dx_type(dq) == DISPATCH_QUEUE_GLOBAL_ROOT_TYPE))
    #endif
        {
            if (unlikely(!os_atomic_cmpxchg2o(dq, dgq_pending, 0, n, relaxed))) {
                _dispatch_root_queue_debug("worker thread request still pending "
                        "for global queue: %p", dq);
                return;
            }
        }
    #endif // !DISPATCH_USE_INTERNAL_WORKQUEUE
        return _dispatch_root_queue_poke_slow(dq, n, floor);
    }
  • Into _dispatch_root_queue_poke_slow

    static void
    _dispatch_root_queue_poke_slow(dispatch_queue_global_t dq, int n, int floor)
    {
        int remaining = n;
        int r = ENOSYS;
        _dispatch_root_queues_init();
        _dispatch_debug_root_queue(dq, __func__);
        _dispatch_trace_runtime_event(worker_request, dq, (uint64_t)n);
    #if !DISPATCH_USE_INTERNAL_WORKQUEUE
    #if DISPATCH_USE_PTHREAD_ROOT_QUEUES
        if (dx_type(dq) == DISPATCH_QUEUE_GLOBAL_ROOT_TYPE)
    #endif
        {
            _dispatch_root_queue_debug("requesting new worker thread for global "
                    "queue: %p", dq);
            // for a global root queue, worker threads are requested directly from the
            // workqueue subsystem; its source is not available, so we cannot dig deeper here
            r = _pthread_workqueue_addthreads(remaining,
                    _dispatch_priority_to_pp_prefer_fallback(dq->dq_priority));
            (void)dispatch_assume_zero(r);
            return;
        }
    #endif // !DISPATCH_USE_INTERNAL_WORKQUEUE
    #if DISPATCH_USE_PTHREAD_POOL
        dispatch_pthread_root_queue_context_t pqc = dq->do_ctxt;
        if (likely(pqc->dpq_thread_mediator.do_vtable)) {
            while (dispatch_semaphore_signal(&pqc->dpq_thread_mediator)) {
                _dispatch_root_queue_debug("signaled sleeping worker for "
                        "global queue: %p", dq);
                if (!--remaining) {
                    return;
                }
            }
        }
        bool overcommit = dq->dq_priority & DISPATCH_PRIORITY_FLAG_OVERCOMMIT;
        if (overcommit) {
            os_atomic_add2o(dq, dgq_pending, remaining, relaxed);
        } else {
            if (!os_atomic_cmpxchg2o(dq, dgq_pending, 0, remaining, relaxed)) {
                _dispatch_root_queue_debug("worker thread request still pending for "
                        "global queue: %p", dq);
                return;
            }
        }
        int can_request, t_count;
        // seq_cst with atomic store to tail <rdar://problem/16932833>
        // read the current size of the thread pool
        t_count = os_atomic_load2o(dq, dgq_thread_pool_size, ordered);
        do {
            // how many threads may still be created
            can_request = t_count < floor ? 0 : t_count - floor;
            // if we want more threads than can be created, clamp the request
            if (remaining > can_request) {
                _dispatch_root_queue_debug("pthread pool reducing request from %d to %d",
                        remaining, can_request);
                os_atomic_sub2o(dq, dgq_pending, remaining - can_request, relaxed);
                remaining = can_request;
            }
            if (remaining == 0) {
                _dispatch_root_queue_debug("pthread pool is full for root queue: "
                        "%p", dq);
                return;
            }
        } while (!os_atomic_cmpxchgvw2o(dq, dgq_thread_pool_size, t_count,
                t_count - remaining, &t_count, acquire)); // re-check the pool size
    #if !defined(_WIN32)
        pthread_attr_t *attr = &pqc->dpq_thread_attr;
        pthread_t tid, *pthr = &tid;
    #if DISPATCH_USE_MGR_THREAD && DISPATCH_USE_PTHREAD_ROOT_QUEUES
        if (unlikely(dq == &_dispatch_mgr_root_queue)) {
            pthr = _dispatch_mgr_root_queue_init();
        }
    #endif
        // remaining > 0 and all checks passed: create the worker threads one by one
        do {
            _dispatch_retain(dq); // released in _dispatch_worker_thread
            while ((r = pthread_create(pthr, attr, _dispatch_worker_thread, dq))) {
                if (r != EAGAIN) {
                    (void)dispatch_assume_zero(r);
                }
                _dispatch_temporary_resource_shortage();
            }
        } while (--remaining); // remaining == 0 means every requested thread has been created
    #endif // !defined(_WIN32)
    #else
        (void)floor;
    #endif // DISPATCH_USE_PTHREAD_POOL
    }

4.2.2 How Tasks Are Executed

(Screenshot: the call stack printed from inside an async block; _dispatch_worker_thread2 appears among the frames)
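To reproduce such a stack yourself, one simple option (my own snippet, not from the original post) is to log the call-stack symbols from inside an async block:

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Prints the frames the worker thread went through before reaching our block,
        // e.g. _dispatch_worker_thread2 / _dispatch_root_queue_drain / _dispatch_client_callout.
        NSLog(@"%@", [NSThread callStackSymbols]);
    });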

Starting from the stack shown above, let's begin the analysis at _dispatch_worker_thread2.

    static void
    _dispatch_worker_thread2(pthread_priority_t pp)
    {
        bool overcommit = pp & _PTHREAD_PRIORITY_OVERCOMMIT_FLAG;
        dispatch_queue_global_t dq;
        pp &= _PTHREAD_PRIORITY_OVERCOMMIT_FLAG | ~_PTHREAD_PRIORITY_FLAGS_MASK;
        _dispatch_thread_setspecific(dispatch_priority_key, (void *)(uintptr_t)pp);
        dq = _dispatch_get_root_queue(_dispatch_qos_from_pp(pp), overcommit);
        _dispatch_introspection_thread_add();
        _dispatch_trace_runtime_event(worker_unpark, dq, 0);
        int pending = os_atomic_dec2o(dq, dgq_pending, relaxed);
        dispatch_assert(pending >= 0);
        _dispatch_root_queue_drain(dq, dq->dq_priority,
                DISPATCH_INVOKE_WORKER_DRAIN | DISPATCH_INVOKE_REDIRECTING_DRAIN);
        _dispatch_voucher_debug("root queue clear", NULL);
        _dispatch_reset_voucher(NULL, DISPATCH_THREAD_PARK);
        _dispatch_trace_runtime_event(worker_park, NULL, 0);
    }

Into _dispatch_root_queue_drain:

    DISPATCH_NOT_TAIL_CALLED // prevent tailcall (for Instrument DTrace probe)
    static void
    _dispatch_root_queue_drain(dispatch_queue_global_t dq,
            dispatch_priority_t pri, dispatch_invoke_flags_t flags)
    {
    #if DISPATCH_DEBUG
        dispatch_queue_t cq;
        if (unlikely(cq = _dispatch_queue_get_current())) {
            DISPATCH_INTERNAL_CRASH(cq, "Premature thread recycling");
        }
    #endif
        _dispatch_queue_set_current(dq);
        _dispatch_init_basepri(pri);
        _dispatch_adopt_wlh_anon();
        struct dispatch_object_s *item;
        bool reset = false;
        dispatch_invoke_context_s dic = { };
    #if DISPATCH_COCOA_COMPAT
        _dispatch_last_resort_autorelease_pool_push(&dic);
    #endif // DISPATCH_COCOA_COMPAT
        _dispatch_queue_drain_init_narrowing_check_deadline(&dic, pri);
        _dispatch_perfmon_start();
        while (likely(item = _dispatch_root_queue_drain_one(dq))) {
            if (reset) _dispatch_wqthread_override_reset();
            _dispatch_continuation_pop_inline(item, &dic, flags, dq);
            reset = _dispatch_reset_basepri_override();
            if (unlikely(_dispatch_queue_drain_should_narrow(&dic))) {
                break;
            }
        }
        // overcommit or not. worker thread
        if (pri & DISPATCH_PRIORITY_FLAG_OVERCOMMIT) {
            _dispatch_perfmon_end(perfmon_thread_worker_oc);
        } else {
            _dispatch_perfmon_end(perfmon_thread_worker_non_oc);
        }
    #if DISPATCH_COCOA_COMPAT
        _dispatch_last_resort_autorelease_pool_pop(&dic);
    #endif // DISPATCH_COCOA_COMPAT
        _dispatch_reset_wlh();
        _dispatch_clear_basepri();
        _dispatch_queue_set_current(NULL);
    }

The key part above is the call to _dispatch_continuation_pop_inline inside the while loop:

    static inline void
    _dispatch_continuation_pop_inline(dispatch_object_t dou,
            dispatch_invoke_context_t dic, dispatch_invoke_flags_t flags,
            dispatch_queue_class_t dqu)
    {
        dispatch_pthread_root_queue_observer_hooks_t observer_hooks =
                _dispatch_get_pthread_root_queue_observer_hooks();
        if (observer_hooks) observer_hooks->queue_will_execute(dqu._dq);
        flags &= _DISPATCH_INVOKE_PROPAGATE_MASK;
        if (_dispatch_object_has_vtable(dou)) {
            dx_invoke(dou._dq, dic, flags);
        } else {
            _dispatch_continuation_invoke_inline(dou, flags, dqu);
        }
        if (observer_hooks) observer_hooks->queue_did_execute(dqu._dq);
    }

Into _dispatch_continuation_invoke_inline:

    static inline void
    _dispatch_continuation_invoke_inline(dispatch_object_t dou,
            dispatch_invoke_flags_t flags, dispatch_queue_class_t dqu)
    {
        dispatch_continuation_t dc = dou._dc, dc1;
        dispatch_invoke_with_autoreleasepool(flags, {
            uintptr_t dc_flags = dc->dc_flags;
            // Add the item back to the cache before calling the function. This
            // allows the 'hot' continuation to be used for a quick callback.
            //
            // The ccache version is per-thread.
            // Therefore, the object has not been reused yet.
            // This generates better assembly.
            _dispatch_continuation_voucher_adopt(dc, dc_flags);
            if (!(dc_flags & DC_FLAG_NO_INTROSPECTION)) {
                _dispatch_trace_item_pop(dqu, dou);
            }
            if (dc_flags & DC_FLAG_CONSUME) {
                dc1 = _dispatch_continuation_free_cacheonly(dc);
            } else {
                dc1 = NULL;
            }
            if (unlikely(dc_flags & DC_FLAG_GROUP_ASYNC)) {
                _dispatch_continuation_with_group_invoke(dc);
            } else {
                _dispatch_client_callout(dc->dc_ctxt, dc->dc_func);
                _dispatch_trace_item_complete(dc);
            }
            if (unlikely(dc1)) {
                _dispatch_continuation_free_to_cache_limit(dc1);
            }
        });
        _dispatch_perfmon_workitem_inc();
    }

The key call is _dispatch_client_callout:

    void
    _dispatch_client_callout(void *ctxt, dispatch_function_t f)
    {
        @try {
            // invoke the task's function
            return f(ctxt);
        }
        @catch (...) {
            objc_terminate();
        }
    }

At this point we have an accurate picture of how an asynchronous task gets executed.