What is frame rate?

FPS (frames per second) is a term from the graphics world: it is the number of frames delivered to the screen per second, or, put simply, how many pictures an animation or video shows each second.

Why does frame rate matter?

Here is how Huawei's commentary on the Software Green Alliance Application Experience Standard 3.0 (《软件绿色联盟应用体验标准3.0》) puts it: the refresh frame rate of an app's UI, especially while scrolling, feels janky when it is low, so keeping a relatively high frame rate makes the experience smoother. Also, the lower the refresh rate, the worse the image flickers and judders, and the faster the eyes get tired.

So what affects frame rate?

Today's Android flagships already ship 120 Hz displays, but do we really need refresh rates that high? The frame rate we need depends on the scenario: film is watchable at 24 fps, games need at least 30 fps, and anything that should feel fluid needs 60 fps or more, which is why Honor of Kings (王者荣耀) only stops feeling janky at around 60 fps. Turning the question around: what actually determines the frame rate we get?

  • The GPU (graphics card): the higher the target FPS, the more processing power is required
  • The resolution: the lower the resolution, the easier it is to reach a high frame rate

There is a simple formula for the required throughput: GPU processing power = resolution × frame rate. For example, at a resolution of 1024×768 and a frame rate of 24 fps you need 1024 × 768 × 24 = 18,874,368, roughly 18.87 million pixels per second of processing power; to reach 50 fps you would need about 39 million pixels per second.
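As a quick check of that arithmetic in plain Java (these are just the example numbers from above, nothing more):

  // throughput = resolution x frame rate
  long at24fps = 1024L * 768 * 24; // 18,874,368  -> roughly 18.87 million pixels per second
  long at50fps = 1024L * 768 * 50; // 39,321,600  -> roughly 39.3 million pixels per second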

What counts as a normal frame rate?

The performance chapter of the Software Green Alliance Application Experience Standard 3.0 lists the following requirements:

  • Ordinary applications: frame rate ≥ 55 fps
  • Games, maps and video applications: frame rate ≥ 25 fps

Wait, didn't we just say games need about 60 fps to feel smooth? Funny how low the Green Alliance sets the bar for them.

How do we monitor frame rate on an Android phone?

Since keeping a high frame rate matters so much for the user experience, how do we monitor a phone's frame rate in real time? The simplest way is the phone's developer options: turn on Profile HWUI rendering (Profile GPU Rendering) and show the result as on-screen bars, or dump the same data over adb, for example with adb shell dumpsys gfxinfo <your.package.name>. The official guide is here:
Analyze with Profile GPU Rendering
But that is not really what we are after this time; we want to measure it from code. How? The answer is Choreographer. Google added this class in API level 16; before each frame is drawn it calls back the doFrame method of a registered FrameCallback and passes the time, in nanoseconds, at which the frame started rendering. The interface looks like this:

  /**
   * Implement this interface to receive a callback when a new display frame is
   * being rendered. The callback is invoked on the {@link Looper} thread to
   * which the {@link Choreographer} is attached.
   */
  public interface FrameCallback {
      /**
       * Called when a new display frame is being rendered.
       * <p>
       * This method provides the time in nanoseconds when the frame started being rendered.
       * The frame time provides a stable time base for synchronizing animations
       * and drawing. It should be used instead of {@link SystemClock#uptimeMillis()}
       * or {@link System#nanoTime()} for animations and drawing in the UI. Using the frame
       * time helps to reduce inter-frame jitter because the frame time is fixed at the time
       * the frame was scheduled to start, regardless of when the animations or drawing
       * callback actually runs. All callbacks that run as part of rendering a frame will
       * observe the same frame time so using the frame time also helps to synchronize effects
       * that are performed by different callbacks.
       * </p><p>
       * Please note that the framework already takes care to process animations and
       * drawing using the frame time as a stable time base. Most applications should
       * not need to use the frame time information directly.
       * </p>
       *
       * @param frameTimeNanos The time in nanoseconds when the frame started being rendered,
       * in the {@link System#nanoTime()} timebase. Divide this value by {@code 1000000}
       * to convert it to the {@link SystemClock#uptimeMillis()} time base.
       */
      public void doFrame(long frameTimeNanos);
  }

The Javadoc confirms exactly what we described above. Now let's dig a little deeper: what actually drives the doFrame callback, and how do we turn it into a frame rate?

What drives Choreographer

Answering that requires a bit of Android rendering background. Android's rendering pipeline has been refined by Google over many releases, and the full picture is complex and involves a number of frameworks. At the bottom are libraries we know, such as Skia and OpenGL: Flutter, for example, draws with Skia, which is 2D and primarily CPU-based, while OpenGL can draw 3D and uses the GPU. What finally carries the pixels is a Surface. Every element is drawn and rendered onto this Surface canvas; each Window is associated with a Surface, WindowManager manages these windows and hands their data to SurfaceFlinger, and SurfaceFlinger accepts the buffers, composites them, and sends the result to the screen.
In other words, WindowManager provides SurfaceFlinger with buffers and window metadata, and SurfaceFlinger composites them through the Hardware Composer and outputs to the display. A Surface is drawn as multiple layers, which is where those buffers come in: before Android 4.1 a double-buffering scheme was used; since Android 4.1 it is triple-buffered.
And that's still not the whole story. At Google I/O 2012 Google announced Project Butter, and Android 4.1 switched on its core mechanism, the VSYNC signal. What is it? Picture one screen refresh as a pipeline: the CPU computes, the GPU renders, and the Display shows the result. VSYNC behaves like a queue in a producer-consumer model: ticks accumulate over time and each one is consumed as soon as it is produced. We already know the final buffer reaches the display through SurfaceFlinger; what VSYNC adds in between is an orderly schedule for the rendering steps, which keeps latency down. We also know that one VSYNC interval is about 16 ms: if a frame takes longer than 16 ms, the screen keeps showing the previous frame, and that is what we perceive as a dropped frame.
Why 16 ms? It comes from a simple calculation: 1000 ms / 60 frames ≈ 16.67 ms per frame. Under normal conditions Choreographer's doFrame is therefore called back roughly every 16 ms, and what drives it is exactly this VSYNC signal.
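That relationship is also how the frame interval is derived on a real device: one second divided by the display's refresh rate. A minimal sketch of my own (assuming you already have the default Display, e.g. via WindowManager#getDefaultDisplay()):

  import android.view.Display;

  // ~16,666,666 ns (about 16.67 ms) on a 60 Hz panel; ~8.33 ms on a 120 Hz panel.
  static long frameIntervalNanos(Display display) {
      float refreshRate = display.getRefreshRate(); // e.g. 60.0f
      return (long) (1_000_000_000L / refreshRate);
  }

You will see essentially the same computation later in Choreographer's own constructor: mFrameIntervalNanos = (long)(1000000000 / getRefreshRate()).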

How to compute the frame rate with Choreographer

Since doFrame normally fires about once every 16 ms, we can build on that to do a simple calculation. Walk through the following code to get the idea:

  // Timestamp (ns) of the frame that started the current measuring window
  private long mLastFrameTime;
  // Number of frames counted in the current window
  private int mFrameCount;

  Choreographer.getInstance().postFrameCallback(new Choreographer.FrameCallback() {
      @Override
      public void doFrame(long frameTimeNanos) {
          // First callback of a window (or right after a report): remember the start time
          if (mLastFrameTime == 0) {
              mLastFrameTime = frameTimeNanos;
          }
          // Time elapsed since the window started, converted from nanoseconds to milliseconds
          float diff = (frameTimeNanos - mLastFrameTime) / 1000000.0f;
          // Report the frame rate once every 500 ms
          if (diff > 500) {
              double fps = (((double) (mFrameCount * 1000L)) / diff);
              mFrameCount = 0;
              mLastFrameTime = 0;
              Log.d("doFrame", "doFrame: " + fps);
          } else {
              ++mFrameCount;
          }
          // Register again so we are called on the next VSYNC
          Choreographer.getInstance().postFrameCallback(this);
      }
  });
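
A practical aside (a sketch of my own, not something from Matrix): the callback above re-posts itself forever, so in a real Activity you would keep a reference to it, say in a field called mFrameCallback (a hypothetical name), start it when the UI becomes visible and remove it when it goes away:

  @Override
  protected void onResume() {
      super.onResume();
      mLastFrameTime = 0;
      mFrameCount = 0;
      Choreographer.getInstance().postFrameCallback(mFrameCallback);
  }

  @Override
  protected void onPause() {
      super.onPause();
      // Stop receiving VSYNC callbacks while the Activity is not visible.
      Choreographer.getInstance().removeFrameCallback(mFrameCallback);
  }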

Why measure over 500 ms? You could just as well use a full second; that's up to you. The point is simply that if doFrame is called back about 60 times within one second, things are basically healthy. Now that we know how to measure frame rate in code, let's analyze Matrix's frame-rate detection code and see what it does on top of that.

Analyzing Matrix's frame-rate detection code

The frame-rate detection code must live in the trace-canary module. Looking at the overall package layout we find an ITracer abstraction with four tracers underneath: AnrTracer, EvilMethodTracer, FrameTracer and StartupTracer. The names alone suggest that FrameTracer is the one related to frame rate, so we open FrameTracer, search for the string fps, and land on the following code, the private inner class FrameCollectItem of FrameTracer:

  private class FrameCollectItem {
      long sumFrameCost;
      int sumFrame = 0;

      void report() {
          // The frame rate is computed here: 1000.f * sumFrame / sumFrameCost.
          // Compare it with our earlier formula
          //   double fps = (((double) (mFrameCount * 1000L)) / diff);
          // essentially the same idea. sumFrameCost is the accumulated time window,
          // which could be 500 ms or one second depending on configuration.
          float fps = Math.min(60.f, 1000.f * sumFrame / sumFrameCost);
          MatrixLog.i(TAG, "[report] FPS:%s %s", fps, toString());
          try {
              // From here on it just builds the JSON report, so we won't dwell on it.
              TracePlugin plugin = Matrix.with().getPluginByClass(TracePlugin.class);
              if (null == plugin) {
                  return;
              }
              JSONObject dropLevelObject = new JSONObject();
              dropLevelObject.put(DropStatus.DROPPED_FROZEN.name(), dropLevel[DropStatus.DROPPED_FROZEN.index]);
              dropLevelObject.put(DropStatus.DROPPED_HIGH.name(), dropLevel[DropStatus.DROPPED_HIGH.index]);
              dropLevelObject.put(DropStatus.DROPPED_MIDDLE.name(), dropLevel[DropStatus.DROPPED_MIDDLE.index]);
              dropLevelObject.put(DropStatus.DROPPED_NORMAL.name(), dropLevel[DropStatus.DROPPED_NORMAL.index]);
              dropLevelObject.put(DropStatus.DROPPED_BEST.name(), dropLevel[DropStatus.DROPPED_BEST.index]);
              JSONObject dropSumObject = new JSONObject();
              dropSumObject.put(DropStatus.DROPPED_FROZEN.name(), dropSum[DropStatus.DROPPED_FROZEN.index]);
              dropSumObject.put(DropStatus.DROPPED_HIGH.name(), dropSum[DropStatus.DROPPED_HIGH.index]);
              dropSumObject.put(DropStatus.DROPPED_MIDDLE.name(), dropSum[DropStatus.DROPPED_MIDDLE.index]);
              dropSumObject.put(DropStatus.DROPPED_NORMAL.name(), dropSum[DropStatus.DROPPED_NORMAL.index]);
              dropSumObject.put(DropStatus.DROPPED_BEST.name(), dropSum[DropStatus.DROPPED_BEST.index]);
              JSONObject resultObject = new JSONObject();
              resultObject = DeviceUtil.getDeviceInfo(resultObject, plugin.getApplication());
              resultObject.put(SharePluginInfo.ISSUE_SCENE, visibleScene);
              resultObject.put(SharePluginInfo.ISSUE_DROP_LEVEL, dropLevelObject);
              resultObject.put(SharePluginInfo.ISSUE_DROP_SUM, dropSumObject);
              resultObject.put(SharePluginInfo.ISSUE_FPS, fps);
              Issue issue = new Issue();
              issue.setTag(SharePluginInfo.TAG_PLUGIN_FPS);
              issue.setContent(resultObject);
              plugin.onDetectIssue(issue);
          } catch (JSONException e) {
              MatrixLog.e(TAG, "json error", e);
          } finally {
              sumFrame = 0;
              sumDroppedFrames = 0;
              sumFrameCost = 0;
          }
      }
  }

Following this code, I traced where sumFrame is used to see what happens to it. You can skim this part quickly, because there is no concrete code to show yet: we already know a listener could be registered via Choreographer.FrameCallback, so to verify Matrix Trace's frame-rate approach quickly we skip the details and only paste code once we reach the core logic.

The trail goes like this: a collect function does the ++ on sumFrame; one level up, FrameTracer has another inner class, FPSCollector; above that, a doReplay method calls doReplayInner, and it is IDoFrameListener that invokes doReplay. FPSCollector itself extends IDoFrameListener, so IDoFrameListener is the next stop.

This is not quite what we analyzed before: there is no sign of Choreographer.FrameCallback anywhere, even though the calculation itself looks similar. Not convinced, I kept climbing up the call chain. Eventually a doFrame function appears; it resembles the FrameCallback we know, but it isn't the same thing. Keep going and we reach the UIThreadMonitor class, and one level higher we find it is wired up in its init function. Time to look at the code:

  LooperMonitor.register(new LooperMonitor.LooperDispatchListener() {
      @Override
      public boolean isValid() {
          return isAlive;
      }

      @Override
      public void dispatchStart() {
          super.dispatchStart();
          UIThreadMonitor.this.dispatchBegin();
      }

      @Override
      public void dispatchEnd() {
          super.dispatchEnd();
          UIThreadMonitor.this.dispatchEnd();
      }
  });

So what exactly is LooperMonitor, and why can it sense the frame rate? Let's see what it is:

  class LooperMonitor implements MessageQueue.IdleHandler

A quick lookup shows that MessageQueue.IdleHandler lets you register an action to run whenever the thread is idle: as soon as the message queue has nothing left to process, the registered work is executed. That already differs from our earlier approach, which never asked whether the thread was idle and computed the frame rate all the time. At this point it is clear that Matrix does not use FrameCallback at all but computes the frame rate by another route; we'll get to what that route is, but first a minimal IdleHandler sketch for reference, and then let's keep tracing LooperDispatchListener.
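The standard API in a nutshell (my own minimal sketch, not Matrix code): register an IdleHandler on the main thread's MessageQueue, and the framework calls queueIdle() only when that queue has nothing left to process.

  import android.os.Looper;
  import android.os.MessageQueue;

  // Hypothetical helper; call it from the main thread so myQueue() is the main queue.
  static void watchIdle() {
      Looper.myQueue().addIdleHandler(new MessageQueue.IdleHandler() {
          @Override
          public boolean queueIdle() {
              // Runs only when the main MessageQueue is empty: a good moment for
              // deferred bookkeeping (e.g. flushing collected frame data).
              return true; // true = stay registered; false = remove this handler
          }
      });
  }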
Tracing the dispatch, we find that a LooperPrinter is the thing driving it. Let's look at LooperPrinter:

  class LooperPrinter implements Printer

  // A printer? This is the framework interface it implements:
  public interface Printer {
      /**
       * Write a line of text to the output. There is no need to terminate
       * the given string with a newline.
       */
      void println(String x);
  }
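
Before going further, here is a minimal sketch of why a Printer is useful at all (my own sketch, not Matrix's LooperPrinter): Looper.loop() calls println() once right before dispatching each message, with a line starting with ">>>>> Dispatching", and once right after, with a line starting with "<<<<< Finished", so whoever owns the Printer can bracket every message handled on that thread:

  import android.os.Looper;
  import android.os.SystemClock;
  import android.util.Log;
  import android.util.Printer;

  final class LooperProbe { // hypothetical helper, not a Matrix class
      static void install() {
          // Note: Matrix's resetPrinter() below is more careful, it keeps the
          // original printer and forwards every line to it.
          Looper.getMainLooper().setMessageLogging(new Printer() {
              private long dispatchStartMs;

              @Override
              public void println(String x) {
                  if (x.startsWith(">>>>> Dispatching")) {
                      // Looper.loop() prints this right before dispatching a message
                      dispatchStartMs = SystemClock.uptimeMillis();
                  } else if (x.startsWith("<<<<< Finished")) {
                      // ...and this right after the message has been handled
                      long costMs = SystemClock.uptimeMillis() - dispatchStartMs;
                      Log.d("LooperProbe", "main-thread message took " + costMs + " ms");
                  }
              }
          });
      }
  }

Those two println calls are exactly where Matrix's dispatchStart()/dispatchEnd() hooks fire.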

Now let's see where this LooperPrinter object is created. Searching for references leads to the following code:

  private synchronized void resetPrinter() {
      Printer originPrinter = null;
      try {
          if (!isReflectLoggingError) {
              originPrinter = ReflectUtils.get(looper.getClass(), "mLogging", looper);
              if (originPrinter == printer && null != printer) {
                  return;
              }
          }
      } catch (Exception e) {
          isReflectLoggingError = true;
          Log.e(TAG, "[resetPrinter] %s", e);
      }
      if (null != printer) {
          MatrixLog.w(TAG, "maybe thread:%s printer[%s] was replace other[%s]!",
                  looper.getThread().getName(), printer, originPrinter);
      }
      // setMessageLogging installs the Printer that Looper.loop() uses for its
      // dispatch log lines, so from here on that logging goes through LooperPrinter.
      looper.setMessageLogging(printer = new LooperPrinter(originPrinter));
      if (null != originPrinter) {
          MatrixLog.i(TAG, "reset printer, originPrinter[%s] in %s", originPrinter, looper.getThread().getName());
      }
  }

Following this looper field, it turns out to be none other than the main thread's Looper. The main thread is the one that does all the UI refreshing, so that's the trick: Matrix piggybacks on the logging hook that Looper already exposes, and defers its data processing to idle time, to monitor the frame rate among other things. A clean design worth learning from. I also noticed another detail: it still relies on Choreographer for frame timing, using reflection to grab fields such as:

  // The frame interval
  frameIntervalNanos = ReflectUtils.reflectObject(choreographer, "mFrameIntervalNanos", Constants.DEFAULT_FRAME_DURATION);
  // The VSYNC receiver
  vsyncReceiver = ReflectUtils.reflectObject(choreographer, "mDisplayEventReceiver", null);

The frameTimeNanos passed to the doFrame callback earlier is, in fact, obtained from this vsyncReceiver.

So judging from the source, computing the frame rate really cannot do without Choreographer. Which brings up the next question.

Why Looper's logging mechanism can be used to measure frame rate

If you have the same doubt I did, a look at Choreographer's source code clears it up:

  private static final ThreadLocal<Choreographer> sThreadInstance =
          new ThreadLocal<Choreographer>() {
              @Override
              protected Choreographer initialValue() {
                  Looper looper = Looper.myLooper();
                  if (looper == null) {
                      throw new IllegalStateException("The current thread must have a looper!");
                  }
                  Choreographer choreographer = new Choreographer(looper, VSYNC_SOURCE_APP);
                  if (looper == Looper.getMainLooper()) {
                      mMainInstance = choreographer;
                  }
                  return choreographer;
              }
          };

From this code we can conclude that Choreographer is thread-private: a variable created through ThreadLocal is only accessible from the thread that created it, so each thread has its own Choreographer, and the main thread's instance is mMainInstance. A small illustration of this follows; after that, on to the constructor.
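A small illustration of that point (my own sketch, not framework or Matrix code): getInstance() goes through the ThreadLocal above, so it only works on a thread that owns a Looper, and each such thread gets its own instance.

  import android.os.Handler;
  import android.os.HandlerThread;
  import android.util.Log;
  import android.view.Choreographer;

  // On a plain thread without a Looper, getInstance() throws
  // IllegalStateException("The current thread must have a looper!").
  new Thread(new Runnable() {
      @Override
      public void run() {
          try {
              Choreographer.getInstance();
          } catch (IllegalStateException e) {
              Log.w("ChoreographerDemo", e.getMessage());
          }
      }
  }).start();

  // On a HandlerThread (which has a Looper), it succeeds and returns an instance
  // distinct from the main thread's mMainInstance.
  HandlerThread ht = new HandlerThread("with-looper");
  ht.start();
  new Handler(ht.getLooper()).post(new Runnable() {
      @Override
      public void run() {
          Log.d("ChoreographerDemo", "got " + Choreographer.getInstance()
                  + " on " + Thread.currentThread().getName());
      }
  });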

  private Choreographer(Looper looper, int vsyncSource) {
      mLooper = looper;
      mHandler = new FrameHandler(looper);
      mDisplayEventReceiver = USE_VSYNC
              ? new FrameDisplayEventReceiver(looper, vsyncSource)
              : null;
      mLastFrameTimeNanos = Long.MIN_VALUE;
      mFrameIntervalNanos = (long)(1000000000 / getRefreshRate());
      mCallbackQueues = new CallbackQueue[CALLBACK_LAST + 1];
      for (int i = 0; i <= CALLBACK_LAST; i++) {
          mCallbackQueues[i] = new CallbackQueue();
      }
      // b/68769804: For low FPS experiments.
      setFPSDivisor(SystemProperties.getInt(ThreadedRenderer.DEBUG_FPS_DIVISOR, 1));
  }

This is Choreographer's constructor, and it shows that a Choreographer is created from a Looper; the two have a one-to-one relationship, so a thread that has a Looper also has its own Choreographer. Two members stand out here, FrameHandler and FrameDisplayEventReceiver. We don't yet know what they are for, so keep reading:

  private final class FrameHandler extends Handler {
      public FrameHandler(Looper looper) {
          super(looper);
      }

      @Override
      public void handleMessage(Message msg) {
          switch (msg.what) {
              case MSG_DO_FRAME:
                  doFrame(System.nanoTime(), 0);
                  break;
              case MSG_DO_SCHEDULE_VSYNC:
                  doScheduleVsync();
                  break;
              case MSG_DO_SCHEDULE_CALLBACK:
                  doScheduleCallback(msg.arg1);
                  break;
          }
      }
  }

  void doFrame(long frameTimeNanos, int frame) {
      final long startNanos;
      synchronized (mLock) {
          if (!mFrameScheduled) {
              return; // no work to do
          }
          if (DEBUG_JANK && mDebugPrintNextFrameTimeDelta) {
              mDebugPrintNextFrameTimeDelta = false;
              Log.d(TAG, "Frame time delta: "
                      + ((frameTimeNanos - mLastFrameTimeNanos) * 0.000001f) + " ms");
          }
          long intendedFrameTimeNanos = frameTimeNanos;
          startNanos = System.nanoTime();
          final long jitterNanos = startNanos - frameTimeNanos;
          if (jitterNanos >= mFrameIntervalNanos) {
              final long skippedFrames = jitterNanos / mFrameIntervalNanos;
              if (skippedFrames >= SKIPPED_FRAME_WARNING_LIMIT) {
                  Log.i(TAG, "Skipped " + skippedFrames + " frames! "
                          + "The application may be doing too much work on its main thread.");
              }
              final long lastFrameOffset = jitterNanos % mFrameIntervalNanos;
              if (DEBUG_JANK) {
                  Log.d(TAG, "Missed vsync by " + (jitterNanos * 0.000001f) + " ms "
                          + "which is more than the frame interval of "
                          + (mFrameIntervalNanos * 0.000001f) + " ms! "
                          + "Skipping " + skippedFrames + " frames and setting frame "
                          + "time to " + (lastFrameOffset * 0.000001f) + " ms in the past.");
              }
              frameTimeNanos = startNanos - lastFrameOffset;
          }
          if (frameTimeNanos < mLastFrameTimeNanos) {
              if (DEBUG_JANK) {
                  Log.d(TAG, "Frame time appears to be going backwards. May be due to a "
                          + "previously skipped frame. Waiting for next vsync.");
              }
              scheduleVsyncLocked();
              return;
          }
          if (mFPSDivisor > 1) {
              long timeSinceVsync = frameTimeNanos - mLastFrameTimeNanos;
              if (timeSinceVsync < (mFrameIntervalNanos * mFPSDivisor) && timeSinceVsync > 0) {
                  scheduleVsyncLocked();
                  return;
              }
          }
          mFrameInfo.setVsync(intendedFrameTimeNanos, frameTimeNanos);
          mFrameScheduled = false;
          mLastFrameTimeNanos = frameTimeNanos;
      }
      try {
          Trace.traceBegin(Trace.TRACE_TAG_VIEW, "Choreographer#doFrame");
          AnimationUtils.lockAnimationClock(frameTimeNanos / TimeUtils.NANOS_PER_MS);
          mFrameInfo.markInputHandlingStart();
          doCallbacks(Choreographer.CALLBACK_INPUT, frameTimeNanos);
          mFrameInfo.markAnimationsStart();
          doCallbacks(Choreographer.CALLBACK_ANIMATION, frameTimeNanos);
          mFrameInfo.markPerformTraversalsStart();
          doCallbacks(Choreographer.CALLBACK_TRAVERSAL, frameTimeNanos);
          doCallbacks(Choreographer.CALLBACK_COMMIT, frameTimeNanos);
      } finally {
          AnimationUtils.unlockAnimationClock();
          Trace.traceEnd(Trace.TRACE_TAG_VIEW);
      }
      if (DEBUG_FRAMES) {
          final long endNanos = System.nanoTime();
          Log.d(TAG, "Frame " + frame + ": Finished, took "
                  + (endNanos - startNanos) * 0.000001f + " ms, latency "
                  + (startNanos - frameTimeNanos) * 0.000001f + " ms.");
      }
  }

From this code we see that FrameHandler receives messages and handles them by calling Choreographer's doFrame function. Note that this doFrame is not the doFrame of the FrameCallback you pass to postFrameCallback; however, searching the source shows that FrameCallback.doFrame is ultimately triggered from here, inside the doCallbacks calls. We'll skip that detail and instead ask who sends FrameHandler its messages, which brings us to FrameDisplayEventReceiver:

  private final class FrameDisplayEventReceiver extends DisplayEventReceiver
          implements Runnable {
      private boolean mHavePendingVsync;
      private long mTimestampNanos;
      private int mFrame;

      public FrameDisplayEventReceiver(Looper looper, int vsyncSource) {
          super(looper, vsyncSource);
      }

      @Override
      public void onVsync(long timestampNanos, int builtInDisplayId, int frame) {
          // Ignore vsync from secondary display.
          // This can be problematic because the call to scheduleVsync() is a one-shot.
          // We need to ensure that we will still receive the vsync from the primary
          // display which is the one we really care about. Ideally we should schedule
          // vsync for a particular display.
          // At this time Surface Flinger won't send us vsyncs for secondary displays
          // but that could change in the future so let's log a message to help us remember
          // that we need to fix this.
          if (builtInDisplayId != SurfaceControl.BUILT_IN_DISPLAY_ID_MAIN) {
              Log.d(TAG, "Received vsync from secondary display, but we don't support "
                      + "this case yet. Choreographer needs a way to explicitly request "
                      + "vsync for a specific display to ensure it doesn't lose track "
                      + "of its scheduled vsync.");
              scheduleVsync();
              return;
          }
          // Post the vsync event to the Handler.
          // The idea is to prevent incoming vsync events from completely starving
          // the message queue. If there are no messages in the queue with timestamps
          // earlier than the frame time, then the vsync event will be processed immediately.
          // Otherwise, messages that predate the vsync event will be handled first.
          long now = System.nanoTime();
          if (timestampNanos > now) {
              Log.w(TAG, "Frame time is " + ((timestampNanos - now) * 0.000001f)
                      + " ms in the future! Check that graphics HAL is generating vsync "
                      + "timestamps using the correct timebase.");
              timestampNanos = now;
          }
          if (mHavePendingVsync) {
              Log.w(TAG, "Already have a pending vsync event. There should only be "
                      + "one at a time.");
          } else {
              mHavePendingVsync = true;
          }
          mTimestampNanos = timestampNanos;
          mFrame = frame;
          Message msg = Message.obtain(mHandler, this);
          msg.setAsynchronous(true);
          mHandler.sendMessageAtTime(msg, timestampNanos / TimeUtils.NANOS_PER_MS);
      }

      @Override
      public void run() {
          mHavePendingVsync = false;
          doFrame(mTimestampNanos, mFrame);
      }
  }

Searching the source for onVsync shows that it is triggered by DisplayEventReceiver in the android.view package. DisplayEventReceiver is actually initialized in the C++ layer and listens for the VSYNC signal, which is delivered by SurfaceFlinger. So now we know: FrameDisplayEventReceiver receives the onVsync signal and then sends a message through mHandler, i.e. the FrameHandler above. You might still feel something is off, because only the MSG_DO_FRAME case triggers doFrame, and no message like mHandler.obtainMessage(MSG_DO_FRAME) is ever sent here. But look closely at Message.obtain(mHandler, this): this is the FrameDisplayEventReceiver itself, which implements Runnable, so when FrameHandler receives the message it runs FrameDisplayEventReceiver's run() method, and run() calls doFrame. Now the chain is complete.
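To see the Message.obtain(Handler, Runnable) trick in isolation, here is a small sketch of my own (not framework code): a Message that carries a Runnable callback is executed as that callback, and handleMessage() is never reached for it.

  import android.os.Handler;
  import android.os.Looper;
  import android.os.Message;
  import android.util.Log;

  Handler handler = new Handler(Looper.getMainLooper()) {
      @Override
      public void handleMessage(Message msg) {
          // Only messages without a callback end up here.
          Log.d("MsgDemo", "handleMessage, what=" + msg.what);
      }
  };

  // Equivalent to what FrameDisplayEventReceiver does with Message.obtain(mHandler, this):
  // the Runnable becomes msg.callback, so the Handler runs it directly.
  Message msg = Message.obtain(handler, new Runnable() {
      @Override
      public void run() {
          Log.d("MsgDemo", "callback ran instead of handleMessage");
      }
  });
  handler.sendMessage(msg);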
Good, now we can sum up why this works:
Choreographer's onVsync messages are consumed through the Looper of the thread it lives on, so by monitoring the main thread's Looper messages we are also, indirectly, monitoring the frame rate. That is the whole idea.

Summary

  • Install a Printer on the main Looper to intercept message dispatch
  • Compute the frame rate from those dispatch events
  • Use MessageQueue.IdleHandler to avoid busy periods and process the data when the thread is idle

That's about it. If you find anything new, or spot something I got wrong, feel free to point it out in the comments.