FlinkKafkaProducer is built on TwoPhaseCommitSinkFunction, that is, on two-phase commit. For the principles of two-phase commit, see "An Overview of End-to-End Exactly-Once Processing in Apache Flink"; this article will not restate them. Instead, it analyzes how the FlinkKafkaProducer source code implements two-phase commit and, in combination with Kafka, achieves end-to-end Exactly-Once semantics.

TwoPhaseCommitSinkFunction Analysis

```java
public abstract class TwoPhaseCommitSinkFunction<IN, TXN, CONTEXT>
        extends RichSinkFunction<IN>
        implements CheckpointedFunction, CheckpointListener
```

TwoPhaseCommitSinkFunction implements the CheckpointedFunction and CheckpointListener interfaces. It first opens a transaction in the initializeState method. For a Flink sink's two-phase commit, the first phase is the execution of CheckpointedFunction#snapshotState; once the checkpoints of all tasks have completed, each task executes CheckpointListener#notifyCheckpointComplete, which is the so-called second phase.
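The mapping between checkpoint callbacks and the two commit phases can be sketched with a simplified stand-in. All names below are illustrative, not Flink's actual classes; the point is only the order of operations:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for the 2PC lifecycle (illustrative only, not Flink's API).
class TwoPhaseLifecycle {
    final List<String> log = new ArrayList<>();

    void initializeState()          { log.add("beginTransaction"); } // open the first transaction
    void snapshotState()            { log.add("preCommit");          // phase 1: pre-commit
                                      log.add("beginTransaction"); } // open the next transaction
    void notifyCheckpointComplete() { log.add("commit"); }           // phase 2: commit

    public static void main(String[] args) {
        TwoPhaseLifecycle sink = new TwoPhaseLifecycle();
        sink.initializeState();          // job start
        sink.snapshotState();            // a checkpoint is triggered
        sink.notifyCheckpointComplete(); // all tasks acknowledged the checkpoint
        System.out.println(sink.log);    // [beginTransaction, preCommit, beginTransaction, commit]
    }
}
```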

FlinkKafkaProducer Phase One Analysis


```java
@Override
public void snapshotState(FunctionSnapshotContext context) throws Exception {
    // this is like the pre-commit of a 2-phase-commit transaction
    // we are ready to commit and remember the transaction
    checkState(currentTransactionHolder != null, "bug: no transaction object when performing state snapshot");

    long checkpointId = context.getCheckpointId();
    LOG.debug("{} - checkpoint {} triggered, flushing transaction '{}'", name(), context.getCheckpointId(), currentTransactionHolder);

    preCommit(currentTransactionHolder.handle);
    pendingCommitTransactions.put(checkpointId, currentTransactionHolder);
    LOG.debug("{} - stored pending transactions {}", name(), pendingCommitTransactions);

    currentTransactionHolder = beginTransactionInternal();
    LOG.debug("{} - started new transaction '{}'", name(), currentTransactionHolder);

    state.clear();
    state.add(new State<>(
        this.currentTransactionHolder,
        new ArrayList<>(pendingCommitTransactions.values()),
        userContext));
}
```

The core of this code is the following:

1. First the preCommit method is executed. In EXACTLY_ONCE mode it calls flush, which immediately sends the buffered data to the target topic. If you consume this topic at that point, you must set isolation.level=read_committed, which means the consuming application cannot see messages that belong to uncommitted transactions.
```java
@Override
protected void preCommit(FlinkKafkaProducer.KafkaTransactionState transaction) throws FlinkKafkaException {
    switch (semantic) {
        case EXACTLY_ONCE:
        case AT_LEAST_ONCE:
            flush(transaction);
            break;
        case NONE:
            break;
        default:
            throw new UnsupportedOperationException("Not implemented semantic");
    }
    checkErroneous();
}
```


Note that the first send and flush calls both operate on the transaction that was opened in the initializeState method:

```java
transaction.producer.send(record, callback);
transaction.producer.flush();
```
2. pendingCommitTransactions stores the transaction belonging to each checkpoint, and a new producer transaction is opened for the next checkpoint via currentTransactionHolder = beginTransactionInternal(); the next round of send and flush calls runs inside that new transaction. In other words, during phase one every checkpoint has its own transaction, and these are kept in pendingCommitTransactions.
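The bookkeeping described above can be reproduced in isolation with a simplified model, where transactions are plain strings rather than Flink's TransactionHolder objects:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified model: each snapshot parks the current transaction under its
// checkpoint id (after pre-commit) and opens a fresh one for the next round.
class PendingTransactions {
    private final Map<Long, String> pending = new LinkedHashMap<>();
    private String current = "txn-0";
    private int counter = 0;

    void snapshotState(long checkpointId) {
        pending.put(checkpointId, current); // remember for phase 2
        current = "txn-" + (++counter);     // stands in for beginTransactionInternal()
    }

    Map<Long, String> pending() { return pending; }
    String current() { return current; }

    public static void main(String[] args) {
        PendingTransactions sink = new PendingTransactions();
        sink.snapshotState(1L);
        sink.snapshotState(2L);
        System.out.println(sink.pending()); // {1=txn-0, 2=txn-1}
        System.out.println(sink.current()); // txn-2
    }
}
```

Each completed snapshot leaves exactly one pending transaction per checkpoint, which is what phase two will later commit.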

FlinkKafkaProducer Phase Two Analysis

Once the checkpoints of all tasks have completed, the second phase, the commit, begins:

```java
@Override
public final void notifyCheckpointComplete(long checkpointId) throws Exception {
    // the following scenarios are possible here
    //
    // (1) there is exactly one transaction from the latest checkpoint that
    //     was triggered and completed. That should be the common case.
    //     Simply commit that transaction in that case.
    //
    // (2) there are multiple pending transactions because one previous
    //     checkpoint was skipped. That is a rare case, but can happen
    //     for example when:
    //
    //     - the master cannot persist the metadata of the last
    //       checkpoint (temporary outage in the storage system) but
    //       could persist a successive checkpoint (the one notified here)
    //
    //     - other tasks could not persist their status during
    //       the previous checkpoint, but did not trigger a failure because they
    //       could hold onto their state and could successfully persist it in
    //       a successive checkpoint (the one notified here)
    //
    //     In both cases, the prior checkpoint never reach a committed state, but
    //     this checkpoint is always expected to subsume the prior one and cover all
    //     changes since the last successful one. As a consequence, we need to commit
    //     all pending transactions.
    //
    // (3) Multiple transactions are pending, but the checkpoint complete notification
    //     relates not to the latest. That is possible, because notification messages
    //     can be delayed (in an extreme case till arrive after a succeeding checkpoint
    //     was triggered) and because there can be concurrent overlapping checkpoints
    //     (a new one is started before the previous fully finished).
    //
    // ==> There should never be a case where we have no pending transaction here
    //
    Iterator<Map.Entry<Long, TransactionHolder<TXN>>> pendingTransactionIterator = pendingCommitTransactions.entrySet().iterator();
    checkState(pendingTransactionIterator.hasNext(), "checkpoint completed, but no transaction pending");
    Throwable firstError = null;

    while (pendingTransactionIterator.hasNext()) {
        Map.Entry<Long, TransactionHolder<TXN>> entry = pendingTransactionIterator.next();
        Long pendingTransactionCheckpointId = entry.getKey();
        TransactionHolder<TXN> pendingTransaction = entry.getValue();
        if (pendingTransactionCheckpointId > checkpointId) {
            continue;
        }

        LOG.info("{} - checkpoint {} complete, committing transaction {} from checkpoint {}",
            name(), checkpointId, pendingTransaction, pendingTransactionCheckpointId);

        logWarningIfTimeoutAlmostReached(pendingTransaction);
        try {
            commit(pendingTransaction.handle);
        } catch (Throwable t) {
            if (firstError == null) {
                firstError = t;
            }
        }

        LOG.debug("{} - committed checkpoint transaction {}", name(), pendingTransaction);

        pendingTransactionIterator.remove();
    }

    if (firstError != null) {
        throw new FlinkRuntimeException("Committing one of transactions failed, logging first encountered failure",
            firstError);
    }
}
```

In this phase, all transactions held in pendingCommitTransactions are committed:

```java
@Override
protected void commit(FlinkKafkaProducer.KafkaTransactionState transaction) {
    if (transaction.isTransactional()) {
        try {
            transaction.producer.commitTransaction();
        } finally {
            recycleTransactionalProducer(transaction.producer);
        }
    }
}
```
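The key rule in notifyCheckpointComplete — commit every pending transaction whose checkpoint id is at most the notified id, and keep later ones pending — can be isolated in a small sketch (simplified model with string transactions, not Flink code):

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

class CommitOnNotify {
    // Commits (here: removes) all entries with checkpoint id <= the notified id,
    // leaving transactions of later, not-yet-notified checkpoints pending.
    static Map<Long, String> onCheckpointComplete(Map<Long, String> pending, long checkpointId) {
        Iterator<Map.Entry<Long, String>> it = pending.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<Long, String> entry = it.next();
            if (entry.getKey() > checkpointId) {
                continue; // belongs to a later checkpoint, keep it pending
            }
            // commit(entry.getValue()) would run here
            it.remove();
        }
        return pending;
    }

    public static void main(String[] args) {
        Map<Long, String> pending = new LinkedHashMap<>();
        pending.put(1L, "txn-0");
        pending.put(2L, "txn-1");
        pending.put(3L, "txn-2");
        System.out.println(onCheckpointComplete(pending, 2L)); // {3=txn-2}
    }
}
```

This covers scenario (2) in the comments above: if checkpoint 1's notification was lost, the notification for checkpoint 2 still commits both transactions 1 and 2.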

At this point a read_committed consumer can finally see the data, and the entire producer-side flow is complete.

Exactly-Once Analysis

When both the source and the sink are Kafka, Flink achieves end-to-end Exactly-Once semantics mainly because, in phase one, FlinkKafkaConsumer saves the consumed offsets as part of the checkpoint, and only after the checkpoints of all tasks succeed does FlinkKafkaProducer commit its transactions in phase two, finishing the producer flow. This process depends heavily on Kafka's producer transaction mechanism; see the Kafka transactions documentation for details.
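Why this ordering yields exactly-once can be shown with a deliberately tiny end-to-end model (not Flink code; offsets and transactions are modeled as plain fields): both the consumer offset and the producer transaction are tied to the same checkpoint, so a failure before notifyCheckpointComplete replays from the old offset while the uncommitted transaction is aborted, producing neither loss nor duplicates.

```java
import java.util.ArrayList;
import java.util.List;

class ExactlyOnceModel {
    long committedOffset = 0;                       // consumer offset stored in the checkpoint
    final List<String> visible = new ArrayList<>(); // what a read_committed consumer sees
    List<String> openTxn = new ArrayList<>();       // producer transaction buffer
    long snapshotOffset = 0;

    void process(List<String> source) {             // read from the last committed offset
        for (long i = committedOffset; i < source.size(); i++) {
            openTxn.add(source.get((int) i));
        }
        snapshotOffset = source.size();             // phase 1: pre-commit + offset snapshot
    }

    void checkpointComplete() {                     // phase 2: both take effect together
        committedOffset = snapshotOffset;
        visible.addAll(openTxn);
        openTxn = new ArrayList<>();
    }

    void fail() { openTxn = new ArrayList<>(); }    // recovery aborts the open transaction

    public static void main(String[] args) {
        List<String> source = List.of("a", "b");
        ExactlyOnceModel m = new ExactlyOnceModel();
        m.process(source);
        m.fail();               // crash before notifyCheckpointComplete
        m.process(source);      // replay from the old offset
        m.checkpointComplete();
        System.out.println(m.visible); // [a, b] -- no duplicates, no loss
    }
}
```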

Summary

This article analyzed how Flink, in combination with Kafka, implements Exactly-Once semantics.

Note: this article is based on Flink 1.9.0 and Kafka 2.3.