- 21.11.19 6 SJL blog posts
- 21.11.1 Sampling strategies
- 21.11.1 Focus Attention: Promoting Faithfulness and Diversity in Summarization【ACL2021】 pending
- 21.11.1 BERTSCORE: EVALUATING TEXT GENERATION WITH BERT 【ICLR2020】
- 21.11.3 On Faithfulness and Factuality in Abstractive Summarization【ACL 2020】
- 21.11.4 PEGASUS Pre-training with Extracted Gap-sentences for Abstractive Summarization【ICML 2020】
- 21.11.5 Fairseq Code review
- 21.11.6 Notes from EMNLP 2021 paper preview talks
- 21.11.15 Preliminary paper survey
- 21.11.16 NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework
- 21.11.16 Factual Error Correction for Abstractive Summarization Models EMNLP20 short
- 21.11.17 3 SJL blog posts
- 21.11.18 Discourse Understanding and Factual Consistency in Abstractive Summarization EACL2021
- 21.11.18 8 SJL blog posts
- 21.11.18 Repost: 苏巨's discussion of BERT initialization, Norm, and vanishing gradients
- 21.11.19 BM25 and TF-IDF (repost)
- 21.11.22 An NLP multiprocessing framework: how to process text efficiently?
- 21.11.22 NJU HPC Tips
- 21.11.22 DEEP: DEnoising Entity Pre-training for Neural Machine Translation
- 21.11.24 Similarity analysis
- 21.11.26 Prefix-Tuning: Optimizing Continuous Prompts for Generation 【ACL2021】
- 21.11.27 BARTSCORE: Evaluating Generated Text as Text Generation 【NeurIPS2021】
- 21.11.27 Factual Probing Is [MASK]: Learning vs. Learning to Recall 【NAACL2021】
- 21.11.28
- 21.11.29 Are Factuality Checkers Reliable? Adversarial Meta-evaluation of Factuality in Summarization 【EMNLP21】
- 21.11.30 MOFE: MIXTURE OF FACTUAL EXPERTS FOR CONTROLLING HALLUCINATIONS IN ABSTRACTIVE SUMMARIZATION
- 21.11.30 SUMMAC: Re-Visiting NLI-based Models for Inconsistency Detection in Summarization 【TACL】
- 21.11.30 Dialogue Inspectional Summarization with Factual Inconsistency Awareness
- 21.11.30 Fine-grained Factual Consistency Assessment for Abstractive Summarization Models 【EMNLP21】