Natural Language Processing
21.11.15 Preliminary Paper Survey
2023-03-22 13:51:05
Academic Notes
2022.05
Wat zei je? Detecting Out-of-Distribution Translations with Variational Transformers
BRIO: Bringing Order to Abstractive Summarization
SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization
2022.03
22.03.08 Improving Evidence Retrieval for Automated Explainable Fact-Checking
22.03.07 Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels
22.03.05 Controllable Natural Language Generation with Contrastive Prefixes
22.03.05 Text Smoothing: Enhance Various Data Augmentation Methods on Text Classification Tasks
22.03.05 Exploring the Impact of Negative Samples of Contrastive Learning: A Case Study of Sentence Embedding
22.03.04 SpanBert
22.03.02 Improving Language Models by Retrieving from Trillions of Tokens
22.03.02 Retrieval Augmented NLP
22.03.01 Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification
2022.02
22.02.28 Bert-Whitening
22.02.27 Anisotropy
22.02.27 Flooding-X: Improving BERT’s Resistance to Adversarial Attacks via Loss-Restricted Fine-Tuning
22.02.26 Impact of Pretraining Term Frequencies on Few-Shot Reasoning
22.02.19 A Primer in BERTology: What We Know About How BERT Works
22.02.19 Model Compression: From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression
2022.01
22.01.28 Su Jianlin's Blog
22.01.23 Rereading DQN
22.01.22 DATASET DISTILLATION
22.01.22 Adversarial NLI: A New Benchmark for Natural Language Understanding【ACL20】
22.01.21 Rereading Variational Inference
22.01.18 ExtraPhrase: Efficient Data Augmentation for Abstractive Summarization
2021.12
21.12.30 Robust Neural Machine Translation with Doubly Adversarial Inputs
21.12.30 Towards a Universal Continuous Knowledge Base
21.12.20 Asking and Answering Questions to Evaluate the Factual Consistency of Summaries
21.12.20 Knowledge Graph-Augmented Abstractive Summarization with Semantic-Driven Cloze Reward
21.12.20 FEQA: A Question Answering Evaluation Framework for Faithfulness Assessment in Abstractive Summarization
21.12.20 Ranking Generated Summaries by Correctness: An Interesting but Challenging Application for Natural Language Inference
21.12.20 Ensure the Correctness of the Summary: Incorporate Entailment Knowledge into Abstractive Sentence Summarization
21.12.20 Mind The Facts: Knowledge-Boosted Coherent Abstractive Text Summarization [NIPS19]
21.12.19 Improving Truthfulness of Headline Generation
21.12.18 Multi-Fact Correction in Abstractive Text Summarization
21.12.18 Reducing Quantity Hallucinations in Abstractive Summarization【EMNLP20】
21.12.18 GO FIGURE: A Meta Evaluation of Factuality in Summarization
21.12.18 Truth or Error? Towards systematic analysis of factual errors in abstractive summaries
21.12.17 Constrained Abstractive Summarization: Preserving Factual Consistency with Constrained Generation
21.12.17 Detecting Hallucinated Content in Conditional Neural Sequence Generation
21.12.16 Assessing The Factual Accuracy of Generated Text
21.12.15 Enhancing Factual Consistency of Abstractive Summarization
21.12.15 QuestEval: Summarization Asks for Fact-based Evaluation
21.12.11 Improving Faithfulness in Abstractive Summarization with Contrast Candidate Generation and Selection 【NAACL2021】
21.12.9 Incorporating Commonsense Knowledge into Abstractive Dialogue Summarization via Heterogeneous Graph Networks
21.12.2 Understanding Factuality in Abstractive Summarization with FRANK: A Benchmark for Factuality Metrics
21.12.2 Annotating and Modeling Fine-grained Factuality in Summarization
21.12.1 Inspecting the Factuality of Hallucinated Entities in Abstractive Summarization
21.12.1 Investigating Crowdsourcing Protocols for Evaluating the Factual Consistency of Summaries
2021.11
21.11.30 MOFE: MIXTURE OF FACTUAL EXPERTS FOR CONTROLLING HALLUCINATIONS IN ABSTRACTIVE SUMMARIZATION
21.11.30 Fine-grained Factual Consistency Assessment for Abstractive Summarization Models 【EMNLP21】
21.11.30 Dialogue Inspectional Summarization with Factual Inconsistency Awareness
21.11.30 SUMMAC: Re-Visiting NLI-based Models for Inconsistency Detection in Summarization 【TACL】
21.11.29 Are Factuality Checkers Reliable? Adversarial Meta-evaluation of Factuality in Summarization [EMNLP21]
21.11.28
21.11.27 Factual Probing Is [MASK]: Learning vs. Learning to Recall【NAACL2021】
21.11.27 BARTSCORE: Evaluating Generated Text as Text Generation 【NIPS2021】
21.11.26 Prefix-Tuning: Optimizing Continuous Prompts for Generation [ACL2021]
21.11.24 Similarity Analysis
21.11.22 DEEP: DEnoising Entity Pre-training for Neural Machine Translation
21.11.22 NJU HPC Tips
21.11.22 A Multiprocessing Framework for NLP: How to Process Text Efficiently?
21.11.19 BM25 and TF-IDF (Repost)
21.11.19 SJL Blog: 6 Posts
21.11.18 Repost from Su Jianlin: On BERT Initialization, Norm, and Vanishing Gradients
21.11.18 SJL Blog: 8 Posts
21.11.18 Discourse Understanding and Factual Consistency in Abstractive Summarization EACL2021
21.11.17 SJL Blog: 3 Posts
21.11.16 Factual Error Correction for Abstractive Summarization Models EMNLP20 short
21.11.16 NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework
21.11.15 Preliminary Paper Survey
21.11.6 EMNLP 2021 Paper Pre-presentation Notes
21.11.5 Fairseq Code review
21.11.4 PEGASUS Pre-training with Extracted Gap-sentences for Abstractive Summarization【ICML 2020】
21.11.3 On Faithfulness and Factuality in Abstractive Summarization【ACL 2020】
21.11.1 BERTSCORE: EVALUATING TEXT GENERATION WITH BERT 【ICLR2020】
21.11.1 Focus Attention: Promoting Faithfulness and Diversity in Summarization【ACL2021】 pending
21.11.1 Sampling Strategies
2021.10
21.10.31 Factual Consistency Evaluation for Text Summarization via Counterfactual Estimation 【EMNLP2021】
21.10.29 Faithful or Extractive? On Mitigating the Faithfulness-Abstractiveness Trade-off in Abstractive Summarization
21.10.27 MiRANews: Dataset and Benchmarks for Multi-Resource-Assisted News Summarization 【EMNLP2021】
21.10.26 Improving Factual Consistency of Abstractive Summarization via Question Answering 【pending】
21.10.25 Language Model as an Annotator: Exploring DialoGPT for Dialogue Summarization
21.10.24 CTRLSUM: TOWARDS GENERIC CONTROLLABLE TEXT SUMMARIZATION
21.10.20 Training Dynamics for Text Summarization Models
21.10.15 GSum: A General Framework for Guided Neural Abstractive Summarization 【NAACL 2021】
21.10.10 Further VAE Transformer Experiments
21.10.3 CLIFF: Contrastive Learning for Improving Faithfulness and Factuality in Abstractive Summarization 【EMNLP2021】
21.10.1 Controllable Neural Dialogue Summarization with Personal Named Entity Planning 【EMNLP2021】
2021.9
21.9.29 VAE Transformer
21.9.26 Enriching and Controlling Global Semantics for Text Summarization
21.9.25 VAE: Variational Autoencoder
21.9.24 GANs: Theory and Practice
21.9.22 A Bag of Tricks for Dialogue Summarization【EMNLP2021】
21.9.16 Several New Algorithms for Similarity Matching
21.9.16 R-Drop
21.9.14 Seq2Seq inside BERT
21.9.13 Prompt Methods
Before
GNN in NLP
NJU HPC High-Performance Computing Guide
Fairseq Source Code Analysis
Adversarial Learning in NLP
Tricks in DL/NLP
Contrastive Learning
Self-Supervised Learning
CCL 2020 Note
Seq2Seq Translation
Odd Tricks
Unsorted
Practice by Research Area
Relation Extraction
Bert in RE
Relation Extraction
Q&A
QA Materials
Text Summarization
Lead Algorithm and ROUGE Evaluation Implementation
Match_Sum Reproduction
Search strategy
BPE Subword
Final Camp Report
NER
First Attempt at Entity Recognition
ChineseBertNer
ChineseBertNER2
BertNer on New Data
TextMatching
ESIM
Text_Matching Competition
Text_Matching Hyperparameter Tuning
Text_Matching Summary
Other
Neo4j
Getting Started with the neo4j Graph Database
Attention
Building a Wiki Dictionary
Importing the Wiki Dictionary
Reading Information
NLP's Model
Word2Vec
Word2vec's Skip_Gram Model
Skip_Gram Applied to Chinese
Skip_Gram Model (Supplement)
Skip_Gram in Practice with Chinese
Bert
Classification Bert
Bert for Ner Code Analysis
ChineseBertNer
Configuring Bert
First Try with Bert
Modifying Bert
Server Deployment Environment
Bert: Kaggle Rematch (Sentence Classification)
Bert pretrain
LSTM
LSTM Basics (Part-of-Speech Tagging)
LSTM Kaggle Hands-on (Sentence Classification)
LSTM: Kaggle Rematch (Sentence Classification)
NLP's API
Getting Started with NLTK
Getting Started with jieba
hanlp
Earlier Paper Notes
Factual Consistency
The Factual Inconsistency Problem in Abstractive Text Summarization A Survey
Dialog Summarization
Improving Abstractive Dialogue Summarization with Graph Structures
ABSTRACTIVE DIALOG SUMMARIZATION WITH SEMANTIC SCAFFOLDS
Two-stage encoding Extractive Summarization
Neural Document Summarization by Jointly Learning to Score and Select Sentences
CycleGT: Unsupervised Graph-to-Text and Text-to-Graph Generation via Cycle Training
CoLAKE: Contextualized Language and Knowledge Embedding
A Closer Look at Data Bias in Neural Extractive Summarization Models
Deep Communicating Agents for Abstractive Summarization
Abstractive News Summarization based on Event Semantic Link Network
A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents
BART Denoising
What Have We Achieved on Text Summarization?
Abstractive Summarization: A Survey of the State of the Art
Summary Level Training of Sentence Rewriting for Abstractive Summarization
Heterogeneous Graph Neural Networks
STRUCTURED NEURAL SUMMARIZATION
Text Summarization with Pretrained Encoders
SummaRuNNer
Attention is All you Need
Toward Making the Most of Context in NMT
Acquiring Knowledge from Pre-trained Model to Neural Machine Translation
Text Summarization Techniques
Extractive Summarization as Text Matching