In this tutorial, the task is text matching on the MRPC dataset. For BERT, text matching is essentially two-sentence text classification, so the code below treats it as a classification task. The tutorial is built on EasyNLP, which is developed on top of PAI-PyTorch: with the relevant command-line parameters configured and only minor code changes, users can run a BERT text-classification task on PAI.
1. Code walkthrough
Construct the dataset
train_dataset = ClassificationDataset(
    pretrained_model_name_or_path=args.pretrained_model_name_or_path,
    data_file=args.tables,
    max_seq_length=args.sequence_length,
    input_schema=args.input_schema,
    first_sequence=args.first_sequence,
    second_sequence=args.second_sequence,  # added: MRPC is a sentence-pair task, so both sentences are needed
    label_name=args.label_name,
    label_enumerate_values=args.label_enumerate_values,
    is_training=True)
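The input_schema argument declares the tab-separated columns of the data file; in EasyNLP each column is written as a name:type:length triple, so the schema used in the training script below (label:str:1,sid1:str:1,sid2:str:1,sent1:str:1,sent2:str:1) names five string columns. first_sequence and second_sequence then pick out which columns feed BERT's two text segments. As a purely illustrative example (the sentence pair and IDs below are made up, not a real MRPC row), one line of train.tsv would look like:

1	702876	702977	The fox jumped over the fence .	A fox leapt over the fence .

with the tab-separated columns being label, sid1, sid2, sent1, sent2.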
Construct the application
model = SequenceClassification(pretrained_model_name_or_path=args.pretrained_model_name_or_path)
Call the Trainer to train
Trainer(model=model, train_dataset=train_dataset).train()
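Putting the three snippets together, a minimal main.py looks like the sketch below. This is a sketch under assumptions: the import paths and the initialize_easynlp() argument parser follow EasyNLP's quick-start examples, and taking only the first file in --tables as the training set is our own simplification; verify both against the EasyNLP version you have installed.

# Minimal sketch of examples/self_defined_examples/main.py (assumed layout).
from easynlp.appzoo import ClassificationDataset, SequenceClassification
from easynlp.core import Trainer
from easynlp.utils import initialize_easynlp

# Parses the command-line flags used in the training script below.
args = initialize_easynlp()

# --tables lists train and dev files; this sketch only builds the training set.
train_file = args.tables.split(",")[0]

train_dataset = ClassificationDataset(
    pretrained_model_name_or_path=args.pretrained_model_name_or_path,
    data_file=train_file,
    max_seq_length=args.sequence_length,
    input_schema=args.input_schema,
    first_sequence=args.first_sequence,
    second_sequence=args.second_sequence,
    label_name=args.label_name,
    label_enumerate_values=args.label_enumerate_values,
    is_training=True)

model = SequenceClassification(pretrained_model_name_or_path=args.pretrained_model_name_or_path)
Trainer(model=model, train_dataset=train_dataset).train()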
2. Run the script
cd EasyNLP/examples/quick_start/
sh run_user_defined_local.sh
3. Step-by-step details
Download the data
export CUDA_VISIBLE_DEVICES=0

# Local training example
# cur_path=/tmp/EasyNLP
cur_path=/home/admin/workspace/EasyNLP/
cd ${cur_path}

if [ ! -f ./tmp/train.tsv ]; then
  wget http://atp-modelzoo-sh.oss-cn-shanghai.aliyuncs.com/release/tutorials/classification/train.tsv
  wget http://atp-modelzoo-sh.oss-cn-shanghai.aliyuncs.com/release/tutorials/classification/dev.tsv
  mkdir tmp/
  mv *.tsv tmp/
fi
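Before training, it is worth confirming that the downloaded file matches the schema used in the next step. This quick check is our addition, not part of the original script:

# Expect five tab-separated columns per row: label, sid1, sid2, sent1, sent2
head -n 1 tmp/train.tsv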
Run the training script
The pretrained model used here is bert-small-uncased, passed in via user_defined_parameters.
DISTRIBUTED_ARGS="--nproc_per_node 1 --nnodes 1 --node_rank 0 --master_addr localhost --master_port 6009"

python -m torch.distributed.launch $DISTRIBUTED_ARGS \
  examples/self_defined_examples/main.py \
  --mode train \
  --tables=tmp/train.tsv,tmp/dev.tsv \
  --input_schema=label:str:1,sid1:str:1,sid2:str:1,sent1:str:1,sent2:str:1 \
  --first_sequence=sent1 \
  --second_sequence=sent2 \
  --label_name=label \
  --label_enumerate_values=0,1 \
  --checkpoint_dir=./tmp/classification_model/ \
  --learning_rate=3e-5 \
  --epoch_num=3 \
  --random_seed=42 \
  --logging_steps=1 \
  --save_checkpoint_steps=50 \
  --sequence_length=128 \
  --micro_batch_size=10 \
  --app_name=text_classify \
  --use_amp \
  --user_defined_parameters='pretrain_model_name_or_path=bert-small-uncased'
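After training, the saved checkpoint can be used to score the dev set. The invocation below is only a sketch based on the flag conventions in EasyNLP's quick-start documentation (--mode predict, --outputs, --output_schema, --checkpoint_path); it assumes the user-defined main.py implements predict mode the way the appzoo examples do, so confirm the exact flag names against your EasyNLP version.

python -m torch.distributed.launch $DISTRIBUTED_ARGS \
  examples/self_defined_examples/main.py \
  --mode predict \
  --tables=tmp/dev.tsv \
  --outputs=tmp/dev.pred.tsv \
  --input_schema=label:str:1,sid1:str:1,sid2:str:1,sent1:str:1,sent2:str:1 \
  --output_schema=predictions,probabilities \
  --first_sequence=sent1 \
  --second_sequence=sent2 \
  --checkpoint_path=./tmp/classification_model/ \
  --sequence_length=128 \
  --micro_batch_size=10 \
  --app_name=text_classify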
