网站怎么快速收录,网站维护电话,装修免费出效果图,哪些专业要学网页制作文章目录P-Tuning v2 概述核心改进关键技术细节代码示例性能对比局限性https://github.com/THUDM/P-tuning-v2 P-Tuning v2 概述
P-Tuning v2 是清华大学团队提出的一种参数高效微调#xff08;Parameter-Efficient Fine-Tuning, PEFT#xff09;方法#xff0c;旨在改进传…文章目录P-Tuning v2 概述核心改进关键技术细节代码示例性能对比局限性https://github.com/THUDM/P-tuning-v2P-Tuning v2 概述P-Tuning v2 是清华大学团队提出的一种参数高效微调Parameter-Efficient Fine-Tuning, PEFT方法旨在改进传统微调方法在大型预训练语言模型如GPT、BERT上的效率和性能。它是P-Tuning的升级版本通过优化提示Prompt设计和参数更新策略显著提升了模型在低资源场景下的表现。核心改进连续提示优化P-Tuning v2 引入了可训练的连续提示Continuous Prompts取代了传统离散提示。这些提示以嵌入向量的形式插入到模型的输入层或中间层通过梯度下降动态调整避免了人工设计提示的局限性。分层提示注入与P-Tuning仅在输入层添加提示不同P-Tuning v2 在模型的每一层或关键层注入提示向量形成分层提示结构。这种设计能更深度地引导模型行为尤其适合深层Transformer架构。参数效率提升P-Tuning v2 仅需微调少量额外参数通常占模型总参数的0.1%-1%大幅降低了计算和存储开销同时保持了与全参数微调相近的性能。关键技术细节提示向量初始化提示向量通常随机初始化或从任务相关词嵌入中采样。实验表明合理的初始化能加速收敛并提升最终效果。训练目标P-Tuning v2 通过标准的下游任务损失如交叉熵优化提示参数同时可结合适配器Adapter或LoRA等轻量级模块进一步减少可训练参数。适用场景小样本学习Few-shot Learning多任务学习通过不同提示区分任务资源受限的设备部署代码示例P-Tuning v2的核心逻辑importtorchclassPrefixEncoder(torch.nn.Module):r The torch.nn model to encode the prefix Input shape: (batch-size, prefix-length) Output shape: (batch-size, prefix-length, 2*layers*hidden) def__init__(self,config):super().__init__()self.prefix_projectionconfig.prefix_projectionifself.prefix_projection:# Use a two-layer MLP to encode the prefixself.embeddingtorch.nn.Embedding(config.pre_seq_len,config.hidden_size)self.transtorch.nn.Sequential(torch.nn.Linear(config.hidden_size,config.prefix_hidden_size),torch.nn.Tanh(),torch.nn.Linear(config.prefix_hidden_size,config.num_hidden_layers*2*config.hidden_size))else:self.embeddingtorch.nn.Embedding(config.pre_seq_len,config.num_hidden_layers*2*config.hidden_size)defforward(self,prefix:torch.Tensor):ifself.prefix_projection:prefix_tokensself.embedding(prefix)past_key_valuesself.trans(prefix_tokens)else:past_key_valuesself.embedding(prefix)returnpast_key_valueshttps://github.com/THUDM/P-tuning-v2/blob/main/model/token_classification.pyclassBertPrefixForTokenClassification(BertPreTrainedModel):def__init__(self,config):super().__init__(config)self.num_labelsconfig.num_labels self.bertBertModel(config,add_pooling_layerFalse)self.dropouttorch.nn.Dropout(config.hidden_dropout_prob)self.classifiertorch.nn.Linear(config.hidden_size,config.num_labels)from_pretrainedFalseiffrom_pretrained:self.classifier.load_state_dict(torch.load(model/checkpoint.pkl))forparaminself.bert.parameters():param.requires_gradFalseself.pre_seq_lenconfig.pre_seq_len self.n_layerconfig.num_hidden_layers self.n_headconfig.num_attention_heads self.n_embdconfig.hidden_size//config.num_attention_heads self.prefix_tokenstorch.arange(self.pre_seq_len).long()self.prefix_encoderPrefixEncoder(config)bert_param0forname,paraminself.bert.named_parameters():bert_paramparam.numel()all_param0forname,paraminself.named_parameters():all_paramparam.numel()total_paramall_param-bert_paramprint(total param is {}.format(total_param))# 9860105defget_prompt(self,batch_size):prefix_tokensself.prefix_tokens.unsqueeze(0).expand(batch_size,-1).to(self.bert.device)past_key_valuesself.prefix_encoder(prefix_tokens)# bsz, seqlen, _ past_key_values.shapepast_key_valuespast_key_values.view(batch_size,self.pre_seq_len,self.n_layer*2,self.n_head,self.n_embd)past_key_valuesself.dropout(past_key_values)past_key_valuespast_key_values.permute([2,0,3,1,4]).split(2)returnpast_key_valuesdefforward(self,input_idsNone,attention_maskNone,token_type_idsNone,position_idsNone,head_maskNone,inputs_embedsNone,labelsNone,output_attentionsNone,output_hidden_statesNone,return_dictNone,):return_dictreturn_dictifreturn_dictisnotNoneelseself.config.use_return_dict batch_sizeinput_ids.shape[0]past_key_valuesself.get_prompt(batch_sizebatch_size)prefix_attention_masktorch.ones(batch_size,self.pre_seq_len).to(self.bert.device)attention_masktorch.cat((prefix_attention_mask,attention_mask),dim1)outputsself.bert(input_ids,attention_maskattention_mask,token_type_idstoken_type_ids,position_idsposition_ids,head_maskhead_mask,inputs_embedsinputs_embeds,output_attentionsoutput_attentions,output_hidden_statesoutput_hidden_states,return_dictreturn_dict,past_key_valuespast_key_values,)sequence_outputoutputs[0]sequence_outputself.dropout(sequence_output)logitsself.classifier(sequence_output)attention_maskattention_mask[:,self.pre_seq_len:].contiguous()lossNoneiflabelsisnotNone:loss_fctCrossEntropyLoss()# Only keep active parts of the lossifattention_maskisnotNone:active_lossattention_mask.view(-1)1active_logitslogits.view(-1,self.num_labels)active_labelstorch.where(active_loss,labels.view(-1),torch.tensor(loss_fct.ignore_index).type_as(labels))lossloss_fct(active_logits,active_labels)else:lossloss_fct(logits.view(-1,self.num_labels),labels.view(-1))ifnotreturn_dict:output(logits,)outputs[2:]return((loss,)output)iflossisnotNoneelseoutputreturnTokenClassifierOutput(lossloss,logitslogits,hidden_statesoutputs.hidden_states,attentionsoutputs.attentions,)性能对比在SuperGLUE基准测试中P-Tuning v2 仅微调0.5%参数时性能可达全参数微调的90%以上同时训练速度提升3-5倍。对于超大规模模型如百亿参数其优势更加显著。局限性提示长度和层数需通过实验调优对某些需要全局参数调整的任务如文本生成可能需结合其他PEFT方法参考 https://github.com/zejunwang1/chatglm_tuning/blob/main/train_ptuning.py