Model: google/roberta2roberta_L-24_wikisplit
The model was introduced in this paper by Sascha Rothe, Shashi Narayan, Aliaksei Severyn and first released in this repository.
The model is an encoder-decoder model: both the encoder and the decoder were initialized from the roberta-large checkpoint and then fine-tuned for sentence splitting on the WikiSplit dataset.
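For illustration only, the warm-starting described above can be reproduced with transformers' `EncoderDecoderModel`; this is a sketch of the initialization, not the exact fine-tuning setup used for this checkpoint:

```python
from transformers import EncoderDecoderModel

# Initialize both the encoder and the decoder from roberta-large.
# The encoder-decoder cross-attention weights are newly (randomly)
# initialized and must be learned during fine-tuning.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "roberta-large", "roberta-large"
)
```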
Disclaimer: this model card has been written by the Hugging Face team.
You can use this model for sentence splitting, e.g.:
IMPORTANT: The model was not trained on the " (double quotation mark) character, so before tokenizing the text it is recommended to replace every " (double quote) with two ' (single quote) characters.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_wikisplit")
model = AutoModelForSeq2SeqLM.from_pretrained("google/roberta2roberta_L-24_wikisplit")

long_sentence = """Due to the hurricane, Lobsterfest has been canceled, making Bob very happy about it and he decides to open Bob 's Burgers for customers who were planning on going to Lobsterfest."""

input_ids = tokenizer(
    tokenizer.bos_token + long_sentence + tokenizer.eos_token, return_tensors="pt"
).input_ids
output_ids = model.generate(input_ids)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
# should output
# Due to the hurricane, Lobsterfest has been canceled, making Bob very happy
# about it. He decides to open Bob's Burgers for customers who were planning
# on going to Lobsterfest.
```
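To apply the quote-handling note above, a minimal preprocessing sketch (the `normalize_quotes` helper is illustrative, not part of the model card):

```python
def normalize_quotes(text: str) -> str:
    # The model was not trained on the double-quote character,
    # so replace each " with two single quotes before tokenizing.
    return text.replace('"', "''")

long_sentence = normalize_quotes(
    'Bob said "Lobsterfest is canceled" and he decides to open Bob \'s Burgers.'
)
```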