
KoBigBird

Pretrained BigBird Model for Korean (kobigbird-bert-base)

About

BigBird is a sparse-attention based transformer that extends Transformer based models, such as BERT, to much longer sequences.

BigBird relies on block sparse attention instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost than BERT.

The model was warm-started from a Korean BERT checkpoint.
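
These defaults can be checked directly from the released checkpoint's configuration. A minimal sketch (the attribute names are the standard Hugging Face BigBirdConfig fields; the expected values follow the description above and the defaults noted in the usage example below):

from transformers import AutoConfig

config = AutoConfig.from_pretrained("monologg/kobigbird-bert-base")

# Expected per the description above: sparse attention and a 4096-token limit
print(config.attention_type)           # "block_sparse"
print(config.max_position_embeddings)  # 4096
print(config.block_size, config.num_random_blocks)  # defaults: 64, 3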

How to Use

NOTE: Use BertTokenizer instead of BigBirdTokenizer. (AutoTokenizer will load BertTokenizer.)

from transformers import AutoModel, AutoTokenizer

# by default it's in `block_sparse` mode with num_random_blocks=3, block_size=64
model = AutoModel.from_pretrained("monologg/kobigbird-bert-base")

# you can change `attention_type` to full attention like this:
model = AutoModel.from_pretrained("monologg/kobigbird-bert-base", attention_type="original_full")

# you can change `block_size` & `num_random_blocks` like this:
model = AutoModel.from_pretrained("monologg/kobigbird-bert-base", block_size=16, num_random_blocks=2)

tokenizer = AutoTokenizer.from_pretrained("monologg/kobigbird-bert-base")
text = "한국어 BigBird 모델을 공개합니다!"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
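
Since the model accepts inputs up to 4096 tokens, a longer document can be encoded in a single forward pass. The sketch below is illustrative (the repeated example text and the explicit max_length/truncation settings are assumptions, not part of the original card):

import torch
from transformers import AutoModel, AutoTokenizer, BertTokenizer, BertTokenizerFast

tokenizer = AutoTokenizer.from_pretrained("monologg/kobigbird-bert-base")
# As noted above, AutoTokenizer resolves to a BERT tokenizer, not a BigBird one
assert isinstance(tokenizer, (BertTokenizer, BertTokenizerFast))

model = AutoModel.from_pretrained("monologg/kobigbird-bert-base")

# Encode a long document, truncating at the model's 4096-token limit
long_text = "한국어 BigBird 모델을 공개합니다! " * 500  # illustrative long input
encoded_input = tokenizer(long_text, max_length=4096, truncation=True, return_tensors="pt")

with torch.no_grad():
    output = model(**encoded_input)

print(encoded_input["input_ids"].shape)   # up to (1, 4096)
print(output.last_hidden_state.shape)     # (1, sequence_length, hidden_size)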