Model:
flax-community/dansk-gpt-wiki
A GPT-2 style model trained with the Flax CLM pipeline on the Danish part of the wiki40b dataset.
https://huggingface.co/datasets/wiki40b
This model is one of a series of models trained on TPUs with Flax/JAX during the Hugging Face Flax/JAX community challenge:
https://huggingface.co/birgermoell/swedish-gpt/
https://huggingface.co/flax-community/swe-gpt-wiki
https://huggingface.co/flax-community/nordic-gpt-wiki
https://huggingface.co/flax-community/dansk-gpt-wiki
https://huggingface.co/flax-community/norsk-gpt-wiki
https://huggingface.co/flax-community/nordic-roberta-wiki
https://huggingface.co/flax-community/swe-roberta-wiki-oscar
https://huggingface.co/birgermoell/roberta-swedish-scandi
https://huggingface.co/birgermoell/roberta-swedish
https://huggingface.co/birgermoell/t5-base-swedish
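Usage: a minimal sketch, not taken from the original card. It assumes the Hub repository can be loaded through the standard transformers text-generation pipeline; the Danish prompt is only illustrative.

from transformers import pipeline

# Download flax-community/dansk-gpt-wiki from the Hugging Face Hub and generate a continuation.
generator = pipeline("text-generation", model="flax-community/dansk-gpt-wiki")
print(generator("Jeg elsker at", max_length=30))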
The data was cleaned and preprocessed with the following script. Make sure the beam_runner dependencies are installed so that the dataset builds correctly.
from datasets import load_dataset

def load_and_clean_wiki():
    # Load the Danish split of wiki40b; the dataset is built with Apache Beam,
    # hence the beam_runner argument.
    dataset = load_dataset('wiki40b', 'da', beam_runner='DirectRunner', split="train")
    # dataset = load_dataset('wiki40b', 'sv', beam_runner='DirectRunner')
    dataset = dataset.remove_columns(['wikidata_id', 'version_id'])
    filtered_dataset = dataset.map(filter_wikipedia)
    return filtered_dataset

def filter_wikipedia(batch):
    # Strip the wiki40b structure markers and non-breaking spaces from the text.
    batch["text"] = " ".join(batch["text"].split("\n_START_SECTION_\n"))
    batch["text"] = " ".join(batch["text"].split("\n_START_ARTICLE_\n"))
    batch["text"] = " ".join(batch["text"].split("\n_START_PARAGRAPH_\n"))
    batch["text"] = " ".join(batch["text"].split("_NEWLINE_"))
    batch["text"] = " ".join(batch["text"].split("\xa0"))
    return batch
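For reference, a minimal sketch of how the preprocessing function above might be called, assuming apache-beam is installed for the DirectRunner (the beam_runner dependency mentioned above):

# Build the cleaned dataset and inspect the first example.
cleaned = load_and_clean_wiki()
print(cleaned)                    # row count and remaining columns
print(cleaned[0]["text"][:300])   # start of the first cleaned article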
The model was trained with the following training script.
./run_clm_flax.py \
    --output_dir="${MODEL_DIR}" \
    --model_type="gpt2" \
    --config_name="${MODEL_DIR}" \
    --tokenizer_name="${MODEL_DIR}" \
    --dataset_name="wiki40b" \
    --dataset_config_name="da" \
    --do_train --do_eval \
    --block_size="512" \
    --per_device_train_batch_size="64" \
    --per_device_eval_batch_size="64" \
    --learning_rate="5e-3" --warmup_steps="1000" \
    --adam_beta1="0.9" --adam_beta2="0.98" --weight_decay="0.01" \
    --overwrite_output_dir \
    --num_train_epochs="20" \
    --logging_steps="500" \
    --save_steps="1000" \
    --eval_steps="2500" \
    --push_to_hub
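The script expects a model config and tokenizer to already exist in ${MODEL_DIR}. The original card does not show how they were created; below is a minimal sketch of one way to prepare them, assuming the standard GPT-2 small configuration and a byte-level BPE tokenizer trained on the cleaned text. The directory name, vocabulary size, and special tokens are assumptions, not taken from the original card.

from pathlib import Path
from tokenizers import ByteLevelBPETokenizer
from transformers import GPT2Config

model_dir = "dansk-gpt-wiki"          # hypothetical local ${MODEL_DIR}
Path(model_dir).mkdir(parents=True, exist_ok=True)

# Save a GPT-2 (small) configuration; vocab_size must match the tokenizer below.
config = GPT2Config.from_pretrained("gpt2", vocab_size=50257)
config.save_pretrained(model_dir)

# Train a byte-level BPE tokenizer on the cleaned Danish text from the
# preprocessing script above.
dataset = load_and_clean_wiki()

def batch_iterator(batch_size=1000):
    for i in range(0, len(dataset), batch_size):
        yield dataset[i : i + batch_size]["text"]

tokenizer = ByteLevelBPETokenizer()
tokenizer.train_from_iterator(
    batch_iterator(),
    vocab_size=50257,
    min_frequency=2,
    special_tokens=["<|endoftext|>"],
)
tokenizer.save(f"{model_dir}/tokenizer.json")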