taskGPT2-xl v0.2a

Model Summary

I finetuned GPT-2 on text2code, chain-of-thought (CoT), math, and FLAN tasks; on some tasks it performs better than GPT-JT.

I assembled a collection of open techniques and datasets to build taskGPT2-xl.

Quick Start

from transformers import pipeline
pipe = pipeline(model='AlexWortega/taskGPT2-xl')
pipe('''"I love this!" Is it positive? A:''')
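
The pipeline returns a list of dictionaries; the generated_text field contains the prompt followed by the model's continuation.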

or

from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("AlexWortega/taskGPT2-xl")
model = AutoModelForCausalLM.from_pretrained("AlexWortega/taskGPT2-xl")
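
To generate text with the loaded model, a minimal call might look like this (the sampling settings are illustrative, not the ones used in training or evaluation):

inputs = tokenizer('''"I love this!" Is it positive? A:''', return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=32,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
)
print(tokenizer.decode(output[0], skip_special_tokens=True))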

License

The weights of taskGPT2-xl are licensed under version 2.0 of the Apache License.

Training Details

I used the following datasets from the Hugging Face Hub (a loading sketch follows the list):

  • strategyqa_train
  • aqua_train
  • qed_train
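
If these datasets are published on the Hub under the names above, they could be loaded with the datasets library; the exact ids here are an assumption taken verbatim from the list, not verified repository paths:

from datasets import load_dataset

# Hypothetical Hub ids copied from the list above; adjust if the
# actual repositories live under different paths.
strategyqa = load_dataset("strategyqa_train")
aqua = load_dataset("aqua_train")
qed = load_dataset("qed_train")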

Hyperparameters

I used Novograd with a learning rate of 2e-5 and a global batch size of 6 (3 per data-parallel worker), combining data parallelism and pipeline parallelism during training. Input sequences were truncated to 512 tokens, and sequences shorter than 512 tokens were concatenated into one long sequence to improve data efficiency.
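
As a rough sketch of the packing step just described (the separator token, the greedy packing strategy, and the torch_optimizer package's NovoGrad implementation are assumptions, not the exact training code; model is the object loaded in Quick Start):

import torch_optimizer

# Assumption: NovoGrad as implemented in the torch_optimizer package
optimizer = torch_optimizer.NovoGrad(model.parameters(), lr=2e-5)

def pack_sequences(token_lists, max_len=512, sep_id=50256):
    # Greedily concatenate tokenized examples into blocks of at most max_len
    # tokens; 50256 is GPT-2's end-of-text id, used here as an assumed separator.
    blocks, current = [], []
    for toks in token_lists:
        toks = toks[: max_len - 1]  # truncate, leaving room for the separator
        if current and len(current) + len(toks) + 1 > max_len:
            blocks.append(current)  # current block is full; start a new one
            current = []
        current += toks + [sep_id]
    if current:
        blocks.append(current)
    return blocks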

Metrics

Coming soon.

BibTeX entry and citation info

@article{nickolich2023taskgpt2,
  title={GPT2xl is underrated task solver},
  author={Aleksandr Nickolich and Karina Romanova and Arseniy Shahmatov and Maksim Gersimenko},
  year={2023}
}