Model: gtfintechlab/FOMC-RoBERTa
This page provides the model for the ACL 2023 paper "Trillion Dollar Words: A New Financial Dataset, Task & Market Analysis". This work was done at the Financial Services Innovation Lab at Georgia Tech. The FinTech lab is a hub for finance education, research, and industry in the Southeast.
The paper is available on SSRN.
Label mapping:
- LABEL_0: Dovish
- LABEL_1: Hawkish
- LABEL_2: Neutral
```python
from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification, AutoConfig

# Load the tokenizer, model, and config from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(
    "gtfintechlab/FOMC-RoBERTa", do_lower_case=True, do_basic_tokenize=True
)
model = AutoModelForSequenceClassification.from_pretrained(
    "gtfintechlab/FOMC-RoBERTa", num_labels=3
)
config = AutoConfig.from_pretrained("gtfintechlab/FOMC-RoBERTa")

# Build a text-classification pipeline (device=0 selects the first GPU;
# use device=-1 to run on CPU)
classifier = pipeline(
    "text-classification",
    model=model,
    tokenizer=tokenizer,
    config=config,
    device=0,
    framework="pt",
)

# Classify a batch of sentences about monetary policy
results = classifier(
    [
        "Such a directive would imply that any tightening should be implemented promptly if developments were perceived as pointing to rising inflation.",
        "The International Monetary Fund projects that global economic growth in 2019 will be the slowest since the financial crisis.",
    ],
    batch_size=128,
    truncation="only_first",
)
print(results)
```
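The pipeline returns raw label IDs (`LABEL_0`, `LABEL_1`, `LABEL_2`). A minimal post-processing sketch, assuming the label mapping stated on this card (LABEL_0 = dovish, LABEL_1 = hawkish, LABEL_2 = neutral), converts that output into readable stance names; the `readable` helper and the mock scores below are illustrative, not part of the released code:

```python
# Assumed mapping from the model card: LABEL_0 = dovish, LABEL_1 = hawkish, LABEL_2 = neutral
LABEL_MAP = {"LABEL_0": "dovish", "LABEL_1": "hawkish", "LABEL_2": "neutral"}

def readable(results):
    """Convert pipeline output dicts {'label': 'LABEL_k', 'score': ...}
    into dicts with a human-readable stance name."""
    return [
        {"stance": LABEL_MAP[r["label"]], "score": round(r["score"], 4)}
        for r in results
    ]

# Example with mock pipeline output (scores are illustrative, not real model output)
mock = [{"label": "LABEL_1", "score": 0.91}, {"label": "LABEL_2", "score": 0.77}]
print(readable(mock))
```

This keeps the stance names in one place, so a change to the card's label scheme only needs a one-line edit to `LABEL_MAP`.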
All annotated datasets, with train-test splits across 3 seeds, are available on the GitHub Page.
If you use any of the code, data, or models, please cite our paper:
```bibtex
@article{shah2023trillion,
  title={Trillion Dollar Words: A New Financial Dataset, Task & Market Analysis},
  author={Shah, Agam and Paturi, Suvan and Chava, Sudheer},
  journal={Available at SSRN 4447632},
  year={2023}
}
```
For any questions or concerns, please contact Agam Shah (ashah482[at]gatech[dot]edu). GitHub: @shahagam4 Website: https://shahagam4.github.io/