# CLUES: Few-Shot Learning Evaluation in Natural Language Understanding
This repo contains the data for the NeurIPS 2021 benchmark Constrained Language Understanding Evaluation Standard (CLUES).
## Leaderboard

We maintain a Leaderboard allowing researchers to submit their results as entries.
### Submission Instructions
- Each submission must be a pull request modifying the markdown file underlying the leaderboard.
- The submission must be accompanied by a public paper and public source code for reproducing its results on our dataset.
- A submission can target any subset of tasks in our benchmark, or the aggregate leaderboard.
- For any task targeted by the submission, we require evaluation on (1) 10, 20, and 30 shots, and (2) all 5 splits of the corresponding dataset, with a report of their mean and standard deviation.
- Each leaderboard will be sorted by the 30-shot mean S1 score (where S1 score is a variant of F1 score defined in our paper).
- The submission should not use data from the 4 other splits during few-shot finetuning of any 1 split, either as an extra training set or as a validation set for hyperparameter tuning.
- However, we allow external data, labeled or unlabeled, to be used for such purposes. Each submission using external data must mark the corresponding columns "external labeled" and/or "external unlabeled". Note that, in this context, "external data" refers to data used after pretraining (e.g., for task-specific tuning); in particular, methods that use existing pretrained models only, without extra data, should not mark either column. For obvious reasons, models cannot be trained on the original labeled datasets from which we sampled the few-shot CLUES data.
- In the table entry, the submission should include a method name and a citation, hyperlinked to the publicly released source code reproducing the results. See the last entry of the table below for an example.
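The per-task requirement above (10/20/30 shots, all 5 splits, reported as mean and standard deviation) can be sketched as follows. This is a hypothetical harness, not part of the benchmark's tooling: `summarize` and the choice of population standard deviation are our assumptions, and the score values are made up.

```python
# Hypothetical sketch of the required reporting: for each shot count,
# collect the 5 per-split scores and report them as "mean±std".
# Whether CLUES uses population or sample standard deviation is an
# assumption here (population std is used below).
import statistics

def summarize(scores_by_shot):
    """scores_by_shot maps a shot count (10/20/30) to its 5 per-split scores."""
    summary = {}
    for shots, scores in scores_by_shot.items():
        assert len(scores) == 5, "evaluation on all 5 splits is required"
        mu = statistics.mean(scores)
        sigma = statistics.pstdev(scores)  # swap in statistics.stdev for sample std
        summary[shots] = f"{mu:.1f}\u00b1{sigma:.1f}"
    return summary

# Made-up per-split scores for a single shot count:
print(summarize({30: [52.0, 55.1, 56.9, 53.8, 59.2]}))
```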
### Abbreviations
- FT = (classic) finetuning
- PT = prompt-based tuning
- ICL = in-context learning, in the style of GPT-3
- μ±σ = mean μ and standard deviation σ across our 5 splits. The aggregate standard deviation is calculated using the sum-of-variance formula from the individual tasks' standard deviations.
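The sum-of-variance rule above can be made concrete. Reading it against the aggregate table, the Average column appears to be the mean of the per-task means, with aggregate standard deviation equal to the square root of the sum of the per-task variances (e.g., the T5-Large-770M-FT row works out to 43.1±6.7). A sketch under that reading; the function name is ours:

```python
import math

def aggregate(rows):
    """rows: list of (mean, std) pairs, one per task.
    Returns (aggregate mean, aggregate std): the mean of the task means,
    and sqrt of the sum of the task variances (sum-of-variance formula)."""
    n = len(rows)
    agg_mean = sum(m for m, _ in rows) / n
    agg_std = math.sqrt(sum(s * s for _, s in rows))
    return round(agg_mean, 1), round(agg_std, 1)

# Per-task 30-shot scores of the T5-Large-770M-FT row
# (SST-2, MNLI, CoNLL03, WikiANN, SQuAD-v2, ReCoRD):
t5_row = [(52.3, 2.9), (36.8, 3.8), (51.2, 0.1),
          (62.4, 0.6), (43.7, 2.7), (12.0, 3.8)]
print(aggregate(t5_row))  # (43.1, 6.7), matching the table's Average column
```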
## Benchmarking CLUES for Aggregate 30-shot Evaluation
| Shots (K=30) | external labeled | external unlabeled | Average ▼ | SST-2 | MNLI | CoNLL03 | WikiANN | SQuAD-v2 | ReCoRD |
|---|---|---|---|---|---|---|---|---|---|
| Human | N | N | 81.4 | 83.7 | 69.4 | 87.4 | 82.6 | 73.5 | 91.9 |
| T5-Large-770M-FT | N | N | 43.1±6.7 | 52.3±2.9 | 36.8±3.8 | 51.2±0.1 | 62.4±0.6 | 43.7±2.7 | 12±3.8 |
| BERT-Large-336M-FT | N | N | 42.1±7.8 | 55.4±2.5 | 33.3±1.4 | 51.3±0 | 62.5±0.6 | 35.3±6.4 | 14.9±3.4 |
| BERT-Base-110M-FT | N | N | 41.5±9.2 | 53.6±5.5 | 35.4±3.2 | 51.3±0 | 62.8±0 | 32.6±5.8 | 13.1±3.3 |
| DeBERTa-Large-400M-FT | N | N | 40.1±17.8 | 47.7±9.0 | 26.7±11 | 48.2±2.9 | 58.3±6.2 | 38.7±7.4 | 21.1±3.6 |
| RoBERTa-Large-355M-FT | N | N | 40.0±10.6 | 53.2±5.6 | 34.0±1.1 | 44.7±2.6 | 48.4±6.7 | 43.5±4.4 | 16±2.8 |
| RoBERTa-Large-355M-PT | N | N | - | 90.2±1.8 | 61.6±3.5 | - | - | - | - |
| DeBERTa-Large-400M-PT | N | N | - | 88.4±3.3 | 62.9±3.1 | - | - | - | - |
| BERT-Large-336M-PT | N | N | - | 82.7±4.1 | 45.3±2.0 | - | - | - | - |
| GPT3-175B-ICL | N | N | - | 91.0±1.6 | 33.2±0.2 | - | - | - | - |
| BERT-Base-110M-PT | N | N | - | 79.4±5.6 | 42.5±3.2 | - | - | - | - |
| 1233321 | N | Y | - | 91.3±0.7 | 67.9±3.0 | - | - | - | - |
| Example (lastname et al.) | Y/N | Y/N | 0±0 | 0±0 | 0±0 | 0±0 | 0±0 | 0±0 | 0±0 |
## Individual Task Performance over Multiple Shots
### SST-2

| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|---|---|---|---|---|---|---|
| GPT-3 (175B) ICL | N | N | 85.9±3.7 | 92.0±0.7 | 91.0±1.6 | - |
| RoBERTa-Large PT | N | N | 88.8±3.9 | 89.0±1.1 | 90.2±1.8 | 93.8 |
| DeBERTa-Large PT | N | N | 83.4±5.3 | 87.8±3.5 | 88.4±3.3 | 91.9 |
| Human | N | N | 79.8 | 83 | 83.7 | - |
| BERT-Large PT | N | N | 63.2±11.3 | 78.2±9.9 | 82.7±4.1 | 91 |
| BERT-Base PT | N | N | 63.9±10.0 | 76.7±6.6 | 79.4±5.6 | 91.9 |
| BERT-Large FT | N | N | 46.3±5.5 | 55.5±3.4 | 55.4±2.5 | 99.1 |
| BERT-Base FT | N | N | 46.2±5.6 | 54.0±2.8 | 53.6±5.5 | 98.1 |
| RoBERTa-Large FT | N | N | 38.4±21.7 | 52.3±5.6 | 53.2±5.6 | 98.6 |
| T5-Large FT | N | N | 51.2±1.8 | 53.4±3.2 | 52.3±2.9 | 97.6 |
| DeBERTa-Large FT | N | N | 43.0±11.9 | 40.8±22.6 | 47.7±9.0 | 100 |
| Example (lastname et al.) | Y/N | Y/N | 0±0 | 0±0 | 0±0 | - |
### MNLI

| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|---|---|---|---|---|---|---|
| Human | N | Y | 78.1 | 78.6 | 69.4 | - |
| 1234321 | N | N | 60.5±8.3 | 67.2±4.5 | 67.9±3.0 | - |
| DeBERTa-Large PT | N | N | 44.5±8.2 | 60.7±5.3 | 62.9±3.1 | 88.1 |
| RoBERTa-Large PT | N | N | 57.7±3.6 | 58.6±2.9 | 61.6±3.5 | 87.1 |
| BERT-Large PT | N | N | 41.7±1.0 | 43.7±2.1 | 45.3±2.0 | 81.9 |
| BERT-Base PT | N | N | 40.4±1.8 | 42.1±4.4 | 42.5±3.2 | 81 |
| T5-Large FT | N | N | 39.8±3.3 | 37.9±4.3 | 36.8±3.8 | 85.9 |
| BERT-Base FT | N | N | 37.0±5.2 | 35.2±2.7 | 35.4±3.2 | 81.6 |
| RoBERTa-Large FT | N | N | 34.3±2.8 | 33.4±0.9 | 34.0±1.1 | 85.5 |
| BERT-Large FT | N | N | 33.7±0.4 | 28.2±14.8 | 33.3±1.4 | 80.9 |
| GPT-3 (175B) ICL | N | N | 33.5±0.7 | 33.1±0.3 | 33.2±0.2 | - |
| DeBERTa-Large FT | N | N | 27.4±14.1 | 33.6±2.5 | 26.7±11.0 | 87.6 |
### CoNLL03

| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|---|---|---|---|---|---|---|
| Human | N | N | 87.7 | 89.7 | 87.4 | - |
| BERT-Base FT | N | N | 51.3±0 | 51.3±0 | 51.3±0 | - |
| BERT-Large FT | N | N | 51.3±0 | 51.3±0 | 51.3±0 | 89.3 |
| T5-Large FT | N | N | 46.3±6.9 | 50.0±0.7 | 51.2±0.1 | 92.2 |
| DeBERTa-Large FT | N | N | 50.1±1.2 | 47.8±2.5 | 48.2±2.9 | 93.6 |
| RoBERTa-Large FT | N | N | 50.8±0.5 | 44.6±5.1 | 44.7±2.6 | 93.2 |
### WikiANN

| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|---|---|---|---|---|---|---|
| Human | N | N | 81.4 | 83.5 | 82.6 | - |
| BERT-Base FT | N | N | 62.8±0 | 62.8±0 | 62.8±0 | 88.8 |
| BERT-Large FT | N | N | 62.8±0 | 62.6±0.4 | 62.5±0.6 | 91 |
| T5-Large FT | N | N | 61.7±0.7 | 62.1±0.2 | 62.4±0.6 | 87.4 |
| DeBERTa-Large FT | N | N | 58.5±3.3 | 57.9±5.8 | 58.3±6.2 | 91.1 |
| RoBERTa-Large FT | N | N | 58.5±8.8 | 56.9±3.4 | 48.4±6.7 | 91.2 |
### SQuAD v2

| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|---|---|---|---|---|---|---|
| Human | N | N | 71.9 | 76.4 | 73.5 | - |
| T5-Large FT | N | N | 43.6±3.5 | 28.7±13.0 | 43.7±2.7 | 87.2 |
| RoBERTa-Large FT | N | N | 38.1±7.2 | 40.1±6.4 | 43.5±4.4 | 89.4 |
| DeBERTa-Large FT | N | N | 41.4±7.3 | 44.4±4.5 | 38.7±7.4 | 90 |
| BERT-Large FT | N | N | 42.3±5.6 | 35.8±9.7 | 35.3±6.4 | 81.8 |
| BERT-Base FT | N | N | 46.0±2.4 | 34.9±9.0 | 32.6±5.8 | 76.3 |
### ReCoRD

| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|---|---|---|---|---|---|---|
| Human | N | N | 94.1 | 94.2 | 91.9 | - |
| DeBERTa-Large FT | N | N | 15.7±5.0 | 16.8±5.7 | 21.1±3.6 | 80.7 |
| RoBERTa-Large FT | N | N | 12.0±1.9 | 9.9±6.2 | 16.0±2.8 | 80.3 |
| BERT-Large FT | N | N | 9.9±5.2 | 11.8±4.9 | 14.9±3.4 | 66 |
| BERT-Base FT | N | N | 10.3±1.8 | 11.7±2.4 | 13.1±3.3 | 54.4 |
| T5-Large FT | N | N | 11.9±2.7 | 11.7±1.5 | 12.0±3.8 | 77.3 |
## How do I cite CLUES?

```bibtex
@article{cluesteam2021,
  title = {Few-Shot Learning Evaluation in Natural Language Understanding},
  author = {Mukherjee, Subhabrata and Liu, Xiaodong and Zheng, Guoqing and Hosseini, Saghar and Cheng, Hao and Yang, Greg and Meek, Christopher and Awadallah, Ahmed Hassan and Gao, Jianfeng},
  booktitle = {NeurIPS 2021},
  year = {2021},
  month = {December},
  url = {https://www.microsoft.com/en-us/research/publication/clues-few-shot-learning-evaluation-in-natural-language-understanding/},
}
```
## Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
## Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.