Model:
SenseTime/deformable-detr-with-box-refine-two-stage
The Deformable DEtection TRansformer (DETR) model was trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper Deformable DETR: Deformable Transformers for End-to-End Object Detection by Zhu et al. and first released in this repository.
Disclaimer: The team releasing Deformable DETR did not write a model card for this model, so this model card has been written by the Hugging Face team.
The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a multi-layer perceptron (MLP) for the bounding boxes. The model uses so-called object queries to detect objects in an image; each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100.
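As a quick sanity check on the description above, the hedged sketch below loads this checkpoint's configuration and prints the query count and the box-refinement / two-stage flags. Attribute names follow the transformers DeformableDetrConfig and may differ between library versions.

from transformers import AutoConfig

# Inspect the configuration of this checkpoint (attribute names follow
# DeformableDetrConfig in transformers and may change between versions).
config = AutoConfig.from_pretrained("SenseTime/deformable-detr-with-box-refine-two-stage")
print("object queries:", config.num_queries)
print("labels:", len(config.id2label))
print("iterative box refinement:", config.with_box_refine)
print("two-stage variant:", config.two_stage)
print("backbone:", config.backbone)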
The model is trained using a "bipartite matching loss": the predicted class + bounding box of each of the N = 100 object queries is compared to the ground-truth annotations, padded up to the same length N (so if an image only contains 4 objects, the remaining 96 annotations will just have "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU losses (for the bounding boxes) are used to optimize the parameters of the model.
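To make the matching step concrete, the toy sketch below (not this model's actual training code) builds a small cost matrix between four predictions and four padded targets and solves it with scipy's linear_sum_assignment, which implements the Hungarian algorithm; the cost values are made up for illustration.

import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j] = matching cost between prediction i and (padded) target j, e.g. a
# weighted sum of classification, L1 box and generalized IoU terms (values made up).
cost = np.array(
    [
        [0.2, 0.9, 0.8, 0.7],
        [0.8, 0.1, 0.9, 0.6],
        [0.7, 0.8, 0.3, 0.9],
        [0.9, 0.7, 0.6, 0.2],
    ]
)

pred_idx, target_idx = linear_sum_assignment(cost)  # optimal one-to-one assignment
for i, j in zip(pred_idx, target_idx):
    print(f"prediction {i} <-> target {j} (cost {cost[i, j]:.1f})")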
You can use the raw model for object detection. See the model hub to look for all available Deformable DETR models.
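One way to browse the available checkpoints programmatically (as an alternative to the model hub web page) is sketched below with huggingface_hub's list_models; the search string is only a naming assumption, and the web model hub remains the authoritative listing.

from huggingface_hub import list_models

# List Deformable DETR checkpoints on the Hub (search string is a naming assumption).
for model_info in list_models(search="deformable-detr"):
    print(model_info.id)

Here is how to use this model on an image: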
from transformers import AutoImageProcessor, DeformableDetrForObjectDetection
import torch
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr-with-box-refine-two-stage")
model = DeformableDetrForObjectDetection.from_pretrained("SenseTime/deformable-detr-with-box-refine-two-stage")

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# convert outputs (bounding boxes and class logits) to COCO API
# let's only keep detections with score > 0.7
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.7)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    print(
        f"Detected {model.config.id2label[label.item()]} with confidence "
        f"{round(score.item(), 3)} at location {box}"
    )
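To inspect the detections visually rather than only print them, the hedged sketch below draws the boxes from results onto the image with PIL's ImageDraw. It assumes image, results and model from the snippet above are still in scope; the styling choices are arbitrary and not part of the model API.

from PIL import ImageDraw

# Draw the detections from the snippet above (assumes `image`, `results` and
# `model` are still in scope; colors and label placement are arbitrary choices).
annotated = image.copy()
draw = ImageDraw.Draw(annotated)
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    x_min, y_min, x_max, y_max = box.tolist()
    name = model.config.id2label[label.item()]
    draw.rectangle([x_min, y_min, x_max, y_max], outline="red", width=3)
    draw.text((x_min, max(y_min - 12, 0)), f"{name} {score.item():.2f}", fill="red")
annotated.save("detections.png")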
Currently, both the feature extractor and the model support PyTorch.
The Deformable DETR model was trained on COCO 2017 object detection, a dataset consisting of 118k/5k annotated images for training/validation respectively.
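If you plan to fine-tune on COCO-style data, the hedged sketch below shows one way to pass an image together with its annotations through the image processor so that it also returns formatted labels. The annotation dictionary follows the COCO detection format expected by the processor; the concrete values (image id, category, box) are made up for illustration.

from PIL import Image
import requests
from transformers import AutoImageProcessor

processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr-with-box-refine-two-stage")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# COCO detection format: "bbox" is [x, y, width, height] in absolute pixel coordinates.
# The annotation below is made up; real fine-tuning would iterate over the full dataset.
annotations = {
    "image_id": 39769,
    "annotations": [
        {"image_id": 39769, "category_id": 17, "bbox": [10.0, 20.0, 200.0, 150.0], "area": 30000.0, "iscrowd": 0},
    ],
}

encoding = processor(images=image, annotations=annotations, return_tensors="pt")
print(encoding["pixel_values"].shape)  # resized, normalized image tensor
print(encoding["labels"][0].keys())    # class labels, normalized boxes, area, etc.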
@misc{https://doi.org/10.48550/arxiv.2010.04159,
  doi       = {10.48550/ARXIV.2010.04159},
  url       = {https://arxiv.org/abs/2010.04159},
  author    = {Zhu, Xizhou and Su, Weijie and Lu, Lewei and Li, Bin and Wang, Xiaogang and Dai, Jifeng},
  keywords  = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences},
  title     = {Deformable DETR: Deformable Transformers for End-to-End Object Detection},
  publisher = {arXiv},
  year      = {2020},
  copyright = {arXiv.org perpetual, non-exclusive license}
}