Model:
philschmid/layoutlm-funsd

Task:
token classification

This model is a fine-tuned version of microsoft/layoutlm-base-uncased on the FUNSD dataset.
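If you want to try the checkpoint locally first, it loads like any other transformers model. A minimal sketch, mirroring the loading code used by the handler shown below:

```python
from transformers import LayoutLMForTokenClassification, LayoutLMv2Processor

# load the fine-tuned checkpoint and its processor from the Hugging Face Hub
model = LayoutLMForTokenClassification.from_pretrained("philschmid/layoutlm-funsd")
processor = LayoutLMv2Processor.from_pretrained("philschmid/layoutlm-funsd")
```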
In this tutorial, you will learn how to deploy LayoutLM to Hugging Face Inference Endpoints and how to integrate it into your products via the endpoint's API.
This tutorial does not cover how to create the custom handler used for inference. If you want to learn how to create a custom handler for Inference Endpoints, you can check out the documentation or go through "Custom Inference with Hugging Face Inference Endpoints".
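Although writing handlers is out of scope here, it helps to know the contract: Inference Endpoints looks for a handler.py in the model repository that exposes an EndpointHandler class with an __init__(self, path) and a __call__(self, data) method. Here is a bare skeleton of that contract; the full implementation follows below:

```python
from typing import Any, Dict

class EndpointHandler:
    def __init__(self, path: str = ""):
        # `path` points to the local copy of the model repository
        ...

    def __call__(self, data: Dict[str, Any]) -> Dict[str, Any]:
        # `data` holds the deserialized request payload, e.g. under "inputs"
        ...
```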
We are going to deploy philschmid/layoutlm-funsd, which implements the following handler.py:
```python
from typing import Dict, List, Any
from transformers import LayoutLMForTokenClassification, LayoutLMv2Processor
import torch
from subprocess import run

# install tesseract-ocr and pytesseract
run("apt install -y tesseract-ocr", shell=True, check=True)
run("pip install pytesseract", shell=True, check=True)

# helper function to unnormalize bboxes for drawing onto the image
def unnormalize_box(bbox, width, height):
    return [
        width * (bbox[0] / 1000),
        height * (bbox[1] / 1000),
        width * (bbox[2] / 1000),
        height * (bbox[3] / 1000),
    ]

# set device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")


class EndpointHandler:
    def __init__(self, path=""):
        # load model and processor from path
        self.model = LayoutLMForTokenClassification.from_pretrained(path).to(device)
        self.processor = LayoutLMv2Processor.from_pretrained(path)

    def __call__(self, data: Dict[str, bytes]) -> Dict[str, List[Any]]:
        """
        Args:
            data (:obj:):
                includes the deserialized image file as PIL.Image
        """
        # process input
        image = data.pop("inputs", data)

        # process image
        encoding = self.processor(image, return_tensors="pt")

        # run prediction
        with torch.inference_mode():
            outputs = self.model(
                input_ids=encoding.input_ids.to(device),
                bbox=encoding.bbox.to(device),
                attention_mask=encoding.attention_mask.to(device),
                token_type_ids=encoding.token_type_ids.to(device),
            )
            predictions = outputs.logits.softmax(-1)

        # post process output
        result = []
        for item, inp_ids, bbox in zip(
            predictions.squeeze(0).cpu(), encoding.input_ids.squeeze(0).cpu(), encoding.bbox.squeeze(0).cpu()
        ):
            label = self.model.config.id2label[int(item.argmax().cpu())]
            if label == "O":
                continue
            score = item.max().item()
            text = self.processor.tokenizer.decode(inp_ids)
            bbox = unnormalize_box(bbox.tolist(), image.width, image.height)
            result.append({"label": label, "score": score, "text": text, "bbox": bbox})
        return {"predictions": result}
```
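Before creating the endpoint, you can sanity-check the handler locally. A sketch, assuming you have cloned the model repository, run the code from inside it, and have a sample document image sample.png (hypothetical filename); note that importing handler.py executes the apt/pip install commands at its top:

```python
from PIL import Image
from handler import EndpointHandler

# initialize the handler with the path to the cloned model repository
my_handler = EndpointHandler(path=".")

# Inference Endpoints deserializes the request body into a PIL.Image under "inputs"
image = Image.open("sample.png")  # hypothetical sample image
prediction = my_handler({"inputs": image})
print(prediction["predictions"][:2])
```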
Hugging Face Inference Endpoints can directly work with binary data; this means we can send the image of a document straight to the endpoint. We are going to use requests to send our requests (make sure it is installed: pip install requests).
```python
import json
import requests as r
import mimetypes

ENDPOINT_URL = ""  # url of your endpoint
HF_TOKEN = ""  # organization token where you deployed your endpoint

def predict(path_to_image: str = None):
    with open(path_to_image, "rb") as i:
        b = i.read()
    headers = {
        "Authorization": f"Bearer {HF_TOKEN}",
        "Content-Type": mimetypes.guess_type(path_to_image)[0],
    }
    response = r.post(ENDPOINT_URL, headers=headers, data=b)
    return response.json()

prediction = predict(path_to_image="path_to_your_image.png")
print(prediction)
# {'predictions': [{'label': 'I-ANSWER', 'score': 0.4823932945728302, 'text': '[CLS]', 'bbox': [0.0, 0.0, 0.0, 0.0]}, {'label': 'B-HEADER', 'score': 0.992474377155304, 'text': 'your', 'bbox': [1712.529, 181.203, 1859.949, 228.88799999999998]},
```
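Depending on your use case, you may want to drop low-confidence tokens before consuming the result. A minimal sketch with a hypothetical 0.9 threshold, operating on the response schema shown above:

```python
# keep only predictions above a hypothetical confidence threshold
confident = [p for p in prediction["predictions"] if p["score"] > 0.9]
for p in confident:
    print(f"{p['label']:<12} {p['score']:.3f} {p['text']}")
```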
To better understand the model's predictions, you can also draw them onto the image you provided.
```python
from PIL import Image, ImageDraw, ImageFont

# draw results on image
def draw_result(path_to_image, result):
    image = Image.open(path_to_image)
    label2color = {
        "B-HEADER": "blue",
        "B-QUESTION": "red",
        "B-ANSWER": "green",
        "I-HEADER": "blue",
        "I-QUESTION": "red",
        "I-ANSWER": "green",
    }

    # draw predictions over the image
    draw = ImageDraw.Draw(image)
    font = ImageFont.load_default()
    for res in result:
        draw.rectangle(res["bbox"], outline="black")
        draw.rectangle(res["bbox"], outline=label2color[res["label"]])
        draw.text((res["bbox"][0] + 10, res["bbox"][1] - 10), text=res["label"], fill=label2color[res["label"]], font=font)
    return image

draw_result("path_to_your_image.png", prediction["predictions"])
```
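Since draw_result returns the annotated PIL.Image, you can display it in a notebook or write it to disk, for example (hypothetical output filename):

```python
# save the annotated image to disk
annotated = draw_result("path_to_your_image.png", prediction["predictions"])
annotated.save("annotated_result.png")
```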