Model: dg845/unidiffuser-diffusers
This model is a version of the UniDiffuser-v1 (original code, original model) checkpoint that is compatible with diffusers. This is one of two models from the original UniDiffuser release, the other being UniDiffuser-v0. From the original model card:
UniDiffuser is a unified diffusion framework to fit all distributions relevant to a set of multi-modal data in one transformer. UniDiffuser is able to perform image, text, text-to-image, image-to-text, and image-text pair generation by setting proper timesteps without additional overhead.
Specifically, UniDiffuser employs a variant of the transformer, called U-ViT, which parameterizes the joint noise prediction network. Other components perform as encoders and decoders of different modalities, including a pretrained image autoencoder from Stable Diffusion, a pretrained image ViT-B/32 CLIP encoder, a pretrained text ViT-L CLIP encoder, and a GPT-2 text decoder finetuned by ourselves.
We provide two versions of UniDiffuser: UniDiffuser-v0 and UniDiffuser-v1 (this checkpoint).
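For intuition on what "setting proper timesteps" means: in the UniDiffuser paper, the task is selected by the pair of timesteps fed to the joint noise-prediction network for the image latent and the text latent. The sketch below is purely conceptual; the function and variable names are illustrative and not part of the diffusers API.

```python
# Conceptual sketch of task selection in UniDiffuser (illustrative names,
# not the diffusers API). t is the current sampling timestep, T the maximum.
T = 1000

def task_timesteps(task: str, t: int) -> tuple[int, int]:
    """Return (t_image, t_text) for one denoising step of the given task."""
    if task == "joint":        # generate an (image, text) pair jointly
        return t, t            # both modalities share the same noise level
    if task == "text2img":     # condition on clean text, denoise the image
        return t, 0
    if task == "img2text":     # condition on a clean image, denoise the text
        return 0, t
    if task == "img":          # unconditional (marginal) image generation
        return t, T            # text latent is kept as pure noise
    if task == "text":         # unconditional (marginal) text generation
        return T, t
    raise ValueError(f"unknown task: {task}")
```

The diffusers pipeline wraps all of these tasks behind a single interface, as the usage example below shows.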
```python
import requests
import torch
from PIL import Image
from io import BytesIO

from diffusers import UniDiffuserPipeline

device = "cuda"
model_id_or_path = "dg845/unidiffuser-diffusers"
pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path)
pipe.to(device)

# Joint image-text generation. The generation task is automatically inferred.
sample = pipe(num_inference_steps=20, guidance_scale=8.0)
image = sample.images[0]
text = sample.text[0]
image.save("unidiffuser_sample_joint_image.png")
print(text)

# The mode can be set manually. The following is equivalent to the above:
pipe.set_joint_mode()
sample2 = pipe(num_inference_steps=20, guidance_scale=8.0)

# Note that if you set the mode manually the pipeline will no longer attempt
# to automatically infer the mode. You can re-enable this with reset_mode().
pipe.reset_mode()

# Text-to-image generation.
prompt = "an elephant under the sea"
sample = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=8.0)
t2i_image = sample.images[0]
t2i_image.save("unidiffuser_sample_text2img_image.png")

# Image-to-text generation.
image_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unidiffuser/unidiffuser_example_image.jpg"
response = requests.get(image_url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image = init_image.resize((512, 512))
sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0)
i2t_text = sample.text[0]
print(i2t_text)

# Image variation can be performed with an image-to-text generation followed by a text-to-image generation:
sample = pipe(prompt=i2t_text, num_inference_steps=20, guidance_scale=8.0)
final_image = sample.images[0]
final_image.save("unidiffuser_image_variation_sample.png")

# Text variation can be performed with a text-to-image generation followed by an image-to-text generation:
sample = pipe(image=t2i_image, num_inference_steps=20, guidance_scale=8.0)
final_prompt = sample.text[0]
print(final_prompt)
```
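The example above covers joint, text-to-image, and image-to-text generation. The two remaining tasks, unconditional image and unconditional text generation, can be requested explicitly via the pipeline's mode setters, as in this short sketch:

```python
# Unconditional image generation (marginal distribution over images).
pipe.set_image_mode()
sample = pipe(num_inference_steps=20)
sample.images[0].save("unidiffuser_unconditional_image.png")

# Unconditional text generation (marginal distribution over texts).
pipe.set_text_mode()
sample = pipe(num_inference_steps=20)
print(sample.text[0])

# Restore automatic mode inference.
pipe.reset_mode()
```

If GPU memory is tight, the pipeline can usually be loaded in half precision. This is the standard diffusers pattern rather than anything specific to this checkpoint; verify output quality on your own hardware:

```python
import torch
from diffusers import UniDiffuserPipeline

# Standard diffusers half-precision loading (assumes fp16 weights behave
# well for this checkpoint; check the results before relying on it).
pipe = UniDiffuserPipeline.from_pretrained(
    "dg845/unidiffuser-diffusers", torch_dtype=torch.float16
)
pipe.to("cuda")

# Fixed seed for reproducible sampling.
generator = torch.Generator("cuda").manual_seed(0)
sample = pipe(
    prompt="an elephant under the sea",
    num_inference_steps=20,
    guidance_scale=8.0,
    generator=generator,
)
sample.images[0].save("unidiffuser_fp16_sample.png")
```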
Note: Most of this section is taken from the Stable Diffusion model card, but applies in the same way to UniDiffuser.
The model should be used following the AGPL-3.0 license. Possible usage includes

- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
Out-of-Scope Use

The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
Misuse and Malicious Use

Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: