BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
This is the PyTorch implementation of the BLIP paper. The code has been tested on PyTorch 1.9 and 1.10.
Catalog:
- Inference demo
- Pre-trained and finetuned checkpoints
- Finetuning code for Image-Text Retrieval, Image Captioning, VQA, and NLVR2
- Pre-training code
- Download of bootstrapped image-text dataset
Inference demo (Image Captioning and VQA):
Run our interactive demo using the Colab notebook (no GPU needed).
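For local inference without the notebook, the sketch below builds a captioning model with the `blip_decoder` builder from `models/blip.py` and decodes with beam search, following the pattern of the demo notebook; the checkpoint and image paths are placeholders, and the preprocessing values are assumed to match the demo.

```python
# Minimal local captioning sketch (assumptions: a downloaded checkpoint and an
# image on disk; the filenames below are placeholders).
import torch
from PIL import Image
from torchvision import transforms
from torchvision.transforms.functional import InterpolationMode

from models.blip import blip_decoder

device = 'cuda' if torch.cuda.is_available() else 'cpu'
image_size = 384

# Resize and normalize the image (constants as used in the demo notebook).
transform = transforms.Compose([
    transforms.Resize((image_size, image_size), interpolation=InterpolationMode.BICUBIC),
    transforms.ToTensor(),
    transforms.Normalize((0.48145466, 0.4578275, 0.40821073),
                         (0.26862954, 0.26130258, 0.27577711)),
])
raw_image = Image.open('demo.jpg').convert('RGB')   # placeholder image path
image = transform(raw_image).unsqueeze(0).to(device)

# 'pretrained' accepts a local path or a URL to one of the checkpoints below.
model = blip_decoder(pretrained='model_base_capfilt_large.pth',  # placeholder checkpoint path
                     image_size=image_size, vit='base')
model.eval()
model = model.to(device)

with torch.no_grad():
    # Beam search decoding, as in the demo notebook.
    caption = model.generate(image, sample=False, num_beams=3, max_length=20, min_length=5)
print('caption:', caption[0])
```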
Pre-trained checkpoints:
| Num. pre-train images | BLIP w/ ViT-B | BLIP w/ ViT-B and CapFilt-L | BLIP w/ ViT-L |
|---|---|---|---|
| 14M | Download | - | - |
| 129M | Download | Download | Download |
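The checkpoints are plain PyTorch files, and the model builders in `models/` take either a local path or a URL through their `pretrained` argument. A quick way to sanity-check a downloaded file is sketched below; the filename is a placeholder, and storing the weights under a `'model'` key is assumed from the repo's checkpoint-loading helper.

```python
# Inspect a downloaded BLIP checkpoint (placeholder filename).
import torch

ckpt = torch.load('model_base.pth', map_location='cpu')
state_dict = ckpt.get('model', ckpt) if isinstance(ckpt, dict) else ckpt  # weights usually sit under 'model'
print(f'{len(state_dict)} tensors; first key: {next(iter(state_dict))}')
```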
Image-Text Retrieval:
- Download COCO or Flickr30k datasets from the original websites, and set 'image_root' in configs/retrieval_{dataset}.yaml accordingly.
- To evaluate the finetuned BLIP model on COCO, run:
python -m torch.distributed.run --nproc_per_node=8 train_retrieval.py \
  --config ./configs/retrieval_coco.yaml \
  --output_dir output/retrieval_coco \
  --evaluate
- To finetune the pre-trained checkpoint using 8 A100 GPUs, first set 'pretrained' in configs/retrieval_coco.yaml to the path of the downloaded pre-trained checkpoint (see the config sketch below). Then run:
python -m torch.distributed.run --nproc_per_node=8 train_retrieval.py \
  --config ./configs/retrieval_coco.yaml \
  --output_dir output/retrieval_coco
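The two config edits mentioned above ('image_root' from the dataset step and 'pretrained' for finetuning) can be made by editing configs/retrieval_coco.yaml directly; the sketch below does the same programmatically with PyYAML, using placeholder paths and assuming no other keys need to change for your setup.

```python
# Point configs/retrieval_coco.yaml at local data and a downloaded checkpoint
# (placeholder paths; requires PyYAML). Note: re-dumping drops YAML comments.
import yaml

cfg_path = 'configs/retrieval_coco.yaml'
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

cfg['image_root'] = '/path/to/coco/images/'      # dataset root (placeholder)
cfg['pretrained'] = '/path/to/model_base.pth'    # downloaded pre-trained checkpoint (placeholder)

with open(cfg_path, 'w') as f:
    yaml.safe_dump(cfg, f, sort_keys=False)
```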