
BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation

This is the PyTorch implementation of the BLIP paper. The code has been tested on PyTorch 1.9 and 1.10. To install the dependencies, run:

pip install -r requirements.txt

Catalog:

  • Inference demo
  • Pre-trained and finetuned checkpoints
  • Finetuning code for Image-Text Retrieval, Image Captioning, VQA, and NLVR2
  • Pre-training code
  • Download of bootstrapped image-text datasets

Inference demo (Image Captioning and VQA):

Run our interactive demo using the Colab notebook (no GPU needed).
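
If you would rather run captioning locally, the sketch below mirrors the demo notebook: it preprocesses one image, builds a BLIP captioning model, and decodes a caption with beam search. It assumes the blip_decoder factory and its generate() method from models/blip.py in this repo; the checkpoint and image paths are placeholders (use a captioning checkpoint from the tables below and your own image).

# Minimal local captioning sketch; 'checkpoints/model_base_caption.pth' and 'demo.jpg' are placeholders.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.transforms.functional import InterpolationMode
from models.blip import blip_decoder  # run from the repo root so this import resolves

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
image_size = 384

# BLIP preprocessing: bicubic resize + CLIP-style normalization (as in the demo notebook).
transform = transforms.Compose([
    transforms.Resize((image_size, image_size), interpolation=InterpolationMode.BICUBIC),
    transforms.ToTensor(),
    transforms.Normalize((0.48145466, 0.4578275, 0.40821073),
                         (0.26862954, 0.26130258, 0.27577711)),
])
image = transform(Image.open('demo.jpg').convert('RGB')).unsqueeze(0).to(device)

model = blip_decoder(pretrained='checkpoints/model_base_caption.pth',
                     image_size=image_size, vit='base')
model = model.to(device)
model.eval()

with torch.no_grad():
    # Beam search decoding; set sample=True for nucleus sampling instead.
    caption = model.generate(image, sample=False, num_beams=3, max_length=20, min_length=5)
print('caption:', caption[0])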

Pre-trained checkpoints:

Num. pre-train images | BLIP w/ ViT-B | BLIP w/ ViT-B and CapFilt-L | BLIP w/ ViT-L
14M                   | Download      | -                           | -
129M                  | Download      | Download                    | Download
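
Each Download entry above is a direct link to a .pth checkpoint. If you prefer to fetch one programmatically, the sketch below uses torch.hub with the ViT-B pre-training URL that also appears in the finetuning steps further down; the 'model' key lookup reflects how checkpoints in this repo are typically packaged and is an assumption.

# Optional: fetch a checkpoint into the local torch.hub cache and inspect it.
import torch

url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth'
ckpt = torch.hub.load_state_dict_from_url(url, map_location='cpu')

# The weights are usually wrapped in a 'model' entry alongside training metadata.
state_dict = ckpt['model'] if 'model' in ckpt else ckpt
print(f'{len(state_dict)} tensors, e.g. {next(iter(state_dict))}')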

Finetuned checkpoints:

Task                             | BLIP w/ ViT-B | BLIP w/ ViT-B and CapFilt-L | BLIP w/ ViT-L
Image-Text Retrieval (COCO)      | Download      | -                           | Download
Image-Text Retrieval (Flickr30k) | Download      | -                           | Download
Image Captioning (COCO)          | -             | Download                    | Download
VQA                              | Download      | -                           | -
NLVR2                            | Download      | -                           | -

Image-Text Retrieval:

  1. Download the COCO and Flickr30k datasets from the original websites, and set 'image_root' in configs/retrieval_{dataset}.yaml accordingly (see the config-editing sketch after step 3).
  2. To evaluate the finetuned BLIP model on COCO, run:
python -m torch.distributed.run --nproc_per_node=8 --use_env train_retrieval.py \
--config ./configs/retrieval_coco.yaml \
--output_dir output/retrieval_coco \
--evaluate
  3. To finetune the pre-trained checkpoint using 8 A100 GPUs, first set 'pretrained' in configs/retrieval_coco.yaml as "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth". Then run:
python -m torch.distributed.run --nproc_per_node=8 --use_env train_retrieval.py \
--config ./configs/retrieval_coco.yaml \
--output_dir output/retrieval_coco 
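
Steps 1 and 3 both amount to editing a couple of keys in configs/retrieval_coco.yaml before launching the command above. A minimal sketch of doing this from Python, assuming PyYAML is installed and that 'image_root' and 'pretrained' are top-level keys in the config (paths are placeholders):

# Point configs/retrieval_coco.yaml at local data and the pre-trained checkpoint
# before launching train_retrieval.py (paths below are placeholders).
import yaml

cfg_path = 'configs/retrieval_coco.yaml'
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

cfg['image_root'] = '/path/to/coco/images/'  # step 1: where the COCO images live
cfg['pretrained'] = ('https://storage.googleapis.com/sfr-vision-language-research/'
                     'BLIP/models/model_base.pth')  # step 3: pre-trained checkpoint

with open(cfg_path, 'w') as f:
    yaml.dump(cfg, f, default_flow_style=False, sort_keys=False)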

Image-Text Captioning:

  1. Download COCO and NoCaps datasets from the original websites, and set 'image_root' in configs/caption_coco.yaml and configs/nocaps.yaml accordingly.
  2. To evaluate the finetuned BLIP model on COCO, run the command below (an offline scoring sketch follows this section):
python -m torch.distributed.run --nproc_per_node=8 --use_env train_caption.py --evaluate
  3. To evaluate the finetuned BLIP model on NoCaps, generate results with:
python -m torch.distributed.run --nproc_per_node=8 --use_env eval_nocaps.py 
  4. To finetune the pre-trained checkpoint using 8 A100 GPUs, first set 'pretrained' in configs/caption_coco.yaml as "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth". Then run:
python -m torch.distributed.run --nproc_per_node=8 --use_env train_caption.py 
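
If you want to score a set of generated captions against COCO ground truth yourself, the sketch below uses pycocoevalcap, the standard COCO caption evaluation toolkit. It assumes a results file in the usual COCO format, i.e. a JSON list of {"image_id", "caption"} records; both file paths are placeholders, and the bundled tokenizer needs a local Java runtime.

# Score a COCO-format caption results file offline (paths are placeholders).
from pycocotools.coco import COCO
from pycocoevalcap.eval import COCOEvalCap

annotation_file = '/path/to/coco_caption_ground_truth.json'  # ground-truth captions
results_file = '/path/to/generated_captions.json'            # [{"image_id": ..., "caption": ...}, ...]

coco = COCO(annotation_file)
coco_result = coco.loadRes(results_file)

coco_eval = COCOEvalCap(coco, coco_result)
coco_eval.params['image_id'] = coco_result.getImgIds()  # only score images with predictions
coco_eval.evaluate()

# Prints BLEU, METEOR, ROUGE-L, CIDEr, and SPICE scores.
for metric, score in coco_eval.eval.items():
    print(f'{metric}: {score:.3f}')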