## BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation

This is the PyTorch implementation of the BLIP paper. The code has been tested on PyTorch 1.9 and 1.10.

### Catalog:
- [x] Inference demo
- [x] Pre-trained and finetuned checkpoints
- [x] Finetuning code for Image-Text Retrieval, Image Captioning, VQA, and NLVR2
- [x] Pre-training code
- [x] Download of bootstrapped image-text dataset

### Inference demo (Image Captioning and VQA):
Run our interactive demo using the Colab notebook (no GPU needed).
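To run captioning locally instead, the sketch below follows the demo's captioning path. It assumes the repo's `models.blip.blip_decoder` factory and the demo notebook's preprocessing; the image and checkpoint paths are placeholders:

```python
# Minimal local captioning sketch (paths are placeholders; assumes the repo's
# models.blip.blip_decoder factory and the demo notebook's preprocessing).
import torch
from PIL import Image
from torchvision import transforms
from torchvision.transforms.functional import InterpolationMode
from models.blip import blip_decoder  # provided by this repo

device = 'cuda' if torch.cuda.is_available() else 'cpu'
image_size = 384

# Demo-style preprocessing: bicubic resize plus CLIP normalization stats.
transform = transforms.Compose([
    transforms.Resize((image_size, image_size), interpolation=InterpolationMode.BICUBIC),
    transforms.ToTensor(),
    transforms.Normalize((0.48145466, 0.4578275, 0.40821073),
                         (0.26862954, 0.26130258, 0.27577711)),
])
image = transform(Image.open('demo.jpg').convert('RGB')).unsqueeze(0).to(device)

# 'path/to/checkpoint.pth' is a placeholder for a downloaded checkpoint (see table below).
model = blip_decoder(pretrained='path/to/checkpoint.pth', image_size=image_size, vit='base')
model.eval()
model = model.to(device)

with torch.no_grad():
    caption = model.generate(image, sample=False, num_beams=3, max_length=20, min_length=5)
print('caption:', caption[0])
```

VQA works analogously through `models.blip_vqa`.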
### Pre-trained checkpoints:

Num. pre-train images | BLIP w/ ViT-B | BLIP w/ ViT-B and CapFilt-L | BLIP w/ ViT-L
--- | --- | --- | ---
14M | Download | - | -
129M | Download | Download | Download

### Image-Text Retrieval:
1. Download the COCO or Flickr30k datasets from the original websites, and set 'image_root' in configs/retrieval_{dataset}.yaml accordingly (a config sketch follows step 2).
2. To evaluate the finetuned BLIP model on COCO, run:
```
python -m torch.distributed.run --nproc_per_node=8 train_retrieval.py \
--config ./configs/retrieval_coco.yaml \
--output_dir output/retrieval_coco \
--evaluate
```
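Both 'image_root' (step 1) and 'pretrained' (step 3) live in the same config file. A rough sketch of the relevant fields, with placeholder values rather than the shipped defaults:

```yaml
# configs/retrieval_coco.yaml (excerpt; values are placeholders)
image_root: '/path/to/coco/images/'    # step 1: root folder of the downloaded images
pretrained: '/path/to/checkpoint.pth'  # step 3: pre-trained checkpoint to finetune from
```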
3. To finetune the pre-trained checkpoint using 8 A100 GPUs, first set 'pretrained' in configs/retrieval_coco.yaml to the path or URL of a pre-trained checkpoint from the table above. Then run:
```
python -m torch.distributed.run --nproc_per_node=8 train_retrieval.py \
--config ./configs/retrieval_coco.yaml \
--output_dir output/retrieval_coco
```
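Checkpoints are written under --output_dir as training proceeds. A quick way to sanity-check one; the file name checkpoint_best.pth and the 'model' key are assumptions about what train_retrieval.py saves:

```python
# Hedged sketch: inspect a saved retrieval checkpoint.
# 'checkpoint_best.pth' is an assumed file name under --output_dir.
import torch

ckpt = torch.load('output/retrieval_coco/checkpoint_best.pth', map_location='cpu')
# Training checkpoints often bundle the weights with optimizer state under a
# 'model' key; fall back to treating the file as a bare state dict otherwise.
state_dict = ckpt['model'] if isinstance(ckpt, dict) and 'model' in ckpt else ckpt
print(f'{len(state_dict)} tensors; first key: {next(iter(state_dict))}')
```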