## BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation

This is the PyTorch implementation of the BLIP paper. The code has been tested on PyTorch 1.9 and 1.10. To install the dependencies, run:
```bash
pip install -r requirements.txt
```

Catalog:
- [ ] Inference demo
- [x] Pre-trained and finetuned checkpoints
- [x] Finetuning code for Image-Text Retrieval, Image Captioning, VQA, and NLVR2
- [x] Pre-training code
- [x] Download of bootstrapped image-text datasets

### Inference demo (Image Captioning and VQA):
Run our interactive demo using the Colab notebook (no GPU needed).
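The Colab notebook is the quickest way to try the models, but they can also be scripted directly. The sketch below is a minimal, unofficial example: it assumes the `blip_decoder` and `blip_vqa` factories in `models/blip.py` and `models/blip_vqa.py` and the preprocessing used by the demo notebook; `demo.jpg` and the VQA checkpoint URL are placeholders, and the captioning example reuses the pre-trained `model_base.pth` URL quoted in the finetuning steps below (a finetuned captioning checkpoint from the tables below should produce better captions).

```python
import torch
from PIL import Image
from torchvision import transforms
from torchvision.transforms.functional import InterpolationMode

from models.blip import blip_decoder
from models.blip_vqa import blip_vqa

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

def load_image(path, image_size):
    # BLIP preprocessing: bicubic resize + CLIP normalization constants
    transform = transforms.Compose([
        transforms.Resize((image_size, image_size), interpolation=InterpolationMode.BICUBIC),
        transforms.ToTensor(),
        transforms.Normalize((0.48145466, 0.4578275, 0.40821073),
                             (0.26862954, 0.26130258, 0.27577711)),
    ])
    return transform(Image.open(path).convert('RGB')).unsqueeze(0).to(device)

# Image captioning (pre-trained checkpoint; finetuned checkpoints are linked below)
caption_model = blip_decoder(
    pretrained='https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth',
    image_size=384, vit='base')
caption_model.eval().to(device)

image = load_image('demo.jpg', 384)  # placeholder image path
with torch.no_grad():
    caption = caption_model.generate(image, sample=False, num_beams=3, max_length=20, min_length=5)
print('caption:', caption[0])

# Visual question answering (checkpoint URL is a placeholder; see the finetuned checkpoints table)
vqa_model = blip_vqa(pretrained='<VQA checkpoint URL>', image_size=480, vit='base')
vqa_model.eval().to(device)

image = load_image('demo.jpg', 480)
with torch.no_grad():
    answer = vqa_model(image, 'where is the dog?', train=False, inference='generate')
print('answer:', answer[0])
```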
### Pre-trained checkpoints:

Num. pre-train images | BLIP w/ ViT-B | BLIP w/ ViT-B and CapFilt-L | BLIP w/ ViT-L
--- | :---: | :---: | :---:
14M | Download | - | -
129M | Download | Download | Download

### Finetuned checkpoints:

Task | BLIP w/ ViT-B | BLIP w/ ViT-B and CapFilt-L | BLIP w/ ViT-L
--- | :---: | :---: | :---:
Image-Text Retrieval (COCO) | Download | - | Download
Image-Text Retrieval (Flickr30k) | Download | - | Download
Image Captioning (COCO) | - | Download | Download
VQA | Download | - | -
NLVR2 | Download | - | -

### Image-Text Retrieval:
1. Download COCO and Flickr30k datasets from the original websites, and set 'image_root' in configs/retrieval_{dataset}.yaml accordingly.
2. To evaluate the finetuned BLIP model on COCO, run:
```bash
python -m torch.distributed.run --nproc_per_node=8 train_retrieval.py \
  --config ./configs/retrieval_coco.yaml \
  --output_dir output/retrieval_coco \
  --evaluate
```
3. To finetune the pre-trained checkpoint using 8 A100 GPUs, first set 'pretrained' in configs/retrieval_coco.yaml as "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth". Then run:
```bash
python -m torch.distributed.run --nproc_per_node=8 train_retrieval.py \
  --config ./configs/retrieval_coco.yaml \
  --output_dir output/retrieval_coco
```
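Beyond the full distributed evaluation above, a single image-text pair can be scored for a quick sanity check. This is a minimal sketch, assuming the `blip_itm` factory in `models/blip_itm.py` whose forward pass accepts a `match_head` argument ('itm' for the binary matching head, 'itc' for contrastive similarity); the checkpoint and image paths are placeholders.

```python
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms
from torchvision.transforms.functional import InterpolationMode

from models.blip_itm import blip_itm

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Placeholder: point this at the Image-Text Retrieval (COCO) checkpoint from the table above
model = blip_itm(pretrained='<retrieval checkpoint URL or local path>', image_size=384, vit='base')
model.eval().to(device)

transform = transforms.Compose([
    transforms.Resize((384, 384), interpolation=InterpolationMode.BICUBIC),
    transforms.ToTensor(),
    transforms.Normalize((0.48145466, 0.4578275, 0.40821073),
                         (0.26862954, 0.26130258, 0.27577711)),
])
image = transform(Image.open('demo.jpg').convert('RGB')).unsqueeze(0).to(device)  # placeholder path
caption = 'a woman sitting on the beach with a dog'

with torch.no_grad():
    itm_logits = model(image, caption, match_head='itm')   # matched/not-matched classifier
    itm_score = F.softmax(itm_logits, dim=1)[:, 1]         # probability the pair matches
    itc_score = model(image, caption, match_head='itc')    # image-text feature similarity

print(f'ITM match probability: {itm_score.item():.4f}')
print(f'ITC similarity: {itc_score.item():.4f}')
```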
### Image-Text Captioning:
1. Download COCO and NoCaps datasets from the original websites, and set 'image_root' in configs/caption_coco.yaml and configs/nocaps.yaml accordingly.
2. To evaluate the finetuned BLIP model on COCO, run:
```bash
python -m torch.distributed.run --nproc_per_node=8 train_caption.py --evaluate
```
3. To evaluate the finetuned BLIP model on NoCaps, generate results with (the generated predictions need to be submitted to the official NoCaps evaluation server):
```bash
python -m torch.distributed.run --nproc_per_node=8 eval_nocaps.py
```
4. To finetune the pre-trained checkpoint using 8 A100 GPUs, first set 'pretrained' in configs/caption_coco.yaml as "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth". Then run:
```bash
python -m torch.distributed.run --nproc_per_node=8 train_caption.py
```
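For qualitative checks of a finetuned captioner, the decoder can generate with either beam search or nucleus sampling. A minimal sketch, assuming the same `blip_decoder`/`generate` interface as in the demo sketch above; the checkpoint path is a placeholder for wherever your finetuning run saved its weights.

```python
import torch
from models.blip import blip_decoder

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Placeholder checkpoint path: adjust to your finetuning output
model = blip_decoder(pretrained='output/caption_coco/checkpoint_best.pth', image_size=384, vit='base')
model.eval().to(device)

# Stand-in for a preprocessed image tensor; see the demo sketch above for real preprocessing
image = torch.randn(1, 3, 384, 384, device=device)

with torch.no_grad():
    beam = model.generate(image, sample=False, num_beams=3, max_length=20, min_length=5)  # deterministic
    nucleus = model.generate(image, sample=True, top_p=0.9, max_length=20, min_length=5)  # diverse
print('beam search:', beam[0])
print('nucleus sampling:', nucleus[0])
```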