diff --git a/README.md b/README.md
index bd2dac1..2c6d756 100644
--- a/README.md
+++ b/README.md
@@ -1,14 +1,78 @@
 ## BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
-This is the PyTorch implementation of the BLIP paper.
+
+This is the PyTorch implementation of the BLIP paper. The code has been tested on PyTorch 1.9 and 1.10.
+To install the dependencies, run:
+<pre>pip install -r requirements.txt</pre>
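+
+As a quick sanity check (a hypothetical snippet, not part of the repository), you can confirm that the installed PyTorch matches one of the tested versions and that a GPU is visible:
+<pre>
+# Hypothetical environment check, not part of the repo: confirm the PyTorch version
+# matches a tested release (1.9/1.10) and that CUDA is available before running the recipes below.
+import torch
+
+print("PyTorch:", torch.__version__)            # expected: 1.9.x or 1.10.x
+print("CUDA available:", torch.cuda.is_available())
+</pre>
+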
 Catalog:
 - [x] Inference demo
 - [x] Pre-trained and finetuned checkpoints
-- [x] Pre-training code
 - [x] Finetuning code for Image-Text Retrieval, Image Captioning, VQA, and NLVR2
-- [x] Download of bootstrapped image-text dataset
+- [x] Pre-training code
+- [x] Download of bootstrapped image-text datasets
+
 ### Inference demo (Image Captioning and VQA):
 Run our interactive demo using Colab notebook (no GPU needed):
+
+### Pre-trained checkpoints:
+Num. pre-train images | BLIP w/ ViT-B | BLIP w/ ViT-B and CapFilt-L | BLIP w/ ViT-L
+--- | :---: | :---: | :---:
+14M | Download | - | -
+129M | Download | Download | Download
+
+### Finetuned checkpoints:
+Task | BLIP w/ ViT-B | BLIP w/ ViT-B and CapFilt-L | BLIP w/ ViT-L
+--- | :---: | :---: | :---:
+Image-Text Retrieval (COCO) | Download | - | Download
+Image-Text Retrieval (Flickr30k) | Download | - | Download
+Image Captioning (COCO) | - | Download | Download
+VQA | Download | Download | -
+NLVR2 | Download | - | -
+
+### Image-Text Retrieval:
+1. Download COCO and Flickr30k datasets from the original websites, and set 'image_root' in configs/retrieval_{dataset}.yaml accordingly (a config-editing sketch follows this section).
+2. To evaluate the finetuned BLIP model on COCO, run:
+<pre>python -m torch.distributed.run --nproc_per_node=8 train_retrieval.py \
+--config ./configs/retrieval_coco.yaml \
+--output_dir output/retrieval_coco \
+--evaluate</pre>
+3. To finetune the pre-trained checkpoint using 8 A100 GPUs, first set 'pretrained' in configs/retrieval_coco.yaml as "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth". Then run:
+<pre>python -m torch.distributed.run --nproc_per_node=8 train_retrieval.py \
+--config ./configs/retrieval_coco.yaml \
+--output_dir output/retrieval_coco</pre>
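+
+The two keys mentioned above ('image_root' and 'pretrained') can be edited in the YAML file by hand, or set programmatically. A minimal sketch, assuming PyYAML is installed and that both keys sit at the top level of the config (the image path is a placeholder):
+<pre>
+# Hypothetical helper, not part of the repo: point configs/retrieval_coco.yaml at your
+# local COCO images and at the pre-trained checkpoint before launching train_retrieval.py.
+import yaml  # PyYAML, assumed to be available in the environment
+
+cfg_path = "configs/retrieval_coco.yaml"
+with open(cfg_path) as f:
+    cfg = yaml.safe_load(f)
+
+cfg["image_root"] = "/path/to/coco/images"  # placeholder: your COCO image folder
+cfg["pretrained"] = "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth"
+
+with open(cfg_path, "w") as f:
+    yaml.safe_dump(cfg, f)
+</pre>
+The same pattern applies to the captioning, VQA, and NLVR2 configs in the sections below.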
+
+### Image Captioning:
+1. Download COCO and NoCaps datasets from the original websites, and set 'image_root' in configs/caption_coco.yaml and configs/nocaps.yaml accordingly.
+2. To evaluate the finetuned BLIP model on COCO, run:
+<pre>python -m torch.distributed.run --nproc_per_node=8 train_caption.py --evaluate</pre>
+3. To evaluate the finetuned BLIP model on NoCaps, generate results with (evaluation needs to be performed on the official server):
+<pre>python -m torch.distributed.run --nproc_per_node=8 eval_nocaps.py</pre>
+4. To finetune the pre-trained checkpoint using 8 A100 GPUs, first set 'pretrained' in configs/caption_coco.yaml as "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth" (a checkpoint-loading check follows this section). Then run:
+<pre>python -m torch.distributed.run --nproc_per_node=8 train_caption.py</pre>
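+
+Before launching a multi-GPU captioning run, it can be worth checking that the 'pretrained' URL resolves and that the checkpoint loads. The sketch below is hypothetical, and the checkpoint's dictionary layout is an assumption, not taken from this repo:
+<pre>
+# Hypothetical pre-flight check, not part of the repo: download the pre-trained checkpoint
+# into the local torch cache and inspect its top-level structure.
+import torch
+
+url = "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth"
+ckpt = torch.hub.load_state_dict_from_url(url, map_location="cpu")
+
+# Checkpoints of this kind often store the weights under a 'model' key; fall back to the raw dict.
+state = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt
+print(type(ckpt).__name__, "with", len(state), "entries")
+</pre>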
+
+### VQA:
+1. Download VQA v2 dataset and Visual Genome dataset from the original websites, and set 'vqa_root' and 'vg_root' in configs/vqa.yaml.
+2. To evaluate the finetuned BLIP model, generate results with (evaluation needs to be performed on the official server; a result-format check follows this section):
+<pre>python -m torch.distributed.run --nproc_per_node=8 train_vqa.py --evaluate</pre>
+3. To finetune the pre-trained checkpoint using 16 A100 GPUs, first set 'pretrained' in configs/vqa.yaml as "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth". Then run:
+<pre>python -m torch.distributed.run --nproc_per_node=16 train_vqa.py</pre>
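+
+Since scoring happens on the official VQA server, it can help to sanity-check the generated result file before uploading. A minimal sketch; the result path is a placeholder, and the expected format (a JSON list of question_id/answer pairs) is stated as an assumption rather than taken from this repo:
+<pre>
+# Hypothetical sanity check, not part of the repo: verify the result file is a JSON list of
+# {"question_id": ..., "answer": ...} entries, the format the VQA server expects.
+import json
+
+result_path = "output/vqa/result/vqa_result.json"  # placeholder: adjust to your output_dir
+with open(result_path) as f:
+    results = json.load(f)
+
+assert isinstance(results, list) and results, "expected a non-empty JSON list"
+assert all({"question_id", "answer"} <= set(r) for r in results), "missing required keys"
+print(len(results), "answers, e.g.", results[0])
+</pre>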
+
+### NLVR2:
+1. Download NLVR2 dataset from the original website, and set 'image_root' in configs/nlvr.yaml.
+2. To evaluate the finetuned BLIP model, run:
+<pre>python -m torch.distributed.run --nproc_per_node=8 train_nlvr.py --evaluate</pre>
+3. To finetune the pre-trained checkpoint using 16 A100 GPUs, first set 'pretrained' in configs/nlvr.yaml as "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth". Then run (a GPU-count check follows this section):
+<pre>python -m torch.distributed.run --nproc_per_node=16 train_nlvr.py</pre>
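+
+The VQA and NLVR2 recipes above assume 16 GPUs. A small hypothetical pre-flight check (not part of the repo) for how many devices are actually visible before choosing --nproc_per_node:
+<pre>
+# Hypothetical pre-flight check, not part of the repo: count visible CUDA devices. With fewer
+# than 16 GPUs, lower --nproc_per_node accordingly; the per-GPU batch size in the config may
+# also need adjusting to keep the effective batch size comparable.
+import torch
+
+print(torch.cuda.device_count(), "CUDA device(s) visible")
+</pre>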
+
+### Citation
+If you find this code to be useful for your research, please consider citing:
+<pre>
+@inproceedings{li2022blip,
+      title={BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
+      author={Junnan Li and Dongxu Li and Caiming Xiong and Steven Hoi},
+      year={2022},
+      booktitle={ICML},
+}
+</pre>