BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
This is the PyTorch implementation of the BLIP paper. The code has been tested on PyTorch 1.9 and 1.10. To install the dependencies, run
pip install -r requirements.txt
Catalog:
- Inference demo
- Pre-trained and finetuned checkpoints
- Finetuning code for Image-Text Retrieval, Image Captioning, VQA, and NLVR2
- Pre-training code
- Download of bootstrapped pre-training datasets
Inference demo (Image Captioning and VQA):
Run our interactive demo using the Colab notebook (no GPU needed).
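If you prefer to run captioning locally, the sketch below follows the usual pattern from demo.ipynb, run from the repository root with the requirements installed. The image path demo.jpg is a placeholder, and the pre-trained model_base.pth checkpoint is used only for illustration; any captioning checkpoint from the tables below can be substituted for better captions. Treat this as a sketch rather than the canonical demo code.

```python
# Minimal local captioning sketch (run from the repository root).
# Assumptions: 'demo.jpg' is any local image; model_base.pth is the 129M
# pre-trained ViT-B checkpoint listed below -- a COCO-finetuned captioning
# checkpoint will give better captions.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.transforms.functional import InterpolationMode

from models.blip import blip_decoder

device = 'cuda' if torch.cuda.is_available() else 'cpu'
image_size = 384

transform = transforms.Compose([
    transforms.Resize((image_size, image_size), interpolation=InterpolationMode.BICUBIC),
    transforms.ToTensor(),
    transforms.Normalize((0.48145466, 0.4578275, 0.40821073),
                         (0.26862954, 0.26130258, 0.27577711)),
])
image = transform(Image.open('demo.jpg').convert('RGB')).unsqueeze(0).to(device)

model = blip_decoder(
    pretrained='https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth',
    image_size=image_size, vit='base')
model.eval().to(device)

with torch.no_grad():
    caption = model.generate(image, sample=False, num_beams=3, max_length=20, min_length=5)
print('caption:', caption[0])
```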
Pre-trained checkpoints:
| Num. pre-train images | BLIP w/ ViT-B | BLIP w/ ViT-B and CapFilt-L | BLIP w/ ViT-L |
|---|---|---|---|
| 14M | Download | - | - |
| 129M | Download | Download | Download |
Finetuned checkpoints:
| Task | BLIP w/ ViT-B | BLIP w/ ViT-B and CapFilt-L | BLIP w/ ViT-L |
|---|---|---|---|
| Image-Text Retrieval (COCO) | Download | - | Download |
| Image-Text Retrieval (Flickr30k) | Download | - | Download |
| Image Captioning (COCO) | - | Download | Download |
| VQA | Download | Download | - |
| NLVR2 | Download | - | - |
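The checkpoints are ordinary PyTorch files, so a quick sanity check after downloading is to load one and look at its contents. The sketch below assumes the checkpoint stores its weights under a 'model' key, which is what the loaders in models/blip.py expect, and falls back to the top level otherwise.

```python
# Fetch a checkpoint and list a few of its parameter tensors.
import torch

url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth'
checkpoint = torch.hub.load_state_dict_from_url(url, map_location='cpu', check_hash=False)

# The repo's load_checkpoint() reads the weights from checkpoint['model'];
# fall back to the top level in case a file is a bare state dict.
state_dict = checkpoint.get('model', checkpoint)
print(len(state_dict), 'tensors')
for name in list(state_dict)[:5]:
    print(f'{name:60s} {tuple(state_dict[name].shape)}')
```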
Image-Text Retrieval:
- Download COCO and Flickr30k datasets from the original websites, and set 'image_root' in configs/retrieval_{dataset}.yaml accordingly.
- To evaluate the finetuned BLIP model on COCO, run:
python -m torch.distributed.run --nproc_per_node=8 train_retrieval.py \
  --config ./configs/retrieval_coco.yaml \
  --output_dir output/retrieval_coco \
  --evaluate
- To finetune the pre-trained checkpoint using 8 A100 GPUs, first set 'pretrained' in configs/retrieval_coco.yaml as "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth". Then run:
python -m torch.distributed.run --nproc_per_node=8 train_retrieval.py \
  --config ./configs/retrieval_coco.yaml \
  --output_dir output/retrieval_coco
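Both steps above boil down to editing a couple of keys in the YAML config. The sketch below does this programmatically, assuming PyYAML is available; editing configs/retrieval_coco.yaml by hand works just as well, and re-dumping the file this way discards any comments. The image path is a placeholder.

```python
# Point configs/retrieval_coco.yaml at your data and the pre-trained weights
# before launching the commands above.
import yaml

cfg_path = 'configs/retrieval_coco.yaml'
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

cfg['image_root'] = '/path/to/coco/images/'   # placeholder: your local COCO image folder
cfg['pretrained'] = ('https://storage.googleapis.com/'
                     'sfr-vision-language-research/BLIP/models/model_base.pth')

with open(cfg_path, 'w') as f:
    yaml.safe_dump(cfg, f, sort_keys=False)   # note: comments in the file are not preserved
```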
Image-Text Captioning:
- Download COCO and NoCaps datasets from the original websites, and set 'image_root' in configs/caption_coco.yaml and configs/nocaps.yaml accordingly.
- To evaluate the finetuned BLIP model on COCO, run:
python -m torch.distributed.run --nproc_per_node=8 train_caption.py --evaluate
- To evaluate the finetuned BLIP model on NoCaps, generate the results with the following command (evaluation needs to be performed on the official server):
python -m torch.distributed.run --nproc_per_node=8 eval_nocaps.py
- To finetune the pre-trained checkpoint using 8 A100 GPUs, first set 'pretrained' in configs/caption_coco.yaml as "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_capfilt_large.pth". Then run:
python -m torch.distributed.run --nproc_per_node=8 train_caption.py
VQA:
- Download VQA v2 dataset and Visual Genome dataset from the original websites, and set 'vqa_root' and 'vg_root' in configs/vqa.yaml.
- To evaluate the finetuned BLIP model, generate the results with the following command (evaluation needs to be performed on the official server):
python -m torch.distributed.run --nproc_per_node=8 train_vqa.py --evaluate
- To finetune the pre-trained checkpoint using 16 A100 GPUs, first set 'pretrained' in configs/vqa.yaml as "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_capfilt_large.pth". Then run:
python -m torch.distributed.run --nproc_per_node=16 train_vqa.py
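For quick local VQA inference outside the training scripts, the sketch below follows the pattern used in demo.ipynb, run from the repository root. The checkpoint path vqa_checkpoint.pth, the image demo.jpg, and the question are placeholders; the 480x480 input size mirrors configs/vqa.yaml.

```python
# Minimal VQA inference sketch (run from the repository root).
# 'vqa_checkpoint.pth' stands for a downloaded VQA checkpoint from the table above.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.transforms.functional import InterpolationMode

from models.blip_vqa import blip_vqa

device = 'cuda' if torch.cuda.is_available() else 'cpu'
image_size = 480

transform = transforms.Compose([
    transforms.Resize((image_size, image_size), interpolation=InterpolationMode.BICUBIC),
    transforms.ToTensor(),
    transforms.Normalize((0.48145466, 0.4578275, 0.40821073),
                         (0.26862954, 0.26130258, 0.27577711)),
])
image = transform(Image.open('demo.jpg').convert('RGB')).unsqueeze(0).to(device)

model = blip_vqa(pretrained='vqa_checkpoint.pth', image_size=image_size, vit='base')
model.eval().to(device)

with torch.no_grad():
    answer = model(image, 'where is the dog?', train=False, inference='generate')
print('answer:', answer[0])
```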
NLVR2:
- Download the NLVR2 dataset from the original website, and set 'image_root' in configs/nlvr.yaml.
- To evaluate the finetuned BLIP model, run:
python -m torch.distributed.run --nproc_per_node=8 train_nlvr.py --evaluate
- To finetune the pre-trained checkpoint using 16 A100 GPUs, first set 'pretrained' in configs/nlvr.yaml as "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth". Then run:
python -m torch.distributed.run --nproc_per_node=16 train_nlvr.py
Pre-train:
- Prepare training json files where each json file contains a list. Each item in the list is a dictionary with two key-value pairs: {'image': path_of_image, 'caption': text_of_image}. A minimal example of writing such a file is sketched after this section.
- In configs/pretrain.yaml, set 'train_file' to the paths of the json files.
- Pre-train the model using 8 A100 GPUs:
python -m torch.distributed.run --nproc_per_node=8 pretrain.py --config ./configs/pretrain.yaml --output_dir output/Pretrain
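The sketch below writes a toy annotation file in the format described above: a JSON list of {'image', 'caption'} dictionaries. The output file name and the two sample entries are placeholders for your own data.

```python
# Write a pre-training annotation file: a JSON list of {'image', 'caption'} entries.
import json

samples = [
    {"image": "/data/images/0001.jpg", "caption": "a dog running on the beach"},
    {"image": "/data/images/0002.jpg", "caption": "two people riding bicycles at sunset"},
]

with open("my_pretrain_data.json", "w") as f:
    json.dump(samples, f)

# configs/pretrain.yaml can then list this file (together with any others)
# under 'train_file'.
```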
Pre-training datasets download:
We provide bootstrapped pre-training datasets as json files. Each json file contains a list. Each item in the list is a dictionary with two key-value pairs: {'url': url_of_image, 'caption': text_of_image}.
| Image source | Filtered web caption | Filtered synthetic caption | Filtered synthetic caption by ViT-L |
|---|---|---|---|
| CC3M+CC12M+SBU | Download | Download | Download |
| LAION115M | Download | Download | Download |
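The downloaded files reference images by URL, so the images still need to be fetched before pre-training. The sketch below converts one such file into the {'image': ..., 'caption': ...} format described in the pre-training section; the file names are placeholders, the requests package is an extra dependency, and for the full corpus (over a hundred million images) a parallel downloader is far more practical than this serial loop.

```python
# Download the images referenced by a bootstrapped annotation file and rewrite
# the entries into the local {'image', 'caption'} format used for pre-training.
import json
import os
import requests

src = "ccs_filtered.json"        # placeholder name for one downloaded annotation file
out_dir = "downloaded_images"
os.makedirs(out_dir, exist_ok=True)

with open(src) as f:
    entries = json.load(f)       # list of {'url': ..., 'caption': ...}

converted = []
for i, item in enumerate(entries):
    path = os.path.join(out_dir, f"{i:08d}.jpg")
    try:
        resp = requests.get(item["url"], timeout=10)
        resp.raise_for_status()
        with open(path, "wb") as img:
            img.write(resp.content)
    except Exception:
        continue                 # skip dead links; web images disappear over time
    converted.append({"image": path, "caption": item["caption"]})

with open("ccs_filtered_local.json", "w") as f:
    json.dump(converted, f)
```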
Citation
If you find this code to be useful for your research, please consider citing.
@inproceedings{li2022blip,
title={BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
author={Junnan Li and Dongxu Li and Caiming Xiong and Steven Hoi},
year={2022},
booktitle={ICML},
}