## BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
This is the PyTorch implementation of the <a href="https://arxiv.org/abs/2201.12086">BLIP paper</a>. The code has been tested on PyTorch 1.9 and 1.10.
To install the dependencies, run <pre>pip install -r requirements.txt</pre>
Catalog:
- [ ] Inference demo
- [x] Pre-trained and finetuned checkpoints
- [x] Finetuning code for Image-Text Retrieval, Image Captioning, VQA, and NLVR2
- [x] Pre-training code
- [x] Download of bootstrapped image-text datasets

### Inference demo (Image Captioning and VQA):

Run our interactive demo using the Colab notebook (no GPU needed):
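
If you would rather script the demo than use Colab, the sketch below shows one way to caption an image. It is a minimal sketch, assuming the repo's blip_decoder helper in models/blip.py and the preprocessing used by the demo notebook; 'example.jpg' is a placeholder path, and the checkpoint URL is the ViT-L captioning checkpoint from the table below.

<pre>
# minimal captioning sketch -- run from the repo root so models/blip.py is importable
import torch
from PIL import Image
from torchvision import transforms
from torchvision.transforms.functional import InterpolationMode

from models.blip import blip_decoder

image_size = 384
transform = transforms.Compose([
    transforms.Resize((image_size, image_size), interpolation=InterpolationMode.BICUBIC),
    transforms.ToTensor(),
    # normalization constants used by the demo notebook
    transforms.Normalize((0.48145466, 0.4578275, 0.40821073),
                         (0.26862954, 0.26130258, 0.27577711)),
])

# ViT-L captioning checkpoint from the "Finetuned checkpoints" table
checkpoint = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_large_caption.pth'
model = blip_decoder(pretrained=checkpoint, image_size=image_size, vit='large')
model.eval()

# 'example.jpg' is a placeholder; any RGB image will do
image = transform(Image.open('example.jpg').convert('RGB')).unsqueeze(0)
with torch.no_grad():
    caption = model.generate(image, sample=False, num_beams=3, max_length=20, min_length=5)
print(caption[0])
</pre>
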
### Pre-trained checkpoints:

Num. pre-train images | BLIP w/ ViT-B | BLIP w/ ViT-B and CapFilt-L | BLIP w/ ViT-L
--- | :---: | :---: | :---:
14M | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_14M.pth">Download</a> | - | -
129M | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth">Download</a> | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_capfilt_large.pth">Download</a> | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_large.pth">Download</a>

### Finetuned checkpoints:

Task | BLIP w/ ViT-B | BLIP w/ ViT-B and CapFilt-L | BLIP w/ ViT-L
--- | :---: | :---: | :---:
Image-Text Retrieval (COCO) | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_retrieval_coco.pth">Download</a> | - | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_large_retrieval_coco.pth">Download</a>
Image-Text Retrieval (Flickr30k) | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_retrieval_flickr.pth">Download</a> | - | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_large_retrieval_flickr.pth">Download</a>
Image Captioning (COCO) | - | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_caption_capfilt_large.pth">Download</a> | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_large_caption.pth">Download</a>
VQA | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_vqa.pth">Download</a> | - | -
NLVR2 | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_nlvr.pth">Download</a> | - | -
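
Each row links directly to a .pth file, so a checkpoint can also be fetched ahead of time, for example (using one of the links above):

<pre>wget https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_retrieval_coco.pth</pre>
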
### Image-Text Retrieval:
1. Download the COCO and Flickr30k datasets from the original websites, and set 'image_root' in configs/retrieval_{dataset}.yaml accordingly (a sketch of the relevant config fields appears at the end of this section).
2. To evaluate the finetuned BLIP model on COCO, run:
<pre>python -m torch.distributed.run --nproc_per_node=8 --use_env train_retrieval.py \
--config ./configs/retrieval_coco.yaml \
--output_dir output/retrieval_coco \
--evaluate</pre>
3. To finetune the pre-trained checkpoint using 8 A100 GPUs, first set 'pretrained' in configs/retrieval_coco.yaml to "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth". Then run:
<pre>python -m torch.distributed.run --nproc_per_node=8 --use_env train_retrieval.py \
--config ./configs/retrieval_coco.yaml \
--output_dir output/retrieval_coco</pre>
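
For reference, the config edits from steps 1 and 3 might look like the excerpt below. The field names are taken from the steps above, the image path is a placeholder, and all other keys in configs/retrieval_coco.yaml are left untouched.

<pre>
# configs/retrieval_coco.yaml (excerpt)
image_root: '/path/to/coco/images/'   # step 1: where the COCO images were downloaded
pretrained: 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth'   # step 3
</pre>
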
### Image-Text Captioning:
1. Download COCO and NoCaps datasets from the original websites, and set 'image_root' in configs/caption_coco.yaml and configs/nocaps.yaml accordingly.
2. To evaluate the finetuned BLIP model on COCO, run:
<pre>python -m torch.distributed.run --nproc_per_node=8 --use_env train_caption.py --evaluate</pre>
3. To evaluate the finetuned BLIP model on NoCaps, generate results with:
<pre>python -m torch.distributed.run --nproc_per_node=8 --use_env eval_nocaps.py </pre>
4. To finetune the pre-trained checkpoint using 8 A100 GPUs, first set 'pretrained' in configs/caption_coco.yaml to "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_capfilt_large.pth". Then run:
<pre>python -m torch.distributed.run --nproc_per_node=8 --use_env train_caption.py </pre>
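
The commands above assume 8 GPUs. The same entry points can be launched on fewer GPUs by lowering --nproc_per_node; note that the batch size in the config is applied per process, so the effective batch size shrinks accordingly. For example, on 2 GPUs:

<pre>python -m torch.distributed.run --nproc_per_node=2 --use_env train_caption.py</pre>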