Update README.md
Task | BLIP w/ ViT-B | BLIP w/ ViT-B and CapFilt-L | BLIP w/ ViT-L
--- | :---: | :---: | :---:
Image-Text Retrieval (COCO) | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_retrieval_coco.pth">Download</a> | - | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_large_retrieval_coco.pth">Download</a>
Image-Text Retrieval (Flickr30k) | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_retrieval_flickr.pth">Download</a> | - | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_large_retrieval_flickr.pth">Download</a>
Image Captioning (COCO) | - | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model*_base_caption.pth">Download</a> | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_large_caption.pth">Download</a>
VQA | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_vqa.pth">Download</a> | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model*_vqa.pth">Download</a> | -
NLVR2 | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_nlvr.pth">Download</a> | - | -
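Each "Download" link points directly at a checkpoint file, so the models can also be fetched from the command line; for example, the ViT-B COCO retrieval checkpoint from the table above:

<pre>wget https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_retrieval_coco.pth</pre>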
### Image Captioning:
1. Download the COCO and NoCaps datasets from the original websites, and set 'image_root' in configs/caption_coco.yaml and configs/nocaps.yaml accordingly (see the config sketch at the end of this section).
2. To evaluate the finetuned BLIP model on COCO, run:
<pre>python -m torch.distributed.run --nproc_per_node=8 --use_env train_caption.py --evaluate</pre>
3. To evaluate the finetuned BLIP model on NoCaps, generate results with (evaluation needs to be performed on the official server):
<pre>python -m torch.distributed.run --nproc_per_node=8 --use_env eval_nocaps.py </pre>
4. To finetune the pre-trained checkpoint using 8 A100 GPUs, first set 'pretrained' in configs/caption_coco.yaml as "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model*_base.pth". Then run:
<pre>python -m torch.distributed.run --nproc_per_node=8 --use_env train_caption.py </pre>
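For reference, the edits from steps 1 and 4 end up looking roughly like the sketch below. The key names ('image_root', 'pretrained') come from the steps above; the flat key layout is an assumption, and the dataset paths are placeholders to replace with your own:

<pre>
# configs/caption_coco.yaml -- placeholder paths, adjust to your setup
image_root: '/path/to/coco/images/'
pretrained: 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model*_base.pth'

# configs/nocaps.yaml
image_root: '/path/to/nocaps/images/'
</pre>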
### VQA:
1. Download the VQA v2 and Visual Genome datasets from the original websites, and set 'vqa_root' and 'vg_root' in configs/vqa.yaml (see the config sketch at the end of this section).
2. To evaluate the finetuned BLIP model, generate results with (evaluation needs to be performed on the official server):
<pre>python -m torch.distributed.run --nproc_per_node=8 --use_env train_vqa.py --evaluate</pre>
3. To finetune the pre-trained checkpoint using 16 A100 GPUs, first set 'pretrained' in configs/vqa.yaml as "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model*_base.pth". Then run:
<pre>python -m torch.distributed.run --nproc_per_node=16 --use_env train_vqa.py </pre>
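The VQA configs follow the same pattern (same assumptions as above; paths are placeholders):

<pre>
# configs/vqa.yaml -- placeholder paths, adjust to your setup
vqa_root: '/path/to/vqa/'
vg_root: '/path/to/visual_genome/'
pretrained: 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model*_base.pth'
</pre>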
### NLVR2:
1. Download the NLVR2 dataset from the original website, and set 'image_root' in configs/nlvr.yaml (see the config sketch at the end of this section).
2. To evaluate the finetuned BLIP model, run:
<pre>python -m torch.distributed.run --nproc_per_node=8 --use_env train_nlvr.py --evaluate</pre>
3. To finetune the pre-trained checkpoint using 16 A100 GPUs, first set 'pretrained' in configs/nlvr.yaml as "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth". Then run:
<pre>python -m torch.distributed.run --nproc_per_node=16 --use_env train_nlvr.py </pre>
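And likewise for NLVR2 (same assumptions; the path is a placeholder, and step 3 uses model_base.pth here):

<pre>
# configs/nlvr.yaml -- placeholder path, adjust to your setup
image_root: '/path/to/nlvr2/images/'
pretrained: 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth'
</pre>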