Update README.md

Junnan Li (committed via GitHub), 2022-01-27 21:31:05 +08:00
parent 4bf27877af, commit 16633402c2

### Image-Text Retrieval:
1. Download the COCO and Flickr30k datasets from the original websites, and set 'image_root' in configs/retrieval_{dataset}.yaml accordingly (see the config sketch after this list).
2. To evaluate the finetuned BLIP model on COCO, run:
<pre>python -m torch.distributed.run --nproc_per_node=8 train_retrieval.py \
--config ./configs/retrieval_coco.yaml \
--output_dir output/retrieval_coco \
--evaluate</pre>
3. To finetune the pre-trained checkpoint using 8 A100 GPUs, first set 'pretrained' in configs/retrieval_coco.yaml to "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth". Then run:
<pre>python -m torch.distributed.run --nproc_per_node=8 train_retrieval.py \
--config ./configs/retrieval_coco.yaml \
--output_dir output/retrieval_coco</pre>
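
For reference, a minimal sketch of the two keys edited in steps 1 and 3 of configs/retrieval_coco.yaml. The 'pretrained' URL is the one given above; the 'image_root' value is an illustrative placeholder and should point at wherever you extracted the COCO images:
<pre># configs/retrieval_coco.yaml (excerpt; image_root value is illustrative)
image_root: '/path/to/coco/images/'   # assumed local COCO image directory
pretrained: 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth'</pre>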
### Image-Text Captioning:
1. Download the COCO and NoCaps datasets from the original websites, and set 'image_root' in configs/caption_coco.yaml and configs/nocaps.yaml accordingly (see the config sketch after this list).
2. To evaluate the finetuned BLIP model on COCO, run:
<pre>python -m torch.distributed.run --nproc_per_node=8 train_caption.py --evaluate</pre>
3. To evaluate the finetuned BLIP model on NoCaps, generate results with:
<pre>python -m torch.distributed.run --nproc_per_node=8 eval_nocaps.py</pre>
4. To finetune the pre-trained checkpoint using 8 A100 GPUs, first set 'pretrained' in configs/caption_coco.yaml to "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth". Then run:
<pre>python -m torch.distributed.run --nproc_per_node=8 train_caption.py</pre>
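
Analogously, a sketch of the keys referenced in steps 1 and 4 above, assuming the captioning configs use the same key names as the retrieval config; both image paths are illustrative placeholders to be replaced with your local dataset locations:
<pre># configs/caption_coco.yaml (excerpt; image_root value is illustrative)
image_root: '/path/to/coco/images/'
pretrained: 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth'

# configs/nocaps.yaml (excerpt; image_root value is illustrative)
image_root: '/path/to/nocaps/images/'</pre>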