From 16633402c211f32e3bd03bbd4d182446901bf75d Mon Sep 17 00:00:00 2001
From: Junnan Li
Date: Thu, 27 Jan 2022 21:31:05 +0800
Subject: [PATCH] Update README.md

---
 README.md | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 9e316cd..7650a23 100644
--- a/README.md
+++ b/README.md
@@ -31,13 +31,23 @@ NLVR2 |
 python -m torch.distributed.run --nproc_per_node=8 --use_env train_retrieval.py \
 --config ./configs/retrieval_coco.yaml \
 --output_dir output/retrieval_coco
+
+### Image-Text Captioning:
+1. Download COCO and NoCaps datasets from the original websites, and set 'image_root' in configs/caption_coco.yaml and configs/nocaps.yaml accordingly.
+2. To evaluate the finetuned BLIP model on COCO, run:
+<pre>python -m torch.distributed.run --nproc_per_node=8 --use_env train_caption.py --evaluate</pre>
+3. To evaluate the finetuned BLIP model on NoCaps, generate results with:
+<pre>python -m torch.distributed.run --nproc_per_node=8 --use_env eval_nocaps.py</pre>
+4. To finetune the pre-trained checkpoint using 8 A100 GPUs, first set 'pretrained' in configs/caption_coco.yaml as "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model*_base.pth". Then run:
+<pre>python -m torch.distributed.run --nproc_per_node=8 --use_env train_caption.py</pre>
+
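
For readers applying this patch, steps 1-3 of the new captioning section can be strung together as below. This is a minimal illustrative sketch, not part of the patch: it assumes the commands are run from the repository root on an 8-GPU machine, and the dataset paths in the comments are placeholders; only the 'image_root' config key and the two commands are taken from the patch itself.

<pre>
# Illustrative sketch only; paths are assumed placeholders, not values from the patch.
# 1. After downloading COCO and NoCaps, point the configs at the image folders:
#      configs/caption_coco.yaml :  image_root: '/path/to/coco/'
#      configs/nocaps.yaml       :  image_root: '/path/to/nocaps/'

# 2. Evaluate the finetuned BLIP captioning model on COCO (8 GPUs):
python -m torch.distributed.run --nproc_per_node=8 --use_env train_caption.py --evaluate

# 3. Generate caption results on NoCaps:
python -m torch.distributed.run --nproc_per_node=8 --use_env eval_nocaps.py
</pre>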
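Step 4 (finetuning) can be sketched the same way. Again this is illustrative rather than part of the patch; it assumes the only config edit needed is the 'pretrained' field named above, and the '*' in the checkpoint URL is kept exactly as written in the patch rather than resolved to a concrete filename.

<pre>
# Illustrative sketch only. In configs/caption_coco.yaml, set the pre-trained checkpoint
# (URL copied verbatim from the patch; the '*' is a placeholder in the original text):
#   pretrained: 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model*_base.pth'

# 4. Finetune the captioning model on 8 A100 GPUs:
python -m torch.distributed.run --nproc_per_node=8 --use_env train_caption.py
</pre>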