From 70946888f7eee777f39010ee03808bbfbe60b545 Mon Sep 17 00:00:00 2001
From: Junnan Li
Date: Fri, 28 Jan 2022 16:34:20 +0800
Subject: [PATCH] Update README.md

---
 README.md | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/README.md b/README.md
index 9318497..46f4d60 100644
--- a/README.md
+++ b/README.md
@@ -35,43 +35,43 @@ NLVR2 |
-python -m torch.distributed.run --nproc_per_node=8 --use_env train_retrieval.py \
+python -m torch.distributed.run --nproc_per_node=8 train_retrieval.py \
 --config ./configs/retrieval_coco.yaml \
 --output_dir output/retrieval_coco
 ### Image-Text Captioning:
 1. Download COCO and NoCaps datasets from the original websites, and set 'image_root' in configs/caption_coco.yaml and configs/nocaps.yaml accordingly.
 2. To evaluate the finetuned BLIP model on COCO, run:
-python -m torch.distributed.run --nproc_per_node=8 --use_env train_caption.py --evaluate
+python -m torch.distributed.run --nproc_per_node=8 train_caption.py --evaluate
 3. To evaluate the finetuned BLIP model on NoCaps, generate results with (evaluation needs to be performed on the official server):
-python -m torch.distributed.run --nproc_per_node=8 --use_env eval_nocaps.py
+python -m torch.distributed.run --nproc_per_node=8 eval_nocaps.py
 4. To finetune the pre-trained checkpoint using 8 A100 GPUs, first set 'pretrained' in configs/caption_coco.yaml as "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model*_base.pth". Then run:
-python -m torch.distributed.run --nproc_per_node=8 --use_env train_caption.py
+python -m torch.distributed.run --nproc_per_node=8 train_caption.py
 ### VQA:
 1. Download the VQA v2 and Visual Genome datasets from the original websites, and set 'vqa_root' and 'vg_root' in configs/vqa.yaml.
 2. To evaluate the finetuned BLIP model, generate results with (evaluation needs to be performed on the official server):
-python -m torch.distributed.run --nproc_per_node=8 --use_env train_vqa.py --evaluate
+python -m torch.distributed.run --nproc_per_node=8 train_vqa.py --evaluate
 3. To finetune the pre-trained checkpoint using 16 A100 GPUs, first set 'pretrained' in configs/vqa.yaml as "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model*_base.pth". Then run:
-python -m torch.distributed.run --nproc_per_node=16 --use_env train_vqa.py
+python -m torch.distributed.run --nproc_per_node=16 train_vqa.py
 ### NLVR2:
 1. Download the NLVR2 dataset from the original website, and set 'image_root' in configs/nlvr.yaml.
 2. To evaluate the finetuned BLIP model, run:
-python -m torch.distributed.run --nproc_per_node=8 --use_env train_nlvr.py --evaluate
+python -m torch.distributed.run --nproc_per_node=8 train_nlvr.py --evaluate
 3. To finetune the pre-trained checkpoint using 16 A100 GPUs, first set 'pretrained' in configs/nlvr.yaml as "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth". Then run:
-python -m torch.distributed.run --nproc_per_node=16 --use_env train_nlvr.py
+python -m torch.distributed.run --nproc_per_node=16 train_nlvr.py
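
For the checkpoint step above, a minimal sketch of pointing 'pretrained' at the checkpoint URL programmatically, assuming configs/nlvr.yaml is flat YAML with a top-level 'pretrained' key and that PyYAML is installed; editing the file by hand works just as well.
<pre>
# Minimal sketch: set the 'pretrained' key of a BLIP config to a checkpoint URL.
# Assumes a flat YAML config with a top-level 'pretrained' key and PyYAML installed.
# Note: round-tripping with safe_dump drops any comments from the original file.
import yaml

config_path = "configs/nlvr.yaml"
checkpoint_url = "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth"

with open(config_path) as f:
    cfg = yaml.safe_load(f)

cfg["pretrained"] = checkpoint_url

with open(config_path, "w") as f:
    yaml.safe_dump(cfg, f, sort_keys=False)
</pre>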
 ### Pre-train:
 1. Prepare training json files, where each json file contains a list. Each item in the list is a dictionary with two key-value pairs: {'image': path_of_image, 'caption': text_of_image}.
 2. In configs/pretrain.yaml, set 'train_file' to the paths of the json files.
 3. Pre-train the model using 8 A100 GPUs:
-python -m torch.distributed.run --nproc_per_node=8 --use_env pretrain.py --config ./configs/Pretrain.yaml --output_dir output/Pretrain
+python -m torch.distributed.run --nproc_per_node=8 pretrain.py --config ./configs/Pretrain.yaml --output_dir output/Pretrain
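
As a concrete illustration of the training-file format in step 1 above, a minimal sketch that writes such a json file; the image paths, captions, and output filename are placeholders rather than anything shipped with BLIP.
<pre>
# Minimal sketch: build a pre-training json file in the expected format,
# i.e. a list of {'image': path_of_image, 'caption': text_of_image} entries.
# The paths, captions, and output filename are illustrative placeholders.
import json

samples = [
    {"image": "/data/images/0001.jpg", "caption": "a dog running on the beach"},
    {"image": "/data/images/0002.jpg", "caption": "two people riding bicycles"},
]

with open("my_pretrain_data.json", "w") as f:
    json.dump(samples, f)
</pre>
The resulting file(s) can then be listed under 'train_file' in configs/pretrain.yaml, as in step 2.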
 ### Pre-training datasets download:
 We provide bootstrapped pre-training datasets as json files. Each json file contains a list. Each item in the list is a dictionary with two key-value pairs: {'url': url_of_image, 'caption': text_of_image}.
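
A minimal sketch of converting one of these bootstrapped json files into the {'image', 'caption'} format used for pre-training by downloading the images, assuming the requests package is available; the filenames and output directory are placeholders, and there is no retry or caption-filtering logic.
<pre>
# Minimal sketch: download the images listed in a bootstrapped json file
# (a list of {'url': url_of_image, 'caption': text_of_image} entries) and
# write a {'image': local_path, 'caption': text} json usable as a 'train_file'.
# Filenames and the output directory are placeholders; error handling is minimal.
import json
import os

import requests

in_file = "bootstrapped_captions.json"  # placeholder name
out_dir = "downloaded_images"
os.makedirs(out_dir, exist_ok=True)

with open(in_file) as f:
    items = json.load(f)

converted = []
for i, item in enumerate(items):
    path = os.path.join(out_dir, f"{i:08d}.jpg")
    try:
        resp = requests.get(item["url"], timeout=10)
        resp.raise_for_status()
    except requests.RequestException:
        continue  # skip images that can no longer be fetched
    with open(path, "wb") as img:
        img.write(resp.content)
    converted.append({"image": path, "caption": item["caption"]})

with open("downloaded_pretrain_data.json", "w") as f:
    json.dump(converted, f)
</pre>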