From 0f43002a7e4f550de5c92057d9788d376c2dfeb6 Mon Sep 17 00:00:00 2001
From: Junnan Li
Date: Wed, 2 Feb 2022 20:08:43 +0800
Subject: [PATCH] Update README.md

---
 README.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/README.md b/README.md
index a3c05d1..47e3a22 100644
--- a/README.md
+++ b/README.md
@@ -70,6 +70,9 @@ NLVR2
 | python -m torch.distributed.run --nproc_per_node=16 train_nlvr.py
 
+### Finetune with ViT-L:
+To finetune a model with ViT-L, simply set 'vit' to large in the config file. Batch size and learning rate may also need to be adjusted accordingly (please see the paper's appendix for hyper-parameter details). Gradient checkpointing can also be activated in the config file to reduce GPU memory usage.
+
 ### Pre-train:
 1. Prepare training json files where each json file contains a list. Each item in the list is a dictionary with two key-value pairs: {'image': path_of_image, 'caption': text_of_image}.
 2. In configs/pretrain.yaml, set 'train_file' as the paths for the json files.
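
As context for the ViT-L note added by this patch, here is a minimal sketch of what the config change could look like. Only the 'vit' key is named in the README text; the config path and the 'batch_size', 'init_lr', and 'vit_grad_ckpt' keys below are assumptions, so check the yaml files under configs/ for the exact names used by the repo.

```python
# Sketch only, not part of the patch: switch a finetuning config to ViT-L.
import yaml

with open("configs/nlvr.yaml") as f:     # hypothetical config path
    config = yaml.safe_load(f)

config["vit"] = "large"          # named in the README: use the ViT-L backbone
config["batch_size"] = 16        # assumed key: ViT-L usually needs a smaller per-GPU batch
config["init_lr"] = 1e-5         # assumed key: lower learning rate for the larger backbone
config["vit_grad_ckpt"] = True   # assumed key: enable gradient checkpointing to save GPU memory

with open("configs/nlvr_vitL.yaml", "w") as f:
    yaml.safe_dump(config, f)
```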
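
Likewise, a minimal sketch of one pre-training json file in the format the Pre-train section describes: a list of dictionaries with 'image' and 'caption' keys. The file name and image paths are placeholders.

```python
# Sketch only, not part of the patch: write one pre-training annotation file.
import json

annotations = [
    {"image": "images/0001.jpg", "caption": "a dog running on the beach"},
    {"image": "images/0002.jpg", "caption": "two people riding bicycles at sunset"},
]

with open("pretrain_ann.json", "w") as f:
    json.dump(annotations, f)

# In configs/pretrain.yaml, 'train_file' would then list the paths to such json files.
```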