From d57784d35abbeef4fdab86b10c1383d1d5812bfc Mon Sep 17 00:00:00 2001
From: Junnan Li
Date: Thu, 27 Jan 2022 20:49:49 +0800
Subject: [PATCH] Update README.md

---
 README.md | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 4ac7aec..bd2dac1 100644
--- a/README.md
+++ b/README.md
@@ -1 +1,14 @@
-# BLIP
+## BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
+
+This is the PyTorch implementation of the BLIP paper.
+
+Catalog:
+- [x] Inference demo
+- [x] Pre-trained and finetuned checkpoints
+- [x] Pre-training code
+- [x] Finetuning code for Image-Text Retrieval, Image Captioning, VQA, and NLVR2
+- [x] Download of bootstrapped image-text dataset
+
+
+### Inference demo (Image Captioning and VQA):
+Run our interactive demo using Colab notebook (no GPU needed):
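
For readers who want a script-style view of what the Colab captioning demo does, the sketch below shows image captioning inference with a finetuned BLIP checkpoint. It is a minimal illustration, not the notebook itself: the `models/blip.py` module path, the `blip_decoder` factory, the checkpoint path, the 384-pixel input size, and the normalization constants are assumptions about this repo's layout and may differ between releases.

```python
# Hedged sketch of BLIP image captioning inference (assumptions noted below).
import torch
from PIL import Image
from torchvision import transforms
from torchvision.transforms.functional import InterpolationMode

from models.blip import blip_decoder  # assumed module path inside this repo

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
image_size = 384  # assumed input resolution of the finetuned captioning model

# Preprocess an input image into the normalized tensor the model expects
# (mean/std values are assumed to match the repo's transform).
preprocess = transforms.Compose([
    transforms.Resize((image_size, image_size), interpolation=InterpolationMode.BICUBIC),
    transforms.ToTensor(),
    transforms.Normalize((0.48145466, 0.4578275, 0.40821073),
                         (0.26862954, 0.26130258, 0.27577711)),
])
image = preprocess(Image.open('demo.jpg').convert('RGB')).unsqueeze(0).to(device)

# Load a captioning checkpoint (path is a placeholder) and generate a caption
# with beam search.
model = blip_decoder(pretrained='path/to/model_base_caption.pth',
                     image_size=image_size, vit='base')
model.eval()
model = model.to(device)

with torch.no_grad():
    caption = model.generate(image, sample=False, num_beams=3,
                             max_length=20, min_length=5)
print('Caption:', caption[0])
```

The VQA demo follows the same pattern with a VQA checkpoint and an extra question string passed at generation time; the Colab notebook linked above remains the authoritative reference for the exact calls.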