Update README.md

# BLIP
## BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
This is the PyTorch implementation of the <a href="https://arxiv.org/abs/2201.12086">BLIP paper</a>.
Catalog:
- [x] Inference demo
- [x] Pre-trained and finetuned checkpoints
- [x] Pre-training code
- [x] Finetuning code for Image-Text Retrieval, Image Captioning, VQA, and NLVR2
- [x] Download of bootstrapped image-text dataset
### Inference demo (Image Captioning and VQA):
Run our interactive demo using the Colab notebook (no GPU needed):
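For readers who prefer to run captioning locally rather than in Colab, here is a minimal sketch modeled on the repository's demo notebook. The `models.blip.blip_decoder` entry point, the checkpoint filename, and the generation arguments are assumptions drawn from the demo code, not guarantees made by this README:

```python
import torch
from PIL import Image
from torchvision import transforms
from torchvision.transforms.functional import InterpolationMode

# Assumed import path, following the repository layout in the demo notebook.
from models.blip import blip_decoder

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
image_size = 384

# BLIP-style preprocessing: bicubic resize plus CLIP normalization statistics.
transform = transforms.Compose([
    transforms.Resize((image_size, image_size),
                      interpolation=InterpolationMode.BICUBIC),
    transforms.ToTensor(),
    transforms.Normalize((0.48145466, 0.4578275, 0.40821073),
                         (0.26862954, 0.26130258, 0.27577711)),
])

raw_image = Image.open('demo.jpg').convert('RGB')  # placeholder image path
image = transform(raw_image).unsqueeze(0).to(device)

# The checkpoint path below is a hypothetical local file; see the
# pre-trained and finetuned checkpoints listed in this README for downloads.
model = blip_decoder(pretrained='model_base_capfilt_large.pth',
                     image_size=image_size, vit='base')
model.eval().to(device)

with torch.no_grad():
    # Beam-search decoding; sample=True would switch to nucleus sampling.
    caption = model.generate(image, sample=False, num_beams=3,
                             max_length=20, min_length=5)
print('caption:', caption[0])
```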