From f770f02310b433b1f1b6cea0d30581f6eec69dc4 Mon Sep 17 00:00:00 2001
From: Lihe Yang
Date: Mon, 22 Jan 2024 17:45:09 +0800
Subject: [PATCH] Update README.md

---
 README.md | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 6297b28..76a783e 100644
--- a/README.md
+++ b/README.md
@@ -10,7 +10,6 @@
 Paper PDF
 Project Page
-
 This work presents Depth Anything, a highly practical solution for robust monocular depth estimation by training on a combination of 1.5M labeled images and **62M+ unlabeled images**.
@@ -107,7 +106,7 @@ You can also try our [online demo](https://huggingface.co/spaces/LiheYoung/Depth
 ### Import Depth Anything to your project
 
-If you want to use Depth Anything in our own project, you can simply follow [``run.py``](run.py) to load our models and define data pre-processing.
+If you want to use Depth Anything in your own project, you can simply follow [``run.py``](run.py) to load our models and define data pre-processing.
 Code snippet (note the difference between our data pre-processing and that of MiDaS.)
@@ -154,7 +153,7 @@ If you find this project useful, please consider citing:
 @article{depthanything,
   title={Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data},
   author={Yang, Lihe and Kang, Bingyi and Huang, Zilong and Xu, Xiaogang and Feng, Jiashi and Zhao, Hengshuang},
-  journal={arXiv:},
+  journal={arXiv:2401.10891},
  year={2024}
 }
 ```
\ No newline at end of file
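The README text being patched points readers to run.py for model loading and data pre-processing, and notes that the pre-processing differs from MiDaS's. The exact transform lives in run.py of the repository; as a rough, self-contained sketch of that kind of pipeline, the constants below (a 518-pixel target size, rounding each side to a multiple of 14 for the ViT patch grid, ImageNet mean/std normalization) are illustrative assumptions, not code taken from run.py:

```python
import numpy as np

# Illustrative constants -- assumptions for this sketch, not values read
# from the Depth Anything source.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)


def target_size(h: int, w: int, base: int = 518, multiple: int = 14) -> tuple:
    """Scale the shorter side toward `base`, then round each side to a
    multiple of `multiple` so the image tiles cleanly into ViT patches."""
    scale = base / min(h, w)

    def round_to(x: float) -> int:
        return max(multiple, int(round(x / multiple)) * multiple)

    return round_to(h * scale), round_to(w * scale)


def normalize(img: np.ndarray) -> np.ndarray:
    """Map a uint8 HWC image to a float32 CHW tensor with ImageNet
    mean/std normalization."""
    x = img.astype(np.float32) / 255.0
    x = (x - IMAGENET_MEAN) / IMAGENET_STD
    return x.transpose(2, 0, 1)  # HWC -> CHW, as the network expects


print(target_size(480, 640))  # both output dims are multiples of 14
```

A resize step (e.g. bicubic interpolation to the computed target size) would sit between the two functions in a real pipeline; it is omitted here to keep the sketch dependency-free.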