diff --git a/README.md b/README.md
index 6297b28..76a783e 100644
--- a/README.md
+++ b/README.md
@@ -10,7 +10,6 @@
-
This work presents Depth Anything, a highly practical solution for robust monocular depth estimation by training on a combination of 1.5M labeled images and **62M+ unlabeled images**.
@@ -107,7 +106,7 @@ You can also try our [online demo](https://huggingface.co/spaces/LiheYoung/Depth
### Import Depth Anything to your project
-If you want to use Depth Anything in our own project, you can simply follow [``run.py``](run.py) to load our models and define data pre-processing.
+If you want to use Depth Anything in your own project, you can simply follow [``run.py``](run.py) to load our models and define data pre-processing.
Code snippet (note the difference between our data pre-processing and that of MiDaS).
@@ -154,7 +153,7 @@ If you find this project useful, please consider citing:
@article{depthanything,
title={Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data},
author={Yang, Lihe and Kang, Bingyi and Huang, Zilong and Xu, Xiaogang and Feng, Jiashi and Zhao, Hengshuang},
- journal={arXiv:},
+ journal={arXiv:2401.10891},
year={2024}
}
```
\ No newline at end of file
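
The pre-processing note above can be sketched in code. This is a minimal illustration, not the repository's actual transform (that lives in `run.py`): the resize-to-a-multiple-of-14 target (matching the ViT patch size) and the ImageNet normalization constants are assumptions made here for the example.

```python
import numpy as np

# ImageNet statistics commonly used to normalize inputs for ViT backbones.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def nearest_multiple(x: int, k: int = 14) -> int:
    """Round x to the nearest multiple of k (at least k).

    Hypothetical helper: ViT-based encoders expect spatial sizes
    divisible by the patch size (14 here), so the input is typically
    resized to such a multiple before inference.
    """
    return max(k, round(x / k) * k)

def preprocess(image: np.ndarray) -> np.ndarray:
    """Normalize an HxWx3 uint8 image and convert to CHW layout.

    Sketch of the normalization step only; the real pipeline also
    resizes the image (see nearest_multiple above for the size rule).
    """
    img = image.astype(np.float32) / 255.0
    img = (img - IMAGENET_MEAN) / IMAGENET_STD
    return np.transpose(img, (2, 0, 1))  # HWC -> CHW for PyTorch-style models
```

The key point of the README's note is that the transform here is not interchangeable with MiDaS's: feeding a model with the wrong normalization or size rule silently degrades depth quality, so follow `run.py` rather than reusing a MiDaS pipeline.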