Mirror of https://github.com/huchenlei/Depth-Anything.git, synced 2026-01-26 15:29:46 +00:00
Initial commit
README.md
@@ -1,13 +1,13 @@
 <div align="center">
 <h2>Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data</h2>
-[**Lihe Yang**](https://liheyoung.github.io/)<sup>1</sup> · [**Bingyi Kang**](https://scholar.google.com/citations?user=NmHgX-wAAAAJ)<sup>2+</sup> · [**Zilong Huang**](http://speedinghzl.github.io/)<sup>2</sup> · [**Xiaogang Xu**](https://xiaogang00.github.io/)<sup>3,4</sup>, [**Jiashi Feng**](https://sites.google.com/site/jshfeng/)<sup>2</sup> · [**Hengshuang Zhao**](https://hszhao.github.io/)<sup>1+</sup>
+[**Lihe Yang**](https://liheyoung.github.io/)<sup>1</sup> · [**Bingyi Kang**](https://scholar.google.com/citations?user=NmHgX-wAAAAJ)<sup>2+</sup> · [**Zilong Huang**](http://speedinghzl.github.io/)<sup>2</sup> · [**Xiaogang Xu**](https://xiaogang00.github.io/)<sup>3,4</sup> · [**Jiashi Feng**](https://sites.google.com/site/jshfeng/)<sup>2</sup> · [**Hengshuang Zhao**](https://hszhao.github.io/)<sup>1+</sup>
 <sup>1</sup>The University of Hong Kong · <sup>2</sup>TikTok · <sup>3</sup>Zhejiang Lab · <sup>4</sup>Zhejiang University
 <sup>+</sup>corresponding authors
-<a href=""><img src='https://img.shields.io/badge/arXiv-Depth Anything-red' alt='Paper PDF'></a>
+<a href="https://arxiv.org/abs/2401.10891"><img src='https://img.shields.io/badge/arXiv-Depth Anything-red' alt='Paper PDF'></a>
 <a href='https://depth-anything.github.io'><img src='https://img.shields.io/badge/Project_Page-Depth Anything-green' alt='Project Page'></a>
 <a href='https://huggingface.co/spaces/LiheYoung/Depth-Anything'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue'></a>
@@ -46,7 +46,7 @@ This work presents Depth Anything, a highly practical solution for robust monocu
 Here we compare our Depth Anything with the previously best MiDaS v3.1 BEiT<sub>L-512</sub> model.
-Please note that the latest MiDaS is also trained on KITTI and NYUv2, while we do not.
+Please note that the latest MiDaS is also trained on KITTI and NYUv2, while we are not.
 | Method | Params | KITTI || NYUv2 || Sintel || DDAD || ETH3D || DIODE ||
 |-|-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
@@ -99,7 +99,7 @@ python run.py --encoder vitl --load-from checkpoints/depth_anything_vitl14.pth -
 If you want to use Depth Anything in your own project, you can simply follow [``run.py``](run.py) to load our models and define data pre-processing.
 <details>
-<summary>Code snippet (note the difference between our data pre-processing and that of MiDaS.)</summary>
+<summary>Code snippet (note the difference between our data pre-processing and that of MiDaS)</summary>
 ```python
 from depth_anything.dpt import DPT_DINOv2
@@ -142,7 +142,7 @@ If you find this project useful, please consider citing:
 @article{depthanything,
   title={Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data},
   author={Yang, Lihe and Kang, Bingyi and Huang, Zilong and Xu, Xiaogang and Feng, Jiashi and Zhao, Hengshuang},
-  journal={arXiv:},
+  journal={arXiv:2401.10891},
   year={2024},
 }
 ```
@@ -82,7 +82,7 @@ If you find this project useful, please consider citing:
 @article{depthanything,
   title={Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data},
   author={Yang, Lihe and Kang, Bingyi and Huang, Zilong and Xu, Xiaogang and Feng, Jiashi and Zhao, Hengshuang},
-  journal={arXiv:},
+  journal={arXiv:2401.10891},
   year={2024},
 }
 ```
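The code-snippet hunk above stops at the `DPT_DINOv2` import, so the rest of the snippet lies outside this diff. For context, here is a minimal sketch of how that import is commonly wired up, assuming the ViT-L configuration and the `checkpoints/depth_anything_vitl14.pth` file named in the `run.py` hunk; the constructor arguments, input size, and pre-processing below are assumptions, not part of this commit.

```python
# Sketch only: the import and checkpoint name come from the diff above;
# the constructor arguments and input handling are assumed, not shown in this commit.
import torch

from depth_anything.dpt import DPT_DINOv2

# Assumed ViT-L configuration; see run.py in the repository for the exact arguments.
model = DPT_DINOv2(encoder='vitl', features=256, out_channels=[256, 512, 1024, 1024])
model.load_state_dict(torch.load('checkpoints/depth_anything_vitl14.pth', map_location='cpu'))
model.eval()

# Dummy input: one RGB image scaled to [0, 1], with height and width that are
# multiples of 14 (the ViT-L/14 patch size the checkpoint name suggests).
image = torch.rand(1, 3, 518, 518)

with torch.no_grad():
    depth = model(image)  # relative depth prediction for the input image

print(depth.shape)
```

For the actual pre-processing pipeline (the subject of the `<summary>` wording changed in this commit), `run.py` in the repository remains the authoritative reference.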