Mirror of https://github.com/huchenlei/Depth-Anything.git (synced 2026-01-26 15:29:46 +00:00)

Update gallery and community support

README.md (14 lines changed)
@@ -21,7 +21,7 @@ This work presents Depth Anything, a highly practical solution for robust monocular depth estimation
 * **2024-02-05:** [Depth Anything Gallery](./gallery.md) is released. Thanks to all the users!
 * **2024-02-02:** Depth Anything serves as the default depth processor for [InstantID](https://github.com/InstantID/InstantID) and [InvokeAI](https://github.com/invoke-ai/InvokeAI/releases/tag/v3.6.1).
-* **2024-01-25:** Support [video depth visualization](./run_video.py).
+* **2024-01-25:** Support [video depth visualization](./run_video.py). An [online demo for video](https://huggingface.co/spaces/JohanDL/Depth-Anything-Video) is also available.
 * **2024-01-23:** The new ControlNet based on Depth Anything is integrated into [ControlNet WebUI](https://github.com/Mikubill/sd-webui-controlnet) and [ComfyUI's ControlNet](https://github.com/Fannovel16/comfyui_controlnet_aux).
 * **2024-01-23:** Depth Anything [ONNX](https://github.com/fabio-sim/Depth-Anything-ONNX) and [TensorRT](https://github.com/spacewalk01/depth-anything-tensorrt) versions are supported.
 * **2024-01-22:** Paper, project page, code, models, and demo ([HuggingFace](https://huggingface.co/spaces/LiheYoung/Depth-Anything), [OpenXLab](https://openxlab.org.cn/apps/detail/yyfan/depth_anything)) are released.
@@ -29,6 +29,8 @@ This work presents Depth Anything, a highly practical solution for robust monocular depth estimation

 ## Features of Depth Anything

+***If you need other features, please first check [existing community support](#community-support).***
+
 - **Relative depth estimation**:

     Our foundation models listed [here](https://huggingface.co/spaces/LiheYoung/Depth-Anything/tree/main/checkpoints) can robustly provide relative depth estimation for any given image. Please refer to [here](#running) for details.
@@ -188,7 +190,7 @@ depth = depth_anything(image)

 Easily use Depth Anything through [``transformers``](https://github.com/huggingface/transformers) within 3 lines of code! Please refer to [these instructions](https://huggingface.co/docs/transformers/main/model_doc/depth_anything) (credit to [@niels](https://huggingface.co/nielsr)).

-**Note:** If you encounter ``KeyError: 'depth_anything'``, please install the latest ``transformers`` from source:
+**Note:** If you encounter ``KeyError: 'depth_anything'``, please install the latest [``transformers``](https://github.com/huggingface/transformers) from source:

 ```bash
 pip install git+https://github.com/huggingface/transformers.git
 ```
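For reference, the 3-line ``transformers`` usage mentioned above (and the ``depth = pipe(image)["depth"]`` call visible in the next hunk header) might look like the sketch below; the model id ``LiheYoung/depth-anything-small-hf`` is one of the published checkpoints, and the base/large variants can be swapped in:

```python
from PIL import Image
from transformers import pipeline

# Depth-estimation pipeline backed by a Depth Anything checkpoint.
pipe = pipeline(task="depth-estimation", model="LiheYoung/depth-anything-small-hf")

image = Image.open("example.jpg")   # any RGB image
depth = pipe(image)["depth"]        # PIL image holding the predicted depth map
depth.save("example_depth.png")
```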
@@ -210,13 +212,17 @@ depth = pipe(image)["depth"]

 **We sincerely appreciate all the extensions built on our Depth Anything from the community. Thank you a lot!**

 Here we list the extensions we have found:
+- Depth Anything TensorRT:
+    - https://github.com/spacewalk01/depth-anything-tensorrt
+    - https://github.com/thinvy/DepthAnythingTensorrtDeploy
+    - https://github.com/daniel89710/trt-depth-anything
 - Depth Anything ONNX: https://github.com/fabio-sim/Depth-Anything-ONNX
-- Depth Anything TensorRT: https://github.com/spacewalk01/depth-anything-tensorrt
+- Depth Anything in Transformers.js (3D visualization): https://huggingface.co/spaces/Xenova/depth-anything-web
+- Depth Anything for video (online demo): https://huggingface.co/spaces/JohanDL/Depth-Anything-Video
 - Depth Anything in ControlNet WebUI: https://github.com/Mikubill/sd-webui-controlnet
 - Depth Anything in ComfyUI's ControlNet: https://github.com/Fannovel16/comfyui_controlnet_aux
 - Depth Anything in X-AnyLabeling: https://github.com/CVHub520/X-AnyLabeling
 - Depth Anything in OpenXLab: https://openxlab.org.cn/apps/detail/yyfan/depth_anything
-- Depth Anything in Transformers.js: https://huggingface.co/spaces/Xenova/depth-anything-web
 - Depth Anything in OpenVINO: https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/280-depth-anything

 If you have an amazing project that supports or improves (*e.g.*, speeds up) Depth Anything, please feel free to open an issue. We will add it here.
assets/gallery/nuscenes.gif: new binary file (12 MiB), not shown.
gallery.md (10 lines changed)
@@ -114,7 +114,9 @@ You can click on the titles below to be directed to corresponding source pages.

 ## Video

-The videos may be slow to load. Please wait a moment.
+For more online showcases, please refer to https://twitter.com/WilliamLamkin/status/1755623301907460582.
+
+The videos below may be slow to load. Please wait a moment.

 ### [Racing game](https://twitter.com/i/status/1750683014152040853)
@@ -124,6 +126,10 @@ The videos may be slow to load. Please wait a moment.

 <img src="assets/gallery/building.gif" width="80%"/>

+### [nuScenes](https://github.com/scepter914/DepthAnything-ROS)
+
+<img src="assets/gallery/nuscenes.gif" width="80%"/>
+
 ### [Indoor moving](https://twitter.com/PINTO03091/status/1750162506453041437)

 <img src="assets/gallery/indoor_moving.gif" width="40%"/>
@@ -131,7 +137,7 @@ The videos may be slow to load. Please wait a moment.

 ## 3D

-The videos may be slow to load. Please wait a moment.
+The videos below may be slow to load. Please wait a moment.

 ### [3D visualization](https://twitter.com/victormustar/status/1753008143469093212)
semseg/README.md

@@ -40,9 +40,7 @@ Note that our results are obtained *without* Mapillary pre-training.

 ## Installation

-Please refer to [MMSegmentation](https://github.com/open-mmlab/mmsegmentation/blob/main/docs/en/get_started.md#installation) for instructions.
-
-Please also install mmdet to support Mask2Former:
+Please refer to [MMSegmentation](https://github.com/open-mmlab/mmsegmentation/blob/main/docs/en/get_started.md#installation) for instructions. *Do not forget to install ``mmdet`` to support ``Mask2Former``:*

 ```bash
 pip install "mmdet>=3.0.0rc4"
 ```
@@ -52,7 +50,6 @@ After installation:
 - move our [dinov2.py](./dinov2.py) to mmseg's [backbones](https://github.com/open-mmlab/mmsegmentation/tree/main/mmseg/models/backbones)
 - add DINOv2 in mmseg's [models/backbones/\_\_init\_\_.py](https://github.com/open-mmlab/mmsegmentation/blob/main/mmseg/models/backbones/__init__.py)
 - download our provided [torchhub](https://github.com/LiheYoung/Depth-Anything/tree/main/torchhub) directory and put it at the root of your working directory

-**Note:** If you want to reproduce the **training** process, please 1) download the [Depth Anything pre-trained model](https://huggingface.co/spaces/LiheYoung/Depth-Anything/blob/main/checkpoints/depth_anything_vitl14.pth) (to initialize the encoder) and 2) put it under the ``checkpoints`` folder.
+- download the [Depth Anything pre-trained model](https://huggingface.co/spaces/LiheYoung/Depth-Anything/blob/main/checkpoints/depth_anything_vitl14.pth) (to initialize the encoder) and put it under the ``checkpoints`` folder

 For training or inference with our pre-trained models, please refer to MMSegmentation [instructions](https://github.com/open-mmlab/mmsegmentation/blob/main/docs/en/user_guides/4_train_test.md).
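As a hedged illustration of the backbone-registration step in the hunk above ("add DINOv2 in mmseg's ``models/backbones/__init__.py``"), the edit might look like the following sketch. It assumes the class defined in the provided ``dinov2.py`` is named ``DINOv2``; check that file for the actual exported name.

```python
# mmseg/models/backbones/__init__.py (sketch; existing entries abbreviated)
from .beit import BEiT        # ...existing backbone imports stay as-is
from .dinov2 import DINOv2    # assumed class name from the copied dinov2.py

__all__ = [
    'BEiT',                   # ...existing entries stay as-is
    'DINOv2',                 # register DINOv2 so configs can reference it
]
```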
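If you prefer to script the checkpoint step from the last hunk, here is a minimal sketch using ``huggingface_hub`` (an assumption on our part; downloading the file by hand from the ``blob`` link above works just as well):

```python
from huggingface_hub import hf_hub_download

# Fetch the encoder-initialization weights from the Depth Anything Space;
# local_dir keeps the repo-relative path, so the file lands under ./checkpoints/.
hf_hub_download(
    repo_id="LiheYoung/Depth-Anything",
    repo_type="space",
    filename="checkpoints/depth_anything_vitl14.pth",
    local_dir=".",
)
```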