## Depth-Conditioned ControlNet based on Depth Anything
We use [Diffusers](https://github.com/huggingface/diffusers/tree/main) to re-train a better depth-conditioned ControlNet based on our Depth Anything.
Please download our [config file](./config.json) and [pre-trained weights](https://huggingface.co/spaces/LiheYoung/Depth-Anything/tree/main/checkpoints_controlnet), then follow the [instructions](https://github.com/huggingface/diffusers/tree/main/examples/controlnet) in Diffusers for inference.
## Depth-to-Image Synthesis


## Video Editing
The demos below are generated by [MagicEdit](https://github.com/magic-research/magic-edit). In each row, the middle video is generated with a MiDaS-based ControlNet, and the last video with our Depth Anything-based ControlNet.
<div style="display: flex; justify-content: space-around;">
  <video width="30%" controls autoplay muted loop>
    <source src="../assets/video_edit/demo1_video.mp4" type="video/mp4">
  </video>
  <video width="30%" controls autoplay muted loop>
    <source src="../assets/video_edit/demo1_midas.mp4" type="video/mp4">
  </video>
  <video width="30%" controls autoplay muted loop>
    <source src="../assets/video_edit/demo1_ours.mp4" type="video/mp4">
  </video>
</div><br>

<div style="display: flex; justify-content: space-around;">
  <video width="30%" controls autoplay muted loop>
    <source src="../assets/video_edit/demo2_video.mp4" type="video/mp4">
  </video>
  <video width="30%" controls autoplay muted loop>
    <source src="../assets/video_edit/demo2_midas.mp4" type="video/mp4">
  </video>
  <video width="30%" controls autoplay muted loop>
    <source src="../assets/video_edit/demo2_ours.mp4" type="video/mp4">
  </video>
</div>