## Depth-Conditioned ControlNet based on Depth Anything V2
We use [Diffusers](https://github.com/huggingface/diffusers/tree/main) to re-train a better depth-conditioned ControlNet based on our Depth Anything.
Please download our [config file](./config.json) and [pre-trained weights](https://huggingface.co/MackinationsAi/Depth-Anything-V2_Safetensors/blob/main/depth_anything_v2_vitl.safetensors), then follow the [instructions](https://github.com/huggingface/diffusers/tree/main/examples/controlnet) in Diffusers for inference.
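For reference, below is a minimal inference sketch using the standard Diffusers ControlNet API. The local weights directory, base model, prompt, and depth-map filename are all illustrative assumptions, not part of this repo; the downloaded config and weights are assumed to be arranged in a directory the way `ControlNetModel.from_pretrained` expects.

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Hypothetical local directory holding the downloaded config.json and
# .safetensors weights, laid out as ControlNetModel.from_pretrained expects.
controlnet = ControlNetModel.from_pretrained(
    "./depth_anything_v2_controlnet", torch_dtype=torch.float16
)

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed base model; swap in your own
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Depth map predicted beforehand (e.g. with Depth Anything V2) and saved as
# an image; the conditioning image must match what the ControlNet was trained on.
depth = Image.open("depth.png").convert("RGB")

image = pipe(
    "a photo of a cozy living room",  # example prompt
    image=depth,
    num_inference_steps=30,
).images[0]
image.save("output.png")
```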
## Depth-to-Image Synthesis
## Video Editing
Please refer to our [project page](https://depth-anything-v2.github.io/). We use [MagicEdit](https://github.com/magic-research/magic-edit) to demonstrate depth-guided video editing.