From 76b855f341d6e1f6ea16604c89acb56fcb1e3397 Mon Sep 17 00:00:00 2001
From: Lihe Yang
Date: Sat, 27 Jan 2024 14:30:13 +0800
Subject: [PATCH] Update README.md

---
 README.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index efce660..d746264 100644
--- a/README.md
+++ b/README.md
@@ -81,6 +81,8 @@ encoder = 'vits' # can also be 'vitb' or 'vitl'
 depth_anything = DepthAnything.from_pretrained('LiheYoung/depth_anything_{:}14'.format(encoder))
 ```
 
+Depth Anything is also supported in ``transformers``. You can use it for depth prediction within [3 lines of code](https://huggingface.co/docs/transformers/main/model_doc/depth_anything) (credit to [@niels](https://huggingface.co/nielsr)).
+
 ### No network connection, cannot load these models?
@@ -178,7 +180,7 @@ depth = depth_anything(image)
 
 ### Do not want to define image pre-processing or download model definition files?
 
-Easily use Depth Anything through ``transformers``! Please refer to [these instructions](https://huggingface.co/LiheYoung/depth-anything-small-hf) (credit to [@niels](https://huggingface.co/nielsr)).
+Easily use Depth Anything through ``transformers`` within 3 lines of code! Please refer to [these instructions](https://huggingface.co/docs/transformers/main/model_doc/depth_anything) (credit to [@niels](https://huggingface.co/nielsr)).
 
 Click here for a brief demo:
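For context, the "3 lines of code" this patch refers to is the standard ``transformers`` depth-estimation pipeline. A minimal sketch follows, assuming the `LiheYoung/depth-anything-small-hf` checkpoint from the linked docs and a placeholder image path:

```python
from transformers import pipeline
from PIL import Image

# Load a depth-estimation pipeline with a Depth Anything checkpoint
# (checkpoint name taken from the linked transformers docs).
pipe = pipeline(task="depth-estimation", model="LiheYoung/depth-anything-small-hf")

# Run depth prediction on a local image ("example.jpg" is a placeholder path).
image = Image.open("example.jpg")
depth = pipe(image)["depth"]  # PIL image containing the predicted depth map
```

The pipeline handles image pre-processing and model download internally, which is why the patch drops the manual pre-processing instructions in favor of the ``transformers`` docs link.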