mirror of https://github.com/Bing-su/adetailer.git
synced 2026-04-22 23:39:07 +00:00

Merge branch 'dev'
35 .github/workflows/lint.yml (vendored, new file)
@@ -0,0 +1,35 @@
+name: Lint
+
+on:
+  pull_request:
+    paths:
+      - "**.py"
+
+jobs:
+  lint:
+    runs-on: ubuntu-latest
+    if: github.repository == 'Bing-su/adetailer' || github.repository == ''
+
+    steps:
+      - uses: actions/checkout@v4
+
+      - name: Setup python
+        uses: actions/setup-python@v5
+        with:
+          python-version: "3.10"
+
+      - name: Install python packages
+        run: pip install black ruff pre-commit-hooks
+
+      - name: Run pre-commit-hooks
+        run: |
+          check-ast
+          trailing-whitespace-fixer --markdown-linebreak-ext=md
+          end-of-file-fixer
+          mixed-line-ending
+
+      - name: Run black
+        run: black --check .
+
+      - name: Run ruff
+        run: ruff check .
2 .github/workflows/stale.yml (vendored)
@@ -7,7 +7,7 @@ jobs:
   stale:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/stale@v8
+      - uses: actions/stale@v9
        with:
          days-before-stale: 23
          days-before-close: 3
@@ -9,12 +9,12 @@ repos:
       - id: mixed-line-ending

   - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.1.14
+    rev: v0.2.2
     hooks:
       - id: ruff
         args: [--fix, --exit-non-zero-on-fix]

   - repo: https://github.com/psf/black-pre-commit-mirror
-    rev: 23.12.1
+    rev: 24.2.0
     hooks:
       - id: black
11 CHANGELOG.md
@@ -1,5 +1,16 @@
 # Changelog

+## 2024-03-01
+
+- v24.3.0
+  - Added YOLO World models: only the largest one, yolov8x-world.pt, is selectable by default.
+  - Made ControlNet usable on lllyasviel/stable-diffusion-webui-forge (PR #517)
+  - Added soft_inpainting to the default script list (https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14208)
+    - Not applied retroactively to existing installations
+
+- Added simple pytest tests for the detection models
+- Added `Passthrough` to the xyz grid ControlNet model options
+
 ## 2024-01-23

 - v24.1.2
25 README.md
@@ -18,15 +18,16 @@ You can now install it directly from the Extensions tab.

 ![image](image)

-You **DON'T** need to download any model from huggingface.
+You **DON'T** need to download any base model from huggingface.

 ## Options

-| Model, Prompts | | |
-| --- | --- | --- |
-| ADetailer model | Determine what to detect. | `None` = disable |
-| ADetailer prompt, negative prompt | Prompts and negative prompts to apply | If left blank, it will use the same as the input. |
-| Skip img2img | Skip img2img. In practice, this works by changing the step count of img2img to 1. | img2img only |
+| Model, Prompts | | |
+| --- | --- | --- |
+| ADetailer model | Determine what to detect. | `None` = disable |
+| ADetailer model classes | Comma separated class names to detect. Only available when using YOLO World models. | If blank, use default values.<br/>default = [COCO 80 classes](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml) |
+| ADetailer prompt, negative prompt | Prompts and negative prompts to apply | If left blank, it will use the same as the input. |
+| Skip img2img | Skip img2img. In practice, this works by changing the step count of img2img to 1. | img2img only |

 | Detection | | |
 | --- | --- | --- |
@@ -52,7 +53,9 @@ Each option corresponds to a corresponding option on the inpaint tab. Therefore,

 You can use the ControlNet extension if you have ControlNet installed and ControlNet models.

-Support `inpaint, scribble, lineart, openpose, tile` controlnet models. Once you choose a model, the preprocessor is set automatically. It works separately from the model set by the Controlnet extension.
+Support `inpaint, scribble, lineart, openpose, tile, depth` controlnet models. Once you choose a model, the preprocessor is set automatically. It works separately from the model set by the Controlnet extension.
+
+If you select `Passthrough`, the controlnet settings you set outside of ADetailer will be used.

 ## Advanced Options
@@ -80,11 +83,15 @@ API request example: [wiki/API](https://github.com/Bing-su/adetailer/wiki/API)
 | mediapipe_face_short | realistic face | - | - |
 | mediapipe_face_mesh | realistic face | - | - |

-The yolo models can be found on huggingface [Bingsu/adetailer](https://huggingface.co/Bingsu/adetailer).
+The YOLO models can be found on huggingface [Bingsu/adetailer](https://huggingface.co/Bingsu/adetailer).

 For a detailed description of the YOLO8 model, see: https://docs.ultralytics.com/models/yolov8/#overview

+YOLO World model: https://docs.ultralytics.com/models/yolo-world/
+
 ### Additional Model

-Put your [ultralytics](https://github.com/ultralytics/ultralytics) yolo model in `webui/models/adetailer`. The model name should end with `.pt` or `.pth`.
+Put your [ultralytics](https://github.com/ultralytics/ultralytics) yolo model in `webui/models/adetailer`. The model name should end with `.pt`.

 It must be a bbox detection or segment model and use all labels.
@@ -1 +1 @@
-__version__ = "24.1.2"
+__version__ = "24.3.0"
@@ -5,7 +5,6 @@ from dataclasses import dataclass
 from functools import cached_property, partial
 from typing import Any, Literal, NamedTuple, Optional

-import pydantic
 from pydantic import (
     BaseModel,
     Extra,
@@ -14,7 +13,6 @@ from pydantic import (
     PositiveInt,
     confloat,
     conint,
-    constr,
     validator,
 )
@@ -34,16 +32,17 @@ class Arg(NamedTuple):

 class ArgsList(UserList):
     @cached_property
-    def attrs(self) -> tuple[str]:
+    def attrs(self) -> tuple[str, ...]:
         return tuple(attr for attr, _ in self)

     @cached_property
-    def names(self) -> tuple[str]:
+    def names(self) -> tuple[str, ...]:
         return tuple(name for _, name in self)


 class ADetailerArgs(BaseModel, extra=Extra.forbid):
     ad_model: str = "None"
+    ad_model_classes: str = ""
     ad_prompt: str = ""
     ad_negative_prompt: str = ""
     ad_confidence: confloat(ge=0.0, le=1.0) = 0.3
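For context on the `tuple[str]` → `tuple[str, ...]` fix in this hunk: `tuple[str]` annotates a 1-tuple, while `tuple[str, ...]` annotates a homogeneous tuple of any length, which is what these properties actually return. A runnable sketch of the corrected class, mirroring the names in the diff:

```python
from collections import UserList
from functools import cached_property
from typing import NamedTuple


class Arg(NamedTuple):
    attr: str
    name: str


class ArgsList(UserList):
    # tuple[str, ...] = "tuple of any length, all str";
    # the old tuple[str] annotation meant a 1-tuple.
    @cached_property
    def attrs(self) -> tuple[str, ...]:
        return tuple(attr for attr, _ in self)

    @cached_property
    def names(self) -> tuple[str, ...]:
        return tuple(name for _, name in self)


args = ArgsList([Arg("ad_model", "ADetailer model"), Arg("ad_prompt", "ADetailer prompt")])
print(args.attrs)  # ('ad_model', 'ad_prompt')
print(args.names)  # ('ADetailer model', 'ADetailer prompt')
```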
@@ -113,6 +112,7 @@ class ADetailerArgs(BaseModel, extra=Extra.forbid):
         p = {name: getattr(self, attr) for attr, name in ALL_ARGS}
         ppop = partial(self.ppop, p)

+        ppop("ADetailer model classes")
         ppop("ADetailer prompt")
         ppop("ADetailer negative prompt")
         ppop("ADetailer mask only top k largest", cond=0)
@@ -185,6 +185,7 @@ class ADetailerArgs(BaseModel, extra=Extra.forbid):

 _all_args = [
     ("ad_model", "ADetailer model"),
+    ("ad_model_classes", "ADetailer model classes"),
     ("ad_prompt", "ADetailer prompt"),
     ("ad_negative_prompt", "ADetailer negative prompt"),
     ("ad_confidence", "ADetailer confidence"),
@@ -3,13 +3,13 @@ from __future__ import annotations
 from collections import OrderedDict
 from dataclasses import dataclass, field
 from pathlib import Path
-from typing import Optional, Union
+from typing import Optional

 from huggingface_hub import hf_hub_download
 from PIL import Image, ImageDraw
 from rich import print

-repo_id = "Bingsu/adetailer"
+REPO_ID = "Bingsu/adetailer"
+_download_failed = False
@@ -20,7 +20,7 @@ class PredictOutput:
     preview: Optional[Image.Image] = None


-def hf_download(file: str):
+def hf_download(file: str, repo_id: str = REPO_ID) -> str | None:
     global _download_failed

     if _download_failed:
@@ -56,6 +56,9 @@ def get_models(
             "hand_yolov8n.pt": hf_download("hand_yolov8n.pt"),
             "person_yolov8n-seg.pt": hf_download("person_yolov8n-seg.pt"),
             "person_yolov8s-seg.pt": hf_download("person_yolov8s-seg.pt"),
+            "yolov8x-world.pt": hf_download(
+                "yolov8x-world.pt", repo_id="Bingsu/yolo-world-mirror"
+            ),
         }
     )
     models.update(
103 adetailer/ui.py
@@ -9,25 +9,37 @@ import gradio as gr

 from adetailer import AFTER_DETAILER, __version__
 from adetailer.args import ALL_ARGS, MASK_MERGE_INVERT
-from controlnet_ext import controlnet_exists, get_cn_models
+from controlnet_ext import controlnet_exists, controlnet_type, get_cn_models

-cn_module_choices = {
-    "inpaint": [
-        "inpaint_global_harmonious",
-        "inpaint_only",
-        "inpaint_only+lama",
-    ],
-    "lineart": [
-        "lineart_coarse",
-        "lineart_realistic",
-        "lineart_anime",
-        "lineart_anime_denoise",
-    ],
-    "openpose": ["openpose_full", "dw_openpose_full"],
-    "tile": ["tile_resample", "tile_colorfix", "tile_colorfix+sharp"],
-    "scribble": ["t2ia_sketch_pidi"],
-    "depth": ["depth_midas", "depth_hand_refiner"],
-}
+if controlnet_type == "forge":
+    from lib_controlnet import global_state
+
+    cn_module_choices = {
+        "inpaint": list(global_state.get_filtered_preprocessors("Inpaint")),
+        "lineart": list(global_state.get_filtered_preprocessors("Lineart")),
+        "openpose": list(global_state.get_filtered_preprocessors("OpenPose")),
+        "tile": list(global_state.get_filtered_preprocessors("Tile")),
+        "scribble": list(global_state.get_filtered_preprocessors("Scribble")),
+        "depth": list(global_state.get_filtered_preprocessors("Depth")),
+    }
+else:
+    cn_module_choices = {
+        "inpaint": [
+            "inpaint_global_harmonious",
+            "inpaint_only",
+            "inpaint_only+lama",
+        ],
+        "lineart": [
+            "lineart_coarse",
+            "lineart_realistic",
+            "lineart_anime",
+            "lineart_anime_denoise",
+        ],
+        "openpose": ["openpose_full", "dw_openpose_full"],
+        "tile": ["tile_resample", "tile_colorfix", "tile_colorfix+sharp"],
+        "scribble": ["t2ia_sketch_pidi"],
+        "depth": ["depth_midas", "depth_hand_refiner"],
+    }


 class Widgets(SimpleNamespace):
@@ -73,6 +85,15 @@ def on_generate_click(state: dict, *values: Any):
     return state


+def on_ad_model_update(model: str):
+    if "-world" in model:
+        return gr.update(
+            visible=True,
+            placeholder="Comma separated class names to detect, ex: 'person,cat'. default: COCO 80 classes",
+        )
+    return gr.update(visible=False, placeholder="")
+
+
 def on_cn_model_update(cn_model_name: str):
     cn_model_name = cn_model_name.replace("inpaint_depth", "depth")
     for t in cn_module_choices:
@@ -149,21 +170,39 @@ def one_ui_group(n: int, is_img2img: bool, webui_info: WebuiInfo):
     w = Widgets()
     eid = partial(elem_id, n=n, is_img2img=is_img2img)

-    with gr.Row():
-        model_choices = (
-            [*webui_info.ad_model_list, "None"]
-            if n == 0
-            else ["None", *webui_info.ad_model_list]
-        )
+    with gr.Group():
+        with gr.Row():
+            model_choices = (
+                [*webui_info.ad_model_list, "None"]
+                if n == 0
+                else ["None", *webui_info.ad_model_list]
+            )

-        w.ad_model = gr.Dropdown(
-            label="ADetailer model" + suffix(n),
-            choices=model_choices,
-            value=model_choices[0],
-            visible=True,
-            type="value",
-            elem_id=eid("ad_model"),
-        )
+            w.ad_model = gr.Dropdown(
+                label="ADetailer model" + suffix(n),
+                choices=model_choices,
+                value=model_choices[0],
+                visible=True,
+                type="value",
+                elem_id=eid("ad_model"),
+            )
+
+        with gr.Row():
+            w.ad_model_classes = gr.Textbox(
+                label="ADetailer model classes" + suffix(n),
+                value="",
+                visible=False,
+                elem_id=eid("ad_classes"),
+            )
+
+        w.ad_model.change(
+            on_ad_model_update,
+            inputs=w.ad_model,
+            outputs=w.ad_model_classes,
+            queue=False,
+        )

     gr.HTML("<br>")

     with gr.Group():
         with gr.Row(elem_id=eid("ad_toprow_prompt")):
@@ -1,6 +1,7 @@
 from __future__ import annotations

 from pathlib import Path
+from typing import TYPE_CHECKING

 import cv2
 from PIL import Image
@@ -9,16 +10,22 @@ from torchvision.transforms.functional import to_pil_image
 from adetailer import PredictOutput
 from adetailer.common import create_mask_from_bbox

+if TYPE_CHECKING:
+    import torch
+    from ultralytics import YOLO, YOLOWorld
+
+
 def ultralytics_predict(
     model_path: str | Path,
     image: Image.Image,
     confidence: float = 0.3,
     device: str = "",
+    classes: str = "",
 ) -> PredictOutput:
     from ultralytics import YOLO

     model = YOLO(model_path)
+    apply_classes(model, model_path, classes)
     pred = model(image, conf=confidence, device=device)

     bboxes = pred[0].boxes.xyxy.cpu().numpy()
@@ -37,7 +44,15 @@ def ultralytics_predict(
     return PredictOutput(bboxes=bboxes, masks=masks, preview=preview)


-def mask_to_pil(masks, shape: tuple[int, int]) -> list[Image.Image]:
+def apply_classes(model: YOLO | YOLOWorld, model_path: str | Path, classes: str):
+    if not classes or "-world" not in Path(model_path).stem:
+        return
+    parsed = [c.strip() for c in classes.split(",") if c.strip()]
+    if parsed:
+        model.set_classes(parsed)
+
+
+def mask_to_pil(masks: torch.Tensor, shape: tuple[int, int]) -> list[Image.Image]:
     """
     Parameters
     ----------
@@ -45,7 +60,7 @@ def mask_to_pil(masks, shape: tuple[int, int]) -> list[Image.Image]:
         The device can be CUDA, but `to_pil_image` takes care of that.

     shape: tuple[int, int]
-        (width, height) of the original image
+        (W, H) of the original image
     """
     n = masks.shape[0]
     return [to_pil_image(masks[i], mode="L").resize(shape) for i in range(n)]
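`apply_classes` above only takes effect for YOLO World checkpoints (the file stem contains `-world`) and drops empty entries from the comma-separated string. The parsing half of that guard, extracted into a hypothetical `parse_classes` helper so it runs without ultralytics installed:

```python
from pathlib import Path


def parse_classes(model_path: str, classes: str) -> list[str]:
    # Same guard as apply_classes: class filtering is YOLO World-only,
    # and blank entries like "person, cat,," are ignored.
    if not classes or "-world" not in Path(model_path).stem:
        return []
    return [c.strip() for c in classes.split(",") if c.strip()]


print(parse_classes("yolov8x-world.pt", "person, cat,,"))  # ['person', 'cat']
print(parse_classes("face_yolov8n.pt", "person"))          # []
```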
@@ -1,7 +1,21 @@
-from .controlnet_ext import ControlNetExt, controlnet_exists, get_cn_models
+try:
+    from .controlnet_ext_forge import (
+        ControlNetExt,
+        controlnet_exists,
+        controlnet_type,
+        get_cn_models,
+    )
+except ImportError:
+    from .controlnet_ext import (
+        ControlNetExt,
+        controlnet_exists,
+        controlnet_type,
+        get_cn_models,
+    )

 __all__ = [
     "ControlNetExt",
     "controlnet_exists",
+    "controlnet_type",
     "get_cn_models",
 ]
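The rewritten `controlnet_ext/__init__.py` above prefers the forge implementation and silently falls back to the standard one when forge's `lib_controlnet` is not importable. The mechanism in miniature; the forge-side module name here is a deliberate fake so the fallback branch runs:

```python
# Forge-first import with a standard fallback, as in the diff.
try:
    import adetailer_forge_backend  # hypothetical, certainly not installed
    controlnet_type = "forge"
except ImportError:
    controlnet_type = "standard"

print(controlnet_type)  # standard
```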
11 controlnet_ext/common.py (new file)
@@ -0,0 +1,11 @@
+import re
+
+cn_model_module = {
+    "inpaint": "inpaint_global_harmonious",
+    "scribble": "t2ia_sketch_pidi",
+    "lineart": "lineart_coarse",
+    "openpose": "openpose_full",
+    "tile": "tile_resample",
+    "depth": "depth_midas",
+}
+cn_model_regex = re.compile("|".join(cn_model_module.keys()))
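`cn_model_regex` above is later used by `get_cn_models()` to keep only checkpoints whose filename mentions one of the supported control types. A quick check of how the alternation behaves; the model names below are made-up examples:

```python
import re

cn_model_module = {
    "inpaint": "inpaint_global_harmonious",
    "scribble": "t2ia_sketch_pidi",
    "lineart": "lineart_coarse",
    "openpose": "openpose_full",
    "tile": "tile_resample",
    "depth": "depth_midas",
}
cn_model_regex = re.compile("|".join(cn_model_module.keys()))

names = [
    "control_v11p_sd15_inpaint [fedf2c]",  # matches "inpaint"
    "control_v11f1e_sd15_tile",            # matches "tile"
    "t2i_style_adapter",                   # no supported type in the name
]
print([n for n in names if cn_model_regex.search(n)])
```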
@@ -1,7 +1,6 @@
 from __future__ import annotations

 import importlib
-import re
 import sys
 from functools import lru_cache
 from pathlib import Path
@@ -9,6 +8,8 @@ from textwrap import dedent

 from modules import extensions, sd_models, shared

+from .common import cn_model_module, cn_model_regex
+
 try:
     from modules.paths import extensions_builtin_dir, extensions_dir, models_path
 except ImportError as e:
@@ -22,6 +23,7 @@ except ImportError as e:
 ext_path = Path(extensions_dir)
 ext_builtin_path = Path(extensions_builtin_dir)
 controlnet_exists = False
+controlnet_type = "standard"
 controlnet_path = None
 cn_base_path = ""
@@ -42,16 +44,6 @@ if controlnet_path is not None:
     if target_path not in sys.path:
         sys.path.append(target_path)

-cn_model_module = {
-    "inpaint": "inpaint_global_harmonious",
-    "scribble": "t2ia_sketch_pidi",
-    "lineart": "lineart_coarse",
-    "openpose": "openpose_full",
-    "tile": "tile_resample",
-    "depth": "depth_midas",
-}
-cn_model_regex = re.compile("|".join(cn_model_module.keys()))


 class ControlNetExt:
     def __init__(self):
92 controlnet_ext/controlnet_ext_forge.py (new file)
@@ -0,0 +1,92 @@
+from __future__ import annotations
+
+import copy
+
+import numpy as np
+from lib_controlnet import external_code, global_state
+from lib_controlnet.external_code import ControlNetUnit
+
+from modules import scripts
+from modules.processing import StableDiffusionProcessing
+
+from .common import cn_model_regex
+
+controlnet_exists = True
+controlnet_type = "forge"
+
+
+def find_script(p: StableDiffusionProcessing, script_title: str) -> scripts.Script:
+    script = next((s for s in p.scripts.scripts if s.title() == script_title), None)
+    if not script:
+        msg = f"Script not found: {script_title!r}"
+        raise RuntimeError(msg)
+    return script
+
+
+def add_forge_script_to_adetailer_run(
+    p: StableDiffusionProcessing, script_title: str, script_args: list
+):
+    p.scripts = copy.copy(scripts.scripts_img2img)
+    p.scripts.alwayson_scripts = []
+    p.script_args_value = []
+
+    script = copy.copy(find_script(p, script_title))
+    script.args_from = len(p.script_args_value)
+    script.args_to = len(p.script_args_value) + len(script_args)
+    p.scripts.alwayson_scripts.append(script)
+    p.script_args_value.extend(script_args)
+
+
+class ControlNetExt:
+    def __init__(self):
+        self.cn_available = False
+        self.external_cn = external_code
+
+    def init_controlnet(self):
+        self.cn_available = True
+
+    def update_scripts_args(
+        self,
+        p,
+        model: str,
+        module: str | None,
+        weight: float,
+        guidance_start: float,
+        guidance_end: float,
+    ):
+        if (not self.cn_available) or model == "None":
+            return
+
+        image = np.asarray(p.init_images[0])
+        mask = np.full_like(image, fill_value=255)
+
+        cnet_image = {"image": image, "mask": mask}
+
+        pres = external_code.pixel_perfect_resolution(
+            image,
+            target_H=p.height,
+            target_W=p.width,
+            resize_mode=external_code.resize_mode_from_value(p.resize_mode),
+        )
+
+        add_forge_script_to_adetailer_run(
+            p,
+            "ControlNet",
+            [
+                ControlNetUnit(
+                    enabled=True,
+                    image=cnet_image,
+                    model=model,
+                    module=module,
+                    weight=weight,
+                    guidance_start=guidance_start,
+                    guidance_end=guidance_end,
+                    processor_res=pres,
+                )
+            ],
+        )
+
+
+def get_cn_models() -> list[str]:
+    models = global_state.get_all_controlnet_names()
+    return [m for m in models if cn_model_regex.search(m)]
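`add_forge_script_to_adetailer_run` above relies on webui's convention that each script owns a half-open window `[args_from, args_to)` of one flat, shared argument list. That bookkeeping, reduced to a sketch with hypothetical stand-in objects:

```python
from types import SimpleNamespace

script_args: list = []

# Append a script's args and record its window, as the diff does.
script = SimpleNamespace(title="ControlNet", args_from=0, args_to=0)
new_args = ["unit_1", "unit_2"]  # hypothetical ControlNetUnit stand-ins
script.args_from = len(script_args)
script.args_to = len(script_args) + len(new_args)
script_args.extend(new_args)

# Each script later reads back exactly its own slice.
print(script_args[script.args_from : script.args_to])  # ['unit_1', 'unit_2']
```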
@@ -44,7 +44,7 @@ def run_pip(*args):
 def install():
     deps = [
         # requirements
-        ("ultralytics", "8.1.0", None),
+        ("ultralytics", "8.1.18", None),
         ("mediapipe", "0.10.9", None),
         ("rich", "13.0.0", None),
         # mediapipe
@@ -13,7 +13,7 @@ repository = "https://github.com/Bing-su/adetailer"
 profile = "black"
 known_first_party = ["launch", "modules"]

-[tool.ruff]
+[tool.ruff.lint]
 select = [
     "A",
     "B",
@@ -36,7 +36,10 @@ select = [
     "UP",
     "W",
 ]
-ignore = ["B008", "B905", "E501", "F401", "UP007"]
+ignore = ["B008", "B905", "E501", "F401"]

-[tool.ruff.isort]
+[tool.ruff.lint.isort]
 known-first-party = ["launch", "modules"]
+
+[tool.ruff.lint.pyupgrade]
+keep-runtime-typing = true
@@ -36,7 +36,12 @@ from adetailer.mask import (
 )
 from adetailer.traceback import rich_traceback
 from adetailer.ui import WebuiInfo, adui, ordinal, suffix
-from controlnet_ext import ControlNetExt, controlnet_exists, get_cn_models
+from controlnet_ext import (
+    ControlNetExt,
+    controlnet_exists,
+    controlnet_type,
+    get_cn_models,
+)
 from controlnet_ext.restore import (
     CNHijackRestore,
     cn_allow_script_control,
@@ -62,7 +67,7 @@ model_mapping = get_models(
     adetailer_dir, extra_dir=extra_models_dir, huggingface=not no_huggingface
 )
 txt2img_submit_button = img2img_submit_button = None
-SCRIPT_DEFAULT = "dynamic_prompting,dynamic_thresholding,wildcard_recursive,wildcards,lora_block_weight,negpip"
+SCRIPT_DEFAULT = "dynamic_prompting,dynamic_thresholding,wildcard_recursive,wildcards,lora_block_weight,negpip,soft_inpainting"

 if (
     not adetailer_dir.exists()
@@ -517,7 +522,7 @@ class AfterDetailerScript(scripts.Script):
         i2i._ad_disabled = True
         i2i._ad_inner = True

-        if args.ad_controlnet_model != "Passthrough":
+        if args.ad_controlnet_model != "Passthrough" and controlnet_type != "forge":
             self.disable_controlnet_units(i2i.script_args)

         if args.ad_controlnet_model not in ["None", "Passthrough"]:
@@ -648,6 +653,14 @@ class AfterDetailerScript(scripts.Script):
         if self.is_ad_enabled(*args_):
             arg_list = self.get_args(p, *args_)
             self.check_skip_img2img(p, *args_)
+
+            if hasattr(p, "_ad_xyz_prompt_sr"):
+                replaced_positive_prompt, replaced_negative_prompt = self.get_prompt(
+                    p, arg_list[0]
+                )
+                arg_list[0].ad_prompt = replaced_positive_prompt[0]
+                arg_list[0].ad_negative_prompt = replaced_negative_prompt[0]
+
             extra_params = self.extra_params(arg_list)
             p.extra_generation_params.update(extra_params)
         else:
@@ -682,6 +695,7 @@ class AfterDetailerScript(scripts.Script):
             predictor = ultralytics_predict
             ad_model = self.get_ad_model(args.ad_model)
             kwargs["device"] = self.ultralytics_device
+            kwargs["classes"] = args.ad_model_classes

         with change_torch_load():
             pred = predictor(ad_model, pp.image, args.ad_confidence, **kwargs)
@@ -958,7 +972,7 @@ def make_axis_on_xyz_grid():
             "[ADetailer] ControlNet model 1st",
             str,
             partial(set_value, field="ad_controlnet_model"),
-            choices=lambda: ["None", *get_cn_models()],
+            choices=lambda: ["None", "Passthrough", *get_cn_models()],
         ),
     ]
@@ -982,15 +996,15 @@ def on_before_ui():

 def add_api_endpoints(_: gr.Blocks, app: FastAPI):
     @app.get("/adetailer/v1/version")
-    def version():
+    async def version():
         return {"version": __version__}

     @app.get("/adetailer/v1/schema")
-    def schema():
+    async def schema():
         return ADetailerArgs.schema()

     @app.get("/adetailer/v1/ad_model")
-    def ad_model():
+    async def ad_model():
         return {"ad_model": list(model_mapping)}
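The endpoint bodies above are unchanged; only `def` becomes `async def`, which lets FastAPI await the handlers on its event loop instead of dispatching them to a threadpool. Stripped of FastAPI, the converted `version` handler behaves like this:

```python
import asyncio

__version__ = "24.3.0"


# Same shape as the handler in the diff, minus the @app.get decorator.
async def version():
    return {"version": __version__}


print(asyncio.run(version()))  # {'version': '24.3.0'}
```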
0 tests/__init__.py (new file)
29 tests/conftest.py (new file)
@@ -0,0 +1,29 @@
+from functools import cache
+
+import pytest
+import requests
+from PIL import Image
+
+
+@cache
+def _sample_image():
+    url = "https://i.imgur.com/E5OVXvn.png"
+    resp = requests.get(url, stream=True, headers={"User-Agent": "Mozilla/5.0"})
+    return Image.open(resp.raw)
+
+
+@cache
+def _sample_image2():
+    url = "https://i.imgur.com/px5UT7T.png"
+    resp = requests.get(url, stream=True, headers={"User-Agent": "Mozilla/5.0"})
+    return Image.open(resp.raw)
+
+
+@pytest.fixture()
+def sample_image():
+    return _sample_image()
+
+
+@pytest.fixture()
+def sample_image2():
+    return _sample_image2()
18 tests/test_mediapipe.py (new file)
@@ -0,0 +1,18 @@
+import pytest
+from PIL import Image
+
+from adetailer.mediapipe import mediapipe_predict
+
+
+@pytest.mark.parametrize(
+    "model_name",
+    [
+        "mediapipe_face_short",
+        "mediapipe_face_full",
+        "mediapipe_face_mesh",
+        "mediapipe_face_mesh_eyes_only",
+    ],
+)
+def test_mediapipe(sample_image2: Image.Image, model_name: str):
+    result = mediapipe_predict(model_name, sample_image2)
+    assert result.preview is not None
48 tests/test_ultralytics.py (new file)
@@ -0,0 +1,48 @@
+import pytest
+from huggingface_hub import hf_hub_download
+from PIL import Image
+
+from adetailer.ultralytics import ultralytics_predict
+
+
+@pytest.mark.parametrize(
+    "model_name",
+    [
+        "face_yolov8n.pt",
+        "face_yolov8n_v2.pt",
+        "face_yolov8s.pt",
+        "hand_yolov8n.pt",
+        "hand_yolov8s.pt",
+        "person_yolov8n-seg.pt",
+        "person_yolov8s-seg.pt",
+        "person_yolov8m-seg.pt",
+        "deepfashion2_yolov8s-seg.pt",
+    ],
+)
+def test_ultralytics_hf_models(sample_image: Image.Image, model_name: str):
+    model_path = hf_hub_download("Bingsu/adetailer", model_name)
+    result = ultralytics_predict(model_path, sample_image)
+    assert result.preview is not None
+
+
+def test_yolo_world_default(sample_image: Image.Image):
+    model_path = hf_hub_download("Bingsu/yolo-world-mirror", "yolov8x-world.pt")
+    result = ultralytics_predict(model_path, sample_image)
+    assert result.preview is not None
+
+
+@pytest.mark.parametrize(
+    "klass",
+    [
+        "person",
+        "bird",
+        "yellow bird",
+        "person,glasses,headphone",
+        "person,bird",
+        "glasses,yellow bird",
+    ],
+)
+def test_yolo_world(sample_image2: Image.Image, klass: str):
+    model_path = hf_hub_download("Bingsu/yolo-world-mirror", "yolov8x-world.pt")
+    result = ultralytics_predict(model_path, sample_image2, classes=klass)
+    assert result.preview is not None