Compare commits


141 Commits
1.7.3 ... 2.0.0

Author SHA1 Message Date
Dominik Reh
76bd983ba3 Fix right alignment for count/meta text 2023-02-11 15:26:18 +01:00
Dominik Reh
2de1c720ee Merge branch 'feature-extendability' into main 2023-02-11 15:15:46 +01:00
Dominik Reh
37e1c15e6d Make quality tags file the default 2023-02-11 14:16:31 +01:00
Dominik Reh
c16d110de3 Add example extra file for the common quality tags 2023-02-11 14:14:48 +01:00
Dominik Reh
f2c3574da7 Rework extra file system
Now just for adding new custom tags either before or after the rest
2023-02-11 14:13:42 +01:00
Dominik Reh
b4fe4f717a Extract sanitization / text edit before insertion 2023-02-11 13:36:39 +01:00
Dominik Reh
9ff721ffcb Fix word break behavior for new max-width change
Closes #72, at least with a simple solution
2023-02-11 12:32:42 +01:00
viyiviyi
f74cecf0aa Fixes repeated file loads during setup and limits result width (#126)
Thanks to @viyiviyi
2023-02-11 12:10:31 +01:00
Dominik Reh
b540400110 Allow spaces in wildcard file names 2023-02-10 12:23:52 +01:00
Dominik Reh
d29298e0cc Move anti-caching parameter to load function
For less repetition and shorter paths in the higher level functions.
Active by default, but can be disabled.
2023-02-10 11:59:06 +01:00
Dominik Reh
cbeced9121 Extract file load to queue
This enables other parsers to keep their load function in the same file
2023-02-10 11:55:56 +01:00
DominikDoom
8dd8ccc527 Fix safety check 2023-02-10 07:30:58 +01:00
Dominik Reh
beba0ca714 Merge branch 'main' of https://github.com/DominikDoom/a1111-sd-webui-tagcomplete into main 2023-02-05 17:19:28 +01:00
Dominik Reh
bb82f208c0 Catch lora attribute error
Should fix the issue for older webui versions.
Closes #119, #124
2023-02-05 17:19:24 +01:00
Dominik Reh
890f1a48c2 Add pycache folder to gitignore 2023-02-02 18:56:52 +01:00
Dominik Reh
c70a18919b Make tag regex work with more < configurations
Will now allow completion of a < tag if the one directly after is also a < tag only separated by a space.
(Happens often now that Loras are a thing and <>'s stay in the prompt with them)
2023-02-02 18:56:07 +01:00
DominikDoom
732a0075f8 Merge pull request #122 from ctwrs/main
Fixes https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/issues/121
2023-02-02 17:24:15 +01:00
Piotr Zaborowski
86ead9b43d Add failsafe for badly formatted UmiAI YAML files 2023-02-02 14:37:07 +01:00
Dominik Reh
db3319b0d3 Fix long lists not scrolling to top on reset 2023-01-29 18:43:09 +01:00
Dominik Reh
a588e0b989 Extract embeddings, hypernets and loras 2023-01-29 18:36:09 +01:00
Dominik Reh
b22435dd32 Extract wildcard keep open as well 2023-01-29 17:58:21 +01:00
Dominik Reh
b0347d1ca7 Extract UMI after insert update 2023-01-29 17:36:02 +01:00
Dominik Reh
fad8b3dc88 Safety checks 2023-01-29 17:19:24 +01:00
Dominik Reh
95eb9dd6e9 Extract UMI completion (base) 2023-01-29 17:19:15 +01:00
Dominik Reh
93ee32175d Wildcard fixes & cleanup 2023-01-29 17:16:09 +01:00
Dominik Reh
86fafeebf5 Fix for undefined returns 2023-01-29 17:14:05 +01:00
Dominik Reh
29d1e7212d Rename queues to fit const naming convention 2023-01-29 16:40:04 +01:00
Dominik Reh
8e14221739 Extract wildcard completion 2023-01-29 01:00:02 +01:00
Dominik Reh
cd80710708 Implement parser queue 2023-01-29 00:45:00 +01:00
Dominik Reh
3e0a7cc796 Custom error for missing override 2023-01-29 00:37:39 +01:00
Dominik Reh
98000bd2fc Fix copy-paste error 2023-01-28 23:46:22 +01:00
Dominik Reh
d1d3cd2bf5 Add queue processing & callbacks 2023-01-28 23:28:15 +01:00
Dominik Reh
b70b0b72cb Add base parser 2023-01-28 22:58:29 +01:00
Dominik Reh
a831592c3c Rename globals file to ensure it's loaded first 2023-01-28 22:57:58 +01:00
Dominik Reh
e00199cf06 Move potentially shared vars to separate file
to ensure that they exist when other parts reference them
2023-01-28 22:36:16 +01:00
Dominik Reh
dc34db53e4 Trim carriage return from hyp and lora txts 2023-01-28 22:30:04 +01:00
DominikDoom
a925129981 Update README.md 2023-01-25 16:57:05 +01:00
Dominik Reh
e418a867b3 Merge branch 'hyp-lora-support' into main 2023-01-24 15:23:53 +01:00
Dominik Reh
040be35162 Don't escape parentheses for loras and hypernets 2023-01-24 15:03:56 +01:00
Dominik Reh
316d45e2fa Use extra network multiplier from settings 2023-01-24 15:03:35 +01:00
Dominik Reh
8ab0e2504b Fix meta display, add mixed results
< will show all three, while <e: <h: or <l: will limit it to that type.
2023-01-24 14:51:55 +01:00
Dominik Reh
b29b496b88 Simplify lora and hypernetwork loading 2023-01-24 14:08:11 +01:00
Dominik Reh
e144f0d388 Make script work without settings tab
Fixes #116
2023-01-24 13:08:43 +01:00
JM
ae01f41f30 add support for hypernetworks and lora 2023-01-22 19:24:59 +01:00
DominikDoom
fb27ac9187 Update README_ZH.md 2023-01-18 16:31:57 +01:00
DominikDoom
770bb495a5 Update README.md 2023-01-18 16:29:55 +01:00
Dominik Reh
7fdad1bf62 Add back ability to use hashes in black/whitelist
They are displayed in the UI after all, just not in the dropdown but at the bottom
2023-01-14 14:57:39 +01:00
Dominik Reh
a91a098243 Change blacklist to use model name instead of hash
Hotfix for recent webui changes to use proper sha256 hashes, which are currently not displayed in the UI
2023-01-14 14:24:44 +01:00
Dominik Reh
c663abcbcb Fix wiki links showing on embeddings & wildcards 2023-01-13 19:33:43 +01:00
Dominik Reh
bec222f2b3 Fix for 1-letter completion
Completion would sometimes not show if the prompt was only one letter long and identical to the previous completion
2023-01-12 15:54:57 +01:00
Dominik Reh
d4db6a7907 Option to show ? wiki links for danbooru/e621 tags
Disabled by default since the wiki pages likely contain NSFW images.
Closes #109
2023-01-12 15:49:53 +01:00
Dominik Reh
52593e6ac8 Update setting descriptions for black/whitelist 2023-01-12 14:45:16 +01:00
Dominik Reh
849e346924 Black/whitelisting options for models
Enables selective (de)activation based on model hash.
Closes #14
2023-01-12 14:35:54 +01:00
Dominik Reh
25b285bea3 Styling adjustments 2023-01-10 15:10:13 +01:00
Dominik Reh
984a7e772a File comments 2023-01-10 15:01:22 +01:00
Dominik Reh
964b4fcff3 Rework results system
Now uses object properties instead of array indices, much less confusing
2023-01-10 14:59:09 +01:00
Dominik Reh
54641ddbfc Move utility functions to their own file 2023-01-10 14:58:25 +01:00
Dominik Reh
c048684909 Load embeds recursively in fallback
Webui now supports recursive embedding loading, so we also use it here.
This shouldn't happen since the newer version uses the non-fallback, but it doesn't hurt
2023-01-06 15:52:40 +01:00
Dominik Reh
da9acfea2a Rework embedding load, now uses callback.
Should hopefully fix #100
2023-01-03 17:30:30 +01:00
Dominik Reh
552c6517b8 Make new settings id the default behavior instead of fallback 2023-01-03 11:14:16 +01:00
DominikDoom
f626eb3467 Merge pull request #101 from stysmmaker/fix/apply-settings-button-fallback
Add fallback to applySettingsButton variable
2023-01-03 10:57:53 +01:00
MMaker
2ba513bedc fix: Add fallback to applySettingsButton var
Needed due to layout change in recent webui update
269f6e8676
2023-01-03 00:07:15 -05:00
Dominik Reh
89d36da47e Add fallback for embedding loading
Fixes error on outdated webuis, as mentioned in #98 and #99
2023-01-02 16:09:36 +01:00
Dominik Reh
5f2f746310 Skipped embeddings now also hold shape info
so we don't need to guess the type anymore if the model didn't load any.
2023-01-02 12:45:56 +01:00
Dominik Reh
454c13ef6d Fix for embedding search without v1/v2 prefix 2023-01-02 00:44:56 +01:00
Dominik Reh
6deefda279 Show version info for embeddings
Also allows searching by version to quickly find v1 or v2 model embeddings
Closes #97
2023-01-02 00:38:48 +01:00
Dominik Reh
b57042edd0 Remove leftover debug logs 2022-12-26 13:31:04 +01:00
DominikDoom
ceba61163e Merge pull request #93 from ctwrs/umi-suggestions
Smarter suggestion system for UMI wildcard tags that takes previous tags into account to only show possible candidates.
2022-12-26 13:28:46 +01:00
catwars
16201605d0 Merge branch 'DominikDoom:main' into umi-suggestions 2022-12-26 02:10:49 +01:00
ctwrs
0c3397aee6 Fix umi tag autocomplete filtering logic 2022-12-26 01:57:15 +01:00
DominikDoom
4f582f4528 Merge pull request #92 from ctwrs/main
Allows None as a main tag file selection if the user wants special completion like wildcards but not for normal tags.
2022-12-24 13:27:34 +01:00
ctwrs
d2b5142d7d Umi filtering - initial version 2022-12-24 01:32:45 +01:00
ctwrs
f11abe60c2 Allow for selecting an empty tagFile to disable booru suggestions 2022-12-23 21:34:00 +01:00
DominikDoom
16bf9d9a51 Merge pull request #90 from DominikDoom/feature-yaml-wildcards 2022-12-22 14:48:23 +01:00
Dominik Reh
bdd8cf68c7 Fix show all condition 2022-12-22 12:39:00 +01:00
Dominik Reh
63a0d2e73e Bugfixes for change detection & multiple umi tags
(WIP)
2022-12-20 19:45:07 +01:00
Dominik Reh
34ba08d804 Extract regexes for easier editing & testing 2022-12-20 18:25:37 +01:00
Dominik Reh
f1a437ff48 Autocompletion for UMI yaml wildcards 2022-12-20 17:08:09 +01:00
Dominik Reh
97cbada882 Sort yaml tags by count 2022-12-20 13:28:43 +01:00
DominikDoom
860a4034bb Update README.md 2022-12-19 19:30:36 +01:00
Dominik Reh
255d7420fd Merge branch 'main' of https://github.com/DominikDoom/a1111-sd-webui-tagcomplete into main 2022-12-19 19:07:32 +01:00
Dominik Reh
6b34d8ccd1 Warning about e621 as extra file 2022-12-19 19:07:29 +01:00
DominikDoom
b35ee10f8e Merge pull request #88 from ctwrs/main
Preparation for new yaml wildcard support
2022-12-19 13:05:43 +01:00
ctwrs
fc8540589a Index tags used in yaml wildcard files 2022-12-19 12:53:30 +01:00
Dominik Reh
3d1ca6893a Fix formatting 2022-12-19 10:32:47 +01:00
Dominik Reh
73c3424ab3 Fix for script load if no third party found
Added missing null check
Fixes #87
2022-12-19 10:23:34 +01:00
DominikDoom
5f8a5d468d Update README_ZH.md 2022-12-18 16:51:11 +01:00
DominikDoom
4296d8e3b7 Update README.md 2022-12-18 16:38:17 +01:00
DominikDoom
8d9c0c7bb7 Merge pull request #86 from DominikDoom/support-dataset-tag-editor 2022-12-18 14:17:27 +01:00
Dominik Reh
1c22a22abe Support third party textboxes
Base functionality for third party textboxes, specifically Dataset Tag Editor
Closes #83
2022-12-18 14:15:37 +01:00
DominikDoom
f38c5df257 Update README_ZH.md 2022-11-28 08:00:08 +01:00
DominikDoom
3332d62639 Update README.md 2022-11-28 07:55:23 +01:00
DominikDoom
b159efe74e Update README.md 2022-11-28 07:52:36 +01:00
DominikDoom
3789457702 Fix typo 2022-11-26 15:44:43 +01:00
DominikDoom
35875a07a8 Update README_ZH.md 2022-11-26 15:42:30 +01:00
DominikDoom
77c6a2b950 Update README.md for new settings 2022-11-26 15:22:44 +01:00
DominikDoom
bc80b3ea2c Merge pull request #80 from DominikDoom/feature-options
Migrates config file to webui settings
2022-11-26 15:09:46 +01:00
Dominik Reh
a4e0b69d26 Better description for translation format 2022-11-26 15:01:51 +01:00
Dominik Reh
4f68a50a25 Fix styling for tac refresh buttons
Add change listener for settings of type 'select'
2022-11-26 14:45:43 +01:00
Dominik Reh
05c11c9781 Finalize settings migration
- Dropdown selection & refresh for tag files
- Tag, Extra & Translation file reloading without restart
- Any options can now be added to the quicksettings bar
- Standalone colors.json file
2022-11-26 14:29:20 +01:00
Dominik Reh
def6ebb798 Initial changes for settings migration 2022-11-21 19:11:20 +01:00
Dominik Reh
e4a8ee7439 Move settings inline
As mentioned in #71
2022-11-20 12:28:47 +01:00
Dominik Reh
1c3e60cfb2 Use fetch instead of XmlHttpRequest
Easier to use and also seems to be faster
2022-11-19 18:43:04 +01:00
DominikDoom
fc4484ddc6 Merge pull request #77 from stysmmaker/patch/embeddings-dir 2022-11-17 18:22:20 +01:00
MMaker
d6eb751e4b fix: Use correct embeddings dir
Use the `--embeddings-dir` if specified
2022-11-17 12:13:43 -05:00
batvbs
894335f1de Improved search function (#75)
Now only searches for matches at the start of tags or sub-words in multi-word-tags.
Old behavior can still be used by typing * at the beginning of the word.
2022-11-14 12:56:59 +01:00
Dominik Reh
2d45d6c796 Dynamic css construction for less duplicate code
Inserts variables for light/dark mode instead of keeping separate css for both
Fixes scrollbar space being reserved in chrome where it's not needed.
2022-11-07 19:25:51 +01:00
DominikDoom
dba4046064 Update README_ZH.md
I updated it using machine translation, so if you are a native Chinese speaker and notice any mistakes, please tell me.
2022-11-07 10:01:29 +01:00
Dominik Reh
ca8a0c433e Fix result text cutoff in Firefox
Fixes #65
2022-11-06 14:33:11 +01:00
Dominik Reh
535c2a6753 Safety checks for translations
Should prevent list getting cut off if no translation or alias matches
2022-11-06 14:16:56 +01:00
Dominik Reh
e86c604903 Fix for translations failing to match sometimes
Fixes #62 (again)
2022-11-06 14:00:34 +01:00
Dominik Reh
4eabf00f01 Remove max width for results
Fixes #65
2022-11-05 18:27:53 +01:00
Dominik Reh
a39b0d0742 Include deprecated danbooru tags
Since many of the deprecated tags have large post counts, it makes sense to include them, even if better alternatives are available now.
Fixes #64
2022-11-05 17:51:34 +01:00
DominikDoom
ecc71902cd Update README.md 2022-11-05 16:46:24 +01:00
DominikDoom
2dc1dfea86 Update README.md 2022-11-05 16:21:38 +01:00
Dominik Reh
18556c6115 Rework translation to be separate from aliases
Enables two-way translation where the translation is always visible.
Closes #62.
2022-11-05 15:54:26 +01:00
DominikDoom
82355cdb60 Update README.md 2022-11-04 16:10:59 +01:00
Dominik Reh
2c6b6e7f13 Count and Alias support
The data and script have been updated to include post count and common aliases for tags.
Aliases are added the same way as translations.
Closes #60.
2022-11-04 15:50:27 +01:00
Dominik Reh
abb5625e55 Cache workaround
Force reload by querying with current time added to the URL
2022-11-01 13:56:03 +01:00
Dominik Reh
d5de786d07 Don't use unnecessary onUIUpdate 2022-11-01 13:54:38 +01:00
Dominik Reh
f8a9223c29 Add config option to hide autocomplete UI options
Implements #57
2022-11-01 13:50:34 +01:00
Dominik Reh
61a97175a7 Fix parentheses regression
Fixes #59
2022-11-01 13:03:03 +01:00
Dominik Reh
92a08205d0 Remove unnecessary path check
Simplifies getting the tag base path. Also fixes #55
2022-10-31 11:04:36 +01:00
DominikDoom
372a499615 Merge pull request #52 from Kinsmir/main 2022-10-30 17:14:10 +01:00
Dominik Reh
ca717948a4 Add config & UI option for appending commas
Closes #49
2022-10-30 17:10:31 +01:00
Dominik Reh
6c6999d5f1 Fix diff check for negative tag count changes
Now properly closes the popup if the last letter of a tag gets deleted.
2022-10-30 16:10:55 +01:00
Joris Neuteboom
f7f5101f62 Removed wildcard comments
https://github.com/Klokinator/UnivAICharGen uses # for comments within the wildcards
2022-10-30 16:10:38 +01:00
Dominik Reh
e49862d422 Fix Regex for non-ascii tags
Also separates the regex for simplification. Fixes #51.
2022-10-30 16:08:13 +01:00
Dominik Reh
524514bd46 Fix parsing for real this time
Fixes #48 (again)
2022-10-29 18:35:06 +02:00
Dominik Reh
106fa13f65 Hotfix for broken parsing
Fixes #48
2022-10-29 17:24:44 +02:00
Dominik Reh
a038664616 Fix new regex for embeddings 2022-10-29 15:55:30 +02:00
Dominik Reh
789f44d52a Support editing tags inside weighting parentheses
Fixes #47
2022-10-29 14:48:44 +02:00
Dominik Reh
59ec54b171 Fix duplicate wildcards
Would occur if the extension folder was also just "wildcards" due to recursive search
2022-10-29 10:08:20 +02:00
Dominik Reh
983da36329 Create tmp folder in root if it doesn't exist
Fixes extension installation on Linux, closes #46
2022-10-29 09:55:30 +02:00
Dominik Reh
48bd3d7b51 Support multiline prompts
Fixes #44
2022-10-28 19:04:41 +02:00
Dominik Reh
c6c9e01410 Formatting 2022-10-28 18:08:02 +02:00
DominikDoom
bf5bb34605 Update README.md 2022-10-28 17:58:59 +02:00
Dominik Reh
860fd34fb4 Support for wildcards from different extensions
Now scans all extensions for a wildcards folder and will combine them
Implements the feature discussed in #37
2022-10-28 17:47:23 +02:00
Dominik Reh
886de4df29 Support installing the script as an extension
Closes #41
2022-10-28 15:46:16 +02:00
Dominik Reh
3e71890489 Support wildcard extension in arbitrary folder
Now no longer only looks in "extensions/wildcards/wildcards".
This enables the user to call the folder whatever they like.
Fixes #37
2022-10-28 15:08:28 +02:00
Dominik Reh
dc77b3f17f Configurable debounce delay
Workaround for the problem in #40
2022-10-26 15:20:06 +02:00
20 changed files with 201897 additions and 176452 deletions

2
.gitignore vendored Normal file

@@ -0,0 +1,2 @@
tags/temp/
__pycache__/

235
README.md

@@ -1,3 +1,5 @@
![tag_autocomplete_light](https://user-images.githubusercontent.com/34448969/208306863-90bbd663-2cb4-47f1-a7fe-7b662a7b95e2.png)
# Booru tag autocompletion for A1111
[![GitHub release (latest SemVer)](https://img.shields.io/github/v/release/DominikDoom/a1111-sd-webui-tagcomplete)](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/releases)
@@ -9,164 +11,177 @@ This custom script serves as a drop-in extension for the popular [AUTOMATIC1111
It displays autocompletion hints for recognized tags from "image booru" boards such as Danbooru, which are primarily used for browsing Anime-style illustrations.
Since some Stable Diffusion models were trained using this information, for example [Waifu Diffusion](https://github.com/harubaru/waifu-diffusion), using exact tags in prompts can often improve composition and help to achieve a wanted look.
I created this script as a convenience tool since it reduces the need to switch back and forth between the web UI and a booru site to copy-paste tags.
You can either clone / download the files manually as described [below](#installation), or use a pre-packaged version from [Releases](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/releases).
You can install it using the inbuilt available extensions list, clone the files manually as described [below](#installation), or use a pre-packaged version from [Releases](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/releases).
## Common Problems & Known Issues:
- The browser might cache old versions of the script, config, or embedding/wildcard lists. Try hitting `CTRL+F5` to clear the cache.
- If `replaceUnderscores` is active, the script will currently only partly replace edited tags containing multiple words in brackets.
- Depending on your browser settings, sometimes an old version of the script can get cached. Try `CTRL+F5` to force-reload the site without cache if e.g. a new feature doesn't appear for you after an update.
- If `replaceUnderscores` is active, the script will currently only partially replace edited tags containing multiple words in brackets.
For example, when editing `atago (azur lane)`, it would be replaced with e.g. `taihou (azur lane), lane)`, since the script currently doesn't treat the second part of the bracket as the same tag. In those cases you should delete the old tag beforehand.
### Wildcard & Embedding support
Autocompletion also works with wildcard files used by [this script](https://github.com/jtkelm2/stable-diffusion-webui-1/blob/master/scripts/wildcards.py) of the same name (demo video further down). This enables you to either insert categories to be replaced by the script, or even replace them with the actual wildcard file content in the same step.
It also scans the embeddings folder and displays completion hints for the names of all .pt and .bin files inside if you start typing `<`. Note that some normal tags also use < in Kaomoji (like ">_<" for example), so the results will contain both.
Both are now enabled by default and scan the `/embeddings` and `/scripts/wildcards` folders automatically.
## Screenshots
Demo video (with keyboard navigation):
https://user-images.githubusercontent.com/34448969/195344430-2b5f9945-b98b-4943-9fbc-82cf633321b1.mp4
https://user-images.githubusercontent.com/34448969/200128020-10d9a8b2-cea6-4e3f-bcd2-8c40c8c73233.mp4
Wildcard script support:
https://user-images.githubusercontent.com/34448969/195632461-49d226ae-d393-453d-8f04-1e44b073234c.mp4
https://user-images.githubusercontent.com/34448969/200128031-22dd7c33-71d1-464f-ae36-5f6c8fd49df0.mp4
Dark and Light mode supported, including tag colors:
![tagtypes](https://user-images.githubusercontent.com/34448969/195177127-f63949f8-271d-4767-bccd-f1b5e818a7f8.png)
![tagtypes_light](https://user-images.githubusercontent.com/34448969/195180061-ceebcc25-9e4c-424f-b0c9-ba8e8f4f17f4.png)
![results_dark](https://user-images.githubusercontent.com/34448969/200128214-3b6f21b4-9dda-4acf-820e-5df0285c30d6.png)
![results_light](https://user-images.githubusercontent.com/34448969/200128217-bfac8b60-6673-447b-90fd-dc6326f1618c.png)
## Installation
Simply copy the `javascript`, `scripts` and `tags` folder into your web UI installation root. It will run automatically the next time the web UI is started.
### As an extension (recommended)
Either clone the repo into your extensions folder:
```bash
git clone "https://github.com/DominikDoom/a1111-sd-webui-tagcomplete.git" extensions/tag-autocomplete
```
(The second argument specifies the name of the folder; you can choose whatever you like.)
The tags folder contains `config.json` and the tag data the script uses for autocompletion. By default, Danbooru and e621 tags are included.
Or create a folder there manually and place the `javascript`, `scripts` and `tags` folders in it.
### In the root folder (legacy)
This installation method is for old webui versions from before the extension system; it will not work on current versions!
---
In both configurations, the `tags` folder contains `colors.json` and the tag data the script uses for autocompletion. By default, Danbooru and e621 tags are included.
After scanning for embeddings and wildcards, the script will also create a `temp` directory here which lists the found files so they can be accessed in the browser side of the script. You can delete the temp folder without consequences as it will be recreated on the next startup.
### Important:
The script needs **all three folders** to work properly.
### Config
The config contains the following settings and defaults:
```json
{
"tagFile": "danbooru.csv",
"activeIn": {
"txt2img": true,
"img2img": true,
"negativePrompts": true
},
"maxResults": 5,
"resultStepLength": 500,
"showAllResults": false,
"useLeftRightArrowKeys": false,
"replaceUnderscores": true,
"escapeParentheses": true,
"useWildcards": true,
"useEmbeddings": true,
"translation": {
"searchByTranslation": true,
"onlyShowTranslation": false
},
"extra": {
"extraFile": "",
"onlyTranslationExtraFile": false
},
"colors": {
"danbooru": {
"0": ["lightblue", "dodgerblue"],
"1": ["indianred", "firebrick"],
"3": ["violet", "darkorchid"],
"4": ["lightgreen", "darkgreen"],
"5": ["orange", "darkorange"]
},
"e621": {
"-1": ["red", "maroon"],
"0": ["lightblue", "dodgerblue"],
"1": ["gold", "goldenrod"],
"3": ["violet", "darkorchid"],
"4": ["lightgreen", "darkgreen"],
"5": ["tomato", "darksalmon"],
"6": ["red", "maroon"],
"7": ["whitesmoke", "black"],
"8": ["seagreen", "darkseagreen"]
}
}
}
```
## Wildcard & Embedding support
Autocompletion also works with wildcard files used by [this script](https://github.com/jtkelm2/stable-diffusion-webui-1/blob/master/scripts/wildcards.py) of the same name or other similar scripts/extensions. This enables you to either insert categories to be replaced by the script, or even replace them with the actual wildcard file content in the same step. Wildcards are searched for in every extension folder as well as the `scripts/wildcards` folder to support legacy versions. This means that you can combine wildcards from multiple extensions. Nested folders are also supported if you have grouped your wildcards in that way.
It also scans the embeddings folder and displays completion hints for the names of all .pt, .bin and .png files inside if you start typing `<`. Note that some normal tags also use < in Kaomoji (like ">_<" for example), so the results will contain both.
## Settings
The extension has a large amount of configuration & customizability built in:
![image](https://user-images.githubusercontent.com/34448969/204093162-99c6a0e7-8183-4f47-963b-1f172774f527.png)
| Setting | Description |
|---------|-------------|
| tagFile | Specifies the tag file to use. You can provide a custom tag database of your liking, but since the script was developed with Danbooru tags in mind, it might not work properly with other configurations.|
| activeIn | Allows selectively (de)activating the script for txt2img, img2img, and the negative prompts for both. |
| maxResults | How many results to show at most. For the default tag set, the results are ordered by occurrence count. For embeddings and wildcards, all results are shown in a scrollable list. |
| resultStepLength | Allows loading results in smaller batches of the specified size, for better performance with long lists or if showAllResults is true. |
| delayTime | Specifies how long to wait, in milliseconds, before triggering autocomplete. Helps prevent overly frequent updates while typing. |
| showAllResults | If true, will ignore maxResults and show all results in a scrollable list. **Warning:** can lag your browser for long lists. |
| useLeftRightArrowKeys | If true, left and right arrows will select the first/last result in the popup instead of moving the cursor in the textbox. |
| replaceUnderscores | If true, underscores are replaced with spaces when clicking a tag. Might work better for some models. |
| escapeParentheses | If true, escapes tags containing () so they don't contribute to the web UI's prompt weighting functionality. |
| appendComma | Specifies the starting value of the "Append commas" UI switch. If UI options are disabled, this will always be used. |
| useWildcards | Used to toggle the wildcard completion functionality. |
| useEmbeddings | Used to toggle the embedding completion functionality. |
| translation | Options for translating tags. More info in the section below. |
| extras | Options for additional tag files / translations. More info in the section below. |
| colors | Contains customizable colors for the tag types, you can add new ones here for custom tag files (same name as filename, without the .csv). The first value is for dark, the second for light mode. Color names and hex codes should both work.|
| alias | Options for aliases. More info in the section below. |
| translation | Options for translations. More info in the section below. |
| extras | Options for additional tag files / aliases / translations. More info in the section below. |
### Translations & Extra tags
With the recent update it is now possible to add translations to the tags. These will be searchable / shown according to the settings in `config.json`:
- `searchByTranslation` - Whether to search for the translated term as well or only the English tag.
- `onlyShowTranslation` - Replaces the English tag with its translation if it has one. Only for displaying, the inserted text at the end is still the English tag.
### colors.json
Additionally, tag type colors can be specified using the separate `colors.json` file in the extension's `tags` folder.
You can also add new ones here for custom tag files (same name as filename, without the .csv). The first value is for dark, the second for light mode. Color names and hex codes should both work.
```json
{
"danbooru": {
"-1": ["red", "maroon"],
"0": ["lightblue", "dodgerblue"],
"1": ["indianred", "firebrick"],
"3": ["violet", "darkorchid"],
"4": ["lightgreen", "darkgreen"],
"5": ["orange", "darkorange"]
},
"e621": {
"-1": ["red", "maroon"],
"0": ["lightblue", "dodgerblue"],
"1": ["gold", "goldenrod"],
"3": ["violet", "darkorchid"],
"4": ["lightgreen", "darkgreen"],
"5": ["tomato", "darksalmon"],
"6": ["red", "maroon"],
"7": ["whitesmoke", "black"],
"8": ["seagreen", "darkseagreen"]
}
}
```
The numbers specify the tag type, which depends on the tag source. For an example, see [CSV tag data](#csv-tag-data).
Example with full and partial chinese tag sets:
### Aliases, Translations & Extra tags
#### Aliases
Like on Booru sites, tags can have one or multiple aliases which redirect to the actual value on completion. These will be searchable / shown according to the settings in `config.json`:
- `searchByAlias` - Whether to also search for the alias or only the actual tag.
- `onlyShowAlias` - Shows only the alias instead of `alias -> actual`. Only for displaying, the inserted text at the end is still the actual tag.
![translation](https://user-images.githubusercontent.com/34448969/196175839-8aaacb26-5c90-48e3-be65-647a0b444ead.png)
![translation_mixed](https://user-images.githubusercontent.com/34448969/196176233-76d4cb5f-16cf-4800-a69b-adb64a79ca8b.png)
#### Translations
An additional file can be added in the translation section, which will be used to translate both tags and aliases and also enables searching by translation.
This file needs to be a CSV in the format `<English tag/alias>,<Translation>`, but for backwards compatibility with older extra files that used a three column format, you can turn on `oldFormat` to use that instead.
Translations can be added in multiple ways, which is where the "Extra" file comes into play.
1. Directly in the main tag file: simply add a third value, separated by a comma, containing the translation for the tag in that row.
2. As an extra file containing only the translated tag rows (so still including the English tag name and tag type). It will be matched to the English tags in the main file based on name & type, so it might be slow for large translation files.
3. As an extra file with `onlyTranslationExtraFile` set to true. With this configuration, the extra file has to include *only* the translations themselves. That means it is purely index-based: assigning the translations to the main tags is very fast, but the lines also need to match (including empty lines). If the order or number of lines in the main file changes, the translations will potentially no longer match.
Example with chinese translation:
![IME-input](https://user-images.githubusercontent.com/34448969/200126551-2264e9cc-abb2-4450-9afa-43f362a77ab0.png)
![english-input](https://user-images.githubusercontent.com/34448969/200126513-bf6b3940-6e22-41b0-a369-f2b4640f87d6.png)
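As a small illustration of method 1, a few rows of a main tag file with a third translation column might look like this (the Chinese values are just example translations):

```csv
1girl,0,1女孩
solo,0,单人
long_hair,0,长发
```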
**Important**
As of a recent update, translations added the old extra-file way will only work as an alias and will no longer be visible when typing the English tag for that translation.
#### Extra file
Aliases can be added in multiple ways, which is where the "Extra" file comes into play.
1. As an extra file containing the tag, category, optional count and the new alias. It will be matched to the English tags in the main file based on name & type, so it might be slow for large files.
2. As an extra file with `onlyAliasExtraFile` set to true. With this configuration, the extra file has to include *only* the aliases themselves. That means it is purely index-based: assigning the aliases to the main tags is very fast, but the lines also need to match (including empty lines). If the order or number of lines in the main file changes, the aliases will potentially no longer match. Not recommended.
So your CSV values would look like this for each method:
| | 1 | 2 | 3 |
|------------|---------------------|--------------------|---------------|
| Main file | `tag,0,translation` | `tag,0` | `tag,0` |
| Extra file | - | `tag,0,translation`| `translation` |
| | 1 | 2 |
|------------|--------------------------|--------------------------|
| Main file | `tag,type,count,(alias)` | `tag,type,count,(alias)` |
| Extra file | `tag,type,(count),alias` | `alias` |
Methods 1 & 2 can also be mixed, in which case translations in the extra file will have priority over those in the main file if they translate the same tag.
Count in the extra file is optional, since there isn't always a post count for custom tag sets.
The extra files can also be used to just add new / custom tags not included in the main set, provided `onlyTranslationExtraFile` is false.
If an extra tag doesn't match any existing tag, it will be added to the list as a new tag instead.
The extra files can also be used to just add new / custom tags not included in the main set, provided `onlyAliasExtraFile` is false.
If an extra tag doesn't match any existing tag, it will be added to the list as a new tag instead. For this, it will need to include the post count and alias columns even if they don't contain anything, so it could be in the form of `tag,type,,`.
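For illustration, an extra file that only adds two made-up custom tags in that form could look like this:

```csv
my_painting_style,1,,
my_original_character,4,,my_oc
```

The first row leaves both the count and alias columns empty (the `tag,type,,` form), while the second additionally defines `my_oc` as an alias.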
### CSV tag data
##### WARNING
Do not use e621.csv or danbooru.csv as an extra file. Alias comparison scales with the product of both file sizes, so for the combination of danbooru+e621 it needs roughly 10,000,000,000 (yes, ten billion) lookups and usually takes multiple minutes to load.
## CSV tag data
The script expects a CSV file with tags saved in the following way:
```csv
1girl,0
solo,0
highres,5
long_hair,0
<name>,<type>,<postCount>,"<aliases>"
```
Notably, it does not expect column names in the first row.
The first value needs to be the tag name, while the second value specifies the tag type. An optional third value will be interpreted as a translation as described in the section above.
Example:
```csv
1girl,0,4114588,"1girls,sole_female"
solo,0,3426446,"female_solo,solo_female"
highres,5,3008413,"high_res,high_resolution,hires"
long_hair,0,2898315,longhair
commentary_request,5,2610959,
```
Notably, it does not expect column names in the first row, and both count and aliases are technically optional,
although count is always included in the default data. Multiple aliases also need to be comma-separated, but enclosed in quotes so they don't break the CSV parsing.
The numbering system follows the [tag API docs](https://danbooru.donmai.us/wiki_pages/api%3Atags) of Danbooru:
| Value | Description |
|-------|-------------|
|0 | General |
|1 | Artist |
|3 | Copyright |
|4 | Character |
|5 | Meta |
or of e621:
or similarly for e621:
| Value | Description |
|-------|-------------|
|-1 | Invalid |
|0 | General |
|1 | Artist |
|3 | Copyright |
|4 | Character |
|5 | Species |
|6 | Invalid |
|7 | Meta |
|8 | Lore |
The tag type is used for coloring entries in the result list.

README_ZH.md

@@ -1,3 +1,5 @@
![tag_autocomplete_light_zh](https://user-images.githubusercontent.com/34448969/208307331-430696b4-e854-4458-b9e9-f6a6594f19e1.png)
# Booru tag autocompletion for A1111
[![GitHub release (latest SemVer)](https://img.shields.io/github/v/release/DominikDoom/a1111-sd-webui-tagcomplete)](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/releases)
@@ -11,9 +13,44 @@
You can download or copy the files as described [below](#installation), or use a pre-packaged version from [Releases](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/releases).
## Common Problems & Known Issues:
* The browser may cache an old version of the script, config, or embedding/wildcard lists. Try `CTRL+F5` to clear the browser cache and reload.
- If `replaceUnderscores` is active, the script will only partially replace tags that contain multiple words in brackets. For example, editing `atago (azur lane)` to change `atago` to `taihou` and using autocompletion yields `taihou (azur lane), lane)`, because the script doesn't treat the later part of the bracket as the same tag.
## Screenshots & Demo
Demo video (with keyboard navigation):
https://user-images.githubusercontent.com/34448969/200128020-10d9a8b2-cea6-4e3f-bcd2-8c40c8c73233.mp4
Wildcard script support:
https://user-images.githubusercontent.com/34448969/200128031-22dd7c33-71d1-464f-ae36-5f6c8fd49df0.mp4
Dark and light mode supported, including tag colors:
![results_dark](https://user-images.githubusercontent.com/34448969/200128214-3b6f21b4-9dda-4acf-820e-5df0285c30d6.png)
![results_light](https://user-images.githubusercontent.com/34448969/200128217-bfac8b60-6673-447b-90fd-dc6326f1618c.png)
## Installation
### As an extension (recommended)
Either clone the repo into your extensions folder:
```bash
git clone "https://github.com/DominikDoom/a1111-sd-webui-tagcomplete.git" extensions/tag-autocomplete
```
(The second argument specifies the name of the folder; you can choose whatever you like.)
Or create a folder there manually and place the `javascript`, `scripts` and `tags` folders in it.
### In the root folder (legacy)
This installation method is for old webui versions from before the extension system; it will not work on current versions.
---
In both configurations, the `tags` folder contains `colors.json` and the tag data the script uses for autocompletion.
By default, the tag data includes `Danbooru.csv` and `e621.csv`.
After scanning `/embeddings` and the wildcards, the found lists are stored in the `tags/temp` folder. Deleting that folder has no consequences; it will be recreated on the next startup.
### Important:
The script needs **all three folders** to work properly.
## [Wildcard](https://github.com/jtkelm2/stable-diffusion-webui-1/blob/master/scripts/wildcards.py) & Embedding support
Autocompletion also works with the wildcard files used by the [wildcard script](https://github.com/jtkelm2/stable-diffusion-webui-1/blob/master/scripts/wildcards.py) (demo video further down). This lets you insert wildcards to be replaced by that script, or go one step further and insert a specific tag from inside a wildcard file.
@@ -23,124 +60,97 @@
This is now enabled by default and automatically scans the `/embeddings` and `/scripts/wildcards` folders; the `tags/wildcardNames.txt` file is no longer needed, and users of earlier versions can delete it.
## Screenshots & Demo
Demo video (with keyboard navigation):
https://user-images.githubusercontent.com/34448969/195344430-2b5f9945-b98b-4943-9fbc-82cf633321b1.mp4
Wildcard script support:
https://user-images.githubusercontent.com/34448969/195632461-49d226ae-d393-453d-8f04-1e44b073234c.mp4
Dark and light mode supported, including tag colors:
![tagtypes](https://user-images.githubusercontent.com/34448969/195177127-f63949f8-271d-4767-bccd-f1b5e818a7f8.png)
![tagtypes_light](https://user-images.githubusercontent.com/34448969/195180061-ceebcc25-9e4c-424f-b0c9-ba8e8f4f17f4.png)
## Installation
Simply copy the `javascript`, `scripts` and `tags` folders into your web UI installation root. It will run automatically the next time the web UI is started.
The `tags` folder contains `config.json` for settings and the tag data in .csv format. By default, the tag data includes `Danbooru.csv` and `e621.csv`.
After scanning `/embeddings` and `/scripts/wildcards`, the found lists are stored in the `tags/temp` folder. Deleting that folder has no consequences; it will be recreated on the next startup.
### Important:
The script needs **all three folders** to work properly.
## Config file
The config file `config.json` has the following defaults:
```json
```json
{
"tagFile": "danbooru.csv",
"activeIn": {
"txt2img": true,
"img2img": true,
"negativePrompts": true
},
"maxResults": 5,
"showAllResults": false,
"replaceUnderscores": true,
"escapeParentheses": true,
"useWildcards": true,
"useEmbeddings": true,
"translation": {
"searchByTranslation": true,
"onlyShowTranslation": false
},
"extra": {
"extraFile": "",
"onlyTranslationExtraFile": false
},
"colors": {
"danbooru": {
"0": ["lightblue", "dodgerblue"],
"1": ["indianred", "firebrick"],
"3": ["violet", "darkorchid"],
"4": ["lightgreen", "darkgreen"],
"5": ["orange", "darkorange"]
},
"e621": {
"-1": ["red", "maroon"],
"0": ["lightblue", "dodgerblue"],
"1": ["gold", "goldenrod"],
"3": ["violet", "darkorchid"],
"4": ["lightgreen", "darkgreen"],
"5": ["tomato", "darksalmon"],
"6": ["red", "maroon"],
"7": ["whitesmoke", "black"],
"8": ["seagreen", "darkseagreen"]
}
}
}
```
The extension has a large amount of configuration & customizability built in:
![image](https://user-images.githubusercontent.com/34448969/204093162-99c6a0e7-8183-4f47-963b-1f172774f527.png)
| Setting | Description |
|---------|-------------|
| tagFile | Specifies the tag file to use. You can provide a custom tag database of your liking, but since the script was developed with Danbooru tags in mind, it might not work properly with other configurations.|
| activeIn | Allows selectively (de)activating the script for txt2img, img2img, and the negative prompts for both.|
| maxResults | How many results to show at most. For the default tag set, the results are ordered by occurrence count. For embeddings and wildcards, all results are shown in a scrollable list. |
| showAllResults | If true, ignores maxResults and shows all results in a scrollable list. **Warning:** can lag your browser for long lists. |
| resultStepLength | Allows loading results in smaller batches of the specified size, for better performance with long lists or if showAllResults is true. |
| delayTime | Specifies how long to wait, in milliseconds, before triggering autocomplete. Helps prevent overly frequent updates while typing. |
| replaceUnderscores | If true, underscores are replaced with spaces when clicking a tag. Might work better for some models.|
| escapeParentheses | If true, escapes tags containing () so they don't contribute to the web UI's prompt weighting functionality. |
| useWildcards | Used to toggle the wildcard completion functionality. |
| useEmbeddings | Used to toggle the embedding completion functionality. |
| alias | Options for tag aliases. More info in the section below. |
| translation | Options for translating tags. More info in the section below. |
| extras | Options for additional tag files / translations. More info in the section below.|
| colors | Contains customizable colors for the tag types; you can add new ones here for custom tag files (same name as the filename, without the .csv). The first value is for dark mode, the second for light mode. Color names and hex codes both work.|
## Translations & extra tags
With a recent update it is now possible to add translations to tags. These will be searchable / shown according to the settings in `config.json`:
- `searchByTranslation` - Whether to also search for the translated term, or only the English tag.
- `onlyShowTranslation` - Replaces the English tag with its translation if it has one. Only for display; the inserted text at the end is still the English tag.
### colors.json (tag colors)
Additionally, tag type colors can be specified using the separate `colors.json` file in the extension's `tags` folder.
You can also add new entries here for custom tag files (same name as the filename, without the .csv). The first value is for dark mode, the second for light mode. Color names and hex codes are both supported.
```json
```json
{
"danbooru": {
"-1": ["red", "maroon"],
"0": ["lightblue", "dodgerblue"],
"1": ["indianred", "firebrick"],
"3": ["violet", "darkorchid"],
"4": ["lightgreen", "darkgreen"],
"5": ["orange", "darkorange"]
},
"e621": {
"-1": ["red", "maroon"],
"0": ["lightblue", "dodgerblue"],
"1": ["gold", "goldenrod"],
"3": ["violet", "darkorchid"],
"4": ["lightgreen", "darkgreen"],
"5": ["tomato", "darksalmon"],
"6": ["red", "maroon"],
"7": ["whitesmoke", "black"],
"8": ["seagreen", "darkseagreen"]
}
}
```
The numbers specify the tag type, which depends on the tag source. For an example, see [CSV tag data](#csv-tag-data).
### Aliases, translations & extra tags
#### Aliases
As on Booru sites, tags can have one or more aliases that redirect to the actual value on completion. These will be searchable / shown according to the settings in `config.json`:
- `searchByAlias` - Whether to also search for the alias, or only the actual tag.
- `onlyShowAlias` - Shows only the alias instead of `alias -> actual`. Only for display; the inserted text at the end is still the actual tag.
#### Translations
An additional file can be added in the translation section, which will be used to translate both tags and aliases and also enables searching by translation.
This file needs to be a CSV in the format `<English tag/alias>,<translation>`, but for backwards compatibility with older extra files that used a three-column format, you can turn on `oldFormat` to use that instead.
Example with full and partial Chinese tag sets:
![translation](https://user-images.githubusercontent.com/34448969/196175839-8aaacb26-5c90-48e3-be65-647a0b444ead.png)
![translation_mixed](https://user-images.githubusercontent.com/34448969/196176233-76d4cb5f-16cf-4800-a69b-adb64a79ca8b.png)
![IME-input](https://user-images.githubusercontent.com/34448969/200126551-2264e9cc-abb2-4450-9afa-43f362a77ab0.png)
![english-input](https://user-images.githubusercontent.com/34448969/200126513-bf6b3940-6e22-41b0-a369-f2b4640f87d6.png)
Translations can be added in multiple ways, which is where the extra file comes into play.
1. Directly in the main tag file: simply add a third value, separated by a comma, containing the translation for the tag in that row.
2. As an extra file containing only the translated tag rows (so still including the English tag name and tag type). It will be matched to the English tags in the main file based on name & type, so it might be slow for large translation files.
3. As an extra file with `onlyTranslationExtraFile` set to true. With this configuration, the extra file has to include *only* the translations themselves. That means it is purely index-based: assigning the translations to the main tags is very fast, but the lines also need to match (including empty lines). If the order or number of lines in the main file changes, the translations will potentially no longer match.
**Important**
As of a recent update, translations added the old extra-file way will only work as an alias and will no longer be visible when typing the English tag for that translation.
Aliases can be added in multiple ways, which is where the extra file comes into play.
1. As an extra file containing the tag rows to extend (so still including the English tag name and tag type). It will be matched to the English tags in the main file based on name & type, so it might be slow for large files.
2. As an extra file with `onlyAliasExtraFile` set to true. With this configuration, the extra file has to include *only* the aliases themselves. That means it is purely index-based: assigning the aliases to the main tags is very fast, but the lines also need to match (including empty lines). If the order or number of lines in the main file changes, the aliases will potentially no longer match.
So your CSV values would look like this for each method:
| | 1 | 2 | 3 |
|------------|---------------------|--------------------|---------------|
| Main file | `tag,0,translation` | `tag,0` | `tag,0` |
| Extra file | - | `tag,0,translation`| `translation` |
| | 1 | 2 |
|------------|--------------------------|--------------------------|
| Main file | `tag,type,count,(alias)` | `tag,type,count,(alias)` |
| Extra file | `tag,type,(count),alias` | `alias` |
Methods 1 & 2 can also be mixed, in which case translations in the extra file take priority over those in the main file if they translate the same tag.
If `onlyTranslationExtraFile` is false, the extra file can also be used to add new / custom tags not included in the main set.
The count in the extra file is optional, since custom tag sets don't always have a post count.
If an extra tag doesn't match any existing tag, it will be added to the list as a new tag instead.
### CSV tag data
The script expects a CSV file with tags saved in the following format; you can also create your own tag files this way:
```csv
1girl,0
solo,0
highres,5
long_hair,0
1girl,0,4114588,"1girls,sole_female"
solo,0,3426446,"female_solo,solo_female"
highres,5,3008413,"high_res,high_resolution,hires"
long_hair,0,2898315,longhair
commentary_request,5,2610959,
```
Notably, it does not expect column names in the first row.
The first value needs to be the tag name, while the second value specifies the tag type.
Notably, it does not expect column names in the first row, and both count and aliases are technically optional,
although count is always included in the default data. Multiple aliases also need to be comma-separated, but enclosed in quotes so they don't break the CSV parsing.
The numbering system follows Danbooru's [tag API docs](https://danbooru.donmai.us/wiki_pages/api%3Atags):
| Value | Description |
|-------|-------------|
@@ -150,7 +160,7 @@ long_hair,0
|4 | Character |
|5 | Meta |
or of e621:
or similarly for e621:
| Value | Description |
|-------|-------------|
|-1 | Invalid |

51
javascript/__globals.js Normal file

@@ -0,0 +1,51 @@
// Core components
var CFG = null;
var tagBasePath = "";
// Tag completion data loaded from files
var allTags = [];
var translations = new Map();
var extras = [];
// Same for tag-likes
var wildcardFiles = [];
var wildcardExtFiles = [];
var yamlWildcards = [];
var embeddings = [];
var hypernetworks = [];
var loras = [];
// Selected model info for black/whitelisting
var currentModelHash = "";
var currentModelName = "";
// Current results
var results = [];
var resultCount = 0;
// Relevant for parsing
var previousTags = [];
var tagword = "";
var originalTagword = "";
let hideBlocked = false;
// Tag selection for keyboard navigation
var selectedTag = null;
var oldSelectedTag = null;
// UMI
var umiPreviousTags = [];
/// Extendability system:
/// Provides "queues" for other files of the script (or really any js)
/// to add functions to be called at certain points in the script.
/// Similar to a callback system, but primitive.
// Queues
const QUEUE_AFTER_INSERT = [];
const QUEUE_AFTER_SETUP = [];
const QUEUE_FILE_LOAD = [];
const QUEUE_AFTER_CONFIG_CHANGE = [];
const QUEUE_SANITIZE = [];
// List of parsers to try
const PARSERS = [];
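As a rough sketch of how another script could hook into these queues (the function and variable names below are made up for illustration; only the queue and parser globals come from this file):

```js
// Hypothetical extension file, loaded after __globals.js
async function loadMyCustomTags() {
    // e.g. fetch and parse an additional tag list here
}

// Run once together with the other file loads
QUEUE_FILE_LOAD.push(loadMyCustomTags);

// A completion parser would additionally register itself like this
// (see the ext_*.js files further down for real examples):
// PARSERS.push(new MyCustomParser(myTriggerCondition));
```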

21
javascript/_baseParser.js Normal file

@@ -0,0 +1,21 @@
class FunctionNotOverriddenError extends Error {
    constructor(message = "", ...args) {
        super(message, ...args);
        this.message = message + " is an abstract base function and must be overwritten.";
    }
}

class BaseTagParser {
    triggerCondition = null;

    constructor (triggerCondition) {
        if (new.target === BaseTagParser) {
            throw new TypeError("Cannot construct abstract BaseCompletionParser directly");
        }
        this.triggerCondition = triggerCondition;
    }

    parse() {
        throw new FunctionNotOverriddenError("parse()");
    }
}
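A minimal sketch of a concrete parser built on this base class (the trigger condition and result are made up; `AutocompleteResult` and `ResultType` are defined in `_result.js` below, and the real parsers follow in the `ext_*.js` files):

```js
class GreetingParser extends BaseTagParser {
    parse() {
        // Return a single hard-coded suggestion for demonstration
        return [new AutocompleteResult("hello_world", ResultType.tag)];
    }
}

// Only trigger while the current tagword starts with "hello"
PARSERS.push(new GreetingParser(() => tagword.startsWith("hello")));
```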

32
javascript/_result.js Normal file

@@ -0,0 +1,32 @@
// Result data type for cleaner use of optional completion result properties

// Type enum
const ResultType = Object.freeze({
    "tag": 1,
    "extra": 2,
    "embedding": 3,
    "wildcardTag": 4,
    "wildcardFile": 5,
    "yamlWildcard": 6,
    "hypernetwork": 7,
    "lora": 8
});

// Class to hold result data and annotations to make it clearer to use
class AutocompleteResult {
    // Main properties
    text = "";
    type = ResultType.tag;

    // Additional info, only used in some cases
    category = null;
    count = null;
    aliases = null;
    meta = null;

    // Constructor
    constructor(text, type) {
        this.text = text;
        this.type = type;
    }
}
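For example, a parser would build up and annotate a result roughly like this (the values are taken from the CSV example in the README above):

```js
let result = new AutocompleteResult("1girl", ResultType.tag);
result.category = 0;                    // Danbooru "General" category
result.count = 4114588;                 // post count shown as meta text
result.aliases = "1girls,sole_female";  // comma separated aliases
```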

84
javascript/_textAreas.js Normal file

@@ -0,0 +1,84 @@
// Utility functions to select text areas the script should work on,
// including third party options.
// Supported third party options so far:
// - Dataset Tag Editor

// Core text area selectors
const core = [
    "#txt2img_prompt > label > textarea",
    "#img2img_prompt > label > textarea",
    "#txt2img_neg_prompt > label > textarea",
    "#img2img_neg_prompt > label > textarea"
];

// Third party text area selectors
const thirdParty = {
    "dataset-tag-editor": {
        "base": "#tab_dataset_tag_editor_interface",
        "hasIds": false,
        "selectors": [
            "Caption of Selected Image",
            "Interrogate Result",
            "Edit Caption",
            "Edit Tags"
        ]
    }
}

function getTextAreas() {
    // First get all core text areas
    let textAreas = [...gradioApp().querySelectorAll(core.join(", "))];

    for (const [key, entry] of Object.entries(thirdParty)) {
        if (entry.hasIds) { // If the entry has proper ids, we can just select them
            textAreas = textAreas.concat([...gradioApp().querySelectorAll(entry.selectors.join(", "))]);
        } else { // Otherwise, we have to find the text areas by their adjacent labels
            let base = gradioApp().querySelector(entry.base);

            // Safety check
            if (!base) continue;

            let allTextAreas = [...base.querySelectorAll("textarea")];

            // Filter the text areas where the adjacent label matches one of the selectors
            let matchingTextAreas = allTextAreas.filter(ta => [...ta.parentElement.childNodes].some(x => entry.selectors.includes(x.innerText)));
            textAreas = textAreas.concat(matchingTextAreas);
        }
    };

    return textAreas;
}

const thirdPartyIdSet = new Set();
// Get the identifier for the text area to differentiate between positive and negative
function getTextAreaIdentifier(textArea) {
    let txt2img_p = gradioApp().querySelector('#txt2img_prompt > label > textarea');
    let txt2img_n = gradioApp().querySelector('#txt2img_neg_prompt > label > textarea');
    let img2img_p = gradioApp().querySelector('#img2img_prompt > label > textarea');
    let img2img_n = gradioApp().querySelector('#img2img_neg_prompt > label > textarea');

    let modifier = "";
    switch (textArea) {
        case txt2img_p:
            modifier = ".txt2img.p";
            break;
        case txt2img_n:
            modifier = ".txt2img.n";
            break;
        case img2img_p:
            modifier = ".img2img.p";
            break;
        case img2img_n:
            modifier = ".img2img.n";
            break;
        default:
            // If the text area is not a core text area, it must be a third party text area
            // Add it to the set of third party text areas and get its index as a unique identifier
            if (!thirdPartyIdSet.has(textArea))
                thirdPartyIdSet.add(textArea);

            modifier = `.thirdParty.ta${[...thirdPartyIdSet].indexOf(textArea)}`;
            break;
    }
    return modifier;
}
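A hypothetical entry for another third-party extension would follow the same pattern; the key, base id and label text below are invented for illustration:

```js
// Added to the thirdParty object above
"my-other-extension": {
    "base": "#tab_my_other_extension",
    "hasIds": false,
    "selectors": [
        "Some Caption Label"
    ]
}
```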

130
javascript/_utils.js Normal file

@@ -0,0 +1,130 @@
// Utility functions for tag autocomplete

// Parse the CSV file into a 2D array. Doesn't use regex, so it is very lightweight.
function parseCSV(str) {
    var arr = [];
    var quote = false;  // 'true' means we're inside a quoted field

    // Iterate over each character, keep track of current row and column (of the returned array)
    for (var row = 0, col = 0, c = 0; c < str.length; c++) {
        var cc = str[c], nc = str[c + 1];     // Current character, next character
        arr[row] = arr[row] || [];            // Create a new row if necessary
        arr[row][col] = arr[row][col] || '';  // Create a new column (start with empty string) if necessary

        // If the current character is a quotation mark, and we're inside a
        // quoted field, and the next character is also a quotation mark,
        // add a quotation mark to the current column and skip the next character
        if (cc == '"' && quote && nc == '"') { arr[row][col] += cc; ++c; continue; }

        // If it's just one quotation mark, begin/end quoted field
        if (cc == '"') { quote = !quote; continue; }

        // If it's a comma and we're not in a quoted field, move on to the next column
        if (cc == ',' && !quote) { ++col; continue; }

        // If it's a newline (CRLF) and we're not in a quoted field, skip the next character
        // and move on to the next row and move to column 0 of that new row
        if (cc == '\r' && nc == '\n' && !quote) { ++row; col = 0; ++c; continue; }

        // If it's a newline (LF or CR) and we're not in a quoted field,
        // move on to the next row and move to column 0 of that new row
        if (cc == '\n' && !quote) { ++row; col = 0; continue; }
        if (cc == '\r' && !quote) { ++row; col = 0; continue; }

        // Otherwise, append the current character to the current column
        arr[row][col] += cc;
    }
    return arr;
}
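// For illustration: quoted fields keep their inner commas, e.g.
// parseCSV('1girl,0,4114588,"1girls,sole_female"')
// returns [["1girl", "0", "4114588", "1girls,sole_female"]]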
// Load file
async function readFile(filePath, json = false, cache = false) {
    if (!cache)
        filePath += `?${new Date().getTime()}`;

    let response = await fetch(`file=${filePath}`);

    if (response.status != 200) {
        console.error(`Error loading file "${filePath}": ` + response.status, response.statusText);
        return null;
    }

    if (json)
        return await response.json();
    else
        return await response.text();
}

// Load CSV
async function loadCSV(path) {
    let text = await readFile(path);
    return parseCSV(text);
}

// Debounce function to prevent spamming the autocomplete function
var dbTimeOut;
const debounce = (func, wait = 300) => {
    return function (...args) {
        if (dbTimeOut) {
            clearTimeout(dbTimeOut);
        }

        dbTimeOut = setTimeout(() => {
            func.apply(this, args);
        }, wait);
    }
}
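// For illustration, wrapping a function so rapid calls collapse into one:
// const debouncedUpdate = debounce(() => console.log("update"), 250);
// Calling debouncedUpdate() repeatedly within 250 ms logs "update" only once.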
// Difference function to fix duplicates not being seen as changes in normal filter
function difference(a, b) {
    if (a.length == 0) {
        return b;
    }
    if (b.length == 0) {
        return a;
    }

    return [...b.reduce((acc, v) => acc.set(v, (acc.get(v) || 0) - 1),
        a.reduce((acc, v) => acc.set(v, (acc.get(v) || 0) + 1), new Map())
    )].reduce((acc, [v, count]) => acc.concat(Array(Math.abs(count)).fill(v)), []);
}
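// For illustration: difference(["a", "a", "b"], ["a", "b"]) returns ["a"],
// which a plain filter-based comparison would miss since "a" exists in both arrays.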
function escapeRegExp(string) {
    return string.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'); // $& means the whole matched string
}

function escapeHTML(unsafeText) {
    let div = document.createElement('div');
    div.textContent = unsafeText;
    return div.innerHTML;
}

// Queue calling function to process global queues
async function processQueue(queue, context, ...args) {
    for (let i = 0; i < queue.length; i++) {
        await queue[i].call(context, ...args);
    }
}

// The same but with return values
async function processQueueReturn(queue, context, ...args) {
    let qeueueReturns = [];
    for (let i = 0; i < queue.length; i++) {
        let returnValue = await queue[i].call(context, ...args);
        if (returnValue)
            qeueueReturns.push(returnValue);
    }
    return qeueueReturns;
}

// Specific to tag completion parsers
async function processParsers(textArea, prompt) {
    // Get all parsers that have a successful trigger condition
    let matchingParsers = PARSERS.filter(parser => parser.triggerCondition());
    // Guard condition
    if (matchingParsers.length === 0) {
        return null;
    }

    let parseFunctions = matchingParsers.map(parser => parser.parse);
    // Process them and return the results
    return await processQueueReturn(parseFunctions, null, textArea, prompt);
}
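As a rough sketch of how these helpers fit together, the main completion flow (not part of this comparison, so the call site below is assumed) would use them roughly like this:

```js
// Assumed call site; the real autocomplete code is not shown in this diff
async function runAutocomplete(textArea, prompt) {
    // Let every registered extension load its data (the loaders guard against reloading)
    await processQueue(QUEUE_FILE_LOAD, null);

    // Ask every parser whose trigger condition matches for its results
    let resultGroups = await processParsers(textArea, prompt);
    if (resultGroups) {
        results = resultGroups.flat();
        resultCount = results.length;
    }
}
```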

58
javascript/ext_embeddings.js Normal file

@@ -0,0 +1,58 @@
const EMB_REGEX = /<(?!l:|h:)[^,> ]*>?/g;
const EMB_TRIGGER = () => CFG.useEmbeddings && tagword.match(EMB_REGEX);

class EmbeddingParser extends BaseTagParser {
    parse() {
        // Show embeddings
        let tempResults = [];
        if (tagword !== "<" && tagword !== "<e:") {
            let searchTerm = tagword.replace("<e:", "").replace("<", "");
            let versionString;
            if (searchTerm.startsWith("v1") || searchTerm.startsWith("v2")) {
                versionString = searchTerm.slice(0, 2);
                searchTerm = searchTerm.slice(2);
            }

            if (versionString)
                tempResults = embeddings.filter(x => x[0].toLowerCase().includes(searchTerm) && x[1] && x[1] === versionString); // Filter by tagword
            else
                tempResults = embeddings.filter(x => x[0].toLowerCase().includes(searchTerm)); // Filter by tagword
        } else {
            tempResults = embeddings;
        }

        // Add final results
        let finalResults = [];
        tempResults.forEach(t => {
            let result = new AutocompleteResult(t[0].trim(), ResultType.embedding)
            result.meta = t[1] + " Embedding";
            finalResults.push(result);
        });

        return finalResults;
    }
}

async function load() {
    if (embeddings.length === 0) {
        try {
            embeddings = (await readFile(`${tagBasePath}/temp/emb.txt`)).split("\n")
                .filter(x => x.trim().length > 0) // Remove empty lines
                .map(x => x.trim().split(",")); // Split into name, version type pairs
        } catch (e) {
            console.error("Error loading embeddings.txt: " + e);
        }
    }
}

function sanitize(tagType, text) {
    if (tagType === ResultType.embedding) {
        return text.replace(/^.*?: /g, "");
    }
    return null;
}

PARSERS.push(new EmbeddingParser(EMB_TRIGGER));

// Add our utility functions to their respective queues
QUEUE_FILE_LOAD.push(load);
QUEUE_SANITIZE.push(sanitize);

50
javascript/ext_hypernets.js Normal file

@@ -0,0 +1,50 @@
const HYP_REGEX = /<(?!e:|l:)[^,> ]*>?/g;
const HYP_TRIGGER = () => CFG.useHypernetworks && tagword.match(HYP_REGEX);

class HypernetParser extends BaseTagParser {
    parse() {
        // Show hypernetworks
        let tempResults = [];
        if (tagword !== "<" && tagword !== "<h:") {
            let searchTerm = tagword.replace("<h:", "").replace("<", "");
            tempResults = hypernetworks.filter(x => x.toLowerCase().includes(searchTerm)); // Filter by tagword
        } else {
            tempResults = hypernetworks;
        }

        // Add final results
        let finalResults = [];
        tempResults.forEach(t => {
            let result = new AutocompleteResult(t.trim(), ResultType.hypernetwork)
            result.meta = "Hypernetwork";
            finalResults.push(result);
        });

        return finalResults;
    }
}

async function load() {
    if (hypernetworks.length === 0) {
        try {
            hypernetworks = (await readFile(`${tagBasePath}/temp/hyp.txt`)).split("\n")
                .filter(x => x.trim().length > 0) // Remove empty lines
                .map(x => x.trim()); // Remove carriage returns and padding if it exists
        } catch (e) {
            console.error("Error loading hypernetworks.txt: " + e);
        }
    }
}

function sanitize(tagType, text) {
    if (tagType === ResultType.hypernetwork) {
        return `<hypernet:${text}:${CFG.extraNetworksDefaultMultiplier}>`;
    }
    return null;
}

PARSERS.push(new HypernetParser(HYP_TRIGGER));

// Add our utility functions to their respective queues
QUEUE_FILE_LOAD.push(load);
QUEUE_SANITIZE.push(sanitize);

50
javascript/ext_loras.js Normal file

@@ -0,0 +1,50 @@
const LORA_REGEX = /<(?!e:|h:)[^,> ]*>?/g;
const LORA_TRIGGER = () => CFG.useLoras && tagword.match(LORA_REGEX);

class LoraParser extends BaseTagParser {
    parse() {
        // Show lora
        let tempResults = [];
        if (tagword !== "<" && tagword !== "<l:") {
            let searchTerm = tagword.replace("<l:", "").replace("<", "");
            tempResults = loras.filter(x => x.toLowerCase().includes(searchTerm)); // Filter by tagword
        } else {
            tempResults = loras;
        }

        // Add final results
        let finalResults = [];
        tempResults.forEach(t => {
            let result = new AutocompleteResult(t.trim(), ResultType.lora)
            result.meta = "Lora";
            finalResults.push(result);
        });

        return finalResults;
    }
}

async function load() {
    if (loras.length === 0) {
        try {
            loras = (await readFile(`${tagBasePath}/temp/lora.txt`)).split("\n")
                .filter(x => x.trim().length > 0) // Remove empty lines
                .map(x => x.trim()); // Remove carriage returns and padding if it exists
        } catch (e) {
            console.error("Error loading lora.txt: " + e);
        }
    }
}

function sanitize(tagType, text) {
    if (tagType === ResultType.lora) {
        return `<lora:${text}:${CFG.extraNetworksDefaultMultiplier}>`;
    }
    return null;
}

PARSERS.push(new LoraParser(LORA_TRIGGER));

// Add our utility functions to their respective queues
QUEUE_FILE_LOAD.push(load);
QUEUE_SANITIZE.push(sanitize);
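Assuming `extraNetworksDefaultMultiplier` is set to 0.8 and a Lora named `myLora` was selected (both values are illustrative), the sanitize step above would turn the result into the final prompt text:

```js
sanitize(ResultType.lora, "myLora"); // returns "<lora:myLora:0.8>"
```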

240
javascript/ext_umi.js Normal file

@@ -0,0 +1,240 @@
const UMI_PROMPT_REGEX = /<[^\s]*?\[[^,<>]*[\]|]?>?/gi;
const UMI_TAG_REGEX = /(?:\[|\||--)([^<>\[\]\-|]+)/gi;

const UMI_TRIGGER = () => CFG.useWildcards && [...tagword.matchAll(UMI_PROMPT_REGEX)].length > 0;

class UmiParser extends BaseTagParser {
    parse(textArea, prompt) {
        // We are in a UMI yaml tag definition, parse further
        let umiSubPrompts = [...prompt.matchAll(UMI_PROMPT_REGEX)];

        let umiTags = [];
        let umiTagsWithOperators = []

        const insertAt = (str, char, pos) => str.slice(0, pos) + char + str.slice(pos);

        umiSubPrompts.forEach(umiSubPrompt => {
            umiTags = umiTags.concat([...umiSubPrompt[0].matchAll(UMI_TAG_REGEX)].map(x => x[1].toLowerCase()));

            const start = umiSubPrompt.index;
            const end = umiSubPrompt.index + umiSubPrompt[0].length;
            if (textArea.selectionStart >= start && textArea.selectionStart <= end) {
                umiTagsWithOperators = insertAt(umiSubPrompt[0], '###', textArea.selectionStart - start);
            }
        });

        // Safety check since UMI parsing sometimes seems to trigger outside of an UMI subprompt and thus fails
        if (umiTagsWithOperators.length === 0) {
            return null;
        }

        const promptSplitToTags = umiTagsWithOperators.replace(']###[', '][').split("][");

        const clean = (str) => str
            .replaceAll('>', '')
            .replaceAll('<', '')
            .replaceAll('[', '')
            .replaceAll(']', '')
            .trim();

        const matches = promptSplitToTags.reduce((acc, curr) => {
            let isOptional = curr.includes("|");
            let isNegative = curr.startsWith("--");
            let out;
            if (isOptional) {
                out = {
                    hasCursor: curr.includes("###"),
                    tags: clean(curr).split('|').map(x => ({
                        hasCursor: x.includes("###"),
                        isNegative: x.startsWith("--"),
                        tag: clean(x).replaceAll("###", '').replaceAll("--", '')
                    }))
                };
                acc.optional.push(out);
                acc.all.push(...out.tags.map(x => x.tag));
            } else if (isNegative) {
                out = {
                    hasCursor: curr.includes("###"),
                    tags: clean(curr).replaceAll("###", '').split('|'),
                };
                out.tags = out.tags.map(x => x.startsWith("--") ? x.substring(2) : x);
                acc.negative.push(out);
                acc.all.push(...out.tags);
            } else {
                out = {
                    hasCursor: curr.includes("###"),
                    tags: clean(curr).replaceAll("###", '').split('|'),
                };
                acc.positive.push(out);
                acc.all.push(...out.tags);
            }
            return acc;
        }, { positive: [], negative: [], optional: [], all: [] });

        //console.log({ matches })

        const filteredWildcards = (tagword) => {
            const wildcards = yamlWildcards.filter(x => {
                let tags = x[1];
                const matchesNeg =
                    matches.negative.length === 0
                    || matches.negative.every(x =>
                        x.hasCursor
                        || x.tags.every(t => !tags[t])
                    );
                if (!matchesNeg) return false;

                const matchesPos =
                    matches.positive.length === 0
                    || matches.positive.every(x =>
                        x.hasCursor
                        || x.tags.every(t => tags[t])
                    );
                if (!matchesPos) return false;

                const matchesOpt =
                    matches.optional.length === 0
                    || matches.optional.some(x =>
                        x.tags.some(t =>
                            t.hasCursor
                            || t.isNegative
                                ? !tags[t.tag]
                                : tags[t.tag]
                        ));
                if (!matchesOpt) return false;

                return true;
            }).reduce((acc, val) => {
                Object.keys(val[1]).forEach(tag => acc[tag] = acc[tag] + 1 || 1);
                return acc;
            }, {});

            return Object.entries(wildcards)
                .sort((a, b) => b[1] - a[1])
                .filter(x =>
                    x[0] === tagword
                    || !matches.all.includes(x[0])
                );
        }

        if (umiTags.length > 0) {
            // Get difference for subprompt
            let tagCountChange = umiTags.length - umiPreviousTags.length;
            let diff = difference(umiTags, umiPreviousTags);
            umiPreviousTags = umiTags;

            // Show all condition
            let showAll = tagword.endsWith("[") || tagword.endsWith("[--") || tagword.endsWith("|");

            // Exit early if the user closed the bracket manually
            if ((!diff || diff.length === 0 || (diff.length === 1 && tagCountChange < 0)) && !showAll) {
                if (!hideBlocked) hideResults(textArea);
                return;
            }

            let umiTagword = diff[0] || '';
            let tempResults = [];
            if (umiTagword && umiTagword.length > 0) {
                umiTagword = umiTagword.toLowerCase().replace(/[\n\r]/g, "");
                originalTagword = tagword;
                tagword = umiTagword;
                let filteredWildcardsSorted = filteredWildcards(umiTagword);
let searchRegex = new RegExp(`(^|[^a-zA-Z])${escapeRegExp(umiTagword)}`, 'i')
let baseFilter = x => x[0].toLowerCase().search(searchRegex) > -1;
let spaceIncludeFilter = x => x[0].toLowerCase().replaceAll(" ", "_").search(searchRegex) > -1;
tempResults = filteredWildcardsSorted.filter(x => baseFilter(x) || spaceIncludeFilter(x)) // Filter by tagword
// Add final results
let finalResults = [];
tempResults.forEach(t => {
let result = new AutocompleteResult(t[0].trim(), ResultType.yamlWildcard)
result.count = t[1];
finalResults.push(result);
});
return finalResults;
} else if (showAll) {
let filteredWildcardsSorted = filteredWildcards("");
// Add final results
let finalResults = [];
filteredWildcardsSorted.forEach(t => {
let result = new AutocompleteResult(t[0].trim(), ResultType.yamlWildcard)
result.count = t[1];
finalResults.push(result);
});
originalTagword = tagword;
tagword = "";
return finalResults;
}
} else {
let filteredWildcardsSorted = filteredWildcards("");
// Add final results
let finalResults = [];
filteredWildcardsSorted.forEach(t => {
let result = new AutocompleteResult(t[0].trim(), ResultType.yamlWildcard)
result.count = t[1];
finalResults.push(result);
});
originalTagword = tagword;
tagword = "";
return finalResults;
}
}
}
function updateUmiTags(tagType, sanitizedText, newPrompt, textArea) {
// If it was a yaml wildcard, also update the umiPreviousTags
if (tagType === ResultType.yamlWildcard && originalTagword.length > 0) {
let umiSubPrompts = [...newPrompt.matchAll(UMI_PROMPT_REGEX)];
let umiTags = [];
umiSubPrompts.forEach(umiSubPrompt => {
umiTags = umiTags.concat([...umiSubPrompt[0].matchAll(UMI_TAG_REGEX)].map(x => x[1].toLowerCase()));
});
umiPreviousTags = umiTags;
hideResults(textArea);
return true;
}
return false;
}
async function load() {
if (yamlWildcards.length === 0) {
try {
let yamlTags = (await readFile(`${tagBasePath}/temp/wcet.txt`)).split("\n");
// Split into tag, count pairs
yamlWildcards = yamlTags.map(x => x
.trim()
.split(","))
.map(([i, ...rest]) => [
i,
rest.reduce((a, b) => {
a[b.toLowerCase()] = true;
return a;
}, {}),
]);
} catch (e) {
console.error("Error loading yaml wildcards: " + e);
}
}
}
function sanitize(tagType, text) {
// Replace underscores only if the yaml tag is not using them
if (tagType === ResultType.yamlWildcard && !yamlWildcards.includes(text)) {
return text.replaceAll("_", " ");
}
return null;
}
// Add UMI parser
PARSERS.push(new UmiParser(UMI_TRIGGER));
// Add our utility functions to their respective queues
QUEUE_FILE_LOAD.push(load);
QUEUE_SANITIZE.push(sanitize);
QUEUE_AFTER_INSERT.push(updateUmiTags);
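The reduce in UmiParser.parse groups the bracketed terms of a UMI subprompt into positive, negative and optional requirements before filteredWildcards applies them. A worked example with invented tag names:

// Cursor placed right after "dress" in the subprompt <[dress][--nsfw][red|blue]>,
// so the parser sees "<[dress###][--nsfw][red|blue]>" after insertAt.
// Splitting on "][" and cleaning yields three terms, classified as:
//   "dress###"  -> positive, hasCursor: true (the term currently being completed)
//   "--nsfw"    -> negative, tags: ["nsfw"]
//   "red|blue"  -> optional, tags: [{ tag: "red" }, { tag: "blue" }]
// filteredWildcards then keeps only YAML wildcards that
//   - do not contain "nsfw" (every negative group must be absent),
//   - contain "red" or "blue" (at least one optional alternative must match),
//   - and skips the "dress" requirement because it carries the cursor.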

javascript/ext_wildcards.js (new file, 123 lines)

@@ -0,0 +1,123 @@
// Regex
const WC_REGEX = /\b__([^,]+)__([^, ]*)\b/g;
// Trigger conditions
const WC_TRIGGER = () => CFG.useWildcards && [...tagword.matchAll(WC_REGEX)].length > 0;
const WC_FILE_TRIGGER = () => CFG.useWildcards && (tagword.startsWith("__") && !tagword.endsWith("__") || tagword === "__");
class WildcardParser extends BaseTagParser {
async parse() {
// Show wildcards from a file with that name
let wcMatch = [...tagword.matchAll(WC_REGEX)]
let wcFile = wcMatch[0][1];
let wcWord = wcMatch[0][2];
// Look in normal wildcard files
let wcFound = wildcardFiles.find(x => x[1].toLowerCase() === wcFile);
// Use found wildcard file or look in external wildcard files
let wcPair = wcFound || wildcardExtFiles.find(x => x[1].toLowerCase() === wcFile);
let wildcards = (await readFile(`${wcPair[0]}/${wcPair[1]}.txt`)).split("\n")
.filter(x => x.trim().length > 0 && !x.startsWith('#')); // Remove empty lines and comments
let finalResults = [];
let tempResults = wildcards.filter(x => (wcWord !== null && wcWord.length > 0) ? x.toLowerCase().includes(wcWord) : x) // Filter by tagword
tempResults.forEach(t => {
let result = new AutocompleteResult(t.trim(), ResultType.wildcardTag);
result.meta = wcFile;
finalResults.push(result);
});
return finalResults;
}
}
class WildcardFileParser extends BaseTagParser {
parse() {
// Show available wildcard files
let tempResults = [];
if (tagword !== "__") {
let lmb = (x) => x[1].toLowerCase().includes(tagword.replace("__", ""))
tempResults = wildcardFiles.filter(lmb).concat(wildcardExtFiles.filter(lmb)) // Filter by tagword
} else {
tempResults = wildcardFiles.concat(wildcardExtFiles);
}
let finalResults = [];
// Get final results
tempResults.forEach(wcFile => {
let result = new AutocompleteResult(wcFile[1].trim(), ResultType.wildcardFile);
result.meta = "Wildcard file";
finalResults.push(result);
});
return finalResults;
}
}
async function load() {
if (wildcardFiles.length === 0 && wildcardExtFiles.length === 0) {
try {
let wcFileArr = (await readFile(`${tagBasePath}/temp/wc.txt`)).split("\n");
let wcBasePath = wcFileArr[0].trim(); // First line should be the base path
wildcardFiles = wcFileArr.slice(1)
.filter(x => x.trim().length > 0) // Remove empty lines
.map(x => [wcBasePath, x.trim().replace(".txt", "")]); // Remove file extension & newlines
// To support multiple sources, we need to separate them using the provided "-----" strings
let wcExtFileArr = (await readFile(`${tagBasePath}/temp/wce.txt`)).split("\n");
let splitIndices = [];
for (let index = 0; index < wcExtFileArr.length; index++) {
if (wcExtFileArr[index].trim() === "-----") {
splitIndices.push(index);
}
}
// For each group, add its entries to the wildcardExtFiles array with the base path as the first element
for (let i = 0; i < splitIndices.length; i++) {
let start = splitIndices[i - 1] || 0;
if (i > 0) start++; // Skip the "-----" line
let end = splitIndices[i];
let wcExtFile = wcExtFileArr.slice(start, end);
let base = wcExtFile[0].trim() + "/";
wcExtFile = wcExtFile.slice(1)
.filter(x => x.trim().length > 0) // Remove empty lines
.map(x => x.trim().replace(base, "").replace(".txt", "")); // Remove base path & file extension
wcExtFile = wcExtFile.map(x => [base, x]);
wildcardExtFiles.push(...wcExtFile);
}
} catch (e) {
console.error("Error loading wildcards: " + e);
}
}
}
function sanitize(tagType, text) {
if (tagType === ResultType.wildcardFile) {
return `__${text}__`;
} else if (tagType === ResultType.wildcardTag) {
return text.replace(/^.*?: /g, "");
}
return null;
}
function keepOpenIfWildcard(tagType, sanitizedText, newPrompt, textArea) {
// If it's a wildcard, we want to keep the results open so the user can select another wildcard
if (tagType === ResultType.wildcardFile) {
hideBlocked = true;
autocomplete(textArea, newPrompt, sanitizedText);
setTimeout(() => { hideBlocked = false; }, 100);
return true;
}
return false;
}
// Register the parsers
PARSERS.push(new WildcardParser(WC_TRIGGER));
PARSERS.push(new WildcardFileParser(WC_FILE_TRIGGER));
// Add our utility functions to their respective queues
QUEUE_FILE_LOAD.push(load);
QUEUE_SANITIZE.push(sanitize);
QUEUE_AFTER_INSERT.push(keepOpenIfWildcard);
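load() above expects a specific layout for the temp files written by the Python helper further down: wc.txt starts with the base wildcard path followed by one file per line, while wce.txt repeats that pattern once per wildcard source, each block terminated by a "-----" line. An invented example of one wce.txt block and the pairs it turns into:

// Invented example content for one block of wce.txt:
const exampleBlock = [
    "extensions/some-wildcard-extension/wildcards", // base path (name made up)
    "colors.txt",
    "clothing/dresses.txt",
    "-----",
].join("\n");
// After load() has processed it, wildcardExtFiles holds [basePath + "/", nameWithoutExtension] pairs:
//   ["extensions/some-wildcard-extension/wildcards/", "colors"]
//   ["extensions/some-wildcard-extension/wildcards/", "clothing/dresses"]
// which WildcardParser later joins back into a path for readFile.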

File diff suppressed because it is too large.


@@ -1,33 +1,171 @@
# This helper script scans folders for wildcards and embeddings and writes them
# to temporary files to expose them to the javascript side
import gradio as gr
from pathlib import Path
from modules import scripts, script_callbacks, shared, sd_hijack
import yaml
import time
import threading
# Webui root path
FILE_DIR = Path().absolute()
# The extension base path
EXT_PATH = FILE_DIR.joinpath('extensions')
# Tags base path
TAGS_PATH = Path(scripts.basedir()).joinpath('tags')
# The path to the folder containing the wildcards and embeddings
FILE_DIR = Path().absolute()
WILDCARD_PATH = FILE_DIR.joinpath('scripts/wildcards')
WILDCARD_EXT_PATH = FILE_DIR.joinpath('extensions/wildcards/wildcards')
EMB_PATH = FILE_DIR.joinpath('embeddings')
# The path to the temporary file
TEMP_PATH = FILE_DIR.joinpath('tags/temp')
EMB_PATH = Path(shared.cmd_opts.embeddings_dir)
HYP_PATH = Path(shared.cmd_opts.hypernetwork_dir)
try:
LORA_PATH = Path(shared.cmd_opts.lora_dir)
except AttributeError:
LORA_PATH = None
def find_ext_wildcard_paths():
"""Returns the path to the extension wildcards folder"""
found = list(EXT_PATH.glob('*/wildcards/'))
return found
# The path to the extension wildcards folder
WILDCARD_EXT_PATHS = find_ext_wildcard_paths()
# The path to the temporary files
STATIC_TEMP_PATH = FILE_DIR.joinpath('tmp') # In the webui root; on Windows it exists by default, on Linux it doesn't
TEMP_PATH = TAGS_PATH.joinpath('temp') # Extension specific temp files
def get_wildcards():
"""Returns a list of all wildcards. Works on nested folders."""
wildcard_files = list(WILDCARD_PATH.rglob("*.txt"))
resolved = [w.relative_to(WILDCARD_PATH).as_posix() for w in wildcard_files if w.name != "put wildcards here.txt"]
resolved = [w.relative_to(WILDCARD_PATH).as_posix(
) for w in wildcard_files if w.name != "put wildcards here.txt"]
return resolved
def get_ext_wildcards():
"""Returns a list of all extension wildcards. Works on nested folders."""
wildcard_files = list(WILDCARD_EXT_PATH.rglob("*.txt"))
resolved = [w.relative_to(WILDCARD_EXT_PATH).as_posix() for w in wildcard_files if w.name != "put wildcards here.txt"]
return resolved
wildcard_files = []
for path in WILDCARD_EXT_PATHS:
wildcard_files.append(path.relative_to(FILE_DIR).as_posix())
wildcard_files.extend(p.relative_to(path).as_posix() for p in path.rglob("*.txt") if p.name != "put wildcards here.txt")
wildcard_files.append("-----")
return wildcard_files
def get_embeddings():
"""Returns a list of all embeddings"""
return [str(e.relative_to(EMB_PATH)) for e in EMB_PATH.glob("**/*") if e.suffix in {".bin", ".pt", ".png"}]
def get_ext_wildcard_tags():
"""Returns a list of all tags found in extension YAML files found under a Tags: key."""
wildcard_tags = {} # { tag: count }
yaml_files = []
for path in WILDCARD_EXT_PATHS:
yaml_files.extend(p for p in path.rglob("*.yml"))
yaml_files.extend(p for p in path.rglob("*.yaml"))
count = 0
for path in yaml_files:
try:
with open(path, encoding="utf8") as file:
data = yaml.safe_load(file)
for item in data:
if data[item] and 'Tags' in data[item]:
wildcard_tags[count] = ','.join(data[item]['Tags'])
count += 1
else:
print('Issue with tags found in ' + path.name + ' at item ' + item)
except yaml.YAMLError as exc:
print(exc)
# Sort by count
sorted_tags = sorted(wildcard_tags.items(), key=lambda item: item[1], reverse=True)
output = []
for tag, count in sorted_tags:
output.append(f"{tag},{count}")
return output
def get_embeddings(sd_model):
"""Write a list of all embeddings with their version"""
# Version constants
V1_SHAPE = 768
V2_SHAPE = 1024
emb_v1 = []
emb_v2 = []
results = []
try:
# Get embedding dict from sd_hijack to separate v1/v2 embeddings
emb_type_a = sd_hijack.model_hijack.embedding_db.word_embeddings
emb_type_b = sd_hijack.model_hijack.embedding_db.skipped_embeddings
# Get the shape of the first item in the dict
emb_a_shape = -1
emb_b_shape = -1
if (len(emb_type_a) > 0):
emb_a_shape = next(iter(emb_type_a.items()))[1].shape
if (len(emb_type_b) > 0):
emb_b_shape = next(iter(emb_type_b.items()))[1].shape
# Add embeddings to the correct list
if (emb_a_shape == V1_SHAPE):
emb_v1 = list(emb_type_a.keys())
elif (emb_a_shape == V2_SHAPE):
emb_v2 = list(emb_type_a.keys())
if (emb_b_shape == V1_SHAPE):
emb_v1 = list(emb_type_b.keys())
elif (emb_b_shape == V2_SHAPE):
emb_v2 = list(emb_type_b.keys())
# Get shape of current model
#vec = sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
#model_shape = vec.shape[1]
# Show relevant entries at the top
#if (model_shape == V1_SHAPE):
# results = [e + ",v1" for e in emb_v1] + [e + ",v2" for e in emb_v2]
#elif (model_shape == V2_SHAPE):
# results = [e + ",v2" for e in emb_v2] + [e + ",v1" for e in emb_v1]
#else:
# raise AttributeError # Fallback to old method
results = sorted([e + ",v1" for e in emb_v1] + [e + ",v2" for e in emb_v2], key=lambda x: x.lower())
except AttributeError:
print("tag_autocomplete_helper: Old webui version or unrecognized model shape, using fallback for embedding completion.")
# Get a list of all embeddings in the folder
all_embeds = [str(e.relative_to(EMB_PATH)) for e in EMB_PATH.rglob("*") if e.suffix in {".bin", ".pt", ".png", ".webp", ".jxl", ".avif"}]
# Remove files with a size of 0
all_embeds = [e for e in all_embeds if EMB_PATH.joinpath(e).stat().st_size > 0]
# Remove file extensions
all_embeds = [e[:e.rfind('.')] for e in all_embeds]
results = [e + "," for e in all_embeds]
write_to_temp_file('emb.txt', results)
def get_hypernetworks():
"""Write a list of all hypernetworks"""
# Get a list of all hypernetworks in the folder
all_hypernetworks = [str(h.name) for h in HYP_PATH.rglob("*") if h.suffix in {".pt"}]
# Remove file extensions
return [h[:h.rfind('.')] for h in all_hypernetworks]
def get_lora():
"""Write a list of all lora"""
# Get a list of all lora in the folder
all_lora = [str(l.name) for l in LORA_PATH.rglob("*") if l.suffix in {".safetensors", ".ckpt", ".pt"}]
# Remove file extensions
return [l[:l.rfind('.')] for l in all_lora]
def write_tag_base_path():
"""Writes the tag base path to a fixed location temporary file"""
with open(STATIC_TEMP_PATH.joinpath('tagAutocompletePath.txt'), 'w', encoding="utf-8") as f:
f.write(TAGS_PATH.relative_to(FILE_DIR).as_posix())
def write_to_temp_file(name, data):
@@ -36,6 +174,25 @@ def write_to_temp_file(name, data):
f.write(('\n'.join(data)))
csv_files = []
csv_files_withnone = []
def update_tag_files():
"""Returns a list of all potential tag files"""
global csv_files, csv_files_withnone
files = [str(t.relative_to(TAGS_PATH)) for t in TAGS_PATH.glob("*.csv")]
csv_files = files
csv_files_withnone = ["None"] + files
# Write the tag base path to a fixed location temporary file
# to enable the javascript side to find our files regardless of extension folder name
if not STATIC_TEMP_PATH.exists():
STATIC_TEMP_PATH.mkdir(exist_ok=True)
write_tag_base_path()
update_tag_files()
# Check if the temp path exists and create it if not
if not TEMP_PATH.exists():
TEMP_PATH.mkdir(parents=True, exist_ok=True)
@@ -44,22 +201,80 @@ if not TEMP_PATH.exists():
# even if no wildcards or embeddings are found
write_to_temp_file('wc.txt', [])
write_to_temp_file('wce.txt', [])
write_to_temp_file('emb.txt', [])
write_to_temp_file('wcet.txt', [])
write_to_temp_file('hyp.txt', [])
write_to_temp_file('lora.txt', [])
# Only reload embeddings if the file doesn't exist, since they are already re-written on model load
if not TEMP_PATH.joinpath("emb.txt").exists():
write_to_temp_file('emb.txt', [])
# Write wildcards to wc.txt if found
if WILDCARD_PATH.exists():
wildcards = get_wildcards()
wildcards = [WILDCARD_PATH.relative_to(FILE_DIR).as_posix()] + get_wildcards()
if wildcards:
write_to_temp_file('wc.txt', wildcards)
# Write extension wildcards to wce.txt if found
if WILDCARD_EXT_PATH.exists():
if WILDCARD_EXT_PATHS is not None:
wildcards_ext = get_ext_wildcards()
if wildcards_ext:
write_to_temp_file('wce.txt', wildcards_ext)
# Write yaml extension wildcards to wcet.txt if found
wildcards_yaml_ext = get_ext_wildcard_tags()
if wildcards_yaml_ext:
write_to_temp_file('wcet.txt', wildcards_yaml_ext)
# Write embeddings to emb.txt if found
if EMB_PATH.exists():
embeddings = get_embeddings()
if embeddings:
write_to_temp_file('emb.txt', embeddings)
# Get embeddings after the model loaded callback
script_callbacks.on_model_loaded(get_embeddings)
if HYP_PATH.exists():
hypernets = get_hypernetworks()
if hypernets:
write_to_temp_file('hyp.txt', hypernets)
if LORA_PATH is not None and LORA_PATH.exists():
lora = get_lora()
if lora:
write_to_temp_file('lora.txt', lora)
# Register autocomplete options
def on_ui_settings():
TAC_SECTION = ("tac", "Tag Autocomplete")
# Main tag file
shared.opts.add_option("tac_tagFile", shared.OptionInfo("danbooru.csv", "Tag filename", gr.Dropdown, lambda: {"choices": csv_files_withnone}, refresh=update_tag_files, section=TAC_SECTION))
# Active in settings
shared.opts.add_option("tac_active", shared.OptionInfo(True, "Enable Tag Autocompletion", section=TAC_SECTION))
shared.opts.add_option("tac_activeIn.txt2img", shared.OptionInfo(True, "Active in txt2img (Requires restart)", section=TAC_SECTION))
shared.opts.add_option("tac_activeIn.img2img", shared.OptionInfo(True, "Active in img2img (Requires restart)", section=TAC_SECTION))
shared.opts.add_option("tac_activeIn.negativePrompts", shared.OptionInfo(True, "Active in negative prompts (Requires restart)", section=TAC_SECTION))
shared.opts.add_option("tac_activeIn.thirdParty", shared.OptionInfo(True, "Active in third party textboxes [Dataset Tag Editor] (Requires restart)", section=TAC_SECTION))
shared.opts.add_option("tac_activeIn.modelList", shared.OptionInfo("", "List of model names (with file extension) or their hashes to use as black/whitelist, separated by commas.", section=TAC_SECTION))
shared.opts.add_option("tac_activeIn.modelListMode", shared.OptionInfo("Blacklist", "Mode to use for model list", gr.Dropdown, lambda: {"choices": ["Blacklist","Whitelist"]}, section=TAC_SECTION))
# Results related settings
shared.opts.add_option("tac_maxResults", shared.OptionInfo(5, "Maximum results", section=TAC_SECTION))
shared.opts.add_option("tac_showAllResults", shared.OptionInfo(False, "Show all results", section=TAC_SECTION))
shared.opts.add_option("tac_resultStepLength", shared.OptionInfo(100, "How many results to load at once", section=TAC_SECTION))
shared.opts.add_option("tac_delayTime", shared.OptionInfo(100, "Time in ms to wait before triggering completion again (Requires restart)", section=TAC_SECTION))
shared.opts.add_option("tac_useWildcards", shared.OptionInfo(True, "Search for wildcards", section=TAC_SECTION))
shared.opts.add_option("tac_useEmbeddings", shared.OptionInfo(True, "Search for embeddings", section=TAC_SECTION))
shared.opts.add_option("tac_useHypernetworks", shared.OptionInfo(True, "Search for hypernetworks", section=TAC_SECTION))
shared.opts.add_option("tac_useLoras", shared.OptionInfo(True, "Search for Loras", section=TAC_SECTION))
shared.opts.add_option("tac_showWikiLinks", shared.OptionInfo(False, "Show '?' next to tags, linking to its Danbooru or e621 wiki page (Warning: This is an external site and very likely contains NSFW examples!)", section=TAC_SECTION))
# Insertion related settings
shared.opts.add_option("tac_replaceUnderscores", shared.OptionInfo(True, "Replace underscores with spaces on insertion", section=TAC_SECTION))
shared.opts.add_option("tac_escapeParentheses", shared.OptionInfo(True, "Escape parentheses on insertion", section=TAC_SECTION))
shared.opts.add_option("tac_appendComma", shared.OptionInfo(True, "Append comma on tag autocompletion", section=TAC_SECTION))
# Alias settings
shared.opts.add_option("tac_alias.searchByAlias", shared.OptionInfo(True, "Search by alias", section=TAC_SECTION))
shared.opts.add_option("tac_alias.onlyShowAlias", shared.OptionInfo(False, "Only show alias", section=TAC_SECTION))
# Translation settings
shared.opts.add_option("tac_translation.translationFile", shared.OptionInfo("None", "Translation filename", gr.Dropdown, lambda: {"choices": csv_files_withnone}, refresh=update_tag_files, section=TAC_SECTION))
shared.opts.add_option("tac_translation.oldFormat", shared.OptionInfo(False, "Translation file uses old 3-column translation format instead of the new 2-column one", section=TAC_SECTION))
shared.opts.add_option("tac_translation.searchByTranslation", shared.OptionInfo(True, "Search by translation", section=TAC_SECTION))
# Extra file settings
shared.opts.add_option("tac_extra.extraFile", shared.OptionInfo("extra-quality-tags.csv", "Extra filename (for small sets of custom tags)", gr.Dropdown, lambda: {"choices": csv_files_withnone}, refresh=update_tag_files, section=TAC_SECTION))
shared.opts.add_option("tac_extra.addMode", shared.OptionInfo("Insert before", "Mode to add the extra tags to the main tag list", gr.Dropdown, lambda: {"choices": ["Insert before","Insert after"]}, section=TAC_SECTION))
script_callbacks.on_ui_settings(on_ui_settings)
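The tac_useWildcards, tac_useHypernetworks and tac_useLoras options registered here are what the CFG.useWildcards / CFG.useHypernetworks / CFG.useLoras checks in the trigger conditions above ultimately reflect. How the main script turns the shared options into its CFG object is not part of this diff; a hypothetical sketch of such a mapping (the function name and the flat key handling are assumptions):

// Hypothetical sketch: strip the "tac_" prefix from the webui options to build the CFG object.
function buildTacConfig(webuiOpts) {
    const cfg = {};
    for (const [key, value] of Object.entries(webuiOpts)) {
        if (key.startsWith("tac_")) {
            cfg[key.slice("tac_".length)] = value; // "tac_useLoras" -> cfg.useLoras
        }
    }
    return cfg;
}
// buildTacConfig({ tac_useLoras: true, tac_maxResults: 5 }) -> { useLoras: true, maxResults: 5 }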


@@ -0,0 +1,6 @@
masterpiece,5,Quality tag,
best_quality,5,Quality tag,
high_quality,5,Quality tag,
normal_quality,5,Quality tag,
low_quality,5,Quality tag,
worst_quality,5,Quality tag,
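The column layout here appears to match the main tag CSVs: tag name, numeric category (5 maps to the orange/darkorange pair in colors.json below), a third column shown as the count or, for non-numeric values like "Quality tag", as the meta text, and a trailing aliases column, empty in these rows. An invented custom row would follow the same pattern:

my_custom_tag,5,Personal tag,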

tags/colors.json (new file, 21 lines)

@@ -0,0 +1,21 @@
{
"danbooru": {
"-1": ["red", "maroon"],
"0": ["lightblue", "dodgerblue"],
"1": ["indianred", "firebrick"],
"3": ["violet", "darkorchid"],
"4": ["lightgreen", "darkgreen"],
"5": ["orange", "darkorange"]
},
"e621": {
"-1": ["red", "maroon"],
"0": ["lightblue", "dodgerblue"],
"1": ["gold", "goldenrod"],
"3": ["violet", "darkorchid"],
"4": ["lightgreen", "darkgreen"],
"5": ["tomato", "darksalmon"],
"6": ["red", "maroon"],
"7": ["whitesmoke", "black"],
"8": ["seagreen", "darkseagreen"]
}
}
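Each entry in this file maps a numeric tag category to a pair of CSS color names per tag source; the two values presumably cover the dark and light webui themes, though the order is not stated in this diff. A sketch of how the lookup could work (getTagColor and the theme handling are illustrative assumptions):

// Minimal sketch: resolve a display color for a tag category.
// colors.json shape: { tagSource: { categoryId: [colorA, colorB] } }.
// Whether colorA is the dark- or light-theme variant is an assumption here.
function getTagColor(colorMap, tagSource, category, darkTheme = true) {
    const pair = colorMap[tagSource]?.[String(category)];
    if (!pair) return "inherit"; // unknown category, let CSS decide
    return darkTheme ? pair[0] : pair[1];
}
// Example: getTagColor(colors, "danbooru", 5, true) -> "orange"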


@@ -1,44 +0,0 @@
{
"tagFile": "danbooru.csv",
"activeIn": {
"txt2img": true,
"img2img": true,
"negativePrompts": true
},
"maxResults": 5,
"resultStepLength": 500,
"showAllResults": false,
"useLeftRightArrowKeys": false,
"replaceUnderscores": true,
"escapeParentheses": true,
"useWildcards": true,
"useEmbeddings": true,
"translation": {
"searchByTranslation": true,
"onlyShowTranslation": false
},
"extra": {
"extraFile": "",
"onlyTranslationExtraFile": false
},
"colors": {
"danbooru": {
"0": ["lightblue", "dodgerblue"],
"1": ["indianred", "firebrick"],
"3": ["violet", "darkorchid"],
"4": ["lightgreen", "darkgreen"],
"5": ["orange", "darkorange"]
},
"e621": {
"-1": ["red", "maroon"],
"0": ["lightblue", "dodgerblue"],
"1": ["gold", "goldenrod"],
"3": ["violet", "darkorchid"],
"4": ["lightgreen", "darkgreen"],
"5": ["tomato", "darksalmon"],
"6": ["red", "maroon"],
"7": ["whitesmoke", "black"],
"8": ["seagreen", "darkseagreen"]
}
}
}

File diff suppressed because it is too large.

tags/e621.csv (166094 changed lines)

File diff suppressed because one or more lines are too long