Compare commits

..

227 Commits

Author SHA1 Message Date
DominikDoom
cd7ec48102 Merge branch 'main' into refactor-class-scope 2025-11-11 11:21:09 +01:00
Serick
19a30beed4 Fix glob pattern matching for filenames with special characters (#335) 2025-11-11 11:20:18 +01:00
DominikDoom
2e699f3ebd Merge branch 'main' into refactor-class-scope 2025-09-09 10:13:06 +02:00
DominikDoom
89fee277e3 Fix model path breaking with commas in filenames
Fixes #332
2025-09-09 10:04:00 +02:00
DominikDoom
c4510663ca Fix lora / embed preview being below the textbox
This was a small visual regression for the normal webui introduced by #327
Should keep the fix working for forge
2025-09-09 09:57:50 +02:00
DominikDoom
4b02fe921f Move main script into an IIFE too
Also exports the functions to the TAC.main object in case other extensions need to use them.
2025-07-13 17:03:01 +02:00
DominikDoom
f30214014b Fix lora / embed preview being below the textbox
This was a small visual regression for the normal webui introduced by #327
Should keep the fix working for forge
2025-07-12 20:13:07 +02:00
DominikDoom
20e48a124c Move caret coordinate function into TAC namespace too 2025-07-12 20:10:42 +02:00
DominikDoom
22a9449419 Fix typo 2025-07-12 20:09:14 +02:00
DominikDoom
bcb11af7ef Make internal class properties private & formatting 2025-07-12 20:09:01 +02:00
DominikDoom
88c8a1d5d6 Also move the TacUtils class into the namespace for consistency 2025-07-12 19:01:13 +02:00
DominikDoom
87fa3851ca Isolate all parsers in a local self-executing function 2025-07-12 18:54:56 +02:00
DominikDoom
8a574ec5e1 Isolate result type / class 2025-07-12 18:39:06 +02:00
DominikDoom
781cea83a0 Isolate textarea utils in sub-namespace 2025-07-12 18:29:06 +02:00
DominikDoom
0608706e7d Fix more broken/missed references 2025-07-10 19:03:09 +02:00
DominikDoom
d1cb5269f6 Fix setup never running 2025-07-10 18:47:55 +02:00
DominikDoom
ab253e30f4 We need to init the CFG object
I was too used to typescript syntax
2025-07-10 18:40:00 +02:00
DominikDoom
0d65238a55 Another missing this reference 2025-07-10 18:38:48 +02:00
DominikDoom
de912bc800 Missed this 2025-07-10 18:25:48 +02:00
DominikDoom
8eb5176ab4 Move CFG to top level for shorter access, add hacky jsdoc typedef 2025-07-10 18:22:18 +02:00
DominikDoom
bdbda299f7 Refactor whole script to use isolated globals 2025-07-10 17:52:20 +02:00
DominikDoom
4d6e5b14ac Use new TAC.Globals namespace for CFG 2025-07-10 14:07:13 +02:00
DominikDoom
085a7fc64c Move caret util into its own class 2025-07-10 13:47:49 +02:00
DominikDoom
61d799a908 Merge branch 'main' into refactor-class-scope 2025-07-10 12:20:36 +02:00
DominikDoom
8766965a30 Credit original author 2025-05-08 12:43:40 +02:00
Disty0
34e68e1628 Fix SDNext ModernUI by following the cursor (#327) 2025-05-05 20:44:51 +02:00
DominikDoom
41d185b616 Improve IME consistency
Might help with #326
2025-05-01 13:48:32 +02:00
DominikDoom
e0baa58ace Fix style appending to wrong node on forge classic
Fixes #323
2025-04-16 11:23:12 +02:00
DominikDoom
c1ef12d887 Fix weighted tags preventing normal tag completion
caused by filter applying to every tag instead of just one to one
Fixes #324
2025-04-15 21:56:16 +02:00
Serick
4fc122de4b Added support for Forge classic (#322)
Fixes issues due to removal of hypernetworks in Forge classic
2025-04-15 09:35:54 +02:00
re-unknown
c341ccccb6 Add TIPO configuration for tag prompt in third-party selectors (#319) 2025-03-23 14:26:34 +01:00
akoyaki ayagi
bda8701734 Add a character core tags list file for chant function (#317)
Alternative chant list ("<c:" or "<chant:" prefix) for 26k characters and their tag descriptions. Allows greater likeness even if the model doesn't know the character well.
2025-03-08 10:45:40 +01:00
undefined
63fca457a7 Indicate repeated tag (#313)
Shows 🔁 to mark a tag that has already been used in the prompt
2025-01-16 09:29:33 +01:00
DominikDoom
38700d4743 Formatting 2025-01-04 19:35:14 +01:00
DominikDoom
bb492ba059 Add default color config & wiki link fix for merged tag list 2025-01-04 19:33:29 +01:00
Drac
40ad070a02 Add danbooru_e621_merged.csv (#312)
Post count threshold for this file is 25
2025-01-04 19:12:57 +01:00
DominikDoom
209b1dd76b End of 2024 tag list update
Danbooru and e621 tag lists as of 2024-12-22 (no Derpibooru for now, sorry).
Both cut off at a post count of 25, slightly improved consistency & new aliases included.
Thanks a lot to @DraconicDragon for the up-to-date tag list at https://github.com/DraconicDragon/dbr-e621-lists-archive
2025-01-03 14:03:26 +01:00
DominikDoom
196fa19bfc Fix derpibooru tags containing merge conflict markers
Thanks to @heftig for noticing this, as discussed in #293
2024-12-08 18:23:21 +01:00
DominikDoom
6ffeeafc49 Update danbooru tags (2024-11-9)
Thanks to @yamosin.
Closes #309

Note: This changes the cutoff type from top 100k to post count > 30, which adds ~21k rows
2024-11-09 15:35:59 +01:00
DominikDoom
08b7c58ea7 More catches for fixing #308 2024-11-02 15:52:10 +01:00
DominikDoom
6be91449f3 Try-catch in umi format check
Possible fix for #308
2024-11-02 13:51:51 +01:00
david419kr
b515c15e01 Underscore replacement exclusion list feature (#306) 2024-10-30 17:45:32 +01:00
DominikDoom
827b99c961 Make embedding refresh non-force by default
Added option for force-refreshing embeddings to restore old behavior
Fixes #301
2024-09-04 22:58:55 +02:00
DominikDoom
49ec047af8 Fix extra network tab refresh listener 2024-08-15 11:52:49 +02:00
DominikDoom
f94da07ed1 Fix ref 2024-08-11 14:56:58 +02:00
DominikDoom
e2cfe7341b Re-register embed load callback after model load if needed 2024-08-11 14:55:35 +02:00
DominikDoom
ce51ec52a2 Fix for forge type detection, sorting fallback if filename is missing 2024-08-11 14:26:37 +02:00
DominikDoom
f64d728ac6 Partial embedding fixes for webui forge
Resolves some symptoms of #297, but doesn't fix the underlying cause
2024-08-11 14:08:31 +02:00
DominikDoom
1c6bba2a3d Formatting 2024-08-05 11:47:12 +02:00
DominikDoom
9a47c2ec2c Mouse hover can now trigger selection event for extra network previews
Closes #292
2024-08-05 11:39:38 +02:00
DominikDoom
fe32ad739d Merge pull request #293 from Siberpone/derpi-update
Derpibooru csv update
2024-07-05 17:15:26 +02:00
siberpone
ade67e30a6 Updated Derpibooru tags 2024-07-05 21:57:15 +07:00
DominikDoom
e9a21e7a55 Add pony quality tags to demo chants 2024-07-05 10:03:23 +02:00
DominikDoom
3ef2a7d206 Prefer loaded over skipped embeddings on name collisions
Fixes #290
2024-06-11 16:23:16 +02:00
DominikDoom
29b5bf0701 Fix db version being accessed before creation if table fails to create
This should prevent the script from hard crashing like in #288
2024-05-28 09:47:15 +02:00
DominikDoom
3eef536b64 Use custom wildcard wrapper chars from sd-dynamic-prompts if the option is set
Closes #286
2024-05-04 14:12:14 +02:00
DominikDoom
0d24e697d2 Update README.md 2024-04-16 13:28:33 +02:00
DominikDoom
a27633da55 Remove hardcoded preview url modifiers, use ResultType instead
Fixes #284
2024-04-15 18:48:44 +02:00
DominikDoom
4cd6174a22 Trying out a hopefully more robust import fix 2024-04-14 12:14:29 +02:00
DominikDoom
9155e4d42c Prevent db response errors from breaking the regular completion 2024-04-14 12:01:15 +02:00
DominikDoom
700642a400 Attempt to fix import error described in #283 2024-04-13 20:27:47 +02:00
DominikDoom
3e2ee75f37 Move the previously global util functions to TacUtils class & update references
(to prevent naming conflicts and have cleaner modularization)
2024-04-13 19:38:32 +02:00
DominikDoom
1b592dbf56 Add safety check for db access that was not yet handled by db_request 2024-04-13 19:26:20 +02:00
DominikDoom
d1eea880f3 Rename API call functions on JS side to prevent name conflicts
Fixes #282
2024-04-13 17:27:10 +02:00
DominikDoom
119a3ad51f Merge branch 'feature-sort-by-frequent-use' 2024-04-13 15:08:57 +02:00
DominikDoom
c820a22149 Merge branch 'main' into feature-sort-by-frequent-use 2024-04-13 15:06:53 +02:00
DominikDoom
eb1e1820f9 English dictionary and derpibooru tag lists, thanks to @Nenotriple
Closes #280
2024-04-13 14:42:21 +02:00
DominikDoom
ef59cff651 Move last used date check guard to SQL side, implement max cap
- Server side date comparison and cap check further improve js sort performance
- The alias check has also been moved out of calculateUsageBias to support the new cap system
2024-03-16 16:44:43 +01:00
DominikDoom
a454383c43 Merge branch 'main' into feature-sort-by-frequent-use 2024-03-03 13:52:32 +01:00
DominikDoom
bec567fe26 Merge pull request #277 from Symbiomatrix/embpath
Embedding relative path.
2024-03-03 13:50:33 +01:00
DominikDoom
d4041096c9 Merge pull request #275 from Symbiomatrix/wildcard 2024-03-03 13:43:27 +01:00
Symbiomatrix
0903259ddf Update ext_embeddings.js 2024-03-03 13:49:36 +02:00
Symbiomatrix
f3e64b1fa5 Update tag_autocomplete_helper.py 2024-03-03 13:47:39 +02:00
DominikDoom
312cec5d71 Merge pull request #276 from Symbiomatrix/modkey
[Feature] Modifier keys for list navigation.
2024-03-03 11:46:05 +01:00
Symbiomatrix
b71e6339bd Fix tabs. 2024-03-03 12:21:29 +02:00
Symbiomatrix
7ddbc3c0b2 Update tagAutocomplete.js 2024-03-03 04:23:13 +02:00
DominikDoom
4c2ef8f770 Merge branch 'main' into feature-sort-by-frequent-use 2024-02-09 19:23:52 +01:00
DominikDoom
97c5e4f53c Fix embeddings not loading in SD.Next diffusers backend
Fixes #273
2024-02-09 19:06:23 +01:00
DominikDoom
1d8d9f64b5 Update danbooru.csv with 2023 data
Closes #274
2024-02-09 17:59:06 +01:00
DominikDoom
7437850600 Merge branch 'main' into feature-sort-by-frequent-use 2024-02-04 14:46:29 +01:00
DominikDoom
829a4a7b89 Merge pull request #272 from rkfg/lora-visibility
Hide loras according to settings
2024-02-04 14:45:48 +01:00
rkfg
22472ac8ad Hide loras according to settings 2024-02-04 16:44:33 +03:00
DominikDoom
5f77fa26d3 Update README.md
Add feedback wanted notice
2024-02-04 12:04:02 +01:00
DominikDoom
f810b2dd8f Merge branch 'main' into feature-sort-by-frequent-use 2024-01-27 12:39:40 +01:00
DominikDoom
08d3436f3b Fix mtime check for list return
Fixes #269
2024-01-27 12:38:08 +01:00
DominikDoom
afa13306ef Small regex fix to make style completion work after [ or before , 2024-01-26 20:39:33 +01:00
DominikDoom
95200e82e1 Merge branch 'main' into feature-sort-by-frequent-use 2024-01-26 17:04:53 +01:00
DominikDoom
a63ce64f4e Small fix for nonexistent style file 2024-01-26 17:04:15 +01:00
DominikDoom
a966be7546 Merge branch 'main' into feature-sort-by-frequent-use 2024-01-26 16:21:15 +01:00
DominikDoom
d37e37acfa Added option to autocomplete style names
To be used in tandem with https://github.com/SirVeggie/extension-style-vars
Closes #268
2024-01-26 16:16:04 +01:00
DominikDoom
342fbc9041 Pre-calculate usage bias for all results instead of in the sort function
Roughly doubles the sort performance
2024-01-19 21:10:09 +01:00
DominikDoom
d496569c9a Cache sort key for small performance increase 2024-01-19 20:17:14 +01:00
DominikDoom
7778142520 Same fix for embeddings and chants, although probably not relevant 2024-01-11 15:42:30 +01:00
DominikDoom
cde90c13c4 Fix wildcards being cut off after colon
Fixes #267
2024-01-11 15:42:30 +01:00
DominikDoom
231b121fe0 Update README
Updated English and Japanese READMEs to include link to Japanese tag translations
2024-01-09 11:54:07 +01:00
DominikDoom
c659ed2155 Use <lora: for models in lycoris folder by default
Added a backcompat option to keep using <lyco: if needed.
Closes #263
2023-12-24 14:11:19 +01:00
DominikDoom
0a4c17cada Merge pull request #261 from Jibaku789/main
Add support for deepdanbooru-object-recognition
2023-12-19 11:01:53 +01:00
Jibaku789
6e65811d4a Add support for deepdanbooru-object-recognition
Adds autocomplete support in the deepdanbooru-object-recognition extension
2023-12-18 15:57:00 -06:00
DominikDoom
03673c060e Add wildcard exclusion options
One for excluding specific folders by name, the other to skip them during refresh
2023-12-15 19:46:17 +01:00
DominikDoom
1c11c4ad5a Lora hash dict safety checks 2023-12-15 13:35:53 +01:00
DominikDoom
30c9593d3d Merge branch 'main' into feature-sort-by-frequent-use 2023-12-12 14:23:18 +01:00
DominikDoom
f840586b6b Auto-refresh embedding list after model change
Uses own API endpoint and doesn't force-reload to skip unneeded work
(only works for A1111 as SD.Next model change detection isn't implemented yet)
2023-12-12 14:22:53 +01:00
DominikDoom
886704e351 Fix lora import in a1111
This makes the built-in list method work on the initial load there
2023-12-12 13:46:51 +01:00
DominikDoom
41626d22c3 Fix refresh in SD.Next if no model was loaded 2023-12-12 12:15:58 +01:00
DominikDoom
57076060df Merge branch 'main' into feature-sort-by-frequent-use 2023-12-11 11:43:26 +01:00
DominikDoom
5ef346cde3 Attempt to use the built-in Lora.networks Lora/LyCORIS models lists (#258)
Co-authored-by: Midcoastal <midcoastal79@gmail.com>
2023-12-11 11:37:12 +01:00
DominikDoom
edf76d9df2 Revert "Attempt to use the built-in Lora.networks Lora/LyCORIS models lists (#255)"
This reverts commit 837dc39811.
2023-12-10 22:49:30 +01:00
Mike
837dc39811 Attempt to use the built-in Lora.networks Lora/LyCORIS models lists (#255) 2023-12-10 22:20:51 +01:00
DominikDoom
f1870b7e87 Force text color to account for themes not following gradio settings
As discussed in PR #256
2023-12-10 15:30:49 +01:00
DominikDoom
20b6635a2a WIP usage info table
Might get replaced with gradio depending on how well it works
2023-12-04 15:00:19 +01:00
DominikDoom
1fe8f26670 Add explanatory tooltip and inline reset ability
Also add tooltip for wiki links
2023-12-04 13:56:15 +01:00
DominikDoom
e82e958c3e Fix alias check for non-aliased tag types 2023-11-29 18:15:59 +01:00
DominikDoom
2dd48eab79 Fix error with db return value for no matches 2023-11-29 18:14:14 +01:00
DominikDoom
4df90f5c95 Don't frequency sort alias results by default
with an option to enable it if desired
2023-11-29 18:04:50 +01:00
DominikDoom
a156214a48 Last used & min count settings
Also some performance improvements
2023-11-29 17:45:51 +01:00
DominikDoom
15478e73b5 Count positive / negative prompt usage separately 2023-11-29 15:22:41 +01:00
DominikDoom
fcacf7dd66 Update README_ZH.md
Added warning about IDM integration blocking JavaScript
2023-11-19 13:20:06 +01:00
DominikDoom
82f819f336 Move file to correct location 2023-11-18 11:59:37 +01:00
Yuxi Liu
effda54526 e621 sfw version 2023-11-18 11:59:37 +01:00
DominikDoom
434301738a Merge branch 'main' into feature-sort-by-frequent-use 2023-11-05 13:30:51 +01:00
DominikDoom
58804796f0 Fix broken refresh buttons
Likely caused by gradio changes
2023-11-05 13:07:47 +01:00
DominikDoom
668ca800b8 Add is_file checks to all glob searches
Prevents folder names containing the suffix from breaking things
Fixes #251
2023-11-05 12:51:51 +01:00
DominikDoom
a7233a594f Escape $ signs for the insert functions
Fixes #248, as discussed in #247
2023-10-14 16:19:34 +02:00
DominikDoom
4fba7baa69 Merge branch 'main' into feature-sort-by-frequent-use 2023-10-06 18:36:24 +02:00
DominikDoom
5ebe22ddfc Add sha256 (V2) keyword lookup
As discussed in #245
2023-10-06 16:46:18 +02:00
DominikDoom
44c5450b28 Fix special characters breaking wiki link urls 2023-10-06 14:54:29 +02:00
DominikDoom
5fd48f53de Fix csv parsing for unclosed quotes
Fixes #245
2023-10-06 14:44:03 +02:00
DominikDoom
7128efc4f4 Apply same fix to extra tags
Count now defaults to max safe integer, which simplifies the sort function
Before, it resulted in really bad performance
2023-10-02 00:45:48 +02:00
DominikDoom
bd0ddfbb24 Fix embeddings not at top
(only affecting the "include embeddings in normal results" option)
2023-10-02 00:16:58 +02:00
DominikDoom
3108daf0e8 Remove kaomoji inclusion in < search
because it interfered with use count searching and is not commonly needed
2023-10-01 23:51:35 +02:00
DominikDoom
446ac14e7f Fix umi list not resetting after deleting chars behind "[" 2023-10-01 23:47:02 +02:00
DominikDoom
363895494b Fix hide after insert race condition 2023-10-01 23:17:12 +02:00
DominikDoom
04551a8132 Don't await increase, limit to 2k for performance 2023-10-01 22:59:28 +02:00
DominikDoom
ffc0e378d3 Add different sorting functions 2023-10-01 22:44:35 +02:00
DominikDoom
440f109f1f Use POST + body to get around URL length limit 2023-10-01 22:30:47 +02:00
DominikDoom
80fb247dbe Sort results by usage count 2023-10-01 21:44:24 +02:00
DominikDoom
b3e71e840d Safety check for missing shape 2023-09-26 15:12:29 +02:00
DominikDoom
998514bebb Proper support for SDXL embeddings
Now in their own category; other embeddings no longer get mislabeled if an XL model is loaded
2023-09-26 14:14:20 +02:00
DominikDoom
d7e98200a8 Use count increase logic 2023-09-26 12:20:15 +02:00
DominikDoom
ac790c8ede Return dict instead of array for clarity 2023-09-26 12:12:46 +02:00
DominikDoom
22365ec8d6 Add missing type return to list request 2023-09-26 12:02:36 +02:00
DominikDoom
030a83aa4d Use query parameter instead of path to fix wildcard subfolder issues 2023-09-26 11:55:12 +02:00
DominikDoom
460d32a4ed Ensure proper reload, fix error message 2023-09-26 11:45:42 +02:00
DominikDoom
581bf1e6a4 Use composite key with name & type to prevent collisions 2023-09-26 11:35:24 +02:00
DominikDoom
74ea5493e5 Add rest of utils functions 2023-09-26 10:58:46 +02:00
DominikDoom
94ec8884c3 Fix SD.Next error caused by embeddings without filenames
This only ignores these embeddings; the root cause is a bug / behavioral difference in SD.Next
Fixes #242
2023-09-26 10:30:01 +02:00
DominikDoom
6cf9acd6ab Catch sqlite exceptions, add tag list endpoint 2023-09-24 20:06:40 +02:00
DominikDoom
109a8a155e Change endpoint name for consistency 2023-09-24 18:00:41 +02:00
DominikDoom
3caa1b51ed Add db to gitignore 2023-09-24 17:59:39 +02:00
DominikDoom
b44c36425a Fix db load version comparison, add sort options 2023-09-24 17:59:14 +02:00
DominikDoom
1e81403180 Safety catches for DB API access 2023-09-24 16:50:39 +02:00
DominikDoom
0f487a5c5c WIP database setup inspired by ImageBrowser 2023-09-24 16:28:32 +02:00
DominikDoom
2baa12fea3 Merge branch 'main' into feature-sort-by-frequent-use 2023-09-24 15:34:18 +02:00
DominikDoom
1a9157fe6e Fix wildcard load if no non-extension wildcards exist
Fixes #241
2023-09-21 10:15:53 +02:00
DominikDoom
67eeb5fbf6 Merge branch 'main' into feature-sort-by-frequent-use 2023-09-19 12:14:12 +02:00
DominikDoom
5911248ab9 Merge branch 'feature-sorting' into main
Update including a new sorting option for extra network models & wildcards.
For now only by date modified, this might be expanded in the future.
A "sort by frequent use" is also in the works.
2023-09-19 12:13:01 +02:00
DominikDoom
1c693c0263 Catch UnicodeDecodeError to prevent corrupted yaml files from breaking the extension
As mentioned in #240
2023-09-17 15:28:34 +02:00
DominikDoom
11ffed8afc Merge branch 'feature-sorting' into feature-sort-by-frequent-use 2023-09-15 16:37:34 +02:00
DominikDoom
cb54b66eda Refactor PR #239 to use new refresh API endpoint of this branch 2023-09-15 16:32:20 +02:00
DominikDoom
92a937ad01 Merge branch 'main' into feature-sorting 2023-09-15 16:30:23 +02:00
DominikDoom
ba9dce8d90 Merge pull request #239 from NoCrypt/add_extra_refresh_listener 2023-09-15 16:29:35 +02:00
NoCrypt
2622e1b596 Refresh extra: fix python code not being executed 2023-09-15 21:12:30 +07:00
NoCrypt
b03b1a0211 Add listener for extra network refresh button 2023-09-15 20:48:16 +07:00
DominikDoom
3e33169a3a Disable sort order dropdown pointer events while refresh is running
Doesn't prevent keyboard focus, but changing the values there is much slower since the list doesn't stay open.
2023-09-13 22:30:37 +02:00
DominikDoom
d8d991531a Don't sort umi tags since they use count 2023-09-13 22:04:59 +02:00
DominikDoom
f626b9453d Merge branch 'main' into feature-sorting 2023-09-13 21:56:29 +02:00
DominikDoom
5067afeee9 Add missing null safety 2023-09-13 21:55:09 +02:00
DominikDoom
018c6c8198 Fix Umi tag gathering & sorting
Fixes #238
2023-09-13 21:50:41 +02:00
DominikDoom
2846d79b7d Small cleanup, add reverse option
Properly add text at the end on non-reverse numeric
2023-09-13 19:39:48 +02:00
DominikDoom
783a847978 Fix typo 2023-09-13 16:37:44 +02:00
DominikDoom
44effca702 Add sorting to javascript side
Now uses the sortKey if available. Elements without a sortKey will always use name as fallback.
Removed sort direction API again since it needs to be modeled case-by-case in the javascript anyway.
2023-09-13 14:03:49 +02:00
DominikDoom
475ef59197 Rework sorting function to calculate keys instead of pre-sort the list
Rename added/changed variables to be clearer
2023-09-13 11:46:17 +02:00
Symbiomatrix
3953260485 Model sort selection. 2023-09-13 01:34:49 +03:00
DominikDoom
0a8e7d7d84 Stub API setup for tag usage stats 2023-09-12 14:10:15 +02:00
DominikDoom
46d07d703a Improve parentheses handling
Still not perfect, but hopefully a good compromise. Should be less annoying during normal prompt writing.
Closes #107
2023-09-12 12:56:55 +02:00
DominikDoom
bd1dbe92c2 Don't trigger on programmatic third party input events
Fixes #233
2023-09-12 11:50:07 +02:00
DominikDoom
66fa745d6f Merge pull request #235 from hakaserver/main 2023-09-12 09:46:09 +02:00
hakaserver
37b5dca66e lyco_path fix 2023-09-12 00:57:35 -03:00
DominikDoom
5db035cc3a Add missing comma for keyword insertion at end 2023-09-09 14:54:11 +02:00
DominikDoom
90cf3147fd Formatting 2023-09-09 14:51:24 +02:00
DominikDoom
4d4f23e551 Formatting 2023-09-09 14:43:55 +02:00
DominikDoom
80b47c61bb Add new setting to choose where keywords get inserted
Closes #232
2023-09-09 14:41:52 +02:00
DominikDoom
57821aae6a Add option to include embeddings in normal search
along with new keybind functionality for quick jumping between sections.
Closes #230
2023-09-07 13:18:04 +02:00
DominikDoom
e23bb6d4ea Add support for --wildcards-dir cmd argument
Refactor PR #229 a bit to share code with this
2023-09-02 17:59:27 +02:00
DominikDoom
d4cca00575 Merge pull request #229 from azmodii/main 2023-09-02 17:27:00 +02:00
DominikDoom
86ea94a565 Merge pull request #228 from re-unknown/main 2023-09-02 17:08:13 +02:00
Joel Clark
53f46c91a2 feat: Allow support for custom wildcard directory in sd-dynamic-prompts 2023-09-02 21:30:01 +10:00
ReUnknown
e5f93188c3 Support for updated style editor 2023-09-02 16:51:41 +09:00
DominikDoom
3e57842ac6 Remove unnecessary autocomplete call in wildcards
which would result in duplicate file requests
2023-08-29 10:23:13 +02:00
DominikDoom
32c4589df3 Rework wildcards to use own API endpoint
Maybe fixes #226
2023-08-29 09:39:32 +02:00
DominikDoom
5bbd97588c Remove duplicate slash from wildcard files
(should be cosmetic only)
2023-08-28 19:15:34 +02:00
DominikDoom
b2a663f7a7 Merge pull request #223 from Symbiomatrix/embload 2023-08-20 19:08:40 +02:00
Symbiomatrix
6f93d19a2b Edit error message. 2023-08-20 20:02:57 +03:00
Symbiomatrix
79bab04fd2 Typo. 2023-08-20 18:59:12 +03:00
Symbiomatrix
5b69d1e622 Embedding forced reload. 2023-08-20 18:51:37 +03:00
DominikDoom
651cf5fb46 Add metaKey and Shift to non-captured modifiers
Fixes #222
2023-08-19 11:59:41 +02:00
DominikDoom
5deb72cddf Add clearer README description for legacy translations
As suggested in #221
2023-08-16 10:50:05 +02:00
DominikDoom
97ebe78205 !After Detailer (adetailer) support 2023-08-15 14:44:38 +02:00
DominikDoom
b937e853c9 Fix booru wiki links with translations 2023-08-08 19:23:13 +02:00
DominikDoom
f63bbf947f Fix API endpoint to work with symlinks / external folders
Fixes #217
2023-08-07 22:15:48 +02:00
DominikDoom
16bc6d8868 Update README.md 2023-08-07 19:48:27 +02:00
DominikDoom
ebe276ee44 Fix for lora filenames containing dots
Since file extensions are already cut off before the client-side request, it's not needed here anymore
2023-08-07 19:22:50 +02:00
DominikDoom
995a5ecdba Live preview images for extra networks
Same as the thumbnails in the extra networks tab, just in a small preview window during completion
2023-08-07 18:50:55 +02:00
DominikDoom
90d144a5f4 Fix for new trimming rule cutting off first letter
if Loras weren't in a subfolder
2023-08-07 17:51:21 +02:00
DominikDoom
14a4440c33 Fix extra network sorting
Caused by loras including their (hidden) folder prefixes instead of just their name
2023-08-07 17:38:40 +02:00
DominikDoom
cdf092f3ac Fix lora keyword lookup for deep subfolders 2023-08-07 15:17:49 +02:00
DominikDoom
e1598378dc Merge pull request #215 from bluelovers/pr/model-keyword-001 2023-08-07 09:24:38 +02:00
bluelovers
599ad7f95f fix: known_lora_hashes.txt
https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/issues/214

https://github.com/canisminor1990/sd-webui-lobe-theme/issues/324
2023-08-07 09:54:10 +08:00
DominikDoom
0b2bb138ee Add option to keep wildcard file content order
instead of sorting alphabetically
Fixes #211
2023-08-05 13:42:24 +02:00
DominikDoom
4a415f1a04 Fix for duplicate wildcard entries
Caused by multiple yaml files specifying the same subkey
2023-07-29 17:27:43 +02:00
DominikDoom
21de5fe003 Merge branch 'feature-fix-dynamic-prompt-yaml' into main
Fixes #209
2023-07-29 16:30:19 +02:00
DominikDoom
a020df91b2 Fix wildcard traversal condition 2023-07-29 16:26:07 +02:00
DominikDoom
0260765b27 Add support for dynamic-prompts yaml wildcards 2023-07-29 16:13:23 +02:00
DominikDoom
638c073f37 Merge branch 'feature-native-lora-config' into main 2023-07-26 15:05:37 +02:00
DominikDoom
d11b53083b Update README.md 2023-07-26 15:04:59 +02:00
DominikDoom
571072eea4 Update README.md 2023-07-26 15:03:52 +02:00
DominikDoom
acfdbf1ed4 Fix for loras in base folder 2023-07-26 14:53:03 +02:00
DominikDoom
2e271aea5c Support for new webui 1.5.0 lora features
Prefers trigger words over the model-keyword ones
Uses custom per-lora multiplier if set
2023-07-26 14:38:51 +02:00
DominikDoom
b28497764f Check keywords for .pt and .ckpt loras too
Especially for custom keywords, the preset list mostly uses safetensors
2023-07-23 11:27:02 +02:00
DominikDoom
0d9d5f1e44 Safety check & remove log 2023-07-23 11:08:29 +02:00
DominikDoom
de3380818e Quote lora filenames to handle commas in filenames
Fixes #206
2023-07-23 11:05:44 +02:00
DominikDoom
acb85d7bb1 Make sure both temp folders exist 2023-07-23 09:01:26 +02:00
DominikDoom
39ea33be9f Fix encoding for load too
Fixes #204
2023-07-22 21:15:44 +02:00
DominikDoom
1cac893e63 Create temp folder first before touching if it doesn't exist
Fixes #203
2023-07-22 20:46:10 +02:00
DominikDoom
94823b871c Add missing utf-8 encoding to cache write
Fixes #202
2023-07-22 18:17:30 +02:00
DominikDoom
599ff8a6f2 Don't load lycos if they point to the same path as loras
E.g. when using --lyco-patch-lora to replace built-in Loras.
Prevents duplicate entries.
2023-07-22 17:52:51 +02:00
DominikDoom
6893113e0b Fix typo 2023-07-22 15:51:04 +02:00
32 changed files with 873702 additions and 200660 deletions

.gitignore
View File

@@ -1,2 +1,3 @@
tags/temp/
__pycache__/
tags/tag_frequency.db

View File

@@ -23,11 +23,12 @@ Booru style tag autocompletion for the AUTOMATIC1111 Stable Diffusion WebUI
# 📄 Description
Tag Autocomplete is an extension for the popular [AUTOMATIC1111 web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) for Stable Diffusion.
You can install it using the inbuilt available extensions list, clone the files manually as described [below](#-installation), or use a pre-packaged version from [Releases](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/releases).
It displays autocompletion hints for recognized tags from "image booru" boards such as Danbooru, which are primarily used for browsing Anime-style illustrations.
Since some Stable Diffusion models were trained using this information, for example [Waifu Diffusion](https://github.com/harubaru/waifu-diffusion) and many of the NAI-descendant models or merges, using exact tags in prompts can often improve composition and consistency.
Since most custom Stable Diffusion models were trained using this information or merged with ones that did, using exact tags in prompts can often improve composition and consistency, even if the model itself has a photorealistic style.
You can install it using the inbuilt available extensions list, clone the files manually as described [below](#-installation), or use a pre-packaged version from [Releases](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/releases).
Disclaimer: The default tag lists contain NSFW terms; please use them responsibly.
<br/>
@@ -74,6 +75,10 @@ Wildcard script support:
https://user-images.githubusercontent.com/34448969/200128031-22dd7c33-71d1-464f-ae36-5f6c8fd49df0.mp4
Extra Network preview support:
https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/assets/34448969/3c0cad84-fb5f-436d-b05a-28db35860d13
Dark and Light mode supported, including tag colors:
![results_dark](https://user-images.githubusercontent.com/34448969/200128214-3b6f21b4-9dda-4acf-820e-5df0285c30d6.png)
@@ -123,24 +128,43 @@ Completion for these types is triggered by typing `<`. By default it will show t
- Or `<lora:` and `<lyco:` respectively for the long form
- `<h:` or `<hypernet:` will only show Hypernetworks
### Live previews
Tag Autocomplete will now also show the preview images used for the cards in the Extra Networks menu in a small window next to the regular popup.
This enables quick comparisons and additional info for unclear filenames without having to stop typing to look it up in the webui menu.
It works for all supported extra network types that use preview images (Loras/Lycos, Embeddings & Hypernetworks). The preview window will stay hidden for normal tags or if no preview was found.
![extra_live_preview](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/assets/34448969/6a5d81e6-b3a0-407b-8bac-c9790f86016c)
### Lora / Lyco trigger word completion
This is an advanced feature that will try to add known trigger words on autocompleting a Lora/Lyco.
This feature will try to add known trigger words on autocompleting a Lora/Lyco.
It uses the list provided by the [model-keyword](https://github.com/mix1009/model-keyword/) extension, which thus needs to be installed to use this feature. The list is also regularly updated through it.
It primarily uses the list provided by the [model-keyword](https://github.com/mix1009/model-keyword/) extension, which thus needs to be installed to use this feature. The list is also regularly updated through it.
However, once installed, you can deactivate it if you want, since tag autocomplete only needs the local keyword lists it ships with, not the extension itself.
The used files are `lora-keywords.txt` and `lora-keywords-user.txt` in the model-keyword installation folder.
The used files are `lora-keyword.txt` and `lora-keyword-user.txt` in the model-keyword installation folder.
If the main file isn't found, the feature will simply deactivate itself; everything else should work normally.
To add custom mappings for unknown Loras, you can use the UI provided by model-keyword; it will automatically write them to `lora-keyword-user.txt` for you (and create the file if it doesn't exist).
The only issue is that it has no official support for the Lycoris extension and doesn't scan its folder for files, so to add them through the UI you will have to temporarily move them into the Lora model folder to be able to select them in model-keyword's dropdown.
Some are already included in the default list though, so trying it out first is advisable.
<details>
<summary>Walkthrough to add custom keywords</summary>
#### Note:
As of [v1.5.0](https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/a3ddf464a2ed24c999f67ddfef7969f8291567be), the webui provides a native method to add activation keywords for Lora through the Extra networks config UI.
These trigger words will always be preferred over the model-keyword ones and can be used without needing to install the model-keyword extension. This will, however, be limited to those manually added keywords. For automatic discovery of keywords, you will still need the big list provided by model-keyword.
![image](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/assets/34448969/4302c44e-c632-473d-a14a-76f164f966cb)
</details>
After having added your custom keywords, you will need to either restart the UI or use the "Refresh TAC temp files" setting button.
Custom trigger words can be added through two methods:
1. Using the extra networks UI (recommended):
- Only works with webui version v1.5.0 upwards, but much easier to use and works without the model-keyword extension
- This method requires no manual refresh
- <details>
<summary>Image example</summary>
![edit button](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/assets/34448969/22e95040-1d85-4b7e-a005-1918fafec807)
![lora_edit](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/assets/34448969/3e6c5245-d3bc-498d-8cd2-26eadf8882e7)
</details>
2. Through the model-keyword UI:
2. - One issue with this method is that it has no official support for the Lycoris extension and doesn't scan its folder for files, so to add them through the UI you will have to temporarily move them into the Lora model folder to be able to select them in model-keyword's dropdown. Some are already included in the default list though, so trying it out first is advisable.
- After having added your custom keywords, you will need to either restart the UI or use the "Refresh TAC temp files" setting button.
- <details>
<summary>Image example</summary>
![image](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/assets/34448969/4302c44e-c632-473d-a14a-76f164f966cb)
</details>
Sometimes the inserted keywords can be wrong due to a hash collision; however, model-keyword and tag autocomplete also take the name of the file into account if the collision is known.
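As a rough sketch of that resolution step: a `modelKeywordDict` Map appears in the globals diff further below, but its exact shape (hash to filename to keywords) and this helper are assumptions for illustration, not the extension's actual code.

```js
// Hypothetical lookup: resolve trigger keywords by model hash, using the
// filename to disambiguate when the hash is a known collision.
function lookupKeywords(modelKeywordDict, hash, filename) {
    const entry = modelKeywordDict.get(hash);
    if (!entry) return null;           // unknown model: insert nothing
    if (entry instanceof Map)          // known collision: pick by filename
        return entry.get(filename) ?? null;
    return entry;                      // unique hash: keywords directly
}
```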
@@ -295,6 +319,14 @@ If this option is turned on, it will show a `?` link next to the tag. Clicking t
![wikiLink](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/assets/34448969/733e1ba8-89e1-4c2b-8c4e-2d23352bd3d7)
</details>
<!-- Wiki links -->
<details>
<summary>Extra network live previews</summary>
This option enables a small preview window alongside the normal completion popup that shows the card preview also used in the extra networks tab for that file.
![extraNetworkPreviews](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/assets/34448969/72b5473f-563e-4238-a513-38b60ac87e96)
</details>
<!-- Insertion -->
<details>
<summary>Completion settings</summary>
@@ -443,7 +475,9 @@ You can also add this to your quicksettings bar to have the refresh button avail
# Translations
An additional file can be added in the translation section, which will be used to translate both tags and aliases and also enables searching by translation.
This file needs to be a CSV in the format `<English tag/alias>,<Translation>`, but for backwards compatibility with older files that used a three column format, you can turn on `Translation file uses old 3-column translation format instead of the new 2-column one` to support them. In that case, the second column will be unused and skipped during parsing.
This file needs to be a CSV in the format `<English tag/alias>,<Translation>`. Some older files use a three-column format, which requires a compatibility setting to be activated.
You can find it under `Settings > Tag autocomplete > Translation filename > Translation file uses old 3-column translation format instead of the new 2-column one`.
With it on, the second column will be unused and skipped during parsing.
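A minimal sketch of the difference between the two layouts; the helper name and the sample rows are invented for illustration, only the column rules come from the description above:

```js
// Hypothetical parser for one translation CSV line.
// 2-column format:        <English tag/alias>,<Translation>
// legacy 3-column format: <English tag/alias>,<unused>,<Translation>
function parseTranslationLine(line, oldFormat) {
    const cols = line.split(",");
    return oldFormat
        ? { tag: cols[0], translation: cols[2] } // second column is skipped
        : { tag: cols[0], translation: cols[1] };
}
// parseTranslationLine("1girl,一个女孩", false)       -> translation "一个女孩"
// parseTranslationLine("1girl,unused,一个女孩", true) -> translation "一个女孩"
```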
Example with Chinese translation:
@@ -453,6 +487,7 @@ Example with Chinese translation:
## List of translations
- [🇨🇳 Chinese tags](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/discussions/23) by @HalfMAI, using machine translation and manual correction for the most common tags (uses legacy format)
- [🇨🇳 Chinese tags](https://github.com/sgmklp/tag-for-autocompletion-with-translation) by @sgmklp, smaller set of manual translations based on https://github.com/zcyzcy88/TagTable
- [🇯🇵 Japanese tags](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/discussions/265) by @applemango, both machine and human translations available
> ### 🫵 I need your help!
> Translations are a community effort. If you have translated a tag file or want to create one, please open a Pull Request or Issue so your link can be added here.

View File

@@ -410,8 +410,9 @@ https://www.w3.org/TR/uievents-key/#named-key-attribute-value
![english-input](https://user-images.githubusercontent.com/34448969/200126513-bf6b3940-6e22-41b0-a369-f2b4640f87d6.png)
## 翻訳リスト
- [🇨🇳 Chinese tags](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/discussions/23) by @HalfMAI, 最も一般的なタグを機械翻訳と手作業で修正(レガシーフォーマットを使用)
- [🇨🇳 Chinese tags](https://github.com/sgmklp/tag-for-autocompletion-with-translation) by @sgmklp, [こちら](https://github.com/zcyzcy88/TagTable)をベースにして、より小さくした手動での翻訳セット。
- [🇨🇳 中国語訳](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/discussions/23) by @HalfMAI, 最も一般的なタグを機械翻訳と手作業で修正(レガシーフォーマットを使用)
- [🇨🇳 中国語訳](https://github.com/sgmklp/tag-for-autocompletion-with-translation) by @sgmklp, [こちら](https://github.com/zcyzcy88/TagTable)をベースにして、より小さくした手動での翻訳セット。
- [🇯🇵 日本語訳](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/discussions/265) by @applemango, 機械翻訳と人力翻訳の両方が利用可能。
> ### 🫵 あなたの助けが必要です!
> 翻訳はコミュニティの努力により支えられています。もしあなたがタグファイルを翻訳したことがある場合、または作成したい場合は、あなたの成果をここに追加できるように、Pull RequestまたはIssueを開いてください。

View File

@@ -13,6 +13,12 @@
你可以按照[以下方法](#installation)下载或拷贝文件,也可以使用[Releases](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/releases)中打包好的文件。
## 常见问题 & 已知缺陷:
- 很多中国用户都报告过此扩展名和其他扩展名的 JavaScript 文件被阻止的问题。
常见的罪魁祸首是 IDM / Internet Download Manager 浏览器插件,它似乎出于安全目的阻止了本地文件请求。
如果您安装了 IDM, 请确保在使用 webui 时禁用以下插件:
![image](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/assets/34448969/d253a4a9-71ab-4b5f-80c4-5aa926fc1fc9)
- 当`replaceUnderscores`选项开启时, 脚本只会替换Tag的一部分(如果Tag包含多个单词)。比如将`atago (azur lane)`中的`atago`修改为`taihou`并使用自动补全时, 会得到 `taihou (azur lane), lane)` 的结果, 因为脚本没有把后面的部分认为成同一个Tag。
## 演示与截图

View File

@@ -1,61 +1,204 @@
// Core components
var TAC_CFG = null;
var tagBasePath = "";
var modelKeywordPath = "";
// Create our TAC namespace
var TAC = TAC || {};
// Tag completion data loaded from files
var allTags = [];
var translations = new Map();
var extras = [];
// Same for tag-likes
var wildcardFiles = [];
var wildcardExtFiles = [];
var yamlWildcards = [];
var embeddings = [];
var hypernetworks = [];
var loras = [];
var lycos = [];
var modelKeywordDict = new Map();
var chants = [];
/**
* @typedef {Object} TAC.CFG
* @property {string} tagFile - Tag filename
* @property {{ global: boolean, txt2img: boolean, img2img: boolean, negativePrompts: boolean, thirdParty: boolean, modelList: string, modelListMode: "Blacklist"|"Whitelist" }} activeIn - Settings for which parts of the UI the tag completion is active in.
* @property {boolean} slidingPopup - Move completion popup together with text cursor
* @property {number} maxResults - Maximum results
* @property {boolean} showAllResults - Show all results
* @property {number} resultStepLength - How many results to load at once
* @property {number} delayTime - Time in ms to wait before triggering completion again
* @property {boolean} useWildcards - Search for wildcards
* @property {boolean} sortWildcardResults - Sort wildcard file contents alphabetically
* @property {boolean} useEmbeddings - Search for embeddings
* @property {boolean} includeEmbeddingsInNormalResults - Include embeddings in normal tag results
* @property {boolean} useHypernetworks - Search for hypernetworks
* @property {boolean} useLoras - Search for Loras
* @property {boolean} useLycos - Search for LyCORIS/LoHa
* @property {boolean} useLoraPrefixForLycos - Use the '<lora:' prefix instead of '<lyco:' for models in the LyCORIS folder
* @property {boolean} showWikiLinks - Show '?' next to tags, linking to its Danbooru or e621 wiki page
* @property {boolean} showExtraNetworkPreviews - Show preview thumbnails for extra networks if available
* @property {string} modelSortOrder - Model sort order
* @property {boolean} frequencySort - Locally record tag usage and sort frequent tags higher
* @property {string} frequencyFunction - Function to use for frequency sorting
* @property {number} frequencyMinCount - Minimum number of uses for a tag to be considered frequent
* @property {number} frequencyMaxAge - Maximum days since last use for a tag to be considered frequent
* @property {number} frequencyRecommendCap - Maximum number of recommended tags
* @property {boolean} frequencyIncludeAlias - Frequency sorting matches aliases for frequent tags
* @property {boolean} useStyleVars - Search for webui style names
* @property {boolean} replaceUnderscores - Replace underscores with spaces on insertion
* @property {string} replaceUnderscoresExclusionList - Underscore replacement exclusion list
* @property {boolean} escapeParentheses - Escape parentheses on insertion
* @property {boolean} appendComma - Append comma on tag autocompletion
* @property {boolean} appendSpace - Append space on tag autocompletion
* @property {boolean} alwaysSpaceAtEnd - Always append space if inserting at the end of the textbox
* @property {string} wildcardCompletionMode - How to complete nested wildcard paths
* @property {string} modelKeywordCompletion - Try to add known trigger words for LORA/LyCO models
* @property {string} modelKeywordLocation - Where to insert the trigger keyword
* @property {string} wcWrap - Wrapper characters for wildcard tags.
* @property {{ searchByAlias: boolean, onlyShowAlias: boolean }} alias - Alias-related settings.
* @property {{ translationFile: string, oldFormat: boolean, searchByTranslation: boolean, liveTranslation: boolean }} translation - Translation-related settings.
* @property {{ extraFile: string, addMode: "Insert before"|"Insert after" }} extra - Extra file-related settings.
* @property {string} chantFile - Chant filename
* @property {number} extraNetworksDefaultMultiplier - Default multiplier for extra networks.
* @property {string} extraNetworksSeparator - Separator used for extra networks.
* @property {{ MoveUp: string, MoveDown: string, JumpUp: string, JumpDown: string, JumpToStart: string, JumpToEnd: string, ChooseSelected: string, ChooseFirstOrSelected: string, Close: string }} keymap - Custom key mappings for tag completion.
* @property {{ [filename: string]: { [category: string]: string[] } }} colorMap - Color mapping for tag categories.
*/
/** @type {TAC.CFG} */
TAC.CFG = {
// Main tag file
tagFile: "",
// Active in settings
activeIn: {
global: true,
txt2img: true,
img2img: true,
negativePrompts: true,
thirdParty: true,
modelList: "",
modelListMode: "Blacklist",
},
// Results related settings
slidingPopup: true,
maxResults: 8,
showAllResults: false,
resultStepLength: 500,
delayTime: 100,
useWildcards: true,
sortWildcardResults: true,
useEmbeddings: true,
includeEmbeddingsInNormalResults: true,
useHypernetworks: true,
useLoras: true,
useLycos: true,
useLoraPrefixForLycos: true,
showWikiLinks: false,
showExtraNetworkPreviews: true,
modelSortOrder: "Name",
frequencySort: true,
frequencyFunction: "Logarithmic (weak)",
frequencyMinCount: 3,
frequencyMaxAge: 30,
frequencyRecommendCap: 10,
frequencyIncludeAlias: false,
useStyleVars: false,
// Insertion related settings
replaceUnderscores: true,
replaceUnderscoresExclusionList: "0_0,(o)_(o),+_+,+_-,._.,<o>_<o>,<|>_<|>,=_=,>_<,3_3,6_9,>_o,@_@,^_^,o_o,u_u,x_x,|_|,||_||",
escapeParentheses: true,
appendComma: true,
appendSpace: true,
alwaysSpaceAtEnd: true,
wildcardCompletionMode: "To next folder level",
modelKeywordCompletion: "Never",
modelKeywordLocation: "Start of prompt",
wcWrap: "__", // to support custom wrapper chars set by dp_parser
// Alias settings
alias: {
searchByAlias: true,
onlyShowAlias: false,
},
// Translation settings
translation: {
translationFile: "None",
oldFormat: false,
searchByTranslation: true,
liveTranslation: false,
},
// Extra file settings
extra: {
extraFile: "extra-quality-tags.csv",
addMode: "Insert before",
},
// Chant file settings
chantFile: "demo-chants.json",
// Settings not from tac but still used by the script
extraNetworksDefaultMultiplier: 1.0,
extraNetworksSeparator: ", ",
// Custom mapping settings
keymap: {
MoveUp: "ArrowUp",
MoveDown: "ArrowDown",
JumpUp: "PageUp",
JumpDown: "PageDown",
JumpToStart: "Home",
JumpToEnd: "End",
ChooseSelected: "Enter",
ChooseFirstOrSelected: "Tab",
Close: "Escape",
},
colorMap: {
filename: { category: ["light", "dark"] },
},
};
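// --- Illustration (not part of the diff): with the config hoisted to the
// top-level TAC.CFG, insertion code can consult settings directly. The function
// below is invented; only the setting names come from the object above.
function demoFormatInsertedTag(tag) {
    let text = tag;
    const excluded = TAC.CFG.replaceUnderscoresExclusionList.split(",");
    if (TAC.CFG.replaceUnderscores && !excluded.includes(tag))
        text = text.replaceAll("_", " ");
    if (TAC.CFG.escapeParentheses)
        text = text.replaceAll("(", "\\(").replaceAll(")", "\\)");
    if (TAC.CFG.appendComma) text += ",";
    if (TAC.CFG.appendSpace) text += " ";
    return text;
}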
// Selected model info for black/whitelisting
var currentModelHash = "";
var currentModelName = "";
TAC.Globals = new (function () {
// Core components
this.tagBasePath = "";
this.modelKeywordPath = "";
this.selfTrigger = false;
// Current results
var results = [];
var resultCount = 0;
// Tag completion data loaded from files
this.allTags = [];
this.translations = new Map();
this.extras = [];
// Same for tag-likes
this.wildcardFiles = [];
this.wildcardExtFiles = [];
this.yamlWildcards = [];
this.umiWildcards = [];
this.embeddings = [];
this.hypernetworks = [];
this.loras = [];
this.lycos = [];
this.modelKeywordDict = new Map();
this.chants = [];
this.styleNames = [];
// Relevant for parsing
var previousTags = [];
var tagword = "";
var originalTagword = "";
let hideBlocked = false;
// Selected model info for black/whitelisting
this.currentModelHash = "";
this.currentModelName = "";
// Tag selection for keyboard navigation
var selectedTag = null;
var oldSelectedTag = null;
// Current results
this.results = [];
this.resultCount = 0;
// Lora keyword undo/redo history
var textBeforeKeywordInsertion = "";
var textAfterKeywordInsertion = "";
var lastEditWasKeywordInsertion = false;
var keywordInsertionUndone = false;
// Relevant for parsing
this.previousTags = [];
this.tagword = "";
this.originalTagword = "";
this.hideBlocked = false;
// UMI
var umiPreviousTags = [];
// Tag selection for keyboard navigation
this.selectedTag = null;
this.oldSelectedTag = null;
this.resultCountBeforeNormalTags = 0;
// Lora keyword undo/redo history
this.textBeforeKeywordInsertion = "";
this.textAfterKeywordInsertion = "";
this.lastEditWasKeywordInsertion = false;
this.keywordInsertionUndone = false;
// UMI
this.umiPreviousTags = [];
})();
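// --- Illustration (not part of the diff): the `new (function () { ... })()`
// construct above builds a one-off instance, so `this.` assignments become
// shared namespace members while `var`/`let` locals stay closure-private.
// A condensed demonstration of the same pattern:
const DemoNamespace = new (function () {
    this.visible = 1;         // reachable as DemoNamespace.visible
    let hidden = 2;           // private, like the parser-local state above
    this.peek = () => hidden; // controlled access to the private value
})();
// DemoNamespace.visible === 1; DemoNamespace.hidden === undefined; DemoNamespace.peek() === 2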
/// Extendability system:
/// Provides "queues" for other files of the script (or really any js)
/// to add functions to be called at certain points in the script.
/// Similar to a callback system, but primitive.
TAC.Ext = new (function () {
// Queues
this.QUEUE_AFTER_INSERT = [];
this.QUEUE_AFTER_SETUP = [];
this.QUEUE_FILE_LOAD = [];
this.QUEUE_AFTER_CONFIG_CHANGE = [];
this.QUEUE_SANITIZE = [];
// Queues
const QUEUE_AFTER_INSERT = [];
const QUEUE_AFTER_SETUP = [];
const QUEUE_FILE_LOAD = [];
const QUEUE_AFTER_CONFIG_CHANGE = [];
const QUEUE_SANITIZE = [];
// List of parsers to try
const PARSERS = [];
// List of parsers to try
this.PARSERS = [];
})();
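// --- Illustration (not part of the diff): other scripts extend TAC by pushing
// callbacks into these queues. The diff shows both a public `this.` form and a
// private `const` form, so whether the queues stay reachable as TAC.Ext.QUEUE_*
// is an assumption here; the callback body is invented.
TAC.Ext.QUEUE_AFTER_INSERT.push(function (textArea) {
    console.log("third-party hook: tag inserted into", textArea);
});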

View File

@@ -1,21 +1,21 @@
class FunctionNotOverriddenError extends Error {
TAC.FunctionNotOverriddenError = class FunctionNotOverriddenError extends Error {
constructor(message = "", ...args) {
super(message, ...args);
this.message = message + " is an abstract base function and must be overwritten.";
}
}
class BaseTagParser {
TAC.BaseTagParser = class BaseTagParser {
triggerCondition = null;
constructor (triggerCondition) {
if (new.target === BaseTagParser) {
if (new.target === TAC.BaseTagParser) {
throw new TypeError("Cannot construct abstract BaseCompletionParser directly");
}
this.triggerCondition = triggerCondition;
}
parse() {
throw new FunctionNotOverriddenError("parse()");
throw new TAC.FunctionNotOverriddenError("parse()");
}
}
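// --- Illustration (not part of the diff): a concrete parser would subclass
// TAC.BaseTagParser and register itself. The trigger condition's signature,
// the "&" prefix, and the PARSERS registration are assumptions;
// TAC.AutocompleteResult and TAC.ResultType appear in a later file diff.
class DemoParser extends TAC.BaseTagParser {
    constructor() {
        super((tagword) => tagword.startsWith("&")); // invented condition
    }
    parse(tagword) {
        // Return results instead of throwing like the abstract base
        return [new TAC.AutocompleteResult(tagword.slice(1), TAC.ResultType.tag)];
    }
}
TAC.Ext.PARSERS.push(new DemoParser());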

View File

@@ -1,145 +1,146 @@
// From https://github.com/component/textarea-caret-position
// We'll copy the properties below into the mirror div.
// Note that some browsers, such as Firefox, do not concatenate properties
// into their shorthand (e.g. padding-top, padding-bottom etc. -> padding),
// so we have to list every single property explicitly.
var properties = [
'direction', // RTL support
'boxSizing',
'width', // on Chrome and IE, exclude the scrollbar, so the mirror div wraps exactly as the textarea does
'height',
'overflowX',
'overflowY', // copy the scrollbar for IE
TAC.getCaretCoordinates = class CaretUtils {
// We'll copy the properties below into the mirror div.
// Note that some browsers, such as Firefox, do not concatenate properties
// into their shorthand (e.g. padding-top, padding-bottom etc. -> padding),
// so we have to list every single property explicitly.
static #properties = [
"direction", // RTL support
"boxSizing",
"width", // on Chrome and IE, exclude the scrollbar, so the mirror div wraps exactly as the textarea does
"height",
"overflowX",
"overflowY", // copy the scrollbar for IE
'borderTopWidth',
'borderRightWidth',
'borderBottomWidth',
'borderLeftWidth',
'borderStyle',
"borderTopWidth",
"borderRightWidth",
"borderBottomWidth",
"borderLeftWidth",
"borderStyle",
'paddingTop',
'paddingRight',
'paddingBottom',
'paddingLeft',
"paddingTop",
"paddingRight",
"paddingBottom",
"paddingLeft",
// https://developer.mozilla.org/en-US/docs/Web/CSS/font
'fontStyle',
'fontVariant',
'fontWeight',
'fontStretch',
'fontSize',
'fontSizeAdjust',
'lineHeight',
'fontFamily',
// https://developer.mozilla.org/en-US/docs/Web/CSS/font
"fontStyle",
"fontVariant",
"fontWeight",
"fontStretch",
"fontSize",
"fontSizeAdjust",
"lineHeight",
"fontFamily",
'textAlign',
'textTransform',
'textIndent',
'textDecoration', // might not make a difference, but better be safe
"textAlign",
"textTransform",
"textIndent",
"textDecoration", // might not make a difference, but better be safe
'letterSpacing',
'wordSpacing',
"letterSpacing",
"wordSpacing",
'tabSize',
'MozTabSize'
"tabSize",
"MozTabSize",
];
];
static #isBrowser = typeof window !== "undefined";
static #isFirefox = this.#isBrowser && window.mozInnerScreenX != null;
var isBrowser = (typeof window !== 'undefined');
var isFirefox = (isBrowser && window.mozInnerScreenX != null);
function getCaretCoordinates(element, position, options) {
if (!isBrowser) {
throw new Error('textarea-caret-position#getCaretCoordinates should only be called in a browser');
}
var debug = options && options.debug || false;
if (debug) {
var el = document.querySelector('#input-textarea-caret-position-mirror-div');
if (el) el.parentNode.removeChild(el);
}
// The mirror div will replicate the textarea's style
var div = document.createElement('div');
div.id = 'input-textarea-caret-position-mirror-div';
document.body.appendChild(div);
var style = div.style;
var computed = window.getComputedStyle ? window.getComputedStyle(element) : element.currentStyle; // currentStyle for IE < 9
var isInput = element.nodeName === 'INPUT';
// Default textarea styles
style.whiteSpace = 'pre-wrap';
if (!isInput)
style.wordWrap = 'break-word'; // only for textarea-s
// Position off-screen
style.position = 'absolute'; // required to return coordinates properly
if (!debug)
style.visibility = 'hidden'; // not 'display: none' because we want rendering
// Transfer the element's properties to the div
properties.forEach(function (prop) {
if (isInput && prop === 'lineHeight') {
// Special case for <input>s because text is rendered centered and line height may be != height
if (computed.boxSizing === "border-box") {
var height = parseInt(computed.height);
var outerHeight =
parseInt(computed.paddingTop) +
parseInt(computed.paddingBottom) +
parseInt(computed.borderTopWidth) +
parseInt(computed.borderBottomWidth);
var targetHeight = outerHeight + parseInt(computed.lineHeight);
if (height > targetHeight) {
style.lineHeight = height - outerHeight + "px";
} else if (height === targetHeight) {
style.lineHeight = computed.lineHeight;
} else {
style.lineHeight = 0;
static getCaretCoordinates(element, position, options) {
if (!CaretUtils.#isBrowser) {
throw new Error(
"textarea-caret-position#getCaretCoordinates should only be called in a browser"
);
}
} else {
style.lineHeight = computed.height;
}
} else {
style[prop] = computed[prop];
var debug = (options && options.debug) || false;
if (debug) {
var el = document.querySelector("#input-textarea-caret-position-mirror-div");
if (el) el.parentNode.removeChild(el);
}
// The mirror div will replicate the textarea's style
var div = document.createElement("div");
div.id = "input-textarea-caret-position-mirror-div";
document.body.appendChild(div);
var style = div.style;
var computed = window.getComputedStyle
? window.getComputedStyle(element)
: element.currentStyle; // currentStyle for IE < 9
var isInput = element.nodeName === "INPUT";
// Default textarea styles
style.whiteSpace = "pre-wrap";
if (!isInput) style.wordWrap = "break-word"; // only for textarea-s
// Position off-screen
style.position = "absolute"; // required to return coordinates properly
if (!debug) style.visibility = "hidden"; // not 'display: none' because we want rendering
// Transfer the element's properties to the div
CaretUtils.#properties.forEach(function (prop) {
if (isInput && prop === "lineHeight") {
// Special case for <input>s because text is rendered centered and line height may be != height
if (computed.boxSizing === "border-box") {
var height = parseInt(computed.height);
var outerHeight =
parseInt(computed.paddingTop) +
parseInt(computed.paddingBottom) +
parseInt(computed.borderTopWidth) +
parseInt(computed.borderBottomWidth);
var targetHeight = outerHeight + parseInt(computed.lineHeight);
if (height > targetHeight) {
style.lineHeight = height - outerHeight + "px";
} else if (height === targetHeight) {
style.lineHeight = computed.lineHeight;
} else {
style.lineHeight = 0;
}
} else {
style.lineHeight = computed.height;
}
} else {
style[prop] = computed[prop];
}
});
if (CaretUtils.#isFirefox) {
// Firefox lies about the overflow property for textareas: https://bugzilla.mozilla.org/show_bug.cgi?id=984275
if (element.scrollHeight > parseInt(computed.height)) style.overflowY = "scroll";
} else {
style.overflow = "hidden"; // for Chrome to not render a scrollbar; IE keeps overflowY = 'scroll'
}
div.textContent = element.value.substring(0, position);
// The second special handling for input type="text" vs textarea:
// spaces need to be replaced with non-breaking spaces - http://stackoverflow.com/a/13402035/1269037
if (isInput) div.textContent = div.textContent.replace(/\s/g, "\u00a0");
var span = document.createElement("span");
// Wrapping must be replicated *exactly*, including when a long word gets
// onto the next line, with whitespace at the end of the line before (#7).
// The *only* reliable way to do that is to copy the *entire* rest of the
// textarea's content into the <span> created at the caret position.
// For inputs, just '.' would be enough, but no need to bother.
span.textContent = element.value.substring(position) || "."; // || because a completely empty faux span doesn't render at all
div.appendChild(span);
var coordinates = {
top: span.offsetTop + parseInt(computed["borderTopWidth"]),
left: span.offsetLeft + parseInt(computed["borderLeftWidth"]),
height: parseInt(computed["lineHeight"]),
};
if (debug) {
span.style.backgroundColor = "#aaa";
} else {
document.body.removeChild(div);
}
return coordinates;
}
});
if (isFirefox) {
// Firefox lies about the overflow property for textareas: https://bugzilla.mozilla.org/show_bug.cgi?id=984275
if (element.scrollHeight > parseInt(computed.height))
style.overflowY = 'scroll';
} else {
style.overflow = 'hidden'; // for Chrome to not render a scrollbar; IE keeps overflowY = 'scroll'
}
div.textContent = element.value.substring(0, position);
// The second special handling for input type="text" vs textarea:
// spaces need to be replaced with non-breaking spaces - http://stackoverflow.com/a/13402035/1269037
if (isInput)
div.textContent = div.textContent.replace(/\s/g, '\u00a0');
var span = document.createElement('span');
// Wrapping must be replicated *exactly*, including when a long word gets
// onto the next line, with whitespace at the end of the line before (#7).
// The *only* reliable way to do that is to copy the *entire* rest of the
// textarea's content into the <span> created at the caret position.
// For inputs, just '.' would be enough, but no need to bother.
span.textContent = element.value.substring(position) || '.'; // || because a completely empty faux span doesn't render at all
div.appendChild(span);
var coordinates = {
top: span.offsetTop + parseInt(computed['borderTopWidth']),
left: span.offsetLeft + parseInt(computed['borderLeftWidth']),
height: parseInt(computed['lineHeight'])
};
if (debug) {
span.style.backgroundColor = '#aaa';
} else {
document.body.removeChild(div);
}
return coordinates;
}
}.getCaretCoordinates;
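// --- Illustration (not part of the diff): only the static method is exported,
// so callers are unaffected by the class wrapper. The selector is arbitrary.
const demoArea = document.querySelector("#txt2img_prompt textarea");
const caret = TAC.getCaretCoordinates(demoArea, demoArea.selectionStart);
// caret.top / caret.left / caret.height position the completion popup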

View File

@@ -1,31 +1,35 @@
// Result data type for cleaner use of optional completion result properties
// Type enum
const ResultType = Object.freeze({
TAC.ResultType = Object.freeze({
"tag": 1,
"extra": 2,
"embedding": 3,
"wildcardTag": 4,
"wildcardFile": 5,
"yamlWildcard": 6,
"hypernetwork": 7,
"lora": 8,
"lyco": 9,
"chant": 10
"umiWildcard": 7,
"hypernetwork": 8,
"lora": 9,
"lyco": 10,
"chant": 11,
"styleName": 12
});
// Class to hold result data and annotations to make it clearer to use
class AutocompleteResult {
TAC.AutocompleteResult = class AutocompleteResult {
// Main properties
text = "";
type = ResultType.tag;
type = TAC.ResultType.tag;
// Additional info, only used in some cases
category = null;
count = null;
count = Number.MAX_SAFE_INTEGER;
usageBias = null;
aliases = null;
meta = null;
hash = null;
sortKey = null;
// Constructor
constructor(text, type) {
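
The hunk cuts off at the constructor; for illustration, a sketch of how a parser builds one of these results under the new namespace (field values are made up):

let result = new TAC.AutocompleteResult("1girl", TAC.ResultType.tag);
result.category = 0; // tag category, e.g. "general"
result.count = 5392000; // post count; defaults to Number.MAX_SAFE_INTEGER when unknown
result.meta = "Tag";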

@@ -1,168 +1,218 @@
// Utility functions to select text areas the script should work on,
// including third party options.
// Supported third party options so far:
// - Dataset Tag Editor
// Core text area selectors
const core = [
"#txt2img_prompt > label > textarea",
"#img2img_prompt > label > textarea",
"#txt2img_neg_prompt > label > textarea",
"#img2img_neg_prompt > label > textarea",
".prompt > label > textarea"
];
TAC.TextAreas = new (function () {
// Core text area selectors
const core = [
"#txt2img_prompt > label > textarea",
"#img2img_prompt > label > textarea",
"#txt2img_neg_prompt > label > textarea",
"#img2img_neg_prompt > label > textarea",
".prompt > label > textarea",
"#txt2img_edit_style_prompt > label > textarea",
"#txt2img_edit_style_neg_prompt > label > textarea",
"#img2img_edit_style_prompt > label > textarea",
"#img2img_edit_style_neg_prompt > label > textarea",
];
// Third party text area selectors
const thirdParty = {
"dataset-tag-editor": {
"base": "#tab_dataset_tag_editor_interface",
"hasIds": false,
"selectors": [
"Caption of Selected Image",
"Interrogate Result",
"Edit Caption",
"Edit Tags"
]
},
"image browser": {
"base": "#tab_image_browser",
"hasIds": false,
"selectors": [
"Filename keyword search",
"EXIF keyword search"
]
},
"tab_tagger": {
"base": "#tab_tagger",
"hasIds": false,
"selectors": [
"Additional tags (split by comma)",
"Exclude tags (split by comma)"
]
},
"tiled-diffusion-t2i": {
"base": "#txt2img_script_container",
"hasIds": true,
"onDemand": true,
"selectors": [
"[id^=MD-t2i][id$=prompt] textarea",
"[id^=MD-t2i][id$=prompt] input[type='text']"
]
},
"tiled-diffusion-i2i": {
"base": "#img2img_script_container",
"hasIds": true,
"onDemand": true,
"selectors": [
"[id^=MD-i2i][id$=prompt] textarea",
"[id^=MD-i2i][id$=prompt] input[type='text']"
]
// Third party text area selectors
const thirdParty = {
"dataset-tag-editor": {
base: "#tab_dataset_tag_editor_interface",
hasIds: false,
selectors: [
"Caption of Selected Image",
"Interrogate Result",
"Edit Caption",
"Edit Tags",
],
},
"image browser": {
base: "#tab_image_browser",
hasIds: false,
selectors: ["Filename keyword search", "EXIF keyword search"],
},
tab_tagger: {
base: "#tab_tagger",
hasIds: false,
selectors: ["Additional tags (split by comma)", "Exclude tags (split by comma)"],
},
"tiled-diffusion-t2i": {
base: "#txt2img_script_container",
hasIds: true,
onDemand: true,
selectors: [
"[id^=MD-t2i][id$=prompt] textarea",
"[id^=MD-t2i][id$=prompt] input[type='text']",
],
},
"tiled-diffusion-i2i": {
base: "#img2img_script_container",
hasIds: true,
onDemand: true,
selectors: [
"[id^=MD-i2i][id$=prompt] textarea",
"[id^=MD-i2i][id$=prompt] input[type='text']",
],
},
"adetailer-t2i": {
base: "#txt2img_script_container",
hasIds: true,
onDemand: true,
selectors: [
"[id^=script_txt2img_adetailer_ad_prompt] textarea",
"[id^=script_txt2img_adetailer_ad_negative_prompt] textarea",
],
},
"adetailer-i2i": {
base: "#img2img_script_container",
hasIds: true,
onDemand: true,
selectors: [
"[id^=script_img2img_adetailer_ad_prompt] textarea",
"[id^=script_img2img_adetailer_ad_negative_prompt] textarea",
],
},
"deepdanbooru-object-recognition": {
base: "#tab_deepdanboru_object_recg_tab",
hasIds: false,
selectors: ["Found tags"],
},
TIPO: {
base: "#tab_txt2img",
hasIds: false,
selectors: ["Tag Prompt"],
},
};
this.getTextAreas = function () {
// First get all core text areas
let textAreas = [...gradioApp().querySelectorAll(core.join(", "))];
for (const [key, entry] of Object.entries(thirdParty)) {
if (entry.hasIds) {
// If the entry has proper ids, we can just select them
textAreas = textAreas.concat([
...gradioApp().querySelectorAll(entry.selectors.join(", ")),
]);
} else {
// Otherwise, we have to find the text areas by their adjacent labels
let base = gradioApp().querySelector(entry.base);
// Safety check
if (!base) continue;
let allTextAreas = [...base.querySelectorAll("textarea, input[type='text']")];
// Filter the text areas where the adjacent label matches one of the selectors
let matchingTextAreas = allTextAreas.filter((ta) =>
[...ta.parentElement.childNodes].some((x) =>
entry.selectors.includes(x.innerText)
)
);
textAreas = textAreas.concat(matchingTextAreas);
}
}
return textAreas;
}
}
function getTextAreas() {
// First get all core text areas
let textAreas = [...gradioApp().querySelectorAll(core.join(", "))];
this.addOnDemandObservers = function (setupFunction) {
for (const [key, entry] of Object.entries(thirdParty)) {
if (!entry.onDemand) continue;
for (const [key, entry] of Object.entries(thirdParty)) {
if (entry.hasIds) { // If the entry has proper ids, we can just select them
textAreas = textAreas.concat([...gradioApp().querySelectorAll(entry.selectors.join(", "))]);
} else { // Otherwise, we have to find the text areas by their adjacent labels
let base = gradioApp().querySelector(entry.base);
// Safety check
if (!base) continue;
let allTextAreas = [...base.querySelectorAll("textarea, input[type='text']")];
let accordions = [...base?.querySelectorAll(".gradio-accordion")];
if (!accordions) continue;
// Filter the text areas where the adjacent label matches one of the selectors
let matchingTextAreas = allTextAreas.filter(ta => [...ta.parentElement.childNodes].some(x => entry.selectors.includes(x.innerText)));
textAreas = textAreas.concat(matchingTextAreas);
}
};
return textAreas;
}
function addOnDemandObservers(setupFunction) {
for (const [key, entry] of Object.entries(thirdParty)) {
if (!entry.onDemand) continue;
let base = gradioApp().querySelector(entry.base);
if (!base) continue;
let accordions = [...base?.querySelectorAll(".gradio-accordion")];
if (!accordions) continue;
accordions.forEach(acc => {
let accObserver = new MutationObserver((mutationList, observer) => {
for (const mutation of mutationList) {
if (mutation.type === "childList") {
let newChildren = mutation.addedNodes;
if (!newChildren) {
accObserver.disconnect();
continue;
}
newChildren.forEach(child => {
if (child.classList.contains("gradio-accordion") || child.querySelector(".gradio-accordion")) {
let newAccordions = [...child.querySelectorAll(".gradio-accordion")];
newAccordions.forEach(nAcc => accObserver.observe(nAcc, { childList: true }));
accordions.forEach((acc) => {
let accObserver = new MutationObserver((mutationList, observer) => {
for (const mutation of mutationList) {
if (mutation.type === "childList") {
let newChildren = mutation.addedNodes;
if (!newChildren) {
accObserver.disconnect();
continue;
}
});
if (entry.hasIds) { // If the entry has proper ids, we can just select them
[...gradioApp().querySelectorAll(entry.selectors.join(", "))].forEach(x => setupFunction(x));
} else { // Otherwise, we have to find the text areas by their adjacent labels
let base = gradioApp().querySelector(entry.base);
// Safety check
if (!base) continue;
let allTextAreas = [...base.querySelectorAll("textarea, input[type='text']")];
// Filter the text areas where the adjacent label matches one of the selectors
let matchingTextAreas = allTextAreas.filter(ta => [...ta.parentElement.childNodes].some(x => entry.selectors.includes(x.innerText)));
matchingTextAreas.forEach(x => setupFunction(x));
newChildren.forEach((child) => {
if (
child.classList.contains("gradio-accordion") ||
child.querySelector(".gradio-accordion")
) {
let newAccordions = [
...child.querySelectorAll(".gradio-accordion"),
];
newAccordions.forEach((nAcc) =>
accObserver.observe(nAcc, { childList: true })
);
}
});
if (entry.hasIds) {
// If the entry has proper ids, we can just select them
[
...gradioApp().querySelectorAll(entry.selectors.join(", ")),
].forEach((x) => setupFunction(x));
} else {
// Otherwise, we have to find the text areas by their adjacent labels
let base = gradioApp().querySelector(entry.base);
// Safety check
if (!base) continue;
let allTextAreas = [
...base.querySelectorAll("textarea, input[type='text']"),
];
// Filter the text areas where the adjacent label matches one of the selectors
let matchingTextAreas = allTextAreas.filter((ta) =>
[...ta.parentElement.childNodes].some((x) =>
entry.selectors.includes(x.innerText)
)
);
matchingTextAreas.forEach((x) => setupFunction(x));
}
}
}
}
});
accObserver.observe(acc, { childList: true });
});
accObserver.observe(acc, { childList: true });
});
};
}
const thirdPartyIdSet = new Set();
// Get the identifier for the text area to differentiate between positive and negative
function getTextAreaIdentifier(textArea) {
let txt2img_p = gradioApp().querySelector('#txt2img_prompt > label > textarea');
let txt2img_n = gradioApp().querySelector('#txt2img_neg_prompt > label > textarea');
let img2img_p = gradioApp().querySelector('#img2img_prompt > label > textarea');
let img2img_n = gradioApp().querySelector('#img2img_neg_prompt > label > textarea');
let modifier = "";
switch (textArea) {
case txt2img_p:
modifier = ".txt2img.p";
break;
case txt2img_n:
modifier = ".txt2img.n";
break;
case img2img_p:
modifier = ".img2img.p";
break;
case img2img_n:
modifier = ".img2img.n";
break;
default:
// If the text area is not a core text area, it must be a third party text area
// Add it to the set of third party text areas and get its index as a unique identifier
if (!thirdPartyIdSet.has(textArea))
thirdPartyIdSet.add(textArea);
modifier = `.thirdParty.ta${[...thirdPartyIdSet].indexOf(textArea)}`;
break;
}
}
return modifier;
}
const thirdPartyIdSet = new Set();
// Get the identifier for the text area to differentiate between positive and negative
this.getTextAreaIdentifier = function (textArea) {
let txt2img_p = gradioApp().querySelector("#txt2img_prompt > label > textarea");
let txt2img_n = gradioApp().querySelector("#txt2img_neg_prompt > label > textarea");
let img2img_p = gradioApp().querySelector("#img2img_prompt > label > textarea");
let img2img_n = gradioApp().querySelector("#img2img_neg_prompt > label > textarea");
let modifier = "";
switch (textArea) {
case txt2img_p:
modifier = ".txt2img.p";
break;
case txt2img_n:
modifier = ".txt2img.n";
break;
case img2img_p:
modifier = ".img2img.p";
break;
case img2img_n:
modifier = ".img2img.n";
break;
default:
// If the text area is not a core text area, it must be a third party text area
// Add it to the set of third party text areas and get its index as a unique identifier
if (!thirdPartyIdSet.has(textArea)) thirdPartyIdSet.add(textArea);
modifier = `.thirdParty.ta${[...thirdPartyIdSet].indexOf(textArea)}`;
break;
}
return modifier;
}
})();
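
Both methods are exposed on the TAC.TextAreas instance; a short usage sketch:

// Collect all known prompt fields and log their identifiers.
TAC.TextAreas.getTextAreas().forEach((ta) => {
const id = TAC.TextAreas.getTextAreaIdentifier(ta);
console.log(id); // ".txt2img.p", ".img2img.n", or ".thirdParty.ta0" etc.
});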

@@ -1,173 +1,624 @@
// Utility functions for tag autocomplete
TAC.Utils = class TacUtils {
/**
* Parses a CSV file into a 2D array. Doesn't use regex, so it is very lightweight.
* Newlines inside quoted fields are ignored, since entries are expected to be single lines and parsing would otherwise break on unclosed quotes
* @param {String} str - The CSV string to parse (likely from a file with multiple lines)
* @returns {string[][]} A 2D array of CSV entries (rows and columns of that row)
*/
static parseCSV(str) {
const arr = [];
let quote = false; // 'true' means we're inside a quoted field
// Parse the CSV file into a 2D array. Doesn't use regex, so it is very lightweight.
function parseCSV(str) {
var arr = [];
var quote = false; // 'true' means we're inside a quoted field
// Iterate over each character, keep track of current row and column (of the returned array)
for (let row = 0, col = 0, c = 0; c < str.length; c++) {
let cc = str[c],
nc = str[c + 1]; // Current character, next character
arr[row] = arr[row] || []; // Create a new row if necessary
arr[row][col] = arr[row][col] || ""; // Create a new column (start with empty string) if necessary
// Iterate over each character, keep track of current row and column (of the returned array)
for (var row = 0, col = 0, c = 0; c < str.length; c++) {
var cc = str[c], nc = str[c + 1]; // Current character, next character
arr[row] = arr[row] || []; // Create a new row if necessary
arr[row][col] = arr[row][col] || ''; // Create a new column (start with empty string) if necessary
// If the current character is a quotation mark, and we're inside a
// quoted field, and the next character is also a quotation mark,
// add a quotation mark to the current column and skip the next character
if (cc == '"' && quote && nc == '"') {
arr[row][col] += cc;
++c;
continue;
}
// If the current character is a quotation mark, and we're inside a
// quoted field, and the next character is also a quotation mark,
// add a quotation mark to the current column and skip the next character
if (cc == '"' && quote && nc == '"') { arr[row][col] += cc; ++c; continue; }
// If it's just one quotation mark, begin/end quoted field
if (cc == '"') {
quote = !quote;
continue;
}
// If it's just one quotation mark, begin/end quoted field
if (cc == '"') { quote = !quote; continue; }
// If it's a comma and we're not in a quoted field, move on to the next column
if (cc == "," && !quote) {
++col;
continue;
}
// If it's a comma and we're not in a quoted field, move on to the next column
if (cc == ',' && !quote) { ++col; continue; }
// If it's a newline (CRLF), skip the next character and move on to the next row and move to column 0 of that new row
if (cc == "\r" && nc == "\n") {
++row;
col = 0;
++c;
quote = false;
continue;
}
// If it's a newline (CRLF) and we're not in a quoted field, skip the next character
// and move on to the next row and move to column 0 of that new row
if (cc == '\r' && nc == '\n' && !quote) { ++row; col = 0; ++c; continue; }
// If it's a newline (LF or CR) move on to the next row and move to column 0 of that new row
if (cc == "\n") {
++row;
col = 0;
quote = false;
continue;
}
if (cc == "\r") {
++row;
col = 0;
quote = false;
continue;
}
// If it's a newline (LF or CR) and we're not in a quoted field,
// move on to the next row and move to column 0 of that new row
if (cc == '\n' && !quote) { ++row; col = 0; continue; }
if (cc == '\r' && !quote) { ++row; col = 0; continue; }
// Otherwise, append the current character to the current column
arr[row][col] += cc;
}
return arr;
}
// Load file
async function readFile(filePath, json = false, cache = false) {
if (!cache)
filePath += `?${new Date().getTime()}`;
let response = await fetch(`file=${filePath}`);
if (response.status != 200) {
console.error(`Error loading file "${filePath}": ` + response.status, response.statusText);
return null;
// Otherwise, append the current character to the current column
arr[row][col] += cc;
}
return arr;
}
if (json)
return await response.json();
else
return await response.text();
}
/** Wrapper function to read a file from a path, using Gradio's "file=" accessor API
* @param {String} filePath - The path to the file
* @param {Boolean} json - Whether to parse the file as JSON
* @param {Boolean} cache - Whether to cache the response
* @returns {Promise<String | any>} The file content as a string or JSON object (if json is true)
*/
static async readFile(filePath, json = false, cache = false) {
if (!cache) filePath += `?${new Date().getTime()}`;
// Load CSV
async function loadCSV(path) {
let text = await readFile(path);
return parseCSV(text);
}
let response = await fetch(`file=${filePath}`);
// Debounce function to prevent spamming the autocomplete function
var dbTimeOut;
const debounce = (func, wait = 300) => {
return function (...args) {
if (dbTimeOut) {
clearTimeout(dbTimeOut);
if (response.status != 200) {
console.error(
`Error loading file "${filePath}": ` + response.status,
response.statusText
);
return null;
}
dbTimeOut = setTimeout(() => {
func.apply(this, args);
}, wait);
}
}
// Difference function to fix duplicates not being seen as changes in normal filter
function difference(a, b) {
if (a.length == 0) {
return b;
}
if (b.length == 0) {
return a;
if (json) return await response.json();
else return await response.text();
}
return [...b.reduce((acc, v) => acc.set(v, (acc.get(v) || 0) - 1),
a.reduce((acc, v) => acc.set(v, (acc.get(v) || 0) + 1), new Map())
)].reduce((acc, [v, count]) => acc.concat(Array(Math.abs(count)).fill(v)), []);
}
// Sliding window function to get possible combination groups of an array
function toNgrams(inputArray, size) {
return Array.from(
{ length: inputArray.length - (size - 1) }, //get the appropriate length
(_, index) => inputArray.slice(index, index + size) //create the windows
);
}
function escapeRegExp(string) {
return string.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'); // $& means the whole matched string
}
function escapeHTML(unsafeText) {
let div = document.createElement('div');
div.textContent = unsafeText;
return div.innerHTML;
}
// For black/whitelisting
function updateModelName() {
let sdm = gradioApp().querySelector("#setting_sd_model_checkpoint");
let modelDropdown = sdm?.querySelector("input") || sdm?.querySelector("select");
if (modelDropdown) {
currentModelName = modelDropdown.value;
} else {
// Fallback for intermediate versions
modelDropdown = sdm?.querySelector("span.single-select");
currentModelName = modelDropdown?.textContent || "";
/** Wrapper function to read a file from the path and parse it as CSV
* @param {String} path - The path to the CSV file
* @returns {Promise<String[][]>} A 2D array of CSV entries
*/
static async loadCSV(path) {
let text = await this.readFile(path);
return this.parseCSV(text);
}
}
// From https://stackoverflow.com/a/61975440, how to detect JS value changes
function observeElement(element, property, callback, delay = 0) {
let elementPrototype = Object.getPrototypeOf(element);
if (elementPrototype.hasOwnProperty(property)) {
let descriptor = Object.getOwnPropertyDescriptor(elementPrototype, property);
Object.defineProperty(element, property, {
get: function() {
return descriptor.get.apply(this, arguments);
},
set: function () {
let oldValue = this[property];
descriptor.set.apply(this, arguments);
let newValue = this[property];
if (typeof callback == "function") {
setTimeout(callback.bind(this, oldValue, newValue), delay);
}
return newValue;
/**
* Calls the TAC API for a GET request
* @param {String} url - The URL to fetch from
* @param {Boolean} json - Whether to parse the response as JSON or plain text
* @param {Boolean} cache - Whether to cache the response
* @returns {Promise<any | String>} JSON or text response from the API, depending on the "json" parameter
*/
static async fetchAPI(url, json = true, cache = false) {
if (!cache) {
const appendChar = url.includes("?") ? "&" : "?";
url += `${appendChar}${new Date().getTime()}`;
}
let response = await fetch(url);
if (response.status != 200) {
console.error(
`Error fetching API endpoint "${url}": ` + response.status,
response.statusText
);
return null;
}
if (json) return await response.json();
else return await response.text();
}
/**
* Posts to the TAC API
* @param {String} url - The URL to post to
* @param {String} body - (optional) The body of the POST request as a JSON string
* @returns JSON response from the API
*/
static async postAPI(url, body = null) {
let response = await fetch(url, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: body,
});
if (response.status != 200) {
console.error(
`Error posting to API endpoint "${url}": ` + response.status,
response.statusText
);
return null;
}
return await response.json();
}
/**
* Puts to the TAC API
* @param {String} url - The URL to post to
* @param {String} body - (optional) The body of the PUT request as a JSON string
* @returns JSON response from the API
*/
static async putAPI(url, body = null) {
let response = await fetch(url, { method: "PUT", body: body });
if (response.status != 200) {
console.error(
`Error putting to API endpoint "${url}": ` + response.status,
response.statusText
);
return null;
}
return await response.json();
}
/**
* Get a preview image URL for a given extra network file.
* Uses the official webui endpoint if available, otherwise creates a blob URL.
* @param {String} filename - The filename of the extra network file
* @param {String} type - One of "embed", "hyper", "lora", or "lyco", to determine the lookup location
* @returns {Promise<String>} URL to a preview image for the extra network file, if available
*/
static async getExtraNetworkPreviewURL(filename, type) {
const previewJSON = await this.fetchAPI(
`tacapi/v1/thumb-preview/${filename}?type=${type}`,
true,
true
);
if (previewJSON?.url) {
const properURL = `sd_extra_networks/thumb?filename=${previewJSON.url}`;
if ((await fetch(properURL)).status == 200) {
return properURL;
} else {
// create blob url
const blob = await (
await fetch(`tacapi/v1/thumb-preview-blob/${filename}?type=${type}`)
).blob();
return URL.createObjectURL(blob);
}
} else {
return null;
}
}
static #lastStyleRefresh = 0;
/**
* Refreshes the styles.txt file if it has changed since the last check.
* Checks at most once per second to prevent spamming the API.
*/
static async refreshStyleNamesIfChanged() {
// Only refresh once per second
let currentTimestamp = new Date().getTime();
if (currentTimestamp - this.#lastStyleRefresh < 1000) return;
this.#lastStyleRefresh = currentTimestamp;
const response = await fetch(`tacapi/v1/refresh-styles-if-changed?${new Date().getTime()}`);
if (response.status === 304) {
// Not modified
} else if (response.status === 200) {
// Reload
TAC.Ext.QUEUE_FILE_LOAD.forEach(async (fn) => {
if (fn.toString().includes("styleNames")) await fn.call(null, true);
});
} else {
// Error
console.error(`Error refreshing styles.txt: ` + response.status, response.statusText);
}
}
static #dbTimeOut;
/**
* Generic debounce function to prevent spamming the autocompletion during fast typing
* @param {Function} func - The function to debounce
* @param {Number} wait - The debounce time in milliseconds
* @returns {Function} The debounced function
*/
static debounce = (func, wait = 300) => {
return function (...args) {
// Caution: Since we are in an anonymous function, 'this' would not refer to the class
if (TacUtils.#dbTimeOut) {
clearTimeout(TacUtils.#dbTimeOut);
}
TacUtils.#dbTimeOut = setTimeout(() => {
func.apply(this, args);
}, wait);
};
};
/**
* Calculates the difference between two arrays (order-sensitive).
* Fixes duplicates not being seen as changes in a normal filter function.
* @param {Array} a
* @param {Array} b
* @returns {Array} The difference between the two arrays
*/
static difference(a, b) {
if (a.length == 0) {
return b;
}
if (b.length == 0) {
return a;
}
return [
...b.reduce(
(acc, v) => acc.set(v, (acc.get(v) || 0) - 1),
a.reduce((acc, v) => acc.set(v, (acc.get(v) || 0) + 1), new Map())
),
].reduce((acc, [v, count]) => acc.concat(Array(Math.abs(count)).fill(v)), []);
}
/**
* Object flatten function adapted from https://stackoverflow.com/a/61602592
* @param {*} obj - The object to flatten
* @param {Array} roots - Accumulates parent properties, which are prefixed to each nested prop.
* @param {String} sep - The separator to use for nested paths if you want something other than a dot.
* @returns The flattened object
*/
static flatten(obj, roots = [], sep = ".") {
return Object.keys(obj).reduce(
(memo, prop) =>
Object.assign(
// create a new object
{},
// include previously returned object
memo,
Object.prototype.toString.call(obj[prop]) === "[object Object]"
? // keep working if value is an object
this.flatten(obj[prop], roots.concat([prop]), sep)
: // include current prop and value and prefix prop with the roots
{ [roots.concat([prop]).join(sep)]: obj[prop] }
),
{}
);
}
/**
* Calculate biased tag score based on post count and frequent usage
* @param {TAC.AutocompleteResult} result - The unbiased result
* @param {Number} count - The post count (or similar base metric)
* @param {Number} uses - The usage count
* @returns {Number} The biased score for sorting
*/
static calculateUsageBias(result, count, uses) {
// Check setting conditions
if (uses < TAC.CFG.frequencyMinCount) {
uses = 0;
} else if (uses != 0) {
result.usageBias = true;
}
switch (TAC.CFG.frequencyFunction) {
case "Logarithmic (weak)":
return Math.log(1 + count) + Math.log(1 + uses);
case "Logarithmic (strong)":
return Math.log(1 + count) + 2 * Math.log(1 + uses);
case "Usage first":
return uses;
default:
return count;
}
}
/**
* Utility function to map the use count array from the database to a more readable format,
* since FastAPI omits the field names in the response.
* @param {Array} useCounts
* @param {Boolean} posAndNeg - Whether to include negative counts
*/
static mapUseCountArray(useCounts, posAndNeg = false) {
return useCounts.map((useCount) => {
if (posAndNeg) {
return {
name: useCount[0],
type: useCount[1],
count: useCount[2],
negCount: useCount[3],
lastUseDate: useCount[4],
};
}
return {
name: useCount[0],
type: useCount[1],
count: useCount[2],
lastUseDate: useCount[3],
};
});
}
}
// Queue calling function to process global queues
async function processQueue(queue, context, ...args) {
for (let i = 0; i < queue.length; i++) {
await queue[i].call(context, ...args);
}
}
// The same but with return values
async function processQueueReturn(queue, context, ...args) {
let queueReturns = [];
for (let i = 0; i < queue.length; i++) {
let returnValue = await queue[i].call(context, ...args);
if (returnValue)
queueReturns.push(returnValue);
}
return queueReturns;
}
// Specific to tag completion parsers
async function processParsers(textArea, prompt) {
// Get all parsers that have a successful trigger condition
let matchingParsers = PARSERS.filter(parser => parser.triggerCondition());
// Guard condition
if (matchingParsers.length === 0) {
return null;
/**
* Calls API endpoint to increase the count of a tag in the database.
* Not awaited as it is non-critical and can be executed as fire-and-forget.
* @param {String} tagName - The name of the tag
* @param {TAC.ResultType} type - The type of the tag as mapped in {@link TAC.ResultType}
* @param {Boolean} negative - Whether the tag was typed in a negative prompt field
*/
static increaseUseCount(tagName, type, negative = false) {
this.postAPI(
`tacapi/v1/increase-use-count?tagname=${tagName}&ttype=${type}&neg=${negative}`
);
}
let parseFunctions = matchingParsers.map(parser => parser.parse);
// Process them and return the results
return await processQueueReturn(parseFunctions, null, textArea, prompt);
}
/**
* Get the use count of a tag from the database
* @param {String} tagName - The name of the tag
* @param {TAC.ResultType} type - The type of the tag as mapped in {@link TAC.ResultType}
* @param {Boolean} negative - Whether we are currently in a negative prompt field
* @returns {Promise<Number>} The use count of the tag
*/
static async getUseCount(tagName, type, negative = false) {
const response = await this.fetchAPI(
`tacapi/v1/get-use-count?tagname=${tagName}&ttype=${type}&neg=${negative}`,
true,
false
);
// Guard for no db
if (response == null) return null;
// Result
return response["result"];
}
/**
* Retrieves the use counts of multiple tags at once from the database for improved performance
* during typing.
* @param {String[]} tagNames - An array of tag names
* @param {TAC.ResultType[]} types - An array of tag types as mapped in {@link TAC.ResultType}
* @param {Boolean} negative - Whether we are currently in a negative prompt field
* @returns {Promise<Array>} The use count array mapped to named fields by {@link mapUseCountArray}
*/
static async getUseCounts(tagNames, types, negative = false) {
* While semantically weird, we have to use POST here for the body, as URLs are limited in length
const body = JSON.stringify({ tagNames: tagNames, tagTypes: types, neg: negative });
const response = await this.postAPI(`tacapi/v1/get-use-count-list`, body);
// Guard for no db
if (response == null) return null;
// Results
return this.mapUseCountArray(response["result"]);
}
/**
* Gets all use counts existing in the database.
* @returns {Array} The use count array mapped to named fields by {@link mapUseCountArray}
*/
static async getAllUseCounts() {
const response = await this.fetchAPI(`tacapi/v1/get-all-use-counts`);
// Guard for no db
if (response == null) return null;
// Results
return this.mapUseCountArray(response["result"], true);
}
/**
* Resets the use count of the given tag back to zero.
* @param {String} tagName - The name of the tag
* @param {TAC.ResultType} type - The type of the tag as mapped in {@link TAC.ResultType}
* @param {Boolean} resetPosCount - Whether to reset the positive count
* @param {Boolean} resetNegCount - Whether to reset the negative count
*/
static async resetUseCount(tagName, type, resetPosCount, resetNegCount) {
await this.putAPI(
`tacapi/v1/reset-use-count?tagname=${tagName}&ttype=${type}&pos=${resetPosCount}&neg=${resetNegCount}`
);
}
/**
* Creates a table to display an overview of tag usage statistics.
* Currently unused.
* @param {Array} tagCounts - The use count array to use, mapped to named fields by {@link mapUseCountArray}
* @returns {HTMLTableElement} The assembled table element
*/
static createTagUsageTable(tagCounts) {
// Create table
let tagTable = document.createElement("table");
tagTable.innerHTML = `<thead>
<tr>
<td>Name</td>
<td>Type</td>
<td>Count(+)</td>
<td>Count(-)</td>
<td>Last used</td>
</tr>
</thead>`;
tagTable.id = "tac_tagUsageTable";
tagCounts.forEach((t) => {
let tr = document.createElement("tr");
// Fill values
let values = [t.name, t.type - 1, t.count, t.negCount, t.lastUseDate];
values.forEach((v) => {
let td = document.createElement("td");
td.innerText = v;
tr.append(td);
});
// Add delete/reset button
let delButton = document.createElement("button");
delButton.innerText = "🗑️";
delButton.title = "Reset count";
tr.append(delButton);
tagTable.append(tr);
});
return tagTable;
}
/**
* Sliding window function to get possible combination groups of an array
* @param {Array} inputArray
* @param {Number} size
* @returns {Array[]} ngram permutations of the input array
*/
static toNgrams(inputArray, size) {
return Array.from(
{ length: inputArray.length - (size - 1) }, //get the appropriate length
(_, index) => inputArray.slice(index, index + size) //create the windows
);
}
/**
* Escapes a string for use in a regular expression.
* @param {String} string
* @param {Boolean} wildcardMatching - Wildcard matching mode doesn't escape asterisks and question marks as they are handled separately there.
* @returns {String} The escaped string
*/
static escapeRegExp(string, wildcardMatching = false) {
if (wildcardMatching) {
// Escape all characters except asterisks and ?, which should be treated separately as placeholders.
return string
.replace(/[-[\]{}()+.,\\^$|#\s]/g, "\\$&")
.replace(/\*/g, ".*")
.replace(/\?/g, ".");
}
return string.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"); // $& means the whole matched string
}
/**
* Escapes a string for use in HTML to not break formatting.
* @param {String} unsafeText
* @returns {String} The escaped HTML string
*/
static escapeHTML(unsafeText) {
let div = document.createElement("div");
div.textContent = unsafeText;
return div.innerHTML;
}
/** Updates {@link TAC.Globals.currentModelName} to the current model */
static updateModelName() {
let sdm = gradioApp().querySelector("#setting_sd_model_checkpoint");
let modelDropdown = sdm?.querySelector("input") || sdm?.querySelector("select");
if (modelDropdown) {
TAC.Globals.currentModelName = modelDropdown.value;
} else {
// Fallback for intermediate versions
modelDropdown = sdm?.querySelector("span.single-select");
TAC.Globals.currentModelName = modelDropdown?.textContent || "";
}
}
/**
* From https://stackoverflow.com/a/61975440.
* Detects value changes in an element that were triggered programmatically
* @param {HTMLElement} element - The DOM element to observe
* @param {String} property - The object property to observe
* @param {Function} callback - The callback function to call when the property changes
* @param {Number} delay - The delay in milliseconds to wait before calling the callback
*/
static observeElement(element, property, callback, delay = 0) {
let elementPrototype = Object.getPrototypeOf(element);
if (elementPrototype.hasOwnProperty(property)) {
let descriptor = Object.getOwnPropertyDescriptor(elementPrototype, property);
Object.defineProperty(element, property, {
get: function () {
return descriptor.get.apply(this, arguments);
},
set: function () {
let oldValue = this[property];
descriptor.set.apply(this, arguments);
let newValue = this[property];
if (typeof callback == "function") {
setTimeout(callback.bind(this, oldValue, newValue), delay);
}
return newValue;
},
});
}
}
/**
* Returns a matching sort function based on the current configuration
* @returns {((a: any, b: any) => number)}
*/
static getSortFunction() {
let criterion = TAC.CFG.modelSortOrder || "Name";
const textSort = (a, b, reverse = false) => {
// Assign keys so next sort is faster
if (!a.sortKey) {
a.sortKey = a.type === TAC.ResultType.chant ? a.aliases : a.text;
}
if (!b.sortKey) {
b.sortKey = b.type === TAC.ResultType.chant ? b.aliases : b.text;
}
return reverse
? b.sortKey.localeCompare(a.sortKey)
: a.sortKey.localeCompare(b.sortKey);
};
const numericSort = (a, b, reverse = false) => {
const noKey = reverse ? "-1" : Number.MAX_SAFE_INTEGER;
let aParsed = parseFloat(a.sortKey || noKey);
let bParsed = parseFloat(b.sortKey || noKey);
if (aParsed === bParsed) {
return textSort(a, b, false);
}
return reverse ? bParsed - aParsed : aParsed - bParsed;
};
return (a, b) => {
switch (criterion) {
case "Date Modified (newest first)":
return numericSort(a, b, true);
case "Date Modified (oldest first)":
return numericSort(a, b, false);
default:
return textSort(a, b);
}
};
}
/**
* Queue calling function to process global queues
* @param {Array} queue - The queue to process
* @param {*} context - The context to call the functions in (null for global)
* @param {...any} args - Arguments to pass to the functions
*/
static async processQueue(queue, context, ...args) {
for (let i = 0; i < queue.length; i++) {
await queue[i].call(context, ...args);
}
}
/** The same as {@link processQueue}, but can accept and return results from the queued functions. */
static async processQueueReturn(queue, context, ...args) {
let queueReturns = [];
for (let i = 0; i < queue.length; i++) {
let returnValue = await queue[i].call(context, ...args);
if (returnValue) queueReturns.push(returnValue);
}
return queueReturns;
}
/**
* A queue processing function specific to tag completion parsers
* @param {HTMLTextAreaElement} textArea - The current text area used by TAC
* @param {String} prompt - The current prompt
* @returns The results of the parsers
*/
static async processParsers(textArea, prompt) {
// Get all parsers that have a successful trigger condition
let matchingParsers = TAC.Ext.PARSERS.filter((parser) => parser.triggerCondition());
// Guard condition
if (matchingParsers.length === 0) {
return null;
}
let parseFunctions = matchingParsers.map((parser) => parser.parse);
// Process them and return the results
return await this.processQueueReturn(parseFunctions, null, textArea, prompt);
}
};
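
Two quick illustrations of the utilities above: the expected parseCSV output for quoted fields, and the debounce behavior (outputs shown as expected results, not verified against a running webui):

const rows = TAC.Utils.parseCSV('tag,"a ""quoted"" alias",3\r\nother,plain,7');
// rows: [["tag", 'a "quoted" alias', "3"], ["other", "plain", "7"]]
const debounced = TAC.Utils.debounce((prompt) => console.log(prompt), 300);
debounced("1girl");
debounced("1girl, solo"); // only this call fires, 300 ms after the last keystroke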

@@ -1,54 +1,66 @@
const CHANT_REGEX = /<(?!e:|h:|l:)[^,> ]*>?/g;
const CHANT_TRIGGER = () => TAC_CFG.chantFile && TAC_CFG.chantFile !== "None" && tagword.match(CHANT_REGEX);
(function ChantExtension() {
const CHANT_REGEX = /<(?!e:|h:|l:)[^,> ]*>?/g;
const CHANT_TRIGGER = () =>
TAC.CFG.chantFile && TAC.CFG.chantFile !== "None" && TAC.Globals.tagword.match(CHANT_REGEX);
class ChantParser extends BaseTagParser {
parse() {
// Show Chant
let tempResults = [];
if (tagword !== "<" && tagword !== "<c:") {
let searchTerm = tagword.replace("<chant:", "").replace("<c:", "").replace("<", "");
let filterCondition = x => x.terms.toLowerCase().includes(searchTerm) || x.name.toLowerCase().includes(searchTerm);
tempResults = chants.filter(x => filterCondition(x)); // Filter by tagword
class ChantParser extends TAC.BaseTagParser {
parse() {
// Show Chant
let tempResults = [];
if (TAC.Globals.tagword !== "<" && TAC.Globals.tagword !== "<c:") {
let searchTerm = TAC.Globals.tagword
.replace("<chant:", "")
.replace("<c:", "")
.replace("<", "");
let filterCondition = (x) => {
let regex = new RegExp(TAC.Utils.escapeRegExp(searchTerm, true), "i");
return regex.test(x.terms.toLowerCase()) || regex.test(x.name.toLowerCase());
};
tempResults = TAC.Globals.chants.filter((x) => filterCondition(x)); // Filter by tagword
} else {
tempResults = TAC.Globals.chants;
}
// Add final results
let finalResults = [];
tempResults.forEach((t) => {
let result = new TAC.AutocompleteResult(t.content.trim(), TAC.ResultType.chant);
result.meta = "Chant";
result.aliases = t.name;
result.category = t.color;
finalResults.push(result);
});
return finalResults;
}
}
async function load() {
if (TAC.CFG.chantFile && TAC.CFG.chantFile !== "None") {
try {
TAC.Globals.chants = await TAC.Utils.readFile(
`${TAC.Globals.tagBasePath}/${TAC.CFG.chantFile}?`,
true
);
} catch (e) {
console.error("Error loading chants.json: " + e);
}
} else {
tempResults = chants;
TAC.Globals.chants = [];
}
// Add final results
let finalResults = [];
tempResults.forEach(t => {
let result = new AutocompleteResult(t.content.trim(), ResultType.chant)
result.meta = "Chant";
result.aliases = t.name;
result.category = t.color;
finalResults.push(result);
});
return finalResults;
}
}
async function load() {
if (TAC_CFG.chantFile && TAC_CFG.chantFile !== "None") {
try {
chants = await readFile(`${tagBasePath}/${TAC_CFG.chantFile}?`, true);
} catch (e) {
console.error("Error loading chants.json: " + e);
function sanitize(tagType, text) {
if (tagType === TAC.ResultType.chant) {
return text;
}
} else {
chants = [];
return null;
}
}
function sanitize(tagType, text) {
if (tagType === ResultType.chant) {
return text.replace(/^.*?: /g, "");
}
return null;
}
TAC.Ext.PARSERS.push(new ChantParser(CHANT_TRIGGER));
PARSERS.push(new ChantParser(CHANT_TRIGGER));
// Add our utility functions to their respective queues
QUEUE_FILE_LOAD.push(load);
QUEUE_SANITIZE.push(sanitize);
QUEUE_AFTER_CONFIG_CHANGE.push(load);
// Add our utility functions to their respective queues
TAC.Ext.QUEUE_FILE_LOAD.push(load);
TAC.Ext.QUEUE_SANITIZE.push(sanitize);
TAC.Ext.QUEUE_AFTER_CONFIG_CHANGE.push(load);
})();
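
This chant module doubles as the template for every extension file below: an IIFE with a trigger condition, a TAC.BaseTagParser subclass, optional load/sanitize hooks, and registration on the shared queues. A hypothetical minimal extension in the same shape (all names are placeholders, not part of this diff):

(function ExampleExtension() {
const TRIGGER = () => TAC.Globals.tagword.startsWith("@"); // placeholder trigger
class ExampleParser extends TAC.BaseTagParser {
parse() {
let result = new TAC.AutocompleteResult("example", TAC.ResultType.tag);
result.meta = "Example";
return [result];
}
}
TAC.Ext.PARSERS.push(new ExampleParser(TRIGGER));
})();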

@@ -1,61 +1,85 @@
const EMB_REGEX = /<(?!l:|h:|c:)[^,> ]*>?/g;
const EMB_TRIGGER = () => TAC_CFG.useEmbeddings && tagword.match(EMB_REGEX);
(function EmbeddingExtension() {
const EMB_REGEX = /<(?!l:|h:|c:)[^,> ]*>?/g;
const EMB_TRIGGER = () =>
TAC.CFG.useEmbeddings &&
(TAC.Globals.tagword.match(EMB_REGEX) || TAC.CFG.includeEmbeddingsInNormalResults);
class EmbeddingParser extends BaseTagParser {
parse() {
// Show embeddings
let tempResults = [];
if (tagword !== "<" && tagword !== "<e:") {
let searchTerm = tagword.replace("<e:", "").replace("<", "");
let versionString;
if (searchTerm.startsWith("v1") || searchTerm.startsWith("v2")) {
versionString = searchTerm.slice(0, 2);
searchTerm = searchTerm.slice(2);
class EmbeddingParser extends TAC.BaseTagParser {
parse() {
// Show embeddings
let tempResults = [];
if (TAC.Globals.tagword !== "<" && TAC.Globals.tagword !== "<e:") {
let searchTerm = TAC.Globals.tagword.replace("<e:", "").replace("<", "");
let versionString;
if (searchTerm.startsWith("v1") || searchTerm.startsWith("v2")) {
versionString = searchTerm.slice(0, 2);
searchTerm = searchTerm.slice(2);
} else if (searchTerm.startsWith("vxl")) {
versionString = searchTerm.slice(0, 3);
searchTerm = searchTerm.slice(3);
}
let filterCondition = (x) => {
let regex = new RegExp(TAC.Utils.escapeRegExp(searchTerm, true), "i");
return (
regex.test(x[0].toLowerCase()) ||
regex.test(x[0].toLowerCase().replaceAll(" ", "_"))
);
};
if (versionString)
tempResults = TAC.Globals.embeddings.filter(
(x) =>
filterCondition(x) &&
x[2] &&
x[2].toLowerCase() === versionString.toLowerCase()
); // Filter by tagword
else tempResults = TAC.Globals.embeddings.filter((x) => filterCondition(x)); // Filter by tagword
} else {
tempResults = TAC.Globals.embeddings;
}
let filterCondition = x => x[0].toLowerCase().includes(searchTerm) || x[0].toLowerCase().replaceAll(" ", "_").includes(searchTerm);
// Add final results
let finalResults = [];
tempResults.forEach((t) => {
let lastDot = t[0].lastIndexOf(".") > -1 ? t[0].lastIndexOf(".") : t[0].length;
let lastSlash = t[0].lastIndexOf("/") > -1 ? t[0].lastIndexOf("/") : -1;
let name = t[0].trim().substring(lastSlash + 1, lastDot);
if (versionString)
tempResults = embeddings.filter(x => filterCondition(x) && x[1] && x[1] === versionString); // Filter by tagword
else
tempResults = embeddings.filter(x => filterCondition(x)); // Filter by tagword
} else {
tempResults = embeddings;
}
let result = new TAC.AutocompleteResult(name, TAC.ResultType.embedding);
result.sortKey = t[1];
result.meta = t[2] + " Embedding";
finalResults.push(result);
});
// Add final results
let finalResults = [];
tempResults.forEach(t => {
let result = new AutocompleteResult(t[0].trim(), ResultType.embedding)
result.meta = t[1] + " Embedding";
finalResults.push(result);
});
return finalResults;
}
}
async function load() {
if (embeddings.length === 0) {
try {
embeddings = (await readFile(`${tagBasePath}/temp/emb.txt`)).split("\n")
.filter(x => x.trim().length > 0) // Remove empty lines
.map(x => x.trim().split(",")); // Split into name, version type pairs
} catch (e) {
console.error("Error loading embeddings.txt: " + e);
return finalResults;
}
}
}
function sanitize(tagType, text) {
if (tagType === ResultType.embedding) {
return text.replace(/^.*?: /g, "");
async function load() {
if (TAC.Globals.embeddings.length === 0) {
try {
TAC.Globals.embeddings = (
await TAC.Utils.loadCSV(`${TAC.Globals.tagBasePath}/temp/emb.txt`)
)
.filter((x) => x[0]?.trim().length > 0) // Remove empty lines
.map((x) => [x[0].trim(), x[1], x[2]]); // Return name, sortKey, hash tuples
} catch (e) {
console.error("Error loading embeddings.txt: " + e);
}
}
}
return null;
}
PARSERS.push(new EmbeddingParser(EMB_TRIGGER));
function sanitize(tagType, text) {
if (tagType === TAC.ResultType.embedding) {
return text;
}
return null;
}
// Add our utility functions to their respective queues
QUEUE_FILE_LOAD.push(load);
QUEUE_SANITIZE.push(sanitize);
TAC.Ext.PARSERS.push(new EmbeddingParser(EMB_TRIGGER));
// Add our utility functions to their respective queues
TAC.Ext.QUEUE_FILE_LOAD.push(load);
TAC.Ext.QUEUE_SANITIZE.push(sanitize);
})();
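
Note that the rewritten filter goes through TAC.Utils.escapeRegExp in wildcard mode, so the typed search term may itself contain * and ? placeholders; for example:

const regex = new RegExp(TAC.Utils.escapeRegExp("bad*hands", true), "i");
regex.test("bad-hands-5"); // true, "*" was translated to ".*"
regex.test("BadHands"); // true, matching is case-insensitive
regex.test("hands"); // false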

@@ -1,51 +1,69 @@
const HYP_REGEX = /<(?!e:|l:|c:)[^,> ]*>?/g;
const HYP_TRIGGER = () => TAC_CFG.useHypernetworks && tagword.match(HYP_REGEX);
(function HypernetExtension() {
const HYP_REGEX = /<(?!e:|l:|c:)[^,> ]*>?/g;
const HYP_TRIGGER = () => TAC.CFG.useHypernetworks && TAC.Globals.tagword.match(HYP_REGEX);
class HypernetParser extends BaseTagParser {
parse() {
// Show hypernetworks
let tempResults = [];
if (tagword !== "<" && tagword !== "<h:" && tagword !== "<hypernet:") {
let searchTerm = tagword.replace("<hypernet:", "").replace("<h:", "").replace("<", "");
let filterCondition = x => x.toLowerCase().includes(searchTerm) || x.toLowerCase().replaceAll(" ", "_").includes(searchTerm);
tempResults = hypernetworks.filter(x => filterCondition(x)); // Filter by tagword
} else {
tempResults = hypernetworks;
}
class HypernetParser extends TAC.BaseTagParser {
parse() {
// Show hypernetworks
let tempResults = [];
if (
TAC.Globals.tagword !== "<" &&
TAC.Globals.tagword !== "<h:" &&
TAC.Globals.tagword !== "<hypernet:"
) {
let searchTerm = TAC.Globals.tagword
.replace("<hypernet:", "")
.replace("<h:", "")
.replace("<", "");
let filterCondition = (x) => {
let regex = new RegExp(TAC.Utils.escapeRegExp(searchTerm, true), "i");
return (
regex.test(x.toLowerCase()) ||
regex.test(x.toLowerCase().replaceAll(" ", "_"))
);
};
tempResults = TAC.Globals.hypernetworks.filter((x) => filterCondition(x[0])); // Filter by tagword
} else {
tempResults = TAC.Globals.hypernetworks;
}
// Add final results
let finalResults = [];
tempResults.forEach(t => {
let result = new AutocompleteResult(t.trim(), ResultType.hypernetwork)
result.meta = "Hypernetwork";
finalResults.push(result);
});
// Add final results
let finalResults = [];
tempResults.forEach((t) => {
let result = new TAC.AutocompleteResult(t[0].trim(), TAC.ResultType.hypernetwork);
result.meta = "Hypernetwork";
result.sortKey = t[1];
finalResults.push(result);
});
return finalResults;
}
}
async function load() {
if (hypernetworks.length === 0) {
try {
hypernetworks = (await readFile(`${tagBasePath}/temp/hyp.txt`)).split("\n")
.filter(x => x.trim().length > 0) //Remove empty lines
.map(x => x.trim()); // Remove carriage returns and padding if it exists
} catch (e) {
console.error("Error loading hypernetworks.txt: " + e);
return finalResults;
}
}
}
function sanitize(tagType, text) {
if (tagType === ResultType.hypernetwork) {
return `<hypernet:${text}:${TAC_CFG.extraNetworksDefaultMultiplier}>`;
async function load() {
if (TAC.Globals.hypernetworks.length === 0) {
try {
TAC.Globals.hypernetworks = (
await TAC.Utils.loadCSV(`${TAC.Globals.tagBasePath}/temp/hyp.txt`)
)
.filter((x) => x[0]?.trim().length > 0) //Remove empty lines
.map((x) => [x[0]?.trim(), x[1]]); // Remove carriage returns and padding if it exists
} catch (e) {
console.error("Error loading hypernetworks.txt: " + e);
}
}
}
return null;
}
PARSERS.push(new HypernetParser(HYP_TRIGGER));
function sanitize(tagType, text) {
if (tagType === TAC.ResultType.hypernetwork) {
return `<hypernet:${text}:${TAC.CFG.extraNetworksDefaultMultiplier}>`;
}
return null;
}
// Add our utility functions to their respective queues
QUEUE_FILE_LOAD.push(load);
QUEUE_SANITIZE.push(sanitize);
TAC.Ext.PARSERS.push(new HypernetParser(HYP_TRIGGER));
// Add our utility functions to their respective queues
TAC.Ext.QUEUE_FILE_LOAD.push(load);
TAC.Ext.QUEUE_SANITIZE.push(sanitize);
})();

@@ -1,52 +1,81 @@
const LORA_REGEX = /<(?!e:|h:|c:)[^,> ]*>?/g;
const LORA_TRIGGER = () => TAC_CFG.useLoras && tagword.match(LORA_REGEX);
(function LoraExtension() {
const LORA_REGEX = /<(?!e:|h:|c:)[^,> ]*>?/g;
const LORA_TRIGGER = () => TAC.CFG.useLoras && TAC.Globals.tagword.match(LORA_REGEX);
class LoraParser extends BaseTagParser {
parse() {
// Show lora
let tempResults = [];
if (tagword !== "<" && tagword !== "<l:" && tagword !== "<lora:") {
let searchTerm = tagword.replace("<lora:", "").replace("<l:", "").replace("<", "");
let filterCondition = x => x.toLowerCase().includes(searchTerm) || x.toLowerCase().replaceAll(" ", "_").includes(searchTerm);
tempResults = loras.filter(x => filterCondition(x[0])); // Filter by tagword
} else {
tempResults = loras;
}
class LoraParser extends TAC.BaseTagParser {
parse() {
// Show lora
let tempResults = [];
if (
TAC.Globals.tagword !== "<" &&
TAC.Globals.tagword !== "<l:" &&
TAC.Globals.tagword !== "<lora:"
) {
let searchTerm = TAC.Globals.tagword
.replace("<lora:", "")
.replace("<l:", "")
.replace("<", "");
let filterCondition = (x) => {
let regex = new RegExp(TAC.Utils.escapeRegExp(searchTerm, true), "i");
return (
regex.test(x.toLowerCase()) ||
regex.test(x.toLowerCase().replaceAll(" ", "_"))
);
};
tempResults = TAC.Globals.loras.filter((x) => filterCondition(x[0])); // Filter by tagword
} else {
tempResults = TAC.Globals.loras;
}
// Add final results
let finalResults = [];
tempResults.forEach(t => {
let result = new AutocompleteResult(t[0].trim(), ResultType.lora)
result.meta = "Lora";
result.hash = t[1];
finalResults.push(result);
});
// Add final results
let finalResults = [];
tempResults.forEach((t) => {
const text = t[0].trim();
let lastDot = text.lastIndexOf(".") > -1 ? text.lastIndexOf(".") : text.length;
let lastSlash = text.lastIndexOf("/") > -1 ? text.lastIndexOf("/") : -1;
let name = text.substring(lastSlash + 1, lastDot);
return finalResults;
}
}
let result = new TAC.AutocompleteResult(name, TAC.ResultType.lora);
result.meta = "Lora";
result.sortKey = t[1];
result.hash = t[2];
finalResults.push(result);
});
async function load() {
if (loras.length === 0) {
try {
loras = (await readFile(`${tagBasePath}/temp/lora.txt`)).split("\n")
.filter(x => x.trim().length > 0) // Remove empty lines
.map(x => x.trim().split(",")); // Remove carriage returns and padding if it exists, split into name, hash pairs
} catch (e) {
console.error("Error loading lora.txt: " + e);
return finalResults;
}
}
}
function sanitize(tagType, text) {
if (tagType === ResultType.lora) {
return `<lora:${text}:${TAC_CFG.extraNetworksDefaultMultiplier}>`;
async function load() {
if (TAC.Globals.loras.length === 0) {
try {
TAC.Globals.loras = (
await TAC.Utils.loadCSV(`${TAC.Globals.tagBasePath}/temp/lora.txt`)
)
.filter((x) => x[0]?.trim().length > 0) // Remove empty lines
.map((x) => [x[0]?.trim(), x[1], x[2]]); // Trim filenames and return the name, sortKey, hash pairs
} catch (e) {
console.error("Error loading lora.txt: " + e);
}
}
}
return null;
}
PARSERS.push(new LoraParser(LORA_TRIGGER));
async function sanitize(tagType, text) {
if (tagType === TAC.ResultType.lora) {
let multiplier = TAC.CFG.extraNetworksDefaultMultiplier;
let info = await TAC.Utils.fetchAPI(`tacapi/v1/lora-info/${text}`);
if (info && info["preferred weight"]) {
multiplier = info["preferred weight"];
}
// Add our utility functions to their respective queues
QUEUE_FILE_LOAD.push(load);
QUEUE_SANITIZE.push(sanitize);
return `<lora:${text}:${multiplier}>`;
}
return null;
}
TAC.Ext.PARSERS.push(new LoraParser(LORA_TRIGGER));
// Add our utility functions to their respective queues
TAC.Ext.QUEUE_FILE_LOAD.push(load);
TAC.Ext.QUEUE_SANITIZE.push(sanitize);
})();
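
Since sanitize is now async and queries tacapi/v1/lora-info, the inserted text depends on the per-file metadata; a sketch of the two outcomes (name and weight are made up):

// If the lora's JSON info contains "preferred weight": 0.8
// -> "<lora:myLora:0.8>"
// Otherwise the configured default multiplier is used:
// -> `<lora:myLora:${TAC.CFG.extraNetworksDefaultMultiplier}>`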

@@ -1,52 +1,84 @@
const LYCO_REGEX = /<(?!e:|h:|c:)[^,> ]*>?/g;
const LYCO_TRIGGER = () => TAC_CFG.useLycos && tagword.match(LYCO_REGEX);
(function LycoExtension() {
const LYCO_REGEX = /<(?!e:|h:|c:)[^,> ]*>?/g;
const LYCO_TRIGGER = () => TAC.CFG.useLycos && TAC.Globals.tagword.match(LYCO_REGEX);
class LycoParser extends BaseTagParser {
parse() {
// Show lyco
let tempResults = [];
if (tagword !== "<" && tagword !== "<l:" && tagword !== "<lyco:") {
let searchTerm = tagword.replace("<lyco:", "").replace("<l:", "").replace("<", "");
let filterCondition = x => x.toLowerCase().includes(searchTerm) || x.toLowerCase().replaceAll(" ", "_").includes(searchTerm);
tempResults = lycos.filter(x => filterCondition(x[0])); // Filter by tagword
} else {
tempResults = lycos;
}
class LycoParser extends TAC.BaseTagParser {
parse() {
// Show lyco
let tempResults = [];
if (
TAC.Globals.tagword !== "<" &&
TAC.Globals.tagword !== "<l:" &&
TAC.Globals.tagword !== "<lyco:" &&
TAC.Globals.tagword !== "<lora:"
) {
let searchTerm = TAC.Globals.tagword
.replace("<lyco:", "")
.replace("<lora:", "")
.replace("<l:", "")
.replace("<", "");
let filterCondition = (x) => {
let regex = new RegExp(TAC.Utils.escapeRegExp(searchTerm, true), "i");
return (
regex.test(x.toLowerCase()) ||
regex.test(x.toLowerCase().replaceAll(" ", "_"))
);
};
tempResults = TAC.Globals.lycos.filter((x) => filterCondition(x[0])); // Filter by tagword
} else {
tempResults = TAC.Globals.lycos;
}
// Add final results
let finalResults = [];
tempResults.forEach(t => {
let result = new AutocompleteResult(t[0].trim(), ResultType.lyco)
result.meta = "Lyco";
result.hash = t[1];
finalResults.push(result);
});
// Add final results
let finalResults = [];
tempResults.forEach((t) => {
const text = t[0].trim();
let lastDot = text.lastIndexOf(".") > -1 ? text.lastIndexOf(".") : text.length;
let lastSlash = text.lastIndexOf("/") > -1 ? text.lastIndexOf("/") : -1;
let name = text.substring(lastSlash + 1, lastDot);
return finalResults;
}
}
let result = new TAC.AutocompleteResult(name, TAC.ResultType.lyco);
result.meta = "Lyco";
result.sortKey = t[1];
result.hash = t[2];
finalResults.push(result);
});
async function load() {
if (lycos.length === 0) {
try {
lycos = (await readFile(`${tagBasePath}/temp/lyco.txt`)).split("\n")
.filter(x => x.trim().length > 0) // Remove empty lines
.map(x => x.trim().split(",")); // Remove carriage returns and padding if it exists, split into name, hash pairs
} catch (e) {
console.error("Error loading lyco.txt: " + e);
return finalResults;
}
}
}
function sanitize(tagType, text) {
if (tagType === ResultType.lyco) {
return `<lyco:${text}:${TAC_CFG.extraNetworksDefaultMultiplier}>`;
async function load() {
if (TAC.Globals.lycos.length === 0) {
try {
TAC.Globals.lycos = (
await TAC.Utils.loadCSV(`${TAC.Globals.tagBasePath}/temp/lyco.txt`)
)
.filter((x) => x[0]?.trim().length > 0) // Remove empty lines
.map((x) => [x[0]?.trim(), x[1], x[2]]); // Trim filenames and return the name, sortKey, hash pairs
} catch (e) {
console.error("Error loading lyco.txt: " + e);
}
}
}
return null;
}
PARSERS.push(new LycoParser(LYCO_TRIGGER));
async function sanitize(tagType, text) {
if (tagType === TAC.ResultType.lyco) {
let multiplier = TAC.CFG.extraNetworksDefaultMultiplier;
let info = await TAC.Utils.fetchAPI(`tacapi/v1/lyco-info/${text}`);
if (info && info["preferred weight"]) {
multiplier = info["preferred weight"];
}
// Add our utility functions to their respective queues
QUEUE_FILE_LOAD.push(load);
QUEUE_SANITIZE.push(sanitize);
let prefix = TAC.CFG.useLoraPrefixForLycos ? "lora" : "lyco";
return `<${prefix}:${text}:${multiplier}>`;
}
return null;
}
TAC.Ext.PARSERS.push(new LycoParser(LYCO_TRIGGER));
// Add our utility functions to their respective queues
TAC.Ext.QUEUE_FILE_LOAD.push(load);
TAC.Ext.QUEUE_SANITIZE.push(sanitize);
})();
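
The lyco variant additionally honors the new useLoraPrefixForLycos option, which matters for webui forks that treat both network types as loras; illustrated with a made-up name and weight:

// TAC.CFG.useLoraPrefixForLycos === true  -> "<lora:myLyco:0.7>"
// TAC.CFG.useLoraPrefixForLycos === false -> "<lyco:myLyco:0.7>"
// (multiplier comes from the "preferred weight" info or the configured default, as above)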

@@ -1,43 +1,56 @@
async function load() {
let modelKeywordParts = (await readFile(`tmp/modelKeywordPath.txt`)).split(",")
modelKeywordPath = modelKeywordParts[0];
let customFileExists = modelKeywordParts[1] === "True";
(function ModelKeywordExtension() {
async function load() {
let modelKeywordParts = (await TAC.Utils.readFile(`tmp/modelKeywordPath.txt`)).split(",");
TAC.Globals.modelKeywordPath = modelKeywordParts[0];
let customFileExists = modelKeywordParts[1] === "True";
if (modelKeywordPath.length > 0 && modelKeywordDict.size === 0) {
try {
let lines = [];
// Only add default keywords if wanted by the user
if (TAC_CFG.modelKeywordCompletion !== "Only user list")
lines = (await readFile(`${modelKeywordPath}/lora-keyword.txt`)).split("\n");
// Add custom user keywords if the file exists
if (customFileExists)
lines = lines.concat((await readFile(`${modelKeywordPath}/lora-keyword-user.txt`)).split("\n"));
if (TAC.Globals.modelKeywordPath.length > 0 && TAC.Globals.modelKeywordDict.size === 0) {
try {
let csv_lines = [];
// Only add default keywords if wanted by the user
if (TAC.CFG.modelKeywordCompletion !== "Only user list")
csv_lines = await TAC.Utils.loadCSV(
`${TAC.Globals.modelKeywordPath}/lora-keyword.txt`
);
// Add custom user keywords if the file exists
if (customFileExists)
csv_lines = csv_lines.concat(
await TAC.Utils.loadCSV(
`${TAC.Globals.modelKeywordPath}/lora-keyword-user.txt`
)
);
if (lines.length === 0) return;
if (csv_lines.length === 0) return;
lines = lines.filter(x => x.trim().length > 0 && x.trim()[0] !== "#") // Remove empty lines and comments
// Add to the dict
lines.forEach(line => {
const parts = line.split(",");
const hash = parts[0];
const keywords = parts[1].replaceAll("| ", ", ").replaceAll("|", ", ").trim();
const lastSepIndex = parts[2]?.lastIndexOf("/") + 1 || parts[2]?.lastIndexOf("\\") + 1 || 0;
const name = parts[2]?.substring(lastSepIndex).trim() || "none"
csv_lines = csv_lines.filter(
(x) => x[0].trim().length > 0 && x[0].trim()[0] !== "#"
); // Remove empty lines and comments
if (modelKeywordDict.has(hash) && name !== "none") {
// Add a new name key if the hash already exists
modelKeywordDict.get(hash).set(name, keywords);
} else {
// Create new hash entry
let map = new Map().set(name, keywords);
modelKeywordDict.set(hash, map);
}
});
} catch (e) {
console.error("Error loading model-keywords list: " + e);
// Add to the dict
csv_lines.forEach((parts) => {
const hash = parts[0];
const keywords = parts[1]
?.replaceAll("| ", ", ")
?.replaceAll("|", ", ")
?.trim();
const lastSepIndex =
parts[2]?.lastIndexOf("/") + 1 || parts[2]?.lastIndexOf("\\") + 1 || 0;
const name = parts[2]?.substring(lastSepIndex).trim() || "none";
if (TAC.Globals.modelKeywordDict.has(hash) && name !== "none") {
// Add a new name key if the hash already exists
TAC.Globals.modelKeywordDict.get(hash).set(name, keywords);
} else {
// Create new hash entry
let map = new Map().set(name, keywords);
TAC.Globals.modelKeywordDict.set(hash, map);
}
});
} catch (e) {
console.error("Error loading model-keywords list: " + e);
}
}
}
}
QUEUE_FILE_LOAD.push(load);
TAC.Ext.QUEUE_FILE_LOAD.push(load);
})();
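For reference, a sketch of the row shape the loader above expects from lora-keyword.txt (values invented; real rows come from the model-keyword extension's keyword lists):

// Sketch with invented values: one parsed CSV row as handled above.
const parts = ["aabbccdd", "keyword one| keyword two", "models/Lora/example.safetensors"];
const keywords = parts[1].replaceAll("| ", ", ").replaceAll("|", ", ").trim();
// keywords === "keyword one, keyword two"
// TAC.Globals.modelKeywordDict then maps "aabbccdd" -> Map { "example.safetensors" -> keywords }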

javascript/ext_styles.js (new file)
@@ -0,0 +1,77 @@
(function StyleExtension() {
    const STYLE_REGEX = /(\$(\d*)\(?)[^$|\[\],\s]*\)?/;
    const STYLE_TRIGGER = () => TAC.CFG.useStyleVars && TAC.Globals.tagword.match(STYLE_REGEX);

    var lastStyleVarIndex = "";

    class StyleParser extends TAC.BaseTagParser {
        async parse() {
            // Refresh if needed
            await TAC.Utils.refreshStyleNamesIfChanged();

            // Show styles
            let tempResults = [];
            let matchGroups = TAC.Globals.tagword.match(STYLE_REGEX);

            // Save index to insert again later or clear last one
            lastStyleVarIndex = matchGroups[2] ? matchGroups[2] : "";

            if (TAC.Globals.tagword !== matchGroups[1]) {
                let searchTerm = TAC.Globals.tagword.replace(matchGroups[1], "");
                let filterCondition = (x) => {
                    let regex = new RegExp(TAC.Utils.escapeRegExp(searchTerm, true), "i");
                    return (
                        regex.test(x[0].toLowerCase()) ||
                        regex.test(x[0].toLowerCase().replaceAll(" ", "_"))
                    );
                };
                tempResults = TAC.Globals.styleNames.filter((x) => filterCondition(x)); // Filter by tagword
            } else {
                tempResults = TAC.Globals.styleNames;
            }

            // Add final results
            let finalResults = [];
            tempResults.forEach((t) => {
                let result = new TAC.AutocompleteResult(t[0].trim(), TAC.ResultType.styleName);
                result.meta = "Style";
                finalResults.push(result);
            });

            return finalResults;
        }
    }

    async function load(force = false) {
        if (TAC.Globals.styleNames.length === 0 || force) {
            try {
                TAC.Globals.styleNames = (
                    await TAC.Utils.loadCSV(`${TAC.Globals.tagBasePath}/temp/styles.txt`)
                )
                    .filter((x) => x[0]?.trim().length > 0) // Remove empty lines
                    .filter((x) => x[0] !== "None") // Remove "None" style
                    .map((x) => [x[0].trim()]); // Trim name
            } catch (e) {
                console.error("Error loading styles.txt: " + e);
            }
        }
    }

    function sanitize(tagType, text) {
        if (tagType === TAC.ResultType.styleName) {
            if (text.includes(" ")) {
                return `$${lastStyleVarIndex}(${text})`;
            } else {
                return `$${lastStyleVarIndex}${text}`;
            }
        }
        return null;
    }

    TAC.Ext.PARSERS.push(new StyleParser(STYLE_TRIGGER));

    // Add our utility functions to their respective queues
    TAC.Ext.QUEUE_FILE_LOAD.push(load);
    TAC.Ext.QUEUE_SANITIZE.push(sanitize);
})();
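A short sketch of the trigger/sanitize round trip (the style name is invented):

// Sketch: typing "$2(wat" matches STYLE_REGEX; group 1 is the prefix, group 2 the index.
const groups = "$2(wat".match(/(\$(\d*)\(?)[^$|\[\],\s]*\)?/);
console.log(groups[1], groups[2]); // "$2(" and "2"; the search term becomes "wat"
// Picking an invented style "Watercolor Painting" (contains a space) is inserted
// as "$2(Watercolor Painting)"; a space-free name would insert as "$2Name".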


@@ -1,240 +1,279 @@
(function UmiExtension() {
    const UMI_PROMPT_REGEX = /<[^\s]*?\[[^,<>]*[\]|]?>?/gi;
    const UMI_TAG_REGEX = /(?:\[|\||--)([^<>\[\]\-|]+)/gi;

    const UMI_TRIGGER = () =>
        TAC.CFG.useWildcards && [...TAC.Globals.tagword.matchAll(UMI_PROMPT_REGEX)].length > 0;

    class UmiParser extends TAC.BaseTagParser {
        parse(textArea, prompt) {
            // We are in a UMI yaml tag definition, parse further
            let umiSubPrompts = [...prompt.matchAll(UMI_PROMPT_REGEX)];

            let umiTags = [];
            let umiTagsWithOperators = [];

            const insertAt = (str, char, pos) => str.slice(0, pos) + char + str.slice(pos);

            umiSubPrompts.forEach((umiSubPrompt) => {
                umiTags = umiTags.concat(
                    [...umiSubPrompt[0].matchAll(UMI_TAG_REGEX)].map((x) => x[1].toLowerCase())
                );

                const start = umiSubPrompt.index;
                const end = umiSubPrompt.index + umiSubPrompt[0].length;
                if (textArea.selectionStart >= start && textArea.selectionStart <= end) {
                    umiTagsWithOperators = insertAt(
                        umiSubPrompt[0],
                        "###",
                        textArea.selectionStart - start
                    );
                }
            });

            // Safety check since UMI parsing sometimes seems to trigger outside of an UMI subprompt and thus fails
            if (umiTagsWithOperators.length === 0) {
                return null;
            }

            const promptSplitToTags = umiTagsWithOperators.replace("]###[", "][").split("][");

            const clean = (str) =>
                str
                    .replaceAll(">", "")
                    .replaceAll("<", "")
                    .replaceAll("[", "")
                    .replaceAll("]", "")
                    .trim();

            const matches = promptSplitToTags.reduce(
                (acc, curr) => {
                    let isOptional = curr.includes("|");
                    let isNegative = curr.startsWith("--");

                    let out;
                    if (isOptional) {
                        out = {
                            hasCursor: curr.includes("###"),
                            tags: clean(curr)
                                .split("|")
                                .map((x) => ({
                                    hasCursor: x.includes("###"),
                                    isNegative: x.startsWith("--"),
                                    tag: clean(x).replaceAll("###", "").replaceAll("--", ""),
                                })),
                        };
                        acc.optional.push(out);
                        acc.all.push(...out.tags.map((x) => x.tag));
                    } else if (isNegative) {
                        out = {
                            hasCursor: curr.includes("###"),
                            tags: clean(curr).replaceAll("###", "").split("|"),
                        };
                        out.tags = out.tags.map((x) => (x.startsWith("--") ? x.substring(2) : x));
                        acc.negative.push(out);
                        acc.all.push(...out.tags);
                    } else {
                        out = {
                            hasCursor: curr.includes("###"),
                            tags: clean(curr).replaceAll("###", "").split("|"),
                        };
                        acc.positive.push(out);
                        acc.all.push(...out.tags);
                    }

                    return acc;
                },
                { positive: [], negative: [], optional: [], all: [] }
            );

            //console.log({ matches })

            const filteredWildcards = (tagword) => {
                const wildcards = TAC.Globals.umiWildcards
                    .filter((x) => {
                        let tags = x[1];

                        const matchesNeg =
                            matches.negative.length === 0 ||
                            matches.negative.every(
                                (x) => x.hasCursor || x.tags.every((t) => !tags[t])
                            );
                        if (!matchesNeg) return false;

                        const matchesPos =
                            matches.positive.length === 0 ||
                            matches.positive.every(
                                (x) => x.hasCursor || x.tags.every((t) => tags[t])
                            );
                        if (!matchesPos) return false;

                        const matchesOpt =
                            matches.optional.length === 0 ||
                            matches.optional.some((x) =>
                                x.tags.some((t) =>
                                    t.hasCursor || t.isNegative ? !tags[t.tag] : tags[t.tag]
                                )
                            );
                        if (!matchesOpt) return false;

                        return true;
                    })
                    .reduce((acc, val) => {
                        Object.keys(val[1]).forEach((tag) => (acc[tag] = acc[tag] + 1 || 1));
                        return acc;
                    }, {});

                return Object.entries(wildcards)
                    .sort((a, b) => b[1] - a[1])
                    .filter((x) => x[0] === tagword || !matches.all.includes(x[0]));
            };

            if (umiTags.length > 0) {
                // Get difference for subprompt
                let tagCountChange = umiTags.length - TAC.Globals.umiPreviousTags.length;
                let diff = TAC.Utils.difference(umiTags, TAC.Globals.umiPreviousTags);
                TAC.Globals.umiPreviousTags = umiTags;

                // Show all condition
                let showAll =
                    TAC.Globals.tagword.endsWith("[") ||
                    TAC.Globals.tagword.endsWith("[--") ||
                    TAC.Globals.tagword.endsWith("|");

                // Exit early if the user closed the bracket manually
                if (
                    (!diff || diff.length === 0 || (diff.length === 1 && tagCountChange < 0)) &&
                    !showAll
                ) {
                    if (!TAC.Globals.hideBlocked) hideResults(textArea);
                    return;
                }

                let umiTagword = tagCountChange < 0 ? "" : diff[0] || "";
                let tempResults = [];
                if (umiTagword && umiTagword.length > 0) {
                    umiTagword = umiTagword.toLowerCase().replace(/[\n\r]/g, "");
                    TAC.Globals.originalTagword = TAC.Globals.tagword;
                    TAC.Globals.tagword = umiTagword;
                    let filteredWildcardsSorted = filteredWildcards(umiTagword);
                    let searchRegex = new RegExp(
                        `(^|[^a-zA-Z])${TAC.Utils.escapeRegExp(umiTagword)}`,
                        "i"
                    );
                    let baseFilter = (x) => x[0].toLowerCase().search(searchRegex) > -1;
                    let spaceIncludeFilter = (x) =>
                        x[0].toLowerCase().replaceAll(" ", "_").search(searchRegex) > -1;

                    tempResults = filteredWildcardsSorted.filter(
                        (x) => baseFilter(x) || spaceIncludeFilter(x)
                    ); // Filter by tagword

                    // Add final results
                    let finalResults = [];
                    tempResults.forEach((t) => {
                        let result = new TAC.AutocompleteResult(
                            t[0].trim(),
                            TAC.ResultType.umiWildcard
                        );
                        result.count = t[1];
                        finalResults.push(result);
                    });

                    finalResults = finalResults.sort((a, b) => b.count - a.count);
                    return finalResults;
                } else if (showAll) {
                    let filteredWildcardsSorted = filteredWildcards("");

                    // Add final results
                    let finalResults = [];
                    filteredWildcardsSorted.forEach((t) => {
                        let result = new TAC.AutocompleteResult(
                            t[0].trim(),
                            TAC.ResultType.umiWildcard
                        );
                        result.count = t[1];
                        finalResults.push(result);
                    });

                    TAC.Globals.originalTagword = TAC.Globals.tagword;
                    TAC.Globals.tagword = "";

                    finalResults = finalResults.sort((a, b) => b.count - a.count);
                    return finalResults;
                }
            } else {
                let filteredWildcardsSorted = filteredWildcards("");

                // Add final results
                let finalResults = [];
                filteredWildcardsSorted.forEach((t) => {
                    let result = new TAC.AutocompleteResult(
                        t[0].trim(),
                        TAC.ResultType.umiWildcard
                    );
                    result.count = t[1];
                    finalResults.push(result);
                });

                TAC.Globals.originalTagword = TAC.Globals.tagword;
                TAC.Globals.tagword = "";

                finalResults = finalResults.sort((a, b) => b.count - a.count);
                return finalResults;
            }
        }
    }

    function updateUmiTags(tagType, sanitizedText, newPrompt, textArea) {
        // If it was a umi wildcard, also update the TAC.Globals.umiPreviousTags
        if (tagType === TAC.ResultType.umiWildcard && TAC.Globals.originalTagword.length > 0) {
            let umiSubPrompts = [...newPrompt.matchAll(UMI_PROMPT_REGEX)];

            let umiTags = [];
            umiSubPrompts.forEach((umiSubPrompt) => {
                umiTags = umiTags.concat(
                    [...umiSubPrompt[0].matchAll(UMI_TAG_REGEX)].map((x) => x[1].toLowerCase())
                );
            });

            TAC.Globals.umiPreviousTags = umiTags;

            hideResults(textArea);
            return true;
        }
        return false;
    }

    async function load() {
        if (TAC.Globals.umiWildcards.length === 0) {
            try {
                let umiTags = (
                    await TAC.Utils.readFile(`${TAC.Globals.tagBasePath}/temp/umi_tags.txt`)
                ).split("\n");

                // Split into tag, count pairs
                TAC.Globals.umiWildcards = umiTags
                    .map((x) => x.trim().split(","))
                    .map(([i, ...rest]) => [
                        i,
                        rest.reduce((a, b) => {
                            a[b.toLowerCase()] = true;
                            return a;
                        }, {}),
                    ]);
            } catch (e) {
                console.error("Error loading umi wildcards: " + e);
            }
        }
    }

    function sanitize(tagType, text) {
        // Replace underscores only if the umi tag is not using them
        if (tagType === TAC.ResultType.umiWildcard && !TAC.Globals.umiWildcards.includes(text)) {
            return text.replaceAll("_", " ");
        }
        return null;
    }

    // Add UMI parser
    TAC.Ext.PARSERS.push(new UmiParser(UMI_TRIGGER));

    // Add our utility functions to their respective queues
    TAC.Ext.QUEUE_FILE_LOAD.push(load);
    TAC.Ext.QUEUE_SANITIZE.push(sanitize);
    TAC.Ext.QUEUE_AFTER_INSERT.push(updateUmiTags);
})();
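To illustrate what the two regexes extract, a small standalone sketch (the prompt string is invented):

// Standalone sketch of the regex behavior above (prompt invented):
const subPrompts = [..."<umi[fantasy|--castle]>".matchAll(/<[^\s]*?\[[^,<>]*[\]|]?>?/gi)];
const tags = [...subPrompts[0][0].matchAll(/(?:\[|\||--)([^<>\[\]\-|]+)/gi)].map((x) => x[1]);
console.log(tags); // ["fantasy", "castle"] -- the "--" prefix marks "castle" as negative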


@@ -1,123 +1,232 @@
(function WildcardExtension() {
    // Regex
    const WC_REGEX = new RegExp(/__([^,]+)__([^, ]*)/g);

    // Trigger conditions
    const WC_TRIGGER = () =>
        TAC.CFG.useWildcards &&
        [
            ...TAC.Globals.tagword.matchAll(
                new RegExp(
                    WC_REGEX.source.replaceAll("__", TAC.Utils.escapeRegExp(TAC.CFG.wcWrap)),
                    "g"
                )
            ),
        ].length > 0;
    const WC_FILE_TRIGGER = () =>
        TAC.CFG.useWildcards &&
        ((TAC.Globals.tagword.startsWith(TAC.CFG.wcWrap) &&
            !TAC.Globals.tagword.endsWith(TAC.CFG.wcWrap)) ||
            TAC.Globals.tagword === TAC.CFG.wcWrap);

    class WildcardParser extends TAC.BaseTagParser {
        async parse() {
            // Show wildcards from a file with that name
            let wcMatch = [
                ...TAC.Globals.tagword.matchAll(
                    new RegExp(
                        WC_REGEX.source.replaceAll("__", TAC.Utils.escapeRegExp(TAC.CFG.wcWrap)),
                        "g"
                    )
                ),
            ];
            let wcFile = wcMatch[0][1];
            let wcWord = wcMatch[0][2];

            // Look in normal wildcard files
            let wcFound = TAC.Globals.wildcardFiles.filter((x) => x[1].toLowerCase() === wcFile);
            if (wcFound.length === 0) wcFound = null;
            // Use found wildcard file or look in external wildcard files
            let wcPairs =
                wcFound ||
                TAC.Globals.wildcardExtFiles.filter((x) => x[1].toLowerCase() === wcFile);

            if (!wcPairs) return [];

            let wildcards = [];
            for (let i = 0; i < wcPairs.length; i++) {
                const basePath = wcPairs[i][0];
                const fileName = wcPairs[i][1];

                if (!basePath || !fileName) return;

                // YAML wildcards are already loaded as json, so we can get the values directly.
                // basePath is the name of the file in this case, and fileName the key
                if (basePath.endsWith(".yaml")) {
                    const getDescendantProp = (obj, desc) => {
                        const arr = desc.split("/");
                        while (arr.length) {
                            obj = obj[arr.shift()];
                        }
                        return obj;
                    };
                    wildcards = wildcards.concat(
                        getDescendantProp(TAC.Globals.yamlWildcards[basePath], fileName)
                    );
                } else {
                    const fileContent = (
                        await TAC.Utils.fetchAPI(
                            `tacapi/v1/wildcard-contents?basepath=${basePath}&filename=${fileName}.txt`,
                            false
                        )
                    )
                        .split("\n")
                        .filter((x) => x.trim().length > 0 && !x.startsWith("#")); // Remove empty lines and comments
                    wildcards = wildcards.concat(fileContent);
                }
            }

            if (TAC.CFG.sortWildcardResults) wildcards.sort((a, b) => a.localeCompare(b));

            let finalResults = [];
            let tempResults = wildcards.filter((x) =>
                wcWord !== null && wcWord.length > 0 ? x.toLowerCase().includes(wcWord) : x
            ); // Filter by tagword
            tempResults.forEach((t) => {
                let result = new TAC.AutocompleteResult(t.trim(), TAC.ResultType.wildcardTag);
                result.meta = wcFile;
                finalResults.push(result);
            });

            return finalResults;
        }
    }

    class WildcardFileParser extends TAC.BaseTagParser {
        parse() {
            // Show available wildcard files
            let tempResults = [];
            if (TAC.Globals.tagword !== TAC.CFG.wcWrap) {
                let lmb = (x) =>
                    x[1].toLowerCase().includes(TAC.Globals.tagword.replace(TAC.CFG.wcWrap, ""));
                tempResults = TAC.Globals.wildcardFiles
                    .filter(lmb)
                    .concat(TAC.Globals.wildcardExtFiles.filter(lmb)); // Filter by tagword
            } else {
                tempResults = TAC.Globals.wildcardFiles.concat(TAC.Globals.wildcardExtFiles);
            }

            let finalResults = [];
            const alreadyAdded = new Map();
            // Get final results
            tempResults.forEach((wcFile) => {
                // Skip duplicate entries in case multiple files have the same name or yaml category
                if (alreadyAdded.has(wcFile[1])) return;

                let result = null;
                if (wcFile[0].endsWith(".yaml")) {
                    result = new TAC.AutocompleteResult(
                        wcFile[1].trim(),
                        TAC.ResultType.yamlWildcard
                    );
                    result.meta = "YAML wildcard collection";
                } else {
                    result = new TAC.AutocompleteResult(
                        wcFile[1].trim(),
                        TAC.ResultType.wildcardFile
                    );
                    result.meta = "Wildcard file";
                    result.sortKey = wcFile[2].trim();
                }
                finalResults.push(result);
                alreadyAdded.set(wcFile[1], true);
            });

            finalResults.sort(TAC.Utils.getSortFunction());
            return finalResults;
        }
    }

    async function load() {
        if (TAC.Globals.wildcardFiles.length === 0 && TAC.Globals.wildcardExtFiles.length === 0) {
            try {
                let wcFileArr = await TAC.Utils.loadCSV(`${TAC.Globals.tagBasePath}/temp/wc.txt`);
                if (wcFileArr && wcFileArr.length > 0) {
                    let wcBasePath = wcFileArr[0][0].trim(); // First line should be the base path
                    TAC.Globals.wildcardFiles = wcFileArr
                        .slice(1)
                        .filter((x) => x[0]?.trim().length > 0) // Remove empty lines
                        .map((x) => [wcBasePath, x[0]?.trim().replace(".txt", ""), x[1]]); // Remove file extension & newlines
                }

                // To support multiple sources, we need to separate them using the provided "-----" strings
                let wcExtFileArr = await TAC.Utils.loadCSV(
                    `${TAC.Globals.tagBasePath}/temp/wce.txt`
                );
                let splitIndices = [];
                for (let index = 0; index < wcExtFileArr.length; index++) {
                    if (wcExtFileArr[index][0].trim() === "-----") {
                        splitIndices.push(index);
                    }
                }
                // For each group, add them to the wildcardFiles array with the base path as the first element
                for (let i = 0; i < splitIndices.length; i++) {
                    let start = splitIndices[i - 1] || 0;
                    if (i > 0) start++; // Skip the "-----" line
                    let end = splitIndices[i];

                    let wcExtFile = wcExtFileArr.slice(start, end);
                    if (wcExtFile && wcExtFile.length > 0) {
                        let base = wcExtFile[0][0].trim() + "/";
                        wcExtFile = wcExtFile
                            .slice(1)
                            .filter((x) => x[0]?.trim().length > 0) // Remove empty lines
                            .map((x) => [
                                base,
                                x[0]?.trim().replace(base, "").replace(".txt", ""),
                                x[1],
                            ]);
                        TAC.Globals.wildcardExtFiles.push(...wcExtFile);
                    }
                }

                // Load the yaml wildcard json file and append it as a wildcard file, appending each key as a path component until we reach the end
                TAC.Globals.yamlWildcards = await TAC.Utils.readFile(
                    `${TAC.Globals.tagBasePath}/temp/wc_yaml.json`,
                    true
                );
                // Append each key as a path component until we reach a leaf
                Object.keys(TAC.Globals.yamlWildcards).forEach((file) => {
                    const flattened = TAC.Utils.flatten(TAC.Globals.yamlWildcards[file], [], "/");
                    Object.keys(flattened).forEach((key) => {
                        TAC.Globals.wildcardExtFiles.push([file, key]);
                    });
                });
            } catch (e) {
                console.error("Error loading wildcards: " + e);
            }
        }
    }

    function sanitize(tagType, text) {
        if (tagType === TAC.ResultType.wildcardFile || tagType === TAC.ResultType.yamlWildcard) {
            return `${TAC.CFG.wcWrap}${text}${TAC.CFG.wcWrap}`;
        } else if (tagType === TAC.ResultType.wildcardTag) {
            return text;
        }
        return null;
    }

    function keepOpenIfWildcard(tagType, sanitizedText, newPrompt, textArea) {
        // If it's a wildcard, we want to keep the results open so the user can select another wildcard
        if (tagType === TAC.ResultType.wildcardFile || tagType === TAC.ResultType.yamlWildcard) {
            TAC.Globals.hideBlocked = true;
            setTimeout(() => {
                TAC.Globals.hideBlocked = false;
            }, 450);
            return true;
        }
        return false;
    }

    // Register the parsers
    TAC.Ext.PARSERS.push(new WildcardParser(WC_TRIGGER));
    TAC.Ext.PARSERS.push(new WildcardFileParser(WC_FILE_TRIGGER));

    // Add our utility functions to their respective queues
    TAC.Ext.QUEUE_FILE_LOAD.push(load);
    TAC.Ext.QUEUE_SANITIZE.push(sanitize);
    TAC.Ext.QUEUE_AFTER_INSERT.push(keepOpenIfWildcard);
})();
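And a sketch of the file trigger condition, assuming the default wrap is "__" as in the pre-refactor code (the file name is invented):

// Sketch: WC_FILE_TRIGGER logic with an assumed default wrap of "__".
const wcWrap = "__";
const tagword = "__colors";
const fileTrigger =
    (tagword.startsWith(wcWrap) && !tagword.endsWith(wcWrap)) || tagword === wcWrap;
console.log(fileTrigger); // true -> WildcardFileParser lists matching files
// Completing to "__colors__" then lets WildcardParser fetch the file contents
// through the tacapi/v1/wildcard-contents endpoint used above.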



@@ -1,5 +1,6 @@
# This file provides support for the model-keyword extension to add known lora keywords on completion
import csv
import hashlib
from pathlib import Path
@@ -15,18 +16,26 @@ hash_dict = {}
def load_hash_cache():
    if not known_hashes_file.exists():
        known_hashes_file.touch()

    with open(known_hashes_file, "r", encoding="utf-8") as file:
        reader = csv.reader(
            file.readlines(), delimiter=",", quotechar='"', skipinitialspace=True
        )
        for line in reader:
            name, hash, mtime = line
            hash_dict[name] = (hash, mtime)


def update_hash_cache():
    global file_needs_update
    if file_needs_update:
        if not known_hashes_file.exists():
            known_hashes_file.touch()

        with open(known_hashes_file, "w", encoding="utf-8", newline='') as file:
            writer = csv.writer(file)
            for name, (hash, mtime) in hash_dict.items():
                writer.writerow([name, hash, mtime])
# Copy of the fast inaccurate hash function from the extension
@@ -71,6 +80,6 @@ def write_model_keyword_path():
        return True
    else:
        print(
            "Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu."
        )
        return False


@@ -1,41 +1,79 @@
from pathlib import Path

from modules import scripts, shared

try:
    from modules.paths import extensions_dir, script_path

    # Webui root path
    FILE_DIR = Path(script_path).absolute()
    # The extension base path
    EXT_PATH = Path(extensions_dir).absolute()
except ImportError:
    # Webui root path
    FILE_DIR = Path().absolute()
    # The extension base path
    EXT_PATH = FILE_DIR.joinpath("extensions").absolute()

# Tags base path
TAGS_PATH = Path(scripts.basedir()).joinpath("tags").absolute()

# The path to the folder containing the wildcards and embeddings
try:  # SD.Next
    WILDCARD_PATH = Path(shared.opts.wildcards_dir).absolute()
except Exception:  # A1111
    WILDCARD_PATH = FILE_DIR.joinpath("scripts/wildcards").absolute()
EMB_PATH = Path(shared.cmd_opts.embeddings_dir).absolute()

# Forge Classic detection
try:
    from modules_forge.forge_version import version as forge_version
    IS_FORGE_CLASSIC = forge_version == "classic"
except ImportError:
    IS_FORGE_CLASSIC = False

# Forge Classic skips it
if not IS_FORGE_CLASSIC:
    try:
        HYP_PATH = Path(shared.cmd_opts.hypernetwork_dir).absolute()
    except AttributeError:
        HYP_PATH = None
else:
    HYP_PATH = None

try:
    LORA_PATH = Path(shared.cmd_opts.lora_dir).absolute()
except AttributeError:
    LORA_PATH = None

try:
    try:
        LYCO_PATH = Path(shared.cmd_opts.lyco_dir_backcompat).absolute()
    except:
        LYCO_PATH = Path(shared.cmd_opts.lyco_dir).absolute()  # attempt original non-backcompat path
except AttributeError:
    LYCO_PATH = None


def find_ext_wildcard_paths():
    """Returns the path to the extension wildcards folder"""
    found = list(EXT_PATH.glob("*/wildcards/"))

    # Try to find the wildcard path from the shared opts
    try:
        from modules.shared import opts
    except ImportError:  # likely not in an a1111 context
        opts = None

    # Append custom wildcard paths
    custom_paths = [
        getattr(shared.cmd_opts, "wildcards_dir", None),  # Cmd arg from the wildcard extension
        getattr(opts, "wildcard_dir", None),  # Custom path from sd-dynamic-prompts
    ]
    for path in [Path(p).absolute() for p in custom_paths if p is not None]:
        if path.exists():
            found.append(path)

    return found
@@ -43,5 +81,12 @@ def find_ext_wildcard_paths():
WILDCARD_EXT_PATHS = find_ext_wildcard_paths()

# The path to the temporary files
# In the webui root, on windows it exists by default, on linux it doesn't
STATIC_TEMP_PATH = FILE_DIR.joinpath("tmp").absolute()
TEMP_PATH = TAGS_PATH.joinpath("temp").absolute()  # Extension specific temp files

# Make sure these folders exist
if not TEMP_PATH.exists():
    TEMP_PATH.mkdir()
if not STATIC_TEMP_PATH.exists():
    STATIC_TEMP_PATH.mkdir()


@@ -2,68 +2,213 @@
# to a temporary file to expose it to the javascript side
import glob
import importlib
import json
import sqlite3
import sys
import urllib.parse
from asyncio import sleep
from pathlib import Path

import gradio as gr
import yaml
from fastapi import FastAPI
from fastapi.responses import FileResponse, JSONResponse, Response
from modules import hashes, script_callbacks, sd_hijack, sd_models, shared
from pydantic import BaseModel
from scripts.model_keyword_support import (get_lora_simple_hash,
                                           load_hash_cache, update_hash_cache,
                                           write_model_keyword_path)
from scripts.shared_paths import *

try:
    try:
        from scripts import tag_frequency_db as tdb
    except ModuleNotFoundError:
        from inspect import currentframe, getframeinfo

        filename = getframeinfo(currentframe()).filename
        parent = Path(filename).resolve().parent
        sys.path.append(str(parent))
        import tag_frequency_db as tdb

    # Ensure the db dependency is reloaded on script reload
    importlib.reload(tdb)

    db = tdb.TagFrequencyDb()
    if int(db.version) != int(tdb.db_ver):
        raise ValueError("Database version mismatch")
except (ImportError, ValueError, sqlite3.Error) as e:
    print(f"Tag Autocomplete: Tag frequency database error - \"{e}\"")
    db = None


def get_embed_db(sd_model=None):
    """Returns the embedding database, if available."""
    try:
        return sd_hijack.model_hijack.embedding_db
    except Exception:
        try:  # sd next with diffusers backend
            sdnext_model = sd_model if sd_model is not None else shared.sd_model
            return sdnext_model.embedding_db
        except Exception:
            try:  # forge webui
                forge_model = sd_model if sd_model is not None else sd_models.model_data.get_sd_model()
                if type(forge_model).__name__ == "FakeInitialModel":
                    return None
                else:
                    processer = getattr(forge_model, "text_processing_engine", getattr(forge_model, "text_processing_engine_l"))
                    return processer.embeddings
            except Exception:
                return None


# Attempt to get embedding load function, using the same call as api.
try:
    embed_db = get_embed_db()
    if embed_db is not None:
        load_textual_inversion_embeddings = embed_db.load_textual_inversion_embeddings
    else:
        load_textual_inversion_embeddings = lambda *args, **kwargs: None
except Exception as e:  # Not supported.
    load_textual_inversion_embeddings = lambda *args, **kwargs: None
    print("Tag Autocomplete: Cannot reload embeddings instantly:", e)

# Sorting functions for extra networks / embeddings stuff
sort_criteria = {
    "Name": lambda path, name, subpath: name.lower() if subpath else path.stem.lower(),
    "Date Modified (newest first)": lambda path, name, subpath: path.stat().st_mtime if path.exists() else name.lower(),
    "Date Modified (oldest first)": lambda path, name, subpath: path.stat().st_mtime if path.exists() else name.lower()
}


def sort_models(model_list, sort_method = None, name_has_subpath = False):
    """Sorts models according to the setting.
    Input: list of (full_path, display_name, {hash}) models.
    Returns models in the format of name, sort key, meta.
    Meta is optional and can be a hash, version string or other required info.
    """
    if len(model_list) == 0:
        return model_list

    if sort_method is None:
        sort_method = getattr(shared.opts, "tac_modelSortOrder", "Name")

    # Get sorting method from dictionary
    sorter = sort_criteria.get(sort_method, sort_criteria["Name"])

    # During merging on the JS side we need to re-sort anyway, so here only the sort criteria are calculated.
    # The list itself doesn't need to get sorted at this point.
    if len(model_list[0]) > 2:
        results = [f'"{name}","{sorter(path, name, name_has_subpath)}",{meta}' for path, name, meta in model_list]
    else:
        results = [f'"{name}","{sorter(path, name, name_has_subpath)}"' for path, name in model_list]

    return results


def get_wildcards():
    """Returns a list of all wildcards. Works on nested folders."""
    wildcard_files = list(WILDCARD_PATH.rglob("*.txt"))
    resolved = [(w, w.relative_to(WILDCARD_PATH).as_posix())
                for w in wildcard_files
                if w.name != "put wildcards here.txt"
                and w.is_file()]
    return sort_models(resolved, name_has_subpath=True)


def get_ext_wildcards():
    """Returns a list of all extension wildcards. Works on nested folders."""
    wildcard_files = []

    excluded_folder_names = [s.strip() for s in getattr(shared.opts, "tac_wildcardExclusionList", "").split(",")]
    for path in WILDCARD_EXT_PATHS:
        wildcard_files.append(path.as_posix())
        resolved = [(w, w.relative_to(path).as_posix())
                    for w in path.rglob("*.txt")
                    if w.name != "put wildcards here.txt"
                    and not any(excluded in w.parts for excluded in excluded_folder_names)
                    and w.is_file()]
        wildcard_files.extend(sort_models(resolved, name_has_subpath=True))
        wildcard_files.append("-----")

    return wildcard_files


def is_umi_format(data):
    """Returns True if the YAML file is in UMI format."""
    issue_found = False
    for item in data:
        try:
            if not (data[item] and 'Tags' in data[item] and isinstance(data[item]['Tags'], list)):
                issue_found = True
                break
        except:
            issue_found = True
            break
    return not issue_found


count = 0
def parse_umi_format(umi_tags, data):
    global count
    for item in data:
        umi_tags[count] = ','.join(data[item]['Tags'])
        count += 1


def parse_dynamic_prompt_format(yaml_wildcards, data, path):
    # Recurse subkeys, delete those without string lists as values
    def recurse_dict(d: dict):
        for key, value in d.copy().items():
            if isinstance(value, dict):
                recurse_dict(value)
            elif not (isinstance(value, list) and all(isinstance(v, str) for v in value)):
                del d[key]

    try:
        recurse_dict(data)
        # Add to yaml_wildcards
        yaml_wildcards[path.name] = data
    except:
        return


def get_yaml_wildcards():
    """Returns a list of all tags found in extension YAML files found under a Tags: key."""
    yaml_files = []
    for path in WILDCARD_EXT_PATHS:
        yaml_files.extend(p for p in path.rglob("*.yml") if p.is_file())
        yaml_files.extend(p for p in path.rglob("*.yaml") if p.is_file())

    yaml_wildcards = {}
    umi_tags = {}  # { tag: count }

    for path in yaml_files:
        try:
            with open(path, encoding="utf8") as file:
                data = yaml.safe_load(file)
                if (data):
                    if (is_umi_format(data)):
                        parse_umi_format(umi_tags, data)
                    else:
                        parse_dynamic_prompt_format(yaml_wildcards, data, path)
                else:
                    print('No data found in ' + path.name)
        except (yaml.YAMLError, UnicodeDecodeError, AttributeError, TypeError) as e:
            # YAML file not in wildcard format or couldn't be read
            print(f'Issue in parsing YAML file {path.name}: {e}')
            continue
        except Exception as e:
            # Something else went wrong, just skip
            continue

    # Sort by count
    umi_sorted = sorted(umi_tags.items(), key=lambda item: item[1], reverse=True)
    umi_output = []
    for tag, count in umi_sorted:
        umi_output.append(f"{tag},{count}")

    if (len(umi_output) > 0):
        write_to_temp_file('umi_tags.txt', umi_output)

    with open(TEMP_PATH.joinpath("wc_yaml.json"), "w", encoding="utf-8") as file:
        json.dump(yaml_wildcards, file, ensure_ascii=False)
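The quoted rows produced by sort_models above end up in the temp files and are read back by the frontend; a JS-side sketch of that shape (values invented; the real reader is TAC.Utils.loadCSV):

// JS-side sketch: one temp-file row in the '"name","sortkey",meta' shape
// produced by sort_models (values invented for illustration).
const row = '"styles/example_lora","styles/example_lora",aabbccdd';
const match = row.match(/^"(.*)","(.*)",?(.*)$/);
console.log(match[1], match[2], match[3]); // name, sort key, meta (e.g. a hash)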
def get_embeddings(sd_model):
@@ -72,48 +217,59 @@ def get_embeddings(sd_model):
    # Version constants
    V1_SHAPE = 768
    V2_SHAPE = 1024
    VXL_SHAPE = 2048

    emb_v1 = []
    emb_v2 = []
    emb_vXL = []
    emb_unknown = []
    results = []

    try:
        embed_db = get_embed_db(sd_model)

        # Re-register callback if needed
        global load_textual_inversion_embeddings
        if embed_db is not None and load_textual_inversion_embeddings != embed_db.load_textual_inversion_embeddings:
            load_textual_inversion_embeddings = embed_db.load_textual_inversion_embeddings

        loaded = embed_db.word_embeddings
        skipped = embed_db.skipped_embeddings

        # Add embeddings to the correct list
        for key, emb in (skipped | loaded).items():
            filename = getattr(emb, "filename", None)
            if filename is None:
                if emb.shape is None:
                    emb_unknown.append((Path(key), key, ""))
                elif emb.shape == V1_SHAPE:
                    emb_v1.append((Path(key), key, "v1"))
                elif emb.shape == V2_SHAPE:
                    emb_v2.append((Path(key), key, "v2"))
                elif emb.shape == VXL_SHAPE:
                    emb_vXL.append((Path(key), key, "vXL"))
                else:
                    emb_unknown.append((Path(key), key, ""))
            else:
                if emb.filename is None:
                    continue

                if emb.shape is None:
                    emb_unknown.append((Path(emb.filename), Path(emb.filename).relative_to(EMB_PATH).as_posix(), ""))
                elif emb.shape == V1_SHAPE:
                    emb_v1.append((Path(emb.filename), Path(emb.filename).relative_to(EMB_PATH).as_posix(), "v1"))
                elif emb.shape == V2_SHAPE:
                    emb_v2.append((Path(emb.filename), Path(emb.filename).relative_to(EMB_PATH).as_posix(), "v2"))
                elif emb.shape == VXL_SHAPE:
                    emb_vXL.append((Path(emb.filename), Path(emb.filename).relative_to(EMB_PATH).as_posix(), "vXL"))
                else:
                    emb_unknown.append((Path(emb.filename), Path(emb.filename).relative_to(EMB_PATH).as_posix(), ""))

        results = sort_models(emb_v1) + sort_models(emb_v2) + sort_models(emb_vXL) + sort_models(emb_unknown)
    except AttributeError:
        print("tag_autocomplete_helper: Old webui version or unrecognized model shape, using fallback for embedding completion.")
        # Get a list of all embeddings in the folder
        all_embeds = [str(e.relative_to(EMB_PATH)) for e in EMB_PATH.rglob("*") if e.suffix in {".bin", ".pt", ".png", '.webp', '.jxl', '.avif'} and e.is_file()]
        # Remove files with a size of 0
        all_embeds = [e for e in all_embeds if EMB_PATH.joinpath(e).stat().st_size > 0]
        # Remove file extensions
@@ -127,51 +283,129 @@ def get_hypernetworks():
    # Get a list of all hypernetworks in the folder
    hyp_paths = [Path(h) for h in glob.glob(HYP_PATH.joinpath("**/*").as_posix(), recursive=True)]
    all_hypernetworks = [(h, h.stem) for h in hyp_paths if h.suffix in {".pt"} and h.is_file()]
    return sort_models(all_hypernetworks)


model_keyword_installed = write_model_keyword_path()


def _get_lora():
    """
    Write a list of all lora.
    Fallback method for when the built-in Lora.networks module is not available.
    """
    # Get a list of all lora in the folder
    lora_paths = [
        Path(l)
        for l in glob.glob(LORA_PATH.joinpath("**/*").as_posix(), recursive=True)
    ]
    # Get hashes
    valid_loras = [
        lf
        for lf in lora_paths
        if lf.suffix in {".safetensors", ".ckpt", ".pt"} and lf.is_file()
    ]
    return valid_loras


def _get_lyco():
    """
    Write a list of all LyCORIS/LOHA from https://github.com/KohakuBlueleaf/a1111-sd-webui-lycoris
    Fallback method for when the built-in Lora.networks module is not available.
    """
    # Get a list of all LyCORIS in the folder
    lyco_paths = [
        Path(ly)
        for ly in glob.glob(LYCO_PATH.joinpath("**/*").as_posix(), recursive=True)
    ]
    # Get hashes
    valid_lycos = [
        lyf
        for lyf in lyco_paths
        if lyf.suffix in {".safetensors", ".ckpt", ".pt"} and lyf.is_file()
    ]
    return valid_lycos


# Attempt to use the built-in Lora.networks Lora/LyCORIS models lists.
try:
    import sys

    from modules import extensions

    sys.path.append(Path(extensions.extensions_builtin_dir).joinpath("Lora").as_posix())
    import lora  # pyright: ignore [reportMissingImports]

    def _get_lora():
        return [
            Path(model.filename).absolute()
            for model in lora.available_loras.values()
            if Path(model.filename).absolute().is_relative_to(LORA_PATH)
        ]

    def _get_lyco():
        return [
            Path(model.filename).absolute()
            for model in lora.available_loras.values()
            if Path(model.filename).absolute().is_relative_to(LYCO_PATH)
        ]
except Exception as e:
    pass
    # no need to report
    # print(f'Exception setting-up performant fetchers: {e}')


def is_visible(p: Path) -> bool:
    if getattr(shared.opts, "extra_networks_hidden_models", "When searched") != "Never":
        return True
    for part in p.parts:
        if part.startswith('.'):
            return False
    return True


def get_lora():
    """Write a list of all lora"""
    global model_keyword_installed

    valid_loras = _get_lora()
    loras_with_hash = []
    for l in valid_loras:
        if not l.exists() or not l.is_file() or not is_visible(l):
            continue
        name = l.relative_to(LORA_PATH).as_posix()
        if model_keyword_installed:
            hash = get_lora_simple_hash(l)
        else:
            hash = ""
        loras_with_hash.append((l, name, hash))
    # Sort
    return sort_models(loras_with_hash)


def get_lyco():
    """Write a list of all LyCORIS/LOHA from https://github.com/KohakuBlueleaf/a1111-sd-webui-lycoris"""
    valid_lycos = _get_lyco()
    lycos_with_hash = []
    for ly in valid_lycos:
        if not ly.exists() or not ly.is_file() or not is_visible(ly):
            continue
        name = ly.relative_to(LYCO_PATH).as_posix()
        if model_keyword_installed:
            hash = get_lora_simple_hash(ly)
        else:
            hash = ""
        lycos_with_hash.append((ly, name, hash))
    # Sort
    return sort_models(lycos_with_hash)


def get_style_names():
    try:
        style_names: list[str] = shared.prompt_styles.styles.keys()
        style_names = sorted(style_names, key=len, reverse=True)
        return style_names
    except Exception:
        return None
def write_tag_base_path():
    """Writes the tag base path to a fixed location temporary file"""
@@ -187,19 +421,19 @@ def write_to_temp_file(name, data):
csv_files = []
csv_files_withnone = []


def update_tag_files(*args, **kwargs):
    """Returns a list of all potential tag files"""
    global csv_files, csv_files_withnone
    files = [str(t.relative_to(TAGS_PATH)) for t in TAGS_PATH.glob("*.csv") if t.is_file()]
    csv_files = files
    csv_files_withnone = ["None"] + files


json_files = []
json_files_withnone = []


def update_json_files(*args, **kwargs):
    """Returns a list of all potential json files"""
    global json_files, json_files_withnone
    files = [str(j.relative_to(TAGS_PATH)) for j in TAGS_PATH.glob("*.json") if j.is_file()]
    json_files = files
    json_files_withnone = ["None"] + files
@@ -221,10 +455,12 @@ if not TEMP_PATH.exists():
# even if no wildcards or embeddings are found
write_to_temp_file('wc.txt', [])
write_to_temp_file('wce.txt', [])
write_to_temp_file('wc_yaml.json', [])
write_to_temp_file('umi_tags.txt', [])
write_to_temp_file('hyp.txt', [])
write_to_temp_file('lora.txt', [])
write_to_temp_file('lyco.txt', [])
write_to_temp_file('styles.txt', [])

# Only reload embeddings if the file doesn't exist, since they are already re-written on model load
if not TEMP_PATH.joinpath("emb.txt").exists():
    write_to_temp_file('emb.txt', [])
@@ -234,28 +470,59 @@ if EMB_PATH.exists():
# Get embeddings after the model loaded callback
script_callbacks.on_model_loaded(get_embeddings)


def refresh_embeddings(force: bool, *args, **kwargs):
    try:
        # Fix for SD.Next infinite refresh loop due to gradio not updating after model load on demand.
        # This will just skip embedding loading if no model is loaded yet (or there really are no embeddings).
        # Try catch is just for safety in case sd_hijack access fails for some reason.
        embed_db = get_embed_db()
        if embed_db is None:
            return

        loaded = embed_db.word_embeddings
        skipped = embed_db.skipped_embeddings

        if len((loaded | skipped)) > 0:
            load_textual_inversion_embeddings(force_reload=force)
            get_embeddings(None)
    except Exception:
        pass


def refresh_temp_files(*args, **kwargs):
    global WILDCARD_EXT_PATHS
    skip_wildcard_refresh = getattr(shared.opts, "tac_skipWildcardRefresh", False)
    if not skip_wildcard_refresh:
        WILDCARD_EXT_PATHS = find_ext_wildcard_paths()
    write_temp_files(skip_wildcard_refresh)
    force_embed_refresh = getattr(shared.opts, "tac_forceRefreshEmbeddings", False)
    refresh_embeddings(force=force_embed_refresh)


def write_style_names(*args, **kwargs):
    styles = get_style_names()
    if styles:
        write_to_temp_file('styles.txt', styles)


def write_temp_files(skip_wildcard_refresh = False):
    # Write wildcards to wc.txt if found
    if WILDCARD_PATH.exists() and not skip_wildcard_refresh:
        try:
            # Attempt to create a relative path, but fall back to an absolute path if not possible
            relative_wildcard_path = WILDCARD_PATH.relative_to(FILE_DIR).as_posix()
        except ValueError:
            # If the paths are not relative, use the absolute path
            relative_wildcard_path = WILDCARD_PATH.as_posix()

        wildcards = [relative_wildcard_path] + get_wildcards()

        if wildcards:
            write_to_temp_file('wc.txt', wildcards)

    # Write extension wildcards to wce.txt if found
    if WILDCARD_EXT_PATHS is not None and not skip_wildcard_refresh:
        wildcards_ext = get_ext_wildcards()
        if wildcards_ext:
            write_to_temp_file('wce.txt', wildcards_ext)

        # Write yaml extension wildcards to umi_tags.txt and wc_yaml.json if found
        get_yaml_wildcards()

    if HYP_PATH is not None and HYP_PATH.exists():
        hypernets = get_hypernetworks()
        if hypernets:
            write_to_temp_file('hyp.txt', hypernets)
@@ -263,19 +530,25 @@ def write_temp_files():
    if model_keyword_installed:
        load_hash_cache()

    lora_exists = LORA_PATH is not None and LORA_PATH.exists()
    if lora_exists:
        lora = get_lora()
        if lora:
            write_to_temp_file('lora.txt', lora)

    lyco_exists = LYCO_PATH is not None and LYCO_PATH.exists()
    if lyco_exists and not (lora_exists and LYCO_PATH.samefile(LORA_PATH)):
        lyco = get_lyco()
        if lyco:
            write_to_temp_file('lyco.txt', lyco)
    elif lyco_exists and lora_exists and LYCO_PATH.samefile(LORA_PATH):
        print("tag_autocomplete_helper: LyCORIS path is the same as LORA path, skipping")

    if model_keyword_installed:
        update_hash_cache()

    if shared.prompt_styles is not None:
        write_style_names()


write_temp_files()
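These temp files are the handoff point to the frontend loaders earlier in this diff; a sketch of the consuming side (file names as written above, helpers as seen in this diff):

// Sketch of the JS consumers for the files written above, using the
// helpers seen earlier in this diff (TAC.Utils.loadCSV / readFile).
async function loadTempFiles() {
    const base = TAC.Globals.tagBasePath;
    const wildcardFiles = await TAC.Utils.loadCSV(`${base}/temp/wc.txt`);
    const yamlWildcards = await TAC.Utils.readFile(`${base}/temp/wc_yaml.json`, true); // true = parse as JSON, as used above
    const styles = await TAC.Utils.loadCSV(`${base}/temp/styles.txt`);
    return { wildcardFiles, yamlWildcards, styles };
}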
@@ -295,6 +568,13 @@ def on_ui_settings():
        return self
    shared.OptionInfo.needs_restart = needs_restart

    # Dictionary of function options and their explanations
    frequency_sort_functions = {
        "Logarithmic (weak)": "Will respect the base order and slightly prefer often used tags",
        "Logarithmic (strong)": "Same as Logarithmic (weak), but with a stronger bias",
        "Usage first": "Will list used tags by frequency before all others",
    }

    tac_options = {
        # Main tag file
        "tac_tagFile": shared.OptionInfo("danbooru.csv", "Tag filename", gr.Dropdown, lambda: {"choices": csv_files_withnone}, refresh=update_tag_files),
@@ -313,18 +593,36 @@ def on_ui_settings():
"tac_resultStepLength": shared.OptionInfo(100, "How many results to load at once"),
"tac_delayTime": shared.OptionInfo(100, "Time in ms to wait before triggering completion again").needs_restart(),
"tac_useWildcards": shared.OptionInfo(True, "Search for wildcards"),
"tac_sortWildcardResults": shared.OptionInfo(True, "Sort wildcard file contents alphabetically").info("If your wildcard files have a specific custom order, disable this to keep it"),
"tac_wildcardExclusionList": shared.OptionInfo("", "Wildcard folder exclusion list").info("Add folder names that shouldn't be searched for wildcards, separated by comma.").needs_restart(),
"tac_skipWildcardRefresh": shared.OptionInfo(False, "Don't re-scan for wildcard files when pressing the extra networks refresh button").info("Useful to prevent hanging if you use a very large wildcard collection."),
"tac_useEmbeddings": shared.OptionInfo(True, "Search for embeddings"),
"tac_forceRefreshEmbeddings": shared.OptionInfo(False, "Force refresh embeddings when pressing the extra networks refresh button").info("Turn this on if you have issues with new embeddings not registering correctly in TAC. Warning: Seems to cause reloading issues in gradio for some users."),
"tac_includeEmbeddingsInNormalResults": shared.OptionInfo(False, "Include embeddings in normal tag results").info("The 'JumpTo...' keybinds (End & Home key by default) will select the first non-embedding result of their direction on the first press for quick navigation in longer lists."),
"tac_useHypernetworks": shared.OptionInfo(True, "Search for hypernetworks"),
"tac_useLoras": shared.OptionInfo(True, "Search for Loras"),
"tac_useLycos": shared.OptionInfo(True, "Search for LyCORIS/LoHa"),
"tac_useLoraPrefixForLycos": shared.OptionInfo(True, "Use the '<lora:' prefix instead of '<lyco:' for models in the LyCORIS folder").info("The lyco prefix is included for backwards compatibility and not used anymore by default. Disable this if you are on an old webui version without built-in lyco support."),
"tac_showWikiLinks": shared.OptionInfo(False, "Show '?' next to tags, linking to its Danbooru or e621 wiki page").info("Warning: This is an external site and very likely contains NSFW examples!"),
"tac_showExtraNetworkPreviews": shared.OptionInfo(True, "Show preview thumbnails for extra networks if available"),
"tac_modelSortOrder": shared.OptionInfo("Name", "Model sort order", gr.Dropdown, lambda: {"choices": list(sort_criteria.keys())}).info("Order for extra network models and wildcards in dropdown"),
"tac_useStyleVars": shared.OptionInfo(False, "Search for webui style names").info("Suggests style names from the webui dropdown with '$'. Currently requires a secondary extension like <a href=\"https://github.com/SirVeggie/extension-style-vars\" target=\"_blank\">style-vars</a> to actually apply the styles before generating."),
# Frequency sorting settings
"tac_frequencySort": shared.OptionInfo(True, "Locally record tag usage and sort frequent tags higher").info("Will also work for extra networks, keeping the specified base order"),
"tac_frequencyFunction": shared.OptionInfo("Logarithmic (weak)", "Function to use for frequency sorting", gr.Dropdown, lambda: {"choices": list(frequency_sort_functions.keys())}).info("; ".join([f'<b>{key}</b>: {val}' for key, val in frequency_sort_functions.items()])),
"tac_frequencyMinCount": shared.OptionInfo(3, "Minimum number of uses for a tag to be considered frequent").info("Tags with less uses than this will not be sorted higher, even if the sorting function would normally result in a higher position."),
"tac_frequencyMaxAge": shared.OptionInfo(30, "Maximum days since last use for a tag to be considered frequent").info("Similar to the above, tags that haven't been used in this many days will not be sorted higher. Set to 0 to disable."),
"tac_frequencyRecommendCap": shared.OptionInfo(10, "Maximum number of recommended tags").info("Limits the maximum number of recommended tags to not drown out normal results. Set to 0 to disable."),
"tac_frequencyIncludeAlias": shared.OptionInfo(False, "Frequency sorting matches aliases for frequent tags").info("Tag frequency will be increased for the main tag even if an alias is used for completion. This option can be used to override the default behavior of alias results being ignored for frequency sorting."),
# Insertion related settings
"tac_replaceUnderscores": shared.OptionInfo(True, "Replace underscores with spaces on insertion"),
"tac_undersocreReplacementExclusionList": shared.OptionInfo("0_0,(o)_(o),+_+,+_-,._.,<o>_<o>,<|>_<|>,=_=,>_<,3_3,6_9,>_o,@_@,^_^,o_o,u_u,x_x,|_|,||_||", "Underscore replacement exclusion list").info("Add tags that shouldn't have underscores replaced with spaces, separated by comma."),
"tac_escapeParentheses": shared.OptionInfo(True, "Escape parentheses on insertion"),
"tac_appendComma": shared.OptionInfo(True, "Append comma on tag autocompletion"),
"tac_appendSpace": shared.OptionInfo(True, "Append space on tag autocompletion").info("will append after comma if the above is enabled"),
"tac_alwaysSpaceAtEnd": shared.OptionInfo(True, "Always append space if inserting at the end of the textbox").info("takes precedence over the regular space setting for that position"),
"tac_modelKeywordCompletion": shared.OptionInfo("Never", "Try to add known trigger words for LORA/LyCO models", gr.Dropdown, lambda: {"interactive": model_keyword_installed, "choices": ["Never","Only user list","Always"]}).info("Requires the <a href=\"https://github.com/mix1009/model-keyword\" target=\"_blank\">model-keyword</a> extension to be installed, but will work with it disabled.").needs_restart(),
"tac_modelKeywordCompletion": shared.OptionInfo("Never", "Try to add known trigger words for LORA/LyCO models", gr.Dropdown, lambda: {"choices": ["Never","Only user list","Always"]}).info("Will use & prefer the native activation keywords settable in the extra networks UI. Other functionality requires the <a href=\"https://github.com/mix1009/model-keyword\" target=\"_blank\">model-keyword</a> extension to be installed, but will work with it disabled.").needs_restart(),
"tac_modelKeywordLocation": shared.OptionInfo("Start of prompt", "Where to insert the trigger keyword", gr.Dropdown, lambda: {"choices": ["Start of prompt","End of prompt","Before LORA/LyCO"]}).info("Only relevant if the above option is enabled"),
"tac_wildcardCompletionMode": shared.OptionInfo("To next folder level", "How to complete nested wildcard paths", gr.Dropdown, lambda: {"choices": ["To next folder level","To first difference","Always fully"]}).info("e.g. \"hair/colours/light/...\""),
# Alias settings
"tac_alias.searchByAlias": shared.OptionInfo(True, "Search by alias"),
@@ -381,6 +679,37 @@ def on_ui_settings():
"6": ["red", "maroon"],
"7": ["whitesmoke", "black"],
"8": ["seagreen", "darkseagreen"]
},
"derpibooru": {
"-1": ["red", "maroon"],
"0": ["#60d160", "#3d9d3d"],
"1": ["#fff956", "#918e2e"],
"3": ["#fd9961", "#a14c2e"],
"4": ["#cf5bbe", "#6c1e6c"],
"5": ["#3c8ad9", "#1e5e93"],
"6": ["#a6a6a6", "#555555"],
"7": ["#47abc1", "#1f6c7c"],
"8": ["#7871d0", "#392f7d"],
"9": ["#df3647", "#8e1c2b"],
"10": ["#c98f2b", "#7b470e"],
"11": ["#e87ebe", "#a83583"]
},
"danbooru_e621_merged": {
"-1": ["red", "maroon"],
"0": ["lightblue", "dodgerblue"],
"1": ["indianred", "firebrick"],
"3": ["violet", "darkorchid"],
"4": ["lightgreen", "darkgreen"],
"5": ["orange", "darkorange"],
"6": ["red", "maroon"],
"7": ["lightblue", "dodgerblue"],
"8": ["gold", "goldenrod"],
"9": ["gold", "goldenrod"],
"10": ["violet", "darkorchid"],
"11": ["lightgreen", "darkgreen"],
"12": ["tomato", "darksalmon"],
"14": ["whitesmoke", "black"],
"15": ["seagreen", "darkseagreen"]
}
}\
"""
@@ -395,5 +724,188 @@ def on_ui_settings():
shared.opts.add_option("tac_colormap", shared.OptionInfo(colorDefault, colorLabel, gr.Textbox, section=TAC_SECTION))
shared.opts.add_option("tac_refreshTempFiles", shared.OptionInfo("Refresh TAC temp files", "Refresh internal temp files", gr.HTML, {}, refresh=refresh_temp_files, section=TAC_SECTION))
script_callbacks.on_ui_settings(on_ui_settings)
def get_style_mtime():
try:
style_file = getattr(shared, "styles_filename", "styles.csv")
# Newer webui versions can return a list here, use the first entry in that case
if isinstance(style_file, list):
style_file = style_file[0]
style_file = Path(FILE_DIR).joinpath(style_file)
if style_file.exists():
return style_file.stat().st_mtime
except Exception:
pass
return None
last_style_mtime = get_style_mtime()
def api_tac(_: gr.Blocks, app: FastAPI):
async def get_json_info(base_path: Path, filename: str = None):
if base_path is None or (not base_path.exists()):
return Response(status_code=404)
try:
json_candidates = glob.glob(base_path.as_posix() + f"/**/{glob.escape(filename)}.json", recursive=True)
if json_candidates and Path(json_candidates[0]).is_file():
return FileResponse(json_candidates[0])
except Exception as e:
# The exception object itself isn't JSON serializable, return its message
return JSONResponse({"error": str(e)}, status_code=500)
# No matching json file found
return Response(status_code=404)
async def get_preview_thumbnail(base_path: Path, filename: str = None, blob: bool = False):
if base_path is None or (not base_path.exists()):
return Response(status_code=404)
try:
img_glob = glob.glob(base_path.as_posix() + f"/**/{glob.escape(filename)}.*", recursive=True)
img_candidates = [img for img in img_glob if Path(img).suffix in [".png", ".jpg", ".jpeg", ".webp", ".gif"] and Path(img).is_file()]
if img_candidates:
if blob:
return FileResponse(img_candidates[0])
else:
return JSONResponse({"url": urllib.parse.quote(img_candidates[0])})
except Exception as e:
# The exception object itself isn't JSON serializable, return its message
return JSONResponse({"error": str(e)}, status_code=500)
# No matching preview found
return Response(status_code=404)
@app.post("/tacapi/v1/refresh-temp-files")
async def api_refresh_temp_files():
await sleep(0) # might help with refresh blocking gradio
refresh_temp_files()
@app.post("/tacapi/v1/refresh-embeddings")
async def api_refresh_embeddings():
refresh_embeddings(force=False)
@app.get("/tacapi/v1/lora-info/{lora_name}")
async def get_lora_info(lora_name):
return await get_json_info(LORA_PATH, lora_name)
@app.get("/tacapi/v1/lyco-info/{lyco_name}")
async def get_lyco_info(lyco_name):
return await get_json_info(LYCO_PATH, lyco_name)
@app.get("/tacapi/v1/lora-cached-hash/{lora_name}")
async def get_lora_cached_hash(lora_name: str):
path_glob = glob.glob(LORA_PATH.as_posix() + f"/**/{glob.escape(lora_name)}.*", recursive=True)
paths = [lora for lora in path_glob if Path(lora).suffix in [".safetensors", ".ckpt", ".pt"] and Path(lora).is_file()]
if paths:
path = paths[0]
# Named to avoid shadowing the hash() builtin
sha256 = hashes.sha256_from_cache(path, f"lora/{lora_name}", path.endswith(".safetensors"))
if sha256 is not None:
return sha256
return None
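These GET routes can be exercised with any HTTP client once the webui is running. A hedged sketch using requests; the address and the model name "myLora" are placeholder assumptions:

    import requests

    BASE = "http://127.0.0.1:7860"  # assumed default webui address

    # Returns the model's .json metadata file, or 404 if none exists
    info = requests.get(f"{BASE}/tacapi/v1/lora-info/myLora")

    # Returns the cached sha256, or null if the hash isn't cached yet
    digest = requests.get(f"{BASE}/tacapi/v1/lora-cached-hash/myLora")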
def get_path_for_type(type):
if type == "lora":
return LORA_PATH
elif type == "lyco":
return LYCO_PATH
elif type == "hypernetwork":
return HYP_PATH
elif type == "embedding":
return EMB_PATH
else:
return None
@app.get("/tacapi/v1/thumb-preview/{filename}")
async def get_thumb_preview(filename, type):
return await get_preview_thumbnail(get_path_for_type(type), filename, False)
@app.get("/tacapi/v1/thumb-preview-blob/{filename}")
async def get_thumb_preview_blob(filename, type):
return await get_preview_thumbnail(get_path_for_type(type), filename, True)
@app.get("/tacapi/v1/wildcard-contents")
async def get_wildcard_contents(basepath: str, filename: str):
if basepath is None or basepath == "":
return Response(status_code=404)
base = Path(basepath)
if base is None or (not base.exists()):
return Response(status_code=404)
try:
wildcard_path = base.joinpath(filename)
if wildcard_path.exists() and wildcard_path.is_file():
return FileResponse(wildcard_path)
else:
return Response(status_code=404)
except Exception as e:
return JSONResponse({"error": e}, status_code=500)
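Unlike the routes above, this endpoint takes its arguments as query parameters rather than path segments. A sketch reusing the requests setup from the example above (the paths are placeholders):

    r = requests.get(
        f"{BASE}/tacapi/v1/wildcard-contents",
        params={"basepath": "/path/to/wildcards", "filename": "colors.txt"},
    )
    # 200 with the file contents, or 404 if the path doesn't resolve to a file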
@app.get("/tacapi/v1/refresh-styles-if-changed")
async def refresh_styles_if_changed():
global last_style_mtime
mtime = get_style_mtime()
if mtime is not None and (last_style_mtime is None or mtime > last_style_mtime):
last_style_mtime = mtime
# Update temp file
if shared.prompt_styles is not None:
write_style_names()
return Response(status_code=200) # Success
else:
return Response(status_code=304) # Not modified
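The status code carries the protocol here: 200 means the style names temp file was rewritten and should be re-fetched, 304 means nothing changed since the last check. Client-side, that reduces to a sketch like this (reload_style_names is a hypothetical refresh hook):

    r = requests.get(f"{BASE}/tacapi/v1/refresh-styles-if-changed")
    if r.status_code == 200:
        reload_style_names()  # hypothetical client-side refresh
    # a 304 means the cached style list can be kept as-is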
def db_request(func, get=False):
if db is not None:
try:
if get:
ret = func()
# Convert list results (tag count tuples) into JSON friendly dicts
if isinstance(ret, list):
ret = [{"name": t[0], "type": t[1], "count": t[2], "lastUseDate": t[3]} for t in ret]
return JSONResponse({"result": ret})
else:
func()
except sqlite3.Error as e:
return JSONResponse({"error": str(e)}, status_code=500)
else:
return JSONResponse({"error": "Database not initialized"}, status_code=500)
@app.post("/tacapi/v1/increase-use-count")
async def increase_use_count(tagname: str, ttype: int, neg: bool):
db_request(lambda: db.increase_tag_count(tagname, ttype, neg))
@app.get("/tacapi/v1/get-use-count")
async def get_use_count(tagname: str, ttype: int, neg: bool):
return db_request(lambda: db.get_tag_count(tagname, ttype, neg), get=True)
# Small dataholder class
class UseCountListRequest(BaseModel):
tagNames: list[str]
tagTypes: list[int]
neg: bool = False
# Semantically weird to use post here, but it's required for the body on js side
@app.post("/tacapi/v1/get-use-count-list")
async def get_use_count_list(body: UseCountListRequest):
# If a date limit is set > 0, pass it to the db
date_limit = getattr(shared.opts, "tac_frequencyMaxAge", 30)
date_limit = date_limit if date_limit > 0 else None
if db:
count_list = list(db.get_tag_counts(body.tagNames, body.tagTypes, body.neg, date_limit))
else:
count_list = None
# If a limit is set, return at max the top n results by count
if count_list:
limit = int(min(getattr(shared.opts, "tac_frequencyRecommendCap", 10), len(count_list)))
# Sort by count and return the top n
if limit > 0:
count_list = sorted(count_list, key=lambda x: x[2], reverse=True)[:limit]
return db_request(lambda: count_list, get=True)
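The POST body maps directly onto the UseCountListRequest model above, with tagNames and tagTypes as parallel lists. A sketch (the tag names and type ids are placeholders):

    body = {
        "tagNames": ["example_tag", "another_tag"],
        "tagTypes": [0, 0],  # parallel to tagNames
        "neg": False,
    }
    r = requests.post(f"{BASE}/tacapi/v1/get-use-count-list", json=body)
    # → {"result": [{"name": ..., "type": ..., "count": ..., "lastUseDate": ...}, ...]}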
@app.put("/tacapi/v1/reset-use-count")
async def reset_use_count(tagname: str, ttype: int, pos: bool, neg: bool):
db_request(lambda: db.reset_tag_count(tagname, ttype, pos, neg))
@app.get("/tacapi/v1/get-all-use-counts")
async def get_all_tag_counts():
return db_request(lambda: db.get_all_tags(), get=True)
script_callbacks.on_app_started(api_tac)

190
scripts/tag_frequency_db.py Normal file

@@ -0,0 +1,190 @@
import sqlite3
from contextlib import contextmanager
from scripts.shared_paths import TAGS_PATH
db_file = TAGS_PATH.joinpath("tag_frequency.db")
timeout = 30
db_ver = 1
@contextmanager
def transaction(db=db_file):
"""Context manager for database transactions.
Ensures that the connection is properly closed after the transaction.
"""
conn = None
try:
conn = sqlite3.connect(db, timeout=timeout)
conn.isolation_level = None
cursor = conn.cursor()
cursor.execute("BEGIN")
yield cursor
cursor.execute("COMMIT")
except sqlite3.Error as e:
print("Tag Autocomplete: Frequency database error:", e)
finally:
# conn stays None if connect() itself raised, nothing to close then
if conn:
conn.close()
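Callers then use it exactly like the methods below do: the context manager yields a cursor inside an explicit BEGIN/COMMIT and closes the connection afterwards. A minimal sketch:

    with transaction() as cursor:
        cursor.execute("SELECT value FROM db_data WHERE key = 'version'")
        row = cursor.fetchone()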
class TagFrequencyDb:
"""Class containing creation and interaction methods for the tag frequency database"""
def __init__(self) -> None:
self.version = self.__check()
def __check(self):
if not db_file.exists():
print("Tag Autocomplete: Creating frequency database")
with transaction() as cursor:
self.__create_db(cursor)
self.__update_db_data(cursor, "version", db_ver)
print("Tag Autocomplete: Database successfully created")
return self.__get_version()
def __create_db(self, cursor: sqlite3.Cursor):
cursor.execute(
"""
CREATE TABLE IF NOT EXISTS db_data (
key TEXT PRIMARY KEY,
value TEXT
)
"""
)
cursor.execute(
"""
CREATE TABLE IF NOT EXISTS tag_frequency (
name TEXT NOT NULL,
type INT NOT NULL,
count_pos INT,
count_neg INT,
last_used TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (name, type)
)
"""
)
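Since (name, type) is a composite primary key, the same tag text is tracked separately per tag type, and the INSERT OR REPLACE statements used below overwrite the whole row for that pair rather than appending. A sketch of that behavior (the tag name and type id are placeholders):

    with transaction() as cursor:
        # Running this twice for the same (name, type) leaves a single row
        cursor.execute(
            "INSERT OR REPLACE INTO tag_frequency (name, type, count_pos, count_neg) VALUES (?, ?, ?, ?)",
            ("example_tag", 0, 1, 0),
        )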
def __update_db_data(self, cursor: sqlite3.Cursor, key, value):
cursor.execute(
"""
INSERT OR REPLACE
INTO db_data (key, value)
VALUES (?, ?)
""",
(key, value),
)
def __get_version(self):
db_version = None
with transaction() as cursor:
cursor.execute(
"""
SELECT value
FROM db_data
WHERE key = 'version'
"""
)
db_version = cursor.fetchone()
return db_version[0] if db_version else 0
def get_all_tags(self):
with transaction() as cursor:
cursor.execute(
f"""
SELECT name, type, count_pos, count_neg, last_used
FROM tag_frequency
WHERE count_pos > 0 OR count_neg > 0
ORDER BY count_pos + count_neg DESC
"""
)
tags = cursor.fetchall()
return tags
def get_tag_count(self, tag, ttype, negative=False):
count_str = "count_neg" if negative else "count_pos"
with transaction() as cursor:
cursor.execute(
f"""
SELECT {count_str}, last_used
FROM tag_frequency
WHERE name = ? AND type = ?
""",
(tag, ttype),
)
tag_count = cursor.fetchone()
if tag_count:
return tag_count[0], tag_count[1]
else:
return 0, None
def get_tag_counts(self, tags: list[str], ttypes: list[int], negative=False, date_limit=None):
count_str = "count_neg" if negative else "count_pos"
with transaction() as cursor:
for tag, ttype in zip(tags, ttypes):
if date_limit is not None:
cursor.execute(
f"""
SELECT {count_str}, last_used
FROM tag_frequency
WHERE name = ? AND type = ?
AND last_used > datetime('now', '-' || ? || ' days')
""",
(tag, ttype, date_limit),
)
else:
cursor.execute(
f"""
SELECT {count_str}, last_used
FROM tag_frequency
WHERE name = ? AND type = ?
""",
(tag, ttype),
)
tag_count = cursor.fetchone()
if tag_count:
yield (tag, ttype, tag_count[0], tag_count[1])
else:
yield (tag, ttype, 0, None)
def increase_tag_count(self, tag, ttype, negative=False):
pos_count = self.get_tag_count(tag, ttype, False)[0]
neg_count = self.get_tag_count(tag, ttype, True)[0]
if negative:
neg_count += 1
else:
pos_count += 1
with transaction() as cursor:
cursor.execute(
f"""
INSERT OR REPLACE
INTO tag_frequency (name, type, count_pos, count_neg)
VALUES (?, ?, ?, ?)
""",
(tag, ttype, pos_count, neg_count),
)
def reset_tag_count(self, tag, ttype, positive=True, negative=False):
if positive and negative:
set_str = "count_pos = 0, count_neg = 0"
elif positive:
set_str = "count_pos = 0"
elif negative:
set_str = "count_neg = 0"
with transaction() as cursor:
cursor.execute(
f"""
UPDATE tag_frequency
SET {set_str}
WHERE name = ? AND type = ?
""",
(tag, ttype),
)
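Taken together, the API layer above drives this class roughly as follows; a sketch of the lifecycle (the tag name and type id are placeholders):

    db = TagFrequencyDb()
    print(db.version)  # note: the value column is TEXT, so this is the string "1"

    db.increase_tag_count("example_tag", 0)             # positive prompt use
    count, last_used = db.get_tag_count("example_tag", 0)
    rows = list(db.get_tag_counts(["example_tag"], [0], date_limit=30))
    db.reset_tag_count("example_tag", 0, positive=True, negative=False)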

113301
tags/EnglishDictionary.csv Normal file

File diff suppressed because it is too large


221787
tags/danbooru_e621_merged.csv Normal file

File diff suppressed because one or more lines are too long


@@ -28,5 +28,17 @@
"terms": "Water, Magic, Fancy",
"content": "(extremely detailed CG unity 8k wallpaper), (masterpiece), (best quality), (ultra-detailed), (best illustration),(best shadow), (an extremely delicate and beautiful), classic, dynamic angle, floating, fine detail, Depth of field, classic, (painting), (sketch), (bloom), (shine), glinting stars,\n\na girl, solo, bare shoulders, flat chest, diamond and glaring eyes, beautiful detailed cold face, very long blue and sliver hair, floating black feathers, wavy hair, extremely delicate and beautiful girls, beautiful detailed eyes, glowing eyes,\n\nriver, (forest),palace, (fairyland,feather,flowers, nature),(sunlight),Hazy fog, mist",
"color": 5
},
{
"name": "Pony-Positive",
"terms": "Pony,Score,Positive,Quality",
"content": "score_9, score_8_up, score_7_up, score_6_up, source_anime, source_furry, source_pony, source_cartoon",
"color": 1
},
{
"name": "Pony-Negative",
"terms": "Pony,Score,Negative,Quality",
"content": "score_1, score_2, score_3, score_4, score_5, source_anime, source_furry, source_pony, source_cartoon",
"color": 3
}
]

110665
tags/derpibooru.csv Normal file

File diff suppressed because it is too large

200358
tags/e621.csv

File diff suppressed because one or more lines are too long

22419
tags/e621_sfw.csv Normal file

File diff suppressed because one or more lines are too long
