Mirror of https://github.com/DominikDoom/a1111-sd-webui-tagcomplete.git, synced 2026-01-26 19:19:57 +00:00
Compare commits: 2.10.6...refactor-c (178 commits)
Commit table (Author | SHA1 | Date): only the SHA1 column survived the mirror; the listed commits run from cd7ec48102 (newest) down to 37b5dca66e.
.gitignore (vendored)
```diff
@@ -1,2 +1,3 @@
 tags/temp/
 __pycache__/
+tags/tag_frequency.db
```
```diff
@@ -23,11 +23,12 @@ Booru style tag autocompletion for the AUTOMATIC1111 Stable Diffusion WebUI
 # 📄 Description
 
 Tag Autocomplete is an extension for the popular [AUTOMATIC1111 web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) for Stable Diffusion.
-You can install it using the inbuilt available extensions list, clone the files manually as described [below](#-installation), or use a pre-packaged version from [Releases](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/releases).
 
 It displays autocompletion hints for recognized tags from "image booru" boards such as Danbooru, which are primarily used for browsing Anime-style illustrations.
-Since some Stable Diffusion models were trained using this information, for example [Waifu Diffusion](https://github.com/harubaru/waifu-diffusion) and many of the NAI-descendant models or merges, using exact tags in prompts can often improve composition and consistency.
+Since most custom Stable Diffusion models were trained using this information or merged with ones that did, using exact tags in prompts can often improve composition and consistency, even if the model itself has a photorealistic style.
 
+You can install it using the inbuilt available extensions list, clone the files manually as described [below](#-installation), or use a pre-packaged version from [Releases](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/releases).
 Disclaimer: The default tag lists contain NSFW terms, please use them responsibly.
 
 <br/>
```
```diff
@@ -486,6 +487,7 @@ Example with Chinese translation:
 ## List of translations
 - [🇨🇳 Chinese tags](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/discussions/23) by @HalfMAI, using machine translation and manual correction for the most common tags (uses legacy format)
 - [🇨🇳 Chinese tags](https://github.com/sgmklp/tag-for-autocompletion-with-translation) by @sgmklp, smaller set of manual translations based on https://github.com/zcyzcy88/TagTable
+- [🇯🇵 Japanese tags](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/discussions/265) by @applemango, both machine and human translations available
 
 > ### 🫵 I need your help!
 > Translations are a community effort. If you have translated a tag file or want to create one, please open a Pull Request or Issue so your link can be added here.
```
```diff
@@ -410,8 +410,9 @@ https://www.w3.org/TR/uievents-key/#named-key-attribute-value
 ![https://user-images.githubusercontent.com/34448969/200128020-10d9a8b2-cea6-4e3f-bcd2-8c40c8c73233.png](https://user-images.githubusercontent.com/34448969/200128020-10d9a8b2-cea6-4e3f-bcd2-8c40c8c73233.png)
 
 ## 翻訳リスト
-- [🇨🇳 Chinese tags](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/discussions/23) by @HalfMAI, 最も一般的なタグを機械翻訳と手作業で修正(レガシーフォーマットを使用)
-- [🇨🇳 Chinese tags](https://github.com/sgmklp/tag-for-autocompletion-with-translation) by @sgmklp, [こちら](https://github.com/zcyzcy88/TagTable)をベースにして、より小さくした手動での翻訳セット。
+- [🇨🇳 中国語訳](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/discussions/23) by @HalfMAI, 最も一般的なタグを機械翻訳と手作業で修正(レガシーフォーマットを使用)
+- [🇨🇳 中国語訳](https://github.com/sgmklp/tag-for-autocompletion-with-translation) by @sgmklp, [こちら](https://github.com/zcyzcy88/TagTable)をベースにして、より小さくした手動での翻訳セット。
+- [🇯🇵 日本語訳](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/discussions/265) by @applemango, 機械翻訳と人力翻訳の両方が利用可能。
 
 > ### 🫵 あなたの助けが必要です!
 > 翻訳はコミュニティの努力により支えられています。もしあなたがタグファイルを翻訳したことがある場合、または作成したい場合は、あなたの成果をここに追加できるように、Pull RequestまたはIssueを開いてください。
```
```diff
@@ -13,6 +13,12 @@
 你可以按照[以下方法](#installation)下载或拷贝文件,也可以使用[Releases](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/releases)中打包好的文件。
 
 ## 常见问题 & 已知缺陷:
+- 很多中国用户都报告过此扩展名和其他扩展名的 JavaScript 文件被阻止的问题。
+常见的罪魁祸首是 IDM / Internet Download Manager 浏览器插件,它似乎出于安全目的阻止了本地文件请求。
+如果您安装了 IDM,请确保在使用 webui 时禁用以下插件:
+
+![https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/assets/34448969/91b25f8f-39a5-4873-b9d5-838cd4892b46](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/assets/34448969/91b25f8f-39a5-4873-b9d5-838cd4892b46)
+
 - 当`replaceUnderscores`选项开启时, 脚本只会替换Tag的一部分如果Tag包含多个单词,比如将`atago (azur lane)`修改`atago`为`taihou`并使用自动补全时.会得到 `taihou (azur lane), lane)`的结果, 因为脚本没有把后面的部分认为成同一个Tag。
 
 ## 演示与截图
```
```diff
@@ -1,63 +1,204 @@
-// Core components
-var TAC_CFG = null;
-var tagBasePath = "";
-var modelKeywordPath = "";
+// Create our TAC namespace
+var TAC = TAC || {};
 
-// Tag completion data loaded from files
-var allTags = [];
-var translations = new Map();
-var extras = [];
-// Same for tag-likes
-var wildcardFiles = [];
-var wildcardExtFiles = [];
-var yamlWildcards = [];
-var umiWildcards = [];
-var embeddings = [];
-var hypernetworks = [];
-var loras = [];
-var lycos = [];
-var modelKeywordDict = new Map();
-var chants = [];
+/**
+ * @typedef {Object} TAC.CFG
+ * @property {string} tagFile - Tag filename
+ * @property {{ global: boolean, txt2img: boolean, img2img: boolean, negativePrompts: boolean, thirdParty: boolean, modelList: string, modelListMode: "Blacklist"|"Whitelist" }} activeIn - Settings for which parts of the UI the tag completion is active in.
+ * @property {boolean} slidingPopup - Move completion popup together with text cursor
+ * @property {number} maxResults - Maximum results
+ * @property {boolean} showAllResults - Show all results
+ * @property {number} resultStepLength - How many results to load at once
+ * @property {number} delayTime - Time in ms to wait before triggering completion again
+ * @property {boolean} useWildcards - Search for wildcards
+ * @property {boolean} sortWildcardResults - Sort wildcard file contents alphabetically
+ * @property {boolean} useEmbeddings - Search for embeddings
+ * @property {boolean} includeEmbeddingsInNormalResults - Include embeddings in normal tag results
+ * @property {boolean} useHypernetworks - Search for hypernetworks
+ * @property {boolean} useLoras - Search for Loras
+ * @property {boolean} useLycos - Search for LyCORIS/LoHa
+ * @property {boolean} useLoraPrefixForLycos - Use the '<lora:' prefix instead of '<lyco:' for models in the LyCORIS folder
+ * @property {boolean} showWikiLinks - Show '?' next to tags, linking to its Danbooru or e621 wiki page
+ * @property {boolean} showExtraNetworkPreviews - Show preview thumbnails for extra networks if available
+ * @property {string} modelSortOrder - Model sort order
+ * @property {boolean} frequencySort - Locally record tag usage and sort frequent tags higher
+ * @property {string} frequencyFunction - Function to use for frequency sorting
+ * @property {number} frequencyMinCount - Minimum number of uses for a tag to be considered frequent
+ * @property {number} frequencyMaxAge - Maximum days since last use for a tag to be considered frequent
+ * @property {number} frequencyRecommendCap - Maximum number of recommended tags
+ * @property {boolean} frequencyIncludeAlias - Frequency sorting matches aliases for frequent tags
+ * @property {boolean} useStyleVars - Search for webui style names
+ * @property {boolean} replaceUnderscores - Replace underscores with spaces on insertion
+ * @property {string} replaceUnderscoresExclusionList - Underscore replacement exclusion list
+ * @property {boolean} escapeParentheses - Escape parentheses on insertion
+ * @property {boolean} appendComma - Append comma on tag autocompletion
+ * @property {boolean} appendSpace - Append space on tag autocompletion
+ * @property {boolean} alwaysSpaceAtEnd - Always append space if inserting at the end of the textbox
+ * @property {string} wildcardCompletionMode - How to complete nested wildcard paths
+ * @property {string} modelKeywordCompletion - Try to add known trigger words for LORA/LyCO models
+ * @property {string} modelKeywordLocation - Where to insert the trigger keyword
+ * @property {string} wcWrap - Wrapper characters for wildcard tags.
+ * @property {{ searchByAlias: boolean, onlyShowAlias: boolean }} alias - Alias-related settings.
+ * @property {{ translationFile: string, oldFormat: boolean, searchByTranslation: boolean, liveTranslation: boolean }} translation - Translation-related settings.
+ * @property {{ extraFile: string, addMode: "Insert before"|"Insert after" }} extra - Extra file-related settings.
+ * @property {string} chantFile - Chant filename
+ * @property {number} extraNetworksDefaultMultiplier - Default multiplier for extra networks.
+ * @property {string} extraNetworksSeparator - Separator used for extra networks.
+ * @property {{ MoveUp: string, MoveDown: string, JumpUp: string, JumpDown: string, JumpToStart: string, JumpToEnd: string, ChooseSelected: string, ChooseFirstOrSelected: string, Close: string }} keymap - Custom key mappings for tag completion.
+ * @property {{ [filename: string]: { [category: string]: string[] } }} colorMap - Color mapping for tag categories.
+ */
+/** @type {TAC.CFG} */
+TAC.CFG = {
+    // Main tag file
+    tagFile: "",
+    // Active in settings
+    activeIn: {
+        global: true,
+        txt2img: true,
+        img2img: true,
+        negativePrompts: true,
+        thirdParty: true,
+        modelList: "",
+        modelListMode: "Blacklist",
+    },
+    // Results related settings
+    slidingPopup: true,
+    maxResults: 8,
+    showAllResults: false,
+    resultStepLength: 500,
+    delayTime: 100,
+    useWildcards: true,
+    sortWildcardResults: true,
+    useEmbeddings: true,
+    includeEmbeddingsInNormalResults: true,
+    useHypernetworks: true,
+    useLoras: true,
+    useLycos: true,
+    useLoraPrefixForLycos: true,
+    showWikiLinks: false,
+    showExtraNetworkPreviews: true,
+    modelSortOrder: "Name",
+    frequencySort: true,
+    frequencyFunction: "Logarithmic (weak)",
+    frequencyMinCount: 3,
+    frequencyMaxAge: 30,
+    frequencyRecommendCap: 10,
+    frequencyIncludeAlias: false,
+    useStyleVars: false,
+    // Insertion related settings
+    replaceUnderscores: true,
+    replaceUnderscoresExclusionList: "0_0,(o)_(o),+_+,+_-,._.,<o>_<o>,<|>_<|>,=_=,>_<,3_3,6_9,>_o,@_@,^_^,o_o,u_u,x_x,|_|,||_||",
+    escapeParentheses: true,
+    appendComma: true,
+    appendSpace: true,
+    alwaysSpaceAtEnd: true,
+    wildcardCompletionMode: "To next folder level",
+    modelKeywordCompletion: "Never",
+    modelKeywordLocation: "Start of prompt",
+    wcWrap: "__", // to support custom wrapper chars set by dp_parser
+    // Alias settings
+    alias: {
+        searchByAlias: true,
+        onlyShowAlias: false,
+    },
+    // Translation settings
+    translation: {
+        translationFile: "None",
+        oldFormat: false,
+        searchByTranslation: true,
+        liveTranslation: false,
+    },
+    // Extra file settings
+    extra: {
+        extraFile: "extra-quality-tags.csv",
+        addMode: "Insert before",
+    },
+    // Chant file settings
+    chantFile: "demo-chants.json",
+    // Settings not from tac but still used by the script
+    extraNetworksDefaultMultiplier: 1.0,
+    extraNetworksSeparator: ", ",
+    // Custom mapping settings
+    keymap: {
+        MoveUp: "ArrowUp",
+        MoveDown: "ArrowDown",
+        JumpUp: "PageUp",
+        JumpDown: "PageDown",
+        JumpToStart: "Home",
+        JumpToEnd: "End",
+        ChooseSelected: "Enter",
+        ChooseFirstOrSelected: "Tab",
+        Close: "Escape",
+    },
+    colorMap: {
+        filename: { category: ["light", "dark"] },
+    },
+};
 
-// Selected model info for black/whitelisting
-var currentModelHash = "";
-var currentModelName = "";
+TAC.Globals = new (function () {
+    // Core components
+    this.tagBasePath = "";
+    this.modelKeywordPath = "";
+    this.selfTrigger = false;
 
-// Current results
-var results = [];
-var resultCount = 0;
+    // Tag completion data loaded from files
+    this.allTags = [];
+    this.translations = new Map();
+    this.extras = [];
+    // Same for tag-likes
+    this.wildcardFiles = [];
+    this.wildcardExtFiles = [];
+    this.yamlWildcards = [];
+    this.umiWildcards = [];
+    this.embeddings = [];
+    this.hypernetworks = [];
+    this.loras = [];
+    this.lycos = [];
+    this.modelKeywordDict = new Map();
+    this.chants = [];
+    this.styleNames = [];
 
-// Relevant for parsing
-var previousTags = [];
-var tagword = "";
-var originalTagword = "";
-let hideBlocked = false;
+    // Selected model info for black/whitelisting
+    this.currentModelHash = "";
+    this.currentModelName = "";
 
-// Tag selection for keyboard navigation
-var selectedTag = null;
-var oldSelectedTag = null;
-var resultCountBeforeNormalTags = 0;
+    // Current results
+    this.results = [];
+    this.resultCount = 0;
 
-// Lora keyword undo/redo history
-var textBeforeKeywordInsertion = "";
-var textAfterKeywordInsertion = "";
-var lastEditWasKeywordInsertion = false;
-var keywordInsertionUndone = false;
+    // Relevant for parsing
+    this.previousTags = [];
+    this.tagword = "";
+    this.originalTagword = "";
+    this.hideBlocked = false;
 
-// UMI
-var umiPreviousTags = [];
+    // Tag selection for keyboard navigation
+    this.selectedTag = null;
+    this.oldSelectedTag = null;
+    this.resultCountBeforeNormalTags = 0;
+
+    // Lora keyword undo/redo history
+    this.textBeforeKeywordInsertion = "";
+    this.textAfterKeywordInsertion = "";
+    this.lastEditWasKeywordInsertion = false;
+    this.keywordInsertionUndone = false;
+
+    // UMI
+    this.umiPreviousTags = [];
+})();
 
 /// Extendability system:
 /// Provides "queues" for other files of the script (or really any js)
 /// to add functions to be called at certain points in the script.
 /// Similar to a callback system, but primitive.
-// Queues
-const QUEUE_AFTER_INSERT = [];
-const QUEUE_AFTER_SETUP = [];
-const QUEUE_FILE_LOAD = [];
-const QUEUE_AFTER_CONFIG_CHANGE = [];
-const QUEUE_SANITIZE = [];
+TAC.Ext = new (function () {
+    // Queues
+    this.QUEUE_AFTER_INSERT = [];
+    this.QUEUE_AFTER_SETUP = [];
+    this.QUEUE_FILE_LOAD = [];
+    this.QUEUE_AFTER_CONFIG_CHANGE = [];
+    this.QUEUE_SANITIZE = [];
 
-// List of parsers to try
-const PARSERS = [];
+    // List of parsers to try
+    this.PARSERS = [];
+})();
```
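A minimal sketch of hooking into the queues above from another script file. The diff only shows that the queues are plain arrays of functions, so the callback arguments used here are assumptions:

```js
// Run once the main script finishes setup (argument-free callback assumed)
TAC.Ext.QUEUE_AFTER_SETUP.push(() => {
    console.log("tag autocomplete finished setup");
});

// React to a completed insertion; the callback parameter is illustrative
TAC.Ext.QUEUE_AFTER_INSERT.push((textArea) => {
    console.log("completion inserted into", textArea);
});
```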
```diff
@@ -1,21 +1,21 @@
-class FunctionNotOverriddenError extends Error {
+TAC.FunctionNotOverriddenError = class FunctionNotOverriddenError extends Error {
     constructor(message = "", ...args) {
         super(message, ...args);
         this.message = message + " is an abstract base function and must be overwritten.";
     }
 }
 
-class BaseTagParser {
+TAC.BaseTagParser = class BaseTagParser {
     triggerCondition = null;
 
     constructor (triggerCondition) {
-        if (new.target === BaseTagParser) {
+        if (new.target === TAC.BaseTagParser) {
             throw new TypeError("Cannot construct abstract BaseCompletionParser directly");
         }
         this.triggerCondition = triggerCondition;
     }
 
     parse() {
-        throw new FunctionNotOverriddenError("parse()");
+        throw new TAC.FunctionNotOverriddenError("parse()");
    }
 }
```
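A hypothetical concrete parser, sketching how the abstract TAC.BaseTagParser above appears intended to be used: pass a trigger condition to the base constructor and override parse(). The predicate form of the trigger condition and the return value are assumptions; only the base class and the PARSERS array come from the diff:

```js
TAC.ChantParser = class ChantParser extends TAC.BaseTagParser {
    constructor() {
        // Assumed: triggerCondition is a predicate over the current tag word
        super((tagword) => tagword.startsWith("<c:"));
    }

    parse(tagword) {
        // Return whatever result objects the completion pipeline expects,
        // e.g. TAC.AutocompleteResult instances (defined in a later file)
        return [];
    }
};

// Register the parser so the main script will try it
TAC.Ext.PARSERS.push(new TAC.ChantParser());
```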
```diff
@@ -1,145 +1,146 @@
 // From https://github.com/component/textarea-caret-position
 
-// We'll copy the properties below into the mirror div.
-// Note that some browsers, such as Firefox, do not concatenate properties
-// into their shorthand (e.g. padding-top, padding-bottom etc. -> padding),
-// so we have to list every single property explicitly.
-var properties = [
-    'direction', // RTL support
-    'boxSizing',
-    'width', // on Chrome and IE, exclude the scrollbar, so the mirror div wraps exactly as the textarea does
-    'height',
-    'overflowX',
-    'overflowY', // copy the scrollbar for IE
-
-    'borderTopWidth',
-    'borderRightWidth',
-    'borderBottomWidth',
-    'borderLeftWidth',
-    'borderStyle',
-
-    'paddingTop',
-    'paddingRight',
-    'paddingBottom',
-    'paddingLeft',
-
-    // https://developer.mozilla.org/en-US/docs/Web/CSS/font
-    'fontStyle',
-    'fontVariant',
-    'fontWeight',
-    'fontStretch',
-    'fontSize',
-    'fontSizeAdjust',
-    'lineHeight',
-    'fontFamily',
-
-    'textAlign',
-    'textTransform',
-    'textIndent',
-    'textDecoration', // might not make a difference, but better be safe
-
-    'letterSpacing',
-    'wordSpacing',
-
-    'tabSize',
-    'MozTabSize'
-];
-
-var isBrowser = (typeof window !== 'undefined');
-var isFirefox = (isBrowser && window.mozInnerScreenX != null);
-
-function getCaretCoordinates(element, position, options) {
-    if (!isBrowser) {
-        throw new Error('textarea-caret-position#getCaretCoordinates should only be called in a browser');
-    }
-
-    var debug = options && options.debug || false;
-    if (debug) {
-        var el = document.querySelector('#input-textarea-caret-position-mirror-div');
-        if (el) el.parentNode.removeChild(el);
-    }
-
-    // The mirror div will replicate the textarea's style
-    var div = document.createElement('div');
-    div.id = 'input-textarea-caret-position-mirror-div';
-    document.body.appendChild(div);
-
-    var style = div.style;
-    var computed = window.getComputedStyle ? window.getComputedStyle(element) : element.currentStyle; // currentStyle for IE < 9
-    var isInput = element.nodeName === 'INPUT';
-
-    // Default textarea styles
-    style.whiteSpace = 'pre-wrap';
-    if (!isInput)
-        style.wordWrap = 'break-word'; // only for textarea-s
-
-    // Position off-screen
-    style.position = 'absolute'; // required to return coordinates properly
-    if (!debug)
-        style.visibility = 'hidden'; // not 'display: none' because we want rendering
-
-    // Transfer the element's properties to the div
-    properties.forEach(function (prop) {
-        if (isInput && prop === 'lineHeight') {
-            // Special case for <input>s because text is rendered centered and line height may be != height
-            if (computed.boxSizing === "border-box") {
-                var height = parseInt(computed.height);
-                var outerHeight =
-                    parseInt(computed.paddingTop) +
-                    parseInt(computed.paddingBottom) +
-                    parseInt(computed.borderTopWidth) +
-                    parseInt(computed.borderBottomWidth);
-                var targetHeight = outerHeight + parseInt(computed.lineHeight);
-                if (height > targetHeight) {
-                    style.lineHeight = height - outerHeight + "px";
-                } else if (height === targetHeight) {
-                    style.lineHeight = computed.lineHeight;
-                } else {
-                    style.lineHeight = 0;
-                }
-            } else {
-                style.lineHeight = computed.height;
-            }
-        } else {
-            style[prop] = computed[prop];
-        }
-    });
-
-    if (isFirefox) {
-        // Firefox lies about the overflow property for textareas: https://bugzilla.mozilla.org/show_bug.cgi?id=984275
-        if (element.scrollHeight > parseInt(computed.height))
-            style.overflowY = 'scroll';
-    } else {
-        style.overflow = 'hidden'; // for Chrome to not render a scrollbar; IE keeps overflowY = 'scroll'
-    }
-
-    div.textContent = element.value.substring(0, position);
-    // The second special handling for input type="text" vs textarea:
-    // spaces need to be replaced with non-breaking spaces - http://stackoverflow.com/a/13402035/1269037
-    if (isInput)
-        div.textContent = div.textContent.replace(/\s/g, '\u00a0');
-
-    var span = document.createElement('span');
-    // Wrapping must be replicated *exactly*, including when a long word gets
-    // onto the next line, with whitespace at the end of the line before (#7).
-    // The *only* reliable way to do that is to copy the *entire* rest of the
-    // textarea's content into the <span> created at the caret position.
-    // For inputs, just '.' would be enough, but no need to bother.
-    span.textContent = element.value.substring(position) || '.'; // || because a completely empty faux span doesn't render at all
-    div.appendChild(span);
-
-    var coordinates = {
-        top: span.offsetTop + parseInt(computed['borderTopWidth']),
-        left: span.offsetLeft + parseInt(computed['borderLeftWidth']),
-        height: parseInt(computed['lineHeight'])
-    };
-
-    if (debug) {
-        span.style.backgroundColor = '#aaa';
-    } else {
-        document.body.removeChild(div);
-    }
-
-    return coordinates;
-}
+TAC.getCaretCoordinates = class CaretUtils {
+    // We'll copy the properties below into the mirror div.
+    // Note that some browsers, such as Firefox, do not concatenate properties
+    // into their shorthand (e.g. padding-top, padding-bottom etc. -> padding),
+    // so we have to list every single property explicitly.
+    static #properties = [
+        "direction", // RTL support
+        "boxSizing",
+        "width", // on Chrome and IE, exclude the scrollbar, so the mirror div wraps exactly as the textarea does
+        "height",
+        "overflowX",
+        "overflowY", // copy the scrollbar for IE
+
+        "borderTopWidth",
+        "borderRightWidth",
+        "borderBottomWidth",
+        "borderLeftWidth",
+        "borderStyle",
+
+        "paddingTop",
+        "paddingRight",
+        "paddingBottom",
+        "paddingLeft",
+
+        // https://developer.mozilla.org/en-US/docs/Web/CSS/font
+        "fontStyle",
+        "fontVariant",
+        "fontWeight",
+        "fontStretch",
+        "fontSize",
+        "fontSizeAdjust",
+        "lineHeight",
+        "fontFamily",
+
+        "textAlign",
+        "textTransform",
+        "textIndent",
+        "textDecoration", // might not make a difference, but better be safe
+
+        "letterSpacing",
+        "wordSpacing",
+
+        "tabSize",
+        "MozTabSize",
+    ];
+
+    static #isBrowser = typeof window !== "undefined";
+    static #isFirefox = this.#isBrowser && window.mozInnerScreenX != null;
+
+    static getCaretCoordinates(element, position, options) {
+        if (!CaretUtils.#isBrowser) {
+            throw new Error(
+                "textarea-caret-position#getCaretCoordinates should only be called in a browser"
+            );
+        }
+
+        var debug = (options && options.debug) || false;
+        if (debug) {
+            var el = document.querySelector("#input-textarea-caret-position-mirror-div");
+            if (el) el.parentNode.removeChild(el);
+        }
+
+        // The mirror div will replicate the textarea's style
+        var div = document.createElement("div");
+        div.id = "input-textarea-caret-position-mirror-div";
+        document.body.appendChild(div);
+
+        var style = div.style;
+        var computed = window.getComputedStyle
+            ? window.getComputedStyle(element)
+            : element.currentStyle; // currentStyle for IE < 9
+        var isInput = element.nodeName === "INPUT";
+
+        // Default textarea styles
+        style.whiteSpace = "pre-wrap";
+        if (!isInput) style.wordWrap = "break-word"; // only for textarea-s
+
+        // Position off-screen
+        style.position = "absolute"; // required to return coordinates properly
+        if (!debug) style.visibility = "hidden"; // not 'display: none' because we want rendering
+
+        // Transfer the element's properties to the div
+        CaretUtils.#properties.forEach(function (prop) {
+            if (isInput && prop === "lineHeight") {
+                // Special case for <input>s because text is rendered centered and line height may be != height
+                if (computed.boxSizing === "border-box") {
+                    var height = parseInt(computed.height);
+                    var outerHeight =
+                        parseInt(computed.paddingTop) +
+                        parseInt(computed.paddingBottom) +
+                        parseInt(computed.borderTopWidth) +
+                        parseInt(computed.borderBottomWidth);
+                    var targetHeight = outerHeight + parseInt(computed.lineHeight);
+                    if (height > targetHeight) {
+                        style.lineHeight = height - outerHeight + "px";
+                    } else if (height === targetHeight) {
+                        style.lineHeight = computed.lineHeight;
+                    } else {
+                        style.lineHeight = 0;
+                    }
+                } else {
+                    style.lineHeight = computed.height;
+                }
+            } else {
+                style[prop] = computed[prop];
+            }
+        });
+
+        if (CaretUtils.#isFirefox) {
+            // Firefox lies about the overflow property for textareas: https://bugzilla.mozilla.org/show_bug.cgi?id=984275
+            if (element.scrollHeight > parseInt(computed.height)) style.overflowY = "scroll";
+        } else {
+            style.overflow = "hidden"; // for Chrome to not render a scrollbar; IE keeps overflowY = 'scroll'
+        }
+
+        div.textContent = element.value.substring(0, position);
+        // The second special handling for input type="text" vs textarea:
+        // spaces need to be replaced with non-breaking spaces - http://stackoverflow.com/a/13402035/1269037
+        if (isInput) div.textContent = div.textContent.replace(/\s/g, "\u00a0");
+
+        var span = document.createElement("span");
+        // Wrapping must be replicated *exactly*, including when a long word gets
+        // onto the next line, with whitespace at the end of the line before (#7).
+        // The *only* reliable way to do that is to copy the *entire* rest of the
+        // textarea's content into the <span> created at the caret position.
+        // For inputs, just '.' would be enough, but no need to bother.
+        span.textContent = element.value.substring(position) || "."; // || because a completely empty faux span doesn't render at all
+        div.appendChild(span);
+
+        var coordinates = {
+            top: span.offsetTop + parseInt(computed["borderTopWidth"]),
+            left: span.offsetLeft + parseInt(computed["borderLeftWidth"]),
+            height: parseInt(computed["lineHeight"]),
+        };
+
+        if (debug) {
+            span.style.backgroundColor = "#aaa";
+        } else {
+            document.body.removeChild(div);
+        }
+
+        return coordinates;
+    }
+}.getCaretCoordinates;
```
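Usage sketch for the function above: position a completion popup at the text cursor of a prompt textarea. getCaretCoordinates(element, position) returns `{ top, left, height }` relative to the element, as in the code shown; the popup element and its selector are illustrative:

```js
const textarea = document.querySelector("#txt2img_prompt > label > textarea");
const popup = document.querySelector(".autocompleteResults"); // hypothetical popup element
const caret = TAC.getCaretCoordinates(textarea, textarea.selectionEnd);

// Offset the element-relative caret position by the textarea's own position,
// and drop the popup one line below the caret
popup.style.left = `${textarea.offsetLeft + caret.left}px`;
popup.style.top = `${textarea.offsetTop + caret.top + caret.height}px`;
```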
```diff
@@ -1,7 +1,7 @@
 // Result data type for cleaner use of optional completion result properties
 
 // Type enum
-const ResultType = Object.freeze({
+TAC.ResultType = Object.freeze({
     "tag": 1,
     "extra": 2,
     "embedding": 3,
@@ -12,21 +12,24 @@ const ResultType = Object.freeze({
     "hypernetwork": 8,
     "lora": 9,
     "lyco": 10,
-    "chant": 11
+    "chant": 11,
+    "styleName": 12
 });
 
 // Class to hold result data and annotations to make it clearer to use
-class AutocompleteResult {
+TAC.AutocompleteResult = class AutocompleteResult {
     // Main properties
     text = "";
-    type = ResultType.tag;
+    type = TAC.ResultType.tag;
 
     // Additional info, only used in some cases
     category = null;
-    count = null;
+    count = Number.MAX_SAFE_INTEGER;
     usageBias = null;
     aliases = null;
     meta = null;
     hash = null;
     sortKey = null;
 
     // Constructor
     constructor(text, type) {
```
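Constructing a result, assuming the constructor shown above simply assigns text and type; the optional fields are then set as plain properties, and the values here are illustrative:

```js
const result = new TAC.AutocompleteResult("1girl", TAC.ResultType.tag);
result.category = 0;         // tag category id, e.g. used for result coloring
result.count = 1000000;      // post count displayed next to the tag
result.aliases = ["1girls"]; // shown or searched depending on alias settings
```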
```diff
@@ -1,190 +1,218 @@
 // Utility functions to select text areas the script should work on,
 // including third party options.
 // Supported third party options so far:
 // - Dataset Tag Editor
 
-// Core text area selectors
-const core = [
-    "#txt2img_prompt > label > textarea",
-    "#img2img_prompt > label > textarea",
-    "#txt2img_neg_prompt > label > textarea",
-    "#img2img_neg_prompt > label > textarea",
-    ".prompt > label > textarea",
-    "#txt2img_edit_style_prompt > label > textarea",
-    "#txt2img_edit_style_neg_prompt > label > textarea",
-    "#img2img_edit_style_prompt > label > textarea",
-    "#img2img_edit_style_neg_prompt > label > textarea"
-];
-
-// Third party text area selectors
-const thirdParty = {
-    "dataset-tag-editor": {
-        "base": "#tab_dataset_tag_editor_interface",
-        "hasIds": false,
-        "selectors": [
-            "Caption of Selected Image",
-            "Interrogate Result",
-            "Edit Caption",
-            "Edit Tags"
-        ]
-    },
-    "image browser": {
-        "base": "#tab_image_browser",
-        "hasIds": false,
-        "selectors": [
-            "Filename keyword search",
-            "EXIF keyword search"
-        ]
-    },
-    "tab_tagger": {
-        "base": "#tab_tagger",
-        "hasIds": false,
-        "selectors": [
-            "Additional tags (split by comma)",
-            "Exclude tags (split by comma)"
-        ]
-    },
-    "tiled-diffusion-t2i": {
-        "base": "#txt2img_script_container",
-        "hasIds": true,
-        "onDemand": true,
-        "selectors": [
-            "[id^=MD-t2i][id$=prompt] textarea",
-            "[id^=MD-t2i][id$=prompt] input[type='text']"
-        ]
-    },
-    "tiled-diffusion-i2i": {
-        "base": "#img2img_script_container",
-        "hasIds": true,
-        "onDemand": true,
-        "selectors": [
-            "[id^=MD-i2i][id$=prompt] textarea",
-            "[id^=MD-i2i][id$=prompt] input[type='text']"
-        ]
-    },
-    "adetailer-t2i": {
-        "base": "#txt2img_script_container",
-        "hasIds": true,
-        "onDemand": true,
-        "selectors": [
-            "[id^=script_txt2img_adetailer_ad_prompt] textarea",
-            "[id^=script_txt2img_adetailer_ad_negative_prompt] textarea"
-        ]
-    },
-    "adetailer-i2i": {
-        "base": "#img2img_script_container",
-        "hasIds": true,
-        "onDemand": true,
-        "selectors": [
-            "[id^=script_img2img_adetailer_ad_prompt] textarea",
-            "[id^=script_img2img_adetailer_ad_negative_prompt] textarea"
-        ]
-    }
-};
-
-function getTextAreas() {
-    // First get all core text areas
-    let textAreas = [...gradioApp().querySelectorAll(core.join(", "))];
-
-    for (const [key, entry] of Object.entries(thirdParty)) {
-        if (entry.hasIds) { // If the entry has proper ids, we can just select them
-            textAreas = textAreas.concat([...gradioApp().querySelectorAll(entry.selectors.join(", "))]);
-        } else { // Otherwise, we have to find the text areas by their adjacent labels
-            let base = gradioApp().querySelector(entry.base);
-
-            // Safety check
-            if (!base) continue;
-
-            let allTextAreas = [...base.querySelectorAll("textarea, input[type='text']")];
-
-            // Filter the text areas where the adjacent label matches one of the selectors
-            let matchingTextAreas = allTextAreas.filter(ta => [...ta.parentElement.childNodes].some(x => entry.selectors.includes(x.innerText)));
-            textAreas = textAreas.concat(matchingTextAreas);
-        }
-    };
-
-    return textAreas;
-}
-
-function addOnDemandObservers(setupFunction) {
-    for (const [key, entry] of Object.entries(thirdParty)) {
-        if (!entry.onDemand) continue;
-
-        let base = gradioApp().querySelector(entry.base);
-        if (!base) continue;
-
-        let accordions = [...base?.querySelectorAll(".gradio-accordion")];
-        if (!accordions) continue;
-
-        accordions.forEach(acc => {
-            let accObserver = new MutationObserver((mutationList, observer) => {
-                for (const mutation of mutationList) {
-                    if (mutation.type === "childList") {
-                        let newChildren = mutation.addedNodes;
-                        if (!newChildren) {
-                            accObserver.disconnect();
-                            continue;
-                        }
-
-                        newChildren.forEach(child => {
-                            if (child.classList.contains("gradio-accordion") || child.querySelector(".gradio-accordion")) {
-                                let newAccordions = [...child.querySelectorAll(".gradio-accordion")];
-                                newAccordions.forEach(nAcc => accObserver.observe(nAcc, { childList: true }));
-                            }
-                        });
-
-                        if (entry.hasIds) { // If the entry has proper ids, we can just select them
-                            [...gradioApp().querySelectorAll(entry.selectors.join(", "))].forEach(x => setupFunction(x));
-                        } else { // Otherwise, we have to find the text areas by their adjacent labels
-                            let base = gradioApp().querySelector(entry.base);
-
-                            // Safety check
-                            if (!base) continue;
-
-                            let allTextAreas = [...base.querySelectorAll("textarea, input[type='text']")];
-
-                            // Filter the text areas where the adjacent label matches one of the selectors
-                            let matchingTextAreas = allTextAreas.filter(ta => [...ta.parentElement.childNodes].some(x => entry.selectors.includes(x.innerText)));
-                            matchingTextAreas.forEach(x => setupFunction(x));
-                        }
-                    }
-                }
-            });
-            accObserver.observe(acc, { childList: true });
-        });
-    };
-}
-
-const thirdPartyIdSet = new Set();
-// Get the identifier for the text area to differentiate between positive and negative
-function getTextAreaIdentifier(textArea) {
-    let txt2img_p = gradioApp().querySelector('#txt2img_prompt > label > textarea');
-    let txt2img_n = gradioApp().querySelector('#txt2img_neg_prompt > label > textarea');
-    let img2img_p = gradioApp().querySelector('#img2img_prompt > label > textarea');
-    let img2img_n = gradioApp().querySelector('#img2img_neg_prompt > label > textarea');
-
-    let modifier = "";
-    switch (textArea) {
-        case txt2img_p:
-            modifier = ".txt2img.p";
-            break;
-        case txt2img_n:
-            modifier = ".txt2img.n";
-            break;
-        case img2img_p:
-            modifier = ".img2img.p";
-            break;
-        case img2img_n:
-            modifier = ".img2img.n";
-            break;
-        default:
-            // If the text area is not a core text area, it must be a third party text area
-            // Add it to the set of third party text areas and get its index as a unique identifier
-            if (!thirdPartyIdSet.has(textArea))
-                thirdPartyIdSet.add(textArea);
-
-            modifier = `.thirdParty.ta${[...thirdPartyIdSet].indexOf(textArea)}`;
-            break;
-    }
-    return modifier;
-}
+TAC.TextAreas = new (function () {
+    // Core text area selectors
+    const core = [
+        "#txt2img_prompt > label > textarea",
+        "#img2img_prompt > label > textarea",
+        "#txt2img_neg_prompt > label > textarea",
+        "#img2img_neg_prompt > label > textarea",
+        ".prompt > label > textarea",
+        "#txt2img_edit_style_prompt > label > textarea",
+        "#txt2img_edit_style_neg_prompt > label > textarea",
+        "#img2img_edit_style_prompt > label > textarea",
+        "#img2img_edit_style_neg_prompt > label > textarea",
+    ];
+
+    // Third party text area selectors
+    const thirdParty = {
+        "dataset-tag-editor": {
+            base: "#tab_dataset_tag_editor_interface",
+            hasIds: false,
+            selectors: [
+                "Caption of Selected Image",
+                "Interrogate Result",
+                "Edit Caption",
+                "Edit Tags",
+            ],
+        },
+        "image browser": {
+            base: "#tab_image_browser",
+            hasIds: false,
+            selectors: ["Filename keyword search", "EXIF keyword search"],
+        },
+        tab_tagger: {
+            base: "#tab_tagger",
+            hasIds: false,
+            selectors: ["Additional tags (split by comma)", "Exclude tags (split by comma)"],
+        },
+        "tiled-diffusion-t2i": {
+            base: "#txt2img_script_container",
+            hasIds: true,
+            onDemand: true,
+            selectors: [
+                "[id^=MD-t2i][id$=prompt] textarea",
+                "[id^=MD-t2i][id$=prompt] input[type='text']",
+            ],
+        },
+        "tiled-diffusion-i2i": {
+            base: "#img2img_script_container",
+            hasIds: true,
+            onDemand: true,
+            selectors: [
+                "[id^=MD-i2i][id$=prompt] textarea",
+                "[id^=MD-i2i][id$=prompt] input[type='text']",
+            ],
+        },
+        "adetailer-t2i": {
+            base: "#txt2img_script_container",
+            hasIds: true,
+            onDemand: true,
+            selectors: [
+                "[id^=script_txt2img_adetailer_ad_prompt] textarea",
+                "[id^=script_txt2img_adetailer_ad_negative_prompt] textarea",
+            ],
+        },
+        "adetailer-i2i": {
+            base: "#img2img_script_container",
+            hasIds: true,
+            onDemand: true,
+            selectors: [
+                "[id^=script_img2img_adetailer_ad_prompt] textarea",
+                "[id^=script_img2img_adetailer_ad_negative_prompt] textarea",
+            ],
+        },
+        "deepdanbooru-object-recognition": {
+            base: "#tab_deepdanboru_object_recg_tab",
+            hasIds: false,
+            selectors: ["Found tags"],
+        },
+        TIPO: {
+            base: "#tab_txt2img",
+            hasIds: false,
+            selectors: ["Tag Prompt"],
+        },
+    };
+
+    this.getTextAreas = function () {
+        // First get all core text areas
+        let textAreas = [...gradioApp().querySelectorAll(core.join(", "))];
+
+        for (const [key, entry] of Object.entries(thirdParty)) {
+            if (entry.hasIds) {
+                // If the entry has proper ids, we can just select them
+                textAreas = textAreas.concat([
+                    ...gradioApp().querySelectorAll(entry.selectors.join(", ")),
+                ]);
+            } else {
+                // Otherwise, we have to find the text areas by their adjacent labels
+                let base = gradioApp().querySelector(entry.base);
+
+                // Safety check
+                if (!base) continue;
+
+                let allTextAreas = [...base.querySelectorAll("textarea, input[type='text']")];
+
+                // Filter the text areas where the adjacent label matches one of the selectors
+                let matchingTextAreas = allTextAreas.filter((ta) =>
+                    [...ta.parentElement.childNodes].some((x) =>
+                        entry.selectors.includes(x.innerText)
+                    )
+                );
+                textAreas = textAreas.concat(matchingTextAreas);
+            }
+        }
+
+        return textAreas;
+    }
+
+    this.addOnDemandObservers = function (setupFunction) {
+        for (const [key, entry] of Object.entries(thirdParty)) {
+            if (!entry.onDemand) continue;
+
+            let base = gradioApp().querySelector(entry.base);
+            if (!base) continue;
+
+            let accordions = [...base?.querySelectorAll(".gradio-accordion")];
+            if (!accordions) continue;
+
+            accordions.forEach((acc) => {
+                let accObserver = new MutationObserver((mutationList, observer) => {
+                    for (const mutation of mutationList) {
+                        if (mutation.type === "childList") {
+                            let newChildren = mutation.addedNodes;
+                            if (!newChildren) {
+                                accObserver.disconnect();
+                                continue;
+                            }
+
+                            newChildren.forEach((child) => {
+                                if (
+                                    child.classList.contains("gradio-accordion") ||
+                                    child.querySelector(".gradio-accordion")
+                                ) {
+                                    let newAccordions = [
+                                        ...child.querySelectorAll(".gradio-accordion"),
+                                    ];
+                                    newAccordions.forEach((nAcc) =>
+                                        accObserver.observe(nAcc, { childList: true })
+                                    );
+                                }
+                            });
+
+                            if (entry.hasIds) {
+                                // If the entry has proper ids, we can just select them
+                                [
+                                    ...gradioApp().querySelectorAll(entry.selectors.join(", ")),
+                                ].forEach((x) => setupFunction(x));
+                            } else {
+                                // Otherwise, we have to find the text areas by their adjacent labels
+                                let base = gradioApp().querySelector(entry.base);
+
+                                // Safety check
+                                if (!base) continue;
+
+                                let allTextAreas = [
+                                    ...base.querySelectorAll("textarea, input[type='text']"),
+                                ];
+
+                                // Filter the text areas where the adjacent label matches one of the selectors
+                                let matchingTextAreas = allTextAreas.filter((ta) =>
+                                    [...ta.parentElement.childNodes].some((x) =>
+                                        entry.selectors.includes(x.innerText)
+                                    )
+                                );
+                                matchingTextAreas.forEach((x) => setupFunction(x));
+                            }
+                        }
+                    }
+                });
+                accObserver.observe(acc, { childList: true });
+            });
+        };
+    }
+
+    const thirdPartyIdSet = new Set();
+    // Get the identifier for the text area to differentiate between positive and negative
+    this.getTextAreaIdentifier = function (textArea) {
+        let txt2img_p = gradioApp().querySelector("#txt2img_prompt > label > textarea");
+        let txt2img_n = gradioApp().querySelector("#txt2img_neg_prompt > label > textarea");
+        let img2img_p = gradioApp().querySelector("#img2img_prompt > label > textarea");
+        let img2img_n = gradioApp().querySelector("#img2img_neg_prompt > label > textarea");
+
+        let modifier = "";
+        switch (textArea) {
+            case txt2img_p:
+                modifier = ".txt2img.p";
+                break;
+            case txt2img_n:
+                modifier = ".txt2img.n";
+                break;
+            case img2img_p:
+                modifier = ".img2img.p";
+                break;
+            case img2img_n:
+                modifier = ".img2img.n";
+                break;
+            default:
+                // If the text area is not a core text area, it must be a third party text area
+                // Add it to the set of third party text areas and get its index as a unique identifier
+                if (!thirdPartyIdSet.has(textArea)) thirdPartyIdSet.add(textArea);
+
+                modifier = `.thirdParty.ta${[...thirdPartyIdSet].indexOf(textArea)}`;
+                break;
+        }
+        return modifier;
+    }
+})();
```
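Usage sketch wiring up completion on every prompt box found by the selectors above. Only getTextAreas, getTextAreaIdentifier and addOnDemandObservers come from the code shown; setupTextArea is a stand-in for whatever per-textarea setup the main script performs:

```js
function setupTextArea(textArea, identifier) {
    console.log(`enabling completion for ${identifier}`);
}

// Text areas that already exist when the script loads
TAC.TextAreas.getTextAreas().forEach((ta) =>
    setupTextArea(ta, TAC.TextAreas.getTextAreaIdentifier(ta))
);

// Third-party areas created lazily (onDemand entries) get the same setup when they appear
TAC.TextAreas.addOnDemandObservers((ta) =>
    setupTextArea(ta, TAC.TextAreas.getTextAreaIdentifier(ta))
);
```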
@@ -1,232 +1,624 @@
|
||||
// Utility functions for tag autocomplete
|
||||
TAC.Utils = class TacUtils {
|
||||
/**
|
||||
* Parses a CSV file into a 2D array. Doesn't use regex, so it is very lightweight.
|
||||
* We are ignoring newlines in quote fields since we expect one-line entries and parsing would break for unclosed quotes otherwise
|
||||
* @param {String} str - The CSV string to parse (likely from a file with multiple lines)
|
||||
* @returns {string[][]} A 2D array of CSV entries (rows and columns of that row)
|
||||
*/
|
||||
static parseCSV(str) {
|
||||
const arr = [];
|
||||
let quote = false; // 'true' means we're inside a quoted field
|
||||
|
||||
// Parse the CSV file into a 2D array. Doesn't use regex, so it is very lightweight.
|
||||
function parseCSV(str) {
|
||||
var arr = [];
|
||||
var quote = false; // 'true' means we're inside a quoted field
|
||||
// Iterate over each character, keep track of current row and column (of the returned array)
|
||||
for (let row = 0, col = 0, c = 0; c < str.length; c++) {
|
||||
let cc = str[c],
|
||||
nc = str[c + 1]; // Current character, next character
|
||||
arr[row] = arr[row] || []; // Create a new row if necessary
|
||||
arr[row][col] = arr[row][col] || ""; // Create a new column (start with empty string) if necessary
|
||||
|
||||
// Iterate over each character, keep track of current row and column (of the returned array)
|
||||
for (var row = 0, col = 0, c = 0; c < str.length; c++) {
|
||||
var cc = str[c], nc = str[c + 1]; // Current character, next character
|
||||
arr[row] = arr[row] || []; // Create a new row if necessary
|
||||
arr[row][col] = arr[row][col] || ''; // Create a new column (start with empty string) if necessary
|
||||
|
||||
// If the current character is a quotation mark, and we're inside a
|
||||
// quoted field, and the next character is also a quotation mark,
|
||||
// add a quotation mark to the current column and skip the next character
|
||||
if (cc == '"' && quote && nc == '"') { arr[row][col] += cc; ++c; continue; }
|
||||
|
||||
// If it's just one quotation mark, begin/end quoted field
|
||||
if (cc == '"') { quote = !quote; continue; }
|
||||
|
||||
// If it's a comma and we're not in a quoted field, move on to the next column
|
||||
if (cc == ',' && !quote) { ++col; continue; }
|
||||
|
||||
// If it's a newline (CRLF) and we're not in a quoted field, skip the next character
|
||||
// and move on to the next row and move to column 0 of that new row
|
||||
if (cc == '\r' && nc == '\n' && !quote) { ++row; col = 0; ++c; continue; }
|
||||
|
||||
// If it's a newline (LF or CR) and we're not in a quoted field,
|
||||
// move on to the next row and move to column 0 of that new row
|
||||
if (cc == '\n' && !quote) { ++row; col = 0; continue; }
|
||||
if (cc == '\r' && !quote) { ++row; col = 0; continue; }
|
||||
|
||||
// Otherwise, append the current character to the current column
|
||||
arr[row][col] += cc;
|
||||
}
|
||||
return arr;
|
||||
}
|
||||
|
||||
// Load file
|
||||
async function readFile(filePath, json = false, cache = false) {
|
||||
if (!cache)
|
||||
filePath += `?${new Date().getTime()}`;
|
||||
|
||||
let response = await fetch(`file=${filePath}`);
|
||||
|
||||
if (response.status != 200) {
|
||||
console.error(`Error loading file "${filePath}": ` + response.status, response.statusText);
|
||||
return null;
|
||||
}
|
||||
|
||||
if (json)
|
||||
return await response.json();
|
||||
else
|
||||
return await response.text();
|
||||
}
|
||||
|
||||
// Load CSV
|
||||
async function loadCSV(path) {
|
||||
let text = await readFile(path);
|
||||
return parseCSV(text);
|
||||
}
|
||||
|
||||
// Fetch API
|
||||
async function fetchAPI(url, json = true, cache = false) {
|
||||
if (!cache) {
|
||||
const appendChar = url.includes("?") ? "&" : "?";
|
||||
url += `${appendChar}${new Date().getTime()}`
|
||||
}
|
||||
|
||||
let response = await fetch(url);
|
||||
|
||||
if (response.status != 200) {
|
||||
console.error(`Error fetching API endpoint "${url}": ` + response.status, response.statusText);
|
||||
return null;
|
||||
}
|
||||
|
||||
if (json)
|
||||
return await response.json();
|
||||
else
|
||||
return await response.text();
|
||||
}
|
||||
|
||||
// Extra network preview thumbnails
|
||||
async function getExtraNetworkPreviewURL(filename, type) {
|
||||
const previewJSON = await fetchAPI(`tacapi/v1/thumb-preview/${filename}?type=${type}`, true, true);
|
||||
if (previewJSON?.url) {
|
||||
const properURL = `sd_extra_networks/thumb?filename=${previewJSON.url}`;
|
||||
if ((await fetch(properURL)).status == 200) {
|
||||
return properURL;
|
||||
} else {
|
||||
// create blob url
|
||||
const blob = await (await fetch(`tacapi/v1/thumb-preview-blob/${filename}?type=${type}`)).blob();
|
||||
return URL.createObjectURL(blob);
|
||||
}
|
||||
} else {
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
// Debounce function to prevent spamming the autocomplete function
|
||||
var dbTimeOut;
|
||||
const debounce = (func, wait = 300) => {
|
||||
return function (...args) {
|
||||
if (dbTimeOut) {
|
||||
clearTimeout(dbTimeOut);
|
||||
}
|
||||
|
||||
dbTimeOut = setTimeout(() => {
|
||||
func.apply(this, args);
|
||||
}, wait);
|
||||
}
|
||||
}
|
||||
|
||||
// Difference function to fix duplicates not being seen as changes in normal filter
|
||||
function difference(a, b) {
|
||||
if (a.length == 0) {
|
||||
return b;
|
||||
}
|
||||
if (b.length == 0) {
|
||||
return a;
|
||||
}
|
||||
|
||||
return [...b.reduce((acc, v) => acc.set(v, (acc.get(v) || 0) - 1),
|
||||
a.reduce((acc, v) => acc.set(v, (acc.get(v) || 0) + 1), new Map())
|
||||
)].reduce((acc, [v, count]) => acc.concat(Array(Math.abs(count)).fill(v)), []);
|
||||
}
|
||||
|
||||
// Object flatten function adapted from https://stackoverflow.com/a/61602592
|
||||
// $roots keeps previous parent properties as they will be added as a prefix for each prop.
|
||||
// $sep is just a preference if you want to seperate nested paths other than dot.
|
||||
function flatten(obj, roots = [], sep = ".") {
|
||||
return Object.keys(obj).reduce(
|
||||
(memo, prop) =>
|
||||
Object.assign(
|
||||
// create a new object
|
||||
{},
|
||||
// include previously returned object
|
||||
memo,
|
||||
Object.prototype.toString.call(obj[prop]) === "[object Object]"
|
||||
? // keep working if value is an object
|
||||
flatten(obj[prop], roots.concat([prop]), sep)
|
||||
: // include current prop and value and prefix prop with the roots
|
||||
{ [roots.concat([prop]).join(sep)]: obj[prop] }
|
||||
),
|
||||
{}
|
||||
);
|
||||
}
|
||||
|
||||
|
||||
// Sliding window function to get possible combination groups of an array
|
||||
function toNgrams(inputArray, size) {
|
||||
return Array.from(
|
||||
{ length: inputArray.length - (size - 1) }, //get the appropriate length
|
||||
(_, index) => inputArray.slice(index, index + size) //create the windows
|
||||
);
|
||||
}
|
||||
|
||||
function escapeRegExp(string) {
|
||||
return string.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'); // $& means the whole matched string
|
||||
}
|
||||
function escapeHTML(unsafeText) {
|
||||
let div = document.createElement('div');
|
||||
div.textContent = unsafeText;
|
||||
return div.innerHTML;
|
||||
}
|
||||
|
||||
// For black/whitelisting
|
||||
function updateModelName() {
|
||||
let sdm = gradioApp().querySelector("#setting_sd_model_checkpoint");
|
||||
let modelDropdown = sdm?.querySelector("input") || sdm?.querySelector("select");
|
||||
if (modelDropdown) {
|
||||
currentModelName = modelDropdown.value;
|
||||
} else {
|
||||
// Fallback for intermediate versions
|
||||
modelDropdown = sdm?.querySelector("span.single-select");
|
||||
currentModelName = modelDropdown?.textContent || "";
|
||||
}
|
||||
}
|
||||
|
||||
// From https://stackoverflow.com/a/61975440, how to detect JS value changes
|
||||
function observeElement(element, property, callback, delay = 0) {
|
||||
let elementPrototype = Object.getPrototypeOf(element);
|
||||
if (elementPrototype.hasOwnProperty(property)) {
|
||||
let descriptor = Object.getOwnPropertyDescriptor(elementPrototype, property);
|
||||
Object.defineProperty(element, property, {
|
||||
get: function() {
|
||||
return descriptor.get.apply(this, arguments);
|
||||
},
|
||||
set: function () {
|
||||
let oldValue = this[property];
|
||||
descriptor.set.apply(this, arguments);
|
||||
let newValue = this[property];
|
||||
if (typeof callback == "function") {
|
||||
setTimeout(callback.bind(this, oldValue, newValue), delay);
|
||||
}
|
||||
return newValue;
|
||||
// If the current character is a quotation mark, and we're inside a
|
||||
// quoted field, and the next character is also a quotation mark,
|
||||
// add a quotation mark to the current column and skip the next character
|
||||
if (cc == '"' && quote && nc == '"') {
|
||||
arr[row][col] += cc;
|
||||
++c;
|
||||
continue;
|
||||
}
|
||||
|
||||
// If it's just one quotation mark, begin/end quoted field
|
||||
if (cc == '"') {
|
||||
quote = !quote;
|
||||
continue;
|
||||
}
|
||||
|
||||
// If it's a comma and we're not in a quoted field, move on to the next column
|
||||
if (cc == "," && !quote) {
|
||||
++col;
|
||||
continue;
|
||||
}
|
||||
|
||||
// If it's a newline (CRLF), skip the next character and move on to the next row and move to column 0 of that new row
|
||||
if (cc == "\r" && nc == "\n") {
|
||||
++row;
|
||||
col = 0;
|
||||
++c;
|
||||
quote = false;
|
||||
continue;
|
||||
}
|
||||
|
||||
// If it's a newline (LF or CR) move on to the next row and move to column 0 of that new row
|
||||
if (cc == "\n") {
|
||||
++row;
|
||||
col = 0;
|
||||
quote = false;
|
||||
continue;
|
||||
}
|
||||
if (cc == "\r") {
|
||||
++row;
|
||||
col = 0;
|
||||
quote = false;
|
||||
continue;
|
||||
}
|
||||
|
||||
// Otherwise, append the current character to the current column
|
||||
arr[row][col] += cc;
|
||||
}
|
||||
return arr;
|
||||
}
|
||||
|
||||
/** Wrapper function to read a file from a path, using Gradio's "file="" accessor API
|
||||
* @param {String} filePath - The path to the file
|
||||
* @param {Boolean} json - Whether to parse the file as JSON
|
||||
* @param {Boolean} cache - Whether to cache the response
|
||||
* @returns {Promise<String | any>} The file content as a string or JSON object (if json is true)
|
||||
*/
|
||||
static async readFile(filePath, json = false, cache = false) {
|
||||
if (!cache) filePath += `?${new Date().getTime()}`;
|
||||
|
||||
let response = await fetch(`file=${filePath}`);
|
||||
|
||||
if (response.status != 200) {
|
||||
console.error(
|
||||
`Error loading file "${filePath}": ` + response.status,
|
||||
response.statusText
|
||||
);
|
||||
return null;
|
||||
}
|
||||
|
||||
if (json) return await response.json();
|
||||
else return await response.text();
|
||||
}
|
||||
|
||||
/** Wrapper function to read a file from the path and parse it as CSV
|
||||
* @param {String} path - The path to the CSV file
|
||||
* @returns {Promise<String[][]>} A 2D array of CSV entries
|
||||
*/
|
||||
static async loadCSV(path) {
|
||||
let text = await this.readFile(path);
|
||||
return this.parseCSV(text);
|
||||
}
|
||||
|
||||
    /**
     * Calls the TAC API for a GET request
     * @param {String} url - The URL to fetch from
     * @param {Boolean} json - Whether to parse the response as JSON or plain text
     * @param {Boolean} cache - Whether to cache the response
     * @returns {Promise<any | String>} JSON or text response from the API, depending on the "json" parameter
     */
    static async fetchAPI(url, json = true, cache = false) {
        if (!cache) {
            const appendChar = url.includes("?") ? "&" : "?";
            url += `${appendChar}${new Date().getTime()}`;
        }

        let response = await fetch(url);

        if (response.status != 200) {
            console.error(
                `Error fetching API endpoint "${url}": ` + response.status,
                response.statusText
            );
            return null;
        }

        if (json) return await response.json();
        else return await response.text();
    }

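    // Note and sketch (added): the timestamp appended above acts as a cache
    // buster, so uncached calls always hit the backend instead of the browser
    // cache. Assuming a lora named "myLora" (hypothetical):
    //
    //   const info = await TacUtils.fetchAPI(`tacapi/v1/lora-info/myLora`);
    //   // fetched as "tacapi/v1/lora-info/myLora?1700000000000" (timestamp varies)
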
    /**
     * Posts to the TAC API
     * @param {String} url - The URL to post to
     * @param {String} body - (optional) The body of the POST request as a JSON string
     * @returns JSON response from the API
     */
    static async postAPI(url, body = null) {
        let response = await fetch(url, {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: body,
        });

        if (response.status != 200) {
            console.error(
                `Error posting to API endpoint "${url}": ` + response.status,
                response.statusText
            );
            return null;
        }

        return await response.json();
    }

    /**
     * Puts to the TAC API
     * @param {String} url - The URL to put to
     * @param {String} body - (optional) The body of the PUT request as a JSON string
     * @returns JSON response from the API
     */
    static async putAPI(url, body = null) {
        let response = await fetch(url, { method: "PUT", body: body });

        if (response.status != 200) {
            console.error(
                `Error putting to API endpoint "${url}": ` + response.status,
                response.statusText
            );
            return null;
        }

        return await response.json();
    }

    /**
     * Get a preview image URL for a given extra network file.
     * Uses the official webui endpoint if available, otherwise creates a blob URL.
     * @param {String} filename - The filename of the extra network file
     * @param {String} type - One of "embed", "hyper", "lora", or "lyco", to determine the lookup location
     * @returns {Promise<String>} URL to a preview image for the extra network file, if available
     */
    static async getExtraNetworkPreviewURL(filename, type) {
        const previewJSON = await this.fetchAPI(
            `tacapi/v1/thumb-preview/${filename}?type=${type}`,
            true,
            true
        );
        if (previewJSON?.url) {
            const properURL = `sd_extra_networks/thumb?filename=${previewJSON.url}`;
            if ((await fetch(properURL)).status == 200) {
                return properURL;
            } else {
                // create blob url
                const blob = await (
                    await fetch(`tacapi/v1/thumb-preview-blob/${filename}?type=${type}`)
                ).blob();
                return URL.createObjectURL(blob);
            }
        } else {
            return null;
        }
    }

    static #lastStyleRefresh = 0;
    /**
     * Refreshes the styles.txt file if it has changed since the last check.
     * Checks at most once per second to prevent spamming the API.
     */
    static async refreshStyleNamesIfChanged() {
        // Only refresh once per second
        let currentTimestamp = new Date().getTime();
        if (currentTimestamp - this.#lastStyleRefresh < 1000) return;
        this.#lastStyleRefresh = currentTimestamp;

        const response = await fetch(`tacapi/v1/refresh-styles-if-changed?${new Date().getTime()}`);
        if (response.status === 304) {
            // Not modified
        } else if (response.status === 200) {
            // Reload
            TAC.Ext.QUEUE_FILE_LOAD.forEach(async (fn) => {
                if (fn.toString().includes("styleNames")) await fn.call(null, true);
            });
        } else {
            // Error
            console.error(`Error refreshing styles.txt: ` + response.status, response.statusText);
        }
    }

    static #dbTimeOut;
    /**
     * Generic debounce function to prevent spamming the autocompletion during fast typing
     * @param {Function} func - The function to debounce
     * @param {Number} wait - The debounce time in milliseconds
     * @returns {Function} The debounced function
     */
    static debounce = (func, wait = 300) => {
        return function (...args) {
            // Caution: Since we are in an anonymous function, 'this' would not refer to the class
            if (TacUtils.#dbTimeOut) {
                clearTimeout(TacUtils.#dbTimeOut);
            }

            TacUtils.#dbTimeOut = setTimeout(() => {
                func.apply(this, args);
            }, wait);
        };
    };

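    // Hedged usage sketch (added): delaying work until typing pauses. Because the
    // timer lives in the single static #dbTimeOut field, all functions debounced
    // through this helper share one timer.
    //
    //   const run = TacUtils.debounce(() => console.log("stopped typing"), 150);
    //   promptTextArea.addEventListener("input", run); // promptTextArea is hypothetical
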
    /**
     * Calculates the difference between two arrays (order-sensitive).
     * Fixes duplicates not being seen as changes in a normal filter function.
     * @param {Array} a
     * @param {Array} b
     * @returns {Array} The difference between the two arrays
     */
    static difference(a, b) {
        if (a.length == 0) {
            return b;
        }
        if (b.length == 0) {
            return a;
        }

        return [
            ...b.reduce(
                (acc, v) => acc.set(v, (acc.get(v) || 0) - 1),
                a.reduce((acc, v) => acc.set(v, (acc.get(v) || 0) + 1), new Map())
            ),
        ].reduce((acc, [v, count]) => acc.concat(Array(Math.abs(count)).fill(v)), []);
    }

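    // Worked example (added): the Map-based counting keeps duplicates visible,
    // which a plain filter would miss.
    //
    //   TacUtils.difference(["a", "a", "b"], ["a", "b"]); // -> ["a"]
    //   ["a", "a", "b"].filter((x) => !["a", "b"].includes(x)); // -> [] (duplicate lost)
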
    /**
     * Object flatten function adapted from https://stackoverflow.com/a/61602592
     * @param {*} obj - The object to flatten
     * @param {Array} roots - Keeps previous parent properties as they will be added as a prefix for each prop.
     * @param {String} sep - Just a preference if you want to separate nested paths with something other than a dot.
     * @returns The flattened object
     */
    static flatten(obj, roots = [], sep = ".") {
        return Object.keys(obj).reduce(
            (memo, prop) =>
                Object.assign(
                    // create a new object
                    {},
                    // include previously returned object
                    memo,
                    Object.prototype.toString.call(obj[prop]) === "[object Object]"
                        ? // keep working if value is an object
                          this.flatten(obj[prop], roots.concat([prop]), sep)
                        : // include current prop and value and prefix prop with the roots
                          { [roots.concat([prop]).join(sep)]: obj[prop] }
                ),
            {}
        );
    }

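    // Illustration (added):
    //
    //   TacUtils.flatten({ a: { b: 1 }, c: 2 }); // -> { "a.b": 1, "c": 2 }
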
    /**
     * Calculate biased tag score based on post count and frequent usage
     * @param {TAC.AutocompleteResult} result - The unbiased result
     * @param {Number} count - The post count (or similar base metric)
     * @param {Number} uses - The usage count
     * @returns {Number} The biased score for sorting
     */
    static calculateUsageBias(result, count, uses) {
        // Check setting conditions
        if (uses < TAC.CFG.frequencyMinCount) {
            uses = 0;
        } else if (uses != 0) {
            result.usageBias = true;
        }

        switch (TAC.CFG.frequencyFunction) {
            case "Logarithmic (weak)":
                return Math.log(1 + count) + Math.log(1 + uses);
            case "Logarithmic (strong)":
                return Math.log(1 + count) + 2 * Math.log(1 + uses);
            case "Usage first":
                return uses;
            default:
                return count;
        }
    }

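    // Worked example (added): with count = 1000 and uses = 20, assuming uses
    // clears TAC.CFG.frequencyMinCount:
    //   "Logarithmic (weak)"   -> ln(1001) + ln(21)     ≈ 6.91 + 3.04 ≈  9.95
    //   "Logarithmic (strong)" -> ln(1001) + 2 * ln(21) ≈ 6.91 + 6.09 ≈ 13.00
    //   "Usage first"          -> 20
    //   default                -> 1000
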
    /**
     * Utility function to map the use count array from the database to a more readable format,
     * since FastAPI omits the field names in the response.
     * @param {Array} useCounts
     * @param {Boolean} posAndNeg - Whether to include negative counts
     */
    static mapUseCountArray(useCounts, posAndNeg = false) {
        return useCounts.map((useCount) => {
            if (posAndNeg) {
                return {
                    name: useCount[0],
                    type: useCount[1],
                    count: useCount[2],
                    negCount: useCount[3],
                    lastUseDate: useCount[4],
                };
            }
            return {
                name: useCount[0],
                type: useCount[1],
                count: useCount[2],
                lastUseDate: useCount[3],
            };
        });
    }

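    // Illustration (added, values hypothetical): FastAPI returns positional arrays, so
    //
    //   TacUtils.mapUseCountArray([["1girl", 1, 42, "2024-01-01"]]);
    //   // -> [{ name: "1girl", type: 1, count: 42, lastUseDate: "2024-01-01" }]
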
    /**
     * Calls API endpoint to increase the count of a tag in the database.
     * Not awaited as it is non-critical and can be executed as fire-and-forget.
     * @param {String} tagName - The name of the tag
     * @param {TAC.ResultType} type - The type of the tag as mapped in {@link TAC.ResultType}
     * @param {Boolean} negative - Whether the tag was typed in a negative prompt field
     */
    static increaseUseCount(tagName, type, negative = false) {
        this.postAPI(
            `tacapi/v1/increase-use-count?tagname=${tagName}&ttype=${type}&neg=${negative}`
        );
    }

    /**
     * Get the use count of a tag from the database
     * @param {String} tagName - The name of the tag
     * @param {TAC.ResultType} type - The type of the tag as mapped in {@link TAC.ResultType}
     * @param {Boolean} negative - Whether we are currently in a negative prompt field
     * @returns {Promise<Number>} The use count of the tag
     */
    static async getUseCount(tagName, type, negative = false) {
        const response = await this.fetchAPI(
            `tacapi/v1/get-use-count?tagname=${tagName}&ttype=${type}&neg=${negative}`,
            true,
            false
        );
        // Guard for no db
        if (response == null) return null;
        // Result
        return response["result"];
    }
    /**
     * Retrieves the use counts of multiple tags at once from the database for improved performance
     * during typing.
     * @param {String[]} tagNames - An array of tag names
     * @param {TAC.ResultType[]} types - An array of tag types as mapped in {@link TAC.ResultType}
     * @param {Boolean} negative - Whether we are currently in a negative prompt field
     * @returns {Promise<Array>} The use count array mapped to named fields by {@link mapUseCountArray}
     */
    static async getUseCounts(tagNames, types, negative = false) {
        // While semantically weird, we have to use POST here for the body, as urls are limited in length
        const body = JSON.stringify({ tagNames: tagNames, tagTypes: types, neg: negative });
        const response = await this.postAPI(`tacapi/v1/get-use-count-list`, body);
        // Guard for no db
        if (response == null) return null;
        // Results
        return this.mapUseCountArray(response["result"]);
    }
    /**
     * Gets all use counts existing in the database.
     * @returns {Array} The use count array mapped to named fields by {@link mapUseCountArray}
     */
    static async getAllUseCounts() {
        const response = await this.fetchAPI(`tacapi/v1/get-all-use-counts`);
        // Guard for no db
        if (response == null) return null;
        // Results
        return this.mapUseCountArray(response["result"], true);
    }
    /**
     * Resets the use count of the given tag back to zero.
     * @param {String} tagName - The name of the tag
     * @param {TAC.ResultType} type - The type of the tag as mapped in {@link TAC.ResultType}
     * @param {Boolean} resetPosCount - Whether to reset the positive count
     * @param {Boolean} resetNegCount - Whether to reset the negative count
     */
    static async resetUseCount(tagName, type, resetPosCount, resetNegCount) {
        await this.putAPI(
            `tacapi/v1/reset-use-count?tagname=${tagName}&ttype=${type}&pos=${resetPosCount}&neg=${resetNegCount}`
        );
    }

    /**
     * Creates a table to display an overview of tag usage statistics.
     * Currently unused.
     * @param {Array} tagCounts - The use count array to use, mapped to named fields by {@link mapUseCountArray}
     * @returns {HTMLTableElement} The assembled tag usage table
     */
    static createTagUsageTable(tagCounts) {
        // Create table
        let tagTable = document.createElement("table");
        tagTable.innerHTML = `<thead>
            <tr>
                <td>Name</td>
                <td>Type</td>
                <td>Count(+)</td>
                <td>Count(-)</td>
                <td>Last used</td>
            </tr>
        </thead>`;
        tagTable.id = "tac_tagUsageTable";

        tagCounts.forEach((t) => {
            let tr = document.createElement("tr");

            // Fill values
            let values = [t.name, t.type - 1, t.count, t.negCount, t.lastUseDate];
            values.forEach((v) => {
                let td = document.createElement("td");
                td.innerText = v;
                tr.append(td);
            });
            // Add delete/reset button
            let delButton = document.createElement("button");
            delButton.innerText = "🗑️";
            delButton.title = "Reset count";
            tr.append(delButton);

            tagTable.append(tr);
        });

        return tagTable;
    }

    /**
     * Sliding window function to get possible combination groups of an array
     * @param {Array} inputArray
     * @param {Number} size
     * @returns {Array[]} ngram permutations of the input array
     */
    static toNgrams(inputArray, size) {
        return Array.from(
            { length: inputArray.length - (size - 1) }, // get the appropriate length
            (_, index) => inputArray.slice(index, index + size) // create the windows
        );
    }

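    // Illustration (added): sliding windows of size 2 over four elements.
    //
    //   TacUtils.toNgrams([1, 2, 3, 4], 2); // -> [[1, 2], [2, 3], [3, 4]]
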
    /**
     * Escapes a string for use in a regular expression.
     * @param {String} string
     * @param {Boolean} wildcardMatching - Wildcard matching mode doesn't escape asterisks and question marks, as they are handled separately there.
     * @returns {String} The escaped string
     */
    static escapeRegExp(string, wildcardMatching = false) {
        if (wildcardMatching) {
            // Escape all characters except asterisks and ?, which should be treated separately as placeholders.
            return string
                .replace(/[-[\]{}()+.,\\^$|#\s]/g, "\\$&")
                .replace(/\*/g, ".*")
                .replace(/\?/g, ".");
        }
        return string.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"); // $& means the whole matched string
    }

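    // Illustration (added): in wildcard mode, * and ? become regex wildcards while
    // other metacharacters are escaped.
    //
    //   TacUtils.escapeRegExp("1girl (solo)*", true); // -> '1girl\\ \\(solo\\).*'
    //   TacUtils.escapeRegExp("1girl (solo)");        // -> '1girl \\(solo\\)'
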
    /**
     * Escapes a string for use in HTML to not break formatting.
     * @param {String} unsafeText
     * @returns {String} The escaped HTML string
     */
    static escapeHTML(unsafeText) {
        let div = document.createElement("div");
        div.textContent = unsafeText;
        return div.innerHTML;
    }

    /** Updates {@link TAC.Globals.currentModelName} to the current model */
    static updateModelName() {
        let sdm = gradioApp().querySelector("#setting_sd_model_checkpoint");
        let modelDropdown = sdm?.querySelector("input") || sdm?.querySelector("select");
        if (modelDropdown) {
            TAC.Globals.currentModelName = modelDropdown.value;
        } else {
            // Fallback for intermediate versions
            modelDropdown = sdm?.querySelector("span.single-select");
            TAC.Globals.currentModelName = modelDropdown?.textContent || "";
        }
    }

    /**
     * From https://stackoverflow.com/a/61975440.
     * Detects value changes in an element that were triggered programmatically
     * @param {HTMLElement} element - The DOM element to observe
     * @param {String} property - The object property to observe
     * @param {Function} callback - The callback function to call when the property changes
     * @param {Number} delay - The delay in milliseconds to wait before calling the callback
     */
    static observeElement(element, property, callback, delay = 0) {
        let elementPrototype = Object.getPrototypeOf(element);
        if (elementPrototype.hasOwnProperty(property)) {
            let descriptor = Object.getOwnPropertyDescriptor(elementPrototype, property);
            Object.defineProperty(element, property, {
                get: function () {
                    return descriptor.get.apply(this, arguments);
                },
                set: function () {
                    let oldValue = this[property];
                    descriptor.set.apply(this, arguments);
                    let newValue = this[property];
                    if (typeof callback == "function") {
                        setTimeout(callback.bind(this, oldValue, newValue), delay);
                    }
                    return newValue;
                },
            });
        }
    }

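    // Hedged usage sketch (added): watching a prompt textarea for programmatic
    // value changes that fire no input event. `promptTextArea` is hypothetical.
    //
    //   TacUtils.observeElement(promptTextArea, "value", (oldValue, newValue) => {
    //       console.log("value changed:", oldValue, "->", newValue);
    //   });
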
    /**
     * Returns a matching sort function based on the current configuration
     * @returns {((a: any, b: any) => number)}
     */
    static getSortFunction() {
        let criterion = TAC.CFG.modelSortOrder || "Name";

        const textSort = (a, b, reverse = false) => {
            // Assign keys so next sort is faster
            if (!a.sortKey) {
                a.sortKey = a.type === TAC.ResultType.chant ? a.aliases : a.text;
            }
            if (!b.sortKey) {
                b.sortKey = b.type === TAC.ResultType.chant ? b.aliases : b.text;
            }

            return reverse
                ? b.sortKey.localeCompare(a.sortKey)
                : a.sortKey.localeCompare(b.sortKey);
        };
        const numericSort = (a, b, reverse = false) => {
            const noKey = reverse ? "-1" : Number.MAX_SAFE_INTEGER;
            let aParsed = parseFloat(a.sortKey || noKey);
            let bParsed = parseFloat(b.sortKey || noKey);

            if (aParsed === bParsed) {
                return textSort(a, b, false);
            }

            return reverse ? bParsed - aParsed : aParsed - bParsed;
        };

        return (a, b) => {
            switch (criterion) {
                case "Date Modified (newest first)":
                    return numericSort(a, b, true);
                case "Date Modified (oldest first)":
                    return numericSort(a, b, false);
                default:
                    return textSort(a, b);
            }
        };
    }

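    // Usage note (added): pass the returned comparator straight to Array.sort,
    // e.g. results.sort(TacUtils.getSortFunction()). The numeric path compares
    // sortKey values, which for the "Date Modified" modes are presumably
    // modification timestamps supplied by the temp file lists.
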
    /**
     * Queue calling function to process global queues
     * @param {Array} queue - The queue to process
     * @param {*} context - The context to call the functions in (null for global)
     * @param {...any} args - Arguments to pass to the functions
     */
    static async processQueue(queue, context, ...args) {
        for (let i = 0; i < queue.length; i++) {
            await queue[i].call(context, ...args);
        }
    }
    /** The same as {@link processQueue}, but can accept and return results from the queued functions. */
    static async processQueueReturn(queue, context, ...args) {
        let queueReturns = [];
        for (let i = 0; i < queue.length; i++) {
            let returnValue = await queue[i].call(context, ...args);
            if (returnValue) queueReturns.push(returnValue);
        }
        return queueReturns;
    }
    /**
     * A queue processing function specific to tag completion parsers
     * @param {HTMLTextAreaElement} textArea - The current text area used by TAC
     * @param {String} prompt - The current prompt
     * @returns The results of the parsers
     */
    static async processParsers(textArea, prompt) {
        // Get all parsers that have a successful trigger condition
        let matchingParsers = TAC.Ext.PARSERS.filter((parser) => parser.triggerCondition());
        // Guard condition
        if (matchingParsers.length === 0) {
            return null;
        }

        let parseFunctions = matchingParsers.map((parser) => parser.parse);
        // Process them and return the results
        return await this.processQueueReturn(parseFunctions, null, textArea, prompt);
    }
};

@@ -1,54 +1,66 @@
(function ChantExtension() {
    const CHANT_REGEX = /<(?!e:|h:|l:)[^,> ]*>?/g;
    const CHANT_TRIGGER = () =>
        TAC.CFG.chantFile && TAC.CFG.chantFile !== "None" && TAC.Globals.tagword.match(CHANT_REGEX);

    class ChantParser extends TAC.BaseTagParser {
        parse() {
            // Show Chant
            let tempResults = [];
            if (TAC.Globals.tagword !== "<" && TAC.Globals.tagword !== "<c:") {
                let searchTerm = TAC.Globals.tagword
                    .replace("<chant:", "")
                    .replace("<c:", "")
                    .replace("<", "");
                let filterCondition = (x) => {
                    let regex = new RegExp(TAC.Utils.escapeRegExp(searchTerm, true), "i");
                    return regex.test(x.terms.toLowerCase()) || regex.test(x.name.toLowerCase());
                };
                tempResults = TAC.Globals.chants.filter((x) => filterCondition(x)); // Filter by tagword
            } else {
                tempResults = TAC.Globals.chants;
            }

            // Add final results
            let finalResults = [];
            tempResults.forEach((t) => {
                let result = new TAC.AutocompleteResult(t.content.trim(), TAC.ResultType.chant);
                result.meta = "Chant";
                result.aliases = t.name;
                result.category = t.color;
                finalResults.push(result);
            });

            return finalResults;
        }
    }

    async function load() {
        if (TAC.CFG.chantFile && TAC.CFG.chantFile !== "None") {
            try {
                TAC.Globals.chants = await TAC.Utils.readFile(
                    `${TAC.Globals.tagBasePath}/${TAC.CFG.chantFile}?`,
                    true
                );
            } catch (e) {
                console.error("Error loading chants.json: " + e);
            }
        } else {
            TAC.Globals.chants = [];
        }
    }

    function sanitize(tagType, text) {
        if (tagType === TAC.ResultType.chant) {
            return text;
        }
        return null;
    }

    TAC.Ext.PARSERS.push(new ChantParser(CHANT_TRIGGER));

    // Add our utility functions to their respective queues
    TAC.Ext.QUEUE_FILE_LOAD.push(load);
    TAC.Ext.QUEUE_SANITIZE.push(sanitize);
    TAC.Ext.QUEUE_AFTER_CONFIG_CHANGE.push(load);
})();

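// Illustration (added, not part of the diff): with the trigger above, typing
// "<c:light" starts chant completion, while the negative lookahead in
// CHANT_REGEX leaves "<e:", "<h:" and "<l:" prefixes to the embedding,
// hypernetwork and lora/lyco parsers.
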
@@ -1,61 +1,85 @@
(function EmbeddingExtension() {
    const EMB_REGEX = /<(?!l:|h:|c:)[^,> ]*>?/g;
    const EMB_TRIGGER = () =>
        TAC.CFG.useEmbeddings &&
        (TAC.Globals.tagword.match(EMB_REGEX) || TAC.CFG.includeEmbeddingsInNormalResults);

    class EmbeddingParser extends TAC.BaseTagParser {
        parse() {
            // Show embeddings
            let tempResults = [];
            if (TAC.Globals.tagword !== "<" && TAC.Globals.tagword !== "<e:") {
                let searchTerm = TAC.Globals.tagword.replace("<e:", "").replace("<", "");
                let versionString;
                if (searchTerm.startsWith("v1") || searchTerm.startsWith("v2")) {
                    versionString = searchTerm.slice(0, 2);
                    searchTerm = searchTerm.slice(2);
                } else if (searchTerm.startsWith("vxl")) {
                    versionString = searchTerm.slice(0, 3);
                    searchTerm = searchTerm.slice(3);
                }

                let filterCondition = (x) => {
                    let regex = new RegExp(TAC.Utils.escapeRegExp(searchTerm, true), "i");
                    return (
                        regex.test(x[0].toLowerCase()) ||
                        regex.test(x[0].toLowerCase().replaceAll(" ", "_"))
                    );
                };

                if (versionString)
                    tempResults = TAC.Globals.embeddings.filter(
                        (x) =>
                            filterCondition(x) &&
                            x[2] &&
                            x[2].toLowerCase() === versionString.toLowerCase()
                    ); // Filter by tagword
                else tempResults = TAC.Globals.embeddings.filter((x) => filterCondition(x)); // Filter by tagword
            } else {
                tempResults = TAC.Globals.embeddings;
            }

            // Add final results
            let finalResults = [];
            tempResults.forEach((t) => {
                let lastDot = t[0].lastIndexOf(".") > -1 ? t[0].lastIndexOf(".") : t[0].length;
                let lastSlash = t[0].lastIndexOf("/") > -1 ? t[0].lastIndexOf("/") : -1;
                let name = t[0].trim().substring(lastSlash + 1, lastDot);

                let result = new TAC.AutocompleteResult(name, TAC.ResultType.embedding);
                result.sortKey = t[1];
                result.meta = t[2] + " Embedding";
                finalResults.push(result);
            });

            return finalResults;
        }
    }

    async function load() {
        if (TAC.Globals.embeddings.length === 0) {
            try {
                TAC.Globals.embeddings = (
                    await TAC.Utils.loadCSV(`${TAC.Globals.tagBasePath}/temp/emb.txt`)
                )
                    .filter((x) => x[0]?.trim().length > 0) // Remove empty lines
                    .map((x) => [x[0].trim(), x[1], x[2]]); // Return name, sortKey, hash tuples
            } catch (e) {
                console.error("Error loading embeddings.txt: " + e);
            }
        }
    }

    function sanitize(tagType, text) {
        if (tagType === TAC.ResultType.embedding) {
            return text;
        }
        return null;
    }

    TAC.Ext.PARSERS.push(new EmbeddingParser(EMB_TRIGGER));

    // Add our utility functions to their respective queues
    TAC.Ext.QUEUE_FILE_LOAD.push(load);
    TAC.Ext.QUEUE_SANITIZE.push(sanitize);
})();

@@ -1,51 +1,69 @@
(function HypernetExtension() {
    const HYP_REGEX = /<(?!e:|l:|c:)[^,> ]*>?/g;
    const HYP_TRIGGER = () => TAC.CFG.useHypernetworks && TAC.Globals.tagword.match(HYP_REGEX);

    class HypernetParser extends TAC.BaseTagParser {
        parse() {
            // Show hypernetworks
            let tempResults = [];
            if (
                TAC.Globals.tagword !== "<" &&
                TAC.Globals.tagword !== "<h:" &&
                TAC.Globals.tagword !== "<hypernet:"
            ) {
                let searchTerm = TAC.Globals.tagword
                    .replace("<hypernet:", "")
                    .replace("<h:", "")
                    .replace("<", "");
                let filterCondition = (x) => {
                    let regex = new RegExp(TAC.Utils.escapeRegExp(searchTerm, true), "i");
                    return (
                        regex.test(x.toLowerCase()) ||
                        regex.test(x.toLowerCase().replaceAll(" ", "_"))
                    );
                };
                tempResults = TAC.Globals.hypernetworks.filter((x) => filterCondition(x[0])); // Filter by tagword
            } else {
                tempResults = TAC.Globals.hypernetworks;
            }

            // Add final results
            let finalResults = [];
            tempResults.forEach((t) => {
                let result = new TAC.AutocompleteResult(t[0].trim(), TAC.ResultType.hypernetwork);
                result.meta = "Hypernetwork";
                result.sortKey = t[1];
                finalResults.push(result);
            });

            return finalResults;
        }
    }

    async function load() {
        if (TAC.Globals.hypernetworks.length === 0) {
            try {
                TAC.Globals.hypernetworks = (
                    await TAC.Utils.loadCSV(`${TAC.Globals.tagBasePath}/temp/hyp.txt`)
                )
                    .filter((x) => x[0]?.trim().length > 0) // Remove empty lines
                    .map((x) => [x[0]?.trim(), x[1]]); // Remove carriage returns and padding if it exists
            } catch (e) {
                console.error("Error loading hypernetworks.txt: " + e);
            }
        }
    }

    function sanitize(tagType, text) {
        if (tagType === TAC.ResultType.hypernetwork) {
            return `<hypernet:${text}:${TAC.CFG.extraNetworksDefaultMultiplier}>`;
        }
        return null;
    }

    TAC.Ext.PARSERS.push(new HypernetParser(HYP_TRIGGER));

    // Add our utility functions to their respective queues
    TAC.Ext.QUEUE_FILE_LOAD.push(load);
    TAC.Ext.QUEUE_SANITIZE.push(sanitize);
})();

@@ -1,63 +1,81 @@
(function LoraExtension() {
    const LORA_REGEX = /<(?!e:|h:|c:)[^,> ]*>?/g;
    const LORA_TRIGGER = () => TAC.CFG.useLoras && TAC.Globals.tagword.match(LORA_REGEX);

    class LoraParser extends TAC.BaseTagParser {
        parse() {
            // Show lora
            let tempResults = [];
            if (
                TAC.Globals.tagword !== "<" &&
                TAC.Globals.tagword !== "<l:" &&
                TAC.Globals.tagword !== "<lora:"
            ) {
                let searchTerm = TAC.Globals.tagword
                    .replace("<lora:", "")
                    .replace("<l:", "")
                    .replace("<", "");
                let filterCondition = (x) => {
                    let regex = new RegExp(TAC.Utils.escapeRegExp(searchTerm, true), "i");
                    return (
                        regex.test(x.toLowerCase()) ||
                        regex.test(x.toLowerCase().replaceAll(" ", "_"))
                    );
                };
                tempResults = TAC.Globals.loras.filter((x) => filterCondition(x[0])); // Filter by tagword
            } else {
                tempResults = TAC.Globals.loras;
            }

            // Add final results
            let finalResults = [];
            tempResults.forEach((t) => {
                const text = t[0].trim();
                let lastDot = text.lastIndexOf(".") > -1 ? text.lastIndexOf(".") : text.length;
                let lastSlash = text.lastIndexOf("/") > -1 ? text.lastIndexOf("/") : -1;
                let name = text.substring(lastSlash + 1, lastDot);

                let result = new TAC.AutocompleteResult(name, TAC.ResultType.lora);
                result.meta = "Lora";
                result.sortKey = t[1];
                result.hash = t[2];
                finalResults.push(result);
            });

            return finalResults;
        }
    }

    async function load() {
        if (TAC.Globals.loras.length === 0) {
            try {
                TAC.Globals.loras = (
                    await TAC.Utils.loadCSV(`${TAC.Globals.tagBasePath}/temp/lora.txt`)
                )
                    .filter((x) => x[0]?.trim().length > 0) // Remove empty lines
                    .map((x) => [x[0]?.trim(), x[1], x[2]]); // Trim filenames and return the name, sortKey, hash pairs
            } catch (e) {
                console.error("Error loading lora.txt: " + e);
            }
        }
    }

    async function sanitize(tagType, text) {
        if (tagType === TAC.ResultType.lora) {
            let multiplier = TAC.CFG.extraNetworksDefaultMultiplier;
            let info = await TAC.Utils.fetchAPI(`tacapi/v1/lora-info/${text}`);
            if (info && info["preferred weight"]) {
                multiplier = info["preferred weight"];
            }

            return `<lora:${text}:${multiplier}>`;
        }
        return null;
    }

    TAC.Ext.PARSERS.push(new LoraParser(LORA_TRIGGER));

    // Add our utility functions to their respective queues
    TAC.Ext.QUEUE_FILE_LOAD.push(load);
    TAC.Ext.QUEUE_SANITIZE.push(sanitize);
})();

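// Hedged example (added): for a hypothetical file "sub/myLora.safetensors",
// parse() strips path and extension to offer "myLora", and sanitize() expands
// the accepted result to "<lora:myLora:0.8>" if the tacapi lora-info endpoint
// reports a preferred weight of 0.8, falling back to
// TAC.CFG.extraNetworksDefaultMultiplier otherwise.
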
@@ -1,63 +1,84 @@
(function LycoExtension() {
    const LYCO_REGEX = /<(?!e:|h:|c:)[^,> ]*>?/g;
    const LYCO_TRIGGER = () => TAC.CFG.useLycos && TAC.Globals.tagword.match(LYCO_REGEX);

    class LycoParser extends TAC.BaseTagParser {
        parse() {
            // Show lyco
            let tempResults = [];
            if (
                TAC.Globals.tagword !== "<" &&
                TAC.Globals.tagword !== "<l:" &&
                TAC.Globals.tagword !== "<lyco:" &&
                TAC.Globals.tagword !== "<lora:"
            ) {
                let searchTerm = TAC.Globals.tagword
                    .replace("<lyco:", "")
                    .replace("<lora:", "")
                    .replace("<l:", "")
                    .replace("<", "");
                let filterCondition = (x) => {
                    let regex = new RegExp(TAC.Utils.escapeRegExp(searchTerm, true), "i");
                    return (
                        regex.test(x.toLowerCase()) ||
                        regex.test(x.toLowerCase().replaceAll(" ", "_"))
                    );
                };
                tempResults = TAC.Globals.lycos.filter((x) => filterCondition(x[0])); // Filter by tagword
            } else {
                tempResults = TAC.Globals.lycos;
            }

            // Add final results
            let finalResults = [];
            tempResults.forEach((t) => {
                const text = t[0].trim();
                let lastDot = text.lastIndexOf(".") > -1 ? text.lastIndexOf(".") : text.length;
                let lastSlash = text.lastIndexOf("/") > -1 ? text.lastIndexOf("/") : -1;
                let name = text.substring(lastSlash + 1, lastDot);

                let result = new TAC.AutocompleteResult(name, TAC.ResultType.lyco);
                result.meta = "Lyco";
                result.sortKey = t[1];
                result.hash = t[2];
                finalResults.push(result);
            });

            return finalResults;
        }
    }

    async function load() {
        if (TAC.Globals.lycos.length === 0) {
            try {
                TAC.Globals.lycos = (
                    await TAC.Utils.loadCSV(`${TAC.Globals.tagBasePath}/temp/lyco.txt`)
                )
                    .filter((x) => x[0]?.trim().length > 0) // Remove empty lines
                    .map((x) => [x[0]?.trim(), x[1], x[2]]); // Trim filenames and return the name, sortKey, hash pairs
            } catch (e) {
                console.error("Error loading lyco.txt: " + e);
            }
        }
    }

    async function sanitize(tagType, text) {
        if (tagType === TAC.ResultType.lyco) {
            let multiplier = TAC.CFG.extraNetworksDefaultMultiplier;
            let info = await TAC.Utils.fetchAPI(`tacapi/v1/lyco-info/${text}`);
            if (info && info["preferred weight"]) {
                multiplier = info["preferred weight"];
            }

            let prefix = TAC.CFG.useLoraPrefixForLycos ? "lora" : "lyco";
            return `<${prefix}:${text}:${multiplier}>`;
        }
        return null;
    }

    TAC.Ext.PARSERS.push(new LycoParser(LYCO_TRIGGER));

    // Add our utility functions to their respective queues
    TAC.Ext.QUEUE_FILE_LOAD.push(load);
    TAC.Ext.QUEUE_SANITIZE.push(sanitize);
})();

@@ -1,42 +1,56 @@
(function ModelKeywordExtension() {
    async function load() {
        let modelKeywordParts = (await TAC.Utils.readFile(`tmp/modelKeywordPath.txt`)).split(",");
        TAC.Globals.modelKeywordPath = modelKeywordParts[0];
        let customFileExists = modelKeywordParts[1] === "True";

        if (TAC.Globals.modelKeywordPath.length > 0 && TAC.Globals.modelKeywordDict.size === 0) {
            try {
                let csv_lines = [];
                // Only add default keywords if wanted by the user
                if (TAC.CFG.modelKeywordCompletion !== "Only user list")
                    csv_lines = await TAC.Utils.loadCSV(
                        `${TAC.Globals.modelKeywordPath}/lora-keyword.txt`
                    );
                // Add custom user keywords if the file exists
                if (customFileExists)
                    csv_lines = csv_lines.concat(
                        await TAC.Utils.loadCSV(
                            `${TAC.Globals.modelKeywordPath}/lora-keyword-user.txt`
                        )
                    );

                if (csv_lines.length === 0) return;

                csv_lines = csv_lines.filter(
                    (x) => x[0].trim().length > 0 && x[0].trim()[0] !== "#"
                ); // Remove empty lines and comments

                // Add to the dict
                csv_lines.forEach((parts) => {
                    const hash = parts[0];
                    const keywords = parts[1]
                        ?.replaceAll("| ", ", ")
                        ?.replaceAll("|", ", ")
                        ?.trim();
                    const lastSepIndex =
                        parts[2]?.lastIndexOf("/") + 1 || parts[2]?.lastIndexOf("\\") + 1 || 0;
                    const name = parts[2]?.substring(lastSepIndex).trim() || "none";

                    if (TAC.Globals.modelKeywordDict.has(hash) && name !== "none") {
                        // Add a new name key if the hash already exists
                        TAC.Globals.modelKeywordDict.get(hash).set(name, keywords);
                    } else {
                        // Create new hash entry
                        let map = new Map().set(name, keywords);
                        TAC.Globals.modelKeywordDict.set(hash, map);
                    }
                });
            } catch (e) {
                console.error("Error loading model-keywords list: " + e);
            }
        }
    }

    TAC.Ext.QUEUE_FILE_LOAD.push(load);
})();

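// Illustration (added, values hypothetical): after load(), the dict maps a model
// hash to a Map of file names to keyword strings:
//
//   TAC.Globals.modelKeywordDict.get("aabbccdd");
//   // -> Map { "myLora.safetensors" => "trigger word, other word" }
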
77 javascript/ext_styles.js Normal file
@@ -0,0 +1,77 @@
(function StyleExtension() {
    const STYLE_REGEX = /(\$(\d*)\(?)[^$|\[\],\s]*\)?/;
    const STYLE_TRIGGER = () => TAC.CFG.useStyleVars && TAC.Globals.tagword.match(STYLE_REGEX);

    var lastStyleVarIndex = "";

    class StyleParser extends TAC.BaseTagParser {
        async parse() {
            // Refresh if needed
            await TAC.Utils.refreshStyleNamesIfChanged();

            // Show styles
            let tempResults = [];
            let matchGroups = TAC.Globals.tagword.match(STYLE_REGEX);

            // Save index to insert again later or clear last one
            lastStyleVarIndex = matchGroups[2] ? matchGroups[2] : "";

            if (TAC.Globals.tagword !== matchGroups[1]) {
                let searchTerm = TAC.Globals.tagword.replace(matchGroups[1], "");

                let filterCondition = (x) => {
                    let regex = new RegExp(TAC.Utils.escapeRegExp(searchTerm, true), "i");
                    return (
                        regex.test(x[0].toLowerCase()) ||
                        regex.test(x[0].toLowerCase().replaceAll(" ", "_"))
                    );
                };
                tempResults = TAC.Globals.styleNames.filter((x) => filterCondition(x)); // Filter by tagword
            } else {
                tempResults = TAC.Globals.styleNames;
            }

            // Add final results
            let finalResults = [];
            tempResults.forEach((t) => {
                let result = new TAC.AutocompleteResult(t[0].trim(), TAC.ResultType.styleName);
                result.meta = "Style";
                finalResults.push(result);
            });

            return finalResults;
        }
    }

    async function load(force = false) {
        if (TAC.Globals.styleNames.length === 0 || force) {
            try {
                TAC.Globals.styleNames = (
                    await TAC.Utils.loadCSV(`${TAC.Globals.tagBasePath}/temp/styles.txt`)
                )
                    .filter((x) => x[0]?.trim().length > 0) // Remove empty lines
                    .filter((x) => x[0] !== "None") // Remove "None" style
                    .map((x) => [x[0].trim()]); // Trim name
            } catch (e) {
                console.error("Error loading styles.txt: " + e);
            }
        }
    }

    function sanitize(tagType, text) {
        if (tagType === TAC.ResultType.styleName) {
            if (text.includes(" ")) {
                return `$${lastStyleVarIndex}(${text})`;
            } else {
                return `$${lastStyleVarIndex}${text}`;
            }
        }
        return null;
    }

    TAC.Ext.PARSERS.push(new StyleParser(STYLE_TRIGGER));

    // Add our utility functions to their respective queues
    TAC.Ext.QUEUE_FILE_LOAD.push(load);
    TAC.Ext.QUEUE_SANITIZE.push(sanitize);
})();

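// Illustration (added): with lastStyleVarIndex = "2", accepting the style
// "cinematic light" inserts "$2(cinematic light)", while the single-word style
// "cinematic" inserts "$2cinematic".
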
@@ -1,240 +1,279 @@
|
||||
const UMI_PROMPT_REGEX = /<[^\s]*?\[[^,<>]*[\]|]?>?/gi;
|
||||
const UMI_TAG_REGEX = /(?:\[|\||--)([^<>\[\]\-|]+)/gi;
|
||||
(function UmiExtension() {
|
||||
const UMI_PROMPT_REGEX = /<[^\s]*?\[[^,<>]*[\]|]?>?/gi;
|
||||
const UMI_TAG_REGEX = /(?:\[|\||--)([^<>\[\]\-|]+)/gi;
|
||||
|
||||
const UMI_TRIGGER = () => TAC_CFG.useWildcards && [...tagword.matchAll(UMI_PROMPT_REGEX)].length > 0;
|
||||
const UMI_TRIGGER = () =>
|
||||
TAC.CFG.useWildcards && [...TAC.Globals.tagword.matchAll(UMI_PROMPT_REGEX)].length > 0;
|
||||
|
||||
class UmiParser extends BaseTagParser {
|
||||
parse(textArea, prompt) {
|
||||
// We are in a UMI yaml tag definition, parse further
|
||||
let umiSubPrompts = [...prompt.matchAll(UMI_PROMPT_REGEX)];
|
||||
class UmiParser extends TAC.BaseTagParser {
|
||||
parse(textArea, prompt) {
|
||||
// We are in a UMI yaml tag definition, parse further
|
||||
let umiSubPrompts = [...prompt.matchAll(UMI_PROMPT_REGEX)];
|
||||
|
||||
let umiTags = [];
|
||||
let umiTagsWithOperators = []
|
||||
let umiTags = [];
|
||||
let umiTagsWithOperators = [];
|
||||
|
||||
const insertAt = (str,char,pos) => str.slice(0,pos) + char + str.slice(pos);
|
||||
const insertAt = (str, char, pos) => str.slice(0, pos) + char + str.slice(pos);
|
||||
|
||||
umiSubPrompts.forEach(umiSubPrompt => {
|
||||
umiTags = umiTags.concat([...umiSubPrompt[0].matchAll(UMI_TAG_REGEX)].map(x => x[1].toLowerCase()));
|
||||
|
||||
const start = umiSubPrompt.index;
|
||||
const end = umiSubPrompt.index + umiSubPrompt[0].length;
|
||||
if (textArea.selectionStart >= start && textArea.selectionStart <= end) {
|
||||
umiTagsWithOperators = insertAt(umiSubPrompt[0], '###', textArea.selectionStart - start);
|
||||
}
|
||||
});
|
||||
|
||||
// Safety check since UMI parsing sometimes seems to trigger outside of an UMI subprompt and thus fails
|
||||
if (umiTagsWithOperators.length === 0) {
|
||||
return null;
|
||||
}
|
||||
|
||||
const promptSplitToTags = umiTagsWithOperators.replace(']###[', '][').split("][");
|
||||
|
||||
const clean = (str) => str
|
||||
.replaceAll('>', '')
|
||||
.replaceAll('<', '')
|
||||
.replaceAll('[', '')
|
||||
.replaceAll(']', '')
|
||||
.trim();
|
||||
|
||||
const matches = promptSplitToTags.reduce((acc, curr) => {
|
||||
let isOptional = curr.includes("|");
|
||||
let isNegative = curr.startsWith("--");
|
||||
let out;
|
||||
if (isOptional) {
|
||||
out = {
|
||||
hasCursor: curr.includes("###"),
|
||||
tags: clean(curr).split('|').map(x => ({
|
||||
hasCursor: x.includes("###"),
|
||||
isNegative: x.startsWith("--"),
|
||||
tag: clean(x).replaceAll("###", '').replaceAll("--", '')
|
||||
}))
|
||||
};
|
||||
acc.optional.push(out);
|
||||
acc.all.push(...out.tags.map(x => x.tag));
|
||||
} else if (isNegative) {
|
||||
out = {
|
||||
hasCursor: curr.includes("###"),
|
||||
tags: clean(curr).replaceAll("###", '').split('|'),
|
||||
};
|
||||
out.tags = out.tags.map(x => x.startsWith("--") ? x.substring(2) : x);
|
||||
acc.negative.push(out);
|
||||
acc.all.push(...out.tags);
|
||||
} else {
|
||||
out = {
|
||||
hasCursor: curr.includes("###"),
|
||||
tags: clean(curr).replaceAll("###", '').split('|'),
|
||||
};
|
||||
acc.positive.push(out);
|
||||
acc.all.push(...out.tags);
|
||||
}
|
||||
return acc;
|
||||
}, { positive: [], negative: [], optional: [], all: [] });
|
||||
|
||||
//console.log({ matches })
|
||||
|
||||
    const filteredWildcards = (tagword) => {
        const wildcards = umiWildcards.filter(x => {
            let tags = x[1];
            const matchesNeg =
                matches.negative.length === 0
                || matches.negative.every(x =>
                    x.hasCursor
                    || x.tags.every(t => !tags[t])
                );
            if (!matchesNeg) return false;

            const matchesPos =
                matches.positive.length === 0
                || matches.positive.every(x =>
                    x.hasCursor
                    || x.tags.every(t => tags[t])
                );
            if (!matchesPos) return false;

            const matchesOpt =
                matches.optional.length === 0
                || matches.optional.some(x =>
                    x.tags.some(t =>
                        t.hasCursor
                        || t.isNegative
                            ? !tags[t.tag]
                            : tags[t.tag]
                    ));
            if (!matchesOpt) return false;

            return true;
        }).reduce((acc, val) => {
            Object.keys(val[1]).forEach(tag => acc[tag] = acc[tag] + 1 || 1);
            return acc;
        }, {});

        return Object.entries(wildcards)
            .sort((a, b) => b[1] - a[1])
            .filter(x =>
                x[0] === tagword
                || !matches.all.includes(x[0])
            );
    };

    umiSubPrompts.forEach((umiSubPrompt) => {
        umiTags = umiTags.concat(
            [...umiSubPrompt[0].matchAll(UMI_TAG_REGEX)].map((x) => x[1].toLowerCase())
        );

        const start = umiSubPrompt.index;
        const end = umiSubPrompt.index + umiSubPrompt[0].length;
        if (textArea.selectionStart >= start && textArea.selectionStart <= end) {
            umiTagsWithOperators = insertAt(
                umiSubPrompt[0],
                "###",
                textArea.selectionStart - start
            );
        }
    });

    if (umiTags.length > 0) {
        // Get difference for subprompt
        let tagCountChange = umiTags.length - umiPreviousTags.length;
        let diff = difference(umiTags, umiPreviousTags);
        umiPreviousTags = umiTags;

        // Show all condition
        let showAll = tagword.endsWith("[") || tagword.endsWith("[--") || tagword.endsWith("|");

        // Exit early if the user closed the bracket manually
        if ((!diff || diff.length === 0 || (diff.length === 1 && tagCountChange < 0)) && !showAll) {
            if (!hideBlocked) hideResults(textArea);
            return;
        }

        // Safety check since UMI parsing sometimes seems to trigger outside of an UMI subprompt and thus fails
        if (umiTagsWithOperators.length === 0) {
            return null;
        }

        let umiTagword = diff[0] || '';
        let tempResults = [];
        if (umiTagword && umiTagword.length > 0) {
            umiTagword = umiTagword.toLowerCase().replace(/[\n\r]/g, "");
            originalTagword = tagword;
            tagword = umiTagword;
            let filteredWildcardsSorted = filteredWildcards(umiTagword);
            let searchRegex = new RegExp(`(^|[^a-zA-Z])${escapeRegExp(umiTagword)}`, 'i')
            let baseFilter = x => x[0].toLowerCase().search(searchRegex) > -1;
            let spaceIncludeFilter = x => x[0].toLowerCase().replaceAll(" ", "_").search(searchRegex) > -1;
            tempResults = filteredWildcardsSorted.filter(x => baseFilter(x) || spaceIncludeFilter(x)) // Filter by tagword

            const promptSplitToTags = umiTagsWithOperators.replace("]###[", "][").split("][");

            // Add final results
            let finalResults = [];
            tempResults.forEach(t => {
                let result = new AutocompleteResult(t[0].trim(), ResultType.umiWildcard)
                result.count = t[1];
                finalResults.push(result);
            });

            const clean = (str) =>
                str
                    .replaceAll(">", "")
                    .replaceAll("<", "")
                    .replaceAll("[", "")
                    .replaceAll("]", "")
                    .trim();

            return finalResults;
        } else if (showAll) {
            const matches = promptSplitToTags.reduce(
                (acc, curr) => {
                    let isOptional = curr.includes("|");
                    let isNegative = curr.startsWith("--");
                    let out;
                    if (isOptional) {
                        out = {
                            hasCursor: curr.includes("###"),
                            tags: clean(curr)
                                .split("|")
                                .map((x) => ({
                                    hasCursor: x.includes("###"),
                                    isNegative: x.startsWith("--"),
                                    tag: clean(x).replaceAll("###", "").replaceAll("--", ""),
                                })),
                        };
                        acc.optional.push(out);
                        acc.all.push(...out.tags.map((x) => x.tag));
                    } else if (isNegative) {
                        out = {
                            hasCursor: curr.includes("###"),
                            tags: clean(curr).replaceAll("###", "").split("|"),
                        };
                        out.tags = out.tags.map((x) => (x.startsWith("--") ? x.substring(2) : x));
                        acc.negative.push(out);
                        acc.all.push(...out.tags);
                    } else {
                        out = {
                            hasCursor: curr.includes("###"),
                            tags: clean(curr).replaceAll("###", "").split("|"),
                        };
                        acc.positive.push(out);
                        acc.all.push(...out.tags);
                    }
                    return acc;
                },
                { positive: [], negative: [], optional: [], all: [] }
            );

            //console.log({ matches })

            const filteredWildcards = (tagword) => {
                const wildcards = TAC.Globals.umiWildcards
                    .filter((x) => {
                        let tags = x[1];
                        const matchesNeg =
                            matches.negative.length === 0 ||
                            matches.negative.every(
                                (x) => x.hasCursor || x.tags.every((t) => !tags[t])
                            );
                        if (!matchesNeg) return false;
                        const matchesPos =
                            matches.positive.length === 0 ||
                            matches.positive.every(
                                (x) => x.hasCursor || x.tags.every((t) => tags[t])
                            );
                        if (!matchesPos) return false;
                        const matchesOpt =
                            matches.optional.length === 0 ||
                            matches.optional.some((x) =>
                                x.tags.some((t) =>
                                    t.hasCursor || t.isNegative ? !tags[t.tag] : tags[t.tag]
                                )
                            );
                        if (!matchesOpt) return false;
                        return true;
                    })
                    .reduce((acc, val) => {
                        Object.keys(val[1]).forEach((tag) => (acc[tag] = acc[tag] + 1 || 1));
                        return acc;
                    }, {});

                return Object.entries(wildcards)
                    .sort((a, b) => b[1] - a[1])
                    .filter((x) => x[0] === tagword || !matches.all.includes(x[0]));
            };
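            // Illustrative sketch, not from the source file: for a hypothetical UMI subprompt
            // "<[artist][--bad anatomy][day|night###]>" (with "###" marking the cursor position),
            // the reduce above yields roughly:
            //   matches = {
            //       positive: [{ hasCursor: false, tags: ["artist"] }],
            //       negative: [{ hasCursor: false, tags: ["bad anatomy"] }],
            //       optional: [{ hasCursor: true, tags: [
            //           { hasCursor: false, isNegative: false, tag: "day" },
            //           { hasCursor: true, isNegative: false, tag: "night" },
            //       ] }],
            //       all: ["artist", "bad anatomy", "day", "night"],
            //   }
            // filteredWildcards() then keeps only wildcards whose tag sets satisfy every positive
            // and negative group, tallies how often each tag occurs, and sorts by that count.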

            if (umiTags.length > 0) {
                // Get difference for subprompt
                let tagCountChange = umiTags.length - TAC.Globals.umiPreviousTags.length;
                let diff = TAC.Utils.difference(umiTags, TAC.Globals.umiPreviousTags);
                TAC.Globals.umiPreviousTags = umiTags;

                // Show all condition
                let showAll =
                    TAC.Globals.tagword.endsWith("[") ||
                    TAC.Globals.tagword.endsWith("[--") ||
                    TAC.Globals.tagword.endsWith("|");

                // Exit early if the user closed the bracket manually
                if (
                    (!diff || diff.length === 0 || (diff.length === 1 && tagCountChange < 0)) &&
                    !showAll
                ) {
                    if (!TAC.Globals.hideBlocked) hideResults(textArea);
                    return;
                }

                let umiTagword = tagCountChange < 0 ? "" : diff[0] || "";
                let tempResults = [];
                if (umiTagword && umiTagword.length > 0) {
                    umiTagword = umiTagword.toLowerCase().replace(/[\n\r]/g, "");
                    TAC.Globals.originalTagword = TAC.Globals.tagword;
                    TAC.Globals.tagword = umiTagword;
                    let filteredWildcardsSorted = filteredWildcards(umiTagword);
                    let searchRegex = new RegExp(
                        `(^|[^a-zA-Z])${TAC.Utils.escapeRegExp(umiTagword)}`,
                        "i"
                    );
                    let baseFilter = (x) => x[0].toLowerCase().search(searchRegex) > -1;
                    let spaceIncludeFilter = (x) =>
                        x[0].toLowerCase().replaceAll(" ", "_").search(searchRegex) > -1;
                    tempResults = filteredWildcardsSorted.filter(
                        (x) => baseFilter(x) || spaceIncludeFilter(x)
                    ); // Filter by tagword

                    // Add final results
                    let finalResults = [];
                    tempResults.forEach((t) => {
                        let result = new TAC.AutocompleteResult(
                            t[0].trim(),
                            TAC.ResultType.umiWildcard
                        );
                        result.count = t[1];
                        finalResults.push(result);
                    });

                    finalResults = finalResults.sort((a, b) => b.count - a.count);
                    return finalResults;
                } else if (showAll) {
                    let filteredWildcardsSorted = filteredWildcards("");

                    // Add final results
                    let finalResults = [];
                    filteredWildcardsSorted.forEach((t) => {
                        let result = new TAC.AutocompleteResult(
                            t[0].trim(),
                            TAC.ResultType.umiWildcard
                        );
                        result.count = t[1];
                        finalResults.push(result);
                    });

                    TAC.Globals.originalTagword = TAC.Globals.tagword;
                    TAC.Globals.tagword = "";

                    finalResults = finalResults.sort((a, b) => b.count - a.count);
                    return finalResults;
                }
            }
        } else {
            let filteredWildcardsSorted = filteredWildcards("");

            // Add final results
            let finalResults = [];
            filteredWildcardsSorted.forEach(t => {
                let result = new AutocompleteResult(t[0].trim(), ResultType.umiWildcard)
                result.count = t[1];
                finalResults.push(result);
            });

            originalTagword = tagword;
            tagword = "";
            return finalResults;
        }

            } else {
                let filteredWildcardsSorted = filteredWildcards("");

                // Add final results
                let finalResults = [];
                filteredWildcardsSorted.forEach((t) => {
                    let result = new TAC.AutocompleteResult(
                        t[0].trim(),
                        TAC.ResultType.umiWildcard
                    );
                    result.count = t[1];
                    finalResults.push(result);
                });

                TAC.Globals.originalTagword = TAC.Globals.tagword;
                TAC.Globals.tagword = "";

                finalResults = finalResults.sort((a, b) => b.count - a.count);
                return finalResults;
            }
        }

    function updateUmiTags(tagType, sanitizedText, newPrompt, textArea) {
        // If it was a umi wildcard, also update the TAC.Globals.umiPreviousTags
        if (tagType === TAC.ResultType.umiWildcard && TAC.Globals.originalTagword.length > 0) {
            let umiSubPrompts = [...newPrompt.matchAll(UMI_PROMPT_REGEX)];

            let umiTags = [];
            umiSubPrompts.forEach((umiSubPrompt) => {
                umiTags = umiTags.concat(
                    [...umiSubPrompt[0].matchAll(UMI_TAG_REGEX)].map((x) => x[1].toLowerCase())
                );
            });

            TAC.Globals.umiPreviousTags = umiTags;

            hideResults(textArea);

            return true;
        }
        return false;
    }

    async function load() {
        if (TAC.Globals.umiWildcards.length === 0) {
            try {
                let umiTags = (
                    await TAC.Utils.readFile(`${TAC.Globals.tagBasePath}/temp/umi_tags.txt`)
                ).split("\n");
                // Split into tag, count pairs
                TAC.Globals.umiWildcards = umiTags
                    .map((x) => x.trim().split(","))
                    .map(([i, ...rest]) => [
                        i,
                        rest.reduce((a, b) => {
                            a[b.toLowerCase()] = true;
                            return a;
                        }, {}),
                    ]);
            } catch (e) {
                console.error("Error loading umi wildcards: " + e);
            }
        }
    }
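    // Illustrative sketch (assumed data, not from the source): umi_tags.txt holds one
    // wildcard per line as "name,tag1,tag2,...". A hypothetical line such as
    //     landscape,scenery,outdoors,sky
    // is mapped by load() above to the pair
    //     ["landscape", { scenery: true, outdoors: true, sky: true }]
    // so that filteredWildcards() can test tag membership with a simple object lookup.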
    }

    function updateUmiTags(tagType, sanitizedText, newPrompt, textArea) {
        // If it was a umi wildcard, also update the umiPreviousTags
        if (tagType === ResultType.umiWildcard && originalTagword.length > 0) {
            let umiSubPrompts = [...newPrompt.matchAll(UMI_PROMPT_REGEX)];

            let umiTags = [];
            umiSubPrompts.forEach(umiSubPrompt => {
                umiTags = umiTags.concat([...umiSubPrompt[0].matchAll(UMI_TAG_REGEX)].map(x => x[1].toLowerCase()));
            });

            umiPreviousTags = umiTags;

            hideResults(textArea);

            return true;
        }
        return false;
    }

    async function load() {
        if (umiWildcards.length === 0) {
            try {
                let umiTags = (await readFile(`${tagBasePath}/temp/umi_tags.txt`)).split("\n");
                // Split into tag, count pairs
                umiWildcards = umiTags.map(x => x
                    .trim()
                    .split(","))
                    .map(([i, ...rest]) => [
                        i,
                        rest.reduce((a, b) => {
                            a[b.toLowerCase()] = true;
                            return a;
                        }, {}),
                    ]);
            } catch (e) {
                console.error("Error loading umi wildcards: " + e);
            }
        }
    }

    function sanitize(tagType, text) {
        // Replace underscores only if the umi tag is not using them
        if (tagType === TAC.ResultType.umiWildcard && !TAC.Globals.umiWildcards.includes(text)) {
            return text.replaceAll("_", " ");
        }
        return null;
    }

    function sanitize(tagType, text) {
        // Replace underscores only if the umi tag is not using them
        if (tagType === ResultType.umiWildcard && !umiWildcards.includes(text)) {
            return text.replaceAll("_", " ");
        }
        return null;
    }

    // Add UMI parser
    TAC.Ext.PARSERS.push(new UmiParser(UMI_TRIGGER));

    // Add UMI parser
    PARSERS.push(new UmiParser(UMI_TRIGGER));

    // Add our utility functions to their respective queues
    QUEUE_FILE_LOAD.push(load);
    QUEUE_SANITIZE.push(sanitize);
    QUEUE_AFTER_INSERT.push(updateUmiTags);

    // Add our utility functions to their respective queues
    TAC.Ext.QUEUE_FILE_LOAD.push(load);
    TAC.Ext.QUEUE_SANITIZE.push(sanitize);
    TAC.Ext.QUEUE_AFTER_INSERT.push(updateUmiTags);
})();

@@ -1,175 +1,232 @@
// Regex
const WC_REGEX = /\b__([^,]+)__([^, ]*)\b/g;
(function WildcardExtension() {
    // Regex
    const WC_REGEX = new RegExp(/__([^,]+)__([^, ]*)/g);

    // Trigger conditions
    const WC_TRIGGER = () => TAC_CFG.useWildcards && [...tagword.matchAll(WC_REGEX)].length > 0;
    const WC_FILE_TRIGGER = () => TAC_CFG.useWildcards && (tagword.startsWith("__") && !tagword.endsWith("__") || tagword === "__");
    // Trigger conditions
    const WC_TRIGGER = () =>
        TAC.CFG.useWildcards &&
        [
            ...TAC.Globals.tagword.matchAll(
                new RegExp(
                    WC_REGEX.source.replaceAll("__", TAC.Utils.escapeRegExp(TAC.CFG.wcWrap)),
                    "g"
                )
            ),
        ].length > 0;
    const WC_FILE_TRIGGER = () =>
        TAC.CFG.useWildcards &&
        ((TAC.Globals.tagword.startsWith(TAC.CFG.wcWrap) &&
            !TAC.Globals.tagword.endsWith(TAC.CFG.wcWrap)) ||
            TAC.Globals.tagword === TAC.CFG.wcWrap);
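    // Hedged example of the refactored triggers (assuming TAC.CFG.wcWrap keeps the
    // default "__"): WC_TRIGGER fires for a tagword like "__colors__blu", because the
    // rebuilt regex /__([^,]+)__([^, ]*)/ matches with group 1 = "colors" (the file)
    // and group 2 = "blu" (the partial word). WC_FILE_TRIGGER instead fires while the
    // wrap is still open, e.g. "__col", to suggest wildcard file names rather than
    // the contents of a specific file.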

    class WildcardParser extends BaseTagParser {
        async parse() {
            // Show wildcards from a file with that name
            let wcMatch = [...tagword.matchAll(WC_REGEX)]
            let wcFile = wcMatch[0][1];
            let wcWord = wcMatch[0][2];
    class WildcardParser extends TAC.BaseTagParser {
        async parse() {
            // Show wildcards from a file with that name
            let wcMatch = [
                ...TAC.Globals.tagword.matchAll(
                    new RegExp(
                        WC_REGEX.source.replaceAll("__", TAC.Utils.escapeRegExp(TAC.CFG.wcWrap)),
                        "g"
                    )
                ),
            ];
            let wcFile = wcMatch[0][1];
            let wcWord = wcMatch[0][2];

            // Look in normal wildcard files
            let wcFound = wildcardFiles.filter(x => x[1].toLowerCase() === wcFile);
            if (wcFound.length === 0) wcFound = null;
            // Use found wildcard file or look in external wildcard files
            let wcPairs = wcFound || wildcardExtFiles.filter(x => x[1].toLowerCase() === wcFile);
            // Look in normal wildcard files
            let wcFound = TAC.Globals.wildcardFiles.filter((x) => x[1].toLowerCase() === wcFile);
            if (wcFound.length === 0) wcFound = null;
            // Use found wildcard file or look in external wildcard files
            let wcPairs =
                wcFound ||
                TAC.Globals.wildcardExtFiles.filter((x) => x[1].toLowerCase() === wcFile);

            if (!wcPairs) return [];

            let wildcards = [];
            for (let i = 0; i < wcPairs.length; i++) {
                const basePath = wcPairs[i][0];
                const fileName = wcPairs[i][1];
                if (!basePath || !fileName) return;

                // YAML wildcards are already loaded as json, so we can get the values directly.
                // basePath is the name of the file in this case, and fileName the key
                if (basePath.endsWith(".yaml")) {
                    const getDescendantProp = (obj, desc) => {
                        const arr = desc.split("/");
                        while (arr.length) {
                            obj = obj[arr.shift()];
                        }
                        return obj;
                    }
                    wildcards = wildcards.concat(getDescendantProp(yamlWildcards[basePath], fileName));
                } else {
                    const fileContent = (await fetchAPI(`tacapi/v1/wildcard-contents?basepath=${basePath}&filename=${fileName}.txt`, false))
                        .split("\n")
                        .filter(x => x.trim().length > 0 && !x.startsWith('#')); // Remove empty lines and comments
                    wildcards = wildcards.concat(fileContent);
                }
            }

            if (TAC_CFG.sortWildcardResults)
                wildcards.sort((a, b) => a.localeCompare(b));

            let finalResults = [];
            let tempResults = wildcards.filter(x => (wcWord !== null && wcWord.length > 0) ? x.toLowerCase().includes(wcWord) : x) // Filter by tagword
            tempResults.forEach(t => {
                let result = new AutocompleteResult(t.trim(), ResultType.wildcardTag);
                result.meta = wcFile;
                finalResults.push(result);
            });

            return finalResults;
        }
    }

    class WildcardFileParser extends BaseTagParser {
        parse() {
            // Show available wildcard files
            let tempResults = [];
            if (tagword !== "__") {
                let lmb = (x) => x[1].toLowerCase().includes(tagword.replace("__", ""))
                tempResults = wildcardFiles.filter(lmb).concat(wildcardExtFiles.filter(lmb)) // Filter by tagword
            } else {
                tempResults = wildcardFiles.concat(wildcardExtFiles);
            }

            let finalResults = [];
            const alreadyAdded = new Map();
            // Get final results
            tempResults.forEach(wcFile => {
                // Skip duplicate entries incase multiple files have the same name or yaml category
                if (alreadyAdded.has(wcFile[1])) return;

                let result = null;
                if (wcFile[0].endsWith(".yaml")) {
                    result = new AutocompleteResult(wcFile[1].trim(), ResultType.yamlWildcard);
                    result.meta = "YAML wildcard collection";
                } else {
                    result = new AutocompleteResult(wcFile[1].trim(), ResultType.wildcardFile);
                    result.meta = "Wildcard file";
                }

                finalResults.push(result);
                alreadyAdded.set(wcFile[1], true);
            });

            finalResults.sort((a, b) => a.text.localeCompare(b.text));

            return finalResults;
        }
    }

    async function load() {
        if (wildcardFiles.length === 0 && wildcardExtFiles.length === 0) {
            try {
                let wcFileArr = (await readFile(`${tagBasePath}/temp/wc.txt`)).split("\n");
                let wcBasePath = wcFileArr[0].trim(); // First line should be the base path
                wildcardFiles = wcFileArr.slice(1)
                    .filter(x => x.trim().length > 0) // Remove empty lines
                    .map(x => [wcBasePath, x.trim().replace(".txt", "")]); // Remove file extension & newlines

                // To support multiple sources, we need to separate them using the provided "-----" strings
                let wcExtFileArr = (await readFile(`${tagBasePath}/temp/wce.txt`)).split("\n");
                let splitIndices = [];
                for (let index = 0; index < wcExtFileArr.length; index++) {
                    if (wcExtFileArr[index].trim() === "-----") {
                        splitIndices.push(index);
                    }
                }

            // YAML wildcards are already loaded as json, so we can get the values directly.
            // basePath is the name of the file in this case, and fileName the key
            if (basePath.endsWith(".yaml")) {
                const getDescendantProp = (obj, desc) => {
                    const arr = desc.split("/");
                    while (arr.length) {
                        obj = obj[arr.shift()];
                    }
                    return obj;
                };
                wildcards = wildcards.concat(
                    getDescendantProp(TAC.Globals.yamlWildcards[basePath], fileName)
                );
            } else {
                const fileContent = (
                    await TAC.Utils.fetchAPI(
                        `tacapi/v1/wildcard-contents?basepath=${basePath}&filename=${fileName}.txt`,
                        false
                    )
                )
                    .split("\n")
                    .filter((x) => x.trim().length > 0 && !x.startsWith("#")); // Remove empty lines and comments
                wildcards = wildcards.concat(fileContent);
            }
            }
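            // Illustrative sketch (hypothetical data): for a YAML collection loaded as
            //     TAC.Globals.yamlWildcards["my.yaml"] = { animals: { cats: ["tabby", "calico"] } }
            // a completion for the key "animals/cats" resolves via getDescendantProp above:
            //     getDescendantProp({ animals: { cats: [...] } }, "animals/cats") // -> ["tabby", "calico"]
            // which is why YAML entries need no extra file fetch, unlike the .txt branch.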
            if (TAC.CFG.sortWildcardResults) wildcards.sort((a, b) => a.localeCompare(b));

                // For each group, add them to the wildcardFiles array with the base path as the first element
                for (let i = 0; i < splitIndices.length; i++) {
                    let start = splitIndices[i - 1] || 0;
                    if (i > 0) start++; // Skip the "-----" line
                    let end = splitIndices[i];

                    let wcExtFile = wcExtFileArr.slice(start, end);
                    let base = wcExtFile[0].trim() + "/";
                    wcExtFile = wcExtFile.slice(1)
                        .filter(x => x.trim().length > 0) // Remove empty lines
                        .map(x => x.trim().replace(base, "").replace(".txt", "")); // Remove file extension & newlines;

                    wcExtFile = wcExtFile.map(x => [base, x]);
                    wildcardExtFiles.push(...wcExtFile);
                }

                // Load the yaml wildcard json file and append it as a wildcard file, appending each key as a path component until we reach the end
                yamlWildcards = await readFile(`${tagBasePath}/temp/wc_yaml.json`, true);

                // Append each key as a path component until we reach a leaf
                Object.keys(yamlWildcards).forEach(file => {
                    const flattened = flatten(yamlWildcards[file], [], "/");
                    Object.keys(flattened).forEach(key => {
                        wildcardExtFiles.push([file, key]);
                    });
                });

            let finalResults = [];
            let tempResults = wildcards.filter((x) =>
                wcWord !== null && wcWord.length > 0 ? x.toLowerCase().includes(wcWord) : x
            ); // Filter by tagword
            tempResults.forEach((t) => {
                let result = new TAC.AutocompleteResult(t.trim(), TAC.ResultType.wildcardTag);
                result.meta = wcFile;
                finalResults.push(result);
            });

            } catch (e) {
                console.error("Error loading wildcards: " + e);
            }

            return finalResults;
        }
    }

    function sanitize(tagType, text) {
        if (tagType === ResultType.wildcardFile || tagType === ResultType.yamlWildcard) {
            return `__${text}__`;
        } else if (tagType === ResultType.wildcardTag) {
            return text.replace(/^.*?: /g, "");
        }
        return null;
    }

    class WildcardFileParser extends TAC.BaseTagParser {
        parse() {
            // Show available wildcard files
            let tempResults = [];
            if (TAC.Globals.tagword !== TAC.CFG.wcWrap) {
                let lmb = (x) =>
                    x[1].toLowerCase().includes(TAC.Globals.tagword.replace(TAC.CFG.wcWrap, ""));
                tempResults = TAC.Globals.wildcardFiles
                    .filter(lmb)
                    .concat(TAC.Globals.wildcardExtFiles.filter(lmb)); // Filter by tagword
            } else {
                tempResults = TAC.Globals.wildcardFiles.concat(TAC.Globals.wildcardExtFiles);
            }

            let finalResults = [];
            const alreadyAdded = new Map();
            // Get final results
            tempResults.forEach((wcFile) => {
                // Skip duplicate entries incase multiple files have the same name or yaml category
                if (alreadyAdded.has(wcFile[1])) return;

                let result = null;
                if (wcFile[0].endsWith(".yaml")) {
                    result = new TAC.AutocompleteResult(
                        wcFile[1].trim(),
                        TAC.ResultType.yamlWildcard
                    );
                    result.meta = "YAML wildcard collection";
                } else {
                    result = new TAC.AutocompleteResult(
                        wcFile[1].trim(),
                        TAC.ResultType.wildcardFile
                    );
                    result.meta = "Wildcard file";
                    result.sortKey = wcFile[2].trim();
                }

                finalResults.push(result);
                alreadyAdded.set(wcFile[1], true);
            });

            finalResults.sort(TAC.Utils.getSortFunction());

            return finalResults;
        }
    }

    function keepOpenIfWildcard(tagType, sanitizedText, newPrompt, textArea) {
        // If it's a wildcard, we want to keep the results open so the user can select another wildcard
        if (tagType === ResultType.wildcardFile || tagType === ResultType.yamlWildcard) {
            hideBlocked = true;
            setTimeout(() => { hideBlocked = false; }, 450);
            return true;
        }
        return false;
    }

    async function load() {
        if (TAC.Globals.wildcardFiles.length === 0 && TAC.Globals.wildcardExtFiles.length === 0) {
            try {
                let wcFileArr = await TAC.Utils.loadCSV(`${TAC.Globals.tagBasePath}/temp/wc.txt`);
                if (wcFileArr && wcFileArr.length > 0) {
                    let wcBasePath = wcFileArr[0][0].trim(); // First line should be the base path
                    TAC.Globals.wildcardFiles = wcFileArr
                        .slice(1)
                        .filter((x) => x[0]?.trim().length > 0) //Remove empty lines
                        .map((x) => [wcBasePath, x[0]?.trim().replace(".txt", ""), x[1]]); // Remove file extension & newlines
                }

                // To support multiple sources, we need to separate them using the provided "-----" strings
                let wcExtFileArr = await TAC.Utils.loadCSV(
                    `${TAC.Globals.tagBasePath}/temp/wce.txt`
                );
                let splitIndices = [];
                for (let index = 0; index < wcExtFileArr.length; index++) {
                    if (wcExtFileArr[index][0].trim() === "-----") {
                        splitIndices.push(index);
                    }
                }
                // For each group, add them to the wildcardFiles array with the base path as the first element
                for (let i = 0; i < splitIndices.length; i++) {
                    let start = splitIndices[i - 1] || 0;
                    if (i > 0) start++; // Skip the "-----" line
                    let end = splitIndices[i];

                    let wcExtFile = wcExtFileArr.slice(start, end);
                    if (wcExtFile && wcExtFile.length > 0) {
                        let base = wcExtFile[0][0].trim() + "/";
                        wcExtFile = wcExtFile
                            .slice(1)
                            .filter((x) => x[0]?.trim().length > 0) //Remove empty lines
                            .map((x) => [
                                base,
                                x[0]?.trim().replace(base, "").replace(".txt", ""),
                                x[1],
                            ]);
                        TAC.Globals.wildcardExtFiles.push(...wcExtFile);
                    }
                }
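                // Hedged sketch of the wce.txt layout this loop assumes (rows come from
                // loadCSV; the paths and sort keys below are hypothetical):
                //     /path/to/extension-a/wildcards
                //     "colors.txt","colors"
                //     "styles/painters.txt","painters"
                //     -----
                //     /path/to/extension-b/wildcards
                //     "poses.txt","poses"
                //     -----
                // Each "-----" row ends one source group; the first row of a group is the
                // base path and the rest become [base, relative name, sort key] triples.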

                // Load the yaml wildcard json file and append it as a wildcard file, appending each key as a path component until we reach the end
                TAC.Globals.yamlWildcards = await TAC.Utils.readFile(
                    `${TAC.Globals.tagBasePath}/temp/wc_yaml.json`,
                    true
                );

                // Append each key as a path component until we reach a leaf
                Object.keys(TAC.Globals.yamlWildcards).forEach((file) => {
                    const flattened = TAC.Utils.flatten(TAC.Globals.yamlWildcards[file], [], "/");
                    Object.keys(flattened).forEach((key) => {
                        TAC.Globals.wildcardExtFiles.push([file, key]);
                    });
                });
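                // Illustrative sketch (hypothetical data): flatten() is assumed to join
                // nested keys with "/", so a file entry like
                //     { nature: { weather: ["rain", "snow"] } }
                // yields the flattened key "nature/weather", and the pair
                //     ["wc_collection.yaml", "nature/weather"]
                // is pushed so it can be completed like any other wildcard file.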
            } catch (e) {
                console.error("Error loading wildcards: " + e);
            }
        }
    }

    // Register the parsers
    PARSERS.push(new WildcardParser(WC_TRIGGER));
    PARSERS.push(new WildcardFileParser(WC_FILE_TRIGGER));

    function sanitize(tagType, text) {
        if (tagType === TAC.ResultType.wildcardFile || tagType === TAC.ResultType.yamlWildcard) {
            return `${TAC.CFG.wcWrap}${text}${TAC.CFG.wcWrap}`;
        } else if (tagType === TAC.ResultType.wildcardTag) {
            return text;
        }
        return null;
    }

    // Add our utility functions to their respective queues
    QUEUE_FILE_LOAD.push(load);
    QUEUE_SANITIZE.push(sanitize);
    QUEUE_AFTER_INSERT.push(keepOpenIfWildcard);

    function keepOpenIfWildcard(tagType, sanitizedText, newPrompt, textArea) {
        // If it's a wildcard, we want to keep the results open so the user can select another wildcard
        if (tagType === TAC.ResultType.wildcardFile || tagType === TAC.ResultType.yamlWildcard) {
            TAC.Globals.hideBlocked = true;
            setTimeout(() => {
                TAC.Globals.hideBlocked = false;
            }, 450);
            return true;
        }
        return false;
    }

    // Register the parsers
    TAC.Ext.PARSERS.push(new WildcardParser(WC_TRIGGER));
    TAC.Ext.PARSERS.push(new WildcardFileParser(WC_FILE_TRIGGER));

    // Add our utility functions to their respective queues
    TAC.Ext.QUEUE_FILE_LOAD.push(load);
    TAC.Ext.QUEUE_SANITIZE.push(sanitize);
    TAC.Ext.QUEUE_AFTER_INSERT.push(keepOpenIfWildcard);
})();
@@ -16,6 +16,8 @@ hash_dict = {}


def load_hash_cache():
    if not known_hashes_file.exists():
        known_hashes_file.touch()
    with open(known_hashes_file, "r", encoding="utf-8") as file:
        reader = csv.reader(
            file.readlines(), delimiter=",", quotechar='"', skipinitialspace=True
@@ -28,6 +30,8 @@ def load_hash_cache():
def update_hash_cache():
    global file_needs_update
    if file_needs_update:
        if not known_hashes_file.exists():
            known_hashes_file.touch()
        with open(known_hashes_file, "w", encoding="utf-8", newline='') as file:
            writer = csv.writer(file)
            for name, (hash, mtime) in hash_dict.items():

@@ -6,31 +6,52 @@ try:
    from modules.paths import extensions_dir, script_path

    # Webui root path
    FILE_DIR = Path(script_path)
    FILE_DIR = Path(script_path).absolute()

    # The extension base path
    EXT_PATH = Path(extensions_dir)
    EXT_PATH = Path(extensions_dir).absolute()
except ImportError:
    # Webui root path
    FILE_DIR = Path().absolute()
    # The extension base path
    EXT_PATH = FILE_DIR.joinpath("extensions")
    EXT_PATH = FILE_DIR.joinpath("extensions").absolute()

# Tags base path
TAGS_PATH = Path(scripts.basedir()).joinpath("tags")
TAGS_PATH = Path(scripts.basedir()).joinpath("tags").absolute()

# The path to the folder containing the wildcards and embeddings
WILDCARD_PATH = FILE_DIR.joinpath("scripts/wildcards")
EMB_PATH = Path(shared.cmd_opts.embeddings_dir)
HYP_PATH = Path(shared.cmd_opts.hypernetwork_dir)
try: # SD.Next
    WILDCARD_PATH = Path(shared.opts.wildcards_dir).absolute()
except Exception: # A1111
    WILDCARD_PATH = FILE_DIR.joinpath("scripts/wildcards").absolute()
EMB_PATH = Path(shared.cmd_opts.embeddings_dir).absolute()

# Forge Classic detection
try:
    from modules_forge.forge_version import version as forge_version
    IS_FORGE_CLASSIC = forge_version == "classic"
except ImportError:
    IS_FORGE_CLASSIC = False

# Forge Classic skips it
if not IS_FORGE_CLASSIC:
    try:
        HYP_PATH = Path(shared.cmd_opts.hypernetwork_dir).absolute()
    except AttributeError:
        HYP_PATH = None
else:
    HYP_PATH = None

try:
    LORA_PATH = Path(shared.cmd_opts.lora_dir)
    LORA_PATH = Path(shared.cmd_opts.lora_dir).absolute()
except AttributeError:
    LORA_PATH = None

try:
    LYCO_PATH = Path(shared.cmd_opts.lyco_dir)
    try:
        LYCO_PATH = Path(shared.cmd_opts.lyco_dir_backcompat).absolute()
    except:
        LYCO_PATH = Path(shared.cmd_opts.lyco_dir).absolute() # attempt original non-backcompat path
except AttributeError:
    LYCO_PATH = None

@@ -49,7 +70,7 @@ def find_ext_wildcard_paths():
        getattr(shared.cmd_opts, "wildcards_dir", None), # Cmd arg from the wildcard extension
        getattr(opts, "wildcard_dir", None), # Custom path from sd-dynamic-prompts
    ]
    for path in [Path(p) for p in custom_paths if p is not None]:
    for path in [Path(p).absolute() for p in custom_paths if p is not None]:
        if path.exists():
            found.append(path)

@@ -61,8 +82,8 @@ WILDCARD_EXT_PATHS = find_ext_wildcard_paths()

# The path to the temporary files
# In the webui root, on windows it exists by default, on linux it doesn't
STATIC_TEMP_PATH = FILE_DIR.joinpath("tmp")
TEMP_PATH = TAGS_PATH.joinpath("temp") # Extension specific temp files
STATIC_TEMP_PATH = FILE_DIR.joinpath("tmp").absolute()
TEMP_PATH = TAGS_PATH.joinpath("temp").absolute() # Extension specific temp files

# Make sure these folders exist
if not TEMP_PATH.exists():

@@ -2,43 +2,130 @@
# to a temporary file to expose it to the javascript side

import glob
import importlib
import json
import sqlite3
import sys
import urllib.parse
from asyncio import sleep
from pathlib import Path

import gradio as gr
import yaml
from fastapi import FastAPI
from fastapi.responses import FileResponse, JSONResponse
from modules import script_callbacks, sd_hijack, shared
from fastapi.responses import FileResponse, JSONResponse, Response
from modules import hashes, script_callbacks, sd_hijack, sd_models, shared
from pydantic import BaseModel

from scripts.model_keyword_support import (get_lora_simple_hash,
                                           load_hash_cache, update_hash_cache,
                                           write_model_keyword_path)
from scripts.shared_paths import *

try:
    try:
        from scripts import tag_frequency_db as tdb
    except ModuleNotFoundError:
        from inspect import currentframe, getframeinfo
        filename = getframeinfo(currentframe()).filename
        parent = Path(filename).resolve().parent
        sys.path.append(str(parent))
        import tag_frequency_db as tdb

    # Ensure the db dependency is reloaded on script reload
    importlib.reload(tdb)

    db = tdb.TagFrequencyDb()
    if int(db.version) != int(tdb.db_ver):
        raise ValueError("Database version mismatch")
except (ImportError, ValueError, sqlite3.Error) as e:
    print(f"Tag Autocomplete: Tag frequency database error - \"{e}\"")
    db = None

def get_embed_db(sd_model=None):
    """Returns the embedding database, if available."""
    try:
        return sd_hijack.model_hijack.embedding_db
    except Exception:
        try: # sd next with diffusers backend
            sdnext_model = sd_model if sd_model is not None else shared.sd_model
            return sdnext_model.embedding_db
        except Exception:
            try: # forge webui
                forge_model = sd_model if sd_model is not None else sd_models.model_data.get_sd_model()
                if type(forge_model).__name__ == "FakeInitialModel":
                    return None
                else:
                    processer = getattr(forge_model, "text_processing_engine", getattr(forge_model, "text_processing_engine_l"))
                    return processer.embeddings
            except Exception:
                return None

# Attempt to get embedding load function, using the same call as api.
try:
    load_textual_inversion_embeddings = sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings
    embed_db = get_embed_db()
    if embed_db is not None:
        load_textual_inversion_embeddings = embed_db.load_textual_inversion_embeddings
    else:
        load_textual_inversion_embeddings = lambda *args, **kwargs: None
except Exception as e: # Not supported.
    load_textual_inversion_embeddings = lambda *args, **kwargs: None
    print("Tag Autocomplete: Cannot reload embeddings instantly:", e)

# Sorting functions for extra networks / embeddings stuff
sort_criteria = {
    "Name": lambda path, name, subpath: name.lower() if subpath else path.stem.lower(),
    "Date Modified (newest first)": lambda path, name, subpath: path.stat().st_mtime if path.exists() else name.lower(),
    "Date Modified (oldest first)": lambda path, name, subpath: path.stat().st_mtime if path.exists() else name.lower()
}

def sort_models(model_list, sort_method = None, name_has_subpath = False):
    """Sorts models according to the setting.

    Input: list of (full_path, display_name, {hash}) models.
    Returns models in the format of name, sort key, meta.
    Meta is optional and can be a hash, version string or other required info.
    """
    if len(model_list) == 0:
        return model_list

    if sort_method is None:
        sort_method = getattr(shared.opts, "tac_modelSortOrder", "Name")

    # Get sorting method from dictionary
    sorter = sort_criteria.get(sort_method, sort_criteria["Name"])

    # During merging on the JS side we need to re-sort anyway, so here only the sort criteria are calculated.
    # The list itself doesn't need to get sorted at this point.
    if len(model_list[0]) > 2:
        results = [f'"{name}","{sorter(path, name, name_has_subpath)}",{meta}' for path, name, meta in model_list]
    else:
        results = [f'"{name}","{sorter(path, name, name_has_subpath)}"' for path, name in model_list]
    return results
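# Hedged sketch (hypothetical paths, not from the source) of sort_models() output:
#     sort_models([(Path("loras/style.safetensors"), "style", "abc123")])
# with the default "Name" criterion returns CSV-like strings such as
#     ['"style","style",abc123']
# i.e. display name, the precomputed sort key, and the optional meta (here a hash);
# the actual ordering is applied later on the JavaScript side.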

def get_wildcards():
    """Returns a list of all wildcards. Works on nested folders."""
    wildcard_files = list(WILDCARD_PATH.rglob("*.txt"))
    resolved = [w.relative_to(WILDCARD_PATH).as_posix(
    ) for w in wildcard_files if w.name != "put wildcards here.txt"]
    return resolved
    resolved = [(w, w.relative_to(WILDCARD_PATH).as_posix())
                for w in wildcard_files
                if w.name != "put wildcards here.txt"
                and w.is_file()]
    return sort_models(resolved, name_has_subpath=True)


def get_ext_wildcards():
    """Returns a list of all extension wildcards. Works on nested folders."""
    wildcard_files = []

    excluded_folder_names = [s.strip() for s in getattr(shared.opts, "tac_wildcardExclusionList", "").split(",")]
    for path in WILDCARD_EXT_PATHS:
        wildcard_files.append(path.as_posix())
        wildcard_files.extend(p.relative_to(path).as_posix() for p in path.rglob("*.txt") if p.name != "put wildcards here.txt")
        resolved = [(w, w.relative_to(path).as_posix())
                    for w in path.rglob("*.txt")
                    if w.name != "put wildcards here.txt"
                    and not any(excluded in w.parts for excluded in excluded_folder_names)
                    and w.is_file()]
        wildcard_files.extend(sort_models(resolved, name_has_subpath=True))
        wildcard_files.append("-----")

    return wildcard_files
@@ -47,12 +134,18 @@ def is_umi_format(data):
    """Returns True if the YAML file is in UMI format."""
    issue_found = False
    for item in data:
        if not (data[item] and 'Tags' in data[item] and isinstance(data[item]['Tags'], list)):
        try:
            if not (data[item] and 'Tags' in data[item] and isinstance(data[item]['Tags'], list)):
                issue_found = True
                break
        except:
            issue_found = True
            break
    return not issue_found

def parse_umi_format(umi_tags, count, data):
count = 0
def parse_umi_format(umi_tags, data):
    global count
    for item in data:
        umi_tags[count] = ','.join(data[item]['Tags'])
        count += 1
@@ -67,22 +160,24 @@ def parse_dynamic_prompt_format(yaml_wildcards, data, path):
        elif not (isinstance(value, list) and all(isinstance(v, str) for v in value)):
            del d[key]

    recurse_dict(data)
    # Add to yaml_wildcards
    yaml_wildcards[path.name] = data
    try:
        recurse_dict(data)
        # Add to yaml_wildcards
        yaml_wildcards[path.name] = data
    except:
        return


def get_yaml_wildcards():
    """Returns a list of all tags found in extension YAML files found under a Tags: key."""
    yaml_files = []
    for path in WILDCARD_EXT_PATHS:
        yaml_files.extend(p for p in path.rglob("*.yml"))
        yaml_files.extend(p for p in path.rglob("*.yaml"))
        yaml_files.extend(p for p in path.rglob("*.yml") if p.is_file())
        yaml_files.extend(p for p in path.rglob("*.yaml") if p.is_file())

    yaml_wildcards = {}

    umi_tags = {} # { tag: count }
    count = 0

    for path in yaml_files:
        try:
@@ -90,13 +185,17 @@ def get_yaml_wildcards():
            data = yaml.safe_load(file)
            if (data):
                if (is_umi_format(data)):
                    parse_umi_format(umi_tags, count, data)
                    parse_umi_format(umi_tags, data)
                else:
                    parse_dynamic_prompt_format(yaml_wildcards, data, path)
            else:
                print('No data found in ' + path.name)
        except yaml.YAMLError:
            print('Issue in parsing YAML file ' + path.name)
        except (yaml.YAMLError, UnicodeDecodeError, AttributeError, TypeError) as e:
            # YAML file not in wildcard format or couldn't be read
            print(f'Issue in parsing YAML file {path.name}: {e}')
            continue
        except Exception as e:
            # Something else went wrong, just skip
            continue

    # Sort by count
@@ -118,48 +217,59 @@ def get_embeddings(sd_model):
    # Version constants
    V1_SHAPE = 768
    V2_SHAPE = 1024
    VXL_SHAPE = 2048
    emb_v1 = []
    emb_v2 = []
    emb_vXL = []
    emb_unknown = []
    results = []

    try:
        # Get embedding dict from sd_hijack to separate v1/v2 embeddings
        emb_type_a = sd_hijack.model_hijack.embedding_db.word_embeddings
        emb_type_b = sd_hijack.model_hijack.embedding_db.skipped_embeddings
        # Get the shape of the first item in the dict
        emb_a_shape = -1
        emb_b_shape = -1
        if (len(emb_type_a) > 0):
            emb_a_shape = next(iter(emb_type_a.items()))[1].shape
        if (len(emb_type_b) > 0):
            emb_b_shape = next(iter(emb_type_b.items()))[1].shape
        embed_db = get_embed_db(sd_model)
        # Re-register callback if needed
        global load_textual_inversion_embeddings
        if embed_db is not None and load_textual_inversion_embeddings != embed_db.load_textual_inversion_embeddings:
            load_textual_inversion_embeddings = embed_db.load_textual_inversion_embeddings

        loaded = embed_db.word_embeddings
        skipped = embed_db.skipped_embeddings

        # Add embeddings to the correct list
        if (emb_a_shape == V1_SHAPE):
            emb_v1 = list(emb_type_a.keys())
        elif (emb_a_shape == V2_SHAPE):
            emb_v2 = list(emb_type_a.keys())
        for key, emb in (skipped | loaded).items():
            filename = getattr(emb, "filename", None)

            if filename is None:
                if emb.shape is None:
                    emb_unknown.append((Path(key), key, ""))
                elif emb.shape == V1_SHAPE:
                    emb_v1.append((Path(key), key, "v1"))
                elif emb.shape == V2_SHAPE:
                    emb_v2.append((Path(key), key, "v2"))
                elif emb.shape == VXL_SHAPE:
                    emb_vXL.append((Path(key), key, "vXL"))
                else:
                    emb_unknown.append((Path(key), key, ""))

            else:
                if emb.filename is None:
                    continue

        if (emb_b_shape == V1_SHAPE):
            emb_v1 = list(emb_type_b.keys())
        elif (emb_b_shape == V2_SHAPE):
            emb_v2 = list(emb_type_b.keys())
                if emb.shape is None:
                    emb_unknown.append((Path(emb.filename), Path(emb.filename).relative_to(EMB_PATH).as_posix(), ""))
                elif emb.shape == V1_SHAPE:
                    emb_v1.append((Path(emb.filename), Path(emb.filename).relative_to(EMB_PATH).as_posix(), "v1"))
                elif emb.shape == V2_SHAPE:
                    emb_v2.append((Path(emb.filename), Path(emb.filename).relative_to(EMB_PATH).as_posix(), "v2"))
                elif emb.shape == VXL_SHAPE:
                    emb_vXL.append((Path(emb.filename), Path(emb.filename).relative_to(EMB_PATH).as_posix(), "vXL"))
                else:
                    emb_unknown.append((Path(emb.filename), Path(emb.filename).relative_to(EMB_PATH).as_posix(), ""))

        # Get shape of current model
        #vec = sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
        #model_shape = vec.shape[1]
        # Show relevant entries at the top
        #if (model_shape == V1_SHAPE):
        #    results = [e + ",v1" for e in emb_v1] + [e + ",v2" for e in emb_v2]
        #elif (model_shape == V2_SHAPE):
        #    results = [e + ",v2" for e in emb_v2] + [e + ",v1" for e in emb_v1]
        #else:
        #    raise AttributeError # Fallback to old method
        results = sorted([e + ",v1" for e in emb_v1] + [e + ",v2" for e in emb_v2], key=lambda x: x.lower())
        results = sort_models(emb_v1) + sort_models(emb_v2) + sort_models(emb_vXL) + sort_models(emb_unknown)
    except AttributeError:
        print("tag_autocomplete_helper: Old webui version or unrecognized model shape, using fallback for embedding completion.")
        # Get a list of all embeddings in the folder
        all_embeds = [str(e.relative_to(EMB_PATH)) for e in EMB_PATH.rglob("*") if e.suffix in {".bin", ".pt", ".png",'.webp', '.jxl', '.avif'}]
        all_embeds = [str(e.relative_to(EMB_PATH)) for e in EMB_PATH.rglob("*") if e.suffix in {".bin", ".pt", ".png",'.webp', '.jxl', '.avif'} and e.is_file()]
        # Remove files with a size of 0
        all_embeds = [e for e in all_embeds if EMB_PATH.joinpath(e).stat().st_size > 0]
        # Remove file extensions
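# For reference, a minimal sketch of the shape-to-version mapping used above
# (the shape constants are from the source; the helper below is illustrative only):
#     def classify(shape):
#         return {768: "v1", 1024: "v2", 2048: "vXL"}.get(shape, "")
# Embeddings whose vector width matches none of these land in emb_unknown with an
# empty version tag, and the fallback path above scans the folder directly instead.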
@@ -173,53 +283,129 @@ def get_hypernetworks():

    # Get a list of all hypernetworks in the folder
    hyp_paths = [Path(h) for h in glob.glob(HYP_PATH.joinpath("**/*").as_posix(), recursive=True)]
    all_hypernetworks = [str(h.name) for h in hyp_paths if h.suffix in {".pt"}]
    # Remove file extensions
    return sorted([h[:h.rfind('.')] for h in all_hypernetworks], key=lambda x: x.lower())
    all_hypernetworks = [(h, h.stem) for h in hyp_paths if h.suffix in {".pt"} and h.is_file()]
    return sort_models(all_hypernetworks)

model_keyword_installed = write_model_keyword_path()


def _get_lora():
    """
    Write a list of all lora.
    Fallback method for when the built-in Lora.networks module is not available.
    """
    # Get a list of all lora in the folder
    lora_paths = [
        Path(l)
        for l in glob.glob(LORA_PATH.joinpath("**/*").as_posix(), recursive=True)
    ]
    # Get hashes
    valid_loras = [
        lf
        for lf in lora_paths
        if lf.suffix in {".safetensors", ".ckpt", ".pt"} and lf.is_file()
    ]

    return valid_loras


def _get_lyco():
    """
    Write a list of all LyCORIS/LOHA from https://github.com/KohakuBlueleaf/a1111-sd-webui-lycoris
    Fallback method for when the built-in Lora.networks module is not available.
    """
    # Get a list of all LyCORIS in the folder
    lyco_paths = [
        Path(ly)
        for ly in glob.glob(LYCO_PATH.joinpath("**/*").as_posix(), recursive=True)
    ]

    # Get hashes
    valid_lycos = [
        lyf
        for lyf in lyco_paths
        if lyf.suffix in {".safetensors", ".ckpt", ".pt"} and lyf.is_file()
    ]
    return valid_lycos


# Attempt to use the build-in Lora.networks Lora/LyCORIS models lists.
try:
    import sys
    from modules import extensions
    sys.path.append(Path(extensions.extensions_builtin_dir).joinpath("Lora").as_posix())
    import lora # pyright: ignore [reportMissingImports]

    def _get_lora():
        return [
            Path(model.filename).absolute()
            for model in lora.available_loras.values()
            if Path(model.filename).absolute().is_relative_to(LORA_PATH)
        ]

    def _get_lyco():
        return [
            Path(model.filename).absolute()
            for model in lora.available_loras.values()
            if Path(model.filename).absolute().is_relative_to(LYCO_PATH)
        ]

except Exception as e:
    pass
    # no need to report
    # print(f'Exception setting-up performant fetchers: {e}')


def is_visible(p: Path) -> bool:
    if getattr(shared.opts, "extra_networks_hidden_models", "When searched") != "Never":
        return True
    for part in p.parts:
        if part.startswith('.'):
            return False
    return True
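# Illustrative sketch (hypothetical paths): with the default webui setting
# "When searched", is_visible() keeps everything, because the first branch returns
# True immediately. Only when the option is set to "Never" are dot-folders pruned:
#     is_visible(Path("lora/.hidden/model.safetensors"))  # -> False
#     is_visible(Path("lora/shown/model.safetensors"))    # -> True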

def get_lora():
    """Write a list of all lora"""
    global model_keyword_installed

    # Get a list of all lora in the folder
    lora_paths = [Path(l) for l in glob.glob(LORA_PATH.joinpath("**/*").as_posix(), recursive=True)]
    # Get hashes
    valid_loras = [lf for lf in lora_paths if lf.suffix in {".safetensors", ".ckpt", ".pt"}]
    hashes = {}
    valid_loras = _get_lora()
    loras_with_hash = []
    for l in valid_loras:
        if not l.exists() or not l.is_file() or not is_visible(l):
            continue
        name = l.relative_to(LORA_PATH).as_posix()
        if model_keyword_installed:
            hashes[name] = get_lora_simple_hash(l)
            hash = get_lora_simple_hash(l)
        else:
            hashes[name] = ""
            hash = ""
        loras_with_hash.append((l, name, hash))
    # Sort
    sorted_loras = dict(sorted(hashes.items()))
    # Add hashes and return
    return [f"\"{name}\",{hash}" for name, hash in sorted_loras.items()]
    return sort_models(loras_with_hash)
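# Hedged sketch (hypothetical file and hash) of what get_lora() feeds into sort_models():
#     (Path(".../Lora/styles/ink.safetensors"), "styles/ink", "a1b2c3d4")
# i.e. the absolute path, the name relative to LORA_PATH, and the short hash from
# model-keyword support (or "" when that extension is not installed).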
|
||||
|
||||
|
||||
def get_lyco():
|
||||
"""Write a list of all LyCORIS/LOHA from https://github.com/KohakuBlueleaf/a1111-sd-webui-lycoris"""
|
||||
|
||||
# Get a list of all LyCORIS in the folder
|
||||
lyco_paths = [Path(ly) for ly in glob.glob(LYCO_PATH.joinpath("**/*").as_posix(), recursive=True)]
|
||||
|
||||
# Get hashes
|
||||
valid_lycos = [lyf for lyf in lyco_paths if lyf.suffix in {".safetensors", ".ckpt", ".pt"}]
|
||||
hashes = {}
|
||||
valid_lycos = _get_lyco()
|
||||
lycos_with_hash = []
|
||||
for ly in valid_lycos:
|
||||
if not ly.exists() or not ly.is_file() or not is_visible(ly):
|
||||
continue
|
||||
name = ly.relative_to(LYCO_PATH).as_posix()
|
||||
if model_keyword_installed:
|
||||
hashes[name] = get_lora_simple_hash(ly)
|
||||
hash = get_lora_simple_hash(ly)
|
||||
else:
|
||||
hashes[name] = ""
|
||||
|
||||
hash = ""
|
||||
lycos_with_hash.append((ly, name, hash))
|
||||
# Sort
|
||||
sorted_lycos = dict(sorted(hashes.items()))
|
||||
# Add hashes and return
|
||||
return [f"\"{name}\",{hash}" for name, hash in sorted_lycos.items()]
|
||||
return sort_models(lycos_with_hash)
|
||||
|
||||
def get_style_names():
|
||||
try:
|
||||
style_names: list[str] = shared.prompt_styles.styles.keys()
|
||||
style_names = sorted(style_names, key=len, reverse=True)
|
||||
return style_names
|
||||
except Exception:
|
||||
return None
|
||||
|
||||
def write_tag_base_path():
|
||||
"""Writes the tag base path to a fixed location temporary file"""
|
||||
@@ -235,19 +421,19 @@ def write_to_temp_file(name, data):
|
||||
|
||||
csv_files = []
|
||||
csv_files_withnone = []
|
||||
def update_tag_files():
|
||||
def update_tag_files(*args, **kwargs):
|
||||
"""Returns a list of all potential tag files"""
|
||||
global csv_files, csv_files_withnone
|
||||
files = [str(t.relative_to(TAGS_PATH)) for t in TAGS_PATH.glob("*.csv")]
|
||||
files = [str(t.relative_to(TAGS_PATH)) for t in TAGS_PATH.glob("*.csv") if t.is_file()]
|
||||
csv_files = files
|
||||
csv_files_withnone = ["None"] + files
|
||||
|
||||
json_files = []
|
||||
json_files_withnone = []
|
||||
def update_json_files():
|
||||
def update_json_files(*args, **kwargs):
|
||||
"""Returns a list of all potential json files"""
|
||||
global json_files, json_files_withnone
|
||||
files = [str(j.relative_to(TAGS_PATH)) for j in TAGS_PATH.glob("*.json")]
|
||||
files = [str(j.relative_to(TAGS_PATH)) for j in TAGS_PATH.glob("*.json") if j.is_file()]
|
||||
json_files = files
|
||||
json_files_withnone = ["None"] + files
|
||||
|
||||
@@ -274,6 +460,7 @@ write_to_temp_file('umi_tags.txt', [])
|
||||
write_to_temp_file('hyp.txt', [])
|
||||
write_to_temp_file('lora.txt', [])
|
||||
write_to_temp_file('lyco.txt', [])
|
||||
write_to_temp_file('styles.txt', [])
|
||||
# Only reload embeddings if the file doesn't exist, since they are already re-written on model load
|
||||
if not TEMP_PATH.joinpath("emb.txt").exists():
|
||||
write_to_temp_file('emb.txt', [])
|
||||
@@ -283,29 +470,59 @@ if EMB_PATH.exists():
|
||||
# Get embeddings after the model loaded callback
|
||||
script_callbacks.on_model_loaded(get_embeddings)
|
||||
|
||||
def refresh_temp_files():
|
||||
global WILDCARD_EXT_PATHS
|
||||
WILDCARD_EXT_PATHS = find_ext_wildcard_paths()
|
||||
load_textual_inversion_embeddings(force_reload = True) # Instant embedding reload.
|
||||
write_temp_files()
|
||||
get_embeddings(shared.sd_model)
|
||||
def refresh_embeddings(force: bool, *args, **kwargs):
|
||||
try:
|
||||
# Fix for SD.Next infinite refresh loop due to gradio not updating after model load on demand.
|
||||
# This will just skip embedding loading if no model is loaded yet (or there really are no embeddings).
|
||||
# Try catch is just for safety incase sd_hijack access fails for some reason.
|
||||
embed_db = get_embed_db()
|
||||
if embed_db is None:
|
||||
return
|
||||
loaded = embed_db.word_embeddings
|
||||
skipped = embed_db.skipped_embeddings
|
||||
if len((loaded | skipped)) > 0:
|
||||
load_textual_inversion_embeddings(force_reload=force)
|
||||
get_embeddings(None)
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
def write_temp_files():
|
||||
def refresh_temp_files(*args, **kwargs):
|
||||
global WILDCARD_EXT_PATHS
|
||||
skip_wildcard_refresh = getattr(shared.opts, "tac_skipWildcardRefresh", False)
|
||||
if skip_wildcard_refresh:
|
||||
WILDCARD_EXT_PATHS = find_ext_wildcard_paths()
|
||||
write_temp_files(skip_wildcard_refresh)
|
||||
force_embed_refresh = getattr(shared.opts, "tac_forceRefreshEmbeddings", False)
|
||||
refresh_embeddings(force=force_embed_refresh)
|
||||
|
||||
def write_style_names(*args, **kwargs):
|
||||
styles = get_style_names()
|
||||
if styles:
|
||||
write_to_temp_file('styles.txt', styles)
|
||||
|
||||
def write_temp_files(skip_wildcard_refresh = False):
|
||||
# Write wildcards to wc.txt if found
|
||||
if WILDCARD_PATH.exists():
|
||||
wildcards = [WILDCARD_PATH.relative_to(FILE_DIR).as_posix()] + get_wildcards()
|
||||
if WILDCARD_PATH.exists() and not skip_wildcard_refresh:
|
||||
try:
|
||||
# Attempt to create a relative path, but fall back to an absolute path if not possible
|
||||
relative_wildcard_path = WILDCARD_PATH.relative_to(FILE_DIR).as_posix()
|
||||
except ValueError:
|
||||
# If the paths are not relative, use the absolute path
|
||||
relative_wildcard_path = WILDCARD_PATH.as_posix()
|
||||
|
||||
wildcards = [relative_wildcard_path] + get_wildcards()
|
||||
if wildcards:
|
||||
write_to_temp_file('wc.txt', wildcards)
|
||||
|
||||
# Write extension wildcards to wce.txt if found
|
||||
if WILDCARD_EXT_PATHS is not None:
|
||||
if WILDCARD_EXT_PATHS is not None and not skip_wildcard_refresh:
|
||||
wildcards_ext = get_ext_wildcards()
|
||||
if wildcards_ext:
|
||||
write_to_temp_file('wce.txt', wildcards_ext)
|
||||
# Write yaml extension wildcards to umi_tags.txt and wc_yaml.json if found
|
||||
get_yaml_wildcards()
|
||||
|
||||
if HYP_PATH.exists():
|
||||
if HYP_PATH is not None and HYP_PATH.exists():
|
||||
hypernets = get_hypernetworks()
|
||||
if hypernets:
|
||||
write_to_temp_file('hyp.txt', hypernets)
|
||||
@@ -330,6 +547,8 @@ def write_temp_files():
|
||||
if model_keyword_installed:
|
||||
update_hash_cache()
|
||||
|
||||
if shared.prompt_styles is not None:
|
||||
write_style_names()
|
||||
|
||||
write_temp_files()
|
||||
|
||||
@@ -349,6 +568,13 @@ def on_ui_settings():
|
||||
return self
|
||||
shared.OptionInfo.needs_restart = needs_restart
|
||||
|
||||
# Dictionary of function options and their explanations
|
||||
frequency_sort_functions = {
|
||||
"Logarithmic (weak)": "Will respect the base order and slightly prefer often used tags",
|
||||
"Logarithmic (strong)": "Same as Logarithmic (weak), but with a stronger bias",
|
||||
"Usage first": "Will list used tags by frequency before all others",
|
||||
}
|
||||
|
||||
tac_options = {
|
||||
# Main tag file
|
||||
"tac_tagFile": shared.OptionInfo("danbooru.csv", "Tag filename", gr.Dropdown, lambda: {"choices": csv_files_withnone}, refresh=update_tag_files),
|
||||
@@ -368,15 +594,29 @@ def on_ui_settings():
    "tac_delayTime": shared.OptionInfo(100, "Time in ms to wait before triggering completion again").needs_restart(),
    "tac_useWildcards": shared.OptionInfo(True, "Search for wildcards"),
    "tac_sortWildcardResults": shared.OptionInfo(True, "Sort wildcard file contents alphabetically").info("If your wildcard files have a specific custom order, disable this to keep it"),
    "tac_wildcardExclusionList": shared.OptionInfo("", "Wildcard folder exclusion list").info("Add folder names that shouldn't be searched for wildcards, separated by comma.").needs_restart(),
    "tac_skipWildcardRefresh": shared.OptionInfo(False, "Don't re-scan for wildcard files when pressing the extra networks refresh button").info("Useful to prevent hanging if you use a very large wildcard collection."),
    "tac_useEmbeddings": shared.OptionInfo(True, "Search for embeddings"),
    "tac_forceRefreshEmbeddings": shared.OptionInfo(False, "Force refresh embeddings when pressing the extra networks refresh button").info("Turn this on if you have issues with new embeddings not registering correctly in TAC. Warning: Seems to cause reloading issues in gradio for some users."),
    "tac_includeEmbeddingsInNormalResults": shared.OptionInfo(False, "Include embeddings in normal tag results").info("The 'JumpTo...' keybinds (End & Home key by default) will select the first non-embedding result of their direction on the first press for quick navigation in longer lists."),
    "tac_useHypernetworks": shared.OptionInfo(True, "Search for hypernetworks"),
    "tac_useLoras": shared.OptionInfo(True, "Search for Loras"),
    "tac_useLycos": shared.OptionInfo(True, "Search for LyCORIS/LoHa"),
    "tac_useLoraPrefixForLycos": shared.OptionInfo(True, "Use the '<lora:' prefix instead of '<lyco:' for models in the LyCORIS folder").info("The lyco prefix is included for backwards compatibility and is not used anymore by default. Disable this if you are on an old webui version without built-in lyco support."),
    "tac_showWikiLinks": shared.OptionInfo(False, "Show '?' next to tags, linking to their Danbooru or e621 wiki page").info("Warning: This is an external site and very likely contains NSFW examples!"),
    "tac_showExtraNetworkPreviews": shared.OptionInfo(True, "Show preview thumbnails for extra networks if available"),
    "tac_modelSortOrder": shared.OptionInfo("Name", "Model sort order", gr.Dropdown, lambda: {"choices": list(sort_criteria.keys())}).info("Order for extra network models and wildcards in dropdown"),
    "tac_useStyleVars": shared.OptionInfo(False, "Search for webui style names").info("Suggests style names from the webui dropdown with '$'. Currently requires a secondary extension like <a href=\"https://github.com/SirVeggie/extension-style-vars\" target=\"_blank\">style-vars</a> to actually apply the styles before generating."),

    # Frequency sorting settings
    "tac_frequencySort": shared.OptionInfo(True, "Locally record tag usage and sort frequent tags higher").info("Will also work for extra networks, keeping the specified base order"),
    "tac_frequencyFunction": shared.OptionInfo("Logarithmic (weak)", "Function to use for frequency sorting", gr.Dropdown, lambda: {"choices": list(frequency_sort_functions.keys())}).info("; ".join([f'<b>{key}</b>: {val}' for key, val in frequency_sort_functions.items()])),
    "tac_frequencyMinCount": shared.OptionInfo(3, "Minimum number of uses for a tag to be considered frequent").info("Tags with fewer uses than this will not be sorted higher, even if the sorting function would normally result in a higher position."),
    "tac_frequencyMaxAge": shared.OptionInfo(30, "Maximum days since last use for a tag to be considered frequent").info("Similar to the above, tags that haven't been used in this many days will not be sorted higher. Set to 0 to disable."),
    "tac_frequencyRecommendCap": shared.OptionInfo(10, "Maximum number of recommended tags").info("Limits the maximum number of recommended tags to not drown out normal results. Set to 0 to disable."),
    "tac_frequencyIncludeAlias": shared.OptionInfo(False, "Frequency sorting matches aliases for frequent tags").info("Tag frequency will be increased for the main tag even if an alias is used for completion. This option can be used to override the default behavior of alias results being ignored for frequency sorting."),
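    # Illustrative sketch of how the two gates above (minimum use count,
    # maximum age) plausibly combine; semantics inferred from the option
    # descriptions rather than taken from the extension's code:
    #
    #     from datetime import datetime, timedelta
    #
    #     def is_frequent(count, last_used, min_count=3, max_age_days=30):
    #         if count < min_count:
    #             return False  # not used often enough
    #         if max_age_days and datetime.now() - last_used > timedelta(days=max_age_days):
    #             return False  # too long since last use (0 disables this check)
    #         return True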
    # Insertion related settings
    "tac_replaceUnderscores": shared.OptionInfo(True, "Replace underscores with spaces on insertion"),
    "tac_undersocreReplacementExclusionList": shared.OptionInfo("0_0,(o)_(o),+_+,+_-,._.,<o>_<o>,<|>_<|>,=_=,>_<,3_3,6_9,>_o,@_@,^_^,o_o,u_u,x_x,|_|,||_||", "Underscore replacement exclusion list").info("Add tags that shouldn't have underscores replaced with spaces, separated by comma."),
    "tac_escapeParentheses": shared.OptionInfo(True, "Escape parentheses on insertion"),
    "tac_appendComma": shared.OptionInfo(True, "Append comma on tag autocompletion"),
    "tac_appendSpace": shared.OptionInfo(True, "Append space on tag autocompletion").info("Will append after comma if the above is enabled"),
@@ -439,6 +679,37 @@ def on_ui_settings():
        "6": ["red", "maroon"],
        "7": ["whitesmoke", "black"],
        "8": ["seagreen", "darkseagreen"]
    },
    "derpibooru": {
        "-1": ["red", "maroon"],
        "0": ["#60d160", "#3d9d3d"],
        "1": ["#fff956", "#918e2e"],
        "3": ["#fd9961", "#a14c2e"],
        "4": ["#cf5bbe", "#6c1e6c"],
        "5": ["#3c8ad9", "#1e5e93"],
        "6": ["#a6a6a6", "#555555"],
        "7": ["#47abc1", "#1f6c7c"],
        "8": ["#7871d0", "#392f7d"],
        "9": ["#df3647", "#8e1c2b"],
        "10": ["#c98f2b", "#7b470e"],
        "11": ["#e87ebe", "#a83583"]
    },
    "danbooru_e621_merged": {
        "-1": ["red", "maroon"],
        "0": ["lightblue", "dodgerblue"],
        "1": ["indianred", "firebrick"],
        "3": ["violet", "darkorchid"],
        "4": ["lightgreen", "darkgreen"],
        "5": ["orange", "darkorange"],
        "6": ["red", "maroon"],
        "7": ["lightblue", "dodgerblue"],
        "8": ["gold", "goldenrod"],
        "9": ["gold", "goldenrod"],
        "10": ["violet", "darkorchid"],
        "11": ["lightgreen", "darkgreen"],
        "12": ["tomato", "darksalmon"],
        "14": ["whitesmoke", "black"],
        "15": ["seagreen", "darkseagreen"]
    }
}\
"""
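# Illustrative lookup for the color maps above: each entry pairs a numeric tag
# category with a [light-mode, dark-mode] CSS color, and "-1" is the fallback.
# A hypothetical helper over the parsed JSON (structure assumed from the maps):
def tag_color(color_map: dict, source: str, category: int, dark: bool = False) -> str:
    colors = color_map.get(source, {})
    pair = colors.get(str(category), colors.get("-1", ["red", "maroon"]))
    return pair[1] if dark else pair[0]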
@@ -456,25 +727,40 @@ def on_ui_settings():

script_callbacks.on_ui_settings(on_ui_settings)


def get_style_mtime():
    try:
        style_file = getattr(shared, "styles_filename", "styles.csv")
        # Check in case a list is returned
        if isinstance(style_file, list):
            style_file = style_file[0]

        style_file = Path(FILE_DIR).joinpath(style_file)
        if style_file.exists():
            return style_file.stat().st_mtime
    except Exception:
        return None


last_style_mtime = get_style_mtime()

def api_tac(_: gr.Blocks, app: FastAPI):
    async def get_json_info(base_path: Path, filename: str = None):
        if base_path is None or (not base_path.exists()):
-           return JSONResponse({}, status_code=404)
+           return Response(status_code=404)

        try:
-           json_candidates = glob.glob(base_path.as_posix() + f"/**/{filename}.json", recursive=True)
-           if json_candidates is not None and len(json_candidates) > 0:
+           json_candidates = glob.glob(base_path.as_posix() + f"/**/{glob.escape(filename)}.json", recursive=True)
+           if json_candidates is not None and len(json_candidates) > 0 and Path(json_candidates[0]).is_file():
                return FileResponse(json_candidates[0])
        except Exception as e:
            return JSONResponse({"error": str(e)}, status_code=500)

    async def get_preview_thumbnail(base_path: Path, filename: str = None, blob: bool = False):
        if base_path is None or (not base_path.exists()):
-           return JSONResponse({}, status_code=404)
+           return Response(status_code=404)

        try:
-           img_glob = glob.glob(base_path.as_posix() + f"/**/{filename}.*", recursive=True)
-           img_candidates = [img for img in img_glob if Path(img).suffix in [".png", ".jpg", ".jpeg", ".webp", ".gif"]]
+           img_glob = glob.glob(base_path.as_posix() + f"/**/{glob.escape(filename)}.*", recursive=True)
+           img_candidates = [img for img in img_glob if Path(img).suffix in [".png", ".jpg", ".jpeg", ".webp", ".gif"] and Path(img).is_file()]
            if img_candidates is not None and len(img_candidates) > 0:
                if blob:
                    return FileResponse(img_candidates[0])
@@ -483,6 +769,15 @@ def api_tac(_: gr.Blocks, app: FastAPI):
        except Exception as e:
            return JSONResponse({"error": str(e)}, status_code=500)

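    # Note on the glob.escape() change above: without escaping, model names
    # containing glob metacharacters ([, ], *, ?) are parsed as patterns and
    # can silently match nothing. Quick demonstration (made-up paths):
    #
    #     import glob
    #     glob.glob("loras/**/model[v2].json", recursive=True)                    # "[v2]" read as a character class
    #     glob.glob(f"loras/**/{glob.escape('model[v2]')}.json", recursive=True)  # matches the literal name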
@app.post("/tacapi/v1/refresh-temp-files")
|
||||
async def api_refresh_temp_files():
|
||||
await sleep(0) # might help with refresh blocking gradio
|
||||
refresh_temp_files()
|
||||
|
||||
@app.post("/tacapi/v1/refresh-embeddings")
|
||||
async def api_refresh_embeddings():
|
||||
refresh_embeddings(force=False)
|
||||
|
||||
@app.get("/tacapi/v1/lora-info/{lora_name}")
|
||||
async def get_lora_info(lora_name):
|
||||
return await get_json_info(LORA_PATH, lora_name)
|
||||
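    # Illustrative client usage of the endpoints above; host/port assume a
    # default local webui and "myLora" is a made-up model name:
    #
    #     import requests
    #     BASE = "http://127.0.0.1:7860"
    #     requests.post(f"{BASE}/tacapi/v1/refresh-temp-files")
    #     info = requests.get(f"{BASE}/tacapi/v1/lora-info/myLora")
    #     print(info.status_code, info.json() if info.ok else None)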
@@ -491,14 +786,26 @@ def api_tac(_: gr.Blocks, app: FastAPI):
    async def get_lyco_info(lyco_name):
        return await get_json_info(LYCO_PATH, lyco_name)

    @app.get("/tacapi/v1/lora-cached-hash/{lora_name}")
    async def get_lora_cached_hash(lora_name: str):
        path_glob = glob.glob(LORA_PATH.as_posix() + f"/**/{glob.escape(lora_name)}.*", recursive=True)
        paths = [lora for lora in path_glob if Path(lora).suffix in [".safetensors", ".ckpt", ".pt"] and Path(lora).is_file()]
        if paths is not None and len(paths) > 0:
            path = paths[0]
            sha256 = hashes.sha256_from_cache(path, f"lora/{lora_name}", path.endswith(".safetensors"))
            if sha256 is not None:
                return sha256

        return None

    def get_path_for_type(type):
        if type == "lora":
            return LORA_PATH
        elif type == "lyco":
            return LYCO_PATH
-       elif type == "hyper":
+       elif type == "hypernetwork":
            return HYP_PATH
-       elif type == "embed":
+       elif type == "embedding":
            return EMB_PATH
        else:
            return None
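    # Hypothetical alternative, for illustration only: the explicit chain above
    # is what the extension ships; a plain dict lookup is a more compact form.
    MODEL_PATHS = {"lora": LORA_PATH, "lyco": LYCO_PATH,
                   "hypernetwork": HYP_PATH, "embedding": EMB_PATH}

    def get_path_for_type_alt(model_type: str):
        return MODEL_PATHS.get(model_type)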
@@ -514,20 +821,91 @@ def api_tac(_: gr.Blocks, app: FastAPI):
    @app.get("/tacapi/v1/wildcard-contents")
    async def get_wildcard_contents(basepath: str, filename: str):
        if basepath is None or basepath == "":
-           return JSONResponse({}, status_code=404)
+           return Response(status_code=404)

        base = Path(basepath)
        if base is None or (not base.exists()):
-           return JSONResponse({}, status_code=404)
+           return Response(status_code=404)

        try:
            wildcard_path = base.joinpath(filename)
-           if wildcard_path.exists():
+           if wildcard_path.exists() and wildcard_path.is_file():
                return FileResponse(wildcard_path)
            else:
-               return JSONResponse({}, status_code=404)
+               return Response(status_code=404)
        except Exception as e:
            return JSONResponse({"error": str(e)}, status_code=500)

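    # Hypothetical hardening sketch: joinpath() does not sandbox the result, so
    # a filename like "../../x" could escape the wildcard folder. Resolving and
    # checking containment (Path.is_relative_to, Python 3.9+) would reject it:
    def safe_join(base: Path, relative: str):
        candidate = (base / relative).resolve()
        return candidate if candidate.is_relative_to(base.resolve()) else None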
@app.get("/tacapi/v1/refresh-styles-if-changed")
|
||||
async def refresh_styles_if_changed():
|
||||
global last_style_mtime
|
||||
|
||||
mtime = get_style_mtime()
|
||||
if mtime is not None and mtime > last_style_mtime:
|
||||
last_style_mtime = mtime
|
||||
# Update temp file
|
||||
if shared.prompt_styles is not None:
|
||||
write_style_names()
|
||||
|
||||
return Response(status_code=200) # Success
|
||||
else:
|
||||
return Response(status_code=304) # Not modified
|
||||
    def db_request(func, get=False):
        if db is not None:
            try:
                if get:
                    ret = func()
                    # isinstance() instead of "ret is list", which never matched
                    if isinstance(ret, list):
                        ret = [{"name": t[0], "type": t[1], "count": t[2], "lastUseDate": t[3]} for t in ret]
                    return JSONResponse({"result": ret})
                else:
                    func()
            except sqlite3.Error as e:
                return JSONResponse({"error": str(e)}, status_code=500)
        else:
            return JSONResponse({"error": "Database not initialized"}, status_code=500)
|
||||
@app.post("/tacapi/v1/increase-use-count")
|
||||
async def increase_use_count(tagname: str, ttype: int, neg: bool):
|
||||
db_request(lambda: db.increase_tag_count(tagname, ttype, neg))
|
||||
|
||||
@app.get("/tacapi/v1/get-use-count")
|
||||
async def get_use_count(tagname: str, ttype: int, neg: bool):
|
||||
return db_request(lambda: db.get_tag_count(tagname, ttype, neg), get=True)
|
||||
|
||||
    # Small dataholder class
    class UseCountListRequest(BaseModel):
        tagNames: list[str]
        tagTypes: list[int]
        neg: bool = False

    # Semantically weird to use POST here, but it's required for the request body on the JS side
    @app.post("/tacapi/v1/get-use-count-list")
    async def get_use_count_list(body: UseCountListRequest):
        # If a date limit is set (> 0), pass it to the db
        date_limit = getattr(shared.opts, "tac_frequencyMaxAge", 30)
        date_limit = date_limit if date_limit > 0 else None

        if db:
            count_list = list(db.get_tag_counts(body.tagNames, body.tagTypes, body.neg, date_limit))
        else:
            count_list = None

        # If a limit is set, return at most the top n results by count
        if count_list and len(count_list):
            limit = int(min(getattr(shared.opts, "tac_frequencyRecommendCap", 10), len(count_list)))
            # Sort by count and return the top n
            if limit > 0:
                count_list = sorted(count_list, key=lambda x: x[2], reverse=True)[:limit]

        return db_request(lambda: count_list, get=True)
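    # Illustrative client call for the endpoint above; host/port assume a
    # default local webui and the tag names are examples:
    #
    #     import requests
    #     payload = {"tagNames": ["1girl", "solo"], "tagTypes": [0, 0], "neg": False}
    #     resp = requests.post("http://127.0.0.1:7860/tacapi/v1/get-use-count-list", json=payload)
    #     print(resp.json())  # {"result": [{"name": ..., "type": ..., "count": ..., "lastUseDate": ...}, ...]}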
    @app.put("/tacapi/v1/reset-use-count")
    async def reset_use_count(tagname: str, ttype: int, pos: bool, neg: bool):
        db_request(lambda: db.reset_tag_count(tagname, ttype, pos, neg))

    @app.get("/tacapi/v1/get-all-use-counts")
    async def get_all_tag_counts():
        return db_request(lambda: db.get_all_tags(), get=True)


script_callbacks.on_app_started(api_tac)

190  scripts/tag_frequency_db.py  Normal file
@@ -0,0 +1,190 @@
import sqlite3
from contextlib import contextmanager

from scripts.shared_paths import TAGS_PATH

db_file = TAGS_PATH.joinpath("tag_frequency.db")
timeout = 30
db_ver = 1

@contextmanager
def transaction(db=db_file):
    """Context manager for database transactions.
    Ensures that the connection is properly closed after the transaction.
    """
    conn = None  # guard so the finally block can't hit an unbound name if connect() fails
    try:
        conn = sqlite3.connect(db, timeout=timeout)

        conn.isolation_level = None
        cursor = conn.cursor()
        cursor.execute("BEGIN")
        yield cursor
        cursor.execute("COMMIT")
    except sqlite3.Error as e:
        print("Tag Autocomplete: Frequency database error:", e)
    finally:
        if conn:
            conn.close()

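# Usage sketch for the context manager above: it yields a cursor inside an
# explicit BEGIN/COMMIT, so a read is as simple as
#
#     with transaction() as cursor:
#         cursor.execute("SELECT COUNT(*) FROM tag_frequency")
#         print(cursor.fetchone()[0])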
class TagFrequencyDb:
    """Class containing creation and interaction methods for the tag frequency database"""

    def __init__(self) -> None:
        self.version = self.__check()

    def __check(self):
        if not db_file.exists():
            print("Tag Autocomplete: Creating frequency database")
            with transaction() as cursor:
                self.__create_db(cursor)
                self.__update_db_data(cursor, "version", db_ver)
            print("Tag Autocomplete: Database successfully created")

        return self.__get_version()
    def __create_db(self, cursor: sqlite3.Cursor):
        cursor.execute(
            """
            CREATE TABLE IF NOT EXISTS db_data (
                key TEXT PRIMARY KEY,
                value TEXT
            )
            """
        )

        cursor.execute(
            """
            CREATE TABLE IF NOT EXISTS tag_frequency (
                name TEXT NOT NULL,
                type INT NOT NULL,
                count_pos INT,
                count_neg INT,
                last_used TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
                PRIMARY KEY (name, type)
            )
            """
        )
    def __update_db_data(self, cursor: sqlite3.Cursor, key, value):
        cursor.execute(
            """
            INSERT OR REPLACE
            INTO db_data (key, value)
            VALUES (?, ?)
            """,
            (key, value),
        )

    def __get_version(self):
        db_version = None
        with transaction() as cursor:
            cursor.execute(
                """
                SELECT value
                FROM db_data
                WHERE key = 'version'
                """
            )
            db_version = cursor.fetchone()

        return db_version[0] if db_version else 0
    def get_all_tags(self):
        with transaction() as cursor:
            cursor.execute(
                """
                SELECT name, type, count_pos, count_neg, last_used
                FROM tag_frequency
                WHERE count_pos > 0 OR count_neg > 0
                ORDER BY count_pos + count_neg DESC
                """
            )
            tags = cursor.fetchall()

        return tags
    def get_tag_count(self, tag, ttype, negative=False):
        count_str = "count_neg" if negative else "count_pos"
        with transaction() as cursor:
            cursor.execute(
                f"""
                SELECT {count_str}, last_used
                FROM tag_frequency
                WHERE name = ? AND type = ?
                """,
                (tag, ttype),
            )
            tag_count = cursor.fetchone()

        if tag_count:
            return tag_count[0], tag_count[1]
        else:
            return 0, None
    def get_tag_counts(self, tags: list[str], ttypes: list[int], negative=False, date_limit=None):
        count_str = "count_neg" if negative else "count_pos"
        with transaction() as cursor:
            for tag, ttype in zip(tags, ttypes):
                if date_limit is not None:
                    cursor.execute(
                        f"""
                        SELECT {count_str}, last_used
                        FROM tag_frequency
                        WHERE name = ? AND type = ?
                        AND last_used > datetime('now', '-' || ? || ' days')
                        """,
                        (tag, ttype, date_limit),
                    )
                else:
                    cursor.execute(
                        f"""
                        SELECT {count_str}, last_used
                        FROM tag_frequency
                        WHERE name = ? AND type = ?
                        """,
                        (tag, ttype),
                    )
                tag_count = cursor.fetchone()
                if tag_count:
                    yield (tag, ttype, tag_count[0], tag_count[1])
                else:
                    yield (tag, ttype, 0, None)
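    # The age filter above uses SQLite's datetime modifiers: the parameterized
    # '-N days' string shifts 'now', and rows whose last_used is older fall out
    # of the result. Quick self-contained check (sketch):
    #
    #     import sqlite3
    #     conn = sqlite3.connect(":memory:")
    #     print(conn.execute("SELECT datetime('now', '-' || ? || ' days')", (30,)).fetchone())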
    def increase_tag_count(self, tag, ttype, negative=False):
        pos_count = self.get_tag_count(tag, ttype, False)[0]
        neg_count = self.get_tag_count(tag, ttype, True)[0]

        if negative:
            neg_count += 1
        else:
            pos_count += 1

        with transaction() as cursor:
            cursor.execute(
                """
                INSERT OR REPLACE
                INTO tag_frequency (name, type, count_pos, count_neg)
                VALUES (?, ?, ?, ?)
                """,
                (tag, ttype, pos_count, neg_count),
            )
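    def increase_tag_count_atomic(self, tag, ttype):
        # Hypothetical alternative (sketch, not the method actually used): the
        # read-modify-write in increase_tag_count spans several transactions;
        # SQLite's UPSERT (3.24+) can do the positive increment atomically in
        # one statement, reusing the schema created above.
        with transaction() as cursor:
            cursor.execute(
                """
                INSERT INTO tag_frequency (name, type, count_pos, count_neg)
                VALUES (?, ?, 1, 0)
                ON CONFLICT(name, type) DO UPDATE
                SET count_pos = count_pos + 1, last_used = CURRENT_TIMESTAMP
                """,
                (tag, ttype),
            )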
    def reset_tag_count(self, tag, ttype, positive=True, negative=False):
        if positive and negative:
            set_str = "count_pos = 0, count_neg = 0"
        elif positive:
            set_str = "count_pos = 0"
        elif negative:
            set_str = "count_neg = 0"
        else:
            return  # nothing to reset; avoids an unbound set_str below

        with transaction() as cursor:
            cursor.execute(
                f"""
                UPDATE tag_frequency
                SET {set_str}
                WHERE name = ? AND type = ?
                """,
                (tag, ttype),
            )
113301  tags/EnglishDictionary.csv  Normal file
File diff suppressed because it is too large
238668  tags/danbooru.csv
File diff suppressed because it is too large
221787  tags/danbooru_e621_merged.csv  Normal file
File diff suppressed because one or more lines are too long
@@ -28,5 +28,17 @@
        "terms": "Water, Magic, Fancy",
        "content": "(extremely detailed CG unity 8k wallpaper), (masterpiece), (best quality), (ultra-detailed), (best illustration),(best shadow), (an extremely delicate and beautiful), classic, dynamic angle, floating, fine detail, Depth of field, classic, (painting), (sketch), (bloom), (shine), glinting stars,\n\na girl, solo, bare shoulders, flat chest, diamond and glaring eyes, beautiful detailed cold face, very long blue and sliver hair, floating black feathers, wavy hair, extremely delicate and beautiful girls, beautiful detailed eyes, glowing eyes,\n\nriver, (forest),palace, (fairyland,feather,flowers, nature),(sunlight),Hazy fog, mist",
        "color": 5
    },
    {
        "name": "Pony-Positive",
        "terms": "Pony,Score,Positive,Quality",
        "content": "score_9, score_8_up, score_7_up, score_6_up, source_anime, source_furry, source_pony, source_cartoon",
        "color": 1
    },
    {
        "name": "Pony-Negative",
        "terms": "Pony,Score,Negative,Quality",
        "content": "score_1, score_2, score_3, score_4, score_5, source_anime, source_furry, source_pony, source_cartoon",
        "color": 3
    }
]
110665  tags/derpibooru.csv  Normal file
File diff suppressed because it is too large
200358  tags/e621.csv
File diff suppressed because one or more lines are too long
22419  tags/e621_sfw.csv  Normal file
File diff suppressed because one or more lines are too long
160178  tags/noob_characters-chants.json  Normal file
File diff suppressed because it is too large