Compare commits


12 Commits
2.7.4 ... 2.9.0

Author SHA1 Message Date
DominikDoom
4a415f1a04 Fix for duplicate wildcard entries
Caused by multiple yaml files specifying the same subkey
2023-07-29 17:27:43 +02:00
DominikDoom
21de5fe003 Merge branch 'feature-fix-dynamic-prompt-yaml' into main
Fixes #209
2023-07-29 16:30:19 +02:00
DominikDoom
a020df91b2 Fix wildcard traversal condition 2023-07-29 16:26:07 +02:00
DominikDoom
0260765b27 Add support for dynamic-prompts yaml wildcards 2023-07-29 16:13:23 +02:00
DominikDoom
638c073f37 Merge branch 'feature-native-lora-config' into main 2023-07-26 15:05:37 +02:00
DominikDoom
d11b53083b Update README.md 2023-07-26 15:04:59 +02:00
DominikDoom
571072eea4 Update README.md 2023-07-26 15:03:52 +02:00
DominikDoom
acfdbf1ed4 Fix for loras in base folder 2023-07-26 14:53:03 +02:00
DominikDoom
2e271aea5c Support for new webui 1.5.0 lora features
Prefers trigger words over the model-keyword ones
Uses custom per-lora multiplier if set
2023-07-26 14:38:51 +02:00
DominikDoom
b28497764f Check keywords for .pt and .ckpt loras too
Especially for custom keywords, the preset list mostly uses safetensors
2023-07-23 11:27:02 +02:00
DominikDoom
0d9d5f1e44 Safety check & remove log 2023-07-23 11:08:29 +02:00
DominikDoom
de3380818e Quote lora filenames to handle commas in filenames
Fixes #206
2023-07-23 11:05:44 +02:00
13 changed files with 343 additions and 116 deletions

View File

@@ -124,23 +124,35 @@ Completion for these types is triggered by typing `<`. By default it will show t
- `<h:` or `<hypernet:` will only show Hypernetworks
### Lora / Lyco trigger word completion
This is an advanced feature that will try to add known trigger words on autocompleting a Lora/Lyco.
This feature will try to add known trigger words on autocompleting a Lora/Lyco.
It uses the list provided by the [model-keyword](https://github.com/mix1009/model-keyword/) extension, which thus needs to be installed to use this feature. The list is also regularly updated through it.
It primarily uses the list provided by the [model-keyword](https://github.com/mix1009/model-keyword/) extension, which thus needs to be installed to use this feature. The list is also regularly updated through it.
However, once installed, you can deactivate it if you want, since tag autocomplete only needs the local keyword lists model-keyword ships with, not the extension itself.
The used files are `lora-keywords.txt` and `lora-keywords-user.txt` in the model-keyword installation folder.
The used files are `lora-keyword.txt` and `lora-keyword-user.txt` in the model-keyword installation folder.
If the main file isn't found, the feature will simply deactivate itself; everything else should continue to work normally.
To add custom mappings for unknown Loras, you can use the UI provided by model-keyword; it will automatically write them to the `lora-keywords-user.txt` for you (and create it if it doesn't exist).
The only issue is that it has no official support for the Lycoris extension and doesn't scan its folder for files, so to add Lycos through the UI you will have to temporarily move them into the Lora model folder so they can be selected in model-keyword's dropdown.
Some are already included in the default list though, so trying it out first is advisable.
<details>
<summary>Walkthrough to add custom keywords</summary>
#### Note:
As of [v1.5.0](https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/a3ddf464a2ed24c999f67ddfef7969f8291567be), the webui provides a native method to add activation keywords for Lora through the Extra networks config UI.
These trigger words will always be preferred over the model-keyword ones and can be used without installing the model-keyword extension. This will, however, be limited to manually added keywords; for automatic discovery of keywords you will still need the big list provided by model-keyword.
![image](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/assets/34448969/4302c44e-c632-473d-a14a-76f164f966cb)
</details>
After adding your custom keywords, you will need to either restart the UI or use the "Refresh TAC temp files" settings button.
Custom trigger words can be added through two methods:
1. Using the extra networks UI (recommended):
- Requires webui v1.5.0 or later, but is much easier to use and works without the model-keyword extension
- This method requires no manual refresh
- <details>
<summary>Image example</summary>
![edit button](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/assets/34448969/22e95040-1d85-4b7e-a005-1918fafec807)
![lora_edit](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/assets/34448969/3e6c5245-d3bc-498d-8cd2-26eadf8882e7)
</details>
2. Through the model-keyword UI:
- One issue with this method is that model-keyword has no official support for the Lycoris extension and doesn't scan its folder for files, so to add Lycos through the UI you will have to temporarily move them into the Lora model folder so they can be selected in model-keyword's dropdown. Some are already included in the default list though, so trying it out first is advisable.
- After adding your custom keywords, you will need to either restart the UI or use the "Refresh TAC temp files" settings button.
- <details>
<summary>Image example</summary>
![image](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/assets/34448969/4302c44e-c632-473d-a14a-76f164f966cb)
</details>
Sometimes the inserted keywords can be wrong due to a hash collision; however, model-keyword and tag autocomplete also take the filename into account if the collision is known.
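As a rough sketch of how that filename disambiguation behaves in the new insertion code further below (the dictionary shape is inferred from the modelKeyword parser changes in this diff; the helper name and sample values are illustrative):

// modelKeywordDict: Map<hash, Map<filename | "none", keywords>>
function lookupKeywords(modelKeywordDict, hash, loraName) {
    const nameDict = modelKeywordDict.get(hash);
    if (!nameDict) return null;
    // On a known hash collision, the filename (checked with every supported
    // extension) selects the right entry; "none" is the unambiguous fallback.
    const candidates = [`${loraName}.safetensors`, `${loraName}.pt`, `${loraName}.ckpt`];
    for (const name of candidates) {
        if (nameDict.has(name)) return nameDict.get(name);
    }
    return nameDict.get("none") ?? null;
}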

View File

@@ -11,6 +11,7 @@ var extras = [];
var wildcardFiles = [];
var wildcardExtFiles = [];
var yamlWildcards = [];
var umiWildcards = [];
var embeddings = [];
var hypernetworks = [];
var loras = [];

View File

@@ -8,10 +8,11 @@ const ResultType = Object.freeze({
"wildcardTag": 4,
"wildcardFile": 5,
"yamlWildcard": 6,
"hypernetwork": 7,
"lora": 8,
"lyco": 9,
"chant": 10
"umiWildcard": 7,
"hypernetwork": 8,
"lora": 9,
"lyco": 10,
"chant": 11
});
// Class to hold result data and annotations to make it clearer to use

View File

@@ -61,6 +61,26 @@ async function loadCSV(path) {
return parseCSV(text);
}
// Fetch API
async function fetchAPI(url, json = true, cache = false) {
if (!cache) {
const appendChar = url.includes("?") ? "&" : "?";
url += `${appendChar}${new Date().getTime()}`
}
let response = await fetch(url);
if (response.status != 200) {
console.error(`Error fetching API endpoint "${url}": ` + response.status, response.statusText);
return null;
}
if (json)
return await response.json();
else
return await response.text();
}
// Debounce function to prevent spamming the autocomplete function
var dbTimeOut;
const debounce = (func, wait = 300) => {
@@ -89,6 +109,28 @@ function difference(a, b) {
)].reduce((acc, [v, count]) => acc.concat(Array(Math.abs(count)).fill(v)), []);
}
// Object flatten function adapted from https://stackoverflow.com/a/61602592
// $roots keeps previous parent properties as they will be added as a prefix for each prop.
// $sep lets you separate nested paths with something other than a dot.
function flatten(obj, roots = [], sep = ".") {
return Object.keys(obj).reduce(
(memo, prop) =>
Object.assign(
// create a new object
{},
// include previously returned object
memo,
Object.prototype.toString.call(obj[prop]) === "[object Object]"
? // keep working if value is an object
flatten(obj[prop], roots.concat([prop]), sep)
: // include current prop and value and prefix prop with the roots
{ [roots.concat([prop]).join(sep)]: obj[prop] }
),
{}
);
}
// Sliding window function to get possible combination groups of an array
function toNgrams(inputArray, size) {
return Array.from(
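For illustration, a hedged usage sketch of the new fetchAPI helper; the first URL is an endpoint added in this changeset, the second is hypothetical. With cache left at its default of false, a timestamp query parameter is appended so the browser cannot serve a stale response.

const info = await fetchAPI("tacapi/v1/lora-info/myLora.safetensors"); // parsed JSON, or null on a non-200 status
const text = await fetchAPI("hypothetical/text/endpoint", false, true); // raw text, cacheable (no timestamp appended)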
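And a minimal input/output sketch for flatten, which the wildcard script below uses with "/" as the separator:

flatten({ hair: { colors: { light: ["blonde", "silver"] } } }, [], "/");
// -> { "hair/colors/light": ["blonde", "silver"] }
// Leaf arrays survive intact, because only plain objects are recursed into.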

View File

@@ -29,18 +29,28 @@ class LoraParser extends BaseTagParser {
async function load() {
if (loras.length === 0) {
try {
loras = (await readFile(`${tagBasePath}/temp/lora.txt`)).split("\n")
.filter(x => x.trim().length > 0) // Remove empty lines
.map(x => x.trim().split(",")); // Remove carriage returns and padding if it exists, split into name, hash pairs
loras = (await loadCSV(`${tagBasePath}/temp/lora.txt`))
.filter(x => x[0]?.trim().length > 0) // Remove empty lines
.map(x => [x[0]?.trim(), x[1]]); // Trim filenames and return the name, hash pairs
} catch (e) {
console.error("Error loading lora.txt: " + e);
}
}
}
function sanitize(tagType, text) {
async function sanitize(tagType, text) {
if (tagType === ResultType.lora) {
return `<lora:${text}:${TAC_CFG.extraNetworksDefaultMultiplier}>`;
let multiplier = TAC_CFG.extraNetworksDefaultMultiplier;
let info = await fetchAPI(`tacapi/v1/lora-info/${text}`)
if (info && info["preferred weight"]) {
multiplier = info["preferred weight"];
}
const lastDot = text.lastIndexOf(".");
const lastSlash = text.lastIndexOf("/");
const name = text.substring(lastSlash + 1, lastDot);
return `<lora:${name}:${multiplier}>`;
}
return null;
}
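For illustration, the effect of the new async sanitize (hypothetical filename; assumes the Lora's JSON metadata contains {"preferred weight": 0.8}):

// Inside the async completion flow:
const tag = await sanitize(ResultType.lora, "styles/myLora.safetensors");
// tag === "<lora:myLora:0.8>": folder and extension are stripped, and the
// per-Lora weight wins; without metadata it falls back to
// TAC_CFG.extraNetworksDefaultMultiplier.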

View File

@@ -29,18 +29,28 @@ class LycoParser extends BaseTagParser {
async function load() {
if (lycos.length === 0) {
try {
lycos = (await readFile(`${tagBasePath}/temp/lyco.txt`)).split("\n")
.filter(x => x.trim().length > 0) // Remove empty lines
.map(x => x.trim().split(",")); // Remove carriage returns and padding if it exists, split into name, hash pairs
lycos = (await loadCSV(`${tagBasePath}/temp/lyco.txt`))
.filter(x => x[0]?.trim().length > 0) // Remove empty lines
.map(x => [x[0]?.trim(), x[1]]); // Trim filenames and return the name, hash pairs
} catch (e) {
console.error("Error loading lyco.txt: " + e);
}
}
}
function sanitize(tagType, text) {
async function sanitize(tagType, text) {
if (tagType === ResultType.lyco) {
return `<lyco:${text}:${TAC_CFG.extraNetworksDefaultMultiplier}>`;
let multiplier = TAC_CFG.extraNetworksDefaultMultiplier;
let info = await fetchAPI(`tacapi/v1/lyco-info/${text}`)
if (info && info["preferred weight"]) {
multiplier = info["preferred weight"];
}
const lastDot = text.lastIndexOf(".");
const lastSlash = text.lastIndexOf("/");
const name = text.substring(lastSlash + 1, lastDot);
return `<lyco:${name}:${multiplier}>`;
}
return null;
}

View File

@@ -5,21 +5,20 @@ async function load() {
if (modelKeywordPath.length > 0 && modelKeywordDict.size === 0) {
try {
let lines = [];
let csv_lines = [];
// Only add default keywords if wanted by the user
if (TAC_CFG.modelKeywordCompletion !== "Only user list")
lines = (await readFile(`${modelKeywordPath}/lora-keyword.txt`)).split("\n");
csv_lines = (await loadCSV(`${modelKeywordPath}/lora-keyword.txt`));
// Add custom user keywords if the file exists
if (customFileExists)
lines = lines.concat((await readFile(`${modelKeywordPath}/lora-keyword-user.txt`)).split("\n"));
csv_lines = csv_lines.concat((await loadCSV(`${modelKeywordPath}/lora-keyword-user.txt`)));
if (lines.length === 0) return;
if (csv_lines.length === 0) return;
csv_lines = csv_lines.filter(x => x[0].trim().length > 0 && x[0].trim()[0] !== "#") // Remove empty lines and comments
lines = lines.filter(x => x.trim().length > 0 && x.trim()[0] !== "#") // Remove empty lines and comments
// Add to the dict
lines.forEach(line => {
const parts = line.split(",");
csv_lines.forEach(parts => {
const hash = parts[0];
const keywords = parts[1].replaceAll("| ", ", ").replaceAll("|", ", ").trim();
const lastSepIndex = parts[2]?.lastIndexOf("/") + 1 || parts[2]?.lastIndexOf("\\") + 1 || 0;

View File

@@ -74,7 +74,7 @@ class UmiParser extends BaseTagParser {
//console.log({ matches })
const filteredWildcards = (tagword) => {
const wildcards = yamlWildcards.filter(x => {
const wildcards = umiWildcards.filter(x => {
let tags = x[1];
const matchesNeg =
matches.negative.length === 0
@@ -144,7 +144,7 @@ class UmiParser extends BaseTagParser {
// Add final results
let finalResults = [];
tempResults.forEach(t => {
let result = new AutocompleteResult(t[0].trim(), ResultType.yamlWildcard)
let result = new AutocompleteResult(t[0].trim(), ResultType.umiWildcard)
result.count = t[1];
finalResults.push(result);
});
@@ -156,7 +156,7 @@ class UmiParser extends BaseTagParser {
// Add final results
let finalResults = [];
filteredWildcardsSorted.forEach(t => {
let result = new AutocompleteResult(t[0].trim(), ResultType.yamlWildcard)
let result = new AutocompleteResult(t[0].trim(), ResultType.umiWildcard)
result.count = t[1];
finalResults.push(result);
});
@@ -171,7 +171,7 @@ class UmiParser extends BaseTagParser {
// Add final results
let finalResults = [];
filteredWildcardsSorted.forEach(t => {
let result = new AutocompleteResult(t[0].trim(), ResultType.yamlWildcard)
let result = new AutocompleteResult(t[0].trim(), ResultType.umiWildcard)
result.count = t[1];
finalResults.push(result);
});
@@ -184,8 +184,8 @@ class UmiParser extends BaseTagParser {
}
function updateUmiTags( tagType, sanitizedText, newPrompt, textArea) {
// If it was a yaml wildcard, also update the umiPreviousTags
if (tagType === ResultType.yamlWildcard && originalTagword.length > 0) {
// If it was a umi wildcard, also update the umiPreviousTags
if (tagType === ResultType.umiWildcard && originalTagword.length > 0) {
let umiSubPrompts = [...newPrompt.matchAll(UMI_PROMPT_REGEX)];
let umiTags = [];
@@ -203,11 +203,11 @@ function updateUmiTags( tagType, sanitizedText, newPrompt, textArea) {
}
async function load() {
if (yamlWildcards.length === 0) {
if (umiWildcards.length === 0) {
try {
let yamlTags = (await readFile(`${tagBasePath}/temp/wcet.txt`)).split("\n");
let umiTags = (await readFile(`${tagBasePath}/temp/umi_tags.txt`)).split("\n");
// Split into tag, count pairs
yamlWildcards = yamlTags.map(x => x
umiWildcards = umiTags.map(x => x
.trim()
.split(","))
.map(([i, ...rest]) => [
@@ -218,14 +218,14 @@ async function load() {
}, {}),
]);
} catch (e) {
console.error("Error loading yaml wildcards: " + e);
console.error("Error loading umi wildcards: " + e);
}
}
}
function sanitize(tagType, text) {
// Replace underscores only if the yaml tag is not using them
if (tagType === ResultType.yamlWildcard && !yamlWildcards.includes(text)) {
// Replace underscores only if the umi tag is not using them
if (tagType === ResultType.umiWildcard && !umiWildcards.includes(text)) {
return text.replaceAll("_", " ");
}
return null;

View File

@@ -13,12 +13,35 @@ class WildcardParser extends BaseTagParser {
let wcWord = wcMatch[0][2];
// Look in normal wildcard files
let wcFound = wildcardFiles.find(x => x[1].toLowerCase() === wcFile);
let wcFound = wildcardFiles.filter(x => x[1].toLowerCase() === wcFile);
if (wcFound.length === 0) wcFound = null;
// Use found wildcard file or look in external wildcard files
let wcPair = wcFound || wildcardExtFiles.find(x => x[1].toLowerCase() === wcFile);
let wcPairs = wcFound || wildcardExtFiles.filter(x => x[1].toLowerCase() === wcFile);
let wildcards = (await readFile(`${wcPair[0]}/${wcPair[1]}.txt`)).split("\n")
.filter(x => x.trim().length > 0 && !x.startsWith('#')); // Remove empty lines and comments
if (!wcPairs) return [];
let wildcards = [];
for (let i = 0; i < wcPairs.length; i++) {
const wcPair = wcPairs[i];
if (!wcPair[0] || !wcPair[1]) continue;
if (wcPair[0].endsWith(".yaml")) {
const getDescendantProp = (obj, desc) => {
const arr = desc.split("/");
while (arr.length) {
obj = obj[arr.shift()];
}
return obj;
}
wildcards = wildcards.concat(getDescendantProp(yamlWildcards[wcPair[0]], wcPair[1]));
} else {
const fileContent = (await readFile(`${wcPair[0]}/${wcPair[1]}.txt`)).split("\n")
.filter(x => x.trim().length > 0 && !x.startsWith('#')); // Remove empty lines and comments
wildcards = wildcards.concat(fileContent);
}
}
wildcards.sort((a, b) => a.localeCompare(b));
let finalResults = [];
let tempResults = wildcards.filter(x => (wcWord !== null && wcWord.length > 0) ? x.toLowerCase().includes(wcWord) : x) // Filter by tagword
@@ -44,13 +67,27 @@ class WildcardFileParser extends BaseTagParser {
}
let finalResults = [];
const alreadyAdded = new Map();
// Get final results
tempResults.forEach(wcFile => {
let result = new AutocompleteResult(wcFile[1].trim(), ResultType.wildcardFile);
result.meta = "Wildcard file";
// Skip duplicate entries in case multiple files have the same name or yaml category
if (alreadyAdded.has(wcFile[1])) return;
let result = null;
if (wcFile[0].endsWith(".yaml")) {
result = new AutocompleteResult(wcFile[1].trim(), ResultType.yamlWildcard);
result.meta = "YAML wildcard collection";
} else {
result = new AutocompleteResult(wcFile[1].trim(), ResultType.wildcardFile);
result.meta = "Wildcard file";
}
finalResults.push(result);
alreadyAdded.set(wcFile[1], true);
});
finalResults.sort((a, b) => a.text.localeCompare(b.text));
return finalResults;
}
}
@@ -87,6 +124,17 @@ async function load() {
wcExtFile = wcExtFile.map(x => [base, x]);
wildcardExtFiles.push(...wcExtFile);
}
// Load the yaml wildcard json file and append it as a wildcard file, appending each key as a path component until we reach the end
yamlWildcards = await readFile(`${tagBasePath}/temp/wc_yaml.json`, true);
// Append each key as a path component until we reach a leaf
Object.keys(yamlWildcards).forEach(file => {
const flattened = flatten(yamlWildcards[file], [], "/");
Object.keys(flattened).forEach(key => {
wildcardExtFiles.push([file, key]);
});
});
} catch (e) {
console.error("Error loading wildcards: " + e);
}
@@ -94,7 +142,7 @@ async function load() {
}
function sanitize(tagType, text) {
if (tagType === ResultType.wildcardFile) {
if (tagType === ResultType.wildcardFile || tagType === ResultType.yamlWildcard) {
return `__${text}__`;
} else if (tagType === ResultType.wildcardTag) {
return text.replace(/^.*?: /g, "");
@@ -104,7 +152,7 @@ function sanitize(tagType, text) {
function keepOpenIfWildcard(tagType, sanitizedText, newPrompt, textArea) {
// If it's a wildcard, we want to keep the results open so the user can select another wildcard
if (tagType === ResultType.wildcardFile) {
if (tagType === ResultType.wildcardFile || tagType === ResultType.yamlWildcard) {
hideBlocked = true;
autocomplete(textArea, newPrompt, sanitizedText);
setTimeout(() => { hideBlocked = false; }, 450);
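A sketch of the new YAML wildcard round trip (hypothetical file name and keys):

// temp/wc_yaml.json as written by the Python side:
const yamlWildcards = { "clothing.yaml": { outfits: { formal: ["suit", "gown"] } } };
// load() flattens each file's nested keys into path components:
const flattened = flatten(yamlWildcards["clothing.yaml"], [], "/");
// -> { "outfits/formal": ["suit", "gown"] }, registered as the external
// wildcard pair ["clothing.yaml", "outfits/formal"], so completing
// __outfits/formal__ resolves through getDescendantProp back to the list.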

View File

@@ -375,7 +375,7 @@ async function insertTextAtCursor(textArea, result, tagword, tabCompletedWithout
}
}
if (tagType === ResultType.wildcardFile
if ((tagType === ResultType.wildcardFile || tagType === ResultType.yamlWildcard)
&& tabCompletedWithoutChoice
&& TAC_CFG.wildcardCompletionMode !== "Always fully"
&& sanitizedText.includes("/")) {
@@ -402,9 +402,11 @@ async function insertTextAtCursor(textArea, result, tagword, tabCompletedWithout
}
}
// Don't cut off the __ at the end if it is already the full path
if (firstDifference < longestResult) {
if (firstDifference > 0 && firstDifference < longestResult) {
// +2 because the sanitized text already has the __ at the start but the matched text doesn't
sanitizedText = sanitizedText.substring(0, firstDifference + 2);
} else if (firstDifference === 0) {
sanitizedText = tagword;
}
}
}
@@ -420,7 +422,7 @@ async function insertTextAtCursor(textArea, result, tagword, tabCompletedWithout
var optionalSeparator = "";
let extraNetworkTypes = [ResultType.hypernetwork, ResultType.lora];
let noCommaTypes = [ResultType.wildcardFile, ResultType.yamlWildcard].concat(extraNetworkTypes);
let noCommaTypes = [ResultType.wildcardFile, ResultType.yamlWildcard, ResultType.umiWildcard].concat(extraNetworkTypes);
if (!noCommaTypes.includes(tagType)) {
// Append comma if enabled and not already present
let beforeComma = surrounding.match(new RegExp(`${escapeRegExp(tagword)}[,:]`, "i")) !== null;
@@ -445,30 +447,45 @@ async function insertTextAtCursor(textArea, result, tagword, tabCompletedWithout
// Add lora/lyco keywords if enabled and found
let keywordsLength = 0;
if (TAC_CFG.modelKeywordCompletion !== "Never" && modelKeywordPath.length > 0 && (tagType === ResultType.lora || tagType === ResultType.lyco)) {
if (result.hash && result.hash !== "NOFILE" && result.hash.length > 0) {
let keywords = null;
if (TAC_CFG.modelKeywordCompletion !== "Never" && (tagType === ResultType.lora || tagType === ResultType.lyco)) {
let keywords = null;
// Check built-in activation words first
if (tagType === ResultType.lora || tagType === ResultType.lyco) {
let info = await fetchAPI(`tacapi/v1/lora-info/${result.text}`)
if (info && info["activation text"]) {
keywords = info["activation text"];
}
}
if (!keywords && modelKeywordPath.length > 0 && result.hash && result.hash !== "NOFILE" && result.hash.length > 0) {
let nameDict = modelKeywordDict.get(result.hash);
let name = result.text + ".safetensors";
let names = [result.text + ".safetensors", result.text + ".pt", result.text + ".ckpt"];
if (nameDict) {
if (nameDict.has(name))
keywords = nameDict.get(name);
else
let found = false;
names.forEach(name => {
if (!found && nameDict.has(name)) {
found = true;
keywords = nameDict.get(name);
}
});
if (!found)
keywords = nameDict.get("none");
}
}
if (keywords && keywords.length > 0) {
textBeforeKeywordInsertion = newPrompt;
newPrompt = `${keywords}, ${newPrompt}`; // Insert keywords
textAfterKeywordInsertion = newPrompt;
keywordInsertionUndone = false;
setTimeout(() => lastEditWasKeywordInsertion = true, 200)
keywordsLength = keywords.length + 2; // +2 for the comma and space
}
if (keywords && keywords.length > 0) {
textBeforeKeywordInsertion = newPrompt;
newPrompt = `${keywords}, ${newPrompt}`; // Insert keywords
textAfterKeywordInsertion = newPrompt;
keywordInsertionUndone = false;
setTimeout(() => lastEditWasKeywordInsertion = true, 200)
keywordsLength = keywords.length + 2; // +2 for the comma and space
}
}
@@ -566,6 +583,11 @@ function addResultsToList(textArea, results, tagword, resetList) {
if (!TAC_CFG.alias.onlyShowAlias && result.text !== bestAlias)
displayText += " ➝ " + result.text;
} else if (result.type === ResultType.lora || result.type === ResultType.lyco) {
let lastDot = result.text.lastIndexOf(".");
let lastSlash = result.text.lastIndexOf("/");
let name = result.text.substring(lastSlash + 1, lastDot);
displayText = escapeHTML(name);
} else { // No alias
displayText = escapeHTML(result.text);
}
@@ -577,7 +599,8 @@ function addResultsToList(textArea, results, tagword, resetList) {
// Print search term bolded in result
itemText.innerHTML = displayText.replace(tagword, `<b>${tagword}</b>`);
if (result.type === ResultType.wildcardFile && itemText.innerHTML.includes("/")) {
const splitTypes = [ResultType.wildcardFile, ResultType.yamlWildcard]
if (splitTypes.includes(result.type) && itemText.innerHTML.includes("/")) {
let parts = itemText.innerHTML.split("/");
let lastPart = parts[parts.length - 1];
parts = parts.slice(0, parts.length - 1);
@@ -1094,7 +1117,7 @@ async function refreshTacTempFiles() {
setTimeout(async () => {
wildcardFiles = [];
wildcardExtFiles = [];
yamlWildcards = [];
umiWildcards = [];
embeddings = [];
hypernetworks = [];
loras = [];

View File

@@ -1,5 +1,6 @@
# This file provides support for the model-keyword extension to add known lora keywords on completion
import csv
import hashlib
from pathlib import Path
@@ -16,8 +17,11 @@ hash_dict = {}
def load_hash_cache():
with open(known_hashes_file, "r", encoding="utf-8") as file:
for line in file:
name, hash, mtime = line.replace("\n", "").split(",")
reader = csv.reader(
file.readlines(), delimiter=",", quotechar='"', skipinitialspace=True
)
for line in reader:
name, hash, mtime = line
hash_dict[name] = (hash, mtime)
@@ -26,7 +30,7 @@ def update_hash_cache():
if file_needs_update:
with open(known_hashes_file, "w", encoding="utf-8") as file:
for name, (hash, mtime) in hash_dict.items():
file.write(f"{name},{hash},{mtime}\n")
file.write(f'"{name}",{hash},{mtime}\n')
# Copy of the fast inaccurate hash function from the extension
@@ -71,6 +75,6 @@ def write_model_keyword_path():
return True
else:
print(
"Tag Autocomplete: Could not locate model-keyword extension, LORA/LYCO trigger word completion will be unavailable."
"Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu."
)
return False

View File

@@ -13,13 +13,13 @@ except ImportError:
# Webui root path
FILE_DIR = Path().absolute()
# The extension base path
EXT_PATH = FILE_DIR.joinpath('extensions')
EXT_PATH = FILE_DIR.joinpath("extensions")
# Tags base path
TAGS_PATH = Path(scripts.basedir()).joinpath('tags')
TAGS_PATH = Path(scripts.basedir()).joinpath("tags")
# The path to the folder containing the wildcards and embeddings
WILDCARD_PATH = FILE_DIR.joinpath('scripts/wildcards')
WILDCARD_PATH = FILE_DIR.joinpath("scripts/wildcards")
EMB_PATH = Path(shared.cmd_opts.embeddings_dir)
HYP_PATH = Path(shared.cmd_opts.hypernetwork_dir)
@@ -27,15 +27,16 @@ try:
LORA_PATH = Path(shared.cmd_opts.lora_dir)
except AttributeError:
LORA_PATH = None
try:
LYCO_PATH = Path(shared.cmd_opts.lyco_dir)
except AttributeError:
LYCO_PATH = None
def find_ext_wildcard_paths():
"""Returns the path to the extension wildcards folder"""
found = list(EXT_PATH.glob('*/wildcards/'))
found = list(EXT_PATH.glob("*/wildcards/"))
return found
@@ -43,11 +44,12 @@ def find_ext_wildcard_paths():
WILDCARD_EXT_PATHS = find_ext_wildcard_paths()
# The path to the temporary files
STATIC_TEMP_PATH = FILE_DIR.joinpath('tmp') # In the webui root, on windows it exists by default, on linux it doesn't
TEMP_PATH = TAGS_PATH.joinpath('temp') # Extension specific temp files
# In the webui root, on windows it exists by default, on linux it doesn't
STATIC_TEMP_PATH = FILE_DIR.joinpath("tmp")
TEMP_PATH = TAGS_PATH.joinpath("temp") # Extension specific temp files
# Make sure these folders exist
if not TEMP_PATH.exists():
TEMP_PATH.mkdir()
if not STATIC_TEMP_PATH.exists():
STATIC_TEMP_PATH.mkdir()
STATIC_TEMP_PATH.mkdir()

View File

@@ -2,10 +2,13 @@
# to a temporary file to expose it to the javascript side
import glob
import json
from pathlib import Path
import gradio as gr
import yaml
from fastapi import FastAPI
from fastapi.responses import FileResponse
from modules import script_callbacks, sd_hijack, shared
from scripts.model_keyword_support import (get_lora_simple_hash,
@@ -33,37 +36,73 @@ def get_ext_wildcards():
return wildcard_files
def is_umi_format(data):
"""Returns True if the YAML file is in UMI format."""
issue_found = False
for item in data:
if not (data[item] and 'Tags' in data[item] and isinstance(data[item]['Tags'], list)):
issue_found = True
break
return not issue_found
def get_ext_wildcard_tags():
def parse_umi_format(umi_tags, count, data):
for item in data:
umi_tags[count] = ','.join(data[item]['Tags'])
count += 1
def parse_dynamic_prompt_format(yaml_wildcards, data, path):
# Recurse subkeys, delete those without string lists as values
def recurse_dict(d: dict):
for key, value in d.copy().items():
if isinstance(value, dict):
recurse_dict(value)
elif not (isinstance(value, list) and all(isinstance(v, str) for v in value)):
del d[key]
recurse_dict(data)
# Add to yaml_wildcards
yaml_wildcards[path.name] = data
def get_yaml_wildcards():
"""Returns a list of all tags found in extension YAML files found under a Tags: key."""
wildcard_tags = {} # { tag: count }
yaml_files = []
for path in WILDCARD_EXT_PATHS:
yaml_files.extend(p for p in path.rglob("*.yml"))
yaml_files.extend(p for p in path.rglob("*.yaml"))
yaml_wildcards = {}
umi_tags = {} # { tag: count }
count = 0
for path in yaml_files:
try:
with open(path, encoding="utf8") as file:
data = yaml.safe_load(file)
if data:
for item in data:
if data[item] and 'Tags' in data[item] and isinstance(data[item]['Tags'], list):
wildcard_tags[count] = ','.join(data[item]['Tags'])
count += 1
else:
print('Issue with tags found in ' + path.name + ' at item ' + item)
if (data):
if (is_umi_format(data)):
parse_umi_format(umi_tags, count, data)
else:
parse_dynamic_prompt_format(yaml_wildcards, data, path)
else:
print('No data found in ' + path.name)
except yaml.YAMLError:
print('Issue in parsing YAML file ' + path.name )
print('Issue in parsing YAML file ' + path.name)
continue
# Sort by count
sorted_tags = sorted(wildcard_tags.items(), key=lambda item: item[1], reverse=True)
output = []
for tag, count in sorted_tags:
output.append(f"{tag},{count}")
return output
umi_sorted = sorted(umi_tags.items(), key=lambda item: item[1], reverse=True)
umi_output = []
for tag, count in umi_sorted:
umi_output.append(f"{tag},{count}")
if (len(umi_output) > 0):
write_to_temp_file('umi_tags.txt', umi_output)
with open(TEMP_PATH.joinpath("wc_yaml.json"), "w", encoding="utf-8") as file:
json.dump(yaml_wildcards, file, ensure_ascii=False)
def get_embeddings(sd_model):
@@ -142,16 +181,15 @@ def get_lora():
valid_loras = [lf for lf in lora_paths if lf.suffix in {".safetensors", ".ckpt", ".pt"}]
hashes = {}
for l in valid_loras:
name = l.name[:l.name.rfind('.')]
name = l.relative_to(LORA_PATH).as_posix()
if model_keyword_installed:
hashes[name] = get_lora_simple_hash(l)
else:
hashes[name] = ""
# Sort
sorted_loras = dict(sorted(hashes.items()))
# Add hashes and return
return [f"{name},{hash}" for name, hash in sorted_loras.items()]
return [f"\"{name}\",{hash}" for name, hash in sorted_loras.items()]
def get_lyco():
@@ -164,13 +202,16 @@ def get_lyco():
valid_lycos = [lyf for lyf in lyco_paths if lyf.suffix in {".safetensors", ".ckpt", ".pt"}]
hashes = {}
for ly in valid_lycos:
name = ly.name[:ly.name.rfind('.')]
hashes[name] = get_lora_simple_hash(ly)
name = ly.relative_to(LYCO_PATH).as_posix()
if model_keyword_installed:
hashes[name] = get_lora_simple_hash(ly)
else:
hashes[name] = ""
# Sort
sorted_lycos = dict(sorted(hashes.items()))
# Add hashes and return
return [f"{name},{hash}" for name, hash in sorted_lycos.items()]
return [f"\"{name}\",{hash}" for name, hash in sorted_lycos.items()]
def write_tag_base_path():
@@ -221,7 +262,8 @@ if not TEMP_PATH.exists():
# even if no wildcards or embeddings are found
write_to_temp_file('wc.txt', [])
write_to_temp_file('wce.txt', [])
write_to_temp_file('wcet.txt', [])
write_to_temp_file('wc_yaml.json', [])
write_to_temp_file('umi_tags.txt', [])
write_to_temp_file('hyp.txt', [])
write_to_temp_file('lora.txt', [])
write_to_temp_file('lyco.txt', [])
@@ -235,6 +277,8 @@ if EMB_PATH.exists():
script_callbacks.on_model_loaded(get_embeddings)
def refresh_temp_files():
global WILDCARD_EXT_PATHS
WILDCARD_EXT_PATHS = find_ext_wildcard_paths()
write_temp_files()
get_embeddings(shared.sd_model)
@@ -250,10 +294,8 @@ def write_temp_files():
wildcards_ext = get_ext_wildcards()
if wildcards_ext:
write_to_temp_file('wce.txt', wildcards_ext)
# Write yaml extension wildcards to wcet.txt if found
wildcards_yaml_ext = get_ext_wildcard_tags()
if wildcards_yaml_ext:
write_to_temp_file('wcet.txt', wildcards_yaml_ext)
# Write yaml extension wildcards to umi_tags.txt and wc_yaml.json if found
get_yaml_wildcards()
if HYP_PATH.exists():
hypernets = get_hypernetworks()
@@ -328,7 +370,7 @@ def on_ui_settings():
"tac_appendComma": shared.OptionInfo(True, "Append comma on tag autocompletion"),
"tac_appendSpace": shared.OptionInfo(True, "Append space on tag autocompletion").info("will append after comma if the above is enabled"),
"tac_alwaysSpaceAtEnd": shared.OptionInfo(True, "Always append space if inserting at the end of the textbox").info("takes precedence over the regular space setting for that position"),
"tac_modelKeywordCompletion": shared.OptionInfo("Never", "Try to add known trigger words for LORA/LyCO models", gr.Dropdown, lambda: {"interactive": model_keyword_installed, "choices": ["Never","Only user list","Always"]}).info("Requires the <a href=\"https://github.com/mix1009/model-keyword\" target=\"_blank\">model-keyword</a> extension to be installed, but will work with it disabled.").needs_restart(),
"tac_modelKeywordCompletion": shared.OptionInfo("Never", "Try to add known trigger words for LORA/LyCO models", gr.Dropdown, lambda: {"choices": ["Never","Only user list","Always"]}).info("Will use & prefer the native activation keywords settable in the extra networks UI. Other functionality requires the <a href=\"https://github.com/mix1009/model-keyword\" target=\"_blank\">model-keyword</a> extension to be installed, but will work with it disabled.").needs_restart(),
"tac_wildcardCompletionMode": shared.OptionInfo("To next folder level", "How to complete nested wildcard paths", gr.Dropdown, lambda: {"choices": ["To next folder level","To first difference","Always fully"]}).info("e.g. \"hair/colours/light/...\""),
# Alias settings
"tac_alias.searchByAlias": shared.OptionInfo(True, "Search by alias"),
@@ -401,3 +443,36 @@ def on_ui_settings():
shared.opts.add_option("tac_refreshTempFiles", shared.OptionInfo("Refresh TAC temp files", "Refresh internal temp files", gr.HTML, {}, refresh=refresh_temp_files, section=TAC_SECTION))
script_callbacks.on_ui_settings(on_ui_settings)
def api_tac(_: gr.Blocks, app: FastAPI):
async def get_json_info(path: Path):
if not path:
return json.dumps({})
try:
if path is not None and path.exists() and path.parent.joinpath(path.stem + ".json").exists():
return FileResponse(path.parent.joinpath(path.stem + ".json").as_posix())
except Exception as e:
return json.dumps({"error": e})
@app.get("/tacapi/v1/lora-info/{folder}/{lora_name}")
async def get_lora_info_subfolder(folder, lora_name):
if LORA_PATH is None:
return json.dumps({})
return await get_json_info(LORA_PATH.joinpath(folder).joinpath(lora_name))
@app.get("/tacapi/v1/lyco-info/{folder}/{lyco_name}")
async def get_lyco_info_subfolder(folder, lyco_name):
if LYCO_PATH is None:
return json.dumps({})
return await get_json_info(LYCO_PATH.joinpath(folder).joinpath(lyco_name))
@app.get("/tacapi/v1/lora-info/{lora_name}")
async def get_lora_info(lora_name):
return await get_lora_info_subfolder(".", lora_name)
@app.get("/tacapi/v1/lyco-info/{lyco_name}")
async def get_lyco_info(lyco_name):
return await get_lyco_info_subfolder(".", lyco_name)
script_callbacks.on_app_started(api_tac)
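A hedged frontend usage sketch for the new endpoints (hypothetical Lora name); both routes serve the JSON metadata sidecar stored next to the model file, which is where the webui's extra networks UI saves activation text and preferred weight:

const info = await fetchAPI("tacapi/v1/lora-info/styles/myLora.safetensors");
// Matches the {folder}/{lora_name} route; get_json_info strips the extension
// via path.stem, so this serves styles/myLora.json if it exists.
// info?.["activation text"]  -> native trigger words, preferred over model-keyword
// info?.["preferred weight"] -> per-Lora multiplier used by sanitize()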