Merge branch 'pr/3915' into add-updated-content-3

Author: dtlnor
Date: 2022-11-03 14:33:17 +09:00


@@ -502,7 +502,7 @@
 "keep whatever was there originally": "",
 "fill it with latent space noise": "",
 "fill it with latent space zeroes": "",
-"Upscale masked region to target resolution, do inpainting, downscale back and paste into original image": "将蒙版区域放大到目标分辨率,做局部重绘,缩小后粘贴到原始图像中。请注意,填补像素 仅对 全分辨率局部重绘 生效。",
+"Upscale masked region to target resolution, do inpainting, downscale back and paste into original image": "\n ",
 "Resize image to target resolution. Unless height and width match, you will get incorrect aspect ratio.": "",
 "Resize the image so that entirety of target resolution is filled with the image. Crop parts that stick out.": "使",
 "Resize the image so that entirety of image is inside target resolution. Fill empty space with image's colors.": "使",
@@ -560,6 +560,7 @@
 "Unload VAE and CLIP from VRAM when training": "(VRAM) VAE CLIP ",
 "Number of pictures displayed on each page": "",
 "Number of grids in each row": "",
+"Start drawing": "",
 "how fast should the training go. Low values will take longer to train, high values may fail to converge (not generate accurate results) and/or may break the embedding (This has happened if you see Loss: nan in the training info textbox. If this happens, you need to manually restore your embedding from an older not-broken backup).\n\nYou can set a single numeric value, or multiple learning rates using the syntax:\n\n rate_1:max_steps_1, rate_2:max_steps_2, ...\n\nEG: 0.005:100, 1e-3:1000, 1e-5\n\nWill train with rate of 0.005 for first 100 steps, then 1e-3 until 1000 steps, then 1e-5 for all remaining steps.": "/ embedding Loss: nan embedding\n\n使\n\n 1:1, 2:2, ...\n\n: 0.005:100, 1e-3:1000, 1e-5\n\n 100 0.005 1000 1e-3 1e-5 ",
 "Separate prompts into parts using vertical pipe character (|) and the script will create a picture for every combination of them (except for the first part, which will be present in all combinations)": "线(|)",
@@ -569,5 +570,22 @@
 "favorites": "()",
 "others": "",
 "Collect": "()",
-"Move VAE and CLIP to RAM when training hypernetwork. Saves VRAM.": "训练时将 VAE 和 CLIP 从显存(VRAM)移放到内存(RAM),节省显存(VRAM)"
+"Move VAE and CLIP to RAM when training hypernetwork. Saves VRAM.": " VAE CLIP (VRAM)(RAM)(VRAM)",
+"How many times to improve the generated image iteratively; higher values take longer; very low values can produce bad results": "",
+"Draw a mask over an image, and the script will regenerate the masked area with content according to prompt": "",
+"Upscale image normally, split result into tiles, improve each tile using img2img, merge whole image back": "img2img",
+"Create a grid where images will have different parameters. Use inputs below to specify which parameters will be shared by columns and rows": "使",
+"Run Python code. Advanced user only. Must run program with --allow-code for this to work": "Python --allow-code ",
+"Separate a list of words with commas, and the first word will be used as a keyword: script will search for this word in the prompt, and replace it with others": "",
+"Separate a list of words with commas, and the script will make a variation of prompt with those words for their every possible order": "",
+"Reconstruct prompt from existing image and put it into the prompt field.": "",
+"Set the maximum number of words to be used in the [prompt_words] option; ATTENTION: If the words are too long, they may exceed the maximum length of the file path that the system can handle": "[prompt_words]使",
+"Process an image, use it as an input, repeat.": "",
+"Insert selected styles into prompt fields": "",
+"Save current prompts as a style. If you add the token {prompt} to the text, the style use that as placeholder for your prompt when you use the style in the future.": "{prompt}使",
+"Loads weights from checkpoint before making images. You can either use hash or a part of filename (as seen in settings) for checkpoint name. Recommended to use with Y axis for less switching.": "使Y使",
+"Torch active: Peak amount of VRAM used by Torch during generation, excluding cached data.\nTorch reserved: Peak amount of VRAM allocated by Torch, including all active and cached data.\nSys VRAM: Peak amount of VRAM allocation across all applications / total GPU VRAM (peak utilization%).": "Torch active: Torch使(VRAM)\nTorch reserved: Torch(VRAM)\nSys VRAM: (VRAM) / GPU(VRAM)%",
+"Uscale the image in latent space. Alternative is to produce the full image from latent representation, upscale that, and then move it back to latent space.": "",
+"----": "----"
 }
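
The learning-rate tooltip in the second hunk describes a schedule syntax: `rate_1:max_steps_1, rate_2:max_steps_2, ...`, where a bare trailing rate (e.g. `1e-5` in `0.005:100, 1e-3:1000, 1e-5`) applies to all remaining steps. A minimal sketch of a parser for that syntax follows; the function name and the `(rate, max_step)` return shape are illustrative assumptions, not the webui's actual implementation:

```python
def parse_learn_rate_schedule(spec: str):
    """Parse 'rate_1:max_steps_1, rate_2:max_steps_2, ...' into a list of
    (rate, max_step) pairs. A bare final rate has max_step = None, meaning
    it applies to all remaining training steps.
    NOTE: illustrative sketch only; not the webui's real parser."""
    schedule = []
    for part in spec.split(","):
        part = part.strip()
        if not part:
            continue
        if ":" in part:
            rate, steps = part.split(":")
            schedule.append((float(rate), int(steps)))
        else:
            # bare rate with no step bound: used for the rest of training
            schedule.append((float(part), None))
    return schedule
```

For the tooltip's own example, `parse_learn_rate_schedule("0.005:100, 1e-3:1000, 1e-5")` yields `[(0.005, 100), (0.001, 1000), (1e-05, None)]`: rate 0.005 for the first 100 steps, 1e-3 until step 1000, then 1e-5 for the remainder.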