ComfyUI LoRA Loader

I saw some people online using this LCM LoRA with the AnimateDiff loader too, and not realising that some of its weights don't get loaded. For the T2I-Adapter, the model runs once in total.

With #4287, this support should be quite improved. ComfyUI Impact Pack.

The performance is abysmal and it gets more sluggish with every day.

Usage: AnimateDiff Loader; Uniform Context Options; AnimateDiff LoRA Loader; AnimateDiff Samples. Download or drag images of the workflows into ComfyUI to instantly load the corresponding workflows: txt2img; txt2img (prompt travel); txt2img – 48-frame animation with 16 context_length (uniform).

The load_lora_for_models function in the same file seems to be the same, except that the third variable is called lora_path instead of lora, but I don't think Python cares about variable names. After that, run update_comfyui.bat.

Lora Text Extractor. For a slightly better UX, try a node called CR Load LoRA from Comfyroll Custom Nodes. The various models available in UltralyticsDetectorProvider can be downloaded through ComfyUI-Manager.

Launch ComfyUI by running python main.py.

9:48 How to save workflow in ComfyUI.

It depends on whether you want to use clip skip on the LoRA as well (in case it was trained with clip skip 2); in that case it should be placed after the LoRA loader.

If the author or some code master has time, please create a lora-block-weight node for ComfyUI. Thank you.

Scaling: use the ImageScale, ScaleLatents, and Upscale nodes to upscale images and/or latent images.

Oddly enough, the "LoraLoader" is the only way for me to load a LoRA now on the latest commit.

Enable hot-reload of XY Plot lora, checkpoint, sampler, scheduler, and vae via the ComfyUI refresh button.

It is based on the SDXL 0.9 FaceDetailer workflow by FitCorder, but rearranged and spaced out more, with some additions such as LoRA loaders, a VAE loader, 1:1 previews, and Super upscale with Remacri to over 10,000x6000 in just 20 seconds with Torch2 & SDP.

When it comes to tools that make Stable Diffusion easy to use, Stable Diffusion web UI already exists, but the relatively recently released ComfyUI is node-based and lets you visualize what the processing pipeline is doing.

...then merging those new checkpoints at 0.5 again to get rid of the LoRA nodes.

The Efficient Loader combines a few nodes typically responsible for loading a model (we use the Bloodymary model in this case).

Allows plugging in Motion LoRAs into motion models. You can also connect AnimateDiff LoRA Loader nodes to influence the overall movement in the image; currently this only works well on motion v2-based models.

Researchers discovered that Stable Diffusion v1 uses internal representations of 3D geometry when generating an image.

Updating wasn't as simple as running update_comfyui.bat.

ComfyUI is a node-based GUI for Stable Diffusion.

Allow me to introduce the ComfyUI I2I ControlNet Ultimate Upscaler, a powerful workflow that lets you unleash your creativity through stable diffusion and personalized prompts, crafting mesmerizing, detailed, and high-resolution artworks with ease.

AnimateDiff LoRA Loader: TODO, fill this out.

Everything works great except for LCM + AnimateDiff Loader. The prompt for the first couple, for example, is this: [...]

LoRA has no concept of precedence (where it appears in the prompt order makes no difference), so the standard ComfyUI workflow of not injecting them into prompts at all actually makes sense. When using more than one, connect them in series.

...3 uses a simple image loader to create an img2img process.

- I've included a LoRA loader. Keep in mind you'll need to use SDXL-compatible LoRAs for use with SDXL 1.0.

stable-diffusion-ui: the easiest 1-click way to install and use Stable Diffusion on your computer.
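For reference, here is a minimal sketch of the pattern behind the load_lora_for_models call mentioned above, shaped like a ComfyUI custom node. The class layout and the folder_paths / comfy.utils / comfy.sd calls mirror how the stock LoraLoader node is commonly implemented, but exact signatures can differ between ComfyUI versions, so treat this as an illustration rather than the canonical implementation.

```python
# Sketch of a LoRA loader node, modeled on ComfyUI's built-in LoraLoader.
# Assumes it lives in a custom_nodes package where ComfyUI's modules are importable.
import folder_paths
import comfy.sd
import comfy.utils


class SimpleLoraLoader:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "model": ("MODEL",),
            "clip": ("CLIP",),
            "lora_name": (folder_paths.get_filename_list("loras"),),
            "strength_model": ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01}),
            "strength_clip": ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01}),
        }}

    RETURN_TYPES = ("MODEL", "CLIP")
    FUNCTION = "load_lora"
    CATEGORY = "loaders"

    def load_lora(self, model, clip, lora_name, strength_model, strength_clip):
        # Resolve the file inside models/loras and read its state dict.
        lora_path = folder_paths.get_full_path("loras", lora_name)
        lora = comfy.utils.load_torch_file(lora_path, safe_load=True)
        # Patch both the diffusion model and the CLIP model; this is why the
        # node exposes separate strength_model and strength_clip inputs.
        model_lora, clip_lora = comfy.sd.load_lora_for_models(
            model, clip, lora, strength_model, strength_clip)
        return (model_lora, clip_lora)


NODE_CLASS_MAPPINGS = {"SimpleLoraLoader (sketch)": SimpleLoraLoader}
```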
ComfyUI still has plenty of room for improvement, though, and compared with Stable Diffusion WebUI it really is harder to use. But it has its advantages for parallel work, because you can queue many prompt / checkpoint / LoRA combinations and compute different settings at the same time, which makes comparison easier; going forward it may be worth running both ComfyUI and Stable Diffusion WebUI.

Can't find node "LoraLoaderBlockWeights".

Direct Download Link. Nodes: Efficient Loader & Eff. Loader SDXL.

You have to load [Load LoRAs] before the positive/negative prompt, right after Load Checkpoint. You can, for example, generate two characters, each from a different LoRA and with a different art style, or a single character with one set of LoRAs applied to their face and another to the rest of the body: cosplay! To reproduce this workflow you need the plugins and LoRAs shown earlier. (A wiring sketch follows below.)

When you have 1,300+ LoRAs it's very slow to scroll through.

🚨 The ComfyUI Lora Loader no longer has subfolders; due to compatibility issues you need to use my Lora Loader if you want subfolders. These can be enabled/disabled on the node via a setting (🐍 Enable submenu in custom nodes).

Finally, change LoRA_Dim to 128 and make sure the Save_VRAM variable is switched to True.

EX) Can't load the control LoRA.

It isn't a script, but a workflow (which is generally in .json format).

Thanks, I've tried merging the checkpoint with each LoRA using a 0.5 ratio, then merging those new checkpoints with 0.05, etc.

clip: the CLIP model to use with the LoRA.

Or: the Efficiency pack has a loader for checkpoint, VAE, and LoRA together.

Basic LoRA.

That's why the node called Prompt Extractor, which I've already created in the Inspire Pack, provides that functionality.

The Load Style Model node can be used to load a Style model. LoRAs are used to modify the diffusion and CLIP models, to alter the way in which latents are denoised.

You don't need to wire it, just make it big enough that you can read the trigger words.

Answered by cdsama last month: the LoRA will not be loaded if you do nothing with it; try this plugin to automatically load LoRAs from the prompt text.

...1.5, all are comprised of 3 actual models. The images above were all created with this method.

Per the ComfyUI Blog, the latest update adds "Support for SDXL inpaint models".

The LoRA Loader only has MODEL and CLIP connections.

Description: ComfyUI is a powerful and modular stable diffusion GUI with a graph/nodes interface.

MOTION_LORA: a motion_lora object storing the names of all the LoRAs that were chained behind it; it can be plugged into the back of another AnimateDiff LoRA Loader, or into the AnimateDiff Loader's motion_lora input.

Have fun! Grab the Smoosh v1...

8:22 Image saving and saved image naming convention in ComfyUI.

Thanks, a LoRA loader with a NAME_STRING output is exactly what is needed at the moment. Lots of other goodies, too.

Yes, there would need to be separate LoRAs trained for the base and refiner models.
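To make the "checkpoint first, then LoRA loader, then the prompts" ordering concrete, here is a hedged sketch of that wiring in ComfyUI's API-format prompt JSON (the structure produced by "Save (API format)"). The node ids, checkpoint name, and LoRA names are placeholders; only the linking pattern matters: the first LoraLoader takes the checkpoint's MODEL/CLIP, a second LoraLoader chains off the first, and both text encoders read CLIP from the last LoRA in the chain.

```python
import json

# Fragment of an API-format workflow; the sampler, VAE decode, and save nodes
# are omitted. Links are [source_node_id, output_slot].
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_v1-5.safetensors"}},           # placeholder name
    "2": {"class_type": "LoraLoader",
          "inputs": {"lora_name": "style_lora.safetensors",           # placeholder name
                     "strength_model": 0.8, "strength_clip": 0.8,
                     "model": ["1", 0], "clip": ["1", 1]}},
    "3": {"class_type": "LoraLoader",                                  # chained second LoRA
          "inputs": {"lora_name": "character_lora.safetensors",        # placeholder name
                     "strength_model": 0.6, "strength_clip": 0.6,
                     "model": ["2", 0], "clip": ["2", 1]}},
    "4": {"class_type": "CLIPTextEncode",                              # positive prompt
          "inputs": {"text": "a portrait photo", "clip": ["3", 1]}},
    "5": {"class_type": "CLIPTextEncode",                              # negative prompt
          "inputs": {"text": "blurry, low quality", "clip": ["3", 1]}},
}

print(json.dumps(workflow, indent=2))
```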
ENVIRONMENT: Windows 10, 1660 Super GPU, 32 GB RAM. So I tried a LoRA model that I made, and when I try to get results from prompts I get a "lora keys not loaded" warning and the image is not the desired one.

Because ComfyUI workspaces are just .json files, they can be easily encoded within a PNG image, similar to TavernAI cards.

They are also recommended for users coming from Auto1111. I just moved from A1111 to Comfy and this clip skip business seems like Traditional Chinese to me so far.

The new Efficient KSampler's "preview_method" input temporarily overrides the global preview setting set by the ComfyUI Manager.

Uniform Context Options. ckpt_name_1, ckpt_name_2, etc.

How to use it: once you're ready, all you have to do is load the images of your choice and have fun. CR Load LoRA. AnimateDiff Loader.

The aim of this page is to get you up and running with ComfyUI, running your first gen, and providing some suggestions for the next steps to explore. Add node: this option lets you add a new node to your network.

New to ComfyUI, plenty of questions.

Support for SD 1.x models. By the features list, am I to assume we can load, say, the new big CLIP models and use them in place of the CLIP models packaged with checkpoints? Kinda want to know before I spend 3 hours downloading one.

LoRA with Hires Fix.

Load LoRA: the Load LoRA node can be used to load a LoRA.

Exciting news! Introducing Stable Diffusion XL 1.0 + LoRA + Refiner with ComfyUI + Google Colab for FREE.

IMHO, LoRA as a prompt (as well as a node) can be convenient.

Text Prompts.

A minimal tutorial on how to add a LoRA in ComfyUI (video by 冒泡的小火山).

It encapsulates the difficulties and idiosyncrasies of Python programming by breaking the problem down into units which are represented as nodes.

The problem lies in the ambiguity of what should be considered as positive and negative among the data present in the workflow. Getting the workflow contained in the image is quite straightforward.

It allows users to design and execute advanced stable diffusion pipelines with a flowchart-based interface. I don't have ComfyUI in front of me, but...

ComfyUI provides a powerful yet intuitive way to harness Stable Diffusion through a flowchart interface.

[Simplest Usage] [All Possible Connections Usage] Uniform Context Options.
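One way to dig into a "lora keys not loaded" warning like the one above is to inspect the key names inside the LoRA file and see whether they look like the kohya-style lora_unet_* / lora_te_* keys most loaders expect. This is only a diagnostic sketch (it assumes the safetensors package and a .safetensors file whose path you fill in yourself); it won't fix a LoRA trained for a different base model.

```python
from collections import Counter
from safetensors.torch import load_file

state_dict = load_file("my_lora.safetensors")  # path is a placeholder

# Count keys by a short prefix, e.g. "lora_unet" or "lora_te", to see the layout.
prefixes = Counter("_".join(key.split("_")[:2]) for key in state_dict)
for prefix, count in prefixes.most_common():
    print(f"{prefix}: {count} keys")

# A few full key names help when comparing against the
# "lora key not loaded" messages printed in the console.
for key in list(state_dict)[:5]:
    print(key, tuple(state_dict[key].shape))
```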
Img2Img works by loading an image like this example image, converting it to latent space with the VAE and then sampling on it with a denoise lower than 1.0.

So just add 5/6/however many max LoRAs you'll ever use, then turn them on/off as needed. I have a really large workflow with various custom nodes, and it was working really well until I added "too many LoRAs"; it's a custom LoRA script which has the option to bypass it as a parameter.

0.0 seconds: A:\ComfyUI\custom_nodes\Mile_High_Styler and 0.0 seconds: A:\ComfyUI\custom_nodes\ComfyUI-GPU-temperature-protection (custom node import times from the startup log).

For example, as shown below: I wanted to map the lora folder, so I went into the WebUI lora folder and deleted ComfyUI's corresponding loras folder. Then I ran CMD and typed mklink /j, copied the path of ComfyUI's models folder and pasted it after the mklink /j I had just typed, added loras at the end, and then copied the WebUI Lora folder path and pasted it after that.

I need to add a LoRA loader node, select the LoRA, move other nodes to keep the structure comprehensible, place the new LoRA loader on the canvas, disconnect the previous LoRA node from...

Samples. lora_params [optional]: optional output from other LoRA Loaders.

ColorCorrect is included in ComfyUI-post-processing-nodes.

2.5D Clown, 12400 x 12400 pixels, created within Automatic1111.

You don't need to create a model, that's the beauty of LCM-LoRA presented here. ComfyUI is a new user interface.

Multi LoRA Loader (being able to add multiple LoRA models and switch between them quickly when necessary). More detailed queue view (when I've queued multiple items, I'm not 100% sure what the details of the currently processing queue are; just the prompt details on hover would be useful).

CHECK the attachments for the workflow files to load in ComfyUI! Also, check that your ComfyUI is up to date!

ControlNet加载器_Zho.

On the left-hand side of the newly added sampler, we left-click on the model slot and drag it onto the canvas.

I can't use the refiner in A1111 because the webui will crash when swapping to the refiner, even though I use a 4080 16 GB. Weirder still, when running an strace it seems to be calling what's installed in the venv and not what's on my main system.

In this video I have explained Hi-Res Fix upscaling in ComfyUI in detail. In this video, we will introduce the Lora Block Weight feature provided by ComfyUI Inspire Pack.

Upcoming tutorial: SDXL LoRA + using a 1.5 model in highres fix.

The second point hasn't been addressed here, so just a note that LoRAs cannot be added as part of the prompt like textual inversions can, due to what they modify (model/clip vs. ...).

Efficiency Nodes for ComfyUI. Help your fellow community artists, makers and engineers out where you can.

Stacker nodes are a new type of ComfyUI node that open the door to a range of new workflow possibilities. It has a nice LoRA stacker (several LoRAs in one node). Has a LoRA loader you can right-click to view metadata, and you can store example prompts in text files which you can then load via the node.

However, what you CAN do in ComfyUI is generate an image with a normal model, then load the LCM-LoRA and upscale the generated image with the LCM sampler, using 8 steps.

Pinokio automates all of this with a Pinokio script.
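The folder-mapping step described above can also be scripted. Below is a small sketch that creates the same kind of link from Python: a directory junction on Windows (the mklink /J case) or a plain symlink elsewhere. Both paths are examples only; point them at your own WebUI Lora folder and ComfyUI models/loras location, and back up anything in the ComfyUI folder first, because the link replaces it.

```python
import os
import subprocess
import sys

webui_loras = r"D:\stable-diffusion-webui\models\Lora"  # example path
comfy_loras = r"D:\ComfyUI\models\loras"                 # example path

if os.path.exists(comfy_loras):
    raise SystemExit(f"{comfy_loras} already exists; remove or rename it first.")

if sys.platform == "win32":
    # mklink /J makes a directory junction and does not require admin rights.
    subprocess.run(["cmd", "/c", "mklink", "/J", comfy_loras, webui_loras], check=True)
else:
    # On Linux/macOS a symlink does the same job.
    os.symlink(webui_loras, comfy_loras, target_is_directory=True)

print(f"{comfy_loras} now points at {webui_loras}")
```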
Let's start by right-clicking on the canvas and selecting Add Node > loaders > Load LoRA.

Installing SDXL-Inpainting.

ComfyUI is a completely different conceptual approach to generative art.

Lora Examples. These are examples demonstrating how to use LoRAs. Place your Stable Diffusion checkpoints/models in the ComfyUI/models/checkpoints directory.

Yes there is: add a LoraLoader right after the CheckpointLoader. The TL;DR version is this: it makes an image from your prompt without a LoRA, runs it through ControlNet, and uses that to make a new image with the LoRA.

There are a lot of annoying little things that make using Comfy an exercise in frustration.

Lora Block Weight: this is a node that provides functionality related to LoRA block weight.

Current Motion LoRAs only properly support v2-based motion models. You can apply multiple hypernetworks by chaining multiple Hypernetwork Loader nodes in sequence.

Just use one of the load image nodes for ControlNet or similar by itself, and then load the image for your LoRA or other model.

Could you please provide a full stack trace with the error, and if possible the name/link for the LoRA? Does the LoRA work via the normal Lora Loader node?

But if it is possible to implement this type of change on the fly in the node system, then yes, it can overcome A1111.

SDXL ComfyUI workflow (multilingual version) design.

Edit 2: I suspect there is some bug in the loader that causes the wrong prompts to be chosen.

MultiAreaConditioning node.

Lora Loader with On/Off Switch: the output is 1 or 2, so it works with most "x to 1" switches (while some other alternatives use a boolean 0 or 1 and need corresponding switches or additional math nodes).

sd-webui-comfyui: overview. ...1.0 is a large, moderately complex workflow.

FreeU doesn't just add detail; it alters the image to be able to add detail, like a LoRA ultimately, but it is more complicated to use.

Raw output, pure and simple TXT2IMG.

In this post, we will show examples of testing LoRAs and LoRA weights with XY plots, but the approach is transferable, and you can apply it to whatever parameters you intend to test.

The LoRA does not change the prompt interpretation (like an embedding does); it adds to the model itself.

No errors, it just acts as if it isn't present.

ComfyUI Custom Workflows.

Put it in the folder ComfyUI > custom_nodes > ComfyUI-AnimateDiff-Evolved > models.

Traceback (most recent call last): ...

CR LoRA List. Please consider joining my Patreon!

Several XY Plot input nodes have been revamped for better XY Plot setup efficiency.

Workflow: lora.json. I just started learning ComfyUI.
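To make "adds to the model itself" concrete: a LoRA stores two small low-rank matrices per patched layer, and the loader adds their product onto the existing weights, scaled by the strength you set on the node. In the standard LoRA formulation (the notation below is the usual one from the LoRA literature, not something specific to ComfyUI):

$$ W' = W + s \cdot \tfrac{\alpha}{r}\, B A, \qquad B \in \mathbb{R}^{d \times r},\; A \in \mathbb{R}^{r \times k},\; r \ll \min(d, k) $$

Here W is the original weight matrix, r is the LoRA rank (the LoRA_Dim mentioned earlier), α is the scaling factor stored with the LoRA, and s is the strength_model or strength_clip value from the loader. This is also why a negative strength, as in the flat2 example mentioned below, subtracts the learned change instead of adding it.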
Placing it first applies the clip skip to the model's CLIP only, so the LoRA should reload the skipped layer.

Up and down weighting.

You can use mklink to link to your existing models, embeddings, lora and vae folders, for example: F:\ComfyUI\models> mklink /D checkpoints F:\...

When comparing LoRA and ComfyUI you can also consider the following projects: stable-diffusion-webui - Stable Diffusion web UI.

With the text already selected, you can use ctrl+up arrow or ctrl+down arrow to automatically add parentheses and increase/decrease the value.

...9 seconds: G:\...\custom_nodes\was-node-suite-comfyui (another custom node import time from the startup log).

- This is set up automatically with the optimal settings for whatever SD model version you choose to use.

Samples (download or drag images of the workflows into ComfyUI to instantly load the corresponding workflows!): txt2img.

提示词_Zho.

Style models can be used to provide a diffusion model a visual hint as to what kind of style the denoised latent should be in.

CR LoRA Stack and CR Multi-ControlNet Stack are both compatible with the Efficient Loader node, in Efficiency Nodes by LucianoCirino. (Aug 27th, 2023)

In this video I will show you how to install all the...

I rolled back to the commit below and I can load all of my previous workflows and they run without an issue.

Typical use-cases include adding to the model the ability to generate in certain styles, or better generate certain subjects or actions.

10:07 How to use generated images to load workflow.

..."safetensors" or "sai_xl_depth_128lora.safetensors".

Side by side comparison with the original.

How to install SDXL with ComfyUI (Aug 29).

Custom nodes take effect if you place the whole node folder under ComfyUI/custom_nodes.

Conditional diffusion models are trained using a specific CLIP model; using a different model than the one it was trained with is unlikely to result in good images.

StabilityAI have released Control-LoRAs for SDXL, which are low-rank parameter fine-tuned ControlNets for SDXL.

Here is the rough plan (that might get adjusted) of the series: in part 1 (this post), we will implement the simplest SDXL Base workflow and generate our first images.

To modify the trigger number and other settings, utilize the SlidingWindowOptions node.

Select Add Node > loaders > Load LoRA.

...(for character, fashion, background, etc.), it becomes easily bloated.

The Load LoRA node is placed after Load Checkpoint; since a LoRA is a low-rank adaptation of the model's parameters, connect it right after the model. An example of applying flat2 with a negative weight.

Applies the LCM LoRA.

...py, line 159 in 90aa597: print("lora key not loaded", x), hit when testing LoRAs from bmaltais' Kohya GUI (too afraid to try running the scripts directly).

Load VAE.
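Here is a small sketch of the up/down weighting syntax those ctrl+arrow shortcuts produce. The helper below only builds the (token:weight) text; the weights shown are arbitrary examples, and how strongly they act still depends on the model and sampler.

```python
def weighted(token: str, weight: float) -> str:
    """Wrap a prompt chunk in ComfyUI's (token:weight) emphasis syntax."""
    return f"({token}:{weight:.2f})"

positive = ", ".join([
    "masterpiece",
    weighted("detailed face", 1.2),      # weighted up
    weighted("blurry background", 0.8),  # weighted down
])
print(positive)
# masterpiece, (detailed face:1.20), (blurry background:0.80)
```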
Loaders: GLIGEN Loader, Hypernetwork Loader, Load CLIP, Load CLIP Vision, Load Checkpoint, Load ControlNet Model, Load LoRA, Load Style Model, Load Upscale Model, Load VAE, unCLIP Checkpoint Loader.

I have tried uninstalling and reinstalling, and tried installing via...

Is that just how badly the LCM LoRA performs, even on base SDXL? Workflow used: Example3.

Motion LoRA is now supported!

When using SDXL control-lora with 6 GB VRAM: "Ran out of memory" (#1781).

The Lora Loader node lets you load a LoRA and pass it on as output.

For vid2vid, you will want to install this helper node: ComfyUI-VideoHelperSuite.

On nodes where none of the input and output types match, it's going to act like a mute.

I have a multi-LoRA setup and I would like to test other LoRAs (157 styles) against it with an XY plot, but the Efficient Loader doesn't allow multiple LoRAs, and other loaders don't have the "dependencies" output.

Start from the Functions section of the workflow on the left, and proceed to the right by configuring each section relevant to you: I2I or T2I, Prompt Enricher and, finally, Parameters.

AloeVera's Instant-LoRA is a workflow that can create an instant LoRA from any 6 images.

- LoRAs only seem to work if I use both the CR Lora Loader and the Apply Lora Stack node. Currently this happens with every ControlNet model I've tried, if they work at all.

In A1111 I can erase stuff and type < followed by the first 1-2 letters of whichever LoRA just jumped into my mind, click to select it from the hover menu, and boom, ready to go.

Efficient Loader. Some commonly used blocks are loading a checkpoint model, entering a prompt, specifying a sampler, etc.

When I edit the file and change it from 'True' to 'False' and enter ComfyUI, I get...

Make sure you update ComfyUI to the latest with update/update_comfyui.bat. Note that --force-fp16 will only work if you installed the latest pytorch nightly.

Efficient Loader & Eff. Loader SDXL: nodes that can load & cache Checkpoint, VAE, and LoRA type models.

Edit the .ini file in the ComfyUI-Impact-Pack directory and change 'mmdet_skip = True' to 'mmdet_skip = False'.

Specs that come after LBW= without A= or B= are applicable for use in the Inspire Pack's Lora Loader (Block Weight) node.

I think this is a good thing, since trigger words seem redundant to me in ComfyUI: you either load the LoRA or you don't.

Welcome to the ComfyUI Community Docs! This is the community-maintained repository of documentation related to ComfyUI, a powerful and modular stable diffusion GUI and backend.

Download it and rename it to lcm_lora_sdxl.safetensors.

...with denoise in the 0.30-ish range, it fits her face LoRA to the image without...

That's it! Step 4: Select a VAE. Checkpoint loader first, then Lora loader.
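Since the LCM step above boils down to "download the file, rename it, put it where the LoRA loader can see it", here is a hedged sketch of doing that with huggingface_hub. The repo id and file name are assumptions based on where the LCM-LoRA for SDXL was commonly published; verify them before relying on this, and adjust the ComfyUI path to your own install.

```python
import shutil
from pathlib import Path

from huggingface_hub import hf_hub_download

comfy_loras = Path("ComfyUI/models/loras")  # adjust to your install
comfy_loras.mkdir(parents=True, exist_ok=True)

src = hf_hub_download(
    repo_id="latent-consistency/lcm-lora-sdxl",   # assumed repo id
    filename="pytorch_lora_weights.safetensors",  # assumed file name
)

# Rename to the file name the workflow expects.
shutil.copy(src, comfy_loras / "lcm_lora_sdxl.safetensors")
print("LCM LoRA installed as lcm_lora_sdxl.safetensors")
```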
After my own tests and trainings of LoRAs, LoCons and LoHas, my personal impression is that LoHas return the best results of these 3 methods.

...I also had to handle a merge conflict.

Comfyroll Nodes is going to continue under Akatsuzi here: ...

Automatic1111: tested and verified to be working amazingly with the main branch.

The CLIP Vision Encode node can be used to encode an image using a CLIP vision model into an embedding that can be used to guide unCLIP diffusion models or as input to style models.

To create a node template for LoRA stacking with keyword input.

ComfyUI shared workflows are also updated for SDXL 1.0.

Note: LoRAs only work with the AnimateDiff v2 mm_sd_v15_v2.ckpt motion model.

This ability emerged during the training phase of the AI, and was not programmed by people.

I have a brief overview...

It would be cool to have the possibility of something like lora:full_lora_name:X.X in the positive prompt.

Step 2: Install the missing nodes.

An implementation to be able to use LoRAs with Hadamard product representation (LoHa) would be just awesome.

I'm probably messing something up, I'm still new to this, but you put the MODEL and CLIP outputs of the checkpoint loader into the...

The reason you can tune both in ComfyUI is that the CLIP and MODEL/UNET parts of the LoRA will most likely have learned different concepts, so tweaking them separately can be useful.

..., which isn't useful for a one-name-fits-all save name.
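The lora:full_lora_name:X.X idea above is essentially what the "load LoRA by prompt text" plugins do: scan the prompt for A1111-style <lora:name:strength> tags, strip them out, and apply the named LoRAs separately. Below is a minimal, hypothetical parser for that syntax; the tag format is the common community convention, not something built into ComfyUI itself.

```python
import re

LORA_TAG = re.compile(r"<lora:([^:>]+)(?::([\d.]+))?(?::([\d.]+))?>")

def extract_lora_tags(prompt: str):
    """Return the prompt without tags plus (name, strength_model, strength_clip) tuples."""
    loras = []
    for name, s_model, s_clip in LORA_TAG.findall(prompt):
        sm = float(s_model) if s_model else 1.0
        sc = float(s_clip) if s_clip else sm  # fall back to the model strength
        loras.append((name, sm, sc))
    cleaned = LORA_TAG.sub("", prompt).strip()
    return cleaned, loras

text = "a portrait photo <lora:detail_tweaker:0.8> <lora:film_grain:0.5:0.3>"
print(extract_lora_tags(text))
# ('a portrait photo', [('detail_tweaker', 0.8, 0.8), ('film_grain', 0.5, 0.3)])
```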