ComfyUI upscale models: downloads and workflow tips (Reddit roundup)
Jan 13, 2024 · So I was looking through the ComfyUI nodes today and noticed that there is a new one, called SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors (the SD 4X upscale model). Good for depth and OpenPose; so far so good.

Now I have made a workflow that has an upscaler in it, and it works fine; the only thing is that it upscales everything, and that is not worth the wait with most outputs.

Load the workflow .json, or drag and drop the workflow image into the UI (I think the image has to not be from Reddit; Reddit removes metadata, I believe). There are plenty of ready-made workflows you can find.

However, I'm facing an issue with sharing the model folder. This is just a simple node built off what's given and some of the newer nodes that have come out.

For the best results, diffuse again with a low denoise, either tiled or via Ultimate SD Upscale (without scaling!). Also, both have a denoise value that drastically changes the result.

There's "latent upscale by", but I don't want to upscale the latent image. I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a two-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned.

There are also other upscale methods that can upscale latents with less distortion; the standard ones are bicubic, bilinear, and bislerp. Same as SwinIR, which brings out a lot of detail in the image. That's because of the model upscale. This is done after the refined image is upscaled and encoded into a latent.

Connect the Load Upscale Model node to Upscale Image (using Model), feed it the image from VAE Decode, then send the result to your preview/save image node.

Edit: I changed models a couple of times, restarted Comfy a couple of times… and it started working again…
Additionally, the animatediff_models and clip_vision folders are placed in M:\AI_Tools\StabilityMatrix-win-x64\Data\Packages\ComfyUI\models.

This guide, inspired by 御月望未's tutorial, explores a technique for significantly enhancing the detail and color in illustrations using noise and texture in Stable Diffusion.

Put them in the models/upscale_models folder, then use the UpscaleModelLoader node to load them and the ImageUpscaleWithModel node to use them.

With a low denoise setting, I get good results using stepped upscalers, the Ultimate SD Upscaler, and the like.

Custom models/LoRAs: tried a lot from CivitAI: epiCRealism, CyberRealistic, AbsoluteReality, Realistic Vision 5.1 and 6, etc. From what I've generated so far, the model upscale handles edges slightly better than the Ultimate Upscale.

Do you have ComfyUI Manager? Once they're installed, restart ComfyUI and launch it with --preview-method taesd to enable high-quality previews.

Even with ControlNets, if you simply upscale and then de-noise latents, you'll get weird artifacts, like the face in the bottom right instead of a teddy bear.

Step 3: Update ComfyUI. Step 4: Launch ComfyUI and enable Auto Queue (under Extra Options). Step 5: Drag and drop the sample image into ComfyUI. Step 6: The FUN begins! If the queue didn't start automatically, press Queue Prompt.

That workflow consists of video frames at 15 fps into VAE Encode and ControlNets, a few LoRAs, AnimateDiff v3, lineart and scribble SparseCtrl ControlNets, a basic KSampler with low CFG, a small upscale, AD detailer to fix the face (with lineart and depth ControlNets in SEGS, the same LoRAs, and AnimateDiff), upscale with model, interpolate, and combine to 30 fps.

I don't bother going over 4K usually, though; you get diminishing returns on render times with only 8 GB of VRAM ;P

Does anyone have any suggestions? Would it be better to do an iterative upscale, and how about my choice of upscale model? I have almost 20 different upscale models, and I really have no idea which might be best.
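For anyone wiring this up programmatically: below is a minimal sketch of that Load Upscale Model / Upscale Image (using Model) chain expressed as a ComfyUI API-format prompt graph. The node IDs, the model filename, and the surrounding Load/Save nodes are illustrative assumptions, not a copy of any particular workflow; use a model that actually exists in your models/upscale_models folder.

```python
import json

# Hypothetical minimal ComfyUI API-format graph for a pure model upscale:
# LoadImage -> (UpscaleModelLoader + ImageUpscaleWithModel) -> SaveImage.
# Each link is ["source_node_id", output_index].
workflow = {
    "1": {"class_type": "LoadImage",
          "inputs": {"image": "input.png"}},
    "2": {"class_type": "UpscaleModelLoader",
          "inputs": {"model_name": "4x_NMKD-Siax_200k.pth"}},  # placeholder filename
    "3": {"class_type": "ImageUpscaleWithModel",
          "inputs": {"upscale_model": ["2", 0], "image": ["1", 0]}},
    "4": {"class_type": "SaveImage",
          "inputs": {"images": ["3", 0], "filename_prefix": "upscaled"}},
}

# Serialize the graph; this JSON is what gets queued against a running ComfyUI.
print(json.dumps(workflow, indent=2))
```

The same structure is what you get from the UI's "Save (API Format)" export, which is a convenient way to check your hand-built graph against a known-good one.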
You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well.

Tried the llite custom nodes with lllite models and was impressed. And when purely upscaling, the best upscaler is called LDSR. I wanted to know what difference they make, and they do! Credit to Sytan's SDXL workflow, which I reverse engineered, mostly because I'm new to ComfyUI and wanted to figure it all out.

Simply save and then drag and drop the relevant image into your ComfyUI interface window (with the ControlNet Tile model installed), load the image you want to upscale/edit (if applicable), modify some prompts, press "Queue Prompt" and wait for the AI generation to complete.

Is this the best way to install ControlNet? When I tried doing it manually, it didn't go well.

I was looking for tools that could help me set up ComfyUI workflows automatically and also let me use it as a backend, but couldn't find any, so I made one!

For comparison, in A1111 I drop the ReActor output image into the img2img tab, keep the same latent size, use a tile ControlNet model, choose the Ultimate SD Upscale script, and scale it by about 2x; with around 0.25 denoise I get a good blending of the face without changing the image too much. Juggernaut XL and other XL models work as well.

After generating my images I usually do Hires.fix.

PS: If someone has access to Magnific AI, could you please upscale and post results for 256x384 (jpg quality 5) and 256x384 (jpg quality 0)?

Step 1: Download the SDXL Turbo checkpoint. In the Load Video node, click on "choose video to upload" and select the video you want. The aspect ratio of 16:9 is the same for the empty latent and anywhere else that image sizes are used.

You just have to use the "upscale by" node with the bicubic method and a fractional value (e.g. 0.5) after upscaling by a model.
Right now it installs the nodes through ComfyUI Manager and has a list of about 2000 models (checkpoints, LoRAs, embeddings, etc.). Step 2: Download this sample image.

Please share your tips, tricks, and workflows for using this software to create your AI art.

If you are going for fine details, don't upscale in 1024x1024 tiles on an SD15 model, unless the model is specifically trained on such large sizes.

Edit: I am sorry, I didn't see that you were looking for the SDXL clip file; I thought you wanted the Cascade clip file.

That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searge's workflow, then copied the official ComfyUI i2v workflow into it, and pass into the node whatever image I like. Use 0.5 for the diffusion after scaling.

No attempts to fix jpg artifacts, etc. We are just using Ultimate SD upscales with a few ControlNets and tile sizes of ~1024px. The workflow is kept very simple for this test: Load Image > Upscale > Save Image.

I love to go with an SDXL model for the initial image and with a good 1.5 model for the upscale pass. I am curious both which nodes are best for this, and which models. The downside is that it takes a very long time.

The first is to use a model upscaler, which will work off your image node; you can download those from a website that lists dozens of models, but a popular one is ESRGAN 4x.

If you don't want the distortion, decode the latent, use Upscale Image By, then encode it for whatever you want to do next; the image upscale is pretty much the only distortion-"free" way to do it.

I have a custom image resizer that ensures the input image matches the output dimensions. It turns out lovely results, but I'm finding that when I get to the upscale stage the face changes to something very similar every time.

This is an img2img method where I use the BLIP Model Loader from WAS to set the positive caption. Then another node, under loaders: the "Load Upscale Model" node.
The last one takes time, I must admit, but it runs well and allows me to generate good quality images (I managed to find a seams-fix settings config that works well for the last one, hence the long processing). Generates an SD 1.5 image.

From the ComfyUI examples, there are two different two-pass (Hires fix) methods: one is latent scaling, the other is non-latent scaling.

Hey folks, lately I have been getting into the whole ComfyUI thing and trying different things out. The realistic model that worked best for me is JuggernautXL; even the base 1024x1024 images were coming out nicely. Thanks.

Welcome to the unofficial ComfyUI subreddit.

Plus, you want to upscale in latent space if possible. I tried all the possible upscalers in ComfyUI (LDSR, latent upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node (mouthful)), and even bought a license from Topaz to compare the results with FastStone (which is great, by the way, for this type of work).

That's because latent upscale turns the base image into noise (blur).

Curious if anyone knows the most modern, best ComfyUI solutions for these problems? Detailing/refining: keeping the same resolution but re-rendering it with a neural network to get a sharper, clearer image.

- Image upscale is less detailed, but more faithful to the image you upscale.

In the CR Upscale Image node, select the upscale_model and set the rescale_factor.

Hi, is there a tutorial on how to do a workflow with face restoration in ComfyUI? I downloaded the Impact Pack, but I really don't know how to go from…

If you have ComfyUI Manager, you can directly download all the models from it. So from VAE Decode you need an "Upscale Image (using Model)" node, with the model itself loaded under loaders. I want to upscale my image with a model, and then select the final size of it. Always wanted to integrate one myself.
You don't need that many steps. From there you can use a 4x upscale model and run the sampler again at low denoise if you want higher resolution. Still working on the whole thing, but I got the idea down. I believe it should work with 8 GB VRAM, provided your SDXL model and upscale model are not super huge (e.g. use a 2X upscaler model).

All the models are located in M:\AI_Tools\StabilityMatrix-win-x64\Data\Models.

In the ComfyUI Manager, select "Install Models", then scroll down to the ControlNet models and download the second ControlNet tile model (it specifically says in the description that you need this for tile upscale). Though, from what someone else stated, it comes down to use case.

Sep 7, 2024 · Here is an example of how to use upscale models like ESRGAN. For example, if you start with a 512x512 empty latent image, then apply a 4x model and then "upscale by" 0.5, you get a 1024x1024 final image (512 * 4 * 0.5 = 1024). The Stable Diffusion model used in this demonstration is Lyriel.

If you use Iterative Upscale, it might be better to approach it by adding noise, using techniques like noise injection or an unsampler hook.

I'm using a workflow that is, in short, SDXL >> ImageUpscaleWithModel (using a 1.5 model) >> FaceDetailer.

I found a tile model but could not figure it out, as lllite seems to require the input image to match the output, so I'm unsure how it works for scaling with tile.

Attach a "latent_image" to it, in this case "upscale latent". "Upscaling with model" is an operation on normal images, and we can use a corresponding model for it, such as 4x_NMKD-Siax_200k.pth or 4x_foolhardy_Remacri.pth.

The restore functionality that adds detail doesn't work well with lightning/turbo models.

I believe the problem comes from the interaction between the way Comfy's memory management loads checkpoint models (note that this issue still happens if smart memory is disabled) and Ultimate Upscale bypassing torch's garbage collection, because it's basically a janky wrapper for an Auto1111 extension.
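The arithmetic behind combining a fixed-scale model with a fractional "upscale by" can be captured in a tiny helper; this is just a sketch of the math, and the function name is my own:

```python
# Hypothetical helper mirroring the tip above: a fixed-scale model (e.g. 4x)
# followed by a fractional "upscale by" resize gives any net factor you want.
def final_size(width, height, model_scale, upscale_by):
    """Resolution after a model upscale followed by an 'upscale by' resize."""
    return (round(width * model_scale * upscale_by),
            round(height * model_scale * upscale_by))

# 512x512 latent -> 4x model -> upscale by 0.5 -> 1024x1024 (512 * 4 * 0.5 = 1024)
print(final_size(512, 512, 4, 0.5))  # (1024, 1024)
```

The same formula tells you which fractional value to pick for any target: upscale_by = target / (base * model_scale).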
Upscale x1.5 ~ x2: no need for a model; it can be a cheap latent upscale. Sample again at low denoise.

SDXL most definitely doesn't work with the old ControlNet. It didn't work out.

Generates an SD 1.5 image and upscales it to 4x the original resolution (512x512 to 2048x2048) using Upscale with Model, Tile ControlNet, Tiled KSampler, Tiled VAE Decode, and colour matching.

Use a fractional "upscale by" (0.5 if you want to divide by 2) after upscaling by a model. All of this can be done in Comfy with a few nodes.

To enable higher-quality previews with TAESD, download taesd_decoder.pth, taesdxl_decoder.pth, taesd3_decoder.pth and taef1_decoder.pth and place them in the models/vae_approx folder.

Since I'm using XL I skip Hires.fix and go straight to img2img, and do an SD Upscale by 2x. Like, I can understand that using the Ultimate Upscale one could add more details by adding steps/noise or whatever you'd like to tweak on the node.

Where a 2x upscale at 30 steps took me ~2 minutes, a 4x upscale took 15, and this is with tiling, so my VRAM usage was moderate in all cases.

It will be interesting to see LDSR ported to ComfyUI, or any other powerful upscaler. I haven't been able to replicate this in Comfy.

I'm using mm_sd_v15_v2.ckpt motion with Kosinkadink's AnimateDiff Evolved.

So in those other UIs I can use my favorite upscaler (like NMKD's 4x Superscalers), but I'm not forced to have them only multiply by 4x. So I made an upscale test workflow that uses the exact same latent input and destination size.

For SD 1.5 I'd go for Photon, Realistic Vision or epiCRealism.

A step-by-step guide to mastering image quality. Search the sub for what you need and download the .json.

It's nothing spectacular, but it gives good, consistent results. In other UIs, one can upscale by any model (say, 4xSharp) and there is an additional control over how much that model will multiply (often a slider from 1 to 4 or more).
The Ultimate SD Upscale is one of the nicest things in Auto1111: it first upscales your image using a GAN or any other old-school upscaler, then cuts it into tiles small enough to be digestible by SD, typically 512x512; the pieces overlap each other and can be bigger.

Upscaling: increasing the resolution and sharpness at the same time. You can also do latent upscales. But for the other stuff, super small models and good results.

Look at this workflow: these comparisons are done using ComfyUI with default node settings and fixed seeds. Thanks.

For example, I can load an image, select a model (4xUltraSharp, for example), and select the final resolution (from 1024 to 1500, for example).

I decided to pit the two head to head; here are the results, workflow pasted below (I did not bind it to the image metadata because I am using a very custom, weird setup). Hope someone can advise.

So my question is: is there a way to upscale an already existing image in Comfy, or do I need to do that in A1111? I've been using Stability Matrix and also installed ComfyUI portable.

There are also "face detailer" workflows for faces specifically.

Hi everyone, I've been using SD / ComfyUI for a few weeks now and I find myself overwhelmed with the number of ways to do upscaling.

ComfyUI Weekly Update: DAT upscale model support and more T2I adapters.

- Latent upscale looks much more detailed, but gets rid of the detail of the original image.

Hello, I did some testing of KSampler schedulers used during an upscale pass in ComfyUI.

Because I run SDXL-based models from the start and through 3 Ultimate Upscale nodes. Then output everything to Video Combine. One does an image upscale and the other a latent upscale.
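The tiling idea described above can be sketched as follows; the tile size and overlap are illustrative parameters, and the real extension also blends the overlapping seams, which is omitted here:

```python
# Sketch of the tiling behind Ultimate SD Upscale: after a GAN upscale, the
# image is cut into overlapping tiles small enough for SD (typically 512px).
def tile_boxes(width, height, tile=512, overlap=64):
    """Return (left, top, right, bottom) boxes covering the image with overlap."""
    stride = tile - overlap
    boxes = []
    for top in range(0, max(height - overlap, 1), stride):
        for left in range(0, max(width - overlap, 1), stride):
            right = min(left + tile, width)
            bottom = min(top + tile, height)
            boxes.append((left, top, right, bottom))
    return boxes

# A 1024x1024 image with 512px tiles and 64px overlap needs a 3x3 grid:
print(len(tile_boxes(1024, 1024)))  # 9
```

Each box would then be diffused at low denoise and composited back, which is why tile size interacts with what the base model was trained on.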
It upscales the second image up to 4096x4096 (4xUltraSharp) by default for simplicity, but this can be changed to whatever. You can also provide your own custom link for a node or model. It's a lot faster than tiling, but the outputs aren't as detailed.

From ChatGPT: Guide to Enhancing Illustration Details with Noise and Texture in Stable Diffusion (based on 御月望未's tutorial). Overview.

This way it replicates the SD Upscale / Ultimate Upscale scripts from A1111. I rarely use upscale-by-model on its own because of the odd artifacts you can get.

Image generated with my new, hopefully upcoming "Instantly Transfer Face By Using IP-Adapter-FaceID: Full Tutorial & GUI For Windows, RunPod & Kaggle" tutorial and web app.

My guess is you downloaded a workflow from somewhere, but the person who created that workflow has changed the filename of the upscale model, and that's why your ComfyUI can't find it. Solution: click the node that calls the upscale model and pick one.

Usually I use two of my workflows.

"Latent upscale" is an operation in latent space, and I don't know any way to use the model mentioned above in latent space.

Jan 8, 2024 · Learn how to upscale images using ComfyUI and the 4x-UltraSharp model for crystal-clear enhancements.

Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard.
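A cap like that 4096x4096 default can be computed while preserving aspect ratio; this helper is a hypothetical sketch, not part of any workflow above:

```python
# Clamp an output resolution to a maximum side length, keeping aspect ratio.
# The 4096 default mirrors the cap mentioned above; everything else is illustrative.
def fit_within(width, height, max_side=4096):
    """Scale (width, height) down uniformly so neither side exceeds max_side."""
    scale = min(max_side / width, max_side / height, 1.0)  # never scale up
    return (round(width * scale), round(height * scale))

print(fit_within(8192, 4608))  # a 16:9 source clamped to the 4096 cap
```

Feeding the clamped size into your resize node keeps the model-upscaled result inside VRAM-friendly bounds without distorting the frame.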
Upscale to 2x and 4x in multiple steps, both with and without a sampler (all images are saved). Multiple LoRAs can be added and easily turned on/off (currently configured for up to three LoRAs, but it can easily take more). You can also run a regular AI upscale and then a downscale (4x * 0.5).

Upscaling on larger tiles will be less detailed / more blurry, and you will need more denoise, which in turn will start altering the result too much.

But I think simply typing the file name into the search panel of ComfyUI Manager will get you the file.

Jan 5, 2024 · Start ComfyUI.
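The multi-step idea above boils down to chaining factors; a small sketch of the resolution progression (the function name and factors are illustrative):

```python
# Chain several modest upscales instead of one big jump, tracking each
# intermediate resolution along the way.
def multi_step(width, height, factors):
    """Return the list of sizes produced by applying each scale factor in turn."""
    sizes = [(width, height)]
    for f in factors:
        width, height = round(width * f), round(height * f)
        sizes.append((width, height))
    return sizes

print(multi_step(512, 512, [2, 2]))  # 512 -> 1024 -> 2048: a 4x result in two steps
```

Doing 2x twice rather than 4x once keeps each sampling pass on tiles the model handles well, which is the same reason the ~15-minute single 4x pass above was so much slower than two 2x passes.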