
StableSR + ComfyUI on Reddit

You can find the node here. Specifically, I need to get it working with one of the Deforum workflows.

In the Task Manager, click where it says "3D" in the top-left box.

Working on finding my footing with SDXL + ComfyUI.

Talking about the singularity on a Stable Diffusion gif, as much as I love Stable Diffusion, is even less relevant than talking about it on an LLM subreddit like ChatGPT's. Girl on the far right was gifted some kind of modern-styled soccer shoes.

I was just thinking I need to figure out ControlNet in Comfy next.

I've been slowly migrating to ComfyUI. The latter is an admirable project, but technically it feels like driving a racing car that's held together with…

Made a video tutorial on how to get SVD running; also shared 2 workflows in the description.

There are no small steps in the original image.

It'll be perfect if it includes upscaling too (though I can upscale it in an extra step in the Extras tab of Automatic1111).

I need two "nodes" that I can't seem to find anywhere: any ideas how to get/install Seed and ClipInterrogate? I've found ClipInterrogate in a tool set, but it doesn't seem to work.

Using svd_xt: fp16 for 25 frames: 17…; fp16 for 14 frames: 10….

I'm shocked that people still don't get it: you'll never get a high success and retention rate on your videos if you don't show THE END RESULT FIRST.

My seconds_total is set to 8, and the BPM I ask for in the prompt is 120 BPM (two beats per second), meaning I get 16 beats, i.e. four 4/4 bars.

It might not be the best place to put this, but I wish ComfyUI had a feature where, if I shift-right-click a slider, widget, or input, it would just clone that widget to a stack in a permanent UI.

Before 1.6 I couldn't run SDXL in A1111, so I was using ComfyUI.

Fooocus / Fooocus-MRE / RuinedFooocus: quick image generation and a simple, easy-to-use GUI (based on the Comfy backend).

Regular SDXL is just a bunch of noise until 8! I tried it in a Colab notebook with different styles, resolutions, and artists, and the results were amazing.

My personal favorite is #11.

See the latest full paper.

Just give it a try; I really recommend it.

Also, the face mask seems to include part of the hair most of the time, which also gets low-res'd by the process.

By the way, I don't think "ComfyUI" is a good name for the extension, since it's already a famous Stable Diffusion UI, and I thought your extension added that one to auto1111.

If you want to use only the base safetensors, then just load that workflow, easy peasy.

We know A1111 was using xformers, but we weren't told, as far as I noticed, what ComfyUI was using.

Upscale to 2x and 4x in multi-steps, both with and without sampler (all images are saved). ComfyUI nodes missing.

If SDXL is the future for Stable Diffusion finetuning, ComfyUI seems like it will be the default UI associated with it.

This usually happens at the first generation with a new prompt, even though the model (SDXL with refiner) is already loaded.

Forget about this video; now you can just update: ComfyUI supports Video nodes and can run… Although it can handle the recently released…

In ComfyUI, does it matter what order I put my ControlNets in when using an inpainting ControlNet?

Length defines the amount of images after the target to send ahead; e.g., batch index 2, Length 2 would send images number 3 and 4 to the preview image in this example.
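The batch index / Length behaviour just described is easy to sanity-check outside ComfyUI. A minimal sketch, assuming the node simply slices the batch with a zero-based index (the function name is illustrative, not ComfyUI's API):

    # Zero-based batch_index; length = how many images from there onward.
    # batch_index 2, length 2 -> the 3rd and 4th images in 1-based numbering.
    def latent_from_batch(images, batch_index, length):
        return images[batch_index : batch_index + length]

    batch = ["img1", "img2", "img3", "img4", "img5"]
    print(latent_from_batch(batch, 2, 2))  # ['img3', 'img4']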
Question - Help: I have an AnimateDiff setup with openpose, depth, lineart, and inpainting ControlNets that I enable or disable as needed.

Enjoy a comfortable and intuitive painting app.

Set vram state to: NORMAL_VRAM.

From the StableSR changelog: accepted by IJCV; StableSR now supports SD-Turbo.

Next, install RGThree's custom node pack from the Manager.

The artwork for my game was AI-generated in ComfyUI using custom workflows that I built from scratch.

Then just load the premade one for your need and go.

Fixing a poorly drawn hand in SDXL is a tradeoff in itself.

Automatic1111 for multiple workflows and extensions.

Release: AP Workflow 8.0 for ComfyUI, now with a next-gen upscaler (competitive against Magnific AI and Topaz Gigapixel!) and higher-quality mask inpainting with the Fooocus inpaint model. The new version uses two ControlNet inputs: a 9x9 grid of openpose faces, and a single openpose face.

An A1111 extension has been out for ages, and a new ComfyUI node is not getting much love, with only 3 GitHub stars.

The power prompt node replaces your positive and negative prompts in a Comfy workflow.

image 1: by Naoko Takeuchi and Igor Zenin in the style of Michal Karcz <lora:more_art:0.25><lora:midjourney:0.40><lora:faetastic:0.15>. image 2: a cinematic shot of a soul knight, soul particles around the knight, fantasy tavern in background, night, HD, masterpiece, best quality, hyper detailed, ultra detailed, super realistic.

Tbh, this looks more like Gen-2 or Pika than AnimateDiff; I haven't seen that much consistency with AnimateDiff yet. Also, the image inits are definitely coming from MJ!

Oh wow, that's awesome, looks great! It surely required quite some time to finish this animation.

Some tasks never change and don't need complicated all-in-one workflows with a dozen different custom nodes each.

Started learning Comfy a few days ago, and my mind was quickly blown by all the possibilities. You should submit this to comfyanon as a pull request.

All the next generations run fast, but with the slightest change in the prompt it begins with an at least 10x…

Once you build this, you can choose an output from it using static seeds to get specific images, or you can split up larger batches of images to reduce…

Go to your FizzNodes folder ("D:\Comfy\ComfyUI\custom_nodes\ComfyUI_FizzNodes" for me) and run this, making sure to adapt the beginning to match where you put your ComfyUI folder: "D:\Comfy\python_embeded\python.exe -s -m pip install -r requirements.txt". It is actually written on the FizzNodes GitHub.

Hello! Looking to dive into AnimateDiff and looking to learn from the mistakes of those who walked the path before me 🫡🙌 Are people using…

Install ComfyUI Manager. Now you can manage custom nodes within the app.

It's probably this? https://github.com/deroberon/StableZero123-comfyui. Hey, maintainer of the repo here. Run that and see if it helps.

…02s/it, Prompt executed in 249.02 seconds. NotImplementedError: Cannot copy out of meta tensor; no data! Total VRAM 8192 MB, total RAM 32706 MB.

^Makeup ideas for my Goths out there.

Now the ComfyUI version of StableSR is also available.

Bruh, I can animate that in less than a minute in AE (okay, maybe in 5, but still) without breaking the pixel art.

Toggle if the seed should be included in the file name or not.
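That seed-in-filename toggle is just string assembly. A hedged sketch (the prefix name echoes the save_prefix mentioned in these comments; the exact format ComfyUI uses differs):

    # Illustrative only: build a save name from a prefix, a counter, and
    # optionally the seed, mirroring an "include seed in file name" toggle.
    def make_filename(save_prefix, counter, seed, include_seed):
        parts = [save_prefix, f"{counter:05d}"]
        if include_seed:
            parts.append(f"seed{seed}")
        return "_".join(parts) + ".png"

    print(make_filename("ComfyUI", 42, 123456789, True))   # ComfyUI_00042_seed123456789.png
    print(make_filename("ComfyUI", 42, 123456789, False))  # ComfyUI_00042.png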
Since you already have experience with A1111, go for it; it's really fun.

I'm having so much fun churning out these images with SDXL.

Tutorial - Guide.

Judging by the reactions in r/StableDiffusion, such as the number of images posted, discussion about ComfyUI, and requests for help setting up Auto1111 for SDXL…

To push the development of the ComfyUI ecosystem, we are hosting the first contest dedicated to ComfyUI workflows! Anyone is welcome to participate.

/removed "All of this builds on our existing partnership with Google Cloud to integrate new AI-powered capabilities to improve Reddit".

Input your batched latent and VAE.

If not, try running the other update file, comfyui_and_python_dependencies.bat.

This is amazing, very exciting to have this! It's going to take me a bit to wrap my head around what this enables and how I can use it; it feels really important.

Best of luck; we are lucky to have ComfyUI and many custom node developers.

Add in frame interpolation and then photogrammetry software after this, and you've got yourself a textured model.

If it kept the pixel grid it would be something to share, but this needs a lot more work.

A lot of it already exists in A1111.

Since I have a MacBook Pro i9 machine, I used this method without requiring much processing power. 8GB 2070 Max-Q, using SVD.

Thank Andray for the finding!

The default graph that loads is designed to run any Stable Diffusion model.

Stage A >> \models\vae\SD Cascade stage_a.safetensors; copy the safetensors from … to the "ComfyUI-checkpoints" folder.

ComfyUI is for when you really need to get something very specific done, and disassemble the visual interface to get to the machinery. While ComfyUI can help with complicated things that would be a hassle in A1111, it won't make your images non-bland.

There's an SD1.5 and an SDXL version.

New Tutorial: How to rent 1-8x GPUs and install ComfyUI in the cloud (+Manager, custom nodes, models, etc). Workflow Included.

Generation using prompt.

No_OBS, No_VirtualCam! The ComfyUI workflow is completely changeable, and you can use your own workflow! If you are interested in how I did this, tell me.

Drawing inspiration from the Midjourney Discord bot, my bot offers a plethora of features that aim to simplify the experience of using SDXL and other models, both in the context of running locally through ComfyUI…

One thing I noticed right away when using Automatic1111 is that the processing time is taking a lot longer.

It's an LDM.

SD1.5, SDXL, LCM LoRA, #Ai, #StableDiffusion…

The VRAM usage for AnimateDiff is about the same as generating normal images with the batch_size passed in (context_length with the Advanced loader), so a 16-frame animation will use the same amount of VRAM as generating a batch of 16 images at once at those same dimensions.
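That context_length point is what makes long animations feasible on fixed VRAM: only context_length latents are sampled at once, so peak memory looks like a batch of that size. A toy illustration of the claim, not ComfyUI internals:

    # Toy model: peak VRAM tracks how many latents are processed at once.
    def effective_batch(total_frames, context_length=None):
        # Advanced loader: frames are handled in windows of context_length.
        return min(total_frames, context_length) if context_length else total_frames

    print(effective_batch(16))                     # 16 -> VRAM like a 16-image batch
    print(effective_batch(64, context_length=16))  # still only 16 at a time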
Hi, I'm looking for input and suggestions on how I can improve my output image results, using tips and tricks as well as various workflow setups.

It's in the best interest of everyone for ComfyUI to be as user-friendly as possible. The creator has recently opted into posting YouTube examples which have zero audio, captions, or anything to explain to the user what exactly is happening in the workflows being generated.

Thanks tons! I tried ComfyUI and I'm never leaving (also: 4K native on SD 2.1x768?). I've been postponing trying it for a while, but I was honestly amazed at how much faster, more comfortable, and more creative I felt compared to using Automatic1111.

The save_prefix is using the…

Comfy does launch faster than auto1111, though, but the UI will start to freeze if you run a batch or have multiple generations going at the same time, etc.

Instructions:
- Download the ComfyUI portable standalone build for Windows.
- Install ComfyUI-Manager (optional)
- Install VHS - Video Helper Suite (optional)
- Download either of the …
- Load JSON file.
- Best settings to use are: …

Has 5 parameters which will allow you to easily change the prompt and experiment.

Can someone please explain the settings? I'm pretty familiar with Stable Diffusion, but not these. My guess is cfg and steps for Stage C, and cfg and steps for Stage B.

It is pretty amazing, but man, the documentation could use some TLC, especially on the example front.

Let's shave in real-time! 😃

ComfyUI Tutorial (Windows): install and run Stable Diffusion in under 5 mins. Further reading with a full write-up is available here: https://weirdwonderfulai.art/tutorial/install-comfyui-in-under-5-mins/

Afaik you need ComfyUI to run AnimateDiff.

I'm excited to introduce SDXL-DiscordBot, my latest attempt at a Discord bot crafted for image generation using the SDXL 1.0 model.

But I'm getting better results, based on my abilities / lack thereof, in A1111.

ComfyShop has been introduced to the ComfyI2I family.

I made a tiled sampling node for ComfyUI that I just wanted to briefly show off.

I'm on an 8GB RTX 2070 Super card.

First, we'll discuss a relatively simple scenario: using ComfyUI to generate an app logo. I guess it all depends on how one measures success.

Detailed ComfyUI Face Inpainting Tutorial (Part 1) : r/StableDiffusion.

The node-based workflow is intimidating at first, but it's not so bad once you get used to it. For me, it has been tough, but I see the absolute power of node-based generation (and efficiency).

…0.4 and tiles of 768x768.

This is John, Co-Founder of OpenArt AI.

In a way it compares to Apple devices (it just works) vs Linux (it needs to work exactly in some way).

Faces always have less resolution than the rest of the image.

Learned from the following video: "Stable Cascade in ComfyUI Made Simple" (6m56s, posted Feb 19, 2024 by the How Do? channel on YouTube). I know there is the ComfyAnonymous workflow, but it's lacking.

You know, thank you.

For people using portable setups, please use the Manager instead of installing the custom node manually.

…50s/it, Prompt executed in 420.72 seconds.

The workflows delivered by StabilityAI aren't really that interesting.

Plus a quick run-through of an example ControlNet workflow.

It's ONE STEP.

I am so sorry, but my video is outdated now, because ComfyUI has officially implemented SVD natively: update ComfyUI, copy the previously downloaded models from the ComfyUI-SVD checkpoints to your Comfy models SVD folder, and just delete the ComfyUI-SVD custom nodes.

Stable Diffusion in Photoshop in real time using ComfyUI! If you want this workflow, just say so in the comments 🧡.

Been working the past couple weeks to transition from Automatic1111 to ComfyUI. The video was pretty interesting, beyond the A1111 vs. Comfy speed comparison.

Same as before.

Model turned the papers into a clenched fist and gave her seven fingers.

Must be reading my mind.

ComfyUI ControlNet Ultimate Guide. There are also a lot of other cool workflows that I think only work with ComfyUI.

It has now taken upwards of 10 minutes to do seemingly the same run. Such a massive learning curve for me to get my bearings with ComfyUI.

Something like this would really put a huge dent in the Patreon virus that's occurring in the custom workflow space.

I'm trying to run a JSON workflow I got from this sub, but can't find the post after a lot of searching (a line-art workflow), so here's my problem.

ComfyUI Tutorial: Exploring Stable Diffusion 3.

It's been a year since you posted this; I can't believe I had never heard of 2.1 and always assumed it was an outdated pre-SDXL model when I saw it.

Used the basic nodes of ComfyUI and PaintNode.

StableSR for super-resolution upscaling, which uses an SD 2.1 model to do its magic, is highly overlooked.

You should check out anapnoe/webui-ux, which has similarities with your project.

I think ComfyUI remains far more efficient in loading when it comes to model/refiner, so it can pump things out faster. My guess -- and it's purely a guess -- is that ComfyUI wasn't using the best cross-attention optimization.

ComfyUI is not using GPU1 (RTX 3080 Ti Laptop); every now and then it uses GPU0 (Intel Iris Xe) and the CPU instead.
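For that laptop-GPU complaint (ComfyUI landing on the Intel Iris Xe instead of the RTX 3080 Ti), a quick way to see which device PyTorch itself will pick; this is plain PyTorch, not a ComfyUI API:

    import torch

    print(torch.cuda.is_available())          # expect True if the NVIDIA card is visible
    print(torch.cuda.device_count())          # how many CUDA devices torch can see
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))  # expect the RTX 3080 Ti, not Iris Xe

If the wrong device shows up, setting the CUDA_VISIBLE_DEVICES environment variable before launching is a common fix.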
ComfyUI Question: Batching and Search/Replace in the prompt, like A1111's X/Y/Z script? Having been generating very large batches for character training (per this tutorial, which worked really well for me the first time), it occurs to me that the lack of interactivity of the process might make it an ideal use case for ComfyUI, and the lower overhead of…

You get a dropdown where you'll see CUDA listed; select that to see if your GPU's CUDA functions are being used.

I work in VFX, so of course temporal stability is very important.

VAE dtype: torch.float32. Neither is better or worse.

However, if you are starting from scratch, it is usually easier to begin with Automatic1111.

Hi, just got around to Stable Cascade (through Pinokio) this week. Advanced Options for Stable Cascade explained.

[ 🔥 ComfyUI - Realtime Shaving ]

She is holding several papers in the original. There is zero cobblestone in the original image.

xformers version: 0.0.20

I'm not an expert w/ Comfy, and I've never posted a workflow before.

Uses less VRAM than A1111. After you get the logic of diffusion, you start creating your own workflows.

Tried it: it is pretty low quality, and you cannot really diverge from CFG 1 (so, no negative prompt) or the picture gets baked instantly; you cannot go higher than 512, up to 768, resolution (which is quite a bit lower than 1024 + upscale); and when you ask for slightly less rough output (4 steps), as in the paper's comparison, it gets slower. Except, under the hood, it's actually a VQGAN.

I can't believe we are so spoiled; now we have Gen2/PikaLabs for free on ComfyUI, which is incredible.

It's a matter of using different things like ControlNet, regional prompting, IP-Adapters, IC-Light, and so on together to create interesting images that you like.

This pack includes a node called "power prompt". It allows you to put LoRAs and embeddings…

Interpolated.

Stage B is a more traditional diffusion model, guided by Stage C's output.

Does anyone know a simple way to extract frames from a webp file, or to convert it to mp4? ComfyUI load + Nodes.

I'd argue we aren't any closer to the singularity than we were in 2020. I follow this passionately and also never knew about unCLIP until about a month ago.

It's more a question of taste.

I have a rather complex workflow where the prompt is constructed in layers (base, subject, detail1, background) and combined together (including dynamic prompts, wildcards, styles, and <lora:blabla:0.5> usage).
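A minimal sketch of that layered prompt construction (the keys and strings are invented for illustration; the real workflow does this with chained text/concat nodes):

    layers = {
        "base": "masterpiece, best quality",
        "subject": "a soul knight, soul particles around the knight",
        "detail1": "hyper detailed, ultra detailed",
        "background": "fantasy tavern at night",
    }
    # Combine the layers in a fixed order, like chained text-concat nodes.
    prompt = ", ".join(layers[k] for k in ("base", "subject", "detail1", "background"))
    print(prompt)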
The workshop that Olivio Sarikas shared on the OpenArt website really helped me to understand how to use it.

These are amazing results for one step. And the possibilities are amazing.

The trick is having a collection of premade workflows. For this, I wanted to share the method that I could reach with the fewest side effects. The main features are: works with SDXL and SDXL Turbo, as well as earlier versions like SD1.5.

ComfyUI - great for complex workflows. Both are also relatively easy to install.

Stable Video Diffusion in ComfyUI.

I ran into the IC-Light workflow to extract normal maps and had an idea of how to make them temporally stable. Created temporally stable normal maps with ComfyUI and Nuke.

Can someone guide me to the best all-in-one workflow that includes a base model, refiner model, hi-res fix, and one LoRA? Best ComfyUI Workflows, Ideas, and Nodes/Settings.

ComfyUI Tutorial: Creating Animation using AnimateDiff, SDXL, and LoRA.

I'm not very experienced with ComfyUI, so any ideas on how I can set up a robust workstation utilizing common tools like img2img, txt2img, refiner…

Early 2025: "Windows 11 inside Comfy UI".

StableSR main repo: https://github.com/IceClear/StableSR. StableSR node: https://github.com/gameltb/comfyui-stablesr.

Let's not get ahead of ourselves, haha; this is not "AI". Can't believe people are bitching about the quality.

This specific image is the result of repeated upscaling, 512 -> 1024 -> 2048 -> 3072 -> 4096, using a denoise strength of 1.0 -> 0.5 -> 0.4 -> 0.35 -> 0.3.

Basic workflows should be stock and available for all users.

We have amazing judges like Scott DetWeiler and Olivio Sarikas (if you have watched any YouTube ComfyUI tutorials, you have probably watched their videos).

Good job! Wow! The workflow is different, but the results are identical. Such unreal results.

Instead, the simple i2i (image-to-image) function was utilized.

If you want to use base, refiner, VAE, and LoRA, then just load that workflow, easy peasy.

Maybe it will be useful for someone like me who doesn't have a very powerful machine.

You can add additional steps with base or refiner afterwards, but if you use enough steps to fix the low resolution, the effect of Roop is almost gone.

I feel StableSR doesn't get enough mentions, though, so I'm sharing this for anyone who wants a quick way to have both set up (hopefully).

Girl on far left in the red.

The simplest way, of course, is direct generation using a prompt.

You can run Animat…

Stage A is like a VAE, converting the Stage B output into an image. Stage C is a small diffusion model that generates semantic latents that help to condition the output of Stage B; in this way, it is analogous to IP-Adapter.
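Putting the three Stable Cascade stages together, the data flow is C -> B -> A. The stubs below only illustrate that flow (hypothetical functions, not the real models or ComfyUI nodes):

    def stage_c_sample(prompt):
        return f"semantic latents({prompt})"            # small diffusion model

    def stage_b_sample(prompt, semantics):
        return f"image latents({prompt}, {semantics})"  # diffusion guided by Stage C

    def stage_a_decode(latents):
        return f"image({latents})"                      # VQGAN/VAE-like decoder

    out = stage_a_decode(stage_b_sample("an app logo", stage_c_sample("an app logo")))
    print(out)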
The graphic style and clothing are a little less stable, but the face fidelity and expression range are greatly improved.

Thanks for the workflow and link <3.

You guys need to open your eyes before upvoting.

But if you have experience using Midjourney, you might notice that logos generated using ComfyUI are not as attractive as those generated using Midjourney.

Create a new ComfyUI (I have created a "comfyuiSUPIR" only for SUPIR), and in the new ComfyUI, link the model folders with the full path for the base models folder and the checkpoint folder (at least) in comfy/extra_model_paths.yaml.

To open ComfyShop, simply right-click on any image node that outputs an image and mask, and you will see the ComfyShop option, much in the same way you would see MaskEditor. ComfyShop phase 1 is to establish the basic painting features for ComfyUI.

ComfyScript: A Python front end for ComfyUI.

I had a lot of fun with this today. However, with that being said, I prefer Comfy because you have more flexibility and you can really dial in your images.

In ComfyUI using Juggernaut XL, it would usually take 30 seconds to a minute to run a batch of 4 images.

In this workflow I experiment with the cfg_scale, sigma_min, and steps space randomly, and use the same prompt and the rest of the settings.

Once your Manager is updated, you can search "ComfyUI Stable Video Diffusion" and you should find it.

Lacks the extensions and other functionality, but it is amazing if all you need to do is generate images.

4/20 and 1.1/10 are the ComfyUI defaults.

It honestly took me 1-2 weeks to feel comfortable using ComfyUI, and I did this to achieve consistent game scenes, character designs, weapons, armor, items, etc.

ComfyUI is easy for beginners like me.

This is why I push for improvements to ComfyUI rather than going "oh well, I'll just use a1111".

Try both, and then use the one you like better.

EDIT: Extension has been renamed to Cozy-Nest.

Device: cuda:0 NVIDIA GeForce GTX 1080 : cudaMallocAsync

In order to recreate Auto1111 in ComfyUI, you need those encode++ nodes, but you also need the noise that is generated by ComfyUI to be made by the GPU (this is how Auto1111 makes noise), along with getting ComfyUI to give each latent its own seed, instead of splitting a single seed across the batch.
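That per-latent seed point is the crux of matching Auto1111 noise. A sketch in plain PyTorch of the two behaviours being contrasted (the tensor shapes are illustrative):

    import torch

    # One seed for the whole batch: a single RNG stream is split across 4 latents.
    g = torch.Generator().manual_seed(1234)
    batch_noise = torch.randn(4, 4, 64, 64, generator=g)

    # One seed per latent, A1111-style: image i gets seed + i, so any single
    # image can be regenerated on its own with an identical result.
    per_image = [
        torch.randn(1, 4, 64, 64, generator=torch.Generator().manual_seed(1234 + i))
        for i in range(4)
    ]
    a1111_noise = torch.cat(per_image, dim=0)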