ComfyUI ControlNet preprocessor examples (Reddit).
Brief introduction to ControlNet. ControlNet is a condition-controlled generation model based on diffusion models (such as Stable Diffusion), initially proposed by Lvmin Zhang and Maneesh Agrawala. Using it involves supplying a reference image, using a preprocessor to convert that reference image into a usable "guide image" (a detectmap), and then letting the matching ControlNet model guide the image generation alongside your prompt and generation model. That is the purpose of a preprocessor: it converts the reference image (a photo, line art, a doodle, etc.) into a structured feature map so that the ControlNet model can understand it and guide the generated result. With ControlNet I can input an image and begin working on it, and ControlNet can be used with other generation models.

Since there are currently many ControlNet model versions for ComfyUI, the exact workflow may differ; here the current ControlNet v1.1 models are used as the example, and concrete workflows are covered in the follow-up tutorials. Then, in other ControlNet-related articles on ComfyUI-Wiki, individual ControlNet models are explained with their own examples. The ControlNet 1.1 family includes, among others, ControlNet 1.1 Shuffle, ControlNet 1.1 Inpaint (not very sure about what exactly this one does), ControlNet 1.1 Instruct Pix2Pix, ControlNet 1.1 Lineart, ControlNet 1.1 Anime Lineart, and ControlNet 1.1 Tile (unfinished, which seems very interesting). It is recommended to use version 1.1 of the preprocessors if they have a version option, since the v1.1 results are better than v1 and compatible with both ControlNet 1 and ControlNet 1.1; if a preprocessor node doesn't have a version option, it is unchanged in ControlNet 1.1.

In this example, we will guide you through installing and using ControlNet models in ComfyUI, and complete a sketch-controlled image generation example, testing ControlNet with a simple input sketch and prompt. For example, using ComfyUI's Canny preprocessor, which extracts the contour edge features of the image, a simple sketch can guide the image generation process and produce images that closely align with the sketch.
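As a rough illustration of what that "structured feature map" is, the sketch below builds a Canny edge detectmap from a reference image with plain OpenCV. This is not ComfyUI's actual node code, just a minimal stand-in; the file name and thresholds are placeholders.

```python
# Minimal Canny "preprocessor" sketch (assumes opencv-python is installed).
# It turns a reference photo/sketch into the kind of edge detectmap a ControlNet
# canny model is conditioned on. Thresholds 100/200 are illustrative defaults.
import cv2

def canny_detectmap(path: str, low: int = 100, high: int = 200):
    img = cv2.imread(path)                            # reference image (BGR)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)      # Canny wants a single channel
    edges = cv2.Canny(gray, low, high)                # white edges on black background
    return cv2.cvtColor(edges, cv2.COLOR_GRAY2RGB)    # back to 3 channels for ControlNet

if __name__ == "__main__":
    cv2.imwrite("canny_detectmap.png", canny_detectmap("input.png"))
```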
Installing the preprocessors. I've installed ComfyUI Manager, and through it installed ComfyUI's ControlNet Auxiliary Preprocessors (comfyui_controlnet_aux). This is a rework of comfyui_controlnet_preprocessors based on the ControlNet auxiliary models by 🤗; I think the old repo isn't good enough to maintain. YOU NEED TO REMOVE comfyui_controlnet_preprocessors BEFORE USING THIS REPO, because THESE TWO CONFLICT WITH EACH OTHER; that is also why, when trying to install the ControlNet Auxiliary Preprocessors in the latest version of ComfyUI, you get a note telling you to refrain from using it alongside that older installation. There is now an install.bat you can run to install to the portable build if it is detected, and all old workflows can still be used. If you're running on Linux, or a non-admin account on Windows, you'll want to ensure /ComfyUI/custom_nodes and comfyui_controlnet_aux have write permissions. Back up your workflows and pictures before updating.

Environment setup from one of the guides: download and install the latest CUDA (12.x, at this time) from the NVIDIA CUDA Toolkit Archive. The reason we're reinstalling the latest version (12.x) again is that when we installed 11.8, the installer, among other things, updated our global CUDA_PATH environment variable to point to 11.8. Install a Python package manager, for example micromamba (follow the installation instructions on its website), and type the commands in your console. If you use the Krita AI Diffusion plugin, the model paths live in c:\Users\your-username-goes-here\AppData\Roaming\krita\pykrita\ai_diffusion\.server\ComfyUI\extra_model_paths.yaml.example; I renamed it by removing the .example at the end of the filename, and placed my models path like so: d:/sd/models, replacing the one in the file.

Common questions: "But now I can't find the preprocessors like HED, Canny etc. in ComfyUI. Where can they be loaded?" (edit: nevermind, I think my installation of comfyui_controlnet_aux was somehow botched; I didn't have big parts of the source that I can see in the repo, and I don't know why it didn't grab those on the update). It's not as simple as dropping a preprocessor into a folder. I was frustrated by the lack of some ControlNet preprocessors that I wanted to use, so I decided to write my own Python script that adds support for more preprocessors; as it turned out, there are quite a lot of them, so I tried to collect all the ones I know in one place. Sometimes something new appears, and sometimes you want to compare how some of them work. At the moment, the assembly includes… I was also wondering if anyone has a workflow or some guidance on how to get the color model to function; I am guessing I require a preprocessor if I just load an image into the "Apply ControlNet" node, and I'm just struggling to get ControlNet to work. I also get errors when I load a workflow with ControlNet, such as "FETCH DATA from: H:\Stable Diffusion Apps\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json" followed by "got prompt…", and "When loading the graph, the following node types were not found: CR Batch Process Switch". What do I need to install? (I'm migrating from A1111, so ComfyUI is a bit complex.)

For those who have problems with the ControlNet preprocessors and have been living with broken results for some time (like me): check that the ComfyUI/custom_nodes directory doesn't have two similar "comfyui_controlnet_aux" folders. If so, rename the first one (adding a letter, for example) and restart ComfyUI.
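A small helper along those lines, purely a hypothetical convenience script and not part of ComfyUI or the node pack; the custom_nodes path is an assumption you would adjust:

```python
# List custom-node folders that commonly conflict: duplicate copies of
# comfyui_controlnet_aux, or a leftover comfyui_controlnet_preprocessors install.
from pathlib import Path

custom_nodes = Path("ComfyUI/custom_nodes")   # adjust to your install location
suspects = [
    p.name for p in custom_nodes.iterdir()
    if p.is_dir() and ("controlnet_aux" in p.name.lower()
                       or "controlnet_preprocessors" in p.name.lower())
]
print(suspects)
# More than one comfyui_controlnet_aux entry, or any comfyui_controlnet_preprocessors
# entry, matches the conflicts described above: rename/remove the extra and restart ComfyUI.
```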
Pose ControlNet. I'm trying to use an OpenPose ControlNet with an OpenPose skeleton image, without preprocessing; is this possible? In A1111 I can set the preprocessor to none, but the ComfyUI ControlNet node does not have any preprocessor input, so I assumed it was always preprocessing the image (i.e. trying to extract the pose). In ComfyUI there is one node for a preprocessor and one for loading an image: you just run the preprocessor and then use that image in a "Load Image" node and use that in your generation process. The preprocessor for OpenPose makes images like the one you loaded in your example, but from any image, not just OpenPose lines and dots. I was having trouble getting anything to look like the input image, but the issue was that I wasn't including the ControlNet (as I thought it was only needed for posing, and I was having trouble loading the example workflows).

Here is an example of the final image using the OpenPose ControlNet model: when the ControlNet was turned ON, the image used for the ControlNet is shown in the top corner; when the ControlNet was turned OFF, the prompt generates the image shown in the bottom corner. Not sure why the OpenPose ControlNet model seems to be slightly less temporally consistent than the DensePose one here. As the title says, I included ControlNet XL OpenPose and FaceDefiner models. One SDXL attempt: selected the "OpenPose" control type, with the "openpose" preprocessor and the "t2i-adapter_xl_openpose" model, set "ControlNet is more important", used this image, and received a good OpenPose preprocessing but a blurry mess for a result. Maybe it's your settings; you might have to use different settings for that ControlNet.

For pose extraction there is the DW Preprocessor: Add Node > ControlNet Preprocessors > Faces and Poses > DW Preprocessor. That node can be obtained by installing Fannovel16's ComfyUI ControlNet Auxiliary Preprocessors custom node, and the DWPose preprocessor in comfyui_controlnet_aux makes batch processing via DWPose pretty easy. DWPose might run very slowly, though, if ONNX acceleration is missing; the tell-tale warning is: F:\##_ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\dwpose.py:24: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device.
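If you hit that warning, the quick check below (an assumption about your Python environment, not part of the node pack) shows whether onnxruntime is installed with a GPU execution provider; without one, DWPose falls back to OpenCV on the CPU and runs very slowly.

```python
# Print the onnxruntime execution providers available in the environment ComfyUI uses.
import onnxruntime as ort

providers = ort.get_available_providers()
print(providers)  # e.g. ['CUDAExecutionProvider', 'CPUExecutionProvider'] with onnxruntime-gpu

if "CUDAExecutionProvider" not in providers:
    print("No GPU provider found - DWPose will run on CPU and be slow.")
```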
This is what I have so far (using the custom nodes to reduce the visual clutter). EDIT: I must warn people that some of my settings in several nodes are probably incorrect; only the layout and connections are, to the best of my knowledge, correct. I have used: CheckPoint: RevAnimated v1.2; Lora: Thicker Lines Anime Style Lora Mix; ControlNet LineArt; ControlNet OpenPose; ControlNet TemporalNet (diffuser); custom nodes in ComfyUI: ComfyUI Manager. I also automated the split of the diffusion steps between the Base and the Refiner models.

Faces and identity. Go to the ControlNet tab, activate it, and use "ip-adapter_face_id_plus" as the preprocessor and "ip-adapter-faceid-plus_sd15" as the model. Go to the Lora tab and use the LoRA named "ip-adapter-faceid-plus_sd15_lora" in the positive prompt. Upload your desired face image in this ControlNet tab and choose a weight between 0.4 and 0.6. You can also right-click "open in mask editor" and apply a mask on the uploaded original image if it contains multiple people, or elements in the background you do not want. Then modify your prompt or use a whole new one, and the face will be applied to the new prompt; here are the ControlNet settings, as an example. In this case, I changed the beginning of the prompt to include "standing in flower fields by the ocean, stunning sunset". Once I applied the Face Keypoints Preprocessor and ControlNet after the InstantID node, the results were really good.

For keeping a character consistent: you input that picture, use the "reference_only" preprocessor on ControlNet, choose "Prompt/ControlNet is more important", and then change the prompt text to describe anything else except the clothes. I am looking for a way to input an image of a character and then make it have different poses without having to train a LoRA, using ComfyUI.
Depth preprocessors. In a depth map (which is the actual name of the kind of detectmap image this preprocessor creates), lighter areas are "closer" and darker areas are "further away". It is used with "depth" models (e.g. control_depth-fp16) and is good for positioning things, especially positioning things "near" and "far away". Example depth map detectmap with the default settings. The Depth_leres preprocessor is almost identical to regular "Depth", but with more ability to fine-tune the options; it does lose fine, intricate detail though.

I have the "Zoe Depth map" preprocessor, but not the "Zoe Depth Anything" one shown in the screenshot; I do see it in the other 2 repos though. Does anyone have a clue why I still can't see that preprocessor in the dropdown? I updated it (and ControlNet too). Hi, I hope I am not bugging you too much by asking you this on here, but would you have even the beginning of a clue of why that is? Appreciate just looking into it. EDIT: Nevermind, the update of the extension didn't actually work, but now it did. On Depth Anything itself: while it does provide a new ControlNet model that's supposedly better trained for it, the project itself is a depth estimation model, i.e. a preprocessor for a ControlNet model like leres, midas, zoe or marigold, and I think code may be needed to support it.

Thank you so much! Is there a way to create depth maps from an image inside ComfyUI by using ControlNet, like in AUTO1111? In AUTO I can use the depth preprocessor, but all the workflows for Comfy I've found start with a depth map that has already been generated, and its creation is not included in the workflow. Can anyone tell me if this is possible in ComfyUI at all, and where I can find an example workflow or tutorial? I am about to lose my mind. There are ControlNet preprocessor depth map nodes (MiDaS, Zoe, etc.); hook one up to VAE decode and Preview Image nodes and you can see/save the depth map as a PNG or whatever.
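If you want the same result outside the graph, here is a small sketch using the standalone controlnet_aux package (the Hugging Face project these ComfyUI nodes are a rework of) to produce a MiDaS depth detectmap. The "lllyasviel/Annotators" checkpoint name and the file names are assumptions taken from that package's usual usage, not something defined in these posts.

```python
# MiDaS depth detectmap via the standalone controlnet_aux package
# (pip install controlnet_aux). Lighter pixels = closer, darker = further away.
from PIL import Image
from controlnet_aux import MidasDetector

midas = MidasDetector.from_pretrained("lllyasviel/Annotators")  # downloads the annotator weights
depth = midas(Image.open("room.png"))                           # PIL image in, PIL depth map out
depth.save("depth_detectmap.png")                               # usable as a ControlNet guide image
```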
Line-type preprocessors: Canny, Lineart, MLSD and Scribble. When you click on the radio button for a model type, "inverted" will only appear in the preprocessor popup list for the line-type models, i.e. Canny, Lineart, MLSD and Scribble; if you click the "all" radio button and then manually select your model from the model popup list, "inverted" will be at the very top of the list of all preprocessors. For scribbles, it kinda seems like the best option is to have a white background, NOT invert the input, and use the scribble preprocessor, OR invert the input in the UI but use no preprocessor. If the input is manually inverted, though, for some reason the no-preprocessor inverted input seems to be better. LATER EDIT: I noticed this myself when I wanted to use ControlNet for scribbling. Also, in my Canny edge preprocessor I seem to not be able to go into decimals like other people I have seen do; in other words, I can do 1 or 0 and nothing in between.

For SDXL, only select combinations work moderately alright. I found that one of the better combinations is to pick the "canny" preprocessor and use Adapter XL Sketch, or the "t2ia_sketch_pidi" preprocessor and use a ControlLite model by kohya-ss in its "sdxl fake scribble anime" edition. I did try it, and it did work quite well with ComfyUI's Canny node, however it's nearly maxing out my 10 GB VRAM and speed also took a noticeable hit (went from 2.9 it/s to 1.8 it/s); I hope the official one from Stability AI will be more optimised, especially on lower-end hardware.

Lineart: enable ControlNet, set Preprocessor to "None" and Model to "lineart_anime". Done in ComfyUI with the lineart preprocessor, a ControlNet model and DreamShaper 7. Speaking of ControlNet, how do you guys get your line drawings? Use Photoshop's find-edges filter and then clean up by hand with a brush? It seems like you could use ComfyUI with ControlNet to make the line art, then use ControlNet again to generate the final image. A related example shows using ControlNet and img2img in one process; it's about colorizing an old picture. The ControlNet part is lineart of the old photo, which tells SD the contours it shall draw; the img2img source is the same photo, colorized manually and simply, which shows SD the colors it should approximately paint, at about a 0.5 denoising value. I went for half-resolution here, with 1024x512 (results in the following images). The first time, I used it like an img2img process with the lineart ControlNet model, where I used it as an image template, but it's a lot more fun and flexible using it by itself without other ControlNet models, as well as less time consuming. But it gave better results than I thought; certainly easier to achieve this than with the prompt alone.

Reference Only is a ControlNet preprocessor that does not need any ControlNet model. I saw a tutorial a long time ago about the ControlNet preprocessor "reference only", but I don't see it with the current version of ControlNet for SDXL; is there something similar I could use, or something like this for ComfyUI including SDXL? Thank you. I'm trying to implement the reference-only "ControlNet preprocessor" myself; after an entire weekend reviewing the material, I think (I hope!) I got the implementation right. For those who don't know, it is a technique that works by patching the UNet function so it can make two passes during an inference loop: one to write data of the reference image, another one to read it during the normal input-image inference, so the output emulates the reference. The row label shows which of the 3 types of reference ControlNets was used to generate the image shown in the grid.
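A toy sketch of that two-pass write/read idea is below. It is not the actual ControlNet or ComfyUI implementation, just a self-contained illustration of patched self-attention that stores the reference image's keys and values on the first pass and lets the normal generation attend to them on the second; the dimensions and the missing projections are deliberately simplified.

```python
# Toy "reference only" self-attention: pass 1 writes the reference K/V into a bank,
# pass 2 concatenates the bank into the normal pass so the output leans toward the reference.
import torch
import torch.nn.functional as F

class RefOnlyAttention(torch.nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.qkv = torch.nn.Linear(dim, dim * 3)
        self.mode = "normal"                      # "write" or "read"
        self.bank = []                            # stored reference keys/values

    def forward(self, x):
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        if self.mode == "write":                  # pass over the reference latents
            self.bank = [k.detach(), v.detach()]
        elif self.mode == "read" and self.bank:   # normal generation pass
            k = torch.cat([k, self.bank[0]], dim=1)
            v = torch.cat([v, self.bank[1]], dim=1)
        return F.scaled_dot_product_attention(q, k, v)

attn = RefOnlyAttention(dim=64)
reference = torch.randn(1, 16, 64)                # stand-in for reference-image latents
latent = torch.randn(1, 16, 64)                   # stand-in for the image being generated
attn.mode = "write"; attn(reference)              # first pass: write
attn.mode = "read"; out = attn(latent)            # second pass: read
print(out.shape)                                  # torch.Size([1, 16, 64])
```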
Mixing ControlNets. At this point, you can use this file as an input to ControlNet using the steps described in How to Use ControlNet with ComfyUI – Part 1. Using multiple ControlNets to emphasize colors: in the WebUI settings, open the ControlNet options and set 'Multi Controlnet: Max models amount' to 2 or more, then load the noise image into ControlNet. Start Stable Diffusion and enable the ControlNet extension, then run the WebUI. This is the input image that will be used in this example; here is an example using a first pass with AnythingV3 with the ControlNet and a second pass without the ControlNet with AOM3A3 (Abyss Orange Mix 3), using their VAE.

A quick tour of the other preprocessors, each with an example detectmap at the default settings:

MLSD ControlNet preprocessor. MLSD is good for finding straight lines and edges and is used with "mlsd" models (e.g. control_mlsd-fp16). It is not very useful for organic shapes or soft smooth curves, which makes it particularly useful for architecture like room interiors and isometric buildings. Example MLSD detectmap with the default settings.

Normal map ControlNet preprocessor. Normal maps are good for intricate details and outlines and are used with "normal" models (e.g. control_normal-fp16). Example normal map detectmap with the default settings.

Pidinet ControlNet preprocessor. Pidinet is similar to HED, but it generates outlines that are more solid and less "fuzzy"; the current implementation has far less noise than HED, but far fewer fine details. As of 2023-02-26, the Pidinet preprocessor does not have an "official" model that goes with it. Example Pidinet detectmap with the default settings.

Fake scribble ControlNet preprocessor. Fake scribble is just like regular scribble, but ControlNet is used to automatically create the scribble sketch from an uploaded image. Example fake scribble detectmap with the default settings.

Segmentation ControlNet preprocessor. Segmentation is used to split the image into "chunks" of more or less related elements ("semantic segmentation"). All fine detail and depth from the original image is lost, but the shapes of each chunk will remain more or less consistent for every image generation. Load your segmentation map as an input for ControlNet and leave the Preprocessor set to None; since we already created our own segmentation map, there is no need for one.

Inpainting with ControlNet. Like many, I like to use ControlNet to condition my inpainting, using different preprocessors and mixing them; however, since a recent ControlNet update, two Inpaint preprocessors have appeared, and I don't really understand how to use them. The problem with a hands adetailer is that if you use a masked-only inpaint, the model lacks context for the rest of the body, so you'll end up with stuff like backwards hands, too big/small hands, and other kinds of bad positioning. For outpainting, specifically, the padded image is sent to the ControlNet as pixels via the "image" input, and the padded image is also sent, VAE-encoded, to the sampler as the latent image. I don't remember if you have to Add or Multiply it with the latent before putting it into the ControlNet node; it's been a while since I messed with Comfy.

Tile and upscaling. In ControlNet, select tile_resample as the preprocessor and control_v11f1e_sd15_tile as the model. When you generate the image you'd like to upscale, first send it to img2img and select the size you want to resize it to; make sure you set the resolution to match the ratio of the texture you want to synthesize. You don't need to Down Sample the picture; this is only useful if you want to get more detail at the same size. If you're using Comfy, add an ImageBlur node between your image and the Apply ControlNet node and set both blur radius and sigma to 1. Set the ControlNet parameters: Weight 0.5, Starting 0.1, Ending 0.5. First I thought Tile would allow me to add some iterative details to my upscale jobs; for example, if I started with a picture of empty ocean and added a 'sailboat' prompt, Tile would give me an armada of little sailboats floating out there. Try to experiment by also using the tile model without the upscaler: I have great luck with generating small (512x640, for instance), then putting it into img2img with the tile model on and its downsampler set high, prompting for more detail of the sort you want to add, while setting the image size incrementally higher. I get a bit better results with xinsir's tile compared to TTPlanet's. Related questions: do you know where I can find the tile_resample preprocessor for ComfyUI? I've been using it without any problem on A1111, but since I moved the whole workflow to ComfyUI I'm having a hard time making ControlNet tile work the same way as on A1111. I've also been doing tests in A1111 using the Ultimate Upscaler script together with ControlNet Tile, and it works wonderfully no matter what tile size or image resolution I throw at it, but in ComfyUI I get an error. Ty, I will try this; unfortunately your examples didn't work, so can I ask how you guys get around this?
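For reference, here is the same tile-upscale idea expressed outside ComfyUI, as a hedged diffusers sketch: the control_v11f1e_sd15_tile model ID follows the name used above, while the base checkpoint, sizes, strength and step count are placeholders rather than recommendations from the posts.

```python
# Tile-ControlNet img2img upscale sketch with diffusers (assumes a CUDA GPU).
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1e_sd15_tile", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",   # placeholder: any SD 1.5 checkpoint
    controlnet=controlnet, torch_dtype=torch.float16,
).to("cuda")

small = Image.open("gen_512x640.png")                # the small first-pass generation
big = small.resize((1024, 1280), Image.LANCZOS)      # resize toward the target resolution

result = pipe(
    prompt="same scene, highly detailed",
    image=big,                # img2img input
    control_image=big,        # tile ControlNet keeps the layout while detail is added
    strength=0.5,
    num_inference_steps=30,
).images[0]
result.save("upscaled.png")
```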
Workflows and sharing. Make sure that you save your workflow by pressing Save in the main menu if you want to use it again; you can also specifically save the workflow from the floating ComfyUI menu. You can load a generated image in ComfyUI to get the full workflow (just drop any image into it), so if you ever wanted to use the same effect as the OP, all you have to do is load his image and everything is already there for you. Workflows are tough to include in Reddit posts, hence all the "Workflow Not Included" tags. I don't think the generation info in ComfyUI gets saved with video files, but if you saved one of the stills/frames using the Save Image node, or even if you saved a generated ControlNet image using Save Image, it would transport it over. You can achieve the same thing in A1111; Comfy is just awesome because you can save the workflow 100% and share it with others.

I made a composition workflow, mostly to avoid prompt bleed: the subject and background are rendered separately, blended, and then upscaled together. I love ComfyUI, but it is difficult to set up a workflow to create animations as easily as it can be done in Automatic1111. I'm struggling to find a workflow that allows image input into ComfyUI and uses SDXL; I found one that doesn't use SDXL but can't find any others, and it's hard to find other people asking this question on here. Does ComfyUI support preprocessing of an image? In Automatic1111 you could put in an image and it would preprocess it to a depth/canny/etc. image to be used; is there something like this for ComfyUI, including SDXL? Hi all, I recently made the shift to ComfyUI and have been testing a few things; I am a fairly recent ComfyUI user, and I'm trying to switch from A1111 because I am intrigued by the node-based approach. ComfyUI is hard, but it's such a great tool. Here is the ControlNet write-up and here is the update discussion. RunComfy is a ComfyUI platform offering an online ComfyUI environment and services, along with ready-made ComfyUI workflows, and it also provides an AI Playground. My ComfyUI backend is an API that can be used by other apps if they want to do things with Stable Diffusion, so chaiNNer could add support for the ComfyUI backend and nodes if they wanted to.
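On that last point, a minimal sketch of driving ComfyUI through its HTTP API: the default 127.0.0.1:8188 address and the workflow_api.json filename (a workflow exported in API format) are assumptions about a local setup.

```python
# Queue an API-format ComfyUI workflow against a locally running server.
import json
import urllib.request

with open("workflow_api.json") as f:          # workflow saved via "Save (API Format)"
    graph = json.load(f)

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": graph}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())               # response includes a prompt_id to poll /history with
```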