FaceFusion 3.5.3 Content Filter
I have FaceFusion 3.5.3 installed. I have tried several methods found in various posts, but they don't work or work only partially. Can you tell me the correct method to disable this filter? Thank you all very much
https://redd.it/1rhv2a4
@rStableDiffusion
[CVPR 2026] ImageCritic: Correcting Inconsistencies in Generated Images!
https://redd.it/1rhvhmc
@rStableDiffusion
Free AI voice in Comfy UI, Qwen3-TTS Clone Voice and Custom Voice Design (Ep07)
https://www.youtube.com/watch?v=pZgaBQpjAhI
https://redd.it/1rkpgue
@rStableDiffusion
Learn how to generate realistic AI voices inside ComfyUI using Qwen3-TTS. In this tutorial, you’ll see how to create custom AI voices, design unique voice styles from text prompts, and clone real voices using short audio samples. The video also shows how…
Drop the distilled LoRA strength to 0.6, increase steps to 30, and enjoy SOTA AI generation at home.
https://redd.it/1rnz2c4
@rStableDiffusion
I’m not a programmer, but I just built my own custom node and you can too.
https://redd.it/1roes9j
@rStableDiffusion
Is it normal that my speakers sound like this when I'm using Stable Diffusion?
https://redd.it/1rofnk1
@rStableDiffusion
Dialed in the workflow thanks to Claude: 30 steps, CFG 3, distilled LoRA strength 0.6, res_2s sampler on the first pass, Euler ancestral on the latent pass, full (not distilled) model, in ComfyUI.
https://redd.it/1rodbeg
@rStableDiffusion
I ported the LTX Desktop app to Linux, added an option for an increased step count, and the models folder is now configurable via a JSON file
https://redd.it/1ro5c82
@rStableDiffusion
The culmination of my LTX 2.3 SpongeBob efforts: a full mini episode.
https://redd.it/1rorfwc
@rStableDiffusion
New open source 360° video diffusion model (CubeComposer) – would love to see this implemented in ComfyUI
https://reddit.com/link/1ror887/video/h9exwlsccyng1/player
I just came across CubeComposer, a new open-source project from Tencent ARC that generates 360° panoramic video using a cubemap diffusion approach, and it looks really promising for VR / immersive content workflows.
Project page: https://huggingface.co/TencentARC/CubeComposer
Demo page: https://lg-li.github.io/project/cubecomposer/
From what I understand, it generates panoramic video by composing cube faces with spatio-temporal diffusion, allowing higher resolution outputs and consistent video generation. That could make it really interesting for people working with VR environments, 360° storytelling, or immersive renders.
Right now it seems to run as a standalone research pipeline, but it would be amazing to see:
A ComfyUI custom node
A workflow for converting generated perspective frames → 360° cubemap
Integration with existing video pipelines in ComfyUI
Code and model weights are released, so the project appears to be fully open source.
It currently runs as a standalone research pipeline rather than an easy UI workflow.
If anyone here is interested in experimenting with it or building a node, it might be a really cool addition to the ecosystem.
Curious what people think, especially devs who work on ComfyUI nodes.
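For anyone curious what the cubemap side of such a node involves: turning six cube faces into a viewable 360° panorama is mostly plain geometry. Below is a minimal NumPy sketch, assuming six square faces keyed by axis (`+x` … `-z`) and a simple face orientation; these conventions are my assumption, and CubeComposer's actual face layout would need to be matched.

```python
# Minimal sketch: six cubemap faces -> one equirectangular 360 frame.
# Face keys and in-face orientation are assumed conventions, not CubeComposer's.
import numpy as np

def cubemap_to_equirect(faces, out_h=256):
    """faces: dict mapping '+x','-x','+y','-y','+z','-z' to (n, n, 3) uint8
    arrays. Returns an (out_h, 2*out_h, 3) equirectangular image
    (nearest-neighbor sampling)."""
    out_w = 2 * out_h
    # Longitude/latitude of every output pixel center.
    lon = (np.arange(out_w) + 0.5) / out_w * 2 * np.pi - np.pi   # [-pi, pi)
    lat = np.pi / 2 - (np.arange(out_h) + 0.5) / out_h * np.pi   # top -> bottom
    lon, lat = np.meshgrid(lon, lat)
    # Unit view direction per pixel: x right, y up, z forward (lon = 0).
    d = np.stack([np.cos(lat) * np.sin(lon),
                  np.sin(lat),
                  np.cos(lat) * np.cos(lon)], axis=-1)
    n = next(iter(faces.values())).shape[0]
    out = np.zeros((out_h, out_w, 3), dtype=np.uint8)
    major = np.abs(d).argmax(axis=-1)          # dominant axis picks the face
    # axis -> (positive face, negative face, u axis, v axis)
    axes = {0: ("+x", "-x", 2, 1), 1: ("+y", "-y", 0, 2), 2: ("+z", "-z", 0, 1)}
    for axis, (pos, neg, ua, va) in axes.items():
        for name, sign in ((pos, 1.0), (neg, -1.0)):
            mask = (major == axis) & (np.sign(d[..., axis]) == sign)
            sel = d[mask]
            m = np.abs(sel[:, axis])
            # Project onto the face plane; [-1, 1] -> [0, 1] texture coords.
            u = (sel[:, ua] / m + 1) / 2
            v = (sel[:, va] / m + 1) / 2
            ui = np.clip((u * n).astype(int), 0, n - 1)
            vi = np.clip((v * n).astype(int), 0, n - 1)
            out[mask] = faces[name][vi, ui]
    return out
```

Going the other way (perspective frames → cubemap) is the same mapping inverted, so a ComfyUI node could reuse one direction-vector helper for both.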
https://redd.it/1ror887
@rStableDiffusion
Made a ComfyUI node for text/vision chat with any llama.cpp model via llama-swap
https://redd.it/1rorovd
@rStableDiffusion
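For context on what such a node talks to: llama-swap proxies llama.cpp's OpenAI-compatible server and swaps in whichever model the request names. A minimal sketch of that client side, assuming llama-swap is listening on `localhost:8080` and the model name matches a llama-swap config entry (both are assumptions):

```python
# Hedged sketch of the client side of a llama.cpp/llama-swap node:
# build an OpenAI-style chat payload (optionally with an image) and POST it.
# The endpoint URL and model names are assumptions about the local setup.
import base64
import json
import urllib.request

def build_chat_request(model, prompt, image_bytes=None):
    """OpenAI-style chat payload; an image is attached as a base64 data URI."""
    content = [{"type": "text", "text": prompt}]
    if image_bytes is not None:
        b64 = base64.b64encode(image_bytes).decode("ascii")
        content.append({"type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{b64}"}})
    return {"model": model, "messages": [{"role": "user", "content": content}]}

def chat(payload, base_url="http://localhost:8080"):  # assumed llama-swap port
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Usage would look like `chat(build_chat_request("some-vl-model", "Describe this image", png_bytes))`; llama-swap loads the named model on demand before answering.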
What features do 50-series cards have over 40-series cards?
Based on this thread: https://www.reddit.com/r/StableDiffusion/comments/1ro1ymf/which_is_better_for_image_video_creation_5070_ti/
They say 50-series have a lot of improvements for AI. I have a 4080 Super. What kind of stuff am I missing out on?
https://redd.it/1rojxcm
@rStableDiffusion