Is it normal that my speakers sound like this when I'm using Stable Diffusion?
https://redd.it/1rofnk1
@rStableDiffusion
Dialed in the workflow thanks to Claude: 30 steps, CFG 3, distilled LoRA at strength 0.6, res_2s sampler on the first pass, Euler ancestral on the latent pass with the full (non-distilled) model, in ComfyUI.
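Spelled out, the two-pass setup described above might be summarized like this (a hypothetical dict; the key names are illustrative, not actual ComfyUI node fields):

```python
# Hypothetical summary of the two-pass setup described in the post.
# Key names are illustrative, not real ComfyUI node inputs.
workflow = {
    "steps": 30,
    "cfg": 3.0,
    "first_pass": {
        "sampler": "res_2s",
        "lora": {"name": "distilled", "strength": 0.6},
    },
    "latent_pass": {
        "sampler": "euler_ancestral",
        "model": "full",  # the non-distilled model
    },
}
```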
https://redd.it/1rodbeg
@rStableDiffusion
I ported the LTX Desktop app to Linux, added an option for an increased step count, and made the models folder configurable in a JSON file.
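The post doesn't show the config file itself, but a JSON-configurable models folder might look something like this (the key names here are hypothetical, not the ported app's actual schema):

```python
import json

# Hypothetical config file contents; the actual key names used by the
# ported app are not shown in the post.
config_text = '{"models_dir": "/home/user/ltx/models", "steps": 40}'

config = json.loads(config_text)
models_dir = config.get("models_dir", "./models")  # fall back to a default
```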
https://redd.it/1ro5c82
@rStableDiffusion
The culmination of my LTX 2.3 SpongeBob efforts: a full mini episode.
https://redd.it/1rorfwc
@rStableDiffusion
New open source 360° video diffusion model (CubeComposer) – would love to see this implemented in ComfyUI
https://reddit.com/link/1ror887/video/h9exwlsccyng1/player
I just came across CubeComposer, a new open-source project from Tencent ARC that generates 360° panoramic video using a cubemap diffusion approach, and it looks really promising for VR / immersive content workflows.
Project page: https://huggingface.co/TencentARC/CubeComposer
Demo page: https://lg-li.github.io/project/cubecomposer/
From what I understand, it generates panoramic video by composing cube faces with spatio-temporal diffusion, allowing higher resolution outputs and consistent video generation. That could make it really interesting for people working with VR environments, 360° storytelling, or immersive renders.
Right now it seems to run as a standalone research pipeline, but it would be amazing to see:
A ComfyUI custom node
A workflow for converting generated perspective frames → 360° cubemap
Integration with existing video pipelines in ComfyUI
Code and model weights are released, and the project appears to be open source, though it currently ships as a research pipeline rather than an easy UI workflow.
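On the perspective-frames → 360° point: the core of that conversion is resampling six cube faces into one equirectangular image. A minimal NumPy sketch, with nearest-neighbour sampling; the face orientation conventions below are one plausible choice, not necessarily CubeComposer's actual layout:

```python
import numpy as np

def cube_to_equirect(faces, out_h, out_w):
    """Resample six square cubemap faces into an equirectangular panorama.

    faces: dict with keys 'px', 'nx', 'py', 'ny', 'pz', 'nz', each an
    (s, s, 3) array. Orientation conventions here are illustrative.
    """
    s = faces["px"].shape[0]
    # Ray direction for every output pixel: lon in [-pi, pi), lat in [-pi/2, pi/2].
    lon = (np.arange(out_w) + 0.5) / out_w * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(out_h) + 0.5) / out_h * np.pi
    lon, lat = np.meshgrid(lon, lat)
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)

    ax, ay, az = np.abs(x), np.abs(y), np.abs(z)
    # Avoid 0/0 at pixels that belong to another face anyway.
    dx, dy, dz = np.maximum(ax, 1e-9), np.maximum(ay, 1e-9), np.maximum(az, 1e-9)
    out = np.zeros((out_h, out_w, 3), dtype=faces["px"].dtype)

    # (face, mask, u, v): the dominant axis picks the face; u, v in [-1, 1].
    picks = [
        ("px", (ax >= ay) & (ax >= az) & (x > 0), -z / dx, y / dx),
        ("nx", (ax >= ay) & (ax >= az) & (x <= 0), z / dx, y / dx),
        ("py", (ay > ax) & (ay >= az) & (y > 0), x / dy, -z / dy),
        ("ny", (ay > ax) & (ay >= az) & (y <= 0), x / dy, z / dy),
        ("pz", (az > ax) & (az > ay) & (z > 0), x / dz, y / dz),
        ("nz", (az > ax) & (az > ay) & (z <= 0), -x / dz, y / dz),
    ]
    for key, mask, u, v in picks:
        # Nearest-neighbour lookup: map [-1, 1] face coords to pixel indices.
        ui = np.clip(((u + 1) / 2 * s).astype(int), 0, s - 1)
        vi = np.clip(((v + 1) / 2 * s).astype(int), 0, s - 1)
        out[mask] = faces[key][vi[mask], ui[mask]]
    return out
```

A real node would add bilinear filtering and match the model's face layout, but the dominant-axis selection above is the essential step of any cubemap → equirectangular pass.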
If anyone here is interested in experimenting with it or building a node, it might be a really cool addition to the ecosystem.
Curious what people think, especially devs who work on ComfyUI nodes.
https://redd.it/1ror887
@rStableDiffusion
Made a ComfyUI node for text/vision inference with any llama.cpp model via llama-swap
https://redd.it/1rorovd
@rStableDiffusion
What features do 50-series cards have over 40-series cards?
Based on this thread: https://www.reddit.com/r/StableDiffusion/comments/1ro1ymf/which_is_better_for_image_video_creation_5070_ti/
They say the 50-series has a lot of improvements for AI. I have a 4080 Super. What kind of stuff am I missing out on?
https://redd.it/1rojxcm
@rStableDiffusion
Well, Hello There. Fresh Anima LoRA! (Non Anime Gens, Anima Prev. 2B Model)
https://redd.it/1rox20x
@rStableDiffusion