Is it normal that my speakers sound like this when I'm using Stable Diffusion?

https://redd.it/1rofnk1
@rStableDiffusion
Dialed in the workflow thanks to Claude: 30 steps, CFG 3, distilled LoRA at strength 0.6, res_2s sampler on the first pass, euler ancestral on the latent pass, full model (not distilled), in ComfyUI.

https://redd.it/1rodbeg
I ported the LTX Desktop app to Linux, added an option for an increased step count, and made the models folder configurable via a JSON file

https://redd.it/1ro5c82
New open source 360° video diffusion model (CubeComposer) – would love to see this implemented in ComfyUI

https://reddit.com/link/1ror887/video/h9exwlsccyng1/player

I just came across CubeComposer, a new open-source project from Tencent ARC that generates 360° panoramic video using a cubemap diffusion approach. It looks really promising for VR / immersive content workflows.

Project page: https://huggingface.co/TencentARC/CubeComposer

Demo page: https://lg-li.github.io/project/cubecomposer/

From what I understand, it generates panoramic video by composing cube faces with spatio-temporal diffusion, allowing higher-resolution outputs and temporally consistent generation. That could make it really interesting for people working with VR environments, 360° storytelling, or immersive renders.
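For anyone unfamiliar with why cube faces help: every view direction maps to one of six faces plus an in-face (u, v) coordinate, so a diffusion model can generate each face as an ordinary square image with no polar stretching. This is not CubeComposer's code, just a sketch of the standard cubemap lookup using the common OpenGL face order and (sc, tc) table; CubeComposer's actual layout may differ:

```python
def dir_to_cubeface(x, y, z):
    """Map a 3D view direction to (face, u, v), with u and v in [0, 1].

    Face order 0..5 = +x, -x, +y, -y, +z, -z (OpenGL convention;
    CubeComposer's actual face layout may differ).
    """
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:                     # x-axis dominant
        face, sc, tc, ma = (0, -z, -y, ax) if x > 0 else (1, z, -y, ax)
    elif ay >= az:                                # y-axis dominant
        face, sc, tc, ma = (2, x, z, ay) if y > 0 else (3, x, -z, ay)
    else:                                         # z-axis dominant
        face, sc, tc, ma = (4, x, -y, az) if z > 0 else (5, -x, -y, az)
    # normalize sc, tc from [-1, 1] to [0, 1] face coordinates
    return face, (sc / ma + 1) / 2, (tc / ma + 1) / 2

# Looking straight along +x lands in the center of face 0:
print(dir_to_cubeface(1, 0, 0))  # (0, 0.5, 0.5)
```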

Right now it runs as a standalone research pipeline (code and model weights are released, and the project appears to be open source) rather than an easy UI workflow, but it would be amazing to see:

A ComfyUI custom node
A workflow for converting generated perspective frames → 360° cubemap
Integration with existing video pipelines in ComfyUI
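On the conversion side: going between six 90°-FOV square frames (one per cube face) and an equirectangular panorama is standard projection math and doesn't need the model at all. A minimal NumPy sketch of the cubemap-to-equirect direction, assuming the OpenGL face order +x, -x, +y, -y, +z, -z and nearest-neighbor sampling (a real node would sample bilinearly and blend seams):

```python
import numpy as np

def cubemap_to_equirect(faces, out_h=64):
    """Stitch six square cube faces into an (out_h x 2*out_h) panorama.

    `faces`: dict {0..5: HxHx3 array}, face order +x, -x, +y, -y, +z, -z.
    Nearest-neighbor only; a production node would use bilinear sampling.
    """
    out_w = 2 * out_h
    jj, ii = np.meshgrid(np.arange(out_w), np.arange(out_h))
    lon = (jj + 0.5) / out_w * 2 * np.pi - np.pi      # [-pi, pi)
    lat = np.pi / 2 - (ii + 0.5) / out_h * np.pi      # [+pi/2, -pi/2]
    x = np.cos(lat) * np.sin(lon)                     # per-pixel view ray
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)

    ax, ay, az = np.abs(x), np.abs(y), np.abs(z)
    face = np.where((ax >= ay) & (ax >= az), np.where(x > 0, 0, 1),
                    np.where(ay >= az, np.where(y > 0, 2, 3),
                             np.where(z > 0, 4, 5)))
    conds = [face == f for f in range(6)]
    sc = np.select(conds, [-z, z, x, x, x, -x])       # OpenGL (sc, tc) table
    tc = np.select(conds, [-y, -y, z, -z, -y, -y])
    ma = np.select([face <= 1, face <= 3, face <= 5], [ax, ay, az])

    h = faces[0].shape[0]
    col = np.clip(((sc / ma + 1) / 2 * h).astype(int), 0, h - 1)
    row = np.clip(((tc / ma + 1) / 2 * h).astype(int), 0, h - 1)
    out = np.empty((out_h, out_w, 3), dtype=faces[0].dtype)
    for f in range(6):                                # copy each face's pixels
        m = face == f
        out[m] = faces[f][row[m], col[m]]
    return out
```

Filling each face with a solid color is an easy sanity check: the panorama center (looking along +z) should come from face 4, the top band from face 2, and the left seam from face 5.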

If anyone here is interested in experimenting with it or building a node, it might be a really cool addition to the ecosystem.

Curious what people think, especially devs who work on ComfyUI nodes.

https://redd.it/1ror887
Made a ComfyUI node for text/vision inference with any llama.cpp model via llama-swap
https://redd.it/1rorovd