Stale News Feed Causes Confusion and Potentially Leads to a Debugging Rabbit Hole #6

@NubeBuster

Description

See #2276

    **Keywords: v0.3.68**

    ---

    ComfyUI v0.3.68 has been released. Release notes for the versions covered below:

    - v0.3.68: https://github.com/comfyanonymous/ComfyUI/releases/tag/v0.3.68
    - v0.3.67: https://github.com/comfyanonymous/ComfyUI/releases/tag/v0.3.67
    - v0.3.66: https://github.com/comfyanonymous/ComfyUI/releases/tag/v0.3.66
    - v0.3.65: https://github.com/comfyanonymous/ComfyUI/releases/tag/v0.3.65
    - v0.3.64: https://github.com/comfyanonymous/ComfyUI/releases/tag/v0.3.64
    - v0.3.63: https://github.com/comfyanonymous/ComfyUI/releases/tag/v0.3.63

    ## v0.3.68

    - Bump stable portable to cu130 python 3.13.9
    - Speed up offloading using pinned memory with race condition fixes
    - Add RAM Pressure cache mode for better resource management
    - Small speed improvements to --async-offload
    - Optimizations for fp8 torch.compile operations
    - Multiple API node conversions to new client standards (Luma, Minimax, Pixverse, Ideogram, StabilityAI, Pika)
    - Convert nodes_recraft.py to V3 schema and hypernetwork/OpenAI node conversions
    - Remove comfy api key from queue api for improved security
    - Add a ScaleROPE node (works on WAN and Lumina models)
    - Support for 12-20 second durations in LTXV API nodes
    - Race condition fixes in async-offload preventing corruption
    - Cache signature hashing fixes for bytes data
    - Torch compile regression fixes
    - And more...
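The "RAM Pressure cache mode" bullet refers to evicting cached data when host memory runs low rather than at a fixed size. A minimal, hypothetical sketch of the idea (this is not ComfyUI's actual implementation; `pressure_fn` is a stand-in for a real memory probe such as one built on `psutil`):

```python
from collections import OrderedDict

class PressureAwareCache:
    """LRU cache that evicts oldest entries while RAM pressure is high.

    `pressure_fn` returns a 0.0-1.0 fraction of host memory in use;
    a real system might wrap psutil.virtual_memory() here.
    """

    def __init__(self, pressure_fn, high_watermark=0.9):
        self._store = OrderedDict()
        self._pressure_fn = pressure_fn
        self._high = high_watermark

    def put(self, key, value):
        self._store[key] = value
        self._store.move_to_end(key)
        # Drop least-recently-used entries until pressure subsides.
        while self._store and self._pressure_fn() >= self._high:
            self._store.popitem(last=False)

    def get(self, key, default=None):
        if key in self._store:
            # Touch the entry so it becomes most-recently-used.
            self._store.move_to_end(key)
            return self._store[key]
        return default
```

Tying eviction to observed memory pressure instead of a fixed entry count lets the cache grow on machines with spare RAM and shrink automatically under load.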

    ---

    ## v0.3.67

    - Only disable cudnn on newer AMD GPUs to improve compatibility
    - Add endpoint support for published subgraphs from custom nodes
    - Integrated dependency-aware caching and resolved issues with --cache-none when using loops and lazy evaluation
    - Implemented preliminary support for handling multi-dimensional latent configurations
    - Upgraded network client to version 2 featuring async operations, cancellation support, and download capabilities
    - Converted Tripo and Gemini API nodes to V3 schema, plus added LTXV API nodes
    - Bumped portable dependencies to PyTorch cu130 with Python 3.13.9 support
    - Added batch script option to run ComfyUI without API nodes
    - Updated frontend to version 1.28.8
    - Resolved Windows-specific retry issues and torch-directml usage warnings
    - Minor README improvements and template updates (0.2.2 → 0.2.4)
    - And more...
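"Dependency-aware caching" means a node's cache key incorporates not just its own parameters but the signatures of everything upstream, so editing any ancestor invalidates every downstream entry. A hypothetical sketch of that keying scheme (the `graph` structure and function are illustrative, not ComfyUI's actual code):

```python
import hashlib
import json

def node_signature(node_id, graph):
    """Recursively hash a node's own parameters together with the
    signatures of its upstream dependencies. `graph` maps node id ->
    dict with 'params' (JSON-serializable) and 'inputs' (list of
    upstream node ids). Hypothetical structure, for illustration only.
    """
    node = graph[node_id]
    upstream = [node_signature(dep, graph) for dep in node["inputs"]]
    # sort_keys makes the serialization, and hence the hash, deterministic.
    payload = json.dumps({"params": node["params"], "deps": upstream},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()
```

Because each signature folds in its ancestors' signatures, changing a checkpoint loader's parameter changes the sampler's key too, and the stale cached result is never served.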

    ---

    ## v0.3.66

    - Faster workflow cancelling implemented
    - Chroma radiance speedup and batch size handling corrections
    - PyTorch compiler disabled for cast_bias_weight function
    - CUDA malloc turned off by default with --fast autotune
    - Python 3.14 installation instructions added
    - gfx942 GPU no longer supports fp8 operations
    - PyTorch stable version updated to cu130
    - CuDNN workaround for VAE memory issues on torch 2.9
    - Veo3.1 model added to api-nodes
    - TemporalScoreRescaling node introduced
    - Dynamic pricing format fixed in api-nodes
    - Deprecated API alert feature implemented
    - Manual patches merging refactored with merge_nested_dicts
    - Frontend bumped to version 1.28.7
    - EasyCache apply_cache_diff batch slicing corrected
    - Chroma radiance batch size >1 output issues resolved
    - And more...
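The `merge_nested_dicts` helper mentioned above is named in the changelog; the sketch below assumes the usual semantics for such a helper (nested dicts merged key-wise, any other value in the patch overwriting the base) and is not necessarily ComfyUI's exact behavior:

```python
def merge_nested_dicts(base, patch):
    """Return a new dict with `patch` merged into `base`: nested dicts
    are merged recursively, any other value in `patch` replaces the
    corresponding value in `base`. Neither input is mutated.
    """
    merged = dict(base)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_nested_dicts(merged[key], value)
        else:
            merged[key] = value
    return merged
```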

    ---

    ## v0.3.65

    - Multiple node files converted to V3 schema (compositing, latent operations, SD3, Flux, upscaling, Hunyuan models)
    - Better memory estimation for the SD/Flux VAE on AMD GPUs
    - RDNA4 pytorch attention support added for ROCm 7.0+
    - Fixed loading old stable diffusion ckpt files on newer numpy
    - Audio node stereo/mono handling corrections
    - Fixed fp8 scaled LoRA application
    - VAE cache VRAM leak fixes
    - Enum class support for Combo options
    - Lazy formatting in logging
    - Price extractor feature for API nodes
    - Aspect ratio parameters for GeminiImage nodes
    - mmaudio 16k VAE implemented
    - Diffusion models now always set to eval() mode
    - Documentation updated to version 0.3.0
    - And more...

    ---

    ## v0.3.64

    - Enhanced pylint rules for API nodes
    - Support for multiline negative prompts in PixVerse API nodes
    - Converted nodes_pika.py and nodes_kling.py to V3 schema format
    - Implemented Gemma 3 as a text encoder option
    - Fixed custom multipart parser in ReCraft-API node to properly return FormData
    - Added Sora2 API node for video generation capabilities
    - Temporary fix applied for LTXV custom nodes compatibility
    - Bumped frontend version to 1.27.10
    - Updated template to version 0.1.94
    - And more...

    ---

    ## v0.3.63

    - Bump frontend to 1.27.7
    - Multiple V3 schema conversions for nodes (audio encoder, differential diffusion, morphology, torch compile)
    - VAE tiled fallback VRAM leak resolution for both SD and WAN VAE
    - Remove soundfile dependency. No more torchaudio load or save
    - Turn on TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL by default for AMD ROCm
    - Epsilon Scaling node for exposure bias correction
    - Support for new Hunyuan VAE
    - Add kling-2-5-turbo to txt2video and img2video nodes
    - Enhanced linting with pylint for API nodes folder
    - Various indentation and import fixes across API nodes
    - And more...
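The Epsilon Scaling node implements an exposure-bias correction known from the diffusion-model literature: at sampling time the model's predicted noise is divided by a factor slightly above 1. A schematic sketch of the core operation only (the factor value here is illustrative, not ComfyUI's default, and real implementations operate on GPU tensors per sampling step):

```python
def scale_epsilon(eps_pred, scale=1.005):
    """Exposure-bias correction: shrink the predicted noise slightly.
    `eps_pred` is a flat list standing in for a noise tensor.
    """
    return [e / scale for e in eps_pred]
```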

    ---
