Z-Image-Turbo / app.py

Commit History (all commits verified; author: lulavc)

ef5b7be  Update subtitle
1a32d6c  Rename title to Z Image Turbo
1ef111b  Fix CSS - more specific selector for Gradio footer only
fa0ea56  Style Gradio footer for better readability
328e5f9  Improve footer readability - white background, darker text
f0f46c1  Remove white box from footer - transparent background
26fd9b8  Fix footer alignment - center properly
e09f8f9  Add footer with model credits and author attribution
116b044  Rename to Z-Image Generation & Transformation Demo
07df908  Fix dtype: set bfloat16 via .to() instead of from_pretrained kwarg
4497385  Use dtype instead of deprecated torch_dtype
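
Commits 4497385 and 07df908 track diffusers' deprecation of the `torch_dtype` keyword in favor of `dtype`, and move the bfloat16 cast into an explicit `.to()` call after loading. A minimal sketch of the `.to()` approach, using a small `nn.Module` as a stand-in so it runs without downloading the actual pipeline:

```python
import torch
import torch.nn as nn

# Stand-in for the diffusion pipeline; the real app loads it with
# from_pretrained(...) instead of constructing a module directly.
model = nn.Linear(8, 8)  # parameters start in the default float32

# Cast after loading, rather than passing a dtype kwarg to from_pretrained:
model = model.to(torch.bfloat16)

print(model.weight.dtype)  # torch.bfloat16
```

The same `.to(torch.bfloat16)` call works on a full diffusers pipeline object, which forwards it to every sub-module.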

a37c608  v1.0 - Stable release
2f7d959  Update version to v0.30b
baea43e  v30: Final stable version with all features
52270dc  Fix capitalization: Z Image Turbo
02ce663  Update description: Synthesis -> Generation
d1c224e  Rename to Z image Turbo
3da7ffc  Rename Z-Image Turbo to Z-Image-Turbo
3bf5810  v29: Fix transform steps - compensate for strength to get actual requested steps
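
3bf5810 addresses a standard img2img detail: the diffusers image-to-image pipelines skip the first (1 - strength) fraction of the schedule and only execute roughly `int(num_inference_steps * strength)` denoising steps, so getting N actual steps means requesting more up front. A sketch of the compensation, assuming that standard behavior (the function name is illustrative, not the app's actual helper):

```python
import math

def compensate_steps(requested_steps: int, strength: float) -> int:
    """Return the num_inference_steps to request so the img2img
    pipeline actually executes ~requested_steps denoising steps.

    diffusers img2img runs about int(num_inference_steps * strength)
    steps, so we scale the request up by 1/strength.
    """
    if not 0.0 < strength <= 1.0:
        raise ValueError("strength must be in (0, 1]")
    return math.ceil(requested_steps / strength)

# With strength=0.6, asking for 9 actual steps means requesting 15,
# since int(15 * 0.6) == 9:
print(compensate_steps(9, 0.6))  # 15
```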

3488f99  v28: Clean up - polish checkbox defaults to unchecked, remove debug logs
629295f  v27.3: Polish checkbox defaults to True, clearer feedback messages
f3e5bf4  v27.2: Add debug logging for transform polish
5d6ff19  v27.1: Add warning when HF_TOKEN is missing for prompt polishing
4b8312d  v27: Add style-focused prompt polishing for Transform tab
41d0a21  Update description: Next-Gen Diffusion Transformer for Image Synthesis & Editing
4e706ca  v26: Remove FP8 (slow Python fallback), keep SDPA + 9 steps + prompt polish
a722547  v25: FP8 quantization on text_encoder + transformer
031e359  v24.2: Use flash (FA2) backend instead of FA3
9832c4f  v24.1: Fix attention backend name (_flash_3)
a234175  v24: Default 9 steps, flash_3 attention backend, updated examples
9be7c74  v22.1: Revert to stable (AoTI FA3 blocks incompatible with current ZeroGPU)
c0c7b26  v23: Re-enable AoTI + FA3 (matching mrfakename setup)
3d466b6  v22.1: Fix prompt polishing - run BEFORE GPU allocation using .then() chain

425254d  v22: Add prompt polishing feature (AI-enhanced prompts)
858dcd4  v21: Stable baseline - remove AoTI (outdated blocks), use SDPA backend
c4739cb  Revert to v19: AoTI + FA3 only (stable)
4a1f2bd  v20: Add FP8 quantization on text encoder (AoTI handles transformer)
760aa1a  v19: Add progress tracking, use torch.randint for seeds
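
760aa1a switches seed generation to `torch.randint`. The usual Space pattern: draw a random seed, then build a `torch.Generator` from it so the run is reproducible and the seed can be reported back to the user. A minimal sketch (the helper name is illustrative):

```python
import torch

MAX_SEED = 2**32 - 1

def get_seed(randomize: bool, seed: int = 0) -> int:
    # torch.randint samples from [low, high); .item() yields a Python int.
    if randomize:
        seed = int(torch.randint(0, MAX_SEED, (1,)).item())
    return seed

seed = get_seed(randomize=True)

# Seeding a Generator makes the noise reproducible: the same seed
# produces identical tensors.
a = torch.randn(4, generator=torch.Generator("cpu").manual_seed(seed))
b = torch.randn(4, generator=torch.Generator("cpu").manual_seed(seed))
assert torch.equal(a, b)
```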

76a2a67  v18: Remove VAE tiling/slicing overhead, cleaner AoTI setup
c16d382  v17: Share transformer between pipelines - single AoTI load, half memory
b64e8a6  v16: Implement AoTI + FA3 optimizations (like mrfakename space)
c2d9d34  v15 Phase 3: FP8 quantization with torchao
73257e5  v14.4: Remove torch.compile (incompatible), add SDPA + VAE optimizations
f6409a0  v14.3 Phase 1: Use max-autotune mode (allows graph breaks)
47dfa46  v14.2 Phase 1: Clean up - use torch backend settings for FlashAttention
08121d2  v14.1 Phase 1: Fix attention - use SDPA (FlashAttention-2) instead of FA3
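
08121d2 falls back to PyTorch's built-in scaled dot-product attention, which dispatches to a FlashAttention kernel when the device and dtypes support one, and to a memory-efficient or plain math backend otherwise. A small self-contained call to `torch.nn.functional.scaled_dot_product_attention`, with toy shapes just to exercise the op:

```python
import torch
import torch.nn.functional as F

# Attention inputs shaped (batch, heads, seq_len, head_dim).
q = torch.randn(1, 2, 16, 8)
k = torch.randn(1, 2, 16, 8)
v = torch.randn(1, 2, 16, 8)

# SDPA selects the best available backend (FlashAttention,
# memory-efficient, or the math fallback) automatically.
out = F.scaled_dot_product_attention(q, k, v)

print(out.shape)  # torch.Size([1, 2, 16, 8])
```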

c82bbdb  v14.0 Phase 1: torch.compile + FlashAttention-3 optimizations
b0cb348  v13.8: Fix tab colors and center header with inline styles
f1dde41  v13.7: Fix tab selection - different colors for selected/unselected
1b67cca  v13.6: Improved UI - centered header, better tabs styling, icons