Z-Image-Turbo / app.py

Commit History

Add heart emoji and make like message red
5b007df
verified

lulavc committed on

v2.3.1: Fix language switching - connect lang_selector.change() to update all 36 UI components
c676bb8
verified

lulavc committed on
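
The wiring behind this fix follows Gradio's standard event pattern: the Dropdown's change() event returns one gr.update(...) per output component. The sketch below is illustrative only, with placeholder component names and two components instead of the 36 the app actually updates.

    import gradio as gr

    # Illustrative sketch of the language-switch wiring; the real app.py updates
    # 36 components and supports EN/ES/PT/AR/HI. Names and strings here are
    # placeholders, not the app's actual values.
    TRANSLATIONS = {
        "EN": {"prompt": "Prompt", "generate": "Generate"},
        "ES": {"prompt": "Instrucción", "generate": "Generar"},
    }

    with gr.Blocks() as demo:
        lang_selector = gr.Dropdown(list(TRANSLATIONS), value="EN", label="Language")
        prompt_box = gr.Textbox(label=TRANSLATIONS["EN"]["prompt"])
        generate_btn = gr.Button(TRANSLATIONS["EN"]["generate"])

        def switch_language(lang):
            t = TRANSLATIONS[lang]
            # Return one gr.update per entry in `outputs`, in the same order.
            return gr.update(label=t["prompt"]), gr.update(value=t["generate"])

        lang_selector.change(
            fn=switch_language,
            inputs=lang_selector,
            outputs=[prompt_box, generate_btn],
        )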

Update header title and subtitle
5a1522c
verified

lulavc committed on

v2.3: Multilingual support - EN/ES/PT/AR/HI with RTL and font support
112ff7a
verified

lulavc committed on

Remove format parameter from gr.Image for Gradio compatibility
fb81eeb
verified

lulavc committed on
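
For context, the format keyword on gr.Image exists only in newer Gradio releases; on older runtimes it raises a TypeError at construction time, which is presumably why it was dropped. A minimal sketch, with the remaining keyword arguments assumed rather than taken from app.py:

    import gradio as gr

    # Keep only widely supported keyword arguments; `format="png"` is rejected
    # by older Gradio versions, so it is omitted here.
    output_image = gr.Image(label="Generated image", type="pil", interactive=False)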

v2.2: Remove dead code, fix deprecations, add GPU error handling, use constants
00b8727
verified

lulavc committed on

FIX: Import spaces before torch to prevent CUDA initialization error
973676f
verified

lulavc committed on
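
This import-order requirement comes from ZeroGPU: the spaces package has to be imported before torch (or anything else that touches CUDA) so it can manage CUDA initialization in the main process. A minimal sketch of the pattern; the function name and body are placeholders:

    import spaces  # must be imported before torch on ZeroGPU Spaces
    import torch

    @spaces.GPU  # GPU is attached only for the duration of this call
    def generate(prompt: str):
        # placeholder body; the real app runs the Z-Image-Turbo pipeline here
        device = "cuda" if torch.cuda.is_available() else "cpu"
        return f"would generate on {device}: {prompt}"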

Fix: Import spaces before torch to avoid CUDA initialization error
b6ec4f0
verified

lulavc committed on

Production-Ready v2.0.0 - 1765466791
6ff0add
verified

lulavc committed on

FIX: Remove incompatible Gradio launch parameter
d16526a
verified

lulavc committed on

FIX: Removed fp16 variant - model doesn't support it
4cfde50
verified

lulavc committed on
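
Background on the fp16 variant: passing variant="fp16" tells diffusers to fetch *.fp16.safetensors weight files, which only works when the model repository publishes them; when it does not, loading fails. A hedged sketch, assuming the generic diffusers loader and a placeholder repo id (the app's later commits refer to a ZImagePipeline):

    import torch
    from diffusers import DiffusionPipeline

    MODEL_ID = "Tongyi-MAI/Z-Image-Turbo"  # assumed repo id, for illustration only

    # No variant="fp16": the repo does not ship fp16 weight files, so load the
    # default weights and let torch_dtype control precision instead.
    pipe = DiffusionPipeline.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)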

Performance Optimization v1765465256
87ca719
verified

lulavc committed on

🚀 Performance Optimization - 1765465233
aaea33c
verified

lulavc committed on

Optimize: 50-70% faster generation
22cf63a
verified

lulavc committed on

v2.1: CSS refactor - remove fragile Svelte selectors, use CSS variables consistently, add accessibility focus styles
40558a2
verified

lulavc committed on

v2.0.2: Fix MIME type, BytesIO cleanup in upload, use constants consistently
f296587
verified

lulavc committed on

v2.0.1: Remove unsupported VAE slicing/tiling for ZImagePipeline
45fe3ba
verified

lulavc committed on

v2.0: ZeroGPU best practices - logging, GPU duration hints, VAE optimization, TF32, constants
cc06b92
verified

lulavc committed on
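
Two of the items listed here can be shown compactly: the GPU duration hint (so ZeroGPU reserves roughly the right amount of GPU time per call) and TF32 matmuls. A sketch under assumed values; the real constants live in app.py:

    import spaces
    import torch

    # TF32 matrix multiplies: slight precision trade-off, faster on Ampere+ GPUs.
    torch.backends.cuda.matmul.allow_tf32 = True
    torch.backends.cudnn.allow_tf32 = True

    GPU_DURATION = 60  # assumed per-call GPU time hint, in seconds

    @spaces.GPU(duration=GPU_DURATION)
    def generate(prompt: str, steps: int = 8):
        ...  # run the pipeline on CUDA inside the decorated call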

v1.9: Increase generate_prompt max_tokens to 1000, use full content
a5fb887
verified

lulavc committed on
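
The prompt-generation path talks to GLM (later commits name GLM-4.6V); assuming it goes through an OpenAI-compatible client, raising max_tokens to 1000 looks roughly like the sketch below. The endpoint, model name, and messages are placeholders, not the app's values:

    from openai import OpenAI

    # Assumed OpenAI-compatible access to GLM; base_url, api_key, and model are placeholders.
    client = OpenAI(base_url="https://example.invalid/v1", api_key="sk-...")

    def generate_prompt(user_request: str) -> str:
        response = client.chat.completions.create(
            model="glm-4.6v",
            messages=[{"role": "user", "content": user_request}],
            max_tokens=1000,  # raised so long image prompts are not truncated
        )
        return response.choices[0].message.content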

v1.8: Increase description to 400-500 tokens, max_tokens=1000
43117eb
verified

lulavc committed on

v1.7: Use full content for descriptions, no paragraph splitting
32fe9df
verified

lulavc committed on

v1.6: Image description now 250-350 tokens with detailed prompting
91214eb
verified

lulavc committed on

v1.5: Fix GLM thinking filter + stronger prompt instructions
712daab
verified

lulavc committed on

v1.4: Custom dark theme + dropdown portal CSS + GLM debug logging
73afc7c
verified

lulavc committed on

v1.3: NUCLEAR CSS fix for examples table visibility
0b462f8
verified

lulavc committed on

Major UI overhaul: dark mode, accessibility, responsive layout, improved UX
b375ec5
verified

lulavc committed on

Major UI overhaul: dark mode, accessibility, responsive layout, improved UX
a30295c
verified

lulavc committed on

Major UI overhaul: dark mode, accessibility, responsive layout, improved UX
91c8de4
verified

lulavc committed on

Fix text visibility: tabs, labels, examples table - comprehensive dark theme
5013a18
verified

lulavc committed on

Fix examples table header color
e2069c6
verified

lulavc committed on

Major UI overhaul: dark mode, accessibility, responsive layout, improved UX
5e440ff
verified

lulavc committed on

Optimize CSS with variables, transitions, and modern styling
61b0489
verified

lulavc committed on

Fix CSS syntax error - use raw string for gradient
954e516
verified

lulavc committed on

Fix syntax error - replace arrow characters with ASCII
cc7f5b9
verified

lulavc committed on

Update system prompt to handle all types of transformations, not just object replacement
ce46a3d
verified

lulavc committed on

Simplify system prompt to stop GLM thinking loop
6ae6c35
verified

lulavc committed on

Use SAME extraction logic as image analysis for consistency
9943f23
verified

lulavc committed on

Improve GLM extraction to handle partial prompts - lower threshold, multiple strategies
eaf9ae0
verified

lulavc committed on

Simplify GLM prompt extraction with reliable quote-based method
a2e2b49
verified

lulavc committed on

Improve GLM prompt extraction to better handle mixed thinking/content
7c6b6b5
verified

lulavc committed on

Fix function schema errors and remove debug logging - working GLM-4.6V prompt extraction
cdc3eb7
verified

lulavc committed on

Fix: Take LONGEST quoted text and use 200 char minimum
01e5541
verified

lulavc committed on
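
This entry and the DOTALL fix just below describe the quote-based extraction: scan the model's reasoning text for double-quoted spans that start with "A", match across line breaks, keep the longest candidate, and reject anything under 200 characters. A self-contained sketch of that logic (the function name is illustrative, not the app's):

    import re

    def extract_prompt(reasoning_text: str, min_len: int = 200) -> str | None:
        # Quoted spans beginning with "A"; DOTALL lets "." cross line breaks.
        candidates = re.findall(r'"(A.+?)"', reasoning_text, flags=re.DOTALL)
        if not candidates:
            return None
        longest = max(candidates, key=len)  # take the LONGEST quoted text
        return longest if len(longest) >= min_len else None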

Fix: Find quoted text starting with A using DOTALL for multi-line
9a3f9fb
verified

lulavc committed on

Fix: Use same extraction logic as working image description - find quoted text starting with A
35c34eb
verified

lulavc committed on

Fix: Extract last quoted text as final prompt from reasoning_content
e2386a5
verified

lulavc committed on

Increase Generated Prompt textbox to 15 lines
0ebff6b
verified

lulavc committed on

Fix: Extract final prompt from reasoning_content using markers and patterns
a6544ce
verified

lulavc committed on

Add debug logging to see GLM response structure
1bb1d1a
verified

lulavc committed on

Fix Generated Prompt extraction - increase max_tokens to 1200 and improve response handling
2b2030d
verified

lulavc committed on

Fix GLM response extraction - capture full description, not partial
9867358
verified

lulavc committed on