Add heart emoji and make like message red 5b007df verified lulavc committed on about 5 hours ago
v2.3.1: Fix language switching - connect lang_selector.change() to update all 36 UI components c676bb8 verified lulavc committed on about 9 hours ago
v2.3: Multilingual support - EN/ES/PT/AR/HI with RTL and font support 112ff7a verified lulavc committed on about 9 hours ago
Remove format parameter from gr.Image for Gradio compatibility fb81eeb verified lulavc committed on about 10 hours ago
v2.2: Remove dead code, fix deprecations, add GPU error handling, use constants 00b8727 verified lulavc committed on about 10 hours ago
FIX: Import spaces before torch to prevent CUDA initialization error 973676f verified lulavc committed on about 10 hours ago
Fix: Import spaces before torch to avoid CUDA initialization error b6ec4f0 verified lulavc committed on about 10 hours ago
FIX: Remove incompatible Gradio launch parameter d16526a verified lulavc committed on about 11 hours ago
FIX: Removed fp16 variant - model doesn't support it 4cfde50 verified lulavc committed on about 11 hours ago
v2.1: CSS refactor - remove fragile Svelte selectors, use CSS variables consistently, add accessibility focus styles 40558a2 verified lulavc committed on about 12 hours ago
v2.0.2: Fix MIME type, BytesIO cleanup in upload, use constants consistently f296587 verified lulavc committed on about 13 hours ago
v2.0.1: Remove unsupported VAE slicing/tiling for ZImagePipeline 45fe3ba verified lulavc committed on about 14 hours ago
v2.0: ZeroGPU best practices - logging, GPU duration hints, VAE optimization, TF32, constants cc06b92 verified lulavc committed on about 14 hours ago
v1.9: Increase generate_prompt max_tokens to 1000, use full content a5fb887 verified lulavc committed on about 15 hours ago
v1.8: Increase description to 400-500 tokens, max_tokens=1000 43117eb verified lulavc committed on about 15 hours ago
v1.7: Use full content for descriptions, no paragraph splitting 32fe9df verified lulavc committed on about 15 hours ago
v1.6: Image description now 250-350 tokens with detailed prompting 91214eb verified lulavc committed on about 15 hours ago
v1.5: Fix GLM thinking filter + stronger prompt instructions 712daab verified lulavc committed on about 16 hours ago
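The commit above mentions a "GLM thinking filter" but the code itself is not in the log. A minimal sketch of such a filter, assuming the model wraps its chain-of-thought in `<think>…</think>` tags before the final answer (the tag name and structure are assumptions, not confirmed by the log):

```python
import re

def strip_thinking(text: str) -> str:
    # Remove any <think>...</think> blocks the model may emit before the
    # final answer; DOTALL lets the block span multiple lines.
    # Assumption: GLM wraps its reasoning in these exact tags.
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
```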
v1.4: Custom dark theme + dropdown portal CSS + GLM debug logging 73afc7c verified lulavc committed on about 16 hours ago
v1.3: NUCLEAR CSS fix for examples table visibility 0b462f8 verified lulavc committed on about 16 hours ago
Major UI overhaul: dark mode, accessibility, responsive layout, improved UX b375ec5 verified lulavc committed on about 17 hours ago
Major UI overhaul: dark mode, accessibility, responsive layout, improved UX a30295c verified lulavc committed on about 17 hours ago
Major UI overhaul: dark mode, accessibility, responsive layout, improved UX 91c8de4 verified lulavc committed on about 17 hours ago
Fix text visibility: tabs, labels, examples table - comprehensive dark theme 5013a18 verified lulavc committed on about 17 hours ago
Major UI overhaul: dark mode, accessibility, responsive layout, improved UX 5e440ff verified lulavc committed on about 18 hours ago
Optimize CSS with variables, transitions, and modern styling 61b0489 verified lulavc committed on about 19 hours ago
Fix CSS syntax error - use raw string for gradient 954e516 verified lulavc committed on about 19 hours ago
Fix syntax error - replace arrow characters with ASCII cc7f5b9 verified lulavc committed on about 19 hours ago
Update system prompt to handle all types of transformations, not just object replacement ce46a3d verified lulavc committed on about 20 hours ago
Simplify system prompt to stop GLM thinking loop 6ae6c35 verified lulavc committed on about 20 hours ago
Use SAME extraction logic as image analysis for consistency 9943f23 verified lulavc committed on about 20 hours ago
Improve GLM extraction to handle partial prompts - lower threshold, multiple strategies eaf9ae0 verified lulavc committed on about 20 hours ago
Simplify GLM prompt extraction with reliable quote-based method a2e2b49 verified lulavc committed on about 20 hours ago
Improve GLM prompt extraction to better handle mixed thinking/content 7c6b6b5 verified lulavc committed on about 20 hours ago
Fix function schema errors and remove debug logging - working GLM-4.6V prompt extraction cdc3eb7 verified lulavc committed on about 20 hours ago
Fix: Take LONGEST quoted text and use 200 char minimum 01e5541 verified lulavc committed on about 20 hours ago
Fix: Find quoted text starting with A using DOTALL for multi-line 9a3f9fb verified lulavc committed on about 20 hours ago
Fix: Use same extraction logic as working image description - find quoted text starting with A 35c34eb verified lulavc committed on about 20 hours ago
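The three extraction fixes above describe the same heuristic: collect double-quoted spans that start with "A", allow them to cross line breaks via DOTALL, enforce a 200-character minimum, and keep the longest candidate. A minimal sketch of that logic, assuming plain double quotes in the model output (the function name and signature are illustrative, not taken from the repo):

```python
import re

def extract_final_prompt(reasoning_content, min_len=200):
    # Collect every double-quoted span beginning with "A"; DOTALL lets a
    # quoted prompt span multiple lines, as the commits describe.
    candidates = re.findall(r'"(A.*?)"', reasoning_content, flags=re.DOTALL)
    # Drop fragments below the minimum length, then take the LONGEST match.
    candidates = [c for c in candidates if len(c) >= min_len]
    return max(candidates, key=len) if candidates else None
```

The length floor and the longest-match rule together filter out short quoted fragments from the model's reasoning (e.g. "A cat") in favor of the full generated prompt.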
Fix: Extract last quoted text as final prompt from reasoning_content e2386a5 verified lulavc committed on about 21 hours ago
Fix: Extract final prompt from reasoning_content using markers and patterns a6544ce verified lulavc committed on about 21 hours ago
Add debug logging to see GLM response structure 1bb1d1a verified lulavc committed on about 21 hours ago
Fix Generated Prompt extraction - increase max_tokens to 1200 and improve response handling 2b2030d verified lulavc committed on about 21 hours ago
Fix GLM response extraction - capture full description, not partial 9867358 verified lulavc committed on about 22 hours ago