
FastAI-Compatible Colorization Models Guide

Current Issue

The model Hammad712/GAN-Colorization-Model contains a plain PyTorch checkpoint (generator.pt), not a FastAI export. To be loadable as a FastAI Learner, a model must be a .pkl file created with FastAI's learn.export() function.
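
The difference shows up as soon as you try to load the files. Below is a minimal sketch contrasting the two formats; the file names are illustrative, and whether torch.load returns a bare state dict or a full module depends on how the checkpoint was saved.

import torch
from fastai.learner import load_learner

# Plain PyTorch checkpoint: typically just weights, so the network class
# must already be defined in code before the weights are usable.
checkpoint = torch.load("generator.pt", map_location="cpu")

# FastAI export: a pickled Learner bundling the model, transforms,
# and inference logic in a single file.
learn = load_learner("model.pkl")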

How to Find FastAI-Compatible Models

Option 1: Search Hugging Face

  1. Go to https://huggingface.co/models
  2. Search for: fastai colorization or fastai image colorization
  3. Look for models that have .pkl files in their repository
  4. Check the model's README to confirm it's a FastAI Learner
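
You can also query the Hub programmatically with the huggingface_hub client. This is just a sketch: the search term is an example, and the filter assumes FastAI models carry the fastai library tag.

from huggingface_hub import HfApi

api = HfApi()
# List models tagged "fastai" whose name or card mentions colorization.
for model in api.list_models(search="colorization", filter="fastai", limit=20):
    print(model.id)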

Option 2: Use FastAI's Official Examples

FastAI course examples often have colorization models. Look for:

  • FastAI course lesson notebooks on image colorization
  • Models exported using learn.export('model.pkl')

Option 3: Train Your Own

If you have a FastAI colorization model:

from fastai.vision.all import *
learn = ...  # your trained FastAI Learner
learn.export('model.pkl')  # pickles the Learner (model + transforms) into a single file

Then upload model.pkl to Hugging Face.
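
The huggingface_hub library ships a FastAI integration that handles the upload for you. The repo name below is a placeholder, and the sketch assumes you are already authenticated (e.g. via huggingface-cli login).

from huggingface_hub import push_to_hub_fastai

# Pushes the Learner as a .pkl (plus a basic model card) to the given repo.
push_to_hub_fastai(learner=learn, repo_id="your-username/your-fastai-colorization-model")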

Setting a New Model

Via Environment Variable (Recommended)

In your Hugging Face Space settings, add:

MODEL_ID=your-username/your-fastai-colorization-model

Via Code

Update app/config.py:

MODEL_ID: str = os.getenv("MODEL_ID", "your-username/your-fastai-colorization-model")
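
For context, here is a minimal sketch of how that setting might sit inside app/config.py; the surrounding Settings class and the MODEL_BACKEND default are assumptions about this repo's layout, not its actual code.

import os
from dataclasses import dataclass

@dataclass
class Settings:
    MODEL_ID: str = os.getenv("MODEL_ID", "your-username/your-fastai-colorization-model")
    MODEL_BACKEND: str = os.getenv("MODEL_BACKEND", "fastai")

settings = Settings()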

Model Requirements

The model must:

  1. ✅ Be a FastAI Learner exported as a .pkl file
  2. ✅ Accept PIL Images as input
  3. ✅ Return colorized images (PIL Image or tensor)
  4. ✅ Be uploaded to Hugging Face Hub

Testing a Model

Before switching, you can test locally:

from huggingface_hub import from_pretrained_fastai
from PIL import Image

# Download and load the exported Learner from the Hub.
learn = from_pretrained_fastai("your-model-id")
img = Image.open("test.jpg")
result = learn.predict(img)

If this works, the model is compatible!
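
Depending on how the Learner was built, learn.predict may return a PIL image directly or a tuple containing a tensor. Here is a hedged sketch for handling a tensor result, assuming a CHW float tensor in the 0-1 range:

import torch
from PIL import Image

pred = result[0] if isinstance(result, tuple) else result
if isinstance(pred, torch.Tensor):
    arr = (pred.detach().cpu().clamp(0, 1).numpy() * 255).astype("uint8")
    pred = Image.fromarray(arr.transpose(1, 2, 0))  # CHW -> HWC
pred.save("colorized.jpg")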

Alternative: Switch Back to SDXL+ControlNet

If you can't find a suitable FastAI model, you can switch back to the SDXL+ControlNet approach that was working previously. Set MODEL_BACKEND to "diffusers" and use a ControlNet colorization model.
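
In that case the Space settings would change along these lines; which ControlNet checkpoint to pair with it depends on what you were using before.

MODEL_BACKEND=diffusers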