Skier8402 committed
Commit 1e56b47 · verified · 1 Parent(s): 089e309

Update app.py

Files changed (1): app.py (+2 -2)
app.py CHANGED
```diff
@@ -194,7 +194,7 @@ iface = gr.Interface(
     ],
     outputs=gr.Image(label="Token Attribution Visualization"),
     title="AI Interpretability Explorer: See How Tokens Influence Predictions",
-    description="Input a prompt and target token to visualize token contributions using Integrated Gradients on LLaMA. "
+    description="Input a prompt and target token to visualize token contributions using [Integrated Gradients](https://captum.ai/docs/extension/integrated_gradients) on LLaMA. "
     "Explore model reasoning interactively.",
     # Insert a collapsible Feynman-style explanation and quick cheat-sheet actions using HTML so Gradio shows it above the app.
     # We use safe escaping for the cheat text when embedding into HTML/JS.
@@ -202,7 +202,7 @@ iface = gr.Interface(
     article="""
     ### How it works — Feynman-style
 
-    This tool explains which input tokens most influence the model's next-token prediction using Integrated Gradients.
+    This tool explains which input tokens most influence the model's next-token prediction using Integrated Gradients https://captum.ai/docs/extension/integrated_gradients.
 
     - What it does: Interpolates from a baseline to the actual input in embedding space, accumulates gradients along the path, and attributes importance to each input token.
     - Why it helps: Highlights which tokens push the model toward (green) or away from (red) the chosen target token. Useful for debugging, bias detection, and model transparency.
```
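For context on the first hunk: Gradio renders the `description` argument of `gr.Interface` as Markdown, so the `[text](url)` syntax introduced here shows up as a clickable link in the app header. A minimal sketch of that behavior follows; the placeholder `fn` and the two `gr.Textbox` inputs are assumptions for illustration, not the app's real components.

```python
import gradio as gr

# Minimal sketch: gr.Interface renders `description` as Markdown, so the
# [text](url) link added in this commit becomes clickable in the UI.
# The lambda and Textbox inputs are placeholders, not app.py's real code.
demo = gr.Interface(
    fn=lambda prompt, target: None,  # stand-in for the attribution function
    inputs=[gr.Textbox(label="Prompt"), gr.Textbox(label="Target token")],
    outputs=gr.Image(label="Token Attribution Visualization"),
    title="AI Interpretability Explorer: See How Tokens Influence Predictions",
    description=(
        "Input a prompt and target token to visualize token contributions using "
        "[Integrated Gradients](https://captum.ai/docs/extension/integrated_gradients) on LLaMA. "
        "Explore model reasoning interactively."
    ),
)

if __name__ == "__main__":
    demo.launch()
```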
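The second hunk's bullets summarize the Integrated Gradients procedure itself. As a rough plain-PyTorch sketch of that idea (not app.py's code: `model`, `embed`, `input_ids`, and `target_id` are assumed names, the zero baseline and the HF-style `inputs_embeds` forward call are assumptions, and the linked Captum docs cover a production implementation, `captum.attr.IntegratedGradients`):

```python
import torch

def integrated_gradients(model, embed, input_ids, target_id, n_steps=50):
    """Hypothetical sketch of IG for next-token attribution, not app.py's code."""
    x = embed(input_ids).detach()          # actual input embeddings, (1, seq, dim)
    baseline = torch.zeros_like(x)         # all-zeros baseline in embedding space
    total_grads = torch.zeros_like(x)
    for k in range(1, n_steps + 1):
        # Interpolate from the baseline toward the actual input along a straight path.
        point = (baseline + (k / n_steps) * (x - baseline)).requires_grad_(True)
        logits = model(inputs_embeds=point).logits  # HF-style forward (assumption)
        logits[0, -1, target_id].backward()         # next-token logit for the target
        total_grads += point.grad                   # accumulate gradients along the path
    # Average gradient over the path, scaled by (input - baseline), per IG's definition.
    attributions = (x - baseline) * total_grads / n_steps
    return attributions.sum(dim=-1)        # one signed score per input token
```

The signed per-token scores are what the app maps to colors: positive values push the model toward the chosen target token (green), negative values away from it (red).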