---
title: Intelligent Documentation Generator Agent
colorFrom: blue
colorTo: purple
sdk: gradio
python_version: 3.11
sdk_version: 5.49.1
app_file: app.py
short_description: Generate documentation and chat with code
tags:
- documentation
- code-assistant
- large-language-model
- fireworks-ai
models:
- accounts/fireworks/models/glm-4p6
pinned: false
---

# 🧠 Intelligent Documentation Generator Agent

Built with **GLM-4.6 on Fireworks AI** and **Gradio**

---

## 📋 Overview

The **Intelligent Documentation Generator Agent** automatically generates structured, multi-layer documentation and provides a chat interface to explore Python codebases.
This version is powered by **GLM-4.6** via the **Fireworks AI Inference API** and implemented in **Gradio**, offering a lightweight, interactive browser-based UI.

**Capabilities:**

* Analyze Python files from uploads, pasted code, or GitHub links
* Generate consistent, well-structured documentation (overview, API breakdown, usage examples)
* Chat directly with your code to understand logic, dependencies, and optimization opportunities

---

## ⚙️ Architecture

```
User (Upload / Paste / GitHub)
              │
              ▼
        Gradio UI (Tabs)
┌──────────────────────┐
│ Documentation Tab ───┼──► Fireworks GLM-4.6 → Markdown Docs
│ Chat Tab ────────────┼──► Fireworks GLM-4.6 → Q&A Responses
└──────────────────────┘
```

---

## 🧩 Core Features

### 📝 Documentation Generator

* Input via:

  * Pasted Python code
  * Uploaded `.py` file
  * GitHub file link (supports automatic conversion to the raw URL)
* Produces:

  * Overview and purpose
  * Key functions/classes with signatures
  * Dependencies and relationships
  * Example usage and improvement suggestions
* Outputs documentation in Markdown
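
The GitHub-link conversion mentioned above can be as simple as a string rewrite. This is a sketch, not the app's actual code; it assumes the standard `https://github.com/<owner>/<repo>/blob/<ref>/<path>` shape:

```python
def to_raw_url(url: str) -> str:
    """Convert a github.com 'blob' link to its raw.githubusercontent.com
    equivalent so the file contents can be fetched directly."""
    if "github.com" in url and "/blob/" in url:
        return (url
                .replace("github.com", "raw.githubusercontent.com")
                .replace("/blob/", "/"))
    return url  # already a raw link, or not a GitHub URL
```

For example, `to_raw_url("https://github.com/user/repo/blob/main/app.py")` returns `https://raw.githubusercontent.com/user/repo/main/app.py`.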

### 💬 Code Chatbot

* Conversational Q&A with the analyzed code
* References exact functions and dependencies
* Maintains interactive chat history using Gradio's `Chatbot` component
* Uses the same GLM-4.6 model context for accurate answers
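
Threading the chat history into each model call can be sketched as follows. This is illustrative only; `build_messages` and its arguments are not names from the actual app, and it assumes Gradio's tuple-style history of `(user, bot)` pairs:

```python
def build_messages(analyzed_code, history, user_question):
    """Assemble an OpenAI-style messages list: a system prompt carrying the
    analyzed code, then the prior chat turns, then the new question."""
    messages = [{
        "role": "system",
        "content": f"You are a code assistant. Here is the code under discussion:\n\n{analyzed_code}",
    }]
    for user_msg, bot_msg in history:  # each history entry is a (user, bot) pair
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": bot_msg})
    messages.append({"role": "user", "content": user_question})
    return messages
```

Rebuilding the full list on every turn keeps the model grounded in both the code and the conversation so far.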

---

## 🧱 Tech Stack

| Layer             | Technology                                      |
| ----------------- | ----------------------------------------------- |
| **Model**         | [GLM-4.6](https://fireworks.ai) on Fireworks AI |
| **UI Framework**  | [Gradio](https://gradio.app)                    |
| **Language**      | Python 3.9+                                     |
| **HTTP Requests** | `requests`                                      |
| **Deployment**    | Localhost / containerized environments          |

---

## 🚀 Installation

### 1. Clone the Repository

```bash
git clone https://github.com/<your-username>/intelligent-doc-agent.git
cd intelligent-doc-agent
```

### 2. Install Dependencies

```bash
pip install gradio requests
```

### 3. Configure Fireworks API Key

Set your API key as an environment variable:

```bash
export FIREWORKS_API_KEY="your_fireworks_api_key"
```

> Alternatively, enter your API key directly in the UI when prompted.

### 4. Run the Application

```bash
python app.py
```

Then visit **[http://127.0.0.1:7860](http://127.0.0.1:7860)** in your browser.

---

## 💡 Usage Guide

### 📝 Generate Documentation

1. Open the **📝 Generate Documentation** tab.
2. Choose an input mode:

   * Paste code into the text area
   * Upload a `.py` file
   * Enter a GitHub file link (e.g., `https://github.com/.../file.py`)
3. Click **📝 Generate Documentation** to process your file.
4. View the formatted Markdown output instantly.

### 💬 Chat with Code

1. Switch to the **💬 Chat with Code** tab.
2. Ask questions about your code (e.g., "What does this function do?" or "How can I improve performance?").
3. The model responds contextually, referencing the uploaded file.

---

## 🧠 Model Integration Example

```python
import os

import requests

FIREWORKS_API_KEY = os.environ["FIREWORKS_API_KEY"]

# `messages` is the OpenAI-style chat list built elsewhere in the app
payload = {
    "model": "accounts/fireworks/models/glm-4p6",
    "max_tokens": 4096,
    "temperature": 0.6,
    "messages": messages,
}
response = requests.post(
    "https://api.fireworks.ai/inference/v1/chat/completions",
    headers={"Authorization": f"Bearer {FIREWORKS_API_KEY}"},
    json=payload,  # requests serializes the body and sets Content-Type
)
response.raise_for_status()  # surface HTTP errors early
print(response.json()["choices"][0]["message"]["content"])
```

---

## 📦 Project Structure

```
.
├── app.py           # Main Gradio interface
├── README.md        # Project documentation
└── requirements.txt # Dependencies (gradio, requests)
```

---

## 🔮 Future Enhancements

* **Multi-file repository analysis** with hierarchical context summarization
* **Semantic vector store** (Chroma / Pinecone) for persistent knowledge retrieval
* **Multi-agent orchestration** using LangGraph or the Model Context Protocol (MCP)
* **Continuous documentation updates** via Git hooks or CI/CD pipelines

---

## 🧾 Example

**Input**

```python
def calculate_mean(numbers):
    return sum(numbers) / len(numbers)
```

**Output**

```markdown
### Function: calculate_mean
Computes the arithmetic mean of a numeric list.

**Parameters:**
- numbers (list): Sequence of numbers to average.

**Returns:**
- float: Mean of the list.

**Usage Example:**
>>> calculate_mean([1, 2, 3, 4])
2.5
```

**Chat Example**

> "How can I modify this to avoid division by zero errors?"
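
A typical fix the agent might suggest (shown here as an illustrative sketch, not the model's verbatim output):

```python
def calculate_mean(numbers):
    """Return the arithmetic mean, or None for an empty sequence."""
    if not numbers:
        return None  # guard against ZeroDivisionError on empty input
    return sum(numbers) / len(numbers)
```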

---

## 📌 Best Practices

* Use **raw GitHub links** (`https://raw.githubusercontent.com/...`) for accurate file fetches.
* Limit input size (roughly 4k tokens) for optimal latency and context accuracy.
* Keep your **API key** private; never commit it to source files.
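
A crude input-size guard can be implemented with a character budget. This is a sketch: the 4-characters-per-token ratio is a rough heuristic, and `truncate_code` is not part of the actual app:

```python
def truncate_code(code: str, max_chars: int = 16_000) -> str:
    """Cap input at roughly 4k tokens, assuming ~4 characters per token."""
    if len(code) <= max_chars:
        return code
    return code[:max_chars] + "\n# ... truncated to stay within the model's context budget ..."
```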

---

## 🧭 License

Released under