Added README.md.
README.md (added):
---
title: LLM-Enhanced Internet Search Agent
emoji: 🕵🏻‍♂️
colorFrom: indigo
colorTo: indigo
sdk: gradio
sdk_version: 5.25.2
app_file: app.py
pinned: false
hf_oauth: true
# optional, default duration is 8 hours/480 minutes. Max duration is 30 days/43200 minutes.
hf_oauth_expiration_minutes: 480
---

# LLM-Enhanced Internet Search Agent

This agent uses a three-step approach to answer questions:

1. **Question Breakdown**: The agent first uses an LLM (GPT-3.5) to break down complex questions into 2-3 key search queries (sketched below)
2. **Targeted Search**: Each search query is sent to Wikipedia's API to retrieve relevant information
3. **Answer Synthesis**: The agent then uses the LLM to synthesize a comprehensive answer based on all search results

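As a rough illustration of the question-breakdown step, a minimal sketch is shown below. It assumes the OpenAI Python client (v1+) with `OPENAI_API_KEY` set; the function name and prompt are illustrative, not the exact code in `app.py`.

```python
# Step 1 sketch: ask the LLM to turn a question into a few short search queries.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_search_queries(question: str) -> list[str]:
    """Ask GPT-3.5 for 2-3 short Wikipedia search queries covering the question."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Break the question into 2-3 short search queries, one per line."},
            {"role": "user", "content": question},
        ],
    )
    return [line.strip() for line in response.choices[0].message.content.splitlines() if line.strip()]
```
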
## Features

- **Smart Query Generation**: Transforms natural language questions into optimized search queries
- **Parallel Search Processing**: Searches for multiple key aspects of the question simultaneously
- **Knowledge Synthesis**: Combines information from multiple sources into a cohesive answer
- **Fallback Mechanisms**: Graceful handling of errors at each step of the process (see the sketch after this list)

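For the fallback behaviour, one plausible pattern is sketched below; it reuses the hypothetical `generate_search_queries` helper from the earlier example rather than the actual error handling in `app.py`.

```python
# Fallback sketch: if query generation fails, fall back to searching the
# original question verbatim so the pipeline still produces an answer.
def safe_generate_queries(question: str) -> list[str]:
    try:
        return generate_search_queries(question)
    except Exception:
        return [question]
```
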
## Setup Requirements

1. Clone this repository
2. Install required packages: `pip install -r requirements.txt`
3. Set your OpenAI API key as an environment variable: `OPENAI_API_KEY=your-api-key` (a quick check is sketched below)

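If you want to confirm the key is actually visible to the app before launching it, a small check along these lines works (purely illustrative, not part of the repository):

```python
import os

# The agent reads the key from the environment; fail early if it is missing.
if not os.getenv("OPENAI_API_KEY"):
    raise RuntimeError("OPENAI_API_KEY is not set")
```
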
## How It Works

1. User submits a question
2. LLM breaks down the question into key search terms
3. Search terms are used to query the Wikipedia API
4. Results from multiple searches are collected
5. LLM synthesizes the information into a comprehensive answer
6. Answer is returned to the user

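Taken together, the loop might look roughly like the sketch below. It assumes the `requests` library, the hypothetical `generate_search_queries` helper and `client` from the earlier sketch, and Wikipedia's public search endpoint; the actual orchestration lives in `app.py` and may differ.

```python
# Steps 2-5 sketch: search Wikipedia for each generated query, pool the
# snippets, then ask the LLM to synthesize an answer from them.
import requests

WIKI_API = "https://en.wikipedia.org/w/api.php"

def search_wikipedia(query: str, limit: int = 3) -> list[str]:
    """Return 'title: snippet' strings for the top search hits (snippets contain HTML markup)."""
    params = {
        "action": "query",
        "list": "search",
        "srsearch": query,
        "srlimit": limit,
        "format": "json",
    }
    hits = requests.get(WIKI_API, params=params, timeout=10).json()["query"]["search"]
    return [f"{hit['title']}: {hit['snippet']}" for hit in hits]

def answer_question(question: str) -> str:
    queries = generate_search_queries(question)                   # step 2: key search terms
    results = [s for q in queries for s in search_wikipedia(q)]   # steps 3-4: collect results
    response = client.chat.completions.create(                    # step 5: synthesis
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Answer the question using only the provided search results."},
            {"role": "user", "content": f"Question: {question}\n\nSearch results:\n" + "\n".join(results)},
        ],
    )
    return response.choices[0].message.content                    # step 6: returned to the user
```
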
This approach is more effective than direct internet searches because:
- It identifies the most relevant aspects of complex questions
- It can break multi-part questions into their components
- It leverages the LLM's understanding of natural language
- It provides more targeted and accurate search results

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference