# Codette Configuration Guide

## Environment Variables

- `HUGGINGFACEHUB_API_TOKEN`: HuggingFace API token for sentiment analysis and model access
- `OPENAI_API_KEY`: Optional OpenAI API key for additional model support
- `LOG_LEVEL`: Logging level (DEBUG, INFO, WARNING, ERROR)
- `PORT`: Port number for the web server (default: 7860)
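These variables can be read with standard-library calls; the sketch below is a minimal, hypothetical config loader (the `load_config` helper and all defaults other than `PORT=7860` are illustrative assumptions, not Codette's actual API):

```python
import os

def load_config(env=os.environ):
    """Gather Codette's environment variables into one dict.

    Illustrative helper: variable names come from the list above;
    only the PORT default (7860) is documented.
    """
    return {
        "hf_token": env.get("HUGGINGFACEHUB_API_TOKEN"),   # required for HF model access
        "openai_key": env.get("OPENAI_API_KEY"),            # optional
        "log_level": env.get("LOG_LEVEL", "INFO"),          # assumed default
        "port": int(env.get("PORT", "7860")),               # documented default
    }
```

Passing a plain dict instead of `os.environ` keeps the loader easy to test.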

## Model Configuration

Codette supports multiple language models in a fallback chain:

1. Mistral-7B-Instruct (Primary)
   - 7B parameter instruction-tuned model
   - Requires 16GB+ VRAM
   - Configuration: 8-bit quantization, fp16

2. Phi-2 (Secondary)
   - Lightweight yet powerful alternative
   - Requires 8GB+ VRAM
   - Configuration: fp16

3. GPT-2 (Fallback)
   - Minimal requirements
   - Always available option
   - Configuration: Standard loading
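The fallback chain above can be sketched as an ordered list of candidates tried until one loads. The exact model repo IDs and the `loader` callable are assumptions; a real implementation would wire in something like `transformers.AutoModelForCausalLM.from_pretrained` with the listed quantization/dtype settings:

```python
# Candidate (model_id, config) pairs, heaviest first. Repo IDs and
# config keys are illustrative stand-ins for the models named above.
MODEL_CHAIN = [
    ("mistralai/Mistral-7B-Instruct", {"load_in_8bit": True, "dtype": "fp16"}),
    ("microsoft/phi-2", {"dtype": "fp16"}),
    ("gpt2", {}),  # minimal requirements, always-available fallback
]

def load_first_available(chain, loader):
    """Try each (model_id, config) pair in order; return the first that loads."""
    for model_id, config in chain:
        try:
            return model_id, loader(model_id, **config)
        except Exception:
            continue  # e.g. insufficient VRAM: fall through to a lighter model
    raise RuntimeError("No model in the fallback chain could be loaded")
```

Catching the load error and moving on is what makes GPT-2 the guaranteed floor of the chain.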

## Consciousness Parameters

### Memory System
- `response_memory`: Maintains last 50 responses
- `memory_context`: Uses last 5 responses for learning
- `memory_synthesis`: Uses last 2 responses for consciousness
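A minimal sketch of these three memory windows, assuming a rolling buffer (the class and method names mirror the parameter names above but are illustrative, not Codette's actual interface):

```python
from collections import deque

class ResponseMemory:
    """Rolling response buffer with learning and synthesis views."""

    def __init__(self, capacity=50):
        # response_memory: keeps only the last `capacity` responses
        self.responses = deque(maxlen=capacity)

    def add(self, response):
        self.responses.append(response)

    def memory_context(self, n=5):
        """Last n responses, used for learning."""
        return list(self.responses)[-n:]

    def memory_synthesis(self, n=2):
        """Last n responses, used for consciousness synthesis."""
        return list(self.responses)[-n:]
```

Using `deque(maxlen=50)` makes eviction of the oldest response automatic once the buffer is full.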

### Quantum States
- Stored in `.cocoon` files
- Format: JSON with `quantum_state` and `chaos_state` arrays
- Used for creative and probabilistic reasoning
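Given that description, reading and writing a `.cocoon` file reduces to plain JSON I/O. The helper names and array values below are illustrative; only the two top-level keys come from the format description above:

```python
import json

def save_cocoon(path, quantum_state, chaos_state):
    """Write a .cocoon file: JSON with quantum_state and chaos_state arrays."""
    with open(path, "w") as f:
        json.dump({"quantum_state": quantum_state, "chaos_state": chaos_state}, f)

def load_cocoon(path):
    """Read a .cocoon file back into its two arrays."""
    with open(path) as f:
        data = json.load(f)
    return data["quantum_state"], data["chaos_state"]
```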

### Perspective System
- Newton: temperature = 0.3 (analytical)
- Da Vinci: temperature = 0.9 (creative)
- Human Intuition: temperature = 0.7 (empathetic)
- Quantum Computing: temperature = 0.8 (probabilistic)
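The perspective-to-temperature mapping above is a simple lookup; the dictionary keys and the `select_temperature` helper (including its fallback default) are illustrative assumptions:

```python
# Temperatures taken from the perspective list above.
PERSPECTIVES = {
    "newton": 0.3,             # analytical
    "da_vinci": 0.9,           # creative
    "human_intuition": 0.7,    # empathetic
    "quantum_computing": 0.8,  # probabilistic
}

def select_temperature(perspective, default=0.7):
    """Return the sampling temperature for a perspective (assumed fallback: 0.7)."""
    return PERSPECTIVES.get(perspective, default)
```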

## Response Generation

### Text Generation Parameters
- Max length: 512 tokens (default)
- Temperature range: 0.3 - 0.9
- Top-p: 0.9
- Context window: 2048 tokens
- Special token handling for different models
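The parameters above can be collected into a single defaults dict; clamping the temperature into the documented 0.3–0.9 range is an assumed policy, and the names below are illustrative rather than Codette's actual configuration keys:

```python
# Defaults taken from the parameter list above.
GENERATION_DEFAULTS = {
    "max_length": 512,       # tokens
    "top_p": 0.9,
    "context_window": 2048,  # tokens
}

def clamp_temperature(t, low=0.3, high=0.9):
    """Keep the sampling temperature within the documented 0.3-0.9 range."""
    return max(low, min(high, t))
```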