---
language: en
tags:
  - clip
  - vision
  - transformers
  - interpretability
  - sparse autoencoder
  - sae
  - mechanistic interpretability
license: apache-2.0
library_name: torch
pipeline_tag: feature-extraction
metrics:
  - type: explained_variance
    value: 89.7
    pretty_name: Explained Variance %
    range:
      min: 0
      max: 100
  - type: l0
    value: 710.746
    pretty_name: L0
---
CLIP-B-32 Sparse Autoencoder x64 vanilla - L1:5e-05
Training Details
- Base Model: CLIP-ViT-B-32 (LAION DataComp.XL-s13B-b90K)
- Layer: 4
- Component: hook_resid_post (activation extraction sketched below)
 
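The SAE was trained on activations from this hook point. As a rough illustration of where those activations live, the sketch below pulls the post-block-4 residual stream from a Hugging Face CLIPVisionModel; the checkpoint id, the use of `output_hidden_states` as a stand-in for Prisma's `hook_resid_post`, and the hidden-state index are assumptions, not the original extraction pipeline.

```python
# Sketch only: approximate "layer 4, hook_resid_post" activations via
# Hugging Face hidden states. Checkpoint id and index are assumptions.
import torch
from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModel

MODEL_NAME = "laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K"  # assumed HF repo id

model = CLIPVisionModel.from_pretrained(MODEL_NAME).eval()
processor = CLIPImageProcessor.from_pretrained(MODEL_NAME)

image = Image.new("RGB", (224, 224))          # placeholder image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# hidden_states[0] is the patch/CLS embedding; hidden_states[i + 1] is the
# residual stream after block i, so index 5 roughly matches layer 4's
# hook_resid_post (modulo layer-norm conventions).
resid_post_4 = out.hidden_states[5]           # shape: (1, 50, 768)
print(resid_post_4.shape)
```
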
Model Architecture
- Input Dimension: 768
- SAE Dimension: 49,152
- Expansion Factor: x64 (vanilla architecture)
- Activation Function: ReLU
- Initialization: encoder_transpose_decoder (see the sketch below)
- Context Size: 50 tokens
 
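A minimal PyTorch sketch of the shape described above, not the released training code: parameter names, the bias handling, and the exact form of the `encoder_transpose_decoder` initialization are assumptions.

```python
# Minimal vanilla-SAE sketch matching the dimensions above (assumptions noted).
import torch
import torch.nn as nn

class VanillaSAE(nn.Module):
    def __init__(self, d_in: int = 768, expansion: int = 64):
        super().__init__()
        d_sae = d_in * expansion                        # 768 * 64 = 49,152 latents
        self.W_enc = nn.Parameter(torch.empty(d_in, d_sae))
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.W_dec = nn.Parameter(torch.empty(d_sae, d_in))
        self.b_dec = nn.Parameter(torch.zeros(d_in))

        # encoder_transpose_decoder (assumed meaning): initialise the decoder,
        # then set the encoder to its transpose at init time.
        nn.init.kaiming_uniform_(self.W_dec)
        with torch.no_grad():
            self.W_enc.copy_(self.W_dec.t())

    def forward(self, x: torch.Tensor):
        acts = torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)  # sparse codes
        recon = acts @ self.W_dec + self.b_dec                          # reconstruction
        return recon, acts

sae = VanillaSAE()                 # 768 -> 49,152 -> 768
x = torch.randn(8, 768)            # a batch of residual-stream vectors
recon, acts = sae(x)
print(recon.shape, acts.shape)     # torch.Size([8, 768]) torch.Size([8, 49152])
```
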
Performance Metrics
- L1 Coefficient: 5e-05
- L0 Sparsity: 710.7462
- Explained Variance: 0.8970 (89.70%); see the metric sketch below
 
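These numbers are typically computed as below; the exact reductions used for this run are assumptions.

```python
# Sketch of the two reported metrics (exact reductions for this run assumed).
import torch

def l0_sparsity(acts: torch.Tensor) -> torch.Tensor:
    """Mean count of non-zero latents per token; reported here as ~710.75."""
    return (acts > 0).float().sum(dim=-1).mean()

def explained_variance(x: torch.Tensor, recon: torch.Tensor) -> torch.Tensor:
    """1 - Var(residual) / Var(input); reported here as ~0.897."""
    resid_var = (x - recon).pow(2).sum(dim=-1).mean()
    total_var = (x - x.mean(dim=0)).pow(2).sum(dim=-1).mean()
    return 1.0 - resid_var / total_var
```
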
Training Configuration
- Learning Rate: 0.0004
- LR Scheduler: Cosine Annealing with Warmup (200 steps); see the training-loop sketch below
- Epochs: 10
- Gradient Clipping: 1.0
- Device: NVIDIA Quadro RTX 8000
 
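A sketch of the optimisation loop these settings imply, reusing the VanillaSAE class from the architecture sketch above. The optimiser choice (Adam), the warmup shape, and the stand-in data loader are assumptions; only the hyperparameter values come from this card.

```python
# Training-loop sketch; optimiser, warmup shape, and data are assumptions.
import torch

L1_COEFF, LR, WARMUP_STEPS, EPOCHS, CLIP_NORM = 5e-5, 4e-4, 200, 10, 1.0

sae = VanillaSAE()                    # class from the sketch above
steps_per_epoch = 1000                # placeholder; depends on the dataset
activation_loader = [torch.randn(256, 768) for _ in range(steps_per_epoch)]  # stand-in batches
total_steps = EPOCHS * steps_per_epoch

optimizer = torch.optim.Adam(sae.parameters(), lr=LR)
scheduler = torch.optim.lr_scheduler.SequentialLR(
    optimizer,
    schedulers=[
        torch.optim.lr_scheduler.LinearLR(optimizer, start_factor=1e-3, total_iters=WARMUP_STEPS),
        torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=total_steps - WARMUP_STEPS),
    ],
    milestones=[WARMUP_STEPS],
)

for epoch in range(EPOCHS):
    for batch in activation_loader:
        recon, acts = sae(batch)
        mse = (recon - batch).pow(2).sum(dim=-1).mean()      # reconstruction error
        l1 = acts.abs().sum(dim=-1).mean()                   # sparsity penalty
        loss = mse + L1_COEFF * l1
        optimizer.zero_grad()
        loss.backward()
        torch.nn.utils.clip_grad_norm_(sae.parameters(), CLIP_NORM)
        optimizer.step()
        scheduler.step()
```
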
Experiment Tracking
- Weights & Biases Run ID: 3si1hb8d
- Full experiment details: https://wandb.ai/perceptual-alignment/clip/runs/3si1hb8d/overview
- Git Commit: e22dd02726b74a054a779a4805b96059d83244aa
 
Citation
@misc{2024josephsparseautoencoders,
    title={Sparse Autoencoders for CLIP-ViT-B-32},
    author={Joseph, Sonia},
    year={2024},
    publisher={Prisma-Multimodal},
    url={https://huggingface.co/Prisma-Multimodal},
    note={Layer 4, hook_resid_post, Run ID: 3si1hb8d}
}