
🩵 LunaMaid-12B

This is a multi-stage merge of pre-trained language models created using mergekit.

🧬 Merge Overview

LunaMaid-12B was produced through a two-stage multi-model merge using MergeKit.
Each stage fuses models with complementary linguistic and stylistic traits to create a cohesive, emotionally nuanced personality.

🩵 Stage 1: SLERP Merge (Intermediate Model "First")

Stage 1 Configuration

```yaml
name: First
base_model: Vortex5/Vermilion-Sage-12B
models:
  - model: yamatazen/NeonMaid-12B-v2
merge_method: slerp
dtype: bfloat16
parameters:
  normalize: true
  t: [0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.6, 0.5, 0.6, 0.6]
```
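SLERP (spherical linear interpolation) blends two models along the great-circle arc between corresponding weight tensors rather than averaging them linearly, which better preserves the magnitude of the blended weights; the `t` list above varies the interpolation factor across layer groups. A minimal NumPy sketch of the per-tensor operation (illustrative only; mergekit's real implementation handles the per-layer `t` schedule and many edge cases):

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors.

    t=0 returns a, t=1 returns b; intermediate t moves along the arc.
    """
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)
    theta = np.arccos(dot)          # angle between the two tensors
    if theta < eps:                 # nearly parallel: fall back to lerp
        return (1.0 - t) * a + t * b
    s = np.sin(theta)
    return (np.sin((1.0 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b
```

For example, `slerp(0.5, a, b)` with two orthogonal unit vectors yields their normalized midpoint on the arc, not the shorter linear average.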

🌑 Stage 2: Karcher Mean Merge (Final Model)

Stage 2 Configuration

```yaml
dtype: bfloat16
merge_method: karcher
modules:
  default:
    slices:
      - sources:
          - layer_range: [0, 40]
            model: ./intermediates/First
          - layer_range: [0, 40]
            model: Vortex5/Moonlit-Shadow-12B
parameters:
  max_iter: 9999
  tol: 1e-9
```
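The Karcher (Fréchet) mean generalizes averaging to curved spaces: starting from a guess, each point is mapped into the tangent space at the current estimate (log map), the tangent vectors are averaged, and the estimate moves along that direction (exp map) until the update falls below `tol` or `max_iter` is reached. A minimal sketch for unit vectors on the hypersphere, which is an illustrative simplification rather than mergekit's actual per-tensor code:

```python
import numpy as np

def karcher_mean(points, max_iter=9999, tol=1e-9):
    """Iterative Karcher mean of unit vectors on the hypersphere."""
    pts = [p / np.linalg.norm(p) for p in points]
    mean = pts[0].copy()
    for _ in range(max_iter):
        # Log map: project each point into the tangent space at `mean`
        tangents = []
        for p in pts:
            dot = np.clip(np.dot(mean, p), -1.0, 1.0)
            theta = np.arccos(dot)
            if theta < 1e-12:
                tangents.append(np.zeros_like(p))
            else:
                tangents.append(theta / np.sin(theta) * (p - dot * mean))
        step = np.mean(tangents, axis=0)
        norm = np.linalg.norm(step)
        if norm < tol:              # converged: update is below tolerance
            break
        # Exp map: move along the geodesic in the averaged direction
        mean = np.cos(norm) * mean + np.sin(norm) * (step / norm)
    return mean
```

With only two inputs, as here, the fixed point lies on the geodesic midway between them, so the generous `max_iter: 9999` is effectively a safety bound; convergence is typically reached in a handful of iterations.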

Models Merged

The following models were included in the merge:

- Vortex5/Vermilion-Sage-12B (Stage 1 base)
- yamatazen/NeonMaid-12B-v2 (Stage 1)
- Vortex5/Moonlit-Shadow-12B (Stage 2)
