Molbap (HF Staff) committed
Commit b18cab2 · verified · 1 Parent(s): 89d303b

Update README.md

Files changed (1)
  1. README.md +76 -3
README.md CHANGED
@@ -1,3 +1,76 @@

The previous README contained only the YAML front matter `license: apache-2.0`; this commit replaces it with the dataset card below.
---
dataset_name: transformers_code_embeddings
license: apache-2.0
language: code
tags:
- embeddings
- transformers-internal
- similarity-search
---

# Transformers Code Embeddings

A compact index of function/class definitions from `src/transformers/models/**/modeling_*.py` for cross-model similarity search. Built to help surface reusable code when modularizing models.

## Contents

- `embeddings.safetensors` — float32, L2-normalized embeddings shaped `[N, D]`.
- `code_index_map.json` — `{int_id: "relative/path/to/modeling_*.py:SymbolName"}`.
- `code_index_tokens.json` — `{identifier: [sorted_unique_tokens]}` for Jaccard.
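
For illustration, entries in the two JSON files have roughly this shape. The path and symbol names below are hypothetical, not actual dataset rows:

```python
# Hypothetical example entries (illustrative only, not actual dataset contents).
example_id_map_entry = {
    "1234": "models/llama/modeling_llama.py:LlamaAttention"   # int_id -> "path:SymbolName"
}
example_tokens_entry = {
    "models/llama/modeling_llama.py:LlamaAttention":          # identifier -> sorted unique tokens
        ["attention_mask", "forward", "hidden_states", "num_heads"]
}
```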

## How these were built

- Source: 🤗 Transformers repository, under `src/transformers/models`.
- Units: top-level `class`/`def` definitions.
- Preprocessing:
  - Strip docstrings, comments, and import lines.
  - Replace occurrences of model names and symbol prefixes with `Model`.
- Encoder: `Qwen/Qwen3-Embedding-4B` via `transformers` (mean pooling over tokens, then L2 normalization); see the sketch below.
- Output dtype: float32.

> Note: Results are tied to a specific Transformers commit. Regenerate when the repo changes.
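
The exact build lives in the Transformers repo (see Repro/build below). As a rough, non-authoritative sketch of the per-symbol pipeline described above — the `preprocess`/`embed` helpers here are hypothetical, not the actual build code — it could look like this:

```python
# Minimal sketch only; the actual build script is utils/modular_model_detector.py in transformers.
import ast
import re
import torch
from transformers import AutoModel, AutoTokenizer

def preprocess(source: str, model_name: str) -> str:
    """Drop import lines and docstrings, then neutralize the model name."""
    tree = ast.parse(source)
    tree.body = [n for n in tree.body if not isinstance(n, (ast.Import, ast.ImportFrom))]
    for node in ast.walk(tree):
        if isinstance(node, (ast.Module, ast.ClassDef, ast.FunctionDef, ast.AsyncFunctionDef)) and node.body:
            first = node.body[0]
            if isinstance(first, ast.Expr) and isinstance(first.value, ast.Constant) and isinstance(first.value.value, str):
                node.body = node.body[1:] or [ast.Pass()]   # drop the docstring statement
    code = ast.unparse(tree)                                 # unparsing also discards comments
    return re.sub(re.escape(model_name), "Model", code, flags=re.IGNORECASE)

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-Embedding-4B")
encoder = AutoModel.from_pretrained("Qwen/Qwen3-Embedding-4B", torch_dtype=torch.float32)

def embed(code: str) -> torch.Tensor:
    batch = tokenizer(code, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state          # [1, T, D]
    mask = batch["attention_mask"].unsqueeze(-1)             # [1, T, 1]
    pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)    # mean pooling over real tokens
    return torch.nn.functional.normalize(pooled, dim=-1)     # L2-normalize -> [1, D]
```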

## Quick usage

```python
from huggingface_hub import hf_hub_download
from safetensors.numpy import load_file
import json, numpy as np

repo_id = "hf-internal-testing/transformers_code_embeddings"

emb_path = hf_hub_download(repo_id, "embeddings.safetensors", repo_type="dataset")
map_path = hf_hub_download(repo_id, "code_index_map.json", repo_type="dataset")
tok_path = hf_hub_download(repo_id, "code_index_tokens.json", repo_type="dataset")

emb = load_file(emb_path)["embeddings"]  # (N, D) float32, L2-normalized
id_map = {int(k): v for k, v in json.load(open(map_path)).items()}
tokens = json.load(open(tok_path))

# Rows are L2-normalized, so cosine similarity reduces to a dot product.
def topk(vec, k=10):
    sims = vec @ emb.T
    idx = np.argpartition(-sims, k)[:k]
    idx = idx[np.argsort(-sims[idx])]
    return [(id_map[int(i)], float(sims[i])) for i in idx]
```
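
Continuing from the snippet above, one way to query is to reuse the row of a symbol that is already in the index. The identifier below is hypothetical:

```python
# Use an indexed symbol's own embedding as the query vector (identifier is hypothetical).
name_to_id = {name: i for i, name in id_map.items()}
query = "models/llama/modeling_llama.py:LlamaAttention"
hits = topk(emb[name_to_id[query]], k=5)   # the query itself will come back as the top hit
for identifier, score in hits:
    print(f"{score:.3f}  {identifier}")
```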

## Intended use

* Identify similar symbols across models (embedding similarity plus Jaccard over tokens; see the sketch below).
* Assist refactors and modularization efforts.
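
A minimal sketch of the Jaccard check over the precomputed token sets loaded in the Quick usage snippet, again with hypothetical identifiers:

```python
# Jaccard similarity over the sorted-unique token lists from code_index_tokens.json.
def jaccard(a: str, b: str) -> float:
    sa, sb = set(tokens[a]), set(tokens[b])
    return len(sa & sb) / len(sa | sb) if (sa or sb) else 0.0

print(jaccard("models/llama/modeling_llama.py:LlamaAttention",
              "models/mistral/modeling_mistral.py:MistralAttention"))
```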

## Limitations

* Embeddings reflect the preprocessing choices and the specific encoder; different choices would give different neighbors.
* Results can include symbols from the same file as the query; filter by model name or path if needed (see the helper below).
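
One possible way to apply that filter, assuming the `path:SymbolName` identifier format described above (this helper is not shipped with the dataset):

```python
# Drop hits that come from the same modeling file as the query identifier.
def filter_same_file(query_id: str, hits: list[tuple[str, float]]) -> list[tuple[str, float]]:
    query_file = query_id.split(":")[0]
    return [(ident, score) for ident, score in hits if not ident.startswith(query_file + ":")]
```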

## Repro/build

See `utils/modular_model_detector.py` in the `transformers` repo for the exact build and push commands.

## License

Apache-2.0 for this dataset card and the produced artifacts. The source code remains under its original license in the upstream repo.