Safetensors
English
bert_hash
custom_code
davidmezzetti committed on
Commit 064763b · 1 Parent(s): 6360f27

Initial model

README.md ADDED
@@ -0,0 +1,95 @@
+ ---
+ language: en
+ license: apache-2.0
+ ---
+
+ # BERT Hash Nano Models
+
+ This is a set of 3 Nano [BERT](https://arxiv.org/abs/1810.04805) models with a modified embeddings layer. The embeddings layer is the same BERT vocabulary (30,522 tokens) projected into a smaller dimensional space and then re-encoded to the hidden size. This method is inspired by [MUVERA: Multi-Vector Retrieval via Fixed Dimensional Encodings](https://arxiv.org/abs/2405.19504).
+
+ The number of projections is like a hash. Setting the projections parameter to 5 is like generating a 160-bit hash (5 x float32) for each token. That hash is then projected to the hidden size.
+
+ This significantly reduces the number of parameters necessary for token embeddings.
+
+ For example:
+
+ Standard token embeddings:
+ - 30,522 (vocab size) x 768 (hidden size) = 23,440,896 parameters
+ - 23,440,896 x 4 (float32) = 93,763,584 bytes
+
+ Hash token embeddings:
+ - 30,522 (vocab size) x 5 (hash buckets) + 5 x 768 (projection matrix) = 156,450 parameters
+ - 156,450 x 4 (float32) = 625,800 bytes
+
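+ As a quick sanity check on the arithmetic above, the snippet below recomputes both counts. This is an illustrative sketch added for this card rather than code shipped with the model, and the `embedding_parameters` helper is hypothetical.
+
+ ```python
+ def embedding_parameters(vocab_size=30522, hidden_size=768, projections=5):
+     """Compares standard vs hash token embedding parameter counts."""
+     standard = vocab_size * hidden_size
+     hashed = vocab_size * projections + projections * hidden_size
+     return standard, hashed
+
+ standard, hashed = embedding_parameters()
+ print(f"standard: {standard:,} parameters ({standard * 4:,} bytes as float32)")
+ print(f"hash:     {hashed:,} parameters ({hashed * 4:,} bytes as float32)")
+ # standard: 23,440,896 parameters (93,763,584 bytes as float32)
+ # hash:     156,450 parameters (625,800 bytes as float32)
+ ```
+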
+ These models are pre-trained on the same training corpus as BERT (with a copy of Wikipedia from 2025) as recommended in the paper [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962).
+
+ Here is a subset of GLUE scores on the dev set using the [script provided by Hugging Face Transformers](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py) with the following parameters.
+
+ ```bash
+ python run_glue.py --model_name_or_path <model path> --task_name <task name> --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 1e-4 --num_train_epochs 4 --output_dir outputs --trust-remote-code True
+ ```
+
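+ For example, to fine-tune and evaluate `bert-hash-pico` on MRPC, the placeholders can be filled in as follows (one possible substitution, not a command from the original card):
+
+ ```bash
+ python run_glue.py --model_name_or_path neuml/bert-hash-pico --task_name mrpc --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 1e-4 --num_train_epochs 4 --output_dir outputs --trust-remote-code True
+ ```
+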
+ | Model | Parameters | MNLI (acc m/mm) | MRPC (f1/acc) | SST-2 (acc) |
+ | ----- | ---------- | --------------- | ---------------- | ----------- |
+ | [baseline (bert-tiny)](https://hf.co/google/bert_uncased_L-2_H-128_A-2) | 4.4M | 0.7114 / 0.7161 | 0.8318 / 0.7353 | 0.8222 |
+ | [bert-hash-femto](https://hf.co/neuml/bert-hash-femto) | 0.243M | 0.5697 / 0.5750 | 0.8122 / 0.6838 | 0.7821 |
+ | **[bert-hash-pico](https://hf.co/neuml/bert-hash-pico)** | **0.448M** | **0.6228 / 0.6363** | **0.8205 / 0.7083** | **0.7878** |
+ | [bert-hash-nano](https://hf.co/neuml/bert-hash-nano) | 0.969M | 0.6565 / 0.6670 | 0.8172 / 0.7083 | 0.8131 |
+
+ ## Usage
+
+ These models can be loaded using Hugging Face Transformers as follows. Note that because this is a custom architecture, `trust_remote_code` needs to be set.
+
+ ```python
+ from transformers import AutoModel
+
+ model = AutoModel.from_pretrained("neuml/bert-hash-femto", trust_remote_code=True)
+ ```
+
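+ To verify a loaded model end to end, a forward pass can be run with the tokenizer bundled in this repository. This is a minimal sketch using standard Transformers APIs, not an example from the original card.
+
+ ```python
+ from transformers import AutoModel, AutoTokenizer
+
+ model = AutoModel.from_pretrained("neuml/bert-hash-femto", trust_remote_code=True)
+ tokenizer = AutoTokenizer.from_pretrained("neuml/bert-hash-femto")
+
+ # Run a forward pass and inspect the token embeddings
+ inputs = tokenizer("BERT Hash models pack a punch", return_tensors="pt")
+ outputs = model(**inputs)
+
+ # (batch size, sequence length, hidden size)
+ print(outputs.last_hidden_state.shape)
+ ```
+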
+ ## Training
+
+ Training your own Nano model is simple. All you need is a Hugging Face dataset and the code below using [txtai](https://github.com/neuml/txtai).
+
+ ```python
+ from datasets import concatenate_datasets, load_dataset
+ from transformers import AutoTokenizer
+
+ from txtai.pipeline import HFTrainer
+
+ from configuration_bert_hash import *
+ from modeling_bert_hash import *
+
+ dataset = load_dataset("path to target HF dataset")
+
+ tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
+
+ config = BertHashConfig(
+     hidden_size=128,
+     num_hidden_layers=2,
+     num_attention_heads=2,
+     intermediate_size=512,
+     projections=16
+ )
+ model = BertHashForMaskedLM(config)
+
+ print(config)
+ print("Total parameters:", sum(p.numel() for p in model.bert.parameters()))
+
+ train = HFTrainer()
+
+ # Train using MLM
+ train((model, tokenizer), dataset, task="language-modeling", output_dir="model",
+       fp16=True, learning_rate=1e-3, per_device_train_batch_size=64, num_train_epochs=3,
+       warmup_steps=2500, weight_decay=0.01, adam_epsilon=1e-6,
+       tokenizers=True, dataloader_num_workers=20,
+       save_strategy="steps", save_steps=5000, logging_steps=500,
+ )
+ ```
+
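+ After training, one way to reload the checkpoint written to `output_dir` is to register the custom classes with the Transformers Auto factories. This is a sketch that assumes `configuration_bert_hash.py` and `modeling_bert_hash.py` are importable from the working directory; it is not part of the original training recipe.
+
+ ```python
+ from transformers import AutoConfig, AutoModel, AutoModelForMaskedLM
+
+ from configuration_bert_hash import BertHashConfig
+ from modeling_bert_hash import BertHashModel, BertHashForMaskedLM
+
+ # Map the bert_hash model type to the custom classes
+ AutoConfig.register("bert_hash", BertHashConfig)
+ AutoModel.register(BertHashConfig, BertHashModel)
+ AutoModelForMaskedLM.register(BertHashConfig, BertHashForMaskedLM)
+
+ # Load the checkpoint saved by the training run above
+ model = AutoModelForMaskedLM.from_pretrained("model")
+ ```
+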
+ ## Future Work
+
+ These models demonstrate that much smaller models can still be productive.
+
+ The hope is that this work opens the door for many to build small encoder models that pack a punch. Models like these can be trained in a matter of hours on consumer GPUs.
+
+ Imagine more specialized models like this for medical, legal, scientific and other domains.
config.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "architectures": [
+     "BertHashForMaskedLM"
+   ],
+   "auto_map": {
+     "AutoConfig": "configuration_bert_hash.BertHashConfig",
+     "AutoModel": "modeling_bert_hash.BertHashModel",
+     "AutoModelForMaskedLM": "modeling_bert_hash.BertHashForMaskedLM",
+     "AutoModelForSequenceClassification": "modeling_bert_hash.BertHashForSequenceClassification"
+   },
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 80,
+   "initializer_range": 0.02,
+   "intermediate_size": 320,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert_hash",
+   "num_attention_heads": 2,
+   "num_hidden_layers": 2,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "projections": 8,
+   "torch_dtype": "float32",
+   "transformers_version": "4.55.4",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
configuration_bert_hash.py ADDED
@@ -0,0 +1,14 @@
+ from transformers.models.bert.configuration_bert import BertConfig
+
+
+ class BertHashConfig(BertConfig):
+     """
+     Extension of Bert configuration to add projections parameter.
+     """
+
+     model_type = "bert_hash"
+
+     def __init__(self, projections=5, **kwargs):
+         super().__init__(**kwargs)
+
+         self.projections = projections
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:81124ac9a2d48c5881518fa4f82d781b0f8bb76c0a16eb4a918a325fc3d5581b
+ size 11688104
modeling_bert_hash.py ADDED
@@ -0,0 +1,513 @@
+ from typing import Optional, Union
+
+ import torch
+ from torch import nn
+ from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
+
+ from transformers.cache_utils import Cache
+ from transformers.models.bert.modeling_bert import BertEncoder, BertPooler, BertPreTrainedModel, BertOnlyMLMHead
+ from transformers.modeling_attn_mask_utils import _prepare_4d_attention_mask_for_sdpa, _prepare_4d_causal_attention_mask_for_sdpa
+ from transformers.modeling_outputs import (
+     BaseModelOutputWithPoolingAndCrossAttentions,
+     MaskedLMOutput,
+     SequenceClassifierOutput,
+ )
+ from transformers.utils import auto_docstring, logging
+
+ from .configuration_bert_hash import BertHashConfig
+
+ logger = logging.get_logger(__name__)
+
+
+ class BertHashTokens(nn.Module):
+     """
+     Module that embeds token vocabulary to an intermediate embeddings layer then projects those embeddings to the
+     hidden size.
+
+     The number of projections is like a hash. Setting the projections parameter to 5 is like generating a
+     160-bit hash (5 x float32) for each token. That hash is then projected to the hidden size.
+
+     This significantly reduces the number of parameters necessary for token embeddings.
+
+     For example:
+         Standard token embeddings:
+             30,522 (vocab size) x 768 (hidden size) = 23,440,896 parameters
+             23,440,896 x 4 (float32) = 93,763,584 bytes
+
+         Hash token embeddings:
+             30,522 (vocab size) x 5 (hash buckets) + 5 x 768 (projection matrix) = 156,450 parameters
+             156,450 x 4 (float32) = 625,800 bytes
+     """
+
+     def __init__(self, config):
+         super().__init__()
+         self.config = config
+
+         # Token embeddings
+         self.embeddings = nn.Embedding(config.vocab_size, config.projections, padding_idx=config.pad_token_id)
+
+         # Token embeddings projections
+         self.projections = nn.Linear(config.projections, config.hidden_size)
+
+     def forward(self, input_ids):
+         # Project embeddings to hidden size
+         return self.projections(self.embeddings(input_ids))
+
+
+ class BertHashEmbeddings(nn.Module):
+     """Construct the embeddings from word, position and token_type embeddings."""
+
+     def __init__(self, config):
+         super().__init__()
+         self.word_embeddings = BertHashTokens(config)
+         self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size)
+         self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size)
+
+         # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load
+         # any TensorFlow checkpoint file
+         self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
+         self.dropout = nn.Dropout(config.hidden_dropout_prob)
+         # position_ids (1, len position emb) is contiguous in memory and exported when serialized
+         self.position_embedding_type = getattr(config, "position_embedding_type", "absolute")
+         self.register_buffer(
+             "position_ids", torch.arange(config.max_position_embeddings).expand((1, -1)), persistent=False
+         )
+         self.register_buffer(
+             "token_type_ids", torch.zeros(self.position_ids.size(), dtype=torch.long), persistent=False
+         )
+
+     def forward(
+         self,
+         input_ids: Optional[torch.LongTensor] = None,
+         token_type_ids: Optional[torch.LongTensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         past_key_values_length: int = 0,
+     ) -> torch.Tensor:
+         if input_ids is not None:
+             input_shape = input_ids.size()
+         else:
+             input_shape = inputs_embeds.size()[:-1]
+
+         seq_length = input_shape[1]
+
+         if position_ids is None:
+             position_ids = self.position_ids[:, past_key_values_length : seq_length + past_key_values_length]
+
+         # Setting the token_type_ids to the registered buffer in constructor where it is all zeros, which usually occurs
+         # when its auto-generated, registered buffer helps users when tracing the model without passing token_type_ids, solves
+         # issue #5664
+         if token_type_ids is None:
+             if hasattr(self, "token_type_ids"):
+                 buffered_token_type_ids = self.token_type_ids[:, :seq_length]
+                 buffered_token_type_ids_expanded = buffered_token_type_ids.expand(input_shape[0], seq_length)
+                 token_type_ids = buffered_token_type_ids_expanded
+             else:
+                 token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=self.position_ids.device)
+
+         if inputs_embeds is None:
+             inputs_embeds = self.word_embeddings(input_ids)
+         token_type_embeddings = self.token_type_embeddings(token_type_ids)
+
+         embeddings = inputs_embeds + token_type_embeddings
+         if self.position_embedding_type == "absolute":
+             position_embeddings = self.position_embeddings(position_ids)
+             embeddings += position_embeddings
+         embeddings = self.LayerNorm(embeddings)
+         embeddings = self.dropout(embeddings)
+         return embeddings
+
+
+ @auto_docstring(
+     custom_intro="""
+     The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
+     cross-attention is added between the self-attention layers, following the architecture described in [Attention is
+     all you need](https://huggingface.co/papers/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
+     Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
+
+     To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set
+     to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both `is_decoder` argument and
+     `add_cross_attention` set to `True`; an `encoder_hidden_states` is then expected as an input to the forward pass.
+     """
+ )
+ class BertHashModel(BertPreTrainedModel):
+     config_class = BertHashConfig
+
+     _no_split_modules = ["BertEmbeddings", "BertLayer"]
+
+     def __init__(self, config, add_pooling_layer=True):
+         r"""
+         add_pooling_layer (bool, *optional*, defaults to `True`):
+             Whether to add a pooling layer
+         """
+         super().__init__(config)
+         self.config = config
+
+         self.embeddings = BertHashEmbeddings(config)
+         self.encoder = BertEncoder(config)
+
+         self.pooler = BertPooler(config) if add_pooling_layer else None
+
+         self.attn_implementation = config._attn_implementation
+         self.position_embedding_type = config.position_embedding_type
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+     def _prune_heads(self, heads_to_prune):
+         """
+         Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base
+         class PreTrainedModel
+         """
+         for layer, heads in heads_to_prune.items():
+             self.encoder.layer[layer].attention.prune_heads(heads)
+
+     @auto_docstring
+     def forward(
+         self,
+         input_ids: Optional[torch.Tensor] = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         token_type_ids: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.Tensor] = None,
+         head_mask: Optional[torch.Tensor] = None,
+         inputs_embeds: Optional[torch.Tensor] = None,
+         encoder_hidden_states: Optional[torch.Tensor] = None,
+         encoder_attention_mask: Optional[torch.Tensor] = None,
+         past_key_values: Optional[list[torch.FloatTensor]] = None,
+         use_cache: Optional[bool] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+         cache_position: Optional[torch.Tensor] = None,
+     ) -> Union[tuple[torch.Tensor], BaseModelOutputWithPoolingAndCrossAttentions]:
+         output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+         output_hidden_states = (
+             output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+         )
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         if self.config.is_decoder:
+             use_cache = use_cache if use_cache is not None else self.config.use_cache
+         else:
+             use_cache = False
+
+         if input_ids is not None and inputs_embeds is not None:
+             raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
+         elif input_ids is not None:
+             self.warn_if_padding_and_no_attention_mask(input_ids, attention_mask)
+             input_shape = input_ids.size()
+         elif inputs_embeds is not None:
+             input_shape = inputs_embeds.size()[:-1]
+         else:
+             raise ValueError("You have to specify either input_ids or inputs_embeds")
+
+         batch_size, seq_length = input_shape
+         device = input_ids.device if input_ids is not None else inputs_embeds.device
+
+         past_key_values_length = 0
+         if past_key_values is not None:
+             past_key_values_length = (
+                 past_key_values[0][0].shape[-2]
+                 if not isinstance(past_key_values, Cache)
+                 else past_key_values.get_seq_length()
+             )
+
+         if token_type_ids is None:
+             if hasattr(self.embeddings, "token_type_ids"):
+                 buffered_token_type_ids = self.embeddings.token_type_ids[:, :seq_length]
+                 buffered_token_type_ids_expanded = buffered_token_type_ids.expand(batch_size, seq_length)
+                 token_type_ids = buffered_token_type_ids_expanded
+             else:
+                 token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device)
+
+         embedding_output = self.embeddings(
+             input_ids=input_ids,
+             position_ids=position_ids,
+             token_type_ids=token_type_ids,
+             inputs_embeds=inputs_embeds,
+             past_key_values_length=past_key_values_length,
+         )
+
+         if attention_mask is None:
+             attention_mask = torch.ones((batch_size, seq_length + past_key_values_length), device=device)
+
+         use_sdpa_attention_masks = (
+             self.attn_implementation == "sdpa"
+             and self.position_embedding_type == "absolute"
+             and head_mask is None
+             and not output_attentions
+         )
+
+         # Expand the attention mask
+         if use_sdpa_attention_masks and attention_mask.dim() == 2:
+             # Expand the attention mask for SDPA.
+             # [bsz, seq_len] -> [bsz, 1, seq_len, seq_len]
+             if self.config.is_decoder:
+                 extended_attention_mask = _prepare_4d_causal_attention_mask_for_sdpa(
+                     attention_mask,
+                     input_shape,
+                     embedding_output,
+                     past_key_values_length,
+                 )
+             else:
+                 extended_attention_mask = _prepare_4d_attention_mask_for_sdpa(
+                     attention_mask, embedding_output.dtype, tgt_len=seq_length
+                 )
+         else:
+             # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
+             # ourselves in which case we just need to make it broadcastable to all heads.
+             extended_attention_mask = self.get_extended_attention_mask(attention_mask, input_shape)
+
+         # If a 2D or 3D attention mask is provided for the cross-attention
+         # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length]
+         if self.config.is_decoder and encoder_hidden_states is not None:
+             encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size()
+             encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)
+             if encoder_attention_mask is None:
+                 encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device)
+
+             if use_sdpa_attention_masks and encoder_attention_mask.dim() == 2:
+                 # Expand the attention mask for SDPA.
+                 # [bsz, seq_len] -> [bsz, 1, seq_len, seq_len]
+                 encoder_extended_attention_mask = _prepare_4d_attention_mask_for_sdpa(
+                     encoder_attention_mask, embedding_output.dtype, tgt_len=seq_length
+                 )
+             else:
+                 encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask)
+         else:
+             encoder_extended_attention_mask = None
+
+         # Prepare head mask if needed
+         # 1.0 in head_mask indicate we keep the head
+         # attention_probs has shape bsz x n_heads x N x N
+         # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
+         # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
+         head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
+
+         encoder_outputs = self.encoder(
+             embedding_output,
+             attention_mask=extended_attention_mask,
+             head_mask=head_mask,
+             encoder_hidden_states=encoder_hidden_states,
+             encoder_attention_mask=encoder_extended_attention_mask,
+             past_key_values=past_key_values,
+             use_cache=use_cache,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+             cache_position=cache_position,
+         )
+         sequence_output = encoder_outputs[0]
+         pooled_output = self.pooler(sequence_output) if self.pooler is not None else None
+
+         if not return_dict:
+             return (sequence_output, pooled_output) + encoder_outputs[1:]
+
+         return BaseModelOutputWithPoolingAndCrossAttentions(
+             last_hidden_state=sequence_output,
+             pooler_output=pooled_output,
+             past_key_values=encoder_outputs.past_key_values,
+             hidden_states=encoder_outputs.hidden_states,
+             attentions=encoder_outputs.attentions,
+             cross_attentions=encoder_outputs.cross_attentions,
+         )
+
+
+ @auto_docstring
+ class BertHashForMaskedLM(BertPreTrainedModel):
+     _tied_weights_keys = ["predictions.decoder.bias", "cls.predictions.decoder.weight"]
+     config_class = BertHashConfig
+
+     def __init__(self, config):
+         super().__init__(config)
+
+         if config.is_decoder:
+             logger.warning(
+                 "If you want to use `BertForMaskedLM` make sure `config.is_decoder=False` for "
+                 "bi-directional self-attention."
+             )
+
+         self.bert = BertHashModel(config, add_pooling_layer=False)
+         self.cls = BertOnlyMLMHead(config)
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+     @auto_docstring
+     def forward(
+         self,
+         input_ids: Optional[torch.Tensor] = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         token_type_ids: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.Tensor] = None,
+         head_mask: Optional[torch.Tensor] = None,
+         inputs_embeds: Optional[torch.Tensor] = None,
+         encoder_hidden_states: Optional[torch.Tensor] = None,
+         encoder_attention_mask: Optional[torch.Tensor] = None,
+         labels: Optional[torch.Tensor] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+     ) -> Union[tuple[torch.Tensor], MaskedLMOutput]:
+         r"""
+         labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
+             Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
+             config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the
+             loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`
+         """
+
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         outputs = self.bert(
+             input_ids,
+             attention_mask=attention_mask,
+             token_type_ids=token_type_ids,
+             position_ids=position_ids,
+             head_mask=head_mask,
+             inputs_embeds=inputs_embeds,
+             encoder_hidden_states=encoder_hidden_states,
+             encoder_attention_mask=encoder_attention_mask,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+         )
+
+         sequence_output = outputs[0]
+         prediction_scores = self.cls(sequence_output)
+
+         masked_lm_loss = None
+         if labels is not None:
+             loss_fct = CrossEntropyLoss()  # -100 index = padding token
+             masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
+
+         if not return_dict:
+             output = (prediction_scores,) + outputs[2:]
+             return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output
+
+         return MaskedLMOutput(
+             loss=masked_lm_loss,
+             logits=prediction_scores,
+             hidden_states=outputs.hidden_states,
+             attentions=outputs.attentions,
+         )
+
+     def prepare_inputs_for_generation(self, input_ids, attention_mask=None, **model_kwargs):
+         input_shape = input_ids.shape
+         effective_batch_size = input_shape[0]
+
+         # add a dummy token
+         if self.config.pad_token_id is None:
+             raise ValueError("The PAD token should be defined for generation")
+
+         attention_mask = torch.cat([attention_mask, attention_mask.new_zeros((attention_mask.shape[0], 1))], dim=-1)
+         dummy_token = torch.full(
+             (effective_batch_size, 1), self.config.pad_token_id, dtype=torch.long, device=input_ids.device
+         )
+         input_ids = torch.cat([input_ids, dummy_token], dim=1)
+
+         return {"input_ids": input_ids, "attention_mask": attention_mask}
+
+     @classmethod
+     def can_generate(cls) -> bool:
+         """
+         Legacy correction: BertForMaskedLM can't call `generate()` from `GenerationMixin`, even though it has a
+         `prepare_inputs_for_generation` method.
+         """
+         return False
+
+
+ @auto_docstring(
+     custom_intro="""
+     Bert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
+     output) e.g. for GLUE tasks.
+     """
+ )
+ class BertHashForSequenceClassification(BertPreTrainedModel):
+     config_class = BertHashConfig
+
+     def __init__(self, config):
+         super().__init__(config)
+         self.num_labels = config.num_labels
+         self.config = config
+
+         self.bert = BertHashModel(config)
+         classifier_dropout = (
+             config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob
+         )
+         self.dropout = nn.Dropout(classifier_dropout)
+         self.classifier = nn.Linear(config.hidden_size, config.num_labels)
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+     @auto_docstring
+     def forward(
+         self,
+         input_ids: Optional[torch.Tensor] = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         token_type_ids: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.Tensor] = None,
+         head_mask: Optional[torch.Tensor] = None,
+         inputs_embeds: Optional[torch.Tensor] = None,
+         labels: Optional[torch.Tensor] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+     ) -> Union[tuple[torch.Tensor], SequenceClassifierOutput]:
+         r"""
+         labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
+             Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
+             config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
+             `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
+         """
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         outputs = self.bert(
+             input_ids,
+             attention_mask=attention_mask,
+             token_type_ids=token_type_ids,
+             position_ids=position_ids,
+             head_mask=head_mask,
+             inputs_embeds=inputs_embeds,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+         )
+
+         pooled_output = outputs[1]
+
+         pooled_output = self.dropout(pooled_output)
+         logits = self.classifier(pooled_output)
+
+         loss = None
+         if labels is not None:
+             if self.config.problem_type is None:
+                 if self.num_labels == 1:
+                     self.config.problem_type = "regression"
+                 elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
+                     self.config.problem_type = "single_label_classification"
+                 else:
+                     self.config.problem_type = "multi_label_classification"
+
+             if self.config.problem_type == "regression":
+                 loss_fct = MSELoss()
+                 if self.num_labels == 1:
+                     loss = loss_fct(logits.squeeze(), labels.squeeze())
+                 else:
+                     loss = loss_fct(logits, labels)
+             elif self.config.problem_type == "single_label_classification":
+                 loss_fct = CrossEntropyLoss()
+                 loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
+             elif self.config.problem_type == "multi_label_classification":
+                 loss_fct = BCEWithLogitsLoss()
+                 loss = loss_fct(logits, labels)
+         if not return_dict:
+             output = (logits,) + outputs[2:]
+             return ((loss,) + output) if loss is not None else output
+
+         return SequenceClassifierOutput(
+             loss=loss,
+             logits=logits,
+             hidden_states=outputs.hidden_states,
+             attentions=outputs.attentions,
+         )
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "cls_token": "[CLS]",
+   "mask_token": "[MASK]",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "unk_token": "[UNK]"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,56 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": false,
+   "cls_token": "[CLS]",
+   "do_lower_case": true,
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
trainer_state.json ADDED
The diff for this file is too large to render. See raw diff
 
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9177065443fa6ca53553b835ff2b82f8aef42f3bb963ecd789ef024161575e6f
+ size 5304
vocab.txt ADDED
The diff for this file is too large to render. See raw diff